It has long been a dream of researchers to provide truly objective data gathering and analysis. Focus group facilities are conceived as laboratories, and moderators are discouraged from influencing respondents with their own opinions. On the quantitative side, sophisticated statistical methodologies are employed to remove both bias and subjectivity from results.

It has also long been understood, if tacitly, that a certain human and subjective element will inevitably color research findings. The traditional focus group setting creates its own biases and incentives. Likewise, it will always be human beings who interpret quantitative data, and along with that will come certain assumptions as well as predispositions.

But with the advent of research technologies based on digital algorithms, we’re promised data gathering, analysis and even predictions that are genuinely objective.

That is a myth.

What Is an Algorithm?

Let’s start by understanding what an algorithm is in the first place. Simply put, it’s a formula or a set of step-by-step instructions, similar to a recipe. Gather a, b and c, arrange them in a certain order, apply some mathematical calculations, and you’ll get d.

In the real world, the number of input variables can be enormous, and the output can be a single result or multiple “answers,” depending on the algorithm’s design.
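The recipe idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration — the inputs, steps, and formula are invented, not drawn from any real system:

```python
def simple_algorithm(a: float, b: float, c: float) -> float:
    """Gather a, b and c, apply step-by-step calculations, and get d."""
    total = a + b            # step 1: gather and combine the inputs
    scaled = total * c       # step 2: apply a mathematical calculation
    return round(scaled, 2)  # step 3: deliver the result

d = simple_algorithm(2.0, 3.0, 1.5)
print(d)  # 7.5
```

Every step here — which inputs to gather, how to combine them, how to round the answer — was a choice made by whoever wrote the recipe.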

Algorithms are all around us. They filter what we see on social media, assess whether or not we’re likely to repay a loan, make movie selections for us, and can monitor our health. The goal of using algorithms is to replace subjective judgments with objective measurements. But, upon closer inspection, this isn’t entirely the case.

Algorithms Are Designed by Human Beings

Algorithms embody the choices and biases of their designers. It’s humans who determine what variables are worth including, how they should be weighted, what mathematics should be employed and, finally, what determines a successful answer.

Mathematician Cathy O’Neil, author of Weapons of Math Destruction, illustrates this with an analogy to cooking dinner for her family. As she explained in a recent interview, the ingredients in her kitchen are the “data” she has to work with.

“But to be completely honest,” she continues, “I curate that data because I don’t really use [certain ingredients]…therefore imposing my agenda on this algorithm. And then I’m also defining success, right? I’m in charge of success. I define success to be if my kids eat vegetables at that meal…My eight year-old would define success to be whenever he gets to eat Nutella.”
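O’Neil’s point — that whoever builds the algorithm gets to define success — can be made concrete in code. This sketch is a hypothetical rendering of her dinner analogy; the data and function names are invented for illustration:

```python
# The same "data" (tonight's meal), evaluated by two different
# definitions of success chosen by two different people.
meal = {"has_vegetables": True, "has_nutella": False}

def parent_success(meal: dict) -> bool:
    # The cook defines success: the kids eat vegetables at the meal.
    return meal["has_vegetables"]

def child_success(meal: dict) -> bool:
    # The eight-year-old defines success: he gets to eat Nutella.
    return meal["has_nutella"]

print(parent_success(meal))  # True
print(child_success(meal))   # False
```

Identical input, opposite verdicts — the “objective” answer depends entirely on whose definition of success was written into the code.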

This doesn’t mean that algorithms are bad. On the contrary, they give us the ability to gather and evaluate enormous data sets with blinding speed and efficiency.

But we need to recognize that algorithms are, above all, human creations.

You Can’t Keep Humans Out of the Equation

Algorithms reflect the predispositions, intuitions, and even the emotions of their creators. Yet they are, for the most part, opaque. For understandable economic reasons, their inner workings are kept a tightly guarded trade secret.

So unlike the pre-algorithmic world, in which we could challenge the methodologies used by flesh-and-blood researchers, we rarely get the chance to interrogate an algorithm and its biases.

We are asked to put our faith in a black box.

Once again, this is in no way an argument against algorithms. Rather it is a call to explore how we can best work with them. Perhaps it’s best to understand them not as mathematically objective observers, but as brilliant – yet subjective – human colleagues.

As we move forward in our new world, we can learn to collaborate with algorithms as partners, recognizing their strengths and weaknesses as we acknowledge our own.

That’s the recipe for success.