The concept of “artificial intelligence” seems to be everywhere, and the market research industry is no exception. Every day, we seem to inch closer to the dystopia that Hollywood warned us about decades ago: the rise of the machines!
But unlike the insurrection sci-fi envisioned, the robots are not plotting against us. They are simply, and in some cases quietly, taking over roles that we humans have become accustomed to performing ourselves.
It’s not all bad. There are real cost savings in automating tasks, and modern AI makes much of our lives simpler and more efficient.
But there are limits to what computers can do well. Until machines can grasp meaning, context, and emotion, we need to be pragmatic about which tasks we outsource to the automatons and which we flesh-and-blood humans keep in-house.
Where Does AI Hand it Back to Humans?
Every day, advancements are made toward computers that understand and perceive inputs the way human beings do. And while we’re already light years ahead of where we were only 30 years ago, market researchers need to be careful not to get caught up in the novelty and technological whiz-bang. If we leave data processing to entities that can’t think or feel emotion, we’re asking for the proverbial “garbage in, garbage out.”
These metaphorical robots are doing an undeniably impressive job of gathering knowledge. The task of collecting respondent data becomes more and more reliable when left to AI technology, from logic programming to decision trees to chatbots that can practically “think on their feet.” The trouble starts when those same instruments are asked to understand and interpret the data they collect.
We recently saw an example of AI software attempting real-time sentiment analysis of a live conversation, much the way software is used to analyze sentiment in conversations online and on social media. The text is interpreted and sentiment values are assigned according to how the algorithms are programmed. In this case, a misinterpreted word choice inserted a false negative into the study!
The AI detected the word “bad” and immediately assigned a negative connotation to it. What it didn’t understand was the context. The respondent was expressing a positive sentiment: the input was “not bad,” but the software missed the critical word “not” preceding “bad.” The artificial intelligence was perfectly willing to return that negative sentiment as a legitimate input into the data set.
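The failure mode is easy to reproduce. The sketch below is purely illustrative, with hypothetical word lists rather than any real product’s logic: a naive scorer that rates each word in isolation flags “not bad” as negative, while even a crude negation check recovers the intended sentiment.

```python
# Hypothetical word lists for illustration only.
NEGATIVE_WORDS = {"bad", "awful", "poor"}
POSITIVE_WORDS = {"good", "great", "excellent"}
NEGATORS = {"not", "never", "no"}

def naive_sentiment(text: str) -> int:
    """Score each word in isolation; ignores context entirely."""
    score = 0
    for word in text.lower().split():
        if word in POSITIVE_WORDS:
            score += 1
        elif word in NEGATIVE_WORDS:
            score -= 1
    return score

def negation_aware_sentiment(text: str) -> int:
    """Flip a word's polarity when the preceding word is a negator."""
    words = text.lower().split()
    score = 0
    for i, word in enumerate(words):
        polarity = 0
        if word in POSITIVE_WORDS:
            polarity = 1
        elif word in NEGATIVE_WORDS:
            polarity = -1
        if polarity and i > 0 and words[i - 1] in NEGATORS:
            polarity = -polarity  # "not bad" becomes positive
        score += polarity
    return score

print(naive_sentiment("that was not bad"))           # -1: the false negative
print(negation_aware_sentiment("that was not bad"))  # +1: context recovered
```

Even this fix is fragile (“not entirely great,” sarcasm, and tone all defeat it), which is exactly why human oversight of the analysis still matters.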
To make matters worse, the machine was also interpreting a noise outside the room that had nothing to do with the study, so the AI got the nuance completely backward. No person would have misinterpreted the scenario, yet the computer could have inserted a fatal error into the data collection.
You can see how the market researcher would have gotten it wrong if AI were relied on exclusively here. Instead, a human should provide oversight of the analysis, if not simply conduct it, so that heartless machines don’t blindly get it wrong without a second guess.
You Can’t Set Your Research on Auto-Pilot
Why do airlines still pay for pilots when computers can do most of the flying? The answer is simple: it’s hard to rely on a computer to take off and land safely in real-time scenarios. The same paradigm applies to AI’s role in market research.
If you rely entirely on the machine for a market research study, there is a real chance it will miss context, fail to pick up on non-verbal cues or tone of voice, or misinterpret the text.
So yes, let’s leverage modern technology to turn a 10-step process into a 5-step process. We’ll all save time, money, and tedium along the way. But it can’t replace humans at what we still do best: understanding other humans.