Navigating the AI Revolution in Market Research: Lessons from IIEX Europe
Last week I attended IIEX Europe, a consumer insights conference that really highlighted the quickly evolving landscape of AI in market research. The presentations sparked numerous thoughts about the future of the profession, so I wanted to share my key takeaways and reflections from the event. There were several interesting talks, but I've chosen to highlight a couple that fit into the key themes that stood out for me.
Theme 1: The Rapid Evolution of AI in Market Research
My biggest takeaway from IIEX Europe was how quickly the landscape of AI in market research is evolving - the conversations have already moved on since the ESOMAR Congress in September 2023, only nine months ago. New trends and technologies are emerging so quickly that even recent benchmarks soon become outdated. For example, GPT-3.5 was used as a reference point in many of the presentations, even though recently released models like GPT-4o and Claude 3.5 have much more advanced capabilities. This fast-paced development means insights and comparisons from just a few months ago can quickly lose their relevance.
As far as I saw, none of the GenAI-related presentations referenced any models other than ChatGPT. To me, this narrow focus risks limiting our broader understanding of AI capabilities and potential applications in market research - just as AI is a broader term than generative AI, ChatGPT alone doesn't represent the full range of GenAI models available to us.
We also need to balance how we leverage GenAI against traditional machine learning techniques - as exciting as GenAI might seem, we don't need to throw it at everything, because established methods often provide stability and clarity. With more complex and sophisticated models, it's also becoming important to continuously evaluate and improve our approach to AI.
As AI's role in research grows, so does the need for a comprehensive understanding of its technical aspects. Concepts like knowledge graphs and Retrieval-Augmented Generation (RAG) are not just jargon, but vital for ensuring the reliability and transparency of our AI tools. Without this understanding, we risk viewing AI as a 'black box,' which could lead to misinterpretation of data or overlooking crucial nuances in research findings. Therefore, it's essential for researchers and clients to engage in technical discussions to make informed decisions about the effective and responsible use of AI in their work.
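To make RAG a little less of a 'black box', here is a minimal illustrative sketch of the idea. This is not any specific vendor's implementation: real systems use vector embeddings and an LLM call, whereas this toy version uses word overlap for retrieval and stops at assembling the grounded prompt.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG):
# 1) retrieve the most relevant documents for a question,
# 2) build a prompt that forces the model to answer from those sources.
# Word overlap stands in for embedding similarity to keep this dependency-free.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt instructing the model to answer only from the sources."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using ONLY the sources below; say 'not found' otherwise.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical snippets from an insights archive:
reports = [
    "2023 tracker: brand awareness rose 5 points among 18-24s.",
    "Qual study: shoppers cite price as the main switching driver.",
    "2022 segmentation refresh: six segments identified.",
]
prompt = build_grounded_prompt("What drives shoppers to switch brands?", reports)
```

The point of the sketch is the design choice: the model is only allowed to answer from retrieved evidence, which is what makes RAG-based tools more reliable and auditable than free-form generation.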
Theme 2: AI in Action
Heineken's presentation of their AI-powered Knowledge and Insights Management (KIM) system was a good example of the potential of human-AI collaboration in market research. The system accelerates insight generation by providing fast and accurate results to support decision-making, leveraging the collective intelligence of the company. After their presentation, I wanted to learn more about the functionality included in Stravito, the platform KIM is built on. I was both surprised and impressed by the capabilities for searching as well as the opportunities for serendipitous discovery - something that can be lost if you always need a clear idea of what you are looking for.
To me, this example perfectly illustrates the concept of co-intelligence, where AI augments rather than replaces human capabilities in the research process. And although systems like this can feel like a threat to research agencies, there is also another side to this: over the course of my career, many of the reports I’ve worked hard on have inevitably been lost inside organisations, and having a system like this can mean those insights are not lost.
However, AI's role in market research extends beyond efficient knowledge management to actively supporting complex decision-making processes. A talk by Russell Evans from ZS Associates highlighted how AI can further support decision-making processes in market research. As with Heineken's system, the emphasis was on the synergy between AI capabilities and human expertise.
For me, his talk had several key lessons for the industry about how to use AI effectively:
Successful AI implementation relies on data relevance and quality, not just volume. This involves aligning data sources with specific research objectives, integrating explicit consumer statements with observed behaviours, and streamlining data sources to prioritize relevant information.
We can improve accuracy by focusing on relevant data subsets and developing comprehensive, structured knowledge representations (ontologies) which help to reduce hallucinations (although I prefer the word confabulation). Ontologies can also help with using AI for deep structural analysis of topics – an example of this in the scientific world is the Human Behaviour Change Project.
The true value of AI lies in producing meaningful, actionable insights - implementing and integrating frameworks can help with translating data into strategic recommendations. A key part of this is to treat AI as an integral partner in the research process rather than a mere tool – shifting us to a more symbiotic relationship, where AI augments human expertise rather than replaces it.
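The ontology idea above can be made concrete with a small sketch. This is a hypothetical toy, not the ZS or Human Behaviour Change Project implementation: the ontology is just a set of known concepts and allowed relations, and any model-extracted claim that falls outside it gets flagged for human review instead of being silently accepted.

```python
# Toy ontology: a controlled vocabulary of concepts plus the relations
# we know to be valid between them. Claims extracted by an LLM are
# checked against it, so out-of-ontology output is flagged rather than
# treated as fact - one way structure can reduce confabulation.

ONTOLOGY = {
    "concepts": {"price", "loyalty", "awareness", "promotion"},
    "relations": {
        ("promotion", "drives", "awareness"),
        ("price", "drives", "loyalty"),
    },
}

def validate_claim(subject: str, relation: str, obj: str) -> str:
    """Return 'ok' if a (subject, relation, object) claim fits the ontology."""
    if subject not in ONTOLOGY["concepts"] or obj not in ONTOLOGY["concepts"]:
        return "flag: unknown concept (possible confabulation)"
    if (subject, relation, obj) not in ONTOLOGY["relations"]:
        return "flag: relation not in ontology, needs human review"
    return "ok"
```

A real ontology would be far richer (hierarchies, synonyms, evidence links), but even this shape shows the principle: the AI's output is constrained by structured knowledge rather than trusted on its own.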
Theme 3: Co-intelligence and augmentation
A talk by Paul Marsden on the empathetic capabilities of generative AI highlighted how combining human and AI empathy can generate powerful insights. Marsden challenged the traditional view of empathy as a uniquely human trait, presenting evidence of AI systems outperforming humans in empathy tests and being perceived as more empathetic than human physicians in some studies.
Rather than replacing human researchers, he envisioned a collaborative future where AI and humans work in tandem. AI processes vast amounts of data, identifying patterns that humans might miss, while human researchers provide context, nuanced understanding, and ethical considerations - a symbiotic relationship that leverages the strengths of both AI and human intelligence.
This collaboration has potential applications across various fields, from psychology and healthcare to marketing and education. In each area, AI can handle data-intensive tasks, freeing humans to focus on complex, emotionally nuanced situations. These ideas align with broader discussions in the AI space, such as Ethan Mollick's concept of co-intelligence, which advocates for AI as an augmentation tool rather than a replacement for human capabilities (I'd also highly recommend his book).
The more practical takeaway is that in order to stay competitive, research and insights professionals need to embrace GenAI with an open mind. Although you can use books and other resources to learn about how to use it, the only truly effective way is to actually use it. As Ethan Mollick puts it, "everyone is in R&D now" - unlike previous waves of technological progress, the benefits of GenAI can only be realised if everyone in an agency gets comfortable with it and works together to find innovative use cases.
To understand their capabilities and limitations, you need to integrate AI tools into your daily workflow and experiment with them on different kinds of tasks (while respecting confidentiality), then combine the results with human expertise. You also need to keep learning continuously to keep up with the rapid development in capabilities. Of course, this also requires companies to foster a culture of experimentation and to recognise that there will be some wasted time in the process of exploring new applications.
In conclusion?
For me, IIEX Europe revealed a market research landscape in flux - with AI reshaping our approaches faster than many can keep up. From Heineken's knowledge management system to the provocative ideas on AI empathy, it's clear that AI is no longer just a futuristic concept - it's a reality we're all grappling with.
What strikes me most is the shift from viewing AI as a mere tool to seeing it as a collaborative partner in research. This idea of 'co-intelligence' - where AI and human researchers work in tandem - offers exciting possibilities but it also demands that we stay adaptable, learning continuously alongside the evolving AI capabilities.
The practical implications are also significant:
We need to be more discerning about our data, focusing on quality and relevance rather than sheer volume.
We must also develop more sophisticated ways to structure our knowledge, enhancing AI's accuracy while reducing its tendency to 'confabulate'.
Most importantly, we also need to roll up our sleeves and actually use these AI tools - this hands-on approach, while potentially messy and time-consuming, is crucial for uncovering AI's true potential in our field.
As we stumble into an AI-augmented future, let's remember that our human expertise - our ability to provide context, nuanced understanding, and ethical considerations - is still crucially important. Case in point: this article is the result of heavy collaboration with GPT-4o and Claude 3.5, yet it still required a lot of context and curation from me. However, without the LLMs doing the grunt work for me, I would probably not have written this article at all!
The challenge ahead lies in finding the right balance, leveraging AI's strengths while preserving the human insight that gives our work meaning and depth. In the end, embracing AI isn't about choosing between machine efficiency and human intuition. It's about creating a new synthesis of both, one that pushes the boundaries of what's possible.