The AI Peer Network: time out for deep and honest conversations about new tech
Fifteen months ago, Tech4Good South West launched the AI Peer Network to create a safe, open space for people to explore emerging technologies together. The aim has always been to move at the pace of collective learning: from the early “what is it?” and “how do I use it safely?” to today’s more ambitious “what can I use it for?”
Sessions have covered everything from ethics, privacy and environmental impact to practical case studies, government experimentation through the UK’s Incubator for Artificial Intelligence, and the real barriers organisations face when adopting new tools.
This Month’s Topic: Can AI Conduct Qualitative Research?
This month we tackled a subject that’s generating real debate: should AI be used for qualitative research?
We started with Naomi, founder of Research Your Way, talking through the ethics and privacy of using AI in evaluation. We then heard from evaluator Steve Powell, co‑founder of CausalMap, and his colleague Gabrielle, who demonstrated their tool QualiaInterviews.com, which uses conversational AI to conduct interviews, analyse responses, and generate causal maps at scale.
AI Ethics and Data Privacy
Ethics and data privacy are recurring themes in every Peer Network conversation. Naomi led with an overview of ethics, reminding us that ethical risk isn't just about the tools themselves: it's about the data we feed them, the prompting techniques we use, and the assumptions baked into their outputs. Naomi's presentation raised concerns about:
Opacity: many AI systems still don’t fully explain how they reach conclusions.
Data privacy: unapproved tools, unclear data flows, and the risk of exposing sensitive information.
Bias: both in training data and in the way prompts shape responses.
AI literacy gaps: especially among older or vulnerable groups, who are more exposed to scams and misinformation.
Naomi encouraged organisations to treat AI outputs as drafts, not finished products, and to build internal capability through policies, champions, and transparent decision‑making frameworks. The OECD’s responsible AI principles were highlighted as a strong starting point.
The Pros and Cons of AI as a Qualitative Research Tool
The demonstration of the Qualia tool prompted a good debate about where AI adds value and where it still falls short.
Where AI Shows Promise
Participants were intrigued by the potential to:
Scale up interviews quickly and cost‑effectively
Adapt to different conversational styles
Work across multiple languages
Support neurodiverse participants, who in some cases found it easier to speak to an AI than a human interviewer
Cassandra from Torchbox shared that when her team built an in‑house AI interviewing tool, participants typically engaged for around seven minutes — long enough to gather useful insights, but short enough to avoid fatigue.
Where AI Still Struggles
Concerns centred on:
Lack of natural interaction — delays, stilted phrasing, and missed nuance
Loss of human judgement — especially in sensitive or emotionally complex interviews
Data security — ensuring interview content is stored and processed safely
Over‑reliance — the risk that organisations treat AI‑generated insights as definitive rather than interpretive
Summing up, one participant said AI can be “a powerful assistant, but not a replacement for human understanding.”