In this episode of The AI Grapple, Kate vanderVoort is joined by Sarah Daly, founder of AI360Review and an AI strategist and researcher, to explore what’s really holding organisations back from successful AI adoption. Rather than focusing on tools or trends, this conversation goes deep into trust, leadership responsibility, workforce impact, and the human systems that determine whether AI succeeds or fails at scale.
Sarah brings insights from six years of doctoral research into trust in AI at work, alongside her enterprise experience advising boards and senior leaders. Together, Kate and Sarah unpack why AI adoption is not a technology problem, why people are already trusting AI more than they realise, and what organisations must do to navigate disruption honestly and responsibly.
Key Topics Covered
1. Why Trust Is the Real Issue in AI Adoption
Sarah explains that while public narratives focus on distrust, people are already placing deep trust in AI, often without realising it. From sharing personal information with AI tools to relying on outputs without verification, trust is already present but poorly calibrated. The challenge for organisations is not whether people trust AI, but whether they trust it in the right ways.
2. The Human Foundations of AI Performance
At AI360Review, Sarah’s work begins with people, not platforms. She shares why technology is often easier to control than human systems, and how trust can be deliberately designed through environment, leadership behaviour, and culture. When the right conditions exist, even AI sceptics can become strong advocates.
3. Strategy Before Tools
Rather than positioning AI as the strategy, Sarah argues it must support existing organisational goals. The starting point is always the problem being solved and the value being created. From there, organisations must consider governance, capability building, culture, education, innovation processes, and fit-for-purpose technology.
This approach is formalised in the AI360 framework, which assesses AI readiness across six organisational dimensions.
4. Leadership, Governance, and Risk
A recurring theme in the conversation is leadership clarity. When leaders lack confidence or avoid decisions, teams work around restrictions, often using AI in uncontrolled ways. Sarah reframes AI risk as a management issue, not a binary decision, and stresses that strong governance enables experimentation rather than shutting it down.
5. Australia’s AI Sentiment and the National AI Plan
Kate and Sarah discuss Australia’s low trust levels in AI compared to global peers, particularly in the workplace. Sarah shares why enterprise sentiment varies widely depending on enablement and leadership support. They also explore Australia’s national AI plan, with Sarah supporting the decision to embed AI governance within existing regulatory structures rather than creating new bodies.
6. AI as a Thinking Partner
The conversation shifts to how AI is changing the way people think, write, and make decisions. Sarah highlights the difference between using AI as a creative partner and outsourcing thinking entirely. Kate introduces discernment and personal responsibility as essential skills in the age of AI, especially given how readily people believe AI-generated outputs.
7. Workforce Impact and Difficult Conversations
One of the most powerful sections of the episode focuses on workforce disruption. Sarah speaks candidly about automation, role changes, and job loss, and why avoiding these conversations damages trust. She advocates for transparency, agency, and AI literacy so employees can create value for their organisation and their future careers.
8. Consumer Backlash and Lessons from Early Movers
Sarah shares lessons from organisations that moved too fast without accountability, including well-known AI failures. These examples show why companies must own AI-driven decisions, test rigorously, and protect customer experience. Second movers, particularly in Australia, have the advantage of learning from these mistakes.
9. Transparency and Ethical Use of AI
The episode explores whether organisations should disclose AI use publicly. Sarah explains how expectations shift when AI becomes embedded in everyday work, while stressing that transparency around customer data, privacy, and protection remains essential. Over time, AI disclosures may become as standard as privacy policies.
10. A Human-Centred Case Study: IKEA
Sarah shares an inspiring example from IKEA, where AI voice tools were introduced into call centres. Instead of job losses, staff were retrained as in-store interior designers, enhancing the customer experience and giving employees transferable skills. This case shows what’s possible when organisations lead with people, not fear.
11. What the Future Could Look Like
Looking ahead, Sarah remains optimistic. While human drivers like autonomy, mastery, and purpose remain constant, AI has the potential to reshape how people work, think, and create meaning. Used well, AI can augment human capability rather than diminish it, opening new possibilities for work and life.
12. Sarah’s AI Toolkit
Rather than a single favourite tool, Sarah uses a mix of:
- Microsoft Copilot
- ChatGPT
- Claude
Each serves a different purpose, reinforcing the idea that effective AI use is about intentionality, not loyalty to one platform.
Resources & Links
- AI360Review: https://www.ai360review.com/
- Connect with Sarah on LinkedIn: https://www.linkedin.com/in/sarah-daly-au/
Final Thoughts
This episode is a must-listen for leaders navigating AI change, not just from a technical standpoint, but from a human one. Sarah Daly brings clarity, research-backed thinking, and real-world examples that challenge organisations to lead with courage, transparency, and responsibility as AI reshapes how we work.
If you’re responsible for strategy, culture, or people, this conversation will give you a clearer lens on what really matters in AI adoption.