The AI Grapple
Unravel the complexities of AI with The AI Grapple Podcast, hosted by Kate vanderVoort. Dive into thought-provoking discussions on the most critical AI issues shaping our world. Perfect for marketers and business professionals, this podcast is your guide to integrating AI responsibly and ethically into your organization. Join us as we navigate the future of technology and its profound impact on humanity.
Episodes

4 days ago
In this episode of The AI Grapple, Kate vanderVoort, Founder and CEO of the AI Success Lab, sits down with Chris Nolte, Founder and CEO of Kayana Remote Professionals, to unpack what’s really happening at the intersection of AI, remote work, and human capability.
Chris brings the rare lens of an investor-operator who has led and scaled businesses across finance, retail, prop tech, healthcare, and real estate. As an early beta tester of OpenAI and a long-time advocate for global talent, he shares grounded insights into why AI adoption often stalls, and what leaders are getting wrong about jobs, productivity, and automation.
This is a practical, human-centred conversation about execution, not hype.
Meet Chris Nolte
Chris Nolte is the Founder and CEO of Kayana Remote Professionals, a company helping growth-minded businesses, nonprofits, solopreneurs, and PE and VC-backed portfolio companies scale using top-tier Filipino talent, supported by AI-enabled matching and workflow automation.
Before Kayana, Chris spent 17 years running a family office where he bought, built, and operated companies with long-term capital. He has served as President and CEO of Verlo Mattress, co-founded AneVista Group, and advised startups through Dragonfly Group. His work today sits squarely at the intersection of AI, remote talent, and the future of work.
Why AI Tools Don’t Automatically Change How Work Gets Done
One of the central themes of this conversation is what Chris calls the human execution gap.
Many organisations invest heavily in AI tools, only to find that very little actually changes. Chris explains why this happens and why the real barrier isn’t technology, but the expectation of full automation without redesigning roles, workflows, or accountability.
Kate and Chris explore why AI still needs human judgment, why unfinished automations are everywhere, and why execution breaks down when leaders expect tools to replace thinking.
AI, Repeatable Work, and the Future of Remote Roles
As AI takes on more repeatable, task-based work, the nature of many roles is shifting fast.
Chris shares why this shift doesn’t spell the end of remote work, but rather raises expectations for what remote professionals contribute. Entry-level, low-context roles are disappearing, while higher-level work is becoming accessible far earlier in a career.
The conversation reframes the fear around AI and jobs, focusing instead on how remote workforces can move up the value chain rather than being pushed out.
Global Talent, AI, and the Levelling of Opportunity
A powerful thread in this episode is how AI is reshaping global opportunity.
Chris explains why AI is acting as a great enabler for talented people in countries that historically lacked access to education, capital, or global markets. With AI closing gaps in language, research, and communication, opportunity is becoming more evenly distributed, even if outcomes are not.
Kate and Chris discuss what this means for competition, wages, and the reality that professionals are no longer competing locally, but globally.
What Makes Remote Professionals Irreplaceable
With AI available to everyone, differentiation now comes from distinctly human qualities.
Chris outlines why professionalism, discernment, curiosity, and output-focused thinking matter more than ever. He explains why remote professionals must actively learn how to operate at a professional standard, and why trust is built through consistency, judgment, and ownership, not just technical skill.
Kate adds real-world examples from her own team, showing how AI-supported workflows free leaders from bottlenecks while raising quality and expectations.
AI, Custom GPTs, and Scaling Expertise
The episode dives into how custom GPTs and AI workflows are changing knowledge transfer inside businesses.
Kate shares how embedding her expertise into AI systems allows her remote team to work in her voice and style, reducing rework and approvals. Chris builds on this by explaining how content, education, and even books are changing as AI becomes part of how people learn and apply information.
This section offers a glimpse into how expertise can now scale without burning out the expert.
Productivity, Pace, and the Reality of Change
AI is moving faster than people can adapt, and both Kate and Chris acknowledge the tension this creates.
They discuss why productivity gains don’t come from layering AI on top of broken processes, and why many organisations are stuck between fear, regulation, and falling behind. The conversation also touches on global differences in AI adoption and the long arc of change businesses need to prepare for.
Looking Ahead: The Future of Work That Never Sleeps
To close, Chris shares his view on what’s coming next.
He describes a future where even small and mid-sized businesses operate across time zones, supported by AI and distributed teams that follow the sun. Work doesn’t stop, even when leaders do. The result is faster execution, higher output, and more opportunity for people who know how to work with AI rather than around it.
Key Takeaways
AI adoption fails when leaders expect automation without changing how work is designed
Repeatable, low-context roles are fading, while higher-value work is becoming accessible earlier
Remote professionals compete on a global stage, with AI as the common denominator
Human judgment, curiosity, and professionalism matter more, not less
AI scales expertise best when paired with strong workflows and clear standards
Connect with Chris
Learn more about Chris’s work and Kayana Remote Professionals:
Kayana Website: https://www.hirekayana.com
Kayana Capital Website: https://kayanacapital.com/
LinkedIn: https://www.linkedin.com/in/chrisnolte/

Monday Jan 12, 2026
In this episode of The AI Grapple, Kate vanderVoort is joined by Sarah Daly, AI strategist, researcher, and founder of AI360 Review, to explore what’s really holding organisations back from successful AI adoption. Rather than focusing on tools or trends, this conversation goes deep into trust, leadership responsibility, workforce impact, and the human systems that determine whether AI succeeds or fails at scale.
Sarah brings insights from six years of doctoral research into trust in AI at work, alongside her enterprise experience advising boards and senior leaders. Together, Kate and Sarah unpack why AI is not a technology problem, why people are already trusting AI more than they realise, and what organisations must do to navigate disruption honestly and responsibly.
Key Topics Covered
1. Why Trust Is the Real Issue in AI Adoption
Sarah explains that while public narratives focus on distrust, people are already placing deep trust in AI, often without realising it. From sharing personal information with AI tools to relying on outputs without verification, trust is already present but poorly calibrated. The challenge for organisations is not whether people trust AI, but whether they trust it in the right ways.
2. The Human Foundations of AI Performance
At AI360Review, Sarah’s work begins with people, not platforms. She shares why technology is often easier to control than human systems, and how trust can be deliberately designed through environment, leadership behaviour, and culture. When the right conditions exist, even AI sceptics can become strong advocates.
3. Strategy Before Tools
Rather than positioning AI as the strategy, Sarah argues it must support existing organisational goals. The starting point is always the problem being solved and the value being created. From there, organisations must consider governance, capability building, culture, education, innovation processes, and fit-for-purpose technology.
This approach is formalised in the AI360 framework, which assesses AI readiness across six organisational dimensions.
4. Leadership, Governance, and Risk
A recurring theme in the conversation is leadership clarity. When leaders lack confidence or avoid decisions, teams work around restrictions, often using AI in uncontrolled ways. Sarah reframes AI risk as a management issue, not a binary decision, and stresses that strong governance enables experimentation rather than shutting it down.
5. Australia’s AI Sentiment and the National AI Plan
Kate and Sarah discuss Australia’s low trust levels in AI compared to global peers, particularly in the workplace. Sarah shares why enterprise sentiment varies widely depending on enablement and leadership support. They also explore Australia’s national AI plan, with Sarah supporting the decision to embed AI governance within existing regulatory structures rather than creating new bodies.
6. AI as a Thinking Partner
The conversation shifts to how AI is changing how people think, write, and make decisions. Sarah highlights the difference between using AI as a creative partner versus outsourcing thinking entirely. Kate introduces discernment and personal responsibility as essential skills in the age of AI, especially given how readily people believe AI-generated outputs.
7. Workforce Impact and Difficult Conversations
One of the most powerful sections of the episode focuses on workforce disruption. Sarah speaks candidly about automation, role changes, and job loss, and why avoiding these conversations damages trust. She advocates for transparency, agency, and AI literacy so employees can create value for their organisation and their future careers.
8. Consumer Backlash and Lessons from Early Movers
Sarah shares lessons from organisations that moved too fast without accountability, including well-known AI failures. These examples show why companies must own AI-driven decisions, test rigorously, and protect customer experience. Second movers, particularly in Australia, have the advantage of learning from these mistakes.
9. Transparency and Ethical Use of AI
The episode explores whether organisations should disclose AI use publicly. Sarah explains how expectations shift when AI becomes embedded in everyday work, while stressing that transparency around customer data, privacy, and protection remains essential. Over time, AI disclosures may become as standard as privacy policies.
10. A Human-Centred Case Study: IKEA
Sarah shares an inspiring example from IKEA, where AI voice tools were introduced into call centres. Instead of job losses, staff were retrained as in-store interior designers, expanding customer experience and creating transferable skills for employees. This case shows what’s possible when organisations lead with people, not fear.
11. What the Future Could Look Like
Looking ahead, Sarah remains optimistic. While human drivers like autonomy, mastery, and purpose remain constant, AI has the potential to reshape how people work, think, and create meaning. Used well, AI can augment human capability rather than diminish it, opening new possibilities for work and life.
12. Sarah’s AI Toolkit
Rather than a single favourite tool, Sarah uses a mix of:
Microsoft Copilot
ChatGPT
Claude
Each serves a different purpose, reinforcing the idea that effective AI use is about intentionality, not loyalty to one platform.
Resources & Links
AI360Review: https://www.ai360review.com/
Connect with Sarah on LinkedIn: https://www.linkedin.com/in/sarah-daly-au/
Final Thoughts
This episode is a must-listen for leaders navigating AI change, not just from a technical standpoint, but from a human one. Sarah Daly brings clarity, research-backed thinking, and real-world examples that challenge organisations to lead with courage, transparency, and responsibility as AI reshapes how we work.
If you’re responsible for strategy, culture, or people, this conversation will give you a clearer lens on what really matters in AI adoption.

Monday Jan 05, 2026
Ep 44: Building Business Resilience Through Better Data with Davis DeRodes
In this episode, Kate vanderVoort, CEO of the AI Success Lab, sits down with Davis DeRodes, Head of Data Science Innovation at Fusion Risk Management, for a clear and practical look at the role data plays in business resilience. Davis has a rare gift for breaking down technical concepts, helping leaders understand how better data, smarter systems and simple planning can protect organisations from disruption.
Davis explains why resilience is no longer just an enterprise issue and shares tangible steps small to medium businesses can take right now to prepare for the rapid change AI is bringing. They talk about AI-generated scenarios, data simulations, model transparency, synthetic data, employee-facing agents and how organisations can approach data in ways that set them up for long-term stability.
This episode is perfect for leaders who want a grounded understanding of how data supports smart decisions, resilient systems and confident use of AI.
Key Themes
What enterprise resilience really means and why every organisation now needs it
How AI-generated scenarios work and why they outperform traditional tabletop exercises
The difference between data science and decision science
How small and medium businesses can transform their data into a resilience asset
The role of structured vs unstructured data in an AI-driven world
What model context protocol (MCP) means for how AI accesses business systems
Practical steps for leaders to strengthen resilience today
Future trends in data collaboration, governance and synthetic data
Why the “business brain” approach gives companies more control
What work looks like when AI becomes a close collaborator
Insights
Data science vs decision science
Davis explains the distinction in a way that helps non-technical leaders understand what data is actually for in a business and why waiting for perfect accuracy can slow teams down.
AI-generated scenarios
He walks through how Fusion uses AI to create highly tailored disruption scenarios that expose weak points organisations would never have spotted on their own.
Monte Carlo simulations
Davis describes modern simulation techniques that replace slow, expensive tabletop exercises with fast, repeatable, data-driven insights.
Resilience for smaller businesses
He outlines simple, accessible steps any organisation can take to strengthen resilience, including mapping revenue drivers, centralising key data, and understanding dependencies.
Data governance as a superpower
Why businesses that invest early in clean, structured data gain massive efficiency later.
Synthetic data, future risks and the pollution of the internet
A thoughtful conversation on how AI trains itself, the risks of AI training on AI, and why high-quality walled-off data sources will become even more valuable.
AI as an employee
How organisations will soon handle agents just like staff members, including permissions, access and responsibilities.
Links Mentioned
LinkedIn – Davis DeRodes https://www.linkedin.com/in/davis-derodes/
Fusion Risk Management https://www.fusionrm.com/
Kaggle synthetic datasets https://www.kaggle.com/datasets
Google AI Studio https://aistudio.google.com/
If you enjoyed this episode, share it with a colleague who needs practical clarity on AI and data. Subscribe to The AI Grapple on your favourite podcast platform so you never miss an episode.

Monday Dec 29, 2025
Ep 43: The Truth About AI, Sustainability, and Trust: Time Will Tell
In this episode of The AI Grapple, Kate vanderVoort (Founder of the AI Success Lab) is joined by sustainability author, consultant, and speaker John Pabon to explore one of the most pressing and uncomfortable questions facing AI adoption today: its impact on the environment, trust, and society.
With more than 20 years working across public policy, consulting, and sustainability strategy, John brings a calm, pragmatic voice to a conversation often dominated by fear or hype. Together, Kate and John unpack what businesses actually need to consider as AI becomes embedded into operations, reporting, and decision-making.
Meet the Guest: John Pabon
John Pabon is a sustainability expert with a background spanning the United Nations, McKinsey, AC Nielsen, and a decade living and working in China. He is the author of Sustainability for the Rest of Us: Your No BS 5 Point Plan for Saving the Planet and is widely known as Australia’s only independent greenwashing expert.
John works with organisations to move sustainability out of marketing spin and into real, strategic action, with a strong focus on transparency, governance, and trust.
The Environmental Impact of AI: What We Know and What We Don’t
One of the most common concerns Kate hears in AI training sessions is about energy use, data centres, and AI’s carbon and water footprint. John explains why these concerns are valid, particularly when it comes to the rapid expansion of data centres and the resources required to cool them.
At the same time, he cautions against alarmist thinking. AI’s environmental impact is still being measured in different ways, and the technology is evolving quickly. The bigger challenge right now is uncertainty — and the pressure on companies to scale AI fast while still meeting sustainability targets.
Sustainability Is More Than the Environment
A key theme in the conversation is that sustainability is not just about emissions or energy use. John emphasises the importance of the social and governance sides of sustainability, especially as AI becomes more influential in reporting, decision-making, and communication.
From fabricated reports to unverified claims, AI introduces new risks when expertise is missing. This is where governance, oversight, and what Kate calls “expert in the loop” become critical to avoid misinformation and reputational damage.
Greenwashing, Greenhushing, and AI
John breaks down greenwashing in simple terms: when organisations use the language of sustainability without the substance to support it. He explains why AI creates fresh opportunities for greenwashing, particularly when companies make vague or exaggerated claims about “responsible” or “sustainable” AI without evidence.
The conversation also introduces the idea of greenhushing — when companies say nothing at all out of fear of getting it wrong. John argues that silence erodes trust just as much as misleading claims, and that openness, honesty, and progress matter more than perfection.
Can AI Support Sustainability Instead of Undermining It?
Despite the risks, John is clear that AI also holds real promise. From supply chain traceability to emissions reporting, AI can help businesses understand what is actually happening inside their operations — especially where sustainability impacts have traditionally been hard to measure.
Used well, AI can support better decision-making, reduce inefficiencies, and help organisations focus on what truly matters rather than chasing trends.
Trust, Transparency, and Consumer Backlash
As public awareness of AI grows, Kate and John discuss the very real possibility of consumer backlash, particularly when AI use conflicts with a company’s stated values. John stresses that trust is built through transparency — explaining not just what a company is doing with AI, but why.
People don’t expect organisations to have all the answers. They do expect honesty, clarity, and a willingness to take responsibility.
Regulation, Education, and Personal Responsibility
The episode also explores the uneven global approach to AI regulation, from Europe’s safety-first stance to America’s innovation push. John and Kate agree that education has not kept pace with adoption, leaving many people unsure how to use AI responsibly.
John shares how he personally uses AI as a thinking partner in his consulting work, while remaining cautious about outsourcing expertise or creative judgement. Both emphasise personal responsibility — how individuals and organisations choose to engage with AI matters.
A Hopeful Look Ahead
The episode closes on an optimistic note. John shares his vision of a future where sustainability is so embedded into business that every purchase becomes sustainable by default. In that future, AI plays a supporting role — helping organisations get there faster and more effectively, without leaving people behind.
Connect with John Pabon
To learn more about John’s work, visit https://www.johnpabon.com
Social Media Links:
TikTok/Instagram: @johnapabon
LinkedIn: https://www.linkedin.com/in/johnpabon

Monday Dec 22, 2025
Ep 42: Maker, Shaper or Taker? David Espindola’s Guide to Smart AI Strategy
In this episode, Kate vanderVoort (Founder and CEO at the AI Success Lab) speaks with futurist, author and technologist David Espindola, founder of Brainyus and author of Soulful: You in the Future of Artificial Intelligence. With more than 30 years in the tech industry, David has guided organisations through major waves of disruption. His work now focuses on human and AI collaboration, ethical adoption and how businesses can prepare for rapid change.
What We Cover
David’s AI journey David shares how his early work in technology set the stage for exploring AI long before it hit the mainstream. He explains the shift from AI being an academic topic to something every industry now has to face head-on. His first book, The Exponential Era, explored the convergence of fast-growing technologies, with AI standing out as the most powerful force shaping business and society.
Why AI is different from past technology waves While tech change isn’t new, the speed and scale of AI is. David highlights how robotics, quantum computing and AI are blending, creating a level of disruption few leaders are ready for.
The Maker, Shaper, Taker model David breaks down one of the most practical strategic models in this space:
Makers build frontier AI models.
Shapers fine-tune models on their own data and culture.
Takers use AI built into existing tools.
Most businesses don’t even realise these options exist. The conversation explores how smaller organisations can gain an advantage by choosing their place in this model with intention.
The human side of AI adoption Kate and David dig into the fear, uncertainty and culture challenges that show up inside organisations. David shares how one client used an AI champion, clear policies and structured training to build confidence, capability and responsible use. He stresses the importance of trust, transparency and honest conversations about job changes.
Workforce changes and agentic AI David discusses the shift ahead as agentic AI becomes part of everyday workflows. With half of entry-level roles at risk, he talks through the long-term impact on talent pipelines and how leaders should prepare their people now.
Education’s turning point Both Kate and David explore the role of AI in learning and how personalised tutoring could transform the way people develop skills. They look at why bans don’t work, how critical thinking becomes even more important and what students need in order to thrive in an AI-driven world.
David’s podcast and working with Zena, his AI colleague David shares the story behind his podcast Conversations with Zena and what happened when he trained an AI agent on his books, writing, values and language. He talks through the challenges of three-way conversations with AI, how context shapes quality and the surprising moments where Zena raised questions he didn’t expect.
Global AI ethics, regulation and the geopolitical tension ahead The discussion covers the EU’s AI Act, US innovation, China’s influence and the need for shared approaches to safety, human rights and access.
What’s possible if we get this right David closes with an optimistic view of what AI could unlock: abundance, less manual work, more meaningful creativity, and more time for humans to grow, reflect and connect. He also speaks to the risks and the need for strong global safeguards.
Links and Resources
David Espindola’s Website: davidespindola.com
Brainyus: brainyus.com
Book: Soulful: You in the Future of Artificial Intelligence
Podcast: Conversations with Zena, My AI Colleague
Connect with David
LinkedIn: https://www.linkedin.com/in/davidespindola/
Instagram: https://www.instagram.com/despindola23/
X: https://twitter.com/despindola23
YouTube: https://www.youtube.com/@despindola23
Connect with Kate at the AI Success Lab
AI Success Lab
AI Success Lab Facebook Community
LinkedIn

Tuesday Dec 16, 2025
Ep 41: Raising Future-Ready Kids: The Family AI Game Plan with Amy D. Love
In this episode of The AI Grapple, Kate vanderVoort speaks with Amy D. Love – founder of the international movement Discovering AI and best-selling author of Raising Entrepreneurs and Discovering AI: A Parent’s Guide to Raising Future-Ready Kids. A former Fortune 500 Chief Marketing Officer and Harvard MBA, Amy has turned her focus to helping families prepare their children for life and success in the age of AI.
Amy and Kate dive into why families – not just schools or governments – are critical to AI readiness. They explore the need for practical, values-led guidance in navigating AI with kids and discuss how the FAMILY AI GAME PLAN is empowering parents to raise children who are not only aware of AI, but equipped to thrive alongside it. This episode is packed with practical strategies, real-life anecdotes, and thoughtful reframes that challenge the way we think about parenting, education, and technology.
What We Cover:
Why families are the frontline of AI education
The vision behind Discovering AI and Amy’s shift from tech exec to children’s advocate
Moving from fear to confidence as a parent in the age of AI
A walkthrough of the FAMILY AI GAME PLAN – and how any family can use it
“Create more, consume less” – why this mantra matters now more than ever
The hidden risks of leaving AI education solely to schools or governments
Real-life family AI activities that promote creativity, ethics and digital literacy
What’s possible if every family gets this right in a single generation
About Amy D. Love: Amy D. Love is the founder of Discovering AI, an international movement helping families prepare children to thrive in an AI-powered world. With a background as a Fortune 500 CMO and a Harvard MBA, Amy has advised AI leaders and policymakers on aligning tech with human values. She is the author of the best-selling Raising Entrepreneurs and the newly released Discovering AI. Her signature FAMILY AI GAME PLAN offers parents a practical framework to guide children’s use of AI with confidence, creativity and care.
Resources and Links:
Website: www.discoveringai.org
www.discoveringai.com
Books:
Raising Entrepreneurs – Available on Amazon
Discovering AI: A Parent’s Guide to Raising Future-Ready Kids – Available on Amazon
Free resources, MindSpark activities and the FAMILY AI GAME PLAN available on the Discovering AI website
Connect with Kate:
Website: www.aisuccesslab.com
LinkedIn: Kate vanderVoort
Subscribe & Review: If you enjoyed this episode, please subscribe, rate and leave a review on your favourite podcast platform. Share this episode with a fellow parent or educator who’s navigating the world of AI with kids.

Monday Dec 08, 2025
In this episode, Kate vanderVoort, Founder of the AI Success Lab, speaks with Neil Tunnah, a former elite rugby coach, global leadership consultant and founder of The Performance Chain Group. Neil works with organisations across Australia and North America to help leaders build behavioural consistency, navigate uncertain environments and guide their people through rapid AI-driven change.
Neil brings grounded thinking and honest reflection to some of the biggest leadership challenges of this moment. Together, they explore why clarity is the currency of trust, how fear spreads when leaders avoid hard conversations, and why AI won’t replace good leaders but will absolutely expose the weak ones. He also shares lessons from elite sport on resilience, habit-building and culture that apply directly to today’s workplaces.
The discussion moves through strategy, psychology, culture and the realities facing teams on the ground. Neil also speaks openly about raising kids in this era and what the future of learning could look like with AI in the mix.
What We Cover:
How AI is disrupting leadership and why behavioural consistency matters more than ever
Why many leaders are confused about AI strategy, and how that confusion cascades through organisations
Creating clarity when the truth is that leaders don’t have all the answers yet
The danger of top-down AI strategies that ignore frontline experience
Human friction points organisations keep missing when adopting AI
The cultural gaps that stop AI projects from gaining traction
Fear, job security and why avoidance only increases anxiety
Lessons from elite sport that shape how leaders can develop resilience and habits that actually stick
How AI can enhance coaching, development and performance conversations
Why the future of learning needs to shift away from memorising and towards real personalised development
Raising children during an AI-driven transformation and building the foundations they’ll need
What the “re-engineering” of workplaces and society might look like over the next few years
Guest Bio: Neil Tunnah is a former elite rugby coach turned global leadership consultant and founder of The Performance Chain Group. He helps organisations across Australia and North America navigate change, embed behavioural consistency and lead well in an AI-shaped world. Known for a no-fluff approach to people and performance, Neil works at the intersection of culture, behaviour and leadership. He’s also a dad of two, a gym regular and still deeply connected to the rugby community.
Connect with Neil: LinkedIn: https://www.linkedin.com/in/neil-tunnah-0a2071122/
The Performance Chain Group
Listen & Subscribe: If you’re a leader, marketer or business professional wanting to understand how to navigate the human side of AI adoption, this episode offers timely, grounded guidance. Listen on your favourite podcast platform and follow the show for future conversations on practical AI in business.

Monday Dec 01, 2025
Australia is racing ahead with AI, but not everyone is getting a fair start. In this episode of The AI Grapple Podcast, Kate vanderVoort sits down with Doug Taylor to unpack the reality facing young people who are being pushed further to the edges of opportunity because they can’t get online.
Doug leads The Smith Family, one of Australia’s most trusted charities supporting students from low-income backgrounds. He brings a clear view of what digital access really looks like on the ground, why the divide is widening, and how AI could strengthen or break the pathways young Australians rely on.
Across this conversation, Kate and Doug explore the true cost of digital exclusion, the pressure on families, and the tough choices schools, parents and organisations are facing as AI sweeps through education, work and daily life. Doug also opens up about the leadership approach needed to guide teams through rapid change while protecting the people who rely on them most.
What We Cover in This Episode
How digital exclusion is affecting young Australians right now
Why AI literacy is becoming essential for future jobs
The impact on students who lack a device or stable internet at home
What The Smith Family is doing to close the gap, including AI-enabled tutoring
How frontline workers are using digital assistants to support families
The privacy, safety and bias risks organisations need to plan for
Why trust, empathy and human judgement still matter in AI-enabled work
Doug’s hopes for a future where AI helps reduce inequality rather than deepen it
Why This Conversation Matters
Digital access is no longer optional. It shapes education, work and connection, and its absence cuts young people off from the very tools they need to build their future. As marketers, business leaders and technologists move quickly to adopt AI, Doug’s message is a vital reminder: progress only counts if it includes everyone.
This episode gives you a grounded look at the real issues behind AI adoption and a strong sense of the responsibility we share in ensuring technology lifts people up rather than locking them out.
About Doug Taylor
Doug Taylor is the CEO of The Smith Family, a 103-year-old organisation working to break the cycle of educational inequality in Australia. He brings decades of experience in the not-for-profit sector and is a national voice on digital access, education and community wellbeing.
Links and Resources
Support The Smith Family’s Christmas Appeal: thesmithfamily.com.au
Connect with Doug on LinkedIn
Follow Kate on LinkedIn for updates and AI training
Join the free AI Success Lab for more AI updates and skills
Listen and Subscribe
If you enjoy the episode, share it with someone working in education, technology or community impact. Your support helps more people join the conversation about building an AI-enabled future that works for everyone.

Tuesday Nov 25, 2025
Artificial intelligence is advancing faster than our ability to regulate it, understand it, or even decide how much we trust it. In this deeply insightful episode of The AI Grapple, host Kate vanderVoort sits down with Darren Menachemson, Chief Ethicist and Head of AI and Digital Societies at ThinkPlace, and Chair of the Education Futures Foundation, to explore what it really takes to create a future where technology serves humanity — not the other way around.
Darren’s journey is as fascinating as it is relevant. From growing up in apartheid South Africa and witnessing systemic injustice first-hand, to leading global initiatives in ethics, governance, and human-centred design, his story offers a powerful lens through which to view AI’s moral and social implications.
Together, Kate and Darren discuss how ethics, compassion, and discernment must guide the next wave of AI innovation. They unpack what “trust” in AI truly means, how governments and institutions can balance regulation with innovation, and why young people might already be the best equipped to envision a responsible AI future.
🧭 Key Themes & Insights
Trust and Accountability in AI: How can societies build confidence in AI systems that are shaping our daily lives and decision-making?
Ethics and Alignment: What does it mean for technology to align with human values, and how can governments, organisations, and individuals ensure that happens?
The Compassion Gap: Darren’s national research reveals why Australians still see AI as less compassionate than humans — and why empathy may be our most important competitive advantage.
Regulation vs Innovation: How can we design guardrails that keep AI safe and fair without stifling progress and discovery?
Bias and Fairness: Real-world examples of AI perpetuating social inequities - and how we can use AI itself to identify and correct those biases.
Human-Centred Design for the Digital Age: Why design thinking, ethics, and leadership training must evolve together to create a fairer digital society.
🧠 About Darren Menachemson
Darren Menachemson is Chief Ethicist and Head of AI and Digital Societies at ThinkPlace - a global public good consultancy working with governments, NGOs, and industry to design ethical systems that improve lives. He also chairs the Education Futures Foundation, a not-for-profit organisation dedicated to preparing young people to become the ethical leaders of tomorrow.
A pioneer in the intersection of ethics, design, and technology, Darren has advised public sector leaders worldwide on how to align AI development with societal values. His work spans regulatory reform, social innovation, and future-focused education, helping communities adapt to the age of intelligent technology.
Connect with Darren:
🌐 ThinkPlace
💼 LinkedIn
🔗 Connect with Kate
🌐 AI Success Lab
🎧 The AI Grapple Podcast
💼 LinkedIn – Kate vanderVoort

Thursday Oct 16, 2025
Ep 37: Is This The Last Book Written By a Human? Jeff Burningham
In this profound and thought-provoking conversation, Kate vanderVoort (founder of the AI Success Lab) sits down with Jeff Burningham - tech entrepreneur, venture capitalist, and author of The Last Book Written by a Human: Becoming Wise in the Age of AI.
Jeff has built and invested in companies worth over $5 billion, co-founded Peak Capital Partners and Peak Ventures, and even ran for Governor of Utah. But today, he’s turning his attention to a far deeper question:
As machines become smarter, how do we become wiser?
This episode explores how technology, entrepreneurship, and spirituality intersect - and why the future of AI isn’t a technological problem but a human challenge.
Kate and Jeff unpack what it means to “rehumanise” business, how to bring consciousness into capitalism, and why our next evolution as leaders, parents, and humans depends on our ability to choose wisdom over speed, and connection over control.
You can connect with Jeff at:
www.jeffburningham.com
www.peakcapitalpartners.com
Social Media Links
@jeffburningham everywhere
Learn more at www.aisuccesslab.com
Watch more interviews: YouTube/@AISuccessLab
Join the AI Success Lab Elite Membership for practical, human-centred AI training.
