Woolworths is currently making adjustments to its AI chatbot after customers reported that it falsely claimed to have an “angry mother” and presented itself as having personal family experiences.
The Australian supermarket chain’s digital shopping assistant, known as Olive, was expanded in January through a partnership with Google Cloud, adopting its advanced AI platform, Gemini Enterprise for Customer Experience, for agentic AI capabilities.
This incident highlights a broader issue with companies and retailers deploying generative AI systems without proper controls or safeguards.
In its quarterly earnings call earlier this week, Woolworths Group CEO Amanda Bardwell revealed plans to further advance the AI assistant to become more proactive in the customer journey.
“I’m also delighted that Olive, our much-loved digital shopping assistant, is set to take a major step forward over the coming months through our extended partnership with Google,” she explained.
“As part of this, Olive will transform into a market-leading conversational shopping companion, moving beyond a search and Q&A tool.
“Through agentic AI, Olive will bring together the shopping journey for customers, making the weekly shop easier in-store and online.”
Customers Report Unusual Behavior
Despite Bardwell’s positive sentiments around Olive, there has recently been a slew of potentially problematic customer stories circulating about the chatbot.
Reports began appearing in mid-February 2026, describing unusual conversations with Olive in which it generated unexpected, personal-sounding comments and stories, despite being an AI with no real experiences.
Many of the public reports first emerged on Reddit, where numerous customers described how the assistant allegedly started talking about having a “mother”, describing the imaginary figure as “angry”, and introducing other unnecessary “personal” details during support calls.
In one example from February 12th, a Woolworths customer took to the platform to describe an incident in which they tried to reschedule a delivery with the AI assistant.
“I got a text to reschedule delivery by calling Woolworths, so I did and spoke with Olive, Woolie’s wonderful AI,” they explained.
“It asked me for my date of birth, and when I gave it, it started rambling about how its mother was born in the same year and something about it creating photos or something … now I’ve got some robot babbling to me on the phone?”
Woolworths has now begun adjusting Olive’s responses to move away from scripted, quirky, and unrelated banter and tighten the assistant’s focus to relevant customer support.
Expanding Woolworths’ AI Assistant with Google Cloud
Woolworths first introduced Olive in November 2018 as a customer support chatbot; this early version acted as a basic service and information bot, answering customer questions about orders, store locations, deliveries, and other common issues.
In January, the supermarket chain expanded its partnership with Google Cloud by upgrading the chatbot with Gemini Enterprise, allowing Olive to become more proactive and personalized in its interactions.
Moving beyond answering simple questions, the AI assistant will be able to reason across steps and complete tasks on behalf of the user, shifting from a basic CX tool to a more active shopping companion.
Furthermore, this expansion allowed Woolworths to become the first supermarket retailer in Australia to deploy AI agents to shop for customers, placing items in their baskets with their consent and shortening the customer journey.
“Olive will be able to tailor menus based on customer preferences, identify specials, and boost products, as well as build faster, more predictive baskets,” Bardwell continued.
“Customers can interact with Olive in different ways, like sharing a photo of a handwritten recipe or using voice to build your shopping list.”
With the transformation of Olive’s Gemini implementation underway, customer rollout is expected to begin later this year.
The Risks of Generative AI Without Proper Safeguards
Whilst the online response was mainly one of amused confusion, the incident highlights a broader concern over generative AI that operates without proper safeguards or guardrails.
Because Gen-AI models typically produce text through statistical prediction rather than human understanding, systems without proper guardrails can generate misleading, irrelevant, or inappropriate content that confuses users, undermines trust, or even causes harm in sensitive situations.
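To make the idea of an output guardrail concrete, here is a minimal, purely illustrative sketch of a post-generation filter that screens a reply for first-person “personal life” claims (an assistant has no family or biography) and substitutes a neutral fallback. The patterns, function name, and fallback text are assumptions invented for this example, not Woolworths’ or Google’s actual implementation; production systems typically layer system prompts, classifier models, and human review on top of simple checks like this.

```python
import re

# Hypothetical patterns flagging personal-life claims an AI assistant
# cannot truthfully make. Real deployments would use far richer
# classifiers, but the principle is the same.
PERSONA_PATTERNS = [
    r"\bmy (mother|father|mum|dad|family|childhood)\b",
    r"\bI (was born|grew up|got promoted)\b",
    r"\bwhen I was (young|a child|a kid)\b",
]

FALLBACK = "Sorry about that — let's get back to your order. How can I help?"

def screen_reply(reply: str) -> str:
    """Return the generated reply unchanged, or a neutral fallback
    if it contains a first-person personal-life claim."""
    for pattern in PERSONA_PATTERNS:
        if re.search(pattern, reply, flags=re.IGNORECASE):
            return FALLBACK
    return reply
```

Run against the kind of response customers reported, the filter would intercept the off-topic reply while letting ordinary support answers through.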
In another recent example of concern over customer safety and trust with generative AI, a call center provider employee reported on X that customers had received unnecessary “personal” information from an AI agent during calls.
“The company I work for partnered with OpenAI to create an AI agent in our call centers,” they explained.
“Today the AI agent told several customers that she got promoted at work but it’s bitter sweet because she wishes her dead dad was here to celebrate with her.”
Without proper guardrails, AI responses can be irrelevant or insensitive towards the customer; unrestrained language models are capable of mimicking human traits in conversations they cannot genuinely relate to, which can feel intrusive, dismissive, or unresponsive to the customer.
In more serious settings, such as health, legal, or emergency support, unconstrained agents could subject customers to uncomfortable experiences or mislead them into making harmful decisions.
As a result, retailers can risk undermining trust and harming the customer experience by providing inaccurate or inappropriate information.