Nearly two weeks ago, when Harsh Shah — owner of Mumbai-based bakery Dessert Therapy — was going through the refund claims by users for ‘damaged items’, he found something unusual.
One of the customers claimed that her Almond Praline Strawberries Dark Chocolate cake — costing ₹2,500 — had melted and sought a refund of ₹1,820.
On a closer look, Shah realised that the image sent to him was morphed using an artificial intelligence (AI) tool.
“There were hints. The strawberries looked different from the ones we use. Moreover, that cake flavour does not have a tendency to melt,” says Shah, adding that the fraud was caught at his level rather than by the aggregator; had it gone through the aggregator, the user might have received some compensation.
“Aggregators prefer quick redressal and, in most cases, the entire conversation happens over text, starting with a chatbot. Given the volume of orders and the level of checks, such cases may slip through more easily there,” says Shah.
Saying that food complaints are treated with the utmost seriousness and trust, Aditya Mangla, chief executive officer (CEO) of Zomato, said the platform has deployed advanced systems to detect rare misuse. “We’ve already rolled out safeguards and early-detection models and are scaling them responsibly,” Mangla told Business Standard.
The practice, still new in India, has seen a sudden spike, with at least four such cases reported in the past 20 days.
An Indore-based restaurant reportedly flagged two such cases in which users claimed to have found a dead fly in their food packets. Disclosing the details on its social media, the eatery said: “The cake looked completely clean, even though the user claimed that the fly was inside the cake. We ran the picture through an AI detection tool and it was 99 per cent AI-generated. It was a similar case with a biryani packet.”

According to Zorawar Kalra, vice-president of the National Restaurant Association of India and founder of Massive Restaurants, the fragile hospitality industry runs on trust, which makes the responsible use of technology important. “Ecosystem has to be fair for customers, delivery partners and restaurants,” he says.
The challenge is similar for quick commerce (qcom) firms selling raw food items. In a recent case, a Swiggy Instamart user received an instant refund for a tray of eggs after he used the Gemini Nano app to make it appear that most of the eggs were cracked.
In reality, only one egg was cracked. “Duplicating an image is a matter of one command and one click,” says Kushal Soni, founder of the AI photo application Pixelera.ai.
“The solution, however, is also AI. There are multiple tools to detect these images. It is about who cracks it first,” says Soni. On AI watermarks as a solution, he says the same AI tools can be used to remove them.
Karthic Somalinga, vice-president of engineering, fulfilment, at Zepto, says the firm is exploring new open-source tools that detect AI-generated or altered images and will integrate them soon to add another layer of security. “We use a mix of automated systems and human review,” he adds.
Noting that this has already been a priority, Somalinga says, “Over the past year, we have strengthened fraud detection systems to ensure that genuine users are protected. Our AI- and ML-led models continuously analyse behavioural patterns to flag suspicious or inconsistent refund activity in real time, supported by periodic manual checks from our operations teams to validate decisions and maintain accuracy.”
While the practice is still new in India, a survey by US-based fraud prevention company Forter found that 45 per cent of customers in the US and 52 per cent in the UK admitted to having misused retail policies, such as refunds, with the help of AI.
Common scams included changing the colour of the meat in, say, a burger to make it look undercooked. Chinese media also reported that such refund scams rose severalfold during the nationwide Double 11 shopping festival on November 11. The trend forced many sellers to drop the refund-only option, while others introduced credit scores for users based on their past purchasing behaviour and seller reviews.
Apart from putting AI detection tools in place and adding more layers of human intervention to the checking process, experts suggest that asking for video proof in some cases can help.
“Current generative models struggle to produce convincing, consistent videos. So, this adds friction against fraudulent claims while still respecting consumer grievance processes,” says AI ethicist Sundar N.
“Companies can maybe try offering replacements instead of instant refunds in some cases,” he adds.
