Kini AI

The Matter Don Cast: Why Chatbots Hallucinate & How To Fix It

Egypt Steps Up: AI Readiness Report Sets New Standard for Africa

TL;DR for this Update

  • The Matter Don Cast: Why Chatbots Hallucinate & How To Fix It

  • Egypt Steps Up: AI Readiness Report Sets New Standard for Africa (AI in Africa)

  • AI Adoption by Nigerian Firms Hits 93% - Zoho

  • Google keeps Apple deal, Must Share Data After Antitrust Ruling

  • Nvidia Unleashes “Rubin CPX”—Next-Gen AI Chip Ready to Power Massive Video Creation

  • OpenAI Backs AI-made Animated Feature Film

  • Google on Course to Power Siri’s AI search Upgrade

  • Tool of The Week: Meta AI

The Matter Don Cast: Why Chatbots Hallucinate & How To Fix It

One of the biggest wahala that has been bugging techies and regular folks about AI chatbots is hallucination, but be like say that matter don cast now! OpenAI recently claimed to have found the solution to the problem, and their claims are backed up by a 36-page research paper. 

To quote the abstract of the said paper, “large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty”. These incorrect statements, and the act of making them, are what tech people call hallucinations.

In simple terms, AI chatbots can sometimes be like that your ITK friend who would rather cook something up than admit that they missed the memo. For everyday users, this is why you sometimes get responses that sound correct but fall apart when you check the facts.

In their paper, OpenAI revealed that part of the problem can be chalked up to the reward system used during AI training. Most language models are trained to always give an answer, because doing so earns them a higher reward score. Saying “I don’t know” gets them zero points.

This is very similar to what some students here in Africa call Baba Dada CAC: randomly picking answers on a multiple-choice test when them no sabi the answer. The only difference is that, unlike the student, the AI does not get penalized for giving a wrong answer.
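To see why this reward design pushes models toward guessing, here is a toy sketch of accuracy-only grading on a hypothetical four-option multiple-choice test. This is my own illustration of the incentive, not OpenAI's training code:

```python
import random

random.seed(0)
OPTIONS = 4  # hypothetical four-option multiple-choice question

def grade(answer, correct):
    # Accuracy-only grading: 1 point for a right answer, 0 for anything
    # else, including an honest "I don't know".
    return 1 if answer == correct else 0

trials = 10_000
guess_score = sum(grade(random.randrange(OPTIONS), 0) for _ in range(trials))
abstain_score = sum(grade("I don't know", 0) for _ in range(trials))

print(guess_score / trials)    # roughly 0.25: random guessing earns points
print(abstain_score / trials)  # exactly 0.0: honesty earns nothing
```

Under this grading, blind guessing always has a higher expected score than admitting uncertainty, so a model trained on it learns to behave like our Baba Dada CAC student.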

Now, OpenAI’s proposed fix is about teaching honesty. Their new approach rewards the model when it admits uncertainty and penalizes it when it pretends. The goal is to build a system that would rather tell you, “Oga, I no sabi o,” than give you cold bobo.
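The proposed fix can be sketched the same way. In this toy version (the function names and the threshold value are my own assumptions, not code from the paper), a wrong answer now costs points, so answering only beats saying "I no sabi" when the model is confident enough:

```python
def wrong_penalty(threshold=0.75):
    # Penalty sized so that answering and abstaining break even exactly
    # at the confidence threshold (here -3 points at threshold 0.75).
    return -threshold / (1 - threshold)

def expected_score(p_correct, threshold=0.75):
    # Expected reward for answering when the model has probability
    # p_correct of being right; abstaining always scores 0.
    return p_correct * 1.0 + (1 - p_correct) * wrong_penalty(threshold)

print(expected_score(0.5))   # -1.0: below threshold, abstaining (0) wins
print(expected_score(0.9))   # 0.6: a confident model should still answer
```

With this scoring, the only way to beat a zero is to answer when genuinely confident, which is exactly the honesty the article describes.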

At the end of the day, hallucinations may never fully disappear, but if OpenAI’s method works, we might finally have AI systems that know when to stop forming and simply admit uncertainty.

If you want to dive deeper into the research and see all the fine details for yourself, you can check out the full paper here.

Kini Big Deal? (Why does it matter?)

Now you fit dey wonder, “All this grammar, how e take concern me?”

Africans are using AI more and more for school, small business, job hunting, and even daily wahala like drafting emails or solving assignments. If you use AI for these things (and more), hallucinations no be small matter o. Imagine relying on an AI chatbot to help you prep your CV, make the AI come dey add fake certifications. Or asking health-related questions and getting plausible-sounding answers wey fit mislead you. The stakes are high.

But if chatbots really begin to be honest, it means you can now trust AI more for research assignments without fear of citing fake facts and lean on AI for advice without constantly double-checking everything.

AI in Africa

Egypt Steps Up: AI Readiness Report Sets New Standard for Africa

Egypt just set the pace for responsible AI on the continent—a partnership with UNESCO and the EU has delivered the country’s first major AI Readiness Assessment Report. The report, launched in Cairo by the Ministry of Communications and Information Technology, brings everyone to the table: government, innovators, academics, and international partners.

This isn’t just about AI hype. Egypt is laying out a serious roadmap for making AI work for its people, with clear plans for skills, infrastructure, and data privacy. Their strategy puts a spotlight on ethics, job creation, smart innovation, and support for start-ups, especially for local language tech and sectors like healthcare and agriculture. Egypt wants digital transformation powered by AI, but done right: inclusively, ethically, and with real impact. (Read More)

Kini Big Deal? (Why does it matter?)

With this move, Egypt positions itself as a regional leader in AI, sharing knowledge and standards in Africa and beyond, proof that AI progress in Africa can be bold, people-centered, and globally relevant.

Tool of The Week

Meta AI: The Ant-Man of the AI Universe

My friends and I love to play the 20 Questions game, where we split into two teams. One team picks a word, and the other tries to guess it by asking 20 yes-or-no questions. I’m sure you’re wondering what that has to do with Meta AI. Find out here.

And there you have it! That’s all I can fit into today’s update. See you later this week. Peace! 🤓 

Author’s note: This is not a sponsored post; it expresses my own opinions.

About Me

I'm Awaye Rotimi A., your AI Educator and Consultant. I envision a world where cutting-edge technology not only drives efficiency but also scales productivity for individuals and organisations. My passion lies in democratising AI solutions, and I firmly believe in empowering and educating the African community. Contact me directly, and let’s discuss what AI can do for you and your organisation.

Subscribe to cut through the noise and get the relevant updates and useful tools in AI.
