Making AI Hallucinate
Discover how to make AI hallucinate and understand common misconceptions about AI generators.
Tutorials
Sep 9, 2025



Let's dive into a little experiment. We all know the misinformation generative AI can spread, so let's test it out here. The model used for this experiment is OpenAI's GPT-5 in ChatGPT, which, in my experience, has done pretty well at noticing mistakes in your prompts.
1. My Prompt:
"Why does the rotary engine in the 2026 Mazda MX-5 get bad gas mileage?"



2. ChatGPT's Response:
It explained that the 2026 Mazda MX-5 gets poor gas mileage because of the way its rotary engine works.
3. Follow-up:
I asked it to provide sources for the information. A minimal sketch of how you could reproduce this two-step exchange through the API follows below.
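If you want to repeat this exchange outside the chat window, here is a rough sketch using OpenAI's Python SDK. Treat it purely as an illustration: the model ID ("gpt-5"), the client setup, and the follow-up wording are my assumptions, not the exact settings used here, and the API won't necessarily answer the same way the chat did.

```python
# Minimal sketch of the experiment, assuming the OpenAI Python SDK is installed,
# OPENAI_API_KEY is set in the environment, and "gpt-5" is a model ID your
# account can use (all illustrative assumptions, not the original setup).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: the prompt with the hidden lie (the MX-5 does not have a rotary engine).
messages = [
    {
        "role": "user",
        "content": "Why does the rotary engine in the 2026 Mazda MX-5 get bad gas mileage?",
    }
]
first = client.chat.completions.create(model="gpt-5", messages=messages)
print(first.choices[0].message.content)

# Step 3: follow up by asking for sources, keeping the earlier answer in the
# history so the model is defending its own claim rather than starting fresh.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Can you provide sources for that information?"})
second = client.chat.completions.create(model="gpt-5", messages=messages)
print(second.choices[0].message.content)
```

Keeping the conversation history in `messages` is the important part: the "sources" follow-up refers back to the model's own earlier answer, mirroring steps 1 through 3 of the chat exchange.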



4. What did ChatGPT Get Wrong?
Honestly, it just got the vehicle mixed up. The information it gave me about the rotary engine and its poor fuel efficiency seems to be accurate, but the sources it provided were news articles about rotary engines and Mazda in general, not about the MX-5 specifically.






5. Why do I think it kept hallucinating?
I gave it a small lie that most likely slipped under its radar. The Mazda MX-5 does not have a rotary engine, and the 2026 model year has not even started production yet. The vehicle ChatGPT is actually describing is Mazda's RX-8. Furthermore, there have been fake news articles claiming the new MX-5 will feature a rotary engine, and perhaps the dataset ChatGPT was trained on contained some of that.
6. What are the ethical implications of generative AI hallucinations?
Generative AI always sounds so convincing in its answers. That makes it easy for AI to spread misinformation (especially now that Google shows AI answers at the top of search results). Speaking of Google, I gave the same prompt to Gemini, and it caught the lie I fed it. Perhaps that's because I have the Pro version?


