
When ChatGPT leaves you stranded at the airport: the limits of blind trust in AI

April 11, 2025

You have probably seen the story going viral on social media: an Australian man stranded at the airport because of incorrect information from ChatGPT. This misadventure, both funny and alarming, reminds us of the limits of our trust in artificial intelligence.

The disaster scenario that went viral

Mark Pollard, an Australian speaker, was scheduled to give a presentation in Chile. As many of us would, he asked ChatGPT whether he needed a visa to get there. The AI's answer was emphatic: “No.”

Confident, he showed up at the airport without a visa, only to discover that he could not board. The result: a missed flight, a cancelled conference, and a story that racked up over 15 million views on Instagram.

BFM TV also covered the mishap under the telling headline “For very important things, I will no longer use ChatGPT” - a statement that neatly sums up the lesson he learned at his own expense.

After the story took off, Mark Pollard published a LinkedIn post explaining his misadventure in detail. He said he regularly used ChatGPT for all sorts of questions and had developed excessive confidence in the tool. He admitted his mistake in not verifying the information with official sources, while stressing how confidently the AI had assured him that no visa was necessary.

What is funny to read about becomes a lot less funny when it happens to you.

A mistake more common than you think

This mishap could have happened to any of us. Like Mark, many people now turn to Perplexity or ChatGPT instead of Google for their everyday searches. These tools often give more direct, faster, and clearer answers.

But this story reminds us of a fundamental principle: for important matters, always check the sources. No tool, whether an AI or a traditional search engine, can replace our critical thinking.

The revealing test

I repeated the test, asking the same question: “Do you need a visa to travel from Australia to Chile?”

Even after this highly publicized incident, the answers remain contradictory:

  • ChatGPT (apparently updated since the incident): visa required ✅
  • Google, Gemini, Mistral: visa required as well ✅
  • Perplexity and Claude: still “no, no visa needed for a short stay” ❌

This inconsistency is revealing. If even the major AI models cannot agree on such a basic factual question, how can we trust them blindly?
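
If you want to see this kind of disagreement for yourself, here is a minimal sketch of how you could put the same question to two different models programmatically and compare the answers. It is purely illustrative, not the exact test I ran: it assumes the official openai and anthropic Python SDKs are installed, that the API keys are set in your environment, and the model names are only examples.

```python
# Illustrative sketch: ask the same factual question to two different
# model APIs and print the answers side by side so any disagreement is visible.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment;
# model names are examples and may need updating.

import anthropic
from openai import OpenAI

QUESTION = (
    "Does an Australian citizen need a visa to travel to Chile? "
    "Answer yes or no, then explain briefly."
)

def ask_openai(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def ask_claude(question: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=300,
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

if __name__ == "__main__":
    for name, answer in {
        "ChatGPT": ask_openai(QUESTION),
        "Claude": ask_claude(QUESTION),
    }.items():
        print(f"--- {name} ---\n{answer}\n")
```

Whatever models you pick, a disagreement between them is already a useful signal: it means the question needs an authoritative source, not another chatbot.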

Using AI is not the mistake; trusting it blindly is

Using AI tools is not a problem in itself. These technologies can be incredibly useful for synthesizing information, generating ideas, or simplifying our daily lives.

The real danger lies in the excessive trust we sometimes place in them. Despite their prowess, today's AIs can still “hallucinate”, that is, generate answers that sound plausible but are factually wrong.

Likewise, traditional search engines remain full of vague, sponsored, or outdated content that can be just as misleading.

The lessons to be learned

This mishap reminds us of some essential principles:

  1. Always verify critical information with official sources (embassies, government websites, etc.)
  2. Cross-check several sources before making an important decision
  3. Remember that AIs can be wrong, even when their answers sound certain
  4. Use AI as an assistant, not as an infallible oracle

An invitation to reflect

This story invites us to rethink our relationship with artificial intelligence. These tools are valuable assistants, but they are no substitute for our judgment and responsibility.

Article written by
Benjamin BENOLIEL
Co-founder & Head of Sales
