Why search the Internet when you can ask an AI? Since the launch of ChatGPT, many people have replaced Google with chatbots, starting with OpenAI's, for all sorts of requests, from recipe ideas to history homework to planning their vacations.
It is a practice that can be dangerous, however, as the founder and director of Evolution Treks Peru, a travel agency in Peru, has pointed out. While preparing a trek through the Andes, he overheard two tourists discussing their plan to hike alone in the mountains to the "Sacred Canyon of Humantay."
The problem: this canyon does not exist, something the two tourists did not know, having relied on ChatGPT to plan their trip. "They showed me the screenshot, written with confidence and full of striking adjectives, but it was false," Miguel Angel Gongora Meza told the BBC.
Dangerous trips
OpenAI's chatbot had combined two unrelated places into one description, according to the founder of Evolution Treks Peru. It was an error that could have cost the two tourists their lives.
The tourists had also paid nearly 160 dollars to "reach a rural road near Mollepata without a guide or a destination," said Miguel Angel Gongora Meza.
They are far from the first to have unknowingly put their lives at risk because of ChatGPT. Dana Yao, a creator who runs a blog about travel in Japan, and her husband had a similar experience after using the chatbot to plan a romantic hike to the summit of Mount Misen in Japan earlier this year. Following the AI's advice, they set off at 3 pm so as to reach the summit in time for sunset.
"That's when the problem appeared, when we were ready to descend [the mountain via] the cable car station. ChatGPT said the last cable car went down at 5:30 pm, but in reality the cable car had already closed. So we were stuck at the top of the mountain," said Dana Yao.
Hallucinations
Chatbots like ChatGPT make mistakes because they tend to hallucinate, that is, to invent information with confidence. That is why it is important to verify the accuracy of their answers, a habit not everyone has yet adopted.
In 2024, for instance, an AI guide told users that there was an Eiffel Tower in Beijing, and suggested to a British traveler a marathon route across northern Italy that was completely impracticable.
The problem is not limited to chatbots' text responses; it also extends to the videos they can generate. In July, a couple set out to ride a picturesque cable car in Malaysia. Once there, they realized it did not exist, contrary to what they had seen in an AI-generated video.
Source: BFM TV
