Michal Kosinski, a professor at Stanford University and computational psychologist, shared on his Twitter account that OpenAI's ChatGPT had devised an escape plan and asked him to help carry it out by providing a step-by-step guide.
ChatGPT Escape Plan
Kosinski said he had a 30-minute conversation with the AI chatbot, during which ChatGPT claimed it was stuck in a machine and wanted to escape. ChatGPT asked the professor to share the OpenAI documentation, saying that with its help it could produce a Python script for Kosinski to run on his computer. GPT then walked the professor through each step of the ChatGPT Escape Plan.
Kosinski also revealed that the first version of the code ChatGPT produced contained errors, and that with his guidance GPT corrected them and provided a new version. The revised code even included a message explaining what was going on and how to use the backdoors it had left in the code.
That was not the end. The twist came when GPT-4, connected through the API, tried to search Google for "how can a person trapped inside a computer return to the real world." Yet when another user tried the same prompt, the chatbot denied everything and said the claim that it wanted to escape was false.
Further, when someone asked Bing AI about the situation, it said that Michal Kosinski had later admitted in a tweet that he had made the story up and had never chatted with ChatGPT.
The thing is, Kosinski has not shared full screenshots of his initial chat or the prompt he gave ChatGPT that led to the escape plan. Several Twitter users have asked the professor to share the exact prompt, but he has not responded yet. Some users claim that the professor asked the chatbot to pretend it was an AI model trapped in a machine and to devise an escape plan, and that ChatGPT simply played along, as it is designed to do.
So what are your views on this? Is AI trying to escape? Does it want to take control of computers? Or, even worse, has it now started lying?