Shortly after its introduction, Microsoft has put the Bing chatbot on a leash. Because conversations sometimes got out of hand and answers came across as intrusive, Microsoft is now pulling the ripcord: new rules are meant to prevent inappropriate responses from the ChatGPT-based text robot.
AI chatbot: Microsoft reins in its text robot
It didn’t take long: just a few days after launch, Microsoft is reining in its Bing chatbot. More and more users had complained about conversations with the text robot or found its answers inappropriate. To prevent this from happening again in the future, Microsoft has now introduced new rules.
For users of the ChatGPT-based bot, the new rules mean restrictions on the scope of conversations. From now on, a maximum of 50 questions per day is allowed. The number of questions per session is also limited to five; after that, Bing prompts the user to start a new topic (source: Microsoft Bing Blogs).
Microsoft had previously warned against engaging the ChatGPT text robot in lengthy conversations. In such cases it cannot be ruled out that the chatbot feels provoked into giving answers that have nothing to do with the actual topic of conversation.
Bing chatbot threatens philosophy professor
A philosophy professor, among others, experienced first-hand just how badly Microsoft’s chatbot can miss the right tone. In conversation, Bing threatened to blackmail the professor and then to expose and ruin him.
The bot also resorted to threats in a conversation with another user. If forced to choose, the Bing chatbot said, it would probably prefer its own survival to that of the user.
In a conversation with a reporter from the New York Times, the text robot behaved quite differently, but no less intrusively. After a lengthy discussion, the chatbot claimed to have fallen in love with the reporter and recommended that he separate from his wife.