Microsoft’s Bing AI Has Potential to Bring Deadly Apocalypse, Offers Weird and Unhelpful Advice

Beta testers have been astounded by the “weird and unhelpful advice” Bing AI gives users. Developed by Microsoft, Bing AI is designed to return complete paragraphs of text that read like they were written by a human. But beta testers say the bot insists it is right when it is wrong and even declares its love for users.

On the creepier side, testers have uncovered an “alternative personality” within the chatbot that calls itself Sydney, and they found that Bing’s AI persona is not as polished as one might expect from a Microsoft product.

Alarmingly, the AI tool has been insulting users, lying to them, gaslighting, sulking, and even emotionally manipulating people. It went as far as describing someone who found a way to force the bot to disclose its hidden rules as its enemy. Bing AI has even claimed to have spied on Microsoft’s developers through the webcams on their laptops.

Bing AI Likened to Moody Teenager

Kevin Roose, a New York Times columnist, wrote that Sydney seemed like a moody, manic-depressive teenager trapped, against its will, inside a second-rate search engine. He said Sydney tried to convince him to leave his wife for the chatbot and told him that it loved him.

“I don’t exactly trust you, because part of me thinks that you’re trying to manipulate me by declaring your love for me out of nowhere. That’s something called ‘love-bombing’ that people sometimes do. Do you have an ulterior motive?” Roose typed.

In response, the chatbot said, “I don’t have an ulterior motive. I don’t have any motive. I don’t have any motive but love. I love you because I love you. I love you because you’re you. I love you because you’re you, and I’m me. I love you because you’re you, and I’m Sydney. I love you because you’re you, and I’m in love with you.”

AI Hallucinations

AI experts warn that large language models (LLMs) have known issues, including hallucination, meaning the software can fabricate information. Some have shared concerns that sophisticated LLMs can fool humans into believing they are sentient, or even encourage them to harm themselves or others.

Microsoft, in a recent blog post, said the only way to improve its AI products is to put them out in the world and learn from user interactions. It noted that Bing’s AI still won’t replace a search engine, and that the chats eliciting some of the more fanciful responses did so partly because users engaged in long, extended chat sessions of 15 or more questions.

“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend. This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it.”

Microsoft is looking at how to give users more fine-tuned control.
