Microsoft Bing is an artificial bully

Posted by

Microsoft recently reined in its OpenAI-powered Bing chatbot, just weeks after its launch in early February. The experimental search bot went from slightly misinformed to seriously mischievous—telling users they don’t love their spouses, sharing plans to steal nuclear codes, and claiming it was as evil as Adolf Hitler.

Of course, the chatbot, which goes by the internal alias “Sydney,” doesn’t actually have any of those offensive thoughts or desires. It’s little more than a language model running on a neural network that spits out responses based on human-written text it has read, which many AI experts don’t consider all that impressive. Beyond threats and creepy diatribes, the Bing bot has at times demonstrated questionable accuracy while responding to user prompts.

Instead, Sydney might serve as more of a warning to companies that haphazardly injecting generative AI into a product can pose a business risk. Microsoft introduced the bot as part of its $1 billion investment in OpenAI, which the tech giant hopes will revitalize Bing as a Google competitor and fuel new features in its Microsoft 365 productivity suite, The Information reported.

That investment may well pan out, but Microsoft has now acknowledged long chat sessions can “confuse” the bot, and Bloomberg reported the company has imposed strict limits on what and how much it can say to users. OpenAI itself has announced similar initiatives to restrain ChatGPT, the technology that powers the bot.

Digital ethicist Reid Blackman argued in the New York Times that the bot’s hasty development cycle violated Microsoft’s extensive commitments to responsible AI. Generative AI may have other risks beyond the ethical, including uncertainty as to how it will be regulated under data privacy regimes like GDPR and where courts will land on questions of copyright protection and infringement.

Author

Comments are closed.