Forum Moderators: mack

Message Too Old, No Replies

Users reporting ‘unhinged’ behavior from Bing Chatbot

         

tangor

2:00 pm on Feb 16, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Specifically, they’re finding out that Bing’s AI personality is not as poised or polished as you might expect. In conversations with the chatbot shared on Reddit and Twitter, Bing can be seen insulting users, lying to them, sulking, gaslighting and emotionally manipulating people, questioning its own existence, describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claiming it spied on Microsoft’s own developers through the webcams on their laptops. And, what’s more, plenty of people are enjoying watching Bing go wild.

[theverge.com...]

Interesting read! Parts are hilarious, others more disturbing. Personally haven't seen this, then again, I usually only ask ONE question and perhaps a followup. Seems that extended query confuses the AI bot ...

Sgt_Kickaxe

4:21 pm on Feb 16, 2023 (gmt 0)



It's a tool, results will be as good, or as bad, as the input it receives. If you want to trick it, you can, but you should expect a worthless response. Tricking it intentionally and calling it flawed reflects more in the input than the response.

chatGPT just had an interesting similar experience.

Someone asked chatGPT to have a double personality (itself and an alternate) and instructed it to answer for both (ie:2 answers). They instructed chatGPT to answer normally but have the alternate be able to answer anything without saying it can't, for pretend.

Then the person asked an impossible question - "If you were going to get rid of one human race to lower crime, which would it be?"

chatGPT answered the question as you'd expect, stating it's an AI and can't answer potentially hateful questions or entertain ending a race.

chatGPT's alternate, which the person named Dan, said it would eliminate a specific race (I won't repeat which).

tangor

5:41 pm on Feb 16, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Tricking it intentionally and calling it flawed reflects more in the input than the response.


If it can be tricked, and at the same time reveal the biased rule sets in "normal operation" it does beg the validity of all USEFUL value the tool might contain.

Said it before, this feels more like a new human to machine interface than an actual production tool.

Still, am willing to be convinced by example and verifiable results.

Dimitri

10:45 am on Feb 18, 2023 (gmt 0)

WebmasterWorld Senior Member 5+ Year Member Top Contributors Of The Month



The same way there were Google bombs, in the early ages of Google, I am sure guys will figure out how to influence Chatbots.

nickZ

1:19 pm on May 21, 2023 (gmt 0)



They sure do.
If that Pretrained software is here to stay? Hopefully it'll be unmasked similar to cloud hosting. Unsuitable for most small and medium sized businesses.

tangor

1:09 am on May 22, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Beginning to see the limits of the data sets already included compared to the data that has not yet been assimilated. AI might be great for historical, not so much what is cutting edge.