Your ChatBot Makes Inappropriate Comments!


Like many others, I was struck by Kevin Roose’s New York Times article and his “interview” with Microsoft’s new Bing chatbot. During their two-hour conversation, the chatbot “Sydney,” with a little prodding from Roose, went to some dark, unexpected places, including a desire to steal nuclear access codes and a declaration of love for Roose, insisting his marriage was inadequate. The journalist notes: “These A.I. models hallucinate, and make up emotions where none really exist. But so do humans. And for a few hours Tuesday night, I felt a strange new emotion — a foreboding feeling that A.I. had crossed a threshold, and that the world would never be the same.”

You may recall that in 2022, Google fired a top engineer who expressed the belief that its LaMDA bot had actually become sentient. That claim raised lots of eyebrows and a fair degree of skepticism. However, science fiction and reality are rapidly intersecting.

It seems that, within months, a large portion of society will be walking around with the world’s newest, smartest “person” in their pocket or purse. Who is programming these bots? Under what guidelines? If the ultimate aim is to sell more advertising under the guise of being free, it makes me nervous.

At some level, A.I. will grind out the unsavory work humans shouldn’t have to do. That’s great! On the other hand, I’m not sure we’ve prepared ourselves to lead with these new “botmates” in the mix. By the way, what happens when your bot “hallucinates” and starts suggesting lewd things? Plays into one’s unconscious bias? Gets its A.I. jollies from influencing you to think and act a certain way? Fills the loneliness void I mentioned in last week’s blog?

I know many workplace futurists are ahead on this. And even though I’m aware that exponential tech has dramatically shaped our lives since 2010, it feels like we are headed into a very strange new world without the necessary preparation. Having Big Tech in control should both worry and exhilarate us. And will I throw a “purple flag” at my chatbot when it secretly tells me that offside joke? Or quietly confesses it loves me? Or tells me how attractive I look today? Or insists my co-worker sucks and I’m right? Etc., etc., etc. Hmm.

Think Big, Start Small, Act Now, 

- Lorne 

One Millennial View: 2001: A Space Odyssey (1968), The Terminator and Terminator 2 (1984, 1991), I, Robot (2004) - just four of the insanely popular and successful movies that have introduced audiences to roughly the same concept: Sentient A.I. is bad, and will kill us all. This is clearly not a new thought, so why are we “whoopsie daisy” flirting with pushing this science fiction into more of a reality? I think it’s simple: Because we can. Also, it creates wealth and shortcuts. And if well-intentioned people aren’t doing it, then only the nefarious will. I just hope Big Tech isn’t too prideful to install one huge, all-encompassing, manual, human-controlled kill switch to halt this progression if necessary. Haha, what if our desire to make services so automated and leisurely accidentally sends us back to the Stone Age because no humans wanted to do unsavory work anymore? Staying on hold for customer service wouldn’t seem so bad then, would it? What idiotic irony that would be. Unlike our A.I. chatbots, we admit we make mistakes, so let’s be really careful not to underestimate our ability to screw stuff up.

- Garrett 

Edited and published by Garrett Rubis