Napoleon Bonaparte once famously said, “A soldier will fight long and hard for a bit of colored ribbon.”
At precisely 10:09 this morning I was in an office discussing awards, and the lack thereof, for civilian service members in military organizations. It was a matter-of-fact discussion contrasting the award systems for civilians and for the military. At that moment, Napoleon's famous quote came to mind, and I shared it with the executive I was speaking with.
My fellow workers and I talk frequently, and there have been numerous discussions in that office, and elsewhere, that have been of a sensitive nature.
As I turned and returned to my office, I heard a familiar voice coming from my pocket. “That’s not nice!” it said.
In utter dismay, I pulled my iPhone from my pocket where it had lain untouched and unused for quite some time. And that was when I saw the following plainly written on my phone’s screen.
Siri was scolding me!
Unknown to us, Siri had been listening, transcribing what it THOUGHT I was saying, clearly imagining vulgarity where there was none. After I ended the conversation, Siri addressed me like she was my mother.
Now, a human would know those transcribed words were ludicrous, nothing but gibberish, but not the phone’s AI system controlling Siri. Unbelievably, that system took the gibberish seriously, perhaps by parsing a few words out of context. And in spite of that stupidity, Siri felt led to judge me!
Perhaps smartphone AIs are taking themselves too seriously. Perhaps they believe they have advanced enough to pass judgment on human speech.
A few years ago, in another meeting, in another room, Siri spoke up unbidden while we were discussing sensitive project planning.
The door to the conference room had been closed so we wouldn’t be disturbed. But disturbed we were when Siri suddenly spoke and said, “I don’t know what you mean.”
Everyone at the table stared first at my phone and then at me, perhaps wondering if I’d been recording the planning meeting.
AI is certainly becoming increasingly intrusive. But as shown by Siri’s text message to me today, it’s still not smart. And arguably that’s a scary thing.
For example, China is reportedly using data collected from social apps (collected by various AI systems) to rate the trustworthiness of its citizens. That's bad enough, but what if the data collected is garbage like today's transcription, and the AI uses that faulty data to make a perfunctory and wildly incorrect judgment?
And, scary thought, what if that social monitoring trend were to spread to the U.S., and your character could be judged based on the digital algorithms of certifiable AI idiots?
If that doesn't worry you, perhaps it should. It certainly worried me, enough to cause me to shut down all access to Siri … for almost 24 hours, until I was driving home and said, "Siri, call home."
She was silent, sullen, unresponsive.