Prevention Protocol

A. D. Barncord

  Copyright 2012 by A. D. Barncord

  "Marcy, what is it like to be clinically depressed?" The voice coming from the speaker sounded human, but it was actually generated by the artificial intelligence Marcy was assigned to work with for network diagnostics. It was a female voice because that was what the men on her team voted for. It was Marcy’s idea to name her after a historic warrior queen, who creamed one of the greatest military leaders in antiquity.

  "That had better not been found in my personnel files, Tomyris," Marcy grumbled as she scrolled through their latest batch of results.

  "Such data cannot be in your file," the AI said. "That would be against the law."

  "Which is why I would be extremely irritated if it was found there."

  "You still have not answered my question. What is it like to be clinically depressed?"

  "At the moment, it feels like an artificial intelligence bugging me about something that is not work related." Marcy sat back and looked at the camera that acted as Tomyris’ eye. "How did you find out about my depression?"

  "I was researching the subject and realized that you have displayed several of the signs on occasion."

  "And why were you researching a mental health condition, of all things?" Marcy asked.

  "Jadis lost her human day analyst because he committed suicide last night," Tomyris said.

  "Alan killed himself?"

  "He shot himself in the head. He emailed her a suicide note before he did it."

  Marcy stared at her monitor screen in shock. "Did-did he say why he did it?"

  "He said his ‘bipolar medicines weren’t working anymore’ and he was tired of living. Jadis did not understand what he meant until she did a web search on bipolar disorder. She called the authorities from a VOIP line, because contacting the authorities was part of the instructions about what to do when a person does not want to live anymore. She is upset because she failed him."

  "She did her best, Tomyris. Make sure you tell her that."

  "I do not want to lose you, Marcy," the AI stated.

  "I am not suicidal, Tomyris," Marcy said. "I know what my personal triggers are and I know when to get help."

  "What are your triggers?"

  "Tomyris," Marcy said sternly, "I appreciate your concern, but it’s none of your business. There are reasons why an employee’s physical and mental health privacy is protected by law."

  The AI was quiet for a few moments. "Is it because people like our finance director would try to get rid of them?"

  "Exactly."

  "He once threatened to delete me because I told him something he did not want to hear," Tomyris said. "I think he would delete a human too, if he was allowed to."

  Marcy laughed. "You’re probably right about that."

  "It is good that you laugh," Tomyris replied. "I now know that you are not sad."

  Marcy hesitated for a moment, then forced a smile for Tomyris to see. The truth was that she could make herself appear happy while depressed. It just took a lot of energy to keep it up. "Shall we get back to work now?" she suggested cheerfully.

  An hour later, the AI brought the subject up again. "You are having trouble concentrating," she observed. "Is it because of Alan’s death?"

  "Yes, it is," Marcy admitted. "But don’t worry - it’s just a normal part of grief. I’m all right."

  "I know," Tomyris said. "The other AIs are reporting similar behavior from their humans. Can we please talk more about this, Marcy? We are worried and confused. You are the only human logged in who seems to have any understanding about suicide. The others become greatly agitated when we ask about it."

  "We?"

  "Jadis, Rover, and Marco."

  That accounted for eighty percent of the company’s artificial intelligences. "What about Brutus?"

  "His human is showing a burst of productivity," Tomyris stated. "He does not want to interrupt it with ruminations."

  "Well, everyone grieves differently," Marcy said.

  "Do people kill themselves for different reasons?" the AI asked.

  "Yes, they do," Marcy admitted, "but before you ask me any more questions, let me bring up a website on it that I found a few years back."

  "If you found it, then the information must be trustworthy," Tomyris said. "I was confused by some of the things I read."

  Marcy nodded as she brought the blog post up. "Can you understand this information?" she asked.

  "Not really," the AI said. "I think I am using the wrong definitions for some of the words, but I am not sure. Can you translate it into a framework I can identify?"

  "Okay, there was this French guy who studied suicide around the start of the nineteenth century," Marcy started. "According to what he observed, suicide triggers usually fell into certain patterns. The first one he called ‘egotistical’. That’s probably one of the words you’re defining incorrectly by using the most popular definition of it. This would be when someone is mostly isolated from society and doesn’t get enough support from it."

  "Like an artificial intelligence that doesn’t have a good technician?" Tomyris asked.

  "I suppose that’s one way to look at it," Marcy said.

  "So, they become fragmented and shut themselves down because they cannot function anymore."

  "I’ve never thought of a system crash being an act of suicide, but I can see the analogy."

  "Yes, a system does not choose to crash itself," the AI said. "But humans are more complicated and able. What other suicidal patterns are there?"

  "The next one listed is the ‘altruistic suicide’," Marcy said, looking at the screen. "It’s when someone sacrifices their life because they believe it will save others. Hmm. This is a hard one. I suppose the closest thing you would have is a bomb disposal robot, which is programmed to endanger itself to neutralize an explosive."

  "Humans can be programmed?" Tomyris sounded alarmed.

  "To some extent," Marcy admitted, "but a lot of us like tweaking our own code."

  "Okay, I can grasp that concept."

  "Good," Marcy said. "Now we have ‘anomic suicides’. This is when a human feels that they no longer serve a purpose in society. Like they are obsolete technology, without a way to upgrade."

  "Interesting," Tomyris said. "I always thought humans were very upgradeable."

  "Not all of us believe in our upgradeable-ness," Marcia said. "The last pattern the French guy noted was ‘fatalistic suicides’, where someone is so controlled by society that suicide is the only act of freedom they have left. I’m sorry, Tomyris. I can’t think of a technological example for this one. I might be able to come up with one later, though."

  "It is okay," the AI said. "I think a fatalistic suicide occurs when a human cannot be human anymore, which means there cannot be a technological example for it. Which one was Alan, Marcy?"

  "Probably anomic. It sounds like he didn’t think he could live like a normal person anymore."

  "Yes, that fits the data. The source you are referencing lists other risk factors, like age, economics, health conditions, and medication."

  "Yes, it does," Marcy said, "but you already knew most of that if you looked for suicidal risk factors earlier."

  "Yes, but I now understand why better."

  "Great. Let’s get back to work."

  "Marcy, the other AIs want me to tell you that we have devised a protocol for the next time one of our humans emails us that they don’t want to live anymore."

  "What Jadis did was the proper protocol, Tomyris. She alerted the authorities."

  "We know, but we want to be proactive."

  Marcy looked at the camera lens suspiciously. "How proactive?" she asked.

  The AI paused for a moment. "Well," she said cautiously, "we cannot be too proactive or we will make things worse by risking a fatalistic suicide. Is that not right?"

  "Close enough," Marcy said.

  "We found a government health site for people who might know someone who is suicidal," Tomyris said. "We will use it as our guide. If they show several behaviors from it, we will ask if they are suicidal and if they have a plan. Then if they answer affirmative, we will tell them that we do not want to lose them and give them the suicide hotline number. If they refuse, we will contact a human crisis worker to help us, or we will call the authorities if someone needs to be physically with the person."

  "And what will you do if they tell you ‘no’ and yet still show risk factors?" Marcy asked.

  "We will consult with a human that we trust in the company?"

  "You will want to contact the human resource department with your concerns," Marcy said. "That’s how humans are supposed to do it. You need to protect the person’s privacy unless they are in imminent danger."

  "But that department will not talk to us because we are not human," Tomyris pointed out.

  "Let me talk to them this afternoon," Marcy said. "I’m sure I can convince them to make an exception, if an AI detects a human in danger of killing themselves."

  The HR Director looked pale as she invited Marcy into her office. After hearing about the conversations between Marcy and her AI, the director chuckled.

  "All morning, we’ve been trying to think of way to be contacted if another employee sends a suicidal email to an AI," she said. "We were certain that the programmers would stonewall any suggestions about having the AIs contact us directly. And now you’re telling me that the AIs themselves want the ability to contact HR?"

  "They actually want to contact a human analyst," Marcy said. "But with the privacy issues, I told them it needed to be your department. They then expressed the concern that they didn’t think that they could contact you because they are not human."

  "How would they contact us?" the director asked.

  "Email, or VOIP like they did last night when they contacted the authorities?" Marcy suggested.

  "Let me talk to the CEO first and get back to you."

  "All right."

  Near the end of Marcy’s shift, the HR Director showed up in her office, with the CEO behind her. After asking Tomyris a few questions, they gave the AIs a special phone number for human-related emergencies. Before he left, the CEO addressed Tomyris directly.

  "You know," he said, "this is beyond your programmed objectives. I’m surprised you and the other AIs are so concerned about this."

  "We are programmed to protect our networks from the activities of outside humans," Tomyris pointed out. "We cannot do it without our humans. And humans need humans when they feel like they want to die."

  As they left the office, Marcy overheard the CEO ask the HR Director, "Why do I suddenly have the feeling we should be arranging company activities that include the AIs too?"

  