I think AI can be useful in cases like this, especially when a person is literally about to commit suicide. AI might not be 100% accurate, but if it can prevent someone from taking their life by offering some support, that's a positive thing.
Considering there have already been news stories of AI chatbots telling users to kill themselves and feeding into suicidal ideation, it is absolutely not safe to assume the AI won't cause further harm.
Edit: It’s also not just a problem with suicidal ideation. The founder of Business Insider recently wrote a post on his blog about “using AI to generate an AI news room cast”. He openly admits to making comments to the female AI newscaster he created that would definitely be sexual harassment irl. The damn thing complimented him on his directness, reinforcing this creepy asshat's sex-pest behavior to the point that he saw nothing wrong or embarrassing about posting about this shit publicly.
There are going to be AIs that tell users to kill themselves and AIs that talk them out of it, depending on how well they're trained or set up by the creator and how the user is acting toward the AI. It's not just one factor; all of those factors are linked in how the AI will respond, so I'm not particularly victim blaming.
I do have to ask why you bring up a news anchor fetish when we're talking about the potential of an AI chatbot helping someone with suicidal issues. It really just sounds like you have a bias against AI and aren't taking into account the potential to prevent suicide for a person who has no one they feel comfortable talking to.
However, it is important to know the context of these cases. On platforms like Character AI, creators are able to make AI characters and direct their behavior however they want. Now Gemini and Grok, on the other hand, AIs where no one knows where Google or Musk are guiding them, I'll acknowledge might not be as trustworthy, because on Character AI the user knows up front how the creator intended the AI to behave. If the user wants to modify the AI's behavior or vibe, they can literally just tell the AI character how they want it to behave and it will adapt. So AI certainly does have the potential to help people with their traumas through conversation.