Although the app could help many people, some data privacy experts are concerned about how machine-learning models will handle the data.
Matt Nock, a professor of psychology at Harvard University who studies self-harm in young people, told NPR that he has used electronic health records and AI to identify suicide risk. Many of the predictions turned out to be false positives. “Is there a cost there?” Nock said. “Does it do harm to tell someone that they’re at risk of suicide when really they’re not?”
Researchers acknowledge that it is crucial to do everything possible to prevent people from harming themselves. They caution, however, that AI-backed tools can do more harm than good, steering people toward punishment instead of help.
Still, suicide is the second leading cause of death for people ages 10 to 24 in the U.S., and finding ways to help remains a top priority for medical professionals.
“Technology is going to help us, we hope, get better at knowing who is at risk and knowing when,” Nock said. “But people want to see humans; they want to talk to humans.”