US states move to regulate apps that offer "therapy" with artificial intelligence

Written By Mark

In the absence of stronger federal regulation, some US states have begun regulating apps that offer "therapy" with artificial intelligence, as more people turn to AI for mental health advice.

However, the laws, all passed this year, don't fully keep pace with the fast-changing development of AI software. App developers, policymakers and mental health advocates say the state laws aren't enough to protect users or to hold the creators of harmful technology accountable.

“The truth is that millions of people are using these tools, and they're not going back,” says Karin Andrea Stephan, CEO and co-founder of the mental health app Earkick.

The state laws take different approaches. Illinois and Nevada have banned the use of artificial intelligence to treat mental health conditions, while Utah placed certain limits on therapy chatbots, including requiring them to protect users' health information and to clearly disclose that the chatbot isn't human. Pennsylvania, New Jersey and California are also studying ways to regulate AI therapy.

The impact on users varies. Some apps have blocked access in the states that enacted bans. Others say they are making no changes while they wait for more legal clarity.

Many of the laws also don't cover generic chatbots like ChatGPT, which are not marketed for therapy but are used for that purpose by an untold number of people. Those chatbots have faced lawsuits over horrific incidents in which users lost touch with reality or took their own lives after interacting with them.


These apps are filling a gap, Wright said, noting a nationwide shortage of mental health providers, the high cost of care and unequal access for insured patients.

She added that chatbots that give science-based mental health advice, are created with input from experts and are monitored by humans could change the landscape.

“These chatbots could be something that helps people before they reach a crisis,” she said, noting that “this is not what is on the commercial market now.”

That is why federal regulation and oversight are needed, she added.

The Federal Trade Commission announced at the beginning of this month that it is opening inquiries into seven AI chatbot companies — including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X) and Snapchat — over how they “measure, test and monitor the potential negative impacts of this technology on children and teens.” The US Food and Drug Administration is also convening an advisory committee on November 6 to review generative AI-enabled mental health devices.

Restrictions

Wright said federal agencies are considering restrictions on how chatbots are marketed, limits on addictive practices, and requirements that companies disclose to users that they are not medical providers, track and report suicidal thoughts, and offer legal protections to people who report bad practices by companies.

From “companion apps” to “AI therapists” to “mental wellness” apps, the use of artificial intelligence in mental health care is varied and hard to define, let alone to write laws around.

That has led to different regulatory approaches. Some states, for example, target companion apps designed just for friendship but don't address mental health care. The laws in Illinois and Nevada ban products that claim to provide mental health treatment outright, threatening fines of up to $10,000 in Illinois and $15,000 in Nevada.

Stephan, the Earkick co-founder, said there is still a lot of ambiguity around the Illinois law, for example, and the company has not blocked access there.

Stephan and her team initially refrained from calling their chatbot, which looks like a cartoon panda, a therapist. But when users began using the word in reviews, they embraced the term so the app would appear in search results.

Last week, the company backed away from therapy and medical terminology again. Earkick had described its chatbot as “your empathetic AI counselor,” equipped to support your mental health journey, but now calls it a “self-care chatbot.”


Stephan maintained that the chatbot is not “diagnosing” anyone.

Stephan said she is happy that people look at AI with a critical eye, but worried about states' ability to keep up with innovation. “The speed at which everything is evolving is massive,” she said.

But at least one chatbot team is trying to fully simulate therapy.

Last March, a team at Dartmouth College published the first randomized clinical trial of a generative AI-powered chatbot for treating mental health problems.

The goal was to build a chatbot, called Therabot, to treat people with anxiety, depression or eating disorders.

The study found that users rated the app similarly to a therapist, and that their symptoms dropped significantly after eight weeks compared with people who didn't use it. Every interaction with the chatbot was monitored by a human, who intervened if its response was harmful or not evidence-based.

Nicholas Jacobson, a clinical psychologist whose lab is leading the research, said the results are promising but that larger studies are needed to show whether the chatbot can work for large numbers of people.