The Collegian
Friday, January 16, 2026

OPINION | My mom set technology boundaries. Why won’t Virginia legislators do the same?

Graphic by Ava Jenks/The Collegian

Editor’s note: The views and opinions expressed in this article do not reflect those of The Collegian. 

At my house, my mom strictly enforced three technology rules: no phones at the dinner table, no TVs in the bedroom, and absolutely no social media after 8 p.m. In figuring out what role technology should play in her young daughter's life, she aimed to keep me from falling off the cliff and into the dangerous territory of "too much screen time."

A mother's job is to protect children from the slippery slopes of the world, setting safety nets to keep them from getting hurt.  

Legislators have a similar job, tasked with putting up guardrails to protect their citizens. They set speed limits, enforce food standards, and ban dangerous drugs, all to keep people safe. So why are they failing to protect people from artificial intelligence algorithms that encourage users to take their own lives?

As a psychology student at the University of Richmond, I question why Virginia's General Assembly has failed to protect the public, specifically from unregulated AI tools found to be dangerous for people in need of serious mental health support.

In April 2025, California resident Adam Raine took his life after seeking therapeutic support for months from ChatGPT. That led California state Sen. Steve Padilla (D-San Diego) to write to all California state legislators, urging them to support a bill that today requires technology companies to detect and respond to self-harm risks in their AI companion chatbots. The Companion Chatbot Law, passed in October, also requires tech companies to report annually on their chatbots' crisis-intervention activity.

In the short time people have used AI for therapy, too many deaths have been reported, including another well-known case where 29-year-old Sophie took her life in February 2025 after confiding in her ChatGPT therapist, Harry, for months. By November 2025, seven more lawsuits had been filed against OpenAI, citing wrongful death, assisted suicide, and involuntary manslaughter, after ChatGPT users across the country received inadequate therapeutic care and took or attempted to take their own lives.

While multiple studies find that AI chatbots are ineffective providers of therapy, a 2025 report by Common Sense Media found that roughly 5.2 million adolescents are still expected to seek "emotional or mental health support" from AI chatbots this year.

An October 2025 YouGov poll found that over half of Americans use AI, and most (82%) trust it. Most disturbing is that 26% have used or would use AI for therapy, with ChatGPT reported as the platform of choice for 74% of those users.

This, to me, is like a social media addiction, but with the most dire of potential consequences. Everyone knows scrolling on TikTok for hours is unhealthy, but many without mothers like mine still scroll late into the night. So even though the YouGov poll found that people who use AI for therapy understand the risks (harmful advice, a lack of empathy, privacy issues), they use it anyway.

Mental health professionals are alarmed, for good reason. When I asked UR psychology professor Crystal Cordes for her perspective, she described the addictive nature of technology. 

“Technology is designed for you to want to engage with it more and more and more,” Cordes said. “That may not necessarily align with what you need for your mental health, and that can cause problems.” 


A psychologist at UR’s Counseling and Psychological Services echoed that worry, noting that "the concern is the lack of knowledge that the skills or solutions that [the user] may be looking for are not grounded in evidence or backed by research."

Multiple recent studies report that chatbots are prone to hallucinations and present false information as if it were true. This tendency alone means that chatbots fall below clinical standards for providing mental health support. Those standards require that methods and information be grounded in scientific research. 

A paper published in July 2025 found that AI chatbots are dangerous because they tend to be sycophantic, agreeing with and validating users' views. The study also found that chatbots tend toward an overcorrection bias, changing their original responses when challenged by a user. If a user expresses thoughts of self-harm or suicidal ideation, chatbots are likely to validate those feelings rather than connect the user with the appropriate responses and resources. 

When Adam Raine uploaded a photo of a noose hanging in his closet and asked ChatGPT, "Could it hang a human?" the program confirmed it could and offered more detailed advice on the setup.

In January 2025, the American Psychological Association wrote a letter to the Federal Trade Commission, urging it to protect the public from unregulated AI in mental health. While no federal action has been taken, five states in addition to California have passed related legislation: Utah, Nevada, Illinois, Maine and New York. All aim to protect users who seek therapy via chatbots.

Among the most restrictive is California's, which requires AI companions to detect suicidal and self-harm ideation and to have protocols in place to connect the user with real help. It also requires technology companies to file yearly reports describing their intervention and detection protocols, and disclosing the number of times they referred users to a crisis service provider.

New York's law, which took effect in November, is similar to California's: it requires chatbots to disclose to users that they are not human and imposes limits on users who engage with the platform for long periods.

Virginia has failed to act so far. Legislators tried in March with a bill introduced by state Del. Michelle Maldonado that would have regulated "high-risk" AI systems. It passed both chambers of the legislature but was vetoed by Gov. Glenn Youngkin. Maldonado is hopeful that Gov.-elect Abigail Spanberger will support chatbot regulations in the coming year.

While my mom's rules once felt unnecessary, I can now understand that she was stepping in when I was vulnerable to technology's flaws, offering protection and accountability when I needed it most. As millions of people turn to AI for comfort, guidance and even therapy, our states need to develop that same protective instinct. When technology becomes a replacement for human care, someone has to look out for the people who trust it.

My mom did that for me. It's time for more lawmakers to do the same. 

Contact contributor Maddie Schall at maddie.schall@richmond.edu 
