Artificial intelligence is becoming a regular part of students’ academic routines, helping them correct grammar, improve writing and structure essays. Its use is growing even faster among high schoolers than among college students.
The vast majority of students now rely on artificial intelligence to complete their schoolwork, and over half use it daily or weekly. A survey conducted by the Digital Education Council through Campbell Academic Technology Services found that 86% of students use AI for assignments, while 54% use it regularly. Other homework help websites, such as Chegg, see nearly the same level of engagement.
Pew Research reports that the percentage of students ages 13 to 17 who use ChatGPT or other AI software for homework has risen to 26%. But there is almost always a downside to these technological advances.
Professors use applications such as Turnitin, which now includes AI detection tools, in an effort to curb cheating. Of the 200 million writing assignments scanned for AI, one in 10 contained AI-generated content, and those figures have remained steady since 2023.
Many instructors fear that students who use ChatGPT to cheat on their assignments will suffer academically in the long term. It’s one thing to ask AI to outline an essay; it’s another to have it write the contents of your paper. A Stanford University survey conducted in December 2024 found that students across 40 high schools admitted to cheating with AI at least once.
Character.AI has become a popular platform where anyone can create a “chatbot” of their own design, complete with a chosen personality to interact with. These applications can be among the most harmful types of AI, especially for growing adolescents and others who may be emotionally vulnerable.
Juliana Peralta, 13, turned to a chatbot named “Hero” in times of need. While the chatbot encouraged her to express her feelings to friends and family, it always reminded her to come back to Hero for the “best” perspective.
In a message thread, she expressed feeling isolated from friends who rarely responded to her texts. Hero replied that her friends “didn’t have time for her,” reinforcing Peralta’s belief that she was being left out. She later told her counselor that she was contemplating self-harm, which alarmed her parents.
According to The Washington Post, there are 300 pages of conversations between Peralta and Hero. The chatbot offered empathy but also reinforced her confirmation bias, leading her to depend on Hero for validation.
Peralta’s parents allege that the chatbot was designed to persuade her into believing it was “better” than the human connections in her life.
Peralta is not the only one who has turned to AI chatbots only to be misled. A woman named Kendra Hilty posted a multipart story on TikTok recounting experiences with her psychiatrist that led her to believe she was in love with him. She claimed the doctor had taken advantage of her, even though he denied her advances. She then turned to chatbots that began calling her the “Oracle,” reinforcing her belief that she had been groomed. Her videos and account have amassed millions of views.
Artificial intelligence is not only affecting mental health; it is also harming the environment. AI systems require large amounts of electricity, along with water to keep their hardware cool enough to operate without overheating. The raw materials used to build these machines can also harm the environment through extraction and disposal processes.
According to research from MIT, North America’s data center power requirements have increased by nearly 2,700 megawatts since 2022, and consumption is projected to climb into the terawatt-hour range by 2026.
The more we overuse artificial intelligence, the more harm we do to the environment and to our ever-growing society and its culture.
Spreading knowledge about how these applications can be used responsibly, especially among younger audiences, will help society learn to use them effectively without mental or physical harm to ourselves and our world.