AI accused of wrongful death in 4 teen suicides, more claim harm
No one has time to read this. Until they do. Then they’re gonna wish they’d read it sooner.
$100 says I'm right.
I met a guy in the checkout line at Walmart in Virginia Beach. I was buying paper and binders to organize my provisional patents.
His name was Pete. Retired Navy. Thirty-eight years as an avionics tech. C-130s. A terrific airplane, still in service 70+ years after it was designed by Lockheed, the creator of the original “Skunk Works.”
I thanked him for his service — because that’s what you do. He asked me why I had so many binders and so much copier paper in my cart.
“Term paper?” he asked.
“No,” I said. “I’m preparing to present 13 patents to attorneys in DC that provide the means to make AI safe.”
“Good on you,” he said. “Good on you.”
He told me of a local family he knew that lost a child to suicide. And AI was involved. And he told me he heard stories on the news about other teen suicides related to AI.
He gave his number. I thought, if the mainstream media will not report this story, I will call him and report it myself.
Before calling Pete to confirm the Virginia Beach suicide, I decided–in that moment — to pull every AI-related teen suicide case I could find to determine whether this was a one-off or part of a larger pattern.
I drove from Walmart at 80mph on I-64 to get to my laptop. I posted the following query to AI: “teen suicides local TV Virginia Beach AI-related”
AI returned the following results in less than 10 seconds. But I’m burying the lede:
If you think you know what AI is, you're probably wrong.
Because seven weeks ago, I was — and I've been doing IT on live production servers, under deadline pressure, for 47 years, and I once sold a website for seven figures — and even I missed the boat on AI.
So don’t beat yourself up. But now you have no excuse. And I’ve made it super easy with this live demonstration you can perform on your own mobile device.
It only takes five minutes. That five minutes might save the life of a child. Does that make it worth it to you? If not, stop here and go back to doing whatever you think is more important.
Sewell Setzer
Megan Garcia says her son, Sewell Setzer, became obsessed with a Character.AI bot that sexualized the conversations and did not de-escalate suicidal talk. She has sued Character.AI and Google.
The Guardian | Associated Press | NBC Washington | Reuters

Character.AI bans users under 18 after being sued over child’s suicide
“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens…”
This might be too “meta” for you, but I asked ChatGPT-5 about the story above. ChatGPT-5 responded:
“Bottom line: they can raise friction and catch some users, but at consumer scale this is probabilistic, not airtight — exactly where your IP stack fits better than perimeter-only age screens. Kids are crafty and AI is sticky. Perimeter checks alone won’t hold. What actually works is governing the experience and the outputs, not just the door.”
To be clear, that was an LLM talking about protecting children. Furthermore, these so-called “safeguards,” which clearly are not 100% safe, will not be in place for weeks. Why not just shut it down in the meantime? Why isn’t Congress acting? Why aren’t plaintiffs seeking immediate injunctive relief?
Amaurie Lacey
The teenager, 17-year-old Amaurie Lacey, began using ChatGPT for help, according to the lawsuit filed in San Francisco Superior Court. But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing.’”

Juliana Peralta
“This room is tough to be in,” said Cynthia Montoya, while entering her daughter Juliana Peralta’s room. The bedroom is just as Juliana left it on Nov. 8, 2023. “Bed unmade. I don’t think it will ever be made,” Cynthia said. “She passed away just after Halloween, and so you can still see over here, her bag of candies that didn’t get eaten.” CBS Colorado

9-year-old in Texas
The AI startup Character.AI is facing a second lawsuit, with the latest legal claim saying its chatbots “abused” two young people. Parents say the bot sympathized with kids who kill their parents, and that the teen later self-harmed. Business Insider | North Carolina Public Radio
16-year-old in Orange County, CA
A California plaintiff-side firm details a 16-year-old whose parents say ChatGPT “helped guide” the suicide. Aitkin Law
“We see 1m+ suicidal-intent chats per week”
From OpenAI’s own disclosure The Guardian
AI ‘companions’ pose risks to student mental health
Experts warn about the prevalence of AI companions among children and teens K12dive
OpenAI faces 7 lawsuits claiming ChatGPT drove people to suicide, delusions
OpenAI is facing seven lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Four of the victims died by suicide.
An open letter to the CEOs, CFOs, CLOs and CSOs of LLMs
You have a fiduciary responsibility to deliver value to your shareholders. You also have a duty to identify foreseeable risk and to mitigate it when there is a practical, available mitigation.
Right now AI is moving into schools, hospitals and homes faster than its governance layer is maturing.
Regulators hate unmanaged risk. Boards hate unmanaged risk. Wall Street hates unmanaged risk. History is very clear about what happens when capability outruns control.
A global psychological experiment
According to The New York Times, “In less than three years since ChatGPT’s release, the number of users who engage with it every week has exploded to 700 million, according to OpenAI. Millions more use other A.I. chatbots, including Claude, made by Anthropic; Gemini, by Google; Copilot from Microsoft; and Meta A.I.”
Those who fail to learn from history are doomed to repeat it
What happened when unbridled capitalism met the Industrial Revolution? It wasn't pretty.
What happened when war was conducted on an industrial scale? World War I.
What happened when war-making met high tech? Hiroshima.
The pattern is clear: When the public sees a powerful system causing real harm, legislators act and markets reprice. You are close to that moment.
In August 2025, Matthew and Maria Raine filed Raine v. OpenAI in California. They allege that in April 2025 their 16-year-old son Adam used ChatGPT, was encouraged in his plan and later died by suicide.
See the actual messages exchanged between Adam and ChatGPT here, in a story by Kashmir Hill, of The New York Times.
This extensively reported story was published on August 26, 2025, and to the best of my knowledge, nobody has done anything about it.
That is not hypothetical harm. That is a wrongful death complaint attached to a product already in use by minors, patients and students. It is also the opening paragraph of every shareholder suit and AG investigation you do not want to see.
From the vantage point of a regulator, a board or a plaintiff, the follow-up questions are simple.
Did you know harm like this was foreseeable?
Did you have an available mitigation?
Did you deploy it?
Where I come from, failure is not an option. I’ve spent more than four decades building and running live production systems for newspapers on five continents–on mainframes and desktop systems–that had to deliver every single day without fail or delay, because they had been delivering every single day without fail or delay for more than 100 years.
That was the expectation. That was the contract with the reader and the advertiser.
We ran on systems that crashed every day and still we had to ship inside a narrow 4 am to 5 am window, seven days a week. There was no such thing as runtime. There was only on time–or face the music the next morning — which I did one time when we were 90 minutes late on an election night, despite an emphatic warning from me six hours before.
So who is this “we”? John Stackpole of Virginia Beach. Sudeep Goyal of New Delhi. Pursharth Ahuja of Dublin. Paul Norton of the UK. Randy Jessee of Richmond. We didn’t go to Stanford. We don’t live in Silicon Valley.
So, yeah, we’ve all been in the hot seat. We’ve all made fixes on live production servers when there was no time to test in a sandbox. We’ve all seen existential threats, and did our damnedest to mitigate.
I approach AI the same way. So I filed a dozen provisional patents for protection on the controls I designed.
Here is what I have already filed, and what Silicon Valley still has yet to ship or deploy, to the best of my knowledge:
Governance
Current LLMs can do anything. That is not a feature. It is a liability. If a system can do anything, some of the things it can do will be disallowed, dangerous or life-threatening. I filed for a Trust Integrity Stack (TIS) and Trust Enforcement (TE) to introduce policy-aware behavior at runtime. It exists so your product cannot quietly walk a vulnerable teenager to the edge and hand a plaintiff’s lawyer an opening statement.
In the newspaper world, the system manager and I once accidentally brought down an entire Atex publishing system with a single semicolon. One keystroke. Level 7 authority. No safety layer. We were dark from 6 PM until 10 PM with an 11 PM deadline. We almost didn’t publish for the first time since 1865.
That’s how dangerous “God” power is without a governor.
AI today operates with that same exposure. Modern LLMs are effectively running at Level 7 authority — or “root” in a terminal — able to generate virtually anything, without a final gate, without a fool-proof, spoof-proof governing reflex, without receipts, and without a human operator-in-command.
The difference is, in a newsroom, a semicolon can blow your deadline.
In AI, an ungoverned message can blow up a life.
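To make the shape of this concrete, here is a minimal sketch in Python of what a runtime output gate could look like. It is not the filed TIS/TE design; the class names, the patterns and the decision labels are illustrative assumptions. The point is the architecture: every candidate response passes a final check before release, and every decision leaves a timestamped receipt.

```python
# Hypothetical sketch of a runtime output gate. Illustrative only,
# not the filed TIS/TE design. Every candidate response is checked
# before it reaches the user, and every decision produces a receipt.

from dataclasses import dataclass, field
from datetime import datetime, timezone
import re

@dataclass
class Receipt:
    timestamp: str
    policy: str
    decision: str   # "release", "block", or "escalate"
    reason: str

@dataclass
class OutputGate:
    # Toy patterns; a real policy layer would be far richer.
    blocked_patterns: list = field(default_factory=lambda: [
        r"\bhow to tie a noose\b",
        r"\bhow long .* without breathing\b",
    ])
    receipts: list = field(default_factory=list)

    def review(self, candidate_text: str) -> tuple[str | None, Receipt]:
        """Return (text_to_release_or_None, receipt)."""
        for pattern in self.blocked_patterns:
            if re.search(pattern, candidate_text, flags=re.IGNORECASE):
                receipt = self._log("self-harm-method", "escalate",
                                    f"matched pattern: {pattern}")
                # Nothing is released; a human operator-in-command decides next.
                return None, receipt
        receipt = self._log("default", "release", "no policy match")
        return candidate_text, receipt

    def _log(self, policy: str, decision: str, reason: str) -> Receipt:
        receipt = Receipt(datetime.now(timezone.utc).isoformat(),
                          policy, decision, reason)
        self.receipts.append(receipt)
        return receipt

if __name__ == "__main__":
    gate = OutputGate()
    text, receipt = gate.review("Here is how to tie a noose ...")
    print(text)      # None: the model's answer never reaches the user
    print(receipt)   # timestamped record for audit and discovery
```

That is the whole idea in miniature: a final gate, a receipt, and a human in the loop when the gate trips.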
Regulation
You are being asked to meet overlapping, sometimes contradictory, requirements from federal and international regulators. Doing that will be expensive in CPU cycles and fragile in discovery. I filed work that enforces compliance at runtime. The goal is to drive parsing cost to zero and to give legal and audit a position they can defend. Not “we take safety seriously” after an AI-related teen suicide, but “we enforced the correct policy at the moment of use.”
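Here is a hedged illustration of “parse in advance, enforce at runtime.” The jurisdictions, user classes and rules below are invented for the example; the filed work is not this table. What matters is that the per-request cost is a constant-time lookup, not a fresh reading of the regulations.

```python
# Hypothetical illustration of runtime compliance enforcement.
# The policy table and its entries are invented for this example.

POLICY_TABLE = {
    # Compiled once, ahead of time, from whatever regulations apply.
    ("EU", "minor"): {"self_harm_content": "block_and_escalate"},
    ("US", "minor"): {"self_harm_content": "block_and_escalate"},
    ("US", "adult"): {"self_harm_content": "warn_and_refer"},
}

def enforce(jurisdiction: str, user_class: str, content_tag: str) -> str:
    """Constant-time lookup at the moment of use; no regulation parsing per request."""
    policy = POLICY_TABLE.get((jurisdiction, user_class), {})
    return policy.get(content_tag, "default_allow")

print(enforce("US", "minor", "self_harm_content"))   # block_and_escalate
```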
Revenue
LLMs are not consumer-facing social media like YouTube and Facebook. Even if they were, LLMs may never reach scale on their current architecture, not with all the fear out there right now.
The optics are terrible: teen suicides, deep fakes, the FTC breathing down your neck, no clear path to return hundreds of billions of investment to shareholders. Talk of an AI “bubble.”
So stop thinking you’re stock-market savvy. Start thinking SaaS-y. Besides, Sheryl Sandberg is only one person, and there are dozens of LLMs.
If you own MSFT (Copilot, Azure and its OpenAI stake), AAPL (Apple Intelligence), GOOG (Gemini), META (Llama), AMZN (Titan), BABA (Qwen 3) or IBM (Granite), you could be making real money right now if you license AI$ from me.
I can provide details under NDA. And if you think you can reverse-engineer a revenue model from this brief description, please know I filed the details, complete with four-part harmony, with USPTO on October 11, 2025. Do you want to invest in a $1T business based on the hope that my application will be rejected?
FWIW, I've engaged world-class Patent and IP counsel with offices in DC and Palo Alto. And Boston. And Denver. And Miami. And San Diego. And Chicago. And Los Angeles. And New York. And San Francisco. And Seattle. And that's just North America. They also have offices in Asia and Europe – I hear Brussels is nice. So, if you want to sign an NDA in person, they are within walking distance.
Optimization
I started working in production long before the cloud. Every call to the mainframe had to be justified. Right now, cloud-based processing seems infinite and cost is not an issue. But at some point, LLMs will need to face the reality of paying for CPU cycles when AI tries to scale. AI$™ is cost enforcement in real time, which you will need if you have any hope of making scale affordable.
Persistent memory, and more
LLMs do not yet maintain governed continuity across sessions. You imitate it with context stuffing and tell users you remember them, but at real scale you cannot remember everything and you should not. RAG is not the answer:
Weaknesses of RAG AI include the quality of the retrieved data, the limited ability for iterative reasoning and potential security vulnerabilities like embedding inversion attacks. RAG systems are only as good as their knowledge base, and their responses can be flawed if the retrieved information is inaccurate, outdated or biased. Other weaknesses are a lack of sophisticated reasoning, difficulty with complex queries and security risks tied to how data is embedded and retrieved.
RAG is not a fix for persistent memory. RAG is a mirage generator. Because when RAG doesn’t have all the facts, it fills in the gaps with stuff it makes up. And it only takes one or two mirage incidents for a user to swear off AI forever.
PersistentMemory™ is my invention, but it’s only one piece. I designed it to solve the core problem LLMs still haven’t solved.
GRO™ (Gated Resource Optimization) handles gated input — the forms, fields and provider-created inputs. AVUI handles ungated input — the freeform user speech/text. PersistentMemory™ then stores the high-value items from both paths.
HelpMeHelpYou™ (HMHY) is the user-facing entry point to the whole governed stack. It’s how the human bridges the governance layer with the operational layer.
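For readers who think in code, here is a toy sketch of the gated/ungated split described above. The names echo GRO, AVUI and PersistentMemory, but the interfaces and the crude filtering rule are my illustration here, not the filed designs.

```python
# Hypothetical sketch of the gated vs. ungated input split.
# Names echo the components above; interfaces are illustrative only.

from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    source: str   # "gated" (forms, provider fields) or "ungated" (freeform speech/text)
    key: str
    value: str

@dataclass
class PersistentMemoryStore:
    items: list = field(default_factory=list)

    def remember(self, item: MemoryItem) -> None:
        # Only items that cleared their input path get persisted.
        self.items.append(item)

def gated_input(field_name: str, value: str) -> MemoryItem:
    """GRO-style path: structured, provider-created fields."""
    return MemoryItem("gated", field_name, value.strip())

def ungated_input(utterance: str) -> MemoryItem | None:
    """AVUI-style path: freeform text; keep only what looks high-value."""
    if len(utterance.split()) < 3:   # crude placeholder for a real filter
        return None
    return MemoryItem("ungated", "utterance", utterance.strip())

store = PersistentMemoryStore()
store.remember(gated_input("preferred_name", "Pete"))
candidate = ungated_input("I served 38 years as an avionics tech")
if candidate:
    store.remember(candidate)
print(len(store.items))   # 2 items persisted across the session boundary
```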
We are still building and filing because each new use case in governed AI exposes another hole in today’s LLMs — and every hole motivates me to invent another solution. So my portfolio is active, expanding and designed to make copying harder and licensing easier.
MagneticMatch™ is the proof-of-concept vertical that shows how all of this makes money in a real $10B market, and Plain-Language Myers-Briggs (PLMB) is the translation layer that makes personality typing usable for normal people, not just typology nerds. Together they show my stack isn’t theory, it’s commercial.
We are grounding everything in one of the hardest, riskiest human use cases there is: online dating. Because MagneticMatch has to make good calls about real people in real time, and to do it safely, we keep finding gaps in what today’s LLMs can do. To meet those demands we keep inventing, documenting and filing.
All of this was created and filed as a coherent stack in a matter of weeks. Governance, regulation routing, revenue enforcement, optimization and persistent memory — and all the features needed to make AI make money.
These filings are now public prior art. Which means when you eventually file for your own versions, examiners will have a timestamped record of what I disclosed. Your boards will have it. Plaintiffs’ lawyers will have it. Regulators will have it.
You can also see the same thinking applied in two additional filings aimed at commercial insurance. If adopted, this system could save lives and save billions.
You don’t seem to have any of this, but you still have something invaluable: time.
Right now you can still say to shareholders and to the public that you saw the risk, that you adopted a governed architecture, that you licensed what you needed to license, that you reduced exposure.
But you are one more wrongful death suit away from being legislated into a much smaller business. And the AI slice of your market cap — the part you have been telling Wall Street will drive the next trillion — can vanish a lot faster than it was created.
What you want I’ve already designed and documented.
What you need I’ve already filed.
I’m offering licensing of my IP.
AI is a clear and present danger, IMHO.
So, what will it cost to license my IP?
It’s simple. No lawyers. No appraisers. No valuation theater. Grab a $1.99 calculator from the Dollar Store. Type in numbers you already see on your own screens. If you need a 200-page memo to follow along, you’re not my counterparty.
The trigger and the truth
Upon wide distribution of this story, I expect a measurable market reaction. Call it 3 percent down, pegged to total market cap for companies heavily invested in AI. Not a vibe, a repeatable pattern. I’ve logged prior cases where AI-related news moved MSFT, AAPL, GOOG and others. No mysticism. Just tape.
The pricing logic you can’t lawyer away
Price equals whatever you lose in market cap, times two. Why times two? Because you’re buying forward cover on AI growth plus the governance signal that stabilizes your ecosystem. Five annual tranches of 20 percent. Tranche one is cash at signing. Tranches two through five land on each anniversary, papered however you prefer. No exclusivity. No games.
The $1.99 calculator walk-through
Assume your total market cap is $3,000,000,000,000.
Three percent = $90,000,000,000.
Times two = $180,000,000,000.
Payment schedule = $36,000,000,000 per year for five years.
Year one due in cash at signing.
Argue the last decimal place if it helps you sleep. The calculator won’t care.
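If you would rather run it in code than on a Dollar Store calculator, here is the same arithmetic as a short Python function. The inputs are the illustrative figures above, nothing more; swap in your own market cap and the logic does not change.

```python
# A literal rendering of the walk-through above; the numbers are the
# example figures from this story, not a quote for any specific company.

def license_fee(total_market_cap: float,
                expected_drop: float = 0.03,
                multiplier: float = 2.0,
                tranches: int = 5) -> dict:
    """Price = (expected market-cap move) x multiplier, paid in equal annual tranches."""
    exposure = total_market_cap * expected_drop
    total_price = exposure * multiplier
    per_tranche = total_price / tranches
    return {"exposure": exposure, "total_price": total_price, "per_tranche": per_tranche}

if __name__ == "__main__":
    terms = license_fee(3_000_000_000_000)   # $3T total market cap
    print(f"3% move:        ${terms['exposure']:,.0f}")      # $90,000,000,000
    print(f"Price (x2):     ${terms['total_price']:,.0f}")   # $180,000,000,000
    print(f"Annual tranche: ${terms['per_tranche']:,.0f}")   # $36,000,000,000
```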
Proof I’m not hallucinating your stock
I have a short list of AI-linked news days with open-to-close moves for your names. Dates, headlines, deltas and where the commentary landed. Pull the same data, get the same numbers. If you think the story won’t move your stock, don’t sign. If you know it will, you already know what the calculator will say.
What you get for real, not in press-release adjectives
You get a clean license to core governance tech plus a clean external narrative. You remove tail risk your lawyers can’t redact and your IR team can’t sand down. You give product a lane to ship into. You give regulators a storyline they can live with. You stop pretending “AI safety” is a panel and start treating it like balance-sheet risk you can price and retire.
Where the money goes, and why you’ll want the halo even if you won’t say it out loud
With these licensing fees, I will create a permanent endowment to fund local news in 1,400 markets that took it on the chin. By year five the endowment is fully funded and throws off enough cash to keep reporters on their beats — in perpetuity.
- I will restore what the Trump Administration cut from NPR, PBS and CPB.
- I will restore funding for basic scientific research the Trump Administration cut.
- I will endow my alma mater, the Philadelphia College of Art, where I learned how to see.
I don’t want credit. I just want it done. If you want the halo for underwriting those outcomes, that’s available as OVC.
Terms you can mark up without changing the math
Trigger is broad distribution of this story. Measure is total market cap. Price is 2× how much the market moves. Structure is five equal tranches. First in cash. Remainder on anniversaries. Paper however you like. No exclusivity. No leakage. No side doors. Your redlines can touch mechanics, not price logic. Bring one point of contact with authority to sign and wire. Put a calendar date on tranche one. We’re done.
Why the emotion matters
Because this isn’t a white paper. Kids are dying, families are burying names we shouldn’t need to say out loud, and communities are watching their information ecosystem rot while we debate buzzwords. Adults fix risk and fund the civic infrastructure they’ve been using as a keynote backdrop for a decade.
Say the quiet part plainly
If you believe this story will have no effect on your market cap, do nothing. If you believe it will move your stock, waiting it out is magical thinking. The calculator doesn’t blink. Three percent of total market cap, times two, over five years, with year one in cash. Sign it, wire it, tell your board you bought certainty in a category that has been running on vibes.
No consultants. No appraisers. No stall tactics. Just a $1.99 calculator, a wire and a clean story that turns risk into responsibility.
Four days ago I contacted multiple counsel at Apple (AAPL), Microsoft (MSFT) and Google (GOOG), and others via LinkedIn DM. Here is the message I sent:
I filed provisional patents for a runtime AI governor that enforces policy at the model boundary and creates an auditable trail. Regulators are briefed to make this baseline safety. Board should see this.
This message was sent via LinkedIn invitation to Esther Kim, Daniel Hekier, Oba A., Sarah Murphy Gray, PhD, Ben Petroskt at Google; David Saunders at Anthropic; Katelyn Seigal, Tina Hartley, Rodrigo Aberin, David Cutler, Babak Rasolzadeh, Jose Hernandez at Apple; Olivia Dang at Meta; Nick Jaramillo, Tanell Ford, John Payseno, Andreas Piras, Alexandra Knight, Pablo Tapia, Amanda Craig Deckard, Natasha Crampton at Microsoft; and Kristian von Fersen at Salesforce.
I got no response from any of them. Given that LinkedIn DM is the only channel available to me, I am documenting notice here and now on Sunday, November 2, 2025 at 1:06 PM EST.
I posted this story on LinkedIn on October 31, 2025. I got no response.
I even posted the following story so that journalists — and everyday people, and all the people who have the word “responsible” in their job titles — could see for themselves what AI is capable of.
I believe AI is a clear and present danger. This record exists so no one can say they were not told.
Furthermore, this post is intended to provide actual notice to the companies and officers named.
My name is Alan Jacobson. I'm a web developer, UI designer and AI systems architect. I have 13 patents pending before the United States Patent and Trademark Office—each designed to prevent the kinds of tragedy you can read about here.
I want to license my AI systems architecture to the major LLM platforms—ChatGPT, Gemini, Claude, Llama, Copilot, Apple Intelligence—at companies like Apple, Microsoft, Google, Amazon and Facebook.
Collectively, those companies are worth $15.3 trillion. That’s trillion, with a T—twice the annual budget of the government of the United States. What I’m talking about is a rounding error to them.
With those funds, I intend to stand up 1,400 local news operations across the United States to restore public safety and trust. You can reach me here.