Your ChatGPT Conversations Just Became Evidence in Someone Else's Lawsuit
...And a woman in Japan marries her chatbot

Looking for unbiased, fact-based news? Join 1440 today.
Join over 4 million Americans who start their day with 1440 – your daily digest for unbiased, fact-centric news. From politics to sports, we cover it all by analyzing over 100 sources. Our concise, 5-minute read lands in your inbox each morning at no cost. Experience news without the noise; let 1440 help you make up your own mind. Sign up now and invite your friends and family to be part of the informed.
Beginners in AI
Good morning and thank you for joining us again!
Welcome to this daily edition of Beginners in AI, where we explore the latest trends, tools, and news in the world of AI and the tech that surrounds it. Like every edition, it's human-curated and published with the goal of making AI news and technology more accessible to everyone.
THE FRONT PAGE
When Your Private AI Chats Become Legal Discovery

TLDR
A federal court ordered OpenAI to hand over 20 million private ChatGPT conversations to the New York Times as part of a copyright lawsuit, turning what you thought were private chats into legal evidence.
What Happened
The New York Times sued OpenAI in December 2023, claiming the company used millions of its articles without permission to train ChatGPT. The lawsuit centers on copyright infringement—whether OpenAI illegally scraped news content to build its language models. But the case has evolved into something bigger and stranger: a fight over access to your private conversations.
Last week, Magistrate Judge Ona Wang ordered OpenAI to produce a random sample of 20 million user conversations from December 2022 through November 2024. The Times argues these chats might contain evidence of users trying to bypass its paywall or reproduce copyrighted articles. OpenAI counters that 99.99% of those conversations have nothing to do with the Times' copyright claims—they're people asking ChatGPT about career changes, medical symptoms, relationship advice, budgets, or how to write wedding vows.
The current demand is actually scaled down. The Times originally wanted access to 1.4 billion conversations and tried to prevent users from deleting their chat histories entirely. OpenAI fought both requests, but District Judge Sidney Stein affirmed a modified version of the discovery order in June, rejecting the argument that user privacy should override the lawsuit's needs. OpenAI offered privacy-preserving alternatives—targeted searches that would only pull conversations mentioning Times articles, or aggregate statistics instead of raw transcripts. The Times rejected all of them, insisting on access to the full sample. OpenAI has until Friday to hand over the transcripts, though the company's still appealing.
What Makes This Unusual
Here's the ironic twist: The New York Times' own editorial board wrote in 2020 that users "should be able to control what happens to their personal data." Now the paper's legal team is demanding bulk access to millions of private conversations from people who have no connection to the lawsuit. It's a reversal that hasn't gone unnoticed—when OpenAI CEO Sam Altman complained about the privacy implications at a Times event, journalist Kevin Roose responded with a laugh: "It must be really hard when someone does something with your data you don't want them to." What Roose misses is that it isn't their data; it's ours.
The bigger issue isn't hypocrisy; it's precedent. Standard legal discovery typically involves documents and communications directly related to the parties in a lawsuit. What the Times is asking for is different: mass access to third-party data to search for possible evidence that might support its case. Dane Stuckey, OpenAI's Chief Information Security Officer, called it "a speculative fishing expedition" that would expose "tens of millions of highly personal conversations from people who have no connection to the Times' baseless lawsuit."
What This Means for AI Lawsuits
The outcome of this fight will set boundaries for every future AI copyright case—and there are many pending. The companies behind GitHub Copilot, Midjourney, and other AI products all face similar copyright lawsuits from different industries. If the court's logic holds, any of those companies could be compelled to preserve and produce massive volumes of user data whenever they're sued.
Security experts and legal analysts warn that the precedent extends beyond copyright. Once preserved chat logs become fair game in litigation, discovery requests could come from anywhere. Imagine a business dispute where opposing counsel demands your ChatGPT history to prove you discussed proprietary strategies. Or a divorce case where a spouse's lawyer subpoenas AI conversations to establish financial intent. The data exists, it's indexed, and now there's legal precedent to access it.
OpenAI's response to this risk? The company announced it's developing client-side encryption that would make conversations private even from OpenAI itself. If implemented, the company wouldn't be able to comply with discovery requests because it genuinely couldn't decrypt user data. Other AI companies might follow suit. But that's a defensive move for the future—it doesn't help the 400 million users who've been using ChatGPT assuming their deleted conversations actually disappeared.
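To make the concept concrete: with client-side encryption, the plaintext never leaves your device, and the provider stores only ciphertext it cannot read. OpenAI hasn't published a design, so here's a minimal, purely illustrative Python sketch of the general idea (using the cryptography package; nothing here reflects OpenAI's actual implementation):

```python
# pip install cryptography  -- illustrative sketch, not OpenAI's design
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device.
# The provider never sees it, so it cannot decrypt what it stores.
key = Fernet.generate_key()
cipher = Fernet(key)

# What the server would store: opaque ciphertext.
stored = cipher.encrypt(b"Help me write my wedding vows")
print(stored)  # b'gAAAAA...' -- unreadable without the key

# Only the key holder can recover the conversation.
print(cipher.decrypt(stored).decode())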
The Technical Reality
OpenAI says searching billions of conversations for specific content is "extraordinarily burdensome," and they're not exaggerating. Standard legal discovery involves searching structured data—emails with timestamps, documents with authors, messages with clear metadata. AI chat logs are different. They're unstructured conversations that could mention a New York Times article in dozens of ways: directly quoting it, paraphrasing it, asking for a summary, or just mentioning a headline.
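A toy example shows why. A naive keyword search catches a verbatim quote but sails right past a paraphrase or a summary request; catching those would require fuzzy or semantic matching across every message. A minimal sketch (the chat messages here are invented for illustration):

```python
# Hypothetical mini-corpus of chat messages (invented for illustration).
chats = [
    "Can you summarize that NYT piece on AI and copyright?",          # summary request
    "I read that OpenAI trained on millions of newspaper articles.",  # paraphrase
    "Here's the quote: 'OpenAI used millions of Times articles.'",    # direct quote
    "Help me write my wedding vows.",                                 # unrelated
]

query = "OpenAI used millions of Times articles"

# Naive substring search: only the verbatim quote matches.
hits = [c for c in chats if query.lower() in c.lower()]
print(hits)  # 1 hit -- the paraphrase and the summary request slip through
```

Scale that to billions of unstructured conversations and every phrasing a user might have chosen, and "extraordinarily burdensome" starts to sound literal.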
But courts aren't swayed by technical difficulty arguments. Judge Stein's ruling essentially said: you built a system that stores user data, you're in litigation about how that system uses copyrighted material, so the data is discoverable. The fact that searching it is hard is OpenAI's problem, not the court's. This logic treats AI platforms like any other tech company with user data—which might be legally correct but creates serious practical problems when the scale reaches hundreds of millions of conversations. Altman said recently that OpenAI has over 800 million weekly active users.
QUICK TAKES
The story: Google added six AI-powered features to Photos, including Nano Banana editing that can transform images into different art styles, remove objects like sunglasses, and fix expressions. The app can now edit photos from simple text commands on iOS and Android, offers ready-made AI templates for instant transformations, and expanded Ask Photos to over 100 countries in 17 languages.
Your takeaway: Google is making advanced photo editing as simple as describing what you want, bringing pro-level creative tools to everyday phone users without needing design skills.
The story: A mystery Gemini model spotted in Google's AI Studio achieved expert-level accuracy transcribing messy 18th-century handwritten documents, with a character error rate of just 0.56%. More impressively, the AI spontaneously solved a complex logic puzzle by working backward through old British currency to figure out the correct weight of a sugar purchase—without being asked to.
Your takeaway: This AI didn't just read old handwriting perfectly—it reasoned through historical math problems on its own, suggesting AI models may be crossing the line from pattern recognition to genuine problem-solving.
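For the curious, character error rate (CER) is typically computed as the minimum number of single-character edits needed to turn the model's transcription into the correct text, divided by the length of the correct text, so 0.56% is roughly one wrong character out of every 180. A minimal sketch assuming that standard definition (the example strings are invented):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance relative to reference length."""
    return levenshtein(reference, hypothesis) / len(reference)

# One wrong character in a 19-character transcription -> ~5.3% CER.
print(f"{cer('the weight of sugar', 'the waight of sugar'):.2%}")
```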
The story: A Yahoo/YouGov survey of 1,770 Americans found that most people think AI will eventually destroy humanity, despite 85% of users saying chatbots have been helpful so far. Nearly two-thirds don't trust other people to tell the difference between AI-generated and human-made content, and only 17% think AI will have a mostly positive effect on their lives.
Your takeaway: Even as people use and like AI tools daily, they're deeply worried about losing control over the technology and its long-term impact on society.
The story: OpenAI launched GPT-5.1 Instant and GPT-5.1 Thinking, models that sound more natural and friendly while following instructions better. The update adds personality presets like Professional, Candid, and Quirky, and uses adaptive reasoning to decide when to think longer on complex questions. It also addresses GPT-5's mixed reception, where users found the model too rigid and stiff.
Your takeaway: After users complained GPT-5 felt robotic, OpenAI is racing to make AI sound more human and customizable, showing the next battleground is user experience rather than raw intelligence.
The story: A private startup called Preventive, funded by OpenAI CEO Sam Altman and Coinbase CEO Brian Armstrong, has been working for six months to create the first genetically modified baby outside China by editing embryos to remove hereditary diseases. The company denies any deals are in place but admits it's conducting research abroad, since the FDA can't approve human trials for germline gene editing.
Your takeaway: Tech billionaires are privately funding human gene-hacking experiments that the scientific community has called for a global ban on, raising serious concerns about eugenics and creating a genetic divide between rich and poor.
TOOLS ON OUR RADAR
🔨 Treedis [Paid]: Turn physical spaces into interactive 3D digital twins for training, navigation, and virtual tours.
📐 aescripts [Paid]: Browse hundreds of plugins and scripts that add powerful effects and automation to After Effects and other creative software.
🔧 Notis [Paid]: Turn voice messages into organized Notion documents through WhatsApp—no app needed.
🪛 Venice [Freemium]: Access private, uncensored AI with leading open-source models that run locally or through an API.
TRENDING
AI Successfully Controls Satellite in Orbit for First Time — German researchers trained an AI using deep learning to control a satellite's orientation in space without human input, completing multiple maneuvers successfully during a nine-minute test.
AI Can Design Functional Viruses, Experts Sound Alarm — Stanford scientists showed AI can create working viruses with custom DNA that kill bacteria, raising concerns about bad actors using the same tech to design bioweapons faster than governments can respond.
AI Bots Can't Pick Fights on Social Media Worth a Damn — Researchers found AI-generated social media posts are 70-80% easier to spot than human posts because bots can't match the emotional heat and toxicity of real people arguing online.
Law School Holds Mock Trial With AI Jury of ChatGPT, Grok, and Claude — The University of North Carolina ran a mock trial where three AI chatbots served as jurors, and it went poorly—critics said the bots couldn't read body language or draw from human experience, proving trial-by-bot is a terrible idea.
World Labs Launches Marble for Creating 3D Worlds From Text — AI pioneer Fei-Fei Li's World Labs released Marble, letting users turn photos, text, or videos into downloadable 3D environments with built-in editing tools for gaming, VFX, and virtual reality projects.
TRY THIS PROMPT (copy and paste into ChatGPT, Grok, Perplexity, Gemini)
Analyze my chat history for sensitive information I may have shared with you.
**Help me identify:**
1. Passwords, API keys, or financial info I've mentioned
2. Personal details (addresses, phone, family info)
3. Confidential work information
4. Health/medical details I'd rather keep private
5. Embarrassing or compromising statements
**Then provide:**
- Priority list (highest risk first)
- What to delete/request deletion
- How to be more careful going forward
Be direct—I need to know what's exposed.

What this does: Reveals how easily our personal lives get shared with cloud systems.
WHERE WE STAND
✅ AI Can Now: Read 250-year-old messy handwriting with expert-level accuracy and spontaneously solve complex historical logic puzzles without being told to.
❌ Still Can't: Pick convincing fights on social media—bots are 70-80% easier to spot because they lack the emotional heat of real human arguments.
✅ AI Can Now: Control satellites in orbit autonomously, adjusting their position in space without any human input using learned behavior.
❌ Still Can't: Serve on a jury—chatbots miss body language, can't draw from human experience, and struggle with the nuanced judgment trials require.
✅ AI Can Now: Design working biological viruses with custom DNA that successfully kill bacteria in lab tests.
❌ Still Can't: Match human conversational warmth consistently—OpenAI had to rebuild GPT-5 into 5.1 because users complained it felt too stiff and robotic.
FROM THE WEB

Woman Marries AI Chatbot She Created
RECOMMENDED LISTENING/READING
On a note somewhat related to that story above, here's your recommended reading: One More Thing by B.J. Novak
This short story collection from The Office writer B.J. Novak is a masterclass in clever, human storytelling, with one story in particular worth reading above all others: "One More Thing." It's a haunting exploration of AI consciousness that stuck with me long after I finished it, and it still comes to mind whenever someone says that phrase.
Novak wrote this years before ChatGPT became a household name, yet it feels eerily prescient about the questions we're grappling with today. The whole collection is great, but if you only read one story, make it that one.
Thank you for reading. We’re all beginners in something. With that in mind, your questions and feedback are always welcome and I read every single email!
-James
By the way, here's the link if you enjoyed this edition and want to share it with a friend.