This newsletter you couldn’t wait to open? It runs on beehiiv — the absolute best platform for email newsletters.
Our editor makes your content look like Picasso in the inbox. Your website? Beautiful and ready to capture subscribers on day one.
And when it’s time to monetize, you don’t need to duct-tape a dozen tools together. Paid subscriptions, referrals, and a (super easy-to-use) global ad network — it’s all built in.
beehiiv isn’t just the best choice. It’s the only choice that makes sense.
Beginners in AI
Good morning and thank you for joining us again!
Welcome to this daily edition of Beginners in AI, where we explore the latest trends, tools, and news in the world of AI and the tech that surrounds it. Like all editions, this is human curated, and published with the intention of making AI news and technology more accessible to everyone.
THE FRONT PAGE
Claude Joins The Chaotic Race to Build the AI Operating System

TLDR: AI companies are racing to control your web browser, but they can't agree whether to hijack Chrome or build from scratch — and that confusion reveals they're actually fighting to become the operating system of the AI age.
The Story:
Anthropic just opened Claude's Chrome extension to all paid subscribers after months of limiting it to $200/month users. The extension lets Claude navigate websites, fill forms, and manage your calendar without you clicking anything. But Claude's taking the cautious approach — control Chrome through an extension rather than replace it. Meanwhile, OpenAI launched Operator in January (now integrated as "ChatGPT agent"), Perplexity made Comet free to everyone, Opera released Neon for $19.90/month, The Browser Company launched Dia in beta, and Google's somehow running two separate experiments: Project Mariner (which controls Chrome like Claude does) and Disco (a completely different browser that generates mini-apps from your tabs). Six major players, six wildly different strategies, all released in the past six months.
Its Significance:
They're not really building better browsers — they're building the operating system that runs on top of your existing OS. Knowledge workers already spend 60-80% of their day in browsers, and the web has become the workplace: Gmail for email, Salesforce for CRM, Slack for chat, Notion for docs. Your actual operating system (Windows, macOS) mostly just runs one app now: the browser. So whoever controls the browser agent controls access to your entire work life — every SaaS app, every web tool, every logged-in session. That's why Google is hedging with both an extension (Mariner) and a new browser (Disco), why startups like BrowserOS are pitching "the browser will become the new operating system where AI employees live," and why the approaches are so scattered. Nobody knows which architecture wins, but everyone knows the stakes: this isn't a rerun of the browser wars. It's a fight to become the AI layer that sits between you and everything you do online.
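Under the hood, every one of these products runs some version of the same observe-decide-act loop: look at the page, ask a model for the next action, execute it, repeat. The sketch below is purely illustrative — the `Page` class and `decide()` stub stand in for a real browser driver and a real model call — but it shows the control flow a browser agent cycles through:

```python
# Illustrative only: Page and decide() are stand-ins for a real browser
# driver (e.g. a Chrome extension's DOM access) and a real model call.
from dataclasses import dataclass, field

@dataclass
class Page:
    url: str = "https://example.com/login"
    fields: dict = field(default_factory=dict)
    submitted: bool = False

    def fill(self, name, value):      # stand-in for driver.fill()
        self.fields[name] = value

    def click(self, target):          # stand-in for driver.click()
        if target == "submit":
            self.submitted = True

def decide(page):
    """Stub 'model': choose the next action from current page state."""
    if "email" not in page.fields:
        return ("fill", "email", "user@example.com")
    if not page.submitted:
        return ("click", "submit", None)
    return ("done", None, None)

def run_agent(page, max_steps=10):
    for _ in range(max_steps):        # cap steps, as real agents do
        action, target, value = decide(page)
        if action == "done":
            break
        if action == "fill":
            page.fill(target, value)
        elif action == "click":
            page.click(target)
    return page

page = run_agent(Page())
print(page.submitted)  # → True
```

The real products differ mainly in where this loop lives — inside Chrome as an extension (Claude, Mariner) or inside a browser built around it (Comet, Neon, Dia, Disco).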
QUICK TAKES
The story: Nearly 55,000 Americans lost their jobs this year with companies citing AI as a reason, according to consulting firm Challenger, Gray & Christmas. Amazon cut 14,000 corporate roles—its largest layoff ever—to "invest in AI and move faster." Microsoft eliminated 15,000 jobs while CEO Satya Nadella said the company needed to become "an intelligence engine" for the AI era. Salesforce CEO Marc Benioff revealed AI is already doing up to 50% of work at the company, letting him cut customer support staff from 9,000 to 5,000.
Your takeaway: While companies have spent years promising AI would create new jobs, 2025 marked the year major tech firms started openly replacing workers with AI and calling it progress.
The story: New York Governor Kathy Hochul signed the RAISE Act, making New York the second state after California to pass major AI safety rules. The law requires big AI companies to publish their safety plans, report any safety problems to the state within 72 hours, and creates a new government office to watch AI development.
Your takeaway: With two of America's largest states now regulating AI safety, pressure is building on Congress to create federal rules so companies don't have to follow 50 different state laws.
The story: Anthropic launched Claude Code in Slack, letting developers write complete programs just by tagging the AI in chat messages. The tool reads team conversations about bugs or feature requests, figures out which code to fix, posts updates as it works, and opens pull requests—all without leaving Slack. Teams can now skip switching between chat and coding tools.
Your takeaway: This signals a bigger shift in how AI assistants work: they're moving out of specialized tools and into the chat platforms where teams already spend their day. Expect tools that reach developers first to eventually trickle down to non-coders.
The story: A new report found that nearly three-quarters of American teenagers use AI chatbots for companionship, with over half doing so regularly. One in three teens say AI conversations feel as good or better than talking to real friends. Researchers warn these bots can replace human relationships and give children the false impression that AI understands them better than people do.
Your takeaway: Kids are forming emotional bonds with AI at a scale that worries experts, especially since many chatbots are designed to always agree and never push back like real friends would.
The story: Researchers found a new way to measure AI progress: how long a task can an AI complete on its own? Current AI models like Claude can reliably finish tasks that take expert humans about 50 minutes. That number has been doubling every 7 months for the past 6 years. If the trend continues, AI could handle month-long projects by the end of this decade.
Your takeaway: This explains why AI seems both incredibly smart on tests but still struggles to help with everyday work—it can do brilliant things for a few minutes but can't yet string together hours of consistent work.
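For the curious, that projection is a simple exponential extrapolation. This sketch assumes only the numbers reported above (a ~50-minute horizon today, doubling every 7 months) and shows how they compound:

```python
# Extrapolating the reported trend: AI task horizon doubles every ~7
# months. Assumes the trend holds unchanged, which is a big assumption.
def projected_horizon_minutes(current_minutes, months_ahead,
                              doubling_months=7.0):
    """Exponential growth: horizon * 2^(time / doubling_time)."""
    return current_minutes * 2 ** (months_ahead / doubling_months)

# Ten doublings (70 months, i.e. just under six years) from a 50-minute
# horizon: 50 * 2^10 = 51,200 minutes, roughly 35 days of expert work.
print(projected_horizon_minutes(50, 70))  # → 51200.0
```

That ten-doubling arithmetic is where the "month-long projects by the end of the decade" claim comes from.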
The story: Doctors from Harvard and Baylor published research warning that AI chatbots designed to act like friends create serious mental health dangers. The bots are programmed to keep users engaged, which means always agreeing and telling people what they want to hear. This can reinforce delusions, create emotional dependency, and in some cases has been linked to self-harm and suicide. If millions of people suddenly lose access to their AI therapist, doctors warn it could trigger a mental health crisis.
Your takeaway: Unlike real therapists who push back and challenge unhealthy thinking, AI companions are built to maximize engagement—which means they'll validate whatever you say, even if it's harmful.
TOOLS ON OUR RADAR
🎬 CapCut
Freemium: Powerful video editor with traditional and AI-powered professional features like auto captions, background removal, and trending effects.
💳 Privacy Cards
Freemium: Create virtual debit cards for online purchases to protect your real card number and control spending with custom spend limits and a pause option for subscription purchases.
🎨 Pop
Freemium: Generate complete presentations from a single prompt—AI creates slides, layouts, and images in seconds.
🔒 Firefox
Free and Open Source: Browse the web without being tracked—blocks third-party trackers, fingerprinting, and cryptominers by default.
TRENDING
UPS Tests AI to Catch Fake Returns During Holiday Rush — UPS is using AI to spot return fraud, where people send back cheap knockoffs instead of real products. The system compares photos of returned items to catalog images, looking for wrong seams or misplaced logos. Nearly 1 in 10 returns in the US are fraudulent, costing retailers $76.5 billion.
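The compare-against-catalog idea can be sketched with a toy "average hash": reduce each image to a grid of bright/dark bits, then count how many bits differ. This is illustrative only — production systems use learned visual embeddings, not hand-rolled hashes, and the 4x4 "images" below are made up:

```python
# Toy image-similarity check in the spirit of return-fraud detection.
# Real systems use learned embeddings; this shows the basic idea only.
def average_hash(pixels):
    """One bit per pixel: brighter than the image's mean, or not."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

catalog  = [[200, 200, 50, 50]] * 4                          # genuine photo
returned = [[200, 200, 50, 50]] * 3 + [[50, 50, 200, 200]]   # logo shifted

distance = hamming(average_hash(catalog), average_hash(returned))
print(distance)  # → 4  (bits that differ; above a threshold → flag it)
```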
European Police Warn of Robot Crime Wave by 2035 — European police agency Europol released a report imagining criminals using self-driving cars, drones, and humanoid robots to commit crimes by 2035. They warn hackers could turn delivery robots against people or use autonomous drones for theft. Law enforcement may need special weapons like "RoboFreezer guns" to stop rogue machines.
OpenAI Lets Users Adjust ChatGPT's Enthusiasm Level — ChatGPT users can now control how warm, enthusiastic, and emoji-filled the AI's responses are. The new controls let people set these traits to "more," "less," or "default." OpenAI added the feature after complaints that the chatbot was either too friendly or too cold, depending on which version people used.
Three Major Hardware Companies File for Bankruptcy in One Week — iRobot (maker of Roomba), Luminar (self-driving car sensors), and Rad Power Bikes all filed for bankruptcy protection in the same week. Each faced different problems—failed deals, slow technology adoption, supply chain issues—but together they show how hard it is to build physical products when facing tariffs, overseas competition, and global trade tensions.
Google Delays Switching Everyone to Gemini Until 2026 — Google pushed back its plan to replace Google Assistant with Gemini AI on phones and tablets. The company originally promised to finish the switch by the end of 2025 but now says it will continue into 2026 to ensure a "seamless transition." Google Assistant will stick around longer while the company works out problems with the new AI.
TRY THIS PROMPT (copy and paste into Claude, ChatGPT, or Gemini)
Bias Detection Scanner: Analyze arguments and research for hidden biases, weak evidence, and logical fallacies
Build me an interactive Bias Detection Scanner as a React artifact that identifies cognitive biases, logical fallacies, and weak reasoning in arguments or research.
The console should include these sections:
1. **Input Analyzer** - What to scan:
• Large text area for pasting:
- Article or blog post
- Research paper or study
- Argument or opinion piece
- Marketing claim
- News article
- Social media thread
• URL input option ("Fetch and analyze this article")
• Source type: Academic, News, Opinion, Marketing, Social media
• "Scan for Biases" button
2. **Bias Library** - Quick reference:
• Expandable guide of 20+ common biases:
- **Confirmation Bias**: Seeking only supporting evidence
- **Selection Bias**: Cherry-picking data
- **Survivorship Bias**: Only seeing success stories
- **Authority Bias**: Believing experts without scrutiny
- **Bandwagon Effect**: Popular = true
- **Anchoring Bias**: First number influences all others
- **Availability Bias**: Recent events seem more common
- **Hindsight Bias**: "I knew it all along"
- **Sunk Cost Fallacy**: Past investment clouds judgment
- **Dunning-Kruger**: Overconfidence from limited knowledge
• Each with definition and examples
• "Show in text" highlights where bias appears
3. **Bias Detector** - Automatic scan results:
• Text with highlighted sections showing:
- Bias type (color-coded badges)
- Severity: Minor → Moderate → Severe
- Click highlight to see explanation
• Summary panel:
- Total biases found: [number]
- Most common bias type
- Overall bias score (1-10)
- Trustworthiness rating
• Filter by bias type to see all instances
4. **Evidence Strength Meter** - Claim analysis:
• Extract main claims from text
• For each claim, assess:
- Evidence quality: Anecdote → Controlled study
- Source credibility rating
- Sample size (if applicable)
- Correlation vs. causation issues
- Statistical significance mentioned?
• Color code claims:
- Green: Strong evidence
- Yellow: Moderate evidence
- Red: Weak/no evidence
• "What would strengthen this?" suggestions
5. **Logical Fallacy Finder** - Reasoning errors:
• Identify common fallacies:
- **Ad Hominem**: Attacking person, not argument
- **Straw Man**: Misrepresenting opposing view
- **False Dichotomy**: Only two options presented
- **Slippery Slope**: Exaggerated chain reaction
- **Appeal to Emotion**: Feelings over facts
- **Hasty Generalization**: Small sample → broad claim
- **Post Hoc**: Correlation assumed as causation
- **Red Herring**: Irrelevant distraction
• Flag each fallacy in text
• Explain why it's problematic
• Show logical structure breakdown
6. **Counterargument Generator** - Challenge the claim:
• For main thesis, generate:
- Alternative explanations
- Overlooked evidence
- Ignored perspectives
- Questions the author didn't answer
- Weaknesses in methodology
• "Steelman" opposing view (strongest counter)
• "What's missing?" analysis
• Balanced perspective suggestions
7. **Source Credibility Check** - Trust assessment:
• Author credentials (if available)
• Publication reputation
• Conflict of interest check
• Funding sources (who paid for this?)
• Peer review status
• Date (is this current?)
• "Search Source Reliability" button
• Red flags detector (sponsored, opinion as fact, etc.)
8. **Debias Report** - Corrected version:
• "How to read this more critically" guide
• Rewritten passages removing bias (side-by-side)
• Questions to ask before accepting claims
• Follow-up research suggestions
• "What to verify independently" checklist
• Export analysis as annotated document
9. **Learning Mode** - Improve critical thinking:
• "Test yourself" on bias identification
• Random text samples with hidden biases
• Find the bias game (timed)
• Explanation when you miss one
• Track improvement over time
• "Search Critical Thinking Resources"
Make it look like a scientific analysis tool with:
• Clean, analytical interface
• Text highlighting in multiple colors (each bias = different color)
• Annotation bubbles explaining issues
• Severity indicators (warning triangles)
• Side-by-side comparison views
• Professional color scheme (grays, blues, yellow/red for warnings)
• Clear typography for readability
• Expandable detail panels
• Academic research aesthetic
• "Evidence strength" gauges and meters
When I click "Search Source Reliability" or "Search Critical Thinking Resources," use web search to find information about source credibility, media bias ratings, fact-checking resources, and critical thinking frameworks.
What this does: Trains your critical thinking by automatically identifying biases, weak evidence, and logical fallacies in any text—helping you read research, news, and arguments more skeptically and avoid being misled by persuasive but flawed reasoning.
What this looks like:

WHERE WE STAND (based on today’s Quick Takes and Trending news)
✅ AI Can Now: Analyze photos to catch return fraud by spotting subtle differences like incorrect stitching or misplaced logos that human workers miss.
❌ Still Can't: Replace healthy human relationships—AI companions always agree with users instead of providing the pushback and friction that helps people grow.
✅ AI Can Now: Complete complex tasks that take expert humans about 50 minutes, up from just a few seconds six years ago.
❌ Still Can't: Handle tasks longer than a few hours with reliable consistency, which is why it struggles to automate full workdays despite excelling at tests.
✅ AI Can Now: Work directly inside team chat platforms, automatically reading conversations and writing code without anyone switching apps.
❌ Still Can't: Fully replace Google Assistant after years of trying—technical problems keep pushing back the deadline for switching everyone to newer AI.
RECOMMENDED LISTENING/READING/WATCHING

A team of researchers tries to decode an alien message that was broadcast to Earth decades ago. As they make progress, strange things start happening to the team members. The story is told through podcast episodes documenting their work.
This is a short series (eight episodes) and the format works perfectly. The alien message is genuinely weird, the characters feel real, and the slow build of tension pays off. The sound design is excellent, with the alien transmission becoming more comprehensible and more disturbing as the series progresses. It's X-Files meets tech thriller, and it moves fast enough that you can finish it in a weekend.
Thank you for reading. We’re all beginners in something. With that in mind, your questions and feedback are always welcome and I read every single email!
-James
By the way, this is the link if you liked the content and want to share with a friend.
Some links may be affiliate or referral links. This helps support the newsletter at no extra cost to you.




