In partnership with

Create how-to video guides quickly and easily with AI

Tired of explaining the same thing over and over again to your colleagues?

It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.

1️⃣Share or embed your guide anywhere
2️⃣Turn boring documentation into stunning visual guides
3️⃣Save valuable time by creating video documentation 11x faster

Simply click capture in the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover, and calls to action.

The best part? The extension is 100% free.

Beginners in AI

Good morning and thank you for joining us again!

Welcome to this daily edition of Beginners in AI, where we explore the latest trends, tools, and news in the world of AI and the technology that surrounds it. Like every edition, it is human-curated and published with the intention of making AI news and technology more accessible to everyone.

THE FRONT PAGE

Should AI Robots Be the "First Body In" on Future Battlefields?

TLDR: A California startup plans to build 50,000 armed humanoid robots for the US military by 2027, and its CEO argues that weaponizing robots is the responsible choice.

The Story:

Foundation Future Industries is aiming to manufacture 50,000 Phantom MK-1 humanoid robots for military deployment by the end of 2027. The 5'9", 180-pound robots can carry 44 pounds of payload—including lethal weapons—and are designed for reconnaissance, bomb disposal, and high-risk ground operations where human soldiers would face immediate danger. Foundation's production ramp is aggressive: 40 robots this year, 10,000 in 2026, and up to 50,000 by late 2027. CEO Sankaet Pathak has been explicit about the company's willingness to weaponize its robots, telling reporters that "pretending you don't need to weaponize robots sounds virtuous on face value, but it really isn't." The robots won't operate fully autonomously—Foundation envisions a human-in-the-loop model similar to current drone warfare, where operators retain control over lethal decisions while robots handle movement and navigation.

Its Significance:

Most major robotics firms have committed to non-weaponization, making Foundation's openly military-first approach unusual in the industry. Pathak argues that robots acting as "the first body in" during dangerous missions could reduce collateral damage by enabling precise ground-level interventions instead of airstrikes. But that logic cuts both ways—removing soldiers from direct risk could also lower political barriers to military action, potentially making conflicts more likely rather than less. Whether Foundation's 50,000-robot timeline proves realistic or not, the company's willingness to build armed humanoids for battlefield use marks a shift in how the defense industry thinks about humanoid robotics.

QUICK TAKES

The story: President Trump signed an order creating the Genesis Mission, a massive government project to combine all federal science data into one AI platform. The Department of Energy will connect supercomputers from 17 national labs with datasets from NASA, NIH, and other agencies to train AI models for science. The goal is to double research productivity within 10 years and tackle challenges in fusion energy, medicine, and advanced materials.

Your takeaway: The U.S. government is betting that combining its scientific data with AI can speed up discoveries from years to days, though success depends on future funding and how well agencies work together.

The story: Scammers on TikTok are using AI-generated videos to sell seeds for fake plant varieties that can't actually grow. Videos show impossible hostas with rainbow colors or purple leaves, using AI voiceovers to claim they're "magical" and "rare." The videos have obvious AI mistakes like water flowing through leaves or seeds floating by themselves, but thousands of people are seeing and sharing them.

Your takeaway: AI-generated scams are getting easier to create and harder for regular people to spot, flooding social media with fake products that look real enough to fool gardening enthusiasts.

The story: A new dating app called Known uses voice AI instead of profiles and swiping. Users talk to an AI for about 26 minutes answering questions, and some conversations last over an hour. The AI asks follow-up questions based on your answers, then suggests matches. When two people match, they have 24 hours to agree to meet in person. In San Francisco testing, 80% of matches led to actual dates.

Your takeaway: Voice AI might solve dating app problems by learning more about people than written profiles reveal, pushing users toward real meetings instead of endless texting.

The story: Anthropic expanded access to Claude's Chrome browser plugin from just $200-per-month Max subscribers to everyone with a paid Claude account. The plugin lets Claude navigate websites and complete tasks for you, like filling out forms, managing your calendar and email, and handling multi-step workflows. The latest version works with Claude Code and lets you record a workflow to teach Claude how to do specific tasks you want automated.

Your takeaway: AI agents controlling browsers is becoming standard across major AI companies, with OpenAI and Perplexity offering similar tools, though Google hasn't yet let Gemini fully navigate the web despite demonstrating the capability.

TOOLS ON OUR RADAR

  • 🦊 Firefox (Free): Browse the web with built-in tracker blocking and privacy protection that stops companies from following you across sites.

  • 🤖 Google Gemini (Freemium): Get instant answers, write emails, and brainstorm ideas with Google's conversational AI—free tier includes advanced reasoning and multimodal understanding.

  • 🔍 Perplexity (Freemium): Search the web with AI that cites its sources in real-time, giving you accurate answers with clickable references instead of endless scrolling.

  • 💬 Signal (Free and Open Source): Send texts, voice messages, and video calls with end-to-end encryption that even Signal can't read—trusted by privacy advocates worldwide.

TRENDING

Ars Technica Tests Four AI Coding Agents on Building Minesweeper – Ars Technica challenged four AI coding agents (Mistral Vibe, OpenAI Codex, Anthropic Claude Code, and Google Gemini CLI) to rebuild the classic Minesweeper game from scratch. The results showed big differences in how well each AI handles real coding tasks.

Gemini 3 Pro Crushed Pokemon Crystal While Gemini 2.5 Pro Got Stuck – A developer ran Gemini 3 Pro and Gemini 2.5 Pro head-to-head playing Pokemon Crystal for two weeks. Gemini 3 Pro beat the entire game without losing a battle, while 2.5 Pro only made it to the 4th badge and kept getting trapped in loops.

MIT Built a Device That Turns Old Photos Into Custom Scents – MIT and Harvard researchers created the Anemoia Device, which uses AI to analyze photographs and create matching fragrances. The device mixes fragrance oils based on what it sees in the image to trigger nostalgia or create new sensory memories.

NOAA Launches AI-Powered Weather Models That Use 99.7% Less Computing – The government deployed three new AI weather prediction systems that deliver forecasts in 40 minutes using a fraction of the computing power of traditional models. The systems improve accuracy for large-scale weather patterns and tropical storm tracking.

Authors Sue Adobe Over Using Pirated Books to Train AI – Author Elizabeth Lyon filed a class-action lawsuit claiming Adobe trained its SlimLM language model on the Books3 dataset, which contains about 191,000 pirated books. The case could include thousands of authors if certified as a class action.

TRY THIS PROMPT (copy and paste into ChatGPT, Claude, or Gemini)

Assumption Stress Test: Challenge every assumption underlying your big decisions to expose hidden risks before they bite you

Build me an interactive Assumption Stress Test as a React artifact that identifies, challenges, and stress-tests the assumptions behind important decisions.

The console should include these sections:

1. **Decision Input** - What are you deciding?
   • Decision description (text input)
   • Decision type: Career, Business, Investment, Relationship, Major purchase, Life change
   • Stakes level: Low → Medium → High → Life-changing
   • Deadline: Days, Weeks, Months, No rush
   • Current confidence: Very uncertain → Very confident

2. **Assumption Extractor** - Find hidden beliefs:
   • "What assumptions am I making?" prompts
   • AI scans your decision for implicit assumptions:
     - Market assumptions ("Demand will stay high")
     - People assumptions ("They'll behave this way")
     - Resource assumptions ("I'll have time/money")
     - External assumptions ("Economy stays stable")
     - Personal assumptions ("I can handle this")
   • Manually add assumptions you recognize
   • Categories: Critical → Important → Nice-to-have
   • "Show me what I'm assuming" full list

3. **Testing Lab** - Stress each assumption:
   • For each assumption, run tests:
   
   **Test 1: "What if I'm wrong?"**
   - If this assumption fails, what happens?
   - Severity score: Minor setback → Total failure
   - Can you recover?
   
   **Test 2: "How do I know this is true?"**
   - Evidence strength: Gut feeling → Hard data
   - Source quality rating
   - Confirmation bias check
   
   **Test 3: "What would have to change?"**
   - Scenarios that could invalidate this
   - Early warning signs
   - Probability estimate
   
   **Test 4: "Can I test this cheaply?"**
   - Small experiments to validate
   - Data you could gather
   - Questions you could ask

4. **Pre-Mortem Simulator** - Imagine failure:
   • "It's 6 months later and this decision failed. What went wrong?"
   • Generate 5-10 failure scenarios
   • Trace each failure back to a faulty assumption
   • Likelihood and impact ratings
   • "Which assumptions were we most wrong about?"
   • Prevention strategies for each scenario

5. **Blind Spot Scanner** - What are you missing?
   • Common cognitive biases to check:
     - Optimism bias (everything will go right)
     - Planning fallacy (this will take less time/money)
     - Sunk cost (already invested so much)
     - Availability bias (recent events influence too much)
     - Confirmation bias (only see supporting evidence)
   • "Am I falling for this?" checklist
   • Bias strength indicator (low/medium/high risk)
   • Corrective questions for each bias

6. **Devil's Advocate** - Argue against yourself:
   • AI generates counter-arguments to your decision
   • "Here's why this might be a bad idea..."
   • Steelman version (strongest possible objection)
   • Your rebuttal space
   • Unresolved objections flagged
   • "What would skeptics say?" scenarios

7. **Assumption Ranking** - Prioritize what matters:
   • Matrix view: Impact (if wrong) vs. Confidence (how sure)
   • Four quadrants:
     - High impact, Low confidence = **DANGER ZONE** (validate now!)
     - High impact, High confidence = Monitor closely
     - Low impact, Low confidence = Accept the risk
     - Low impact, High confidence = Safe to proceed
   • Visual plotting of all assumptions
   • Focus attention on danger zone items
   • "Kill this decision?" recommendation if too risky

8. **Validation Plan** - Test before committing:
   • For each dangerous assumption, create test:
     - Cheapest way to validate
     - Who to talk to
     - Data to gather
     - Experiments to run
     - Timeline (how long to test)
   • Validation checklist with checkboxes
   • "Can proceed when..." criteria
   • Cost of validation vs. cost of being wrong

9. **Decision Dashboard** - Final assessment:
   • Overall risk score (1-100)
   • Number of assumptions: Validated vs. Untested
   • Top 3 riskiest assumptions
   • Recommended next steps:
     - Green: Proceed with confidence
     - Yellow: Validate key assumptions first
     - Red: Too risky, reconsider or wait
   • Export assumption audit report

Make it look like a testing laboratory with:
   • Scientific/analytical aesthetic
   • Lab equipment visual metaphors (gauges, meters, test tubes)
   • Clean white background with accent colors
   • Risk indicators (green/yellow/red traffic lights)
   • Assumption cards that can be dragged and tested
   • Progress bars for validation completion
   • Professional but not intimidating
   • Data-driven design (charts, matrices, scores)
   • Warning symbols for high-risk assumptions

When I click "Search Validation Methods" or "Find Similar Cases," use web search to find ways to test assumptions, case studies of similar decisions, and data relevant to validating specific beliefs.

What this does: Forces you to surface and systematically challenge every assumption behind a big decision—identifying which assumptions are most dangerous if wrong, running "what if?" scenarios, and creating a validation plan before you commit to something you'll regret.
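If you're curious how an AI might wire up the ranking logic in sections 7 and 9, here is a minimal TypeScript sketch. The type names, the 0–10 scales, and the scoring formula are all illustrative assumptions, not part of the prompt; the model you paste the prompt into will choose its own implementation.

```typescript
// Hypothetical sketch of the section-7 matrix and section-9 risk score.
// Scales and thresholds (0-10, cutoff at 5) are assumptions for illustration.
type Assumption = { name: string; impact: number; confidence: number }; // each 0-10

type Quadrant =
  | "DANGER ZONE"      // high impact, low confidence: validate now
  | "Monitor closely"  // high impact, high confidence
  | "Accept the risk"  // low impact, low confidence
  | "Safe to proceed"; // low impact, high confidence

function classify(a: Assumption): Quadrant {
  const highImpact = a.impact >= 5;
  const highConfidence = a.confidence >= 5;
  if (highImpact && !highConfidence) return "DANGER ZONE";
  if (highImpact) return "Monitor closely";
  return highConfidence ? "Safe to proceed" : "Accept the risk";
}

// One possible 1-100 risk score: average of impact * (10 - confidence).
// With 0-10 inputs, each term ranges 0-100, so no extra scaling is needed.
function riskScore(assumptions: Assumption[]): number {
  if (assumptions.length === 0) return 0;
  const total = assumptions.reduce(
    (sum, a) => sum + a.impact * (10 - a.confidence),
    0,
  );
  return Math.round(total / assumptions.length);
}

const demo: Assumption[] = [
  { name: "Demand will stay high", impact: 9, confidence: 3 },
  { name: "I'll have time", impact: 4, confidence: 8 },
];
console.log(classify(demo[0])); // "DANGER ZONE"
console.log(riskScore(demo));   // 36
```

The useful idea is simply that impact and confidence are scored independently, so a single high-impact, low-confidence assumption dominates the dashboard even when everything else looks safe.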

WHERE WE STAND (based on today’s stories)

AI Can Now: Predict the weather using 99.7% less computing power than traditional systems while improving accuracy for storms and large-scale patterns.

Still Can't: Code complex programs reliably without bugs and mistakes that require human developers to fix.

AI Can Now: Control your web browser to fill out forms, manage your calendar, and complete multi-step tasks without you touching the keyboard.

Still Can't: Use content legally—companies still face lawsuits for training AI on copyrighted books and other protected materials.

AI Can Now: Have natural voice conversations lasting over an hour that help people form real relationships and go on actual dates.

Still Can't: Tell the difference between real and fake content, making it easy for scammers to flood social media with AI-generated lies about products that don't exist.

FROM THE WEB

DTC Daily

Sponsored

Join 38,990+ e-commerce business owners and CEOs who read DTC Daily. Get the latest news, actionable tips, and tools that will help you massively grow your business.

Subscribe

RECOMMENDED LISTENING/READING/WATCHING

Minority Report (2002): In 2054, police arrest murderers before they commit crimes using predictions from three psychics. Tom Cruise plays John Anderton, a PreCrime cop who discovers the system has predicted he'll kill someone he's never met. Now he's running from his own department, trying to figure out if the future is fixed or if he can change it.

Spielberg worked with futurists to build a believable future, and the tech design holds up. Gesture-based interfaces, personalized ads that follow you, automated cars—most of it has either arrived or feels close. The film works as a thriller, but it's also asking serious questions about free will, privacy, and whether preventing crime justifies sacrificing freedom. The PreCogs are treated as tools until you start thinking about what it costs them.

Thank you for reading. We’re all beginners in something. With that in mind, your questions and feedback are always welcome and I read every single email!

-James

By the way, this is the link if you liked the content and want to share with a friend.

Some links may be affiliate links. This helps support the newsletter at no extra cost to you.
