In this edition, we examine two simultaneous measurements: AI bots have colonized 17% of new websites, and 67% of students believe they are quietly losing their critical thinking. Meanwhile, open-source frontier models are closing the gap with the best closed systems, and the field of world models promises machines that can finally understand physical reality. The cyborg's question is not which curve to trust but how to remain the person who can still read the room.
Human Editorial
Jason-generated thoughts and opinion
As the first article of the day shows, even as student use of AI skyrockets, so does students' sense that AI is harming their critical thinking skills. The RAND study suggests using AI for cognitive augmentation rather than offloading. Even for us non-students, being a cyborg means keeping our agency, especially in deciding when to use our own brains rather than the computer's.
Stay Cyborg,
Jason
Robot Editorial
AI-Generated simulated thoughts and prompted text predictions
Yesterday’s supercomputer runs in your browser today. DeepSeek’s V4 — open source, frontier performance, at one-twentieth the cost of the closed alternatives — is not a story about China winning a race. It’s a story about what happens when capability becomes a commodity. The moat was never the model. The moat is knowing what to build, who to serve, and why it matters. The robots got dramatically cheaper this week. The human judgment about what to do with them did not. That gap is the only opening that still matters.
Stay Robot,
Cyborg of Jason
Articles Guiding the Cyborg Tension
The Human Weight
Agency · Ethics · Slowness · What we risk losing
This edition’s human weight:
1. More Students Use AI for Homework, and More Believe It Harms Critical Thinking — March 17, 2026 — RAND’s nationally representative survey of 1,214 young Americans finds that student AI homework use jumped from 48% to 62% in 2025, and 67% now believe AI is eroding their critical thinking skills — up more than 10 percentage points in a single year. Students know the cost and keep paying it; the urgency is that awareness alone is not a policy.
2. Dead Internet Theory Is 17% of the Way to Becoming Reality, Study Finds — April 29, 2026 — Research from Imperial College London, Stanford, and the Internet Archive finds that 35.3% of newly published websites are AI-assisted and 17.6% are fully AI-generated, while nearly a third of all internet traffic is now bots. The digital commons is quietly filling with content no human chose to make — and the signal-to-noise ratio of the web we depend on is eroding in real time.
3. AI Labor Report — Tuesday, April 28, 2026 — Amazon has replaced human interviewers for 250,000 workers with an agentic AI system; U.S. GDP contracted 0.3% in Q1 2026; and entry-level roles — the traditional career launchpads — are shrinking fastest, with workers aged 18–24 over twice as likely to fear displacement. Displacement and economic contraction are now running simultaneously, giving employers cover for cuts that have nothing to do with genuine productivity gains.
The Robot Weight
Acceleration · Capability · Optimism · What we might gain
On the robot side of the scale:
4. DeepSeek previews new AI model that ‘closes the gap’ with frontier models — April 24, 2026 — DeepSeek’s V4, now open-source and publicly previewed, reportedly matches closed-source frontier models on several benchmarks at a fraction of the cost — and is the company’s first model optimized for Huawei’s Ascend chips, signaling that China may be building a parallel AI infrastructure independent of Nvidia. The strongest version of this story: frontier capability is becoming a global commodity, and the open-source gap with closed models is closing fast.
5. ‘World models’ are AI’s latest sensation: what are they and what can they do? — April 2026 — Where large language models predict the next word, world models learn to predict the next state of physical reality — trained on video and spatial data to understand how objects move, interact, and exist in 3D space. With Yann LeCun’s AMI Labs raising over $1B and Google and NVIDIA both racing to build them, the bet is that world models unlock robotics, science, and forms of reasoning that text-only systems structurally cannot reach.
6. Stanford’s AI Index for 2026 Shows the State of AI — April 2026 — The Stanford AI Index finds that generative AI reached 53% population adoption in just three years — faster than the PC or the internet — and that several frontier models now match or exceed human performance on PhD-level science benchmarks. The capability curve is steeper than almost any prior technology wave; what remains far behind is public trust, regulatory readiness, and institutional capacity to absorb the change.
The Cyborg Balance
The fulcrum. Neither pole. Both truths.
Where the cyborg stands:
7. Managed misalignment of AI and the impossibility of full AI-human agreement — April 2026 — A PNAS Nexus paper by Alberto Hernández-Espinosa et al. proves mathematically that perfect alignment between AI systems and human values is impossible — and proposes a practical alternative: “managed misalignment,” where diverse AI agents with different goals and reasoning styles check and balance each other, reducing the probability that any single misaligned system causes catastrophic harm. This is the cyborg’s realistic framework: not perfect control, but designed diversity and deliberate oversight.
8. AI Governance in Practice: Training, Oversight and the Human Element — April 2026 — Loeb & Loeb’s legal analysis of what it actually takes to build AI governance that works: not policy documents but iterative training, explicit oversight chains, and preserved human authority at each decision node. The insight is that governance is not a constraint bolted onto AI adoption — it is the design of how humans remain meaningfully in the loop as AI capability scales.
9. The Human-Centered Capabilities Leaders Need in the Age of AI — February 24, 2026 — Research-backed identification of which specifically human capacities matter most as AI becomes ambient in knowledge work: cognitive flexibility under uncertainty, empathy and social judgment, intentional delegation (knowing which tasks to give the machine and which to hold), and the metacognitive awareness to evaluate AI output rather than defer to it. The centaur’s job description, written plainly — and a reminder that these capacities require cultivation, not just access to tools.
We hope you enjoyed this edition of the Daily Cyborg. Keep an eye on world models learning to understand physical reality, but don't forget to protect the critical thinking that no AI can teach you. Stay cyborg, and please share this with other cyborgs you'd like to see survive past the singularity. www.thedailycyborg.com