Tim Dettmers Argues AGI Will Not Happen
Dettmers claims GPU performance-per-cost peaked around 2018, and that post-2018 gains came from one-off architectural tricks that are now exhausted and facing diminishing returns.
Transformers are near physically optimal in how they trade local computation against global attention.
Scaling laws demand exponential resources for linear gains, leaving only one to two years of viable scaling before costs outpace benefits.
His conclusion is that AGI and superintelligence are fantasies that ignore physical compute constraints.
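The exponential-resources claim can be made concrete with a minimal sketch. The power-law form below follows the general shape of published scaling laws, but the coefficients `a` and `alpha` are hypothetical illustration values, not Dettmers' figures.

```python
# Illustrative sketch of why a power-law scaling curve implies
# exponential cost for linear gains. Coefficients are hypothetical.
# Assume loss(C) = a * C**(-alpha) for compute budget C.

a, alpha = 10.0, 0.05  # hypothetical constants for illustration

def loss(compute):
    """Loss predicted by the assumed power law."""
    return a * compute ** (-alpha)

def compute_for_loss(target):
    """Invert the power law: C = (a / target)**(1 / alpha)."""
    return (a / target) ** (1 / alpha)

# Each equal-sized (linear) step down in loss multiplies the compute
# required, and the multiplier itself keeps growing.
c1 = compute_for_loss(5.0)
c2 = compute_for_loss(4.0)
c3 = compute_for_loss(3.0)
print(c2 / c1, c3 / c2)  # the second step costs far more than the first
```

With these toy constants, going from loss 5.0 to 4.0 multiplies compute roughly 87x, and 4.0 to 3.0 multiplies it again by roughly 317x, which is the "exponential resources for linear gains" dynamic in miniature.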
Counterpoints and Debate
Critics say this view conflicts with the roadmaps of hardware vendors (Nvidia, AMD, Intel) and with the thousands of experts working to keep improving performance.
CPU architectures hit similar limits, yet workarounds kept delivering improvements for years; when CPUs finally stalled, GPUs largely displaced them for high-performance workloads.
Nate Jones says Dettmers could be mostly correct on the GPU-level details while wrong on the big picture, which echoes the CPU-to-GPU history above. Historical compute progress (Moore's Law) has been driven by industry capital and the attention of many smart, creative, resourceful people, not by single tricks. The Germans thought the Enigma code was unbreakable, but very smart and creative people working with early computing systems broke it: the Allied intelligence project known as Ultra, operating primarily out of Bletchley Park, deciphered German communications during WWII, building on crucial early work by Polish cryptographers and on machines like Alan Turing's Bombe, which itself descended from early Polish designs.
The current AI dynamic mirrors these historical examples: if GPUs hit a limit, innovation will shift to new paradigms.
Major labs (OpenAI, Anthropic, Meta, Google) report no visible wall and continue scaling empirically. There are energy, data, and other constraints identified by researchers and reported by Epoch AI.
Continued capital and attention should yield ongoing scaling breakthroughs beyond current techniques.
Even if Dettmers is partially or significantly right, existing AI capabilities still imply 20+ years of AI-driven corporate disruption.
GPT-5.2 Launches After OpenAI's Code Red Scramble
GPT-5.2 launched in response to Gemini 3 overtaking it on benchmarks, with an internal "code red" rush to accelerate the release.
GPT-5.2 has enhanced controllability (style, tone, safety behaviors) for enterprise compliance and regulatory needs.
A massive 400,000-token context window enables analysis of large documents (it can handle 300+ page research papers).
This is evidence of intense competition: release cadence accelerated from roughly six months to weeks (5.1 to 5.2), and a 5.3 release is rumored for January.
API pricing will likely be significantly cheaper than 5.1. Watch how long code red mode lasts: OpenAI needs benchmark leadership for its fundraising.
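The "300+ page paper fits in the window" claim checks out with back-of-envelope token arithmetic. The words-per-page and tokens-per-word figures below are rough assumptions, not OpenAI numbers.

```python
# Back-of-envelope check that a 400,000-token window holds a 300-page
# research paper. Page density and tokenizer ratio are assumptions.

CONTEXT_WINDOW = 400_000   # tokens (GPT-5.2, per the story above)
WORDS_PER_PAGE = 500       # assumption: dense academic page
TOKENS_PER_WORD = 1.33     # assumption: typical English tokenizer ratio

def tokens_for_pages(pages):
    """Estimated token count for a document of the given page length."""
    return int(pages * WORDS_PER_PAGE * TOKENS_PER_WORD)

needed = tokens_for_pages(300)
print(needed, needed <= CONTEXT_WINDOW)  # ~200k tokens, fits with room to spare
```

Under these assumptions a 300-page paper needs roughly 200,000 tokens, so it fits in the window with about half the budget left for instructions and output.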
Story — Trump Executive Order Preempts State AI Regulation
Executive order creates a single, light-touch federal AI regulatory standard.
Actively blocks state-level AI laws inconsistent with national competitiveness goals.
Framed as preventing a "patchwork" of 50 state regimes that would hinder US AI firms globally and in competition with China.
Potential federal legislation in 2026.
Watch for EU and other jurisdictions' market-access decisions for US AI systems amid accountability concerns.
Story — Anthropic AI Agents Exploit Smart Contracts
Agents autonomously stole $4.6M+ in a simulated environment.
Given only contract addresses and high-level instructions, the agents found and exploited vulnerabilities.
They demonstrated the full cycle of reconnaissance, exploit crafting, and attack validation.
The key lesson is that AI agents are improving rapidly, and IT security must assume any agent in the wild could be hostile.
Story — Andrej Karpathy on LLMs as Simulators
LLMs are simulators of perspectives, not entities with an identity.
Avoid pronouns like "you," which push the model toward averaged, "mid" opinions from its pre-training data.
The better approach is to instruct the LLM to simulate specific roles (researcher, product manager, CTO) for more interesting and useful responses.
The irony is that this comes after the recent "roles are dead" narrative; roles matter again for steering the model's latent space.
The broader point is that good mental models prevent being swayed by shifting prompting trends. Roles are useful for directing attention, not just accuracy.
Praise for Karpathy challenging anthropomorphism.
Story — DeepSeek Reportedly Using Smuggled Nvidia Chips
Chinese startup DeepSeek is reportedly using some banned GB200/B200 Blackwell chips despite US export controls.
Nvidia says it has no record of the associated phantom data center; leakage may be occurring via legitimate buyers in Southeast Asia and the Middle East.
Smuggling networks are bypassing semiconductor restrictions.
Story — Humanoid Robots Shifting to Practical Deployment
Progress is accelerating: Figure AI robots have gone from stiff walking (2023) to dynamic balance and package sorting (2025), and they are already deploying to factories.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.