OpenAI Eyes $110 Billion Raise From Amazon and NVIDIA, Valuation Would Reach $840 Billion

OpenAI is in advanced discussions to raise $110 billion from Amazon and NVIDIA, a round that would push its valuation to $840 billion and make it one of the most valuable private companies in history. The participation of both a cloud infrastructure giant and the dominant AI chip supplier signals that OpenAI is consolidating partnerships across the full stack of AI compute, not just seeking capital. This coverage follows yesterday’s briefing on OpenAI’s Pentagon deal, and together the two stories sketch a company moving rapidly to entrench itself across commercial, governmental, and infrastructure markets simultaneously.
The scale of the round also clarifies the economics of frontier AI competition: the capital requirements are now so large that only a handful of players can remain at the frontier without sovereign or hyperscaler backing. Watch whether Anthropic, Google DeepMind, or xAI respond with comparable infrastructure announcements in the weeks ahead.
OpenAI Reveals More Details About Its Agreement With the Pentagon

Sam Altman publicly acknowledged that OpenAI’s Pentagon agreement was rushed and carries poor optics, an admission that is unusual by corporate standards and suggests internal tension around how the deal was structured. The disclosure raises direct questions about where the line sits between AI systems that assist with logistics or administration and those that support lethal decision-making, a distinction OpenAI has not fully clarified. The company’s willingness to simultaneously court a $110 billion commercial raise and a Pentagon contract illustrates the dual-use pressure every frontier lab now faces.
Also worth noting: OpenAI separately published a statement opposing any designation of Anthropic as a supply chain risk, a move that reveals both a preference for industry solidarity on regulatory questions and a calculation that broad supply-chain restrictions could eventually apply to OpenAI itself. See yesterday’s issue for the Anthropic supply chain context.
Claude Hits No. 1 on the App Store as Users Defect From ChatGPT

Anthropic’s Claude reached the top of the Apple App Store rankings, with the movement attributed to users switching from ChatGPT in apparent approval of Anthropic’s more cautious stance on Pentagon partnerships. App store rankings are a noisy signal, but a sustained shift to the top position reflects real download volume and not just sentiment, and it demonstrates that AI companies’ ethical positioning now has measurable commercial consequences. For Anthropic, which has built its brand around safety-first messaging, this is a rare moment where that positioning translated directly into user acquisition.
The episode is a warning for any frontier lab that treats its public values statements as marketing copy. Users are now watching for consistency between stated principles and commercial decisions, and they have enough alternatives to act on the difference.
Stanford Study Finds All Major AI Companies Train on User Conversations by Default

A Stanford analysis of privacy policies across OpenAI, Google, Meta, Anthropic, Microsoft, and Amazon found that all six companies use consumer conversations to train their models by default, with some retaining data indefinitely and none providing meaningful additional protections for users aged 13 to 18. The researchers flagged a two-tiered structure: enterprise customers are opted out of training by default, while individual consumers paying monthly subscriptions are opted in. Only Microsoft stated an explicit policy of attempting to remove personal identifiers before training data is used.
The practical implications are significant. Sensitive queries about health, legal situations, and finances that users believe are private are, under current default settings, potentially training data forever. Regulators in the EU and several US states are already examining AI data practices, and this research gives them specific evidence to cite. Any organization advising employees or clients on AI tool usage should revisit those guidelines this week.
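For teams drafting those guidelines, one immediately practical step is a redaction pass before anything sensitive leaves the organization. A minimal sketch in Python, assuming a simple regex scrubber (the patterns and the `redact` helper here are illustrative, not a complete PII solution):

```python
import re

# Illustrative patterns only; real PII detection needs a dedicated
# library or service, not a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers before text is sent to a consumer AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com about the 555-867-5309 callback."
print(redact(prompt))
# Draft a reply to [EMAIL REDACTED] about the [PHONE REDACTED] callback.
```

A scrubber like this does not change the underlying policy problem, but it limits what a default-opted-in tool can learn from day-to-day usage.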
NVIDIA Advances Autonomous Networks With Agentic AI Blueprints and Telco Reasoning Models

NVIDIA released agentic AI blueprints and purpose-built reasoning models for telecommunications operators, targeting a shift from rule-based network automation to systems that make independent operational decisions. Network automation has become the top AI investment priority for telcos, and NVIDIA is positioning its stack as the reference architecture for that transition. The move extends NVIDIA’s reach beyond chips and into the software and deployment layer, a pattern consistent with its broader strategy of owning the full AI development pipeline.
Telecom is one of the first industries where agentic AI is being deployed at infrastructure scale rather than in productivity tools, which makes it an early test of whether autonomous decision-making holds up under the reliability demands of critical infrastructure. The results will inform how quickly other regulated industries move from pilot to production.
Analysis
Why XML Tags Are So Fundamental to Claude (3 min read)

This piece examines why Anthropic’s Claude responds so reliably to XML-structured prompts, tracing the pattern back to training choices and the model’s internal representation of structured input. For practitioners building production pipelines on top of Claude, understanding this is not academic: it explains why the same instruction framed in XML reliably outperforms plain prose for complex multi-step tasks. The piece also raises a harder question about whether model-specific prompt idioms create a form of vendor lock-in that is invisible until you try to migrate.
As Claude gains App Store users who will use it through conversational interfaces, the underlying prompt engineering that powers API-connected products becomes more consequential, not less. Teams building on Claude should treat XML structuring as a core technique rather than an optional refinement.
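For readers who have not seen the pattern, here is a minimal sketch using the Anthropic Python SDK (the tag names are conventional choices rather than an official schema, and the model string is illustrative):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# XML tags mark explicit boundaries between instructions, source material,
# and the output contract, so the model never infers structure from prose.
prompt = """<instructions>
Summarize the document in three bullet points, then list any open risks.
</instructions>

<document>
...source text here...
</document>

<output_format>
Bullets first, then a section titled "Risks".
</output_format>"""

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model string
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```

The same technique carries over to any client; the point is that section boundaries are explicit rather than inferred from prose.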
New Research Finds AI Skills Deliver Large Productivity Gains Even When Sourced From Mediocre Examples (5 min read)

A paper highlighted by Ethan Mollick tested the practical value of AI skills assembled from GitHub and similar repositories, rating the source material at only 6.2 out of 12 for quality. Despite the mediocre inputs, the resulting productivity gains were substantial, and the effect was strongest outside software domains. The implication is that organizations do not need to wait for polished, expertly curated skill libraries to see meaningful returns from AI agent deployments.
Mollick’s accompanying note that we are still in early stages of understanding how to write skills and what agent harnesses need is the more important framing. Large productivity gains from low-quality inputs suggest the ceiling on well-designed skills is considerably higher, which means the organizations investing now in skill development and agent infrastructure are building a compounding advantage, not just a temporary one.
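The paper's skill format is not reproduced here, so the following toy Python sketch only illustrates the general shape of the idea: skills as named instruction fragments that a harness selects and injects into an agent's context. All names in it are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str   # used by the harness for selection
    instructions: str  # injected into the agent's context when selected

SKILLS = [
    Skill("invoice-review",
          "Check an invoice for missing fields and math errors.",
          "Verify totals line by line; flag any field left blank."),
    Skill("meeting-summary",
          "Condense a transcript into decisions and action items.",
          "List decisions first, then owners and deadlines."),
]

def select_skills(task: str) -> list[Skill]:
    """Naive keyword match; a real harness might use embeddings or the model itself."""
    return [s for s in SKILLS if any(w in task.lower() for w in s.name.split("-"))]

def build_context(task: str) -> str:
    fragments = "\n\n".join(f"## {s.name}\n{s.instructions}" for s in select_skills(task))
    return f"{fragments}\n\nTask: {task}"

print(build_context("Review this invoice for errors"))
```

Even at this level of crudeness, the research suggests the payoff is real, which is exactly the point about mediocre inputs.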
MicroGPT, by Andrej Karpathy (3 min read)

Karpathy’s MicroGPT follows the lineage of his earlier micrograd and nanoGPT projects: a deliberately minimal implementation designed to make the core mechanics of a modern language model legible to anyone willing to read the code. Where prior projects focused on training fundamentals, this one addresses architectural decisions that have accumulated in production models and strips them back to their essential form. For engineers who learned from his earlier work, this is the natural next step.
The pedagogical value is real, but the strategic value for teams is equally concrete. Engineers who understand a model’s internals from first principles are better positioned to diagnose failures, evaluate vendor claims, and make informed fine-tuning decisions. Karpathy’s writing remains one of the most efficient ways to close that gap.
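To make the first-principles claim concrete, here is a single-head causal self-attention forward pass in plain NumPy, in the spirit of such minimal implementations (an illustrative sketch, not code from MicroGPT):

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """x: (T, d) token embeddings; Wq, Wk, Wv: (d, d) projection matrices."""
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)                    # (T, T) attention logits
    future = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[future] = -np.inf                         # causal mask: no looking ahead
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # (T, d) attended values

rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(causal_self_attention(x, Wq, Wk, Wv).shape)    # (4, 8)
```

Everything else in a transformer block (projections, residuals, the MLP) is bookkeeping around this core, which is why a readable reference implementation pays off.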
From the Field
A Developer Built a Working Prototype of Ad-Supported AI Chat
A Hacker News contributor built and shared a prototype demonstrating what AI chat interfaces might look like if they were funded by advertising rather than subscriptions. The demo explored ad placement within conversational turns, contextual targeting based on query content, and the UX friction that results. The HN discussion was pointed: most practitioners found the model plausible but predicted high churn once users understood that the context of their queries was being used to serve ads.
This matters for anyone thinking about AI product pricing. The subscription model is not yet settled, and if frontier models continue to commoditize, ad-supported tiers become a realistic scenario for consumer products. The privacy implications compound those identified in the Stanford study above: an ad-supported model has an economic incentive to retain and analyze conversation data that a subscription model does not.
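The mechanics are simple enough to sketch. This toy Python example is not the HN prototype (its ad inventory and keyword matching are invented for illustration), but it shows the core loop: extract targeting signals from query content, then append a labeled ad to the reply.

```python
# Toy sketch of ad injection in a chat turn; the ad inventory and
# keyword matching are invented for illustration.
AD_INVENTORY = {
    "mortgage": "Sponsored: compare refinance rates at example-lender.com",
    "flight": "Sponsored: fare alerts from example-travel.com",
}

def targeting_signals(query: str) -> list[str]:
    # Targeting derived directly from query content -- exactly the
    # practice that compounds the privacy concerns discussed above.
    return [kw for kw in AD_INVENTORY if kw in query.lower()]

def render_turn(query: str, model_reply: str) -> str:
    ads = [AD_INVENTORY[kw] for kw in targeting_signals(query)]
    return model_reply + ("\n\n" + "\n".join(ads) if ads else "")

print(render_turn(
    "Should I refinance my mortgage this year?",
    "It depends on your current rate and how long you plan to stay...",
))
```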
A Developer Fine-Tuned SDXL on Childhood Photos to Simulate Memory Recall
A practitioner fine-tuned Stable Diffusion XL on approximately 60 personal childhood photographs, then used the resulting model to generate images designed to evoke fragmented, emotionally weighted memories rather than literal reconstructions. The project sits at the intersection of technical fine-tuning practice and something closer to applied psychology. The r/ChatGPT discussion focused less on the technical approach and more on whether this kind of personal model training opens genuinely therapeutic possibilities or risks reinforcing distorted recollections.
From a purely technical standpoint, the project illustrates that meaningful fine-tuning results are now achievable with small personal datasets on consumer hardware. That lowers the barrier for a category of deeply personal, identity-adjacent AI applications that no product team has fully addressed yet.
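For practitioners who want to try the general approach, the common recipe is LoRA fine-tuning of SDXL on a small image set, then attaching the adapter at inference. A sketch of the inference side using Hugging Face diffusers (the LoRA path is a placeholder, and the original poster's exact training setup is not documented here):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the public SDXL base, then attach a LoRA adapter fine-tuned
# on a small personal photo set. The adapter path is a placeholder.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("path/to/childhood-photos-lora")

# Prompts aim at mood and fragment rather than literal reconstruction.
image = pipe(
    "a sun-faded backyard in late afternoon, half-remembered, soft focus",
    num_inference_steps=30,
).images[0]
image.save("memory_fragment.png")
```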
The U.S. Government Has Computers. They Are the Wrong Kind for AI Inference.
A point surfaced by Ethan Mollick and worth repeating to practitioners: federal agencies hold substantial computing infrastructure, but it is largely CPU-based legacy hardware unsuited to the parallel matrix operations that AI inference requires. This is not a minor gap. It means that government AI deployments at scale are structurally dependent on commercial cloud providers, specifically those with large GPU fleets, regardless of what procurement policy intends. The Amazon investment in federal AI capacity is a direct response to that gap.
For vendors selling AI tools to government clients, this dependency is both a bottleneck and an opportunity. Solutions that minimize inference compute requirements or run efficiently on less specialized hardware will have a procurement advantage in the public sector that they do not have elsewhere.
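A back-of-envelope calculation makes the mismatch concrete. Assuming the common rule of thumb of roughly 2 FLOPs per parameter per generated token, with illustrative throughput figures for a server CPU and a datacenter GPU:

```python
# Rough, illustrative arithmetic; real throughput depends on memory
# bandwidth, batching, and quantization at least as much as peak FLOPS.
params = 70e9                  # a 70B-parameter model
flops_per_token = 2 * params   # ~2 FLOPs per parameter per token

cpu_flops = 2e12               # ~2 TFLOPS, generous for a server CPU
gpu_flops = 300e12             # ~300 TFLOPS, modern datacenter GPU at fp16

print(f"CPU: ~{cpu_flops / flops_per_token:.0f} tokens/sec")  # ~14
print(f"GPU: ~{gpu_flops / flops_per_token:.0f} tokens/sec")  # ~2143
```

Even granting the CPU fleet ideal efficiency, the per-chip gap is two orders of magnitude, which is the structural dependency described above.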
Voices
@emollick observed that the U.S. government has lots of computers but they are the wrong kind of compute for inference, and that agencies need to use AWS or another cloud provider just like any private organization does. The post cuts through the assumption that government AI deployment is primarily a policy or funding problem. The bottleneck is hardware architecture, and no amount of budget allocation changes that in the short term.
@emollick also noted, in reference to a new arXiv paper on AI agent skills, that we are very early in understanding how to write skills and what harnesses agents need to use them effectively. Coming alongside the paper showing large productivity gains from mediocre skills, this is a useful corrective to the assumption that agent infrastructure is a solved problem. The gap between what works today and what works at enterprise scale remains wide.
A post retweeted by @demishassabis shared a prompt showing Google’s Gemini 2 producing high-quality exploded-view product diagrams with clean part separation, accurate perspective, and full internal component consistency. That Hassabis amplified this specific technical demonstration, rather than a general capability showcase, suggests confidence in the model’s performance on structured visual reasoning tasks. For product designers and technical illustrators, exploded-view generation at this fidelity has direct workflow implications.
Business Intelligence
Small Business
The Stanford privacy findings deserve your immediate attention even if you run a ten-person company. If any of your team members are pasting client information, financial data, or internal strategy into consumer AI chat tools, that content is almost certainly being used for model training under current default settings. You do not need a legal team to act on this: update your internal guidelines this week to require that any AI tool used with sensitive business information is either an enterprise tier with training opted out, or a tool where you have confirmed the data handling policy in writing.
Claude reaching the top of the App Store is a practical signal worth acting on. If your team has been using ChatGPT by habit rather than by deliberate evaluation, this is a good moment to run a genuine comparison. The competitive landscape for AI assistants has shifted enough that the tool you adopted eighteen months ago may not be the best fit for what your business needs now. The switching cost is low, and the productivity difference between a well-matched tool and an adequate one compounds over time.
OpenAI’s $840 billion valuation and its Pentagon partnership may feel remote from a small business perspective, but the underlying dynamic is not. The companies that dominate AI infrastructure are making bets now that will determine which tools are available, at what price, and under what terms in two to three years. Businesses that are already building workflows around specific AI providers should maintain at least one tested alternative, both for negotiating leverage and for continuity if pricing or policy changes.
Mid-Market
The two-tiered privacy structure identified in the Stanford research has a direct vendor procurement implication. Enterprise tiers across major AI platforms opt users out of training by default, while consumer subscriptions do not. If any part of your organization is using consumer accounts for business work, you are likely contributing your operational data to model training without explicit consent. Before your next vendor review, confirm in writing which tier each AI tool operates under and what the data retention and training policies are for that tier specifically.
The productivity research on AI skills flagged by Mollick has operational implications for how you structure AI adoption internally. The finding that mediocre skills still produce large productivity gains suggests you do not need to wait for a polished internal AI knowledge base before deploying agents. It also suggests the teams that start building and iterating on agent skills now, even imperfectly, will be measurably ahead of those who wait for best practices to stabilize. Assign ownership of agent skill development to a specific role rather than treating it as a shared responsibility that no one prioritizes.
NVIDIA’s telco autonomy push is worth watching even if you are not in telecommunications. The pattern of NVIDIA providing reference architectures and reasoning models tailored to specific verticals is likely to extend to logistics, manufacturing, and financial services. When it does, the first-mover companies in each vertical will have a significant implementation head start. Start identifying which agentic use cases in your industry are closest to production readiness, because the infrastructure to support them is arriving faster than most roadmaps assume.
Enterprise
The Stanford privacy analysis gives your legal and compliance teams a specific, peer-reviewed document to work from. The finding that privacy disclosures are spread across multiple sub-policies in ways that researchers themselves found challenging to parse is directly relevant to your vendor due diligence process. Any AI vendor contract signed without explicit, contractual confirmation of data retention limits, training opt-out status, and personal data handling procedures now carries documented legal exposure. Your procurement team should treat AI vendor privacy terms with the same scrutiny applied to data processing agreements under GDPR or CCPA, because regulators will eventually do the same.
OpenAI’s Pentagon deal and the associated acknowledgment that it was rushed should prompt a governance conversation at the board level. Enterprises deploying OpenAI tools at scale are now in a supplier relationship with a company that has acknowledged moving faster than its own review processes warrant on a sensitive government contract. That is not a reason to exit the relationship, but it is a reason to ensure your AI governance framework does not assume vendor stability and principled decision-making as baseline conditions. Scenario planning for rapid vendor policy changes should be part of your AI risk register.
The structural dependency of federal agencies on commercial cloud for AI inference is a procurement and security insight with enterprise parallels. Most large organizations are in an analogous position: significant legacy compute infrastructure that cannot run modern inference workloads efficiently. The enterprises that are mapping this gap now, quantifying which AI use cases require external compute and which can be brought in-house with targeted hardware investment, will be better positioned to negotiate cloud contracts and to respond to data sovereignty requirements as they tighten. This is a question for the CTO’s roadmap, not just the IT operations team.
In Brief
- OpenAI opposes Anthropic supply chain designation. OpenAI published a statement arguing against any regulatory classification of Anthropic as a supply chain risk, a notable instance of a company publicly defending a direct competitor on a policy question.
- Gemini 2 produces high-fidelity exploded-view product diagrams. A circulating prompt demonstrates the model generating accurate, detailed technical product diagrams with consistent internal component separation, a result with direct relevance to product design and engineering documentation workflows.
- New arXiv paper examines optimal skill design for AI agents. Researchers find that effective agent deployment depends heavily on how skills are written and structured, and that current approaches remain far from settled best practice.
- Claude overtakes ChatGPT in Apple App Store rankings. The shift reflects both user response to the Pentagon controversy and sustained interest in Claude’s capabilities, confirming that the consumer AI assistant market remains genuinely competitive.
- NVIDIA releases telco-specific reasoning models and agentic blueprints. The move extends NVIDIA’s footprint from chip supplier to vertical software provider, with telecommunications serving as the first large-scale industry test bed for autonomous AI network management.
- Amazon expands AI investment in U.S. federal agencies. With government compute infrastructure poorly suited to inference workloads, AWS is positioned as an essential intermediary for any federal AI deployment at scale, a dependency that shapes both procurement and security policy.
- Andrej Karpathy publishes MicroGPT. A minimal, readable implementation of a modern language model intended to make core architectural decisions transparent, continuing the pedagogical series that includes micrograd and nanoGPT.
Tool of the Day
Ad-Supported AI Chat Prototype is a developer-built proof of concept aimed at product designers, AI startup founders, and anyone thinking seriously about alternative monetization models for conversational AI. The prototype places contextual ads within chat turns and demonstrates concretely how query content could be used for targeting, which makes it useful not as a production tool but as a design artifact. Its most specific value is for product teams preparing for a future where subscription fatigue or commoditized models force a rethink of AI pricing: running your own UX tests against this prototype will surface user tolerance questions that no amount of theoretical discussion will answer.

