Anthropic Acquires Computer-Use Startup Vercept After Meta Poached One of Its Founders
Anthropic has acquired Vercept, a Seattle-based startup building AI agents capable of autonomously operating within desktop and web applications. The deal closed in the shadow of Meta hiring away one of Vercept’s founders: a detail that underscores how aggressively the major labs are competing for talent and capability in agentic AI, where the ability to control software interfaces is increasingly the differentiating layer.
For Anthropic, this is a direct infrastructure move. Computer-use agents are central to its commercial roadmap, and owning the capability rather than partnering for it reduces a meaningful dependency. Watch whether Vercept’s technology surfaces in Claude’s enterprise offerings before the end of the quarter.
Nvidia Posts Another Record Quarter as Capital Expenditure Across the Industry Climbs
Nvidia reported record quarterly earnings, with CEO Jensen Huang pointing to explosive growth in AI token demand as the primary driver. The company is simultaneously expanding capital expenditure to meet that demand: a posture that signals Nvidia’s internal models show no near-term ceiling on infrastructure spending by its customers.
The significance here is not simply Nvidia’s profit. It is the confirmation that hyperscalers have not blinked on compute investment despite macro uncertainty. That sustained spending has downstream consequences for chip availability, cloud pricing, and which companies can afford to train frontier models at all. We covered the early pressure on compute supply chains in the February 25 issue.
Google and Samsung Launch the On-Device AI Features Apple Promised but Could Not Deliver
Google and Samsung announced that Gemini will handle genuine multi-step phone tasks (booking rides, ordering food, navigating third-party apps), launching first on Pixel 10 and Galaxy S26. These are precisely the capabilities Apple previewed for Siri and subsequently failed to ship on schedule, a gap that has become increasingly visible.
This is less a product story than a competitive positioning story. Apple built its smartphone dominance partly on the perception of seamless software integration. If Gemini reliably executes multi-step agent tasks on Android while Siri remains conversational, that perception shifts in ways that are difficult to reverse quickly. The question now is reliability at scale, not feature parity.
Public Opposition to AI Data Centers Is Forcing a Reckoning on Infrastructure Expansion
Communities across the United States and Europe are moving beyond protest to policy, with some municipalities enacting outright bans on new data center construction. The objections center on energy draw, water consumption for cooling, and strain on local grids: concerns that are factually grounded and increasingly difficult for companies to dismiss with sustainability pledges.
This is the constraint that does not appear on most AI roadmaps. The industry’s assumptions about where and how fast it can build physical infrastructure are now subject to local political processes that operate on timelines disconnected from product cycles. The companies best positioned are those that secured land and permits early, or those with existing relationships in jurisdictions where opposition is weaker.
Atlassian Adds AI Agents as Assignable Workers in Jira
Atlassian has updated Jira to allow teams to assign tasks to AI agents through the same interface used to assign work to human colleagues. The framing matters: rather than building a separate AI layer, Atlassian is inserting agents into the existing workflow record, making their work trackable, reviewable, and auditable alongside human output.
The practical test will be whether teams actually trust the output enough to close tickets on agent-completed work, or whether AI assignments become a new category of tasks that humans must re-verify before marking done. That friction, or the absence of it, will determine whether this is a genuine productivity lever or a well-designed demo.
Analysis
Andrej Karpathy on the Discontinuous Leap in Coding Agents (3 min read)
Karpathy argues that AI coding agents crossed a functional threshold in December: not through gradual improvement but through a step change in coherence, tenacity, and the ability to sustain complex multi-step tasks. His illustration is concrete: he gave an agent a natural-language instruction to log into a remote machine, configure a GPU server, deploy a model, build a web dashboard, set up system services, and return a written report. It completed the task in roughly thirty minutes without further input. Three months ago, he writes, that was a weekend project.
The structural claim is worth sitting with: the era of typing code into an editor is ending. What replaces it is task decomposition, agent orchestration, and judgment about which pieces to hand off and which require human direction. Karpathy calls the highest-leverage version of this “agentic engineering”: managing parallel agent instances through orchestrator layers. For anyone running a software team, the question is not whether this changes the work, but how fast to reorganize around it.
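The orchestrator-layer pattern Karpathy names can be sketched in a few lines. This is a minimal illustration, not any real agent framework: `run_agent` is a hypothetical stand-in for an API or CLI call to a coding agent, and the point is only the shape of the workflow, tasks fanned out in parallel with drafts collected for human review.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a call to a coding agent; a real setup
# would invoke an agent API or CLI session here.
def run_agent(task: str) -> str:
    return f"draft for: {task}"

def orchestrate(tasks: list[str], max_parallel: int = 4) -> dict[str, str]:
    """Fan tasks out to parallel agent instances and collect drafts
    for human review -- the orchestrator-layer pattern in miniature."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        drafts = pool.map(run_agent, tasks)
    return dict(zip(tasks, drafts))

results = orchestrate(["add login form", "write migration", "fix flaky test"])
for task, draft in results.items():
    print(task, "->", draft)  # a human reviews each draft before merge
```

The human's job in this loop is exactly what Karpathy describes: deciding how to decompose work into `tasks` and judging the returned drafts, not typing the code itself.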
On Rethinking Software Companies From First Principles (1 min read)
In a post that has circulated widely among engineering leaders, the claim is direct: software engineering has changed more in the past three months than in the preceding thirty years, and running a software company now requires rethinking operations from the ground up. The brevity of the post is part of its signal: this is not a detailed argument but a temperature reading from someone close to the work.
Read alongside Karpathy’s post above, a consistent picture emerges from practitioners: the change is not incremental, the old mental models are wrong, and the window to reorganize before competitors do is narrow. The leaders most at risk are those treating this as a tooling upgrade rather than a structural shift in how software gets built.
Karpathy on the Hardware Constraint at the Center of the Token Economy (2 min read)
In a separate post timed to MatX’s $500M raise, Karpathy lays out the core engineering problem in AI infrastructure with unusual clarity. The fundamental constraint is physical: AI chips have two distinct memory pools (fast but small on-chip SRAM, slow but large off-chip DRAM), and the optimal orchestration of computation across those two pools, especially for long-context inference in tight agentic loops, is a problem that neither Nvidia nor Cerebras has fully solved. He describes it as the most intellectually interesting and financially significant engineering challenge in AI today.
This framing matters because it grounds the chip competition in specifics rather than marketing claims. MatX, founded by ex-Google TPU engineers, is explicitly targeting this gap. Whether they can execute at production scale is unproven, but the problem they are attacking is real and the reward for solving it, measured against Nvidia’s current market cap, is substantial.
From the Field
Developer Uses Codex to Diagnose and Fix an 18-Month-Old Swift Compiler Bug
A Swift developer assigned Codex the task of investigating a 0.5-second compilation slowdown that had been filed as a bug eighteen months ago and left unresolved. The agent reproduced the issue, instrumented the Swift compiler with diagnostic logging, identified the root cause, and produced a pull request that has since been submitted to the Swift open-source repository. The developer’s reaction, a single stunned emoji, is a reasonable response.
This is the kind of use case that does not appear in product demos. Debugging compiler performance regressions requires deep familiarity with low-level tooling, tolerance for ambiguous failure modes, and patience. The fact that an agent handled it end-to-end on a cold start suggests that the ceiling on what can be delegated is considerably higher than most teams are currently testing.
Open-Source Repo Extends Claude Into a Persistent Autonomous Coding Engine
A repository circulating in developer communities provides a structured configuration layer on top of Claude that enables persistent, autonomous coding workflows: essentially turning the model into an engine that can manage multi-file projects, maintain context across sessions, and execute coding tasks without constant human re-prompting. It is fully open source and has attracted rapid attention from developers already running agentic setups.
The practical value for teams is in the scaffolding, not the model itself. Knowing how to configure memory, tool access, and task boundaries for autonomous agents is becoming a distinct skill, and repositories like this are where that knowledge is currently being developed and shared, faster than any formal documentation keeps pace. Teams building internal AI tooling should be watching this layer closely.
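To make the three knobs concrete, here is a hypothetical configuration sketch. This is not the repository's actual schema; every key name is invented for illustration. It shows what "scaffolding" means in practice: persistent memory, an explicit tool allowlist, and hard task boundaries with a human review gate.

```python
# Hypothetical agent-scaffolding config -- invented field names, shown only
# to illustrate the three control surfaces: memory, tools, task boundaries.

agent_config = {
    "memory": {
        "persist_path": "./agent_state",  # carry context across sessions
        "summarize_after_turns": 40,      # compact history to fit the context window
    },
    "tools": {
        "allowed": ["read_file", "write_file", "run_tests"],
        "denied": ["network", "shell_rm"],  # a hard boundary, not a suggestion
    },
    "task": {
        "scope": "single repository",
        "stop_conditions": ["tests_pass", "max_40_steps"],
        "requires_human_review": True,  # agent proposes, human merges
    },
}

# A minimal sanity check a team might run before launching an agent session.
assert agent_config["task"]["requires_human_review"]
assert "network" not in agent_config["tools"]["allowed"]
```

The design choice worth noticing is that safety lives in the configuration layer, not in the model: deny-by-default tool access and explicit stop conditions are what make unattended runs tolerable.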
Adobe Firefly Quick Cut Generates First-Draft Video Edits From Raw Footage
Adobe’s Quick Cut feature, now in beta within Firefly, takes raw footage and a text instruction and assembles a rough cut automatically: handling clip selection, sequencing, and basic pacing without manual intervention. For solo creators and small production teams, the time savings on the assembly stage of editing are real and immediate.
Among video professionals in practitioner communities, the reaction is divided along predictable lines. Experienced editors note that the creative judgment in the rough-cut phase is often where the story gets shaped, not just assembled. But for producers who spend hours on mechanical assembly before any creative editing begins, this removes the part of the job they least wanted. How that division plays out across different production contexts will define which jobs this replaces and which it amplifies.
Voices
@karpathy wrote: “Programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You’re spinning up AI agents, giving them tasks in English and managing and reviewing their work in parallel.” Karpathy’s posts carry weight because he is describing actual workflows, not extrapolating from benchmarks. The specificity of the DGX Spark example (a real machine, a real task, a thirty-minute completion) is harder to dismiss than abstract claims about capability.
@soumithchintala flagged a TIME report that Anthropic has dropped its flagship safety pledge, comparing the move to OpenAI’s relinquishing of its “open” identity. Soumith Chintala is a co-creator of PyTorch and a respected voice in the research community, not a commentator prone to hyperbole. His characterization of this as potentially wilder than the OpenAI name shift is a signal that people close to the field see this as a substantive change in Anthropic’s public commitments, not a procedural update.
@adonis_singh posted that gpt-5.3-codex achieved 86 percent on IBench, significantly ahead of all other models evaluated. Benchmark results require careful scrutiny (methodology, test contamination, and scope all affect what the number actually measures), but a gap this large on a coding-focused evaluation, if it holds, would represent a meaningful separation from the current competitive field. Worth watching for independent replication before drawing conclusions.
Business Intelligence
Small Business
The Jira update and the broader shift toward assignable AI agents have an underappreciated implication for small teams: you no longer need to build elaborate internal systems to get structured work out of AI. If you are already using project management tools, the infrastructure for delegating to AI is arriving inside the tools you have. A ten-person team that learns to decompose its work clearly enough to assign tasks to agents gains capacity that would otherwise require two or three additional hires. The investment required is in learning the decomposition skill, not in new software.
The Adobe Quick Cut feature is worth specific attention for any business producing video content for marketing, social, or client delivery. If you are currently paying an editor by the hour for assembly work (pulling clips, sequencing a rough cut, delivering a first draft), that cost is about to compress significantly. Redirect that budget toward the creative direction layer, which AI handles far less reliably, and toward distribution strategy, which it does not handle at all.
The risk most small businesses are not considering is vendor concentration. As AI capabilities consolidate around a small number of platforms (Google’s ecosystem for Android, Anthropic for enterprise agents, Adobe for creative workflows), the businesses that built workflows around one provider are exposed when that provider changes pricing, deprecates a feature, or is acquired. The open-source repository extending Claude for autonomous coding is a reminder that there are hedges available, but they require some technical capacity to use.
Mid-Market
The Karpathy post deserves a direct read by anyone running an engineering team. The implication is not that developers become less important: it is that the developers who understand how to decompose and orchestrate agent tasks will outperform those who do not by a factor that makes individual hiring decisions less meaningful than training and workflow design. If your engineering hiring strategy is unchanged from eighteen months ago, it is misaligned with the current environment. The scarce resource is now judgment and architecture, not code volume.
Atlassian’s Jira update gives mid-market companies something genuinely useful: a way to introduce AI agents into operational workflows without a separate change management process. Your teams already know how to assign, track, and close Jira tickets. The organizational friction of AI adoption drops when it fits into an existing mental model. The risk is that teams close AI-assigned tickets without adequate verification, introducing errors that are harder to catch because they arrived through a trusted process. Build the review step into the workflow before you need it.
The public opposition to data center construction is a mid-market concern if you are making infrastructure commitments: particularly if your growth plans depend on cloud capacity in specific regions. The companies most exposed are those with SLAs tied to regional availability and no contractual protections against capacity constraints driven by regulatory action. Now is the right time to review those agreements and ask your cloud providers directly which of their planned facilities face active local opposition.
Enterprise
The Anthropic safety pledge development reported by TIME warrants legal and compliance review before it warrants a technology decision. Enterprise procurement of AI systems increasingly involves representations about safety commitments, responsible use frameworks, and audit rights. If those commitments are changing at the vendor level, the contractual language your legal team negotiated may no longer reflect what the vendor is actually committed to. This is not a reason to stop using Anthropic products, but it is a reason to reopen the contract review and ask specific questions about what has changed and what remains binding.
The Amazon AGI lab leadership departure (David Luan leaving after less than two years) is the kind of signal that enterprise technology teams should log rather than act on immediately. Leadership turnover in AI research organizations correlates with shifts in product roadmap, research priorities, and the stability of enterprise relationships. If Amazon Bedrock or AWS AI services are embedded in your production stack, assign someone to monitor the succession announcement and any accompanying changes in stated direction. Catching a pivot early is considerably cheaper than discovering it mid-contract.
Nvidia’s record quarter and the sustained capital expenditure across the industry have a specific board-level implication: the cost of compute is not declining on the timeline that most enterprise AI business cases assumed. If your AI program’s ROI projections were built on falling inference costs over a two-to-three year horizon, those projections should be revisited. The infrastructure spending data suggests the hyperscalers see sustained demand, which means sustained pricing power for compute, not the commodity price curve that conservative models projected. Simultaneously, the MatX raise and the broader Nvidia challenger ecosystem are worth tracking as a potential hedge in multi-year procurement planning.
In Brief
- Alphabet Folds Intrinsic Robotics Back Into Google: After nearly five years as an independent Alphabet unit, Intrinsic is being reintegrated into Google, signaling that robotics software is now treated as a core Google business rather than a speculative bet.
- MatX Raises $500 Million to Challenge Nvidia on AI Chip Architecture: Founded by ex-Google TPU engineers, MatX is targeting the memory-compute orchestration problem that Nvidia’s current architecture leaves partially unsolved, particularly for long-context agentic inference.
- Amazon’s AGI Lab Leader David Luan Is Departing to Start a New Venture: Luan leaves after less than two years leading Amazon’s San Francisco AGI lab, with no succession plan disclosed, creating uncertainty in Amazon’s AI research leadership at a competitive moment.
- AI Companies Are Sacrificing Near-Term Revenue to Capture India’s User Base: ChatGPT, Claude, and local competitors are scaling free offerings in India’s price-sensitive market, betting that a massive installed base will convert to paid subscriptions as the market matures.
- Gemini on Android Can Now Execute Multi-Step Tasks Independently: Google has expanded Gemini beyond question-answering to complete real-world transactions like ride requests and food orders, a functional shift from assistant to agent on consumer devices.
- Anthropic Drops Its Flagship Safety Pledge, According to TIME: The reported change in Anthropic’s public safety commitments is drawing comparisons to OpenAI’s structural pivot and warrants scrutiny from enterprise customers with compliance obligations.
Tool of the Day
Claude Code Best Practice is an open-source configuration framework that extends Claude into a persistent autonomous coding engine capable of managing multi-file projects and sustained development tasks without constant re-prompting. It is built for developers and engineering teams who want to move beyond single-prompt interactions toward structured agent workflows: the kind Karpathy describes in today’s Analysis section. A concrete use case: set it up to take a well-specified feature ticket, scaffold the implementation across multiple files, write tests, and return a diff for human review, without manual step-by-step direction. It requires comfort with command-line configuration and a working API key, but no additional infrastructure.

