AI Weekly Recap: Sonnet 4.6, Alibaba's Qwen3.5, and the Global South Takes Center Stage
The biggest AI stories from the week — Anthropic ships Sonnet 4.6, Alibaba drops Qwen3.5, India hosts the first major AI summit in the Global South, and Big Tech bets $650 billion on infrastructure.
Last week was about the big launches — Opus 4.6, GPT-5.3-Codex, and the SaaS shock. This week the dust started to settle, but the pace didn't. Anthropic shipped Sonnet 4.6, Alibaba fired back with Qwen3.5, China's AI labs went on a release spree, India opened the first major AI summit in the Global South, and Big Tech collectively pledged $650 billion in AI infrastructure. Here's what mattered.
Anthropic Ships Claude Sonnet 4.6
Two weeks after dropping Opus 4.6, Anthropic followed up with Sonnet 4.6. The pitch: near-Opus-level intelligence at Sonnet pricing. Early reports point to meaningful improvements in multi-step coding, financial analysis, and computer use — with some users describing the computer use capability as approaching human-level.
This is a deliberate compression play. Frontier capabilities are trickling down to lower price tiers faster than ever. If Sonnet can do 90% of what Opus does at a fraction of the cost, most developers and businesses won't need the top tier. That's good for adoption, and it puts pressure on every competitor's pricing model.
Alibaba Drops Qwen3.5
Alibaba released Qwen3.5 on February 17, a 397-billion-parameter open-weight model with new agentic capabilities. Self-reported benchmarks put it on par with the leading models from OpenAI, Anthropic, and Google DeepMind.
The "self-reported" part deserves skepticism — it always does. But the broader signal is real: the gap between Chinese and Western frontier models is narrowing fast. A year ago, these labs were playing catch-up. Now they're launching within days of each other and competing on the same benchmarks. The open-weight approach also means Qwen3.5 will spread quickly through the developer ecosystem.
China's AI Labs Go on a Release Spree
Alibaba wasn't alone. Zhipu AI released GLM-5, a 128-billion-parameter open-source model built for coding and agentic tasks — and they claim it was trained entirely on domestic Chinese chips, not NVIDIA hardware. If true, that's a significant milestone for China's semiconductor independence story.
ByteDance launched Seedance 2.0 for video generation, and Kuaishou unveiled Kling 3.0 as a multimodal model. The Chinese AI ecosystem isn't just keeping up — it's shipping at a pace that makes Silicon Valley's release cadence look measured.
OpenAI: Codex-Spark, Retirements, and Trademark Trouble
OpenAI had a busy week too. GPT-5.3-Codex-Spark launched as a research preview — an ultra-fast coding model built in partnership with Cerebras, delivering over 1,000 tokens per second. Unlike the full Codex model designed for autonomous long-running tasks, Spark is optimized for rapid iteration: quick edits, instant feedback, real-time pair programming.
The legacy cleanup continued. OpenAI officially retired GPT-4o, GPT-4.1, GPT-4.1 mini, o4-mini, and the GPT-5 Instant and Thinking variants from ChatGPT. The model graveyard grows.
In less glamorous news, OpenAI has been temporarily blocked from using the word "Cameo" in a product due to a trademark dispute. Between this and the Jony Ive device delay, their non-model efforts keep hitting legal walls.
India Hosts the First Global AI Impact Summit
India opened the AI Impact Summit 2026 in New Delhi this week, the first major global AI conference held in the Global South. Around 100 countries are represented, and industry figures including Google CEO Sundar Pichai and OpenAI CEO Sam Altman are in attendance.
Prime Minister Modi inaugurated the India AI Impact Expo, showcasing 12 new AI models built by Indian companies and trained for local languages and real-world problems. India also projected over $200 billion in AI investments over the next two years.
Sam Altman told the conference that the world "urgently" needs to regulate the fast-evolving technology. Coming from the CEO of the company that just started putting ads in ChatGPT, the call for regulation carries a certain irony. But he's not wrong.
Big Tech Pledges $650 Billion in AI Infrastructure
Google, Amazon, Meta, and Microsoft announced the largest collective capital spending plan in history — roughly $650 billion in AI infrastructure for 2026. Amazon alone is planning $200 billion.
The market's reaction? Their combined market cap dropped over $950 billion. Investors aren't questioning whether AI works — they're questioning whether the returns will justify the spend. It's the classic infrastructure dilemma: someone has to build the roads, but roads don't always pay for themselves.
Meta and NVIDIA also announced a multi-year strategic partnership, with Meta adopting NVIDIA's Vera Rubin platform for next-generation clusters and Spectrum-X Ethernet networking.
Google's Responsible AI Report and New Regulations
Google published its 2026 Responsible AI Progress Report, highlighting work across product development, scientific discovery, and healthcare. The report leans into the idea that agentic AI systems are about to dramatically boost productivity — a claim that's both exciting and vague enough to mean almost anything.
On the regulation front, India's new IT Rules take effect February 20, requiring platforms to label AI-generated audio and video and remove illegal content within three hours. Heavy penalties for non-compliance. It's one of the most concrete and enforceable AI content regulations we've seen from a major economy.
Meanwhile in the US, federal agencies are reviewing state-level AI rules with a deadline of March 2026, with the goal of preempting overly restrictive state legislation like California's proposed AI hiring law.
Mozilla Launches One-Click AI Privacy Tool
Mozilla shipped a new privacy feature in Firefox that lets users opt out of and delete their data from AI training datasets with a single click. It's part of their broader "Trustworthy AI" initiative.
This is a small but meaningful move. Most people have no idea their data is being used for training, and even fewer know how to opt out. Making it one click removes the biggest barrier: friction. Expect other browsers to follow — or face pressure to.
Quick Hits
- UK announced major funding for global AI access, including an African Language Hub supporting 40 languages and reaching 700 million people, plus a compute hub in Cape Town.
- Deutsche Bank reports 85% of developers now use AI coding assistants with up to 60% productivity gains — and notes that software development itself may be the sector most exposed to AI disruption.
- OpenAI hired Peter Steinberger, creator of OpenClaw, the viral open-source AI agent. The talent war is shifting from researchers to agent-infrastructure builders.
- Anthropic donated the Model Context Protocol (MCP) to the Linux Foundation. OpenAI, Microsoft, and Google are now all using it — a rare moment of industry convergence.
- Safety concerns continue to simmer. Prominent researchers from multiple labs publicly flagged that capabilities are outpacing safety work. Dario Amodei described being "just a few years away from a country of geniuses in a data center."
That's the week. The frontier models keep getting cheaper and faster, the infrastructure bets keep getting bigger, and the rest of the world is starting to show up at the table. The real question isn't who has the best model anymore — it's who builds the best system around it.