
Anthropic re-allows OpenClaw? Not quite: limbo and model routing win

A post on r/openclaw sparked hope: “Anthropic allows OpenClaw again.” But the community quickly dismantled the optimism. Peter Steinberger had tweeted that Boris (from Anthropic) confirmed CLI-style usage is allowed. OpenClaw added support, only to find they were still blocked.

As user siberianmi put it: “It’s in a weird limbo where CLI use should work in theory but doesn’t in practice.”

OpenClaw 4.20 and 4.21: Kimi K2.6, Image 2 and the usual bug dance that breaks setups

OpenClaw dropped 4.20 and, a mere six hours later, 4.21. Textbook release cadence for @steipete’s project: push, discover the damage, patch. Let’s break it down.

What’s new in 4.20

4.20 is a substantial release, mostly cleanup and consolidation, with a few notable features, headlined by Kimi K2.6 and Image 2 support.

Plus: a sanitizeForLog() optimization (a single combined regex instead of an iterative loop), plugin-loader reuse for leaner tests, Docker E2E tests for channel dependencies, and a QA suite that fails by default on failed scenarios.
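The sanitizeForLog() change is worth a closer look. The release notes only say "single regex vs iterative loop," so the following is a minimal sketch of what that refactor typically looks like; the patterns, function names, and redaction placeholder are all illustrative assumptions, not OpenClaw's actual code.

```typescript
// Illustrative secret patterns; OpenClaw's real list is not documented here.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{16,}/g,       // API-key-like tokens (assumed shape)
  /Bearer\s+[A-Za-z0-9._-]+/g,  // Authorization header values
];

// Before: one full pass over the string per pattern.
function sanitizeIterative(line: string): string {
  let out = line;
  for (const pat of SECRET_PATTERNS) {
    out = out.replace(pat, "[REDACTED]");
  }
  return out;
}

// After: merge all patterns into a single alternation, one pass total.
const COMBINED = new RegExp(
  SECRET_PATTERNS.map((p) => p.source).join("|"),
  "g",
);

function sanitizeSingle(line: string): string {
  return line.replace(COMBINED, "[REDACTED]");
}
```

For log sanitization that runs on every line, collapsing N `replace` passes into one is a straightforward win, with the usual caveat that alternation order matters when patterns can overlap.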

OpenClaw 4.20: an inventory of 29 essential fixes for serious users

A Reddit user lists every manual fix they had to apply after upgrading. Some of them, they claim, reappeared upstream.

If you think upgrading to OpenClaw 4.20 is painless, you're probably not using it hard enough. u/Marcelovc, an active contributor on r/openclaw, published a provocatively titled post, "This sucks from Openclaw founder but read this if you want to update to 4.20", listing the 29 fixes required to make version 4.20 work as intended.

Google Announces Deep Research Max: Autonomous Research Agents on Gemini 3.1 Pro

Google has announced Deep Research Max, the latest generation of its autonomous research agents built on Gemini 3.1 Pro. The announcement, published April 21, 2026, introduces two distinct configurations: Deep Research, optimized for speed and interactivity, and Deep Research Max, designed for maximum comprehensiveness through extended test-time compute.

The video published on the Google for Developers channel features testimonials from early adopters in the financial and pharmaceutical sectors, highlighting how the platform enables tackling complex research questions in days rather than weeks.

Kimi K2.6 Beats GPT-5.4 at Code Review: antirez's Real-World Test on Richard Hipp's Patch

Salvatore Sanfilippo, better known as antirez, published a video testing three top-tier LLMs (Kimi K2.6, Claude Opus 4.7, and GPT-5.4) on a real code-review task: analyzing a pull request from Dr. Richard Hipp, creator of SQLite, on the linenoise library. The result? The Chinese open-source model beat the American "gold standard" in one specific but instructive case.

Context: A PR from a Programming Legend

Antirez opens with a political and pragmatic premise: he watches Chinese models with growing interest not just for democratization, but as an exit strategy. Between transatlantic tensions and the risk that access to American models could become expensive or restricted, having viable alternatives is a practical necessity.