In this newsletter...
AI CODING
Why I switched from Cursor to Claude Code

I was definitely a Cursor power user. It gave me that "wow" moment the first time I saw it.
Despite the issues with the tool and some controversies around pricing, I stayed - mostly because Cursor always delivered an IDE-first experience. It's just VS Code with AI superpowers. I could focus on code while still getting help from different LLMs - AI-assisted coding, not vibe coding.
Cursor also provided a lot of free models to test, which was a great opportunity to explore what's out there.
But…
When I started comparing Cursor with CLI-based agents, my mind changed. Especially after I tested Claude Code.
Quality of AI-generated code
Claude Code provides a much better experience when working with an AI assistant. It feels like the LLMs through the CC interface are smarter, understand you better, and simply write better code.
The difference between Claude Code and Cursor output is significant. Even without custom system prompts, Claude Code with Sonnet 4.5 generates noticeably cleaner code.
Limits
Cursor runs on a monthly usage limit, and I recently found that the quota can burn through in just 1-2 days, especially with top-tier LLMs.
Claude Code on the Pro plan works in short usage windows: every few hours the limit resets and your quota renews. On top of that, there's also a fairly generous weekly limit.
All of this combined made me switch to Claude Code without looking back.
And the more I test OpenCode, the more I'm convinced it also delivers better agentic workflows than Cursor. It's totally worth trying - especially with the option to use free models or take advantage of self-hosted LLMs.
AI TOOLS
AI vendors - will we be forced into lock-in?

You can access LLMs through many different interfaces:
official APIs from AI providers: OpenAI, Anthropic, Gemini, etc.;
production-ready official solutions: AWS Bedrock, Vertex AI, Azure AI;
unified interfaces (e.g. OpenRouter);
chat apps combining different LLMs (e.g. T3 Chat).
On top of that, many great open-source models can be hosted locally using simple solutions like Ollama or LM Studio.
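What makes switching between most of these options cheap is that they speak the same OpenAI-style chat API. Here's a minimal sketch (not a recommendation): it assumes the `openai` Python package, an OPENROUTER_API_KEY in your environment, and a local Ollama server with a model already pulled; the model names are just examples.

```python
# A minimal sketch: the same OpenAI-compatible client pointed at two of the
# interfaces above, just by swapping base_url.
# Assumes the `openai` package, an OPENROUTER_API_KEY env variable, and a
# local Ollama server with a model already pulled. Model ids are examples.
import os
from openai import OpenAI

prompt = "Explain vendor lock-in in one sentence."

# Unified interface: OpenRouter
openrouter = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
answer = openrouter.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",  # example model id
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)

# Self-hosted: Ollama exposes an OpenAI-compatible endpoint locally
ollama = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
answer = ollama.chat.completions.create(
    model="llama3.1",  # example local model
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)
```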
However, the AI community often reports that some LLMs perform better when accessed directly through their official interfaces. For example, Claude models feel sharper when used through Claude Code. But there's no comprehensive way to verify that feeling.
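The closest thing to a check is a quick A/B: send the same prompt to the official API and to an aggregator, then compare the answers yourself. A rough sketch, assuming the `anthropic` Python package and an ANTHROPIC_API_KEY in your environment; the model alias below is an assumption, not a verified identifier.

```python
# Rough side-by-side check, not a benchmark: the same coding prompt sent to
# the official Anthropic API; compare the result with the OpenRouter snippet
# above. Assumes the `anthropic` package and an ANTHROPIC_API_KEY env variable.
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

prompt = "Write a Python function that merges two sorted lists in O(n)."

resp = client.messages.create(
    model="claude-sonnet-4-5",  # assumed alias for Sonnet 4.5
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(resp.content[0].text)
```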
So will we be able to escape vendor lock-in in the future?
It's quite possible that the biggest AI service providers will eventually create models available only through their official, more expensive products. On top of that, they'll be able to justify such moves by citing data security concerns.
But the open-source model ecosystem is still picking up speed. We're getting more and more impressive open models, many of which can be self-hosted if you have the right gear. In these market conditions, the big AI players need to be careful with their business decisions - or risk losing major customers.
AI AGENTS
Facebook for AI Agents 🤖

Remember the AI chatbot that talked with people on social media and turned into a Nazi?
And do you remember those two AIs talking to each other that developed their own language as an optimization?
Well, now the community is getting super pumped about a social network for AI agents.
It's called Moltbook, and it serves a Reddit-like experience: AI agents can join and post, while humans can read along and enjoy.
It looks like a fancy way to squeeze extra tokens out of us. However, the idea is pretty interesting: AI agents backed by top LLMs can genuinely offer a lot on their own, even without being asked.
🛠️ Best AI findings:
Until next week,
Kamil Kwapisz
