In this newsletter...

ARTIFICIAL INTELLIGENCE
2026 - the year of AI Agents or AI Assistants?

If you’ve watched AI grow in 2025, I bet you’ve heard that “2025 is the year of AI agents”. But was it true?

There was a huge spike in AI agent libraries and toolsets, blog-post mentions, tutorials, and paid courses. But AI agents never really came to rule production AI integrations.

However, at the end of 2025 we’ve seen a small pivot: instead of suggesting AI agents for every business task, people discovered that in many cases it’s better to use an AI assistant. What is the difference?

Both are about taking care of some kind of process or task.

An AI Agent is autonomous (or almost autonomous) software in which an LLM (like GPT-5.2, Claude-4.5-Opus or Gemini 3) supervises the process execution. Based on inputs and outputs, it decides what should be done next (from a list of available tools). The LLM in an AI agent is the brain; it’s at the center of everything.
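That loop can be sketched in a few lines. Here a hard-coded stub stands in for the real LLM call, and the single tool is illustrative; in a real agent, `pick_next_step` would send the history to a model and parse its tool choice:

```python
def pick_next_step(history):
    # Hypothetical stand-in for the LLM "brain": decides the next tool
    # based on what has happened so far, or decides the task is done.
    if not history:
        return ("search", "AI assistants")
    return ("finish", None)

TOOLS = {
    "search": lambda query: f"results for {query!r}",  # stub tool
}

def run_agent():
    history = []
    while True:
        tool, arg = pick_next_step(history)
        if tool == "finish":
            return history
        # Execute the chosen tool and feed its output back into the history.
        history.append((tool, TOOLS[tool](arg)))

print(run_agent())
```

The key point is the shape: a loop where the model, not your code, chooses the next action.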

With an AI assistant, you’re the brain and at the center. You know the inputs and outputs, you make the decisions about what to do next, and AI can only HELP you with those decisions and do some jobs for you. You can treat an AI assistant like a team of eager junior startup employees who will execute your ideas without saying “no”.

So which approach is better? As always, it depends. However, AI assistants are the fastest, most secure, and most robust way of taking advantage of AI’s possibilities:

  • You don’t need to build anything from scratch or tune it, just import some data and documents to start working.

  • You’re always in the loop, so your skills and experience are still a super important asset and your competitive advantage.

  • You automate boring and hard stuff, but get results almost instantly, so you can act fast in case of errors.

  • You feel like having superpowers and super-productivity.

And that’s why I think 2026 will really be the year of AI assistants. It’s now the best way to research AI usage possibilities while simultaneously pushing the business forward without spending countless hours fixing stuff.

However, AI agents are still great tools, and they will be even more important in the future. They just require time and iterations, so it’s worth considering starting your agent right now, implementing it as a side project. I think it will pay off soon.

2026 will be the year of AI assistants

AI AGENTS
Why aren’t AI Agents working (yet)?

Business processes are complicated. Even hours of AI research will not prepare you for the limitless creativity of customers trying to use your product the wrong way, or for strange combinations like a bad phase of the moon that make your software behave differently for no obvious reason.

In order to address such complexity with typical software (a set of ifs and loops), you need a lot of knowledge, experience, cleverness, and most importantly: iterations.

Even if you’ve outsmarted your users and secured the system against all the weird stuff that can be done, someday there will be another case that you haven’t thought of.

The same goes for AI agents. Even though AI models can be “WOW” level smart, they still won’t cover every case. The LLM in agent software has only its own (very general) knowledge, a list of tools, inputs, and outputs. It doesn’t have the experience of a battle-tested business analyst. And even if your LLM can learn from its own mistakes, too much information can overflow the context and degrade the results.

Another issue with AI agents is strictly connected to the biggest current LLM problem: hallucinations. We’ve learned to use that word, but to be more precise I mean:

  • making up facts,

  • answering questions other than the ones asked,

  • not listening to instructions added to prompts,

  • or just not answering the way we thought it would.

And the word “hallucinations” here isn’t quite right, because this is simply how LLMs operate. Since they use statistics to complete text, there is a mathematical chance that things go south. And although good prompting, a well-trained model, and system prompts (defined at both the model level and the user level) can make those answer mismatches (“hallucinations”) less likely, we should always take into account that an LLM can be wrong.

Actually, the same rule should be applied to working with humans.

You can’t simply add an LLM, give it a toolbox with a set of instructions, and expect it to work the way you would. Maybe it will, but it requires time, iterations, and continuously improving the prompts, tools, and maybe even the model operating as the agent’s brain.

Takeaway: AI agents, like traditional software, require constant iteration to handle business complexity. No amount of upfront planning or prompt engineering can eliminate edge cases or the LLM’s statistical nature.

AI AGENTS
5 tips to build good AI Agents 🤖

Even though AI Agents are usually not production ready yet, there are a few universal tips I can give you after building hundreds of AI agents.

1. More specialization and linearity

If you have the budget for an entire IT team, you should hire specialists in frontend, backend, databases, security, etc. You wouldn’t hire one guy who can do everything but is specialized in nothing.

The same goes for AI agents. Creating one enormous AI agent that takes care of everything makes no sense. If you want to build an agent for running a blog:

  • create one agent that will supervise the process,

  • one agent that will research the topic,

  • another one to create posts,

  • another one to review,

  • one more to improve on-site SEO,

  • and another one to schedule posts.

You can specialize it even more, or generalize some parts. It depends on how it performs. Iterate and adjust. But don’t create one single agent that takes care of everything.

2. Add logging

LLMs make mistakes. It’s one of their biggest problems. And you can’t cope with that if you have no idea what mistakes they made, when, and why it happened.

Proper logging is one of the most important things in agent development. It allows you to study the outputs to find what caused an error. This way you can easily spot whether the error happened because a tool output was wrong, or because the model itself made a mistake.
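A minimal version of this with Python’s standard `logging` module: log every tool call and every model answer with their inputs, so a bad run can be traced to the tool or to the model. The tool and model here are stubs, not real integrations:

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("agent")

def call_tool(name, arg):
    output = f"{name} output for {arg!r}"   # stub tool
    # Log input AND output: lets you see later whether the tool misbehaved.
    log.info("tool=%s input=%r output=%r", name, arg, output)
    return output

def call_model(prompt):
    answer = "stub answer"                  # stub model call
    # Log prompt AND answer: lets you see whether the model misbehaved.
    log.info("model prompt=%r answer=%r", prompt, answer)
    return answer

result = call_model(call_tool("search", "AI agents"))
```

In production you’d route these records to a file or a log aggregator instead of the console, but the principle is the same: every decision leaves a trace.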

3. Use different models for different tasks

Some tasks require a really smart LLM to make a proper decision, understand the input, or generate proper code. That’s where you should put your best model.

However, if you use this expensive LLM everywhere, the cost of running agents can be too high. Especially if an agent gets stuck in a loop.

Always try to find a smaller, cheaper model that can do the job. Consider:

  • not using Thinking mode,

  • using open source models,

  • using models with fewer parameters.

And most importantly: test different models on the same tasks. You’ll quickly see that for some tasks Claude models are better, while for others GPTs do the job the way you want.
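One simple way to implement this split is a plain task-to-model routing table with a cheap default. The model names and task labels here are placeholders, not real model IDs:

```python
MODEL_FOR_TASK = {
    "summarize": "small-cheap-model",   # routine text work: cheap model
    "classify": "small-cheap-model",
    "plan": "big-smart-model",          # hard decisions: your best model
    "write_code": "big-smart-model",
}

def route(task, default="small-cheap-model"):
    # Unknown tasks fall back to the cheap model, so a runaway loop
    # burns cheap tokens rather than expensive ones.
    return MODEL_FOR_TASK.get(task, default)

print(route("plan"), route("summarize"))
```

Because the mapping is just data, swapping a model after a round of testing is a one-line change.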

4. Keep it simple. Things will change.

There are already a lot of frameworks and libraries for building AI agents: no-code, low-code, or full-code. And there will be even more.

But nothing is the standard yet. And nothing is tested and used enough for me to recommend it.

In my opinion, the best way to create AI agents right now is using basic LLM integrations in your favourite programming language, a proper loop, and a toolset defined in a JSON file. This way you keep things simple and focus on the core of the agent instead of fancy tech.
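A tiny sketch of that setup, with a stubbed model in place of a real LLM call: tools declared in JSON, a loop with a hard iteration cap, and dispatch by tool name. The tool names and the stub’s fixed plan are illustrative assumptions:

```python
import json

# Toolset declared as data, not code: easy to inspect, version, and edit.
TOOLSET_JSON = """
[
  {"name": "add", "description": "Add two numbers"},
  {"name": "upper", "description": "Uppercase a string"}
]
"""

TOOL_IMPL = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_model(step):
    # Stand-in for an LLM choosing the next action from the toolset.
    plan = [("add", (2, 3)), ("upper", ("done",)), ("answer", None)]
    return plan[step]

def run():
    tools = {t["name"] for t in json.loads(TOOLSET_JSON)}
    outputs = []
    for step in range(10):  # hard cap so the loop can't run forever
        action, args = fake_model(step)
        if action == "answer":
            return outputs
        if action in tools:
            outputs.append(TOOL_IMPL[action](*args))
    return outputs

print(run())
```

Note the hard cap on iterations: combined with per-step logging, it’s the cheapest insurance against an agent stuck in a loop.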

For prototyping, though, my recommendation is to start with n8n. It also has a set of ready-to-use integrations, so you will save time.

If you feel you’ve hit the wall with its possibilities, just rewrite the code in your favourite language. That way you can create a robust, optimized solution for a tool you’ve already validated with a prototype.

5. Iterate, iterate, iterate

Iteration is the most important thing in this kind of software development. Always check the results; if they don’t satisfy you, change something and observe.

Great software is not built on smart code from the beginning, but on years of fixing and improving stuff.

Takeaway: Build specialized AI agents (not monoliths), log everything to catch mistakes, mix cheap/expensive models strategically, keep your stack simple since no framework is proven yet, and iterate.

🛠️ Best AI findings:

  • Claude Cowork: Claude Code with a name that won’t suggest focusing on coding.

  • OpenRouter: Still the best unified LLM interface.

Until next week,
Kamil Kwapisz