
From bricklayer to architect, lessons from Laurentius on agentic coding tools

February 10, 2026

At Itsavirus, we recently hosted an internal session where our teammate Laurentius shared what he learned after hands-on testing of three agentic coding platforms: Cursor, Claude, and Google Antigravity.

The session wasn't just a feature comparison; it was about understanding a fundamental shift in how software gets built.

Laurentius summed it up with a simple title: From bricklayer to architect.

What’s changing in development work

For decades, software development looked like bricklaying.

Developers wrote code line by line, file by file. They configured environments, handled boilerplate, jumped between editor and terminal, chased stack traces, and ran tests by hand. The work demanded skill and focus, but also ate time through repetition.

Laurentius's point was simple: agentic coding platforms are changing this dynamic.

They're not just better autocomplete. They're shifting the developer's role from placing individual bricks toward designing the structure as a whole.

What does that actually mean in practice? It means less time goes into typing code.

More time goes into deciding what deserves to exist, how systems fit together, and whether the output solves the real problem.

The job shifts from execution to judgment.

What Laurentius tested, and why the differences matter

Laurentius compared three platforms, each representing a different step along the same path.

  • Cursor: Faster bricklaying

Cursor strengthens the traditional workflow. You still work inside a code editor, but with an AI layer that understands context across the repository, completes intelligently, and generates code when asked.

The workflow stays familiar. You control what gets written and when. The main change is speed.

  • Claude: Conversational collaboration

Claude changes how you interact with code. Instead of writing implementations directly, you describe what you want in plain language. Claude responds with code, explanations, and alternatives.

You review the output, give feedback, and refine together. The exchange feels collaborative, but you stay closely involved in implementation details.

  • Google Antigravity: Architectural delegation

Antigravity pushes further.

You describe a feature, and the platform generates an implementation plan, architectural decisions, task breakdowns, and verification steps. Multiple agents operate in parallel. One refactors a module while another writes tests for an unrelated feature. Your role centers on reviewing plans, approving direction, and resolving ambiguity when the agent reaches limits.
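The orchestration pattern Laurentius described can be sketched in a few lines. This is a minimal illustration, not Antigravity's actual API: `refactor_module` and `write_tests` are hypothetical stand-ins for real agent calls, and the "plan" is the list a human would review before approving.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real agent invocations.
def refactor_module(name: str) -> str:
    return f"refactored {name}"

def write_tests(feature: str) -> str:
    return f"tests written for {feature}"

# The human reviews and approves this plan; the agents then run in parallel.
plan = [
    (refactor_module, "billing"),
    (write_tests, "export"),
]

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(task, arg) for task, arg in plan]
    results = [f.result() for f in futures]  # results keep plan order
```

The key design point is the same one Laurentius made: the parallelism is cheap; the human effort sits in composing and approving `plan`.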

The pattern that showed up across every tool

Across all three platforms, one pattern stood out to Laurentius.

The bottleneck in software development no longer sits with typing speed. It comes down to the decisions being made and the judgment behind them.

Laurentius demonstrated this live.

A task that normally takes a coordinated team a full day (setting up a feature, writing tests, and updating docs) finished in under an hour with proper agent orchestration in Antigravity.

The interesting part wasn’t the speed. His time went into reviewing plans, approving architectural choices, spotting edge cases, and steering direction when ambiguity appeared.

The developer role stayed essential, but the work itself changed.

The auto-pilot paradox

Laurentius also outlined the risks that come with delegating to agents.

  • Skill erosion happens quietly

Heavy reliance on agents weakens deep understanding. When an agent writes every SQL query, debugging a deadlock at three in the morning becomes harder. Teams have seen this pattern before with earlier abstractions. Convenience trades against fluency.

  • Spinning wastes time and money

Agents fall into fix-fail-retry loops. Without guardrails, this burns API credits and clogs pipelines. Laurentius showed an example where an agent spent twenty minutes chasing a failing test caused by a locked configuration file. A human resolved the issue in half a minute.
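One common guardrail against this spinning is a hard retry budget: the agent gets a fixed number of attempts, then the problem escalates to a human. A minimal sketch, assuming a hypothetical `attempt_fix` callable that represents one agent fix attempt:

```python
MAX_ATTEMPTS = 3  # hypothetical budget; tune per pipeline

def run_with_budget(attempt_fix, max_attempts=MAX_ATTEMPTS) -> str:
    """Cap an agent's fix-fail-retry loop, then escalate to a human."""
    detail = "no attempts made"
    for attempt in range(1, max_attempts + 1):
        ok, detail = attempt_fix(attempt)
        if ok:
            return f"fixed on attempt {attempt}"
    # Budget exhausted: stop burning API credits and hand off.
    return f"escalated to human after {max_attempts} attempts: {detail}"

# Simulated agent stuck on the locked-configuration-file failure
# from Laurentius's example: it never succeeds on its own.
result = run_with_budget(lambda n: (False, "config file locked"))
```

Here `result` reports the escalation after three attempts, which is exactly the half-minute human handoff Laurentius demonstrated.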

  • Review fatigue creates blind spots

Reviewing hundreds of generated lines drains focus faster than writing a small amount yourself. Writing builds a mental model naturally. Reviewing forces reverse engineering of decisions made invisibly. Over time, teams fall into “looks good” approvals because deep review feels exhausting. That’s where architectural bugs slip through.

What this shift means for development teams

This shift isn't about replacing developers; the work itself changes shape.

Repetitive tasks like boilerplate, scaffolding, tests, and routine updates belong with agents.

What remains sits squarely with humans.

Architecture, judgment, system design, tradeoffs, reviews, and decisions under uncertainty.

Laurentius put it this way:

"You're still in control. You're just spending your time on the parts that actually need your attention."

For small teams, the impact stands out.

A three-person team can operate with a much higher ceiling. With agents handling parallel maintenance and routine work, people focus on strategy and hard problems instead of backlog churn.

The practical takeaway

Treating AI coding tools as faster autocomplete misses the point. These platforms reshape the workflow itself.

The shift is already happening. Teams can choose between adapting early or catching up later.

The developers who succeed will be those who think in systems, communicate clearly with agents, review output with intent, and make strong decisions about what deserves to exist.

The real question isn’t whether this shift arrives.

The question is whether you’re ready to move from bricklayer to architect.

Interested in how agentic development could work for your team? Contact us to discuss practical implementation strategies.
