Nicholas Arbuckle

Context Is the New Code: Rethinking How We Build AI Agents


The BlueNexus team are constantly researching emerging trends in the AI sector. Earlier this week we came across an extremely interesting article proposing a strong focus on context in how LLM-based systems are built. We find this particularly compelling because it aligns closely with our product offering and our wider vision of not only how AI should be developed, but how it must be developed.

What if the secret to building smarter AI agents wasn’t better models, but better memory and context? This is the core idea behind Yichao Ji’s recent writeup, which details lessons from developing Manus, a production-grade AI system whose team moved away from traditional model training in favour of something far more agile: "context engineering".

From Training to Thinking

Rather than teaching an LLM what to think through intensive fine-tuning, the Manus team focused on designing how it thinks, via structured, persistent, runtime context.
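As a rough illustration of what "structured, persistent, runtime context" can mean in practice, here is a minimal Python sketch. This is our own hypothetical example, not Manus's actual design; the class and field names are assumptions.

```python
import json
from dataclasses import dataclass, field, asdict
from pathlib import Path


# Hypothetical structured context for an agent. The point: the agent's
# memory, goals, state, and constraints live outside the model weights,
# so they persist across runs and can be inspected or edited directly.
@dataclass
class AgentContext:
    goals: list = field(default_factory=list)        # what the agent is trying to do
    memory: list = field(default_factory=list)       # durable facts and observations
    state: dict = field(default_factory=dict)        # current task state
    constraints: list = field(default_factory=list)  # rules the agent must respect

    def save(self, path: Path) -> None:
        # Persist to disk so the context survives restarts (cf. "use the
        # file system as memory" in the context-engineering approach)
        path.write_text(json.dumps(asdict(self)))

    @classmethod
    def load(cls, path: Path) -> "AgentContext":
        # Start fresh if no prior context has been persisted
        if path.exists():
            return cls(**json.loads(path.read_text()))
        return cls()
```

Because the context is plain data rather than learned behaviour, changing what the agent "knows" is an edit, not a retraining run.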

Key tactics include:

- Designing around the KV-cache, with a stable prompt prefix and append-only context
- Masking unavailable tools rather than removing them from the context
- Using the file system as external, persistent memory
- Steering attention through recitation, such as maintaining a running todo list
- Keeping errors and failed actions in context so the model can adapt
- Avoiding repetitive few-shot patterns that push the agent into a rut

This approach points to a deeper shift in the LLM debate, from “prompt engineering” to context architecture, and it’s changing how intelligent systems are built.

Diving Deeper

Ji’s article observes that developers still default to an “if the model isn’t good enough, retrain it” approach. But Manus demonstrates that this isn’t scalable: it’s expensive, brittle, and hard to maintain across use cases. Instead, they show that by designing the right context window, with memory, goals, state, and constraints, developers can achieve robust agentic behaviour from existing LLMs.
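To make that concrete, here is a short sketch of how such a context window might be assembled for an off-the-shelf LLM. This is our own illustrative code, not from the article; the section headings and the truncation policy are assumptions for the example.

```python
# Illustrative sketch: build a structured prompt for an existing LLM from
# memory, goals, state, and constraints, instead of retraining the model.
def build_context_window(goals, constraints, state, memory, max_chars=8000):
    sections = [
        "## Goals\n" + "\n".join(f"- {g}" for g in goals),
        "## Constraints\n" + "\n".join(f"- {c}" for c in constraints),
        "## State\n" + "\n".join(f"{k}: {v}" for k, v in state.items()),
        # Memory last, most recent entries at the end
        "## Memory\n" + "\n".join(f"- {m}" for m in memory),
    ]
    window = "\n\n".join(sections)
    # Crude budget guard: drop the oldest memory entries until we fit.
    # A production system would summarise or spill to disk instead.
    while len(window) > max_chars and memory:
        memory = memory[1:]
        sections[3] = "## Memory\n" + "\n".join(f"- {m}" for m in memory)
        window = "\n\n".join(sections)
    return window
```

The resulting string is simply passed to any existing LLM as part of its prompt; the "engineering" is in deciding what goes into each section and what gets evicted when the budget is tight.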

We don't necessarily see this as a "workaround" but rather as the new standard emerging, which is exciting when viewed through an R&D lens.

Obligatory mention that we carry some level of bias here, as this new standard plays straight into our wheelhouse.

Naturally, BlueNexus Agrees

We won't sit here and "shill" this approach from the rooftops, but it's fair to say this emerging standard aligns strongly with what we have been building.

The future of AI isn’t just about inference speed or model accuracy; in our opinion it’s about relevance, continuity, portability and coordination.

By this we mean:

- Relevance: surfacing the right context at the right moment
- Continuity: memory that persists across sessions and tasks
- Portability: context that travels with the user, not the model
- Coordination: agents sharing state and working towards common goals

As always, we're interested in what other AI builders think.

We would love to gather thoughts from across the spectrum, from anyone participating in this space. Feel free to add your colour to the conversation and start a dialogue with like-minded people in the comments below.