
Beyond the Prompts: Why Context Engineering Is the Next Frontier for Enterprise AI

Enterprise architect Dominik Tomicevic shares recent best practice about a fast-moving area in AI development 

Prompt engineering is powerful but not enough on its own—context is the critical factor here. Graph-based approaches, particularly GraphRAG, are emerging as the next evolution, moving beyond vanilla prompt engineering to what is now called context engineering. 

Here’s the logic, starting with something that sounds, and is, problematic—context rot. Context rot is an emerging challenge in enterprise AI, occurring when a model’s understanding of context degrades over time or across tasks. Repeated queries exacerbate the problem, as outdated or poorly linked context can dominate responses, slowly eroding trust in the AI system. 

Large language models (LLMs) rely heavily on curated context—structured data, knowledge graphs, and other inputs—to generate reliable outputs. But as information accumulates, shifts, or becomes fragmented across multiple sources, the context available to the model can become diluted or inconsistent, reducing the accuracy and relevance of its responses. 

This leads to outputs that are less precise, more prone to hallucinations, and often disconnected from the underlying business reality. Multiple studies confirm that beyond a certain context size, AI model accuracy tends to decline. Essentially, there comes a point where adding more context can actually reduce model performance. 

Surely more context is better?  

To some, this might seem counterintuitive: surely the more a model ‘knows,’ the better it should be at making inferences? But, we’re dealing with AI here, not the human brain. With current architectures, context windows have limits—an LLM’s attention mechanism simply cannot interact with every token in a very large dataset. 

As a result, the larger the context window, the more opportunities for error or misinterpreted information. In a business setting, that’s a real problem in terms of valid inferences: the model may focus on irrelevant facts, and the extra context can actually dilute the relevance of its outputs. 

Bottom line: on its own, an LLM cannot inherently understand an organization’s data schema or the implicit relationships between entities. This knowledge must be explicitly modeled—often through a combination of knowledge graphs and curated datasets. Without this structure, even the most advanced LLM can generate invalid queries or misinterpret the data. 

To be honest, I see this every day with customers trying, and often struggling, to build non-trivial AI applications. The real solution is to focus on relevance, not volume. Counterintuitive as it may seem, the key is to provide the minimum necessary context to solve the task at hand. 

Giving an LLM too many tools or access to excessive datasets can lead to tool overload: the model might select the wrong tool, misuse it, or generate inaccurate outputs. We need ways to limit access to only the essential tools and data for a given task, which may involve training models on tool-specific APIs or query languages. Without this discipline, even well-curated data may produce inaccurate results. 
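One way to enforce that discipline is a per-task allow-list in the orchestration layer. The sketch below is illustrative, not a real framework API: the tool names, the `TASK_TOOLS` mapping, and the lambda stand-ins for real tool calls are all hypothetical.

```python
# Hypothetical sketch: a per-task tool registry that exposes only the
# tools a given task actually needs, instead of the whole toolbox.
TOOLS = {
    "fetch_metrics": lambda ticker: {"ticker": ticker, "revenue": 1.0},
    "run_query": lambda q: f"rows for: {q}",
    "send_email": lambda to, body: f"sent to {to}",
}

# Explicit allow-lists keep each task's search space small.
TASK_TOOLS = {
    "financial_report": ["fetch_metrics", "run_query"],
    "notification": ["send_email"],
}

def tools_for_task(task: str) -> dict:
    """Return only the tools allow-listed for this task."""
    allowed = TASK_TOOLS.get(task, [])
    return {name: TOOLS[name] for name in allowed}

report_tools = tools_for_task("financial_report")
```

The point of the design is that the model never even sees `send_email` when writing a financial report, so it cannot select it by mistake.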

Why you need to think bigger than ‘prompts’ 

Counterintuitive as it may sound, appropriately constraining the search space makes the model far more effective. This is where we need to move beyond the idea of prompts. While prompts work well for simple, ChatGPT-level tasks, real enterprise AI requires engineering the right context rather than endlessly refining prompts. The focus should shift from prompt finesse to deliberately designing the search space the model operates within. 

As you’ll quickly see, this isn’t work that happens in the question box—it’s a programming task focused on what information the model is actually ingesting. Effective context engineering relies on both quantitative and qualitative metrics to find the right balance. Like prompt engineering, it involves iterating on what information helps the model produce useful, reliable outputs—but the iteration happens in code, not text. 
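To make the "iteration happens in code" idea concrete, here is a minimal sketch of a quantitative filter: score candidate snippets against the task and keep only the best within a word budget. The word-overlap metric is a deliberate simplification standing in for whatever relevance measure (embeddings, rerankers) a real pipeline would use; both function names are hypothetical.

```python
def score(snippet: str, task: str) -> float:
    """Crude relevance metric: fraction of task words present in the
    snippet. A real pipeline would use embeddings or a reranker."""
    task_words = set(task.lower().split())
    snip_words = set(snippet.lower().split())
    return len(task_words & snip_words) / max(len(task_words), 1)

def select_context(snippets: list[str], task: str, budget_words: int) -> list[str]:
    """Keep the most relevant snippets until a word budget is exhausted,
    so the model's context stays small and on-topic."""
    ranked = sorted(snippets, key=lambda s: score(s, task), reverse=True)
    chosen, used = [], 0
    for s in ranked:
        n = len(s.split())
        if used + n <= budget_words:
            chosen.append(s)
            used += n
    return chosen
```

Because both the scoring function and the budget are ordinary code, they can be tuned, logged, and regression-tested like any other component—which is exactly the shift from prompt tinkering to engineering.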

What should be happening at the code level? A key computer science concept to guide us here is recursion. The goal is to structure and filter context by recursively summarizing relevant portions of a graph-structured dataset. Yes—I deliberately smuggled in the word ‘graph’—because best practices suggest that basic RAG isn’t enough to handle this recursion elegantly. Instead, we need the next evolution of Retrieval-Augmented Generation: GraphRAG. 

First developed at Microsoft Research, GraphRAG is our friend here as it’s a very effective way of structuring and filtering context across graph-structured datasets. And yes, we want graphs, not something like SQL, for two key reasons. First, graphs are better suited to capture the nuances of relationships in complex information. Second, they allow complex tasks to be broken into smaller subtasks, each of which can be handled separately, with results aggregated to form a coherent final answer. 

That’s a nicely modular approach that reduces context complexity and ensures that the model’s reasoning is aligned with business logic. This makes GraphRAG an excellent tool for context engineering: it allows us to avoid loading the entire graph—or even a large subgraph—into the model’s context, and instead: 

  • expanding outward from candidate nodes that look like relevant context entries 
  • summarizing at each expansion step, constantly reminding the LLM to focus on the task at hand 
  • trimming irrelevant information 
  • and ensuring that the final context is concise and tailored to the specific user task. 
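The expand–summarize–trim loop above can be sketched over a toy adjacency-list graph. To be clear, this is an illustration of the pattern, not the GraphRAG API: the graph, the substring-based `relevant` check, and the fact that "summarizing" here just collects node names (where a real system would call an LLM) are all simplifying assumptions.

```python
# Toy knowledge graph as an adjacency list (hypothetical data).
GRAPH = {
    "acme_corp": ["q3_report", "ceo_jane"],
    "q3_report": ["revenue_up_12pct", "legacy_division_sold"],
    "ceo_jane": ["hired_2019"],
    "revenue_up_12pct": [],
    "legacy_division_sold": [],
    "hired_2019": [],
}

def relevant(node: str, task: str) -> bool:
    """Trim step: keep only nodes that look related to the task.
    A real system would use semantic similarity, not substrings."""
    return any(word in node for word in task.split())

def expand_and_summarize(seed: str, task: str, depth: int) -> list[str]:
    """Recursively expand outward from a seed node, keeping only
    relevant nodes at each step so the context stays small."""
    if depth == 0:
        return []
    summary = []
    for child in GRAPH.get(seed, []):
        if relevant(child, task):
            summary.append(child)  # a real system would LLM-summarize here
        summary += expand_and_summarize(child, task, depth - 1)
    return summary

context = expand_and_summarize("acme_corp", "revenue report", depth=3)
```

Notice that the irrelevant branch (`ceo_jane` and everything below it) never enters the context at all—that is the whole point: the model receives a pruned, task-shaped slice of the graph rather than the graph itself.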

Top context engineering tips 

From our experience helping organizations start down this path, several practical lessons about context engineering have emerged. 

One: It really helps to feed the LLM smaller amounts of context at a time, preventing it from becoming overwhelmed. Two: Combining prompt engineering—to remind the model of the task and the need for concise, relevant answers—with step-by-step or recursive summarization can significantly improve context quality and reduce that dreaded context rot.  

Three: For complex workflows, context engineering isn’t just about providing the right data, it also involves dynamically selecting which tools and datasets are relevant for each step. For example, a model might need to fetch the latest financial metrics via a specialized query tool while simultaneously consulting a market sentiment model. Effective orchestration is essential to ensure that each submodel only ever sees the context it needs. 
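A minimal sketch of that orchestration idea: each step declares the slice of state it is allowed to see, and the orchestrator passes only that slice. The step functions here are hypothetical stand-ins for the specialized query tool and sentiment model mentioned above, not real services.

```python
def fetch_financials(ctx):
    # Stand-in for a specialized financial-metrics query tool.
    return {"metrics": {"revenue": 120}}

def fetch_sentiment(ctx):
    # Stand-in for a market sentiment model.
    return {"sentiment": "positive"}

def write_summary(ctx):
    # This step needs both earlier results, and nothing else.
    return {"summary": f"revenue={ctx['metrics']['revenue']}, mood={ctx['sentiment']}"}

# Each step is paired with the state keys it is allowed to see.
PIPELINE = [
    (fetch_financials, []),
    (fetch_sentiment, []),
    (write_summary, ["metrics", "sentiment"]),
]

def run_workflow() -> dict:
    state: dict = {}
    for step, needs in PIPELINE:
        visible = {k: state[k] for k in needs}  # pass only what the step needs
        state.update(step(visible))
    return state

result = run_workflow()
```

The allow-list per step is the orchestration discipline in miniature: no submodel ever receives the full accumulated state, so context stays scoped even as the workflow grows.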

In sum, context rot is a real challenge when working with large context windows in LLMs. Developers tackling this issue consistently report that the most effective mitigation involves structuring, filtering, summarizing, and continuously testing context so that only essential information enters the model’s working memory. 

Mitigating context rot requires deliberate engineering: curating high-quality context, segmenting complex tasks, dynamically managing tool access, and leveraging knowledge graphs to ensure the model always sees the most relevant, well-structured information. 

Without these context-aiding steps, AI workflows risk producing unreliable insights, no matter how large or sophisticated the underlying model is. Growing evidence shows that structuring enterprise data as graphs and applying RAG methods enables AI to reason effectively over large datasets, helping to overcome the limitations of context windows. 

The takeaway for anyone trying to make AI work for their company is that the days of clever prompts are over. The best path forward with LLMs lies in structuring data, leveraging knowledge graphs, and curating tool access—providing your model with the ideal context it needs to reason accurately and reliably across complex enterprise workflows. Doing so ensures the AI produces insights that truly help the business move the needle. 

Dominik Tomicevic is CEO and Co-founder of Memgraph, a high-performance, in-memory graph database that serves as a real-time context engine for AI applications, powering enterprise solutions with richer context, sub-millisecond query performance, and explainable results that developers can trust.

