
Beyond the Prompts: Why Context Engineering Is the Next Frontier for Enterprise AI

Enterprise architect Dominik Tomicevic shares recent best practice about a fast-moving area in AI development 

Prompt engineering is powerful but not enough on its own—context is the critical factor here. Graph-based approaches, particularly GraphRAG, are emerging as the next evolution, moving beyond vanilla prompt engineering to what is now called context engineering. 

Here’s the logic, starting with something that sounds, and is, problematic—context rot. Context rot is an emerging challenge in enterprise AI, occurring when a model’s understanding of context degrades over time or across tasks. Repeated queries exacerbate the problem, as outdated or poorly linked context can dominate responses, slowly eroding trust in the AI system. 

Large language models (LLMs) rely heavily on curated context—structured data, knowledge graphs, and other inputs—to generate reliable outputs. But as information accumulates, shifts, or becomes fragmented across multiple sources, the context available to the model can become diluted or inconsistent, reducing the accuracy and relevance of its responses. 

This leads to outputs that are less precise, more prone to hallucinations, and often disconnected from the underlying business reality. Multiple studies confirm that beyond a certain context size, AI model accuracy tends to decline. Essentially, there comes a point where adding more context can actually reduce model performance. 

Surely more context is better?  

To some, this might seem counterintuitive: surely the more a model ‘knows,’ the better it should be at making inferences? But, we’re dealing with AI here, not the human brain. With current architectures, context windows have limits—an LLM’s attention mechanism simply cannot interact with every token in a very large dataset. 

As a result, the larger the context window, the more opportunities for error or misinterpreted information. In a business setting, that’s a real problem in terms of valid inferences: the model may focus on irrelevant facts, and the extra context can actually dilute the relevance of its outputs. 

Bottom line: on its own, an LLM cannot inherently understand an organization’s data schema or the implicit relationships between entities. This knowledge must be explicitly modeled—often through a combination of knowledge graphs and curated datasets. Without this structure, even the most advanced LLM can generate invalid queries or misinterpret the data. 
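As a minimal sketch of what "explicitly modeling" relationships can mean, the snippet below encodes a tiny entity graph as labeled edges. The schema (Customer, Order, Product and the PLACED/CONTAINS relationships) is a hypothetical example for illustration, not something from the article.

```python
# Hypothetical mini-schema: Customer -PLACED-> Order -CONTAINS-> Product.
edges = [
    ("Customer:alice", "PLACED", "Order:1001"),
    ("Order:1001", "CONTAINS", "Product:widget"),
]

def neighbors(node, rel):
    """Follow edges of a given relationship type from a node."""
    return [dst for src, r, dst in edges if src == node and r == rel]

# With relationships explicit, retrieval can traverse the graph rather
# than hoping the LLM infers the schema from raw rows.
orders = neighbors("Customer:alice", "PLACED")
print([p for o in orders for p in neighbors(o, "CONTAINS")])
# ['Product:widget']
```

Because the traversal is deterministic, only the small, already-resolved answer needs to enter the model’s context.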

To be honest, I see this every day with customers trying, and often struggling, to build non-trivial AI applications. The real solution is to focus on relevance, not volume. Counterintuitive as it may seem, the key is to provide the minimum necessary context to solve the task at hand. 

Giving an LLM too many tools or access to excessive datasets can lead to tool overload: the model might select the wrong tool, misuse it, or generate inaccurate outputs. We need ways to limit access to only the essential tools and data for a given task, which may involve training models on tool-specific APIs or query languages. Without this discipline, even well-curated data may produce inaccurate results. 
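One way to enforce that discipline is a simple per-task tool registry, so the model is only shown the tools registered for the task at hand. The sketch below uses hypothetical tool and task names; in a real system the values would be full tool specifications passed to the model.

```python
# Full tool catalog (names and descriptions are hypothetical).
TOOLS = {
    "get_financials": "Fetch the latest financial metrics",
    "get_sentiment": "Query a market sentiment model",
    "send_email": "Send an email on the user's behalf",
}

# Each task is mapped to the minimal toolset it actually needs.
TASK_TOOLSETS = {
    "quarterly_report": ["get_financials"],
    "market_briefing": ["get_financials", "get_sentiment"],
}

def tools_for(task):
    """Return only the tool specs the model should see for this task."""
    allowed = TASK_TOOLSETS.get(task, [])
    return {name: TOOLS[name] for name in allowed}

print(tools_for("quarterly_report"))
# {'get_financials': 'Fetch the latest financial metrics'}
```

An unknown task yields an empty toolset, which fails safe: the model gets no tools rather than all of them.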

Why you need to think bigger than ‘prompts’ 

Counterintuitive as it sounds, appropriately constraining the search space makes the model far more effective. This is where we need to move beyond the idea of prompts. While prompts work well for simple ChatGPT-level tasks, real enterprise AI requires engineering the right context rather than endlessly refining prompts. The focus should shift from prompt finesse to deliberately designing the search space the model operates within. 

As you’ll quickly see, this isn’t work that happens in the question box—it’s a programming task focused on what information the model is actually ingesting. Effective context engineering relies on both quantitative and qualitative metrics to find the right balance. Like prompt engineering, it involves iterating on what information helps the model produce useful, reliable outputs—but the iteration happens in code, not text. 

What should be happening at the code level? A key computer science concept to guide us here is recursion. The goal is to structure and filter context by recursively summarizing relevant portions of a graph-structured dataset. Yes—I deliberately smuggled in the word ‘graph’—because best practices suggest that basic RAG isn’t enough to handle this recursion elegantly. Instead, we need the next evolution of Retrieval-Augmented Generation: GraphRAG. 

First developed at Microsoft Research, GraphRAG is our friend here as it’s a very effective way of structuring and filtering context across graph-structured datasets. And yes, we want graphs, not something like SQL, for two key reasons. First, graphs are better suited to capture the nuances of relationships in complex information. Second, they allow complex tasks to be broken into smaller subtasks, each of which can be handled separately, with results aggregated to form a coherent final answer. 

That’s a nicely modular approach that reduces context complexity and ensures that the model’s reasoning is aligned with business logic. This makes GraphRAG an excellent tool for context engineering: it allows us to avoid loading the entire graph—or even a large subgraph—into the model’s context, and instead: 

  • expanding outward only from candidate nodes that look like relevant context entries 
  • summarizing at each expansion step, constantly reminding the LLM to focus on the task at hand 
  • trimming irrelevant information 
  • and ensuring that the final context is concise and tailored for the specific user task. 
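The steps above can be sketched as a small expand-summarize-trim loop. Here the graph, the relevance check, and the summarizer are toy placeholders standing in for embedding or LLM calls; the node names are invented for illustration.

```python
# Toy adjacency list standing in for a knowledge graph.
GRAPH = {
    "churn": ["pricing", "support", "weather"],
    "pricing": ["discounts"],
    "support": ["ticket_backlog"],
    "weather": [],
}

def relevant(node, task):
    # Placeholder for an embedding- or LLM-based relevance check.
    return node != "weather"

def summarize(nodes, task):
    # Placeholder for an LLM summarization call that restates the task.
    return f"[task: {task}] " + ", ".join(sorted(nodes))

def build_context(seed, task, depth=2):
    """Expand from the seed, trim irrelevant nodes, summarize each step."""
    context, frontier = set(), {seed}
    for _ in range(depth):
        # Expand from currently relevant nodes...
        expanded = {n for f in frontier for n in GRAPH.get(f, [])}
        # ...trim what is irrelevant to the task...
        frontier = {n for n in expanded if relevant(n, task)}
        context |= frontier
    # ...and hand the model a concise, task-tailored summary.
    return summarize(context, task)

final = build_context("churn", "explain churn drivers")
print(final)
# [task: explain churn drivers] discounts, pricing, support, ticket_backlog
```

Note that the irrelevant "weather" branch never enters the final context, even though it is adjacent to the seed node.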

Top context engineering tips 

From our experience helping organizations start down this path, several practical lessons about context engineering have emerged. 

One: It really helps to feed the LLM smaller, pre-processed amounts of context at a time, preventing it from becoming overwhelmed. Two: Combining prompt engineering—to remind the model of the task and the need for concise, relevant answers—with step-by-step or recursive summarization can significantly improve context quality and reduce that dreaded context rot.  

Three: For complex workflows, context engineering isn’t just about providing the right data, it also involves dynamically selecting which tools and datasets are relevant for each step. For example, a model might need to fetch the latest financial metrics via a specialized query tool while simultaneously consulting a market sentiment model. Effective orchestration is essential to ensure that each submodel only ever sees the context it needs. 
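A minimal sketch of that kind of orchestration is below: each pipeline step is paired with exactly one data-fetching tool and one instruction, so no step ever sees another step’s context. The tool functions and their return values are hypothetical stand-ins for real query tools and model calls.

```python
# Hypothetical data-fetching tools (stand-ins for real query APIs).
def fetch_financials():
    return {"revenue": "up 4% QoQ"}

def fetch_sentiment():
    return {"sentiment": "cautiously bullish"}

# Each step bundles a name, its single tool, and its instruction.
STEPS = [
    ("metrics", fetch_financials, "Summarize the latest metrics."),
    ("sentiment", fetch_sentiment, "Summarize market sentiment."),
]

def run_pipeline():
    """Run each step with only its own instruction and data in context."""
    results = {}
    for name, tool, instruction in STEPS:
        # In a real system, an LLM call would replace this f-string;
        # the point is that the context dict is built per step.
        context = {"instruction": instruction, "data": tool()}
        results[name] = f"{context['instruction']} {context['data']}"
    return results

print(run_pipeline())
```

Isolating context per step is what keeps each submodel focused and keeps one step’s data from polluting another’s reasoning.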

In sum, context rot is a real challenge when working with large context windows in LLMs. Developers tackling this issue consistently report that the most effective mitigation involves structuring, filtering, summarizing, and continuously testing context so that only essential information enters the model’s working memory. 

Mitigating context rot requires deliberate engineering: curating high-quality context, segmenting complex tasks, dynamically managing tool access, and leveraging knowledge graphs to ensure the model always sees the most relevant, well-structured information. 

Without these context-aiding steps, AI workflows risk producing unreliable insights, no matter how large or sophisticated the underlying model is. Growing evidence shows that structuring enterprise data as graphs and applying RAG methods enables AI to reason effectively over large datasets, helping to overcome the limitations of context windows. 

The takeaway for anyone trying to make AI work for their company is that the days of clever prompts are over. The best path forward with LLMs lies in structuring data, leveraging knowledge graphs, and curating tool access—providing your model with the ideal context it needs to reason accurately and reliably across complex enterprise workflows. Doing so ensures the AI produces insights that truly help the business move the needle. 

Dominik Tomicevic is CEO and Co-founder of Memgraph, a high-performance, in-memory graph database that serves as a real-time context engine for AI applications, powering enterprise solutions with richer context, sub-millisecond query performance, and explainable results that developers can trust.
