AI-Powered Code Editor Cursor Launches Dynamic Context Discovery to Improve Token Efficiency

Cursor has launched "dynamic context discovery," a new approach to optimizing token usage in its AI-powered code editor. The method moves away from large static context windows toward dynamically retrieving only the necessary information, primarily by using files as the main interface for LLM-based tools. It employs five techniques: writing large tool outputs (such as those from shell commands) to files; saving the full history to a file to prevent information loss when context is summarized; storing domain-specific capabilities in files so they can be discovered via semantic search; fetching MCP (Model Context Protocol) tool details dynamically rather than including everything upfront; and syncing terminal output to files for easier agent access. The approach significantly reduces token counts (up to 46.9% for MCP tools) and improves developer efficiency, though its impact on latency remains a point of discussion. Cursor plans to roll the feature out to all users soon.




Cursor has introduced a new approach to minimizing the context size of requests sent to large language models. Called dynamic context discovery, this method moves away from including large amounts of static context upfront, allowing the agent to dynamically retrieve only the information it needs. This reduces token usage and limits the inclusion of potentially confusing or irrelevant details.

Cursor employs five distinct techniques to implement dynamic context discovery. A common feature across all of them is the use of files as the primary interface for LLM-based tools, allowing content to be stored and fetched dynamically by the agent instead of overwhelming the context window.

As coding agents quickly improve, files have been a simple and powerful primitive to use, and a safer choice than yet another abstraction that can't fully account for the future.

The first technique Cursor uses involves writing large outputs, such as those from shell commands or other tools, to files, ensuring that no relevant information is ever lost. The agent can then use tail to access the end of the file as needed.
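The spool-to-file pattern described above can be sketched in a few lines. This is a minimal illustration, not Cursor's implementation: the `MAX_CONTEXT_LINES` budget, file name, and function names are all assumptions.

```python
import os
import subprocess

# Illustrative budget: how many trailing lines the agent sees in context.
MAX_CONTEXT_LINES = 20

def run_and_spool(command: str, workdir: str) -> tuple[str, str]:
    """Run a shell command, persist its full output to a file, and
    return (path_to_full_output, tail_for_the_context_window)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    out_path = os.path.join(workdir, "tool_output.log")
    with open(out_path, "w") as f:
        f.write(result.stdout)  # nothing is lost: the full output is on disk
    lines = result.stdout.splitlines()
    tail = "\n".join(lines[-MAX_CONTEXT_LINES:])
    return out_path, tail
```

Only `tail` enters the model's context; if the agent later needs earlier output, it can read more of the file on disk (for example with `tail -n 100` on the returned path).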

Secondly, to prevent the loss of information when long context is summarized to fit token limits, Cursor saves the full history to a file, allowing the agent to retrieve any missing details later. Similarly, domain-specific capabilities are stored in files, enabling the agent to dynamically discover relevant ones using Cursor's semantic search tools.
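The summarize-but-persist step could look like the following minimal sketch. Everything here is an illustrative assumption — the word-count token proxy, the file layout, and the function names are not Cursor's API.

```python
import json

def token_count(messages: list[dict]) -> int:
    # Crude proxy: whitespace-delimited words stand in for tokens.
    return sum(len(m["content"].split()) for m in messages)

def compact_history(messages: list[dict], budget: int, history_path: str) -> list[dict]:
    """If the conversation exceeds the token budget, write the full
    transcript to a file before replacing it in context with a summary,
    so the agent can retrieve any missing details later."""
    if token_count(messages) <= budget:
        return messages
    # Persist the untruncated transcript so nothing is lost.
    with open(history_path, "w") as f:
        json.dump(messages, f)
    summary = (f"[{len(messages)} earlier messages summarized; "
               f"full transcript at {history_path}]")
    # Keep the summary plus the most recent message in context.
    return [{"role": "system", "content": summary}, messages[-1]]
```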

For MCP tools, rather than including all tools from MCP servers upfront, the agent retrieves only the tool names and fetches the full detail as needed. This has a significant impact on total token count:

The agent now only receives a small bit of static context, including names of the tools, prompting it to look up tools when the task calls for it. In an A/B test, we found that in runs that called an MCP tool, this strategy reduced total agent tokens by 46.9% (statistically significant, with high variance based on the number of MCPs installed).
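The lazy-loading strategy above can be sketched as follows. The registry and its contents are hypothetical examples; the real MCP protocol exchanges tool schemas over a client–server connection rather than a local dictionary.

```python
# Hypothetical tool registry standing in for one or more MCP servers.
TOOL_SCHEMAS = {
    "create_issue": {"description": "Create a tracker issue",
                     "parameters": {"title": "string", "body": "string"}},
    "search_docs": {"description": "Search project documentation",
                    "parameters": {"query": "string"}},
}

def static_context() -> str:
    """What the agent receives upfront: tool names only, no schemas."""
    return "Available MCP tools: " + ", ".join(sorted(TOOL_SCHEMAS))

def fetch_tool_detail(name: str) -> dict:
    """Called by the agent only when a task actually needs the tool."""
    schema = TOOL_SCHEMAS.get(name)
    if schema is None:
        raise KeyError(f"unknown tool: {name}")
    return schema
```

The token savings come from the gap between the one-line `static_context()` and the full schemas, which are fetched only for the tools a run actually calls.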


Another advantage of this approach is that the agent can monitor the status of each MCP tool. For instance, if an MCP server requires re-authentication, the agent can notify the user instead of overlooking it entirely.

Finally, the output of all terminal sessions is synced to the file system, making it easier for the agent to answer user questions about a failing command. Storing output in files also allows the agent to grep only the relevant information, further reducing context size.
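Grepping a synced session log might look like this minimal sketch — the file layout and function name are assumptions, not Cursor's implementation:

```python
import re

def grep_session(log_path: str, pattern: str) -> list[str]:
    """Return only the lines of a terminal session log that match a
    pattern, so the agent pulls e.g. error lines into context instead
    of the whole session."""
    regex = re.compile(pattern)
    with open(log_path) as f:
        return [line.rstrip("\n") for line in f if regex.search(line)]
```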

X user @glitchy noted that, while reducing tokens is an important goal, it is unclear how this might affect latency. @NoBanksNearby added that dynamic context discovery is "huge for dev efficiency when running multiple MCP servers" and @casinokrisa reinforced this:

Reducing tokens by nearly half cuts costs and speeds up responses, especially across multiple servers.

Finally, @anayatkhan09 hinted at possible improvements:

The next step is exposing that dynamic context policy to users so we can tune recall aggressiveness per repo instead of treating all tools the same.

According to Cursor, dynamic context discovery will be available to all users in the coming weeks.


