<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/rss.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Context | Steven’s Diary | Toiling away for 0700.hk</title><description>As you can see, this is just a diary. #赛博鸡蛋 (cyber eggs): assorted deals and freebies. #光与影 (light and shadow): photography and notes from the creative process. #吃点好 (eat well): what I’ve been eating. #面基 (meetups). #活动 (events). #灵感菇 (idea mushroom): demos and notes for MVP ideas. Join the group via the discussion section. Friend link: @stvgateway. Contact Steven: @stvlynn_bot</description><link>https://broadcastchannel-a3e.pages.dev</link><item><title>#优质博文 #AI #LLM #Context How AI Remembers and Why It Forgets: Part 1. The Context Problem: how large language models (LLMs) simulate memory through context, and why too much information leads to “Context Rot”</title><link>https://broadcastchannel-a3e.pages.dev/posts/4898</link><guid isPermaLink="true">https://broadcastchannel-a3e.pages.dev/posts/4898</guid><pubDate>Tue, 07 Apr 2026 15:20:02 GMT</pubDate><content:encoded>&lt;a href=&quot;/search/%23%E4%BC%98%E8%B4%A8%E5%8D%9A%E6%96%87&quot;&gt;#优质博文&lt;/a&gt; &lt;a href=&quot;/search/%23AI&quot;&gt;#AI&lt;/a&gt; &lt;a href=&quot;/search/%23LLM&quot;&gt;#LLM&lt;/a&gt; &lt;a href=&quot;/search/%23Context&quot;&gt;#Context&lt;/a&gt;&lt;br /&gt;&lt;a href=&quot;https://www.developerway.com/posts/how-ai-remembers-and-forgets-part1&quot; target=&quot;_blank&quot;&gt;How AI Remembers and Why It Forgets: Part 1. The &lt;mark&gt;Context&lt;/mark&gt; Problem&lt;/a&gt;: how large language models (LLMs) simulate memory through &lt;mark&gt;context&lt;/mark&gt;, and why too much information leads to “&lt;mark&gt;Context&lt;/mark&gt; Rot”.&lt;br /&gt;&lt;br /&gt;&lt;blockquote&gt;AI summary: The article shows that AI has no persistent memory; its so-called “memory” relies entirely on the &lt;mark&gt;context&lt;/mark&gt; that is re-sent with every turn of the conversation. Through experiments, the author demonstrates that even within a model’s advertised &lt;mark&gt;context&lt;/mark&gt; window, large amounts of information can cause “&lt;mark&gt;Context&lt;/mark&gt; Rot”, leading to degraded performance, dropped information, and hallucinations.&lt;/blockquote&gt;&lt;br /&gt;&lt;br /&gt;&lt;i&gt;Author: Nadia Makarevich&lt;/i&gt;&lt;a href=&quot;https://www.developerway.com/posts/how-ai-remembers-and-forgets-part1&quot; target=&quot;_blank&quot;&gt;
  
  &lt;div&gt;Developer Way&lt;/div&gt;
  &lt;img class=&quot;link_preview_image&quot; alt=&quot;How AI Remembers and Why It Forgets: Part 1. The Context Problem&quot; src=&quot;/static/https://cdn4.telesco.pe/file/EWyRUZyZGL_DRz9sYVUgjZUfXO1KyyarvoqhWF2R-A95InjgS2J_iomJ2QoN5oceDSGkn-QsMXGCf01D-zDKa9sGYdL-IEglld-AlHCuqU1eaagxVZ9_oRMnjs140j6AV30SH91trOQ0LtpqvGAVhrrrzS9x4LJ4c4YBkHv3jEGByflZtI9E2LI6AbOxko6GDxaXjs20SbfCM5K4KsWwDxZxBfPQPs5GCTkQ32CImoRnsnzNiy8mM8qhC-HRcZhqjzlNXt-gcttbDYq-eMQ2E0ev456GWYHQeJaPKeOcAbnZg7hFflvrlnNSvmSXP0EQ1tjzaQk6MV0LnBdeAsVlqA.jpg&quot; loading=&quot;lazy&quot; /&gt;
  &lt;div&gt;How AI Remembers and Why It Forgets: Part 1. The &lt;mark&gt;Context&lt;/mark&gt; Problem&lt;/div&gt;
  &lt;div&gt;How does AI actually remember things between messages, and why does it forget halfway through? I ran a few experiments on Claude Sonnet and GPT-5 and wrote down what I saw.&lt;/div&gt;
&lt;/a&gt;</content:encoded></item></channel></rss>