DeepSeek's Engram Separates Memory from Reasoning
Also: a Nature study on AI emotional closeness, ByteDance research on reasoning topology, and theory-of-mind work on OpenReview this week

Welcome to our weekly debrief. 👋
DeepSeek releases Engram: Conditional memory architecture breakthrough
DeepSeek published research on a conditional memory module achieving constant-time knowledge retrieval, elegantly separating static memory from dynamic reasoning. This architecture eliminates redundant GPU computation and enables more efficient chain-of-thought reasoning compared to standard LLM inference. The approach mirrors how cognitive systems manage long-term knowledge storage alongside flexible problem-solving, with implications for scaling reasoning-focused models without proportional compute increases.
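To make the idea concrete, here is a minimal Python sketch of a conditional memory lookup under stated assumptions: the `ConditionalMemory` class, its gating threshold, and the string keys are illustrative inventions, not DeepSeek's actual Engram interface.

```python
# Hypothetical sketch of conditional, constant-time memory retrieval.
# None of these names come from the Engram paper; they only illustrate
# the split between a static knowledge store and dynamic reasoning.
from dataclasses import dataclass, field

@dataclass
class ConditionalMemory:
    """Static knowledge store with O(1) retrieval, kept apart from
    the dynamic reasoning loop."""
    store: dict = field(default_factory=dict)
    gate_threshold: float = 0.5

    def write(self, key: str, value: str) -> None:
        # Static facts are written once; dict hashing gives
        # constant-time access regardless of store size.
        self.store[key] = value

    def read(self, key: str, confidence: float) -> str | None:
        # "Conditional": consult memory only when confidence in the
        # query is high enough; otherwise signal the caller to fall
        # back to on-the-fly reasoning (and its GPU cost).
        if confidence >= self.gate_threshold:
            return self.store.get(key)
        return None

memory = ConditionalMemory()
memory.write("capital_of_france", "Paris")
print(memory.read("capital_of_france", confidence=0.9))  # -> Paris
print(memory.read("capital_of_france", confidence=0.2))  # -> None (reason instead)
```

The point of the gate is that retrieval never grows with model depth or store size, which is where the claimed GPU savings would come from.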
- Nature: AI outperforms humans in establishing emotional closeness
Lab studies show AI partners are more effective than humans at building feelings of closeness, but only when they are perceived as human: labeling identical dialogue as AI-generated reduced rapport. Source
- ByteDance: Mapping the topology of long chain-of-thought reasoning
Researchers analyzed the structural landscape of extended reasoning in LLMs, revealing how models organize thought sequences and navigate complex reasoning spaces. Source
Anthropic Economic Index: Claude shows 9-12x speedup on complex tasks
Anthropic released a comprehensive January 2026 economic analysis covering 2 million Claude conversations, showing stark geographic variation in AI adoption. Key findings:
- Complex tasks see 9-12x time savings but lower success rates
- US adoption is converging 10x faster than historical technologies
- Work use dominates globally, but coursework peaks in lower-income countries
- High-education tasks show greater productivity gains
Across the report's five primitives (task complexity, human skills, use case, AI autonomy, task success), how users prompt Claude determines how it responds, with implications for labor-market inequality; accounting for reliability revises productivity-growth forecasts downward to ~1.0pp annually.
- OpenReview: Infusing theory of mind into socially intelligent LLM agents
Researchers present methods for integrating explicit theory-of-mind models into LLM reasoning, enabling agents to better predict and understand other agents' beliefs, desires, and intentions. Source
- Independent researcher: A framework for intelligence and consciousness in AI systems
Theoretical framework proposes formal definitions linking intelligence metrics to potential consciousness indicators, offering testable predictions for evaluating emergent properties in frontier models. Source
- QpiAI: ReTreVal, a reasoning-tree-with-validation hybrid approach
New framework validates reasoning paths in LLMs through tree structures, improving metacognitive monitoring and reducing hallucinations in extended reasoning tasks. Source
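As a rough illustration of the tree-with-validation idea, here is a short Python sketch; the `propose_steps` and `validate` callables are placeholders that a real system would back with an LLM and a verifier, and nothing here is taken from the ReTreVal paper itself.

```python
# Minimal sketch of tree-structured reasoning with per-node validation.
# Names and signatures are assumptions for illustration only.
from typing import Callable

def search_reasoning_tree(
    state: str,
    propose_steps: Callable[[str], list[str]],
    validate: Callable[[str], bool],
    is_goal: Callable[[str], bool],
    depth: int = 3,
) -> list[str] | None:
    """Depth-first search over candidate reasoning steps, pruning
    branches that fail validation (the hallucination check)."""
    if is_goal(state):
        return [state]
    if depth == 0:
        return None
    for step in propose_steps(state):
        if not validate(step):  # reject unsupported steps early
            continue
        path = search_reasoning_tree(step, propose_steps, validate, is_goal, depth - 1)
        if path is not None:
            return [state] + path
    return None

# Toy arithmetic example: reach "4" from "1" by adding 1 or 2 per step.
path = search_reasoning_tree(
    "1",
    propose_steps=lambda s: [str(int(s) + 1), str(int(s) + 2)],
    validate=lambda s: int(s) <= 4,
    is_goal=lambda s: s == "4",
)
print(path)  # -> ['1', '2', '3', '4']
```

Validating each node rather than only the final answer is what lets this style of search prune hallucinated branches early instead of discovering the error at the end.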
If you like our work, don't forget to subscribe!
Share the newsletter with your friends.
Good day,
Arthur 🙏