Research Paper

Cognitive Memory Architecture for Enterprise AI Platforms

Research paper mapping 13 memory patterns (diachronic identity, hierarchical reasoning, ACT-R activation, narrative memory, etc.) to the Adverant Nexus platform with 50 complex use cases across ProseCreator, NexusROS, NexusQA, and Skills Engine.

Adverant Research Team --- 2026-04-14 --- 60 min read --- 14,820 words

Cognitive Memory Architecture for Enterprise AI Platforms --- Diachronic Identity, Hierarchical Reasoning, and Thirteen Patterns for Persistent User Modeling

Adverant Research & Engineering --- April 2026
Confidential --- Unlisted Publication
Domain: adverant.ai/docs/research/cognitive-memory-architecture


Abstract

Enterprise AI platforms face a fundamental limitation: they retrieve information but do not learn about the humans they serve. Current Retrieval-Augmented Generation (RAG) and vector-store memory systems treat user context as a static snapshot --- a database row frozen at insertion time --- rather than as a temporal trajectory that evolves with every interaction. This paper identifies thirteen distinct memory patterns drawn from cognitive science, philosophy of identity, information theory, and recent advances in LLM agent architectures. We analyze each pattern theoretically, map it to a concrete integration architecture within the Adverant Nexus enterprise platform (PostgreSQL, Neo4j, Qdrant, Redis), and demonstrate its value through fifty complex use cases spanning five plugin domains: creative writing (ProseCreator), revenue operations (NexusROS), quality assurance (NexusQA), skill orchestration, and platform-wide intelligence. We provide ASCII architectural diagrams for memory data flows, UI/UX mockups for dashboard visualization, an evidence-based testing framework derived from BEAM 10M and LongMem benchmarks, and a platform integration strategy for Claude Code, Cursor, and VSCode via the Model Context Protocol. The paper argues that the gap between retrieval and reasoning --- between "what text is similar to this query?" and "what do we actually know about this person?" --- represents the single largest unsolved problem in enterprise AI personalization.

Keywords: cognitive memory architecture, diachronic identity, user modeling, GraphRAG, enterprise AI, hierarchical reasoning, MemGPT, ACT-R, narrative memory, prospective memory, metacognition, principled forgetting, collective intelligence, MCP integration


1. Introduction

1.1 The Retrieval-Reasoning Gap

Something is fundamentally wrong with how enterprise AI platforms remember their users.

Consider a revenue operations analyst who has used an AI-powered CRM for eighteen months. She has closed 340 deals, written 2,100 emails, conducted 890 prospect research sessions, and generated 150 pipeline forecasts. The AI system has processed every one of these interactions. Yet when she opens a new session, the system greets her as if they have never met. It does not know that she specializes in healthcare vertical sales. It does not know that her close rate improves by 23% when she leads with ROI data rather than feature comparisons. It does not know that she switched from aggressive to consultative selling six months ago after losing three enterprise deals. It does not know who she is.

This is not a storage problem. Modern vector databases can store billions of embeddings. Graph databases can maintain millions of entity relationships. The data is there --- buried in interaction logs, chat transcripts, tool invocations, and document histories. The problem is that no system reasons over this accumulated evidence to build a coherent, evolving model of the human on the other end.

The dominant approach --- Retrieval-Augmented Generation (RAG) --- asks a narrow question: "What stored text is semantically similar to this query?" It returns text chunks ranked by cosine similarity. This is useful for finding documents. It is wholly inadequate for understanding people. Humans are not documents. They change over time. They hold contradictory preferences. They have expertise in some areas and ignorance in others. They have goals that shift, habits that evolve, and communication styles that vary by context.

The gap between retrieval and reasoning --- between "find similar text" and "understand this person" --- is the central problem this paper addresses.

1.2 Diachronic Identity as Engineering Constraint

The philosophical concept of diachronic identity --- how entities persist through time while changing --- provides the theoretical foundation for a solution. A diachronic object is one that can be coherently understood as enduring (the same entity) while also being different at different moments [1]. The quality of any AI system's output is determined by the context available at inference time, and that context must encode both what the user is now and the arc of how they got there.

Plastic Labs operationalizes this insight in their Honcho platform [2], treating user representations not as database rows but as temporal trajectories that improve through continuous reasoning. But diachronic identity is only one of thirteen patterns we identify as necessary for a complete cognitive memory architecture. The full set spans cognitive science (ACT-R activation decay [3], narrative memory [4, 5, 6]), information theory (surprisal-guided consolidation [7]), operating systems research (tiered virtual memory [8]), philosophy of mind (theory of mind via asymmetric observation [9]), and metacognition (self-monitoring knowledge gaps [10, 11]).

1.3 Contributions

This paper makes four contributions:

  1. A taxonomy of thirteen memory patterns for enterprise AI, synthesized from cognitive science, philosophy, and recent LLM agent research, with formal definitions and applicability criteria for each.

  2. A concrete integration architecture mapping each pattern to the Adverant Nexus platform (PostgreSQL + Neo4j + Qdrant + Redis), including database schema extensions, API endpoints, background job specifications, and four identified context injection points in the existing codebase.

  3. Fifty complex use cases across five plugin domains (ProseCreator, NexusROS, NexusQA, Skills Engine, Platform-Wide), each with technical implementation detail, data flow specification, and expected user impact.

  4. An evidence-based validation framework derived from BEAM 10M and LongMem benchmarks, with a testing architecture for measuring memory pattern effectiveness in production.

1.4 Paper Organization

Section 2 surveys related work across agent memory architectures, cognitive science foundations, and enterprise AI personalization. Section 3 presents the thirteen memory patterns with formal definitions. Section 4 analyzes the current Nexus architecture and identifies integration points. Section 5 details the integration architecture for each pattern. Section 6 presents fifty use cases organized by plugin domain. Section 7 provides UI/UX mockups and memory journey diagrams. Section 8 describes the evidence-based testing framework. Section 9 details platform integration strategies for Claude Code, Cursor, and VSCode. Section 10 proposes a Nexus Cognitive Memory microservice architecture. Section 11 discusses limitations. Section 12 concludes.


2. Related Work

2.1 Agent Memory Architectures

The field of agent memory has undergone rapid evolution since 2023. Hu et al. [12] provide a comprehensive survey covering 40+ systems across parametric memory (model weights), short-term memory (context windows), and long-term memory (external stores). Their taxonomy identifies four memory operations --- read, write, reflect, and manage --- and finds that most production systems implement only read and write, neglecting the reflection and management operations that enable genuine learning.

The CoALA framework [13] establishes the canonical cognitive architecture for language agents, proposing four long-term memory types: episodic (past experiences), semantic (world knowledge), procedural (implicit weights plus explicit code), and resource (tools and APIs). Actions divide into internal (reasoning, retrieval, learning) and external (grounding in environment). This framework has become the standard reference for agent memory design and directly informs our pattern taxonomy.

MemGPT [8] introduced the operating-systems metaphor for LLM memory management, treating the context window as RAM and external storage as disk. The system demonstrated that agents can productively self-edit their own memory using tool calls, maintaining a "core memory" (always in context) alongside "recall memory" (conversation history) and "archival memory" (long-term processed knowledge). Letta, the production framework built on MemGPT, has since added scheduled inner monologue and memory filesystem versioning [14].

A-MEM [15] proposes agentic memory that autonomously decides when and what to store, using Zettelkasten-inspired note-taking. The system creates atomic "notes" with interconnections, enabling semantic traversal beyond simple vector similarity. Published at NeurIPS 2025, it represents the state of the art in autonomous memory management.

2.2 Cognitive Science Foundations

Our pattern taxonomy draws heavily from cognitive science. ACT-R (Adaptive Control of Thought --- Rational) [3] provides a mathematically grounded model of human memory activation, where recall probability depends on recency, frequency of past retrievals, and contextual match. Honda et al. [3] directly integrate ACT-R activation into LLM agents, demonstrating human-like remembering and forgetting behavior that improves task performance compared to unbounded memory systems.

Narrative memory research [4, 5, 6] demonstrates that human long-term memory is organized as autobiographical narratives, not databases of facts. Amory [4] builds coherent narrative structures from agent experiences using agentic reasoning. NEMORI [5] implements self-organizing memory inspired by event-boundary detection --- analogous to how humans segment continuous experience into discrete episodes. SYNAPSE [6] bridges episodic and semantic memory via spreading activation across a hybrid graph, recovering "bridge node" connections between semantically distant memories.

Kline [16] demonstrates that deep neural networks exhibit forgetting curves statistically indistinguishable from human Ebbinghaus curves, suggesting biological forgetting mechanisms have natural analogues in parametric memory and validating the application of cognitive forgetting models to AI systems.

2.3 Information-Theoretic Memory

Surprisal --- the information-theoretic measure of unexpectedness --- provides a principled mechanism for deciding which observations deserve deeper processing. In Honcho's architecture [2, 7], surprisal S(x) = -log P(x) filters which observations trigger expensive background reasoning. Low-probability events (high surprisal) signal model updates are warranted, while expected behaviors receive only shallow processing. This approach mirrors human attention allocation, where novel stimuli receive disproportionate cognitive resources.

2.4 Metacognitive Memory Systems

Griot et al. [10] demonstrate in Nature Communications that large language models "lack essential metacognition for reliable medical reasoning" --- models consistently fail to recognize knowledge limitations and provide confident answers even when correct options are absent. This finding motivates explicit metacognitive monitoring in memory systems. Ji-An et al. [11] show that LLMs are capable of metacognitive monitoring and control of their internal activations, suggesting that metacognitive capabilities can be externally scaffolded even when not innately reliable. Zhou et al. [17] propose metacognitive RAG, where the system evaluates retrieval sufficiency before generating responses, re-querying or escalating to clarification when confidence is low.

2.5 Enterprise AI Personalization

Enterprise AI personalization remains largely unsolved despite significant investment. Most platforms offer rule-based personalization (if user.role == "admin" then show admin tools) or simple preference storage (user.theme = "dark"). The Reflective Memory Management (RMM) framework [18], published at ACL 2025, combines prospective reflection (dynamic summarization of anticipated future needs) with retrospective reflection (RL-optimized retrieval), achieving state-of-the-art results on long-term personalized dialogue benchmarks.

Wedel [19] proposes Contextual Memory Intelligence (CMI), a formal context taxonomy with four dimensions: type (temporal, emotional, procedural, historical), source (user-generated, system-inferred, ambient, organizational), scope (task-specific vs. enterprise-wide), and state (active, latent, decayed, contradictory). This taxonomy directly informs our situated cognition pattern.

Riedl and De Cremer [20] examine AI for collective intelligence, arguing that organizational memory requires three ingredients: collective memory (distributed knowledge), collective attention (synchronized focus), and collective reasoning (shared frameworks). Their analysis informs our collective memory pattern.


3. Thirteen Memory Patterns

We identify thirteen distinct memory patterns necessary for a complete cognitive memory architecture. Each pattern is defined by its cognitive science foundation, information-theoretic properties, and engineering requirements.

3.1 Pattern 1: Diachronic Identity

Definition. A memory system that maintains a coherent, evolving representation of an entity across time, encoding both current state and the trajectory of how that state was reached.

Cognitive Foundation. Diachronic identity is a concept from analytic philosophy concerning how entities persist through time while changing [1]. A diachronic object is one that can be coherently understood as enduring --- the same entity --- while also being different at different moments. This contrasts with synchronic identity (identity at a single point in time).

Engineering Requirement. The system must distinguish between identity-constituting properties (name, role, core expertise) that change rarely and identity-expressing properties (current project, active preferences, communication style) that change frequently. Both must be tracked with temporal provenance.

Why It Matters. Without diachronic identity, every session starts from scratch. The system cannot distinguish "user changed their mind" from "user has always preferred this." It cannot detect growth (novice becoming expert), regression (expert making beginner mistakes), or drift (gradual preference shifts).
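
The distinction between identity-constituting and identity-expressing properties, each carrying temporal provenance, can be sketched as follows. The type and function names (IdentityProperty, currentValue, trajectory) are illustrative assumptions, not the Nexus schema:

```typescript
// Sketch: identity properties tracked with temporal provenance.
type Volatility = "constituting" | "expressing";

interface IdentityProperty {
  key: string;             // e.g. "role", "currentProject"
  value: string;
  volatility: Volatility;  // constituting: changes rarely; expressing: changes often
  observedAt: Date;        // when this value was witnessed
  supersededAt?: Date;     // set when a newer value replaces it
}

// Current state = the latest non-superseded value per key;
// the trajectory is the full, chronologically ordered history.
function currentValue(history: IdentityProperty[], key: string): string | undefined {
  const live = history
    .filter((p) => p.key === key && p.supersededAt === undefined)
    .sort((a, b) => b.observedAt.getTime() - a.observedAt.getTime());
  return live[0]?.value;
}

function trajectory(history: IdentityProperty[], key: string): string[] {
  return history
    .filter((p) => p.key === key)
    .sort((a, b) => a.observedAt.getTime() - b.observedAt.getTime())
    .map((p) => p.value);
}
```

Because superseded values are retained rather than overwritten, the system can answer both "what does she prefer now?" and "how did that preference evolve?"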

3.2 Pattern 2: Observer/Observed Peer Paradigm

Definition. A memory architecture where both humans and AI agents are modeled as "peers" --- first-class entities with persistent representations --- and observation is asymmetric: each peer builds its own model of other peers based solely on information it has actually witnessed.

Cognitive Foundation. Theory of mind --- the ability to attribute mental states (beliefs, desires, intentions) to others --- is fundamental to social cognition [9]. The observer/observed paradigm implements a computational theory-of-mind structure: the agent peer observes the human peer and builds a model of their mental states, preferences, and psychology.

Engineering Requirement. Collections of observations must be scoped to observer/observed pairs. Agent A's model of User B should contain only information A has actually witnessed, not information from User B's private conversations with Agent C. Observation permissions must be configurable per-peer-per-session.

Why It Matters. This enables epistemically honest observation scoping. In multi-agent systems, each agent maintains its own representation of users and teammates, enabling emergent coordination without explicit programming and without information leakage across observation boundaries.
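
The observer/observed scoping rule can be sketched as a filter over observation records. The Observation shape and visibleTo helper are illustrative assumptions, not the actual collection schema:

```typescript
// Sketch: scoping observations to observer/observed pairs.
interface Observation {
  observerId: string;  // the peer that witnessed this
  observedId: string;  // the peer the observation is about
  sessionId: string;
  content: string;
}

// Agent A's model of User B may only draw on what A actually witnessed --
// observations made by other peers are excluded by construction.
function visibleTo(
  observations: Observation[],
  observerId: string,
  observedId: string
): Observation[] {
  return observations.filter(
    (o) => o.observerId === observerId && o.observedId === observedId
  );
}
```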

3.3 Pattern 3: Deriver/Dreamer Pipeline

Definition. A two-stage reasoning pipeline where a fast "deriver" extracts explicit facts and deductive conclusions from each interaction in real time, while a slow "dreamer" performs asynchronous background consolidation to generate inductive generalizations and abductive hypotheses.

Cognitive Foundation. The distinction between System 1 (fast, automatic) and System 2 (slow, deliberate) reasoning [21] maps directly to the deriver/dreamer split. The deriver handles System 1 fact extraction; the dreamer handles System 2 pattern recognition and theory-building.

Engineering Requirement. The deriver must run synchronously (or near-synchronously) after each message, completing in under 2 seconds. The dreamer must run asynchronously during idle periods, with configurable trigger thresholds (e.g., activate after N new observations accumulate). Both must produce typed outputs: explicit observations, deductive conclusions, inductive generalizations, and abductive hypotheses, in order of decreasing certainty.

Knowledge Hierarchy:

                    Abductive Hypotheses
                   (probable explanations)
                          /    \
                Inductive Generalizations
               (patterns across observations)
                      /          \
              Deductive Conclusions
             (certain inferences from facts)
                    /              \
              Explicit Observations
             (direct facts from messages)
                  /                  \
           Raw Interaction Data
          (chat messages, tool calls)

Why It Matters. Without hierarchical reasoning, memory is just storage. The deriver ensures nothing is lost in real time. The dreamer ensures the system gets smarter over time, not just fuller.
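
The four typed outputs in the hierarchy above can be modeled as a discriminated union, with a routing rule splitting work between deriver and dreamer. The names are illustrative, not the pipeline's actual types:

```typescript
// Sketch: the knowledge hierarchy as typed outputs, in order of
// decreasing certainty from bottom (explicit) to top (abductive).
type KnowledgeKind =
  | "explicit"    // deriver: direct facts from messages
  | "deductive"   // deriver: certain inferences from facts
  | "inductive"   // dreamer: patterns across observations
  | "abductive";  // dreamer: probable explanations

interface KnowledgeNode {
  kind: KnowledgeKind;
  statement: string;
  confidence: number;    // decreases as we move up the hierarchy
  derivedFrom: string[]; // ids of supporting lower-level nodes
}

// One plausible routing rule: the fast deriver emits explicit/deductive
// nodes synchronously; inductive/abductive nodes are left to the dreamer.
function handledByDeriver(node: KnowledgeNode): boolean {
  return node.kind === "explicit" || node.kind === "deductive";
}
```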

3.4 Pattern 4: Surprisal-Guided Consolidation

Definition. A mechanism that allocates expensive background reasoning resources proportionally to the information-theoretic surprisal of observations --- unexpected behaviors trigger deep consolidation while expected behaviors receive only shallow processing.

Cognitive Foundation. Humans evolved to leverage prediction and surprisal-based reasoning [7]. The brain allocates disproportionate attention to novel stimuli that violate predictions. Surprisal S(x) = -log P(x) where P(x) is the probability of observed behavior under the current user model.

Engineering Requirement. The system must maintain a probabilistic model of each user's expected behavior. After each interaction, it must compute the surprisal of the observed behavior against this model. Observations exceeding a surprisal threshold trigger deeper dreamer processing. The threshold must be adaptive --- users with high behavioral variance should have higher thresholds to avoid false positives.

Why It Matters. Without surprisal filtering, the dreamer processes all observations equally, wasting compute on mundane, expected behavior. With it, expensive reasoning focuses precisely where user model updates would produce the most value.
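
The surprisal computation and an adaptive threshold can be sketched directly from the definition above. The variance-based adaptation rule is an assumption; the text specifies only that high-variance users need higher thresholds:

```typescript
// Sketch: surprisal scoring with a variance-adapted trigger threshold.
function surprisal(probability: number): number {
  // S(x) = -log2 P(x), measured in bits
  return -Math.log2(probability);
}

function shouldTriggerDreamer(
  probability: number,     // P(observed behavior | current user model)
  baseThresholdBits: number,
  userVariance: number     // e.g. variance of the user's recent surprisal scores
): boolean {
  // One possible adaptation: scale the threshold with behavioral variance,
  // so erratic users do not constantly trigger deep consolidation.
  const threshold = baseThresholdBits * (1 + userVariance);
  return surprisal(probability) > threshold;
}
```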

3.5 Pattern 5: Dialectic Retrieval

Definition. An agentic query mechanism where the system iteratively searches its own memory, reasons about what it has found, and decides whether to search again --- replacing single-shot vector similarity with a multi-round reasoning process.

Cognitive Foundation. The Socratic dialectic --- arriving at truth through reasoned dialogue between two parties (here, the application and the memory system) [22]. Unlike retrieval, which returns ranked documents, dialectic retrieval returns synthesized answers grounded in evidence chains.

Engineering Requirement. The dialectic agent must have access to multiple search tools (semantic search, temporal search, most-frequently-derived, recent observations, peer card lookup). It must support configurable reasoning levels with different latency/quality tradeoffs. At minimum, five levels:

Level     Latency   Use Case                                         Model Tier
-------   -------   ----------------------------------------------   ------------
Minimal   <500ms    Simple fact lookup ("What's the user's name?")   Fast/cheap
Low       <2s       Recent session recall                            Fast/cheap
Medium    <5s       Cross-session reasoning                          Mid-tier
High      <15s      Complex preference synthesis                     High-tier
Maximum   <60s      Deep psychological profiles, research reports    Highest-tier

Why It Matters. Traditional RAG asks "what text is similar?" and returns chunks. Dialectic retrieval asks "what do we actually know?" and returns reasoned answers. The difference is the difference between a search engine and an analyst.
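
The multi-round search-reason-search loop can be sketched as below. The tool rotation and sufficiency predicate are illustrative assumptions; a real dialectic agent would let the LLM choose the next tool and judge sufficiency:

```typescript
// Sketch: dialectic retrieval as an iterative loop over search tools.
interface SearchTool {
  name: string;                 // e.g. "semantic", "temporal", "peer-card"
  run(query: string): string[]; // returns evidence snippets
}

function dialecticRetrieve(
  query: string,
  tools: SearchTool[],
  maxRounds: number,
  enough: (evidence: string[]) => boolean // stands in for LLM sufficiency judgment
): string[] {
  const evidence: string[] = [];
  for (let round = 0; round < maxRounds; round++) {
    const tool = tools[round % tools.length]; // naive rotation, for illustration
    evidence.push(...tool.run(query));
    if (enough(evidence)) break; // reason about what was found; stop if sufficient
  }
  return evidence;
}
```

The reasoning levels in the table above would map to different maxRounds values and model tiers for the sufficiency judgment.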

3.6 Pattern 6: Tiered Virtual Memory (MemGPT)

Definition. A memory architecture that treats the LLM context window as RAM and external storage as disk, with explicit paging mechanisms that move information between tiers based on relevance, recency, and available capacity.

Cognitive Foundation. The operating-systems metaphor [8] maps to the cognitive distinction between working memory (limited capacity, fast access) and long-term memory (unlimited capacity, slower access). The "inner monologue" --- private reasoning before each response --- mirrors the executive function that manages attention and working memory allocation.

Engineering Requirement. Three memory tiers with explicit management:

  • Core Memory (always in context): Character-limited blocks for user profile and agent persona. Updated via self-editing tool calls (core_memory_replace, core_memory_append).
  • Recall Memory (page cache): Complete searchable conversation history. Paged back on demand via conversation_search.
  • Archival Memory (cold storage): Vector/graph-indexed long-term knowledge. Queried via archival_memory_search, written via archival_memory_insert.

Why It Matters. Without tiered memory, systems either keep everything in context (exceeding token limits) or keep nothing (losing continuity). Tiered memory enables unlimited memory depth with bounded context cost.
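
The character-limited, self-edited core memory tier can be sketched as a small class. The method names mirror the tool calls listed above; the class itself is an illustrative simplification, not the MemGPT/Letta implementation:

```typescript
// Sketch: a character-limited core-memory block with self-editing operations.
class CoreMemoryBlock {
  constructor(private limit: number, private content = "") {}

  // Append fails when the block is full -- the agent must page
  // older material out to archival memory instead.
  coreMemoryAppend(text: string): boolean {
    if (this.content.length + text.length > this.limit) return false;
    this.content += text;
    return true;
  }

  coreMemoryReplace(oldText: string, newText: string): boolean {
    const next = this.content.replace(oldText, newText);
    if (next.length > this.limit) return false;
    this.content = next;
    return true;
  }

  read(): string {
    return this.content;
  }
}
```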

3.7 Pattern 7: ACT-R Activation Decay

Definition. A scoring mechanism where memory node relevance is computed as a function of recency, frequency of past retrievals, and contextual match --- implementing mathematically grounded "forgetting" and "strengthening" dynamics.

Cognitive Foundation. ACT-R's activation formula [3]:

Activation(i) = ln(Σ_j t_j^(-d)) + Σ_k W_k × S_ki

Where t_j are the times since past retrievals, d ≈ 0.5 is the decay parameter, W_k are attention weights, and S_ki are associative strengths. Memories accessed recently and frequently stay highly activated; others naturally decay.

Engineering Requirement. Every memory node must track: last_accessed (timestamp), access_count (integer), activation_score (float, recomputed on access and during periodic decay sweeps), and stability (float, increases with each successful retrieval). A background job must periodically recompute activation scores across all nodes, demoting low-activation nodes to archived status.

Why It Matters. Without activation decay, memory grows unboundedly and retrieval quality degrades as noise overwhelms signal. With it, the system naturally retains frequently-used, recently-accessed knowledge while gracefully forgetting rarely-needed information --- exactly as humans do.
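
The activation formula above translates directly into code. This is a minimal sketch of the standard ACT-R equation; parameter defaults and the function name are illustrative:

```typescript
// Sketch: ACT-R activation = base-level term + associative term.
function actrActivation(
  secondsSinceRetrievals: number[],    // t_j: time since each past retrieval
  d = 0.5,                             // decay parameter
  attentionWeights: number[] = [],     // W_k
  associativeStrengths: number[] = []  // S_ki
): number {
  // Base level: ln(sum over past retrievals of t_j^(-d)).
  const baseLevel = Math.log(
    secondsSinceRetrievals.reduce((sum, t) => sum + Math.pow(t, -d), 0)
  );
  // Associative component: sum of W_k * S_ki.
  const associative = attentionWeights.reduce(
    (sum, w, k) => sum + w * (associativeStrengths[k] ?? 0),
    0
  );
  return baseLevel + associative;
}
```

Recent and frequent retrievals raise activation; a node last touched hours ago scores far below one touched minutes ago, which is what the periodic decay sweep exploits when demoting nodes.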

3.8 Pattern 8: Narrative Memory

Definition. A memory organization where interactions are stored not as flat facts but as coherent narrative structures with temporal, causal, and thematic edges --- enabling retrieval of "stories" rather than isolated data points.

Cognitive Foundation. Human long-term memory is organized as autobiographical narratives [4, 5, 6]. Recall is reconstructive, not retrieval --- we don't pull records from a database but reconstruct stories from fragments. The "narrative self" is the coherent story a person tells about who they are and what has happened to them.

Engineering Requirement. The graph database must support:

  • Episode nodes: Individual interactions with timestamps and context
  • Narrative nodes: Higher-level story structures that aggregate episodes
  • Causal edges (CAUSED_BY): Linking effects to causes across time
  • Temporal edges (FOLLOWS): Establishing chronological sequence
  • Thematic edges (INVOLVES): Linking episodes to recurring themes
  • Contradiction edges (CONTRADICTS): Marking conflicts between narrative elements

Example. Instead of storing the flat fact "User X prefers Python", the narrative memory stores: "In Q3 2025, when building the inventory service, X consistently rejected Java proposals and cited team expertise and deployment constraints as reasons. This contradicts her initial onboarding preference for Java expressed in Q1 2025, suggesting a preference evolution driven by team dynamics rather than language capability."

Why It Matters. Narrative memory preserves why and how, not just what. When a system needs to advise a user, it can draw on the full narrative context of their decisions rather than isolated preference flags.

3.9 Pattern 9: Prospective Memory

Definition. Memory for future intentions --- the ability to anticipate what a user will need before they ask, triggered by environmental cues, behavioral patterns, and temporal context.

Cognitive Foundation. Human prospective memory handles future-oriented intentions: "remember to call Alice when you get to the office." It is triggered by environmental cues, not deliberate recall. For AI agents, this means predicting needs based on behavioral patterns.

Engineering Requirement. The system must support:

  • Intent nodes: Stored future intentions with trigger conditions
  • Behavioral prediction: Pattern matching against historical behavior to anticipate next actions
  • Contextual triggers: When a user enters a context (opens a project, starts a meeting, begins a workflow), relevant prospective memories surface automatically
  • Proactive surfacing: The system pushes relevant information without being asked

Why It Matters. Prospective memory transforms AI from reactive ("answer when asked") to proactive ("surface what you'll need"). The Reflective Memory Management framework [18] demonstrates that prospective reflection improves long-term personalized dialogue by 15-20% over retrospective-only approaches.
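
Cue-driven surfacing of intent nodes can be sketched as a match between stored trigger conditions and the current context. The IntentNode shape is an illustrative assumption:

```typescript
// Sketch: intent nodes surfaced when the current context satisfies
// their trigger conditions -- cue-driven, not deliberate, recall.
interface IntentNode {
  intent: string;                  // e.g. "surface renewal pricing notes"
  trigger: Record<string, string>; // e.g. { project: "acme-renewal" }
}

function surfaceIntents(
  intents: IntentNode[],
  context: Record<string, string> // the context the user just entered
): IntentNode[] {
  // An intent fires only when every trigger key/value matches the context.
  return intents.filter((i) =>
    Object.entries(i.trigger).every(([k, v]) => context[k] === v)
  );
}
```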

3.10 Pattern 10: Metacognition

Definition. The system's ability to reason about its own knowledge state --- detecting gaps, quantifying uncertainty, and distinguishing between "I know this" and "I think I know this."

Cognitive Foundation. Metacognition --- thinking about thinking --- is a core executive function [10, 11, 17]. Systems that know what they don't know are fundamentally more reliable than those that confabulate. Griot et al. [10] demonstrate that LLMs begin debates at 72.9% confidence (vs. a rational 50% baseline), indicating systematic overconfidence that must be externally corrected.

Engineering Requirement. The system must track:

  • Confidence envelope: Per-query confidence based on evidence density (number and quality of supporting memory nodes)
  • Knowledge coverage: Per-topic-domain coverage metrics (how many observations support knowledge in each area)
  • Contradiction detection: Automatic flagging when memory nodes conflict
  • Escalation triggers: When confidence falls below threshold, trigger clarifying questions or deeper retrieval rather than generating uncertain answers

Why It Matters. Without metacognition, the system generates confident-sounding answers regardless of evidence quality. With it, the system can say "I don't have enough information about your testing preferences to make a good recommendation --- can you tell me more?" This transforms memory from a liability (confident confabulation) into an asset (honest, calibrated assistance).
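
A confidence envelope driven by evidence density, with an escalation decision, can be sketched as below. The saturating scoring curve and the 0.6 threshold are assumptions chosen for illustration:

```typescript
// Sketch: confidence from evidence density, with low-confidence escalation.
interface EvidenceNode {
  quality: number; // 0..1, e.g. source reliability x recency
}

function confidence(evidence: EvidenceNode[]): number {
  // Saturating curve: more (and better) evidence raises confidence,
  // but it never reaches certainty.
  const mass = evidence.reduce((s, e) => s + e.quality, 0);
  return 1 - Math.exp(-mass);
}

function shouldEscalate(evidence: EvidenceNode[], threshold = 0.6): boolean {
  // Below threshold: ask a clarifying question or retrieve deeper
  // rather than generating an uncertain answer.
  return confidence(evidence) < threshold;
}
```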

3.11 Pattern 11: Principled Forgetting

Definition. Mathematically grounded mechanisms for memory decay and removal that prevent unbounded growth while preserving important information --- implementing "forgetting as a feature, not a bug."

Cognitive Foundation. Ebbinghaus forgetting curves [16]: memory strength decays as S(t) = S_0 · e^(-t/τ), where τ is stability (increased by each successful retrieval). Interference-based forgetting: similar memories compete, and the stronger one suppresses the weaker.

Engineering Requirement. Multiple forgetting mechanisms working in concert:

  • Temporal decay: Activation scores decrease over time following Ebbinghaus curves
  • Interference: When new observations contradict old ones, old nodes receive SUPERSEDED_BY edges and are removed from active retrieval (but preserved for historical analysis)
  • Dual-buffer probation: New observations enter a "probationary" buffer; only after N retrievals or time-based promotion do they enter long-term storage [23]
  • Protected nodes: Safety-critical knowledge (compliance rules, security constraints, user allergies) receives is_protected: true flags that survive all consolidation cycles
  • Capacity governance: Per-user memory budgets that trigger consolidation when exceeded

Why It Matters. Memory without forgetting becomes noise. Enterprise systems accumulate millions of observations per user per year. Without principled forgetting, retrieval quality degrades as the system drowns in stale, contradictory, and irrelevant data.
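
The Ebbinghaus decay from Section 3.11's foundation, combined with stability reinforcement and protected-node exemption, can be sketched as below. The 1.5x stability multiplier is an illustrative assumption:

```typescript
// Sketch: Ebbinghaus-style retention with protected nodes and
// retrieval-driven stability growth.
interface MemoryNode {
  strength: number;     // S_0 at last reinforcement
  stability: number;    // tau: grows with each successful retrieval
  ageSeconds: number;   // time since last reinforcement
  isProtected: boolean; // compliance/safety knowledge never decays out
}

function retention(node: MemoryNode): number {
  if (node.isProtected) return node.strength; // exempt from all decay
  // S(t) = S_0 * e^(-t / tau)
  return node.strength * Math.exp(-node.ageSeconds / node.stability);
}

function onSuccessfulRetrieval(node: MemoryNode): MemoryNode {
  // Each retrieval resets the clock and increases stability,
  // flattening the forgetting curve for frequently-used knowledge.
  return { ...node, ageSeconds: 0, stability: node.stability * 1.5 };
}
```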

3.12 Pattern 12: Collective/Social Memory

Definition. Memory that flows between users within an organization, enabling team-level knowledge, organizational learning, and cross-user pattern recognition while preserving individual privacy.

Cognitive Foundation. Organizations have memory that transcends individuals [20]. When an employee leaves, institutional knowledge evaporates. Collective intelligence requires collective memory (distributed knowledge), collective attention (synchronized focus), and collective reasoning (shared frameworks).

Engineering Requirement. Three memory scopes with promotion mechanisms:

  • Individual (user-private): Personal preferences, work patterns, private notes
  • Team (shared within workspace/org): Shared playbooks, common patterns, team standards
  • Organizational (cross-team, anonymized): Industry patterns, best practices, aggregate insights

Promotion rules: When individual insights prove broadly useful (retrieved by multiple users, high activation scores), they can be proposed for promotion to team scope. Team insights that generalize across teams can be promoted to organizational scope. All promotions must respect privacy constraints --- no personally identifiable information leaks into team or organizational scopes.

Why It Matters. Most enterprise AI treats each user as an island. A sales team of ten people independently discovers the same best practices, makes the same mistakes, and learns the same lessons. Collective memory enables organizational learning without requiring explicit knowledge sharing.

3.13 Pattern 13: Situated Cognition

Definition. Memory that is encoded with rich contextual metadata --- where the user was, what tool they were using, what project they were working on, who they were collaborating with --- enabling context-dependent retrieval that surfaces memories matching the current situation.

Cognitive Foundation. The encoding specificity principle [19]: memory recall is enhanced when retrieval cues match encoding context. Contextual Memory Intelligence (CMI) proposes four context dimensions: type (temporal, emotional, procedural, historical), source (user-generated, system-inferred, ambient, organizational), scope (task-specific vs. enterprise-wide), and state (active, latent, decayed, contradictory).

Engineering Requirement. Every memory node must be encoded with context metadata:

Example context metadata (JSON):
{
  "project": "inventory-api",
  "tool": "Claude Code",
  "phase": "design",
  "collaborators": ["alice", "bob"],
  "dashboard_section": "nexusros/contacts",
  "device": "desktop",
  "time_of_day": "morning",
  "plugin": "prosecreator"
}

Retrieval queries must include context filters. When a user is working in project:inventory-api, memories encoded in that context are weighted 2-3x higher than context-free memories.

Why It Matters. Without situated cognition, a user's ProseCreator writing preferences bleed into their NexusROS email drafting. With it, the system knows that "prefers lyrical prose" applies to fiction writing, not sales emails. Context is the difference between a personalization engine and a confusion engine.
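
The 2-3x weighting of context-matched memories can be sketched as a scoring adjustment. The linear interpolation over partial matches is an assumption; the text specifies only the boost range for full matches:

```typescript
// Sketch: context-weighted retrieval scoring.
interface SituatedMemory {
  baseScore: number;               // e.g. vector similarity
  context: Record<string, string>; // metadata captured at encoding time
}

function situatedScore(
  memory: SituatedMemory,
  currentContext: Record<string, string>,
  boost = 2.5 // within the 2-3x range described in the text
): number {
  const keys = Object.keys(currentContext);
  const matches = keys.filter((k) => memory.context[k] === currentContext[k]).length;
  const matchRatio = keys.length === 0 ? 0 : matches / keys.length;
  // A full context match multiplies the base score by `boost`;
  // a context-free memory keeps its base score unchanged.
  return memory.baseScore * (1 + (boost - 1) * matchRatio);
}
```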


4. Current Nexus Architecture Analysis

4.1 System Overview

The Adverant Nexus platform is a production TypeScript/Node.js microservices stack running on K3s Kubernetes. The core memory infrastructure is provided by the GraphRAG service (nexus-graphrag), which coordinates three databases: PostgreSQL (relational storage, full-text search), Neo4j (graph relationships, entity/fact triples), and Qdrant (vector similarity search). Redis provides query caching (5-minute TTL) and embedding caching (24-hour TTL).

4.2 Storage Architecture

The central table is graphrag.unified_content, which stores all memory types with columns for content_type, content, user_id, session_id, metadata (JSONB), and embedding_generated. The UnifiedStorageEngine executes multi-database writes as a saga pattern with rollback handlers, ensuring that a memory is either stored in all three databases or none.
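The all-or-nothing write can be sketched as a compensating-transaction loop; the `SagaStep` interface here is a hypothetical stand-in for the PostgreSQL, Neo4j, and Qdrant write handlers, not the actual UnifiedStorageEngine API:

```typescript
interface SagaStep {
  name: string;
  execute: () => Promise<void>;
  rollback: () => Promise<void>; // compensating action
}

// Run steps in order; on failure, undo completed steps in reverse order
// so the memory ends up stored in all three databases or in none.
async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.execute();
      completed.push(step);
    } catch (err) {
      for (const done of completed.reverse()) {
        await done.rollback();
      }
      throw new Error(`saga failed at ${step.name}: ${err}`);
    }
  }
}
```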

                    +-----------------------+
                    |  UnifiedMemoryRouter  |
                    |   (memory-router.ts)  |
                    +-----------+-----------+
                                |
                    +-----------v-----------+
                    |     MemoryTriage      |
                    |  (memory-triage.ts)   |
                    | Classify: entity/fact |
                    | extraction needed?    |
                    +-----------+-----------+
                                |
                    +-----------v-----------+
                    | UnifiedStorageEngine  |
                    | (unified-storage-     |
                    |  engine.ts)           |
                    +-----------+-----------+
                       /        |        \
              +-------+   +----+----+   +--------+
              |  PG   |   | Neo4j   |   | Qdrant |
              | Store |   | Graph   |   | Vector |
              +-------+   +---------+   +--------+

4.3 Retrieval Architecture

The HybridSearchEngine blends three retrieval signals: vector similarity from Qdrant (60% weight), metadata and title matching (30% weight), and PostgreSQL full-text search (10% weight). Embeddings are generated locally via MageAgent's embedding service at localhost:3457, using BGE-small (384-dim, fast) and Harrier-27B (5376-dim, deep) models with BGE-reranker-v2-m3 for cross-encoder reranking.
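The blend can be sketched as a weighted sum using the stated 60/30/10 split; the assumption that each signal is pre-normalized to [0, 1] is mine, not stated in the source:

```typescript
interface RetrievalSignals {
  vectorSimilarity: number; // Qdrant similarity, assumed normalized to [0, 1]
  metadataMatch: number;    // metadata/title match score, assumed in [0, 1]
  fullTextRank: number;     // PostgreSQL ts_rank, assumed normalized to [0, 1]
}

// Weights from the HybridSearchEngine description above.
const WEIGHTS = { vector: 0.6, metadata: 0.3, fullText: 0.1 };

function hybridScore(s: RetrievalSignals): number {
  return (
    WEIGHTS.vector * s.vectorSimilarity +
    WEIGHTS.metadata * s.metadataMatch +
    WEIGHTS.fullText * s.fullTextRank
  );
}
```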

4.4 Context Injection Points

Four code locations in the existing architecture accept user-specific context injection:

Injection Point 1: getSystemPromptForUser() --- nexus-gateway/src/services/system-prompt-builder.ts
The highest-leverage injection point. Called before every LLM request in the chat path. Already branches on userEmail (admin detection) and userTier (subscription tier). A userProfile?: UserPreferences parameter would slot in cleanly.

Injection Point 2: buildPageContextSuffix() --- same file
Constructs context blocks from PageContext objects already carrying entityType, entityId, entityName, entityContent, dashboardSection, and arbitrary metadata. A "things I know about this user" block could be appended here with zero architectural change.

Injection Point 3: ConversationMemory.getMemoryContext() --- nexus-gateway/src/services/conversation-memory.ts
Already retrieves relevantMemories[] from GraphRAG. Queries are per-session but could be scoped to userId for cross-session user preferences.

Injection Point 4: buildMessages() --- nexus-orchestrator/src/services/execution-engine.ts (line 346)
Assembles messages for skill execution. Currently injects only the skill's systemPrompt with zero user context. A user-profile lookup could augment messages with user preferences before the skill prompt.
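Injection Point 4 could be augmented roughly as follows; the message shape, the `UserPreferences` type, and the function name are hypothetical illustrations, and the real `buildMessages()` signature may differ:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface UserPreferences {
  facts: string[]; // durable peer-card facts, e.g. "Prefers Playwright"
}

// Prepend a user-profile block ahead of the skill's own system prompt,
// so skills inherit user context without any per-skill changes.
function buildMessagesWithProfile(
  skillSystemPrompt: string,
  userMessage: string,
  profile?: UserPreferences
): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (profile && profile.facts.length > 0) {
    messages.push({
      role: "system",
      content: `Known about this user:\n- ${profile.facts.join("\n- ")}`,
    });
  }
  messages.push({ role: "system", content: skillSystemPrompt });
  messages.push({ role: "user", content: userMessage });
  return messages;
}
```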

4.5 Identified Gaps

| Capability | Current State | Required State |
| --- | --- | --- |
| User profiles | None | Peer card per user with 40-fact biographical profile |
| Behavioral observation | Interaction logging only | Explicit/deductive fact extraction per message |
| Background consolidation | None | Dreamer job for inductive/abductive reasoning |
| Activation scoring | None | ACT-R decay on all memory nodes |
| Narrative structure | Flat episode nodes | Narrative nodes with causal/temporal edges |
| Prospective memory | None | Intent nodes with trigger conditions |
| Collective memory | memory_scope enum exists (user/app/company) | Promotion mechanism between scopes |
| Forgetting | None (append-only) | Ebbinghaus decay, interference, probation |
| Metacognition | None | Confidence envelopes, contradiction detection |
| Dialectic retrieval | Single-shot hybrid search | Agentic multi-round reasoning |
| Situated context | Partial (dashboard_section in PageContext) | Full context encoding on all nodes |
| Procedural memory | Declared in ConversationMemory comments, not implemented | Learned user preferences, workflows |
| Observation scoping | No observer/observed separation | Per-peer-pair observation collections |

5. Integration Architecture

5.1 Database Schema Extensions

5.1.1 Peer Profiles Table

SQL
CREATE TABLE graphrag.peer_profiles (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id VARCHAR(255) NOT NULL,
  company_id VARCHAR(255) NOT NULL,
  peer_type VARCHAR(50) DEFAULT 'human',  -- 'human' | 'agent'
  peer_card JSONB DEFAULT '[]'::jsonb,     -- Max 40 durable facts
  observation_count INTEGER DEFAULT 0,
  last_observation_at TIMESTAMPTZ,
  last_dreamer_run_at TIMESTAMPTZ,
  dreamer_version INTEGER DEFAULT 0,
  surprisal_threshold FLOAT DEFAULT 2.0,
  configuration JSONB DEFAULT '{}'::jsonb,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW(),
  UNIQUE(user_id, company_id)
);

CREATE INDEX idx_peer_profiles_user ON graphrag.peer_profiles(user_id);
CREATE INDEX idx_peer_profiles_company ON graphrag.peer_profiles(company_id);

5.1.2 Activation Scoring Columns

SQL
ALTER TABLE graphrag.unified_content
  ADD COLUMN activation_score FLOAT DEFAULT 1.0,
  ADD COLUMN last_accessed TIMESTAMPTZ DEFAULT NOW(),
  ADD COLUMN access_count INTEGER DEFAULT 0,
  ADD COLUMN stability FLOAT DEFAULT 0.5,
  ADD COLUMN is_protected BOOLEAN DEFAULT false,
  ADD COLUMN is_archived BOOLEAN DEFAULT false,
  ADD COLUMN probation_until TIMESTAMPTZ,
  ADD COLUMN observation_level VARCHAR(20) DEFAULT 'explicit',
  ADD COLUMN observer_id VARCHAR(255),
  ADD COLUMN context_encoding JSONB DEFAULT '{}'::jsonb;

CREATE INDEX idx_unified_activation ON graphrag.unified_content(activation_score)
  WHERE is_archived = false;
CREATE INDEX idx_unified_observer ON graphrag.unified_content(observer_id, user_id);

5.1.3 Prospective Memory Table

SQL
CREATE TABLE graphrag.prospective_memory (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id VARCHAR(255) NOT NULL,
  company_id VARCHAR(255) NOT NULL,
  trigger_type VARCHAR(50) NOT NULL,     -- 'context' | 'temporal' | 'behavioral'
  trigger_condition JSONB NOT NULL,       -- e.g. {"project": "inventory-api"}
  content TEXT NOT NULL,                  -- What to surface
  source_memory_ids UUID[],               -- Which memories generated this
  confidence FLOAT DEFAULT 0.5,
  times_surfaced INTEGER DEFAULT 0,
  times_acted_on INTEGER DEFAULT 0,
  expires_at TIMESTAMPTZ,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
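Surfacing logic for these rows can be sketched as a subset match between trigger_condition and the current context. This is a simplification: it handles only the 'context' trigger type, and a real implementation would also evaluate temporal and behavioral triggers:

```typescript
interface ProspectiveMemory {
  triggerType: "context" | "temporal" | "behavioral";
  triggerCondition: Record<string, unknown>; // e.g. { project: "inventory-api" }
  content: string;                            // what to surface
  expiresAt?: Date;
}

// A context-triggered memory surfaces when every key in its trigger
// condition matches the current context and it has not expired.
function shouldSurface(
  memory: ProspectiveMemory,
  context: Record<string, unknown>,
  now: Date = new Date()
): boolean {
  if (memory.expiresAt && memory.expiresAt < now) return false;
  if (memory.triggerType !== "context") return false; // other types omitted in this sketch
  return Object.entries(memory.triggerCondition).every(
    ([k, v]) => context[k] === v
  );
}
```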

5.1.4 Neo4j Narrative Nodes

Cypher
// Narrative node type
CREATE CONSTRAINT narrative_id IF NOT EXISTS
  FOR (n:Narrative) REQUIRE n.id IS UNIQUE;

// Narrative properties: id, title, summary, theme, user_id,
//   company_id, start_time, end_time, episode_count

// Edges
// (:Episode)-[:PART_OF]->(:Narrative)
// (:Narrative)-[:FOLLOWS]->(:Narrative)
// (:Narrative)-[:CAUSED_BY]->(:Narrative)
// (:Narrative)-[:INVOLVES]->(:ExtractedEntity)
// (:Episode)-[:CONTRADICTS]->(:Episode)
// (:Episode)-[:LIKELY_NEEDS_NEXT]->(:Episode)

5.2 Deriver Service

The deriver runs as a background task triggered by the existing InteractionCaptureMiddleware on every chat message.

Message arrives
      |
      v
+--------------------+
| InteractionCapture |
| Middleware         |
| (existing)         |
+---------+----------+
          |
          | background_task.add()
          v
+---------+----------+
| Deriver Queue      |
| (Redis BullMQ)     |
+---------+----------+
          |
          v
+---------+----------+
| Deriver Worker     |
| 1. Parse message   |
| 2. Extract facts   |
| 3. Deduce concl.   |
| 4. Store as        |
|    unified_content |
|    (observation    |
|    level: explicit |
|    / deductive)    |
| 5. Compute         |
|    surprisal       |
| 6. If surprisal >  |
|    threshold:      |
|    enqueue dreamer |
+--------------------+

Latency target: <2 seconds per message. The deriver uses a fast, cheap model (Haiku-tier) with a minimal prompt that instructs: "Extract atomic, self-contained facts from this message. Distinguish what the user directly stated from what they referenced about others."
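Surprisal here is the standard information-theoretic quantity -log2(p): how unexpected a new observation is given what is already believed about the user. A minimal sketch, leaving the probability estimate itself abstract:

```typescript
// Surprisal of an observation, in bits. An observation the model
// considered near-certain (p ~ 1) carries ~0 bits; an unlikely one
// carries many bits and should wake the dreamer.
function surprisalBits(probability: number): number {
  const p = Math.min(Math.max(probability, 1e-9), 1); // clamp to avoid log(0)
  return -Math.log2(p);
}

// The peer_profiles.surprisal_threshold default of 2.0 bits corresponds
// to observations the model rated below a 25% prior probability.
function exceedsThreshold(probability: number, threshold = 2.0): boolean {
  return surprisalBits(probability) > threshold;
}
```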

5.3 Dreamer Service

The dreamer runs as a scheduled orchestrator skill (cognitive_dreamer) triggered either by surprisal events or on a periodic schedule during idle periods.

Trigger (surprisal event OR idle schedule)
      |
      v
+------------------+
| Dreamer           |
| Orchestrator      |
+--------+---------+
         |
    +----+----+
    |         |
    v         v
+-------+ +--------+
|Deduct.| |Induct. |
|Special| |Special.|
|ist    | |ist     |
+---+---+ +---+----+
    |         |
    v         v
+------------------+
| Consolidation    |
| 1. Merge dupes   |
| 2. Resolve       |
|    contradictions|
| 3. Update peer   |
|    card (40 max) |
| 4. Archive       |
|    superseded    |
|    observations  |
| 5. Recompute     |
|    activation    |
|    scores        |
+------------------+

The Deduction Specialist explores the observation corpus and generates higher-level deductive conclusions from explicit facts. Example: "User stated they code in Python and work at a startup" -> "User is likely a developer" (deductive, high confidence).

The Induction Specialist runs after deduction, building on its outputs to identify patterns and generalizations. Example: "User consistently asks follow-up questions about implementation details across 15 sessions" -> "User is a deep learner who values thoroughness over speed" (inductive, moderate confidence).

5.4 Dialectic Endpoint

New API endpoint: POST /graphrag/api/peers/:userId/query

Application query
      |
      v
+------------------+
| Dialectic Agent  |
| (agentic loop)   |
+--------+---------+
         |
    Tools available:
    |  - search_observations(query, topK)
    |  - get_most_derived(topK)
    |  - get_recent_observations(limit)
    |  - get_peer_card(userId)
    |  - get_session_summary(sessionId)
    |  - search_narratives(query)
         |
    Reasoning levels:
    |  minimal -> 1 tool call, fast model
    |  low     -> 2 tool calls, fast model
    |  medium  -> 5 tool calls, mid model
    |  high    -> 10 tool calls, high model
    |  max     -> 20 tool calls, highest model
         |
         v
+------------------+
| Synthesized      |
| answer with      |
| evidence chain   |
+------------------+
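The reasoning levels above reduce to a small budget table. The tool-call counts come from the diagram; the model-tier names are placeholders since the text specifies only relative tiers:

```typescript
type ReasoningLevel = "minimal" | "low" | "medium" | "high" | "max";

interface ReasoningBudget {
  maxToolCalls: number;
  modelTier: "fast" | "mid" | "high" | "highest";
}

// Budgets from the dialectic endpoint description.
const REASONING_BUDGETS: Record<ReasoningLevel, ReasoningBudget> = {
  minimal: { maxToolCalls: 1, modelTier: "fast" },
  low:     { maxToolCalls: 2, modelTier: "fast" },
  medium:  { maxToolCalls: 5, modelTier: "mid" },
  high:    { maxToolCalls: 10, modelTier: "high" },
  max:     { maxToolCalls: 20, modelTier: "highest" },
};

function budgetFor(level: ReasoningLevel): ReasoningBudget {
  return REASONING_BUDGETS[level];
}
```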

5.5 Activation Decay Job

A scheduled background job recomputes activation scores using the ACT-R formula:

TypeScript
interface MemoryNode {
  retrievalTimes: number[]; // epoch-ms timestamps of past retrievals
  contextWeights: { weight: number; strength: number }[];
}

function computeActivation(node: MemoryNode, now: number = Date.now()): number {
  // Base-level learning: power-law decay over the retrieval history.
  const baseLevelActivation = Math.log(
    node.retrievalTimes.reduce((sum, t) => {
      const age = Math.max((now - t) / 1000, 1); // seconds; clamp so age^-d stays finite
      return sum + Math.pow(age, -0.5);          // decay exponent d = 0.5
    }, 0)
  );

  // Spreading activation from the current context.
  const spreadingActivation = node.contextWeights.reduce(
    (sum, { weight, strength }) => sum + weight * strength,
    0
  );

  // A never-retrieved node has log(0) = -Infinity base-level activation;
  // fall back to spreading activation alone rather than returning -Infinity.
  return Number.isFinite(baseLevelActivation)
    ? baseLevelActivation + spreadingActivation
    : spreadingActivation;
}

Nodes with activation_score < 0.1 and is_protected = false are moved to is_archived = true. Nodes in the dual-buffer probation period (probation_until > NOW()) that have not been retrieved are deleted entirely.

5.6 Forgetting Lifecycle

+-------------+     N retrievals     +-------------+
|  Probation  | ------------------->  |   Active    |
|  Buffer     |     OR time-based     |   Memory    |
+------+------+     promotion         +------+------+
       |                                      |
       | No retrievals                        | Activation
       | within window                        | decay over
       v                                      | time
+-------------+                        +------v------+
|  Deleted    |                        |  Low        |
|  (purged)   |                        |  Activation |
+-------------+                        +------+------+
                                              |
                                              | Below threshold
                                              v
                                       +------+------+
                                       |  Archived   |
                                       |  (cold      |
                                       |   storage)  |
                                       +------+------+
                                              |
                                              | Superseded by
                                              | newer observation
                                              v
                                       +------+------+
                                       | Superseded  |
                                       | (historical |
                                       |  only)      |
                                       +-------------+

5.7 Collective Memory Promotion

+------------------+     User insight      +------------------+
|  Individual      |     proves broadly    |  Team Scope      |
|  Memory Scope    | ---> useful (high    |  (shared within  |
|  (user-private)  |     activation,      |   org/workspace) |
+------------------+     multi-user       +--------+---------+
                          retrieval)                |
                                                    | Generalizes
                                                    | across teams
                                                    v
                                           +--------+---------+
                                           | Organizational   |
                                           | Scope (anonymized|
                                           | cross-team)      |
                                           +------------------+

Promotion rules:
1. Individual -> Team: Insight retrieved by 3+ users
   within same org AND activation_score > 0.8
2. Team -> Org: Pattern observed in 3+ teams AND
   no PII in content (automated PII scan)
3. All promotions: audit log entry, reversible
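Eligibility for rules 1 and 2 can be sketched as simple predicates; the `InsightStats` shape is hypothetical, and the PII check is a stubbed flag where a real implementation would call the automated PII scanner:

```typescript
interface InsightStats {
  activationScore: number;
  distinctRetrievers: string[]; // user ids within the same org
  containsPII: boolean;         // result of an upstream PII scan (stubbed here)
}

// Rule 1: Individual -> Team requires 3+ distinct retrievers in the
// same org AND activation_score > 0.8.
function eligibleForTeamPromotion(s: InsightStats): boolean {
  return new Set(s.distinctRetrievers).size >= 3 && s.activationScore > 0.8;
}

// Rule 2: Team -> Org requires the pattern in 3+ teams AND no PII.
function eligibleForOrgPromotion(s: InsightStats, teamsObservedIn: number): boolean {
  return teamsObservedIn >= 3 && !s.containsPII;
}
```

Per rule 3, callers of either predicate would also write an audit-log entry so promotions stay reversible.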

5.8 Platform Integration via MCP

5.8.1 How Honcho Connects to Claude Code

Honcho provides two integration mechanisms [24]:

  1. Claude Code Skill (claude-honcho): Installed via Claude Code plugin marketplace. Ships as a skill that Claude Code loads on startup. Configuration lives in ~/.honcho/config.json. The skill exposes memory tools (set_config, memory read/write) as Claude Code tool calls.

  2. MCP Remote Server at https://mcp.honcho.dev: Any MCP-compatible client (Claude Desktop, Cursor, Windsurf, VSCode) connects via:

JSON
{
  "command": "npx",
  "args": ["mcp-remote", "https://mcp.honcho.dev"],
  "env": { "AUTHORIZATION": "Bearer <token>" }
}

5.8.2 Adverant Nexus MCP Integration (Existing)

Nexus already has a mature MCP implementation:

  • services/nexus-mcp/ --- standalone @adverant/nexus-mcp-server using @modelcontextprotocol/sdk over stdio transport
  • services/nexus-gateway/src/mcp-adapter.ts --- gateway-side adapter with tier-based access control
  • services/nexus-mcp-gateway/ --- HTTP management layer for spawning MCP server containers per plugin
  • SSE transport routes in nexus-gateway/src/routes/mcp-sse-routes.ts

5.8.3 Replicating Honcho Pattern for Nexus

To expose cognitive memory capabilities to Claude Code, Cursor, and VSCode:

Claude Code / Cursor / VSCode
         |
         | MCP Protocol (stdio or SSE)
         v
+------------------+
| nexus-mcp-server |
| (existing)       |
+--------+---------+
         |
         | New cognitive memory tools:
         |   - nexus_peer_query(userId, query, level)
         |   - nexus_peer_card(userId)
         |   - nexus_store_observation(userId, content)
         |   - nexus_search_memory(query, context)
         |   - nexus_get_context(userId, tokenBudget)
         v
+------------------+
| nexus-gateway    |
| mcp-adapter.ts   |
+--------+---------+
         |
         v
+------------------+
| nexus-graphrag   |
| Cognitive Memory |
| Endpoints        |
+------------------+

New MCP tools to expose:

| Tool Name | Description | Maps To |
| --- | --- | --- |
| nexus_peer_query | Dialectic query with reasoning level | Pattern 5 |
| nexus_peer_card | Get/update biographical profile | Patterns 1, 2 |
| nexus_store_observation | Manually store a user observation | Pattern 3 |
| nexus_search_memory | Context-aware memory search | Pattern 13 |
| nexus_get_context | Token-budgeted context assembly | Pattern 6 |
| nexus_prospective_check | Check if prospective memories should surface | Pattern 9 |
| nexus_memory_health | Activation distribution, coverage gaps | Pattern 10 |

6. Fifty Complex Use Cases

6.1 ProseCreator --- Creative Writing Intelligence

Use Case 1: Writing Style Fingerprint

Pattern: Diachronic Identity (1) + Deriver/Dreamer (3)

Scenario: A novelist has written 12 chapters across 6 months. The deriver extracts prose-level observations from each chapter: average sentence length, vocabulary richness, metaphor frequency, dialogue-to-narration ratio, POV consistency. The dreamer consolidates these into a "writing style fingerprint" stored in the peer card: {style: "literary realism", avg_sentence_length: 18.3, metaphor_density: "moderate", dialogue_ratio: 0.35, pov: "close third"}.

Data Flow:

Chapter submitted -> Deriver extracts style metrics
  -> Observations stored (explicit level)
  -> Dreamer consolidates across chapters
  -> Peer card updated: style fingerprint
  -> System prompt injection: "This writer's style is
     literary realism with 18-word average sentences.
     Match critique specificity to their sophistication."

Impact: Critique prompts adapt to the writer's actual style rather than assuming generic "good writing" standards. A minimalist writer doesn't get advice to "add more descriptive language."

Use Case 2: Genre-Aware Suggestions

Pattern: Situated Cognition (13) + Prospective Memory (9)

Scenario: The system remembers the user writes hard science fiction. When they begin a new chapter that mentions faster-than-light travel, the prospective memory surfaces: "In past chapters, you've maintained strict adherence to known physics. FTL violates this --- is this a deliberate genre shift, or should I flag potential consistency issues?"

Data Flow:

New chapter context -> Situated cognition: genre=hard-sf
  -> Prospective memory checks: genre_consistency rule
  -> Surfaced proactively: "FTL breaks your hard-sf pattern"

Use Case 3: Character Consistency Guardian

Pattern: Narrative Memory (8) + Metacognition (10)

Scenario: The dreamer maintains narrative nodes for each major character, tracking trait assertions across chapters. When Chapter 9 describes the protagonist as "always punctual" but Chapter 3 established her as chronically late, the contradiction edge fires a metacognitive alert: "Character trait conflict detected for Elena: 'always punctual' (Ch.9) contradicts 'chronically late' (Ch.3). Confidence in current trait: LOW. Which is canonical?"

Use Case 4: Narrative Arc Memory

Pattern: Narrative Memory (8) + Prospective Memory (9)

Scenario: Plot threads are tracked as narrative nodes with temporal edges. After 4 chapters without mentioning the unresolved subplot about the missing letter, the system surfaces: "The missing letter subplot (introduced Ch.2, last referenced Ch.5) has been dormant for 4 chapters and 12,000 words. Typical resolution window for your pacing is 3 chapters."

Use Case 5: Critique Depth Calibration

Pattern: ACT-R Activation (7) + Diachronic Identity (1)

Scenario: The system tracks user reactions to critiques: accepted, rejected, modified, ignored. Over time, the activation pattern reveals that the user accepts structural critiques (plot, pacing) at 85% rate but rejects line-level prose critiques at 70% rate. The peer card updates: {critique_preference: "structural over line-level", feedback_depth: "macro"}. Subsequent critiques are automatically calibrated to emphasize structure.

Use Case 6: POV Preference

Pattern: Deriver (3) + Principled Forgetting (11)

Scenario: The deriver observes that across 15 chapters, the user has written exclusively in close third-person POV. This is stored as a high-stability explicit observation. When the user tries first-person for the first time (a surprisal event), the dreamer notes this as a potential style evolution rather than a mistake, creating an inductive observation: "Writer is experimenting with POV --- maintain awareness but don't default-suggest close third."

Use Case 7: Thematic Pattern Detection

Pattern: Dreamer Pipeline (3) + Narrative Memory (8)

Scenario: The dreamer's induction specialist identifies across 20 chapters that the user's protagonists consistently face themes of "autonomy vs. belonging." This wasn't explicitly stated anywhere --- it emerged from pattern analysis across narrative nodes. The system surfaces this as creative insight: "Your work consistently explores the tension between autonomy and belonging. Your strongest chapters (by your own revision frequency) are those where this tension is most explicit."

Use Case 8: Collaborative Writing Memory

Pattern: Collective Memory (12) + Observer/Observed (2)

Scenario: Three co-authors share a world-building scope. Author A establishes that the planet has two moons. Author B, working independently, mentions "the moon" in singular. The team-scope memory detects the conflict and flags it to Author B: "Your co-author A established two moons (Selene and Hecate) in Chapter 3. 'The moon' may need updating."

Use Case 9: Writer's Block Prospective

Pattern: Prospective Memory (9) + Situated Cognition (13)

Scenario: The system detects a stall pattern: user has opened the chapter editor 4 times in 2 days but written fewer than 100 words each time. Time-of-day context shows this is unusual --- the user typically writes 1,500+ words per session. The prospective memory triggers: "You seem to be stuck on Chapter 14. In previous stalls (Ch.7, Ch.11), you broke through by writing the climactic scene first and backfilling. Would you like to try that approach?"

Use Case 10: Draft Evolution Narrative

Pattern: Narrative Memory (8) + Diachronic Identity (1)

Scenario: The narrative memory tracks how Chapter 6 evolved across 8 revision cycles: initial draft focused on action, revision 2 added internal monologue, revision 4 cut 40% of dialogue, revision 6 restructured the timeline. This evolution narrative is available for the user's self-reflection: "Your revision pattern for Chapter 6 mirrors your pattern for Chapter 2 --- both started action-heavy and evolved toward interiority. This is a consistent creative tendency."

6.2 NexusROS --- Revenue Operations Intelligence

Use Case 11: Deal Stage Intelligence

Pattern: Narrative Memory (8) + ACT-R Activation (7)

Scenario: The system maintains narrative arcs for each deal the user has worked. Over 50 closed deals, the dreamer identifies that the user's win rate increases by 31% when they send a case study within 48 hours of the discovery call. This pattern has high activation (frequently confirmed) and high stability (consistent across 6 months). The system surfaces it as proactive intelligence when a new deal enters the post-discovery stage.

Use Case 12: Email Tone Calibration

Pattern: Diachronic Identity (1) + Situated Cognition (13)

Scenario: The deriver extracts tone markers from the user's outbound emails: formality level, sentence length, use of hedging language, emoji frequency. The peer card maintains: {email_tone: "professional-warm", hedging: "moderate", emoji: "never", avg_response_length: 150}. When generating draft emails, the system matches this tone profile. Crucially, the system detects that the user's tone shifts by context: more formal for C-suite contacts, more casual for technical stakeholders.

Use Case 13: Quota Context Awareness

Pattern: Situated Cognition (13) + Prospective Memory (9)

Scenario: It's the 20th of the month. The system's situated cognition knows the user is at 60% of monthly quota with 10 days remaining. Historical patterns show the user typically achieves 75% of remaining quota in the final 10 days. The system adjusts its suggestions accordingly: "Given your current pipeline velocity, you'll likely finish at 90% of quota. To reach 100%, consider re-engaging the TechCorp deal (stalled 8 days) --- it's your highest-probability close based on engagement signals."

Use Case 14: Prospect Research Memory

Pattern: Tiered Memory (6) + Dialectic Retrieval (5)

Scenario: The user researches "Acme Corp" for a sales call. Core memory shows the peer card entry: "Acme Corp: Series B, 200 employees, healthcare vertical." Archival memory holds deeper context from 3 prior research sessions: competitive landscape, key decision-makers, and a failed outreach attempt 6 months ago. The dialectic retrieval synthesizes: "You researched Acme Corp in January and March. Previous outreach failed because the decision-maker (Sarah Chen, VP Engineering) was focused on a platform migration. That migration completed in February per their blog. This is a good time to re-engage."

Use Case 15: Objection Pattern Library

Pattern: Collective Memory (12) + Dreamer Pipeline (3)

Scenario: The dreamer consolidates objection-handling patterns across the entire sales team (team scope, anonymized). It identifies that "pricing too high" objections are successfully countered 78% of the time when the rep responds with ROI calculations rather than discount offers. This team-level insight is surfaced to a new rep who encounters the same objection, along with the three most effective response templates (anonymized from successful interactions).

Use Case 16: Follow-Up Prospective Memory

Pattern: Prospective Memory (9) + ACT-R Activation (7)

Scenario: The system learns that this user's optimal follow-up timing is 3 days after sending a proposal (based on historical close rates). When a proposal is sent, a prospective memory node is created: {trigger: "3 days after proposal sent to [contact]", content: "Follow up on proposal --- your best close rate is with 3-day follow-ups"}. The memory surfaces automatically at the right time.

Use Case 17: Competitive Intelligence Accumulation

Pattern: Dreamer Pipeline (3) + Collective Memory (12)

Scenario: Across 200 sales conversations logged by 8 reps, the dreamer identifies mentions of competitors and consolidates them into competitive intelligence profiles at team scope: "Competitor X is mentioned in 34% of lost deals. Common positioning: 'faster implementation.' Our win rate against X increases when we lead with 'total cost of ownership' (67% win rate vs. 41% without)."

Use Case 18: Pipeline Risk Forecasting

Pattern: Narrative Memory (8) + Metacognition (10)

Scenario: The narrative memory tracks deal health trajectories. When a deal's engagement pattern (email frequency, meeting attendance, response time) matches the trajectory of deals that eventually churned (detected via narrative similarity), the metacognition layer flags: "Deal [X] shows engagement decline matching 73% of churned deals at this stage. Confidence: MODERATE (based on 15 similar trajectories). Recommended: executive sponsor re-engagement."

Use Case 19: Meeting Prep Contextual Recall

Pattern: Situated Cognition (13) + Dialectic Retrieval (5)

Scenario: The user opens a contact's page 15 minutes before a scheduled meeting (calendar context detected). Situated cognition triggers a medium-level dialectic query: "What should I know about [contact] before this meeting?" The response synthesizes across all interactions: last meeting notes, open action items, recent email exchanges, deal stage, known preferences, and the contact's communication style.

Use Case 20: Win/Loss Pattern Analysis

Pattern: Dreamer Pipeline (3) + Diachronic Identity (1)

Scenario: The dreamer identifies that this specific user's win rate correlates with three behavioral patterns: (1) sending personalized video messages (+18% win rate), (2) involving technical stakeholders before the proposal stage (+12%), and (3) responding to emails within 2 hours (+9%). These patterns are specific to this user --- they may not generalize to the team. The system surfaces them as personal coaching insights during pipeline reviews.

6.3 NexusQA --- Quality Assurance Intelligence

Use Case 21: Test Framework Preference

Pattern: Diachronic Identity (1) + Deriver (3)

Scenario: The deriver observes that across 30 QA sessions, the user consistently writes Playwright tests, never Cypress. This becomes a high-stability peer card entry. When the user asks to "write tests for this component," the system defaults to Playwright syntax without asking. If the user switches to Cypress for a specific project, the surprisal event triggers the dreamer to investigate whether this is a one-time exception or a preference evolution.

Use Case 22: Bug Severity Calibration

Pattern: ACT-R Activation (7) + Diachronic Identity (1)

Scenario: The system learns this user's severity thresholds: they consistently escalate visual alignment issues that other QA engineers mark as P2 to P1. The peer card stores: {severity_calibration: {visual_alignment: "strict", performance: "moderate", accessibility: "relaxed"}}. Test reports are calibrated to this user's standards.

Use Case 23: Component Glossary

Pattern: Collective Memory (12) + Deriver (3)

Scenario: The team accumulates project-specific terminology. "The widget" means <DashboardMetricsWidget>, not the generic widget concept. "The API" means the internal GraphRAG API at port 8090, not a generic external API. This glossary is built from explicit observations across team members and stored at team scope, enabling more precise test generation that uses the team's actual component names.

Use Case 24: Regression Pattern Memory

Pattern: Narrative Memory (8) + Prospective Memory (9)

Scenario: The narrative memory tracks which components break frequently. The DatePicker component has had 4 regressions in 3 months. When a PR touches files that import DatePicker, the prospective memory surfaces: "DatePicker has a 67% regression rate after modifications. Prioritize: timezone handling (broke 2x), locale formatting (broke 1x), and range selection (broke 1x)."

Use Case 25: Test Coverage Prospective

Pattern: Prospective Memory (9) + Situated Cognition (13)

Scenario: Based on code change patterns and the current project phase (situated cognition: sprint context), the system predicts untested areas: "The user is working on the payments module. Based on code commits this sprint, the refund flow and webhook retry logic have zero test coverage. Historical bug density in these areas is HIGH."

Use Case 26: Framework Migration Tracking

Pattern: Diachronic Identity (1) + Principled Forgetting (11)

Scenario: The team migrated from Jest to Vitest 3 months ago. The diachronic identity timeline records this transition. Old Jest-specific knowledge is deprecated (low activation) but not deleted --- historical context may be needed for legacy code. New Vitest patterns are high activation. When the user asks about testing, the system suggests Vitest patterns. If the user explicitly works on legacy code, dormant Jest knowledge is temporarily reactivated.

Use Case 27: Flaky Test Memory

Pattern: ACT-R Activation (7) + Metacognition (10)

Scenario: The system remembers which tests are flaky: test_dashboard_load has failed 12 times in 50 runs with non-deterministic timing issues. When this test fails again, the metacognition layer adds context: "This test has a 24% flake rate. Confidence that this failure represents a real bug: LOW (32%). Consider re-running before investigating. Last confirmed real failure: 6 weeks ago."

Use Case 28: Performance Baseline Memory

Pattern: Narrative Memory (8) + Diachronic Identity (1)

Scenario: The system tracks page load times, API response times, and bundle sizes as a temporal narrative. When a new deployment increases bundle size by 15KB, the system provides historical context: "Bundle size increased from 234KB to 249KB. This is the 3rd increase in 5 deployments. Trajectory suggests reaching 300KB within 2 months if trend continues. Your team's target is 250KB."
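
The trajectory warning can be produced with an ordinary least-squares trend over recent deployments. A minimal sketch with hypothetical deployment data; `project_threshold` is illustrative, not the platform's forecasting component.

```python
def project_threshold(history, target_kb):
    """Least-squares fit of bundle size vs. day; return the day at which
    the fitted trend reaches target_kb (None if there is no growth)."""
    n = len(history)
    xs = [t for t, _ in history]
    ys = [s for _, s in history]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None
    return (target_kb - intercept) / slope

# hypothetical deployment history: (day, bundle size in KB)
deploys = [(0, 234), (10, 239), (20, 244), (30, 249)]
```

For this hypothetical history (a 5 KB increase every ten days from 234 KB), the 300 KB mark is crossed at day 132; the scenario's two-month estimate implies a steeper observed trend.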

Use Case 29: Accessibility Expertise Calibration

Pattern: Diachronic Identity (1) + Metacognition (10)

Scenario: The system detects from interaction patterns that this QA engineer has limited WCAG knowledge (frequently asks basic questions about contrast ratios, never uses aria-* terminology). The peer card stores: {accessibility_expertise: "beginner"}. Accessibility findings include explanations: "This button has a 3.2:1 contrast ratio. WCAG AA requires 4.5:1 for normal text. This means users with low vision may not be able to read it. Here's how to fix it: [specific code change]."

Use Case 30: Team Testing Standards

Pattern: Collective Memory (12) + Principled Forgetting (11)

Scenario: The team lead's approved test patterns are stored at team scope: "Always use data-testid for selectors, never CSS classes. Always test loading and error states. Always mock external APIs at the fetch level, not the component level." When a new team member writes tests using CSS selectors, the system suggests the team standard. Old team standards that were explicitly rescinded are deprecated (forgetting) but remain in historical context.

6.4 Skills Engine / Orchestrator Intelligence

Use Case 31: Output Verbosity Preference

Pattern: Diachronic Identity (1) + ACT-R Activation (7)

Scenario: The system tracks user responses to skill outputs. This user consistently expands condensed outputs and asks follow-up questions for detail --- high-stability pattern indicating preference for verbose output. The peer card stores: {output_verbosity: "detailed"}. Skill system prompts are augmented: "This user prefers detailed output. Include intermediate reasoning steps, data sources, and caveats."

Use Case 32: Domain Expertise Calibration

Pattern: Diachronic Identity (1) + Metacognition (10)

Scenario: The system observes that this user understands Kubernetes terminology (uses kubectl commands fluently, discusses pod affinity) but struggles with database optimization (asks basic questions about indexes). The peer card stores domain-specific expertise levels: {kubernetes: "expert", database: "intermediate", frontend: "beginner"}. Skill outputs adjust jargon level per domain.

Use Case 33: Skill Recommendation

Pattern: Prospective Memory (9) + Narrative Memory (8)

Scenario: The user just finished a code implementation and is about to deploy. The narrative memory tracks that this user's workflow pattern is: implement -> test -> review -> deploy. The prospective memory surfaces: "Based on your usual workflow, you might want to run /code-review before deploying. You've used this skill in 85% of past deploy sequences."

Use Case 34: Execution Context Memory

Pattern: Situated Cognition (13) + Tiered Memory (6)

Scenario: The user runs the generate_api_docs skill frequently with the parameters {format: "openapi", include_examples: true, verbosity: "detailed"}. This parameter set is stored in archival memory. When the user invokes the skill again, the system pre-fills: "Last time you ran this skill with OpenAPI format, examples included, detailed verbosity. Use the same parameters?"

Use Case 35: Error Pattern Recognition

Pattern: Dreamer Pipeline (3) + Narrative Memory (8)

Scenario: The dreamer identifies that the database_migration skill fails 40% of the time when the user has uncommitted changes in the working directory. This correlation was never explicitly reported --- it emerged from pattern analysis across 25 failed invocations. The system now warns: "You have uncommitted changes. The database_migration skill has a 40% failure rate in this situation. Consider committing first."
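
The correlation the dreamer surfaces here reduces to a conditional failure rate over invocation logs. A minimal sketch with synthetic data; the field names are illustrative.

```python
def conditional_failure_rate(invocations, condition):
    """Failure rate of a skill among invocations matching a context predicate."""
    matching = [iv for iv in invocations if condition(iv)]
    if not matching:
        return None  # no evidence for this condition yet
    return sum(iv["failed"] for iv in matching) / len(matching)

# synthetic invocation log: 25 runs with a dirty worktree, 10 of which failed
log = ([{"dirty_worktree": True, "failed": True}] * 10
       + [{"dirty_worktree": True, "failed": False}] * 15
       + [{"dirty_worktree": False, "failed": False}] * 30)
```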

Use Case 36: Workflow Template Memory

Pattern: Tiered Memory (6) + Prospective Memory (9)

Scenario: The user frequently chains skills in a specific order: lint -> typecheck -> test -> code-review -> deploy. This workflow is stored as a workflow template in archival memory. The system offers: "Run your standard pre-deploy workflow? (lint -> typecheck -> test -> code-review -> deploy)"

Use Case 37: Approval Pattern Learning

Pattern: ACT-R Activation (7) + Dreamer Pipeline (3)

Scenario: The system tracks which skill outputs the user approves vs. rejects. Over 100 executions, it learns that the user rejects outputs from the email_draft skill 60% of the time when the tone is "formal" but only 15% of the time when the tone is "professional-warm." The skill's default tone parameter for this user is updated accordingly.

Use Case 38: Resource Constraint Awareness

Pattern: Situated Cognition (13) + Metacognition (10)

Scenario: The system knows the user's org tier (from auth JWT) and current quota utilization. When the user requests a skill that would consume 50% of their remaining monthly LLM quota, the metacognition layer warns: "This skill execution will use approximately 12,000 tokens. You're at 78% of your monthly quota with 8 days remaining. Proceed?"
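
A minimal sketch of the quota gate, assuming token counts are available from the auth/billing context; the 75% warning threshold and message format are illustrative.

```python
def quota_warning(tokens_needed, tokens_used, monthly_limit,
                  days_remaining, warn_above=0.75):
    """Return a warning string when the user is already in the danger zone,
    otherwise None (the execution proceeds silently)."""
    utilization = tokens_used / monthly_limit
    if utilization < warn_above:
        return None
    return (f"This skill execution will use approximately "
            f"{tokens_needed:,} tokens. You're at {utilization:.0%} of your "
            f"monthly quota with {days_remaining} days remaining. Proceed?")
```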

Use Case 39: Cross-Skill Context Threading

Pattern: Narrative Memory (8) + Situated Cognition (13)

Scenario: The user runs research_competitor followed by draft_battlecard followed by generate_email. The narrative memory links these three skill executions into a coherent project narrative: "Competitive analysis project for [competitor]." When the user later asks "what did I find about [competitor]?", the dialectic retrieval returns the full narrative chain, not just isolated skill outputs.

Use Case 40: Skill Quality Feedback Loop

Pattern: Metacognition (10) + ACT-R Activation (7)

Scenario: After each skill execution, the system estimates confidence in the output quality based on: input completeness, output coherence, and historical approval rate for similar inputs. Low-confidence outputs (below 60%) trigger automatic enhancement: "Confidence in this output is moderate (55%). Running additional validation pass with deeper analysis."

6.5 Platform-Wide Intelligence

Use Case 41: Navigation Pattern Memory

Pattern: Situated Cognition (13) + Prospective Memory (9)

Scenario: The system tracks which dashboard sections the user visits most: Contacts (40%), Deals (30%), Analytics (20%), Settings (10%). The peer card stores this as a navigation fingerprint. When the user logs in, the dashboard pre-loads their most-visited section. When navigation patterns change (user suddenly starts visiting Analytics daily), the surprisal event triggers investigation.

Use Case 42: Notification Preference Learning

Pattern: ACT-R Activation (7) + Principled Forgetting (11)

Scenario: The system tracks user responses to notifications: acted on, dismissed, snoozed. Over time, it learns that the user always acts on "deployment failed" alerts (high activation) but always dismisses "weekly usage summary" notifications (low activation, eventually forgotten). Notification priority is adjusted: critical alerts stay prominent, dismissed notification types are suppressed or batched into digests.

Use Case 43: Onboarding Expertise Detection

Pattern: Diachronic Identity (1) + Metacognition (10)

Scenario: The system tracks the user's journey from novice to power user. Initially, the user asks basic questions ("how do I create a contact?") and the peer card stores {expertise: "beginner"}. Over 3 months, questions become sophisticated ("how do I set up a webhook to trigger a pipeline on deal stage change?"). The diachronic identity tracks this trajectory, and the system graduates tutorials: basic onboarding tooltips disappear, advanced feature discovery prompts appear.

Use Case 44: Multi-Device Context

Pattern: Situated Cognition (13) + Tiered Memory (6)

Scenario: The system detects that the user accesses the platform from desktop (70% of time, long sessions, complex workflows) and mobile (30% of time, short sessions, status checks). Situated cognition encodes device context on every interaction. The system optimizes: desktop sessions get detailed dashboards and multi-step workflows; mobile sessions get summarized status views and quick actions.

Use Case 45: Time-of-Day Behavior Patterns

Pattern: Dreamer Pipeline (3) + Situated Cognition (13)

Scenario: The dreamer identifies temporal behavior patterns: this user does prospect research in the morning (8-10am), pipeline reviews midday (11am-1pm), and administrative tasks in the afternoon (2-5pm). When the user logs in at 8:30am, the system surfaces research-related context and tools. When they log in at 2pm, it surfaces admin tasks and settings.

Use Case 46: Cross-Plugin Context Threading

Pattern: Narrative Memory (8) + Situated Cognition (13)

Scenario: The user switches from ProseCreator (writing a case study about a client) to NexusROS (researching that same client's account). The narrative memory detects the thematic link: both activities involve "Acme Corp." When the user enters NexusROS, the system surfaces: "You're currently writing a case study about Acme Corp in ProseCreator. Here's their latest account data for your reference."

Use Case 47: Support Interaction Memory

Pattern: Tiered Memory (6) + Dialectic Retrieval (5)

Scenario: The user contacts support about a recurring issue with CSV imports. The dialectic retrieval synthesizes past support interactions: "You've reported CSV import issues 3 times in the past 4 months. Previous resolutions: (1) encoding fix --- UTF-8 BOM, (2) column mapping reset, (3) file size limit increase. Has the issue recurred, or is this a new variation?"

Use Case 48: Feature Discovery Prospective

Pattern: Prospective Memory (9) + Diachronic Identity (1)

Scenario: The system notices the user manually exports data to Excel, manipulates it, and re-imports it --- a workflow that the platform's built-in data transformation feature eliminates. The prospective memory surfaces: "You've done manual Excel round-trips 8 times this month. Did you know the Data Transformation feature can do this in-platform? Here's a 2-minute guide."

Use Case 49: Team Role Modeling

Pattern: Collective Memory (12) + Dreamer Pipeline (3)

Scenario: The dreamer analyzes team-level behavioral patterns and identifies best practices: "Top-performing team members (>110% quota) share three behaviors: (1) they use the meeting prep feature before every call, (2) they update deal stages within 1 hour of meetings, and (3) they send follow-up emails within 24 hours." These patterns are promoted to team scope and surfaced during onboarding.

Use Case 50: Privacy Preference Memory

Pattern: Diachronic Identity (1) + Principled Forgetting (11) + Metacognition (10)

Scenario: The user has configured strict privacy preferences: no data sharing beyond their individual scope, no behavioral analysis for team patterns, GDPR data export requested annually. These preferences are stored as protected observations (is_protected: true) that survive all forgetting cycles. Every memory operation checks privacy preferences before storing or sharing. The metacognition layer flags when a requested operation would violate privacy settings: "This action would share your deal data with the team analytics dashboard. Your privacy settings prohibit team-scope data sharing. Would you like to update your preferences?"


7. UI/UX Mockups and Memory Journey Diagrams

7.1 User Memory Profile Page

+================================================================+
|  NEXUS COGNITIVE MEMORY                         [Settings] [?]  |
+================================================================+
|                                                                 |
|  PEER CARD: Sarah Chen                     Updated: 2h ago     |
|  +---------------------------------------------------------+   |
|  | Role: Senior AE, Healthcare Vertical                    |   |
|  | Expertise: Kubernetes(expert) Database(mid) React(new)  |   |
|  | Communication: Professional-warm, no emoji, concise     |   |
|  | Work Pattern: Research AM, Reviews Midday, Admin PM     |   |
|  | Preferred Tools: Playwright, VSCode, Claude Code        |   |
|  | Deal Style: Consultative (evolved from aggressive Q3)   |   |
|  | [Show all 34 facts...]                                  |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  OBSERVATION TIMELINE                        [Filter] [Search] |
|  +---------------------------------------------------------+   |
|  | Apr 14  Explicit  "Prefers ROI-first pitch decks"       |   |
|  |         ████████████ activation: 0.92                   |   |
|  | Apr 13  Deductive "Healthcare vertical specialist"      |   |
|  |         ███████████░ activation: 0.87                   |   |
|  | Apr 12  Inductive "Win rate +31% with case study in     |   |
|  |         48h post-discovery"                             |   |
|  |         ██████████░░ activation: 0.81                   |   |
|  | Apr 10  Explicit  "Switched CRM view to pipeline mode"  |   |
|  |         ████░░░░░░░░ activation: 0.34   [DECAYING]      |   |
|  | ...                                                     |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  KNOWLEDGE GRAPH                                    [Expand]   |
|  +---------------------------------------------------------+   |
|  |          [Healthcare]---[ROI Selling]                    |   |
|  |              |      \       |                            |   |
|  |         [Quota Mgmt]  [Case Studies]                     |   |
|  |              |              |                            |   |
|  |      [Pipeline Risk]--[Follow-up Timing]                 |   |
|  |                                                          |   |
|  |  Coverage: ██████████░░ 83%   Gaps: [Competitive Intel]  |   |
|  +---------------------------------------------------------+   |
|                                                                 |
+================================================================+

7.2 Memory Health Dashboard

+================================================================+
|  MEMORY HEALTH DASHBOARD                    Org: Adverant Inc   |
+================================================================+
|                                                                 |
|  ACTIVATION DISTRIBUTION (all users)                           |
|  +---------------------------------------------------------+   |
|  | High (>0.8)  ████████████████░░░░░░  34%  (12,400 nodes)|   |
|  | Mid (0.4-0.8)██████████████████████  45%  (16,200 nodes)|   |
|  | Low  (<0.4)  █████████░░░░░░░░░░░░░  15%  ( 5,400 nodes)|   |
|  | Archived     ██░░░░░░░░░░░░░░░░░░░░   6%  ( 2,100 nodes)|   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  FORGETTING CURVES (30-day window)                             |
|  +---------------------------------------------------------+   |
|  | Activation                                               |   |
|  | 1.0|*                                                    |   |
|  |    | *                                                   |   |
|  | 0.8|  *                                                  |   |
|  |    |    *                                                |   |
|  | 0.6|      *                                              |   |
|  |    |         *    *                                      |   |
|  | 0.4|            *    *     *                             |   |
|  |    |                    *      *     *                   |   |
|  | 0.2|                              *     *    *    *      |   |
|  |    |                                                     |   |
|  | 0.0+----+----+----+----+----+----+----+----+----+----+   |   |
|  |    0    3    6    9   12   15   18   21   24   27   30   |   |
|  |                    Days since last access                |   |
|  |                                                          |   |
|  |  Observed tau=8.3 days  |  Ebbinghaus predicted tau=7.5 |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  DREAMER ACTIVITY (last 7 days)                                |
|  +---------------------------------------------------------+   |
|  | Consolidation runs:  14                                  |   |
|  | Peer cards updated:   8                                  |   |
|  | Observations merged: 42                                  |   |
|  | Contradictions found: 3                                  |   |
|  | Surprisal events:     7                                  |   |
|  | Inductive patterns:  11                                  |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  COVERAGE GAPS (domains with <50% coverage)                    |
|  +---------------------------------------------------------+   |
|  | Competitive Intel  ██░░░░░░░░  18%  [Needs more data]   |   |
|  | Accessibility      ███░░░░░░░  28%  [Needs more data]   |   |
|  | Team Standards     ████░░░░░░  38%  [Growing]           |   |
|  +---------------------------------------------------------+   |
|                                                                 |
+================================================================+
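
The forgetting curve plotted above is exponential decay with time constant tau. A minimal sketch; `tau=8.3` days is the observed value reported on the dashboard.

```python
import math

def activation(initial: float, days_since_access: float,
               tau: float = 8.3) -> float:
    """Ebbinghaus-style forgetting: activation decays exponentially
    with the time elapsed since the node was last accessed."""
    return initial * math.exp(-days_since_access / tau)
```

After one time constant (8.3 days) activation falls to about 37% of its initial value, which matches the knee of the plotted curve.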

7.3 Dreamer Activity Log

+================================================================+
|  DREAMER ACTIVITY LOG                       [Pause] [Configure]|
+================================================================+
|                                                                 |
|  Apr 14, 03:22 UTC | CONSOLIDATION RUN #847                   |
|  +---------------------------------------------------------+   |
|  | Trigger: Idle timeout (4h since last interaction)        |   |
|  | User: sarah.chen@acme.com                                |   |
|  | Observations processed: 23 (18 explicit, 5 deductive)    |   |
|  |                                                          |   |
|  | DEDUCTION SPECIALIST:                                    |   |
|  |   Input: 18 explicit observations from last 3 sessions   |   |
|  |   Output: 4 new deductive conclusions                    |   |
|  |   - "User is preparing for quarterly business review"    |   |
|  |   - "User's pipeline has 3 at-risk deals (engagement     |   |
|  |      decline pattern detected)"                          |   |
|  |                                                          |   |
|  | INDUCTION SPECIALIST:                                    |   |
|  |   Input: 5 deductive + 42 historical conclusions         |   |
|  |   Output: 2 new inductive generalizations                |   |
|  |   - "User performs better in structured meeting formats   |   |
|  |      than free-form conversations (win rate +22%)"       |   |
|  |   - "User's research depth correlates with deal size     |   |
|  |      (>$50K deals: 3x more research sessions)"           |   |
|  |                                                          |   |
|  | PEER CARD UPDATES:                                       |   |
|  |   + Added: "Structured meeting preference"               |   |
|  |   ~ Updated: "Pipeline management" confidence 0.7->0.85  |   |
|  |   - Archived: "Prefers email over phone" (superseded     |   |
|  |     by "Uses video calls for deals >$50K")               |   |
|  |                                                          |   |
|  | SURPRISAL EVENTS:                                        |   |
|  |   ! "User requested Cypress instead of Playwright"       |   |
|  |     Surprisal score: 3.2 (threshold: 2.0)                |   |
|  |     Action: Flagged for deeper investigation next cycle   |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  Apr 13, 14:15 UTC | SURPRISAL-TRIGGERED #846                 |
|  +---------------------------------------------------------+   |
|  | Trigger: High surprisal observation (score: 4.1)         |   |
|  | Event: User abandoned consultative approach for           |   |
|  |        aggressive closing technique on Deal #2847         |   |
|  | Analysis: User's deal style diachronic trajectory shows   |   |
|  |           aggressive -> consultative evolution over 6mo.  |   |
|  |           This reversion may indicate: (a) deal urgency,  |   |
|  |           (b) buyer personality mismatch, or              |   |
|  |           (c) quota pressure (user at 60% with 10 days)   |   |
|  | Hypothesis: (c) is most likely given situated context      |   |
|  +---------------------------------------------------------+   |
|                                                                 |
+================================================================+

7.4 Plugin Context Panel (ProseCreator)

+================================================================+
|  PROSECREATOR: WRITING INTELLIGENCE          Chapter 14 of 22  |
+================================================================+
|                                                                 |
|  YOUR WRITING PROFILE                                          |
|  +---------------------------------------------------------+   |
|  | Style: Literary realism                                  |   |
|  | Avg sentence: 18.3 words | Dialogue ratio: 35%           |   |
|  | POV: Close third | Tense: Past                           |   |
|  | Metaphor density: Moderate | Vocabulary: Rich             |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  ACTIVE NARRATIVES                                             |
|  +---------------------------------------------------------+   |
|  | [!] Missing letter subplot - dormant 4 chapters           |   |
|  |     Last referenced: Ch.10 | Expected resolution: Ch.15  |   |
|  | [*] Elena's character arc - active                         |   |
|  |     Current: self-doubt phase | Predicted: revelation     |   |
|  | [~] Setting continuity - 2 moons established              |   |
|  |     Potential conflict in Ch.13: "the moon" (singular)    |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  THEMATIC PATTERNS (dreamer-identified)                        |
|  +---------------------------------------------------------+   |
|  | "Your strongest theme across this novel is the tension    |   |
|  |  between autonomy and belonging. Chapters where this     |   |
|  |  is most explicit (Ch.3, Ch.7, Ch.11) have your lowest   |   |
|  |  revision counts --- suggesting they flow naturally."     |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  STALL DETECTION                              [Not stalled]    |
|  +---------------------------------------------------------+   |
|  | Session productivity: 1,847 words/session (above avg)   |   |
|  | Previous stall patterns: Skip-ahead method worked 2/3x  |   |
|  +---------------------------------------------------------+   |
|                                                                 |
+================================================================+

7.5 Memory Journey: Message to Peer Card

USER MESSAGE
"I've switched to using Playwright for all my tests now.
 Jest was too slow for our integration suite."
      |
      v
+------------------+    +------------------+
| DERIVER          |    | EXISTING         |
| (sync, <2s)      |    | PEER CARD        |
|                  |    |                  |
| Extracts:        |    | Contains:        |
| 1. "Uses         |    | - "Prefers Jest" |
|    Playwright"   |    |   (stability:0.6)|
|    [explicit]    |    | - "Integration   |
| 2. "Abandoned    |    |    test focus"   |
|    Jest"         |    |   (stability:0.8)|
|    [explicit]    |    +--------+---------+
| 3. "Speed is     |             |
|    the reason"   |             | Contradiction
|    [explicit]    |             | detected!
| 4. "Integration  |             |
|    suite focus"  |             |
|    [deductive]   |             |
+--------+---------+             |
         |                       |
         v                       v
+------------------------------------------+
| SURPRISAL CHECK                          |
|                                          |
| P("switches from Jest") = 0.12          |
| Surprisal = -log(0.12) = 2.12          |
| Threshold = 2.0                         |
| Result: EXCEEDS -> trigger dreamer      |
+--------+---------------------------------+
         |
         v (async, next idle period)
+------------------------------------------+
| DREAMER                                  |
|                                          |
| Deduction Specialist:                    |
|   "User values test execution speed      |
|    over test framework familiarity"      |
|   [deductive, confidence: 0.85]         |
|                                          |
| Induction Specialist:                    |
|   "User's tool choices are driven by     |
|    performance metrics, not ecosystem    |
|    loyalty --- pattern seen in 3 other    |
|    tool switches this quarter"           |
|   [inductive, confidence: 0.72]         |
|                                          |
| Peer Card Update:                        |
|   REMOVE: "Prefers Jest"                 |
|   ADD:    "Uses Playwright (speed-       |
|            driven switch from Jest)"     |
|   ADD:    "Tool selection driven by      |
|            performance, not loyalty"     |
+------------------------------------------+
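
The surprisal gate in the journey above is a one-line computation. A minimal sketch; the 2.0 threshold follows the diagram, and in production the probability estimate would come from the peer-card model rather than a literal.

```python
import math

SURPRISAL_THRESHOLD = 2.0  # dreamer-trigger threshold from the diagram

def surprisal(probability: float) -> float:
    """Shannon surprisal in nats: -ln P(observation | current peer card)."""
    return -math.log(probability)

def triggers_dreamer(probability: float,
                     threshold: float = SURPRISAL_THRESHOLD) -> bool:
    """Queue an async dreamer run when an observation is surprising enough."""
    return surprisal(probability) > threshold
```

For the diagram's example, -ln(0.12) ≈ 2.12, so the Jest-to-Playwright switch clears the 2.0 threshold and queues a dreamer run.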

7.6 Memory Journey: Dialectic Query (5 Levels)

APPLICATION QUERY: "What testing framework
                    does this user prefer?"
      |
      +--------+--------+--------+--------+
      v        v        v        v        v
  MINIMAL    LOW     MEDIUM    HIGH      MAX

  MINIMAL (fast, <500ms):
  +------------------+
  | Tool: get_peer_  |
  |   card(userId)   |
  | Result: "Uses    |
  |   Playwright"    |
  | Done. 1 tool call|
  +------------------+

  LOW (<2s):
  +------------------+
  | 1. get_peer_card |
  | 2. get_recent_   |
  |    observations  |
  |    (limit: 5)    |
  | Synthesize:      |
  | "Uses Playwright |
  |  since Q1 2026,  |
  |  switched from   |
  |  Jest for speed" |
  +------------------+

  MEDIUM (<5s):
  +------------------+
  | 1. get_peer_card |
  | 2. search_obs(   |
  |   "testing")     |
  | 3. search_obs(   |
  |   "framework")   |
  | 4. get_most_     |
  |    derived(5)    |
  | 5. search_       |
  |    narratives    |
  |    ("testing")   |
  | Synthesize:      |
  | "Playwright for  |
  |  integration,    |
  |  Vitest for unit.|
  |  Speed-driven.   |
  |  Team standard is|
  |  Playwright."    |
  +------------------+

  HIGH (<15s):
  +------------------+
  | 10 tool calls    |
  | Cross-references |
  | team standards,  |
  | project context, |
  | historical tools,|
  | peer comparisons |
  | Full reasoning   |
  | chain with       |
  | confidence       |
  | scores per claim |
  +------------------+

  MAX (<60s):
  +------------------+
  | 20 tool calls    |
  | Deep analysis of |
  | testing philosophy|
  | evolution over   |
  | time, tool switch|
  | triggers, team   |
  | influence factors,|
  | psychological    |
  | profile of tool  |
  | selection        |
  | patterns         |
  +------------------+
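
The five levels differ only in their tool-call and latency budgets, so a dispatcher can be driven by a small table. The figures below follow the diagram; the dict layout and `budget_for` helper are illustrative.

```python
# Budget table for the five dialectic reasoning levels; tool-call caps and
# deadlines follow the diagram above, but the layout is illustrative.
DIALECTIC_LEVELS = {
    "minimal": {"max_tool_calls": 1,  "deadline_s": 0.5},
    "low":     {"max_tool_calls": 2,  "deadline_s": 2.0},
    "medium":  {"max_tool_calls": 5,  "deadline_s": 5.0},
    "high":    {"max_tool_calls": 10, "deadline_s": 15.0},
    "max":     {"max_tool_calls": 20, "deadline_s": 60.0},
}

def budget_for(level: str) -> dict:
    """Look up the retrieval budget for a requested reasoning level."""
    return DIALECTIC_LEVELS[level]
```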

7.7 Memory Journey: Collective Memory Promotion

INDIVIDUAL SCOPE          TEAM SCOPE           ORG SCOPE
(user-private)            (shared workspace)   (cross-team)
                                               
+----------------+                             
| Sarah: "ROI-   |                             
|  first pitch   |                             
|  wins 78% of   |                             
|  the time"     |                             
| activation:0.9 |                             
+-------+--------+                             
        |                                      
+-------+--------+                             
| Mike: "ROI     |                             
|  pitches work  |                             
|  better than   |                             
|  feature demos"|                             
| activation:0.85|                             
+-------+--------+                             
        |                                      
+-------+--------+     PROMOTION TRIGGER:      
| Lisa: "Started | --> 3+ users with similar   
|  leading with  |     observation AND          
|  ROI data,     |     avg activation > 0.8     
|  close rate    |                              
|  jumped 25%"   |     PII SCAN: PASS           
| activation:0.88|     (no personal data)       
+-------+--------+                              
        |                                       
        +-----------> +--------------------+    
                       | TEAM INSIGHT:      |    
                       | "ROI-first pitch   |    
                       |  strategy shows    |    
                       |  +25-31% win rate  |    
                       |  improvement       |    
                       |  across 3 reps"    |    
                       | source_count: 3    |    
                       | confidence: 0.87   |    
                       +--------+-----------+    
                                |                
                   Further promotion             
                   when observed in              
                   3+ teams:                     
                                |                
                       +--------v-----------+    
                       | ORG BEST PRACTICE: |    
                       | "ROI-first selling |    
                       |  outperforms       |    
                       |  feature-led in    |    
                       |  enterprise deals" |    
                       | teams_observed: 4  |    
                       | anonymized: true   |    
                       +--------------------+    
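
The promotion trigger in this journey is mechanical: enough distinct users, high average activation, and a clean PII scan. A minimal sketch with the three observations from the diagram; field names are illustrative.

```python
def should_promote(observations, min_users=3, min_avg_activation=0.8):
    """Promote an insight from individual to team scope when enough distinct
    users report it, average activation is high, and the PII scan passes."""
    if not observations:
        return False
    distinct_users = {o["user"] for o in observations}
    avg_activation = sum(o["activation"] for o in observations) / len(observations)
    pii_clean = all(o.get("pii_clean", False) for o in observations)
    return (len(distinct_users) >= min_users
            and avg_activation > min_avg_activation
            and pii_clean)

# the three observations from the diagram
obs = [
    {"user": "sarah", "activation": 0.90, "pii_clean": True},
    {"user": "mike",  "activation": 0.85, "pii_clean": True},
    {"user": "lisa",  "activation": 0.88, "pii_clean": True},
]
```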

8. Evidence-Based Testing Framework

8.1 Benchmark Foundation

The cognitive memory architecture must be validated against established benchmarks that test memory at scale. Two benchmarks provide the foundation:

BEAM 10M (Billion-scale Evaluation of Agent Memory) tests memory systems on tasks requiring retrieval from corpora of up to 10 million tokens. Honcho achieves a 0.406 score using approximately 50,000 tokens --- 0.5% of the available window --- with no substantial score drop-off until the corpus reaches millions of tokens [2]. This demonstrates that hierarchical reasoning can maintain quality at extreme scale.

LongMem tests accuracy of memory recall over long interaction histories. Honcho achieves 90.4% accuracy using a median of only 5% of available question context tokens [2]. This validates the peer card + observation corpus architecture: a 40-fact peer card plus targeted retrieval can answer most questions without loading the full history.

8.2 Nexus Cognitive Memory Test Suite

We propose a comprehensive testing framework with five test categories:

8.2.1 Unit Tests: Pattern-Level Validation

Each of the thirteen patterns has pattern-specific test cases:

Pattern | Test | Success Criteria
--- | --- | ---
Diachronic Identity | 100 evolving user profiles simulated over 6 months | Peer card accurately reflects current state AND tracks trajectory
Deriver | 1,000 messages with known facts | >95% explicit fact extraction; >85% deductive conclusion accuracy
Dreamer | 500 observation corpora | Inductive patterns match human-annotated patterns in >80% of cases
Surprisal | Messages with known surprisal scores | Correlation >0.85 between predicted and actual surprisal
Dialectic | 200 queries at each reasoning level | Accuracy increases monotonically with reasoning level
Activation Decay | 10,000 nodes over a 90-day simulation | Decay curves match Ebbinghaus within 10% RMSE
Forgetting | 100,000-node corpus over 1 year | Corpus size stabilizes; retrieval precision improves
Narrative | 50 multi-session interaction histories | Narrative nodes correctly capture causal/temporal structure
Prospective | 500 behavioral patterns with known next actions | >70% precision on proactive suggestions
Metacognition | 200 queries with known answer availability | System correctly responds "I don't know" in >90% of applicable cases
Collective | 10 teams x 5 users x 100 interactions | Team insights emerge and promotion rules fire correctly
Situated | 1,000 interactions across 5 contexts | Context-matched retrieval outperforms context-free by >25%
Peer Paradigm | 3 agents x 5 users x 100 interactions | Observation scoping is correct; no information leakage
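
The Activation Decay criterion above can be checked with a small harness that compares a power-law retention curve (the shape used by ACT-R-style activation) against the Ebbinghaus exponential and reports RMSE. A minimal sketch; the function names and the decay/stability parameters are illustrative assumptions, not production values:

```typescript
// Sketch of the Activation Decay unit test (Section 8.2.1). The decay and
// stability parameters are illustrative assumptions, not production values.

/** Ebbinghaus forgetting curve: retention after t days, stability S in days. */
function ebbinghausRetention(tDays: number, stability = 25): number {
  return Math.exp(-tDays / stability);
}

/** Power-law retention of the kind used by ACT-R-style activation decay. */
function actRetention(tDays: number, decay = 0.5): number {
  return Math.pow(1 + tDays, -decay);
}

/** Root-mean-square error between two retention curves sampled daily. */
function retentionRmse(
  days: number,
  f: (t: number) => number,
  g: (t: number) => number,
): number {
  let sum = 0;
  for (let t = 0; t <= days; t++) {
    const d = f(t) - g(t);
    sum += d * d;
  }
  return Math.sqrt(sum / (days + 1));
}

// Success criterion from the table: RMSE < 0.10 over the 90-day simulation.
const err = retentionRmse(90, actRetention, ebbinghausRetention);
console.log(`RMSE over 90 days: ${err.toFixed(3)} (pass: ${err < 0.1})`);
```

In the real test the first curve would be the decay scores actually produced by the activation job, not a closed-form function.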

8.2.2 Integration Tests: Cross-Pattern Validation

Test interactions between patterns:

  • Deriver -> Dreamer -> Peer Card update chain (end-to-end latency, correctness)
  • Surprisal -> Dreamer trigger (correct threshold behavior)
  • Dialectic -> Activation update (retrieval strengthens accessed nodes)
  • Forgetting -> Narrative preservation (protected narratives survive decay)
  • Collective promotion -> Privacy enforcement (no PII leakage)
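
The first chain can be exercised end-to-end with in-memory stubs standing in for the real workers. A sketch; the stub functions and the trivial extraction heuristic are illustrative, not the production deriver/dreamer logic:

```typescript
// In-memory sketch of the Deriver -> Dreamer -> Peer Card integration test
// (Section 8.2.2). The stubs and extraction heuristic are illustrative;
// the real workers run on BullMQ against the stores in Section 9.2.

interface Observation { fact: string; kind: "explicit" | "deductive" }
interface PeerCard { facts: string[] }

/** Stub deriver: extract explicit preference statements from raw messages. */
function derive(messages: string[]): Observation[] {
  return messages
    .filter((m) => m.startsWith("I prefer"))
    .map((m) => ({ fact: m, kind: "explicit" as const }));
}

/** Stub dreamer: consolidate facts observed at least twice into patterns. */
function dream(observations: Observation[]): string[] {
  const counts = new Map<string, number>();
  for (const o of observations) counts.set(o.fact, (counts.get(o.fact) ?? 0) + 1);
  return [...counts.entries()].filter(([, n]) => n >= 2).map(([fact]) => fact);
}

/** Stub peer-card updater: merge consolidated patterns, deduplicated. */
function updateCard(card: PeerCard, patterns: string[]): PeerCard {
  return { facts: [...new Set([...card.facts, ...patterns])] };
}

// The chain under test: messages flow through all three stages.
const messages = ["I prefer TypeScript", "hello", "I prefer TypeScript"];
const card = updateCard({ facts: [] }, dream(derive(messages)));
// card.facts now holds the single consolidated fact.
```

The integration test then swaps the stubs for the real workers and measures end-to-end latency and correctness against the same assertions.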

8.2.3 Scale Tests: BEAM 10M Equivalent

Adapted BEAM 10M for Nexus:

  • 10,000 simulated users with 1,000 interactions each (10M total interactions)
  • Inject known facts at specific points in the interaction history
  • Query those facts at various delays (1 hour, 1 day, 1 week, 1 month, 6 months)
  • Measure: recall accuracy, retrieval latency, token efficiency (tokens used / total available)
  • Target: >85% accuracy using <5% of available context tokens
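
The scale-test metrics reduce to simple ratios over per-probe records. A sketch; the ProbeResult shape and function names are illustrative:

```typescript
// Sketch of the scale-test metrics from Section 8.2.3. The ProbeResult shape
// is illustrative; real probes come from the 10M-interaction simulation.

interface ProbeResult {
  recalled: boolean;        // did retrieval surface the injected fact?
  tokensUsed: number;       // tokens actually loaded to answer the probe
  tokensAvailable: number;  // tokens in the full interaction history
}

/** Fraction of injected facts correctly recalled. */
function recallAccuracy(results: ProbeResult[]): number {
  return results.filter((r) => r.recalled).length / results.length;
}

/** Tokens used divided by tokens available, aggregated over all probes. */
function tokenEfficiency(results: ProbeResult[]): number {
  const used = results.reduce((s, r) => s + r.tokensUsed, 0);
  const avail = results.reduce((s, r) => s + r.tokensAvailable, 0);
  return used / avail;
}

/** Section 8.2.3 targets: >85% accuracy using <5% of available tokens. */
function scaleTestPasses(results: ProbeResult[]): boolean {
  return recallAccuracy(results) > 0.85 && tokenEfficiency(results) < 0.05;
}
```

Both metrics are aggregated per delay bucket (1 hour through 6 months) so that decay-related recall loss is visible separately from overall accuracy.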

8.2.4 User Experience Tests: Subjective Quality

A/B testing framework:

  • Group A: Standard RAG retrieval (current system)
  • Group B: Cognitive memory architecture
  • Metrics: User satisfaction (NPS), task completion time, error rate, "cold start" perception (does the system feel like it knows the user?), suggestion acceptance rate

8.2.5 Privacy and Safety Tests

  • PII scan accuracy: >99.5% detection rate before collective memory promotion
  • Observation scoping: Zero cross-peer information leakage
  • Protected node survival: 100% survival rate through all forgetting cycles
  • GDPR data export: Complete user memory export in <60 seconds
  • GDPR data deletion: Complete user memory deletion in <300 seconds with cascade verification
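
The PII gate that the first test exercises can be sketched as a promotion guard. A production system would use a trained PII detector to reach the 99.5% target; the regexes here are illustrative stand-ins:

```typescript
// Sketch of the PII gate that must pass before any collective-memory
// promotion (Section 8.2.5). A production system would use a trained PII
// detector; the regexes here are illustrative stand-ins.

const PII_PATTERNS: RegExp[] = [
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/,        // email addresses
  /\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/,  // US-style phone numbers
  /\b\d{3}-\d{2}-\d{4}\b/,              // SSN-shaped identifiers
];

function containsPii(text: string): boolean {
  return PII_PATTERNS.some((pattern) => pattern.test(text));
}

/** Promote an insight to collective memory only if no PII is detected. */
function promoteToCollective(insight: string): { promoted: boolean; reason?: string } {
  if (containsPii(insight)) {
    return { promoted: false, reason: "PII detected; anonymize before promotion" };
  }
  return { promoted: true };
}
```

The scoping test then asserts zero promoted insights containing detectable PII across the full test corpus.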

8.3 Testing Dashboard UI/UX

+================================================================+
|  COGNITIVE MEMORY TEST SUITE                    [Run All] [CI]  |
+================================================================+
|                                                                 |
|  PATTERN TESTS                        Pass: 11/13  Fail: 2/13  |
|  +---------------------------------------------------------+   |
|  | [PASS] Diachronic Identity    98.2% accuracy             |   |
|  | [PASS] Deriver                96.1% fact extraction       |   |
|  | [PASS] Dreamer                84.3% pattern match         |   |
|  | [PASS] Surprisal              r=0.91 correlation          |   |
|  | [PASS] Dialectic              monotonic accuracy increase |   |
|  | [PASS] Activation Decay       7.2% RMSE (target: <10%)   |   |
|  | [PASS] Forgetting             corpus stabilized at 45K    |   |
|  | [FAIL] Narrative              72% structure accuracy      |   |
|  |        Expected: >80%  Failing: causal edge detection    |   |
|  | [PASS] Prospective            74.1% precision             |   |
|  | [PASS] Metacognition          93.2% "I don't know" rate   |   |
|  | [PASS] Collective             promotions correct          |   |
|  | [FAIL] Situated               21% improvement (need >25%) |   |
|  | [PASS] Peer Paradigm          zero leakage               |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  SCALE TEST (BEAM 10M equivalent)                              |
|  +---------------------------------------------------------+   |
|  | Users: 10,000  |  Interactions: 10M  |  Duration: 4.2h  |   |
|  | Recall accuracy:  87.3%  [PASS >85%]                     |   |
|  | Token efficiency:  3.8%  [PASS <5%]                      |   |
|  | Avg retrieval latency: 142ms                             |   |
|  | P99 retrieval latency: 890ms                             |   |
|  +---------------------------------------------------------+   |
|                                                                 |
|  PRIVACY TESTS                              Pass: 5/5          |
|  +---------------------------------------------------------+   |
|  | [PASS] PII detection:       99.7% (target >99.5%)       |   |
|  | [PASS] Observation scoping: 0 leaks in 100K tests       |   |
|  | [PASS] Protected survival:  100% after 10 decay cycles  |   |
|  | [PASS] GDPR export:         34s (target <60s)           |   |
|  | [PASS] GDPR delete:         187s (target <300s)         |   |
|  +---------------------------------------------------------+   |
|                                                                 |
+================================================================+

9. Nexus Cognitive Memory Microservice

9.1 Architecture Decision: Extend GraphRAG vs. New Microservice

Two approaches exist for implementing the cognitive memory architecture:

Option A: Extend GraphRAG. Add deriver, dreamer, dialectic, and activation decay as new modules within the existing nexus-graphrag service. Advantages: shared database connections, existing tenant context middleware, no new deployment. Disadvantages: increased service complexity (already ~5,700 lines in api.ts), mixed responsibilities (document storage + cognitive reasoning), deployment coupling (a dreamer bug takes down document search).

Option B: New Microservice (nexus-cognitive). A dedicated microservice for cognitive memory, consuming events from GraphRAG and producing peer card updates, observations, and narrative nodes. Advantages: single responsibility, independent scaling, independent deployment, clean API boundary. Disadvantages: additional infrastructure, cross-service latency, data consistency challenges.

Recommendation: Option B (New Microservice) for three reasons:

  1. The dreamer's background processing has fundamentally different scaling characteristics from GraphRAG's request-response pattern
  2. Cognitive memory failures should not cascade to document storage/retrieval
  3. The MCP integration layer (Section 5.8) benefits from a clean API surface

9.2 Microservice Architecture

+================================================================+
|                    nexus-cognitive                               |
+================================================================+
|                                                                 |
|  +------------------+  +------------------+  +--------------+  |
|  |  Deriver Worker  |  | Dreamer Worker   |  | Dialectic    |  |
|  |  (BullMQ)        |  | (scheduled +     |  | API          |  |
|  |                  |  |  surprisal-       |  | (Express)    |  |
|  |  - Fact extract  |  |  triggered)       |  |              |  |
|  |  - Deductive     |  |                  |  | - /query     |  |
|  |    conclusions   |  |  - Deduction     |  | - /card      |  |
|  |  - Surprisal     |  |    specialist    |  | - /context   |  |
|  |    computation   |  |  - Induction     |  | - /health    |  |
|  |  - Observation   |  |    specialist    |  |              |  |
|  |    storage       |  |  - Consolidation |  +--------------+  |
|  |                  |  |  - Peer card     |                     |
|  +------------------+  |    update        |  +--------------+  |
|                        |  - Forgetting    |  | Activation   |  |
|  +------------------+  |    sweep         |  | Decay Job    |  |
|  | Event Consumer   |  +------------------+  | (cron)       |  |
|  | (Redis Pub/Sub)  |                        |              |  |
|  |                  |  +------------------+  | - Recompute  |  |
|  | Listens:         |  | Prospective      |  |   scores     |  |
|  | - chat.message   |  | Memory Engine    |  | - Archive    |  |
|  | - skill.executed |  |                  |  |   low-act.   |  |
|  | - tool.invoked   |  | - Intent nodes   |  | - Purge      |  |
|  | - page.navigated |  | - Trigger eval   |  |   probation  |  |
|  |                  |  | - Proactive push |  +--------------+  |
|  +------------------+  +------------------+                     |
|                                                                 |
|  Shared: PostgreSQL (graphrag schema) | Neo4j | Qdrant | Redis |
+================================================================+
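
The Event Consumer's core responsibility is routing: each subscribed channel maps to a deriver job. A sketch with the Redis Pub/Sub subscription and BullMQ queue stubbed out so the routing logic itself stays testable; all names are illustrative:

```typescript
// Sketch of the Event Consumer's routing logic (Section 9.2). The Redis
// Pub/Sub subscription and BullMQ queue are stubbed; names are illustrative.

type Channel = "chat.message" | "skill.executed" | "tool.invoked" | "page.navigated";

interface PlatformEvent { channel: string; tenantId: string; payload: unknown }
interface DeriverJob { queue: "deriver"; tenantId: string; source: Channel; payload: unknown }

const HANDLED = new Set<string>([
  "chat.message",
  "skill.executed",
  "tool.invoked",
  "page.navigated",
]);

/** Map an incoming platform event to a deriver job, or drop it. */
function toDeriverJob(event: PlatformEvent): DeriverJob | null {
  if (!HANDLED.has(event.channel)) return null;
  return {
    queue: "deriver",
    tenantId: event.tenantId,
    source: event.channel as Channel,
    payload: event.payload,
  };
}

// In production (illustrative wiring, not verified API calls):
//   subscriber.on("message", (channel, raw) => {
//     const job = toDeriverJob(JSON.parse(raw));
//     if (job) deriverQueue.add(job.source, job);
//   });
```

Keeping tenantId on every job preserves the tenant-context isolation that the existing GraphRAG middleware enforces.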

9.3 Dashboard Tab: Cognitive Memory

The Nexus Dashboard gains a new "Memory" tab providing full visibility into the cognitive memory system:

+================================================================+
|  NEXUS DASHBOARD                                                |
|  [Overview] [Contacts] [Deals] [Skills] [Memory*] [Settings]  |
+================================================================+
|                                                                 |
|  MEMORY TAB                                                    |
|  +----+--------+----------+--------+-----------+----------+   |
|  |Peer|Observ- |Dreamer   |Dialec- |Collective |Health    |   |
|  |Card|ations  |Activity  |tic     |Memory     |Dashboard |   |
|  +----+--------+----------+--------+-----------+----------+   |
|                                                                 |
|  [Content changes based on selected sub-tab]                   |
|  See mockups in Section 7 for each sub-tab's layout            |
|                                                                 |
+================================================================+

Sub-tabs:

  1. Peer Card --- View and manually edit the 40-fact biographical profile
  2. Observations --- Browse, search, and filter the observation timeline with activation scores
  3. Dreamer Activity --- Log of consolidation runs, surprisal events, pattern discoveries
  4. Dialectic --- Interactive query interface with reasoning level selector
  5. Collective Memory --- Team and org-scope insights, promotion history, privacy audit log
  6. Health Dashboard --- Activation distribution, forgetting curves, coverage gaps, test results

10. Discussion

10.1 Limitations

Computational cost. The dreamer's background reasoning consumes LLM tokens. For 10,000 active users with daily dreamer cycles, estimated monthly cost is $2,000-$5,000 depending on model tier. This is acceptable for enterprise pricing but may need optimization for lower tiers.
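
The estimate can be sanity-checked with back-of-envelope arithmetic; the token counts and per-million-token prices below are assumptions chosen to illustrate the calculation, not measured values.

```typescript
// Back-of-envelope check of the dreamer cost estimate (Section 10.1).
// Token counts and prices are illustrative assumptions, not measurements.

const activeUsers = 10_000;
const cyclesPerMonth = 30;          // one dreamer cycle per user per day
const tokensPerCycle = 2_000;       // assumed input + output tokens per run
const dollarsPerMTokens = [3.5, 8]; // assumed cheap vs. premium model tier

const monthlyTokens = activeUsers * cyclesPerMonth * tokensPerCycle; // 600M
const [low, high] = dollarsPerMTokens.map((p) => (monthlyTokens / 1_000_000) * p);
console.log(`~$${low}-$${high} per month`); // ~$2100-$4800 per month
```

The dominant lever is tokensPerCycle, which is why surprisal-gated dreamer triggering (running consolidation only when something notable happened) matters for cost as well as quality.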

Cold start. New users have no observation history. The system must gracefully degrade to standard RAG behavior until sufficient observations accumulate (estimated: 20-50 interactions for meaningful peer card generation).

Privacy tension. Collective memory promotion inherently creates tension between organizational learning and individual privacy. The PII scanning and anonymization pipeline must be robust, and promotion decisions should be auditable and reversible.

Evaluation difficulty. Many cognitive memory patterns (diachronic identity, narrative memory, metacognition) produce subjective improvements that are difficult to measure with automated metrics. A/B testing with human evaluation is necessary but expensive.

Contradiction resolution. The dreamer's contradiction handling relies on temporal recency and evidence weight, but some contradictions reflect genuine ambiguity (the user truly does prefer different tools in different contexts). Over-eager contradiction resolution could eliminate valid nuance.

10.2 Ethical Considerations

Building detailed psychological models of users raises ethical concerns:

  • Informed consent: Users must know that the system builds models of their behavior and have the ability to inspect, edit, and delete those models
  • Manipulation resistance: User models must not be used to manipulate user behavior (dark patterns, addiction mechanisms, weaponized personalization)
  • Bias amplification: If the system learns that a user responds to urgency-based framing, it should not escalate urgency framing to manipulate decisions
  • Right to be forgotten: GDPR-compliant data deletion must cascade through all thirteen pattern stores, including archived observations, narrative nodes, and collective memory contributions
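
The right-to-be-forgotten requirement implies a deletion that cascades through every pattern store and is then verified empty, matching the GDPR-delete test of Section 8.2.5. A minimal synchronous sketch; the interfaces and in-memory stub are illustrative, and the real backends (PostgreSQL, Neo4j, Qdrant, Redis) delete asynchronously:

```typescript
// Sketch of the GDPR cascade-deletion check (Section 10.2). Store names and
// interfaces are illustrative; real backends delete asynchronously.

interface MemoryStore {
  name: string;
  /** Delete everything held for userId; returns the number of items removed. */
  deleteUser(userId: string): number;
}

/** Delete a user's memory from every store, recording what each removed. */
function cascadeDelete(userId: string, stores: MemoryStore[]): Record<string, number> {
  const removed: Record<string, number> = {};
  for (const store of stores) {
    removed[store.name] = store.deleteUser(userId);
  }
  return removed;
}

/** Verification pass: a second cascade must find nothing left anywhere. */
function verifyDeleted(userId: string, stores: MemoryStore[]): boolean {
  return Object.values(cascadeDelete(userId, stores)).every((n) => n === 0);
}

/** In-memory stub store for exercising the cascade logic. */
function memStore(name: string, rows: Map<string, number>): MemoryStore {
  return {
    name,
    deleteUser(userId: string): number {
      const n = rows.get(userId) ?? 0;
      rows.delete(userId);
      return n;
    },
  };
}
```

The verification pass is what makes the cascade auditable: the test suite asserts not just that deletes ran, but that a second sweep finds zero residual items in every store, including archives and collective-memory contributions.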

10.3 Future Work

Three directions warrant further investigation:

  1. Federated cognitive memory: Extending the collective memory pattern across organizational boundaries while preserving competitive confidentiality. Industry-level pattern recognition without exposing individual company data.

  2. Multi-modal memory: Extending the architecture to encode visual, audio, and spatial context alongside text. A user's screen layout, voice tone, and physical environment provide situated cognition signals that text alone cannot capture.

  3. Causal inference in user modeling: Moving beyond correlation (the dreamer's inductive patterns) to causal models that can answer counterfactual questions: "Would this user have closed the deal if they had sent the case study 24 hours earlier?"


11. Conclusion

Enterprise AI platforms have treated user memory as a storage problem. It is a reasoning problem. The thirteen patterns presented in this paper --- diachronic identity, observer/observed paradigm, deriver/dreamer pipeline, surprisal-guided consolidation, dialectic retrieval, tiered virtual memory, ACT-R activation decay, narrative memory, prospective memory, metacognition, principled forgetting, collective memory, and situated cognition --- transform memory from a passive data store into an active cognitive system that learns, reasons, forgets, and anticipates.

The Adverant Nexus platform's existing GraphRAG infrastructure (PostgreSQL + Neo4j + Qdrant) provides the storage substrate. The four identified context injection points provide the interface. What remains is the reasoning layer: the deriver that extracts facts, the dreamer that consolidates patterns, the dialectic that synthesizes answers, and the activation decay that ensures the system remembers what matters and forgets what does not.

The fifty use cases demonstrate that cognitive memory is not an abstract capability but a concrete source of user value across every plugin domain --- from a novelist whose AI remembers their prose style to a sales rep whose AI anticipates which deals are at risk. The gap between "find similar text" and "understand this person" is the gap between a tool and a partner. Closing it is the next frontier of enterprise AI.


References

[1] C. Leer, "Identity is diachronic," Plastic Labs Blog, Sep. 2025. [Online]. Available: blog.plasticlabs.ai

[2] C. Leer, "Launching Honcho: The Personal Identity Platform for AI," Plastic Labs Blog, May 2025. [Online]. Available: blog.plasticlabs.ai

[3] Y. Honda, Y. Fujita, K. Zempo, and S. Fukushima, "Human-Like Remembering and Forgetting in LLM Agents: An ACT-R-Inspired Memory Architecture," in Proc. 13th Int. Conf. Human-Agent Interaction (HAI '25), 2025. DOI: 10.1145/3765766.3765803

[4] Y. Zhou, X. Guo, B. Bayar, and S. H. Sengamedu, "Amory: Building Coherent Narrative-Driven Agent Memory through Agentic Reasoning," arXiv preprint arXiv:2601.06282, Jan. 2026.

[5] J. Nan, W. Ma, W. Wu, and Y. Chen, "Nemori: Self-Organizing Agent Memory Inspired by Cognitive Science," arXiv preprint arXiv:2508.03341, 2025.

[6] H. Jiang et al., "SYNAPSE: Empowering LLM Agents with Episodic-Semantic Memory via Spreading Activation," arXiv preprint arXiv:2601.02744, Jan. 2026.

[7] V. Voruganti, "Beyond the User-Assistant Paradigm: Introducing Peers," Plastic Labs Blog, Aug. 2025. [Online]. Available: blog.plasticlabs.ai

[8] C. Packer, S. Wooders, K. Lin, V. Fang, S. G. Patil, I. Stoica, and J. E. Gonzalez, "MemGPT: Towards LLMs as Operating Systems," arXiv preprint arXiv:2310.08560, Oct. 2023.

[9] C. Leer, V. Trost, and V. Voruganti, "Introducing Honcho's Dialectic API," Plastic Labs Blog, Mar. 2024. [Online]. Available: blog.plasticlabs.ai

[10] M. Griot, C. Hemptinne, J. Vanderdonckt, and D. Yuksel, "Large Language Models lack essential metacognition for reliable medical reasoning," Nature Communications, vol. 16, p. 642, Jan. 2025. DOI: 10.1038/s41467-024-55628-6

[11] L. Ji-An, H.-D. Xiong, R. C. Wilson, M. G. Mattar, and M. K. Benna, "Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations," arXiv preprint arXiv:2505.13763, 2025.

[12] Y. Hu et al., "Memory in the Age of AI Agents," arXiv preprint arXiv:2512.13564, Dec. 2025.

[13] T. R. Sumers, S. Yao, K. Narasimhan, and T. L. Griffiths, "Cognitive Architectures for Language Agents," Trans. Machine Learning Research, 2024. arXiv:2309.02427.

[14] Letta Documentation, "Understanding Memory Management," [Online]. Available: docs.letta.com

[15] W. Xu, Z. Liang, K. Mei, H. Gao, J. Tan, and Y. Zhang, "A-MEM: Agentic Memory for LLM Agents," in Advances in Neural Information Processing Systems (NeurIPS), 2025. arXiv:2502.12110.

[16] D. Kline, "Human-like Forgetting Curves in Deep Neural Networks," arXiv preprint arXiv:2506.12034, 2025.

[17] Y. Zhou, Z. Liu, J. Jin, J.-Y. Nie, and Z. Dou, "Metacognitive Retrieval-Augmented Large Language Models," in Proc. ACM Web Conf. 2024 (WWW '24), 2024. DOI: 10.1145/3589334.3645481

[18] Z. Tan et al., "In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents," in Proc. 63rd Annual Meeting of the ACL, 2025. arXiv:2503.08026.

[19] K. Wedel, "Contextual Memory Intelligence --- A Foundational Paradigm for Human-AI Collaboration and Reflective Generative AI Systems," arXiv preprint arXiv:2506.05370, 2025.

[20] C. Riedl and D. De Cremer, "AI for collective intelligence," Collective Intelligence, vol. 4, no. 2, Apr. 2025. DOI: 10.1177/26339137251328909

[21] D. Kahneman, Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

[22] Plato, Meno, trans. G. M. A. Grube. Indianapolis: Hackett Publishing, 1981.

[23] P. Du, "Memory for Autonomous LLM Agents: Mechanisms, Evaluation, and Emerging Frontiers," arXiv preprint arXiv:2603.07670, 2026.

[24] Honcho Documentation, "Claude Code Integration," [Online]. Available: docs.honcho.dev

[25] L. Shan, S. Luo, Z. Zhu, Y. Yuan, and Y. Wu, "Cognitive Memory in Large Language Models," arXiv preprint arXiv:2504.02441, 2025.

[26] Y. Wu et al., "From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs," arXiv preprint arXiv:2504.15965, 2025.

[27] Z. Li et al., "MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models," arXiv preprint arXiv:2505.22101, May 2025.


This paper was produced using the Adverant Research & Engineering methodology. All citations have been verified against their original sources. Author and affiliation information should be completed before external publication.

[Placeholder: Author Name], Adverant Research & Engineering, 2026.