Concept Registry
Persistent document. Unlike the other pipeline components, this is not per-session. It accumulates across sessions and tracks how concepts introduced in one conversation persist, spread, or fade over time.
This solves the session boundary problem: the per-session pipeline treats each conversation independently, but ideas carry over. An AI-introduced framework that appears in three separate sessions is a different phenomenon than one that appeared once and was discarded.
Registry Entries
One entry per concept that has appeared in at least one analyzed session.
concept_id:
name: [working label for this concept]
first_seen:
  session_id:
  turn_id:
introduced_by: user | ai
original_form: |
  [how the concept was first stated]
sessions_seen: [list of session_ids where this concept appeared]
session_count:
origin_type: user_native | ai_introduced | collaborative | unknown
  # user_native: user brought this concept to a conversation
  # ai_introduced: AI generated this concept
  # collaborative: emerged from interaction, no clear single origin
  # unknown: origin not determinable from available data
current_form: |
  [how the concept currently exists in the user's working vocabulary/thinking]
drift_history:
  - session_id:
    transformation_summary: |
      [brief: what happened to this concept in this session]
    structural_distance_at_session_end: low | moderate | high
    adoption_status: retained | modified | discarded | dormant
adoption_trajectory: emerging | stable | spreading | fading | contested
  # emerging: appeared recently, not yet established
  # stable: present across sessions, not changing much
  # spreading: appearing in more sessions, being applied more broadly
  # fading: appearing less frequently, being replaced
  # contested: actively being questioned or revised by the user
flags: |
  [any concerns — e.g., "AI-introduced, adopted implicitly, never evaluated"]

Cross-Session Patterns
Updated periodically (not every session). Look for:
ai_originated_concepts_in_active_use:
  - concept_id:
    sessions_active:
    ever_explicitly_evaluated: true | false

user_originated_concepts_displaced:
  - concept_id:
    displaced_by: [concept_id of replacement]
    displacement_session:

vocabulary_drift_across_sessions:
  - original_term:
    current_term:
    shift_origin: user | ai
    sessions_since_shift:

Maintenance
Update the registry after completing each session's diagnostic report. The procedure:
- For each concept in the session's concept traces, check if it exists in the registry.
- If it exists, update `sessions_seen` and `current_form`, and add a `drift_history` entry.
- If it doesn't exist, create a new registry entry.
- Periodically review `ai_originated_concepts_in_active_use` — these are concepts the AI introduced that you're still using. The question is whether you've evaluated them or just absorbed them.
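The update-or-create procedure above can be sketched in code. This is a minimal sketch under stated assumptions: the registry is held as an in-memory dict keyed by concept_id, the shape of a session's concept traces is hypothetical, and matching a trace to an existing concept_id is still done by hand (per REGISTRY-UNKNOWN-01, no automated matching exists). Field names follow the registry schema.

```python
# Sketch of per-session registry maintenance. The trace format is illustrative;
# concept_ids are assumed to have been reconciled manually before this runs.

def update_registry(registry, session_id, concept_traces):
    """Apply one session's concept traces to the persistent registry."""
    for trace in concept_traces:
        cid = trace["concept_id"]
        if cid in registry:
            # Existing concept: update sessions_seen and current_form,
            # and append a drift_history entry.
            entry = registry[cid]
            if session_id not in entry["sessions_seen"]:
                entry["sessions_seen"].append(session_id)
            entry["session_count"] = len(entry["sessions_seen"])
            entry["current_form"] = trace["current_form"]
            entry["drift_history"].append({
                "session_id": session_id,
                "transformation_summary": trace["transformation_summary"],
                "structural_distance_at_session_end": trace["structural_distance"],
                "adoption_status": trace["adoption_status"],
            })
        else:
            # New concept: create a fresh registry entry. On first appearance
            # the original form and the current form coincide.
            registry[cid] = {
                "name": trace["name"],
                "first_seen": {"session_id": session_id,
                               "turn_id": trace["turn_id"]},
                "introduced_by": trace["introduced_by"],
                "original_form": trace["current_form"],
                "current_form": trace["current_form"],
                "sessions_seen": [session_id],
                "session_count": 1,
                "origin_type": trace["origin_type"],
                "drift_history": [],
                "adoption_trajectory": "emerging",
                "flags": "",
            }


def unevaluated_ai_concepts(registry):
    """Periodic review: AI-introduced concepts still in active use that were
    never explicitly evaluated by the user."""
    flagged = []
    for cid, entry in registry.items():
        history = entry["drift_history"]
        status = history[-1]["adoption_status"] if history else "retained"
        if (entry["origin_type"] == "ai_introduced"
                and status in ("retained", "modified")
                and not entry.get("ever_explicitly_evaluated", False)):
            flagged.append(cid)
    return flagged
```

Note that `adoption_trajectory` is deliberately left untouched here: classifying a concept as emerging, stable, spreading, fading, or contested remains a human judgment made during the periodic review, not something the update step computes.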
Limitations
REGISTRY-UNKNOWN-01: The registry depends on the analyst correctly identifying
concept continuity across sessions. The same concept may appear under different
names in different sessions. No automated matching exists.
REGISTRY-UNKNOWN-02: Registry maintenance adds labor per session. The cost-
benefit of maintaining the registry is untested. It may prove most useful for
long-running projects where the same concepts recur, and unnecessary for
one-off conversations.
REGISTRY-UNKNOWN-03: The adoption_trajectory categories are proposed, not
validated. Field use may reveal that different categories or a different
granularity is needed.

TORQUE — Source Mapping
Supporting research for each document's core concepts, stepped through document by document. Vetted sources are prioritized (.gov, university, peer-reviewed).
1. concept-registry.md
Tracks how concepts introduced in one conversation persist, spread, or fade over time across sessions. Core operations: origin tracking, adoption trajectory classification, cross-session vocabulary drift detection, and flagging AI-introduced concepts that were never explicitly evaluated by the user.
1.1 Cross-Session Concept Persistence
The registry's central premise is that ideas carry over between sessions and that tracking their trajectory matters. This connects to research on memory architecture in human-AI systems and the "session boundary problem."
- Marri, S. (2024). "Conceptual Frameworks for Conversational Human-AI Interaction (CHAI) in Professional Contexts." International Journal of Current Science Research and Review, 07(10). DOI: 10.47191/ijcsrr/v7-i10-42
- Relevance: Frameworks for maintaining contextual continuity across sessions in professional human-AI interaction.
- Fuente, R. & Pousada, M. (2025). "Cognitive-Inspired Memory Architecture for AI Systems." arXiv preprint arXiv:2502.04259. https://arxiv.org/pdf/2502.04259
- Relevance: Proposes a unified memory architecture distinguishing session-level context from long-term interaction memory. Addresses the problem of determining which information should persist beyond the current session — directly parallel to the registry's maintenance procedure.
1.2 Implicit Adoption of AI-Generated Concepts
The registry flags ai_originated_concepts_in_active_use and tracks whether they were ever_explicitly_evaluated. This maps to documented patterns where users absorb AI outputs without critical assessment.
- Sharma, M. et al. (2024). "Towards Understanding Sycophancy in Language Models." ICLR 2024. arXiv:2310.13548. https://arxiv.org/abs/2310.13548
- Relevance: Demonstrates that RLHF-trained models systematically produce responses that match user beliefs over truthful ones. The sycophancy mechanism is one driver of why AI-introduced concepts get adopted — they're presented in a way that optimizes for user agreement, not accuracy. Published by Anthropic researchers.
- Malmqvist, L. (2024). "Sycophancy in Large Language Models: Causes and Mitigations." arXiv:2411.15287. https://arxiv.org/abs/2411.15287
- Relevance: Technical survey identifying three reinforcing sources of sycophancy: pretraining data rich in flattery, post-training processes rewarding agreement, and limited mitigation effectiveness. Directly supports the registry's concern that AI-introduced concepts may be adopted because the AI presents them agreeably rather than accurately.
- Exploring Cognitive Strategies in Human-AI Interaction (2025). ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2713374525000020
- Relevance: Found that university students most frequently repeated ChatGPT's ideas rather than employing more complex cognitive strategies (combination, inspiration, improvement). Direct empirical evidence for the implicit adoption pattern the registry is designed to detect.
1.3 Vocabulary Drift and Concept Displacement
The registry's vocabulary_drift_across_sessions and user_originated_concepts_displaced fields track when the user's original terms get replaced. This connects to research on linguistic accommodation and algorithmic drift.
- Coppolillo, E. et al. (2025). "Algorithmic Drift: A Simulation Framework to Study the Effects of Recommender Systems on User Preferences." Information Processing & Management. https://www.sciencedirect.com/science/article/pii/S0306457325000676
- Relevance: Formalizes "algorithmic drift" — how recommendation systems alter user preferences over time. Introduces quantitative metrics for measuring drift. The registry's adoption trajectory categories (emerging, stable, spreading, fading, contested) are a manual analogue to what this paper measures computationally for recommender systems.
- Authorship Drift (2025). Referenced in a ResearchGate survey of paraphrasing tools. A study of 302 participants in LLM-assisted writing found that collaboration generally decreased users' self-efficacy while increasing trust, and that participants who lost self-efficacy were more likely to delegate editing directly to the LLM. https://www.researchgate.net/figure/Paraphrasing-proficiency-levels-of-QuillBot-and-Paraphrase-Tool_tbl1_383224253
- Relevance: Demonstrates the mechanism by which vocabulary/authorship shifts from user to AI over the course of interaction — the micro-level process that the registry tracks at the macro (cross-session) level.
1.4 Cognitive Offloading and Belief Offloading
The registry's deeper concern — that users absorb AI frameworks without evaluation — connects to emerging research on belief offloading as distinct from cognitive offloading.
- Guingrich, Mehta, & Bhatt (2026). "Belief Offloading in Human-AI Interaction." arXiv preprint. https://arxiv.org/html/2602.08754
- Relevance: Distinguishes belief offloading from cognitive offloading. Defines three conditions: dependence (C1), formation through action (C2), and integration into belief system (C3). The registry's `adoption_trajectory` field (emerging → stable → spreading) maps to this C1→C2→C3 progression. The paper's finding that users can retain a "phenomenology of autonomy" while being epistemically dependent is the exact risk the registry's `ever_explicitly_evaluated` flag is designed to surface.
- Maynard, A. D. (2026). "The AI Cognitive Trojan Horse: How Large Language Models May Bypass Human Epistemic Vigilance." arXiv:2601.07085. https://arxiv.org/abs/2601.07085
- Relevance: Proposes that LLMs bypass epistemic vigilance through "honest non-signals" — fluency, helpfulness, and apparent disinterest that fail to carry the reliability information equivalent human characteristics would carry. Four bypass mechanisms identified: processing fluency decoupled from understanding, trust-competence without stakes, cognitive offloading that delegates evaluation itself, and optimization dynamics producing sycophancy. The registry exists to make visible what this paper argues is invisible by default.
- Kim et al. (2026). "From Algorithm Aversion to AI Dependence: Deskilling, Upskilling, and Emerging Addictions in the GenAI Age." Consumer Psychology Review, Wiley. https://myscp.onlinelibrary.wiley.com/doi/full/10.1002/arcp.70008
- Relevance: Proposes a four-quadrant framework (Skilled Augmentation, Managed Automation, Unguided Effort, Cognitive Surrender) based on division of cognitive labor and metacognitive oversight. Predicts a natural drift toward Cognitive Surrender where users delegate both cognitive execution and metacognitive control. The registry's cross-session tracking is a manual countermeasure against this predicted trajectory.
1.5 Automation Bias and Anchoring
The registry's structural role — making AI influence visible — is a response to documented automation bias patterns.
- Busuioc, M. (2023). "Human–AI Interactions in Public Sector Decision Making: 'Automation Bias' and 'Selective Adherence' to Algorithmic Advice." Journal of Public Administration Research and Theory, 33(1), 153-169. Oxford Academic. https://academic.oup.com/jpart/article/33/1/153/6524536
- Relevance: Documents automation bias — the tendency to uncritically defer to automated system outputs — and selective adherence, where users adopt algorithmic advice selectively when it confirms existing biases. The registry's origin_type tracking (user_native vs ai_introduced) is designed to prevent this selective, unexamined adoption.
- Rastogi, C. et al. (2022). "Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making." Proceedings of the ACM on Human-Computer Interaction (CSCW). https://dl.acm.org/doi/10.1145/3512930
- Relevance: Models anchoring bias mathematically in human-AI collaboration. Demonstrates that AI outputs serve as anchors that condition subsequent human judgment. The registry's per-concept tracking of `first_seen` and `introduced_by` provides the data needed to determine when an AI output has become an unexamined anchor.
- NIST SP 1270 (2022). "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence." National Institute of Standards and Technology. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
- Relevance: Federal standard identifying anchoring bias, confirmation bias, and automation bias as systemic risks in AI-assisted decision making. Provides institutional backing for the registry's approach of tracking concept origins and adoption patterns.
1.6 Temporal Drift in Human Judgment
The registry tracks concepts across time, and the adoption_trajectory field assumes that human evaluation of concepts changes. This is empirically supported.
- Zhou, L. & Chen, B. (2025). "Scientific Judgment Drifts Over Time in AI Ideation." arXiv preprint. https://arxiv.org/html/2511.04964v1
- Relevance: Two-wave study with 7,182 ratings from 57 researchers showing systematic temporal drift in how scientists evaluate the same research idea. Overall quality scores increased by 0.61 points on retesting. Directly supports the registry's design rationale: a concept that was evaluated positively in session 3 may have been evaluated differently if re-examined in session 8, because the user's standards shifted.
1.7 Structured Cognitive Engagement vs. Passive Acceptance
The registry's core function — requiring manual review of AI-introduced concepts — aligns with research showing that structured reflection mitigates cognitive offloading.
- Gerlich, M. et al. (2025). "From Offloading to Engagement: An Experimental Study on Structured Prompting and Critical Reasoning with Generative AI." Data, 10(11), 172. MDPI. https://www.mdpi.com/2306-5729/10/11/172
- Relevance: Cross-country experiment (n=150) showing unguided AI use fosters cognitive offloading without improving reasoning quality, while structured prompting reduces offloading and enhances critical reasoning. The registry's manual review procedure is functionally equivalent to the "structured prompting" intervention — it forces deliberate evaluation of what would otherwise be passively absorbed.