Beyond Context Limits: Subconscious Threads for Long-Horizon Reasoning

To break the context limits of large language models (LLMs) that bottleneck reasoning accuracy and efficiency, we propose the Thread Inference Model (TIM), a family of LLMs trained for recursive and decompositional problem solving, and TIMRUN, an inference runtime that enables long-horizon structured reasoning beyond context limits. Together, TIM hosted on TIMRUN supports virtually unlimited working memory and multi-hop tool calls within a single language model inference, overcoming output limits, positional-embedding constraints, and GPU-memory bottlenecks. This performance is achieved by modeling natural language as reasoning trees, measured by both length and depth, instead of linear sequences. The reasoning trees consist of tasks with thoughts, recursive subtasks, and conclusions, following the concept we proposed in Schroeder et al. (2025). During generation, we maintain a working memory that retains only the key-value states of the most relevant context tokens, selected by a rule-based subtask-pruning mechanism, which enables reuse of positional embeddings and GPU memory pages throughout reasoning. Experimental results show that our system sustains high inference throughput even when manipulating up to 90% of the KV cache in GPU memory. It also delivers accurate reasoning on mathematical tasks and handles information-retrieval challenges that require long-horizon reasoning and multi-hop tool use.
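
To make the reasoning-tree structure concrete, here is a minimal Python sketch of a task node with a thought, recursive subtasks, and a conclusion. The field names, the `span` token-offset field, and the length/depth metrics (length counted here as the number of tasks) are illustrative assumptions, not TIM's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Task:
    """One node of a reasoning tree (field names are illustrative)."""
    thought: str                                           # reasoning that opens the task
    subtasks: List["Task"] = field(default_factory=list)   # recursive decomposition
    conclusion: Optional[str] = None                       # filled once subtasks resolve
    span: Tuple[int, int] = (0, 0)                         # hypothetical (start, end) token offsets

def depth(t: Task) -> int:
    """Depth of the tree: a leaf task has depth 1."""
    return 1 + max((depth(s) for s in t.subtasks), default=0)

def length(t: Task) -> int:
    """Length of the tree, counted here as the total number of tasks."""
    return 1 + sum(length(s) for s in t.subtasks)

root = Task("Answer the question",
            [Task("Look up fact A"), Task("Combine A with B")])
assert depth(root) == 2 and length(root) == 3
```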
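
Likewise, a hedged sketch of the rule-based subtask-pruning idea, reusing the `Task` class above: once a task carries a conclusion, the key-value states of its resolved subtasks are evicted, so their positional embeddings and GPU pages become reusable. The `PagedKVCache` class and its `evict` method are hypothetical stand-ins, not TIMRUN's actual runtime interface.

```python
from typing import Dict, Tuple

class PagedKVCache:
    """Toy stand-in for a paged KV cache keyed by token span."""
    def __init__(self) -> None:
        self.pages: Dict[Tuple[int, int], list] = {}

    def evict(self, span: Tuple[int, int]) -> None:
        # Dropping a span frees its pages (and positions) for later tokens.
        self.pages.pop(span, None)

def release(task: Task, cache: PagedKVCache) -> None:
    """Evict a task's tokens and, recursively, all of its descendants'."""
    for sub in task.subtasks:
        release(sub, cache)
    cache.evict(task.span)

def prune(task: Task, cache: PagedKVCache) -> None:
    """A concluded task keeps only its thought and conclusion; the token
    states of its resolved subtasks are dropped from working memory."""
    if task.conclusion is not None:
        for sub in task.subtasks:
            release(sub, cache)
        task.subtasks = []
    else:
        for sub in task.subtasks:
            prune(sub, cache)
```

Under this sketch's assumptions, applying `prune` after each concluded task bounds the working memory by the active path of the tree rather than by the full generation history.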