Idea2Story: An Automated Pipeline for Transforming Research Concepts into Complete Scientific Narratives
Abstract
Autonomous scientific discovery driven by agents built on large language models (LLMs) has recently made considerable progress, demonstrating the ability to automate end-to-end research workflows. However, existing systems rely heavily on runtime-centric execution paradigms that repeatedly read, summarize, and reason over large volumes of online scientific literature. This real-time computation strategy incurs high computational costs, suffers from context-window limitations, and frequently leads to brittle reasoning and hallucinations. We propose Idea2Story, a pre-computation-based framework for autonomous scientific discovery that shifts the understanding of scientific literature from online reasoning to offline knowledge construction. Idea2Story continuously collects peer-reviewed papers together with reviewer feedback, extracts core methodological units, composes reusable research patterns, and organizes them into a structured methodological knowledge graph. At runtime, under-specified user research intents are aligned with established research paradigms, enabling efficient retrieval and reuse of high-quality research patterns rather than open-ended generation or trial-and-error exploration. By grounding research planning and execution in a pre-built knowledge graph, Idea2Story alleviates the LLM context-window bottleneck and significantly reduces repeated real-time reasoning over the literature. We conduct qualitative analyses and preliminary empirical studies showing that Idea2Story can generate coherent, methodologically grounded, and novel research patterns, and can produce several high-quality end-to-end research demonstrations. These results suggest that offline knowledge construction provides a practical and scalable foundation for reliable autonomous scientific discovery.
One-sentence Summary
The AgentAlpha team proposes Idea2Story, a pre-computation framework that builds a methodological knowledge graph from peer-reviewed papers to ground vague research ideas in structured, reusable patterns, mitigating LLM context-window limits and hallucination while enabling efficient, novel scientific discovery without runtime literature reprocessing.
Key Contributions
- Idea2Story introduces a pre-computation-driven framework that constructs a structured methodological knowledge graph from peer-reviewed papers and reviews, replacing inefficient runtime literature processing with offline knowledge curation to improve scalability and reduce hallucination.
- The system grounds user research intents by retrieving and composing validated research patterns from the knowledge graph, enabling efficient, context-aware planning that circumvents LLM context window limits and avoids open-ended trial-and-error generation.
- Preliminary empirical studies show Idea2Story generates coherent, novel, and methodologically grounded research demonstrations end-to-end, validating the practical feasibility of offline knowledge construction for autonomous scientific discovery.
Introduction
The authors leverage large language models to automate scientific discovery but address key inefficiencies in existing systems that rely on real-time, context-heavy literature processing. Prior approaches suffer from high computational costs, context window limits, and brittle reasoning due to repeated online summarization and trial-and-error exploration. Idea2Story introduces a pre-computation framework that builds a structured knowledge graph offline by extracting and organizing methodological units from peer-reviewed papers and their reviews. At runtime, it maps vague research intents to validated research patterns from this graph, enabling faster, more reliable, and more coherent scientific planning without reinventing known methods. This shift reduces hallucination risk and computational load while grounding research in empirically supported paradigms.
Dataset
- The authors construct a paper pool from ~13,000 accepted machine learning papers (5,000 from NeurIPS, 8,000 from ICLR) published within the most recent three-year window, retaining full text (title, abstract, body) and associated review artifacts (comments, ratings, confidence scores, meta-reviews).
- Each paper undergoes anonymization to remove author/reviewer identifiers (names, affiliations, emails) and safety filtering to eliminate toxic or abusive content, yielding a de-identified corpus that preserves technical and evaluative signals while minimizing privacy and safety risks.
- The dataset is used to train Idea2Story, which leverages the paper-review pairs to learn how research contributions are framed and evaluated, supporting retrieval and composition of reusable methodological patterns rather than domain-specific content.
- The knowledge graph built from this data reveals a hub-and-spoke structure: high-frequency domains act as hubs connecting many papers, while methodological patterns often bridge multiple domains, enabling abstraction-aware retrieval and synthesis beyond paper-level similarity.
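The de-identification and safety-filtering step described above can be sketched as a simple preprocessing pass over the raw paper-review records. The field names, regular expression, and blocklist below are illustrative assumptions for a minimal sketch, not the authors' actual pipeline.

```python
import re

# Illustrative identifier pattern and toxicity blocklist; the paper does not
# specify the exact rules, so these are stand-ins.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
BLOCKLIST = {"idiot", "stupid"}  # placeholder for a real toxicity classifier

def deidentify(text: str, author_names: list[str], affiliations: list[str]) -> str:
    """Remove emails, author names, and affiliations from raw paper/review text."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    for name in author_names:
        text = text.replace(name, "[AUTHOR]")
    for aff in affiliations:
        text = text.replace(aff, "[AFFILIATION]")
    return text

def is_safe(review: str) -> bool:
    """Crude safety check; a production system would use a trained classifier."""
    return set(review.lower().split()).isdisjoint(BLOCKLIST)

def build_corpus(papers: list[dict]) -> list[dict]:
    """Keep technical and evaluative signals, drop identities and unsafe reviews."""
    corpus = []
    for p in papers:
        reviews = [deidentify(r, p["authors"], p["affiliations"])
                   for r in p["reviews"] if is_safe(r)]
        corpus.append({
            "title": p["title"],
            "abstract": p["abstract"],
            "body": deidentify(p["body"], p["authors"], p["affiliations"]),
            "reviews": reviews,
            "ratings": p.get("ratings", []),
        })
    return corpus
```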
Method
The framework of Idea2Story operates through a two-stage paradigm that decouples offline knowledge construction from online research generation, enabling the system to transform informal user ideas into structured, academically grounded research directions. The overall architecture is divided into an offline phase for building a persistent methodological knowledge base and an online phase for grounding user inputs and generating refined research patterns.

In the offline stage, the system begins by constructing a curated paper pool from top-tier peer-reviewed conferences, filtering out identities and harmful content to ensure privacy and safety. This anonymized and cleaned dataset undergoes method unit extraction, where each paper is deconstructed into its core methodological contributions. The extraction process leverages the structured layout of academic papers, analyzing the introduction, method, and experiments sections to isolate reusable method units that capture essential technical ideas while excluding implementation-specific details such as hyperparameter tuning or dataset selection. Each method unit is normalized into structured attributes, including atomic meta-methods and composition-level patterns, and represented as a vector embedding derived from its associated units. These embeddings are then projected into a lower-dimensional space using UMAP, followed by density-based clustering with DBSCAN to identify coherent research patterns that represent recurring methodological structures across the literature.
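A minimal sketch of the embedding, projection, and clustering step is shown below. It assumes the umap-learn and scikit-learn libraries and a generic sentence-embedding model; the model choice and hyperparameters are illustrative, not values reported by the authors.

```python
import umap                               # umap-learn
from sklearn.cluster import DBSCAN
from sentence_transformers import SentenceTransformer

# Each method unit is a short normalized text describing a reusable technical idea.
method_units = [
    "contrastive pretraining on paired modalities",
    "low-rank adaptation of frozen transformer weights",
    "curriculum over task difficulty during fine-tuning",
    # ... thousands more extracted offline
]

# 1. Embed method units (any sentence encoder works; this model choice is an assumption).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(method_units, normalize_embeddings=True)

# 2. Project embeddings to a lower-dimensional space with UMAP.
reducer = umap.UMAP(n_components=10, n_neighbors=15, min_dist=0.0, metric="cosine")
projected = reducer.fit_transform(embeddings)

# 3. Density-based clustering with DBSCAN; each cluster is a candidate research pattern.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(projected)

patterns = {}
for unit, label in zip(method_units, labels):
    if label == -1:          # DBSCAN marks low-density points as noise
        continue
    patterns.setdefault(label, []).append(unit)

print(f"Found {len(patterns)} candidate research patterns")
```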

The extracted method units and research patterns are organized into a structured knowledge graph, which serves as a persistent methodological memory. This graph is defined as a directed graph G=(V,E), where nodes represent canonicalized method units or meta-methods, and edges encode composition relations between method units observed in prior work. Canonicalization groups semantically similar units into shared abstractions, reducing surface-level variation while preserving core methodological intent. The graph explicitly captures both reusable methodological elements and empirically observed compatibility, enabling the system to reason about methods at a higher level of abstraction than individual papers.
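The knowledge graph G=(V,E) can be prototyped with a standard graph library. The node and edge attributes below are assumptions about how canonicalized method units and composition relations might be stored, intended only to make the structure concrete.

```python
import networkx as nx

# Directed methodological knowledge graph: nodes are canonicalized method units
# (or meta-methods), edges record compositions observed in prior papers.
G = nx.DiGraph()

# Canonicalized method units (surface variants are merged into one node).
G.add_node("contrastive_pretraining", kind="meta_method",
           variants=["contrastive learning", "InfoNCE pretraining"])
G.add_node("low_rank_adaptation", kind="meta_method",
           variants=["LoRA", "low-rank fine-tuning"])
G.add_node("retrieval_augmentation", kind="meta_method",
           variants=["RAG", "retrieval-augmented generation"])

# Composition edges: "u -> v" means v was applied on top of u in at least one paper;
# the weight counts how often that composition was observed.
G.add_edge("contrastive_pretraining", "low_rank_adaptation", weight=12)
G.add_edge("retrieval_augmentation", "low_rank_adaptation", weight=7)

# Compatibility query: which methods have been composed with low-rank adaptation?
compatible = list(G.predecessors("low_rank_adaptation"))
print(compatible)  # ['contrastive_pretraining', 'retrieval_augmentation']
```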
In the online stage, given a user-provided research idea, the system treats method discovery as a graph-based retrieval and composition problem over the knowledge graph. The process begins with user intent processing, where the input is interpreted as a multi-dimensional query that can be methodological, application-driven, or analysis-oriented. The system then performs retrieval and generation by identifying relevant research patterns through a multi-view retrieval formulation. This approach aggregates complementary signals from idea-level, domain-level, and paper-level retrieval views, each contributing a relevance score based on semantic similarity to the input query. The final ranking of research patterns is determined by a weighted sum of these view-specific scores, producing a ranked list of candidate patterns.
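The multi-view retrieval above amounts to a weighted sum of per-view similarity scores, roughly score(p) = Σ_v w_v · sim_v(q, p). The sketch below implements that aggregation with cosine similarity; the view weights and the way each view's embedding is obtained are assumptions, not the authors' reported settings.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Illustrative weights for the idea-, domain-, and paper-level retrieval views.
VIEW_WEIGHTS = {"idea": 0.5, "domain": 0.3, "paper": 0.2}

def rank_patterns(query_views: dict, patterns: list[dict], top_k: int = 5) -> list[tuple]:
    """Score each candidate pattern as a weighted sum of view-specific similarities.

    query_views: {"idea": vec, "domain": vec, "paper": vec} for the user query.
    patterns: each pattern carries one embedding per view plus an identifier.
    """
    ranked = []
    for p in patterns:
        score = sum(
            w * cosine(query_views[view], p["embeddings"][view])
            for view, w in VIEW_WEIGHTS.items()
        )
        ranked.append((p["id"], score))
    ranked.sort(key=lambda x: x[1], reverse=True)
    return ranked[:top_k]

# Tiny usage example with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
q = {v: rng.normal(size=64) for v in VIEW_WEIGHTS}
pool = [{"id": f"pattern_{i}",
         "embeddings": {v: rng.normal(size=64) for v in VIEW_WEIGHTS}}
        for i in range(20)]
print(rank_patterns(q, pool))
```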
Following retrieval, the system initiates a review-guided refinement loop. A large language model acts as a reviewer, evaluating the retrieved research patterns on criteria such as technical soundness, novelty, and conceptual coherence. Based on the feedback, the system iteratively revises the pattern by recombining compatible method units or adjusting the problem formulation. This generate–review–revise loop continues until the pattern meets the reviewer's criteria for novelty, coherence, and feasibility, or until no further improvement is observed. The output is a refined research pattern that serves as a structured blueprint for downstream planning and paper generation.
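The refinement stage can be sketched as a simple generate-review-revise control flow. The reviewer and revision functions below are toy placeholders standing in for the LLM reviewer and the method-unit recombination the authors describe; the scoring heuristic and thresholds are assumptions made only so the loop runs end-to-end.

```python
from dataclasses import dataclass

@dataclass
class Review:
    soundness: float   # technical soundness in [0, 1]
    novelty: float     # novelty in [0, 1]
    coherence: float   # conceptual coherence in [0, 1]
    feedback: str

def review_pattern(pattern: str) -> Review:
    """Placeholder reviewer; a real system would prompt an LLM with rubric criteria."""
    score = min(1.0, len(pattern) / 400)   # toy heuristic: more specific -> higher score
    return Review(soundness=score, novelty=score, coherence=score,
                  feedback="Add concrete method units and an evaluation plan.")

def revise_pattern(pattern: str, feedback: str) -> str:
    """Placeholder revision; a real system would recombine compatible method units."""
    return pattern + f" [revised per feedback: {feedback}]"

def refine(pattern: str, max_rounds: int = 5, threshold: float = 0.8) -> str:
    """Generate-review-revise loop: stop when criteria are met or progress stalls."""
    best_score = -1.0
    for _ in range(max_rounds):
        r = review_pattern(pattern)
        score = min(r.soundness, r.novelty, r.coherence)
        if score >= threshold:
            return pattern                 # all reviewer criteria satisfied
        if score <= best_score:
            return pattern                 # no further improvement observed
        best_score = score
        pattern = revise_pattern(pattern, r.feedback)
    return pattern

print(refine("Study low-rank adaptation for retrieval-augmented models."))
```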
Experiment
- Evaluated Idea2Story on 13K ICLR and NeurIPS papers to assess its ability to extract reusable methodological structures and generate coherent research patterns from ambiguous inputs.
- Analyzed extracted method units to confirm they represent meaningful, reusable abstractions.
- Conducted qualitative case studies using three real user ideas, comparing Idea2Story (powered by GLM-4.7) against a direct LLM baseline that lacks explicit pattern modeling.
- Found that Idea2Story reframes vague intent into dynamic, structurally grounded research blueprints, emphasizing generative refinement and evolving representations.
- Direct LLM outputs remained abstract, relied on conventional formulations, and lacked concrete methodological grounding.
- Independent evaluation by Gemini 3 Pro consistently favored Idea2Story for novelty, methodological substance, and overall research quality.