HyperAI

Similarity Field Theory: A New Foundation for AI and Cognition

A research team has developed a formal, relationship-based mathematical framework that elevates the concept of "similarity" to a foundational level, what ontologists call "firstness." This marks a significant departure from Aristotle's 2,500-year-old ontology, which treats individual entities as the primary building blocks of reality. That view underpinned much of Western thought and indirectly shaped the Industrial Revolution, and it is now being re-examined in the context of the current AI era.

Much like the pre-theoretical stage of the 18th century, when steam engines were built by trial and error before Carnot and Clausius formalized thermodynamics, today's AI development is driven largely by scale and empirical experimentation. Despite ever larger and more capable neural networks, the field still lacks a unifying theoretical framework to explain their behavior, particularly the "black box" nature of these models, their stability, and the nature of intelligence itself.

In response, AI researcher Wu Qisheng and his team revisited the fundamental questions: What is the nature of intelligence? Why should individual entities be considered primary? Their answer is a new mathematical framework, Similarity Field Theory (SFT), which treats similarity as the first principle of being. SFT reframes cognition and intelligence not as properties of isolated objects but as relations within a structured space of similarity.

At the core of SFT is a generative definition of intelligence: "Intelligence is the ability to generate an entity that represents the same concept, given an existing entity that exemplifies it." Mathematically, this is formalized as a sequence of system states Z_p containing a finite set of entities X_K ⊆ X_p, all of which belong to the superlevel set F_α(K) of a concept K.
The intelligence of the system is then the ability of a generative operator G to produce new entities E′ whose similarity to the concept, S(E′, K), meets or exceeds a threshold α. By shifting the focus from statistics to geometry, SFT reframes AI problems as questions about the structure of conceptual spaces.

The team also derived and proved two key theorems. The Incompatibility Theorem gives a formal account of negotiation deadlocks in social interactions: situations in which mutually exclusive claims cannot both hold. The Stability Theorem formalizes the necessity of long-term, consistent beliefs, whether in individual minds or in collective social cognition.

SFT also offers a new perspective on the interpretability of large language models (LLMs). By decomposing a neural network into its underlying "conceptual fibers", the sets of inputs that cause a neuron to activate, the model's internal logic can be read off from the geometry of those fibers. This allows for a more transparent, structure-based account of how LLMs process and generate language.

The team applied SFT to three LLMs (cerebras-gpt-590M, pythia-160m, and gemma-3-270m) on two product categories, using the Bradley–Terry–Luce model to simulate brand rankings and consumer perception. The results showed a Spearman correlation of 0.963 and a mean absolute error of 2.160, suggesting that LLMs have learned a significant portion of the underlying structure of collective human cognition. This opens new possibilities in social science, behavioral economics, and cultural research, enabling large-scale, data-driven virtual experiments rather than reliance on traditional surveys alone.

In practical applications, SFT can help improve LLMs by identifying logical inconsistencies. For instance, the team observed that the prompts "i is more typical than j" and "j is more typical than i" were both judged true, a direct logical contradiction (strictly, a violation of the asymmetry of a strict ordering).
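The formal definition above can be sketched in code. The paper's actual similarity field S is not specified here, so the sketch below assumes a toy instantiation: entities are feature vectors, S(E, K) is cosine similarity against a concept prototype, and the generative operator G, the prototype K, and the threshold α are all illustrative choices, not the authors' construction.

```python
import math

# Toy similarity field: entities are feature vectors, and S(E, K) is the
# cosine similarity between an entity E and a concept prototype K.
def similarity(e, k):
    dot = sum(a * b for a, b in zip(e, k))
    norm = math.sqrt(sum(a * a for a in e)) * math.sqrt(sum(b * b for b in k))
    return dot / norm if norm else 0.0

def in_superlevel_set(e, k, alpha):
    """Membership test for the superlevel set F_alpha(K) = {E : S(E, K) >= alpha}."""
    return similarity(e, k) >= alpha

def is_intelligent_step(generator, exemplar, k, alpha):
    """Generative test: given an exemplar of K, does the operator G
    produce a new entity E' with S(E', K) >= alpha?"""
    e_new = generator(exemplar)
    return in_superlevel_set(e_new, k, alpha)

# Hypothetical concept prototype and exemplar.
K = [1.0, 0.0, 1.0]
exemplar = [0.9, 0.1, 1.1]

# A trivial "generator" that nudges the exemplar toward the prototype.
G = lambda e: [0.5 * (a + b) for a, b in zip(e, K)]

print(is_intelligent_step(G, exemplar, K, alpha=0.95))  # → True
```

The point of the sketch is only the shape of the definition: intelligence is judged by whether generated entities land inside the concept's superlevel set, not by any property of the entities in isolation.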
The Incompatibility Theorem provides a tool to detect and correct such contradictions during training, potentially leading to more coherent and reliable models. The framework also offers a new way to study collective cognition: just as the microscope revolutionized biology, today's LLMs, when interpreted through SFT, can act as virtual laboratories for observing and simulating large-scale human thought and behavior.

The research process itself followed a rigorous path: first, asking deep philosophical questions about the nature of reality and truth; then translating intuitions into precise mathematical language; deriving logical consequences; testing them against real-world phenomena; and finally validating the results with statistical tools such as p-values and confidence intervals. Without this bridge to empirical reality, theoretical work remains abstract and disconnected.

Notably, Similarity Field Theory draws inspiration from classical Eastern texts, reflecting Wu Qisheng's belief in the enduring value of ancient wisdom. A former student at the University of Hong Kong, Wu transitioned from academia to industry, worked at major tech companies, and later used his resources to pursue a life aligned with his personal values. He now works part-time, including as a data scientist at Copilot AI, a leading Canadian AI company. His mission includes making Eastern philosophical insights accessible and relevant in the modern scientific and technological world.
