
NTT Researchers Unveil Breakthroughs in AI Accuracy, Security, and Efficiency at ICML 2025


Researchers from NTT Research, Inc. and NTT R&D, divisions of NTT (TYO:9432), presented twelve papers at the forty-second International Conference on Machine Learning (ICML), held July 13-19, 2025, in Vancouver. ICML is a leading global conference dedicated to advancing machine learning, with applications in areas such as machine vision, computational biology, speech recognition, and robotics.

The Physics of Artificial Intelligence (PAI) Group from NTT Research presented three papers:

- "Representation Shattering in Transformers: A Synthetic Study with Knowledge Editing" - Investigates why Knowledge Editing (KE) algorithms, which alter the weights of large language models (LLMs) to correct inaccuracies, often degrade the models' factual recall and reasoning abilities. The researchers identify "representation shattering" as the culprit: KE inadvertently distorts the representations of entities beyond the one being edited, impairing the model's ability to infer new knowledge accurately.

- "Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models" - Sparse Autoencoders (SAEs) are crucial for machine learning interpretability but suffer from severe instability. The PAI Group introduces Archetypal SAEs and Relaxed Archetypal SAEs, which improve stability and reliability, making SAEs more useful for concept extraction in large vision models.

- "Dynamical Phases of Short-Term Memory Mechanisms in RNNs" - Explores the neural mechanisms behind short-term memory in recurrent neural networks (RNNs), offering new insights and experimentally testable predictions. Understanding these mechanisms can advance systems neuroscience and improve RNN design.
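As background on what an SAE does, the vanilla dictionary-learning setup that Archetypal SAEs build on can be sketched in a few lines: activations are encoded into sparse codes over a learned dictionary, then reconstructed. The dimensions, random weights, and plain ReLU architecture below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal sketch of a vanilla sparse autoencoder (SAE) forward pass, the
# kind of dictionary-learning model the Archetypal SAE work builds on.
rng = np.random.default_rng(0)

d_model, d_dict = 16, 64                     # activation width, dictionary size
W_enc = rng.normal(0, 0.1, (d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(0, 0.1, (d_dict, d_model))

def sae_forward(x):
    """Encode activations into sparse codes, then reconstruct them."""
    z = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU yields non-negative, sparse codes
    x_hat = z @ W_dec                        # reconstruction from the dictionary
    return z, x_hat

x = rng.normal(size=(8, d_model))            # a batch of model activations
z, x_hat = sae_forward(x)
recon_loss = np.mean((x - x_hat) ** 2)       # reconstruction term of the SAE loss
sparsity = np.mean(z == 0)                   # fraction of inactive dictionary units

print(z.shape, x_hat.shape)                  # prints: (8, 64) (8, 16)
```

Training such a model minimizes the reconstruction error plus a sparsity penalty on the codes; the instability the paper targets is that repeated training runs can produce very different dictionaries.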
Nine additional papers from NTT R&D labs in Japan were also accepted:

- "Portable Reward Tuning: Towards Reusable Fine-Tuning Across Different Pretrained Models" - Enables fine-tuning results to be reused across different pretrained models without retraining, significantly reducing costs and enhancing the sustainability of customized generative AI.

- "Plausible Token Amplification for Improving Accuracy of Differentially Private In-Context Learning Based on Implicit Bayesian Inference" - Proposes Plausible Token Amplification (PTA), which mitigates the accuracy degradation caused by differential privacy, a technique for protecting data privacy. This can facilitate the use of LLMs in sensitive sectors such as healthcare, finance, and government.

- "K2IE: Kernel Method-based Kernel Intensity Estimators for Inhomogeneous Poisson Processes" - Presents a more computationally efficient method for analyzing large datasets with Poisson processes, which are essential for forecasting events in both online and offline contexts, such as social media posts and disease outbreaks.

Other accepted papers covered:

- "Positive-Unlabeled AUC Maximization Under Covariate Shift" - Improving the performance of machine learning models on imbalanced datasets.

- "Natural Perturbations for Black-Box Training of Neural Networks by Zeroth-Order Optimization" - A novel approach for training neural networks without access to gradient information.

- "Learning to Generate Projections for Reducing Dimensionality of Heterogeneous Linear Programming Problems" - A method for simplifying complex problems while maintaining solution quality.

- "Guided Zeroth-Order Methods for Stochastic Non-Convex Problems with Decision-Dependent Distributions" - Advanced optimization techniques for stochastic non-convex problems.
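For context on the setting of the K2IE paper, the classical kernel intensity estimator for an inhomogeneous Poisson process simply smooths observed event times with a kernel. The sketch below shows that textbook baseline, not NTT's method; the event data, bandwidth, and grid are all illustrative choices.

```python
import numpy as np

# Classical kernel intensity estimate for an inhomogeneous Poisson process:
# lambda_hat(t) = sum_i K_h(t - t_i), where each observed event spreads one
# unit of mass through a Gaussian kernel of bandwidth h.
def kernel_intensity(event_times, grid, bandwidth):
    diffs = (grid[:, None] - event_times[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (np.sqrt(2 * np.pi) * bandwidth)
    return kernels.sum(axis=1)  # expected events per unit time at each grid point

rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0, 10, 50))   # 50 observed event times on [0, 10]
grid = np.linspace(0, 10, 101)
lam = kernel_intensity(events, grid, bandwidth=0.5)

# Sanity check: integrating the estimated intensity over the window should
# roughly recover the observed event count (minus a little edge loss).
total = lam.sum() * (grid[1] - grid[0])
print(events.size, round(total, 1))
```

The naive estimator above costs time proportional to the number of events times the number of evaluation points, which is what makes computational efficiency the bottleneck on large datasets.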
- "Deep Ridgelet Transform and Unified Universality Theorem for Deep and Shallow Joint-Group-Equivariant Machines" - Theoretical advances on universality for deep and shallow equivariant models.

- "Linear Mode Connectivity between Multiple Models modulo Permutation Symmetries" - A method for merging multiple pretrained models to create a single, more robust and performant model.

NTT's commitment to AI innovation is guided by the goals of ensuring sustainability, respecting human autonomy, and protecting security and privacy. The PAI Group, established in April 2025, focuses on the fundamental aspects of AI, aiming to understand and bridge the gap between biological and artificial intelligence. Hidenori Tanaka, PAI Group Leader, highlighted the significance of fundamental research for achieving positive outcomes in AI development. Industry insiders praised NTT's contributions, noting that advances in model interpretability, cost-effectiveness, and security are crucial for the broader adoption of AI technologies. These innovations not only enhance the technical capabilities of AI but also foster trust and ethical use across sectors.

NTT Research, Inc., which opened in July 2019 in Silicon Valley, conducts basic research and advances technologies to develop high-impact innovations across NTT Group's global business. It houses four labs focusing on quantum information, cryptography, medical and health informatics, and artificial intelligence. NTT's annual R&D investment represents 30% of its profits, underscoring the company's dedication to cutting-edge technological development. For more details on NTT's AI innovations, visit NTT's Innovation in Artificial Intelligence and NTT Research PAI Group. [References to the papers can be found here for more in-depth reading.]
