HyperAI

New Multimodal Neural Network Boosts Efficient Celestial Classification


In modern astronomical research, accurately identifying celestial objects is crucial for understanding the universe's structure, galaxy evolution, and dark matter distribution. Different types of celestial bodies have distinct radiation mechanisms, and astronomers typically rely on spectral observations for classification. However, spectroscopic data are resource-intensive to acquire and difficult to obtain at scale, leaving most celestial objects without spectra. This limitation has long hindered comprehensive studies of the vast number of cosmic entities.

Image observations, by contrast, cover a wider field of view in less time and detect fainter objects than spectroscopy. Photometric data can be used to construct multi-band spectral energy distributions (SEDs), which reveal the radiation mechanisms of celestial objects, and also provide morphological information, adding another dimension to classification efforts.

Relying solely on image morphology or SED features, however, introduces degeneracy. High-redshift quasars and stars, for instance, may both appear as point sources in images, making them difficult to distinguish, and different types of celestial objects can overlap in color space, leading to classification errors.

To address these challenges, a research team led by Dr. Haicheng Feng from the Yunnan Observatories of the Chinese Academy of Sciences, in collaboration with Dr. Rui Li from Zhengzhou University and Professor Nicola R. Napolitano from the University of Naples Federico II, Italy, has developed a novel multimodal neural network model. The model combines morphological and SED information to achieve high-precision automatic classification of stars, quasars, and galaxies. The method has been applied to the fifth data release (DR5) of the European Southern Observatory's (ESO) Kilo-Degree Survey (KiDS), covering 1,350 square degrees of sky.
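The article does not detail the network architecture, but the described idea of fusing a morphology branch (image cutouts) with a photometric branch (multi-band SED features) before a three-way star/quasar/galaxy classifier can be sketched as below. This is a minimal illustration in plain numpy, not the team's actual model; the cutout size (32×32 pixels), the nine-band SED vector, and all layer widths are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(sizes, rng):
    """Create (weight, bias) pairs for a stack of dense layers."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(x, layers):
    """Forward pass with ReLU between layers (none after the last)."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical branch dimensions: flattened 32x32 cutout and a 9-band SED vector.
img_branch = init([32 * 32, 64, 16], rng)  # morphology branch
sed_branch = init([9, 32, 16], rng)        # photometric (SED) branch
head       = init([16 + 16, 3], rng)       # fused head: star / quasar / galaxy

def classify(images, seds):
    f_img = mlp(images, img_branch)
    f_sed = mlp(seds, sed_branch)
    fused = np.concatenate([f_img, f_sed], axis=1)  # feature-level fusion
    return softmax(mlp(fused, head))

# Untrained forward pass on 5 random "objects": one probability row per object.
probs = classify(rng.normal(size=(5, 1024)), rng.normal(size=(5, 9)))
print(probs.shape)  # (5, 3)
```

The key design point is that each modality is embedded separately and the features are concatenated, so the classifier can break degeneracies (e.g. point-like quasars vs. stars) that neither modality resolves alone.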
The team successfully classified over 27 million objects brighter than 23rd magnitude in the r-band. This approach holds significant value for large-scale, multi-band surveys like those planned for the Chinese Space Station Telescope and similar projects, which are expected to generate billions of celestial observations. Compared to traditional classification methods, the deep learning-based multimodal approach promises faster, automated, and more accurate results.

The team plans to further enhance the model's adaptability and apply it to even larger datasets, promoting the transition from data quantity to data intelligence in astronomical data processing. This advancement will help build high-quality astronomical databases and provide a solid foundation for uncovering the mysteries of cosmic evolution.

Recently, the research findings were published in a paper titled "Morpho-photometric Classification of KiDS DR5 Sources Based on Neural Networks: A Comprehensive Star–Quasar–Galaxy Catalog" in The Astrophysical Journal Supplement Series. The study was supported by the National Natural Science Foundation of China, the Ministry of Science and Technology, the Yunnan provincial government, and the Chinese Manned Space Program. Additional resources include a confusion matrix based on 20,000 celestial object samples, illustrating the effectiveness of the classification results.
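A confusion matrix of the kind mentioned above summarizes, for each true class, how predictions are distributed across classes. The sketch below shows how such a matrix and the per-class precision and recall are computed for a three-class star/quasar/galaxy problem; the tiny label arrays are invented for illustration and have no relation to the paper's 20,000-object evaluation sample.

```python
import numpy as np

CLASSES = ["star", "quasar", "galaxy"]

def confusion_matrix(y_true, y_pred, n_classes=3):
    """cm[i, j] counts objects of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)  # accumulate one count per object
    return cm

# Toy labels: 0 = star, 1 = quasar, 2 = galaxy.
y_true = np.array([0, 0, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0])

cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[1 1 0]
#  [0 2 0]
#  [1 0 2]]

# Per-class metrics read directly off the matrix:
recall = cm.diagonal() / cm.sum(axis=1)     # fraction of each true class recovered
precision = cm.diagonal() / cm.sum(axis=0)  # purity of each predicted class
```

Off-diagonal entries localize exactly the degeneracies the article describes, e.g. `cm[2, 0]` counts galaxies misclassified as stars.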
