
Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman

Jiarui Feng, Lecheng Kong, Hao Liu, Dacheng Tao, Fuhai Li, Muhan Zhang, Yixin Chen

Abstract

Message passing neural networks (MPNNs) have emerged as the most popular framework of graph neural networks (GNNs) in recent years. However, their expressive power is limited by the 1-dimensional Weisfeiler-Lehman (1-WL) test. Some works are inspired by k-WL/FWL (Folklore WL) and design the corresponding neural versions. Despite the high expressive power, there are serious limitations in this line of research. In particular, (1) k-WL/FWL requires at least O(n^k) space complexity, which is impractical for large graphs even when k = 3; (2) the design space of k-WL/FWL is rigid, with the only adjustable hyper-parameter being k. To tackle the first limitation, we propose an extension, (k,t)-FWL. We theoretically prove that even if we fix the space complexity to O(n^k) (for any k ≥ 2) in (k,t)-FWL, we can construct an expressiveness hierarchy up to solving the graph isomorphism problem. To tackle the second problem, we propose k-FWL+, which considers any equivariant set as neighbors instead of all nodes, thereby greatly expanding the design space of k-FWL. Combining these two modifications results in a flexible and powerful framework, (k,t)-FWL+. We demonstrate that (k,t)-FWL+ can implement most existing models with matching expressiveness. We then introduce an instance of (k,t)-FWL+ called Neighborhood^2-FWL (N^2-FWL), which is both practically and theoretically sound. We prove that N^2-FWL is no less powerful than 3-WL and can encode many substructures while requiring only O(n^2) space. Finally, we design its neural version, named N^2-GNN, and evaluate its performance on various tasks. N^2-GNN achieves record-breaking results on ZINC-Subset (0.059), outperforming the previous SOTA result by 10.6%. Moreover, N^2-GNN achieves new SOTA results on the BREC dataset (71.8%) among all existing highly expressive GNN methods.
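
For readers unfamiliar with the folklore variant, the sketch below illustrates the classical 2-FWL color refinement that the abstract's O(n^2)-space discussion refers to. It is an illustrative reimplementation of the textbook algorithm under our own conventions (the function name two_fwl_colors and the edge-set input format are ours), not the paper's (k,t)-FWL+ or N^2-GNN code. The key step is the aggregation over all nodes w, which is precisely the global neighborhood that k-FWL+ generalizes to arbitrary equivariant sets.

```python
from itertools import product

def two_fwl_colors(n, edges, num_iters=None):
    """Color refinement on ordered node pairs via classical 2-FWL.

    Illustrative sketch only: the textbook folklore-WL update the abstract
    builds on, not the paper's (k,t)-FWL+ implementation. `edges` is a set
    of (u, v) pairs of an undirected graph on nodes 0..n-1.
    """
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}

    # Initial color of each ordered pair: its atomic type.
    def atomic(u, v):
        if u == v:
            return "loop"
        return "edge" if (u, v) in adj else "non-edge"

    colors = {p: atomic(*p) for p in product(range(n), repeat=2)}

    for _ in range(num_iters or n * n):
        new_colors = {}
        for u, v in colors:
            # The 2-FWL update aggregates over ALL nodes w -- the global
            # neighborhood that costs O(n^2) memory and O(n^3) time per
            # iteration, and that k-FWL+ replaces with a flexible
            # equivariant set of neighbors.
            ms = sorted((colors[(w, v)], colors[(u, w)]) for w in range(n))
            # Python's hash is stable within one run, so colors remain
            # comparable across graphs refined in the same process.
            new_colors[(u, v)] = hash((colors[(u, v)], tuple(ms)))
        if len(set(new_colors.values())) == len(set(colors.values())):
            colors = new_colors
            break  # partition stopped refining: stable coloring reached
        colors = new_colors
    return colors

# Usage: 2-FWL separates C_6 from two disjoint triangles, which 1-WL cannot.
c6 = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)}
two_triangles = {(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)}
hist = lambda g: sorted(two_fwl_colors(6, g).values())
print(hist(c6) == hist(two_triangles))  # False: the color histograms differ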

