Graph Neural Networks (GNNs) and LLMs are colliding in exciting ways. 💥
This survey introduces a novel taxonomy for categorizing existing methods that combine LLMs and GNNs. 🦜🌐
The authors classify methods by how the interaction between LLMs and graph learning components is structured.
This covers the order in which the components operate, how information flows between them, and how training and inference are designed.
This analysis identified four distinct architectural approaches ⚙️:
1️⃣ GNNs as Prefix:
▪️ GNNs first process graph data to create structure-aware tokens for LLMs to use during inference.
▪️ This approach leverages GNNs' ability to capture complex relationships, providing a structure-aware foundation for LLMs to build upon (first sketch after this list).
2️⃣ LLMs as Prefix:
▪️ LLMs process graph data alongside textual information to generate node embeddings that feed into GNN training.
▪️ This method leverages LLMs' language capabilities to give GNNs richer input features (second sketch below).
3️⃣ LLMs-Graphs Integration:
▪️ This approach involves a deeper integration of LLMs and GNNs through fusion training, GNN alignment, and LLM-powered graph agents (the third sketch below shows an alignment-style objective).
4️⃣ LLMs-Only:
▪️ This method converts graph-structured data into LLM-friendly sequences; some approaches incorporate multi-modal tokens so the LLM can handle graph data directly (last sketch below).
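To make these patterns concrete, here are minimal PyTorch sketches. Everything below (class names, dimensions, toy aggregation, prompt templates) is an illustrative assumption, not code from the survey. First, GNNs as Prefix: a small GNN encodes the graph, and a linear projector maps node embeddings into the LLM's token-embedding space so they can be prepended as structure-aware prefix tokens.

```python
import torch
import torch.nn as nn

class GNNPrefixEncoder(nn.Module):
    """Toy 'GNNs as Prefix' encoder (hypothetical names and shapes)."""
    def __init__(self, node_dim=64, llm_dim=768, n_prefix=8):
        super().__init__()
        self.msg = nn.Linear(node_dim, node_dim)     # one message-passing step
        self.project = nn.Linear(node_dim, llm_dim)  # into LLM embedding space
        self.n_prefix = n_prefix

    def forward(self, x, adj):
        # Mean-aggregate neighbor features, then project the first
        # n_prefix node embeddings into "soft tokens" for the LLM.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(self.msg((adj @ x) / deg))    # [N, node_dim]
        return self.project(h[: self.n_prefix])      # [n_prefix, llm_dim]

# Prefix tokens are concatenated in front of ordinary text-token embeddings.
x = torch.randn(10, 64)                       # toy node features
adj = (torch.rand(10, 10) > 0.7).float()      # toy adjacency matrix
prefix = GNNPrefixEncoder()(x, adj)           # [8, 768]
text_embeds = torch.randn(20, 768)            # stand-in for LLM token embeds
llm_input = torch.cat([prefix, text_embeds])  # one sequence fed to the LLM
```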
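LLMs as Prefix runs the pipeline the other way around: the LLM embeds each node's text first, and those embeddings become the input features the GNN trains on. `fake_llm_embed` is a hypothetical stand-in for pooling hidden states from a real frozen model.

```python
import torch

def fake_llm_embed(texts, dim=64, seed=0):
    # Stand-in for embedding node text with a frozen LLM; a real pipeline
    # would pool hidden states from an actual model instead.
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(len(texts), dim, generator=gen)

node_texts = ["paper on GNNs", "paper on LLMs", "survey combining both"]
x = fake_llm_embed(node_texts)              # [3, 64] LLM-derived node features
adj = torch.tensor([[0., 1., 1.],
                    [1., 0., 1.],
                    [1., 1., 0.]])          # toy fully connected triangle
deg = adj.sum(dim=1, keepdim=True)
h = torch.relu((adj @ x) / deg)             # one GNN step over LLM features
```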
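For LLMs-Graphs Integration, the GNN-alignment flavor can be sketched as a contrastive objective: matched (node embedding, text embedding) pairs are pulled together, InfoNCE-style. The batch size and temperature here are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

gnn_h = F.normalize(torch.randn(8, 128), dim=-1)  # GNN node embeddings
txt_h = F.normalize(torch.randn(8, 128), dim=-1)  # LLM text embeddings
logits = gnn_h @ txt_h.T / 0.07                   # pairwise cosine similarities
labels = torch.arange(8)                          # row i matches column i
loss = F.cross_entropy(logits, labels)            # alignment loss to minimize
```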
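Finally, LLMs-Only reduces to serialization: flatten nodes and edges into a sequence the LLM can read directly. The template below is a made-up example, not a format from the survey.

```python
def graph_to_prompt(nodes, edges, question):
    # Flatten a small graph into an LLM-friendly sequence of lines.
    lines = [f"Node {i}: {text}" for i, text in nodes.items()]
    lines += [f"Edge: node {u} -> node {v}" for u, v in edges]
    lines.append(f"Question: {question}")
    return "\n".join(lines)

nodes = {0: "GNN survey", 1: "LLM survey", 2: "paper citing both"}
edges = [(2, 0), (2, 1)]
print(graph_to_prompt(nodes, edges, "Which papers does node 2 cite?"))
```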
This work goes beyond traditional taxonomies that only consider the LLM's role within the system. It really digs into how these integrations work in practice 💡.
The potential for LLMs to overcome the limitations of GNNs is super exciting—and we’re just starting to see what’s possible 🚀.