[7] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks.
In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò
Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Pro-
cessing Systems 31: Annual Conference on Neural Information Processing Systems,
NeurIPS, pages 5171–5181, 2018.
[8] Anton Tsitsulin, John Palowitch, Bryan Perozzi, and Emmanuel Müller. Graph
clustering with graph neural networks. CoRR, abs/2006.16904, 2020.
[9] Shiwen Wu, Fei Sun, Wentao Zhang, Xu Xie, and Bin Cui. Graph neural networks
in recommender systems: A survey. ACM Comput. Surv., 55(5), 2022.
[10] Daixin Wang, Yuan Qi, Jianbin Lin, Peng Cui, Quanhui Jia, Zhen Wang, Yanming
Fang, Quan Yu, Jun Zhou, and Shuang Yang. A semi-supervised graph attentive
network for financial fraud detection. In 2019 IEEE International Conference on
Data Mining, ICDM, pages 598–607, 2019.
[11] Kehang Han, Balaji Lakshminarayanan, and Jeremiah Liu. Reliable graph neural
networks for drug discovery under distributional shift, 2021.
[12] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen
Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for
machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020.
[13] Arpandeep Khatua, Vikram Sharma Mailthody, Bhagyashree Taleka, Tengfei Ma,
Xiang Song, and Wen-mei Hwu. IGB: Addressing the gaps in labeling, features,
heterogeneity, and size of public graph datasets for deep learning research. In
Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and
Data Mining, 2023.
[15] Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan
Gan, Zheng Zhang, and George Karypis. DistDGL: Distributed graph neural
network training for billion-scale graphs, 2021.
[16] Zhenkun Cai, Qihui Zhou, Xiao Yan, Da Zheng, Xiang Song, Chenguang Zheng,
James Cheng, and George Karypis. DSP: Efficient GNN training with multiple
GPUs. In Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles
and Practice of Parallel Programming, pages 392–404, 2023.
[17] Swapnil Gandhi and Anand Padmanabha Iyer. P3: Distributed deep graph
learning at scale. In 15th USENIX Symposium on Operating Systems Design and
Implementation (OSDI 21), pages 551–568, 2021.
[18] Yeonhong Park, Sunhong Min, and Jae W. Lee. Ginex: SSD-enabled billion-scale
graph neural network training on a single machine via provably optimal in-
memory caching. Proc. VLDB Endow., 15(11):2626–2639, 2022.
[19] Jeongmin Brian Park, Vikram Sharma Mailthody, Zaid Qureshi, and Wen-mei
Hwu. Accelerating sampling and aggregation operations in GNN frameworks
with GPU-initiated direct storage accesses, 2024.
[20] Roger Waleffe, Jason Mohoney, Theodoros Rekatsinas, and Shivaram Venkataraman.
MariusGNN: Resource-efficient out-of-core training of graph neural networks.
In Proceedings of the Eighteenth European Conference on Computer Systems,
pages 144–161, 2023.
[21] Jie Sun, Mo Sun, Zheng Zhang, Jun Xie, Zuocheng Shi, Zihan Yang, Jie Zhang,
Fei Wu, and Zeke Wang. Helios: An efficient out-of-core GNN training system
on terabyte-scale graphs with in-memory performance. arXiv preprint
arXiv:2310.00837, 2023.
[22] Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing
Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin,
Junbo Zhao, Jinyang Li, Alexander J Smola, and Zheng Zhang. Deep Graph
Library: Towards Efficient and Scalable Deep Learning on Graphs. In Proceedings
of the ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
[23] William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation
learning on large graphs. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio,
Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett,
editors, Advances in Neural Information Processing Systems 30: Annual Conference
on Neural Information Processing Systems, NeurIPS, pages 1024–1034, 2017.
[24] Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, and Quanquan
Gu. Layer-dependent importance sampling for training deep and large graph
convolutional networks. In Advances in Neural Information Processing Systems
32: Annual Conference on Neural Information Processing Systems, NeurIPS, pages
11247–11256, 2019.
[25] Wenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive sampling
towards fast graph representation learning. In Proceedings of the 32nd International
Conference on Neural Information Processing Systems, NeurIPS'18, pages
4563–4572, 2018.
[26] Jianfei Chen, Jun Zhu, and Le Song. Stochastic training of graph convolutional
networks with variance reduction. In Proceedings of the 35th International Con-
ference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden,
July 10-15, 2018, volume 80, pages 941–949, 2018.
[27] Minji Yoon, Théophile Gervet, Baoxu Shi, Sufeng Niu, Qi He, and Jaewon Yang.
Performance-adaptive sampling strategy towards fast and accurate graph neural
networks. In The 27th ACM SIGKDD Conference on Knowledge Discovery and
Data Mining, pages 2046–2056, 2021.
[28] Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Vik-
tor K. Prasanna. Accurate, efficient and scalable graph embedding. In IEEE
International Parallel and Distributed Processing Symposium, IPDPS, pages 462–
471, 2019.
[29] Tianyi Zhang, Aditya Desai, Gaurav Gupta, and Anshumali Shrivastava.
HashOrder: Accelerating graph processing through hashing-based reordering,
2024.
[30] Hao Wei, Jeffrey Xu Yu, Can Lu, and Xuemin Lin. Speedup graph processing by
graph ordering. In Proceedings of the 2016 International Conference on Management
of Data, pages 1813–1828, 2016.
[31] Haitian Jiang, Renjie Liu, Xiao Yan, Zhenkun Cai, Minjie Wang, and David Wipf.
MuseGNN: Interpretable and convergent graph neural network layers at scale.
arXiv preprint arXiv:2310.12457, 2023.
[34] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with
PyTorch Geometric. CoRR, abs/1903.02428, 2019.
[36] Jason Mohoney, Roger Waleffe, Henry Xu, Theodoros Rekatsinas, and Shivaram
Venkataraman. Marius: Learning massive graph embeddings on a single machine.
In 15th USENIX Symposium on Operating Systems Design and Implementation
(OSDI), 2021.
[37] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory
Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al.
PyTorch: An imperative style, high-performance deep learning library. Advances
in Neural Information Processing Systems, 32, 2019.
[42] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro
Liò, and Yoshua Bengio. Graph Attention Networks. In International Conference
on Learning Representations (ICLR), 2018.
[49] Yuke Wang, Boyuan Feng, Gushu Li, Shuangchen Li, Lei Deng, Yuan Xie, and
Yufei Ding. GNNAdvisor: An adaptive and efficient runtime system for GNN
acceleration on GPUs. In 15th USENIX Symposium on Operating Systems Design
and Implementation (OSDI 21), pages 515–531, 2021.
[50] Zhiqiang Xie, Minjie Wang, Zihao Ye, Zheng Zhang, and Rui Fan. Graphiler: Opti-
mizing graph neural networks with message passing data flow graph. Proceedings
of Machine Learning and Systems, 4:515–528, 2022.
[51] Yuke Wang, Boyuan Feng, Zheng Wang, Guyue Huang, and Yufei Ding. TC-GNN:
Bridging sparse GNN computation and dense tensor cores on GPUs. In USENIX
Annual Technical Conference (USENIX ATC), pages 149–164, 2023.
[52] Lingxiao Ma, Zhi Yang, Youshan Miao, Jilong Xue, Ming Wu, Lidong Zhou, and
Yafei Dai. NeuGraph: Parallel deep neural network computation on large graphs.
In Proceedings of the 2019 USENIX Annual Technical Conference (USENIX ATC),
pages 443–458, 2019.
[53] Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, and Alex Aiken. Improving the
accuracy, scalability, and performance of graph neural networks with ROC. In
Proceedings of the Machine Learning and Systems (MLSys), pages 187–198, 2020.
[54] Cheng Wan, Youjie Li, Cameron R Wolfe, Anastasios Kyrillidis, Nam Sung Kim,
and Yingyan Lin. PipeGCN: Efficient full-graph training of graph convolutional
networks with pipelined feature communication. arXiv preprint arXiv:2203.10428,
2022.
[55] Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, and Yingyan Lin. BNS-GCN: Efficient
full-graph training of graph convolutional networks with partition-parallelism
and random boundary node sampling. Proceedings of Machine Learning and
Systems, 4:673–693, 2022.
[56] Zhenkun Cai, Xiao Yan, Yidi Wu, Kaihao Ma, James Cheng, and Fan Yu. DGCL:
An efficient communication library for distributed GNN training. In Proceedings
of the Sixteenth European Conference on Computer Systems, pages 130–144, 2021.
[57] Zeyuan Tan, Xiulong Yuan, Congjie He, Man-Kit Sit, Guo Li, Xiaoze Liu, Baole Ai,
Kai Zeng, Peter Pietzuch, and Luo Mai. Quiver: Supporting GPUs for low-latency,
high-throughput GNN serving with workload awareness, 2023.