Distributed Training of Graph Neural Networks in Edge–Cloud Environments

Supervisor: Haleh Dizaji
Author: N/A

Abstract

Graph Neural Networks (GNNs) are increasingly used for learning on graph-structured data, but their training is typically performed in centralized cloud environments. In edge–cloud systems, however, graph data and computation are naturally distributed across devices, which motivates distributed training approaches.

This thesis explores distributed training of Graph Neural Networks in edge–cloud environments, focusing on how graph data and training computation can be partitioned across multiple devices while managing communication and resource constraints. The work aims to design and evaluate a distributed training setup using a static graph and a fixed GNN architecture, and to analyze how different design choices affect training efficiency, communication overhead, and model accuracy. The study seeks to provide practical insights into the feasibility and limitations of distributed GNN training in resource-constrained edge environments.