Graphs are powerful and versatile data structures for modeling many real-world problems, and learning on graph-based data is attracting growing interest in both academia and industry. Recently, many models have been introduced for learning on graphs. Among these, Graph Convolutional Networks (GCNs) and their variants have received significant attention and have become the de facto methods for learning on graphs. However, node dependencies in graphs lead to "neighborhood explosion", making it impractical to train GCNs on large graphs on a GPU. To address this problem, sampling-based methods have been introduced that couple with stochastic gradient descent for training GCNs.
While effective in alleviating the neighborhood explosion, these methods incur computational overhead in preprocessing and loading new samples on heterogeneous (CPU-GPU) systems, caused by bandwidth and memory bottlenecks, which significantly degrades sampling performance. By decoupling the frequency of sampling from the sampling strategy, we propose LazyGCN, a general yet effective framework that can be integrated with any sampling strategy to substantially improve training time. The idea behind LazyGCN is to perform sampling periodically and effectively recycle the sampled nodes to mitigate data preparation overhead. We theoretically analyze the proposed algorithm and provide corroborating empirical evidence on large real-world graphs, demonstrating that the proposed scheme significantly reduces the number of sampling steps and yields superior speedup without compromising accuracy.
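To make the recycling idea concrete, below is a minimal, hedged sketch of a training loop that samples only every few iterations and reuses the cached subgraph in between; this is not the authors' implementation, and `sampler`, `model`, `loss_fn`, and the loop parameters are hypothetical placeholders standing in for a PyTorch-style pipeline.

```python
def lazy_train(model, optimizer, loss_fn, sampler, num_iters, recycle_period):
    """Illustrative sketch of periodic sampling with recycling (assumptions:
    `sampler()` is an expensive CPU-side routine returning a subgraph, its
    node features, and labels already transferred to the GPU)."""
    for step in range(num_iters):
        # Trigger the costly sampling / CPU-to-GPU transfer pipeline only
        # once every `recycle_period` iterations ...
        if step % recycle_period == 0:
            subgraph, features, labels = sampler()
        # ... and recycle the cached subgraph for the remaining iterations,
        # so the GPU does not sit idle waiting for new samples.
        optimizer.zero_grad()
        loss = loss_fn(model(subgraph, features), labels)
        loss.backward()
        optimizer.step()
```

In this sketch the sampling frequency (`recycle_period`) is controlled independently of how the subgraph is drawn, reflecting the decoupling of "when to sample" from "how to sample" described above.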