Presentation
Applying Lossy Compression Techniques to GNN Training
Description
Graph neural networks (GNNs) are a state-of-the-art machine learning model for processing graph-structured data. The growing complexity of GNNs and the size of real-world graphs have increased the memory requirements of GNN training, while popular training platforms such as GPUs offer memory capacities of only tens of GB.
In this work, we study scientific floating-point lossy compressors applied to GNN training memory reduction. We develop a framework for GNN activation lossy compression, analyze lossy compression and other data reduction techniques, and explore methods to leverage GNN data features to improve compression. This work is ongoing and will encompass more compression optimizations in the future.
The poster session will provide an overview of GNN training and its opportunities for compression, followed by an analysis of GNN performance under cuSZp (a scientific floating-point lossy compressor), quantization, and reduced precision, and lastly a preliminary exploration of leveraging GNN attributes for compression with top-k methods.
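To illustrate the top-k idea mentioned above, the sketch below keeps only the k largest-magnitude entries in each row of an activation matrix and stores their values and indices, reconstructing a sparse approximation on decompression. This is a minimal, hypothetical NumPy illustration, not the poster's actual framework; the function names and per-row layout are assumptions.

```python
import numpy as np

def topk_compress(activations, k):
    """Keep the k largest-magnitude values per row (hypothetical sketch).

    Returns the kept values, their column indices, and the original shape,
    which together form the compressed representation.
    """
    # argpartition finds the k largest |values| per row without a full sort
    idx = np.argpartition(np.abs(activations), -k, axis=1)[:, -k:]
    vals = np.take_along_axis(activations, idx, axis=1)
    return vals, idx, activations.shape

def topk_decompress(vals, idx, shape):
    """Scatter the kept values back into a zero-filled tensor."""
    out = np.zeros(shape, dtype=vals.dtype)
    np.put_along_axis(out, idx, vals, axis=1)
    return out
```

Storing k values and k indices per row reduces memory whenever k is well below the feature dimension, at the cost of a lossy, sparsified reconstruction.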

Event Type
Research and ACM SRC Posters
Time
Thursday, 20 November 2025, 8:00am - 5:00pm CST
Location
Second Floor Atrium
