Speaker
Description
Convolutional neural networks (CNNs) have seen extensive use in scientific data analysis, including in neutrino telescope experiments. However, data from these experiments pose several challenges for CNNs: irregular detector geometry, extreme sparsity, and high dimensionality. As a result, traditional CNNs are highly inefficient on neutrino telescope data and require significant pre-processing that incurs information loss. We propose sparse submanifold convolutions as a solution to these issues. We aim to show that CNNs built from sparse submanifold convolutions achieve the competitive performance expected of a machine learning algorithm while running orders of magnitude faster, on both GPU and CPU, than a traditional CNN. This speedup makes such networks capable of handling the trigger-level event rate of experiments such as IceCube.
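To illustrate the key idea behind sparse submanifold convolutions, here is a minimal 2-D sketch: the convolution is evaluated only at active (hit) sites and gathers only from active neighbors, so the output keeps the input's sparsity pattern instead of "dilating" it the way a dense convolution would. This is an illustrative assumption-laden toy (single channel, 3x3 kernel, stride 1), not the talk's implementation; production libraries such as MinkowskiEngine or torchsparse use hashed coordinate maps and GPU kernels.

```python
import numpy as np

def submanifold_conv2d(coords, feats, kernel):
    """Toy 2-D submanifold convolution (hypothetical helper, not a library API).

    coords: list of (x, y) integer coordinates of active sites
    feats:  list of scalar features, one per active site
    kernel: (3, 3) array of weights

    Output is defined only at the active sites, so the sparsity
    pattern is preserved -- the defining 'submanifold' property.
    """
    lookup = {c: f for c, f in zip(coords, feats)}
    out = np.zeros(len(feats))
    for i, (x, y) in enumerate(coords):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nb = lookup.get((x + dx, y + dy))
                if nb is not None:  # inactive sites contribute nothing
                    out[i] += kernel[dx + 1, dy + 1] * nb
    return out
```

Because work scales with the number of active sites rather than the full voxel grid, this is what makes the approach efficient on very sparse detector data.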