Description
The Southern Wide-field Gamma-ray Observatory (SWGO) is a proposed next-generation water-Cherenkov gamma-ray observatory in the Southern Hemisphere, making it complementary to other water-Cherenkov detectors such as HAWC (Mexico) and LHAASO (China), both located in the Northern Hemisphere.
One of the primary challenges of the water-Cherenkov technique is the effective discrimination of gamma-ray signals from the prevalent hadronic background.
Several techniques have been developed in the past, primarily relying on human-designed discrimination variables.
In other scientific areas, recent advances in deep learning have shown that end-to-end learning approaches, which operate on raw data without handcrafted features, frequently improve results. One such architecture is the Transformer.
Its self-attention mechanism, initially developed for natural language processing tasks, offers a promising approach for efficiently handling the complex, variable-sized event data of a ground-based observatory with high detector multiplicities.
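To illustrate how self-attention accommodates events with differing numbers of hits, the following is a minimal sketch in PyTorch, not the analysis code of this work; the per-hit features (position, charge, time), event sizes, and network dimensions are invented for illustration, and padding with a key-padding mask stands in for whatever variable-length handling the actual study uses.

```python
# Minimal sketch: one self-attention block over variable-length hit sequences.
# Per-hit features and multiplicities below are illustrative assumptions,
# not the SWGO data format.
import torch
import torch.nn as nn

class HitAttentionBlock(nn.Module):
    """One Transformer encoder block acting on the set of hits of one event."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, pad_mask):
        # pad_mask is True where a slot is padding, so attention ignores it.
        a, _ = self.attn(x, x, x, key_padding_mask=pad_mask)
        x = self.norm1(x + a)
        return self.norm2(x + self.ff(x))

# Two events with different hit multiplicities, padded to a common length.
feat_dim, dim = 4, 64                       # per-hit features: e.g. x, y, charge, time
hits = [torch.randn(120, feat_dim), torch.randn(37, feat_dim)]
max_len = max(h.shape[0] for h in hits)
batch = torch.zeros(len(hits), max_len, feat_dim)
pad_mask = torch.ones(len(hits), max_len, dtype=torch.bool)
for i, h in enumerate(hits):
    batch[i, : h.shape[0]] = h
    pad_mask[i, : h.shape[0]] = False

embed = nn.Linear(feat_dim, dim)            # per-hit embedding
block = HitAttentionBlock(dim)
out = block(embed(batch), pad_mask)         # shape: (2, max_len, dim)

# A masked mean over the hits gives a fixed-size event summary that a
# gamma/hadron classification head can act on.
valid = (~pad_mask).unsqueeze(-1).float()
event_repr = (out * valid).sum(1) / valid.sum(1)
logits = nn.Linear(dim, 2)(event_repr)      # gamma vs. hadron scores
print(logits.shape)                         # torch.Size([2, 2])
```

Because attention weights are computed between all pairs of hits and padded slots are masked out, the same block processes events of any multiplicity without resizing or binning the data.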
In this work, this approach will be investigated specifically for gamma-hadron separation in SWGO. Its performance will be evaluated, and in addition the inner workings of the Transformer, i.e. its individual building blocks, will be explained.