Inspired by biological neural networks, deep spiking neural networks (SNNs) offer lower energy consumption and faster processing than deep neural networks (DNNs). However, SNNs are still under development and typically achieve lower learning performance. Since most deep SNN research adopts data augmentation techniques developed for DNNs, we examine whether these augmentations are also effective in deep SNNs. Furthermore, we explore how varying the hyperparameters of Mixup and CutMix affects their efficacy, in order to identify optimal settings for these techniques.
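For reference, both augmentations share a single hyperparameter α that parameterizes a Beta(α, α) distribution for the mixing coefficient λ. The sketch below illustrates the standard formulations in NumPy; the function names and implementation details are illustrative and are not taken from this paper's code.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Mixup: blend two samples and their (one-hot) labels.

    lam ~ Beta(alpha, alpha); larger alpha pushes lam toward 0.5
    (stronger mixing), smaller alpha keeps samples nearly unmixed.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

def cutmix(x1, y1, x2, y2, alpha=1.0, rng=None):
    """CutMix: paste a random patch of x2 into x1.

    The patch area fraction is about (1 - lam); labels are mixed
    by the actual fraction of pixels kept from x1.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    h, w = x1.shape[-2:]
    # Patch side lengths chosen so the cut area fraction is ~ (1 - lam).
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = rng.integers(h), rng.integers(w)
    top, bottom = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    left, right = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    x = x1.copy()
    x[..., top:bottom, left:right] = x2[..., top:bottom, left:right]
    # Recompute lam from the clipped patch so labels match the pixels.
    lam_adj = 1.0 - (bottom - top) * (right - left) / (h * w)
    y = lam_adj * y1 + (1.0 - lam_adj) * y2
    return x, y, lam_adj
```

In both cases, tuning α trades off augmentation strength against label noise, which is the hyperparameter axis explored in this work.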