Fig. 2 Illustration of the Attention-Aware layer. L-RA and S-RA denote long-range attention and short-range attention, respectively. In this figure, we only perform a two-level decomposition.
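The caption describes decomposing attention into a short-range (local) and a long-range (strided) component. Below is a minimal sketch of how such a two-level decomposition can be expressed as a pair of boolean attention masks, in the spirit of strided sparse attention; the window size `w` and the helper `sparse_attention_masks` are illustrative assumptions, not the paper's exact formulation.

```python
# A hedged sketch of long-range (L-RA) and short-range (S-RA) attention
# masks via a two-level decomposition. The block/stride construction is
# an assumption for illustration, not the authors' exact method.
import numpy as np

def sparse_attention_masks(n: int, w: int):
    """Return boolean (n, n) masks.

    S-RA: query i attends to keys in the same length-w local block.
    L-RA: query i attends to keys at the same offset in every block
          (a strided, long-range pattern).
    """
    idx = np.arange(n)
    s_ra = (idx[:, None] // w) == (idx[None, :] // w)  # short-range (local)
    l_ra = (idx[:, None] % w) == (idx[None, :] % w)    # long-range (strided)
    return s_ra, l_ra

if __name__ == "__main__":
    s_ra, l_ra = sparse_attention_masks(n=8, w=4)
    # Composing the two levels connects any pair of positions in at most
    # two attention hops, which is what the two-level decomposition buys.
    print(s_ra.astype(int))
    print(l_ra.astype(int))
```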