Fig. 2
From: Sparse self-attention aggregation networks for neural sequence slice interpolation

Illustration of the Attention-Aware layer. L-RA and S-RA denote long-range attention and short-range attention, respectively. In this figure, we perform only a two-level decomposition.
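The caption describes a two-level decomposition of self-attention into short-range (S-RA) and long-range (L-RA) components. Below is a minimal sketch of that idea, not the authors' implementation: S-RA is approximated as attention within local windows and L-RA as attention across strided positions; the window size, stride, and feature dimensions are illustrative assumptions.

```python
# Sketch only: local-window attention (S-RA) followed by strided attention (L-RA)
# over a 1D sequence of slice features. Values below are assumed, not from the paper.
import torch


def short_range_attention(x, window=4):
    """Attend only within non-overlapping local windows (S-RA sketch)."""
    b, n, d = x.shape
    xw = x.view(b, n // window, window, d)                   # split sequence into windows
    attn = torch.softmax(xw @ xw.transpose(-1, -2) / d ** 0.5, dim=-1)
    return (attn @ xw).view(b, n, d)


def long_range_attention(x, stride=4):
    """Attend across positions sampled at a fixed stride (L-RA sketch)."""
    b, n, d = x.shape
    xs = x.view(b, n // stride, stride, d).transpose(1, 2)   # group positions by phase
    attn = torch.softmax(xs @ xs.transpose(-1, -2) / d ** 0.5, dim=-1)
    return (attn @ xs).transpose(1, 2).reshape(b, n, d)


# Two-level decomposition: apply S-RA and L-RA in sequence.
x = torch.randn(2, 16, 32)                                   # (batch, positions, channels)
out = long_range_attention(short_range_attention(x))
print(out.shape)                                             # torch.Size([2, 16, 32])
```

Here each level attends over only a fraction of the sequence, which is the usual motivation for sparse attention: the combined cost grows with the window and stride sizes rather than with the full sequence length.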