Fig. 5 (from "Sparse self-attention aggregation networks for neural sequence slice interpolation"): Results of the proposed model with different loss functions. From left to right: input frame 1, ground truth, result of \(\mathcal {L}_{1}\), result of \(\mathcal {L}_{f}\), result of \(\mathcal {L}_{s}\), input frame 2.