Table 3 Effects of the attention-aware layer (AAL)

From: Sparse self-attention aggregation networks for neural sequence slice interpolation

| Synthesis | cremi_triplet A PSNR | cremi_triplet A SSIM | cremi_triplet B PSNR | cremi_triplet B SSIM | cremi_triplet C PSNR | cremi_triplet C SSIM | cremi_triplet C IE | mouse_triplet PSNR | mouse_triplet SSIM | mouse_triplet IE |
|---|---|---|---|---|---|---|---|---|---|---|
| KEL [9] | 17.59 | 0.4354 | 15.88 | 0.3415 | 16.08 | 0.3506 | 30.82 | 13.46 | 0.1867 | 42.62 |
| SSA [30] | 18.12 | 0.4292 | 16.41 | 0.3425 | 16.33 | 0.3357 | 28.91 | 14.86 | 0.2261 | 35.41 |
| AAL (Ours) | 18.26 | 0.4374 | 16.79 | 0.3712 | 16.46 | 0.3575 | 28.38 | 15.04 | 0.2156 | 34.81 |
  1. Compared with the kernel estimation layer (KEL) [9] and the interlaced sparse self-attention layer (SSA) [30], the proposed attention-aware layer (AAL) presents a clear improvement on both the cremi_triplet and mouse_triplet datasets. Lower IE indicates better performance
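For reference, a minimal sketch of how the three reported metrics could be computed per interpolated slice. The PSNR and SSIM calls use scikit-image's standard implementations; `evaluate_slice` is a hypothetical helper, and treating IE as the mean absolute pixel difference is an assumption based on common usage in interpolation work, not a definition taken from this paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(pred: np.ndarray, target: np.ndarray) -> dict:
    """Score one interpolated slice against its ground truth (8-bit grayscale assumed)."""
    psnr = peak_signal_noise_ratio(target, pred, data_range=255)   # higher is better
    ssim = structural_similarity(target, pred, data_range=255)     # higher is better
    # Assumed IE definition: mean absolute pixel difference; lower is better.
    ie = np.mean(np.abs(pred.astype(np.float64) - target.astype(np.float64)))
    return {"PSNR": psnr, "SSIM": ssim, "IE": ie}
```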