SuperF: Neural Implicit Fields for Multi-Image Super-Resolution
¹University of Agder · ²University of Copenhagen
*Corresponding authors: sander.jyhne@kartverket.no, nila@di.ku.dk
*Code and datasets are coming soon.
Acquiring high-resolution imagery is often hindered by limitations in sensor technology, atmospheric conditions, and cost. Such challenges arise in satellite remote sensing, but also with handheld cameras such as our smartphones. Super-resolution therefore aims to enhance image resolution algorithmically. Since single-image super-resolution requires solving an ill-posed inverse problem, such methods must exploit strong priors, e.g. learned from high-resolution training data, or be constrained by auxiliary data, e.g. a high-resolution guide from another modality. While qualitatively pleasing, such approaches often hallucinate structures that do not match reality.
In contrast, multi-image super-resolution (MISR) aims to improve resolution by constraining the reconstruction with multiple views taken with sub-pixel shifts. We propose SuperF, a test-time optimization approach for MISR that leverages coordinate-based neural networks, also called neural fields. Their ability to represent continuous signals with an implicit neural representation (INR) makes them an ideal fit for the MISR task. The key characteristic of our approach is to share an INR for multiple shifted low-resolution frames and to jointly optimize the frame alignment with the INR. Our approach advances related INR baselines by directly parameterizing the sub-pixel alignment as optimizable affine transformation parameters and by optimizing via a super-sampled coordinate grid that corresponds to the output resolution. Our experiments yield compelling results on simulated bursts of satellite imagery and ground-level images from handheld cameras, with upsampling factors of up to 8. A key advantage of SuperF is that this approach does not rely on any high-resolution training data.
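To make the alignment parameterization concrete, the sketch below shows one way to realize per-frame affine parameters acting on a super-sampled coordinate grid. It is written in PyTorch, and all names (`make_coord_grid`, `affine_warp`, `theta`) are illustrative assumptions rather than the actual SuperF code.

```python
import torch

def make_coord_grid(h, w, scale):
    """Super-sampled coordinate grid in [-1, 1]^2 at the target HR resolution."""
    ys = torch.linspace(-1, 1, h * scale)
    xs = torch.linspace(-1, 1, w * scale)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1).reshape(-1, 2)  # (H*W, 2) with H = h*scale

def affine_warp(coords, theta):
    """Apply one frame's 2x3 affine transform (sub-pixel shift, rotation, scale) to coords."""
    A, t = theta[:, :2], theta[:, 2]  # (2, 2) linear part, (2,) translation
    return coords @ A.T + t

# One learnable 2x3 affine matrix per LR frame, initialized to the identity transform.
num_frames = 8
theta = torch.eye(2, 3).repeat(num_frames, 1, 1).requires_grad_(True)

coords = make_coord_grid(h=64, w=64, scale=4)  # e.g. x4 upsampling of a 64x64 burst
coords_frame0 = affine_warp(coords, theta[0])  # coordinates at which frame 0 queries the INR
```

Because the transform parameters are plain tensors with gradients, they can be optimized jointly with the network weights by any standard optimizer.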
Animation of the MISR setup: multiple jittered low-resolution frames combining into a sharp reconstruction.
We introduce SuperF, a test-time optimization approach that leverages Implicit Neural Representations (INRs) to achieve super-resolution without the need for high-resolution training data. SuperF shares a single INR across multiple low-resolution frames, jointly optimizing the neural network alongside the sub-pixel alignment of each frame. By parameterizing these alignments as affine transformations and optimizing on a super-sampled coordinate grid, our method effectively reconstructs the underlying high-resolution signal. Furthermore, SuperF can estimate uncertainty maps to robustly handle noise, such as ignoring pixels obscured by clouds in satellite imagery.
Figure 1: Overview of the SuperF method, showing the shared implicit neural representation, frame-specific alignments, and super-resolved reconstruction.
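For readers who prefer code, here is a minimal, hypothetical sketch of the test-time optimization loop described above, reusing `make_coord_grid` and `affine_warp` from the earlier sketch: a single coordinate MLP is shared across all LR frames, each frame queries it through its own affine-warped super-sampled grid, the prediction is downsampled to the LR resolution, and the reconstruction loss is back-propagated into both the MLP and the alignment parameters. Network size, positional encoding, the average-pooling downsampler, and the optimizer settings are assumptions for illustration, not the paper's exact configuration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class INR(nn.Module):
    """Small coordinate MLP with Fourier features, mapping (x, y) -> RGB."""
    def __init__(self, n_freq=10, hidden=256):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(n_freq) * math.pi
        self.net = nn.Sequential(
            nn.Linear(4 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):                    # coords: (N, 2)
        enc = coords[..., None] * self.freqs      # (N, 2, n_freq)
        enc = torch.cat([enc.sin(), enc.cos()], dim=-1).flatten(-2)
        return self.net(enc)                      # (N, 3)

scale, h, w, num_frames = 4, 64, 64, 8
lr_frames = torch.rand(num_frames, 3, h, w)       # placeholder for the observed LR burst
inr = INR()
theta = torch.eye(2, 3).repeat(num_frames, 1, 1).requires_grad_(True)
opt = torch.optim.Adam([{"params": inr.parameters()}, {"params": [theta]}], lr=1e-3)
coords = make_coord_grid(h, w, scale)             # from the previous sketch

for step in range(2000):                          # "[2k]" optimization steps, as in the table below
    loss = 0.0
    for k in range(num_frames):
        pred = inr(affine_warp(coords, theta[k]))          # shared INR, frame-specific warp
        pred = pred.T.reshape(1, 3, h * scale, w * scale)  # back to an HR image grid
        pred_lr = F.avg_pool2d(pred, scale)                # crude LR image-formation model
        loss = loss + F.mse_loss(pred_lr[0], lr_frames[k])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After optimization, querying the INR once on the unwarped super-sampled grid yields the super-resolved output.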
Real-world imagery is rarely perfect. Satellite observations are frequently obstructed by clouds or shadows, while handheld bursts often contain moving objects or lighting shifts. Standard super-resolution methods can be easily confused by these inconsistencies, resulting in glitchy artifacts in the final image. SuperF does not just reconstruct the image; it also learns to estimate its own confidence via pixel-wise uncertainty maps that identify which parts of an image are reliable and which are noisy. During optimization, the model assigns high uncertainty to outliers—such as a cloud that appears in only one satellite frame—and effectively ignores or strongly down-weights these corrupted pixels. This allows SuperF to extract the consistent, high-quality underlying structure of the scene, even when individual input frames are messy or partially occluded.
Four examples showing LR frames (top row) and the corresponding estimated uncertainty maps (bottom row) for images with occlusions and inconsistencies. The uncertainty maps highlight cloudy or inconsistent pixels, enabling the GNLL loss to downweight these during optimization.
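To illustrate how this down-weighting works, the sketch below implements a standard per-pixel Gaussian negative log-likelihood (GNLL), assuming the INR predicts an extra log-variance channel alongside RGB; it would replace the MSE term in the loop above. The exact parameterization used by SuperF (variance floor, channel layout, reduction) may differ.

```python
import torch

def gnll_loss(pred_rgb, pred_logvar, target, eps=1e-6):
    """Per-pixel Gaussian negative log-likelihood (up to a constant).

    Pixels the shared INR cannot explain consistently across frames (e.g. a cloud
    that appears in a single frame) receive a large predicted variance, which
    down-weights their squared error instead of letting it distort the reconstruction.
    """
    var = pred_logvar.exp() + eps
    return (0.5 * (pred_logvar + (target - pred_rgb) ** 2 / var)).mean()

# Illustrative shapes: (3, H, W) RGB prediction, (1, H, W) log-variance head.
pred_rgb = torch.rand(3, 16, 16, requires_grad=True)
pred_logvar = torch.zeros(1, 16, 16, requires_grad=True)
target = torch.rand(3, 16, 16)
loss = gnll_loss(pred_rgb, pred_logvar, target)
loss.backward()
# torch.nn.functional.gaussian_nll_loss implements a near-identical objective.
```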
SuperF consistently outperforms test-time optimization baselines across multiple datasets, delivering sharper detail and lower perceptual error.
| Method | SatSynthBurst ×2 | SatSynthBurst ×4 | SatSynthBurst ×8 | SyntheticBurst ×2 | SyntheticBurst ×4 | SyntheticBurst ×8 |
|---|---|---|---|---|---|---|
| Bilinear | 34.69 (3.50) | 29.71 (3.64) | 26.62 (3.68) | 27.66 (3.50) | 26.12 (3.72) | 25.44 (3.82) |
| Lafenetre et al. (2023) | 33.46 (3.62) | 27.70 (3.79) | 24.88 (3.71) | 27.02 (3.29) | 26.46 (3.05) | 25.19 (2.97) |
| NIR (Nam et al., 2022) [2k] | 26.26 (3.91) | 24.63 (4.41) | 23.85 (3.79) | 23.62 (4.43) | 22.69 (4.41) | 22.28 (4.40) |
| NIR (Nam et al., 2022) [5k] | 25.65 (5.82) | 24.99 (4.12) | 23.61 (2.97) | 24.46 (4.31) | 23.39 (4.32) | 22.93 (4.33) |
| SuperF MSE (ours) [2k] | 36.73 (1.66) | 32.94 (1.83) | 28.87 (2.32) | 29.38 (3.43) | 27.90 (3.94) | 27.08 (3.97) |
| SuperF GNLL (ours) [2k] | 37.26 (2.30) | 34.03 (2.71) | 29.28 (3.21) | 29.48 (3.76) | 27.47 (4.18) | 26.58 (4.18) |
Results on real satellite imagery from Sentinel-2 time series. These reconstructions were generated using the SuperF demo application.
Each tile loops through the LR burst automatically. Hover (or focus) to freeze on the base LR frame, then press / long-press to show the SuperF SR output. Release to resume the LR sequence.
Results on synthetic data for both satellite images and ground-level bursts.
Upsampling factor ×2: qualitative comparison of SuperF reconstructions against baseline methods.
Try SuperF in your browser: choose a location and time period, and inspect the super-resolved render directly.
@misc{jyhne2025superfneuralimplicitfields,
title={SuperF: Neural Implicit Fields for Multi-Image Super-Resolution},
author={Sander Riisøen Jyhne and Christian Igel and Morten Goodwin and Per-Arne Andersen and Serge Belongie and Nico Lang},
year={2025},
eprint={2512.09115},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.09115},
}
This work was supported in part by the Pioneer Centre for AI (DNRF grant number P1) and by the Global Wetland Center (Novo Nordisk Foundation grant number NNF23OC0081089).