Survey Date: 2026-03-25
To put it bluntly, low-light enhancement grew into a mature, independent field because dark areas usually still contain information. It’s just weak, noisy, and hard to use. Highlight recovery hasn’t reached the same scale because many details hit a physical ceiling during capture and get clipped.
The real scarcity isn’t in all light-related processing. It’s specifically about taking a finished photo and restoring real details to areas that turned pure white. The industry handles highlights, of course, but the mainstream approach isn’t to save them after the fact. Instead, they intervene before information is lost. They use sensor HDR, multi-frame fusion, exposure bracketing, and RAW workflows to keep highlights from dying in the first place.
This is the intuition I want readers to build. Low-light enhancement is like amplifying and denoising a quiet but recorded sound. Highlight recovery is more like trying to reconstruct a waveform that was already flattened. The former is extraction, while the latter is closer to completion.
Before we discuss this, we need to separate several concepts that people often mix up. The difference isn’t just in the name. It’s about the state of the input, how much information remains, and whether the algorithm is estimating or guessing.
Low-Light Image Enhancement deals with photos taken in the dark. The sensor did receive photons, just not enough, resulting in a dim and noisy image. The core operation here is amplifying the weak signal while suppressing noise. The information is there, it just needs to be dug out.
Image Denoising is more general and isn’t limited to low-light scenes, though low-light is where noise is most severe. Random electronic noise from the sensor at high ISO can drown out the useful signal. Denoising algorithms work to separate signal from noise in a statistical sense.
Exposure Correction covers both underexposed and overexposed cases. Since 2021, researchers have started treating “one model for both under and overexposure” as a serious goal. This is a trend worth watching, and I’ll expand on it later.
Single-Image HDR Reconstruction / Inverse Tone Mapping aims to infer a wider dynamic range from a standard image. This usually involves handling missing information in highlights, but the goal isn’t just “fixing overexposure.” It’s “generating an HDR image.” Highlight recovery is just one part of that.
RAW Highlight Recovery is a common feature in RAW processing software. Camera sensors record raw data with 12 to 14 bits of depth, far exceeding the 8 bits in a JPEG. When an area looks pure white in a JPEG, the RAW data might still have some channels with unsaturated information. RAW highlight recovery uses this leftover data. How much it can do depends on how many channels haven’t fully clipped.
Multi-Frame HDR / Bracketing requires multiple photos with different exposures as input. Short-exposure frames keep highlight details, while long-exposure frames keep shadow details. Algorithms fuse them into one high dynamic range image. This is the most common HDR solution in mobile computational photography, but it isn’t single-image highlight recovery. It requires extra information captured at the time of shooting.
Generative Fill / Inpainting is the newest direction. When highlight areas truly have no information left, generative models can “imagine” plausible textures based on the surroundings. They might add clouds to a pure white sky. This can be visually satisfying, but it’s creation, not recovery. The generated content doesn’t correspond to what was actually in the scene.
Looking at these concepts, a clear line emerges. Low-light enhancement, denoising, and RAW highlight recovery deal with “information that exists but is hard to use.” Single-image highlight recovery, especially when all color channels are saturated and clipped, faces “information that is gone.” The former is an estimation problem, while the latter is about inference and generation.
To understand this asymmetry, we have to look at how camera sensors work.
Each pixel on a sensor is basically a photon counter. The more photons it receives during exposure, the higher the charge and the larger the recorded value. This counter has a lower limit (dark current noise, which creates random charge even without light) and an upper limit (full-well capacity, where the charge hits a physical limit and stops increasing).
In low light, there are very few photons, so the ratio of signal to random noise is poor. But the signal is there. If you take multiple identical photos, the noise is random while the signal is fixed. By averaging them, you can gradually raise the signal-to-noise ratio. This is the physical basis for night modes and burst denoising on phones. Even with a single photo, modern deep learning denoising can learn the statistical patterns of natural images to reasonably estimate the signal from the noise. It’s not making something from nothing. The underlying signal was always there.
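The averaging argument is easy to verify numerically. The toy simulation below (a sketch with made-up signal and noise levels, not a model of any real sensor) averages a burst of 64 noisy frames and checks that the signal-to-noise ratio improves by roughly the square root of the frame count:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 5.0            # weak but real underlying signal
noise_sigma = 10.0      # per-frame noise dominates the signal

def snr(frames):
    est = frames.mean(axis=0)        # average the burst
    err = est - signal               # residual noise after averaging
    return signal / err.std()

# 64 independent noisy captures of the same static scene
burst = signal + rng.normal(0.0, noise_sigma, size=(64, 100_000))

snr_1 = snr(burst[:1])   # a single frame
snr_64 = snr(burst)      # the full burst: ~sqrt(64) = 8x better
```

Nothing is created here; averaging merely lets the fixed signal accumulate while the random noise cancels, which is exactly why burst denoising works only when the signal was recorded in the first place.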
Overexposure is completely different. When there are too many photons and the charge hits the full-well capacity, the counter stops at its maximum value. Whether the actual brightness was 1.1 times or 100 times that maximum, the recorded value is the same. All information above the limit is irreversibly clipped. This isn’t a problem of poor signal-to-noise ratio. The information was physically erased. Think of it this way: low light is like a quiet audio recording. You can turn up the volume and use software to clean up the background hiss because the original voice is still there. Overexposure is like a recording where the volume exceeded the microphone’s limit. The tops of the waveforms were flattened. There’s no way to recover those missing parts from that same recording.
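The irreversibility of clipping can be stated in three lines. In this sketch (a hypothetical 12-bit sensor with a full-well value of 4095), two very different scene intensities collapse to the same recorded value, and no function of the recorded data can distinguish them afterwards:

```python
import numpy as np

full_well = 4095.0                            # 12-bit sensor maximum
scene = np.array([3000.0, 4500.0, 40950.0])   # true photon counts

recorded = np.minimum(scene, full_well)
# recorded is [3000., 4095., 4095.]: a value 1.1x over the limit and
# one 10x over the limit are now identical — the difference is gone.
```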
This physical reality dictates the fundamental differences in algorithms, datasets, evaluation methods, and product paths for these two directions.
Low-light enhancement is a highly mature research direction in academia. It has authoritative surveys, like the one by Li et al. in IEEE TPAMI 2022 titled Low-Light Image and Video Enhancement Using Deep Learning: A Survey. It has standard test sets like SID and LOL, long-term resources like awesome-low-light-image-enhancement, and the NTIRE challenges that provide public benchmarks. In other words, this field has finished building the infrastructure a mature domain needs.
The milestones are clear. In 2018, Chen et al.’s Learning to See in the Dark brought RAW-domain low-light recovery to the forefront. In 2020, Guo et al.’s Zero-DCE made zero-reference methods popular. Since 2023, Transformers and diffusion models have naturally integrated into this direction. It’s no longer a collection of scattered problems but a well-established territory.
In contrast, overexposure correction and highlight recovery aren’t ignored, but they’ve never formed a unified, stable field that everyone discusses under the same name. Highlight research is scattered across several neighboring areas. Single-image HDR reconstruction is the most typical example. In Liu et al.’s SingleHDR (CVPR 2020), the core step for handling missing highlights is very close to what we think of as highlight recovery. However, the main task is still defined as HDR reconstruction, not highlight recovery. The Inverse Tone Mapping Challenge at AIM 2025 shows this line of research continues, but it’s long been tucked under larger task names like HDR and inverse tone mapping.
Real change started after 2021. Afifi et al.’s Learning Multi-Scale Photo Exposure Correction (CVPR 2021) began putting overexposure and underexposure into the same framework. With the CLIER benchmark at ICCV 2025, extreme overexposure was finally brought into a clear evaluation system. It’s not a major field yet, but it’s moving from the fringes to becoming a more defined research object.
Even so, low-light enhancement still has an overwhelming advantage in the number of papers, benchmarks, community resources, and competition tracks. The overexposure direction still doesn’t have its own “awesome list,” maintained community resources, or an independent NTIRE track.
The asymmetry in industry is even more obvious than in academia. Products don’t organize themselves by paper categories. They put resources into the paths that best solve real problems.
Low-light scenes have spawned a series of well-known product features. Google has Night Sight, Samsung has Nightography, Apple has Night Mode, and DxO even turned DeepPRIME into an independent brand. Whether it’s multi-frame fusion, denoising, RAW processing, or deep learning, these capabilities are packaged as a promise users can immediately understand: you can take cleaner, brighter photos in the dark.
Highlight recovery has never achieved a similar status. No phone manufacturer has made it a standalone feature like Night Mode. This is because the industry does two other things on a massive scale: they pull back data that isn’t completely dead in the RAW file, and they try to prevent it from dying during capture.
The first category is the Highlights slider in RAW editors. Adobe Lightroom, Capture One, RawTherapee, and Darktable all have similar capabilities. They’re useful not because the software creates details from pure white, but because RAW files sometimes have headroom. One channel might be saturated while others aren’t. A JPEG might look dead white, but the RAW data isn’t fully gone. Capture One’s Why shoot RAW? explains this boundary clearly. Methods like RawTherapee’s Reconstruct Colour also use this surviving partial information.
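The cross-channel idea behind such tools can be illustrated with a deliberately simplified sketch. This is not any vendor's actual algorithm; it assumes at most one channel per pixel is clipped and estimates the missing channel from a global channel ratio learned from unclipped pixels:

```python
import numpy as np

def recover_clipped_channel(raw, clip_level=4095.0):
    """Toy ratio-based RAW highlight recovery.

    raw: (H, W, 3) linear sensor values. Pixels where exactly one
    channel is clipped are repaired from the other two channels;
    fully saturated pixels cannot be recovered this way.
    """
    out = raw.astype(np.float64).copy()
    clipped = raw >= clip_level            # (H, W, 3) saturation mask
    n_clipped = clipped.sum(axis=2)
    usable = n_clipped == 0                # pixels with all channels intact

    for c in range(3):
        target = clipped[..., c] & (n_clipped == 1)
        if not target.any() or not usable.any():
            continue
        others = [o for o in range(3) if o != c]
        # global ratio of channel c to the other two, from clean pixels
        ratio = raw[usable, c].mean() / raw[usable][:, others].mean()
        est = out[target][:, others].mean(axis=1) * ratio
        # the true value must be at least the clip level
        out[target, c] = np.maximum(est, clip_level)
    return out
```

Real implementations are far more local and careful, but the dependence is the same: recovery only works while at least one channel still carries data.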
But there’s a key limit. When all three color channels are saturated, meaning all channels in the RAW data have reached the sensor’s full-well capacity, the Highlights slider can only produce neutral gray. As noted in an Adobe community discussion: “When there is no detail to recover (i.e. all three channels are completely blown), you get a neutral gray when you drag the Highlights slider to the left.”
The second category is the mobile HDR pipeline. Google’s HDR+ with Bracketing, Samsung’s Staggered HDR, and Qualcomm’s Spectra ISP all fall into this group. Their common point is simple: solve highlight problems during capture. Short exposures protect highlights, long exposures protect shadows, and multi-frame fusion stitches them together. The industry deploys highlight protection on a massive scale, not highlight resurrection.
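The fusion step can be sketched in a few lines. This is a generic weighted linear merge, not any specific pipeline's implementation: each frame is normalized by its exposure time to estimate scene radiance, and clipped pixels get zero weight so highlight data comes only from the short exposure:

```python
import numpy as np

def fuse_brackets(frames, exposures, clip=1.0):
    """Toy linear HDR merge of exposure-bracketed frames.

    frames: list of (H, W) linear images in [0, clip].
    exposures: matching exposure times. Well-exposed pixels get
    high weight; clipped pixels get zero weight.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposures):
        # triangular weight: peaks at mid-gray, zero at black and at clip
        w = np.clip(1.0 - np.abs(2.0 * img / clip - 1.0), 0.0, None)
        num += w * (img / t)      # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-8)
```

Note that the long, clipped frame contributes nothing in the highlights; the short frame's extra information captured at shooting time is what makes the merge possible.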
Both approaches share a common requirement: they depend on information not being completely lost. RAW highlight recovery needs at least one channel to still have data. HDR pipelines need a short-exposure frame that kept highlight details. When all three channels in a JPEG area are at 255, neither solution can help.
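Detecting the hopeless case is trivial, which makes the boundary easy to see in code. A minimal check for pixels where every 8-bit channel is saturated:

```python
import numpy as np

def blown_mask(jpeg_rgb):
    """Pixels where all three 8-bit channels are saturated.

    For these pixels neither a RAW Highlights slider nor HDR fusion
    can help: no surviving channel or frame carries information.
    """
    return np.all(jpeg_rgb == 255, axis=-1)
```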
For cases where information is totally lost, the closest product solution right now is Adobe Photoshop’s Generative Fill. You can select an overexposed area and let the model fill it in. But Generative Fill isn’t a dedicated highlight recovery tool. It’s a general content generation feature. Also, the generated content is the model’s guess based on context. It doesn’t correspond to the actual scene details that were there when the photo was taken. In the Topaz Photo AI community, users have explicitly requested a glare removal feature, but it hasn’t been implemented, which underscores the gap in product-level solutions for this need.
We can explain the causes of this asymmetry on several levels.
The most fundamental is the physical difference. Signals in low-light scenes can be recovered through multi-frame noise averaging or better denoising models because the underlying information is complete, just masked by noise. Highlight clipping means information is permanently removed. Unless there’s information from elsewhere, another frame, another channel, or learned priors, no mathematical tool can derive the original clipped values from the same image. This isn’t about whether the algorithm is good enough. It’s a constraint of information theory.
Next are the issues with data and evaluation. To train a low-light enhancement model, you can use long and short exposure pairs, synthetic noise, or even unpaired data like in Zero-DCE. The SID dataset provides large-scale training data using extreme short and normal long exposure pairs. But to train a highlight recovery model, you need to know the “ground truth” for the clipped areas, information that was lost during capture. HDR datasets can provide some reference because they have a wider dynamic range, but the mapping from HDR to “real scene brightness” involves its own assumptions. This difficulty in getting data limits independent highlight recovery research at its source.
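The data asymmetry is concrete: for low light, a training pair can be manufactured from a single well-exposed image. The sketch below uses a common synthetic recipe (darken in the linear domain, then add Poisson shot noise and Gaussian read noise); the specific parameter values are illustrative, not from any published dataset. There is no analogous recipe for highlights, because darkening is invertible while clipping is not.

```python
import numpy as np

def synth_low_light(img, exposure_ratio=0.05, read_sigma=2.0,
                    full_scale=255.0, rng=None):
    """Build a (low-light input, ground-truth target) pair.

    img: well-exposed linear image with values in [0, full_scale].
    Darken by exposure_ratio, then add signal-dependent shot noise
    and constant read noise.
    """
    rng = rng or np.random.default_rng()
    dark = img.astype(np.float64) * exposure_ratio
    shot = rng.poisson(np.maximum(dark, 0.0)) - dark   # zero-mean shot noise
    read = rng.normal(0.0, read_sigma, size=img.shape)
    noisy = np.clip(dark + shot + read, 0.0, full_scale)
    return noisy, img    # (network input, supervision target)
```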
Third is the difference in user demand and market narrative. Low-light photography is something people encounter every day: restaurants, bars, night scenes, indoor parties. Users can see the problem of a photo being “too dark” and have a strong desire to “take good photos in the dark.” Phone makers are happy to name and market this feature because it directly affects buying decisions. Overexposure on phones has been largely mitigated by HDR pipelines. Remaining cases mostly appear in professional photography, like backlit portraits or direct sunlight. Professional photographers usually use RAW workflows and proper exposure control to prevent the problem. This difference in demand leads to differences in research and product investment.
The final factor is the ambiguity of evaluation standards. The effects of low-light enhancement are relatively easy to judge. The image is brighter, there’s less noise, and details are clearer. These can be quantified with metrics like PSNR and SSIM. But how do you evaluate highlight recovery? If the original information is gone, any “recovery” is a guess. A model might add blue sky and white clouds to a clipped area, and it might look natural, but the real scene could have been a gray, hazy day. This “looks right but might not be real” dilemma makes it hard for the research community to establish accepted evaluation standards.
While gathering these materials, I found a few points that are easy to confuse.
RAW highlight recovery isn’t the same as recovering overexposed areas in a JPEG. The Highlights slider in RAW editors works because RAW data usually has 12 to 14 bits of depth and uncompressed channel data. When one channel is clipped but others still have information, you can calculate across channels. But JPEGs only have 8 bits of depth and use lossy compression. When a pixel value hits 255 in a JPEG, the information in that area was erased during encoding. So, for the same photo, the RAW version might recover highlight details while the JPEG version can’t.
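The headroom difference is easy to demonstrate. In this sketch, the white point chosen for JPEG rendering is hypothetical; the point is only that distinct RAW values above it all collapse to 255 in 8 bits:

```python
import numpy as np

raw = np.array([3000.0, 3500.0, 4000.0])   # distinct 12-bit sensor values
white = 2800.0                             # JPEG white point below RAW max

jpeg = np.clip(raw / white * 255.0, 0, 255).astype(np.uint8)
# all three values render as 255 in the JPEG, but a RAW editor can
# lower the white point and keep them distinct
```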
The HDR and highlight protection advertised by phone companies aren’t single-image highlight recovery. The principles behind Google HDR+, Apple Smart HDR, and Samsung Staggered HDR involve getting extra information during the capture stage, through multiple frames or on-chip sub-pixels, and then fusing them for a wider dynamic range. This is a preventive measure, not a post-capture fix. These technologies can’t help once a user already has an overexposed photo.
Generative completion isn’t real recovery. Tools like Adobe Generative Fill can create plausible content in overexposed areas, but the details are the model’s guesses. They don’t correspond to the actual scene. In situations that require authenticity, like photojournalism, medical imaging, or remote sensing, this method can’t be considered recovery.
Specular highlight removal is a different problem. There’s a research direction called “specular highlight removal” (with several papers at ICCV 2025). It deals with local highlights caused by mirror-like reflections on surfaces, like glints on metal. This is different from “overexposure.” Specular highlights usually only affect small areas, and the physical model is relatively clear. You can use the dichromatic reflection model to separate diffuse and specular components. It doesn’t fall under the category of “highlight overexposure recovery.”
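A crude sketch shows why the physical model makes this problem tractable. Under the dichromatic model with roughly white illumination, the specular term is approximately equal across R, G, and B, so subtracting the per-pixel minimum channel removes most of it. This min-channel trick is only a first approximation; real methods refine it considerably.

```python
import numpy as np

def specular_free(img):
    """Rough specular-diffuse split via min-channel subtraction.

    img: (H, W, 3) linear RGB. Assumes white illumination, so the
    specular component is nearly equal in all three channels.
    """
    spec = img.min(axis=-1, keepdims=True)   # rough specular estimate
    return img - spec, spec
```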
While the overall asymmetry remains clear, a few signals suggest things are slowly changing.
Exposure correction as an independent direction has grown since 2021, with steady paper output at CVPR and ICCV through 2024 and 2025. The title of the CLIER benchmark at ICCV 2025, “From Abyssal Darkness to Blinding Glare,” explicitly includes extreme overexposure in its evaluation. This is a first. Meanwhile, IntrinsicHDR at ECCV 2024 tries to do HDR reconstruction using intrinsic decomposition, splitting the image into albedo and shading, offering a new way to handle overexposed areas.
The rising power of generative models is also changing what’s possible. Diffusion models are already used in low-light enhancement, for example in Diff-Retinex and ExposureDiffusion. The same generative power could theoretically be used to infer content in overexposed areas. The difference is that for low-light images, generative models mostly assist with denoising and detail enhancement, while for overexposed areas they must take on a larger “creative” role. That raises questions of authenticity and credibility: as model completions become more realistic, how do we help users understand what was recovered and what was generated? This product design problem hasn’t been solved yet.
From a sensor technology perspective, Sony’s IMX828 sensor, released in 2025, claims to reach 25 stops of dynamic range (about 150 dB). If sensor capabilities keep growing, highlight clipping caused by insufficient dynamic range will become rarer in everyday shooting. The problem is being solved upstream, which will likely reduce the demand for recovery downstream.
Back to the original question: your observation is correct. There are far more low-light enhancement algorithms than highlight recovery algorithms, and this gap isn’t an accident.
Low-light enhancement grew into a full field because it met several conditions: it’s physically possible since the signal is still there; data can be constructed to build training sets; results can be compared so everyone knows what’s better; and there’s high-frequency demand from users taking photos in the dark every day. Highlight recovery is the opposite. Once information is clipped, the task shifts from recovery to completion. Ground truth is harder to define, evaluation is harder to establish, and the industry can solve most problems earlier in the chain with HDR, bracketing, and RAW workflows.
The most important takeaway isn’t that low light is more important than highlights. It’s that low-light problems mostly happen while the information is still alive, while highlight problems mostly show up after the information has died. The former naturally grows into a flourishing direction for algorithms and products, while the latter is better prevented than fixed after the fact.
This report is based on public academic papers, product documentation, technical blogs, and community discussions as of March 25, 2026. Sources include academic conferences like IEEE TPAMI, CVPR, ICCV, and ECCV, as well as industry materials from the Google Research Blog, Samsung Newsroom, Adobe Help Center, DxO website, and the Capture One Blog.