
Project

Seeing Through Realistic Fog

Camera Culture

Seeing through dense, dynamic, and heterogeneous fog conditions. The technique, based on visible light, uses hardware that is similar to LIDAR to recover the target depth and reflectance.

The system relies on ultrafast measurements, which are used computationally to remove the effect of inclement weather such as fog and to produce a photo and depth map as if the fog weren't there (with contrast improved by 6.5x in dense fog conditions).

Applications

  • Autonomous and augmented driving in challenging weather.
  • Airplane and helicopter takeoff, landing, and low-level flight in dense fog conditions.
  • Trains traveling at normal speeds during inclement weather conditions.

Overview

The measurement is based on a SPAD camera (single photon avalanche diode) that time-tags individual detected photons. A pulsed visible laser is used for illumination. The suggested approach is based on a probabilistic algorithm that first estimates the fog properties (background). The background is then subtracted from the measurement, leaving the signal photons from the target, which are used to recover the target's reflectance and depth.
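The background-subtraction idea above can be sketched for a single pixel. The code below is an illustrative toy, not the paper's algorithm: the bin width, the smoothing-based background estimate, and the peak-window reflectance estimate are all our assumptions.

```python
import numpy as np

C = 3e8        # speed of light (m/s)
BIN = 100e-12  # assumed 100 ps time bins

def recover_target(hist):
    """hist: photon counts per time-of-arrival bin for one pixel."""
    # Crude background estimate: heavy smoothing keeps the slowly varying
    # fog response while averaging out the narrow target return.
    kernel = np.ones(51) / 51.0
    background = np.convolve(hist, kernel, mode="same")
    # Subtract the fog background; keep only the positive residual.
    signal = np.clip(hist - background, 0.0, None)
    peak = int(np.argmax(signal))                 # target return time bin
    depth = 0.5 * C * peak * BIN                  # halve the round-trip time
    reflectance = float(signal[peak:peak + 3].sum())  # photons in the peak
    return depth, reflectance
```

For example, a histogram with a broad fog response plus a narrow spike at bin 400 yields a recovered depth of 0.5 × c × 400 × 100 ps = 6 m.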

The proposed model supports a wide range of fog densities and is able to work in fog that is heterogeneous and dynamic. The fog model is estimated directly from the measurement without prior knowledge. The motivation to use the background photons is similar to our All Photons Imaging work in which scattered light is measured and computationally used to robustly eliminate the scattering.

Other techniques to see through fog are usually based on longer wavelengths (like RF) and provide lower resolution and poorer optical contrast, restricting the ability to identify road lanes and road signs. Another alternative is time-gating, which locks onto a small part of the unscattered signal; this results in a poor signal-to-noise ratio and limits applicability to moving platforms and high fog densities.

Copyright

Camera Culture

Frequently Asked Questions

  1. What are the main advantages of this method?

    There are several key advantages:

    • The simplicity of the reconstruction algorithm allows it to run in real time.
    • The approach doesn't assume prior knowledge of the fog density, and it works with a wide range of fog densities.
    • The method is pixel-wise (each pixel estimates the fog properties and target independently), so it naturally handles heterogeneous scenes and fog.
    • The required hardware is similar to the LIDAR commonly used in self-driving cars.
    • Using visible light, it is possible to read road signs and detect lane markings.
  2. What are some applications?

    Augmenting a human driver and enabling self-driving cars to operate in challenging weather; allowing drones to navigate and follow targets in inclement weather; improving flight safety of airplanes and helicopters during takeoffs, landings and low-level flights in extreme weather; and allowing trains to travel faster in low visibility. 

  3. Why is visible light essential for imaging through fog (why not just RADAR)?

    Imaging in the visible part of the electromagnetic spectrum provides good resolution (short wavelength compared to RF), and good optical contrast (different materials appear very different under visible light). The latter is key to identifying road lane markings and reading road signs. 

  4. What do you mean by realistic fog?

    Our experiments are conducted with a water-based fog generator combined with a fan. This results in fog with variable densities (no fog to very dense), that is moving (dynamic) and heterogeneous (patchy). 

  5. What is a SPAD camera?

    A SPAD (single photon avalanche diode) camera time-tags individual photons as they are detected. Each pixel records at most one photon per laser pulse (our laser emits millions of pulses per second), and our method requires only a few tens of thousands of photons.
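To make the photon budget concrete, here is a toy simulation of SPAD time tagging. The detection probability, gate length, and per-pulse model are illustrative assumptions; the only property taken from the text is that each pulse yields at most one detected photon per pixel.

```python
import random

def time_tag_photons(n_pulses, detect_prob=0.05, rng=None):
    """Simulate per-pixel SPAD time tags over n_pulses laser pulses.

    Each pulse detects at most one photon (with probability detect_prob),
    and the detector records its arrival time within an assumed 100 ns gate.
    """
    rng = rng or random.Random(0)
    tags = []
    for _ in range(n_pulses):
        if rng.random() < detect_prob:            # at most one detection/pulse
            tags.append(rng.uniform(0.0, 100e-9)) # arrival time in the gate
    return tags
```

At millions of pulses per second, even a few-percent detection rate accumulates tens of thousands of time-tagged photons within a fraction of a second.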

  6. Is this a probabilistic framework? What does that mean?

    We develop a probabilistic framework to model scattered photons from fog. This model is estimated from the raw measurement and is used to answer the question "what is the probability that a photon was scattered from fog or target?". 

  7. How does the SPAD single photon sensitivity help?

    The single photon sensitivity underpins the probabilistic framework: for each detected photon we ask what the probability is that it was reflected from the fog or from the target. Perhaps more importantly, a SPAD has much lower measurement noise than traditional cameras.

  8. How does the time-resolved sensing help?

    We want to know the probability that a photon was reflected from the fog or from the target. A more robust way to answer this is to ask: given that the photon was measured at a specific time, what is the probability that it was reflected from the fog or from the target? Intuitively, more information (time, in this case) helps make better estimates. In the paper we specifically show that the probabilities of measuring a photon from the target and from the fog differ as a function of time.
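The per-photon decision described above can be sketched with Bayes' rule: given densities p(t | fog) and p(t | target) and a prior mixing weight, compute the posterior probability that a photon detected at time t came from the target. The Gaussian forms and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import math

def p_target_given_t(t, prior_target=0.1,
                     fog_mu=20e-9, fog_sigma=10e-9,
                     tgt_mu=40e-9, tgt_sigma=0.5e-9):
    """Posterior P(target | photon arrival time t), via Bayes' rule."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    # Fog photons: broad arrival-time distribution. Target photons: narrow
    # pulse around the round-trip time. Both are assumed shapes.
    num = prior_target * gauss(t, tgt_mu, tgt_sigma)
    den = num + (1.0 - prior_target) * gauss(t, fog_mu, fog_sigma)
    return num / den
```

A photon arriving near the assumed target return time (40 ns here) gets a posterior near 1, while an early arrival deep inside the fog response gets a posterior near 0.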

  9. What is optical thickness?

    Optical thickness is a measure of the level of scattering. To measure the optical thickness of fog at a given point in time, we measure the light intensity at that time and compare it to the light intensity without fog. With higher fog densities, the light intensity drops due to scattering. Optical thickness is a dimensionless quantity, so it is easy to compare across different experiments.
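The intensity comparison above corresponds to the standard Beer-Lambert relation: the unscattered intensity decays as I = I0 · exp(-OT), so the optical thickness is OT = -ln(I / I0). A minimal sketch:

```python
import math

def optical_thickness(i_fog, i_clear):
    """OT = -ln(I / I0): dimensionless, from the Beer-Lambert relation."""
    return -math.log(i_fog / i_clear)
```

For example, if fog reduces the measured intensity to exp(-2) ≈ 13.5% of its clear-air value, the optical thickness is 2.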

  10. What are the main limitations of this method?

    Because the approach is pixel-wise (each pixel operates independently), it ignores the spatial blur induced by the fog. While this effect was very minor in our measurements, it may become more apparent in other scenarios. 
    Another limitation is the acquisition time. While we can generate a new result with every newly detected photon (every 100 microseconds), we rely on recently detected photons. In our setup we used a history of 2 seconds, which limits the immediate application to a moving platform. We expect that a stronger laser and better hardware would improve this. 
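The rolling 2-second photon history mentioned above can be sketched as a simple sliding window; the data structure and method names below are our own, only the window length comes from the text.

```python
from collections import deque

class PhotonWindow:
    """Keep only photons detected within the last `window` seconds."""

    def __init__(self, window=2.0):
        self.window = window
        self.photons = deque()  # (wall_time, time_tag) pairs

    def add(self, wall_time, time_tag):
        self.photons.append((wall_time, time_tag))
        # Evict photons older than the history window, so each newly
        # detected photon can trigger an updated reconstruction.
        while self.photons and wall_time - self.photons[0][0] > self.window:
            self.photons.popleft()
```

The eviction step is what ties reconstruction latency to platform motion: any photon older than the window no longer contributes to the current estimate.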
