
CHROMATICITY BASED SMOKE REMOVAL FOR

ENDOSCOPIC IMAGE CLARITY ENHANCEMENT

Tchaka Kevin
Supervisor: Stoyanov Danail

Department of Computer Science


University College London

2015-09-01

(6,175 words)

This thesis is submitted to University College London in partial fulfilment of the
requirements for the MSc in Computer Graphics, Vision and Imaging

ABSTRACT

In robotically assisted minimally invasive surgery, visibility and clarity of the image are
critical for the surgeon's ability to perform the procedure. However, image quality during
endoscopic procedures can deteriorate for a number of reasons, such as fogging due to
temperature gradients, focus blur, and smoke generated when using cautery to cut
through tissue by burning it. In this project we investigated the use of dehazing
techniques, typically applied to outdoor images, to remove surgical smoke and improve the
clarity of the image. For that purpose we analysed the process of image formation within
a hazy medium and the reasons for the degradation of visibility.
METHODS
For simplicity and computational efficiency we used the dark-channel prior method and
histogram equalization. The image is first processed via the dark-channel prior method in
order to remove the smoke, and then the histogram equalization algorithm is applied to
the recovered radiance image to enhance the contrast and brightness of the final result.
RESULTS
Our results on images from robotically assisted procedures are promising, but illustrate the
importance of considering the illumination colour for normalisation and the need for a
perfectly reliable prior.
CONCLUSION
We found that our proposed solution gives promising results, accomplishing its aim of smoke
removal and visibility enhancement. However, the colour map of the recovered radiance is
modified, and processing has not yet reached real-time performance, though it is faster
than initially expected.

ACKNOWLEDGEMENTS

I give my thanks to Danail Stoyanov for his support and patience throughout this project.
I also extend my gratitude to my colleagues at CMIC for the insights they did not hesitate
to share.

CONTENTS
ABSTRACT ............................................................................. 1
ACKNOWLEDGEMENTS ..................................................................... 2
CONTENTS ............................................................................. 3
CHAPTER 1  INTRODUCTION .............................................................. 4
CHAPTER 2  LITERATURE REVIEW ......................................................... 7
  2.1  Optical Model of Image Formation through a Hazy Medium ........................ 7
  2.2  Outdoor Images Haze Removal ................................................... 9
CHAPTER 3  SYSTEM DESIGN ............................................................ 13
  3.1  Dark-Channel Prior Method .................................................... 13
  3.2  Estimating Airlight and Transmission ......................................... 14
  3.3  Scene Radiance Recovery ...................................................... 16
  3.4  Image Contrast Enhancement ................................................... 17
CHAPTER 4  EXPERIMENTAL RESULTS ..................................................... 19
  4.1  Recovered Radiance Colour Distribution ....................................... 19
  4.2  Dark Channel and Transmission ................................................ 22
  4.3  Image Contrast Enhancement ................................................... 28
CHAPTER 5  CONCLUSIONS AND FUTURE WORK .............................................. 32
REFERENCES .......................................................................... 34
APPENDIX ............................................................................ 35

Chapter 1

Introduction
Over the past decades, improvements in the fields of robotics and machine vision have led
to the emergence of surgical robots to aid in performing surgery. Robotically assisted
surgery was developed in order to satisfy the need for minimally invasive surgery. Typically,
instead of directly moving the instruments, the surgeon operates robots that perform the
surgical operation. For such operations, precision, accuracy and visibility are of the utmost
importance. Therefore, depending on the specific case, diverse methods are employed to
provide the surgeon with the most accurate visual input of the robot's end-effectors and
manipulators. In the particular case of endoscopic robotically assisted surgical operations, an
endoscopic camera is mounted on one of the robot arms to provide a precise view of the
scene being operated on. However, burning tissue with diathermy devices generates smoke,
which can greatly impair visibility, as shown in Figure 1.1 below.

Figure 1.1  Image from endoscopic transoral robotic surgery where a laser is used to debulk the base of the
tongue, which creates smoke. On the left, a frame where smoke impairs visibility; on the right, another
frame with clear visibility.

At present, this means that the procedure is sometimes paused to wait for the smoke to clear
in order to recover visibility. This is impractical, both in terms of the toll on the body
from repeated burn-rest cycles and the extension of the length of the surgical procedure. This
project aims to provide a solution to computationally remove the smoke and improve image
clarity.
In the case of outdoor scenes affected by bad weather, haze removal has been the object of
much research in the last decade. From (Kokkeong and Oakley, 2000) to (Li et al.,
2015), many methods have been proposed. Most of these methods rely on the same optical model
to represent the formation of images in a hazy medium. They also rely on one or more
assumptions which allow them to recover the radiance of the image without the effect of
haze.
Since smoke is, in physical terms, classified as a type of haze, the idea arose to see
whether methods developed for outdoor scenes in bad weather could be applied to
endoscopic robotically assisted surgical images. This project can be summarized by the three
following objectives:
- Remove the smoke from robotically assisted surgical images by applying outdoor
  dehazing schemes.
- Maintain the maximum consistency of each frame (i.e. retrieve as much of the
  original radiance as possible).
- Provide minimal processing latency in order to minimize operation delay.


In order to accomplish these objectives, two steps were performed: first the selection of a
suitable dehazing method, then the implementation of that method while optimizing the
algorithm for real-time purposes.
This report contains five chapters organized as follows. Chapter 2 briefly presents the
optical model used in most dehazing techniques and several methods for outdoor
dehazing relying on this model. In Chapter 3, we look more specifically at the algorithm of
our solution, presenting the outdoor dehazing technique selected as well as our improvements
to it. We present the results of our solution in Chapter 4 before concluding and offering
possible future avenues of research in Chapter 5.

Chapter 2

Literature Review

2.1 Optical Model of Image Formation through a hazy medium

Figure 2.1.1  Schematic illustration of image formation through a hazy medium. The image on the left
shows a forest in fog, and on the right we have the recovered image radiance.

The American Heritage Dictionary of the English Language defines haze as "reduced
visibility in the air as a result of condensed water vapour, dust, smoke, etc., in the
atmosphere". In imaging, this definition means that the light reflected from a surface is
scattered in the atmosphere before reaching the camera. This can be due to the presence of
any type of aerosol such as dust, smoke or mist.
Figure 2.1.1 above illustrates this process. Although for schematization purposes we
represented light as propagating in straight lines, in reality it is scattered and replaced
by a sum of previously scattered rays plus rays reflected from the aerosol responsible for the
haze. These are commonly referred to as the atmospheric light (or airlight) and the transmission.
Two main assumptions form the basis for this model. The first is that along a very
short distance dr, r being an arc-length parametrization of a light ray, the fraction of light
absorbed can be assumed to be linearly related to the medium extinction coefficient (or albedo)
β. Integrating this process along a light ray, we can express the transmission t over a surface
at a distance d as:

t = e^{−∫₀^d β(r) dr}        (2.1)

In a pixel sense this is usually expressed as:

t(x) = e^{−β d(x)}        (2.2)

where d(x) is the scene depth at pixel x. Pixels are, here and throughout this thesis,
considered individually, but in practice x should be the coordinate vector of an individual
pixel.
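As a minimal illustration of equation (2.2) (a sketch, not part of the thesis implementation; the function and parameter names are ours), the transmission can be computed from the extinction coefficient and the scene depth:

```cpp
#include <cmath>

// Transmission from scene depth, equation (2.2): t(x) = exp(-beta * d(x)).
// beta is the medium extinction coefficient, depth the scene depth at a pixel.
double transmission_from_depth(double beta, double depth) {
    return std::exp(-beta * depth);
}
```

At zero depth the transmission is 1 (no attenuation), and it decays exponentially towards 0 as depth or extinction grows.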

The second assumption relies on the Radiative Transport Equation (van Rossum and
Nieuwenhuizen, 1999), which stipulates that in the absence of black-body radiation the
fraction of light scattered from any particular direction is replaced by the same fraction of
light scattered from all other directions. As such, one can assume that this light is dominated
by light that underwent several scattering events and can be considered isotropic and
uniform in space. This is what is referred to as the airlight (Koschmieder, 1924).
Following these two assumptions, the image can be seen as composed of a sum of two
components: a multiplicative loss of the surface contrast due to the transmission and an
additive term related to the uniform airlight. This is formally described in equation (2.3)
below as:

I(x) = J(x) t(x) + A (1 − t(x))        (2.3)

where I represents the resulting hazy image, x is a single pixel, A is the airlight, t is the
transmission and J is the haze-free image. J(x)t(x) is commonly referred to as the attenuation
term, and A(1 − t(x)) is the airlight term. Considering (2.3), the goal of haze removal is
to recover J. Since I is known, the easiest way to recover J is to estimate the transmission and
airlight.
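As a minimal, self-contained illustration of this model (a sketch independent of the thesis's actual implementation; types and names are ours), the hazy intensity of a single pixel can be synthesized from the haze-free radiance, transmission and airlight:

```cpp
#include <array>

using RGB = std::array<double, 3>;  // colour with components in [0, 1]

// Haze image formation, equation (2.3): I(x) = J(x) t(x) + A (1 - t(x)).
// J is the haze-free radiance, t the transmission, A the airlight.
RGB hazy_pixel(const RGB& J, double t, const RGB& A) {
    RGB I{};
    for (int c = 0; c < 3; ++c)
        I[c] = J[c] * t + A[c] * (1.0 - t);  // attenuation term + airlight term
    return I;
}
```

With t = 1 the pixel is unattenuated (I = J); with t = 0 it is pure airlight (I = A), matching the two limiting cases of the model.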

2.2 Outdoor Images Haze Removal


A great amount of work has already been done in the context of computational photography
on developing methods that restore images with minimal requirements in terms of input data,
user intervention and complexity of the acquisition hardware. Twenty years ago, most of the
papers concerned with image dehazing assumed some form of additional data in addition to
the hazy image itself. For example, in (Kokkeong and Oakley, 2000) haze is removed from
images taken by a forward-looking airborne camera by assuming that the scene depth is
given. In (Narasimhan and Nayar, 2003) a user-interactive tool for removing weather effects
is described; this requires the user to indicate areas heavily impacted by the weather and
areas that are not. In (Nayar and Narasimhan, 1999) a method is proposed for image dehazing
using two cameras with different polarizations. Considering our goals and the scope
of our project, we will from here on mostly concern ourselves with methods based on single-image
dehazing. Most single-image dehazing methods follow the common workflow described
in Figure 2.2.1 below: the hazy image is used to estimate the airlight and
transmission, then the radiance is recovered to output the dehazed image. Slight differences
can arise in the order of airlight and transmission estimation.

Figure 2.2.1

Single image dehazing basic workflow

One of the most outstanding works in this regard is certainly (Fattal, 2008). In this paper it is
assumed that the transmission and the surface shading are locally uncorrelated. This
assumption allows for the inference of the transmission in areas affected by thin haze. A
Markov random field is then applied to the transmission map in order to propagate the
transmission to denser haze areas. Combined with statistical smoothing, this
physically based approach provides very satisfying results, although it fails when the whole
image is affected by dense haze. It mostly suffers from heavy computational complexity
and time consumption.
Making the two simple observations that haze-free images have more contrast than their hazy
counterparts and that the variation of airlight tends to be locally smooth, (Tan, 2008) also
provides a single-image dehazing method. These two observations allow the
colour and visibility to be recovered by simply maximizing the contrast over a local window
of the hazy image. This approach, though lacking a strict physical basis, still provides
good results, though the colour map of the resulting images seems surreal. It too
suffers from heavy computation. In addition, some assumptions are made about the existence
of an infinitely distant point to approximate the airlight, which makes sense considering that
the method was originally designed to accommodate outdoor images. However, this
cannot be assumed in our particular case.
(Zhang et al., 2010) and (Wang and Fan, 2014) demonstrate the power of chromaticity priors
with regard to simplifying the computational load of the dehazing operation. (Zhang et al.,
2010) present an algorithm that extracts the transmission iteratively under the assumption that
large-scale chromaticity variations are due to transmission whereas small-scale luminance
variations are mostly due to scene albedo. The transmission map is then refined by a non-linear
edge-preserving filter. Their approach gives good results with relatively little
computation compared to previous works. (Wang and Fan, 2014) propose a multiscale
depth fusion method with local Markov regularization to recover haze-free images from a single
hazy image. They employ a Markov random field to blend multi-level details of chromaticity
priors. By linearly modelling the stochastic relation between haze priors and the depth map,
they estimate the depth map via an inhomogeneous Laplacian-Markov random field, coupling
the energy model with the fog priors. They then estimate the airlight utilizing the fused depth
map with a smoothness constraint to find haze-opaque regions. This method provides excellent
results at a relatively cheap computational cost. However, due to the iterative process of
solving the Markov random field, it suffers from the same issue as (Zhang et al., 2010): the
iterations can be numerous and time-consuming.
(Kaiming, 2011) proposes a different approach, called the dark channel prior, to solve the
dehazing issue. This work stems from the observation that most haze-free images contain
some pixels with very low intensities in at least one colour channel, due either to shadows,
colourful objects or dark objects. On the other hand, haze is mostly grey/whitish, and in hazy
images hazy patches will mostly contain pixels whose three colour channels are quite close
and at rather high values compared with their haze-free counterparts. Extracting the
dark channel prior, they first estimate the airlight from the brightest patch of pixels in the dark
channel, then estimate the transmission via a simple operator before recovering the clear
image radiance. Despite its apparent simplicity, this method provides very good results at
very little computational complexity and time. However, even though this approach is the cheapest of
all those described here in time and computation, it still takes some time to process a single
image, approximately 10-20 seconds for a 600x400 image.
The methods introduced previously have all been shown to produce quite satisfying results
on outdoor scenes affected by haze. As such, the criteria for selecting a
suitable method for our design were the computational complexity and processing time for a
single image. For a single 600x400 pixel image, the method from (Tan, 2008) yields its
results in five to seven minutes on a dual-processor Pentium 4 architecture with 1 GB of
memory. (Fattal, 2008) reports broadly similar times, while (Li et al., 2015) claims a
processing time of ten minutes for a 480x270 image resolution on a desktop computer with an
Intel quad-core 2.4 GHz CPU, though their method performs both defogging
and stereo reconstruction simultaneously. In comparison, (Kaiming, 2011) claims ten to twenty seconds of
processing time for a 600x400 image on a PC with a 3.0 GHz Intel Pentium 4 processor.

Though the architectures vary, it is clear that the dark-channel prior method
introduced in (Kaiming, 2011) is the fastest and least computationally expensive. Moreover, it
had already been proven to yield excellent results for real-time video dehazing in outdoor
cases (Xiaoqiang et al., 2014). Following that example, the dark-channel prior method was
selected as the basis for our system.


Chapter 3

System Design

3.1 Dark-Channel Prior Method

As stated previously, the dark channel prior is based on the observation that in most non-sky
patches of a haze-free image, at least one colour channel has very low intensity at some
pixels, meaning that the minimum value in such a patch should be very low. It is a
generalization of the dark-object subtraction technique (Chavezjr, 1988), where
spatially homogeneous haze is removed by subtracting the colour of the darkest object in the
scene. The dark channel can be formally defined for an image J as:
J^dark(x) = min_{c∈{r,g,b}} min_{y∈Ω(x)} J^c(y)        (3.1)

where J^dark is the dark channel, c is a colour channel, Ω(x) represents a patch of the image
centred on pixel x, and y represents a pixel belonging to the local patch Ω(x).

The intensity of the dark channel can be considered a rough approximation of the thickness of
the haze: the higher the intensity, the thicker the haze. The haze can thus be treated as
noise, and by subtracting the dark-channel value it can be removed. Here, following the lead
of (Xiaoqiang et al., 2014), the dark channel is implemented globally to reduce processing
time, removing the need to divide the image into patches. The principle is that the whole
image will contain credible dark-channel pixels that approximate the effect of haze in the
region. As such, equation (3.1) becomes:

J^dark(x) = min_{c∈{r,g,b}} J^c(x)        (3.2)
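The global dark channel of equation (3.2) amounts to a per-pixel minimum over the colour channels. A minimal sketch (our own illustration, using plain arrays rather than the OpenCV types of the actual implementation, with values assumed in [0, 1]):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Global dark channel, equation (3.2): the per-pixel minimum over the three
// colour channels. `rgb` is a flat array of interleaved RGB values in [0, 1];
// the result holds one value per pixel.
std::vector<double> global_dark_channel(const std::vector<double>& rgb) {
    std::vector<double> dark(rgb.size() / 3);
    for (std::size_t p = 0; p < dark.size(); ++p)
        dark[p] = std::min({rgb[3 * p], rgb[3 * p + 1], rgb[3 * p + 2]});
    return dark;
}
```

Because no patch neighbourhood is involved, the operation is a single linear pass over the pixels, which is what makes the global variant attractive for real-time use.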

3.2 Estimating Airlight and Transmission

Using the dark channel of the image, the airlight can be estimated as the colour of the most
haze-opaque pixel, i.e. the brightest pixel in the dark channel of the image. However, the
brightest pixel might be the result of noise, so in order to account for that possibility the
airlight is estimated as the average of the top 0.1% brightest pixels in the dark channel.
Taking the dark channel, the top 0.1% brightest pixels are extracted, and then the average
colour of these pixels in the input image is computed and kept as the estimated airlight.
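This estimation step can be sketched as follows (a self-contained C++ illustration of ours, using plain arrays rather than the OpenCV types of the actual implementation):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

using RGB = std::array<double, 3>;

// Estimate the airlight as the average input colour of the top 0.1% brightest
// dark-channel pixels (at least one). `dark` holds one value per pixel,
// `rgb` the interleaved RGB input image.
RGB estimate_airlight(const std::vector<double>& dark,
                      const std::vector<double>& rgb) {
    const std::size_t n = dark.size();
    const std::size_t top = std::max<std::size_t>(1, n / 1000);  // top 0.1%
    std::vector<std::size_t> idx(n);
    for (std::size_t i = 0; i < n; ++i) idx[i] = i;
    // Order indices by descending dark-channel brightness (only `top` needed).
    std::partial_sort(idx.begin(), idx.begin() + top, idx.end(),
                      [&](std::size_t a, std::size_t b) { return dark[a] > dark[b]; });
    RGB A{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < top; ++i)
        for (int c = 0; c < 3; ++c)
            A[c] += rgb[3 * idx[i] + c] / static_cast<double>(top);  // average colour
    return A;
}
```

Using a partial sort rather than a full sort keeps the selection of the brightest fraction cheap, which matters given the real-time aim.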
This approach relies heavily on the fact that in outdoor hazy scenes the top 0.1% brightest
pixels in the dark channel usually belong to infinitely distant objects (they often end
up being sky-region pixels), and are thus very scarcely impacted by the objects' own chromaticity,
allowing for a very good estimation of the airlight. However, in robot-assisted surgery
images, infinitely distant objects are very unusual, and the brightest pixels mostly end up
being reflections of the light source on tissues, or pixels belonging to the robot's end-effectors.
This affects the complete process, as in this case the estimated colour does not
correspond to the most haze-opaque region but simply to the colour of the brightest object in
the scene. Figure 3.2.1 below shows how this error manifests itself in a specific case.

Figure 3.2.1  Error in estimating the airlight. The region circled in green corresponds to the top 0.1%
brightest pixels in the dark channel; however, this is not the most haze-opaque region in the image.

This issue does not prevent the removal of smoke, but it impacts the output image
colour map quite severely. One way of preventing this is to pre-process the dark channel prior
to the airlight estimation in order to ignore those regions and only consider the most haze-opaque
region. What was done here was, on the assumption that haze would most likely not have a
brightness value too close to pure white, to first threshold the dark channel, removing too-bright
objects before conducting the airlight estimation. The value of the threshold was set at
0.6. A second way investigated is to rescale the brightness values of the dark-channel
pixels according to their difference from the brightest possible pixel value. A formula
to this effect is the following: for each pixel x of the image, the new dark-channel brightness
value D'(x) can be expressed as:

D'(x) = D(x) (1.0 − D(x))        (3.3)

This corrected dark channel should then be normalized between 0 and 1 for the airlight
estimation.
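Both corrections can be sketched in a few lines (our own illustration; the parameter name `tau` and the final renormalisation step are ours, with dark-channel values assumed in [0, 1]):

```cpp
#include <algorithm>
#include <vector>

// (a) Thresholding: zero out dark-channel values brighter than `tau`
// (0.6 in the text), assumed to be highlights or instruments rather than haze.
std::vector<double> threshold_dark(std::vector<double> dark, double tau = 0.6) {
    for (double& v : dark)
        if (v > tau) v = 0.0;
    return dark;
}

// (b) Rescaling, equation (3.3): D'(x) = D(x)(1 - D(x)), suppressing
// near-white pixels, followed by renormalisation to [0, 1].
std::vector<double> rescale_dark(std::vector<double> dark) {
    for (double& v : dark) v = v * (1.0 - v);
    double m = *std::max_element(dark.begin(), dark.end());
    if (m > 0.0)
        for (double& v : dark) v /= m;
    return dark;
}
```

The rescaling variant is smooth, avoiding the hard cut-off of the threshold, at the cost of also attenuating moderately bright haze pixels.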
Writing equation (2.3) for a single colour channel c, we have:

I^c(x) = t(x) J^c(x) + (1 − t(x)) A^c        (3.4)

Dividing by A^c and applying the min operator over the colour channels, this is equivalent to:

min_{c∈{r,g,b}} ( I^c(x) / A^c ) = t(x) min_{c∈{r,g,b}} ( J^c(x) / A^c ) + 1 − t(x)        (3.5)

According to the dark-channel prior, the dark channel of the haze-free radiance should tend
to zero, i.e.

J^dark(x) = min_{c∈{r,g,b}} J^c(x) = 0        (3.6)

Since A is strictly positive, this is equivalent to:

min_{c∈{r,g,b}} ( J^c(x) / A^c ) = 0        (3.7)

Substituting equation (3.7) into (3.5), the transmission can be estimated as:

t(x) = 1 − min_{c∈{r,g,b}} ( I^c(x) / A^c )        (3.8)

It is easy to see that min_{c∈{r,g,b}} ( I^c(x) / A^c ) is the dark channel of the normalized hazy image
I^c(x)/A^c. However, haze is an important cue to depth perception for humans, so it is unwise to
remove it completely, as this might create an unnatural feeling in the image and incur a loss of
depth perception. To avoid such an issue, a small amount of haze is kept by introducing a
constant parameter ω verifying 0 < ω < 1 in equation (3.8). The final transmission is then
estimated as:

t(x) = 1 − ω min_{c∈{r,g,b}} ( I^c(x) / A^c )        (3.9)

In (Kaiming, 2011) the transmission is further refined by a soft-matting algorithm to remove
the block effect induced by the fact that the transmission might not be constant within a patch.
However, that issue does not arise when applying the global dark-channel operation, as the
transmission is computed on a per-pixel basis.
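The per-pixel transmission of equation (3.9) can be sketched as follows (our own illustration with plain arrays; the thesis does not restate its value of ω here, so the default of 0.95, a common choice in the dehazing literature, is an assumption):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Per-pixel transmission, equation (3.9):
// t(x) = 1 - omega * min_c (I^c(x) / A^c).
// omega in (0, 1) keeps a little haze; 0.95 is a common choice.
std::vector<double> estimate_transmission(const std::vector<double>& rgb,
                                          const std::array<double, 3>& A,
                                          double omega = 0.95) {
    std::vector<double> t(rgb.size() / 3);
    for (std::size_t p = 0; p < t.size(); ++p) {
        double m = std::min({rgb[3 * p] / A[0],
                             rgb[3 * p + 1] / A[1],
                             rgb[3 * p + 2] / A[2]});
        t[p] = 1.0 - omega * m;  // dark channel of the normalized image
    }
    return t;
}
```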

3.3 Scene Radiance Recovery

Once the airlight and transmission have been estimated, the scene radiance can be recovered
by simply inverting equation (2.3) as:

J(x) = ( I(x) − A(1 − t(x)) ) / t(x)        (3.10)

As the transmission gets close to zero, the recovered radiance becomes prone to noise;
therefore, the transmission is usually restricted to a lower bound t₀. What this implies is that a
small amount of haze is preserved in regions of very dense haze. (Kaiming, 2011) indicates 0.1 as a
value of t₀. This value will be discussed later. The final scene radiance is then recovered
as:

J(x) = ( I(x) − A(1 − t(x)) ) / max(t(x), t₀)        (3.11)
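Equation (3.11) can be sketched per pixel as follows (our own illustration, independent of the OpenCV implementation):

```cpp
#include <algorithm>
#include <array>

using RGB = std::array<double, 3>;

// Scene radiance recovery, equation (3.11):
// J(x) = (I(x) - A(1 - t(x))) / max(t(x), t0),
// with t0 a lower bound on the transmission (0.1 in (Kaiming, 2011)).
RGB recover_radiance(const RGB& I, const RGB& A, double t, double t0 = 0.1) {
    const double denom = std::max(t, t0);  // avoid amplifying noise as t -> 0
    RGB J{};
    for (int c = 0; c < 3; ++c)
        J[c] = (I[c] - A[c] * (1.0 - t)) / denom;
    return J;
}
```

For instance, a pixel formed by the model from J = 0.5, t = 0.5 and A = 1 has hazy intensity I = 0.75, and the recovery returns the original 0.5.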

The brightness of the recovered haze-free radiance is usually lower than that of the hazy
image. In addition, an effect of global dehazing is that the local dehazing effect is not as
obvious. To correct this, it is necessary to enhance the contrast of the recovered radiance in
order to improve the quality of the restored image.

3.4 Image Contrast Enhancement

Contrast enhancement is the process of increasing the contrast between bright and dark
regions by highlighting the edges and details of the image. Many different
algorithms exist for this purpose, such as gamma correction, histogram equalization and
Retinex. Due to its overall simplicity and computational efficiency, histogram equalization was
selected for our purpose: it is simple and easy to implement, quite fast, and serves our
purpose well.
Histogram equalization enhances image information by widening the grey-level interval
and spreading out the most frequent intensity values. Its basic principle is to use a grey
mapping table to compute each pixel's new intensity value. In doing so, the image is corrected by
making the intensity distribution uniform, thus increasing the dynamic range of the grey
values. Since we are dealing with colour images, and to avoid having to consider the colour
correlation of the RGB channels, the histogram equalization is applied to the value channel of the
HSV conversion of the recovered radiance.
A simple algorithm for histogram equalization is the following:
- Compute the histogram of the value channel of the HSV conversion of the colour
  image.
- Compute the cumulative distribution function (cdf) of the histogram.
- For each pixel intensity value v, compute the new intensity value as

h(v) = round( (cdf(v) − cdf_min) / (N − cdf_min) × (L − 1) )

where N is the number of pixels in the image, L is the number of grey levels used and
cdf_min is the minimum non-zero value of the cumulative distribution function of the
histogram.
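The three steps above can be sketched for an 8-bit single-channel image (such as the V channel) as follows; the actual implementation uses OpenCV's primitives, so this plain-array version is only illustrative:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Histogram equalization of an 8-bit single-channel image (e.g. the V channel
// of the HSV-converted radiance), following the three steps above.
std::vector<unsigned char> equalize(const std::vector<unsigned char>& img) {
    const int L = 256;                          // number of grey levels
    const std::size_t N = img.size();           // number of pixels
    // 1. Histogram of intensity values.
    std::vector<std::size_t> hist(L, 0);
    for (unsigned char v : img) ++hist[v];
    // 2. Cumulative distribution function and its minimum non-zero value.
    std::vector<std::size_t> cdf(L, 0);
    std::size_t run = 0;
    for (int v = 0; v < L; ++v) { run += hist[v]; cdf[v] = run; }
    std::size_t cdf_min = 0;
    for (int v = 0; v < L; ++v)
        if (cdf[v] > 0) { cdf_min = cdf[v]; break; }
    // 3. Remap: h(v) = round((cdf(v) - cdf_min) / (N - cdf_min) * (L - 1)).
    std::vector<unsigned char> out(N);
    for (std::size_t i = 0; i < N; ++i) {
        const double num = static_cast<double>(cdf[img[i]] - cdf_min);
        const double den = static_cast<double>(N - cdf_min);
        out[i] = den > 0.0
                     ? static_cast<unsigned char>(std::lround(num / den * (L - 1)))
                     : img[i];                  // flat image: leave unchanged
    }
    return out;
}
```

A two-level image, for example, is stretched so that its darker level maps to 0 and its brighter level to 255, which is exactly the widening of the grey-level interval described above.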


One drawback of histogram equalization is that it might also bring out and amplify noise
in the image, so a simple smoothing technique is applied to the resulting image to correct
this issue. Figure 3.4.1 below presents a complete flow chart of the solution's workflow.

Figure 3.4.1

Design Flow of the proposed solution.


Chapter 4

Experimental Results
Our original data consist of two stereo endoscopic video streams from robotically assisted
surgeries. One is from a transoral robotic surgery where a laser is used to debulk the base of
the tongue, which creates smoke. The second is from a robotically assisted laparoscopic
cholecystectomy where diathermy creates the smoke artefacts. For performance comparison
purposes, the data also include some outdoor hazy images, such as the forest image
in (Kaiming, 2011). All tests were performed on a PC with a 2.40 GHz Intel Core i7-4700MQ
processor. The solution was implemented in C++ in the Microsoft Visual Studio
2012 environment using the OpenCV 2.4.1 library for C++ (Bradski, 2000).

4.1 Recovered Radiance Colour Distribution

Natively, the proposed solution yields quite good results, which can be seen in Figure 4.1.1
below. The top image of each pair is the original frame where smoke is present, and the
bottom images are the resulting dehazed frames after going through our system. It can easily
be seen that there is a definite improvement in visibility and the smoke has been successfully
removed. However, the process seems to darken the global image. Though this does not reach
the level of visual impairment, it is still visible that the colours are no longer as natural.


(a)

(b)

Figure 4.1.1  Two example stereo endoscopic images from robotic-assisted surgery before and after haze
removal. The top two images (a) show transoral robotic surgery where a laser is used to debulk the base of
the tongue, which creates smoke. The bottom two images (b) are from robotic-assisted laparoscopic
cholecystectomy where diathermy creates the smoke artefacts. In both examples the smoke is removed;
however, the image colour space undergoes a significant shift due to the illumination colour. This can
potentially be overcome through colour constancy and illumination calibration, or, as in the next section, via a
better definition of the dark-channel prior.

This colour change might be due to the fact that the dark-channel prior method severely
reduces the colour channels that contribute to the dark channel of the image. In our
case, where the red channel is the most prominent, the blue and green channels are seriously
reduced, as shown in Figure 4.1.2 below.

(1) Colour distribution before haze removal

(2) Colour distribution after haze removal

Figure 4.1.2  Colour distributions of the images from Figure 4.1.1 before and after haze removal, showing the
need for colour adjustment. For each pair, the figures on the right correspond to the transoral surgery while the ones
on the left relate to the cholecystectomy.

Colour constancy or illumination calibration techniques might be possible solutions to this
issue and should be investigated further. However, knowing that the colour
distribution reduction is due to the inner mechanism of the dark-channel technique allows us
to investigate how it might be addressed by slightly modifying the dark-channel estimation
itself in order to pre-emptively correct the issue. The next section is dedicated to the results
of our investigations in that direction.


4.2 Dark Channel and Transmission

The dark-channel extraction step is the cornerstone of the whole technique: the dark channel
is used to estimate both the airlight and the transmission. A good dark channel is smooth and
clearly shows the haze density at each pixel along with the regions of haze
and no haze. The dark channel should have dark pixels where there is no haze and bright
pixels where haze is present. Equation (3.8) shows that the estimated transmission is
more or less a negative view of the dark channel, which explains the close relationship
between the two; it follows that the clearer the dark channel, the better the final result.
The naive dark channel computed with equation (3.2) yields quite good results for outdoor
images, as shown in Figure 4.2.1 below. However, it can be seen that the colours of the
recovered radiance are more vivid and intense. While this may look like a good improvement
for the hazy forest image, it is less practical with a surgery sample such as the transoral
example in Figure 4.2.2 below, where it can be seen that the dark-channel and
transmission images are messier, resulting in the odd colours of the recovered radiance.

Figure 4.2.1  Original source (top right), dark channel (top left), transmission (bottom right) and
recovered radiance (bottom left) for the hazy forest image from (Kaiming, 2011) when computing the
dark channel with equation (3.2).


Figure 4.2.2  Original source (top), dark channel (second), transmission (third) and recovered
radiance (bottom) of a transoral surgery frame when computing the dark channel with equation
(3.2).

Figure 4.2.3 hereafter shows the same result when applied to our second example, a
laparoscopic cholecystectomy.


Figure 4.2.3 Original source (top), dark-channel (second), transmission (third) and recovered
radiance (bottom) of a laparoscopic cholecystectomy frame when computing the dark-channel with
equation (3.2).

Applying the correction of the dark-channel discussed in section 3.2 mitigates this issue
considerably. By thresholding the values of the dark-channel, the region selected to estimate
the airlight gets closer to the most haze-opaque region. Applying the correction defined in
equation (3.3) has similar results. Figures 4.2.4 and 4.2.5 below show the dark-channel for the
transoral surgery and laparoscopic cholecystectomy frames as before, but computed according
to the original equation (3.2), equation (3.2) thresholded, and equation (3.3).
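The thresholding variant can be sketched as follows; this is a hypothetical helper (name and signature are ours), assuming the 0.6 threshold used for Figures 4.2.4 and 4.2.5:

```cpp
#include <cstddef>
#include <vector>

// Thresholded dark channel: zero out values below the threshold so that
// only the brightest, most haze-opaque pixels remain candidates when the
// airlight is estimated from the top of the dark channel.
std::vector<double> thresholdDarkChannel(const std::vector<double>& dark,
                                         double thresh = 0.6) {
    std::vector<double> out(dark.size(), 0.0);
    for (std::size_t i = 0; i < dark.size(); ++i)
        if (dark[i] >= thresh)
            out[i] = dark[i];  // keep haze-opaque candidates only
    return out;
}
```

Suppressing the sub-threshold pixels prevents bright tissue regions of low haze density from pulling the airlight estimate away from the most haze-opaque region.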


Figure 4.2.4 Comparison of different dark-channel extractions of a transoral surgery image. Top:
dark-channel computed from equation (3.2), middle: dark-channel thresholded by 0.6, bottom:
dark-channel corrected via equation (3.3).

Figure 4.2.5 Comparison of different dark-channel extractions of a laparoscopic cholecystectomy
image. Top: dark-channel computed from equation (3.2), middle: dark-channel thresholded by 0.6,
bottom: dark-channel corrected via equation (3.3).


Similarly, Figure 4.2.6 shows the corresponding recovered radiance. One can see that
correcting the dark-channel leads to an actual correction of the colour map of the recovered
radiance. The results are quite encouraging; however, one thing not visible in a single image
is that this comes at the cost of a reduction in the density of haze removed during the process.
Applied to consecutive frames of a video, it becomes clear that the closer the colours look to
the source frame, the less effective the smoke density reduction.

Figure 4.2.6 Recovered radiance from a transoral surgery frame. Top: source, second: dark-channel
computed with equation (3.2), third: thresholded dark-channel, bottom: dark-channel computed with
equation (3.3).

In the end, it boils down to a choice between an acceptable level of residual haze and the
colour fidelity of the output image. The final implementation choice was to prioritize smoke
removal using equation (3.2). Even though the results for a single image are more impressive
with equation (3.3), when taking the whole video sequence into account, results are better
with equation (3.2). The same behaviour can be observed in the laparoscopic cholecystectomy
case, as can be seen in Figure 4.2.7.

Figure 4.2.7 Recovered radiance from the laparoscopic cholecystectomy frame. Top: source,
second: dark-channel computed with equation (3.2), third: thresholded dark-channel, bottom:
dark-channel computed with equation (3.3).

The differences in haze reduction are most easily seen on the hazy forest image from
(Kaiming, 2011), where the level of haze remaining after each approach is clearly visible.
Figure 4.2.8 below illustrates this.


Figure 4.2.8 Recovered radiance from the hazy forest image. Top right: source, top left:
dark-channel computed with equation (3.2), bottom right: thresholded dark-channel, bottom left:
dark-channel computed with equation (3.3).

It is easy to see in the above figure that as the colours come closer to a natural aspect, the
amount and density of haze remaining in the recovered radiance increases. Though less obvious
there, Figures 4.2.6 and 4.2.7 above illustrate the same result.

4.3 Image Contrast Enhancement

As stated in section 3.4, the recovered radiance after applying the dark-channel prior
technique usually has lower brightness than the source. Thus, to improve quality and
visibility it is necessary to enhance the contrast of the recovered radiance. Our solution uses
histogram equalization, chosen for its simplicity and low computational cost. What was tested
here was the difference in quality and computational time between simple histogram
equalization and contrast limited adaptive histogram equalization (CLAHE). It is also worth
noting that both histogram equalization and CLAHE carry the risk of bringing out noise in the
resulting image, so they are usually coupled with a de-noising technique. Here we investigated
two possibilities as well: a simple median blur with a kernel size of 3, or a mix of Gaussian
blurring with a kernel size of 5 and some Laplacian sharpening.
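For reference, plain histogram equalization of one 8-bit channel (the V channel, in our pipeline) can be sketched as follows; this is the textbook formulation (histogram, cumulative distribution, remap), slightly simplified relative to the exact routine listed in the appendix:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Classic histogram equalization of an 8-bit channel: build the histogram,
// accumulate the CDF, then remap each value v through the normalised CDF,
// i.e. v -> round(255 * (cdf[v] - cdf_min) / (N - cdf_min)).
std::vector<unsigned char> equalize(const std::vector<unsigned char>& v) {
    std::array<int, 256> hist{};
    for (unsigned char p : v) hist[p]++;

    std::array<int, 256> cdf{};
    int acc = 0;
    for (int i = 0; i < 256; ++i) { acc += hist[i]; cdf[i] = acc; }

    // First non-zero CDF value, used to anchor the lowest present level at 0.
    int cdfMin = 0;
    for (int i = 0; i < 256; ++i)
        if (cdf[i] > 0) { cdfMin = cdf[i]; break; }

    const int n = static_cast<int>(v.size());
    std::vector<unsigned char> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i)
        out[i] = static_cast<unsigned char>(
            std::lround(255.0 * (cdf[v[i]] - cdfMin) /
                        std::max(n - cdfMin, 1)));
    return out;
}
```

Because the mapping stretches the occupied intensity range to the full [0, 255] interval, it amplifies small local variations along with the signal, which is exactly why a de-noising pass follows it in our solution.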
Table 4.3.1 Computational times of contrast enhancement schemes

Contrast enhancement technique   De-noising technique                     Computational time (ms)
Histogram Equalization           Median Blur                              341
Histogram Equalization           Gaussian Blur + Laplacian Sharpening     365
CLAHE                            Median Blur                              347
CLAHE                            Gaussian Blur + Laplacian Sharpening     376

In terms of computational time, Table 4.3.1 above records the times of the different solutions
investigated for a 1440x288 pixel image. The times recorded include the entirety of the
process and not just the contrast enhancement part; however, the rest of the solution is
invariant in each case. For the sake of better correlation with the visual quality test below,
this test used the corrected dark-channel prior from equation (3.3) introduced before, because
that corrected prior leads to images where the difference in quality is easier to see visually.
It is our belief that the qualitative aspect of the test should hold regardless of the method
used for estimating the dark-channel. As shown in the table, the fastest solution is the
histogram equalization plus median blur mix, as was expected. The following figure shows the
visual result of applying each set of techniques to the same transoral surgery frame used
throughout this thesis.


Figure 4.3.1 Visual results of the contrast enhancement techniques. Top to bottom: source frame,
histogram equalization + median blur, histogram equalization + Gaussian blur & sharpening,
CLAHE + median blur, CLAHE + Gaussian blur & sharpening.


Here the most impressive result visually is attained using CLAHE + Gaussian blur and
sharpening. However, as Table 4.3.1 above shows, it is also the most time consuming. The one
offering the best ratio of quality to time consumption appears to be CLAHE + median blurring,
with only 347 milliseconds of computational time. In the end, our final solution uses CLAHE
with median blurring.


Chapter 5

Conclusions and Future Work


The aim of this project was to investigate the use of outdoor-scene dehazing techniques for
smoke removal in minimally invasive surgery video sequences. By investigating the process of
image formation through a hazy medium, and examining the assumptions and constraints of
pre-existing dehazing techniques for outdoor scenes, a system was designed to remove smoke
from endoscopic robotically assisted surgical video sequences. The system processes the video
sequence frame by frame and removes the smoke from each individual frame. The designed
system, using the dark-channel prior technique (Kaiming, 2011) as a basis coupled with
contrast enhancement techniques, can be hooked to a video stream and works with a small
delay. However, the colours of the output image are slightly unnatural (more vivid and
intense). A solution was explored for this issue, but it came at the sacrifice of part of the
smoke removal. As the goal was to increase visibility and clarity, and the colour shift does
not impair these, it was decided to prioritize smoke removal over the exactitude of the colour
map.
Future work would mainly lie in resolving the colour issue, possibly by studying in more depth
the relation between the dark-channel prior, the airlight estimation and the colour output.
Through empirical manipulation of the formula an interesting relation between the two was
raised, but a more formal study might reveal a solution to the issue at hand. Also, although
the solution seems to work quite well, an implementation at hardware level at image
acquisition time would do wonders for the real-time issue. It would also be interesting to
investigate cutting computation time by estimating the airlight only once instead of at each
frame, as it should not vary much over the course of a single operation.
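The once-per-sequence airlight idea could be prototyped as a small caching wrapper; this sketch is entirely hypothetical (the struct, its names, and the refresh interval are ours, not part of the implemented system):

```cpp
// Hypothetical airlight cache: run the expensive estimation only every
// refreshInterval frames and reuse the cached value in between, on the
// assumption that airlight is near-constant within one operation.
struct AirlightCache {
    double value = 0.0;        // cached airlight (one channel, for brevity)
    int framesSinceUpdate = -1;
    int refreshInterval;       // assumed parameter, e.g. once per 100 frames

    explicit AirlightCache(int interval) : refreshInterval(interval) {}

    template <typename EstimateFn>
    double get(EstimateFn estimate) {
        // Re-estimate on the first frame and whenever the cache is stale.
        if (framesSinceUpdate < 0 || framesSinceUpdate >= refreshInterval) {
            value = estimate();
            framesSinceUpdate = 0;
        }
        ++framesSinceUpdate;
        return value;
    }
};
```

The per-frame cost then drops to a cache lookup on most frames, while an occasional refresh still tracks slow drifts in the illumination.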


References
BRADSKI, G. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools.
CHAVEZ JR, P. 1988. An improved dark-object subtraction technique for atmospheric
scattering correction of multispectral data. Remote Sensing of Environment, 24, 459-479.
FATTAL, R. 2008. Single image dehazing. ACM Trans. Graph., 27, 1-9.
KAIMING, H. 2011. Single Image Haze Removal Using Dark Channel Prior. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 33, 2341-2353.
KOKKEONG, T. & OAKLEY, J. P. 2000. Enhancement of color images in poor visibility
conditions. Image Processing, 2000. Proceedings. 2000 International Conference on,
10-13 Sept. 2000. 788-791 vol. 2.
KOSCHMIEDER, H. 1924. Theorie der horizontalen Sichtweite: Kontrast und Sichtweite.
Keim & Nemnich.
LI, Z., TAN, P., TAN, R. T., ZOU, D., ZHIYING ZHOU, S. & CHEONG, L.-F. 2015.
Simultaneous Video Defogging and Stereo Reconstruction. Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 4988-4997.
NARASIMHAN, S. G. & NAYAR, S. K. 2003. Interactive (de)weathering of an image
using physical models. IEEE Workshop on Color and Photometric Methods in
Computer Vision, 6, 1.
NAYAR, S. K. & NARASIMHAN, S. G. 1999. Vision in bad weather. Computer Vision,
1999. The Proceedings of the Seventh IEEE International Conference on, 820-827.
TAN, R. T. 2008. Visibility in bad weather from a single image. Computer Vision and
Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, 23-28 June 2008. 1-8.
VAN ROSSUM, M. C. W. & NIEUWENHUIZEN, T. M. 1999. Multiple scattering of
classical waves: microscopy, mesoscopy, and diffusion. Reviews of Modern Physics,
71, 313-371.
WANG, Y. & FAN, C. 2014. Single Image Defogging by Multiscale Depth Fusion. Image
Processing, IEEE Transactions on, 23, 4826-4837.
XIAOQIANG, J., JIEZHANG, C., JIAQI, B., TINGTING, Z. & MEIJIAO, W. 2014. Real-time
enhancement of the image clarity for traffic video monitoring systems in haze. Image
and Signal Processing (CISP), 2014 7th International Congress on, 14-16 Oct. 2014. 11-15.
ZHANG, J., LI, L., YANG, G., ZHANG, Y. & SUN, J. 2010. Local albedo-insensitive single
image dehazing. The Visual Computer, 26, 761-768.


Appendix
List of main Visual Studio Projects:

Single Image Dehazing

Dehazing Processing Toolbox: main core of the dehazing unit. This executable project runs the solution on one
image and outputs the recovered radiance as a JPEG image. Note that this project can only accept input images of
a type supported by the OpenCV 2.4.4 library. Set for 64-bit architecture only; to execute on a 32-bit architecture,
recompile the project using 32-bit OpenCV includes and libraries. It is recommended to use the Dehazer project as
an interface to execute this application; it was not intended to be stand-alone and, as such, no help was
implemented.
Dehazer: simple interface for dehazing one single image, mainly used for testing the process before running a
whole sequence dehazing. This project is merely an interface; most of the actual processing is run by the
Dehazing Processing Toolbox project executable.

Video Sequences and Stream

VideoDehazingToolbox: interface project to the processes manipulating whole video sequences and streams.
Use is quite straightforward: select the input video/stream, choose options (visualizations/contrast
enhancement/etc.), then run.
VideoDehazer: simple project processing a complete video sequence/stream and outputting the result to
another video file/stream. It is recommended to use this executable via the interface project
VideoDehazingToolbox; it was not intended to be used standalone and, though this is possible, it is not
recommended as no help was provided.
DehazerRealTest: performs mainly the same operations as the VideoDehazer project; however, it outputs the
result stream directly at runtime.


Dehazer class: Main implementation of the processing structure


Class header:
#ifndef DEHAZER_H_INCLUDED
#define DEHAZER_H_INCLUDED
#include "Common.h"
#include "OpenCvCommon.h"

typedef struct ImageDetails
{
    string resolution;
    int width;
    int height;
    string path;
} imgdetails;

class Dehazer
{
public:
    Dehazer(string impath, string imout, int method);
    int dehaze();
    Mat getOrImg();
    Mat getDehazedImg();
    imgdetails getImgDetails();
    void saveimg();
private:
    //source image
    string imagepath;
    Mat orimg;
    //recovered radiance
    string outputimg;
    Mat timg;
    int tech;
    Mat AdaptiveHistogramEqualization(Mat img);
    Mat HistogramEqualization(Mat img);
    Mat AcEstimation(Mat img, Mat imgdark);
    Mat darkbrightchannelprior(Mat img, bool dark);
    void DarkChannel();
};
#endif


Class Implementation
#include "Dehazer.h"
#include <math.h>

Dehazer::Dehazer(string impath, string imout, int method)
{
    this->imagepath = impath;
    this->outputimg = imout;
    this->tech = method;
    Mat img = imread(this->imagepath, 1);
    img.convertTo(this->orimg, CV_64F, 1.0/255);
}

Mat Dehazer::darkbrightchannelprior(Mat img, bool dark)
{
    int rows = img.rows, cols = img.cols;
    int channels = img.channels();
    Mat imgtmp;
    Mat res;
    if(channels >= 3)
    {
        vector<Mat> imgchannels(channels);
        cv::split(img, imgchannels);
        if(dark)
        {
            //dark channel: per-pixel minimum across the colour channels
            min(imgchannels[0], imgchannels[1], imgtmp);
            min(imgchannels[2], imgtmp, res);
        }
        else
        {
            //bright channel: per-pixel maximum across the colour channels
            max(imgchannels[0], imgchannels[1], imgtmp);
            max(imgchannels[2], imgtmp, res);
        }
    }
    else res = img;
    return res;
}

Mat Dehazer::AcEstimation(Mat img, Mat imgdark)
{
    int rows = img.rows, cols = img.cols;
    Mat Ac(rows, cols, img.type());
    int channels = img.channels();
    double maxd = 0.0;
    vector<Mat> imgchannels(channels);
    cv::split(img, imgchannels);
    minMaxLoc(imgdark, NULL, &maxd, NULL, NULL);
    Mat Jdark = imgdark/maxd;
    //average the source pixels at the brightest dark-channel positions
    int npix = (int)floor(rows*cols/2000);
    Point maxpos;
    Vec3d Ach(0,0,0);
    for(int i = 0; i < npix; i++)
    {
        minMaxLoc(Jdark, NULL, NULL, NULL, &maxpos);
        Ach += img.at<Vec3d>(maxpos);
        Jdark.at<double>(maxpos) = 0;
    }
    Ach = Ach/npix;
    for(int i = 0; i < channels; i++)
        imgchannels[i] = Ach.val[i];
    cv::merge(imgchannels, Ac);
    return Ac;
}

void Dehazer::saveimg()
{
    imwrite(outputimg, this->timg);
}

Mat Dehazer::getOrImg()
{
    return this->orimg;
}

Mat Dehazer::getDehazedImg()
{
    return this->timg;
}

imgdetails Dehazer::getImgDetails()
{
    imgdetails details;
    details.height = orimg.rows;
    details.width = orimg.cols;
    details.path = imagepath;
    ostringstream oss;
    oss << details.width << "X" << details.height;
    details.resolution = oss.str();
    return details;
}

Mat Dehazer::AdaptiveHistogramEqualization(Mat img)
{
    Mat res(img.rows, img.cols, img.type());
    Mat hsv(img.rows, img.cols, img.type());
    //Convert the image to HSV
    cv::cvtColor(img, hsv, CV_BGR2HSV);
    //Extract the V channel of the HSV image
    vector<Mat> hsvchannels(hsv.channels());
    cv::split(hsv, hsvchannels);
    //Apply CLAHE to the V channel
    Ptr<CLAHE> clahe = createCLAHE();
    Mat vclahe(img.rows, img.cols, hsvchannels[2].type());
    clahe->apply(hsvchannels[2], vclahe);
    hsvchannels[2] = vclahe;
    //Merge the HSV image back
    cv::merge(hsvchannels, hsv);
    //Convert the new HSV image back to BGR space
    cv::cvtColor(hsv, res, CV_HSV2BGR);
    //Smooth the image to avoid bringing out noise
    medianBlur(res, res, 3);
    return res;
}

Mat Dehazer::HistogramEqualization(Mat img)
{
    Mat res(img.rows, img.cols, img.type());
    Mat hsv(img.rows, img.cols, img.type());
    //Convert the image to HSV
    cvtColor(img, hsv, CV_BGR2HSV);
    //Extract the V channel of the HSV image
    vector<Mat> hsvchannels(hsv.channels());
    cv::split(hsv, hsvchannels);
    //Build the histogram of the V channel
    int hist[256];
    for(int i = 0; i < 256; i++)
        hist[i] = 0;
    for(int x = 0; x < img.cols; x++)
    {
        for(int y = 0; y < img.rows; y++)
        {
            Point p = Point(x,y);
            int val = hsvchannels[2].at<uchar>(p);
            if(val > 255) val = 255;
            else if(val < 0) val = 0;
            hist[val]++;
        }
    }
    //Accumulate the cumulative distribution function
    int cdf[256];
    cdf[0] = hist[0];
    for(int i = 1; i < 256; i++)
        cdf[i] = cdf[i-1] + hist[i];
    //Remap each value through the normalised CDF
    for(int x = 0; x < img.cols; x++)
    {
        for(int y = 0; y < img.rows; y++)
        {
            Point p = Point(x,y);
            int val = hsvchannels[2].at<uchar>(p);
            if(val > 255) val = 255;
            else if(val < 0) val = 0;
            if(val != 0)
            {
                int nval = (int)floor(((cdf[val] - cdf[0])/((double)(cdf[255] - cdf[0])))*255 + 0.5);
                hsvchannels[2].at<uchar>(p) = nval;
            }
        }
    }
    //Merge the HSV image back
    merge(hsvchannels, hsv);
    //Convert the new HSV image back to BGR space
    cvtColor(hsv, res, CV_HSV2BGR);
    //Smooth the image to avoid bringing out noise
    medianBlur(res, res, 3);
    return res;
}

void Dehazer::DarkChannel()
{
    int rows = orimg.rows, cols = orimg.cols;
    int channels = orimg.channels();
    vector<Mat> imgchannels(channels);
    cv::split(orimg, imgchannels);
    double maxd = 0.0;
    //Compute the dark-channel prior
    Mat imgdark = darkbrightchannelprior(orimg, true);
    //Estimate the airlight
    imgdark = imgdark.mul(abs(imgdark - 1), 1.0);
    Mat Ac = AcEstimation(orimg, imgdark);
    //Compute the transmission
    Mat dAc = darkbrightchannelprior(Ac, true);
    Mat T = 0.95*imgdark;
    T = 1-T;
    T = darkbrightchannelprior(T, true);
    max(T, 0.1, T);
    imgchannels[0] = T;
    imgchannels[1] = T;
    imgchannels[2] = T;
    cv::merge(imgchannels, T);
    //Recover the radiance
    Mat wimg = ((orimg - Ac)/T) + Ac;
    wimg.convertTo(timg, CV_8UC3, 255);
    //Contrast enhancement
    switch(tech)
    {
    case 0:
        timg = AdaptiveHistogramEqualization(timg);
        break;
    case 1:
        timg = HistogramEqualization(timg);
        break;
    }
}

int Dehazer::dehaze()
{
    switch(this->tech)
    {
    case 0:
    case 1:
        this->DarkChannel();
        break;
    default:
        cout<<"Unimplemented or unrecognised method! Available 1 -> Dark Channel Prior!"<<endl;
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}

Video Class: Manipulates video files or stream


Class header
#ifndef VIDEO_H_INCLUDED
#define VIDEO_H_INCLUDED
#include "OpenCvCommon.h"
#include <iostream>
using namespace cv;
using namespace std;

class Video
{
public:
    Video(string vpath, string opath, string vname, int method);
    int ProcessVideo();
private:
    string vidpath;
    string vidname;
    string outpath;
    int method;
    VideoCapture orvid;
    VideoWriter procvid;
};
#endif


Class Implementation
#include "Video.h"
#include "Dehazer.h"

Video::Video(string vpath, string opath, string vname, int method)
{
    this->vidname = vname;
    this->vidpath = vpath;
    this->outpath = opath;
    this->method = method;
    this->orvid = VideoCapture();
    this->procvid = VideoWriter();
}

int Video::ProcessVideo()
{
    int res = EXIT_SUCCESS;
    orvid.open(vidpath);
    if(orvid.isOpened())
    {
        int fcc = static_cast<int>(orvid.get(CV_CAP_PROP_FOURCC));
        double fps = orvid.get(CV_CAP_PROP_FPS);
        Size S((int)orvid.get(CV_CAP_PROP_FRAME_WIDTH),
               (int)orvid.get(CV_CAP_PROP_FRAME_HEIGHT));
        procvid.open(outpath, fcc, fps, S, true);
        if(procvid.isOpened())
        {
            namedWindow("original stream");
            namedWindow("dehazed stream");
            int frames = (int)orvid.get(CV_CAP_PROP_FRAME_COUNT);
            Mat frame;
            Mat Ac;
            for(int i = 0; i < frames; i++)
            {
                int pos = (int)orvid.get(CV_CAP_PROP_POS_FRAMES);
                int delay = 1000/(int)orvid.get(CV_CAP_PROP_FPS);
                if(orvid.read(frame))
                {
                    imshow("original stream", frame);
                    //note: uses a frame-based constructor overload not shown in the header above
                    Dehazer d = Dehazer(frame, method);
                    int t = d.dehaze();
                    if(t==EXIT_SUCCESS)
                    {
                        Mat resimg = d.getDehazedImg();
                        imshow("dehazed stream", resimg);
                        procvid << resimg;
                    }
                    else res = EXIT_FAILURE;
                    waitKey(delay);
                }
            }
            procvid.release();
        }
        else
        {
            cout<<"Failure to open video file: "<<this->outpath<<endl;
            res = EXIT_FAILURE;
        }
        orvid.release();
    }
    else
    {
        cout<<"Failure to open video file: "<<this->vidpath<<endl;
        res = EXIT_FAILURE;
    }
    destroyAllWindows();
    return res;
}

Common Include files:


Common.h
#ifndef COMMON_H_INCLUDED
#define COMMON_H_INCLUDED
#include <iostream>
using namespace std;
#endif

OpenCvCommon.h
#ifndef OPENCVCOMMON_H_INCLUDED
#define OPENCVCOMMON_H_INCLUDED
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
using namespace cv;
#endif


DehazingProcessingToolbox Main (DPT.cpp):
#include "Dehazer.h"
#include <time.h>
#include <chrono>
using namespace chrono;

void LogDetails(imgdetails details, double lapse)
{
    time_t now;
    time(&now);
    FILE* log;
    fopen_s(&log, "log.txt", "a+");
    if(log != NULL)
    {
        fprintf_s(log, "%s", ctime(&now));
        fprintf_s(log, "image: \n");
        fputs(details.path.c_str(), log);
        fprintf_s(log, "\nResolution: %s\n", details.resolution.c_str());
        fprintf_s(log, "Width: %d\nHeight: %d\n", details.width, details.height);
        fprintf_s(log, "Time elapsed: %f ms\n", lapse);
        fprintf_s(log, "\n\n");
        fclose(log);
    }
}

int main(int argc, char* argv[])
{
    string impath, imout;
    int method;
#if _DEBUG
    cout<<"Welcome to DehazingProcessingToolbox (Debug 1.0 version)"<<endl;
#else
    cout<<"Welcome to DehazingProcessingToolbox (Release 1.0 version)"<<endl;
#endif
    cout<<"Author: Tchaka Kevin"<<endl;
    cout<<endl;
    if(argc==7)
    {
        for(int i = 0; i < argc; i++)
        {
            string arg = argv[i];
            if(arg.compare("-i")==0||arg.compare("-I")==0)
                impath = argv[i+1];
            else if(arg.compare("-o")==0||arg.compare("-O")==0)
                imout = argv[i+1];
            else if(arg.compare("-m")==0||arg.compare("-M")==0)
                method = atoi(argv[i+1]);
        }
        //Initialize the dehazer object
        Dehazer dpt(impath, imout, method);
        //Start timer
        typedef high_resolution_clock Clock;
        Clock::time_point start = Clock::now();
        //Process the image
        dpt.dehaze();
        //End timer
        Clock::time_point end = Clock::now();
        milliseconds lapse = duration_cast<milliseconds>(end-start);
        //Record the details of the operation
        LogDetails(dpt.getImgDetails(), lapse.count());
        //Write the output image
        dpt.saveimg();
    }
    else
    {
        cout<<"Wrong arguments number, syntax is \"DehazingProcessingToolbox.exe -i imagepath -m dehazingmethod(integer) -o outputimage\""<<endl;
        cout<<"Number of arguments: "<<argc<<endl;
        cout<<"Arguments: "<<endl;
        for(int i = 0; i < argc; i++)
            cout<<argv[i]<<endl;
    }
    system("pause");
    return EXIT_SUCCESS;
}


DehazerRealTest Main (DatDoor.cpp):
#include "Common.h"
#include "Video.h"

int main(int argc, char* argv[])
{
    string vpath, vout, vname;
    int method;
#if _DEBUG
    cout<<"Welcome to DehazerRealTest (Debug 1.0 version)"<<endl;
#else
    cout<<"Welcome to DehazerRealTest (Release 1.0 version)"<<endl;
#endif
    cout<<"Author: Tchaks"<<endl;
    cout<<endl;
    if(argc==9)
    {
        for(int i = 0; i < argc; i++)
        {
            string arg = argv[i];
            if(arg.compare("-v")==0||arg.compare("-V")==0)
                vpath = argv[i+1];
            else if(arg.compare("-o")==0||arg.compare("-O")==0)
                vout = argv[i+1];
            else if(arg.compare("-n")==0||arg.compare("-N")==0)
                vname = argv[i+1];
            else if(arg.compare("-m")==0||arg.compare("-M")==0)
                method = atoi(argv[i+1]);
        }
        Video vid(vpath, vout, vname, method);
        vid.ProcessVideo();
        system("pause");
        return EXIT_SUCCESS;
    }
    else
    {
        cout<<"Wrong arguments number, syntax is \"VideoDehazer.exe -v videopath -m dehazingmethod(integer) -n videoname -o outputvideo\""<<endl;
        cout<<"Number of arguments: "<<argc<<endl;
        cout<<"Arguments: "<<endl;
        for(int i = 0; i < argc; i++)
            cout<<argv[i]<<endl;
        system("pause");
        return EXIT_FAILURE;
    }
}


VideoDehazer Main (videoDehazer.cpp):
#include "Video.h"

int main(int argc, char* argv[])
{
    string vpath, vout, vname;
    int method;
#if _DEBUG
    cout<<"Welcome to VideoDehazer (Debug 1.0 version)"<<endl;
#else
    cout<<"Welcome to VideoDehazer (Release 1.0 version)"<<endl;
#endif
    cout<<"Author: Tchaks"<<endl;
    cout<<endl;
    if(argc==9)
    {
        for(int i = 0; i < argc; i++)
        {
            string arg = argv[i];
            if(arg.compare("-v")==0||arg.compare("-V")==0)
                vpath = argv[i+1];
            else if(arg.compare("-o")==0||arg.compare("-O")==0)
                vout = argv[i+1];
            else if(arg.compare("-n")==0||arg.compare("-N")==0)
                vname = argv[i+1];
            else if(arg.compare("-m")==0||arg.compare("-M")==0)
                method = atoi(argv[i+1]);
        }
        Video vid(vpath, vout, vname, method);
        vid.ProcessVideo();
        system("pause");
        return EXIT_SUCCESS;
    }
    else
    {
        cout<<"Wrong arguments number, syntax is \"VideoDehazer.exe -v videopath -m dehazingmethod(integer) -n videoname -o outputvideo\""<<endl;
        cout<<"Number of arguments: "<<argc<<endl;
        cout<<"Arguments: "<<endl;
        for(int i = 0; i < argc; i++)
            cout<<argv[i]<<endl;
        system("pause");
        return EXIT_FAILURE;
    }
}


Contact Details:
Tchaka Kevin
Student number: 14052924
Department: Computer Science, University College London
Email: kevin.tchaka.14@ucl.ac.uk

Supervisor: Danail Stoyanov, PhD
UCL Centre for Medical Image Computing
Department: Computer Science, University College London
Email: danail.stoyanov@ucl.ac.uk

