Implementation of Background Subtraction

Using Fuzzy Color for Surveillance

S.Kokila1, M.Pandiyarajan2, #, Y.B.Santhosh kumar3, S.Saravanan3, B.Thirumurugan3

Assistant professor1, PG Scholar2, UG Scholars3

Department of ECE, Gojan School of Business and Technology

Chennai, Tamil Nadu

#pandiyarajan14@gmail.com

Abstract—Background subtraction is a popular approach for detecting moving objects in a scene observed by a static camera. Previous methods perform background subtraction in a way that does not sufficiently suppress noise in the input image or video. In this paper we introduce a method for analyzing theft or other surveillance events in a given video: the background is eliminated so that the object of interest can be detected reliably. Specifically, we propose to adopt a clustering-based feature, the fuzzy color histogram (FCH), which greatly attenuates color variations generated by background motion while still highlighting moving objects.

Index Terms— Background subtraction, clustering-based feature, fuzzy color histogram, structured motion patterns.

I. Introduction

With increasing interest in high-level safety and security, smart video surveillance systems, which enable advanced operations such as object tracking and behavior understanding, have been in critical demand. Background subtraction is a computational vision process for extracting foreground objects from a particular scene. A foreground object can be described as an object of attention; isolating it reduces the amount of data to be processed and provides important information for the task under consideration. Often, the foreground object can be thought of as a coherently moving object in the scene. We must emphasize the word coherent here because if a person is walking in front of moving leaves, the person forms the foreground object, while the leaves, though in motion, are considered background due to their repetitive behavior. In some cases, the distance of a moving object also forms a basis for considering it background: e.g., if one person in a scene is close to the camera while another person is far away in the background, the nearby person is considered foreground while the distant person is ignored due to their small size and the little information they provide. Identifying moving objects in a video sequence is a fundamental and critical task in many computer-vision applications. A common approach is to perform background subtraction, which identifies moving objects as the portion of a video frame that differs from the background model.

II. FUZZY COLOR HISTOGRAM AND ITS APPLICATION TO BACKGROUND SUBTRACTION

In this paper, the color histogram is viewed as a color distribution from a probability viewpoint. Given a color space containing n color bins, the color histogram of an image I containing N pixels is represented as H(I) = [h1, h2, ..., hn], where h_i is the probability of a pixel in the image belonging to the ith color bin, and N·h_i is the total number of pixels in the ith color bin. According to the theorem of total probability, h_i can be defined as follows:

h_i = Σ_{j=1}^{N} P_{i|j} P_j = (1/N) Σ_{j=1}^{N} P_{i|j} ----- (1)

where P_j is the probability of a pixel selected from image I being the jth pixel, which is 1/N, and P_{i|j} is the conditional probability of the selected jth pixel belonging to the ith color bin. In the context of the CCH, P_{i|j} is defined as

P_{i|j} = 1, if the jth pixel is quantized into the ith color bin; 0, otherwise ------ (2)

This definition leads to the boundary issue of the CCH: the histogram may undergo abrupt changes even though the color variations are actually small. This reveals why the CCH is sensitive to noisy interference such as illumination changes and quantization errors. The proposed FCH essentially modifies the probability P_{i|j} as follows. Instead of using the hard probability P_{i|j}, we consider each of the N pixels in image I as being related to all the color bins via a fuzzy-set membership function, such that the degree of "belongingness" or "association" of the jth pixel to the ith color bin is determined by distributing the membership value μ_ij of the jth pixel to the ith color bin.
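To make the boundary issue concrete, the following small sketch (an illustration of ours, not part of the original method) hard-quantizes two nearly identical images into a 4-bin CCH; because each value is assigned to exactly one bin, the two histograms end up completely disjoint:

```python
import numpy as np

def cch(values, n_bins):
    """Conventional color histogram: each value is hard-assigned
    to exactly one bin (the 0/1 conditional probability of (2))."""
    idx = np.clip((values * n_bins).astype(int), 0, n_bins - 1)
    return np.bincount(idx, minlength=n_bins) / values.size

# two "images" whose intensities differ by only 0.002, but which
# straddle the bin boundary at 0.25
a = np.full(100, 0.249)
b = np.full(100, 0.251)
ha, hb = cch(a, 4), cch(b, 4)
overlap = np.minimum(ha, hb).sum()   # 0.0: the two CCHs share no mass
```

This is exactly the sensitivity the FCH is designed to remove: with fuzzy memberships, both images would spread nearly identical mass over the two adjacent bins.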

2. A.DEFINITION (Fuzzy Color Histogram):

The fuzzy color histogram (FCH) of image I can be expressed as F(I) = [f1, f2, f3, ..., fn], where

f_i = Σ_{j=1}^{N} μ_ij P_j = (1/N) Σ_{j=1}^{N} μ_ij ----- (3)

P_j has been defined in (1), and μ_ij is the membership value of the jth pixel in the ith color bin. In contrast with the CCH, our FCH considers not only the similarity of different colors from different bins but also the dissimilarity of colors assigned to the same bin. Therefore, the FCH effectively alleviates sensitivity to noisy interference.
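As a minimal sketch of definition (3), the FCH is simply the per-bin average of the fuzzy memberships over all pixels; the toy membership matrix below is chosen by hand for illustration:

```python
import numpy as np

def fuzzy_color_histogram(mu):
    """FCH per (3): f_i = (1/N) * sum_j mu[j, i], where mu[j, i]
    is the fuzzy membership of pixel j in color bin i
    (each row sums to 1)."""
    return mu.mean(axis=0)

# 4 pixels, 2 color bins; hypothetical membership values
mu = np.array([[0.9, 0.1],
               [0.8, 0.2],
               [0.1, 0.9],
               [0.5, 0.5]])
fch = fuzzy_color_histogram(mu)   # [0.575, 0.425], sums to 1
```

Because every row of μ sums to 1, the resulting FCH is automatically a normalized distribution.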

2. B.FCH COMPUTING

Equation (3) gives the definition of the FCH, but it does not provide an applicable method to compute it. Given two colors c and c', Hafner et al. measure their perceptual similarity in terms of the Euclidean distance between c and c' represented in a chosen color space. However, this measurement does not consider the non-uniformity inherent in the color space representation. To accurately quantify the perceptual similarity between two colors recorded in a specific color space, the non-uniformity of that color space should be considered. For that reason, we choose the CIELAB color space, which is a perceptually uniform color space and has been increasingly exploited in many electronic color imaging systems.

Since the RGB color space is most commonly used for representing color images, we intuitively need to perform a nonlinear color space transformation from RGB to CIELAB pixel by pixel. Such a pixel-wise transformation is computationally intensive for the entire image. Moreover, to compute the FCH of a color image, we need to compute each pixel's membership values with respect to all available color bins. Such a direct approach is also not favourable because of its large computational load. To address these issues, we propose an efficient method to compute the FCH based on the fuzzy c-means (FCM) clustering algorithm.

First, we perform fine uniform quantization in the RGB color space by mapping all pixel colors to n' histogram bins. Here, the bin number n' is chosen large enough that the color difference between two adjacent bins is small. Then, we transform the colors from RGB to the CIELAB color space. Finally, we classify these colors in CIELAB color space into n clusters using the FCM clustering technique (usually n << n', hence a coarse quantization process), with each cluster representing an FCH bin. Through these steps, a pixel's membership value to an FCH bin can be represented by the corresponding fine color bin's membership value to the coarse color bin. Note that we only need to compute these membership values once, and they are stored as an n x n' membership matrix M = [m_ij]. Each element m_ij of M is the membership value of the jth fine color bin distributed to the ith coarse color bin. Thus, the FCH of an image can be directly computed from its CCH without computing membership values for each pixel. That is, given an n'-bin CCH H (n' x 1), the corresponding n-bin FCH F (n x 1) can be computed as follows:

F = M H ---- (4)

where the membership matrix M is pre-computed only once and can be used to generate the FCH for each database image. We employ the FCM clustering algorithm not only to classify the fine colors into n clusters but also to obtain the membership matrix M at the same time. For the latter, we explain how it works in more detail as follows.
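Before turning to the clustering details, equation (4) itself is just a matrix–vector product. A sketch with a randomly generated stand-in membership matrix (hypothetical values; in the real method M comes from FCM clustering, but any column-normalized matrix illustrates the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)
n_coarse, n_fine = 8, 64                 # n FCH bins, n' CCH bins

# stand-in membership matrix M (n x n'); column j distributes the
# jth fine bin's mass over the coarse bins, so each column sums to 1
M = rng.random((n_coarse, n_fine))
M /= M.sum(axis=0, keepdims=True)

H = rng.random(n_fine)                   # a fine n'-bin CCH
H /= H.sum()                             # normalized to sum to 1

F = M @ H                                # equation (4): the n-bin FCH
# F inherits normalization: sum(F) == sum(H) == 1
```

Because M is fixed, building the FCH for every new frame costs only one small matrix multiply on top of the (cheap) CCH computation.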

FCM is an unsupervised clustering algorithm that has been applied successfully to a number of problems involving feature analysis, clustering, and classifier design. FCM minimizes an objective function, the weighted sum of squared errors within each group, defined as follows:

J_m(U, V) = Σ_{k=1}^{n} Σ_{i=1}^{c} (u_ik)^m ||x_k − v_i||_A^2 ----- (5)

where V = [v1, v2, ..., vc]^T is a vector of unknown cluster prototypes. The value u_ik represents the membership of the data point x_k from the set X = {x1, x2, ..., xn} with respect to the ith cluster. The inner-product norm induced by a norm matrix A defines a measure of similarity between a data point and the cluster prototypes. A nondegenerate fuzzy c-partition of X is conveniently represented by a matrix U = [u_ik]. The weighting exponent m controls the extent of membership shared by the c clusters.

It has been shown by Bezdek [20] that if d_ik = ||x_k − v_i||_A > 0 for all i and k and m > 1, then J_m can be minimized at (U, V), where

v_i = Σ_{k=1}^{n} (u_ik)^m x_k / Σ_{k=1}^{n} (u_ik)^m --- (6)

u_ik = [ Σ_{j=1}^{c} (d_ik / d_jk)^{2/(m−1)} ]^{−1} --- (7)

Equations (6) and (7) cannot be solved analytically, but an approximate solution can be obtained by performing the following iterative procedure.

III. ALGORITHM (Fuzzy c-Means)

Step-1: Input the number of clusters c, the weighting exponent m, and the error tolerance ε.

Step-2: Initialize the cluster centers v_i, for 1 ≤ i ≤ c.

Step-3: Input the data X = {x1, x2, ..., xn}.

Step-4: Calculate the c cluster centers {v_i^(l)} by (6).

Step-5: Update U^(l) by (7).

Step-6: If ||U^(l) − U^(l−1)|| > ε, set l = l + 1 and go to Step-4; otherwise, stop.
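The steps above can be sketched as a compact implementation (a sketch of ours, assuming the Euclidean norm, i.e., A = I, and random initialization of U):

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Fuzzy c-means. X: (n, d) data. Returns cluster centers V (c, d)
    and membership matrix U (c, n), iterating (6) and (7) until
    ||U(l) - U(l-1)|| <= eps."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)              # columns sum to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)        # (6)
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        inv = np.fmax(d, 1e-12) ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0, keepdims=True)        # (7)
        if np.linalg.norm(U_new - U) <= eps:
            return V, U_new
        U = U_new
    return V, U

# two well-separated 1-D clusters
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
V, U = fcm(X, c=2)
```

Note that (7) is rewritten as inv / Σ inv with inv = d^(−2/(m−1)), which is the same expression but avoids forming all pairwise distance ratios.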

In our work, we need to classify the n' fine colors in the CCH into c clusters for the FCH. Due to the perceptual uniformity of the CIELAB color space, the inner-product norm can simply be replaced by ||x_j − v_i||, the Euclidean distance between the fine color x_j and the cluster center v_i. The fuzzy clustering result of the FCM algorithm is represented by the matrix U, where u_ij is the grade of membership of color x_j with respect to cluster center v_i. Thus, the obtained matrix U can be viewed as the desired membership matrix M for computing the FCH. Moreover, the weighting exponent m in the FCM algorithm controls the extent or "spread" of membership shared among the fuzzy clusters. Therefore, we can use the parameter m to control the extent of similarity sharing among different color bins in the FCH. The membership matrix can thus be adjusted for different image retrieval applications. In general, if higher noisy interference is involved, a larger value of m should be used.

IV. FUZZY MEMBERSHIP BASED LOCAL HISTOGRAM FEATURES

The idea of using the FCH in a local manner to obtain a reliable background model in dynamic texture scenes is motivated by the observation that background motions do not severely alter the scene structure, even when they are widely distributed or occur abruptly in the spatiotemporal domain. Color variations caused by such irrelevant motions can thus be efficiently attenuated by considering local statistics defined in a fuzzy manner, i.e., by regarding the effect of each pixel value on all the color attributes rather than on only one matched color in the local region (see Fig. 1). Therefore, fuzzy membership based local histograms pave the way for robust background subtraction in dynamic texture scenes. In this subsection, we summarize the FCH model [1] and analyze its properties relevant to background subtraction in dynamic texture scenes.

First of all, from a probability viewpoint, the conventional colour histogram (CCH) can be regarded as a probability density function. Thus, the probability for pixels in the image to belong to the ith colour bin w_i can be defined as follows:

h_i = Σ_{j=1}^{N} P(w_i | x_j) P(x_j) = (1/N) Σ_{j=1}^{N} P(w_i | x_j) ----- (1)

where N denotes the total number of pixels and P(x_j) is the probability of the colour features selected from a given image being those of the jth pixel, i.e., 1/N. The conditional probability P(w_i | x_j) is 1 if the colour feature of the jth pixel is quantized into the ith colour bin and 0 otherwise. Since each quantized color feature is assumed to fall into exactly one color bin in the CCH, the histogram may change abruptly even though the color variations are actually small. In contrast, the FCH utilizes fuzzy membership [13] to relax such a strict condition. More specifically, in the FCH the conditional probability P(w_i | x_j) of (1) represents the degree of belongingness of the color features of the jth pixel to the ith color bin (i.e., the fuzzy membership u_ij), which makes it robust to noise interference and quantization error.

Now, a rather critical issue is how to efficiently compute such membership values. In this work, we employ a color quantization scheme based on the fuzzy c-means (FCM) clustering technique, as introduced in [12]: First, the RGB color space is uniformly and finely quantized into m histogram bins (e.g., 4096), and these are subsequently converted into the CIELab color space. Note that the CIELab color space is adopted to correctly quantify perceptual color similarity based on uniform distances. Finally, we classify these m colors in the CIELab color space into c clusters (each cluster represents an individual FCH bin) using the FCM clustering technique (m >> c). That is, by conducting FCM clustering, we can obtain the membership values of a given pixel to all FCH bins. More specifically, the FCM algorithm finds a minimum of a heuristic global cost function defined as follows [2]:

J = Σ_{j=1}^{m} Σ_{i=1}^{c} P(w_i | x_j)^b ||x_j − v_i||^2 ----- (2)

where x_j and v_i denote the feature vector (e.g., the values of each colour channel) and the ith cluster center, respectively. b is a constant that controls the degree of blending of the different clusters and is generally set to 2. We then solve the equations ∂J/∂v_i = 0 and ∂J/∂P_j = 0, where P_j denotes the prior probability P(w_j), at the minimum of the cost function. These lead to the solutions given as

v_i = Σ_{j} P(w_i | x_j)^b x_j / Σ_{j} P(w_i | x_j)^b ----- (3)

P(w_i | x_j) = (1/d_ij)^{1/(b−1)} / Σ_{l=1}^{c} (1/d_lj)^{1/(b−1)} ----- (4)

where d_ij = ||x_j − v_i||^2. Since (3) and (4) rarely have analytic solutions, the cluster centers and membership values are estimated iteratively according to [3]. It is worth noting that the membership values derived from (4) only need to be computed once and can be stored as a membership matrix in advance. Therefore, we can easily build the FCH for each incoming video frame by directly referring to the stored matrix, without computing membership values for each pixel. For robust background subtraction in dynamic texture scenes, we finally define the local FCH feature vector at the jth pixel position of the kth video frame as follows:

h_i(j, k) = (1/|W_{j,k}|) Σ_{q ∈ W_{j,k}} u_iq ----- (5)

where W_{j,k} denotes the set of neighboring pixels centered at the jth pixel position in the kth frame, and u_iq denotes the membership value obtained from (4), indicating the belongingness of the color feature computed at the qth pixel position to the ith color bin, as mentioned. By using the difference of our local features defined in (5) between consecutive frames, we can build a reliable background model. Fig. 1 shows the robustness of the local FCH to dynamic textures compared with the CCH. As can be seen, local CCHs obtained from the same pixel position of two video frames are quite different due to strongly waving leaves. In contrast, the FCH provides relatively consistent results even though dynamic textures are widely distributed in the background. Therefore, our local FCH features are very useful for modeling the background in dynamic texture scenes. In the following, we explain the updating scheme for background subtraction based on the similarity measure of local FCH features.
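A sketch of the local feature in (5): given a per-pixel membership map (as produced by the stored-matrix lookup described above), the local FCH at each position is the average membership vector over its neighborhood window. The border handling (clipping the window at the image edge) is an assumption of this sketch:

```python
import numpy as np

def local_fch(mu, radius=1):
    """mu: (H, W, c) per-pixel fuzzy memberships, each pixel's vector
    summing to 1 over the c bins. Returns an (H, W, c) map whose
    [y, x] entry is the local FCH of (5), averaged over the window
    W_{y,x} (clipped at the image border)."""
    H, W, c = mu.shape
    out = np.empty_like(mu)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            out[y, x] = mu[y0:y1, x0:x1].reshape(-1, c).mean(axis=0)
    return out

rng = np.random.default_rng(0)
mu = rng.random((5, 6, 3))
mu /= mu.sum(axis=-1, keepdims=True)   # normalize memberships
h = local_fch(mu)                      # each h[y, x] still sums to 1
```

Averaging preserves normalization, so every local FCH remains a valid distribution and can be compared directly with histogram intersection.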

V. BACKGROUND SUBTRACTION WITH LOCAL FCH FEATURES

In this subsection, we describe the procedure of background subtraction based on our local FCH features. To classify a given pixel in the current frame as either background or moving object, we first compare the observed FCH vector with the model FCH vector renewed by the online update, as expressed in (6):

B_j(k) = 1, if S(h(j, k), h_B(j, k)) > τ; 0, otherwise ----- (6)

where B_j(k) = 1 denotes that the jth pixel in the kth video frame is determined to be background, whereas the corresponding pixel belongs to a moving object if B_j(k) = 0. τ is a threshold value ranging from 0 to 1. The similarity measure S used in (6), which adopts the normalized histogram intersection for simple computation, is defined as follows:

S(h(j, k), h_B(j, k)) = Σ_{i=1}^{c} min(h_i(j, k), h_B,i(j, k)) ----- (7)

where h_B(j, k) denotes the background model of the jth pixel position in the kth video frame, defined in (8). Note that any other metric (e.g., cosine similarity, chi-square, etc.) can be employed for this similarity measure without a significant performance drop. In order to maintain a reliable background model in dynamic texture scenes, we need to update it at each pixel position in an online manner as follows:

h_B(j, k+1) = (1 − α) h_B(j, k) + α h(j, k) ----- (8)

where α ∈ [0, 1] is the learning rate. Note that a larger α means that the currently observed local FCH features more strongly affect the background model. In this way, the background model is adaptively updated. For the sake of completeness, the main steps of the proposed method are summarized in Algorithm 1.
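The classification rule (6), the intersection similarity (7), and the online update (8) combine into one short per-frame step. Whether the model is updated at every pixel or only at background pixels is not fully specified above, so this sketch (with hypothetical τ and α values) updates all pixels:

```python
import numpy as np

def bg_step(model, observed, tau=0.7, alpha=0.05):
    """model, observed: (H, W, c) local FCH maps, each pixel's
    histogram summing to 1. Returns the background mask B of (6)
    and the model updated by (8)."""
    sim = np.minimum(model, observed).sum(axis=-1)         # (7)
    B = sim > tau                                          # (6): True = background
    new_model = (1.0 - alpha) * model + alpha * observed   # (8)
    return B, new_model

model = np.array([[[0.5, 0.5]]])        # 1x1 image, 2 FCH bins
still = model.copy()                    # unchanged pixel
moving = np.array([[[1.0, 0.0]]])       # histogram shifted by an object
B1, m1 = bg_step(model, still)          # similarity 1.0 -> background
B2, m2 = bg_step(model, moving)         # similarity 0.5 -> foreground
```

Since both inputs to (8) are normalized histograms, the convex blend keeps the updated model normalized as well.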

VI. MORPHOLOGICAL FILTERING

Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image. Morphological operations rely only on the relative ordering of pixel values, not on their numerical values, and are therefore especially suited to the processing of binary images. Morphological operations can also be applied to greyscale images whose light transfer functions are unknown, so that their absolute pixel values are of no or minor interest.

Morphological techniques probe an image with a small shape or template called a structuring element. The structuring element is positioned at all possible locations in the image and it is compared with the corresponding neighbourhood of pixels. Some operations test whether the element "fits" within the neighbourhood, while others test whether it "hits" or intersects the neighbourhood:


Figure1: Probing of an image with a structuring element.

A morphological operation on a binary image creates a new binary image in which the pixel has a non-zero value only if the test is successful at that location in the input image.

The structuring element is a small binary image, i.e. a small matrix of pixels, each with a value of zero or one:

The matrix dimensions specify the size of the structuring element.

The pattern of ones and zeros specifies the shape of the structuring element.

An origin of the structuring element is usually one of its pixels, although generally the origin can be outside the structuring element.


Figure2: Examples of simple structuring elements.

A common practice is to have odd dimensions of the structuring matrix and the origin defined as the centre of the matrix. Structuring elements play the same role in morphological image processing as convolution kernels in linear image filtering.

When a structuring element is placed in a binary image, each of its pixels is associated with the corresponding pixel of the neighbourhood under the structuring element. The structuring element is said to fit the image if, for each of its pixels set to 1, the corresponding image pixel is also 1. Similarly, a structuring element is said to hit, or intersect, an image if, at least for one of its pixels set to 1 the corresponding image pixel is also 1.


Figure3: Fitting and hitting of a binary image with structuring elements s1 and s2.

Zero-valued pixels of the structuring element are ignored, i.e. indicate points where the corresponding image value is irrelevant.
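The fit and hit tests described above can each be written in a couple of lines; this small sketch (the names `fits`/`hits` are our own) operates on a neighborhood already extracted from the image:

```python
import numpy as np

def fits(window, s):
    """True if every 1-pixel of structuring element s lies on a
    1-pixel of the image neighborhood (0-pixels of s are ignored)."""
    return bool(np.all(window[s == 1] == 1))

def hits(window, s):
    """True if at least one 1-pixel of s lies on a 1-pixel of the
    neighborhood."""
    return bool(np.any(window[s == 1] == 1))

s = np.array([[0, 1, 0],
              [1, 1, 1],
              [0, 1, 0]])
w = np.array([[0, 1, 1],
              [1, 1, 1],
              [0, 1, 0]])
# s fits w (all five cross pixels of s lie on 1s) and therefore also hits it
```

Note how the indexing `window[s == 1]` implements exactly the rule that zero-valued pixels of the structuring element are ignored.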

6. A.FUNDAMENTAL OPERATIONS

More formal descriptions and examples of how basic morphological operations work are given in the Hypermedia Image Processing Reference (HIPR) developed by Dr. R. Fisher et al. at the Department of Artificial Intelligence in the University of Edinburgh, Scotland, UK.

6. B.EROSION AND DILATION

The erosion of a binary image f by a structuring element s (denoted f ⊖ s) produces a new binary image g = f ⊖ s with ones in all locations (x,y) of the structuring element's origin at which s fits the input image f, i.e. g(x,y) = 1 if s fits f and 0 otherwise, repeating for all pixel coordinates (x,y).

Erosion with small (e.g. 2×2 - 5×5) square structuring elements shrinks an image by stripping away a layer of pixels from both the inner and outer boundaries of regions. The holes and gaps between different regions become larger, and small details are eliminated.

Larger structuring elements have a more pronounced effect, the result of erosion with a large structuring element being similar to the result obtained by iterated erosion using a smaller structuring element of the same shape. If s1 and s2 are a pair of structuring elements identical in shape, with s2 twice the size of s1, then

f ⊖ s2 ≈ (f ⊖ s1) ⊖ s1.


Figure4: Erosion: a 3×3 square structuring element

Erosion removes small-scale details from a binary image but simultaneously reduces the size of the regions of interest, too. By subtracting the eroded image from the original image, the boundaries of each region can be found: b = f − (f ⊖ s), where f is an image of the regions, s is a 3×3 structuring element, and b is an image of the region boundaries.

The dilation of an image f by a structuring element s (denoted f ⊕ s) produces a new binary image g = f ⊕ s with ones in all locations (x,y) of the structuring element's origin at which s hits the input image f, i.e. g(x,y) = 1 if s hits f and 0 otherwise, repeating for all pixel coordinates (x,y). Dilation has the opposite effect to erosion: it adds a layer of pixels to both the inner and outer boundaries of regions. The holes enclosed by a single region and the gaps between different regions become smaller, and small intrusions into the boundaries of a region are filled in:

Figure5: Dilation: a 3×3 square structuring element
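Both operators follow directly from the fit and hit tests; the following sketch (origin assumed at the center of s, zero padding outside the image) also demonstrates the boundary extraction b = f − (f ⊖ s) mentioned above:

```python
import numpy as np

def _windows(f, s):
    """Yield (y, x, neighborhood) for every pixel of f, with the
    structuring element's origin at the center of s."""
    sh, sw = s.shape
    oy, ox = sh // 2, sw // 2
    pad = np.pad(f, ((oy, sh - 1 - oy), (ox, sw - 1 - ox)))
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            yield y, x, pad[y:y + sh, x:x + sw]

def erode(f, s):
    """g(x, y) = 1 where s fits f."""
    g = np.zeros_like(f)
    for y, x, win in _windows(f, s):
        g[y, x] = int(np.all(win[s == 1] == 1))
    return g

def dilate(f, s):
    """g(x, y) = 1 where s hits f."""
    g = np.zeros_like(f)
    for y, x, win in _windows(f, s):
        g[y, x] = int(np.any(win[s == 1] == 1))
    return g

f = np.zeros((5, 5), dtype=int)
f[1:4, 1:4] = 1                      # a 3x3 square of ones
s = np.ones((3, 3), dtype=int)
eroded = erode(f, s)                 # only the center pixel survives
boundary = f - eroded                # b = f - (f ⊖ s): the 8 edge pixels
```

In the background-subtraction pipeline, a small opening (erosion followed by dilation) of the binary mask removes isolated noise pixels left by dynamic textures.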

Results of dilation and erosion are influenced by both the size and the shape of the structuring element. Dilation and erosion are dual operations in that they have opposite effects. Let f^c denote the complement of an image f, i.e., the image produced by replacing 1 with 0 and vice versa. Formally, the duality is written as

(f ⊕ s)^c = f^c ⊖ s_rot

where s_rot is the structuring element s rotated by 180°. If a structuring element is symmetrical with respect to rotation, then s_rot does not differ from s.

By applying these morphological operations, we are able to reduce the noise in the background-subtracted video.

VII. IMPLEMENTATION

Figure6: Input Video

The image above shows a frame of the input video on which background subtraction is performed using the FCH. The output of the background subtraction, shown below, demonstrates that this is an effective method for surveillance.

Figure7: Output Video

VIII. CONCLUSION

It can be concluded that the proposed scheme for background subtraction using the fuzzy color histogram greatly helps in surveillance by eliminating the unwanted background and concentrating on the object of interest. Noise is greatly reduced in this system, making background subtraction with the fuzzy color histogram an effective approach for surveillance.

ACKNOWLEDGEMENT

The authors are thankful to Gojan School of Business and Technology for providing necessary support and facilities to carry out the project work.