Sobel Filter implementation in C

The algorithm we will look at in this tutorial is an edge detection algorithm based on the Sobel operator. It works by calculating the gradient of the image intensity at each point: the direction of the change from light to dark, and the magnitude of that change. The magnitude corresponds to how sharp the edge is.
The Sobel Operator

To calculate the gradient of each point in the image, the image is convolved with the Sobel Kernel. Convolution is done by moving the kernel across the image, one pixel at a time. At each pixel, the pixel and its neighbours are weighted by the corresponding value in the kernel, and summed to produce a new value. This operation is shown in the following diagram.
~~~~~~~~~~~~~~~~~~~            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| k00 | k10 | k20 |            | i00 | i10 | i20 | i30 | ..
|~~~~~+~~~~~+~~~~~+            |~~~~~+~~~~~+~~~~~+~~~~~+~~~
| k01 | k11 | k21 |            | i01 | i11 | i21 | i31 | ..
|~~~~~+~~~~~+~~~~~+            |~~~~~+~~~~~+~~~~~+~~~~~+~~~
| k02 | k12 | k22 |            | i02 | i12 | i22 | i32 | ..
|~~~~~+~~~~~+~~~~~+            |~~~~~+~~~~~+~~~~~+~~~~~+~~~
                               | i03 | i13 | i23 | i33 | ..
Applying the kernel to the     |~~~~~+~~~~~+~~~~~+~~~~~+~~~
first pixel gives us an        |  .. |  .. |  .. |  .. | .. 
output value for that
pixel

     o00 = k11 * i00 + k12 * i01 + k21 * i10 + k22 * i11
We treat it as though the kernel has been overlaid onto the image, with the centre pixel of the kernel aligned with the first pixel in the image. Then we multiply each entry in the kernel by the value beneath it, and sum them to produce a single output value for that pixel.
For the pixels on the boundary, we simply ignore any entries in the kernel that fall outside the image.
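To make the overlay concrete, here is a minimal C sketch of this convolution step. It assumes a grayscale image stored row-major in a flat array; convolve_at and its parameters are illustrative names, not part of the listing at the end of this post.

```c
/* Apply a 3x3 kernel centred on pixel (x, y) of a grayscale image.
   Kernel entries whose image coordinates fall outside the image are
   ignored, matching the boundary rule described above. */
static int convolve_at(const unsigned char *img, int width, int height,
                       int x, int y, const int kernel[3][3])
{
    int sum = 0;
    for (int j = -1; j <= 1; j++) {
        for (int i = -1; i <= 1; i++) {
            int px = x + i, py = y + j;
            if (px < 0 || px >= width || py < 0 || py >= height)
                continue;               /* off the edge: skip this entry */
            sum += kernel[j + 1][i + 1] * img[py * width + px];
        }
    }
    return sum;
}
```

With the centre of an all-ones kernel over the corner pixel of a 2x2 image, only the four in-bounds entries contribute, exactly as in the o00 formula above.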
Edge detection using the Sobel Operator applies two separate kernels to calculate the x and y gradients in the image. The length of this gradient is then calculated and normalised to produce a single intensity approximately equal to the sharpness of the edge at that position.
The kernels used for Sobel Edge Detection are shown below.
~~~~~~~~~~~~~
|-1 | 0 | 1 |
|~~~+~~~+~~~|
|-2 | 0 | 2 |
|~~~+~~~+~~~|
|-1 | 0 | 1 |
~~~~~~~~~~~~~
x gradient kernel

~~~~~~~~~~~~~
|-1 |-2 |-1 |
|~~~+~~~+~~~|
| 0 | 0 | 0 |
|~~~+~~~+~~~|
| 1 | 2 | 1 |
~~~~~~~~~~~~~
y gradient kernel


The Algorithm
Now the algorithm can be broken down into its constituent steps:
1. Iterate over every pixel in the image
2. Apply the x gradient kernel
3. Apply the y gradient kernel
4. Find the length of the gradient using Pythagoras' theorem
5. Normalise the gradient length to the range 0-255
6. Set the pixel to the new value
This illustrates the main steps, though it leaves out some specifics, such as what to do when we meet the boundary of the image.

We need one tweak for this to work for our edge detection. As we iterate over the image we will be changing pixel values, but when we move on to the next pixel, the just-changed value will still be within the area of influence of our kernel. We therefore need to make sure the image we read pixels from and the image we write pixels to are different, which we can do by creating an empty image and populating it with the new values.
img_edge = Image(img.width, img.height)
 
Now we will read pixels from the original image, but write the edge values to img_edge.
Before we start fiddling with the pixel values, we need to decide how to handle the image boundaries. There are a few ways to do this. One is to check whether each pixel is on the boundary of the image, and ignore the non-existent pixels on that side. Another is to wrap around, so that a non-existent pixel off the left edge maps to the pixel on the same row at the right edge. For simplicity's sake I am simply going to ignore the boundary pixels, starting one pixel in from each edge. To do this, change the iterators to start at 1, and finish at img.width - 1 and img.height - 1 respectively. Now we can safely index adjacent pixels without falling off the edge of the image.
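As a sanity check on those bounds: a loop written this way visits exactly (width - 2) * (height - 2) pixels. The helper below (count_interior is an illustrative name) just counts the visits to confirm that.

```c
/* Visit only interior pixels, so that indexing x-1, x+1, y-1 and y+1
   never falls outside the image. Returns how many pixels the loop
   touches, which should be (width - 2) * (height - 2). */
static int count_interior(int width, int height)
{
    int n = 0;
    for (int y = 1; y < height - 1; y++)
        for (int x = 1; x < width - 1; x++)
            n++;
    return n;
}
```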

Right, let's move on to applying the gradients. For the x gradient we need the columns of pixels to the left and right; for each of the 6 pixels we index, we add up the red, green and blue intensities, multiply the sum by the corresponding kernel entry, and accumulate it into Gx. We do the same for the y gradient. As Gx and Gy share some of the same pixels, it is best to apply the two kernels simultaneously.
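A sketch of that step in C, assuming the summed R+G+B intensities (0-765) have already been stored in a flat int array, and that the caller keeps (x, y) at least one pixel in from every edge. gradients_at, gx_k and gy_k are illustrative names.

```c
/* The two Sobel kernels, transcribed from the tables above */
static const int gx_k[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
static const int gy_k[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

/* Accumulate Gx and Gy at pixel (x, y) in one pass over the 3x3
   neighbourhood; intensity holds the summed R+G+B value per pixel. */
static void gradients_at(const int *intensity, int width,
                         int x, int y, int *gx, int *gy)
{
    *gx = 0;
    *gy = 0;
    for (int j = -1; j <= 1; j++) {
        for (int i = -1; i <= 1; i++) {
            int v = intensity[(y + j) * width + (x + i)];
            *gx += gx_k[j + 1][i + 1] * v;  /* both kernels reuse the  */
            *gy += gy_k[j + 1][i + 1] * v;  /* same neighbourhood read */
        }
    }
}
```

Reading each neighbour once and feeding it to both kernels is exactly the "apply the two kernels simultaneously" idea: the pixel fetch is shared, only the weights differ.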
Now that the kernels have been applied, Gx and Gy contain un-normalised values equal to the relative length of the gradient along their respective axes. We want to calculate the overall length of the gradient, and normalise it to a range suitable for display.
Let's calculate the length of the gradient. We can do this with Pythagoras' theorem:
Pythagoras' theorem
 
sqr(a) + sqr(b) = sqr(c)
c = sqrt( sqr(a) + sqr(b) )
 
length = math.sqrt((Gx * Gx) + (Gy * Gy))
 
This gives us an un-normalised length. To normalise it, we need to know the range of possible lengths.

We know that each pixel's intensity value has a range of 0 to 765, since it is the sum of 3 channels each in the range 0 to 255. Looking at the kernels, the positive (or negative) weights sum to 4, so each of Gx and Gy lies between -3060 and 3060, and its magnitude is at most 3060. These values are then squared, added and square-rooted, giving us a final range of 0 to 4328 ( sqrt(2 * sqr(3060)) ). We can normalise by dividing by this upper bound and multiplying by 255.
Right at the end of the code we need to change displayImage to display our new image rather than the original. Other than that, the script is finished, and we can test it to see the results. It should produce a new grayscale image highlighting the edges of the original.
This concludes this tutorial. Below is the C code listing for this tutorial; it loads an image via the helpers in mypgm.h, applies the Sobel filter, and saves the result.
/* sobel.c */
#include <stdio.h>
#include <stdlib.h>
#include <float.h>
#include "mypgm.h"

void sobel_filtering(void)
     /* Spatial filtering of image data */
     /* Sobel filter (horizontal differentiation) */
     /* Input: image1[y][x] ---- Output: image2[y][x] */
{
  /* Definition of Sobel filter in horizontal direction */
  int weight[3][3] = {{ -1,  0,  1 },
                      { -2,  0,  2 },
                      { -1,  0,  1 }};
  double pixel_value;
  double min, max;
  int x, y, i, j;  /* Loop variables */
 
  /* Minimum and maximum value calculation after filtering */
  printf("Now, filtering of input image is performed\n\n");
  min = DBL_MAX;
  max = -DBL_MAX;
  for (y = 1; y < y_size1 - 1; y++) {
    for (x = 1; x < x_size1 - 1; x++) {
      pixel_value = 0.0;
      for (j = -1; j <= 1; j++) {
          for (i = -1; i <= 1; i++) {
            pixel_value += weight[j + 1][i + 1] * image1[y + j][x + i];
          }
      }
      if (pixel_value < min) min = pixel_value;
      if (pixel_value > max) max = pixel_value;
    }
  }
  if ((int)(max - min) == 0) {
    printf("Nothing exists!!!\n\n");
    exit(1);
  }

  /* Initialization of image2[y][x] */
  x_size2 = x_size1;
  y_size2 = y_size1;
  for (y = 0; y < y_size2; y++) {
    for (x = 0; x < x_size2; x++) {
      image2[y][x] = 0;
    }
  }
  /* Generation of image2 after linear transformation */
  for (y = 1; y < y_size1 - 1; y++) {
    for (x = 1; x < x_size1 - 1; x++) {
      pixel_value = 0.0;
      for (j = -1; j <= 1; j++) {
          for (i = -1; i <= 1; i++) {
            pixel_value += weight[j + 1][i + 1] * image1[y + j][x + i];
          }
      }
      pixel_value = MAX_BRIGHTNESS * (pixel_value - min) / (max - min);
      image2[y][x] = (unsigned char)pixel_value;
    }
  }
}

int main(void)
{
  load_image_data( );   /* Input of image1 */
  sobel_filtering( );   /* Sobel filter is applied to image1 */
  save_image_data( );   /* Output of image2 */
  return 0;
}

