Filters in Convolutional Neural Networks

The main reason we need CNNs is that humans can’t design filters by hand for complex features.

What are filters?

I think of filters as feature extraction tools.

Let us understand this in one dimension first.

If you want to detect left edges in an image, you apply the left edge filter: [-1, +1]

If you want to detect right edges, you apply the right edge filter: [+1, -1]

If you want to detect an isolated pixel, you apply the isolated pixel filter: [-1, +1, -1]
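
Here is a minimal NumPy sketch of these three filters in action (the signal values are made up purely for illustration):

```python
import numpy as np

# A 1D "image": a dark run, a bright run, then an isolated bright pixel.
signal = np.array([0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0])

left_edge_filter = np.array([-1, +1])    # fires where intensity rises (dark -> bright)
right_edge_filter = np.array([+1, -1])   # fires where intensity falls (bright -> dark)
isolated_pixel_filter = np.array([-1, +1, -1])

# np.correlate slides the filter over the signal and takes dot products,
# which is exactly the operation a CNN layer computes.
print(np.correlate(signal, left_edge_filter, mode='valid'))       # +1 at the left edges
print(np.correlate(signal, right_edge_filter, mode='valid'))      # +1 at the right edges
print(np.correlate(signal, isolated_pixel_filter, mode='valid'))  # +1 only at the lone pixel
```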


[Image: whiteboard notes]


In two dimensions, if we want to detect vertical stripes, we use a vertical stripe filter.

If we want to detect horizontal stripes, we use the horizontal stripe filter.


Filters are convolved with images to extract features.
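
A minimal 2D sketch of this, using SciPy for the correlation (the toy image and the filter values are illustrative, not canonical):

```python
import numpy as np
from scipy.signal import correlate2d

# A tiny image with vertical stripes: alternating dark and bright columns.
image = np.array([
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
])

# Hand-designed stripe detectors.
vertical_stripe_filter = np.array([
    [-1, +1, -1],
    [-1, +1, -1],
    [-1, +1, -1],
])
horizontal_stripe_filter = vertical_stripe_filter.T

# The vertical filter responds strongly (values alternate between +3 and -6);
# the horizontal filter's response stays flat and weak.
print(correlate2d(image, vertical_stripe_filter, mode='valid'))
print(correlate2d(image, horizontal_stripe_filter, mode='valid'))
```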

It’s quite easy to manually design filters for detecting left edges, right edges, and vertical or horizontal stripes.

If you want to detect a zebra, the black and white stripes are a prominent feature.


How do you design a filter to detect that?

As images become more complex, the features that form them also become more complex.

To design the best filters to identify these features, we need neural networks.

We start with an image.

We construct a set of filters in the first layer.

Convolving the image with these filters produces a tensor of feature maps.

We then construct a second set of filters; since these act on a stack of feature maps rather than a single image, each filter is itself a tensor.

We continue this until we have enough layers and enough filters.
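
A minimal sketch of this stacking in PyTorch (the layer sizes here are illustrative, not a recommended architecture):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    # First layer: 16 learnable 3x3 filters applied to a 1-channel image.
    nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3),
    nn.ReLU(),
    # The output is a stack of 16 feature maps, so each filter in the
    # second layer is itself a tensor of shape 16 x 3 x 3.
    nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3),
    nn.ReLU(),
)

image = torch.randn(1, 1, 28, 28)   # a batch of one 28x28 grayscale image
features = model(image)
print(features.shape)               # torch.Size([1, 32, 24, 24])
```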


The hope is that if the neural network trains on enough images, the filter values will be optimised by backpropagation.

Once the filter values are optimised, the filters detect the right features, and hence the network detects what’s present in the image.
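
A sketch of a single optimisation step (random tensors stand in for real images and labels here, purely to show the mechanics):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 24 * 24, 10),    # 10 hypothetical classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 28, 28)    # dummy batch of 28x28 grayscale images
labels = torch.randint(0, 10, (8,))   # dummy class labels

loss = loss_fn(model(images), labels)
loss.backward()     # gradients flow back into every filter value
optimizer.step()    # filter values move toward better feature detectors
```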

This approach works very well in practice, and that’s why CNNs are so commonly used for image classification tasks!

We made a video to explain this here:

