
Sorting Contours

OpenCV does not provide a built-in function or method to perform the actual sorting of contours, but we can define our own.

The sort_contours function takes two arguments:

  • The first is cnts, the list of contours that we want to sort, and
  • the second is method, the sorting method, which indicates the direction in which we are going to sort our contours (i.e. left-to-right, top-to-bottom, etc.).
# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils
 
def sort_contours(cnts, method="left-to-right"):
	# initialize the reverse flag and sort index
	reverse = False
	i = 0
 
	# handle if we need to sort in reverse
	if method == "right-to-left" or method == "bottom-to-top":
		reverse = True
 
	# handle if we are sorting against the y-coordinate rather than
	# the x-coordinate of the bounding box
	if method == "top-to-bottom" or method == "bottom-to-top":
		i = 1
 
	# construct the list of bounding boxes and sort them from top to
	# bottom
	boundingBoxes = [cv2.boundingRect(c) for c in cnts]
	(cnts, boundingBoxes) = zip(*sorted(zip(cnts, boundingBoxes),
		key=lambda b:b[1][i], reverse=reverse))
 
	# return the list of sorted contours and bounding boxes
	return (cnts, boundingBoxes)

We define variables that indicate the sorting order (ascending or descending) and the index of the bounding box we are going to use to perform the sort.

  • By default, we initialize these variables to sort in ascending order along the x-axis location of the bounding box of the contour.

We first compute the bounding boxes of each contour, which is simply the starting (x, y)-coordinates of the bounding box followed by the width and height.

The boundingBoxes list enables us to sort the actual contours.

We return the (now sorted) lists of contours and bounding boxes.
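The zip/sorted trick at the heart of sort_contours can be illustrated without any images at all. Here is a minimal sketch using made-up bounding boxes and string labels standing in for real contours (the labels and box values are hypothetical, purely for illustration):

```python
# stand-ins for real contours, so the reordering is easy to follow
contours = ["A", "B", "C"]

# hypothetical (x, y, w, h) bounding boxes for each "contour"
boxes = [(50, 10, 20, 20), (10, 30, 20, 20), (30, 20, 20, 20)]

# sort left-to-right: key on index 0 (the x-coordinate) of each box;
# top-to-bottom would use index 1 (the y-coordinate) instead
i, reverse = 0, False
(contours, boxes) = zip(*sorted(zip(contours, boxes),
    key=lambda b: b[1][i], reverse=reverse))

print(contours)  # ('B', 'C', 'A') -- ordered by increasing x
```

Sorting the zipped pairs and then unzipping them keeps each contour attached to its own bounding box, which is exactly why sort_contours can return both lists in matching order.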


Another helper function, draw_contour :

def draw_contour(image, c, i):
	# compute the center of the contour area and draw a circle
	# representing the center
	M = cv2.moments(c)
	cX = int(M["m10"] / M["m00"])
	cY = int(M["m01"] / M["m00"])
 
	# draw the contour number on the image
	cv2.putText(image, "#{}".format(i + 1), (cX - 20, cY), cv2.FONT_HERSHEY_SIMPLEX,
		1.0, (255, 255, 255), 2)
 
	# return the image with the contour number drawn on it
	return image

This function simply computes the center (x, y)-coordinates of the supplied contour c, then uses the center coordinates to draw the contour ID, i, on the image.

Finally, the passed in image  is returned to the calling function.


Let’s put these helper functions to work with a real example.

# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the input image")
ap.add_argument("-m", "--method", required=True, help="Sorting method")
args = vars(ap.parse_args())
 
# load the image and initialize the accumulated edge image
image = cv2.imread(args["image"])
accumEdged = np.zeros(image.shape[:2], dtype="uint8")
 
# loop over the blue, green, and red channels, respectively
for chan in cv2.split(image):
	# blur the channel, extract edges from it, and accumulate the set
	# of edges for the image
	chan = cv2.medianBlur(chan, 11)
	edged = cv2.Canny(chan, 50, 200)
	accumEdged = cv2.bitwise_or(accumEdged, edged)
 
# show the accumulated edge map
cv2.imshow("Edge Map", accumEdged)

Output:

Figure 1: (Left) Our original image. (Right) The edge map of the Lego bricks.

Now, let’s (1) find the contours of these Lego bricks, and then (2) sort them:

# find contours in the accumulated image, keeping only the largest ones
cnts = cv2.findContours(accumEdged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:5]
orig = image.copy()
 
# loop over the (unsorted) contours and draw them
for (i, c) in enumerate(cnts):
	orig = draw_contour(orig, c, i)
 
# show the original, unsorted contour image
cv2.imshow("Unsorted", orig)
 
# sort the contours according to the provided method
(cnts, boundingBoxes) = sort_contours(cnts, method=args["method"])
 
# loop over the (now sorted) contours and draw them
for (i, c) in enumerate(cnts):
	draw_contour(image, c, i)
 
# show the output image
cv2.imshow("Sorted", image)
cv2.waitKey(0)

We will first find the actual contours in our accumulated edge map image.

We then sort these contours according to their area/size using a combination of the Python sorted function and the cv2.contourArea method, which allows us to order our contours by area (i.e. size) from largest to smallest.

Then, we make a call to our custom sort_contours function. This method accepts our list of contours along with the sorting direction and sorts them, returning a tuple of the sorted contours and bounding boxes, respectively.

Figure 2: Sorting our Lego bricks from top-to-bottom.

On the left we have our original unsorted contours. Clearly, we can see that the contours are very much out of order — the first contour is appearing at the very bottom and the second contour at the very top!

However, by applying our sort_contours function we were able to sort our Lego bricks from top-to-bottom.

Figure 3: Sorting our contours from bottom-to-top.

We’re simply leveraging the bounding box of each object in the image to sort the contours by direction using Python and OpenCV.


One last reminder: if there is anything that you take away from these sections on contouring, it’s that contours are very simple yet extremely powerful.

Whenever you are working on a new problem, consider how contours and the associated properties of contours can help you solve the problem. More often than not, a clever use of contours can save you a lot of time and avoid more advanced (and tedious) techniques.

Contour Approximation

Contour approximation is perhaps one of the most widely used concepts when it comes to working with objects in images. It is such a powerful (and perhaps even overlooked) technique that you can leverage extensively when building real-world computer vision applications.


Contour approximation is an algorithm for reducing the number of points in a curve with a reduced set of points — thus, an approximation. This algorithm is commonly known as the Ramer-Douglas-Peucker algorithm, or simply: the split-and-merge algorithm.

The general assumption of this algorithm is that a curve can be approximated by a series of short line segments. And we can thus approximate a given number of these line segments to reduce the number of points it takes to construct a curve.

  • Overall, the resulting approximated curve consists of a subset of points that were defined by the original curve.

The algorithm is implemented in OpenCV via the cv2.approxPolyDP  function.


Let’s work it out with an example.

Take a look at the example image below — our goal here is to detect only the rectangles, while ignoring the circles/ellipses. We could use simple contour properties. But let’s instead solve the problem using contour approximations:

In order to perform contour approximation, we first need to compute the perimeter of the contoured region. Once we have the perimeter length, we can use it to approximate the contour by making a call to the cv2.approxPolyDP function.

In our toy example image above, our goal is to detect the actual rectangles while ignoring the circles/ellipses. So let’s take a second to consider if we can exploit the geometry of this problem.

A rectangle has 4 sides. And a circle has no sides. Or, in this case, since we need to represent a circle as a series of points: a circle is composed of many, many tiny line segments — far more than the 4 sides that compose a rectangle.

So if we approximate the contour and then examine the number of points within the approximated contour, we’ll be able to determine if the contour is a rectangle or not!

Once we have the approximated contour, we check its len (i.e. the length, or number of entries in the list) to see how many vertices (i.e. points) our approximated contour has. If our approximated contour has 4 vertices, we can thus mark it as a rectangle.

# import the necessary packages
import cv2
import imutils
 
# load the circles and squares image and convert it to grayscale
image = cv2.imread("images/circles_and_squares.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
# find contours in the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
 
# loop over the contours
for c in cnts:
	# approximate the contour
	peri = cv2.arcLength(c, True)
	approx = cv2.approxPolyDP(c, 0.01 * peri, True)
 
	# if the approximated contour has 4 vertices, then we are examining
	# a rectangle
	if len(approx) == 4:
		# draw the outline of the contour and draw the text on the image
		cv2.drawContours(image, [c], -1, (0, 255, 255), 2)
		(x, y, w, h) = cv2.boundingRect(approx)
		cv2.putText(image, "Rectangle", (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX,
			0.5, (0, 255, 255), 2)
 
# show the output image
cv2.imshow("Image", image)
cv2.waitKey(0)

Output:

As you can see, only the rectangles and squares have been outlined in yellow with the “Rectangle” text placed above them. Meanwhile, the circles and ellipses have been entirely ignored.


Tolerance level for Contour Approximation (ϵ)

To control the level of tolerance for the approximation, we need to define a ϵ (epsilon) value.

In practice, we define this ϵ relative to the perimeter of the shape we are examining. Commonly, we’ll define ϵ as some percentage (usually between 1-5%) of the original contour perimeter.

This is because the internal contour approximation algorithm is looking for points to discard. The larger the ϵ value is, the more points will be discarded; similarly, the smaller the ϵ value is, the more points will be kept.

So the question becomes: what’s the optimal value for ϵ ? And how do we go about defining it?

It’s very clear that an ϵ value that will work well for some shapes will not work well for others (larger shapes versus smaller shapes, for instance). This means that we can’t simply hardcode an ϵ value into our code — it must be computed dynamically based on the individual contour.

  • Thus, we define ϵ relative to the perimeter length so we understand how large the contour region actually is. Doing this ensures that we achieve a consistent approximation for all shapes inside the image.

It’s typical to use roughly 1-5% of the original contour perimeter length for a value of ϵ. Anything larger, and you’ll be over-approximating your contour to almost a single straight line. Similarly, anything smaller and you won’t be doing much of an actual approximation.


Advanced Example

Our goal is to utilize contour approximation to find the sales receipt in the following image:

Our receipt is not exactly laying flat. It has some folds and wrinkles in it. So it’s certainly not a perfect rectangle. Which leads us to the question: If the receipt is not a perfect rectangle, how are we going to find the actual receipt in the image?

A receipt looks like a rectangle, after all — even though it’s not a perfect rectangle. So if we apply contour approximation and look for rectangle-like regions, I’m willing to bet that we’ll be able to find the receipt in the image:

# import the necessary packages
import cv2
import imutils
 
# load the receipt image, convert it to grayscale, and detect
# edges
image = cv2.imread("images/receipt.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 75, 200)
 
# show the original image and edged map
cv2.imshow("Original", image)
cv2.imshow("Edge Map", edged)

# find contours in the image and sort them from largest to smallest,
# keeping only the largest ones
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:7]
 
# loop over the contours
for c in cnts:
	# approximate the contour and initialize the contour color
	peri = cv2.arcLength(c, True)
	approx = cv2.approxPolyDP(c, 0.01 * peri, True)
 
	# show the difference in number of vertices between the original
	# and approximated contours
	print("original: {}, approx: {}".format(len(c), len(approx)))
 
	# if the approximated contour has 4 vertices, then we have found
	# our rectangle
	if len(approx) == 4:
		# draw the outline on the image
		cv2.drawContours(image, [approx], -1, (0, 255, 0), 2)
 
# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)

After applying edge detection, our receipt looks like this:

We can clearly see the outline of the receipt now. But we’re also getting a lot of other stuff we aren’t interested in. For example, we’re getting a lot of noise from the shadowing in the lower-right hand corner of the image. We’re also getting the outlines of all the actual letters and characters on the receipt itself.

So how in the hell are we going to disregard all this noise and find only the receipt outline?

The answer is a two-step process:

  1. First, sort the contours by their size, keeping only the largest ones (i.e. sort the contours from largest to smallest), and
  2. second, apply contour approximation.

If the approximated contour consists of only 4 points, then we have found a rectangle. And since our receipt should be the largest rectangle in the image, we can thus assume we have found it.

As can be seen by the image on the left we have successfully found our receipt.

Advanced Contour Properties

We are going to review some of the advanced contour properties:

  1. Aspect ratio
  2. Extent
  3. Convex hull
  4. Solidity

Aspect Ratio

The contour’s aspect ratio is defined as follows:

aspect ratio = bounding box width / bounding box height

The aspect ratio is simply the ratio of the width of the contour’s bounding box to its height.

Shapes with an aspect ratio < 1 have a height that is greater than the width — these shapes will appear to be more “tall” and elongated. For example, most digits and characters on a license plate have an aspect ratio that is less than 1 (since most characters on a license plate are taller than they are wide).

And shapes with an aspect ratio > 1 have a width that is greater than the height. The license plate itself is an example of an object that will have an aspect ratio greater than 1, since the width of a physical license plate is always greater than its height.

Finally, shapes with an aspect ratio ≈ 1 (plus or minus some small ϵ, of course) have approximately the same width and height. Squares and circles are examples of shapes that will have an aspect ratio of approximately 1.


Extent

The extent of a shape or contour is the ratio of the contour area to the bounding box area:

extent = shape area / bounding box area

Recall that the area of an actual shape is simply the number of pixels inside the contoured region. On the other hand, the rectangular area of the contour is determined by its bounding box, therefore:

bounding box area = bounding box width x bounding box height

In all cases the extent will be < 1, since the number of pixels inside the contour cannot possibly be larger than the number of pixels in the bounding box of the shape.


Convex Hull

Given a set of X points in the Euclidean space, the convex hull is the smallest possible convex set that contains these X points.

On the left we have our original shape, and in the center we have the convex hull of the original shape. Notice how the boundary/contour has been drawn around all extreme points of the shape while leaving no extra space along the contour. Thus, the convex hull is the minimum enclosing polygon of all points of the input shape, which can be seen on the right.

Another important aspect of the convex hull that we should discuss is the convexity. Convex curves are curves that appear to “bulge out”. If a curve is not bulged out, then we call it a convexity defect.

The gray outline of the hand in the image above is our original shape. The red line is the convex hull of the hand. And the black arrows, such as in between the fingers, are where the convex hull is “bulged in” rather than “bulged out”. Whenever a region is “bulged in”, such as in the hand image above, we call them convexity defects.

  • Perhaps not surprisingly, the convex hull and convexity defects play a major role in hand gesture recognition, as it allows us to utilize the convexity defects of the hand to count the number of fingers.

Solidity

The solidity of a shape is the contour area divided by the area of the convex hull:

solidity = contour(shape) area / convex hull area

Again, we always have a solidity value < 1. The number of pixels inside a shape cannot possibly outnumber the number of pixels in the convex hull, because, by definition, the convex hull is the smallest possible set of pixels enclosing the shape.

Just as with the extent of a shape, when using the solidity to distinguish between various objects you’ll need to manually inspect the values of the solidity to determine the appropriate ranges. For example, the solidity of a shape is actually perfect for distinguishing between the X’s and O’s on a tic-tac-toe board.


How do we put these contour properties to work for us? Let’s work through a few examples. We’ll utilize our contour properties to distinguish between X’s and O’s on a tic-tac-toe board and to recognize different Tetris blocks.

Note: Always consider contours before more advanced computer vision and machine learning methods.


Distinguishing Between X’s and O’s

Let’s get started by recognizing the X’s and O’s on a tic-tac-toe board.

Tic-tac-toe is a two player game. One player is the “X” and the other player is the “O.” Players alternate turns placing their respective X’s and O’s on the board, with the goal of getting three of their symbols in a row, either horizontally, vertically, or diagonally. It’s a very simple game to play, common among young children who are first learning about competitive games.

We are going to leverage computer vision and contour properties to recognize the X’s and O’s on the board. 

We start off by finding the actual contours in the image. From there, we loop over each individual contour and compute its properties.

  • Remember that the cv2.contourArea function gives us the number of pixels that reside inside the contour.

So how are we going to put these properties to work for us?

The letter X has four large and obvious convexity defects — one for each of the four V’s that form the X. On the other hand, the O has nearly no convexity defects, and the ones that it has are substantially less dramatic than the letter X. Therefore, the letter O is going to have a larger solidity than the letter X.

# import the necessary packages
import cv2
import imutils
 
# load the tic-tac-toe image and convert it to grayscale
image = cv2.imread("images/tictactoe.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
# find all contours on the tic-tac-toe board
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
 
# loop over the contours
for (i, c) in enumerate(cnts):
	# compute the area of the contour along with the bounding box
	# to compute the aspect ratio
	area = cv2.contourArea(c)
	(x, y, w, h) = cv2.boundingRect(c)
 
	# compute the convex hull of the contour, then use the area of the
	# original contour and the area of the convex hull to compute the
	# solidity
	hull = cv2.convexHull(c)
	hullArea = cv2.contourArea(hull)
	solidity = area / float(hullArea)

	# initialize the character text
	char = "?"
 
	# if the solidity is high, then we are examining an `O`
	if solidity > 0.9:
		char = "O"
 
	# otherwise, if the solidity is still reasonably high, we
	# are examining an `X`
	elif solidity > 0.5:
		char = "X"
 
	# if the character is not unknown, draw it
	if char != "?":
		cv2.drawContours(image, [c], -1, (0, 255, 0), 3)
		cv2.putText(image, char, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1.25,
			(0, 255, 0), 4)
 
	# show the contour properties
	print("{} (Contour #{}) -- solidity={:.2f}".format(char, i + 1, solidity))
 
# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)

Output:

We are able to identify all of the X’s and O’s without a problem while totally ignoring the actual game board itself. And we accomplished all of this by examining the solidity of each contour.


Identifying Tetris Blocks

In some problems, such as identifying the various types of Tetris blocks, we need to utilize more than one contour property. Specifically, we’ll be using the aspect ratio, extent, convex hull, and solidity in conjunction with each other to perform our block identification.

Here is what our Tetris image looks like:

The aqua piece is known as a Rectangle. The blue and orange blocks are called L-pieces. The yellow shape is obviously a Square. And the green and red bricks on the bottom are called Z-pieces.

Our goal here is to extract contours from each of these shapes and then identify which shape each of the blocks are.

  1. First, we convert the image to binary using thresholding, where the background pixels are black and the foreground pixels (i.e. the Tetris blocks) are white.
  2. We then find the contours in our thresholded image.
  3. Finally, we compute the contour properties for each block.

Aspect ratio, which is simply the ratio of the width to the height of the bounding box. Again, remember that the aspect ratio of a shape will be < 1 if the height is greater than the width. The aspect ratio will be > 1 if the width is larger than the height. And the aspect ratio will be approximately 1 if the width and height are equal.

  • Aspect ratio can be used to differentiate between the square and rectangle pieces.

The extent of the current contour is the area (i.e. number of pixels that reside within the contour) divided by the true rectangular (area = width x height) area of the bounding box.

  • We can use the extent to see whether we are looking at an L-piece or a Z-piece.

# import the necessary packages
import numpy as np
import cv2
import imutils
 
# load the Tetris block image, convert it to grayscale, and threshold
# the image
image = cv2.imread("images/tetris_blocks.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)[1]
 
# show the original and thresholded images
cv2.imshow("Original", image)
cv2.imshow("Thresh", thresh)
 
# find external contours in the thresholded image and allocate memory
# for the convex hull image
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
hullImage = np.zeros(gray.shape[:2], dtype="uint8")

# loop over the contours
for (i, c) in enumerate(cnts):
	# compute the area of the contour along with the bounding box
	# to compute the aspect ratio
	area = cv2.contourArea(c)
	(x, y, w, h) = cv2.boundingRect(c)
 
	# compute the aspect ratio of the contour, which is simply the width
	# divided by the height of the bounding box
	aspectRatio = w / float(h)
 
	# use the area of the contour and the bounding box area to compute
	# the extent
	extent = area / float(w * h)
 
	# compute the convex hull of the contour, then use the area of the
	# original contour and the area of the convex hull to compute the
	# solidity
	hull = cv2.convexHull(c)
	hullArea = cv2.contourArea(hull)
	solidity = area / float(hullArea)
 
	# visualize the original contours and the convex hull and initialize
	# the name of the shape
	cv2.drawContours(hullImage, [hull], -1, 255, -1)
	cv2.drawContours(image, [c], -1, (240, 0, 159), 3)
	shape = ""

	# if the aspect ratio is approximately one, then the shape is a square
	if aspectRatio >= 0.98 and aspectRatio <= 1.02:
		shape = "SQUARE"
 
	# if the width is 3x longer than the height, then we have a rectangle
	elif aspectRatio >= 3.0:
		shape = "RECTANGLE"
 
	# if the extent is sufficiently small, then we have a L-piece
	elif extent < 0.65:
		shape = "L-PIECE"
 
	# if the solidity is sufficiently large enough, then we have a Z-piece
	elif solidity > 0.80:
		shape = "Z-PIECE"
 
	# draw the shape name on the image
	cv2.putText(image, shape, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
		(240, 0, 159), 2)
 
	# show the contour properties
	print("Contour #{} -- aspect_ratio={:.2f}, extent={:.2f}, solidity={:.2f}"
		.format(i + 1, aspectRatio, extent, solidity))
 
	# show the output images
	cv2.imshow("Convex Hull", hullImage)
	cv2.imshow("Image", image)
	cv2.waitKey(0)

Output:

Using simple contour properties, we were able to recognize X’s and O’s on a tic-tac-toe board. And we were also able to recognize the various types of Tetris blocks. Again, these contour properties, which are very simple on the surface, can enable us to identify various shapes — we just need to take a step back, be a little clever, and inspect the values of each of our contour properties to construct rules to identify each shape.


Another Example

Identifying shapes based on contour properties.

import numpy as np
import cv2
import imutils

# load the image and convert it to grayscale
image = cv2.imread("more_shapes_example.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cv2.imshow("Gray", gray)

# find all contours on the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
hullImage = np.zeros(gray.shape[:2], dtype="uint8")

# loop over the contours
for (i, c) in enumerate(cnts):

    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])

    # compute the area of the contour along with the bounding box
    # to compute the aspect ratio
    area = cv2.contourArea(c)
    (x, y, w, h) = cv2.boundingRect(c)
    aspectRatio = w / float(h)
        
    extent = area / float(w * h)

    # compute the convex hull of the contour, then use the area of the
    # original contour and the area of the convex hull to compute the solidity
    hull = cv2.convexHull(c)
    hullArea = cv2.contourArea(hull)
    solidity = area / float(hullArea)

    cv2.drawContours(hullImage, [hull], -1, 255, -1)
    cv2.drawContours(image, [c], -1, (240, 0, 159), 3)

    if aspectRatio >= 0.98 and aspectRatio <= 1.02:
        shape = "CIRCLE"
    elif extent > 0.95:
        shape = "RECTANGLE"
    else:
        shape = "ARROW"

    print("Contour #{} -- aspect_ratio={:.2f}, extent={:.2f}, solidity={:.2f}"
     .format(i + 1, aspectRatio, extent, solidity))

    cv2.putText(image, shape, (cX - 20, cY), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))

    cv2.imshow("Convex Hull", hullImage)
    cv2.imshow("Image", image)

    cv2.waitKey(0)

Output:

Simple Contour Properties

Along with simply finding and drawing contours, there exists a fairly extensive set of properties that we can use to quantify and represent the shape of an object in an image.

A few of the simple properties we can compute for contours are:

  1. Centroid/Center of Mass
  2. Area
  3. Perimeter
  4. Bounding boxes
  5. Rotated bounding boxes
  6. Minimum enclosing circles
  7. Fitting an ellipse

Centroid/Center of Mass

The “centroid” or “center of mass” is the center (x, y)-coordinate of an object in an image. This (x, y)-coordinate is actually calculated based on the image moments, which are based on the weighted average of the (x, y)-coordinates/pixel intensity along the contour.

  • Image moments allow us to use basic statistics to represent the structure and shape of an object in an image.

The centroid calculation itself is actually very straightforward: it’s simply the mean (i.e. average) position of all (x, y)-coordinates along the contour of the shape.

If you were to imagine computing the centroid yourself, you would simply walk along the points of the outline and average their (x, y)-coordinates together.

Computing contour properties is done on only a single contour at a time, so we start by looping over our detected contours:

  • Using the cv2.moments function we are able to compute the center (x, y) -coordinate of the shape the contour represents.
  • This function returns a dictionary of moments with the keys of the dictionary as the moment number and the values as the actual calculated moment.
# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils
 
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
 
# load the image and convert it to grayscale
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
# find external contours in the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
clone = image.copy()
 
# loop over the contours
for c in cnts:
	# compute the moments of the contour which can be used to compute the
	# centroid or "center of mass" of the region
	M = cv2.moments(c)
	cX = int(M["m10"] / M["m00"])
	cY = int(M["m01"] / M["m00"])
 
	# draw the center of the contour on the image
	cv2.circle(clone, (cX, cY), 10, (0, 255, 0), -1)
 
# show the output image
cv2.imshow("Centroids", clone)
cv2.waitKey(0)
clone = image.copy()

Output:

For each shape in the image above we have been able to successfully compute and draw the center (x, y)-coordinates.


Area and Perimeter

The area of the contour is the number of pixels that reside inside the contour outline. Similarly, the perimeter (sometimes called arc length) is the length of the contour.

Computing the area of the contour is accomplished using the cv2.contourArea  function. This function takes only a single argument: the contour that we want to compute the area for.

Computing the perimeter of the contour is accomplished using the cv2.arcLength  function. This function takes two arguments: the contour itself along with a flag that indicates whether or not the contour is “closed.” A contour is considered closed if the shape outline is continuous and there are no “holes” along the outline. In most cases, you’ll be setting this flag to True , indicating that your contour has no gaps.

# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils
 
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
 
# load the image and convert it to grayscale
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
# find external contours in the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
clone = image.copy()

# loop over the contours again
for (i, c) in enumerate(cnts):
	# compute the area and the perimeter of the contour
	area = cv2.contourArea(c)
	perimeter = cv2.arcLength(c, True)
	print("Contour #{} -- area: {:.2f}, perimeter: {:.2f}".format(i + 1, area, perimeter))
 
	# draw the contour on the image
	cv2.drawContours(clone, [c], -1, (0, 255, 0), 2)
 
	# compute the center of the contour and draw the contour number
	M = cv2.moments(c)
	cX = int(M["m10"] / M["m00"])
	cY = int(M["m01"] / M["m00"])
	cv2.putText(clone, "#{}".format(i + 1), (cX - 20, cY), cv2.FONT_HERSHEY_SIMPLEX,
		1.25, (255, 255, 255), 4)
 
# show the output image
cv2.imshow("Contours", clone)
cv2.waitKey(0)

Output:

The perimeter has a more important role to play when we explore contour approximation in a few sections.


Bounding Boxes

A bounding box is exactly what it sounds like — an upright rectangle that “bounds” and “contains” the entire contoured region of the image. However, it does not consider the rotation of the shape, so you’ll want to keep that in mind.

A bounding box consists of four components: the starting x-coordinate of the box, then the starting y-coordinate of the box, followed by the width and height of the box.

We fit a bounding box by making a call to the cv2.boundingRect  function. This function expects only a single parameter: the contour c that we want to compute the bounding box for.

  • We then take the returned starting (x, y)-coordinates, the width, and the height of the bounding box, and use the cv2.rectangle  function to draw our bounding box.
# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils
 
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
 
# load the image and convert it to grayscale
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
# find external contours in the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
clone = image.copy()

# loop over the contours
for c in cnts:
	# fit a bounding box to the contour
	(x, y, w, h) = cv2.boundingRect(c)
	cv2.rectangle(clone, (x, y), (x + w, y + h), (0, 255, 0), 2)
 
# show the output image
cv2.imshow("Bounding Boxes", clone)
cv2.waitKey(0)

Output :


Rotated Bounding Boxes

While simple bounding boxes are great, they do not take into consideration the rotation of the shape in an image.

Computing the rotated bounding box requires two OpenCV functions: cv2.minAreaRect  and cv2.boxPoints ( cv2.cv.BoxPoints  in OpenCV 2.4).

The cv2.minAreaRect function takes our contour and returns a tuple with 3 values. The first value of the tuple is the center (x, y)-coordinates of the rotated bounding box. The second value is the width and height of the bounding box. And the final value is our θ, or angle of rotation of the shape.

In order to draw a rotated bounding box, we pass the output of cv2.minAreaRect  to the cv2.boxPoints function ( cv2.cv.BoxPoints  for OpenCV 2.4) which converts the (x, y)-coordinates, width and height, and angle of rotation into a set of coordinates points.

# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils
 
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
 
# load the image and convert it to grayscale
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
# find external contours in the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
clone = image.copy()

# loop over the contours
for c in cnts:
	# fit a rotated bounding box to the contour and draw a rotated bounding box
	box = cv2.minAreaRect(c)
	box = np.int0(cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box))
	cv2.drawContours(clone, [box], -1, (0, 255, 0), 2)
 
# show the output image
cv2.imshow("Rotated Bounding Boxes", clone)
cv2.waitKey(0)

Output :

In general, you’ll want to use standard bounding boxes when you want to crop a shape from an image. And you’ll want to use rotated bounding boxes when you are utilizing masks to extract regions from an image.


Minimum Enclosing Circles

Just as we can fit a rectangle to a contour, we can also fit a circle.

The most important function to take note of here is cv2.minEnclosingCircle , which takes our contour and returns the (x, y)-coordinates of the center of the circle along with the radius of the circle.

Now that we have the center and the radius, it’s fairly simple to utilize the cv2.circle  function to draw the minimum enclosing circle of the contour.

# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils
 
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
 
# load the image and convert it to grayscale
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
# find external contours in the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
clone = image.copy()

# loop over the contours
for c in cnts:
	# fit a minimum enclosing circle to the contour
	((x, y), radius) = cv2.minEnclosingCircle(c)
	cv2.circle(clone, (int(x), int(y)), int(radius), (0, 255, 0), 2)
 
# show the output image
cv2.imshow("Min-Enclosing Circles", clone)
cv2.waitKey(0)

Output :

Notice that even though the lightning bolt shape has a very small area, it has a large perimeter — thus it has a large minimum enclosing circle to fit the entire shape into the circle region.


Fitting an Ellipse

Fitting an ellipse to a contour is much like fitting a rotated rectangle to a contour.

Under the hood, OpenCV is computing the rotated rectangle of the contour. And then it’s taking the rotated rectangle and computing an ellipse to fit in the rotated region.

The actual ellipse is fit to the shape using the cv2.fitEllipse  function, whose output we then pass on to the cv2.ellipse  function to draw the enclosing region.

*** Note: A contour must have at least 5 points for an ellipse to be computed — if a contour has fewer than 5 points, then an ellipse cannot be fit to the rotated rectangle region.

# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils
 
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
 
# load the image and convert it to grayscale
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
 
# find external contours in the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
clone = image.copy()

# loop over the contours
for c in cnts:
	# to fit an ellipse, our contour must have at least 5 points
	if len(c) >= 5:
		# fit an ellipse to the contour
		ellipse = cv2.fitEllipse(c)
		cv2.ellipse(clone, ellipse, (0, 255, 0), 2)
 
# show the output image
cv2.imshow("Ellipses", clone)
cv2.waitKey(0)

Output :

Notice that since a rectangle only has 4 points, we cannot fit an ellipse to it — but all other shapes here have at least 5 points, so this is not a concern.


By far, you’ll be using the standard bounding box the most in your own computer vision applications. There are also times where the rotated bounding box is used, but not nearly as much as the standard bounding box.

The perimeter/arc length is also used a lot, but normally only in the context of contour approximation.

Finding and Drawing Contours

After we identify the outlines and structures of objects in images using edge detection, the next big question is: how do we find and access these outlines?

The answer: contours.

This is a very important concept, and being able to leverage simple contour properties enables you to solve complicated problems with ease.

Contours are extensively used in :

  1. Object detection
  2. Shape analysis

Edges Vs Contours

There certainly does not seem to be much difference between the two resulting images! But, underneath the surface, the difference between edges and contours is significant.

When we perform edge detection, we find the points where the intensity of pixel values changes significantly, and turn those pixels on, while turning the rest of the pixels off. Edge pixels live in the image itself, and there is no particular requirement that the pixels representing an edge are all contiguous.

Contours, on the other hand, are not necessarily part of an image, unless we choose to draw them (as we did for the contour image above). Rather, contours are abstract collections of points and / or line segments corresponding to the shapes of the objects in the image. Thus, they can be manipulated by our programs; we can count the number of contours, use them to categorize the shapes in the image, use them to crop objects from an image, and more.

  • Contours are continuous lines or curves that bound or cover the full boundary of an object in an image.

Finding Contours in an Image and Drawing them

Contours are simply the outlines/boundary of an object in an image.

This means we must first find the object using methods such as edge detection or thresholding — that is, we are seeking a binary image where white pixels correspond to objects and black pixels correspond to the background. There are many ways to obtain a binary image like this, but the most common methods are edge detection and thresholding.

Note: If the image is simple enough, we might be able to get away with using the grayscale image as an input without actually creating a binary image using Edge detection and thresholding.

Once we have this binary or grayscale image, we can find the outlines of the objects in the image using the cv2.findContours  function.

The cv2.findContours  function has many parameters to explore :

contours, hierarchy = cv2.findContours(image, mode, method[, contours[, hierarchy[, offset]]])

1. The first parameter is the source image from which we want to find contours. This is an 8-bit single-channel image. Non-zero pixels are treated as 1’s and zero pixels remain 0’s, so the image is treated as binary . You can use compare , inRange , threshold , adaptiveThreshold , Canny , and others to create a binary image out of a grayscale or color one.

  • In older versions of OpenCV (prior to 3.2), the cv2.findContours  function is destructive to the input image (meaning that it manipulates it), so if you intend on using your input image again, be sure to clone it using the copy()  method prior to passing it into cv2.findContours .

2. The second parameter is the Contour retrieval mode. This parameter controls how the contours are retrieved. The various options are listed below.

  1. RETR_EXTERNAL : Retrieves only the outer contours. This is the fastest mode.
  2. RETR_LIST : Retrieves all of the contours without establishing any hierarchical relationships. So you won’t know if one contour is nested inside another.
  3. RETR_CCOMP : Retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level.
  4. RETR_TREE : Retrieves all of the contours and reconstructs a full hierarchy of nested contours. This is slower but detailed.
  5. RETR_FLOODFILL : Unfortunately, this is undocumented.

3. The third parameter is the Contour approximation method. As mentioned above, a contour is simply a list of points that form the boundary of the shape. One way is to store all the points representing the boundary, but it is wasteful to store hundreds of points for simple shapes like a triangle or a quad. For a triangle, 3 points are enough, and for a quad, 4 points are enough. This flag helps us choose the level of approximation. Here are the options.

  1. CHAIN_APPROX_NONE : No approximation is used and all points are returned.
  2. CHAIN_APPROX_SIMPLE : This is a simple approximation algorithm that works well when the shapes are polygons. It will return 4 points for a quad, 3 points for a triangle, and so on.
  3. CHAIN_APPROX_TC89_L1 : This is a more accurate approximation algorithm. It should be used when the shapes are curved and are not simple polygons.
  4. CHAIN_APPROX_TC89_KCOS : This is computationally more expensive and slightly more accurate than the CHAIN_APPROX_TC89_L1 algorithm. It should also be used when the shapes are curved and are not simple polygons.

Working Example

Consider this image of several six-sided dice on a black background.

Suppose we want to automatically count the number of dice in the image. We can use contours to do that. We find contours with the cv2.findContours() function, and then easily examine the results to count the number of objects. Our strategy will be this:

  1. Read the input image, convert it to grayscale, and blur it slightly.
  2. Use simple fixed-level thresholding to convert the grayscale image to a binary image.
  3. Use the cv2.findContours() function to find contours corresponding to the outlines of the dice.
  4. Print information on how many contours – and thus how many objects – were found in the image.
  5. For illustrative purposes, draw the contours in the original image so we can visualize the results.

In order to perform thresholding, we need to find a threshold value. For that, we first look at the grayscale histogram for the dice image, so we can find a threshold value that will effectively convert the image to binary.

Since finding contours works on white objects set against a black background, in our thresholding we want to turn off the pixels in the background, while turning on the pixels associated with the face of the dice. Based on the histogram, a threshold value of 200 seems likely to do that.

# read original image (filename holds the path to the dice image)
image = cv2.imread(filename = filename)
 
# create binary image
gray = cv2.cvtColor(src = image, code = cv2.COLOR_BGR2GRAY)

# smooth the image to remove unwanted noise/unwanted details
blur = cv2.GaussianBlur(src = gray, ksize = (5, 5), sigmaX = 0)

# apply simple thresholding to convert the grayscale image to binary image
# (t = 200 was chosen from the grayscale histogram above)
t = 200
(t, binary) = cv2.threshold(src = blur, thresh = t, maxval = 255, type = cv2.THRESH_BINARY)

Now, we find the contours, based on the binary image of the dice. As we are using it here, the cv2.findContours() function takes three parameters and (in OpenCV 3) returns three values:

(_, contours, _) = cv2.findContours(image = binary, 
                                    mode = cv2.RETR_EXTERNAL,
                                    method = cv2.CHAIN_APPROX_SIMPLE)
  • The first parameter to the function is the image to find contours in. Remember, this image should be binary, with the objects you wish to find contours for in white, against a black background.
  • Second, we pass in a constant indicating what kind of contours we are interested in. Since we are interested in counting the objects in this image, we only care about the contours around the outermost edges of the objects, and so we pass in the cv2.RETR_EXTERNAL parameter.
  • The last parameter tells the function if it should simplify the contours or not. We pass in cv2.CHAIN_APPROX_SIMPLE, which tells the function to simplify by using line segments when it can, rather than including all the points on what would be a straight edge. Using this parameter saves memory and computation time in our program.

The cv2.findContours() function returns three values (in OpenCV 3), as a tuple; in this case, we are choosing to ignore the first and third return values.

  • The first value is an intermediate image that is produced during the contour-finding process. We are not interested in that image in this application, so we effectively discard that image by placing the underscore (_) in the place of the first return value.
  • The second return value is a list of NumPy arrays, contours. Each array holds the points for one contour in the image. So, if we have executed our strategy correctly, the number of contours – the length of the contours list – will be the number of objects in the image.
  • The final return value is a NumPy array that contains hierarchy information about the contours. This is not useful to us in our object-counting program, so we also choose to discard that return value with the _.

After finding the contours of the image, we print information about them out to the terminal, so that we can see the number of objects detected in the image. The code that does the printing looks like this:

print("Found %d objects." % len(contours))
for (i, c) in enumerate(contours):
    print("\tSize of contour %d: %d" % (i, len(c)))

Finally, we draw the contour points on the original image.

cv2.drawContours(image = image, 
                 contours = contours, 
                 contourIdx = -1, 
                 color = (0, 0, 255), 
                 thickness = 5)
  • The first parameter is the image we are going to draw the contours on.
  • Then, we pass in the list of contours to draw. This is our list of contours we found using the cv2.findContours  function.
  • The third parameter tells us where to start when we draw the contours; -1 means to draw them all. If we specified 2 here, only the third contour would be drawn.
  • The fourth parameter is the color to use when drawing the contours.
  • Finally, we specify the thickness of the contour points to draw. Here we are drawing the contours in red, with a thickness of 5, so they will be very visible on the image.

Full code :

# import the necessary packages
import numpy as np
import cv2

# read original image (filename holds the path to the dice image)
image = cv2.imread(filename = filename)
 
# create binary image
gray = cv2.cvtColor(src = image, code = cv2.COLOR_BGR2GRAY)

# smooth the image to remove unwanted noise/unwanted details
blur = cv2.GaussianBlur(src = gray, ksize = (5, 5), sigmaX = 0)

# apply simple thresholding to convert the grayscale image to binary image
# (t = 200 was chosen from the grayscale histogram above)
t = 200
(t, binary) = cv2.threshold(src = blur, thresh = t, maxval = 255, type = cv2.THRESH_BINARY)

(_, contours, _) = cv2.findContours(image = binary, 
                                    mode = cv2.RETR_EXTERNAL,
                                    method = cv2.CHAIN_APPROX_SIMPLE)

print("Found %d objects." % len(contours))
for (i, c) in enumerate(contours):
    print("\tSize of contour %d: %d" % (i, len(c)))

cv2.drawContours(image = image, 
                 contours = contours, 
                 contourIdx = -1, 
                 color = (0, 0, 255), 
                 thickness = 5)

# show the output image
cv2.imshow("All Contours", image)
cv2.waitKey(0)

Output :


Another Example

# import the necessary packages
import numpy as np
import argparse
import cv2
import imutils
  
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to the image")
args = vars(ap.parse_args())
  
# load the image and convert it to grayscale
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
  
# show the original image
cv2.imshow("Original", image)
  
# find all contours in the image and draw ALL contours on the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
clone = image.copy()
cv2.drawContours(clone, cnts, -1, (0, 255, 0), 2)
print("Found {} contours".format(len(cnts)))
  
# show the output image
cv2.imshow("All Contours", clone)
cv2.waitKey(0)

We’ll instruct cv2.findContours  to return a list of all contours in the image by passing in the cv2.RETR_LIST  flag. This flag will ensure that all contours are returned. 

Finally, we pass in the cv2.CHAIN_APPROX_SIMPLE  flag. If we did not specify this flag and instead used cv2.CHAIN_APPROX_NONE , we would be storing every single (x, y)-coordinate along the contour. In general, this is not advisable. It’s substantially slower and takes up significantly more memory. By compressing our horizontal, vertical, and diagonal segments into only end-points, we are able to reduce memory consumption significantly without any substantial loss in contour accuracy. 

*** The cv2.findContours  function returns a tuple of values. The problem with the returning tuple is that it is in a different format for OpenCV 2.4, OpenCV 3, OpenCV 3.4, OpenCV 4.0.0-pre, OpenCV 4.0.0-alpha, and OpenCV 4.0.0 (official). As you can imagine, this is quite confusing for novices and experts alike. See this blog post for further details.

# You need at minimum imutils==0.5.2
pip install --upgrade imutils
  • To accommodate all these changes across different versions, we will use the imutils  package to grab the contours.

We then draw our found contours using cv2.drawContours  function.

  • The first argument we pass in is the image we want to draw the contours on.
  • The second parameter is our list of contours we found using the cv2.findContours  function.
  • The third parameter is the index of the contour inside the cnts  list that we want to draw. If we wanted to draw only the first contour, we could pass in a value of 0. If we wanted to draw only the second contour, we would supply a value of 1. Passing in a value of -1 for this argument instructs the cv2.drawContours  function to draw all contours in the cnts  list. Personally, I like to always supply a value of -1 and wrap the single contour I want to draw in a list.
  • Finally, the last two arguments to the cv2.drawContours  function are the color of the contour and the thickness of the contour line (2 pixels).

Output :

Note, most interestingly, that the cutout region (oval) inside the rectangle has been detected as well! Is this the default behavior? What if we didn’t want to detect that oval region? What if we are only interested in the external shapes? Can we do it? Of course we can!

Loop over the contours individually and draw them:

# loop over the contours individually and draw each of them
for (i, c) in enumerate(cnts):
	print("Drawing contour #{}".format(i + 1))
	cv2.drawContours(clone, [c], -1, (0, 255, 0), 2)
	cv2.imshow("Single Contour", clone)
	cv2.waitKey(0)

Note : In general, if you want to draw only a single contour, I would get in the habit of always supplying a value of -1 for your contour index and then wrapping your single contour c as a list.

Find contours but keep only EXTERNAL contours in the image:

# find contours in the image, but this time keep only the EXTERNAL
# contours in the image
cnts = cv2.findContours(gray.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cv2.drawContours(clone, cnts, -1, (0, 255, 0), 2)
print("Found {} EXTERNAL contours".format(len(cnts)))
 
# show the output image
cv2.imshow("All Contours", clone)
cv2.waitKey(0)

This time we are supplying a value of cv2.RETR_EXTERNAL  for the contour detection mode. Specifying this flag instructs OpenCV to return only the outermost contours of each shape in the image, meaning that if one shape is enclosed in another, the inner contour is ignored.

Output :


Using Masks with Contours

What if we wanted to access just the blue rectangle and ignore all other shapes? How would we do that?

Loop over the contours individually, draw a mask for the contour, and then apply a bitwise AND.

  • We create an empty NumPy array with the same dimensions of our original image. This empty NumPy array will serve as the mask  for the current shape that we want to examine.
  • Now that we have initialized our mask , we can draw the contour on the mask. Notice how I only supplied a value of 255 (white) for the color here — but isn’t this incorrect? Isn’t white represented as (255, 255, 255)? Since we are working with a mask that has only a single (grayscale) channel, we only need to supply a value of 255 to get white.
# loop over the contours individually
for c in cnts:
	# construct a mask by drawing only the current contour
	mask = np.zeros(gray.shape, dtype="uint8")
	cv2.drawContours(mask, [c], -1, 255, -1)
 
	# show the images
	cv2.imshow("Image", image)
	cv2.imshow("Mask", mask)
	cv2.imshow("Image + Mask", cv2.bitwise_and(image, image, mask=mask))
	cv2.waitKey(0)

Output :

Figure 4: Accessing only a single shape at a time, hiding the rest.