Detecting the contour of the edges of an image with the Vision framework
Learn how to detect and draw the edges of an image using the Vision framework in a SwiftUI app.
When we talk about image processing, one common task is edge detection, which is applied in various contexts such as object recognition, image segmentation, and computer graphics.
In this reference article, we are going to explore how to detect the contours of an image in an easy and fast way, using the new DetectContoursRequest introduced with the redesign of the Vision framework API in iOS 18 and macOS 15.
Before we start, there's one important thing to mention: contour detection works best with high-contrast images, so make sure the images you are working on are properly prepared to achieve the best results.
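One possible way to prepare an image is to boost its contrast with Core Image before running the request. The sketch below uses the built-in CIColorControls filter; the helper name and the 1.5 factor are illustrative assumptions, so tune them for your own images.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Hypothetical helper: increases the contrast of a CIImage
// before passing it to contour detection.
func increaseContrast(of image: CIImage, by factor: Float = 1.5) -> CIImage {
    let filter = CIFilter.colorControls()
    filter.inputImage = image
    filter.contrast = factor
    // Fall back to the original image if the filter produces no output
    return filter.outputImage ?? image
}
```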
Detect the contours
import Vision
Import the Vision framework to have access to its features.
Declare a function that receives the image to be analyzed as a parameter.
func detectContours(image: UIImage) async throws -> CGPath? {
// Image to be used
guard let image = CIImage(image: image) else {
return nil
}
// Set up the detect contours request
var request = DetectContoursRequest()
request.contrastAdjustment = 1.5
request.contrastPivot = nil
// Perform the detect contours request
let contoursObservations = try await request.perform(
on: image,
orientation: .downMirrored
)
// An array of all detected contours as a path object
let contours = contoursObservations.normalizedPath
return contours
}
The function detectContours(image:) works as follows:
- Declares a constant image storing the image to be processed as a CIImage;
- Sets up the DetectContoursRequest object;
- Adjusts the contrast of the image through the contrastAdjustment property of the request. The higher the contrast, the better the edge detection will be. The default value is 2.0, and it can range from 0.0 to 3.0;
- Optionally changes the contrastPivot, which is the pixel value used as a pivot for the contrast. It is set to 0.5 by default, in a range from 0.0 to 1.0, but you can also ask the Vision framework to detect the value automatically based on the image intensity by setting it to nil;
- Performs the contour detection request on the image. The perform(on:orientation:) method may require the orientation of the image because, in Vision's coordinate system, the origin is in the lower-left corner, while in SwiftUI it is in the upper-left corner. By setting the orientation to downMirrored, the returned coordinates are normalized to the SwiftUI system;
- The request returns a ContoursObservation object that stores a series of information about the detected contours. Among its properties, we use normalizedPath, which stores the detected contours as a CGPath object in normalized coordinates.
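Beyond normalizedPath, the observation also exposes the individual contours it found. The sketch below shows one way to inspect them; the property names follow the redesigned Vision API, but treat the exact shape of the output as illustrative rather than authoritative.

```swift
import Vision

// A sketch of inspecting a ContoursObservation beyond `normalizedPath`.
func inspectContours(_ observation: ContoursObservation) {
    // Total number of detected contours, including nested ones
    print("Detected \(observation.contourCount) contours")

    // Top-level contours only; each exposes its own children
    for contour in observation.topLevelContours {
        print("Top-level contour with \(contour.childContours.count) child contours")
    }
}
```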
Drawing the contours overlaying the image
struct ContoursShape: Shape {
var contours: CGPath?
func path(in rect: CGRect) -> Path {
var path = Path()
if let contours = contours {
let transform = CGAffineTransform(
scaleX: rect.width, y: rect.height
)
path.addPath(Path(contours), transform: transform)
}
return path
}
}
To retrace the contours of an image, create a custom shape that converts a CGPath object into a Path one. The struct checks if there are any contours to draw and, if there are, scales them to fit within the provided rectangle and returns the Path object representing the scaled contours.
Integration with SwiftUI
import SwiftUI
import Vision
struct ContourDetectionView: View {
@State private var contours: CGPath? = nil
var body: some View {
VStack {
Image("tomatoes")
.resizable()
.aspectRatio(contentMode: .fit)
.overlay(
ContoursShape(contours: contours)
.stroke(Color.blue, lineWidth: 2)
)
Button {
self.drawContours()
} label: {
Text("Draw Contours")
}
}
.padding()
}
private func drawContours() {
Task {
do {
contours = try await detectContours(image: UIImage(named: "tomatoes")!)
} catch {
print("Error detecting contours: \(error)")
}
}
}
private func detectContours(image: UIImage) async throws -> CGPath? {
// Image to be used
guard let image = CIImage(image: image) else {
return nil
}
// Set up the detect contours request
var request = DetectContoursRequest()
request.contrastAdjustment = 1.5
request.contrastPivot = nil
// Perform the detect contours request
let contoursObservations = try await request.perform(
on: image,
orientation: .downMirrored
)
// An array of all detected contours as a path object
let contours = contoursObservations.normalizedPath
return contours
}
}
Create a state property called contours that stores the detected contours of the edges of the image.
Create a button that triggers the drawContours() method, which assigns the detected contours to the contours property.
Add the ContoursShape object as an overlay of the image so that the edges are drawn when the state variable is updated.
The Vision framework provides a powerful tool for detecting edges in images with the DetectContoursRequest
. This request processes an image, optionally considering its orientation, and returns detailed contour information. These results can then be easily accessed and used to draw the detected contours, enabling developers to create sophisticated image analysis and processing applications.