Make it Intelligent

Designing apps that learn and adapt

Over the years, Apple’s advancements in chip design have opened up incredible possibilities for developers. Beyond improvements in performance and energy efficiency, Apple silicon gained dedicated hardware for processing machine learning tasks, and tools like Create ML and Core ML make it possible to build applications that learn from data and make intelligent decisions, significantly enhancing the user experience.

Create ML simplifies the process of building machine learning models. With a robust dataset, you can create lightweight models that can be imported into your apps using Core ML.

Core ML powers technologies such as Vision, Natural Language, and Speech. These frameworks simplify the process of adding intelligent features, enabling powerful and capable applications.

Machine Learning - Apple Developer
Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

The power of artificial intelligence is seamlessly integrated into users’ devices with Apple Intelligence, announced in 2024 and set to revolutionize what applications can do and how users interact with them. Features like Genmoji and Writing Tools enhance daily interactions with text and images, while the new capabilities of Siri and its ability to communicate with apps unlock endless possibilities for developers worldwide.

Apple Intelligence
Apple Intelligence helps you write, express yourself, and get things done effortlessly. All while setting a brand-new standard for privacy in AI.

Machine Learning

Using Machine Learning with Core ML

Core ML is the framework that integrates machine learning models into applications, enabling direct predictions and decisions on the device.

By employing Core ML, developers can seamlessly integrate their custom models, ensuring that all data processing occurs locally. This approach significantly improves response times, preserves privacy, and offers ample opportunities for customization.
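To get a feel for the workflow, here is a minimal sketch of loading a compiled model and requesting a prediction with Core ML. The model name and the "text"/"label" feature names are hypothetical; in practice, Xcode generates a typed wrapper class for every model added to a project, which is the more common path.

```swift
import CoreML

// A minimal sketch, assuming a hypothetical compiled model named
// "SentimentClassifier.mlmodelc" bundled with the app, exposing a
// "text" input feature and a "label" output feature.
let configuration = MLModelConfiguration()
configuration.computeUnits = .all // let Core ML pick CPU, GPU, or Neural Engine

let modelURL = Bundle.main.url(forResource: "SentimentClassifier",
                               withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: modelURL, configuration: configuration)

// Wrap the input in a feature provider and run the prediction on device.
let input = try MLDictionaryFeatureProvider(dictionary: ["text": "I love this app"])
let prediction = try model.prediction(from: input)
print(prediction.featureValue(for: "label") ?? "no label")
```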

Core ML Explained: Apple’s Machine Learning Framework
This article will help you understand the main features of Core ML and how you can leverage machine learning in your apps.

It’s also possible to convert models trained using various tools, such as TensorFlow and PyTorch, to the format supported by Core ML using Core ML Tools.

On top of Core ML, Apple offers a range of high-level APIs designed for immediate app integration, requiring no prior machine learning experience. These machine-learning-powered frameworks cover various use cases and features that can be implemented with little effort:

  • Vision: process and analyze image and video data
  • Natural Language: analyze text to deduce its language-specific metadata
  • Speech: recognize and transcribe spoken words in a variety of languages
  • Sound Analysis: analyze audio and recognize it as a particular type
  • Translation: translate text into other languages

Explore the Vision Framework

Vision is the Apple framework that allows the processing and analysis of images and videos using the default Apple models or customized models created with Create ML.

The framework offers more than 25 different types of requests for tasks like detection, tracking, recognition, and evaluation.
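As a quick taste of the API’s shape, here is a minimal sketch that runs one of those built-in requests, VNClassifyImageRequest, against a UIImage; the 0.5 confidence threshold is an arbitrary choice for the example.

```swift
import Vision
import UIKit

// A minimal sketch: classifying the contents of a UIImage with
// Vision's built-in VNClassifyImageRequest.
func classify(_ image: UIImage) throws -> [VNClassificationObservation] {
    guard let cgImage = image.cgImage else { return [] }

    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // Keep only the reasonably confident labels.
    return (request.results ?? []).filter { $0.confidence > 0.5 }
}
```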

The following articles cover features offered by the Vision framework:

Detecting the contour of the edges of an image with the Vision framework
Learn how to detect and draw the edges of an image using the Vision framework in a SwiftUI app.
Classifying image content with the Vision framework
Learn how to use the Vision framework to classify images in a SwiftUI application.
Removing image background using the Vision framework
Learn how to use the Vision framework to easily remove the background of images in a SwiftUI application.
Scoring the aesthetics of an image with the Vision framework
Learn how to calculate the overall aesthetic score of an image with the Vision framework in a SwiftUI app.
Reading QR codes and barcodes with the Vision framework
Learn how to read information from QR codes and barcodes using the Vision framework in a SwiftUI app.
Using an Object Detection Machine Learning Model in an iOS App
By the end of this tutorial you will be able to use an object detection Core ML model in an iOS app with the Vision framework.
Using an Image Classification Machine Learning Model in an iOS App with SwiftUI
By the end of this tutorial you will be able to use an image classification Core ML model in an iOS app with SwiftUI.
Using an Image Classification Machine Learning Model in Swift Playgrounds
By the end of this tutorial you will be able to use an image classification Core ML model in Swift Playgrounds.
Using an Object Detection Machine Learning Model in Swift Playgrounds
By the end of this tutorial you will be able to use an object detection Core ML model in Swift Playgrounds with the Vision framework.

Explore the Natural Language Framework

The Natural Language framework enables the segmentation of text into distinct units, such as paragraphs, sentences, and words. Each of these units can be tagged with linguistic information, including part of speech, lexical class, lemma, script, and language.
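For example, here is a minimal sketch that tags each word of a sentence with its lexical class using NLTagger; the sample sentence is arbitrary.

```swift
import NaturalLanguage

// A minimal sketch: tagging each word with its lexical class
// (noun, verb, adjective, ...) using NLTagger.
let text = "Swift makes machine learning approachable."
let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = text

tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: [.omitPunctuation, .omitWhitespace]) { tag, range in
    if let tag {
        print("\(text[range]): \(tag.rawValue)")
    }
    return true // continue enumerating
}
```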

The following articles cover features of the Natural Language framework:

Identifying the Language in a Text using the Natural Language Framework
By the end of this article, you will be able to identify the dominant language in a piece of text using the Natural Language framework.
Applying sentiment analysis using the Natural Language framework
Use the Natural Language framework from Apple to apply sentiment analysis to text.
Lexical classification with the Natural Language framework
Learn how to identify nouns, adjectives, and more with the Natural Language framework on a SwiftUI app.
Calculating the semantic distance between words with the Natural Language framework
Use the Natural Language framework to find synonyms by calculating the semantic distance of words.

Explore the Speech Framework

The Speech framework enables the recognition of spoken words, such as verbal commands or dictation, from live audio captured by the device’s microphone or from prerecorded audio, converting it into transcribed text.
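A minimal sketch of the prerecorded-audio path might look like this, assuming the user has granted speech recognition permission and the URL points to an audio file:

```swift
import Speech

// A minimal sketch: transcribing a prerecorded audio file with
// SFSpeechRecognizer after requesting authorization.
func transcribe(fileAt url: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(), recognizer.isAvailable
        else { return }

        let request = SFSpeechURLRecognitionRequest(url: url)
        _ = recognizer.recognitionTask(with: request) { result, error in
            if let result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```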

Transcribing audio from live audio using the Speech framework
Learn how to create a SwiftUI application that transcribes audio to text using the Speech framework.
Transcribing audio from a file using the Speech framework
Learn how to transcribe text from an audio file using the Speech framework in a SwiftUI application.

Explore the Sound Analysis Framework

The Sound Analysis framework is a powerful tool designed to analyze and identify specific sounds within audio content. By recognizing and differentiating between various sounds, this framework significantly enhances the user experience.

For instance, the framework powers the sound recognition accessibility features in iOS, enabling the device to detect and notify users of crucial sounds such as doorbells, alarms, and crying babies. Moreover, the framework can recognize over 300 types of sounds and supports the creation of custom Core ML models, offering personalized sound recognition capabilities.
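As a sketch, classifying the sounds in an audio file with Apple’s built-in classifier could look like this; the observer simply prints the top label of each analysis window.

```swift
import SoundAnalysis

// A minimal sketch: prints the most confident classification
// produced for each window of audio.
final class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let best = result.classifications.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }
}

func analyze(fileAt url: URL) throws {
    let analyzer = try SNAudioFileAnalyzer(url: url)
    // .version1 refers to Apple's bundled sound classifier.
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let observer = ResultsObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze() // blocks until the whole file is processed
}
```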

Identify individual sounds in a live audio buffer
Learn how to create a SwiftUI app that uses the Sound Analysis framework to identify sounds within a live audio buffer.

Explore the Translation Framework

The latest addition to the family of frameworks powered by machine learning, the Translation framework enables in-app translations through either a built-in framework UI or a customized translation experience.
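The built-in UI route is a single SwiftUI modifier; here is a minimal sketch (iOS 17.4 and later) that presents the system translation sheet for a hardcoded sample string:

```swift
import SwiftUI
import Translation

// A minimal sketch: tapping the text presents the system
// translation UI via the translationPresentation modifier.
struct TranslateView: View {
    @State private var showTranslation = false
    let text = "Ciao, come stai?"

    var body: some View {
        Text(text)
            .onTapGesture { showTranslation = true }
            .translationPresentation(isPresented: $showTranslation, text: text)
    }
}
```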

Translating text in your SwiftUI app with the Translation framework
Discover how to use the Translation framework to provide text translation features within a SwiftUI app.
Using the Translation framework for language-to-language translation
Learn how to translate text to a specific language using on-device models with the Translation framework.
Checking language availability for translation with the Translation framework
Learn how to check availability of a language for translation using the Translation framework.

Creating Models with Create ML

The quality of models in machine learning is essential as they impact the effectiveness and reliability of the applications they power. Great models accurately capture the patterns and relationships in the data, leading to reliable predictions, classifications, or decisions.

Great machine-learning models are effective, reliable, and accurate.

Create ML Explained: Apple’s Toolchain to Build and Train Machine Learning Models
This article will help you understand the main features of Create ML and how you can create your own custom machine learning models.

Besides being a powerful app for training custom machine learning models, Create ML is also available as frameworks (the Create ML framework and Create ML Components) that support automating model creation and building on-device personalization.
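For instance, here is a minimal sketch of training an image classifier with the Create ML framework on macOS; the directory layout (one subfolder of images per label) and the file paths are placeholders.

```swift
import CreateML
import Foundation

// A minimal sketch (macOS): train an image classifier from a folder
// of labeled images, then export a Core ML model for use in an app.
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/TrainingData")
)
let classifier = try MLImageClassifier(trainingData: trainingData)

// Inspect training accuracy before shipping the model.
let accuracy = (1 - classifier.trainingMetrics.classificationError) * 100
print("Training accuracy: \(accuracy)%")

try classifier.write(to: URL(fileURLWithPath: "/path/to/ImageClassifier.mlmodel"))
```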

Here are some articles to explore the creation of machine learning models to be used within the Core ML framework.

Creating an Object Detection Machine Learning Model with Create ML
By the end of this tutorial, you will be able to build and train an object detection machine learning model with Create ML

Apple Intelligence

Writing Tools

Powered by Apple Intelligence, Writing Tools transforms how people compose, edit, and communicate across platforms, enhancing productivity and clarity with proofreading, multiple text rewrites, and instant summaries.

Exploring Apple Intelligence: Writing Tools
Understand Writing Tools, powered by Apple Intelligence.

An assistant for everything your users write, making the writing experience smoother and more enjoyable.

From proofreading and grammar checks to rewriting text in different tones and creating instant summaries, these tools work seamlessly everywhere users type, even with non-editable text, making writing clearer, more concise, and more compelling.
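In SwiftUI, opting a text view into (or out of) this behavior takes a single modifier on iOS 18 and later; a minimal sketch:

```swift
import SwiftUI

// A minimal sketch: enabling full inline Writing Tools support
// on a text editor with the writingToolsBehavior modifier.
struct NotesEditor: View {
    @State private var draft = ""

    var body: some View {
        TextEditor(text: $draft)
            .writingToolsBehavior(.complete) // .limited or .disabled also available
    }
}
```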

Supporting Writing Tools in your app
Learn how to access and manage Writing Tools support within text fields.
Keeping parts of the text unchanged by Writing Tools
Learn how to define parts of the text that Writing Tools should leave unchanged.

Image Generation

Generating images has become an integral part of communication for many people, and Apple Intelligence offers tools to support developers in integrating these features into their applications.

Exploring Apple Intelligence: Image Generation
Understand how image generation powered by Apple Intelligence is taking shape as system features.

Image Playground brings the power of image generation to your apps by providing users with the ability to create images directly within your application. All they need to do is provide a description and select a style.
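Presenting that experience from SwiftUI takes a single sheet modifier on iOS 18.1 and later; here is a minimal sketch with a hypothetical concept string:

```swift
import SwiftUI
import ImagePlayground

// A minimal sketch: presenting the system Image Playground sheet;
// the concept string seeds the image generation.
struct StickerMaker: View {
    @State private var showPlayground = false

    var body: some View {
        Button("Create an image") { showPlayground = true }
            .imagePlaygroundSheet(isPresented: $showPlayground,
                                  concept: "a friendly robot") { url in
                print("Generated image saved at \(url)")
            }
    }
}
```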

With Genmoji, users can create brand-new emoji using just their keyboard. Integrating Genmoji support into your applications is incredibly simple, and if you are working on communication apps in particular, don’t miss the opportunity to include it.

Siri Integration

Integrating your app with Siri and Apple Intelligence opens a revolutionary pathway for developers to enhance user experience and accessibility. By enabling your app to communicate with Siri, you create a seamless, intuitive interaction model that allows users to access your app's features through natural language commands.

Exploring Apple Intelligence: Talking with Siri
Understand how Siri is evolving, being powered by Apple Intelligence.

Integration with Siri is no longer a nice-to-have; it is an essential strategy for modern app development, providing users with a more fluid and intelligent way to engage with your application's capabilities.
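The bridge between Siri and your app’s functionality is the App Intents framework; here is a minimal sketch of a hypothetical intent that opens the app’s favorites feature:

```swift
import AppIntents

// A minimal sketch: a hypothetical App Intent that Siri and
// Shortcuts can invoke to open the app's favorites screen.
struct ShowFavoritesIntent: AppIntent {
    static var title: LocalizedStringResource = "Show Favorites"
    static var openAppWhenRun: Bool = true

    @MainActor
    func perform() async throws -> some IntentResult {
        // App-specific navigation to the favorites screen goes here.
        return .result()
    }
}
```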

Using App Intents in a SwiftUI app
Learn how to add App Intents to your SwiftUI app enabling users to reach its main functionalities from anywhere on the phone.

Third-party tools

Besides the native tools provided by Apple, developers can bring artificial intelligence features into their applications by integrating third-party services or by delegating machine learning tasks to a server.
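Whichever route you take, the integration usually boils down to an authenticated HTTPS request. Here is a minimal sketch of calling OpenAI’s chat completions endpoint directly with URLSession; the API key is a placeholder and the response handling is deliberately simplified.

```swift
import Foundation

// A minimal sketch: send a prompt to OpenAI's chat completions
// endpoint and return the raw JSON response data.
func complete(prompt: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer YOUR_API_KEY", forHTTPHeaderField: "Authorization") // placeholder
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = [
        "model": "gpt-3.5-turbo",
        "messages": [["role": "user", "content": prompt]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    return data // decode with Codable in a real app
}
```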

The following collection of articles covers the integration of different services in app development with Swift and SwiftUI.

Using server-side Swift for machine learning processing
In this tutorial, learn how to use a machine learning model in a Vapor server using Swift.
Creating a SwiftUI App to generate Text Completions with GPT-3.5 through the OpenAI API
Understand how to use the OpenAI Swift Package to connect with the OpenAI API to generate text completions within your SwiftUI app.
Creating a SwiftUI App to generate images with Dall-E through the OpenAI API
Understand how to use the OpenAI Swift Package to connect with the OpenAI API to generate images within your SwiftUI app.
Generating Images with Stable Diffusion on Apple Silicon with Core ML
Understand how to use Apple’s Core ML Stable Diffusion Package to generate images with Stable Diffusion on Apple Silicon using Swift.
Creating a SwiftUI App to interact with the OpenAI ChatGPT API
Understand how to use the OpenAISwift Package to easily connect with the OpenAI API to create your own ChatGPT SwiftUI app.
Prototyping SwiftUI interfaces with OpenAI’s ChatGPT
Understand how to use OpenAI’s ChatGPT conversational machine learning model to create working code for SwiftUI apps within a few minutes.