Mux Real-Time Video has been sunset and is unavailable for new usage. Existing access will end on December 31, 2023. We recommend migrating your application to our partner, LiveKit. Please reach out to real-time-video@mux.com if you need more help or details.
This guide contains instructions for setting up the Mux Spaces Swift SDK in a SwiftUI-based iOS app Xcode project. By the end of the guide, you'll have a working app that can connect to a Space and send and receive audio and video from the Space's participants.
Understand core abstractions
Understand the concepts of working with Mux Real-Time Video.
Application setup
Set up your iOS application with Xcode and the Spaces SDK with Swift Package Manager.
Join a space
Join a Mux space using a signed JWT.
Render participants
Build a UI to show the current participants.
Publish your camera and microphone
Publish the local participant's audio and video.
A Space is the basic abstraction for creating real-time communications with Mux. In order for clients to authenticate to a space, they need to provide a signed JSON Web Token, or JWT. For more information about Signing JWTs, refer to this guide.
A participant is an abstraction for a single user connected to a space. A participant can be subscriber-only, or can be a publisher that sends one or more streams of audio or video media in the form of a track.
A track is a single stream of media (audio or video). A participant can publish one or more tracks of media.
A space must be created either through the Mux dashboard or via the Mux API. See the Create a space section of the Real-Time Video guide for more details about creating a space.
If you already have Mux Access Tokens and just want to create a space from the command-line, use this command.
curl https://api.mux.com/video/v1/spaces \
-H "Content-Type: application/json" \
-X POST \
-u "${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}"
To join a Space, we will need a Signed JSON Web Token (JWT). See the Sign a JWT section of the Real-Time Video guide.
In order to complete this example you should have the latest Xcode development tools, an Apple Developer account, and at least one hardware device to test on.
Download and open the SDK API documentation in Xcode for easy browsing and reference.
If you are running iOS 16 or higher, you'll need to enable developer mode on your device in order to build and run an app from Xcode. Devices running iOS 15 will need to enable the appropriate trust settings in order to build and run an app from Xcode. For more on code-signing see the Apple Support guide here.
Let's take a moment to think through how our app will work.
When the app launches, it will display a button that triggers a call to the SDK to join a space. Once the app joins, it will show the local participant's captured video as well as any remote participants who are also in the space. To keep things simple, we'll arrange the videos in a grid.
The Xcode simulator is unable to capture video or audio from a webcam or microphone, so to test your app during development you will need to run it on a physical device.
As you're going through these steps, we recommend keeping the SDK API documentation open in Xcode so you can look up more information about the SDK APIs you'll be using.
Create a new Xcode project. We'll select the iOS platform and App application template.
Enter the name of your app, and make sure to select SwiftUI as the Interface and Swift as the Language.
Install the Mux Spaces Swift SDK for iOS by following Apple's documentation for adding a dependency. We will include both the MuxSpaces and MuxSpacesUX package products when installing the SDK.
Use this URL when adding the package, and for the Dependency Rule select Branch and use main as the branch name.
https://github.com/muxinc/mux-spaces-sdk-swift-distribution
When choosing Package Products for adding the SDK to your app target, make sure to select the MuxSpaces and MuxSpacesUX Package Products. MuxSpacesReplayKit is used when screensharing.
Verify that the MuxSpaces package is installed correctly in the list of Package Dependencies in the Project navigator on the left side of Xcode, and that both MuxSpaces and MuxSpacesUX show up in the list of Frameworks, Libraries, and Embedded Content on the General tab when your app target is selected.
If you're not using Swift Package Manager to manage dependencies and you need another way to install the Spaces Swift SDK for iOS, please let us know at real-time-video@mux.com.
We're using the MuxSpaces SDK to publish audio and video, so we'll need to configure our app for accessing the on-device camera and microphone.
Your app must be configured to request microphone and camera permissions before accessing them. When the request is made to your user, they will be presented with a description of why your app needs these permissions. Include these descriptions in the Info section of your app target as the NSCameraUsageDescription and NSMicrophoneUsageDescription keys.
In previous project templates, an Info.plist file was created along with the project. This is no longer the case; these keys should instead be supplied in the Info section of your app's target.
A missing description will result in the operating system terminating your app. For more details see the Apple documentation on Requesting Authorization for Media Capture on iOS.
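The SDK handles camera and microphone capture for you, and iOS will prompt the user automatically the first time capture starts. If you want to check or request authorization yourself before joining a space, you can use Apple's AVCaptureDevice API directly. The helper below is an optional sketch; the function name is ours, not part of the Spaces SDK.
import AVFoundation

// Optional: request camera and microphone access up front and report whether
// both were granted. iOS will otherwise prompt automatically on first capture.
func requestCaptureAuthorization(completion: @escaping (Bool) -> Void) {
    AVCaptureDevice.requestAccess(for: .video) { cameraGranted in
        AVCaptureDevice.requestAccess(for: .audio) { microphoneGranted in
            DispatchQueue.main.async {
                completion(cameraGranted && microphoneGranted)
            }
        }
    }
}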
In order to join a space, you'll need to provide a Signed JWT to the SDK. Follow this guide to learn more.
Your new SwiftUI project has a ContentView.swift file that we'll be adding our app code to. Feel free to break this up into multiple files as your project expands after finishing this guide.
At the top of ContentView.swift, add the following imports in addition to importing SwiftUI.
import Combine
import MuxSpaces
import MuxSpacesUX
For keeping track of state, we'll use an ObservableObject. Let's call it SpaceModel.
let spacesToken: String = "Replace With Your Own Token"

class SpaceModel: ObservableObject {
    var space: Space = try! Space(token: spacesToken)

    func join() {
        self.space.join()
    }

    func leave() {
        self.space.leave()
    }
}
Don't forget to replace spacesToken with your own JWT.
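Hardcoding the token is fine for this guide, but in a production app you would typically fetch a short-lived JWT from your own backend instead of shipping one in the binary. Here's a minimal sketch; the https://example.com/spaces-token endpoint and its JSON response shape are hypothetical placeholders for whatever your server provides.
import Foundation

struct SpacesTokenResponse: Decodable {
    let token: String
}

// Fetches a signed Spaces JWT from your own backend.
// The URL and response format are placeholders; adapt them to your server.
func fetchSpacesToken() async throws -> String {
    let url = URL(string: "https://example.com/spaces-token")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(SpacesTokenResponse.self, from: data).token
}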
Replace the provided ContentView with the following snippet.
struct ContentView: View {
    @StateObject var model: SpaceModel = SpaceModel()

    var body: some View {
        NavigationView {
            VStack {
                NavigationLink(
                    destination: SpaceView(),
                    label: {
                        Text("Join Space")
                            .bold()
                    })
            }
        }
        .navigationViewStyle(.stack)
        .environmentObject(model)
    }
}
Our SpaceModel is initialized as an instance variable. We use the StateObject property wrapper because model is intended to be a single source of truth for the state of the SwiftUI app we're building. For more info, see the SwiftUI StateObject documentation or the Data Essentials in SwiftUI video from WWDC 2020.
The default SwiftUI view body is replaced with a NavigationView linking to the SpaceView. It also stores the model as an environment object using the environmentObject view modifier, which makes it available throughout the NavigationView's view subhierarchy.
Now let's add the SpaceView below the ContentView. This is the destination of the NavigationLink in the ContentView.
struct SpaceView: View {
    @EnvironmentObject var model: SpaceModel

    var body: some View {
        ZStack {
            // ParticipantsView()
        }.onAppear {
            model.join()
        }.onDisappear {
            model.leave()
        }
    }
}
The SpaceView uses an @EnvironmentObject property wrapper to access SpaceModel from the environment and calls the join and leave methods on the model when the view appears and disappears. For more on this, see the Environment Object SwiftUI documentation.
Build and run your project. At this point, the app should compile and run on your phone. When you run the app it will display a "Join Space" button that navigates to a blank screen. Next you'll display the current participants in the space.
The SpaceView above has a commented-out ParticipantsView within the ZStack. Go ahead and uncomment that and paste in the ParticipantsView from below.
struct ParticipantsView: View {
    @EnvironmentObject var model: SpaceModel

    let layout = [
        GridItem(.flexible()),
    ]

    var body: some View {
        ScrollView {
            LazyVGrid(columns: layout, spacing: 20) {
                ForEach(model.participants) { participant in
                    ParticipantView(participant: participant)
                }
            }
            .padding(.vertical, 10)
            .padding(.horizontal, 20.0)
        }
    }
}
The ParticipantsView accesses the SpaceModel from the environment and observes changes to its state using the @EnvironmentObject property wrapper. It then renders each participant, one after the other, in a flexible grid. We haven't yet added the list of participants to the model, but we'll do that soon.
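The single flexible GridItem above lays participants out one per row. If you'd rather show two participants per row once more people join, you could swap in two flexible columns; this is purely a SwiftUI layout choice, not something the SDK requires.
// Two flexible columns: participants are laid out two per row.
let layout = [
    GridItem(.flexible()),
    GridItem(.flexible()),
]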
Now add the ParticipantView that renders each participant in the grid.
struct ParticipantView: View {
    var participant: Participant

    var body: some View {
        ZStack {
            if let videoTrack = participant.videoTracks.first?.value, videoTrack.hasMedia {
                SpacesVideo(track: videoTrack)
            } else {
                Text(participant.id)
                    .font(.system(size: 24))
                    .padding(.horizontal)
            }
        }
        .frame(minHeight: 200.0)
    }
}
The ParticipantView receives a Participant instance and displays a video inside SpacesVideo if the video is available. SpacesVideo is a container that bridges the SpacesVideoView API exposed by the SDK to SwiftUI via a UIHostingView. If there's no video, the id of the participant is displayed.
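You don't need to write that bridge yourself because the SDK ships SpacesVideo, but for context, wrapping a UIKit view for use in SwiftUI generally looks like the sketch below. The SpacesVideoView initializer, its track property, and the VideoTrack type used here are assumptions made for illustration; check the SDK API documentation for the actual names.
import SwiftUI
import MuxSpaces

// Illustrative only: bridging a UIKit-based video view into SwiftUI.
// The SDK's SpacesVideo already does this for you; the SpacesVideoView
// initializer and track property below are assumed, not confirmed API.
struct VideoTrackView: UIViewRepresentable {
    let track: VideoTrack

    func makeUIView(context: Context) -> SpacesVideoView {
        let view = SpacesVideoView()   // assumed initializer
        view.track = track             // assumed property
        return view
    }

    func updateUIView(_ uiView: SpacesVideoView, context: Context) {
        uiView.track = track
    }
}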
Now let's modify SpaceModel from earlier to include a list of participants and event handlers for populating that list. Below is the complete SpaceModel.
let spacesToken: String = "Replace With Your Own Token"

class SpaceModel: ObservableObject {
    var space: Space = try! Space(token: spacesToken)
    var cancellables: Set<AnyCancellable> = []

    // The list of participants rendered by ParticipantsView.
    @Published var participants: [Participant] = []

    init() {
        // After a successful join, show the local participant first and
        // publish the microphone and camera.
        space.events.joinSuccess.sink { [self] event in
            self.participants.insert(event.localParticipant, at: 0)

            let microphone = space.makeMicrophoneCaptureAudioTrack(
                options: AudioCaptureOptions()
            )
            let camera = space.makeCameraCaptureVideoTrack(
                options: CameraCaptureOptions()
            )

            space.publishTrack(microphone) { error in }
            space.publishTrack(camera) { error in }
        }.store(in: &cancellables)

        // Add remote participants as they join.
        space.events.participantJoined.sink { event in
            self.participants.append(event.participant)
        }.store(in: &cancellables)

        // Remove participants when they leave.
        space.events.participantLeft.sink { event in
            self.participants.removeAll { $0.id == event.participant.id }
        }.store(in: &cancellables)

        // Refresh a participant's entry when they publish a video track.
        space.events.videoTrackPublished.sink { event in
            self.participants = self.participants.map { participant in
                if participant.id == event.participant.id {
                    return event.participant
                }
                return participant
            }
        }.store(in: &cancellables)

        // Refresh a participant's entry when one of their video tracks is subscribed to.
        space.events.videoTrackSubscribed.sink { event in
            self.participants = self.participants.map { participant in
                if participant.id == event.participant.id {
                    return event.participant
                }
                return participant
            }
        }.store(in: &cancellables)
    }

    func join() {
        self.space.join()
    }

    func leave() {
        self.space.leave()
    }
}
The difference is that the model now keeps track of a list of participants. It listens for events on the space, adding and removing participants as they join and leave, and updating a participant's entry when a track is published or subscribed. This code also publishes the camera and microphone after a successful join.
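The publishTrack completion handlers above ignore errors to keep the example short. In a real app you'll likely want to at least log a failure. Assuming the completion's error parameter is optional, that could look like this:
space.publishTrack(camera) { error in
    // A non-nil error means the camera track could not be published.
    if let error = error {
        print("Failed to publish camera track: \(error)")
    }
}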
Try building and running the app. Tap on the Join Space button. You should see a video from your front camera appear.
If you have a second device to test with, try building and running the app again on the second device to see multiple participants in the same space.