Swift 6 Concurrency: The Fundamentals (1/3)

Damjan Dabo

17 minute read in Development

Published on February 12, 2026

Part 1 of 3: Master async/await, actors, and thread safety

Introduction: Why Concurrency Matters

Imagine you're running a restaurant kitchen. You could cook each dish one at a time—chop vegetables for dish A, cook dish A, plate dish A, then start on dish B. This serial approach is simple and safe, but customers would wait forever.

Or, you could run the kitchen like a real restaurant: multiple chefs working simultaneously—one chopping vegetables, another grilling, another plating. This parallel approach serves customers faster, but introduces complexity. What if two chefs reach for the same knife? What if the plating chef starts before the grilling chef finishes?

This is concurrency in a nutshell. And the performance gains are real. In one of our examples, we took three operations that each take 2 seconds:

  • Serial execution: 3 tasks × 2 seconds = 6 seconds total
  • Parallel execution: All 3 tasks simultaneously = 2 seconds total

That's a 3x performance improvement just by doing work concurrently. For your users, that means faster API calls, smoother scrolling, and a more responsive app.

Why Swift 6 Matters

Swift 6's concurrency features aren't just about speed—they're about correctness, safety, and long-term sustainability:

  1. Fewer Crashes & Data Race Prevention: Swift 6's compiler catches data races (when multiple tasks access the same data unsafely) before your app ships. No more mysterious crashes that only happen under load. This compile-time safety is Swift 6's unique contribution—async patterns have enabled non-blocking operations for years, but Swift 6 ensures they're implemented safely.

  2. Better User Experience Through Safety: Users get a more stable, predictable app. When concurrent operations are guaranteed safe by the compiler, your app is less prone to race conditions, unexpected state changes, and hard-to-reproduce bugs that degrade the user experience.

  3. Modern Codebase: The async/await syntax is cleaner and easier to understand than callbacks or completion handlers. New team members ramp up faster.

  4. Future-Proof: Apple's frameworks are all moving to async/await. SwiftUI, SwiftData, and networking APIs all embrace this model.

  5. Reduced Technical Debt: This is crucial for CTOs and technical leadership. Adopting Apple's standard concurrency model means you avoid accumulating technical debt that compounds over time:

    • No custom solutions to maintain: Custom GCD patterns, operation queue architectures, or third-party concurrency frameworks all require ongoing maintenance as iOS evolves. Swift 6 concurrency is maintained by Apple—you get improvements and bug fixes for free with each iOS release.
    • Ecosystem alignment: Every new Apple framework—SwiftUI, SwiftData, StoreKit, CloudKit—uses async/await natively. Not adopting means building and maintaining translation layers indefinitely. Each new API release adds to this maintenance burden.
    • Lower onboarding costs: New iOS developers learn async/await as the industry standard. Hiring gets easier, and ramp-up time decreases dramatically when your codebase uses patterns developers already know. Legacy concurrency patterns require dedicated training time and institutional knowledge.
    • Avoiding forced migrations: Apple's history shows they eventually deprecate old patterns. Teams that adopted async/await early avoided expensive, urgent rewrites when deprecations hit. Getting ahead of the curve is always cheaper than being forced to catch up under deadline pressure.
    • Community momentum: The iOS ecosystem is standardizing on async/await. Third-party libraries, blog posts, Stack Overflow answers, conference talks—everything assumes you're using modern concurrency. Fighting this tide increases your long-term maintenance burden and isolates your team from community knowledge.

What's New in Swift 6

Swift's concurrency journey has been evolutionary. Let's look at how the same task—fetching a user, then their posts, then updating the UI—has evolved over the years:

The Old Way - Callbacks (2010s)

This approach chains callbacks together, leading to deeply nested code that's hard to read and error-prone:

fetchUser { result in
    fetchPosts(for: result.user) { posts in
        updateUI(with: posts) { _ in
            // Callback hell 🔥
        }
    }
}

The Better Way - Combine (2019)

Combine improved this with a reactive pipeline, but still required careful management of publishers and subscriptions:

fetchUser()
    .flatMap { fetchPosts(for: $0) }
    .receive(on: DispatchQueue.main)
    .sink { posts in updateUI(with: posts) }

The Swift 6 Way - async/await (based on Swift 5.5)

async/await was introduced in Swift 5.5 (September 2021). Swift 6 builds on this foundation by making concurrency safety mandatory through strict checking.

Now the same operation reads like synchronous code, but executes asynchronously. No nesting, no publisher management:

@MainActor
func loadData() {
    Task {
        let user = await fetchUser()
        let posts = await fetchPosts(for: user)
        updateUI(with: posts)  // Automatically on main thread (Task inherits @MainActor)
    }
}

Swift 6 enforces strict concurrency checking by default, catching data races at compile time. The compiler won't let you accidentally share mutable state across threads. It's like having a safety inspector in your kitchen making sure two chefs never reach for the same knife.

This checking has been available as an opt-in compiler flag since Swift 5.7 (September 2022), but Swift 6 makes it mandatory, ensuring all code passes concurrency safety checks.
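
For a concrete taste of what the inspector catches, here's a minimal sketch (the Scoreboard type and tally function are made-up names) of the kind of code Swift 6 refuses to compile, because a mutable, non-Sendable object is shared with a concurrently executing task:

// A plain class is not Sendable, so sharing one instance across tasks is unsafe
class Scoreboard {
    var points = 0
}

func tally() {
    let board = Scoreboard()

    Task.detached {
        // Swift 6 language mode rejects this capture: 'board' is a
        // non-Sendable reference crossing into a concurrent closure
        board.points += 1
    }

    board.points += 1  // ...while the original context can still touch it
}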


Async/Await Fundamentals

Let's start with the basics. This example shows a ViewModel that fetches user data from a server. It demonstrates how to create an async Task, handle errors with do/catch, manage loading states, and automatically update the UI on the main thread using @MainActor:

@MainActor
class UserViewModel: ObservableObject {
    @Published var userData: String = ""
    @Published var isLoading: Bool = false
    @Published var errorMessage: String?

    func fetchUserData() {
        isLoading = true
        errorMessage = nil

        Task {
            do {
                // Fetch data from server
                let user = try await fetchUserFromServer()

                // Update UI - automatically on main thread thanks to @MainActor
                self.userData = user
                self.isLoading = false
            } catch {
                self.errorMessage = error.localizedDescription
                self.isLoading = false
            }
        }
    }

    private func fetchUserFromServer() async throws -> String {
        // Simulate network delay
        try await Task.sleep(for: .seconds(2))
        return "John Doe"
    }
}

Key concepts:

  1. async keyword: Marks a function as asynchronous—it can be suspended while waiting for results
  2. await keyword: Marks a suspension point where execution can pause
  3. Task { }: Creates a new concurrent task that runs asynchronously
  4. @MainActor: Ensures all methods run on the main thread (perfect for UI classes)

The beauty of this approach? Your code reads sequentially even though it executes asynchronously. No callback pyramids, no completion handler juggling.


Serial vs Parallel: Real Performance Gains

Now for the performance magic. Let's say you need to fetch data from three different endpoints. We'll compare two approaches to see the dramatic performance difference.

Serial Approach (tasks run one after another)

This example runs three tasks sequentially—each must complete before the next begins. Notice how we await each call in turn, so nothing starts until the previous operation has finished:

func fetchDataSerially() async {
    let start = Date()

    let result1 = await performTask(1)  // Takes 2 seconds
    let result2 = await performTask(2)  // Takes 2 seconds
    let result3 = await performTask(3)  // Takes 2 seconds

    let duration = Date().timeIntervalSince(start)
    // Duration: ~6 seconds
}

Parallel Approach (tasks run simultaneously)

Now the same three tasks using async let. All three start immediately and run concurrently. We only wait when we need the actual results:

func fetchDataInParallel() async {
    let start = Date()

    // All three tasks start immediately
    async let result1 = performTask(1)
    async let result2 = performTask(2)
    async let result3 = performTask(3)

    // Wait for all to complete
    let results = await [result1, result2, result3]

    let duration = Date().timeIntervalSince(start)
    // Duration: ~2 seconds (3x faster!)
}

The async let syntax is Swift's magic for parallel execution. Each task starts immediately, and we only wait when we actually need the results.

When to use TaskGroup (dynamic number of tasks)

For situations where you don't know how many tasks you'll need upfront, use withTaskGroup. This example fetches data for 10 IDs and processes results as they complete:

await withTaskGroup(of: String.self) { group in
    for id in 1...10 {
        group.addTask {
            await fetchData(for: id)
        }
    }

    // Collect results as they complete
    for await result in group {
        processResult(result)
    }
}
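
If you need the collected results rather than processing them as a stream, withTaskGroup can also return a value. A small sketch, reusing the same hypothetical fetchData helper:

let results = await withTaskGroup(of: String.self, returning: [String].self) { group in
    for id in 1...10 {
        group.addTask {
            await fetchData(for: id)
        }
    }

    // Gather results (they arrive in completion order, not submission order)
    var collected: [String] = []
    for await result in group {
        collected.append(result)
    }
    return collected
}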

Task Cancellation

In the real world, users change their minds. They navigate away before a download finishes, or cancel a search before results arrive. Swift's task cancellation is cooperative—your code decides when to check for cancellation.

This example shows a 10-second countdown that can be cancelled mid-execution. The task periodically checks Task.isCancelled and performs cleanup when cancellation is detected:

@MainActor
class DownloadViewModel: ObservableObject {
    @Published var countdown: Int = 10
    @Published var wasCancelled: Bool = false
    private var task: Task<Void, Never>?

    func startDownload() {
        task = Task {
            for i in (0...10).reversed() {
                // Check if user cancelled
                if Task.isCancelled {
                    wasCancelled = true
                    return  // Clean up and exit
                }

                countdown = i
                try? await Task.sleep(for: .seconds(1))
            }
            // Download complete!
        }
    }

    func cancel() {
        task?.cancel()
    }
}

Important: Cancellation isn't forced—you must check Task.isCancelled periodically. This gives you control over cleanup. Maybe you want to save partial progress or log analytics before exiting.
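
If you'd rather treat cancellation as an error than check a flag, Task.checkCancellation() throws a CancellationError at the point of the check. A minimal sketch of the same idea with cleanup in a defer block (saveProgress is a hypothetical hook):

func downloadWithCleanup() async throws {
    defer {
        // Runs whether we finish normally or exit early due to cancellation
        saveProgress()
    }

    for chunk in 1...10 {
        // Throws CancellationError if the task has been cancelled
        try Task.checkCancellation()
        try await Task.sleep(for: .seconds(1))
        print("Downloaded chunk \(chunk)")
    }
}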

Structured Concurrency and Automatic Cancellation

In structured concurrency, cancelling a parent task automatically cancels all child tasks:

let parentTask = Task {
    async let child1 = longRunningWork()
    async let child2 = moreWork()

    await (child1, child2)  // Both children auto-cancelled if parent cancels
}

parentTask.cancel()  // Cancels parent AND children

Exception - Detached Tasks:

Task.detached creates tasks outside the parent's cancellation scope. These must be cancelled manually:

let detached = Task.detached {
    await independentWork()  // NOT auto-cancelled with parent
}

// Must explicitly cancel detached tasks
detached.cancel()

This is rarely needed—prefer structured concurrency with automatic cancellation unless you have a specific reason for independent task lifecycles.


The Data Race Problem

Here's where things get interesting. What happens when multiple tasks try to modify the same data simultaneously?

This example demonstrates a data race by running 1000 concurrent increments on a simple counter class. (To observe the race at runtime, the snippet has to be built without Swift 6's strict checking; in Swift 6 language mode the compiler rejects the unsafe capture outright.) Watch what happens to the final count:

// ❌ UNSAFE - Data race!
class UnsafeCounter {
    var count = 0

    func increment() {
        count += 1
    }
}

// Run 1000 concurrent increments
let counter = UnsafeCounter()
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<1000 {
        group.addTask {
            counter.increment()
        }
    }
}

print(counter.count)  // Expected: 1000, Actual: 923 😱

Why did we lose 77 increments? Because count += 1 isn't atomic—it's actually three operations:

  1. Read current value
  2. Add 1
  3. Write new value

When multiple tasks run these steps simultaneously, they overwrite each other's changes. This is a data race, and it's one of the most insidious bugs in concurrent programming.


Actor Isolation

Swift's solution? Actors. An actor is like a bouncer for your data—only one task at a time gets access.

Here's the exact same test (1000 concurrent increments) but using an actor instead of a class. Notice how the actor guarantees correct results:

// ✅ SAFE - Actor serializes access
actor SafeCounter {
    var count = 0

    func increment() {
        count += 1
    }

    func getCount() -> Int {
        return count
    }
}

// Same test - 1000 concurrent increments
let counter = SafeCounter()
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<1000 {
        group.addTask {
            await counter.increment()  // Note the 'await'
        }
    }
}

let finalCount = await counter.getCount()
print(finalCount)  // Always 1000! ✅

Key differences:

  • actor instead of class
  • Calling actor methods requires await
  • The actor serializes access—tasks wait their turn
  • No data races possible

The await isn't just ceremony—it's telling you "this might suspend while we wait for the actor to be available."

Trade-off: Actors are safer but slightly slower because tasks must wait. For most apps, this overhead is negligible compared to the cost of debugging data races.

⚠️ Important: Actor Reentrancy

The bouncer analogy is helpful but incomplete. Unlike a real bouncer who handles one person at a time from start to finish, actors can suspend and allow other tasks to enter. This happens at every await point in an actor method.

Example - The Banking Bug:

actor BankAccount {
    var balance = 1000

    func withdraw(_ amount: Int) async -> Bool {
        print("[\(amount)] Checking balance: \(balance)")
        guard balance >= amount else {
            print("[\(amount)] Insufficient funds")
            return false
        }

        print("[\(amount)] Balance OK, calling network...")
        // 🚨 SUSPENSION POINT - Another task can enter here!
        try? await Task.sleep(for: .milliseconds(100))

        print("[\(amount)] Back from network, deducting. Current balance: \(balance)")
        balance -= amount
        print("[\(amount)] New balance: \(balance)")
        return true
    }
}

// Both withdrawals pass the guard, balance becomes negative!
let account = BankAccount()
async let first = account.withdraw(800)
async let second = account.withdraw(800)
_ = await (first, second)

// Output:
// [800] Checking balance: 1000
// [800] Balance OK, calling network...
// [800] Checking balance: 1000      ← Second task enters during suspension!
// [800] Balance OK, calling network...
// [800] Back from network, deducting. Current balance: 1000
// [800] New balance: 200
// [800] Back from network, deducting. Current balance: 200
// [800] New balance: -600           ← Oops! 😱

Key insight: Actors serialize access between suspension points, not across entire methods. Every await is a potential re-entry point where the actor's state might change. To prevent bugs like this:

  • Check conditions again after suspensions (see the sketch after this list)
  • Avoid suspending between check and mutation
  • Redesign to make operations atomic (no awaits between related state changes)
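
Here's a minimal sketch of the first approach applied to the same BankAccount—re-checking the balance after the suspension point so a concurrent withdrawal can no longer slip through:

actor BankAccount {
    var balance = 1000

    func withdraw(_ amount: Int) async -> Bool {
        guard balance >= amount else { return false }

        // Suspension point - other tasks may run on this actor meanwhile
        try? await Task.sleep(for: .milliseconds(100))

        // Re-check: the balance may have changed while we were suspended
        guard balance >= amount else { return false }

        // No awaits between this check and the mutation, so they're atomic on the actor
        balance -= amount
        return true
    }
}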

The Sendable Protocol

Not all types are safe to pass between concurrent contexts. The Sendable protocol marks types that can be safely shared across actor boundaries and concurrent tasks.

Value types are automatically Sendable

Structs and enums are copied rather than shared, so they're automatically Sendable as long as their stored properties are themselves Sendable. Each task gets its own independent copy:

// ✅ Safe - Structs are copied, not shared
struct UserData: Sendable {
    let id: Int
    let name: String
}

Task.detached {
    let user = UserData(id: 1, name: "Alice")
    // Each task gets its own copy
}

Reference types need careful handling

Classes are reference types—multiple variables can point to the same instance. To make them Sendable, you need manual synchronization with locks:

// ❌ NOT Sendable - Mutable class
class MutableUser {
    var name: String  // Mutable = dangerous
    init(name: String) { self.name = name }
}

// ✅ Sendable with manual synchronization
final class ThreadSafeCounter: @unchecked Sendable {
    private let lock = NSLock()
    private var _count: Int = 0

    var count: Int {
        lock.lock()
        defer { lock.unlock() }
        return _count
    }

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        _count += 1
    }
}

The @unchecked Sendable annotation tells the compiler: "I promise this is safe. I'm handling synchronization myself with locks."

⚠️ Treat like force unwrapping (!): @unchecked Sendable disables compiler safety checks. Only use when you're absolutely certain the type is thread-safe and can prove it with documentation or locks. Like force unwrapping, it's a promise to the compiler that can cause crashes and data races if wrong. Prefer actors whenever possible.

Real-world example - Value vs Reference semantics

This comparison shows why structs are preferred for concurrent code. We'll run 50 concurrent tasks that modify configuration data—one using a struct (safe), the other using a class (data race):

// Value type (struct) - Safe
struct Config: Sendable {
    let apiKey: String
    let timeout: Double
}

let config = Config(apiKey: "secret", timeout: 30)

// Each task gets a COPY
await withTaskGroup(of: Void.self) { group in
    for i in 1...50 {
        group.addTask {
            var localConfig = config
            // Can't affect other tasks - it's a copy!
        }
    }
}

// Reference type (class) - Dangerous
class MutableConfig {
    var apiKey: String
    var timeout: Double
    init(apiKey: String, timeout: Double) {
        self.apiKey = apiKey
        self.timeout = timeout
    }
}

let mutableConfig = MutableConfig(apiKey: "secret", timeout: 30)

// All tasks share the SAME instance - data race!
await withTaskGroup(of: Void.self) { group in
    for i in 1...50 {
        group.addTask {
            mutableConfig.apiKey = "key_\(i)"  // Race condition!
        }
    }
}
// Final apiKey is unpredictable

Rule of thumb: Prefer immutable structs for data you need to pass between tasks. Use actors when you need shared mutable state.
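
Putting both halves of that rule into one picture, here's a small sketch (the Reading and ReadingStore names are made up): immutable structs cross the task boundary, and an actor guards the one piece of shared mutable state:

struct Reading: Sendable {
    let sensorID: Int
    let value: Double
}

actor ReadingStore {
    private var readings: [Reading] = []

    func add(_ reading: Reading) {
        readings.append(reading)
    }

    func count() -> Int {
        readings.count
    }
}

let store = ReadingStore()

await withTaskGroup(of: Void.self) { group in
    for id in 1...50 {
        group.addTask {
            // The struct is copied into the task; the actor serializes the appends
            await store.add(Reading(sensorID: id, value: Double(id) * 0.5))
        }
    }
}

print(await store.count())  // Always 50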


MainActor for UI Updates

In iOS development, all UI updates must happen on the main thread. Swift 6 makes this explicit with @MainActor.

This example shows a ViewModel where most methods automatically run on the main thread, but we can opt out for expensive background work using nonisolated:

@MainActor
class ViewModel: ObservableObject {
    @Published var data: String = ""

    // Automatically runs on main thread
    func updateUI() {
        self.data = "Updated"
    }

    // Opt out for background work
    nonisolated func performExpensiveWork() async {
        // Heavy computation on background thread
        var result = 0
        for i in 0..<1_000_000 {
            result += i
        }

        // Explicitly hop back to main thread for UI update
        await MainActor.run {
            self.data = "Result: \(result)"
        }
    }
}

Key concepts:

  • @MainActor on the class: All methods run on main thread by default
  • nonisolated: Opts out—this method can run on any thread
  • MainActor.run { }: Explicitly run a closure on the main thread

Swift 6.2: @concurrent Attribute

Swift 6.2 introduces @concurrent to control isolation inheritance for nonisolated async functions. With the NonisolatedNonsendingByDefault feature flag enabled, nonisolated async methods inherit the caller's actor isolation by default. Use @concurrent to opt out:

class DataProcessor {
    // Without @concurrent: inherits the caller's actor (might block MainActor!)
    nonisolated func processInheriting() async {
        // Heavy work - but runs on the caller's actor
        await heavyComputation()
    }

    // With @concurrent: runs independently, not inheriting the caller's isolation
    @concurrent nonisolated func processConcurrently() async {
        // Heavy work - runs off the caller's actor
        await heavyComputation()
    }
}

// Called from MainActor
@MainActor
func updateUI() {
    Task {
        // processInheriting() would run on the MainActor and block it during heavy work
        // processConcurrently() does not inherit MainActor isolation
        await processor.processConcurrently()
    }
}

When to use:

  • Add @concurrent when a nonisolated async method should NOT inherit the caller's isolation
  • Use for expensive operations that shouldn't block the caller's actor
  • Requires Swift 6.2+ and NonisolatedNonsendingByDefault feature flag

Future default: This behavior will eventually become standard in Swift, with migration tools automatically adding @concurrent where needed.

Task context inheritance

Understanding how Tasks inherit their execution context is crucial. Task inherits the current actor context, while Task.detached creates a new independent context:

@MainActor
class WorkViewModel {
    var status = ""

    func doWork() {
        // We're on the main thread

        Task {
            // Still on main thread! Task inherits MainActor context
            status = "Working"
        }

        Task.detached {
            // Runs independently, not inheriting MainActor context
            // Can't touch MainActor-isolated properties directly here
            let result = await heavyComputation()

            await MainActor.run {
                // Explicitly hop to the main thread for the UI update
                self.status = "Done: \(result)"
            }
        }
    }
}

This is a major improvement over the old DispatchQueue.main.async approach. The compiler enforces thread safety, and the code is more explicit about where work happens.
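
For contrast, here's a rough sketch of the pre-concurrency version of the same hop (LegacyViewModel is a made-up name). Every threading assumption lives in comments and convention; nothing is checked by the compiler:

import Foundation

class LegacyViewModel {
    var status = ""

    func doWork() {
        DispatchQueue.global(qos: .userInitiated).async {
            // Heavy work off the main thread - enforced only by convention
            let result = (0..<1_000_000).reduce(0, +)

            DispatchQueue.main.async {
                // We hope nothing else touches 'status' from another thread;
                // the compiler has no way to verify it
                self.status = "Done: \(result)"
            }
        }
    }
}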

💡 Migration Aid: Default Actor Isolation (Swift 6.2+)

If you're migrating an existing codebase to Swift 6, you might feel overwhelmed by the number of places you need to add @MainActor annotations. Swift 6.2 provides Default Actor Isolation as a migration tool to ease this burden.

This is not a core concurrency pattern—it's a compiler setting that changes default assumptions to reduce migration warnings. When enabled, the compiler assumes @MainActor by default for your code, rather than assuming no isolation.

Benefits for migration:

  • Less boilerplate: You don't need to explicitly mark every UI class with @MainActor
  • Smoother migration: Significantly fewer warnings when enabling strict concurrency
  • Better defaults: Aligns with how most iOS code actually works (on the main thread)
  • Optional & per-module: Can be configured independently for each module in your codebase

How to enable:

For Xcode projects: Build Settings → Swift Compiler → Set "Default Actor Isolation" to MainActor

For Swift Package Manager:

.target(
    name: "MyTarget",
    swiftSettings: [
        .defaultIsolation(MainActor.self)
    ]
)

When to use nonisolated - SDK/Library Design

While most app code should avoid nonisolated, there's an important exception: SDK and library development. Apple recommends marking public SDK/library APIs nonisolated so that consumers can call them from any isolation context—from @MainActor code, from their own actors, or from unstructured contexts. In short: app developers should generally avoid nonisolated, but library authors should embrace it for their public APIs.
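
To make that concrete, here's a small sketch of what such a library surface might look like (Analytics and EventQueue are made-up names), assuming the module enables default MainActor isolation as described above, so nonisolated is what opts the public API out of the main actor:

// Internal actor owns the mutable buffer, keeping shared state safe
actor EventQueue {
    static let shared = EventQueue()
    private var pending: [String] = []

    func enqueue(_ name: String) {
        pending.append(name)
    }
}

public enum Analytics {
    // nonisolated public API: callers on the MainActor, on their own actors,
    // or in unstructured contexts can all call this without hopping first
    public nonisolated static func logEvent(_ name: String) async {
        await EventQueue.shared.enqueue(name)
    }
}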


What's Next

You've now learned the fundamentals of Swift 6 concurrency:

  • async/await for clean asynchronous code
  • Parallel execution with async let and TaskGroup for 3x performance gains
  • Task cancellation with cooperative checking
  • Actors for thread-safe shared state
  • Sendable for safe data passing between tasks
  • MainActor for UI updates

In Part 2: Advanced Patterns, we'll cover:

  • Streaming data with AsyncSequence
  • Task priorities and cooperative scheduling
  • Custom global actors
  • SwiftUI integration with @Sendable closures
  • When to use each pattern (decision tree)
  • Migrating from Combine to async/await
  • Common pitfalls and how to avoid them
