Tiered Caching in Swift
In my previous post, I showed how closure-based services with observable stores can replace the traditional view model layer. The store updates, SwiftUI redraws, and views stay simple. But we left out something important: caching.
Without caching, every screen transition hits the network. Users see loading spinners repeatedly for data they just viewed. The app feels slow even on fast connections. And offline? The app is useless.
The solution is a tiered cache: memory for speed, disk for persistence, with intelligent fetch policies that let you control exactly when to use cached data versus fresh data. You can find the complete implementation in the example project on GitHub.
The Cacheable Protocol
Before we can cache anything, we need a way to identify cached items. The Cacheable protocol gives us two pieces of information: an instance identifier for retrieval, and a type identifier for organization.
public protocol Cacheable: Codable {
    /// Unique identifier for this item within its type.
    var cacheId: String { get }

    /// Identifier for this type of cached item.
    static var cacheIdentifier: String { get }
}
The instance cacheId distinguishes individual items (like a landmark's UUID). The static cacheIdentifier groups items by type so we can store landmarks separately from users or settings.
For our domain model, conformance is straightforward:
extension Landmark: Cacheable {
    var cacheId: String { id.uuidString }
    static var cacheIdentifier: String { "landmarks" }
}
But here's a useful trick: make arrays of cacheable items cacheable too. This lets you cache entire list responses as a single unit:
extension Array: Cacheable where Element: Cacheable {
    public var cacheId: String {
        Self.cacheIdentifier
    }

    public static var cacheIdentifier: String {
        "\(Element.cacheIdentifier)-array"
    }
}
Now [Landmark] is automatically cacheable with the identifier "landmarks-array". You can cache a list of landmarks just as easily as a single landmark.
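Pulled together into a self-contained snippet (with a stand-in Landmark type, since the real model lives in the example project), the derived identifiers look like this:

```swift
import Foundation

// Self-contained sketch: the protocol and array conformance from above,
// plus a stand-in Landmark type, to show the identifiers that fall out.
public protocol Cacheable: Codable {
    var cacheId: String { get }
    static var cacheIdentifier: String { get }
}

extension Array: Cacheable where Element: Cacheable {
    public var cacheId: String { Self.cacheIdentifier }
    public static var cacheIdentifier: String { "\(Element.cacheIdentifier)-array" }
}

struct Landmark: Cacheable {
    let id: UUID
    var cacheId: String { id.uuidString }
    static var cacheIdentifier: String { "landmarks" }
}

let list = [Landmark(id: UUID()), Landmark(id: UUID())]
assert(Landmark.cacheIdentifier == "landmarks")
assert([Landmark].cacheIdentifier == "landmarks-array")
assert(list.cacheId == "landmarks-array")  // the whole list occupies one cache slot
```

Note that because an array's cacheId is just its type identifier, every `[Landmark]` shares a single cache slot; that is exactly the "cache the whole list response as one unit" behavior we want.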
Cache Entries with Metadata
Raw values aren't enough. You need to know when something was cached and whether it's expired. The CacheEntry wrapper adds this metadata:
public struct CacheEntry<T: Cacheable>: Codable {
    public let value: T
    public let cachedAt: Date
    public let expiresAt: Date?

    public var isExpired: Bool {
        guard let expiresAt else { return false }
        return Date() > expiresAt
    }

    public var age: TimeInterval {
        Date().timeIntervalSince(cachedAt)
    }
}
The expiresAt is optional because not everything needs to expire. User preferences might live forever, while a feed of posts might expire after five minutes. The age property is handy for debugging and for implementing "stale-while-revalidate" patterns.
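As a sketch of how that metadata drives stale-while-revalidate: serve the cached value as long as it hasn't expired, but kick off a background refresh once it crosses an age threshold. The type is repeated here constrained to Codable so the snippet stands alone, and the 60-second threshold is an arbitrary example value:

```swift
import Foundation

// Simplified CacheEntry (constrained to Codable so the snippet compiles alone).
struct CacheEntry<T: Codable>: Codable {
    let value: T
    let cachedAt: Date
    let expiresAt: Date?

    var isExpired: Bool {
        guard let expiresAt else { return false }
        return Date() > expiresAt
    }

    var age: TimeInterval {
        Date().timeIntervalSince(cachedAt)
    }
}

// A feed cached 90 seconds ago with a five-minute lifetime:
let entry = CacheEntry(value: ["post-1", "post-2"],
                       cachedAt: Date().addingTimeInterval(-90),
                       expiresAt: Date().addingTimeInterval(210))

assert(!entry.isExpired)               // still valid, safe to show immediately
let shouldRevalidate = entry.age > 60  // but old enough to refresh in the background
assert(shouldRevalidate)
```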
The Cache Protocol
With our data types defined, we need a common interface for cache implementations:
public protocol CacheServicing: Sendable {
    func get<T: Cacheable>(_ type: T.Type, id: String) async -> CacheEntry<T>?
    func set<T: Cacheable>(_ value: T, expiresIn: TimeInterval?) async
    func remove<T: Cacheable>(_ type: T.Type, id: String) async
    func removeAll<T: Cacheable>(_ type: T.Type) async
    func clear() async
}
The protocol is generic over Cacheable types and fully async. This lets us swap implementations freely: memory cache for tests, disk cache for persistence, or a tiered cache for production.
A convenience extension makes the common case cleaner:
extension CacheServicing {
    public func getValue<T: Cacheable>(_ type: T.Type, id: String) async -> T? {
        guard let entry = await get(type, id: id) else { return nil }
        guard !entry.isExpired else {
            await remove(type, id: id)
            return nil
        }
        return entry.value
    }
}
This unwraps the cache entry, checks expiration, and cleans up expired entries automatically. Most call sites just want the value, not the metadata.
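A second convenience worth considering (my addition, not part of the protocol above) is a set overload with no expiration argument, so write sites can drop the parameter when nothing needs to expire:

```swift
extension CacheServicing {
    /// Convenience overload: cache a value with no expiration.
    public func set<T: Cacheable>(_ value: T) async {
        await set(value, expiresIn: nil)
    }
}
```

With both conveniences in place, a call site reads `await cache.set(landmarks)` on the write side and `await cache.getValue(Landmark.self, id: id)` on the read side.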
Memory Cache with LRU Eviction
The memory cache is the fast path. It uses Swift's actor isolation for thread safety and implements LRU (Least Recently Used) eviction to bound memory usage:
public actor MemoryCache: CacheServicing {
    private var storage: [String: [String: Any]] = [:]
    private var accessOrder: [String: [String]] = [:]
    private let maxItemsPerType: Int

    public init(maxItemsPerType: Int = 100) {
        self.maxItemsPerType = maxItemsPerType
    }
}
Storage is organized by type identifier first, then item id. This two-level structure keeps different types isolated and makes clearing all items of a type efficient.
The accessOrder dictionary tracks which items were used most recently. When you read an item, it moves to the end of the list. When you exceed capacity, items at the front get evicted:
public func get<T: Cacheable>(_ type: T.Type, id: String) async -> CacheEntry<T>? {
    let typeKey = T.cacheIdentifier
    guard let typeStorage = storage[typeKey],
          let entry = typeStorage[id] as? CacheEntry<T> else {
        return nil
    }
    // Move to end of access order (most recently used)
    updateAccessOrder(typeKey: typeKey, id: id)
    return entry
}

private func evictIfNeeded(typeKey: String) {
    guard let order = accessOrder[typeKey],
          order.count > maxItemsPerType else { return }
    let toRemove = order.prefix(order.count - maxItemsPerType)
    for id in toRemove {
        storage[typeKey]?[id] = nil
    }
    accessOrder[typeKey] = Array(order.dropFirst(toRemove.count))
}
LRU eviction means frequently accessed items stay cached while rarely used items get pushed out. This is usually what you want: the detail screen you keep revisiting stays fast, while that one category you browsed once doesn't waste memory.
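The write path isn't shown above. A sketch of how set and the updateAccessOrder helper might look inside the same actor, assuming CacheEntry's memberwise initializer:

```swift
public func set<T: Cacheable>(_ value: T, expiresIn: TimeInterval?) async {
    let typeKey = T.cacheIdentifier
    let entry = CacheEntry(
        value: value,
        cachedAt: Date(),
        expiresAt: expiresIn.map { Date().addingTimeInterval($0) }
    )
    storage[typeKey, default: [:]][value.cacheId] = entry
    updateAccessOrder(typeKey: typeKey, id: value.cacheId)
    evictIfNeeded(typeKey: typeKey)
}

private func updateAccessOrder(typeKey: String, id: String) {
    var order = accessOrder[typeKey] ?? []
    order.removeAll { $0 == id }
    order.append(id)  // most recently used goes last
    accessOrder[typeKey] = order
}
```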
Disk Cache for Persistence
Memory cache is fast but volatile. When the app terminates, everything is gone. The disk cache persists data as JSON files:
public actor DiskCache: CacheServicing {
    private let baseDirectory: URL
    private let encoder = JSONEncoder()
    private let decoder = JSONDecoder()

    public init(directory: URL? = nil) {
        if let directory {
            self.baseDirectory = directory
        } else {
            let caches = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask).first!
            self.baseDirectory = caches.appendingPathComponent("AppCache", isDirectory: true)
        }
    }
}
Files are organized in subdirectories by type, with each item stored as {id}.json. This structure makes it easy to browse the cache during development and clear specific types without affecting others.
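Given that layout, the fileURL(for:id:) helper used by the read and write paths presumably looks something like:

```swift
private func fileURL<T: Cacheable>(for type: T.Type, id: String) -> URL {
    baseDirectory
        .appendingPathComponent(T.cacheIdentifier, isDirectory: true)
        .appendingPathComponent("\(id).json")
}
```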
Reading handles corrupted files gracefully:
public func get<T: Cacheable>(_ type: T.Type, id: String) async -> CacheEntry<T>? {
    let fileURL = fileURL(for: type, id: id)
    guard FileManager.default.fileExists(atPath: fileURL.path) else {
        return nil
    }

    do {
        let data = try Data(contentsOf: fileURL)
        return try decoder.decode(CacheEntry<T>.self, from: data)
    } catch {
        // Corrupted file - remove it
        try? FileManager.default.removeItem(at: fileURL)
        return nil
    }
}
If decoding fails, the file is likely corrupted (maybe from a schema change or a crash during write). Deleting it lets the next fetch populate a fresh copy.
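The write path isn't shown either. A sketch, assuming the same fileURL helper and an atomic write so a crash mid-write can't leave a torn file behind:

```swift
public func set<T: Cacheable>(_ value: T, expiresIn: TimeInterval?) async {
    let url = fileURL(for: T.self, id: value.cacheId)
    let entry = CacheEntry(
        value: value,
        cachedAt: Date(),
        expiresAt: expiresIn.map { Date().addingTimeInterval($0) }
    )
    do {
        try FileManager.default.createDirectory(
            at: url.deletingLastPathComponent(),
            withIntermediateDirectories: true
        )
        let data = try encoder.encode(entry)
        try data.write(to: url, options: .atomic)
    } catch {
        // Failing to persist is not fatal; the next read simply misses.
    }
}
```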
The disk cache also includes maintenance methods for managing storage:
public func totalSize() -> Int64 {
    // Enumerate all files and sum their sizes
}

public func removeOlderThan(_ date: Date) async {
    // Delete files with modification dates before the cutoff
}
These are useful for implementing storage limits or cleanup on app launch.
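For reference, a self-contained sketch of the size computation, written here as a free function over a directory; inside DiskCache, totalSize() would run the same logic with baseDirectory as the root:

```swift
import Foundation

// Sketch: sum the sizes of all files under a directory.
func directorySize(at root: URL) -> Int64 {
    guard let enumerator = FileManager.default.enumerator(
        at: root,
        includingPropertiesForKeys: [.fileSizeKey]
    ) else { return 0 }

    var total: Int64 = 0
    for case let url as URL in enumerator {
        let size = (try? url.resourceValues(forKeys: [.fileSizeKey]))?.fileSize ?? 0
        total += Int64(size)
    }
    return total
}
```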
Tiered Cache: Best of Both Worlds
The tiered cache combines memory and disk. Memory is checked first for speed. If that misses, disk is checked. Items found on disk are promoted to memory for next time:
public struct TieredCache: CacheServicing, Sendable {
    private let memory: MemoryCache
    private let disk: DiskCache

    public func get<T: Cacheable>(_ type: T.Type, id: String) async -> CacheEntry<T>? {
        // Check memory first (fast path)
        if let entry = await memory.get(type, id: id) {
            return entry
        }
        // Fall back to disk
        if let entry = await disk.get(type, id: id) {
            // Promote to memory for next time
            await memory.set(entry.value, expiresIn: entry.expiresAt.map { $0.timeIntervalSinceNow })
            return entry
        }
        return nil
    }

    public func set<T: Cacheable>(_ value: T, expiresIn: TimeInterval?) async {
        // Write to both layers
        await memory.set(value, expiresIn: expiresIn)
        await disk.set(value, expiresIn: expiresIn)
    }
}
The promotion step is key. When you fetch something from disk, it's probably because you're about to use it. Promoting to memory means the second access is instant. This gives you disk persistence with memory-like speed for active data.
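End to end, usage might look like this (a sketch: it assumes a memberwise-style initializer on TieredCache and the [Landmark] conformance from earlier):

```swift
let cache = TieredCache(memory: MemoryCache(), disk: DiskCache())

// Write-through: the list lands in both layers with a five-minute lifetime.
await cache.set(landmarks, expiresIn: 300)

// Instant hit from memory...
let fast = await cache.get([Landmark].self, id: [Landmark].cacheIdentifier)

// ...and after a cold launch, the same call finds the disk copy
// and promotes it back into memory.
let warm = await cache.get([Landmark].self, id: [Landmark].cacheIdentifier)
```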
The tiered cache also exposes layer-specific operations:
public func clearMemory() async {
    await memory.clear()
}

public func diskSize() async -> Int64 {
    await disk.totalSize()
}

public func pruneOlderThan(_ date: Date) async {
    await disk.removeOlderThan(date)
}
clearMemory() is particularly useful for responding to memory warnings. You can drop the memory cache without losing persisted data.
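On iOS, one way to wire that up (a sketch requiring UIKit; `cache` here stands for whatever TieredCache instance your composition root owns):

```swift
// Drop the memory layer when the system signals pressure; disk survives.
NotificationCenter.default.addObserver(
    forName: UIApplication.didReceiveMemoryWarningNotification,
    object: nil,
    queue: .main
) { _ in
    Task { await cache.clearMemory() }
}
```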
Cache Policies
Having a cache is only half the solution. You also need to decide when to use it. Different screens have different needs: a list might want to show cached data immediately while fetching fresh data, while a detail view might be fine with slightly stale data.
The CachePolicy enum captures these strategies:
public enum CachePolicy: Sendable {
    /// Show cached data immediately, then fetch fresh data.
    case cacheThenFetch
    /// Use cache if available, otherwise fetch.
    case cacheElseFetch
    /// Always fetch from network, ignore cache.
    case networkOnly
    /// Use cache only, never hit the network.
    case cacheOnly
    /// Fetch from network first, fall back to cache on failure.
    case networkElseCache
}
Each policy serves a specific use case:
cacheThenFetch is perfect for list screens. Users see cached data instantly (no spinner), then the list updates when fresh data arrives. The observable store handles both updates automatically.
cacheElseFetch works well for detail views. If you have cached data, use it. Only fetch if the cache is empty. This reduces network traffic when users navigate back and forth.
networkOnly is for post-mutation refreshes. After creating or updating data, you want guaranteed fresh data, not stale cache.
cacheOnly enables offline mode. Show whatever you have cached, don't even try the network.
networkElseCache is the graceful degradation option. Try to get fresh data, but if the network fails, fall back to cache rather than showing an error.
Integrating with Services
The cache integrates naturally with the closure-based services from the previous post. The service's fetch closure accepts a cache policy and updates the observable store accordingly:
struct LandmarkService: Sendable {
    let store: LandmarkStore
    var fetchLandmarks: @Sendable (CachePolicy) async throws -> Void
    var clearCache: @Sendable () async -> Void
}
The implementation handles each policy:
fetchLandmarks: { policy in
    switch policy {
    case .cacheThenFetch:
        // Show cached data immediately if available
        if let cached = await cache.getValue([Landmark].self, id: [Landmark].cacheIdentifier) {
            await store.setLandmarks(cached, source: .cache)
        } else {
            await store.setLoading()
        }
        // Then fetch fresh data
        let landmarks = try await fetchFromNetwork(client: client, baseURL: baseURL)
        await cache.set(landmarks, expiresIn: nil)
        await store.setLandmarks(landmarks, source: .network)

    case .cacheElseFetch:
        if let cached = await cache.getValue([Landmark].self, id: [Landmark].cacheIdentifier) {
            await store.setLandmarks(cached, source: .cache)
        } else {
            await store.setLoading()
            let landmarks = try await fetchFromNetwork(client: client, baseURL: baseURL)
            await cache.set(landmarks, expiresIn: nil)
            await store.setLandmarks(landmarks, source: .network)
        }

    // ... other policies
    }
}
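The remaining policies are elided above; using the same helpers (fetchFromNetwork, store, cache), they might look like this:

```swift
case .networkOnly:
    await store.setLoading()
    let landmarks = try await fetchFromNetwork(client: client, baseURL: baseURL)
    await cache.set(landmarks, expiresIn: nil)
    await store.setLandmarks(landmarks, source: .network)

case .cacheOnly:
    if let cached = await cache.getValue([Landmark].self, id: [Landmark].cacheIdentifier) {
        await store.setLandmarks(cached, source: .cache)
    }
    // No network attempt: an empty cache simply leaves the current state alone.

case .networkElseCache:
    do {
        let landmarks = try await fetchFromNetwork(client: client, baseURL: baseURL)
        await cache.set(landmarks, expiresIn: nil)
        await store.setLandmarks(landmarks, source: .network)
    } catch {
        // Graceful degradation: fall back to cache, rethrow only if that's empty too.
        if let cached = await cache.getValue([Landmark].self, id: [Landmark].cacheIdentifier) {
            await store.setLandmarks(cached, source: .cache)
        } else {
            throw error
        }
    }
```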
Notice the store now tracks where data came from with a DataSource enum:
enum DataSource: Sendable {
    case cache
    case network
}

@MainActor @Observable
final class LandmarkStore {
    private(set) var landmarks: [Landmark] = []
    private(set) var dataSource: DataSource?
    private(set) var lastUpdated: Date?

    var isShowingCachedData: Bool {
        dataSource == .cache
    }

    func setLandmarks(_ landmarks: [Landmark], source: DataSource) {
        self.landmarks = landmarks
        self.dataSource = source
        self.lastUpdated = Date()
        loadingState = .loaded(landmarks)  // loadingState is the property introduced in the previous post
    }
}
Views can use isShowingCachedData to show a subtle indicator that data might be stale, or lastUpdated to display how fresh the data is.
Using It in Views
Views stay simple. They call the service with their preferred policy and let the store handle updates:
struct LandmarkListView: View {
    @Environment(\.landmarkService) private var landmarkService

    private var store: LandmarkStore { landmarkService.store }

    var body: some View {
        List(store.landmarks) { landmark in
            LandmarkRow(landmark: landmark)
        }
        .overlay {
            if store.isShowingCachedData {
                // Subtle indicator that we're showing cached data
            }
        }
        .task {
            try? await landmarkService.fetchLandmarks(.cacheThenFetch)
        }
    }
}
With .cacheThenFetch, users see the cached list immediately. If the cache is empty, they see a loading state. Either way, fresh data arrives in the background and the list updates automatically. No view model, no manual state management, just the service and the store doing their jobs.
Wrapping Up
A tiered cache gives you fast access through memory with persistence through disk. LRU eviction keeps memory bounded. Cache policies let you tune behavior per screen. And because it integrates with observable stores, views update automatically when cached or fresh data arrives.
The pattern scales well. Add more cache layers (like a CDN-backed remote cache) by implementing CacheServicing. Add more policies as needed. The service layer stays clean, the views stay simple, and users get a fast, responsive app that works offline.
Check out the complete implementation for all the details.
In the next post, we'll build the real backend with Vapor, applying the same model separation patterns from the iOS side to a server our app can talk to.