Modularized and scalable iOS Network layer
If you’ve ever felt overwhelmed by the difficulties of maintaining and scaling a complex iOS application, you’re not alone. The intricate process of building an app that not only functions effectively but also adapts seamlessly to growing functional demands and evolving technology capabilities is a common challenge for many developers. Today we delve into the nuanced world of iOS app modularization and scalability, with a special focus on optimizing the network layer.
Modularization, the art of breaking down a software application into distinct, manageable pieces, can be a game-changer in how we approach iOS app development. It’s about transforming a monolithic structure into a more organized, compartmentalized system where each module serves a specific purpose. This structure not only makes the app easier to navigate and update but also streamlines the development process, allowing teams to work on different parts simultaneously without stepping on each other’s toes.
On the flip side, scalability tackles the app’s ability to gracefully handle growth. Whether it’s accommodating a rapid flow of new users, dealing with increasing data volumes, or integrating new features, a scalable app can rise to these challenges without a hitch. It’s about foreseeing future needs and engineering an application that won’t just survive but thrive under evolving conditions.
✋🏻 🛑 So if you anticipate that your application is set to grow into something complex, STOP writing network request logic like this, with all the URLSession.shared.dataTask and data, response, error handling crammed into a single method (especially if the networking lives inside a view controller 😩):
// A method in some spaghetti MVC view controller code
func fetchPosts() {
guard let url = URL(string: "https://jsonplaceholder.typicode.com/posts") else { return }
let task = URLSession.shared.dataTask(with: url) { [weak self] data, response, error in
guard let data = data, error == nil else {
return
}
do {
let posts = try JSONDecoder().decode([Post].self, from: data)
DispatchQueue.main.async {
self?.posts = posts
self?.tableView.reloadData()
}
} catch {
print(error)
}
}
task.resume()
}
Focusing on Separation of Concerns (SoC) should be one of the priorities in our code. SoC improves maintainability, scalability, and readability, and a modularized layer keeps operations organized in a clean way. In our case, that layer is the network layer with its network operations.
We'll break this down into distinct components, ensuring a clear and systematic approach to the full lifecycle of building requests, executing them, and handling the outcome.
So let's build an easy-to-implement networking pipeline that abides by the SoC and modularization principles.
Just make sure to follow every part of the article: each piece is necessary for the streamlined pipeline to work.
Request configuration
We begin by creating a configuration for building a network request. So we define a blueprint for our request configurations — a RequestConfig protocol:
public protocol RequestConfig {
var path: String { get }
var method: HTTPMethod { get }
var body: [String: Any]? { get }
var headers: [String: String]? { get }
}
There can be different HTTP methods, so we declare an enumeration too:
public enum HTTPMethod: String {
    // Raw values are uppercased to match the standard HTTP method names.
    case get = "GET"
    case post = "POST"
    case put = "PUT"
    case delete = "DELETE"
    case patch = "PATCH"
}
As an example, let’s create a configuration implementing this protocol:
enum CarDetailsRequestConfig {
    case details(carData: CarData, apiKey: String)
}

extension CarDetailsRequestConfig: RequestConfig {
    var path: String {
        switch self {
        case .details(_, let apiKey):
            return "/carDetails?apiKey=\(apiKey)"
        }
    }

    var method: HTTPMethod { .get }

    var body: [String: Any]? {
        switch self {
        case .details(let carData, _):
            return carData.dictionary() // About dictionary() read below
        }
    }

    var headers: [String: String]? {
        ["Content-Type": "application/json"]
    }
}
Our CarDetailsRequestConfig has these configurations:
- path — the part of the URL that comes after the domain (or hostname). It may include query parameters, as in the example above, and starts with /.
- method — represents the type of action you want to perform on the resource identified by the URL. Can be GET, POST, PUT, DELETE, or PATCH.
- body — contains the data you want to send to the server, in dictionary format.
To convert the body from a structure (one that conforms to Encodable) into a dictionary, we write this extension for Encodable objects:
public extension Encodable {
    func dictionary() -> [String: Any]? {
        guard let jsonData = try? JSONEncoder().encode(self),
              let dictionary = try? JSONSerialization.jsonObject(with: jsonData, options: []) as? [String: Any] else {
            return nil
        }
        return dictionary
    }
}
- headers — metadata associated with an HTTP request, such as content type, encoding, authentication credentials, etc.
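For illustration, suppose CarData looks something like this (the fields here are made up, not from a real API); the body dictionary is then produced like so:
// Hypothetical request model; the fields are illustrative only.
public struct CarData: Encodable {
    public let vin: String
    public let modelYear: Int
}

let carData = CarData(vin: "1HGCM82633A004352", modelYear: 2021)
let body = carData.dictionary() // ["vin": "1HGCM82633A004352", "modelYear": 2021]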
Endpoints
Each network request needs a base URL to resolve against. A simple enumeration works in our case:
public enum Endpoints {
public static let cars: String = "https://carsInfo.com/v1"
public static let googleTranslate: String = "https://translation.googleapis.com/language/translate/v2"
}
Make sure not to add an additional / at the end of the URL.
It makes selecting endpoints very easy, especially when we are using multiple domains for our network requests, e.g. one is Apple’s WeatherKit REST API, another is Google’s Translate REST API.
Making structures conform to a generic type
Before implementing networking logic, we need a bridging layer between Scene Logic, Worker or Repository, and a Network execution factory — a Use Case.
NetworkResponse's purpose is to enable JSON response resolution and decoding in a unified form, since we are going to use generics: every structure we feed in for decoding is treated as the same generic type inside the network layer flow. NetworkResponse conforms to Decodable, which indicates that it is meant for types that can be decoded from a data format like JSON.
import Foundation
public protocol NetworkResponse: Decodable { }
For our network layer to work, any decodable structure for response handling must conform to this protocol:
public struct CarDetails: Decodable {
public let colorCode: String
public let doorsCount: Int
}
extension CarDetails: NetworkResponse { }
In case you are getting an error like this:
Conflicting conformance of ‘Array<Element>’ to protocol ‘NetworkResponse’; there cannot be more than one conformance, even with different conditional bounds
Then you may want to create a new structure like CarDetailsResponse conforming to the NetworkResponse protocol.
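A minimal sketch of such a wrapper, assuming the endpoint returns a top-level JSON array, could look like this (the custom decoding is illustrative, not part of the layer itself):
// Hypothetical wrapper that avoids conforming Array itself to NetworkResponse.
public struct CarDetailsResponse: NetworkResponse {
    public let items: [CarDetails]

    public init(from decoder: Decoder) throws {
        // Decode the root-level JSON array directly into `items`.
        let container = try decoder.singleValueContainer()
        items = try container.decode([CarDetails].self)
    }
}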
Lastly, let's write our bridging layer — a FetchCarDetailsUseCase (the use case from which we call the network factory to execute the request):
import Combine
import Foundation

public protocol FetchCarDetailsUseCaseType {
    func request(carData: CarData, apiKey: String) -> AnyPublisher<CarDetails, CustomError>
}

public final class FetchCarDetailsUseCase: FetchCarDetailsUseCaseType {

    // MARK: - Variables

    private let network: NetworkFactoryType

    // MARK: - Methods

    public init(network: NetworkFactoryType) {
        self.network = network
    }

    public func request(carData: CarData, apiKey: String) -> AnyPublisher<CarDetails, CustomError> {
        let response: AnyPublisher<CarDetails, CustomError> = network.fetch(
            requestConfig: CarDetailsRequestConfig.details(carData: carData, apiKey: apiKey),
            shouldCache: true,
            endpointUrl: Endpoints.cars
        )
        return response
    }
}
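For context, here's a rough sketch of how this use case might be consumed from a view model; the class and its properties are illustrative and not part of the network layer itself:
import Combine
import Foundation

// Hypothetical consumer of the use case; names and properties are illustrative.
final class CarDetailsViewModel {
    @Published private(set) var details: CarDetails?

    private let fetchCarDetails: FetchCarDetailsUseCaseType
    private var cancellables = Set<AnyCancellable>()

    init(fetchCarDetails: FetchCarDetailsUseCaseType = FetchCarDetailsUseCase(network: NetworkFactory())) {
        self.fetchCarDetails = fetchCarDetails
    }

    func load(carData: CarData, apiKey: String) {
        fetchCarDetails.request(carData: carData, apiKey: apiKey)
            .receive(on: DispatchQueue.main) // deliver results on the main thread for UI updates
            .sink(receiveCompletion: { completion in
                if case .failure(let error) = completion {
                    print("Fetching car details failed: \(error)")
                }
            }, receiveValue: { [weak self] details in
                self?.details = details
            })
            .store(in: &cancellables)
    }
}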
Building a request
After we have configured our request, we can continue with the fundamental base of a network layer — building the URL requests. This process is streamlined through the implementation of a class named RequestBuilder.
import Foundation

final class RequestBuilder {

    // MARK: - Methods

    static func buildRequest(with requestConfig: RequestConfig, and endpointUrl: String) -> URLRequest? {
        guard let url = URL(string: endpointUrl + requestConfig.path) else {
            return nil
        }
        var request = URLRequest(url: url)
        request.httpMethod = requestConfig.method.rawValue
        request.allHTTPHeaderFields = requestConfig.headers
        request.httpBody = httpBody(requestConfig)
        request.cachePolicy = .reloadRevalidatingCacheData
        request.timeoutInterval = 10
        return request
    }

    private static func httpBody(_ requestConfig: RequestConfig) -> Data? {
        guard let body = requestConfig.body else {
            return nil
        }
        return try? JSONSerialization.data(withJSONObject: body)
    }
}
The RequestBuilder class takes a RequestConfig object and an endpoint URL as input.
As previously defined, the RequestConfig object contains all the necessary data for constructing a request, including the HTTP method, headers, and body. The class method buildRequest(with:and:) combines the endpoint URL and the path from RequestConfig to create the full request URL. It then initializes a URLRequest object with this URL.
The HTTP method and headers from the RequestConfig are set directly on the URLRequest. For the HTTP body, the class includes a private method httpBody(_:), which converts the body dictionary from RequestConfig into Data using JSONSerialization, suitable for HTTP transmission.
Key configurations like the cache policy and timeout interval are also set within the RequestBuilder. The cache policy is defined as .reloadRevalidatingCacheData to ensure the data is up-to-date, while the timeout interval is set to 10 seconds, balancing efficiency with network reliability.
By encapsulating the request construction in the RequestBuilder class, the network layer's code becomes more organized and manageable. It allows for easy modification of request configurations and simplifies the process of creating different types of requests across the application.
Creating a session
The next piece of the puzzle is creating a network session. The network session, managed by URLSession, plays an important role in handling network requests and responses. Its configuration is essential for defining the behavior and performance of network operations.
import Foundation
final class SessionManager {
// MARK: - Methods
static func createSession(cache: URLCache) -> URLSession {
let sessionConfiguration = URLSessionConfiguration.default
sessionConfiguration.requestCachePolicy = .reloadRevalidatingCacheData
sessionConfiguration.urlCache = cache
return URLSession(configuration: sessionConfiguration)
}
}
At the core of the network session setup is the SessionManager class, which encapsulates the creation of a URLSession instance via the createSession(cache:) method. It takes a URLCache as input for setting the cache (optional). The method utilizes URLSessionConfiguration.default as the baseline for the session configuration, ensuring a standard set of network behaviors is in place.
One critical aspect of the session configuration is the cache policy. In our implementation, we set this to .reloadRevalidatingCacheData. This choice aims to strike a balance between leveraging cached data and ensuring the freshness of the information by revalidating it against the server. More about caching later on.
Additionally, we integrate a custom cache, CacheManager.cache, into the session configuration. It allows for more nuanced control over caching behavior, tailored to the specific needs of the application.
Confusing configurations
// URLRequest
request.timeoutInterval = 6
// URLSession
sessionConfiguration.timeoutIntervalForRequest = 10
⬆ We have the ability to set a timeout interval on both URLRequest and URLSession. What's the difference?
Well, the session's timeoutIntervalForRequest is a session-wide setting: it applies to every request made within that session and controls how long a task waits for data before giving up. URLRequest's timeoutInterval expresses the same kind of limit, but per request, which makes it more granular and allows you to customize the timeout for individual requests.
However, each request created by our RequestBuilder will have its own timeout (in our code — 10 seconds) regardless of the session's configuration. This timeout will override the session's default timeout for these requests.
// URLRequest
request.cachePolicy = .reloadRevalidatingCacheData
// URLSession
sessionConfiguration.requestCachePolicy = .reloadRevalidatingCacheData
⬆ The same goes for the cache policy. It can be configured on either URLRequest or URLSession: one is set per request, the other applies to all requests made within that session. And again, the URLSession cache policy sets a default caching behavior for all requests within a session, with the possibility of being overridden by individual requests via URLRequest.
Executing network requests
In our NetworkExecutor class, we use the power of both Combine and Grand Central Dispatch (GCD) to execute network requests asynchronously, so they don't block the main thread. This approach ensures high-priority network operations are handled swiftly without compromising the user experience.
import Combine
import Foundation

final class NetworkExecutor {

    // MARK: - Methods

    static func execute(request: URLRequest, with session: URLSession) -> AnyPublisher<(data: Data, response: URLResponse), URLError> {
        session.dataTaskPublisher(for: request)
            .subscribe(on: DispatchQueue.global(qos: .userInitiated))
            .eraseToAnyPublisher()
    }
}
We use the subscribe(on:) operator to specify the execution context for the network request. This operator sets the context for the entire upstream pipeline, meaning all operations upstream of this operator (towards the original publisher) are executed in the specified context. In our implementation, we choose DispatchQueue.global(qos: .userInitiated) as the execution context.
The choice of .userInitiated for the QoS class is intentional. This QoS class is intended for tasks that are initiated from the UI and can be considered high-priority but not immediate, such as loading data in response to a user's action. It's a level below .userInteractive (which is reserved for tasks directly interacting with the UI), but above .default or .utility.
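As a side note, subscribe(on:) is easy to confuse with receive(on:). A tiny illustrative snippet of the difference (the Just publisher here is only a stand-in):
import Combine
import Foundation

// subscribe(on:) controls where the upstream work starts,
// receive(on:) controls where downstream values are delivered.
let cancellable = Just("payload")
    .subscribe(on: DispatchQueue.global(qos: .userInitiated)) // upstream runs off the main thread
    .receive(on: DispatchQueue.main)                          // values are delivered on the main thread
    .sink { value in
        print("Received \(value) on the main thread")
    }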
Decoding a response and handling status codes
Now let’s check the logic of decoding responses and handling various HTTP status codes. This part of networking ensures that the data received from a server is correctly interpreted and that appropriate actions are taken based on the server’s response.
Our implementation revolves around the ResponseDecoder class, which serves a dual purpose: decoding the data payload from a successful response and handling different categories of HTTP status codes to appropriately manage errors.
import Foundation
final class ResponseDecoder {
static func decodeResponse<T: NetworkResponse>(data: Data, response: URLResponse) throws -> T {
guard let httpResponse = response as? HTTPURLResponse else {
throw CustomError.networkInvalidResponse
}
switch httpResponse.statusCode {
case 200..<300:
return try decodeData(data)
case 400...499:
throw CustomError.networkClientFailure(code: httpResponse.statusCode, url: httpResponse.url?.absoluteString ?? "")
case 500...599:
throw CustomError.networkServerFailure(code: httpResponse.statusCode, url: httpResponse.url?.absoluteString ?? "")
default:
throw CustomError.networkUnknownStatusCode(code: httpResponse.statusCode, url: httpResponse.url?.absoluteString ?? "")
}
}
static func decodeData<T: NetworkResponse>(_ data: Data) throws -> T {
do {
return try JSONDecoder().decode(T.self, from: data)
} catch {
throw CustomError.networkResponseDecodingFailure(error)
}
}
}
The core method of the ResponseDecoder class is decodeResponse(data:response:). It begins by ensuring that the provided URLResponse is a valid HTTPURLResponse. If it's not, the method throws a CustomError.networkInvalidResponse error, indicating that the response from the server is not in the expected HTTP format.
In our case, the CustomError enumeration (which conforms to the Error protocol) is defined like this:
public enum CustomError: Error, Equatable {
case genericError(Error)
...
case networkRequestBuildFailure
case networkClientFailure(code: Int, url: String)
case networkServerFailure(code: Int, url: String)
case networkUnknownStatusCode(code: Int, url: String)
case networkInvalidResponse
case networkResponseDecodingFailure(Error)
...
}
The next step involves assessing the HTTP status code. Here, we use a switch statement to categorize the status codes into three primary groups:
- Successful Responses (200–299): In this range, the response is considered successful, and the method proceeds to decode the data. The decoding is handled by another method, decodeData(_:), which uses JSONDecoder to convert the JSON response into a specified Swift Decodable type. If decoding fails, it throws a CustomError.networkResponseDecodingFailure, indicating an issue with the response format.
- Client Errors (400–499): These status codes indicate a problem with the request made by the client. In such cases, a CustomError.networkClientFailure is thrown, along with the status code and the URL of the request, to aid in diagnosing the issue.
- Server Errors (500–599): These codes signal issues on the server side. Similar to client errors, the class throws a CustomError.networkServerFailure, providing the status code and URL for further investigation.
For status codes that fall outside these ranges, the class throws a CustomError.networkUnknownStatusCode error. A catch-all case ensures that any unexpected status codes are also handled and flagged for review.
Caching data
⚠️ Caching data is optional, so you can remove all the caching logic if you find it unnecessary.
We are also establishing a dedicated CacheManager class to handle the caching of network responses. It is responsible for storing and retrieving cached data, thus enhancing the overall performance and user experience of the app. In other words, the cache ensures that data frequently requested by the user is readily available, reducing reliance on network connectivity and speeding up the overall data retrieval process.
import Foundation
final class CacheManager {
// MARK: - Variables
private static let memoryCapacity = 0
private static let allowedDiskSize = 10 * 1024 * 1024 // 10 MB
static let cache = URLCache(memoryCapacity: memoryCapacity, diskCapacity: allowedDiskSize, diskPath: nil)
// MARK: - Methods
static func storeCachedResponse(response: URLResponse, data: Data, for request: URLRequest) {
let cachedData = CachedURLResponse(response: response, data: data)
cache.storeCachedResponse(cachedData, for: request)
}
static func loadCachedResponse(for request: URLRequest) -> CachedURLResponse? {
cache.cachedResponse(for: request)
}
}
The CacheManager class is designed with a focus on simplicity and efficiency. It initializes a URLCache with specific memory and disk capacity settings. We've set the memoryCapacity to 0, indicating that the cache does not store any data in RAM, and the diskCapacity to 10 MB. This setup ensures that the app doesn't consume excessive device memory while still providing a substantial cache size on disk for network responses.
Two key methods, storeCachedResponse(response:data:for:) and loadCachedResponse(for:), encapsulate the caching functionality:
- storeCachedResponse(response:data:for:): This method takes a URLResponse, the response Data, and the associated URLRequest as arguments. It utilizes the URLCache instance to store the response in the cache. This method is invoked whenever a network response needs to be cached, allowing for later retrieval without the need for additional network requests.
- loadCachedResponse(for:): Conversely, this method retrieves a cached response for a given URLRequest. It returns an optional CachedURLResponse, which can be nil if no cached response exists for the request. This method is particularly useful for reducing network load and improving response times, as it serves data from the cache when available, bypassing the need to make a new network call.
It’s essential to recognize the role of server-side cache control in effective network communication. In many scenarios, the server dictates the caching policy, which the client (in this case, a mobile application) is expected to respect. It ensures that caching behavior aligns with the data’s nature and the server’s requirements, leading to optimal network resource usage.
When our application makes a network request, the server's response often includes HTTP headers that specify caching directives. Headers like Cache-Control, Expires, and ETag provide instructions on how the response should be cached by the client. For instance, Cache-Control can specify directives like no-cache, no-store, must-revalidate, or a max-age value indicating how long the data should be considered fresh.
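If you want to see what the server actually sent, a small sketch like this (purely illustrative, not part of the layer) can log the relevant headers from a response:
import Foundation

// Logs the server's caching directives for a response; the header names are standard HTTP.
func logCacheDirectives(for response: URLResponse) {
    guard let httpResponse = response as? HTTPURLResponse else { return }
    let cacheControl = httpResponse.value(forHTTPHeaderField: "Cache-Control") // e.g. "max-age=3600, must-revalidate"
    let etag = httpResponse.value(forHTTPHeaderField: "ETag")
    print("Cache-Control: \(cacheControl ?? "none"), ETag: \(etag ?? "none")")
}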
The URLCache used in the CacheManager class is designed to respect these server-specified caching headers. When we store a response using storeCachedResponse(response:data:for:), the caching behavior for that response is influenced by the server's instructions contained within the HTTP headers. Similarly, when retrieving a cached response using loadCachedResponse(for:), the cache checks the validity of the cached data based on these headers and determines whether to serve the cached data or fetch a new response.
Handling errors
A comprehensive approach to error management through the ErrorHandler class is described below.
ErrorHandler is specifically adapted to identify, categorize, and return appropriate error types based on the nature of the encountered issue, ensuring that the application can respond to network errors in a controlled and informative manner.
import Foundation
final class ErrorHandler {
// MARK: - Methods
static func handleError(_ error: Error, url: URL? = nil) -> CustomError {
print("Network error occurred: \(error) for URL: \(url?.absoluteString ?? "Unknown URL")")
if let urlError = error as? URLError {
switch urlError.code {
case URLError.notConnectedToInternet,
URLError.networkConnectionLost,
URLError.dataNotAllowed,
URLError.internationalRoamingOff,
URLError.timedOut:
return .notConnectedToInternet
default:
break
}
}
if let customError = error as? CustomError {
return customError
}
if let responseError = error as? HTTPURLResponse {
return .statusCode(code: responseError.statusCode, url: url?.absoluteString ?? "Unknown URL")
}
return .genericError(error)
}
}
The handleError(_:url:) method is the centerpiece of our error handling strategy. It takes an Error object and an optional URL as parameters and returns a CustomError, which is our custom error type tailored to the application's context. This method serves as a unified point for handling the various error scenarios that can occur during network communication.
Here's how the ErrorHandler method works:
- Logging the Error: Initially, it logs the error to the console. This step is needed for debugging and can aid in quickly identifying issues during development.
- Handling URLErrors: The method then checks if the error is a URLError. In the case of common connectivity issues like not being connected to the internet, a network connection being lost, or a timeout, it returns a .notConnectedToInternet error. This categorization simplifies the process of identifying connectivity-related problems, which are common in mobile applications.
- Handling CustomErrors: If the error is already a CustomError, it is simply returned. This part handles errors that are already categorized according to our custom error definitions, ensuring that error handling remains consistent across the application.
- Handling HTTPURLResponses: The method also checks if the error is an HTTPURLResponse, in which case it returns a .statusCode error with the response's status code and the URL. This is particularly useful for handling server-side errors indicated by specific HTTP status codes.
- Generic Error Handling: Finally, for all other types of errors, the method returns a .genericError, encapsulating the original error. This catch-all ensures that no error goes unhandled, even if it doesn't fit into the specific categories defined.
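As a usage illustration, the returned CustomError can then be mapped to something user-facing. A minimal sketch, assuming cases like .notConnectedToInternet live in the elided part of the enum:
// Hypothetical mapping from CustomError to a user-facing message; the wording is illustrative.
func userMessage(for error: CustomError) -> String {
    switch error {
    case .notConnectedToInternet:
        return "You appear to be offline. Please check your connection and try again."
    case .networkClientFailure(let code, _), .networkServerFailure(let code, _):
        return "The request failed with status code \(code)."
    case .networkResponseDecodingFailure:
        return "We couldn't read the server's response."
    default:
        return "Something went wrong. Please try again."
    }
}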
Through the ErrorHandler class, our network layer gains the ability to manage a wide range of error conditions effectively. This approach not only aids in troubleshooting and maintaining the application but also enhances the user experience by providing clearer and more specific error information. It demonstrates the importance of comprehensive error handling in building resilient and reliable network-driven iOS applications.
Logging network requests
Effective logging is a key aspect of monitoring and debugging the network layer. To facilitate this, we are implementing a NetworkLogger class dedicated to logging the details of network requests and responses. It provides live insight into the network activity of the app, greatly helping the debugging process during development.
import Foundation
final class NetworkLogger {
// MARK: - Methods
static func logRequest(request: URLRequest) {
#if DEBUG
print("-------------------------- NETWORK REQUEST START --------------------------")
if let url = request.url, let httpMethod = request.httpMethod {
print("\(httpMethod) \(url)")
}
if let headers = request.allHTTPHeaderFields {
print("Headers: \(headers)")
}
if let httpBody = request.httpBody, let bodyString = String(data: httpBody, encoding: .utf8) {
print("Body: \(bodyString)")
}
print("--------------------------------------------------------------------------------")
#endif
}
static func logResponse(response: URLResponse, data: Data?) {
#if DEBUG
print("--------------------------- NETWORK RESPONSE START ---------------------------")
if let httpResponse = response as? HTTPURLResponse {
print("Response - Status Code: \(httpResponse.statusCode)")
print("Response URL: \(httpResponse.url?.absoluteString ?? "Unknown URL")")
}
if let data = data, let dataString = String(data: data, encoding: .utf8) {
print("Response Body: \(dataString)")
}
print("--------------------------- NETWORK RESPONSE END -----------------------------")
#endif
}
}
You can always adjust which parameters are logged to make debugging easier.
Factorying everything
Lastly, we have to integrate the various components into a streamlined process. The NetworkFactory class, conforming to the NetworkFactoryType protocol, manages all the network operations from start to end.
It contains a single generic method, fetch<T: NetworkResponse>(requestConfig:shouldCache:endpointUrl:), for fetching network data.
import Combine
import Foundation
public protocol NetworkFactoryType {
func fetch<T: NetworkResponse>(
requestConfig: RequestConfig,
shouldCache: Bool,
endpointUrl: String
) -> AnyPublisher<T, CustomError>
}
public final class NetworkFactory: NetworkFactoryType {
// MARK: - Variables
private static let retries: Int = 3
// MARK: - Init
public init() { }
// MARK: - Execute Request
public func fetch<T: NetworkResponse>(
requestConfig: RequestConfig,
shouldCache: Bool,
endpointUrl: String
) -> AnyPublisher<T, CustomError> {
guard let request = RequestBuilder.buildRequest(with: requestConfig, and: endpointUrl) else {
return .fail(.networkRequestBuildFailure)
}
if shouldCache,
let cachedResponse = CacheManager.loadCachedResponse(for: request),
let decodedObject: T = try? ResponseDecoder.decodeData(cachedResponse.data) {
return .just(decodedObject)
}
NetworkLogger.logRequest(request: request)
let session = SessionManager.createSession(cache: CacheManager.cache)
return NetworkExecutor.execute(request: request, with: session)
.tryMap { data, response in
NetworkLogger.logResponse(response: response, data: data)
let decodedObject: T = try ResponseDecoder.decodeResponse(data: data, response: response)
if shouldCache {
CacheManager.storeCachedResponse(response: response, data: data, for: request)
}
return decodedObject
}
.mapError { error in
ErrorHandler.handleError(error, url: request.url)
}
.retry(Self.retries)
.catch { error -> AnyPublisher<T, CustomError> in
.fail(error)
}
.eraseToAnyPublisher()
}
}
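Note that the .just(_:) and .fail(_:) conveniences used above are not part of Combine's AnyPublisher API; the pipeline assumes small helpers along these lines:
import Combine

// Convenience factories assumed by the fetch pipeline above;
// Combine does not provide these on AnyPublisher out of the box.
extension AnyPublisher {
    static func just(_ output: Output) -> AnyPublisher<Output, Failure> {
        Just(output)
            .setFailureType(to: Failure.self)
            .eraseToAnyPublisher()
    }

    static func fail(_ error: Failure) -> AnyPublisher<Output, Failure> {
        Fail(error: error)
            .eraseToAnyPublisher()
    }
}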
The factory completes the whole process of making network requests, handling caching, logging, error handling, and decoding responses:
- Building the Request: The method starts by constructing a URLRequest using the RequestBuilder. If this fails, it immediately returns a failure with a .networkRequestBuildFailure error.
- Caching Logic: If shouldCache is true and a cached response exists, it tries to decode the cached data and return it directly, bypassing the network request. As mentioned previously, this greatly enhances performance and reduces network load.
- Logging Requests and Responses: The pipeline logs the request and response details using the NetworkLogger. This is necessary for debugging and monitoring network activity.
- Executing the Request: The request is executed using the NetworkExecutor. The response is then logged, and the data is decoded using the ResponseDecoder. Additional logic caches the response if shouldCache is true.
- Error Handling: Errors are managed using the ErrorHandler, ensuring that all errors are processed and categorized appropriately. The mapError operator transforms any errors that occur in the upstream publisher; it's used for error handling, such as logging or converting the error to a different type. The catch block is executed if the error is still present after the three retries via .retry(Self.retries). Here, the error is caught, and the block transforms it into Combine's Failure type, emitting it as a CustomError.
- Type Erasure: Finally, the method uses .eraseToAnyPublisher() to return a type-erased publisher, providing a clean and simple interface for the callers.
The whole pipeline can be simplified into a single flow: build the request → check the cache → log the request → execute → log the response → decode → cache → map errors → retry → publish the result.
Through the NetworkFactory, we effectively leverage the power of Combine for asynchronous networking, along with the custom components we've developed. This class ties together the request building, execution, logging, error handling, and response decoding in a seamless and efficient manner. It exemplifies how modular components in a network layer can be orchestrated to create a powerful, flexible, and maintainable network infrastructure in an iOS application.
Thank you for reading and good luck modularizing your application! :)
More about me and my experience here.