Serverless Swift Lambda Performance

Kevin Hinkson
4 min read · Jan 19, 2023

Is A Serverless Swift Lambda Fast?

Quite often developers want to know if Swift offers good performance for running server applications. Where Swift on Server might be classified as a toddler, Swift in the Cloud is a newborn baby. Very few people have experience configuring and running serverless applications built with Swift at the time of this writing. So what kind of performance does a Swift Lambda offer?

Network Performance

The first way to get better server performance is simply to be closer to your customers, wherever they are. Many iOS apps have customers physically spread all over the globe. If your server is set up in the continental US, customers in Australia, Japan or Brazil will suffer from noticeable latency. In Swift Cloud Compute with AWS Lambda we break down the infrastructure involved in setting up a serverless, multi-regional Swift API. This API routes customers to the region that responds to them with minimal latency. In the GitHub repository AWS Swift Multi-Region API we give you the detailed steps and code to get this API up and running.

Runtime Performance

The second way to get better server performance is to use a programming language and runtime platform that performs well. Unsurprisingly, Swift + Lambda meets these criteria. With the planned improvements for 2023 and beyond, the performance of Swift should continue to improve incrementally well into the future. As you will see below, the great out-of-the-box performance of our SwiftLambdaAPI example takes very little work.

SwiftLambdaAPI

The app is very simple for the purpose of getting started with serverless Swift on Lambda.

  1. The app accepts incoming requests.
  2. If it receives a GET request with the URL path /hello then it returns an HTTP 200 OK response status with a JSON body containing both static and random values.
  3. For any other received request, it returns an HTTP 404 Not Found response.

That’s it.
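For illustration, here is a minimal sketch of a handler shaped like that. It assumes the closure-based API of the swift-aws-lambda-runtime 0.x package (type and method names differ slightly in the newer 1.0 alphas), and the HelloBody struct and its fields are hypothetical names, not the ones used in the repository linked above.

import AWSLambdaRuntime
import AWSLambdaEvents
import Foundation

// Hypothetical JSON body: one static value plus one random value.
struct HelloBody: Codable {
    let service: String
    let luckyNumber: Int
}

// Closure-based entry point: decode the API Gateway v2 event,
// route on method + path, and return an encoded JSON response.
Lambda.run { (context: Lambda.Context, request: APIGateway.V2.Request,
              callback: @escaping (Result<APIGateway.V2.Response, Error>) -> Void) in
    guard request.context.http.method == .GET, request.context.http.path == "/hello" else {
        // Anything other than GET /hello gets a 404 Not Found.
        callback(.success(APIGateway.V2.Response(statusCode: .notFound)))
        return
    }
    do {
        let body = HelloBody(service: "SwiftLambdaAPI", luckyNumber: Int.random(in: 0..<1000))
        let json = String(data: try JSONEncoder().encode(body), encoding: .utf8)
        callback(.success(APIGateway.V2.Response(
            statusCode: .ok,
            headers: ["Content-Type": "application/json"],
            body: json
        )))
    } catch {
        callback(.failure(error))
    }
}

Beyond routing the request and encoding the JSON, there is nothing else for the handler to do, which is exactly why the runtime numbers below come almost for free.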

Measuring Network & Runtime Performance

Although our app is deployed across multiple regions, we will test the runtime performance by simply issuing multiple requests from within a single region. From an AWS EC2 instance in the Ohio (us-east-2) region, we send requests to the Lambda API using the Apache Bench tool. The Lambda concurrency limits are set pretty low on this account to help limit abuse. As such, for benchmarking this open API we do not make use of concurrent request processing. We send one hundred (100) requests in total, one (1) at a time, and track their performance.

ab -n 100 -c 1 "https://main.openapi.prediket.dev/hello"
This is ApacheBench, Version 2.3 <$Revision: 1901567 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking main.openapi.prediket.dev (be patient).....done


Server Software:
Server Hostname: main.openapi.prediket.dev
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128
Server Temp Key: ECDH P-256 256 bits
TLS Server Name: main.openapi.prediket.dev

Document Path: /hello
Document Length: 150 bytes

Concurrency Level: 1
Time taken for tests: 2.282 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 31300 bytes
HTML transferred: 15000 bytes
Requests per second: 43.83 [#/sec] (mean)
Time per request: 22.816 [ms] (mean)
Time per request: 22.816 [ms] (mean, across all concurrent requests)
Transfer rate: 13.40 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 2 3 0.6 3 9
Processing: 12 20 4.6 19 42
Waiting: 12 20 4.6 19 42
Total: 15 23 4.7 22 45

Percentage of the requests served within a certain time (ms)
50% 22
66% 23
75% 25
80% 26
90% 27
95% 29
98% 42
99% 45
100% 45 (longest request)

Quirks of the Network

At first glance this seems slow for a very simple app. This can be explained by how the network infrastructure of the API is set up. We use latency-based routing to determine which region to send requests to. Then those requests are routed through API Gateway to the Lambda. In this case, coming from an EC2 instance within AWS, the request is routed outside of the AWS network and then back inside the same region via API Gateway. The response follows the same route in reverse. A better way to route this request would be to stay on the internal network via a Virtual Private Cloud (VPC). This would eliminate much of the network travel time and latency. If you doubt this, take a look at the actual Swift Lambda runtime values below.

The Serverless Swift Runtime

AWS CloudWatch automatically logs how long each request to a Lambda takes to run, including any startup time for the initial launch. As you can see below, each request takes just over 1 millisecond to run. That means that in this use case, the network design is limiting the responsiveness of the Swift API.

Swift Lambda AWS CloudWatch screenshot

Serverless Swift is a Performance Win

Hopefully, I have piqued your interest as to how performant Swift can be in the cloud. Out of the box, your limiting factor will likely be your chosen network infrastructure. Long before you need to worry about Swift performance, you will be focused on designing and optimising your network infrastructure for your use case.



Written by Kevin Hinkson

Kevin is an experienced software developer who works mainly with start-up companies and understands the unique perspective required. Twitter: @kevin_hinkson
