Mastering the Mobile Dev Maze: UIKit vs. SwiftUI vs. XML vs. Compose - Part 1

Ali Taha Dinçer · Published in Commencis · 11 min read · Jan 29, 2024


Compose Multiplatform Header (Gathered From Official CMP Github Page)

TLDR: I implemented the same coin app 15 times using UIKit, SwiftUI, XML, Compose, and Compose Multiplatform, varying the setup and comparing FPS, memory usage, and app size. The results show that, as of January 2024, you should either go full CMP (Compose Multiplatform) on an iOS screen or leave it out entirely. For Android, it basically does not matter.

After spending years tinkering with Flutter development, I found myself pretty excited about Kotlin Multiplatform (KMP) when I first came across it. The idea of using the same code across different platforms while still getting that native feel was a game-changer. I hopped on the KMP train early in 2023 and have been exploring every bit of it since. I’ve developed two apps and played around with some of the big libraries like Apollo GraphQL, Ktor, SQLDelight, Realm, Koin, and KMM-ViewModel. Having a couple of years of Android experience made diving into these libraries a breeze, and knowing my way around Kotlin was a big help. By the time KMP hit its stable release in November 2023, it felt like the perfect moment to get really serious about it.

JetBrains had this pitch for KMP: it’s flexible for developers. You can go all-in and share your entire business logic, or you can take it slow and integrate KMP parts into your existing business logic bit by bit. That’s just my take on what they said in their video. But JetBrains didn’t just stop with KMP; they had bigger plans. They wanted to challenge the likes of Flutter and React Native with something even better, more efficient, and more developer-friendly for the multi-platform scene. They were aiming to share not just the logic but the UI as well. And so, in August 2021, they rolled out Compose Multiplatform (CMP) in alpha, letting you share your UI code on the web, desktop, and Android. But what about iOS and the whole Apple ecosystem? Hold on… as of May 2023, CMP for iOS hit alpha too. That was a huge step in shaking up the multi-platform world.

But here’s the big question: what about the native performance issue? In my view, KMP does a stellar job when the UI stays native, and you’re just sharing business logic. But what’s the deal with the UI part? Can Compose really keep up with SwiftUI or UIKit in performance? I know CMP for iOS is just starting out and has a long way to go to reach stability. But what’s the situation right now? A bunch of Android developers have started using CMP in their apps, and some are even pushing them into production. But at the end of the day, as a user, all I care about is a smooth, hassle-free experience. Imagine having a top-tier phone like an iPhone 15 Pro Max or a Pixel and running into laggy UI. That would be a bummer, right? Most users would blame their phone rather than think the app isn’t optimized. We developers sometimes forget that our users don’t know or care about the tech behind our apps. They just want something stable that performs well and doesn’t kill their batteries. So, can CMP deliver that today?

With that thought, I decided to dig in and benchmark CMP’s performance. I whipped up a simple app, which turned into 15 different versions, to gradually introduce CMP and see how it stacks up performance-wise. This series of articles will cover what I built and tested, including the methodology and criteria I used. I’ll wrap it up with my own takeaways and all the nifty solutions I figured out along the way.

Before we jump into the benchmarking adventure, check out my KMP projects from mid-2023. The second one’s still in the works, but I’ll keep you posted:

Finally, as you read through the series, you can follow along with the repo below, where I’ve published all the apps along with the libraries and resources:

Pre-thinking Possible Outcomes

Before diving into the nitty-gritty of this article series, let’s take a step back and consider what to expect. Starting a CMP project is straightforward with the Kotlin Multiplatform Wizard. Once you’ve created a project, downloaded it, and opened it, you’ll find a folder named ‘composeApp’. This is where all the Compose magic happens. In the ‘composeApp/src/commonMain/kotlin/’ directory, there’s a file named ‘App.kt’, which is essentially the starting point of our Compose UI. One key observation here is that the CMP imports come from ‘androidx.compose’, identical to what we use in Jetpack Compose for Android apps. This similarity is quite a revelation: it implies you’re using the same Jetpack Compose codebase in your multi-platform application.
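To make this concrete, here is a trimmed-down sketch of such an App.kt. It is not the exact wizard template (which changes between versions); the point is simply that every import resolves to androidx.compose, even though the file lives in commonMain and runs on both platforms:

```kotlin
// composeApp/src/commonMain/kotlin/App.kt (simplified sketch)
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@Composable
fun App() {
    // The same composable tree is rendered on Android and iOS.
    MaterialTheme {
        Column(modifier = Modifier.fillMaxSize()) {
            Text("Hello from shared Compose code")
        }
    }
}
```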

With this in mind, it’s logical to expect similar performance, app size, and memory usage for the Android versions, since they leverage the same library. However, when it comes to iOS, we should expect the unexpected. CMP is still in its alpha stage for iOS, so predicting outcomes here is tricky. Also, given that CMP for iOS renders through Skia (via Skiko), much like Flutter did before introducing Impeller, there might be some shared challenges. These could include an increase in app size, as the IPA package bundles the Skia engine. Another critical aspect to consider is the garbage collector, which for KMP/CMP code is the Kotlin/Native GC. This means that for the parts of the app where KMP and CMP come into play, memory is managed by that GC rather than by Swift’s automatic reference counting (ARC), potentially impacting memory behavior on iOS devices.

Methodology

Let’s see how I went about this. I’ll walk you through the names I’ve given to the apps, the libraries and architecture that were my go-to’s, and give you a peek into the code behind the FPS (Frames Per Second) and Memory Usage Measurers. I’ll also break down the tests I ran. So, let’s jump right in and see what’s under the hood!

Naming of the Apps

At the outset of this experiment, I initially planned to use both Retrofit and Ktor for Android app networking. However, as the project progressed, I decided to streamline the process by exclusively using Ktor for handling network requests. This decision was driven by the desire to minimize variables in the testing process. With that in mind,

  • For fully native apps, which do not incorporate any KMP or CMP dependencies and rely solely on XML, Jetpack Compose, UIKit, or SwiftUI, I’ve prefixed their names with ‘Base’. For example, an Android app that uses XML without any KMP or CMP dependencies is named ‘BaseAndroidXMLKtor’. Similarly, an iOS app using SwiftUI without these dependencies is ‘BaseiOSSwiftUI’.
  • Apps that integrate KMP libraries along with Jetpack Compose for the UI are named accordingly, such as ‘KMPAndroidComposeKtor’. For apps that utilize both KMP and CMP libraries with UIKit for UI implementation, I’ve chosen names like ‘KMPiOSCMPUIKit’.
  • The CMP usage in some iOS apps varies, with differences in how CMP elements are integrated (see the sketch after this list). For example, an iOS app that renders only the list items with CMP and uses SwiftUI for other UI elements is named ‘KMPiOSCMPSwiftUIListItem’. In contrast, an app that employs CMP for entire screen rendering and uses UIKit’s navigationController or SwiftUI’s NavigationStack for navigation is designated ‘CMPUIKit’ or ‘CMPSwiftUI’, respectively.
  • Notably, all Android app names end with ‘Ktor’, reflecting the decision to switch from Retrofit to Ktor after developing the first app, ‘BaseAndroidXMLKtor’. Apologies, but I was too lazy to create another app named ‘BaseAndroidXML’ and copy the whole codebase over.
  • Two specific libraries, ‘KMPKtorCoinbase’ and ‘CmpCoinbase’, have been implemented. Their names are indicative of their functionalities and integration into the respective apps.
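For context on the iOS integration itself: the usual way to host a CMP screen in an iOS app is to expose the composable as a UIViewController via ComposeUIViewController, which UIKit can push onto a navigationController and SwiftUI can wrap in a UIViewControllerRepresentable. Below is a minimal sketch under that assumption; CoinListScreen is a hypothetical composable, not necessarily a name from the repo:

```kotlin
// iosMain: exposes a shared composable as a UIViewController for the native side.
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.window.ComposeUIViewController
import platform.UIKit.UIViewController

@Composable
fun CoinListScreen() {
    // Placeholder content; the real screen would render the coin list.
    Text("Coins")
}

fun coinListViewController(): UIViewController =
    ComposeUIViewController {
        CoinListScreen()
    }
```

From Swift, the returned controller can then be pushed with pushViewController() or embedded in a SwiftUI view hierarchy.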

Libraries and Architecture

In preparing the apps for this project, my primary aim was to minimize the number of libraries used. This approach was crucial because each library introduces a new variable, which could lead to differences in testing results from one app to another, affecting aspects like performance, memory usage, and app size.

Here’s the approach I took:

  • For navigation, I relied on native SDK functionalities. On Android, Activities were the backbone for screen management, with navigation handled via startActivity() and finish() and data passed via Intents. For iOS apps utilizing UIKit, navigation was handled with pushViewController() on the navigationController, while SwiftUI apps used NavigationStack and NavigationLink.
  • The CoinPaprika API serves as the data backbone for all apps. Android apps use Ktor for networking, as do my ‘KMPKtorCoinbase’ and ‘CmpCoinbase’ libraries, while ‘Base’ iOS apps use URLSession (a rough sketch of the Ktor setup follows this list).
  • Serialization of requests and responses was handled by kotlinx.serialization in all Android apps, including the apps that leverage the ‘KMPKtorCoinbase’ and ‘CmpCoinbase’ libraries. For ‘Base’ iOS apps, the Codable protocol was used for serialization.
  • The two libraries I developed, ‘KMPKtorCoinbase’ and ‘CmpCoinbase’, play specific roles. ‘KMPKtorCoinbase’ was designed to share business logic without including any UI code, in line with KMP’s purpose. ‘CmpCoinbase’, on the other hand, incorporates UI logic through CMP in addition to the features of ‘KMPKtorCoinbase’. I copied and pasted the business logic directly from the KMP library into the CMP one, a decision I’ll elaborate on in the Acknowledgements section.
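As an illustration of that networking and serialization stack, here is a rough sketch of a Ktor call against CoinPaprika with kotlinx.serialization. This is my own minimal example, not the code from the repo; Coin is a stand-in DTO with only a few of the fields the API actually returns:

```kotlin
import io.ktor.client.HttpClient
import io.ktor.client.call.body
import io.ktor.client.plugins.contentnegotiation.ContentNegotiation
import io.ktor.client.request.get
import io.ktor.serialization.kotlinx.json.json
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

// Stand-in DTO: CoinPaprika's /v1/coins endpoint returns more fields than this.
@Serializable
data class Coin(val id: String, val name: String, val symbol: String)

// A single client with JSON content negotiation backed by kotlinx.serialization.
val client = HttpClient {
    install(ContentNegotiation) {
        json(Json { ignoreUnknownKeys = true })
    }
}

suspend fun fetchCoins(): List<Coin> =
    client.get("https://api.coinpaprika.com/v1/coins").body()
```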

Architecturally, I chose Clean Architecture for both platforms, despite the small scale of the app. This decision inevitably increased my development time, but it also enlarged the codebase, keeping the comparison closer to a real-world app. Here’s a quick layout:

  • Core components, such as UI Models, DTOs, their mappers, utility classes, exceptions, and extensions, are placed in the Core folder.
  • Code organization was feature-centric: with two screens in the app, there are two main features.
  • Each feature folder was structured with distinct Data, Domain, and Presentation sections (a minimal sketch of this layout follows the list).
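To picture how those sections relate, here is a tiny, hypothetical slice of one feature. The names are mine for illustration, not necessarily the ones in the repo, and the data layer is stubbed out instead of calling the network:

```kotlin
// Core: a small UI model shared by the feature.
data class CoinUiModel(val name: String, val symbol: String)

// Domain: the business contract plus a use case that depends only on it.
interface CoinRepository {
    suspend fun getCoins(): List<CoinUiModel>
}

class GetCoinsUseCase(private val repository: CoinRepository) {
    suspend operator fun invoke(): List<CoinUiModel> = repository.getCoins()
}

// Data: implements the domain contract; in the real apps this layer talks
// to CoinPaprika through Ktor (or URLSession in the 'Base' iOS apps).
class CoinRepositoryImpl : CoinRepository {
    override suspend fun getCoins(): List<CoinUiModel> =
        listOf(CoinUiModel("Bitcoin", "BTC")) // placeholder instead of a network call
}

// Presentation: whatever drives the UI (XML, Compose, SwiftUI, or CMP) only sees the use case.
class CoinListPresenter(private val getCoins: GetCoinsUseCase) {
    suspend fun load(): List<CoinUiModel> = getCoins()
}
```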

Measurers

Truth be told, using the built-in performance inspectors in Android Studio or Xcode would have been the easiest way to measure app performance. However, my ultra-genius mega-mind decided to create a separate KMP library for that purpose. My idea was to have a tool that could be easily integrated into any of my apps, perhaps even future ones. However, integrating this library with the native SDKs turned out to be a bigger challenge than I anticipated. After three days of wrestling with it, and probably due to my own knowledge gaps, I just couldn’t get it to work. So, I resorted to the good old method of copying and pasting the code into all the projects.

My aim was not just to measure performance, but to do it in a way that avoided the clutter and complexity of the standard tools. I wanted clear, straightforward data points that could easily be parsed into JSON and visualized in Python as easy-to-understand charts. In the end, building my own performance measurers turned out to be the simplest route to exactly what I needed, even though it brought its own set of challenges along the way.

FPS Measurer

  • Let’s start with Android. Here, the Choreographer is like your new best buddy. It operates at set time intervals, waiting for your tasks to wrap up before redrawing the screen with any updated components. To put it simply, for a 60 Hz screen, it works in 16-millisecond cycles. Finish your work in those 16 milliseconds, and all is well. Miss that window, and you’ve got a ‘frame skip’, a big no-no, especially in animations, as it leads to visible lag. Diving deeper, Choreographer offers a postFrameCallback() method for registering a function that runs on the next frame. The trick was to increment a ‘frameCount’ variable each time this callback was triggered, then calculate the FPS from this count after a second had passed (a Kotlin sketch of this loop follows the list). The real challenge? Testing this on a physical device meant I had to save these measurements to a file and then send it off to a server running locally on my network. You’ll find these files in the ‘measurers’ folder of the repo.
  • In iOS, things were more challenging due to my limited experience with native iOS development, aside from my knowledge of SwiftUI and the 10 apps I had implemented for this experiment. With assistance from ChatGPT and insights from StackOverflow discussions, I managed to calculate FPS on iOS devices using CADisplayLink, applying an approach similar to the one I used on Android. However, during testing on my good ol’ iPhone 11, I couldn’t send the file to my server because URLSession was unable to locate the file I had created. Consequently, I resorted to testing the iOS apps on the simulator, printed the file location to stdout via a print line (a mega-super-genius move), and then manually copied the file containing the measurements to my Python script for charting.
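Here is a stripped-down Kotlin sketch of that Android frame-counting idea, following the approach described above rather than reproducing the exact measurer from the repo:

```kotlin
import android.view.Choreographer

// Counts Choreographer frame callbacks and reports FPS roughly once per second.
class FpsMeasurer(private val onFps: (Int) -> Unit) : Choreographer.FrameCallback {

    private var frameCount = 0
    private var windowStartNanos = 0L

    fun start() {
        Choreographer.getInstance().postFrameCallback(this)
    }

    override fun doFrame(frameTimeNanos: Long) {
        if (windowStartNanos == 0L) windowStartNanos = frameTimeNanos
        frameCount++

        // After a full second, report the frame count and reset the window.
        if (frameTimeNanos - windowStartNanos >= 1_000_000_000L) {
            onFps(frameCount)
            frameCount = 0
            windowStartNanos = frameTimeNanos
        }

        // Re-register so the callback fires again on the next frame.
        Choreographer.getInstance().postFrameCallback(this)
    }
}
```

In the real measurer, the per-second values would be appended to a file and later shipped to the local server mentioned above.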

Memory Measurer

  • On the Android side, obtaining memory usage metrics was straightforward with Debug.getMemoryInfo(); the key task was converting the data from KB to MB (sketched in Kotlin below the list). Following this, similar to the FPS Measurer, these metrics were saved to a file and transmitted to my server for further analysis.
  • Now, on the iOS front, things got a bit tricky. Unlike the FPS Measurer, where the code was a walk in the park, the Memory Measurer required me to dive into native Objective-C methods and make calls to the kernel for memory insights (or at least, that’s how I interpreted it). I leaned heavily on resources like ChatGPT and StackOverflow to piece together functional code. The rest of the process followed the same routine as with FPS: saving the data, noting the file location, and then manually ferrying it over to my Python script for analysis and painting those nifty charts you can find in the next part of this article series.
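For the Android half, a minimal Kotlin sketch of that sampling step might look like this (my own illustration of the Debug.getMemoryInfo() approach, not the repo’s exact code):

```kotlin
import android.os.Debug

// Samples the current process's memory footprint and converts it from KB to MB.
fun currentMemoryUsageMb(): Double {
    val memoryInfo = Debug.MemoryInfo()
    Debug.getMemoryInfo(memoryInfo)      // fills the struct for the calling process
    return memoryInfo.totalPss / 1024.0  // totalPss is reported in kilobytes
}
```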

App Size Measurer

Yes, that measurer doesn’t exist, because measuring app size is straightforward: install the app in release mode and check the storage usage in the device settings.

Testing

At the beginning of these experiments, I set out to find a solid testing method that wouldn’t be prone to human error. Using an automated method seemed logical, and Maestro was the hottest thing in the X (Twitter) community at the time. But when I started testing Android with Maestro, the results got pretty complex and needed a lot of explaining to make sense to readers. Trying to draw clear conclusions from Maestro’s automated tests turned into a real head-scratcher, as you’ll see in the Experiments and Results section later on.

Then came iOS testing, and things got a bit tricky. It might have been Skia, or the fact that the CMP code lacked modifiers like .semantics() and .testTag(), but I hit some roadblocks: Maestro was unable to find my views when I used CMP in the iOS app. Additionally, in the "BaseiOSUIKit" app, TableView loading seemed to take forever, and the app started to use gigabytes of memory when I tried to connect Maestro. So, I decided to try something different: the "User Scroll" method.

While the “User Scroll” approach is a bit more prone to human error, it gave me results that were not only easier to understand but also more revealing than Maestro’s. Here’s the drill: I scrolled rapidly 40 times within a minute and recorded the results. Most of the time, this quick-scroll test wrapped up in just 40 seconds, and I patiently waited another 20 seconds (way more than needed) to let the garbage collector do its thing.

Testing X (Twitter) app in Maestro (Gathered from Maestro’s Front Page)

The Next Part

When I began writing this article, my intention was to compile everything on a single page. However, as the content grew, I found myself with over 9000 words, equivalent to nearly 35 minutes of reading time on Medium. In the interest of simplicity and to enhance readability, I’ve opted to split this article into three parts. The next installment will delve into the experiments, their outcomes, and offer straightforward explanations of the apps. You can explore these details in the upcoming section below:

I want to extend my gratitude to you for accompanying me on this journey. Feel free to share and utilize any part of this article series and project, as long as proper credits are attributed. I value your feedback and welcome any questions you may have. You can reach out to me at:

Keep learning, continue improving!
