Schrödinger’s iPhone

Before we get into the body of this article, it is important to understand one of the guiding principles of modern cybersecurity: the Defender’s Dilemma.

The Defender’s Dilemma

The premise of the Defender’s Dilemma boils down to a simple fact: being a defender really sucks. The reasons for this are manifold.

Firstly, the defender must constantly defend all their borders. In cybersecurity terms, this is the attack surface. In order to do this, the defender must also know where all of their borders lie, something which at first seems simple but is actually much harder when you think about all the possible avenues of attack.

Secondly, the defender can only defend against attacks they’re aware of. How does the defender know when something is friendly or hostile? I’m sure the Trojans looked at large wooden horses in a different light after the sacking of Troy. This holds true for security; the defender needs to understand an attack to know how to look for that attack.

Thirdly, the defender must be constantly checking for attacks. In the days of yore, you could agree to meet at dawn and know who your enemy was. Alas, those days are over. The defender must be constantly vigilant for attacks.

Finally, the defender must follow a clear set of rules, while the attacker, by definition, is a bad guy and probably doesn’t follow the laws of the land. This holds particularly true in security, where, unlike on a traditional battlefield, who is the good guy and who is the bad guy is up for significantly more debate.

It’s Not a Pretty Picture

The Defender’s Dilemma doesn’t paint a pretty picture. In modern cybersecurity thinking, there’s a prevailing assumption that some part of an enterprise is compromised. What needs to be minimized is how compromised that part is and what an attacker can leverage to get further into the enterprise.

What happens after an attacker successfully breaks in? Well, it’s sort of like robbing a bank, and it turns out that it’s actually pretty easy to rob a bank. The problem starts once the thief has broken in and has all that sweet cash, because it’s really hard to get away with robbing a bank. The bank has many security measures in place to make it easy to catch the thief after they’ve left with the money, and many processes in place to minimize the amount of money the thief can access in the first place.

The same concept applies in cybersecurity. While the attacker has the advantage on the outside, once they’re in, well, now they’re in our world. We start seeing machines making odd connections, devices scanning networks, devices trying to log into services they’ve never logged into before, and large files being downloaded.

All of these behaviours are anomalous. We know how our enterprise works, but the attacker doesn’t; they need to do a lot of scouting: mapping the network, understanding where things live, and so on. This is the disadvantage for the attacker: the more the attacker does, the more behaviour there is to observe, and the greater the chance they will be recognized and shut out. So in cybersecurity, you monitor, you minimize the amount of data that can be accessed, and you start implementing zero-trust-like paradigms.
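To make that concrete, here is a minimal sketch of the kind of first-seen check a monitoring pipeline might run: it builds a baseline of which accounts use which services, then flags any login pair it has never observed before. The event shape and field names are invented for illustration; a real enterprise would do this in its SIEM or detection platform rather than in a standalone script.

```python
# Minimal sketch of "first-seen" anomaly flagging: raise an alert the first
# time an account logs into a service it has never used before.
# The event shape (device, user, service) is hypothetical, for illustration only.

from collections import defaultdict


class FirstSeenLoginDetector:
    def __init__(self):
        # Baseline of services each user has been observed logging into.
        self.seen = defaultdict(set)

    def train(self, events):
        """Build the baseline from historical, presumed-clean login events."""
        for event in events:
            self.seen[event["user"]].add(event["service"])

    def check(self, event):
        """Return an alert string if this login has never been seen before."""
        user, service = event["user"], event["service"]
        if service not in self.seen[user]:
            self.seen[user].add(service)  # remember it so we alert only once
            return f"ALERT: {user} logged into {service} for the first time (device {event['device']})"
        return None


if __name__ == "__main__":
    detector = FirstSeenLoginDetector()
    detector.train([
        {"device": "laptop-042", "user": "alice", "service": "mail"},
        {"device": "laptop-042", "user": "alice", "service": "wiki"},
    ])
    # An attacker using alice's credentials against a service she never touches.
    alert = detector.check({"device": "laptop-042", "user": "alice", "service": "finance-db"})
    print(alert)
```

The point isn’t the code itself; it’s the asymmetry it captures. The defender holds the baseline, and the attacker, who doesn’t know what “normal” looks like here, can’t help but deviate from it.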

In modern cybersecurity, understanding what the attacker’s advantages and disadvantages are is a key principle. It helps us understand our risks and vulnerabilities and gives us a rough framework to mitigate them.

Apple, iPhones, and Security through Obscurity

Now that we’re paragons of cybersecurity ideas, let’s talk about iPhones. If you’re reading this and you reside in the USA, there’s about a 46% chance that you have an iPhone. If you’re in EMEA, there’s about a 14% chance that you’re using an iPhone as your daily driver.

If you’re one of those chosen few, I’m going to make an assertion: if that device is not rooted, you have no idea whether it is compromised. Normally at this point you would expect me to provide some caveats to temper the hyperbole, but I really mean it. Unless you have rooted your device, there is no way of knowing whether it has been compromised.

The traditional approach Apple has taken to security on iOS is interesting. The approach is best summed up as: hide everything. By design, as time has gone on, there are fewer and fewer insights into what your device is doing.

Now, if Apple takes away visibility into the security of a device, then Apple is implicitly taking on responsibility for that device’s security. That means I must trust that Apple is keeping my device secure. This is the implicit agreement we enter into any time we use an Apple device that isn’t rooted. The problem is that Apple is just not up to the job.

iMessage has a checkered past; everyone who has had issues moving from Apple to Android will know that. But there is something more sinister about iMessage: it has become a conduit for multiple remote zero-click exploits. OK, that’s pretty bad, but it really just gets worse from there.

With iMessage, Apple has expanded the attack surface of your device. In terms of the Defender’s Dilemma, they’ve opened a path into the heart of our devices, and their application security has been mediocre at best (multiple, as in more than one, zero-click exploits). But okay, no one is perfect, and we know that exploits happen.

But with our knowledge of the Defender’s Dilemma as a shield, we know that once someone has compromised our device, we can figure that out and remedy it. Remember the attacker’s disadvantage, right? Well, no. In the case of the iPhone, we can’t. With the device as it stands, there is no way of knowing whether it is compromised, and Apple has gone to great lengths to make sure that information isn’t available to any app or any user of the device.

Hence my assertion: there’s no way of knowing that your device is compromised if it isn’t rooted. The irony is that the bad guys know this. On a rooted device it’s trivial to see the processes the bad guys are running and what they’re doing, because they barely bother to hide; why try subterfuge when Apple is already doing the heavy lifting for you? Apple’s security model is so flawed that there are companies whose entire business model exploits it.

NSO Group has multiple remote exploits which it leverages to install spyware on people’s devices. Now you’re thinking, “well, they’re probably just installing it on bad guys’ devices”, but it was found on the devices of people close to Jamal Khashoggi. Were they bad guys? The truth is, if companies like this exist and are doing this, you can bet there are criminals and nation-state actors who leverage this as well, and there is nothing we can do about it.

Apple’s approach to security is essentially security through obscurity. This approach was popular about 30 years ago; it was dumb then, and it’s willfully ignorant in this day and age. It’s exacerbated by the fact that the obscurity is a one-way street: the bad guys have rooted devices and are using them to understand and plan attacks, while the customers and enterprises trying to use these devices are left at a disadvantage.

These types of issues, coupled with their lack of concern over users’ data (https://www.theguardian.com/technology/2020/may/20/apple-whistleblower-goes-public-over-lack-of-action) and their interesting approach to your data in the cloud (https://www.macrumors.com/2021/08/06/snowden-eff-slam-plan-to-scan-messages-images/), paint a picture of a company that really needs to re-evaluate its entire approach to security.

This has gotten so bad that certain agencies around the world resort to rooting the iPhones of their key government employees and taking snapshots of the devices to check for compromise. It is the only way they have to understand what’s running on devices that they own and are responsible for.
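For what it’s worth, the core of that workflow is not complicated. Here is a minimal sketch of the “snapshot and diff” step, assuming you already have a process listing captured from a rooted device (for example, the output of ps collected over SSH from a jailbroken phone) saved as one process name per line; the file names below are hypothetical.

```python
# Minimal sketch of the "snapshot and diff" idea: compare the process list
# captured from a rooted device against a known-good baseline and report
# anything new. How the listing is captured (e.g. running `ps` over SSH on a
# jailbroken device) is outside the scope of this sketch; here we just diff
# two text snapshots, one process name per line. File names are hypothetical.

from pathlib import Path


def load_snapshot(path):
    """Read one process name per line, ignoring blank lines."""
    return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}


def diff_snapshots(baseline_path, current_path):
    """Return the set of processes present now but absent from the baseline."""
    return load_snapshot(current_path) - load_snapshot(baseline_path)


if __name__ == "__main__":
    unexpected = diff_snapshots("baseline_processes.txt", "todays_processes.txt")
    if unexpected:
        print("Processes not in the known-good baseline:")
        for name in sorted(unexpected):
            print(f"  {name}")
    else:
        print("No new processes since the baseline snapshot.")
```

None of this is sophisticated. The remarkable part is that owning the device isn’t enough to run it, because without rooting there is no way to get that process listing in the first place.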

The situation, as it stands, needs to change.

If you have any questions or would like to reach Alex, please email him at [email protected].
