Six Lines

Code Signing Flaw in iOS

Posted by Aaron Massey on 08 Nov 2011.

My previous post about Apple security focused on an article by Wil Shipley wherein he discussed signing apps written for Mac OS X with certificates. One of Shipley’s main points was that the two primary mechanisms for enforcing security on the Mac App Store (sandboxing and auditing) are fundamentally flawed. Now we have a great example of how auditing fails:

Miller, a former NSA analyst who now works as a researcher with consultancy Accuvant, created a proof-of-concept app called Instastock to show the vulnerability. The simple program appears to merely list stock tickers, but also communicates with a server in Miller’s house in St. Louis, pulling down and executing whatever new commands he wants. In the video above, he demonstrates it reading an iPhone’s files and making the phone vibrate. Miller applied for Instastock’s inclusion in the App Store and Apple approved the booby-trapped app.
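
To make the phrase “executing whatever new commands he wants” concrete, here is a rough sketch in C of the generic pattern that iOS code signing is supposed to forbid: mapping memory that is both writable and executable, copying downloaded bytes into it, and jumping to them. To be clear, this is not Miller’s actual bypass (the interesting part of his work is finding a way around exactly this kind of restriction), and the function name and error handling are placeholders for illustration. On a stock iOS device, a third-party app’s request for a writable and executable mapping should simply be rejected.

    /* Illustrative only: not Miller's exploit. This is the textbook
     * pattern code signing is meant to prevent -- writing downloaded
     * bytes into memory and transferring control to them. */
    #include <string.h>
    #include <sys/mman.h>

    typedef int (*payload_fn)(void);

    int run_unsigned_payload(const unsigned char *bytes, size_t len)
    {
        /* Ask for a region that is readable, writable, and executable. */
        void *region = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                            MAP_PRIVATE | MAP_ANON, -1, 0);
        if (region == MAP_FAILED)
            return -1;                       /* the signing/W^X policy said no */

        memcpy(region, bytes, len);          /* "install" the unsigned code */
        int result = ((payload_fn)region)(); /* ...and jump to it */

        munmap(region, len);
        return result;
    }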

The rest of that article includes more details on the code signing flaw Miller exploited, but I want to focus on a slightly different aspect of this story: responsible disclosure. Essentially, under responsible disclosure, a researcher who discovers a flaw in proprietary software reports it to the company responsible right away and sets up a reasonable timeframe for fixing the problem before publicly disclosing the flaw.

Miller first contacted Apple about this problem on October 14th. I’m not sure that three weeks is really enough time to resolve a problem like this. I know he didn’t give all the details, and I know Apple has a reputation for not fixing security bugs until they become public (or perhaps well after they have been public for months…). Still, Miller would get a lot more sympathy from me if he had reported the problem to Apple privately and given them adequate time to resolve it. I would also have been a little more sympathetic if he and Apple had agreed on a timeframe for resolving the problem prior to disclosing the flaw, though I’m not sure Apple would ever agree to something like that. Publicly acknowledging flaws of this nature isn’t really in their DNA.

Despite the flaw in Apple’s code signing, they have been able to respond by removing the offending app from the App Store and canceling Miller’s developer license. (Note: There’s some hypocrisy on Apple’s part here, since canceling a developer license is quite different from how Apple has treated other iOS security researchers.) Is this good enough for security? Everything in security is a tradeoff, so where does this response fall? It annoys me that there’s a bug in Apple’s code signing, but maybe the structure of the iOS App Store, where Apple can pull an app at any time, is enough of a response.

The original article points out that a similar issue in Android has resulted in a spate of malware for that platform. I’m not sure the same thing will happen with iOS. Sure, Apple won’t be able to detect these apps in their review process, but they can always remove them from the store after they’ve been found in the wild. I would probably prefer to see the code signing flaw itself resolved, but I’m not sure what the tradeoffs really are, and it’s hard to make security decisions without knowing them.

Lastly, I should mention that this story is rather one-sided as of now. I haven’t seen anything from Apple about all of this yet. If you’ve seen something from Apple, please leave a comment.