Tuesday, April 7, 2015

Google Play + Safe Browsing = Safer Android Mobile Ecosystem

A recent incident at work came to my attention involving a takedown request for an unauthorized app in Google Play that used my company's brand.  This happens often in app stores all over the world, which is why brand-protection monitoring of app stores is critical.  It is all too easy for these apps to slip into even legitimate stores like Google Play.

One thing I noticed while investigating this incident was that a Google Play application page includes a section where the developer can specify a website link, labeled "Visit Website".
Google Play app metadata, including Visit Website

I happened to notice that the website link for the application in question also included our brand/company name in the URL.  I wanted to visit it to see what else I could learn from the site.  When I clicked the link, however, it went through a redirect at Google (e.g. https://www.google.com/url?q=http://example.example.com), where Google Safe Browsing flagged the URL as a phishing site.

Google phishing warning
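For reference, a Safe Browsing verdict like the one behind this warning can also be checked programmatically.  Below is a rough sketch against the v4 Safe Browsing Lookup API (threatMatches:find); the API key, client ID, and example URL are placeholders rather than values from the incident, so treat it as an illustration only.

    # Rough sketch: ask the Google Safe Browsing Lookup API (v4) whether it
    # currently flags a given URL.  The API key, client ID, and example URL
    # are placeholders for illustration.
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; issued via the Google Developers Console
    ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + API_KEY

    def check_url(url):
        """Return True if Safe Browsing reports a threat match for the URL."""
        body = {
            "client": {"clientId": "brand-monitoring-demo", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": url}],
            },
        }
        resp = requests.post(ENDPOINT, json=body, timeout=10)
        resp.raise_for_status()
        # An empty response body ({}) means no matches; otherwise "matches"
        # lists the threat types and platforms the URL was flagged for.
        return bool(resp.json().get("matches"))

    print(check_url("http://example.example.com/"))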
That warning made me wonder: if Google's left hand (Safe Browsing) knows about a suspected phishing site, shouldn't that inform Google's right hand (Google Play) that any application tied to such a URL is also potentially untrustworthy?  Essentially, if trust can propagate transitively, then the opposite (suspicion, or risk) should propagate transitively too.  Taking this further, that suspicion should propagate through a graph: from the app containing the suspicious link up to the app's developer, and then back down to any other apps that developer has in Google Play (a rough sketch of this graph walk follows the list of suggestions below).  This would be easy to automate given the machine-learning analysis of applications that the Google Android Security 2014 Report already describes:
"Google’s systems use machine learning to see patterns and make connections that humans would not. Google Play analyzes millions of data points, asset nodes, and relationship graphs to build a high-precision security-detection system."
I would then imagine Google Play could take one or more of several actions when an app's listed URL is flagged (or scored poorly enough) by Safe Browsing:

  1. Apps, or developers and all of their apps, could be delisted from Google Play until a human has reviewed the URL and the app in more detail.  Google announced just last month that it will be augmenting its review of Google Play apps with human reviewers, so this would dovetail with those efforts.
  2. Google Play could, and should, show clear, usable UI warnings about the suspicion/risk to users searching and browsing apps so that they can make informed trust decisions.
  3. The Google Play Verify Apps feature could further come into play if an app is confirmed to be malware/badware/a Potentially Harmful App (PHA), warning users who may have already installed it or blocking it outright.  This would also complement other recently announced efforts in the Google Android Security 2014 Report to crack down on these kinds of applications in the Android ecosystem.
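To make the graph-propagation idea concrete, here is a minimal sketch of the transitive walk described above.  The App data model and the sample catalog are assumptions made up for illustration, and the flagged-URL check can be any reputation lookup (for example, the check_url() sketch earlier); this is a toy illustration, not Google's actual detection pipeline.

    # Minimal sketch of transitive suspicion propagation over an app/developer
    # graph.  The App model and sample catalog are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class App:
        package: str        # e.g. "com.example.fakebank"
        developer_id: str   # publisher account that owns the listing
        website: str = ""   # the "Visit Website" link from the Play listing

    def propagate_suspicion(apps, is_flagged):
        """Return package names that are directly or transitively suspicious.

        is_flagged is any callable that takes a URL and returns True when a
        reputation source (such as Safe Browsing) considers it bad.
        """
        # Step 1: apps whose listed website is flagged are directly suspicious.
        direct = {a.package for a in apps if a.website and is_flagged(a.website)}
        # Step 2: suspicion propagates up to the developers of those apps...
        bad_devs = {a.developer_id for a in apps if a.package in direct}
        # Step 3: ...and back down to every other app those developers publish.
        return {a.package for a in apps
                if a.package in direct or a.developer_id in bad_devs}

    catalog = [
        App("com.example.fakebank", "dev-123", "http://example.example.com/"),
        App("com.example.otherapp", "dev-123"),            # same developer, no URL
        App("com.unrelated.game",   "dev-456", "http://example.org/"),
    ]
    print(propagate_suspicion(catalog, check_url))
    # If only the first URL is flagged, both dev-123 apps are returned.

Any package that comes out of a walk like this would then be a candidate for the delisting, UI warnings, or Verify Apps actions above.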

Monday, March 16, 2015

Beating The Open-Source-Is-More-Secure Straw-Man

Given all of the serious security flaws in open source software lately, such as those in OpenSSL, posters frequently use the open-source hack du jour as a counterexample to a purported claim that "open source software is more secure" than proprietary software.  And I just saw it come up again the other day.
The problem with these statements is that they attack what seems to be a rampant straw man.  When I see them come up, I wonder, "Who in the world is actually making the positive claim that open source software is, in fact, more secure than proprietary software?"  Is anyone actually making the claims that are being "countered"?  On what basis could they even make such a claim?

So I started to search for specific examples of specific individuals making this specific claim that "open source" is "more secure," and I found that it is far more common to assert that someone believes this than to cite actual examples.

I've found a lot of discussion of the topic, such as David A. Wheeler's treatment in "Is Open Source Good for Security?"  But even in those discussions, nobody quotes a specific person making this specific claim.  Is everyone arguing with a straw man?  Many articles have been written to debunk this "myth" of open source security (a Google search for which yields over 2 million hits), yet not a single one seems to cite any source showing that this is even a myth at all.  The best I found was John Viega's piece from 2004, "Open Source Security: Still a Myth," where he refers to nameless people he has encountered who believe this, with David A. Wheeler's "Why Open Source Software / Free Software (OSS/FS, FOSS, or FLOSS)? Look at the Numbers!" as the only named proponent.

Much of the genesis appears to be an extrapolation of Eric S. Raymond's famous assertion that "given enough eyeballs, all bugs are shallow," which certainly does not seem to hold up for software defects in general, let alone for security defects.  I'm not sure how many people actually believe this is true in general these days, or even whether it is common for the average developer to believe that it leads to better security.  From my searching, it certainly does not seem to be a common "myth" promulgated by open source promoters; it is more the detractors who promote it as a myth.

Anyone know who the main proponents of this "myth" are these days?  Why aren't they called out in articles?