Tuesday, April 27, 2021

Update: Google Play store not taking advantage of Safe Browsing data to inform risk of apps in the store

I realized that I had never closed the loop on the flaw I discovered in the Google Play store years back.

I had discovered a missed opportunity: Google's own Safe Browsing data could inform Google Play's machine learning to detect suspicious mobile applications, alert users, block those apps, or force them through a human review cycle for verification.

During an incident at JP Morgan Chase, we were alerted to a malicious banking application in the Google Play store targeting JP Morgan Chase customers. The URL in the Google Play application listing was correctly flagged by Google's own Safe Browsing API as malicious. However, Google's Android app review did not consider this information when deciding to allow the application to be published. Nor did Google Play use this information to flag the app for review, unpublish it, or even warn users that the app might be suspicious because of its association with the malicious URL.
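To make the missed connection concrete, here is a minimal sketch -- my own illustration, not anything Google actually runs -- of the kind of check Google Play could perform at submission or review time: take the developer-supplied "Visit Website" URL from the listing and ask the Safe Browsing Lookup API (v4) whether it is flagged. The API key, client ID, and example URL below are placeholders.

import json
import urllib.request

SAFE_BROWSING_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"
API_KEY = "YOUR_API_KEY"  # placeholder -- a real Safe Browsing API key is required


def lookup_url(url):
    """Ask the Safe Browsing v4 Lookup API whether `url` matches a known threat list."""
    body = {
        "client": {"clientId": "play-listing-check", "clientVersion": "0.1"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    req = urllib.request.Request(
        f"{SAFE_BROWSING_ENDPOINT}?key={API_KEY}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # An empty response ({}) means no match; "matches" lists the flagged entries.
    return result.get("matches", [])


if __name__ == "__main__":
    listing_url = "http://example.example.com"  # the app's "Visit Website" link (placeholder)
    matches = lookup_url(listing_url)
    if matches:
        print("Listing URL is flagged by Safe Browsing -- hold the app for human review:")
        for m in matches:
            print(" ", m["threatType"], m["threat"]["url"])
    else:
        print("No Safe Browsing match for", listing_url)

Anything other than an empty result would be a strong signal to pause publication, or at least to surface a warning on the listing, before a single user installs the app.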

Google chose not to fix this. Closed as "Won't Fix (Infeasible)" ¯\_(ツ)_/¯

It's no surprise to still see articles like this five years later: "Google Play Store Is Main Distributor of Malicious Apps, Study Reveals" (2020, November 12), and this one from just *yesterday*, "Malware From Google Play Store Infects 700,000 Users" (2021, April 26).

Their official Android safety page has this gem: 

Google Play Protect helps you download apps without worrying if they’ll hurt your phone or steal data. We carefully scan apps every day, and if we detect a bad one, we’ll let you know and tell you what to do next. And we study how it works. Because everything we learn improves the way we screen apps. So you stay safer.
https://www.android.com/safety/
Well, they're not using "everything we learn" to "improve the way we screen apps".

My original questions to the Android team are still unanswered:

  • Is the Google Play store taking advantage of Safe Browsing API data to identify risky apps in the store?
  • Is it able to flag app uploads that match risky Safe Browsing data and block them from the appstore unless there is human review, for example?
  • Is it able to hide or flag applications that are already in the appstore so that unsuspecting users do not unwittingly install a likely malicious application associated with unsavory sites?

My original writeup:

Google Play + Safe Browsing = Safer Android Mobile Ecosystem. (2015, April 7). Retrieved from https://truthimperative.axley.net/2015/04/google-play-safe-browsing-safer-android.html


Tuesday, April 7, 2015

Google Play + Safe Browsing = Safer Android Mobile Ecosystem

A recent incident at my work came to my attention involving a takedown request for an unauthorized app in Google Play using my company's brand.  This happens often in appstores all over the world, which is why having brand protection monitoring for these is really critical.  It is all too easy for these to slip into even legitimate appstores like Google Play.

One thing I noticed while investigating this incident was that the Google Play application page has a section that allows a developer to specify a website link, labeled "Visit Website".
Google Play app metadata, including Visit Website

I happened to notice that the website link for the application in question also included our brand/company name in the URL.  I wanted to visit it to see what else I could learn from what they had on that site.  When I clicked on the link, however, it went through a redirect at Google (e.g. https://www.google.com/url?q=http://example.example.com) where Google Safe Browsing actually flagged the URL as a phishing site.

Google phishing warning
Which made me wonder: if Google's left hand (Safe Browsing) has knowledge of a suspected phishing site, shouldn't that inform Google's right hand (Google Play) that any application tied to such a URL is also potentially untrustworthy?  Essentially, if trust can propagate transitively, then the opposite (suspicion/risk) should also propagate transitively.  Taking this even further, you should propagate that suspicion through a graph, from the app containing the suspicious link up to the developer of the app and back down to any other apps that developer has in Google Play.  This would be easy to automate given the machine learning analysis of applications already described in the Google Android Security 2014 Report:
"Google’s systems use machine learning to see patterns and make connections that humans would not. Google Play analyzes millions of data points, asset nodes, and relationship graphs to build a high-precision security-detection system."
I would then imagine Google Play could take one or more of several actions when a listed URL is flagged as sufficiently risky by Safe Browsing:

  1. Apps, or developers and all of their apps, could be delisted from Google Play until a human has reviewed the URL and the app in more detail.  Google announced just last month that it is augmenting human review of apps in Google Play, so this would dovetail with those efforts.
  2. Google Play could and should show clear, usable UI warnings about the suspicion/risk to users searching and browsing apps so that they can make informed trust decisions.
  3. Google Play's Verify Apps feature could further come into play if apps are confirmed to be malware/badware/Potentially Harmful Apps (PHAs), warning users who may have already installed such an application or blocking the app outright.  This would also dovetail with other recently announced efforts in the Google Android Security 2014 Report to crack down on these kinds of applications in the Android ecosystem.

Monday, March 16, 2015

Beating The Open-Source-Is-More-Secure Straw-Man

Given all of the serious security flaws in open source software lately, such as OpenSSL, posters frequently use the open-source hack-du-jour as a counterexample to a purported claim that "open source software is more secure" than proprietary software.  And I just saw it come up again the other day.
The problem with these statements is that they seem to be a rampant straw man.  When I see them come up, I wonder, "Who in the world is actually making the positive claim that open source software is, in fact, more secure than proprietary software?"  Is someone actually making these claims that are being "countered"?  On what basis could they even make such a claim?

So, I started to search for specific examples of specific individuals making this specific claim that "open source" is "more secure" and I found it more common to claim someone believes this than to cite actual examples.

I've found a lot of discussion of the topic, such as this treatment from David A. Wheeler, "Is Open Source Good for Security?" But even in those discussions, nobody quotes a specific person making this specific claim. Is everyone arguing with a straw man? Many articles have been written to debunk this "myth" of open source security (the search yields over 2 million hits on Google), yet not a single one seems to cite any source establishing that it is even a myth at all. The best I found was John Viega's piece from 2004, "Open Source Security: Still a Myth", where he refers to nameless people he's encountered as believing this, with David A. Wheeler's "Why Open Source Software / Free Software (OSS/FS, FOSS, or FLOSS)? Look at the Numbers!" as the only named proponent.

Much of the genesis appears to be an extrapolation of Eric S. Raymond's famous assertion that, "given enough eyeballs, all bugs are shallow", which certainly does not seem to hold up for software defects in general, let alone security defects. I'm not sure how many people actually believe this is true these days, or whether it is common for the average developer to believe that it leads to better security. From my searching, it does not seem to be a common "myth" promulgated by proponents; it's more the detractors who promote it as a myth.

Anyone know who the main proponents of this "myth" are these days?  Why aren't they called out in articles?

Wednesday, August 20, 2014

Free Community For Youth lunch that will feed your soul

Community For Youth changes lives.  I know -- it's changed mine!  

Personal Integrity.  The CFY curriculum and core values have challenged the students in the community as well as mentors like me to be our best selves.  When I started, I didn't challenge myself with clear life goals and share them with others.  I was too afraid of opening myself up to the shame of failure.  However, through CFY, I've come to learn that sharing goals with a powerful community that can support you is exactly what can actually increase your chances of success.  You learn to be more accountable to yourself by being accountable to a supportive community.  And this has bled over into my daily life so much that even for small commitments, I maintain personal integrity.  "Darn, I did say that I was going to bike to work tomorrow.  Guess I have to suck it up and do it."

Authenticity.  I didn't realize how much compartmentalization went on in my head regarding how I presented myself to others.  We learn together how much more pleasant it is to be your own true self, and how much richer your connections are when you are not holding back, censoring yourself unnecessarily, or trying to be someone you are not.  "You let your students see your Facebook posts?"  Sure.  What I post and what I believe are important to me, and I only share what interests me.  Who I am or what I believe should not be something that I have to parcel out in small doses to particular people.  It's much freer to just be myself.  How do people know they have something in common if they don't share of themselves anyway?

Vulnerability.  I felt somewhat comfortable in front of crowds of strangers talking about something abstract or technical.  But CFY challenged everyone, including mentors, to share openly as your true, authentic self.  "Get comfortable with being uncomfortable," we say.  That was initially a very difficult thing for me to get used to: "You want me to talk about personal things...in front of everyone?"  But you quickly find that, as social animals, human relationships are strengthened by vulnerability because it cuts through the pretense and superficiality that we often use when interacting with others -- that's not authentic, and it shields you from truly connecting with others on a deeper level.  Oh, and one of the biggest ways this has always manifested itself in my life is my reluctance to ask for help and my instinct to go it alone instead.  I've definitely gotten better at realizing when I need help -- not perfect, but better.

There are many, many other ways that I've changed.  And I have seen my students and other students change as well because of CFY.  It truly does change your life, and although you don't always get direct evidence of it, the students' lives are changed as well.

The most moving experience of a transformation I can recall from my 8 years with CFY was when a student who had been paralyzed by fear when speaking in front of crowds was encouraged to perform her spoken word poetry in front of the whole community at one of our weekend retreats.  It took her a while to warm up to the idea and when she started speaking, my jaw dropped.  She gradually transformed into a confident young woman creatively and boldly expressing herself through her words -- compelling us to feel them as she felt them.  She said later that she was incredibly nervous but honestly I had no idea.  You could hear a pin drop in that room.  Everyone was blown away in rapt attention.  That was a turning point for her.  From that point, she was able to challenge herself more and grow into a real leader with things to say and express with less and less fear.  Truly inspiring.

That kind of growth and those moving experiences are among the most rewarding aspects.  But even the challenges are rewarding.  You are faced with kids in situations that you never had to face in your own life.  Sometimes you're thinking, "What the f* do I do with that?"  But you have a supportive community to help find ways of dealing with those situations.  Then the experience of tackling and possibly overcoming that challenge just makes you more ready for the next challenge in your own life.

Share in the Community Experience

This upcoming year will mark 9 years as a mentor with Community For Youth (CFY).  It is impossible to sum up the impact that CFY has had on the community, the students it serves, and the mentors (especially myself) in a simple blog post.  But there is a far better opportunity coming up that I hope you take me up on: come have a free lunch in downtown Seattle on September 30th, learn about CFY, hear from the 2013-2014 mentors of the year, and on top of that, hear from Seahawks wide receiver Doug Baldwin.

The lunch is an opportunity for those who might know that I'm involved with Community For Youth but may not quite know what it's all about.  I absolutely love CFY and appreciate any opportunity to share my experiences so others can see how impactful the program is.  You can sign up at www.communitylunch.com and join me at my table.


Get inspired

In a student's own words, on the importance of CFY to their life.
"I appreciate the work that everyone has contributed in one way or another, to keep this program alive. Because there are teenagers like me, who need people, even if it’s just one person, to believe in them." - See more at: http://communityforyouth.org/2013/04/my-introduction-to-cfy/#sthash.Xk9ZikAj.dpuf
Even if you can't join, you should take some time to watch this 12-minute video to learn about who we serve from the students and mentors that are part of this powerful community.  And if you're feeling moved or generous or both, you can head on over and donate to Community For Youth too!
Community For Youth from Greg Hay on Vimeo.

Thursday, April 17, 2014

iOS clients not vulnerable to Heartbleed. What does the source say?



Apple's language in their assertion that they are not vulnerable to Heartbleed on iOS is troubling, as they specifically say (via Re/code), "IOS and OS X never incorporated the vulnerable software..."  However, incorporating the vulnerable OpenSSL code is merely one way that their customers could have been made vulnerable.  What about Apple's own SSL/TLS implementation?  Has anyone checked it?  Did they incorporate RFC 6520 for heartbeat support?  I couldn't find anything on Google, so I figured I would share what I found.

Since the Apple SSL library code is open source, we can actually look at it.  Based on my read of the code, Apple doesn’t even implement the heartbeat extension. http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslHandshake.h doesn’t even define the heartbeat hello extension code (15) in the data structure:

/* Hello Extensions per RFC 3546 */
typedef enum
{
    SSL_HE_ServerName = 0,
    SSL_HE_MaxFragmentLength = 1,
    SSL_HE_ClientCertificateURL = 2,
    SSL_HE_TrustedCAKeys = 3,
    SSL_HE_TruncatedHMAC = 4,
    SSL_HE_StatusReguest = 5,

    /* ECDSA, RFC 4492 */
    SSL_HE_EllipticCurves  = 10,
    SSL_HE_EC_PointFormats = 11,

    /* TLS 1.2 */
    SSL_HE_SignatureAlgorithms = 13,

    /* RFC 5746 */
    SSL_HE_SecureRenegotation = 0xff01,

    /*
     * This one is suggested but not formally defined in
     * I.D.salowey-tls-ticket-07
     */
    SSL_HE_SessionTicket = 35
} SSLHelloExtensionType;

Then in the implementation, http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslHandshakeHello.c, they only handle one extension, SSL_HE_SecureRenegotation; any other extension type falls through to the default case and is ignored.

switch (extType) {
    case SSL_HE_SecureRenegotation:
        if(got_secure_renegotiation)
            return errSSLProtocol;            /* Fail if we already processed one */
        got_secure_renegotiation = true;
        SSLProcessServerHelloExtension_SecureRenegotiation(ctx, extLen, p);
        break;
    default:
        /*
         Do nothing for other extensions. Per RFC 5246, we should (MUST) error
         if we received extensions we didnt specify in the Client Hello.
         Client should also abort handshake if multiple extensions of the same
         type are found
         */
        break;
}
So, it appears from the library code that they would not be vulnerable to this bug at all.
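One way to double-check that source reading empirically -- this is my own sketch, not part of the original analysis -- is to run a bare TCP listener, point the client under test at it over https (e.g. https://your-host:4433/), and parse the extension types out of the ClientHello it sends; heartbeat is extension type 15 per RFC 6520. The handshake is never completed, so the client's connection will simply fail -- we only want the ClientHello bytes. The host, port, and single-read assumption are simplifications.

import socket
import struct

HEARTBEAT_EXT = 15  # RFC 6520 heartbeat extension type


def parse_client_hello_extensions(data):
    """Return the list of extension types found in a raw TLS ClientHello record."""
    if len(data) < 5 or data[0] != 0x16:            # 0x16 = handshake record
        raise ValueError("not a TLS handshake record")
    hs = data[5:]                                   # skip record header
    if hs[0] != 0x01:                               # 0x01 = ClientHello
        raise ValueError("not a ClientHello")
    pos = 4                                         # skip handshake header (type + 3-byte length)
    pos += 2 + 32                                   # client_version + random
    sid_len = hs[pos]; pos += 1 + sid_len           # session_id
    cs_len = struct.unpack("!H", hs[pos:pos + 2])[0]; pos += 2 + cs_len    # cipher_suites
    comp_len = hs[pos]; pos += 1 + comp_len         # compression_methods
    if pos >= len(hs):
        return []                                   # no extensions block at all
    ext_total = struct.unpack("!H", hs[pos:pos + 2])[0]; pos += 2
    end = pos + ext_total
    types = []
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack("!HH", hs[pos:pos + 4])
        types.append(ext_type)
        pos += 4 + ext_len
    return types


if __name__ == "__main__":
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 4433))
    srv.listen(1)
    conn, addr = srv.accept()
    hello = conn.recv(8192)                         # assume the ClientHello arrives in one read
    conn.close()
    exts = parse_client_hello_extensions(hello)
    print("ClientHello extensions from", addr, ":", exts)
    print("heartbeat advertised" if HEARTBEAT_EXT in exts else "no heartbeat extension")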

Sunday, April 13, 2014

Using VNC to securely connect to OSX without exposing an unlocked console

I couldn't believe how supremely difficult it is to securely use VNC to access an OS X Mac remotely.  It turns out that, by default, using a standard VNC client (as opposed to an Apple Remote Desktop client) does not give you an option to lock the physical console when someone connects to the VNC server.  Some third-party clients make this an option, but all that I could find were paid VNC clients.  It is somewhat ridiculous that this setting is left to the client rather than enforced on the server, but I digress...

I tried a few things suggested, such as enabling the screen saver or screen blanker, but those did not solve the problem as they did not differentiate between the VNC session and the physical desktop session so applied equally (the only states that were valid were either both unlocked or both locked).  Other options people suggested were to just turn the screen brightness all the way down.  This is security through obscurity though (the display is still unlocked and anyone who can get to your mouse/keyboard could mess with your computer, they just would be blind to what's on the screen).  It also seems problematic for usability (imagine you turn the brightness down and then come into the office the next day; how are you supposed to see the screen when you login if the brightness is still forced to the minimum?)

The solution I found that had the right security and usability properties was to use Fast User Switching plus the Vine VNC Server.  This lets you have different content on the physical display from what you see remotely over VNC.  Unfortunately, Fast User Switching with the Apple VNC "Screen Sharing" server doesn't work: it mirrors your display exactly to the VNC session, so it does not allow separate physical and remote displays.  I presume that's why it has a name like "Screen Sharing".  It's also not surprising that this doesn't quite work as well outside of the Apple monoculture.
  1. Download and install Vine VNC Server.
  2. Enable Fast User Switching in OS X (on Mavericks, it is under System Preferences > Users & Groups > Login Options).
  3. Connect to the Vine VNC Server with any standard VNC client (e.g. on port 5901).  I configure Vine to require SSH, so it doesn't listen on any remote interface and can only be reached over an SSH tunnel -- less attack surface (see the tunnel sketch after this list).
  4. Go to the Fast User Switching menu and select "Login Window..."  When you do this, the physical display will change to the login screen, but the VNC session will remain unlocked and functional, as desired.
Switch to login screen
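For the SSH tunnel mentioned in step 3, here is a minimal sketch of what I mean -- the hostname, user, and ports are placeholders for your own setup. It simply wraps the standard ssh -L local port forward, after which the VNC client connects to localhost:5901.

import subprocess

REMOTE_HOST = "user@my-mac.example.com"   # placeholder -- your Mac and account
LOCAL_PORT = 5901                         # where the VNC client will connect (localhost)
REMOTE_PORT = 5901                        # where Vine Server listens on the Mac (localhost only)

# -N: no remote command, just forwarding; -L: forward LOCAL_PORT to REMOTE_PORT over SSH.
tunnel = subprocess.Popen([
    "ssh", "-N",
    "-L", f"{LOCAL_PORT}:localhost:{REMOTE_PORT}",
    REMOTE_HOST,
])

print(f"Tunnel up: point your VNC client at localhost:{LOCAL_PORT} (Ctrl-C to stop).")
try:
    tunnel.wait()
except KeyboardInterrupt:
    tunnel.terminate()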


I get an IRS scam voice-mail

Had to share this hilarious voice-mail I received from an IRS scammer (happened to come in with Unknown caller ID -- I read online that others had been spoofing US phone numbers for caller ID in the past). The transcript does not do it justice.  I laughed out loud when I heard the phrase, "and you get arrested" as that is precisely what one would expect to hear from the IRS.


They actually tried calling me back and I got to talk to one of the people that afternoon, but my crummy cell service in my office resulted in the call dropping before I could chat with them too much. I told them that I didn't believe them that they were from the IRS. Maybe they'll call back again this week?

I plan on reporting it, as suggested.  Head over to the IRS Tax Fraud Alerts page; perhaps the best channel is their Phishing page.  The IRS warning regarding this scam provides some information, but there are of course no direct links to report the issue.  I wonder if the 20,000 who reported it are a small fraction of those victimized, since it's so difficult to find a way to report it.  They also suggest lodging a complaint with the FTC, but there it is also somewhat difficult to determine how to categorize the report.

See also: "IRS monitor: $1 million phone scam 'largest ever' - Mar. 20, 2014 ." Last modified 04/14/2014 05:10:31. http://money.cnn.com/2014/03/20/pf/taxes/irs-phone-scam/ (accessed 4/13/2014).


Transcript

Good morning. This is Willy ["Villy"] Mandersen, calling you from Internal Revenue Service...Crime Investigation Department.  The nature and the purpose of this call is just to let you know that....we have received...a legal petition notice...against your name...under your social security number. So, before this matter goes to the Federal claim court house...and you get arrested, kindly call us back at (866) 978-8320. I repeat (866) 978-8320.  Remember, don't disregard the message...as it is very important for you.  And if you don't return the call, then the situation will be worse. So take care about it, and call us back as soon as possible. Goodbye.