The “Google Docs” phishing attack that wormed its way through thousands of e-mail inboxes earlier this week exploited a threat that had been flagged by at least three security researchers—one raised issues about the threat as early as October of 2011. In fact, the person or persons behind the attack may have copied the technique from a proof of concept posted by one security researcher to GitHub in February. The warnings all centered on the same weakness: it was far too easy to fool unsuspecting targets into giving away access to their cloud, e-mail, storage, and other Google-associated accounts. The websites used in the phishing attack each used domains that mimicked Google’s in some way. The sites would call a Google Apps Script that used Google’s own authentication system against itself. The malicious Web application (named “Google Docs”) was delivered by an HTML e-mail message that looked so much like a genuine Google Docs sharing request that many users sailed right through the requested permissions without thinking.
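The mechanics are worth spelling out: the consent page the victim saw was served from Google’s own domain, while the application name displayed on it—“Google Docs”—came entirely from the attacker’s client registration. A minimal sketch of how such an authorization URL is assembled (the client ID and callback domain below are made-up placeholders, not from the actual attack):

```python
from urllib.parse import urlencode

def build_consent_url(client_id, redirect_uri, scopes):
    """Assemble a standard OAuth 2 authorization URL for Google's
    consent endpoint. Nothing in the URL itself carries the app's
    display name -- the name the user sees ("Google Docs" in this
    attack) comes from the attacker's own client registration."""
    params = {
        "client_id": client_id,        # registered by the attacker
        "redirect_uri": redirect_uri,  # attacker-controlled site
        "response_type": "code",
        "scope": " ".join(scopes),
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

url = build_consent_url(
    "1234567890.apps.googleusercontent.com",   # placeholder client_id
    "https://example-lookalike.win/callback",  # hypothetical phishing domain
    ["https://mail.google.com/",               # full Gmail access
     "https://www.googleapis.com/auth/contacts.readonly"],
)
print(url)
```

The point is that the page the victim lands on really is `accounts.google.com`, so the usual “check the address bar” habit offers no protection; only the unfamiliar app name and the breadth of the requested scopes give the game away.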
At first, Carson suspected the attackers might have copied his Google Apps Script code from his GitHub page or from snippets of the code in his blog post. But Carson revised that theory after doing a bit more analysis on the phishing attack. “Looking through [the phishing attack] closely, I don’t believe any code was ripped directly,” he said. “One of their for loops looked similar to mine, but it isn’t a direct copy, so I don’t think it’s fair to claim that.”
However, whoever was behind the “Google Docs” e-mail did exactly what Carson discussed in his post. Using a malicious webpage and Google’s own authentication interfaces, the attacker was able to gain access to victims’ Gmail accounts and harvest contacts to further spread the phishing campaign using the victims’ own accounts.
“I end the post by talking about how this attack is extremely dangerous,” Carson noted. “You can enumerate all of a user’s contacts and then send an e-mail to each of them using the victim’s account. It’s highly deceptive for unaware users as it appears to come from a legitimate contact, and it’s hosted by Google. It’s a worm of sorts for the modern era, and since the e-mail may appear trusted at a glance, it could be used to link to malicious websites and downloads.”
Carson had also warned of other attack cases based on malicious Web applications leveraging Google’s authentication system, which is based on the OAuth 2 standard. An attacker could conceivably use keyword searches to identify and harvest sensitive documents from a victim’s Google Drive account. “I wouldn’t be surprised if future phishing campaigns employ these techniques,” he said.
The potential for an OAuth-based attack on Google users was first discussed on an Internet Engineering Task Force OAuth mailing list by researcher Andre DeMarre in October of 2011. DeMarre wrote:
Imagine someone registers a client application with an OAuth service, let’s call it Foobar, and he names his client app “Google, Inc.” The Foobar authorization server will engage the user with “Google, Inc. is requesting permission to do the following.” The resource owner might reason, “I see that I’m legitimately on the https://www.foobar.com site, and Foobar is telling me that Google wants permission. I trust Foobar and Google, so I’ll click Allow.” To make the masquerade act even more convincing, many of the most popular OAuth services allow app developers to upload images which could be official logos of the organizations they are posing as. Often app developers can supply arbitrary, unconfirmed URIs which are shown to the resource owner as the app’s website, even if the domain does not match the redirect URI. Some OAuth services blindly entrust client apps to customize the authorization page in other ways.
Web developer Andrew Cantino issued a similar warning in a September 2014 blog post. He had built a proof-of-concept Web application called “Google Security Updater” and found that during the authentication process, “Google in no way makes it clear that this app was created by a 3rd party and is not affiliated with Google.”
Cantino’s Apps Script code only added a new label to Gmail messages. “But it could have deleted data, e-mailed a link to the script to everyone in the user’s contact list, manipulated personal information, or stolen data and sent it to a 3rd party,” he wrote.
There are a number of ways that organizations can try to stop future OAuth-based attacks like this week’s worm. The use of “look-alike” host names on non-standard top-level domains (TLDs) such as .pro, .win, and .download could potentially be blocked by intrusion prevention systems or DNS “greylisting,” or spotted early by monitoring DNS traffic. Carson suggested organizations use cloud access security brokers (CASBs), a term coined by Gartner for platforms that run within an organization’s network and check cloud applications’ permission requests against established policies. A CASB would have blocked access for a fake “Google Docs” application, for example.
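As a rough sketch of the DNS-side defense described above (the brand token list and function name here are illustrative, not taken from any particular product):

```python
# Greylist-style check for look-alike hostnames on non-standard TLDs.
SUSPECT_TLDS = {"pro", "win", "download"}            # TLDs cited above
BRAND_TOKENS = {"google", "gmail", "gdocs", "docs"}  # illustrative watch-list

def is_suspect_lookalike(hostname):
    """Flag hostnames that combine a trusted brand token with a
    rarely-used TLD, e.g. 'g-docs.pro'. A production IPS or DNS
    filter would consult reputation feeds, not static lists."""
    labels = hostname.lower().rstrip(".").split(".")
    if len(labels) < 2:
        return False
    tld = labels[-1]
    body = ".".join(labels[:-1])
    return tld in SUSPECT_TLDS and any(tok in body for tok in BRAND_TOKENS)

print(is_suspect_lookalike("g-docs.pro"))       # True
print(is_suspect_lookalike("docs.google.com"))  # False
```

A DNS resolver or IPS applying a rule like this would quarantine or log the lookup rather than resolve it, buying defenders time to investigate before users reach the consent page.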
For individuals, the only real defense is paying close attention to OAuth permission requests and closely examining where they’re coming from (and whether they make sense). That will continue to be the case unless Google makes it harder for attackers to spoof Google services. “I’d like to see Google provide better warnings to users when loading applications for Google services and provide stricter controls for admins around app script authorization,” said Carson.
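In practice, “closely examining permission requests” reduces to comparing an app’s requested scopes against what a third party of its apparent trust level should plausibly need—which is also the core of the CASB policy check mentioned earlier. A minimal sketch of that comparison (the allow-list and scope strings are illustrative policy choices, not any vendor’s defaults):

```python
# Minimal policy check: flag unverified third-party apps that request
# scopes beyond a narrow allow-list (basic profile information).
ALLOWED_FOR_UNVERIFIED = {
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/userinfo.profile",
}

def violates_policy(requested_scopes, verified):
    """Return True if an unverified app asks for any scope outside
    the allow-list -- e.g. full mail or contacts access, the two
    scopes the fake "Google Docs" app needed to spread itself."""
    if verified:
        return False
    return bool(set(requested_scopes) - ALLOWED_FOR_UNVERIFIED)

# The worm's permission request would trip this policy:
print(violates_policy(
    ["https://mail.google.com/",
     "https://www.googleapis.com/auth/contacts"],
    verified=False,
))  # True
```

The same question works as a manual habit: an app calling itself “Google Docs” that asks to *read, send, and delete* your e-mail is requesting far more than a document-sharing notification ever would.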