Thursday, November 11, 2010

Quick update on our vulnerability reward program



About a week and a half ago we launched a new web vulnerability reward program, and the response has been fantastic. We've received many high quality reports from across the globe. Our bug review committee has been working hard, and we’re pleased to say that so far we plan to award over $20,000 to various talented researchers. We'll update our 'Hall of Fame' page with relevant details over the next few days.

Based on what we've received over the past week, we've clarified a few things about the program — in particular, the types of issues and Google services that are in scope for a reward. The review committee has been somewhat generous this first week, and we’ve granted a number of awards for bugs of low severity, or that wouldn’t normally fall under the conditions we originally described. Please be sure to review our original post and clarification thoroughly before reporting a potential issue to us.

Monday, November 1, 2010

Rewarding web application security research



Back in January of this year, the Chromium open source project launched a well-received vulnerability reward program. In the months since launch, researchers reporting a wide range of great bugs have received rewards — a small summary of which can be found in the Hall of Fame. We've seen a sustained increase in the number of high quality reports from researchers, and their combined efforts are contributing to a more secure Chromium browser for millions of users.

Today, we are announcing an experimental new vulnerability reward program that applies to Google web properties. We already enjoy working with an array of researchers to improve Google security, and some individuals who have provided high caliber reports are listed on our credits page. As well as enabling us to thank regular contributors in a new way, we hope our new program will attract new researchers and the types of reports that help make our users safer.

In the spirit of the original Chromium blog post, we have some information about the new program in a question and answer format below:

Q) What applications are in scope?
A) Any Google web properties which display or manage highly sensitive authenticated user data or accounts may be in scope. Some examples could include:
  • *.google.com
  • *.youtube.com
  • *.blogger.com
  • *.orkut.com
For now, Google's client applications (e.g. Android, Picasa, Google Desktop, etc) are not in scope. We may expand the program in the future.

UPDATE: We also recommend reading our additional thoughts about these guidelines to help clarify what types of applications and bugs are eligible for this program.

Q) What classes of bug are in scope?
A) It's difficult to provide a definitive list of vulnerabilities that will be rewarded; however, any serious bug which directly affects the confidentiality or integrity of user data may be in scope. We anticipate most rewards will be in bug categories such as:
  • XSS
  • XSRF / CSRF
  • XSSI (cross-site script inclusion)
  • Bypassing authorization controls (e.g. User A can access User B's private data)
  • Server side code execution or command injection
Out of concern for the availability of our services to all users, we ask you to refrain from using automated testing tools.
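As a plain illustration of the most common category above (not code from any Google service), reflected XSS arises when user-controlled input is echoed into HTML unescaped; escaping at output time neutralizes it. A minimal Python sketch:

```python
import html

PAYLOAD = "<script>alert(document.cookie)</script>"

def greet_unsafe(name: str) -> str:
    # Vulnerable: attacker-controlled input lands in the markup verbatim.
    return "<p>Hello, " + name + "!</p>"

def greet_safe(name: str) -> str:
    # Escaping HTML metacharacters at output time renders the payload inert.
    return "<p>Hello, " + html.escape(name) + "!</p>"

print(greet_unsafe(PAYLOAD))  # the script would execute in a victim's browser
print(greet_safe(PAYLOAD))    # displays as harmless text
```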

These categories of bugs are definitively excluded:
  • attacks against Google’s corporate infrastructure
  • social engineering and physical attacks
  • denial of service bugs
  • non-web application vulnerabilities, including vulnerabilities in client applications
  • SEO blackhat techniques
  • vulnerabilities in Google-branded websites hosted by third parties
  • bugs in technologies recently acquired by Google

Q) How far should I go to demonstrate a vulnerability?
A) Please, only ever target your own account or a test account. Never attempt to access anyone else's data. Do not engage in any activity that bombards Google services with large numbers of requests or large volumes of data.

Q) I've found a vulnerability — how do I report it?
A) Contact details are listed here. Please only use the email address given for actual vulnerabilities in Google products. Non-security bugs and queries about problems with your account should instead be directed to the Google Help Centers.

Q) What reward might I get?
A) The base reward for qualifying bugs is $500. If the rewards panel finds a particular bug to be severe or unusually clever, rewards of up to $3,133.7 may be issued. The panel may also decide a single report actually constitutes multiple bugs requiring reward, or that multiple reports constitute only a single reward.

We understand that some researchers aren’t interested in the money, so we’d also like to give you the option to donate your reward to charity. If you do, we'll match it — subject to our discretion.

Regardless of whether you're rewarded monetarily or not, all vulnerability reporters who interact with us in a respectful, productive manner will be credited on a new vulnerability reporter page. If we file a bug internally, you'll be credited.

Superstar performers will continue to be acknowledged under the "We Thank You" section of this page.

Q) How do I find out if my bug qualified for a reward?
A) You will receive a comment to this effect in an emailed response from the Google Security Team.

Q) What if someone else also found the same bug?
A) Only the first report of a given issue that we had not yet identified is eligible. In the event of a duplicate submission, only the earliest received report is considered.

Q) Will bugs disclosed without giving Google developers an opportunity to fix them first still qualify?
A) We believe handling vulnerabilities responsibly is a two-way street. It's our job to fix serious bugs within a reasonable time frame, and we in turn request advance, private notice of any issues that are uncovered. Vulnerabilities that are disclosed to any party other than Google, except for the purposes of resolving the vulnerability (for example, an issue affecting multiple vendors), will usually not qualify. This includes both full public disclosure and limited private release.

Q) Do I still qualify if I disclose the problem publicly once fixed?
A) Yes, absolutely! We encourage open collaboration. We will also make sure to credit you on our new vulnerability reporter page.

Q) Who determines whether a given bug is eligible?
A) Several members of the Google Security Team including Chris Evans, Neel Mehta, Adam Mein, Matt Moore, and Michal Zalewski.

Q) Are you going to list my name on a public web page?
A) Only if you want us to. If selected as the recipient of a reward, and you accept, we will need your contact details in order to pay you. However, at your discretion, you can choose not to be listed on any credit page.

Q) No doubt you wanted to make some legal points?
A) Sure. We encourage broad participation. However, we are unable to issue rewards to individuals who are on sanctions lists, or who are in countries (e.g. Cuba, Iran, North Korea, Sudan and Syria) on sanctions lists. This program is also not open to minors. You are responsible for any tax implications depending on your country of residency and citizenship. There may be additional restrictions on your ability to enter depending upon your local law.

This is not a competition, but rather an experimental and discretionary rewards program. You should understand that we can cancel the program at any time, and the decision as to whether or not to pay a reward has to be entirely at our discretion.

Of course, your testing must not violate any law, or disrupt or compromise any data that is not your own.

Thank you for helping us to make Google's products more secure. We look forward to issuing our first reward in this new program.

Thursday, October 21, 2010

This Internet is Your Internet: Digital Citizenship from California to Washtenaw County



In the physical world, basic safety measures are second nature to almost everyone (look both ways; stop, drop, and roll!). In the digital world, however, many of us expect security to be handled on our behalf by experts or to come in a single-box solution. Together, we must reset those expectations.

The Internet is the biggest neighborhood in the world. Security-related initiatives in the technology sector and government play an important role in making the Internet safer, but efforts from Silicon Valley and Washington, D.C. alone are not enough. Much of the important work that needs to be done must happen closer to home—wherever that may be.

As part of National Cyber Security Awareness Month I recently traveled from California to Washtenaw County, MI to speak to a group of local community leaders, educators, business owners, law enforcement officials and residents who recently formed the Washtenaw Cyber Citizenship Coalition. They are working to create a digitally aware, knowledgeable and more secure community by providing residents with the tools and resources to be good digital citizens. No one in the room self-identified as a “cyber security expert,” but the information sharing that’s happening in Washtenaw County is the kind of holistic effort that can enable everyone to use the Internet more safely and benefit from the great opportunities that it provides.

The Washtenaw Cyber Citizenship Coalition is channeling the community’s efforts through volunteer workgroups in areas such as public/private partnerships, awareness, education and law enforcement. Their strategy is to “share the wheel” whenever possible, instead of recreating it. They’ve collected tips and resources for kids, parents, businesses, educators and crime victims so that citizens can find and access these materials with ease.

If you are interested in raising awareness in your own community, staysafeonline.org, stopthinkconnect.org and onguardonline.gov are examples of sites that offer such materials for public use.

Friday, October 15, 2010

Protecting your data in the cloud



Like many people, you probably store a lot of important information in your Google Account. I personally check my Gmail account every day (sometimes several times a day) and rely on having access to my mail and contacts wherever I go. Aside from Gmail, my Google Account is tied to lots of other services that help me manage my life and interests: photos, documents, blogs, calendars, and more. That is to say, my Google Account is very valuable to me.

Unfortunately, a Google Account is also valuable in the eyes of spammers and other people looking to do harm. It’s not so much about your specific account, but rather the fact that your friends and family see your Google Account as trustworthy. A perfect example is the “Mugged in London” phishing scam that aims to trick your contacts into wiring money — ostensibly to help you out. If your account is compromised and used to send these messages, your well-meaning friends may find themselves out a chunk of change. If you have sensitive information in your account, it may also be at risk of improper access.

As part of National Cyber Security Awareness month, we want to let you know what you can do to better protect your Google Account.

Stay one step ahead of the bad guys

Account hijackers prey on the bad habits of the average Internet user. Understanding common hijacking techniques and using better security practices will help you stay one step ahead of them.

The most common ways hijackers can get access to your Google password are:
  • Password re-use: You sign up for an account on a third-party site with your Google username and password. If that site is hacked and your sign-in information is discovered, the hijacker has easy access to your Google Account.
  • Malware: You use a computer with infected software that is designed to steal your passwords as you type (“keylogging”) or grab them from your browser’s cache data.
  • Phishing: You respond to a website, email, or phone call that claims to come from a legitimate organization and asks for your username and password.
  • Brute force: You use a password that’s easy to guess, like your first or last name plus your birth date (“Laura1968”), or you provide an answer to a secret question that’s common and therefore easy to guess, like “pizza” for “What is your favorite food?”
As you can see, hijackers have many tactics for stealing your password, and it’s important to be aware of all of them.
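To see why patterns like “Laura1968” fall quickly, consider a toy dictionary attack (a sketch for illustration only; real attackers use vastly larger wordlists and automated tooling):

```python
from itertools import product

# A tiny candidate space: common names crossed with plausible birth years.
names = ["laura", "Laura", "john", "John", "smith", "Smith"]
years = [str(y) for y in range(1940, 2000)]

def dictionary_attack(password):
    """Return (attempt number, winning guess), or None if the list fails."""
    for attempt, (name, year) in enumerate(product(names, years), start=1):
        if name + year == password:
            return attempt, name + year
    return None

print(dictionary_attack("Laura1968"))   # found in well under 100 guesses
print(dictionary_attack("tr4B!x9#qL"))  # None: matches no obvious pattern
```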

Take control of your account security across the web

Online accounts that share passwords are like a line of dominoes: When one falls, it doesn’t take much for the others to fall, too. This is why you should choose unique passwords for important accounts like Gmail (your Google Account), your bank, commerce sites, and social networking sites. We’re also working on technology that adds another layer of protection beyond your password to make your Google Account significantly more secure.

Choosing a unique password is not enough to secure your Google Account against every possible threat. That’s why we’ve created an easy-to-use checklist to help you secure your computer, browser, Gmail, and Google Account. We encourage you to go through the entire checklist, but want to highlight these tips:
  • Never re-use passwords for your important accounts like online banking, email, social networking, and commerce.
  • Change your password periodically, and be sure to do so for important accounts whenever you suspect one of them may have been at risk. Don’t just change your password by a few letters or numbers (“Aquarius5” to “Aquarius6”); change the combination of letters and numbers to something unique each time.
  • Never respond to messages, non-Google websites, or phone calls asking for your Google username or password; a legitimate organization will not ask you for this type of information. Report these messages to us so we can take action. If you responded and can no longer access your account, visit our account recovery page.
We hope you’ll take action to ensure your security across the web, not just on Google. Run regular virus scans, don’t re-use your passwords, and keep your software and account recovery information up to date. These simple yet powerful steps can make a difference when it really counts.

Thursday, October 14, 2010

Phishing URLs and XML Notifications



Recently, we announced Safe Browsing Alerts for Network Administrators. Today we’re adding phishing URLs to the notification messages. This means that in addition to being alerted to compromised URLs found on networks, you’ll be alerted to phishing URLs as well.

We’d also like to point out the XML notification feature. By default, we send notification messages in a simple email message. However, we realize that some of you may want to process these notifications by a script, so we’ve added the ability to receive messages in XML format. Click on an AS in your list to modify preferences, such as enabling the XML notification feature. If you decide to use XML email messages, you should familiarize yourself with the XML Schema.
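For those writing such a script, processing might look like the sketch below. The element and attribute names here are invented for illustration; the real structure comes from the published XML Schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical notification payload; the actual element and attribute
# names are defined by the notification's XML Schema, not by this sketch.
SAMPLE = """\
<notification asn="12345">
  <url type="malware">http://example.com/compromised-page</url>
  <url type="phishing">http://example.com/fake-login</url>
</notification>"""

def extract_urls(xml_text):
    """Return (type, url) pairs from a notification message."""
    root = ET.fromstring(xml_text)
    return [(u.get("type"), u.text) for u in root.findall("url")]

for kind, url in extract_urls(SAMPLE):
    print(kind, url)
```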

If you’re a network administrator and haven’t yet registered your AS, you can do so here.

Tuesday, September 28, 2010

Safe Browsing Alerts for Network Administrators



Google has been working hard to protect its users from malicious web pages, and also to help webmasters keep their websites clean. When we find malicious content on websites, we attempt to notify their webmasters via email about the bad URLs. There is even a Webmaster Tools feature that helps webmasters identify specific malicious content that has been surreptitiously added to their sites, so that they can clean up their site and help prevent it from being compromised in the future.

Today, we’re happy to announce Google Safe Browsing Alerts for Network Administrators -- an experimental tool which allows Autonomous System (AS) owners to receive early notifications for malicious content found on their networks. A single network or ISP can host hundreds or thousands of different websites. Although network administrators may not be responsible for running the websites themselves, they have an interest in the quality of the content being hosted on their networks. We’re hoping that with this additional level of information, administrators can help make the Internet safer by working with webmasters to remove malicious content and fix security vulnerabilities.

To get started, visit safebrowsingalerts.googlelabs.com.

Monday, September 20, 2010

Moving security beyond passwords



Entering your username and password on a standard website gives you access to everything from your email and bank accounts to your favorite social networking site. Your passwords possess a lot of power, so it's critical to keep them from falling into the wrong hands. Unfortunately, we often find that passwords are the weakest link in the security chain. Keeping track of many passwords is a pain, and unfortunately accounts are regularly compromised when passwords are too weak, are reused across websites, or when people are tricked into sharing their password with someone untrustworthy. These are difficult industry problems to solve, and when re-thinking the traditional username/password design, we wanted to do more.

As we explained today on our Google Enterprise Blog, we've developed an option to add two-step verification to Google Apps accounts. When signing in, Google will send a verification code to your phone, or let you generate one yourself using an application on your Android, BlackBerry or iPhone device. Entering this code, in addition to a normal password, gives us a strong indication that the person signing in is actually you. This new feature significantly improves the security of your Google Account, as it requires not only something you know: your username and password, but also something that only you should have: your phone. Even if someone has stolen your password, they'll need more than that to access your account.
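The post doesn't publish Google's implementation, but phone-generated verification codes of this kind are typically time-based one-time passwords in the style of RFC 4226/6238. A hedged sketch of the idea:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password (RFC 6238 style, HMAC-SHA1)."""
    counter = int(time.time() if now is None else now) // timestep
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str) -> bool:
    # Real servers also accept adjacent time steps to allow for clock drift.
    return hmac.compare_digest(totp(secret), submitted)

# RFC 6238 test vector: at Unix time 59, the 6-digit SHA1 code is 287082.
print(totp(b"12345678901234567890", now=59))
```

The second factor works because the shared secret never leaves the phone and the server; a stolen password alone cannot reproduce the current code.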



Building the technology and infrastructure to support this kind of feature has taken careful thought. We wanted to develop a security feature that would be easy to use and not get in your way. Along those lines, we're offering a variety of sign-in options, along with the ability to indicate when you're using a computer you trust and don't want to be asked for a verification code from that machine in the future. Making this service available to millions of users at no cost took a great deal of coordination across Google’s specialized infrastructure, from building a scalable SMS and voice call system to developing open source mobile applications for your smart phone. The result is a feature we hope you'll find simple to manage and that makes it easy to better protect your account.

We look forward to gathering feedback about this feature and making it available to all of our users in the coming months.

If you'd like to learn more about staying safe online, see our ongoing security blog series or visit http://www.staysafeonline.org/.

Thursday, September 16, 2010

Stay safe while browsing



We are constantly working on detecting sites that are compromised or are deliberately set up to infect your machine while browsing the web. We provide warnings on our search results and to browsers such as Firefox and Chrome. A lot of the warnings take people by surprise — they can trigger on your favorite news site, a blog you read daily, or another site you would never consider to be involved in malicious activities.

In fact, it’s very important to heed these warnings because they show up for sites that are under attack. We are very confident in the results of our scanners that create these warnings, and we work with webmasters to show where attack code was injected. As soon as we think the site has been cleaned up, we lift the warning.

This week in particular, a lot of web users have become vulnerable. A number of live public exploits were attacking the latest versions of some very popular browser plug-ins. Our automated detection systems encounter these attacks every day, e.g. exploits against PDF (CVE-2010-2883), Quicktime (CVE-2010-1818) and Flash (CVE-2010-2884).

Interestingly, we discovered the PDF exploit on the same page as a more “traditional” fake anti-virus attack, in which users are prompted to install an executable file. So, even if you run into a fake anti-virus page and ignore it, we suggest you run a thorough anti-virus scan on your machine.

We and others have observed that once a vulnerability has been exploited and announced, it does not take long for it to be abused widely on the web. For example, the stack overflow vulnerability in PDF was announced on September 7th, 2010, and the Metasploit project made an exploit module available only one day later. Our systems found the vulnerability abused across multiple exploit sites on September 13th.

Here are a few suggestions for protecting yourself against web attacks:
  • Keep your OS, browser, and browser plugins up-to-date.
  • Run anti-virus software, and keep this up-to-date, too.
  • Disable or uninstall any software or browser plug-ins you don’t use — this reduces your vulnerability surface.
  • If you receive a PDF attachment in Gmail, select “View” to view it in Gmail instead of downloading it.

Monday, August 30, 2010

Vulnerability trends: how are companies really doing?



Quite a few security companies and organizations produce vulnerability databases, cataloguing bugs and reporting trends across the industry based on the data they compile. There is value in this exercise; specifically, getting a look at examples across a range of companies and industries gives us information about the most common types of threats, as well as how they are distributed.

Unfortunately, the data behind these reports is commonly inaccurate or outdated to some degree. The truth is that maintaining an accurate and reliable database of this type of information is a significant challenge. We most recently saw this reality play out last week after the appearance of the IBM X-Force® 2010 Mid-Year Trend and Risk Report. We questioned a number of surprising findings concerning Google’s vulnerability rate and response record, and after discussions with IBM, we discovered a number of errors that had important implications for the report’s conclusions. IBM worked together with us and promptly issued a correction to address the inaccuracies.

Google maintains a Product Security Response Team that prioritizes bug reports and coordinates their handling across relevant engineering groups. Unsurprisingly, particular attention is paid to high-risk and critical vulnerabilities. For this reason, we were confused by a claim that 33% of critical and high-risk bugs uncovered in our services in the first half of 2010 were left unpatched. We learned after investigating that the 33% figure referred to a single unpatched vulnerability out of a total of three — and importantly, the one item that was considered unpatched was only mistakenly considered a security vulnerability due to a terminology mix-up. As a result, the true unpatched rate for these high-risk bugs is 0 out of 2, or 0%.

How do these types of errors occur? Maintainers of vulnerability databases have a number of factors working against them:
  • Vendors disclose their vulnerabilities in inconsistent formats, using different severity classifications. This makes the process of measuring the number of total vulnerabilities assigned to a given vendor much more difficult.
  • Assessing the severity, scope, and nature of a bug sometimes requires intimate knowledge of a product or technology, and this can lead to errors and misinterpretation.
  • Keeping the fix status updated for thousands of entries is no small task, and we’ve consistently seen long-fixed bugs marked as unfixed in a number of databases.
  • Not all compilers of vulnerability databases perform their own independent verification of bugs they find reported from other sources. As a result, errors in one source can be replicated to others.
To make these databases more useful for the industry and less likely to spread misinformation, we feel there must be more frequent collaboration between vendors and compilers. As a first step, database compilers should reach out to vendors they plan to cover in order to devise a sustainable solution for both parties that will allow for a more consistent flow of information. Another big improvement would be increased transparency on the part of the compilers — for example, the inclusion of more hard data, the methodology behind the data gathering, and caveat language acknowledging the limitations of the presented data. We hope to see these common research practices employed more broadly to increase the quality and usefulness of vulnerability trend reports.

Tuesday, July 20, 2010

Rebooting Responsible Disclosure: a focus on protecting end users


Vulnerability disclosure policies have become a hot topic in recent years. Security researchers generally practice “responsible disclosure”, which involves privately notifying affected software vendors of vulnerabilities. The vendors then typically address the vulnerability at some later date, and the researcher reveals full details publicly at or after this time.

A competing philosophy, "full disclosure", involves the researcher making full details of a vulnerability available to everybody simultaneously, giving no preferential treatment to any single party.

The argument for responsible disclosure goes briefly thus: by giving the vendor the chance to patch the vulnerability before details are public, end users of the affected software are not put at undue risk, and are safer. Conversely, the argument for full disclosure proceeds: because a given bug may be under active exploitation, full disclosure enables immediate preventative action, and pressures vendors for fast fixes. Speedy fixes, in turn, make users safer by reducing the number of vulnerabilities available to attackers at any given time.

Note that there's no particular consensus on which disclosure policy is safer for users. Although responsible disclosure is more common, we recommend this 2001 post by Bruce Schneier as background reading on some of the advantages and disadvantages of both approaches.

So, is the current take on responsible disclosure working to best protect end users in 2010? Not in all cases, no. The emotionally loaded name suggests that it is the most responsible way to conduct vulnerability research; but if we define responsible behavior as doing whatever best protects end users, we find a disconnect. We’ve seen an increase in vendors invoking the principles of “responsible” disclosure to delay fixing vulnerabilities indefinitely, sometimes for years; in that timeframe, these flaws are often rediscovered and used by rogue parties using the same tools and methodologies used by ethical researchers. The important implication of referring to this process as "responsible" is that researchers who do not comply are seen as behaving improperly. However, the inverse situation is often true: it can be irresponsible to permit a flaw to remain live for such an extended period of time.

Skilled attackers are using 0-day vulnerabilities in the wild, and there are increasing instances of:
  • 0-day attacks that rely on vulnerabilities known to the vendor for a long while.
  • Situations where it became clear that a vulnerability was being actively exploited in the wild, subsequent to the bug being fixed or disclosed.
Accordingly, we believe that responsible disclosure is a two-way street. Vendors, as well as researchers, must act responsibly. Serious bugs should be fixed within a reasonable timescale. Whilst every bug is unique, we would suggest that 60 days is a reasonable upper bound for a genuinely critical issue in widely deployed software. This time scale is only meant to apply to critical issues. Some bugs are mischaracterized as “critical", but we look to established guidelines to help make these important distinctions — e.g. Chromium severity guidelines and Mozilla severity ratings.

As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.

A lot of talented security researchers work at Google. These researchers discover many vulnerabilities in products from vendors across the board, and they share a detailed analysis of their findings with vendors to help them get started on patch development. We will be supportive of the following practices by our researchers:
  • Placing a disclosure deadline on any serious vulnerability they report, consistent with the complexity of the fix. (For example, a design error needs more time to address than a simple memory corruption bug.)
  • Responding to a missed disclosure deadline or refusal to address the problem by publishing an analysis of the vulnerability, along with any suggested workarounds.
  • Setting an aggressive disclosure deadline where there exists evidence that blackhats already have knowledge of a given bug.
We of course expect to be held to the same standards ourselves. We recognize that we’ve handled bug reports in the past where we’ve been unable to meet reasonable publication deadlines -- due to unexpected dependencies, code complexity, or even simple mix-ups. In other instances, we’ve simply disagreed with a researcher on the scope or severity of a bug. In all these above cases, we’ve been happy for publication to proceed, and grateful for the heads-up.

We would invite other researchers to join us in using the proposed disclosure deadlines to drive faster security response efforts. Creating pressure towards more reasonably-timed fixes will result in smaller windows of opportunity for blackhats to abuse vulnerabilities. In our opinion, this small tweak to the rules of engagement will result in greater overall safety for users of the Internet.

Update September 10, 2010: We'd like to clarify a few of the points above about how we approach the issue of vulnerability disclosure. While we believe vendors have an obligation to be responsive, the 60 day period before public notification about critical bugs is not intended to be a punishment for unresponsive vendors. We understand that not all bugs can be fixed in 60 days, although many can and should be. Rather, we thought of 60 days when considering how large the window of exposure for a critical vulnerability should be permitted to grow before users are best served by hearing enough details to make a decision about implementing possible mitigations, such as disabling a service, restricting access, setting a killbit, or contacting the vendor for more information. In most cases, we don't feel it's in people's best interest to be kept in the dark about critical vulnerabilities affecting their software for any longer period.

Friday, May 21, 2010

Extending SSL to Google search



Google understands the potential risks of browsing the web on an unsecured network, particularly when information is sent over the wire unencrypted — as it is for most major websites today. That’s why we offered SSL support for Gmail back when we launched the product in 2004. Most other webmail providers don’t provide this feature even today. We’ve since added SSL support for Calendar, Docs, Sites, and several other products. Additionally, early this year we made SSL the default setting for all Gmail users.

As we work to provide more support for SSL across our products, today we’re introducing the ability to search with Google over SSL. We still have some testing to do, but you can try out the new encrypted version of Google search at https://www.google.com and read more about it on the Official Google Blog.

Tuesday, May 4, 2010

Do Know Evil: web application vulnerabilities



UPDATE July 13: We have changed the name of the codelab application to Gruyere. The codelab is now located at http://google-gruyere.appspot.com.

We want Google employees to have a firm understanding of the threats our services face, as well as how to help protect against those threats. We work toward these goals in a variety of ways, including security training for new engineers, technical presentations about security, and other types of documentation. We also use codelabs — interactive programming tutorials that walk participants through specific programming tasks.

One codelab in particular teaches developers about common types of web application vulnerabilities. In the spirit of the thinking that "it takes a hacker to catch a hacker," the codelab also demonstrates how an attacker could exploit such vulnerabilities.

We're releasing this codelab, entitled "Web Application Exploits and Defenses," today in coordination with Google Code University and Google Labs to help software developers better recognize, fix, and avoid similar flaws in their own applications. The codelab is built around Gruyere, a small yet full-featured microblogging application designed to contain lots of security bugs. The vulnerabilities covered by the lab include cross-site scripting (XSS), cross-site request forgery (XSRF) and cross-site script inclusion (XSSI), as well as client-state manipulation, path traversal and AJAX and configuration vulnerabilities. It also shows how simple bugs can lead to information disclosure, denial-of-service and remote code execution.
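To give a flavor of the kind of bug the codelab covers, here is a minimal sketch of a reflected XSS flaw and its fix. This is purely illustrative and is not Gruyere's actual code:

```python
import html

def render_profile_unsafe(username: str) -> str:
    # Vulnerable: user input is interpolated into HTML unescaped, so a
    # "name" like <script>alert(1)</script> executes in the victim's browser.
    return "<h1>Welcome, %s!</h1>" % username

def render_profile_safe(username: str) -> str:
    # Fixed: escape user-controlled data before placing it in an HTML context.
    return "<h1>Welcome, %s!</h1>" % html.escape(username)

payload = "<script>alert(1)</script>"
print(render_profile_unsafe(payload))  # script tag survives intact
print(render_profile_safe(payload))    # tag is escaped and renders as text
```

The codelab walks through many variants of this pattern, including cases where output escaping alone is not enough.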

The maxim, "given enough eyeballs, all bugs are shallow" is only true if the eyeballs know what to look for. To that end, the security bugs in Gruyere are real bugs — just like those in many other applications. The Gruyere source code is published under a Creative Commons license and is available for use in whitebox hacking exercises or in computer science classes covering security, software engineering or general software development.

To get started, visit http://google-gruyere.appspot.com. An instructor's guide for using the codelab is now available on Google Code University.

Wednesday, April 14, 2010

The Rise of Fake Anti-Virus


For years, we have detected malicious content on the web and helped protect users from it. Vulnerabilities in web browsers and popular plugins have resulted in an increased number of users whose systems can be compromised by attacks known as drive-by downloads. Such attacks do not require any user interaction, and they allow the adversary to execute code on a user’s computer without their knowledge. However, even without any vulnerabilities present, clever social engineering attacks can cause an unsuspecting user to unwittingly install malicious code supplied by an attacker on their computer.

One increasingly prevalent threat is the spread of Fake Anti-Virus (Fake AV) products. This malicious software takes advantage of users’ fear that their computer is vulnerable, as well as their desire to take the proper corrective action. Visiting a malicious or compromised web site — or sometimes even viewing a malicious ad — can produce a screen looking something like the following:

At Google, we have been working to help protect users against Fake AV threats on the web since we first discovered them in March 2007. In addition to protections like adding warnings to browsers and search results, we’re also actively engaged in malware research. We conducted an in-depth analysis of the prevalence of Fake AV over the course of the last 13 months, and the research paper containing our findings, “The Nocebo Effect on the Web: An Analysis of Fake AV distribution” is going to be presented at the Workshop on Large-Scale Exploits and Emergent Threats (LEET) in San Jose, CA on April 27th. While we do not want to spoil any surprises, here are a few previews. Our analysis of 240 million web pages over the 13 months of our study uncovered over 11,000 domains involved in Fake AV distribution — or, roughly 15% of the malware domains we detected on the web during that period.

Also, over the last year, the lifespan of domains distributing Fake AV attacks has decreased significantly:


In the meantime, we recommend only running antivirus and antispyware products from trusted companies. Be sure to use the latest versions of this software, and if the scan detects any suspicious programs or applications, remove them immediately.

Tuesday, March 30, 2010

The chilling effects of malware



In January, we discussed a set of highly sophisticated cyber attacks that originated in China and targeted many corporations around the world. We believe that malware is a general threat to the Internet, but it is especially harmful when it is used to suppress opinions of dissent. In that case, the attacks involved surveillance of email accounts belonging to Chinese human rights activists. Perhaps unsurprisingly, these are not the only examples of malicious software being used for political ends. We have gathered information about a separate cyber threat that was less sophisticated but that nonetheless was employed against another community.

This particular malware broadly targeted Vietnamese computer users around the world. The malware infected the computers of potentially tens of thousands of users who downloaded Vietnamese keyboard language software and possibly other legitimate software that was altered to infect users. While the malware itself was not especially sophisticated, it has nonetheless been used for damaging purposes. These infected machines have been used both to spy on their owners as well as participate in distributed denial of service (DDoS) attacks against blogs containing messages of political dissent. Specifically, these attacks have tried to squelch opposition to bauxite mining efforts in Vietnam, an important and emotionally charged issue in the country.

Since some anti-virus vendors have already introduced signatures to help detect this specific malware, we recommend the following actions, particularly if you believe that you may have been exposed to the malware: run regular anti-virus as well as anti-spyware scans from trusted vendors, and be sure to install all web browser and operating system updates to ensure you’re using only the latest versions. New technology like our suspicious account activity alerts in Gmail should also help detect surveillance efforts. At a larger scale, we feel the international community needs to take cybersecurity seriously to help keep free opinion flowing.

Phishing phree



To help protect you from the wide array of Internet scams you may encounter while searching, we analyze millions of webpages daily for phishing behavior. Each year, we find hundreds of thousands of phishing pages and add them to the list we use to warn users of Firefox, Safari, and Chrome directly via our SafeBrowsing API. How do we find all these phishing pages? We certainly don’t do it by hand!

Instead, we have taught computers to look for certain telltale signs, as we describe in a paper that we recently presented at the 17th Annual Network and Distributed System Security Symposium. In a nutshell, our system looks at many of the same elements of a webpage that you would check to help evaluate whether the page is designed for phishing. If our system determines that a page is being used for phishing, it will automatically produce a warning for all of our users who try to visit that page. This scalable design enables us to review all these potentially “phishy” pages in about a minute each.

What we look for


Our system analyzes a number of webpage features to help make a verdict about whether a site is a phishing site. Starting with a page’s URL, we look to see if there is anything unusual about the host, such as whether the hostname is unusually long or whether the URL uses an IP address to specify the host. We also look to see if the URL contains any phrases like “banking” or “login” that might indicate that the page is trying to steal information.
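The URL checks described above can be sketched as simple feature extractors. This is a hypothetical illustration (the term list and thresholds are made up, not Google's actual values):

```python
import re
from urllib.parse import urlparse

# Illustrative list of phrases that often appear in phishing URLs.
SUSPICIOUS_TERMS = ("banking", "login", "secure", "account")

def url_features(url: str) -> dict:
    """Extract simple phishing signals from a URL."""
    host = urlparse(url).hostname or ""
    return {
        # Unusually long hostnames are a weak phishing signal.
        "long_hostname": len(host) > 30,
        # An IP address in place of a hostname is suspicious.
        "ip_host": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        # Phrases like "banking" or "login" anywhere in the URL.
        "suspicious_terms": any(t in url.lower() for t in SUSPICIOUS_TERMS),
    }

print(url_features("http://192.0.2.7/secure-banking/login.php"))
```

Each feature on its own is weak; it is the combination of many such signals that gives the classifier its accuracy.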

We don’t just look at the URL, though. After all, a perfectly legitimate site could certainly use words like “banking” or “login.” We collect a snapshot of the page’s content to examine it closely for phishing behavior. For example, we check to see if the page has a password field or whether most of the links point to a common phishing target, as both of these characteristics can be a sign of phishing. Additionally, we pick out some of the most characteristic terms that show up on a page (as defined by their TF-IDF scores), and look for terms like “password” or “PIN number,” which also may indicate that the page is intended for phishing.

We also check the page’s hosting information to find out which networks host the page and where the page’s servers are located geographically. If a site purporting to be an American bank runs its servers in a different country and is hosted on a local residential ISP’s network, we have a strong signal that the site is bad.

Finally, we check the page’s PageRank to see whether the page is popular, and we check the spam reputation of the page’s domain. Our research found that almost all phishing pages live on domains that almost exclusively send spam. You can observe this trend in the CCDF graph of spam reputation scores for phishing pages, compared to the graph for other, non-phishing pages.


How we learn to recognize phishing pages

At the core of our automatic system is a classifier built with a machine learning algorithm and trained on a sample of the data our system generates. Coming up with good labels (phishing/not phishing) for this data is tricky because we can’t label each of the millions of pages ourselves. Instead, we use our published phishing page list, largely generated by our classifier, to assign labels for the training data.

You might be wondering if this system is going to lead to situations where the classifier makes a mistake, puts that mistake on our list, and then uses the list to learn to make more mistakes. Fortunately, the chain doesn’t make it that far. Our classifier only makes a relatively small number of mistakes, which we can correct manually when you report them to us. Our learning algorithms can handle a few temporary errors in the training labels, and the overall learning process remains stable.
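As a toy illustration of this training process, here is a minimal logistic-regression classifier fit by gradient descent on binary feature vectors, with one deliberately noisy label in the training set. The features and data are invented for the example; the real system uses a far richer model and feature set:

```python
import math

def train_logreg(X, y, epochs=200, lr=0.5):
    """Train logistic regression by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi  # gradient of the cross-entropy loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b))) >= 0.5

# Features: [ip_host, suspicious_terms, password_field]; label 1 = phishing.
X = [[1,1,1],[1,1,0],[0,1,1],[0,0,0],[0,0,1],[0,0,0],[1,1,1],[0,0,0]]
y = [ 1,      1,      1,      0,      0,      1,      1,      0]
# Index 5 is a noisy label; training remains stable despite it.
w, b = train_logreg(X, y)
print(predict(w, b, [1, 1, 1]))
```

As the post notes, a small fraction of mislabeled training examples does not derail this kind of learner: the gradient steps from the many correct labels dominate.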

How well does this work?

Of the millions of webpages that our scanners analyze for phishing, we successfully identify 9 out of 10 phishing pages. Our classification system only incorrectly flags a non-phishing site as a phishing site about 1 in 10,000 times, which is significantly better than similar systems. In our experience, these “false positive” sites are usually built to distribute spam or may be involved with other suspicious activity. While phishers are constantly changing their strategies, we find that they do not change them enough to reliably escape our system. Our experiments showed that our classification system remained effective for over a month without retraining.

If you are a webmaster and would like more information about how to keep your site from looking like a phishing site, please check out our post on the Webmaster Central Blog. If you find that your site has been added to our phishing page list ("Reported Web Forgery!") by mistake, please report the error to us.

Wednesday, March 24, 2010

Detecting suspicious account activity

(Cross-posted from the Gmail Blog)



A few weeks ago, I got an email, supposedly from a friend stuck in London, asking for some money to help him out. It turned out that the email was sent by a scammer who had hijacked my friend's account. By reading his email, the scammer had figured out my friend's whereabouts and was emailing all of his contacts. Here at Google, we work hard to protect Gmail accounts against this kind of abuse. Today we're introducing a new feature to notify you when we detect suspicious login activity on your account.

You may remember that a while back we launched remote sign out and information about recent account activity to help you understand and manage your account usage. This information is still at the bottom of your inbox. Now, if it looks like something unusual is going on with your account, we’ll also alert you by posting a warning message saying, "Warning: We believe your account was last accessed from…" along with the geographic region that we can best associate with the access.


To determine when to display this message, our automated system matches the relevant IP address, logged per the Gmail privacy policy, to a broad geographical location. While we don't have the capability to determine the specific location from which an account is accessed, a login appearing to come from one country and occurring a few hours after a login from another country may trigger an alert.
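The country-mismatch heuristic described here can be sketched roughly as follows. This is a hypothetical simplification (the actual detection logic and time window are not public):

```python
from datetime import datetime, timedelta

def suspicious(logins, window=timedelta(hours=6)):
    """Flag a login from a different country shortly after the previous one."""
    for (t1, country1), (t2, country2) in zip(logins, logins[1:]):
        if country1 != country2 and (t2 - t1) <= window:
            return True
    return False

# Hypothetical login records: (timestamp, country derived from IP geolocation).
logins = [
    (datetime(2010, 3, 24, 9, 0), "US"),
    (datetime(2010, 3, 24, 12, 30), "RO"),  # different country 3.5 hours later
]
print(suspicious(logins))  # True: likely impossible travel
```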

By clicking on the "Details" link next to the message, you'll see the last account activity window that you're used to, along with the most recent access points.


If you think your account has been compromised, you can change your password from the same window. Or, if you know it was legitimate access (e.g. you were traveling, your husband/wife who accesses the account was also traveling, etc.), you can click "Dismiss" to remove the message.

Keep in mind that these notifications are meant to alert you of suspicious activity but are not a replacement for account security best practices. If you'd like more information on account security, read these tips on keeping your information secure or visit the Google Online Security Blog.

Finally, we know that security is also a top priority for businesses and schools, and we look forward to offering this feature to Google Apps customers once we have gathered and incorporated their feedback.

Friday, March 19, 2010

Meet skipfish, our automated web security scanner



The safety of the Internet is of paramount importance to Google, and helping web developers build secure, reliable web applications is an important part of the equation. To advance this goal, we have released projects such as ratproxy, a passive security assessment tool, and the Browser Security Handbook, a comprehensive guide for web developers. We have also worked with the community to improve the security of third-party browsers.

Today, we are happy to announce the availability of skipfish, our free, open source, fully automated, active web application security reconnaissance tool. We think this project is interesting for a few reasons:
  • High speed: written in pure C, with highly optimized HTTP handling and a minimal CPU footprint, the tool easily achieves 2000 requests per second with responsive targets.

  • Ease of use: the tool features heuristics to support a variety of quirky web frameworks and mixed-technology sites, with automatic learning capabilities, on-the-fly wordlist creation, and form autocompletion.

  • Cutting-edge security logic: we incorporated high quality, low false positive, differential security checks capable of spotting a range of subtle flaws, including blind injection vectors.
As with ratproxy, we feel that skipfish will be a valuable contribution to the information security community, making security assessments significantly more accessible and easier to execute.

To download the scanner, please visit this page; detailed project documentation is available here.

Wednesday, March 3, 2010

Federal Support for Federated Login



Last November, we discussed the progress that account login systems operating via standards-based identity technologies like OpenID have achieved across the web. As more websites seek to interact with one another to provide a richer experience for users, we're seeing even more interest in finding a secure way to enable that kind of information sharing while avoiding the hassle for users of creating new accounts and passwords.

Excitement for technology like OpenID is not limited to the private sector. President Obama's open government memorandum last year spurred the creation of a pilot initiative in September to enable U.S. citizens to more easily sign in to government-run websites. Google joined a number of other companies to explore ways to answer that call.

Now, several months later, some interesting things are taking shape. The Open Identity Exchange (OIX), a new organization and certification body focused on online identity management, today named Google among the first identity providers approved by the U.S. Government as meeting federal standards for identity assurance. This means that Google's identity, security, and privacy specifications have been certified, so a user can register and log in at U.S. government websites using their Google account login credentials. The National Institutes of Health (NIH) is the first government website ready to accept such credentials, and we look forward to other websites opening up to certified identity providers so that users can interact with these resources more easily and securely.

Our hope is that the work of the OIX and other groups will continue to grow and help facilitate more open government participation, as well as improve security on the Internet by reducing password use across websites.