Saturday, December 7, 2013
Late on December 3rd, we became aware of unauthorized digital certificates for several Google domains. We investigated immediately and found that the certificates were issued by an intermediate certificate authority (CA) chaining back to ANSSI, a French certificate authority. Intermediate CA certificates carry the full authority of the CA, so anyone who holds one can use it to create a certificate for any website they wish to impersonate.
In response, we updated Chrome’s certificate revocation metadata immediately to block that intermediate CA, and then alerted ANSSI and other browser vendors. Our actions addressed the immediate problem for our users.
ANSSI has found that the intermediate CA certificate was used in a commercial device, on a private network, to inspect encrypted traffic with the knowledge of the users on that network. This was a violation of their procedures and they have asked for the certificate in question to be revoked by browsers. We updated Chrome’s revocation metadata again to implement this.
This incident represents a serious breach and demonstrates why Certificate Transparency, which we began developing in 2011 and have been advocating for ever since, is so critical.
Since our priority is the security and privacy of our users, we are carefully considering what additional actions may be necessary.
[Update December 12: We have decided that the ANSSI certificate authority will be limited to the following top-level domains in a future version of Chrome:
.pm (Saint-Pierre et Miquelon)
.bl (Saint Barthélemy)
.mf (Saint Martin)
.wf (Wallis et Futuna)
.pf (Polynésie française)
.nc (Nouvelle Calédonie)
.tf (Terres australes et antarctiques françaises)]
Friday, December 6, 2013
Since 2004, industry groups and standards bodies have been working on developing and deploying email authentication standards to prevent email impersonation. At its core, email authentication standardizes how an email’s sending and receiving domains can exchange information to authenticate that the email came from the rightful sender.
Now, nearly a decade later, adoption of these standards is widespread across the industry, dramatically reducing spammers’ ability to impersonate domains that users trust, and making email phishing less effective. 91.4% of non-spam emails sent to Gmail users come from authenticated senders, which helps Gmail filter billions of impersonating email messages a year from entering our users’ inboxes.
More specifically, that 91.4% of non-spam email sent to Gmail users comes from senders that have adopted one or both of the following email authentication standards: DKIM (DomainKeys Identified Mail) and SPF (Sender Policy Framework).
Here are some statistics that illustrate the scale of what we’re seeing:
- 76.9% of the email we receive is signed according to the DKIM standard. Over half a million domains (weekly active) have adopted this standard.
- 89.1% of incoming email we receive comes from SMTP servers that are authenticated using the SPF standard. Over 3.5 million domains (weekly active) have adopted the SPF standard.
- 74.7% of incoming email we receive is protected by both the DKIM and SPF standards.
- Over 80,000 domains have deployed domain-wide policies that allow us to reject hundreds of millions of unauthenticated emails every week via the DMARC standard.
As more domains implement authentication, phishers are forced to target domains that are not yet protected. If you own a domain that sends email, the most effective action you can take to help us and prevent spammers from impersonating your domain is to set up DKIM, SPF and DMARC. Check our help pages on DKIM, SPF, DMARC to get started.
When using DKIM, please make sure that your public key is at least 1024 bits, so that attackers can’t crack it and impersonate your domain. The use of weak cryptographic keys -- ones that are 512 bits or less -- is one of the major sources of DKIM configuration errors, accounting for 21% of them.
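As a rough sanity check, the approximate size of a DKIM key can be estimated from the length of the base64 `p=` value in the domain's DKIM DNS record. The sketch below is a heuristic only (the 34-byte DER overhead figure is an approximation for common RSA key sizes, not a real ASN.1 parser), so treat borderline results with care:

```python
import base64

def estimate_dkim_key_bits(p_value: str) -> int:
    """Roughly estimate the RSA modulus size of a DKIM 'p=' public key.

    The p= value is a base64-encoded DER SubjectPublicKeyInfo; its length
    is dominated by the modulus, with roughly 34 bytes of ASN.1 and
    exponent overhead for common key sizes (heuristic, not a DER parser).
    """
    der = base64.b64decode(p_value)
    return max(0, len(der) - 34) * 8

def dkim_key_is_weak(p_value: str) -> bool:
    # A 1024-bit RSA SubjectPublicKeyInfo is about 162 DER bytes,
    # a 512-bit one about 94 bytes -- so this flags clearly weak keys.
    return estimate_dkim_key_bits(p_value) < 1000
```

For example, a 162-byte DER blob estimates to roughly 1024 bits, while a 94-byte blob is flagged as weak.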
If you own domains that are never used to send email, you can still help prevent abuse. All you need to do is create a DMARC policy that describes your domain as a non-sender. Adding a “reject” policy for these domains ensures that no emails impersonating you will ever reach Gmail users’ inboxes.
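As an illustration, a minimal check of such a "reject" record might look like the sketch below. The domain in the comment is a placeholder and the tag parsing is deliberately simplified; consult the DMARC documentation for the authoritative record syntax:

```python
def parses_as_reject_policy(txt_record: str) -> bool:
    """Check whether a DMARC TXT record declares a 'reject' policy.

    Simplified tag parsing for illustration; the full grammar is
    defined by the DMARC specification.
    """
    tags = {}
    for part in txt_record.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip().lower()] = value.strip().lower()
    return tags.get("v") == "dmarc1" and tags.get("p") == "reject"

# A non-sending domain would publish something like this
# at _dmarc.<yourdomain> (hypothetical record):
record = "v=DMARC1; p=reject"
```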
While the fight against spammers is far from over, it’s nevertheless encouraging to see that community efforts are paying off. Gmail has been an early adopter of these standards and we remain a strong advocate of email authentication. We hope that publishing these results will inspire more domain owners to adopt the standards that protect them from impersonation and help keep email inboxes safe and clean.
Monday, November 18, 2013
About a month ago, we kicked off our Patch Reward Program. The goal is very simple: to recognize and reward proactive security improvements to third-party open-source projects that are vital to the health of the entire Internet.
We started with a fairly conservative scope, but said we would expand the program soon. Today, we are adding the following to the list of projects that are eligible for rewards:
- All the open-source components of Android: Android Open Source Project
- Widely used web servers: Apache httpd, lighttpd, nginx
- Popular mail delivery services: Sendmail, Postfix, Exim, Dovecot
- Virtual private networking: OpenVPN
- Network time: University of Delaware NTPD
- Additional core libraries: Mozilla NSS, libxml2
- Toolchain security improvements for GCC, binutils, and llvm
For more information about eligibility, reward amounts, and the submission process, please visit this page. Happy patching!
We take the security and privacy of our users very seriously and, as we noted in May, Google has been working to upgrade all its SSL certificates to 2048-bit RSA or better by the end of 2013. Coming in ahead of schedule, we have completed this process, which will allow the industry to start removing trust from weaker, 1024-bit keys next year.
Thanks to our use of forward secrecy, the confidentiality of SSL connections to Google services from modern browsers was never dependent on our 1024-bit RSA keys. But the deprecation of 1024-bit RSA is an industry-wide effort that we’re happy to support, particularly in light of concerns about overbroad government surveillance and other forms of unwanted intrusion.
The hardware security module (HSM) that contained our old, 1024-bit, intermediate certificate has served us well. Its final duty, after all outstanding certificates were revoked, was to be carefully destroyed.
With the demolition of the HSM and revocation of the old certificates, Google Internet Authority G2 will issue 2048-bit certificates for Google web sites and properties going forward.
Thursday, November 14, 2013
SSL/TLS combines a number of choices about cryptographic primitives, including the choice of cipher, into a collection that it calls a “cipher suite.” The list of cipher suites is maintained by the Internet Assigned Numbers Authority (IANA).
Because of recent research, this area of TLS is currently in flux as older, flawed cipher suites are deprecated and newer replacements are introduced into service. In this post we’ll be discussing known flaws in some of them.
Cipher suites are written like this: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, which breaks down into the following parts:
- TLS: the protocol
- ECDHE: the key exchange mechanism (ephemeral elliptic-curve Diffie-Hellman)
- RSA: the authentication mechanism
- AES_128_CBC: the cipher (AES with 128-bit keys in CBC mode)
- SHA: the message authentication code (HMAC-SHA1)
For this discussion, only the ‘cipher’ part of the cipher suite is pertinent.
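For illustration, the naming convention can be pulled apart mechanically. This sketch assumes the common `TLS_<kx>_<auth>_WITH_<cipher>_<mac>` shape and will not handle every registered suite:

```python
def split_cipher_suite(name: str) -> dict:
    """Split an IANA-style cipher suite name into its parts.

    Simplified parser for suites of the shape
    TLS_<kx>_<auth>_WITH_<cipher>_<mac>; many registered suites
    deviate from this pattern.
    """
    protocol, rest = name.split("_", 1)
    kx_auth, cipher_mac = rest.split("_WITH_")
    kx, auth = kx_auth.split("_", 1)
    cipher, _, mac = cipher_mac.rpartition("_")
    return {"protocol": protocol, "key_exchange": kx,
            "authentication": auth, "cipher": cipher, "mac": mac}
```

Applied to TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, this yields ECDHE key exchange, RSA authentication, the AES_128_CBC cipher, and SHA as the MAC.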
RC4 is a very common stream cipher but is showing its 26-year age.
Biases in the RC4 keystream have been known for over a decade and were used to attack WEP, the original security standard for Wi-Fi. HTTPS was believed to be substantially unaffected by these results until Paterson et al. compiled and extended them, demonstrating that belief to be incorrect.
The best-known attack against RC4 as used in HTTPS involves causing a browser to transmit many HTTP requests -- each with the same cookie -- and exploiting known biases in RC4 to build an increasingly precise probability distribution for each byte in the cookie. However, the attack needs to see on the order of 10 billion copies of the cookie in order to make a good guess, which involves the browser sending roughly 7 TB of data. Even in ideal situations, this requires nearly three months to complete.
This attack cannot be mitigated without replacing RC4.
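The published figures above can be cross-checked with some back-of-the-envelope arithmetic. The per-request size and request rate below are assumptions chosen to be consistent with the ~10 billion requests, ~7 TB, and ~3 months cited:

```python
# Rough cost of the RC4 cookie-recovery attack described above.
requests_needed = 10 * 10**9        # ~10 billion copies of the cookie
bytes_per_request = 700             # assumed average request size (bytes)
total_bytes = requests_needed * bytes_per_request   # 7e12, i.e. ~7 TB

# At an assumed ~1,300 requests per second, the attack runs for months:
seconds = requests_needed / 1300
days = seconds / 86400              # roughly three months
```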
AES-CBC has a couple of problems, both of which are problems with the way that TLS uses CBC (Cipher Block Chaining) mode, and not problems with AES.
The first is called BEAST and was demonstrated by Duong and Rizzo 2011 (although the idea was originally described by Rogaway in 1995). It exploits a flaw in the way that TLS prior to version 1.1 generated CBC initialization vectors.
The attack requires precise control over the TLS connection which is not generally possible from a vanilla browser; the demo used a Java applet to obtain this control. The version of the WebSockets protocol used at the time may have allowed the necessary degree of control, but that had already been replaced by the time that the issue was demonstrated.
However, browsers are complex and evolving pieces of software, and the necessary degree of control is certainly not a comfortable barrier to exploitation. When it can be mounted, the exploit is very practical: it requires the attacker to have access to the network near the victim's computer, but otherwise completes quickly and deterministically.
The issue is fixed either by using TLS >= 1.1, or by a trick called 1/n-1 record splitting, which all major browsers have now implemented. However, many older installations may still exist with Java enabled and would thus be vulnerable to this attack.
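The record-splitting trick itself is simple; here is a sketch of the idea (the surrounding TLS record-layer machinery is omitted):

```python
def one_n_minus_one_split(plaintext: bytes) -> list:
    """Sketch of 1/n-1 record splitting.

    Send the first byte of each application-data write in its own CBC
    record, and the remaining n-1 bytes in a second record. The MAC and
    ciphertext of the first record are unpredictable to the attacker,
    so they effectively randomize the IV seen by the second record,
    defeating BEAST-style chosen-plaintext attacks on implicit CBC IVs.
    """
    if len(plaintext) <= 1:
        return [plaintext]
    return [plaintext[:1], plaintext[1:]]
```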
The second issue is called Lucky13. This attack uses the fact that TLS servers take a slightly different amount of time to process different types of invalid TLS records. This attack is the first one that we have discussed that requires the use of timing side-channels and is thus probabilistic.
The attack needs nearly 10,000 TLS connections per byte of plaintext decoded and the attacker needs to be situated close to the TLS server in order to reduce the amount of timing noise added by the network. Under absolutely ideal situations, an attacker could extract a short (16 byte) cookie from a victim's browser in around 10 minutes. With optimistic but plausible parameters, the attack could work in an hour.
This attack can only be fixed at the server by making the decoding of all CBC records take a constant amount of time. It’s not plausible for a browser to detect whether a server has fixed this issue before using AES-CBC.
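Rough arithmetic again ties the published numbers together; the connection rate here is an assumption consistent with the ten-minute figure:

```python
# Rough scale of the Lucky 13 attack as described above.
connections_per_byte = 10_000
cookie_bytes = 16
connections = connections_per_byte * cookie_bytes   # 160,000 connections

rate = 267                           # assumed connections/second on a fast LAN
minutes = connections / rate / 60    # about ten minutes under ideal conditions
```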
There are no known breaks of AES-GCM, and it is one of the ciphers that TLS servers are urged to support. However, it suffers from a couple of practical issues:
The first is that it’s very challenging to implement AES-GCM in software in a way that is both fast and secure. Some CPUs implement AES directly in hardware (Intel’s AES-NI instructions are the most prominent example), which allows for implementations that are both secure and very fast, but such hardware support is far from ubiquitous.
The second nit with AES-GCM is that, as integrated in TLS, implementations are free to use a random nonce value. However, the size of this nonce (8 bytes) is too small to safely support using this mode. Implementations that do so are at risk of a catastrophic nonce reuse after sending on the order of a terabyte of data on a single connection. This issue can be resolved by using a counter for the nonce but using random nonces is the most common practice at this time.
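The “order of a terabyte” risk follows from the birthday bound on 64-bit random nonces, which can be sketched as:

```python
import math

def nonce_collision_probability(records: int, nonce_bits: int = 64) -> float:
    """Birthday-bound probability that at least two random nonces
    collide after sending `records` records (standard approximation
    1 - exp(-n(n-1) / 2N) for n draws from a space of size N)."""
    n = float(records)
    space = 2.0 ** nonce_bits
    return 1.0 - math.exp(-n * (n - 1.0) / (2.0 * space))

# With 64-bit random nonces, the collision probability is already
# about 39% after 2**32 records; a counter never repeats within a
# connection, which is why counters are the safe choice here.
```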
When both parties to a TLS connection support hardware AES-GCM and use counters, this cipher is essentially optimal.
ChaCha20-Poly1305 (technically an AEAD, not a cipher, as is AES-GCM) also has no known breaks and is designed to facilitate fast and secure software implementations. For situations where hardware AES-GCM support is not available, it provides a fast alternative. Even when AES-GCM hardware is available, ChaCha20-Poly1305 is currently within a factor of two in speed.
While we recommend the world move to support TLS 1.2, AES-GCM and ChaCha20-Poly1305 (as Chrome and Google are doing) we have to deal with a large fraction of the Internet that moves more slowly than we would like. While RC4 is fundamentally flawed and must be replaced, the attacks against it are very costly. The attacks against CBC mode, however, are much more practical and only one can be conclusively addressed on the client side. It is not clear which is best when nothing better is available.
TLS 1.2 is needed in order to use AES-GCM and ChaCha20-Poly1305. TLS 1.2 deployment is hampered by older servers that fail to process valid TLS messages and thus break version negotiation. It also remains to be seen whether firewalls and other network intermediaries are erroneously processing TLS connections that pass through them, breaking TLS 1.2. Chrome 32 includes an experiment that tests for this issue. If TLS 1.2 is found to be viable on the modern Internet, remedial measures can be taken to repair the TLS version negotiation without breaking the previously mentioned, flawed TLS servers.
Thursday, October 31, 2013
Bad guys trick you into installing and running malicious software by bundling it with something you might want, like a free screensaver, a video plugin or—ironically—a supposed security update. These malicious programs disguise themselves so you won’t know they’re there, and they may change your homepage or inject ads into the sites you browse. Worse, they block your ability to change your settings back and make themselves hard to uninstall, keeping you trapped in an undesired state.
We're taking steps to help, including adding a "reset browser settings" button in the last Chrome update, which lets you easily return your Chrome to a factory-fresh state. You can find this in the “Advanced Settings” section of Chrome settings.
In the current Canary build of Chrome, we’ll automatically block downloads of malware that we detect. If you see this message in the download tray at the bottom of your screen, you can click “Dismiss” knowing Chrome is working to keep you safe.
This is in addition to the 10,000 new websites we flag per day with Safe Browsing, which also detects and blocks malicious downloads, to keep more than 1 billion web users safe across multiple browsers that use this technology. Keeping you secure is a top priority, which is why we’re working on additional means to stop malicious software installs as well.
Update: 11/1/13: Updated to mention that Safe Browsing already detects and blocks malware.
Linus Upson, Vice President
Friday, October 25, 2013
For over a decade, CAPTCHAs have used visual puzzles to help webmasters keep automated software from engaging in abusive activities on their sites. However, over the last few years advances in artificial intelligence have reduced the gap between human and machine capabilities in deciphering distorted text. Today, a successful CAPTCHA solution needs to go beyond just relying on text distortions to separate man from machine.
The reCAPTCHA team has been performing extensive research and making steady improvements to learn how to better protect users from attackers. As a result, reCAPTCHA is now more adaptive and better-equipped to distinguish legitimate users from automated software.
The updated system uses advanced risk analysis techniques, actively considering the user’s entire engagement with the CAPTCHA—before, during and after they interact with it. That means that today the distorted letters serve less as a test of humanity and more as a medium of engagement to elicit a broad range of cues that characterize humans and bots.
As part of this, we’ve recently released an update that creates different classes of CAPTCHAs for different kinds of users. This multi-faceted approach allows us to determine whether a potential user is actually a human or not, and serve our legitimate users CAPTCHAs that most of them will find easy to solve. Bots, on the other hand, will see CAPTCHAs that are considerably more difficult and designed to stop them from getting through.
Humans find numeric CAPTCHAs significantly easier to solve than those containing arbitrary text and achieve nearly perfect pass rates on them. So with our new system, you’ll encounter CAPTCHAs that are a breeze to solve. Bots, however, won’t even see them. While we’ve already made significant advancements to reCAPTCHA technology, we’ll have even more to report on in the next few months, so stay tuned.
Wednesday, October 9, 2013
We all benefit from the amazing volunteer work done by the open source community. That’s why we keep asking ourselves how to take the model pioneered with our Vulnerability Reward Program - and employ it to improve the security of key third-party software critical to the health of the entire Internet.
We thought about simply kicking off an OSS bug-hunting program, but this approach can easily backfire. In addition to valid reports, bug bounties invite a significant volume of spurious traffic - enough to completely overwhelm a small community of volunteers. On top of this, fixing a problem often requires more effort than finding it.
So we decided to try something new: provide financial incentives for down-to-earth, proactive improvements that go beyond merely fixing a known security bug. Whether you want to switch to a more secure allocator, to add privilege separation, to clean up a bunch of sketchy calls to strcat(), or even just to enable ASLR - we want to help!
We intend to roll out the program gradually, based on the quality of the received submissions and the feedback from the developer community. For the initial run, we decided to limit the scope to the following projects:
- Core infrastructure network services: OpenSSH, BIND, ISC DHCP
- Core infrastructure image parsers: libjpeg, libjpeg-turbo, libpng, giflib
- Open-source foundations of Google Chrome: Chromium, Blink
- Other high-impact libraries: OpenSSL, zlib
- Security-critical, commonly used components of the Linux kernel (including KVM)
- Widely used web servers: Apache httpd, lighttpd, nginx
- Popular SMTP services: Sendmail, Postfix, Exim
- Toolchain security improvements for GCC, binutils, and llvm
- Virtual private networking: OpenVPN
Please submit your patches directly to the maintainers of the individual projects. Once your patch is accepted and merged into the repository, please send all the relevant details to email@example.com. If we think that the submission has a demonstrable, positive impact on the security of the project, you will qualify for a reward ranging from $500 to $3,133.7.
Before participating, please read the official rules posted on this page; the document provides additional information about eligibility, rewards, and other important stuff.
Monday, August 12, 2013
One of Google’s core security principles is to engage the community, to better protect our users and build relationships with security researchers. We had this principle in mind as we launched our Chromium and Google Web Vulnerability Reward Programs. We didn’t know what to expect, but in the three years since launch, we’ve rewarded (and fixed!) more than 2,000 security bug reports and also received recognition for setting leading standards for response time.
The collective creativity of the wider security community has surpassed all expectations, and their expertise has helped make Chrome even safer for hundreds of millions of users around the world. Today we’re delighted to announce we’ve now paid out in excess of $2,000,000 (USD) across Google’s security reward initiatives. Broken down, this total includes more than $1,000,000 (USD) for the Chromium VRP / Pwnium rewards, and in excess of $1,000,000 (USD) for the Google Web VRP rewards.
Today, the Chromium program is raising reward levels significantly. In a nutshell, bugs previously rewarded at the $1,000 level will now be considered for reward at up to $5,000. In many cases, this will be a 5x increase in reward level! We’ll issue higher rewards for bugs we believe present a more significant threat to user safety, and when the researcher provides an accurate analysis of exploitability and severity. We will continue to pay previously announced bonuses on top, such as those for providing a patch or finding an issue in a critical piece of open source software.
Interested Chromium researchers should familiarize themselves with our documentation on how to report a security bug well and how we determine higher reward eligibility.
These Chromium reward level increases follow on from similar increases under the Google Web program. With all these new levels, we’re excited to march towards new milestones and a more secure web.
Tuesday, June 25, 2013
[Cross-posted from the Official Google Blog]
Two of the biggest threats online are malicious software (known as malware) that can take control of your computer, and phishing scams that try to trick you into sharing passwords or other private information.
So in 2006 we started a Safe Browsing program to find and flag suspect websites. This means that when you are surfing the web, we can now warn you when a site is unsafe. We're currently flagging up to 10,000 sites a day—and because we share this technology with other browsers there are about 1 billion users we can help keep safe.
But we're always looking for new ways to protect users' security. So today we're launching a new section on our Transparency Report that will shed more light on the sources of malware and phishing attacks. You can now learn how many people see Safe Browsing warnings each week, where malicious sites are hosted around the world, how quickly websites become reinfected after their owners clean malware from their sites, and other tidbits we’ve surfaced.
Sharing this information also aligns well with our Transparency Report, which already gives information about government requests for user data, government requests to remove content, and current disruptions to our services.
To learn more, explore the new Safe Browsing information on this page. Webmasters and network administrators can find recommendations for dealing with malware infections, including resources like Google Webmaster Tools and Safe Browsing Alerts for Network Administrators.
Wednesday, June 12, 2013
[Update June 13: This post is available in Farsi on the Google Persian Blog.]
For almost three weeks, we have detected and disrupted multiple email-based phishing campaigns aimed at compromising the accounts owned by tens of thousands of Iranian users. These campaigns, which originate from within Iran, represent a significant jump in the overall volume of phishing activity in the region. The timing and targeting of the campaigns suggest that the attacks are politically motivated in connection with the Iranian presidential election on Friday.
Our Chrome browser previously helped detect what appears to be the same group using SSL certificates to conduct attacks that targeted users within Iran. In this case, the phishing technique we detected is more routine: users receive an email containing a link to a web page that purports to provide a way to perform account maintenance. If the user clicks the link, they see a fake Google sign-in page that will steal their username and password.
Protecting our users’ accounts is one of our top priorities, so we notify targets of state-sponsored attacks and other suspicious activity, and we take other appropriate actions to limit the impact of these attacks on our users. Especially if you are in Iran, we encourage you to take extra steps to protect your account. Watching out for phishing, using a modern browser like Chrome and enabling 2-step verification can make you significantly more secure against these and many other types of attacks. Also, before typing your Google password, always verify that the URL in the address bar of your browser begins with https://accounts.google.com/. If the website's address does not match this text, please don’t enter your Google password.
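A raw string-prefix comparison is easy to get wrong; a more robust version of the check above compares the parsed scheme and exact hostname, as in this sketch:

```python
from urllib.parse import urlsplit

def looks_like_google_signin(url: str) -> bool:
    """Check the scheme and the exact hostname rather than a raw
    string prefix, so that lookalike hosts such as
    accounts.google.com.evil.example are rejected."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname == "accounts.google.com"
```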
Thursday, June 6, 2013
Our vulnerability reward programs have been very successful in helping us fix more bugs and better protect our users, while also strengthening our relationships with security researchers. Since introducing our reward program for web properties in November 2010, we’ve received over 1,500 qualifying vulnerability reports that span across Google’s services, as well as software written by companies we have acquired. We’ve paid $828,000 to more than 250 individuals, some of whom have doubled their total by donating their rewards to charity. For example, one of our bug finders decided to support a school project in East Africa.
In recognition of the difficulty involved in finding bugs in our most critical applications, we’re once again rolling out updated rules and significant reward increases for another group of bug categories:
- Cross-site scripting (XSS) bugs on https://accounts.google.com now receive a reward of $7,500 (previously $3,133.7). Rewards for XSS bugs in other highly sensitive services such as Gmail and Google Wallet have been bumped up to $5,000 (previously $1,337), with normal Google properties increasing to $3,133.70 (previously $500).
- The top reward for significant authentication bypasses / information leaks is now $7,500 (previously $5,000).
Wednesday, May 29, 2013
We recently discovered that attackers are actively targeting a previously unknown and unpatched vulnerability in software belonging to another company. This isn’t an isolated incident -- on a semi-regular basis, Google security researchers uncover real-world exploitation of publicly unknown (“zero-day”) vulnerabilities. We always report these cases to the affected vendor immediately, and we work closely with them to drive the issue to resolution. Over the years, we’ve reported dozens of actively exploited zero-day vulnerabilities to affected vendors, including XML parsing vulnerabilities, universal cross-site scripting bugs, and targeted web application attacks.
Often, we find that zero-day vulnerabilities are used to target a limited subset of people. In many cases, this targeting actually makes the attack more serious than a broader attack, and more urgent to resolve quickly. Political activists are frequent targets, and the consequences of being compromised can have real safety implications in parts of the world.
Our standing recommendation is that companies should fix critical vulnerabilities within 60 days -- or, if a fix is not possible, they should notify the public about the risk and offer workarounds. We encourage researchers to publish their findings if reported issues will take longer to patch. Based on our experience, however, we believe that more urgent action -- within 7 days -- is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.
Seven days is an aggressive timeline and may be too short for some vendors to update their products, but it should be enough time to publish advice about possible mitigations, such as temporarily disabling a service, restricting access, or contacting the vendor for more information. As a result, after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves. By holding ourselves to the same standard, we hope to improve both the state of web security and the coordination of vulnerability management.
Thursday, May 23, 2013
Protecting the security and privacy of our users is one of our most important tasks at Google, which is why we utilize encryption on almost all connections made to Google.
This encryption needs to be updated at times to make it even stronger, so this year our SSL services will undergo a series of certificate upgrades—specifically, all of our SSL certificates will be upgraded to 2048-bit keys by the end of 2013. We will begin switching to the new 2048-bit certificates on August 1st, to ensure adequate time for a careful rollout before the end of the year. We’re also going to change the root certificate that signs all of our SSL certificates because it has a 1024-bit key.
Most client software won’t have any problems with either of these changes, but we know that some configurations will require some extra steps to avoid complications. This is more often true of client software embedded in devices such as certain types of phones, printers, set-top boxes, gaming consoles, and cameras.
For a smooth upgrade, client software that makes SSL connections to Google (e.g. HTTPS) must:
- Perform normal validation of the certificate chain;
- Include a properly extensive set of root certificates. Our FAQ contains an example set that should be sufficient for connecting to Google. (Note: the contents of this list may change over time, so clients should have a way to update themselves as changes occur);
- Support Subject Alternative Names (SANs).
On the flip side, here are some examples of improper validation practices that could very well lead to the inability of client software to connect to Google using SSL after the upgrade:
- Matching the leaf certificate exactly (e.g. by hashing it)
- Matching any other certificate (e.g. Root or Intermediate signing certificate) exactly
- Hard-coding the expected Root certificate, especially in firmware. This is sometimes done based on assumptions like the following:
  - The Root Certificate of our chain will not change on short notice.
  - Google will always use Thawte as its Root CA.
  - Google will always use Equifax as its Root CA.
  - Google will always use one of a small number of Root CAs.
  - The certificate will always contain exactly the expected hostname in the Common Name field and therefore clients do not need to worry about SANs.
  - The certificate will always contain exactly the expected hostname in a SAN and therefore clients don't need to worry about wildcards.
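As a sketch of the "proper validation" side using Python's standard library (one possible client stack; embedded clients and other languages will differ):

```python
import socket
import ssl

def make_validating_context() -> ssl.SSLContext:
    """A context that performs normal chain validation against the
    platform root store and SAN-aware hostname checking -- the
    behavior the upgrade notes above ask of client software."""
    ctx = ssl.create_default_context()   # loads the platform root certificates
    ctx.check_hostname = True            # matches Subject Alternative Names
    ctx.verify_mode = ssl.CERT_REQUIRED  # rejects unverifiable chains
    return ctx

def connect(host: str, port: int = 443) -> ssl.SSLSocket:
    # No certificate pinning or exact-match tricks: validation is
    # delegated entirely to the chain and hostname checks above.
    sock = socket.create_connection((host, port))
    return make_validating_context().wrap_socket(sock, server_hostname=host)
```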
Friday, May 10, 2013
This January, Google and SyScan announced a secure coding competition open to students from all over the world. While Google’s Summer of Code and Code-in encourage students to contribute to open source projects, Hardcode was a call for students who wanted to showcase their skills both in software development and security. Given the scope of today’s online threats, we think it’s incredibly important to practice secure coding habits early on. Hundreds of students from 25 countries and across five continents signed up to receive updates and information about the competition, and over 100 teams participated.
During the preliminary online round, teams built applications on Google App Engine that were judged for both functionality and security. Five teams were then selected to participate in the final round at the SyScan 2013 security conference in Singapore, where they had to do the following: fix security bugs from the preliminary round, collaborate to develop an API standard to allow their applications to interoperate, implement the API, and finally, try to hack each other’s applications. To add to the challenge, many of the students balanced the competition with all of their school commitments.
We’re extremely impressed with the caliber of the contestants’ work. Everyone had a lot of fun, and we think these students have a bright future ahead of them. We are pleased to announce the final results of the 2013 Hardcode Competition:
1st Place: Team 0xC0DEBA5E
Vienna University of Technology, Austria (SGD $20,000)
Daniel Marth (http://proggen.org/)
Lukas Pfeifhofer (https://www.devlabs.pro/)
2nd Place: Team Gridlock
Loyola School, Jamshedpur, India (SGD $15,000)
Aviral Dasgupta (http://www.aviraldg.com/)
3rd Place: Team CeciliaSec
University of California, Santa Barbara, California, USA (SGD $10,000)
Runner-up: Team AppDaptor
The Hong Kong Polytechnic University, Hong Kong (SGD $5,000)
Lau Chun Wai (http://www.cwlau.com/)
Runner-up: Team DesiCoders
Birla Institute of Technology & Science, Pilani, India (SGD $5,000)
Vishesh Singhal (http://visheshsinghal.blogspot.com)
Honorable Mention: Team Saviors of Middle Earth (withdrew due to school commitments)
Walt Whitman High School, Maryland, USA
Marc Rosen (https://github.com/maz)
A big congratulations to this very talented group of students!
Wednesday, April 17, 2013
If you use Chrome, you shouldn’t have to work hard to know what Chrome extensions you have installed and enabled. That’s why last December we announced that Chrome (version 25 and beyond) would disable silent extension installation by default. In addition to protecting users from unauthorized installations, these measures resulted in noticeable performance improvements in Chrome and improved user experience.
To further safeguard you while browsing the web, we recently added new measures to protect you and your computer. These measures will identify software that violates Chrome’s standard mechanisms for deploying extensions, flagging such binaries as malware. Within a week, you will start seeing Safe Browsing malicious download warnings when attempting to download malware identified by these criteria.
This kind of malware commonly tries to get around silent installation blockers by misusing Chrome’s central management settings, which are intended to be used to configure instances of Chrome internally within an organization. In doing so, the installed extensions are enabled by default and cannot be uninstalled or disabled by the user from within Chrome. Other variants include binaries that directly manipulate Chrome preferences in order to silently install and enable extensions bundled with these binaries. Our recent measures expand our capabilities to detect and block these types of malware.
Application developers should adhere to Chrome’s standard mechanisms for extension installation, which include the Chrome Web Store, inline installation, and the other deployment options described in the extensions development documentation.
Tuesday, March 19, 2013
We launched Google Public DNS three years ago to help make the Internet faster and more secure. Today, we are taking a major step towards this security goal: we now fully support DNSSEC (Domain Name System Security Extensions) validation on our Google Public DNS resolvers. Previously, we accepted and forwarded DNSSEC-formatted messages but did not perform validation. With this new security feature, we can better protect people from DNS-based attacks and make DNS more secure overall by identifying and rejecting invalid responses from DNSSEC-protected domains.
DNS translates human-readable domain names into IP addresses so that they are accessible by computers. Despite its critical role in Internet applications, the lack of security protection for DNS up to this point has meant that a significant portion of today’s Internet attacks target the name resolution process, attempting to return the IP addresses of malicious websites in answer to DNS queries. Probably the most common DNS attack is DNS cache poisoning, which tries to “pollute” the cache of DNS resolvers (such as Google Public DNS or those provided by most ISPs) by injecting spoofed responses to upstream DNS queries.
To counter cache poisoning attacks, resolvers must be able to verify the authenticity of the response. DNSSEC solves the problem by authenticating DNS responses using digital signatures and public key cryptography. Each DNS zone maintains a set of private/public key pairs, and for each DNS record, a unique digital signature is generated using the private key. The corresponding public key is then authenticated via a chain of trust by keys of upper-level zones. DNSSEC effectively prevents response tampering because, in practice, signatures are almost impossible to forge without access to the private keys, and resolvers reject any response whose signature does not verify.
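The chain-of-trust walk can be illustrated with a toy sketch. Real DNSSEC uses public-key signatures carried in RRSIG, DNSKEY, and DS records; in the sketch below a keyed hash stands in for a signature purely to keep the example self-contained, so only the shape of the validation loop should be taken literally:

```python
import hashlib

# Toy stand-ins for signing and verification. Real DNSSEC uses
# asymmetric signatures; a keyed hash is used here only so the
# example runs without a crypto library.
def toy_sign(key: bytes, data: bytes) -> bytes:
    return hashlib.sha256(key + data).digest()

def toy_verify(key: bytes, data: bytes, sig: bytes) -> bool:
    return toy_sign(key, data) == sig

def validate_chain(trust_anchor: bytes, chain: list) -> bool:
    """chain: list of (zone_key, key_sig, record, record_sig) tuples,
    ordered from the zone just below the trust anchor downward."""
    parent_key = trust_anchor
    for zone_key, key_sig, record, record_sig in chain:
        # The parent must vouch for this zone's key (DS-record analogue).
        if not toy_verify(parent_key, zone_key, key_sig):
            return False
        # The zone's key must sign the record itself (RRSIG analogue).
        if not toy_verify(zone_key, record, record_sig):
            return False
        parent_key = zone_key
    return True
```

A tampered record fails the per-zone check, and a substituted zone key fails the parent's vouching check, which is exactly why an attacker cannot forge a valid chain without a private key.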
DNSSEC is a critical step towards securing the Internet. By validating data origin and data integrity, DNSSEC complements other Internet security mechanisms, such as SSL. It is worth noting that although we have used web access in the examples above, DNS infrastructure is widely used in many other Internet applications, including email.
Currently, Google Public DNS serves an average of more than 130 billion DNS queries per day (peaking at 150 billion) from more than 70 million unique IP addresses. However, only 7% of queries from the client side are DNSSEC-enabled (about 3% requesting validation and 4% requesting DNSSEC data but no validation), and about 1% of DNS responses from the name server side are signed. Overall, DNSSEC is still at an early stage and we hope that our support will help expedite its deployment.
Effective deployment of DNSSEC requires action from both DNS resolvers and authoritative name servers. Resolvers, especially those of ISPs and other public resolvers, need to start validating DNS responses. Meanwhile, domain owners have to sign their domains. Today, about 1/3 of top-level domains have been signed, but most second-level domains remain unsigned. We encourage all involved parties to push DNSSEC deployment and further protect Internet users from DNS-based network intrusions.
For more information about Google Public DNS, please visit: https://developers.google.com/speed/public-dns. In particular, more details about our DNSSEC support can be found in the FAQ and Security pages. Additionally, general specifications of the DNSSEC standard can be found in RFCs 4033, 4034, 4035, and 5155.
Update March 21: We've been listening to your questions and would like to clarify that validation is not yet enabled for non-DNSSEC aware clients. As a first step, we launched DNSSEC validation as an opt-in feature and will only perform validation if clients explicitly request it. We're going to work to minimize the impact of any DNSSEC misconfigurations that could cause connection breakages before we enable validation by default for all clients that have not explicitly opted out.
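A client explicitly requests DNSSEC data by setting the DO ("DNSSEC OK") bit in the EDNS0 OPT pseudo-record of its query (RFC 3225/6891). As a sketch of what that looks like on the wire, the following builds a minimal query packet with the DO bit set; the transaction ID and query name are arbitrary examples:

```python
import struct

# Sketch: a minimal DNS A-record query with an EDNS0 OPT record whose
# DO (DNSSEC OK) bit is set. This is how a client signals that it wants
# DNSSEC records in the response.
def build_dnssec_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=1
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 1)
    # Question: length-prefixed labels, then QTYPE and QCLASS=IN
    qname = b"".join(
        struct.pack("B", len(label)) + label.encode()
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)
    # OPT pseudo-RR: root name, TYPE=41, "class" = UDP payload size 4096,
    # "TTL" = extended RCODE/flags with the DO bit (0x8000), RDLENGTH=0
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0x00008000, 0)
    return header + question + opt
```

Sending this over UDP to port 53 of a validating resolver (such as 8.8.8.8) would return signed data for DNSSEC-enabled zones; without the DO bit, the resolver simply omits the DNSSEC records.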
Update May 6: We've enabled DNSSEC validation by default. That means all clients are now protected and responses to all queries will be validated unless clients explicitly opt out.
Tuesday, March 12, 2013
We created a new Help for hacked sites informational series to help all levels of site owners understand how they can recover their hacked site. The series includes over a dozen articles and 80+ minutes of informational videos—from the basics of what it means for a site to be hacked to diagnosing specific malware infection types.
Over 25% of sites that are hacked may remain compromised
In StopBadware and Commtouch’s 2012 survey of more than 600 webmasters of hacked sites, 26% of site owners reported that their site was still compromised while 2% completely abandoned their site. We hope that by adding our educational resources to the great tools and information already available from the security community, more hacked sites can restore their unique content and make it safely available to users. The fact remains, however, that the process to recovery requires fairly advanced system administrator skills and knowledge of source code. Without help from others—perhaps their hoster or a trusted expert—many site owners may still struggle to recover.
Hackers’ tactics are difficult for site owners to detect
Cybercriminals employ various tricks to avoid the site owner’s detection, making recovery difficult for the average site owner. One technique is adding “hidden text” to the site’s pages so that users don’t see the damage but search engines still process the content. This is often the case for sites hacked with spam: hackers abuse a reputable site to help their own sites (commonly pharmaceutical or poker sites) rank in search results.
In cases of sites hacked to distribute malware, Google provides verified site owners with a sample of infected URLs, often with their malware infection type, such as Server configuration (using the server’s configuration file to redirect users to malicious content). In Help for hacked sites, Lucas Ballard, a software engineer on our Safe Browsing team, explains how to locate and clean this malware infection type.
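Server-configuration infections of this kind often take the form of conditional redirect rules added to a file such as Apache's `.htaccess`. A toy heuristic, not a real malware scanner, can flag the most common telltale patterns; the two regexes below are illustrative examples only:

```python
import re

# Toy heuristic: flag lines in an Apache .htaccess file that look like
# the conditional search-engine redirects typical of the "server
# configuration" infection type. These two patterns are illustrative,
# not an exhaustive or authoritative signature set.
SUSPICIOUS = [
    # Redirect only visitors arriving from a search engine
    re.compile(r"RewriteCond\s+%\{HTTP_REFERER\}.*(google|bing|yahoo)", re.I),
    # External 301/302 redirect rules
    re.compile(r"RewriteRule\s+.*\[.*R=30[12].*\]", re.I),
]

def suspicious_lines(htaccess_text: str) -> list:
    hits = []
    for line in htaccess_text.splitlines():
        if any(pattern.search(line) for pattern in SUSPICIOUS):
            hits.append(line.strip())
    return hits
```

Any hit warrants a manual look: legitimate redirects exist, but ones keyed on search-engine referrers in a file the owner didn't write are a classic sign of compromise.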
Reminder to keep your site secure
I realize that reminding you to keep your site secure is a bit like my mother yelling “don’t forget to bring a coat!” as I leave her sunny California residence. Like my mother, I can’t help myself. Please remember to:
- Be vigilant about keeping software updated
- Understand the security practices of all applications, plugins, third-party software, etc., before you install them on your server
- Remove unnecessary or unused software
- Enforce creation of strong passwords
- Keep all devices used to log in to your web server secure (updated operating system and browser)
- Make regular, automated backups
Tuesday, February 19, 2013
Have you ever gotten a plea to wire money to a friend stranded at an international airport? An oddly written message from someone you haven’t heard from in ages? Compared to five years ago, more scam, fraudulent, and spammy messages today come from someone you know. Although spam filters have become very powerful—in Gmail, less than 1 percent of spam emails make it into an inbox—these unwanted messages are much more likely to make it through if they come from someone you’ve been in contact with before. As a result, in 2010 spammers started changing their tactics—and we saw a large increase in fraudulent mail sent from Google Accounts. In turn, our security team has developed new ways to keep you safe, and dramatically reduced the amount of these messages.
Spammers’ new trick—hijacking accounts
To improve their chances of beating a spam filter by sending you spam from your contact’s account, the spammer first has to break into that account. This means many spammers are turning into account thieves. Every day, cyber criminals break into websites to steal databases of usernames and passwords—the online “keys” to accounts. They put the databases up for sale on the black market, or use them for their own nefarious purposes. Because many people re-use the same password across different accounts, stolen passwords from one site are often valid on others.
With stolen passwords in hand, attackers attempt to break into accounts across the web and across many different services. We’ve seen a single attacker using stolen passwords to attempt to break into a million different Google accounts every single day, for weeks at a time. A different gang attempted sign-ins at a rate of more than 100 accounts per second. Other services are often more vulnerable to this type of attack, but when someone tries to log into your Google Account, our security system does more than just check that a password is correct.
Every time you sign in to Google, whether via your web browser once a month or an email program that checks for new mail every five minutes, our system performs a complex risk analysis to determine how likely it is that the sign-in really comes from you. In fact, there are more than 120 variables that can factor into how a decision is made.
If a sign-in is deemed suspicious or risky for some reason—maybe it’s coming from a country oceans away from your last sign-in—we ask some simple questions about your account. For example, we may ask for the phone number associated with your account, or for the answer to your security question. These questions are normally hard for a hijacker to answer, but easy for the real owner. Using security measures like these, we've reduced the number of compromised accounts by 99.7 percent since the peak of these hijacking attempts in 2011.
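The shape of such a risk-based check can be sketched in a few lines. To be clear, Google's real system weighs more than 120 signals; the three signals, weights, and threshold below are invented purely for illustration:

```python
# Toy illustration of risk-based sign-in scoring. The real system uses
# 120+ variables; these signals and weights are hypothetical examples
# chosen only to show the overall shape of the decision.
def signin_risk(last_country: str, country: str,
                known_device: bool, failed_attempts: int) -> float:
    score = 0.0
    if country != last_country:
        score += 0.5                        # large geographic jump
    if not known_device:
        score += 0.3                        # unrecognized device
    score += min(failed_attempts, 5) * 0.04  # recent password failures
    return score

def requires_challenge(score: float, threshold: float = 0.6) -> bool:
    # Above the threshold, ask an extra question (e.g. the phone number
    # on the account, or the security-question answer) before signing in.
    return score >= threshold
```

The key design point is that the password alone never decides the outcome: a correct password from a risky context still triggers an additional challenge that is easy for the owner and hard for a hijacker.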
While we do our best to keep spammers at bay, you can help protect your account by making sure you’re using a strong, unique password for your Google Account, upgrading your account to use 2-step verification, and updating the recovery options on your account such as your secondary email address and your phone number. Following these three steps can help prevent your account from being hijacked—this means less spam for your friends and contacts, and improved security and privacy for you.
(Cross-posted from the Official Google Blog)
Thursday, January 10, 2013
Protecting user security and privacy is a huge responsibility, and software security is a big part of it. Learning about new ways to “break” applications is important, but learning preventative skills to use when “building” software, like secure design and coding practices, is just as critical. To help promote secure development habits, Google is once again partnering with the organizers of SyScan to host Hardcode, a secure coding contest on the Google App Engine platform.
Participation will be open to teams of up to 5 full-time students (undergraduate or high school, additional restrictions may apply). Contestants will be asked to develop open source applications that meet a set of functional and security requirements. The contest will consist of two rounds: a qualifying round over the Internet, with broad participation from any team of students, and a final round, to be held during SyScan on April 23-25 in Singapore.
During the qualifying round, teams will be tasked with building an application and describing its security design. A panel of judges will assess all submitted applications and select the top five to compete in the final round.
At SyScan, the five finalist teams will be asked to develop a set of additional features and fix any security flaws identified in their qualifying submission. After two more days of hacking, a panel of judges will rank the projects and select a grand prize winning team that will receive $20,000 Singapore dollars. The 2nd-5th place finalist teams will receive $15,000, $10,000, $5,000, and $5,000 Singapore dollars, respectively.
Hardcode begins on Friday, January 18th. Full contest details will be announced via our mailing list, so subscribe there for more information!
Thursday, January 3, 2013
Late on December 24, Chrome detected and blocked an unauthorized digital certificate for the "*.google.com" domain. We investigated immediately and found the certificate was issued by an intermediate certificate authority (CA) linking back to TURKTRUST, a Turkish certificate authority. Intermediate CA certificates carry the full authority of the CA, so anyone who has one can use it to create a certificate for any website they wish to impersonate.
In response, we updated Chrome’s certificate revocation metadata on December 25 to block that intermediate CA, and then alerted TURKTRUST and other browser vendors. TURKTRUST told us that based on our information, they discovered that, in August 2011, they had mistakenly issued two intermediate CA certificates to organizations that should have instead received regular SSL certificates. On December 26, we pushed another Chrome metadata update to block the second mistaken CA certificate and informed the other browser vendors.
Our actions addressed the immediate problem for our users. Given the severity of the situation, we will update Chrome again in January to no longer indicate Extended Validation status for certificates issued by TURKTRUST, though connections to TURKTRUST-validated HTTPS servers may continue to be allowed.
Since our priority is the security and privacy of our users, we may also decide to take additional action after further discussion and careful consideration.