Wednesday, December 12, 2012

Helping webmasters with hacked sites

(Cross-posted from the Webmaster Central Blog)

Having your website hacked can be a frustrating experience and we want to do everything we can to help webmasters get their sites cleaned up and prevent compromises from happening again. With this post we wanted to outline two common types of attacks as well as provide clean-up steps and additional resources that webmasters may find helpful.

To best serve our users, it’s important that the pages we link to in our search results are safe to visit. Unfortunately, malicious third parties may take advantage of legitimate webmasters by hacking their sites to manipulate search engine results or to distribute malicious content and spam. When we detect that a site has been hacked, we alert users and webmasters alike by displaying a “This site may be compromised” warning in our search results:

We want to give webmasters the necessary information to help them clean up their sites as quickly as possible. If you’ve verified your site in Webmaster Tools, we’ll also send you a message when we’ve identified that your site has been hacked and, when possible, give you example URLs.

Occasionally, your site may become compromised to facilitate the distribution of malware. When we recognize that, we’ll identify the site in our search results with a label of “This site may harm your computer” and browsers such as Chrome may display a warning when users attempt to visit. In some cases, we may share more specific information in the Malware section of Webmaster Tools. We also have specific tips for preventing and removing malware from your site in our Help Center.

Two common ways malicious third-parties may compromise your site are the following:

Injected Content

Hackers may attempt to influence search engines by injecting links leading to sites they own. These links are often hidden to make it difficult for a webmaster to detect this has occurred. The site may also be compromised in such a way that the content is only displayed when the site is visited by search engine crawlers.

Example of injected pharmaceutical content

If we’re able to detect this, we’ll send a message to your Webmaster Tools account with useful details. If you suspect your site has been compromised in this way, you can check the content your site returns to Google by using the Fetch as Google tool. A few good places to look for the source of such a compromise are .php files, template files, and CMS plugins.
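One rough way to approximate what Fetch as Google reveals is to request the same URL with a normal browser user-agent and with Googlebot’s published user-agent, then compare the responses. A minimal sketch, with helper names and an arbitrary 20% size threshold of our own choosing; a size mismatch is only a lead for a manual diff:

```python
import urllib.request

GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36"

def fetch(url, user_agent):
    """Fetch a URL while presenting the given User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def differs_significantly(body_a, body_b, threshold=0.2):
    """True when the two bodies differ in size by more than `threshold`
    (an arbitrary 20% here), a crude but useful cloaking signal."""
    longest = max(len(body_a), len(body_b), 1)
    return abs(len(body_a) - len(body_b)) / longest > threshold

def looks_cloaked(url):
    """Compare what a browser sees against what a crawler sees."""
    return differs_significantly(fetch(url, BROWSER_UA), fetch(url, GOOGLEBOT_UA))
```

Note this only catches user-agent cloaking; sites that key on crawler IP ranges will serve the same content to both requests, which is exactly why the authenticated Fetch as Google tool is the more reliable check.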

Redirecting Users

Hackers might also try to redirect users to spammy or malicious sites. They may do it to all users or target specific users, such as those coming from search engines or those on mobile devices. If you’re able to access your site when visiting it directly but you experience unexpected redirects when coming from a search engine, it’s very likely your site has been compromised in this manner.

One of the ways hackers accomplish this is by modifying server configuration files (such as Apache’s .htaccess) to serve different content to different users, so it’s a good idea to check your server configuration files for any such modifications.

This malicious behavior can also be accomplished by injecting JavaScript into the source code of your site. The JavaScript may be designed to hide its purpose, so it may help to look for terms like “eval”, “decode”, and “escape”.
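A quick way to surface those terms is a recursive scan of the site’s source files. A sketch, with an illustrative pattern list; matches are only leads for manual review, since plenty of legitimate code uses eval() too:

```python
import re
from pathlib import Path

# Terms from the post plus two common PHP obfuscation helpers.
SUSPICIOUS = re.compile(r"\b(eval|base64_decode|gzinflate|decode|escape)\s*\(")

def scan(root):
    """Return (file, keyword) pairs for suspicious calls in source files."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".php", ".js", ".html"}:
            continue
        text = path.read_text(errors="ignore")
        for match in SUSPICIOUS.finditer(text):
            hits.append((str(path), match.group(1)))
    return hits
```

Paying particular attention to recently modified files, and diffing against a known-good backup of the site, narrows the list much faster than reviewing every match.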

Cleanup and Prevention

If your site has been compromised, it’s important to not only clean up the changes made to your site but to also address the vulnerability that allowed the compromise to occur. We have instructions for cleaning your site and preventing compromises while your hosting provider and our Malware and Hacked sites forum are great resources if you need more specific advice.

Once you’ve cleaned up your site, you should submit a reconsideration request that, if successful, will remove the warning label in our search results.

As always, if you have any questions or feedback, please tell us in the Webmaster Help Forum.

Monday, September 17, 2012

Adding OAuth 2.0 support for IMAP/SMTP and XMPP to enhance auth security

(Cross-posted from the Google Developers Blog)

Our users and developers take password security seriously and so do we. Passwords alone have weaknesses we all know about, so we’re working over the long term to support additional mechanisms to help protect user information. Over a year ago, we announced a recommendation that OAuth 2.0 become the standard authentication mechanism for our APIs so you can make the safest apps using Google platforms. You can use OAuth 2.0 to build clients and websites that securely access account data and work with our advanced security features, such as 2-step verification. But our commitment to OAuth 2.0 is not limited to web APIs. Today we’re going a step further by adding OAuth 2.0 support for IMAP/SMTP and XMPP. Developers using these protocols can now move to OAuth 2.0, and users will experience the benefits of more secure OAuth 2.0 clients.
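For IMAP, the access token is presented via the SASL XOAUTH2 mechanism. A minimal sketch; the helper names and the placeholder token are ours, and a real token comes from an OAuth 2.0 authorization flow:

```python
import imaplib

def xoauth2_string(user, access_token):
    """Build the XOAUTH2 initial client response.
    The \x01 (Ctrl-A) separators are part of the wire format."""
    return f"user={user}\x01auth=Bearer {access_token}\x01\x01"

def imap_login(user, access_token):
    """Open an IMAP session authenticated with an OAuth 2.0 token."""
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    # imaplib base64-encodes the authenticator's return value itself.
    imap.authenticate("XOAUTH2",
                      lambda _: xoauth2_string(user, access_token).encode())
    return imap
```

The same string works for SMTP via the AUTH XOAUTH2 command; in both cases the client never sees or stores the account password.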

When clients use OAuth 2.0, they never ask users for passwords. Users have tighter control over what data clients have access to, and clients never see a user's password, making it much harder for a password to be stolen. If a user has their laptop stolen, or has any reason to believe that a client has been compromised, they can revoke the client’s access without impacting anything else that has access to their data.

We are also announcing the deprecation of older authentication mechanisms. If you’re using these, you should move to the new OAuth 2.0 APIs.

Our team has been working hard since we announced our support of OAuth in 2008 to make it easy for you to create applications that use more secure mechanisms than passwords to protect user information. Check out the Google Developers Blog for examples, including the OAuth 2.0 Playground and Service Accounts, or see Using OAuth 2.0 to Access Google APIs.

Wednesday, August 29, 2012

Content hosting for the modern web

Our applications host a variety of web content on behalf of our users, and over the years we learned that even something as simple as serving a profile image can be surprisingly fraught with pitfalls. Today, we wanted to share some of our findings about content hosting, along with the approaches we developed to mitigate the risks.

Historically, all browsers and browser plugins were designed simply to excel at displaying several common types of web content, and to be tolerant of any mistakes made by website owners. In the days of static HTML and simple web applications, giving the owner of the domain authoritative control over how the content is displayed wasn’t of any importance.

It wasn’t until the mid-2000s that we started to notice a problem: a clever attacker could manipulate the browser into interpreting seemingly harmless images or text documents as HTML, Java, or Flash—thus gaining the ability to execute malicious scripts in the security context of the application displaying these documents (essentially, a cross-site scripting flaw). For all the increasingly sensitive web applications, this was very bad news.

During the past few years, modern browsers began to improve. For example, the browser vendors limited the amount of second-guessing performed on text documents, certain types of images, and unknown MIME types. However, there are many standards-enshrined design decisions—such as ignoring MIME information on any content loaded through <object>, <embed>, or <applet>—that are much more difficult to fix; these practices may lead to vulnerabilities similar to the GIFAR bug.
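On the server side, one concrete mitigation is to declare an exact MIME type and explicitly opt out of sniffing. A minimal sketch; the helper function is our own, but the header names are standard:

```python
def safe_download_headers(filename, content_type="application/octet-stream"):
    """Headers for serving a user-supplied file defensively."""
    return [
        ("Content-Type", content_type),
        # Ask the browser not to second-guess the declared MIME type.
        ("X-Content-Type-Options", "nosniff"),
        # Force a download rather than inline rendering in the app's origin.
        ("Content-Disposition", f'attachment; filename="{filename}"'),
    ]
```

Headers like these reduce, but do not eliminate, the risk described above, which is why isolating the content in a separate origin (discussed next) remains the stronger defense.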

Google’s security team played an active role in investigating and remediating many content sniffing vulnerabilities during this period. In fact, many of the enforcement proposals were first prototyped in Chrome. Even so, overall progress is slow; for every resolved problem, researchers discover a previously unknown flaw in another browser mechanism. Two recent examples are the Byte Order Mark (BOM) vulnerability reported to us by Masato Kinugawa and the MHTML attacks that we have seen happening in the wild.

For a while, we focused on content sanitization as a possible workaround - but in many cases, we found it to be insufficient. For example, Aleksandr Dobkin managed to construct a purely alphanumeric Flash applet, and in our internal work the Google security team created images that can be forced to include a particular plaintext string in their body even after being scrubbed and re-encoded in a deterministic way.

In the end, we reacted to this raft of content hosting problems by placing some of the high-risk content in separate, isolated web origins. There, the “sandboxed” files pose virtually no threat to the applications themselves, or to authentication cookies. For public content, that’s all we need: we may use random or user-specific subdomains, depending on the degree of isolation required between unrelated documents, but otherwise the solution just works.

The situation gets more interesting for non-public documents, however. Copying users’ normal authentication cookies to the “sandbox” domain would defeat the purpose. The natural alternative is to move the secret token used to confer access rights from the Cookie header to a value embedded in the URL, and make the token unique to every document instead of keeping it global.
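A minimal sketch of that token-in-URL scheme, with a placeholder sandbox host and an in-memory registry of our own invention:

```python
import secrets

def issue_url(doc_id, registry, sandbox_host="sandbox.example.com"):
    """Mint a capability URL: a fresh unguessable token per document,
    embedded in the URL instead of relying on cookies."""
    token = secrets.token_urlsafe(32)   # unique per document, not global
    registry[token] = doc_id            # server-side token -> document map
    return f"https://{sandbox_host}/doc/{token}"

def resolve(url, registry):
    """Look up the document a capability URL grants access to."""
    token = url.rsplit("/", 1)[-1]
    return registry.get(token)          # None means no access
```

Because each token is document-specific, leaking one URL exposes only that document, and revocation is as simple as deleting the registry entry.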

While this solution eliminates many of the significant design flaws associated with HTTP cookies, it trades one imperfect authentication mechanism for another. In particular, it’s important to note there are more ways to accidentally leak a capability-bearing URL than there are to accidentally leak cookies; the most notable risk is disclosure through the Referer header for any document format capable of including external subresources or of linking to external sites.

In our applications, we take a risk-based approach. Generally speaking, we tend to use three strategies:
  • In higher risk situations (e.g. documents with elevated risk of URL disclosure), we may couple the URL token scheme with short-lived, document-specific cookies issued for specific sandbox subdomains. This mechanism, known within Google as FileComp, relies on a range of attack mitigation strategies that are too disruptive for Google applications at large, but work well in this highly constrained use case.
  • In cases where the risk of leaks is limited but responsive access controls are preferable (e.g., embedded images), we may issue URLs bound to a specific user, or ones that expire quickly.
  • In low-risk scenarios, where usability requirements necessitate a more balanced approach, we may opt for globally valid, longer-lived URLs.
Of course, the research into the security of web browsers continues, and the landscape of web applications is evolving rapidly. We are constantly tweaking our solutions to protect Google users even better, and even the solutions described here may change. Our commitment to making the Internet a safer place, however, will never waver.

Tuesday, June 19, 2012

Safe Browsing - Protecting Web Users for 5 Years and Counting

It’s been five years since we officially announced malware and phishing protection via our Safe Browsing effort. The goal of Safe Browsing is still the same today as it was five years ago: to protect people from malicious content on the Internet. Today, this protection extends not only to Google’s search results and ads, but also to popular web browsers such as Chrome, Firefox and Safari.

To achieve comprehensive and timely detection of new threats, the Safe Browsing team at Google has labored continuously to adapt to rising challenges and to build an infrastructure that automatically detects harmful content around the globe.

For a quick sense of the scale of our effort:
  • We protect 600 million users through built-in protection for Chrome, Firefox, and Safari, where we show several million warnings every day to Internet users. You may have seen our telltale red warnings pop up — when you do, please don’t go to sites we've flagged for malware or phishing. Our free and public Safe Browsing API allows other organizations to keep their users safe by using the data we’ve compiled.
  • We find about 9,500 new malicious websites every day. These are either innocent websites that have been compromised by malware authors, or others that are built specifically for malware distribution or phishing. While we flag many sites daily, we strive for high quality and have had only a handful of false positives.
  • Approximately 12-14 million Google Search queries per day show our warning to caution users against visiting sites that are currently compromised. Once a site has been cleaned up, the warning is lifted.
  • We provide malware warnings for about 300 thousand downloads per day through our download protection service for Chrome.
  • We send thousands of notifications daily to webmasters. Signing up with Webmaster Tools helps us communicate directly with webmasters when we find something on their site, and our ongoing partnerships help webmasters who can't sign up or need additional help.
  • We also send thousands of notifications daily to Internet Service Providers (ISPs) & CERTs to help them keep their networks clean. Network administrators can sign up to receive frequent alerts.
By protecting Internet users, webmasters, ISPs, and Google over the years, we've built up a steadily more sophisticated understanding of web-based malware and phishing. These aren’t completely solvable problems because threats continue to evolve, but our technologies and processes do, too.

From here we’ll try to hit a few highlights from our journey.

Phishing

Many phishers go right for the money, and that pattern is reflected in the continued heavy targeting of online commerce sites like eBay & PayPal. Even though we’re still seeing some of the same techniques we first saw 5+ years ago (they unfortunately still catch victims), phishing attacks are also getting more creative and sophisticated. As they evolve, we improve our system to catch more and newer attacks (Chart 1). Modern attacks are:
  • Faster - Many phishing webpages (URLs) remain online for less than an hour in an attempt to avoid detection.
  • More diverse - Targeted “spear phishing” attacks have become increasingly common. Additionally, phishing attacks are now targeting companies, banks, and merchants globally (Chart 2).
  • Used to distribute malware - Phishing sites commonly use the look and feel of popular sites and social networks to trick users into installing malware. For example, these rogue sites may ask to install a binary or browser extension to enable certain fake content.
(Chart 1)

(Chart 2)

Malware

Safe Browsing identifies two main categories of websites that may harm visitors:
  • Legitimate websites that are compromised in large numbers so they can deliver or redirect to malware (Chart 3).
  • Attack websites that are built specifically to distribute malware and are being used in increasing numbers (Chart 4).
When a legitimate website is compromised, it’s usually modified to include content from an attack site or to redirect to an attack site. These attack sites often deliver “drive-by downloads” to visitors. A drive-by download exploits a vulnerability in the browser to execute a malicious program on a user's computer without their knowledge.

Drive-by downloads install and run a variety of malicious programs, such as:
  • Spyware to gather information like your banking credentials.
  • Malware that uses your computer to send spam.
(Chart 3)

Attack sites are purposely built for distributing malware and try to avoid detection by services such as Safe Browsing. To do so, they adopt several techniques, such as rapidly changing their location through free web hosting, dynamic DNS records, and automated generation of new domain names (Chart 4).

(Chart 4)

As companies have designed browsers and plugins to be more secure over time, malware purveyors have also employed social engineering, where the malware author tries to deceive the user into installing malicious software without the need for any software vulnerabilities. A good example is a “Fake Anti-Virus” alert that masquerades as a legitimate security warning but actually infects computers with malware.

While socially engineered attacks still trail drive-by downloads in frequency, this is a fast-growing category, likely due to improved browser security.

How can you help prevent malware and phishing?

Our system is designed to protect users at high volumes (Chart 5), yet here are a few things that you can do to help:
  • Don't ignore our warnings. Compromised legitimate sites can continue to host malware or phishing threats until the webmaster has cleaned them up. Malware is often designed not to be seen, so you won't know if your computer becomes infected. It’s best to wait for the warning to be removed before potentially exposing your machine to a harmful infection.
  • Help us find bad sites. Chrome users can select the check box on the red warning page. The data sent to us helps us find bad sites more quickly and helps protect other users.
  • Register your website with Google Webmaster Tools. Doing so helps us inform you quickly if we find suspicious code on your website at any point.

(Chart 5)

Looking Forward

The threat landscape changes rapidly. Our adversaries are highly motivated to make money from unsuspecting victims, at great cost to everyone involved.

Our tangible impact in making the web more secure and our ability to directly protect users from harm has been a great source of motivation for everyone on the Safe Browsing team. We are also happy that our free data feed has become the de facto base of comparison for academic research in this space.

As we look forward, Google continues to invest heavily in the Safe Browsing team, enabling us to counter newer forms of abuse, and our team has supplied the technology underpinning several recent efforts.

For their strong efforts over the years, I thank Panayiotis Mavrommatis, Brian Ryner, Lucas Ballard, Moheeb Abu Rajab, Fabrice Jaubert, Nav Jagpal, Ian Fette, along with the whole Safe Browsing Team.

Tuesday, June 12, 2012

Microsoft XML vulnerability under active exploitation

Today Microsoft issued a Security Advisory describing a vulnerability in the Microsoft XML component. We discovered this vulnerability—which is leveraged via an uninitialized variable—being actively exploited in the wild for targeted attacks, and we reported it to Microsoft on May 30th. Over the past two weeks, Microsoft has been responsive to the issue and has been working with us. These attacks are being distributed both via malicious web pages intended for Internet Explorer users and through Office documents. Users running Windows XP up to and including Windows 7 are known to be vulnerable.

As part of the advisory, Microsoft suggests installing a Fix it solution that will prevent the exploitation of this vulnerability. We strongly recommend Internet Explorer and Microsoft Office users immediately install the Fix it while Microsoft develops and publishes a final fix as part of a future advisory.

Tuesday, June 5, 2012

Security warnings for suspected state-sponsored attacks

We are constantly on the lookout for malicious activity on our systems, in particular unauthorized attempts by third parties to log into users’ accounts. When we have specific intelligence—either directly from users or from our own monitoring efforts—we show clear warning signs and put in place extra roadblocks to thwart these bad actors.

Today, we’re taking that a step further for a subset of our users, who we believe may be the target of state-sponsored attacks. You can see what this new warning looks like here:

If you see this warning, it does not necessarily mean that your account has been hijacked. It just means that we believe you may be a target of phishing or malware, for example, and that you should take immediate steps to secure your account: create a unique password that has a good mix of capital and lowercase letters, as well as punctuation marks and numbers; enable 2-step verification for additional security; and update your browser, operating system, plugins, and document editors. Attackers often send links to fake sign-in pages to try to steal your password, so be careful about where you sign in to Google and check the address in your browser bar. These warnings are not being shown because Google’s internal systems have been compromised or because of a particular attack.

You might ask how we know this activity is state-sponsored. We can’t go into the details without giving away information that would be helpful to these bad actors, but our detailed analysis—as well as victim reports—strongly suggest the involvement of states or groups that are state-sponsored.

We believe it is our duty to be proactive in notifying users about attacks or potential attacks so that they can take action to protect their information. And we will continue to update these notifications based on the latest information.

Tuesday, May 22, 2012

Notifying users affected by the DNSChanger malware

Starting today we’re undertaking an effort to notify roughly half a million people whose computers or home routers are infected with a well-publicized form of malware known as DNSChanger. After successfully alerting a million users last summer to a different type of malware, we’ve replicated this method and have started showing warnings via a special message that will appear at the top of the Google search results page for users with affected devices.

The Domain Name System (DNS) translates familiar web addresses into the numerical addresses that computers use to send traffic to the right place. The DNSChanger malware modifies DNS settings to use malicious servers that point users to fake sites and other harmful locations. DNSChanger attempts to modify the settings on home routers as well, meaning other computers and mobile devices may also be affected.

Since the FBI and Estonian law enforcement arrested a group of people and transferred control of the rogue DNS servers to the Internet Systems Consortium in November 2011, various ISPs and other groups have attempted to alert victims. However, many of these campaigns have had limited success because they could not target the affected users, or did not appear in the user’s preferred language (only half the affected users speak English as their primary language). At the current disinfection rate, hundreds of thousands of devices will still be infected when the court order expires on July 9th and the replacement DNS servers are shut down. At that time, any remaining infected machines may experience slowdowns or completely lose Internet access.

Our goal with this notification is to raise awareness of DNSChanger among affected users. We believe directly messaging affected users on a trusted site and in their preferred language will produce the best possible results. While we expect to notify over 500,000 users within a week, we realize we won’t reach every affected user. Some ISPs have been taking their own actions, a few of which will prevent our warning from being displayed on affected devices. We also can’t guarantee that our recommendations will always clean infected devices completely, so some users may need to seek additional help. These conditions aside, if more devices are cleaned and steps are taken to better secure the machines against further abuse, the notification effort will be well worth it.

Monday, April 23, 2012

Spurring more vulnerability research through increased rewards

We recently marked the anniversary of our Vulnerability Reward Program, possibly the first permanent program of its kind for web properties. This collaboration with the security research community has far surpassed our expectations: we have received over 780 qualifying vulnerability reports spanning hundreds of Google-developed services, as well as the software written by the fifty or so companies we have acquired. In just over a year, the program paid out around $460,000 to roughly 200 individuals. We’re confident the program has made Google users safer.

Today, to celebrate the success of this effort and to underscore our commitment to security, we are rolling out updated rules for our program — including new reward amounts for critical bugs:
  • $20,000 for qualifying vulnerabilities that the reward panel determines will allow code execution on our production systems. 
  • $10,000 for SQL injection and equivalent vulnerabilities; and for certain types of information disclosure, authentication, and authorization bypass bugs. 
  • Up to $3,133.7 for many types of XSS, XSRF, and other high-impact flaws in highly sensitive applications. 
To help focus the research on bringing the greatest benefit to our users, the new rules offer reduced rewards for vulnerabilities discovered in non-integrated acquisitions and for lower risk issues. For example, while every flaw deserves appropriate attention, we are likely to issue a higher reward for a cross-site scripting vulnerability in Google Wallet than one in Google Art Project, where the potential risk to user data is significantly smaller.

Happy hunting - and if you find a security problem, please let us know!

Friday, March 30, 2012

An improved Google Authenticator app to celebrate millions of 2-step verification users

Since we first made 2-step verification available to all Google users in February of 2011, millions of people around the world have chosen to use this extra layer of security to protect their Google Accounts. Thousands more are signing up every day. And recently, we updated the feature’s companion smartphone app, Google Authenticator, for Android users.

2-step verification works by requiring users to enter a verification code when signing in using a computer they haven’t previously marked as “trusted.” Many users choose to receive their codes via SMS or voice call, but smartphone users also have the option to generate codes on their phone by installing the Google Authenticator app — an option that is particularly useful while traveling, or where cellular coverage is unreliable. You can use Google Authenticator to generate a valid code even when your phone isn’t connected to a cellular or data network.
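Authenticator’s offline codes work because they follow the open TOTP algorithm (RFC 6238): the phone and the server share a secret, and both derive the current code from that secret and the time, so no network connection is needed. A minimal sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute a TOTP code (RFC 6238): HMAC-SHA1 over the count of
    30-second intervals since the Unix epoch, dynamically truncated."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The base32 secret is what the QR code shown during 2-step verification setup encodes; because codes roll over every 30 seconds, a stolen code is useless moments later.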

We want 2-step verification to be simple to use, and therefore we are working continually to make it easier for users to sign up, manage their settings, and maintain easy access to their verification codes at any time and from anywhere. Our updated Google Authenticator app has an improved look-and-feel, as well as fundamental upgrades to the back-end security and infrastructure that necessitated the migration to a new app. Future improvements, however, will use the familiar Android update procedure.

Current Google Authenticator users will be prompted to upgrade to the new version when they launch the app. We’ve worked hard to make the upgrade process as smooth as possible, but if you have questions please refer to the Help Center article for more information. And, if you aren’t already a 2-step verification user, we encourage you to give it a try.

Thursday, February 9, 2012

Celebrating one year of web vulnerability research

In November 2010, we introduced a different kind of vulnerability reward program that encourages people to find and report security bugs in Google’s web applications. By all available measures, the program has been a big success. Before we embark further, we wanted to pause and share a few things that we’ve learned from the experience.

“Bug bounty” programs open up vulnerability research to wider participation.

On the morning of our announcement of the program last November, several of us guessed how many valid reports we might see during the first week. Thanks to an already successful Chromium reward program and a healthy stream of regular contributions to our general security submissions queue, most estimates settled around 10 or so. At the end of the first week, we ended up with 43 bug reports. Over the course of the program, we’ve seen more than 1100 legitimate issues (ranging from low to high severity) reported by over 200 individuals, with 730 of those bugs qualifying for a reward. Roughly half of the bugs that received a reward were discovered in software written by approximately 50 companies that Google acquired; the rest were distributed across applications developed by Google (several hundred new ones each year). Significantly, the vast majority of our initial bug reporters had never filed bugs with us before we started offering monetary rewards.

Developing quality bug reports pays off... for everyone.

A well-run vulnerability reward program attracts high quality reports, and we’ve seen a whole lot of them. To date we’ve paid out over $410,000 for web app vulnerabilities to directly support researchers and their efforts. Thanks to the generosity of these bug reporters, we have also donated $19,000 to charities of their choice. It’s not all about money, though. Google has gotten better and stronger as a result of this work. We get more bug reports, which means we get more bug fixes, which means a safer experience for our users.

Bug bounties — the more, the merrier!

We benefited from looking at examples of other types of vulnerability reward programs when designing our own. Similarly, in the months following our reward program kick-off, we saw other companies developing reward programs and starting to focus more on web properties. Over time, these programs can help companies build better relationships with the security research community. As the model replicates, the opportunity to improve the overall security of the web broadens.

And with that, we turn toward the year ahead. We’re looking forward to new reports and ongoing relationships with the researchers who are helping make Google products more secure.

Thursday, February 2, 2012

Android and Security

We frequently get asked about how we defend Android users from malware and other threats. As the Android platform continues its tremendous growth, people wonder how we can maintain a trustworthy experience with Android Market while preserving the openness that remains a hallmark of our overall approach. We’ve been working on lots of defenses, and they have already made a real and measurable difference for our users’ security. Read more about how we defend against malware in Android Market on the Google Mobile Blog here.

Sunday, January 29, 2012

Landing another blow against email phishing

Email phishing, in which someone tries to trick you into revealing personal information by sending fake emails that look legitimate, remains one of the biggest online threats. One of the most popular methods that scammers employ is something called domain spoofing. With this technique, someone sends a message that seems legitimate when you look at the “From” line even though it’s actually a fake. Email phishing costs people and companies millions of dollars each year, if not more. In response, Google and other companies have been talking about how we can move beyond the solutions we’ve developed individually over the years to make a real difference for the whole email industry.

Industry groups come and go, and it’s not always easy to tell at the beginning which ones are actually going to generate good solutions. When the right contributors come together to solve real problems, though, real things happen. That’s why we’re particularly optimistic about today’s announcement of the DMARC working group, a passionate collection of companies focused on significantly cutting down on email phishing and other malicious mail.

Building upon the work of previous mail authentication standards like SPF and DKIM, DMARC is responding to domain spoofing and other phishing methods by creating a standard protocol by which we’ll be able to measure and enforce the authenticity of emails. With DMARC, large email senders can ensure that the email they send is being recognized by mail providers like Gmail as legitimate, as well as set policies so that mail providers can reject messages that try to spoof the senders’ addresses.
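A DMARC policy is published as a DNS TXT record at _dmarc.<domain>. A minimal sketch of reading one; the example record and the parser are illustrative:

```python
def parse_dmarc(record):
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# p=reject tells receivers to refuse mail that fails SPF/DKIM alignment;
# rua= names an address for aggregate failure reports.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
```

A sender typically starts with p=none to receive reports without affecting delivery, then tightens the policy to quarantine and finally reject as confidence grows.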

We’ve been active in the leadership of the DMARC group for almost two years, and now that Gmail and several other large mail senders and providers — namely Facebook, LinkedIn, and PayPal — are actively using the DMARC specification, the road is paved for more members of the email ecosystem to start getting a handle on phishing. Our recent data indicates that roughly 15% of non-spam messages in Gmail are already coming from domains protected by DMARC, which means Gmail users like you don’t need to worry about spoofed messages from these senders. The phishing potential plummets when the system just works, and that’s what DMARC provides.

If you’re a large email sender and you want to try out the DMARC specification, you can learn more at the DMARC website. Even if you’re not ready to take on the challenge of authenticating all your outbound mail just yet, there’s no reason not to sign up to start receiving reports of mail that fraudulently claims to originate from your address. With further adoption of DMARC, we can all look forward to a more trustworthy overall experience with email.

Monday, January 16, 2012

Tech tips that are Good to Know

(Cross-posted from the Official Google Blog)

Does this person sound familiar? He can’t be bothered to type a password into his phone every time he wants to play a game of Angry Birds. When he does need a password, maybe for his email or bank website, he chooses one that’s easy to remember like his sister’s name—and he uses the same one for each website he visits. For him, cookies come from the bakery, IP addresses are the locations of Intellectual Property and a correct Google search result is basically magic.

Most of us know someone like this. Technology can be confusing, and the industry often fails to explain clearly enough why digital literacy matters. So today in the U.S. we’re kicking off Good to Know, our biggest-ever consumer education campaign focused on making the web a safer, more comfortable place. Our ad campaign, which we introduced in the U.K. and Germany last fall, offers privacy and security tips: Use 2-step verification! Remember to lock your computer when you step away! Make sure your connection to a website is secure! It also explains some of the building blocks of the web like cookies and IP addresses. Keep an eye out for the ads in newspapers and magazines, online and in New York and Washington, D.C. subway stations.

The campaign and Good to Know website build on our commitment to keeping people safe online. We’ve created resources like privacy videos, the Google Security Center, the Family Safety Center and Teach Parents Tech to help you develop strong privacy and security habits. We design for privacy, building tools like Google Dashboard, Me on the Web, the Ads Preferences Manager and Google+ Circles—with more on the way.

We encourage you to take a few minutes to check out the Good to Know site, watch some of the videos, and be on the lookout for ads in your favorite newspaper or website. We hope you’ll learn something new about how to protect yourself online—tips that are always good to know!

Update 1/17: Updated to include more background on Good to Know.