<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Google Data &#187; Google Online Security</title>
	<atom:link href="https://googledata.org/category/google-online-security/feed/" rel="self" type="application/rss+xml" />
	<link>https://googledata.org</link>
	<description>Everything Google: News, Products, Services, Content, Culture</description>
	<lastBuildDate>Wed, 28 Dec 2016 21:09:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.13</generator>
	<item>
		<title>Project Wycheproof</title>
		<link>https://googledata.org/google-online-security/project-wycheproof/</link>
		<comments>https://googledata.org/google-online-security/project-wycheproof/#comments</comments>
		<pubDate>Mon, 19 Dec 2016 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=de3f4f0882a8427fb1011104e4e7fb63</guid>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Daniel Bleichenbacher, Security Engineer and Thai Duong, Security Engineer</span><br />We’re excited to announce the release of <a href="https://github.com/google/wycheproof">Project Wycheproof</a>, a set of security tests that check cryptographic software libraries for known weaknesses. We’ve developed over 80 test cases, which have uncovered more than <a href="https://github.com/google/wycheproof/blob/master/doc/bugs.md">40 security bugs</a> (some tests or bugs are not open sourced today, as they are being fixed by vendors). For example, we found that we could recover the private key of widely used <a href="https://github.com/google/wycheproof/blob/master/java/com/google/security/wycheproof/testcases/DsaTest.java">DSA</a> and <a href="https://github.com/google/wycheproof/blob/master/java/com/google/security/wycheproof/testcases/EcdhTest.java">ECDH</a> implementations. We also provide ready-to-use tools to check <a href="https://en.wikipedia.org/wiki/Java_Cryptography_Architecture">Java Cryptography Architecture</a> providers such as <a href="http://bouncycastle.org/">Bouncy Castle</a> and the default providers in <a href="http://openjdk.java.net/">OpenJDK</a>.<br /><br />The main motivation for the project is to have an achievable goal. That’s why we’ve named it after Mount Wycheproof, the <a href="https://www.google.com/search?q=smallest+mountain+in+the+world&amp;oq=smallest+mountain+in+the&amp;aqs=chrome.0.0j69i57j0l4.4374j0j4&amp;sourceid=chrome&amp;ie=UTF-8">smallest mountain in the world</a>. The smaller the mountain, the easier it is to climb!<br /><br />In cryptography, subtle mistakes can have catastrophic consequences, and mistakes in open source cryptographic software libraries repeat too often and remain undiscovered for too long. 
Good implementation guidelines, however, are hard to come by: understanding how to implement cryptography securely requires digesting decades' worth of academic literature. We recognize that software engineers fix and prevent bugs with unit testing, and we found that many cryptographic issues can be resolved by the same means.<br /><br />These observations have prompted us to develop <a href="https://github.com/google/wycheproof">Project Wycheproof</a>, a collection of unit tests that detect known weaknesses or check for expected behaviors of cryptographic algorithms. Our cryptographers have surveyed the literature and implemented most known attacks. As a result, Project Wycheproof provides tests for most cryptographic algorithms, including RSA, elliptic curve crypto, and authenticated encryption.<br /><br />Our first set of tests is written in Java, because Java has a common cryptographic interface. This allowed us to test multiple providers with a single test suite. While this interface is somewhat low level, and should not be used directly, we still apply a "defense in depth" argument and expect that the implementations are as robust as possible. For example, we consider weak default values to be a significant security flaw. We are converting as many tests as possible into sets of test vectors to simplify porting the tests to other languages.<br /><br />While we are committed to developing as many tests as possible and external contributions are welcome — if you want to contribute, please read <a href="https://github.com/google/wycheproof/blob/master/CONTRIBUTING.md">CONTRIBUTING</a> before sending us pull requests — Project Wycheproof is by no means complete. Passing the tests does not imply that the library is secure; it just means that it is not vulnerable to the attacks that Project Wycheproof tries to detect. Cryptographers constantly discover new weaknesses in cryptographic protocols. 
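The test-vector approach described above can be sketched in a few lines of Python. This fragment is purely illustrative: two well-known NIST SHA-256 vectors stand in for a vector set, whereas Wycheproof's real vectors target edge cases and misuse rather than basic correctness.

```python
import hashlib

# Miniature stand-in for a test-vector set: (message, expected hex digest).
# These are the standard NIST SHA-256 vectors for b"" and b"abc".
VECTORS = [
    (b"", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
    (b"abc", "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"),
]

def check_vectors(digest_fn, vectors):
    """Return the vectors that the implementation under test gets wrong."""
    return [(msg, want) for msg, want in vectors
            if digest_fn(msg).hexdigest() != want]

failures = check_vectors(hashlib.sha256, VECTORS)
assert not failures  # a correct implementation matches every vector
```

Because the vectors are plain data, the same set can drive tests in any language against any implementation, which is what makes converting tests into vector sets attractive for porting.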
Nevertheless, with Project Wycheproof, developers and users can now check their libraries against a large number of known attacks without having to sift through hundreds of academic papers or become cryptographers themselves.<br /><br />For more information about the tests and what you can do with them, please visit <a href="https://github.com/google/wycheproof">our homepage on GitHub</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/project-wycheproof/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Announcing OSS-Fuzz: Continuous Fuzzing for Open Source Software</title>
		<link>https://googledata.org/google-online-security/announcing-oss-fuzz-continuous-fuzzing-for-open-source-software-3/</link>
		<comments>https://googledata.org/google-online-security/announcing-oss-fuzz-continuous-fuzzing-for-open-source-software-3/#comments</comments>
		<pubDate>Thu, 01 Dec 2016 17:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=062a771bc89ade2a4a3bef1a59829dc4</guid>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Mike Aizatsky, Kostya Serebryany (Software Engineers, Dynamic Tools); Oliver Chang, Abhishek Arya (Security Engineers, Google Chrome); and Meredith Whittaker (Open Research Lead)</span><br /><i>[Cross-posted from the <a href="https://testing.googleblog.com/2016/12/announcing-oss-fuzz-continuous-fuzzing.html">Google Testing Blog</a>&nbsp;and the <a href="https://opensource.googleblog.com/2016/12/announcing-oss-fuzz-continuous-fuzzing.html">Google Open Source Blog</a>]</i><br /><br />We are happy to announce <a href="https://github.com/google/oss-fuzz">OSS-Fuzz</a>, a new Beta program developed over the past few years with the <a href="https://www.coreinfrastructure.org/">Core Infrastructure Initiative</a> community. This program will provide continuous fuzzing for select core open source software.<br /><br />Open source software is the backbone of the many apps, sites, services, and networked things that make up “the internet.” It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it.<br /><br /><a href="https://en.wikipedia.org/wiki/Heartbleed">Recent</a> <a href="https://en.wikipedia.org/wiki/Stagefright_(bug)">security</a> <a href="https://googlechromereleases.blogspot.com/2016/09/stable-channel-updates-for-chrome-os.html">stories</a> confirm that errors like <a href="https://en.wikipedia.org/wiki/Buffer_overflow">buffer overflow</a> and <a href="https://en.wikipedia.org/wiki/Dangling_pointer">use-after-free</a> can have serious, widespread consequences when they occur in critical open source software. These errors are not only serious, but notoriously difficult to find via routine code audits, even for experienced developers. That's where <a href="https://en.wikipedia.org/wiki/Fuzz_testing">fuzz testing</a> comes in. 
By generating random inputs to a given program, fuzzing triggers and helps uncover errors quickly and thoroughly.<br /><br />In recent years, several efficient general purpose fuzzing engines have been implemented (e.g. <a href="http://lcamtuf.coredump.cx/afl/">AFL</a> and <a href="http://libfuzzer.info/">libFuzzer</a>), and we use them to <a href="https://security.googleblog.com/2016/08/guided-in-process-fuzzing-of-chrome.html">fuzz various components of the Chrome browser</a>. These fuzzers, when combined with <a href="https://github.com/google/sanitizers">Sanitizers</a>, can help find security vulnerabilities (e.g. buffer overflows, use-after-free, bad casts, integer overflows, etc), stability bugs (e.g. null dereferences, memory leaks, out-of-memory, assertion failures, etc) and <a href="https://blog.fuzzing-project.org/31-Fuzzing-Math-miscalculations-in-OpenSSLs-BN_mod_exp-CVE-2015-3193.html">sometimes</a> even logical bugs.<br /><br />OSS-Fuzz’s goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution. OSS-Fuzz combines various fuzzing engines (initially, libFuzzer) with Sanitizers (initially, <a href="http://clang.llvm.org/docs/AddressSanitizer.html">AddressSanitizer</a>) and provides a massive distributed execution environment powered by <a href="https://github.com/google/oss-fuzz/blob/master/docs/clusterfuzz.md">ClusterFuzz</a>.<br /><b>Early successes</b><br /><b><br /></b>Our initial trials with OSS-Fuzz have had good results. An example is the <a href="https://www.freetype.org/">FreeType</a> library, which is used on over a <a href="https://en.wikipedia.org/wiki/FreeType#Users">billion devices</a> to display text (and which might even be rendering the characters you are reading now). It is important for FreeType to be stable and secure in an age when fonts are loaded over the Internet. 
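The random-input idea above can be sketched as a toy fuzzer. This Python fragment is illustrative only: a blind random loop against a deliberately buggy parser, whereas real engines such as libFuzzer and AFL are coverage-guided and vastly more effective.

```python
import random

def toy_parser(data: bytes) -> int:
    """Deliberately buggy target: reads data[1] without checking the length."""
    if data and data[0] == ord("F"):
        return data[1]  # IndexError when the input is exactly b"F"
    return 0

def fuzz(target, iterations=100_000, seed=0):
    """Throw short random byte strings at the target; collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(6)))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # a crashing input is a reproducible bug report
    return crashes

crashes = fuzz(toy_parser)  # overwhelmingly likely to find the 1-byte input b"F"
```

Coverage guidance is what separates real fuzzing engines from this sketch: by mutating inputs that reach new code paths, they find deep bugs that blind random sampling would almost never hit.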
Werner Lemberg, one of the FreeType developers, <a href="https://savannah.nongnu.org/search/?type_of_search=bugs&amp;words=LibFuzzer&amp;offset=0&amp;max_rows=100#results">was</a> an early adopter of OSS-Fuzz. Recently, the <a href="http://git.savannah.gnu.org/cgit/freetype/freetype2.git/tree/src/tools/ftfuzzer/ftfuzzer.cc">FreeType fuzzer</a> found a <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=53">new heap buffer overflow</a> only a few hours after the source change:<br /><pre class="prettyprint">ERROR: AddressSanitizer: heap-buffer-overflow on address 0x615000000ffa <br />READ of size 2 at 0x615000000ffa thread T0<br />SCARINESS: 24 (2-byte-read-heap-buffer-overflow-far-from-bounds)<br />   #0 0x885e06 in tt_face_vary_cvt src/truetype/ttgxvar.c:1556:31<br /></pre>OSS-Fuzz automatically <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=53">notified</a> the maintainer, who <a href="http://git.savannah.gnu.org/cgit/freetype/freetype2.git/commit/?id=7eeaf986b5ebb43cbbc8368c6e528ac311d87805">fixed</a> the bug; then OSS-Fuzz automatically <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=53#c3">confirmed</a> the fix. All in one day! 
You can see the <a href="https://bugs.chromium.org/p/oss-fuzz/issues/list?can=1&amp;q=type=Bug-Security,Bug%20-component:Infra%20status:Fixed,Verified&amp;sort=-id&amp;colspec=ID%20Type%20Component%20Status%20Library%20Reported%20Owner%20Summary">full list</a> of fixed and disclosed bugs found by OSS-Fuzz so far.<br /><b><br /></b><b>Contributions and feedback are welcome</b><br /><b><br /></b>OSS-Fuzz has already found <a href="https://bugs.chromium.org/p/oss-fuzz/issues/list?can=1&amp;q=-component%3AInfra+-status%3ADuplicate%2CWontFix&amp;sort=-id&amp;colspec=ID+Type+Component+Status+Library+Reported+Owner+Summary&amp;cells=ids">150 bugs</a> in several widely used open source <a href="https://github.com/google/oss-fuzz/tree/master/projects">projects</a> (and churns through <b>~4 trillion test cases</b> a week). With your help, we can make fuzzing a standard part of open source development, and work with the broader community of developers and security testers to ensure that bugs in critical open source applications, libraries, and APIs are discovered and fixed. We believe that this approach to automated security testing will result in real improvements to the security and stability of open source software.<br /><br />OSS-Fuzz is launching in Beta right now, and will be accepting suggestions for candidate open source projects. In order for a project to be accepted to OSS-Fuzz, it needs to have a large user base and/or be critical to global IT infrastructure, a general heuristic that we are intentionally leaving open to interpretation at this early stage. 
See more details and instructions on how to apply <a href="https://github.com/google/oss-fuzz#accepting-new-projects">here</a>.<br /><br />Once a project is signed up for OSS-Fuzz, it is automatically subject to the 90-day disclosure deadline for newly reported bugs in our <a href="https://bugs.chromium.org/p/oss-fuzz/issues/list">tracker</a> (see details <a href="https://github.com/google/oss-fuzz#bug-disclosure-guidelines">here</a>). This matches industry’s <a href="https://googleprojectzero.blogspot.com/2015/02/feedback-and-data-driven-updates-to.html">best practices</a> and improves end-user security and stability by getting patches to users faster.<br /><br />Help us ensure this program truly serves the open source community and the internet, which relies on this critical software: contribute and leave your feedback on <a href="https://github.com/google/oss-fuzz">GitHub</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/announcing-oss-fuzz-continuous-fuzzing-for-open-source-software-3/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Pixel Security: Better, Faster, Stronger</title>
		<link>https://googledata.org/google-online-security/pixel-security-better-faster-stronger-2/</link>
		<comments>https://googledata.org/google-online-security/pixel-security-better-faster-stronger-2/#comments</comments>
		<pubDate>Thu, 17 Nov 2016 21:56:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=f7dcf2b150ceac41c3ee139110ded1b3</guid>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Paul Crowley, Senior Software Engineer and Paul Lawrence, Senior Software Engineer</span><br /><i>[Cross-posted from the <a href="http://android-developers.blogspot.com/2016/11/pixel-security-better-faster-stronger.html">Android Developers Blog</a>]</i><br /><br />Encryption protects your data if your phone falls into someone else's hands. The new Google Pixel and Pixel XL are encrypted by default to offer strong data protection, while maintaining a great user experience with high I/O performance and long battery life. In addition to encryption, the Pixel phones debuted running the Android Nougat release, which has even more <a href="http://android-developers.blogspot.com/2016/09/security-enhancements-in-nougat.html">security improvements</a>.<br /><br />This blog post covers the encryption implementation on Google Pixel devices and how it improves the user experience, performance, and security of the device.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-xJ9tGc-VGTw/WC4nR_Y3TXI/AAAAAAAAAYw/_nN5zaIcNRYXgr_UWkEHmoIE2x-KUImYgCLcB/s1600/pixel%2Bpic.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="384" src="https://2.bp.blogspot.com/-xJ9tGc-VGTw/WC4nR_Y3TXI/AAAAAAAAAYw/_nN5zaIcNRYXgr_UWkEHmoIE2x-KUImYgCLcB/s640/pixel%2Bpic.jpg" width="640" /></a></div><b>File-Based Encryption Direct Boot experience</b><br />One of the security features introduced in Android Nougat was <a href="https://source.android.com/security/encryption/file-based.html">file-based encryption</a>. File-based encryption (FBE) means different files are encrypted with different keys that can be unlocked independently. 
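The one-key-per-file idea can be illustrated with a toy derivation. This Python sketch uses names and a KDF chosen purely for illustration; Android's actual ext4 encryption derives and holds per-file keys inside the kernel, not like this.

```python
import hashlib
import hmac
import os

def derive_file_key(master_key: bytes, file_nonce: bytes) -> bytes:
    """Toy KDF: each file's key is HMAC-SHA256(master key, that file's nonce).
    Illustrative only; this is not the KDF ext4 encryption actually uses."""
    return hmac.new(master_key, file_nonce, hashlib.sha256).digest()

master = os.urandom(32)
nonce_a, nonce_b = os.urandom(16), os.urandom(16)

key_a = derive_file_key(master, nonce_a)
key_b = derive_file_key(master, nonce_b)

# Distinct per-file nonces give each file an independent key, so keys can be
# released (or withheld) for individual files without affecting the others.
assert key_a != key_b
```

Unlocking a set of files then amounts to releasing the corresponding master key, which is what lets some data become available immediately at boot while the rest stays locked until the user authenticates.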
FBE also separates data into device encrypted (DE) data and credential encrypted (CE) data.<br /><br /><a href="https://developer.android.com/training/articles/direct-boot.html">Direct boot</a> uses file-based encryption to allow a seamless user experience when a device reboots by combining the unlock and decrypt screen. For users, this means that applications like alarm clocks, accessibility settings, and phone calls are available immediately after boot.<br /><b><br /></b><b>Enhanced with TrustZone® security</b><br /><b><br /></b>Modern processors provide a means to execute code in a mode that remains secure even if the kernel is compromised. On ARM®-based processors this mode is known as TrustZone. Starting in Android Nougat, all disk encryption keys are stored encrypted with keys held by TrustZone software.<br /><br />This secures encrypted data in two ways:<br /><br /><ul><li>TrustZone enforces the <a href="https://source.android.com/security/verifiedboot/">Verified Boot</a> process. If TrustZone detects that the operating system has been modified, it won't decrypt disk encryption keys; this helps to secure device encrypted (DE) data.</li><li>TrustZone enforces a waiting period between guesses at the user credential, which gets longer after a sequence of wrong guesses. With 1624 valid four-point patterns and TrustZone's ever-growing waiting period, trying all patterns would take more than four years. This improves security for all users, especially those who have a shorter and more easily guessed pattern, PIN, or password.</li></ul><div><br /></div><div><b>Encryption on Pixel phones</b></div><div><br /></div><div>Protecting different folders with different keys required a distinct approach from <a href="http://source.android.com/security/encryption/full-disk.html">full-disk encryption</a> (FDE). The natural choice for Linux-based systems is the industry-standard eCryptFS. However, eCryptFS didn't meet our performance requirements. 
Fortunately, one of the eCryptFS creators, Michael Halcrow, worked with the ext4 maintainer, Ted Ts'o, to add encryption natively to ext4, and Android became the first consumer of this technology. ext4 encryption performance is similar to full-disk encryption, which is as performant as a software-only solution can be.</div><div><br /></div><div><br /></div><div>Additionally, Pixel phones have an inline hardware encryption engine, which gives them the ability to write encrypted data at line speed to the flash memory. To take advantage of this, we modified ext4 encryption to use this hardware by adding a key reference to the bio structure within the ext4 driver before passing it to the block layer. (The bio structure is the basic container for block I/O in the Linux kernel.) We then modified the inline encryption block driver to pass this to the hardware. As with ext4 encryption, keys are managed by the Linux keyring. To see our implementation, take a look at the <a href="https://android.googlesource.com/kernel/msm/+/android-msm-marlin-3.18-nougat-dr1/fs/ext4/crypto_key.c">source code</a> for the Pixel kernel.</div><div><br /></div><div><br /></div><div>While this specific implementation of file-based encryption using ext4 with inline encryption benefits Pixel users, FBE is available in AOSP and ready to use, along with the other features mentioned in this post.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/pixel-security-better-faster-stronger-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHA-1 Certificates in Chrome</title>
		<link>https://googledata.org/google-online-security/sha-1-certificates-in-chrome/</link>
		<comments>https://googledata.org/google-online-security/sha-1-certificates-in-chrome/#comments</comments>
		<pubDate>Thu, 17 Nov 2016 00:08:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=ed252fd1f60d854ee22273b93cc41f3f</guid>
		<description><![CDATA[<span>Posted by Andrew Whalley, Chrome Security</span><br />We&#8217;ve previously made <a href="https://security.googleblog.com/2014/09/gradually-sunsetting-sha-1.html">several</a> <a href="https://security.googleblog.com/2015/12/an-update-on-sha-1-certificates-in.html">announcements</a> about Google Chrome's deprecation plans for SHA-1 certificates. This post provides an update on the final removal of support.<br /><br />The SHA-1 cryptographic hash algorithm first <a href="https://www.schneier.com/blog/archives/2005/02/cryptanalysis_o.html">showed signs of weakness</a> over eleven years ago and <a href="https://eprint.iacr.org/2015/967">recent research</a> points to the imminent possibility of attacks that could directly impact the integrity of the Web PKI. To protect users from such attacks, Chrome will stop trusting certificates that use the SHA-1 algorithm, and visiting a site using such a certificate will result in an interstitial warning.<br /><b>Release schedule</b><br />We are planning to remove support for SHA-1 certificates in Chrome 56, which will be released to the stable channel <a href="https://www.chromium.org/developers/calendar">around the end of January 2017</a>. The removal will follow the <a href="https://www.chromium.org/getting-involved/dev-channel">Chrome release process</a>, moving from Dev to Beta to Stable; there won't be a date-based change in behaviour.<br /><br />Website operators are urged <a href="https://www.ssllabs.com/ssltest/">to check</a> for the use of SHA-1 certificates and immediately contact their CA for a SHA-256 based replacement if any are found.<br /><b>SHA-1 use in private PKIs</b><br />Previous posts made a distinction between certificates which chain to a public CA and those which chain to a locally installed trust anchor, such as those of a private PKI within an enterprise. 
We recognise there might be rare cases where an enterprise wishes to make their own risk management decision to continue using SHA-1 certificates.<br /><br />Starting with <a href="https://googlechromereleases.blogspot.com/2016/10/stable-channel-update-for-desktop.html">Chrome 54</a> we provide the <a href="https://www.chromium.org/administrators/policy-list-3#EnableSha1ForLocalAnchors"><span>EnableSha1ForLocalAnchors</span></a> <a href="https://support.google.com/chrome/a/answer/187202">policy</a> that allows certificates which chain to a locally installed trust anchor to be used after support has otherwise been removed from Chrome. Features which <a href="https://sites.google.com/a/chromium.org/dev/Home/chromium-security/deprecating-powerful-features-on-insecure-origins">require a secure origin</a>, such as geolocation, will continue to work; however, pages will be displayed as &#8220;neutral, lacking security&#8221;. Without this policy set, SHA-1 certificates that chain to locally installed roots will not be trusted starting with Chrome 57, which will be released to the stable channel in March 2017. Note that even without the policy set, SHA-1 client certificates will still be presented to websites requesting client authentication.<br /><br />Since this policy is intended only to allow additional time to complete the migration away from SHA-1, it will eventually be removed in the first Chrome release after January 1st 2019.<br /><br />As Chrome makes use of certificate validation libraries provided by the host OS when possible, this option will have no effect if the underlying cryptographic library disables support for SHA-1 certificates; at that point, they will be unconditionally blocked. We may also remove support before 2019 if there is a serious cryptographic break of SHA-1. Enterprises are encouraged to make every effort to stop using SHA-1 certificates as soon as possible and to consult with their security team before enabling the policy.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Andrew Whalley, Chrome Security</span><br />We’ve previously made <a href="https://security.googleblog.com/2014/09/gradually-sunsetting-sha-1.html">several</a> <a href="https://security.googleblog.com/2015/12/an-update-on-sha-1-certificates-in.html">announcements</a> about Google Chrome's deprecation plans for SHA-1 certificates. This post provides an update on the final removal of support.<br /><br />The SHA-1 cryptographic hash algorithm first <a href="https://www.schneier.com/blog/archives/2005/02/cryptanalysis_o.html">showed signs of weakness</a> over eleven years ago and <a href="https://eprint.iacr.org/2015/967">recent research</a> points to the imminent possibility of attacks that could directly impact the integrity of the Web PKI. To protect users from such attacks, Chrome will stop trusting certificates that use the SHA-1 algorithm, and visiting a site using such a certificate will result in an interstitial warning.<br /><b>Release schedule</b><br />We are planning to remove support for SHA-1 certificates in Chrome 56, which will be released to the stable channel <a href="https://www.chromium.org/developers/calendar">around the end of January 2017</a>. The removal will follow the <a href="https://www.chromium.org/getting-involved/dev-channel">Chrome release process</a>, moving from Dev to Beta to Stable; there won't be a date-based change in behaviour.<br /><br />Website operators are urged <a href="https://www.ssllabs.com/ssltest/">to check</a> for the use of SHA-1 certificates and immediately contact their CA for a SHA-256 based replacement if any are found.<br /><b>SHA-1 use in private PKIs</b><br />Previous posts made a distinction between certificates which chain to a public CA and those which chain to a locally installed trust anchor, such as those of a private PKI within an enterprise. 
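Whichever category a certificate falls into, its signature algorithm can also be spot-checked from the command line with OpenSSL, as an alternative to the SSL Labs test linked above (example.com below is a placeholder host):

```shell
# Print the signature algorithm of the certificate a server presents.
# example.com is a placeholder; substitute the domain you want to check.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -m1 'Signature Algorithm'
```

A SHA-1 certificate reports an algorithm such as sha1WithRSAEncryption or ecdsa-with-SHA1, while a SHA-256 replacement reports, for example, sha256WithRSAEncryption.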
We recognise there might be rare cases where an enterprise wishes to make their own risk management decision to continue using SHA-1 certificates.<br /><br />Starting with <a href="https://googlechromereleases.blogspot.com/2016/10/stable-channel-update-for-desktop.html">Chrome 54</a> we provide the <a href="https://www.chromium.org/administrators/policy-list-3#EnableSha1ForLocalAnchors"><span style="font-family: Courier New, Courier, monospace;">EnableSha1ForLocalAnchors</span></a> <a href="https://support.google.com/chrome/a/answer/187202">policy</a> that allows certificates which chain to a locally installed trust anchor to be used after support has otherwise been removed from Chrome. Features which <a href="https://sites.google.com/a/chromium.org/dev/Home/chromium-security/deprecating-powerful-features-on-insecure-origins">require a secure origin</a>, such as geolocation, will continue to work; however, pages will be displayed as “neutral, lacking security”. Without this policy set, SHA-1 certificates that chain to locally installed roots will not be trusted starting with Chrome 57, which will be released to the stable channel in March 2017. Note that even without the policy set, SHA-1 client certificates will still be presented to websites requesting client authentication.<br /><br />Since this policy is intended only to allow additional time to complete the migration away from SHA-1, it will eventually be removed in the first Chrome release after January 1st 2019.<br /><br />As Chrome makes use of certificate validation libraries provided by the host OS when possible, this option will have no effect if the underlying cryptographic library disables support for SHA-1 certificates; at that point, they will be unconditionally blocked. We may also remove support before 2019 if there is a serious cryptographic break of SHA-1. 
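For enterprises that do enable the policy temporarily, Chrome policies are delivered through each platform's management mechanism (Group Policy or the registry on Windows, managed preferences on macOS). As an illustrative example of that mechanism, on Linux a JSON file dropped under /etc/opt/chrome/policies/managed/ (the file name itself is arbitrary) can set the policy:

```json
{ "EnableSha1ForLocalAnchors": true }
```

After placing the file, applied policies can be verified on Chrome's chrome://policy page.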
Enterprises are encouraged to make every effort to stop using SHA-1 certificates as soon as possible and to consult with their security team before enabling the policy.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/sha-1-certificates-in-chrome/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A new site for Safe Browsing</title>
		<link>https://googledata.org/google-online-security/a-new-site-for-safe-browsing/</link>
		<comments>https://googledata.org/google-online-security/a-new-site-for-safe-browsing/#comments</comments>
		<pubDate>Wed, 09 Nov 2016 18:37:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=c61b4b133c93d38e63b148b62edf25a2</guid>
		<description><![CDATA[<span>Posted by Mike Castner and Brooke Heinichen, Safe Browsing Team</span><br />Since <a href="https://security.googleblog.com/2007/05/introducing-googles-anti-malware.html">launching in 2007</a>, the Safe Browsing team has been dedicated to our mission of protecting users from phishing, malware, and unwanted software on the web. Our coverage currently extends to more than two billion internet-connected devices, including <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">Chrome users on Android</a>. As part of our commitment to keep our users both protected and informed, we&#8217;ve recently launched <a href="https://security.googleblog.com/2015/10/behind-red-warning-more-info-about.html">several</a> <a href="https://security.googleblog.com/2016/09/more-safe-browsing-help-for-webmasters.html">improvements</a> to the way we share information.<br /><br />Today, we&#8217;re happy to announce <a href="https://safebrowsing.google.com/">a new site for Safe Browsing</a> that makes it easier for users to quickly report malicious sites, access our developer documentation, and find our policies. Our new site also serves as a central hub for our tools, including the <a href="https://www.google.com/transparencyreport/safebrowsing/">Transparency Report</a>, <a href="https://www.google.com/webmasters/tools/home?pli=1">Search Console</a>, and <a href="https://www.google.com/safebrowsing/alerts/">Safe Browsing Alerts</a> for Network Administrators.<br /><br />The new <a href="https://safebrowsing.google.com/">Safe Browsing website</a> will be a platform for consolidated policy and help content. We&#8217;re excited to make this new, single source of information available to users, developers, and webmasters.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Mike Castner and Brooke Heinichen, Safe Browsing Team</span><br />Since <a href="https://security.googleblog.com/2007/05/introducing-googles-anti-malware.html">launching in 2007</a>, the Safe Browsing team has been dedicated to our mission of protecting users from phishing, malware, and unwanted software on the web. Our coverage currently extends to more than two billion internet-connected devices, including <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">Chrome users on Android</a>. As part of our commitment to keep our users both protected and informed, we’ve recently launched <a href="https://security.googleblog.com/2015/10/behind-red-warning-more-info-about.html">several</a> <a href="https://security.googleblog.com/2016/09/more-safe-browsing-help-for-webmasters.html">improvements</a> to the way we share information.<br /><br />Today, we’re happy to announce <a href="https://safebrowsing.google.com/">a new site for Safe Browsing</a> that makes it easier for users to quickly report malicious sites, access our developer documentation, and find our policies. Our new site also serves as a central hub for our tools, including the <a href="https://www.google.com/transparencyreport/safebrowsing/">Transparency Report</a>, <a href="https://www.google.com/webmasters/tools/home?pli=1">Search Console</a>, and <a href="https://www.google.com/safebrowsing/alerts/">Safe Browsing Alerts</a> for Network Administrators.<br /><br />The new <a href="https://safebrowsing.google.com/">Safe Browsing website</a> will be a platform for consolidated policy and help content. We’re excited to make this new, single source of information available to users, developers, and webmasters.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/a-new-site-for-safe-browsing/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Protecting users from repeatedly dangerous sites</title>
		<link>https://googledata.org/google-online-security/protecting-users-from-repeatedly-dangerous-sites/</link>
		<comments>https://googledata.org/google-online-security/protecting-users-from-repeatedly-dangerous-sites/#comments</comments>
		<pubDate>Tue, 08 Nov 2016 22:22:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e382a87f01c328c045326e2dc0509eea</guid>
		<description><![CDATA[<span>Posted by Brooke Heinichen, Safe Browsing Team</span><br /><br />Since 2005, Safe Browsing has been protecting users from harm on the Internet, and has evolved over the years to adapt to the changing nature of threats and user harm.<br /><br />Today, sites in violation of Google&#8217;s <a href="https://support.google.com/webmasters/answer/3258249?hl=en&#38;ref_topic=4596795">Malware</a>, <a href="https://www.google.com/about/company/unwanted-software-policy.html">Unwanted Software</a>, <a href="https://support.google.com/webmasters/answer/6350487?hl=en">Phishing, and Social Engineering Policies</a> show warnings until Google verifies that the site is no longer harmful. The verification can be triggered automatically, or at the request of the webmaster via the <a href="https://developers.google.com/webmasters/hacked/docs/request_review?rd=1">Search Console</a>.<br /><br />However, over time, we&#8217;ve observed that a small number of websites will cease harming users for long enough to have the warnings removed, and will then revert to harmful activity.<br /><br />As a result of this gap in user protection, we have adjusted our policies to reduce risks borne by end-users. Starting today, Safe Browsing will begin to classify these types of sites as &#8220;<a href="https://support.google.com/webmasters/answer/7104997">Repeat Offenders</a>.&#8221; With regard to Safe Browsing-related policies, Repeat Offenders are websites that repeatedly switch between compliant and policy-violating behavior for the purpose of having a successful review and having warnings removed. 
Please note that websites that are hacked will not be classified as Repeat Offenders; only sites that purposefully post harmful content will be subject to the policy.<br /><br />Once Safe Browsing has determined that a site is a Repeat Offender, the webmaster will be unable to request additional reviews via the Search Console for 30 days, and warnings will continue to show to users. When a site is established as a Repeat Offender, the webmaster will be notified via email to their registered Search Console email address.<br /><br />We continuously update our policies and practices to address evolving threats. This is yet another change to help protect users from harm online.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Brooke Heinichen, Safe Browsing Team</span><br /><br />Since 2005, Safe Browsing has been protecting users from harm on the Internet, and has evolved over the years to adapt to the changing nature of threats and user harm.<br /><br />Today, sites in violation of Google’s <a href="https://support.google.com/webmasters/answer/3258249?hl=en&amp;ref_topic=4596795">Malware</a>, <a href="https://www.google.com/about/company/unwanted-software-policy.html">Unwanted Software</a>, <a href="https://support.google.com/webmasters/answer/6350487?hl=en">Phishing, and Social Engineering Policies</a> show warnings until Google verifies that the site is no longer harmful. The verification can be triggered automatically, or at the request of the webmaster via the <a href="https://developers.google.com/webmasters/hacked/docs/request_review?rd=1">Search Console</a>.<br /><br />However, over time, we’ve observed that a small number of websites will cease harming users for long enough to have the warnings removed, and will then revert to harmful activity.<br /><br />As a result of this gap in user protection, we have adjusted our policies to reduce risks borne by end-users. Starting today, Safe Browsing will begin to classify these types of sites as “<a href="https://support.google.com/webmasters/answer/7104997">Repeat Offenders</a>.” With regard to Safe Browsing-related policies, Repeat Offenders are websites that repeatedly switch between compliant and policy-violating behavior for the purpose of having a successful review and having warnings removed. 
Please note that websites that are hacked will not be classified as Repeat Offenders; only sites that purposefully post harmful content will be subject to the policy.<br /><br />Once Safe Browsing has determined that a site is a Repeat Offender, the webmaster will be unable to request additional reviews via the Search Console for 30 days, and warnings will continue to show to users. When a site is established as a Repeat Offender, the webmaster will be notified via email to their registered Search Console email address.<br /><br />We continuously update our policies and practices to address evolving threats. This is yet another change to help protect users from harm online.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/protecting-users-from-repeatedly-dangerous-sites/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Here’s to more HTTPS on the web!</title>
		<link>https://googledata.org/google-online-security/heres-to-more-https-on-the-web/</link>
		<comments>https://googledata.org/google-online-security/heres-to-more-https-on-the-web/#comments</comments>
		<pubDate>Fri, 04 Nov 2016 01:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=0dd31a2a02f52aea45cbe45924ab3e80</guid>
		<description><![CDATA[<span>Posted by Adrienne Porter Felt and Emily Schechter, Chrome Security Team</span><br />Security has always been critical to the web, but challenges involved in site migration have inhibited HTTPS adoption for several years. In the interest of a safer web for all, at Google we&#8217;ve worked alongside many others across the online ecosystem to better understand and address these challenges, resulting in real change. A web with ubiquitous HTTPS is not the distant future. It&#8217;s happening now, with secure browsing becoming standard for users of Chrome.<br /><br />Today, we&#8217;re adding a <a href="https://www.google.com/transparencyreport/https/metrics/?hl=en">new section to the HTTPS Report Card in our Transparency Report</a> that includes data about how HTTPS usage has been increasing over time. More than half of pages loaded and two-thirds of total time spent by Chrome desktop users occur via HTTPS, and we expect these metrics to continue their strong upward trajectory.<br /><div><a href="https://3.bp.blogspot.com/-3MYXgqBv7NA/WBvDbfmK8uI/AAAAAAAAAYY/XUNpxG4tQnUdtIR0veJD-6rxApS6rgS-QCLcB/s1600/chart.png"><img border="0" height="436" src="https://3.bp.blogspot.com/-3MYXgqBv7NA/WBvDbfmK8uI/AAAAAAAAAYY/XUNpxG4tQnUdtIR0veJD-6rxApS6rgS-QCLcB/s640/chart.png" width="640"></a></div><div>Percentage of pages loaded over HTTPS in Chrome</div><div><br /></div><div><br /></div><div><br /></div><div>As the remainder of the web transitions to HTTPS, we&#8217;ll continue working to ensure that migrating to HTTPS is a no-brainer, providing business benefit beyond increased security. 
HTTPS currently enables the <a href="https://istlsfastyet.com/">best</a> <a href="https://blog.chromium.org/2013/11/making-web-faster-with-spdy-and-http2.html">performance</a> the web offers and powerful features that <a href="https://developers.google.com/web/showcase/">benefit</a> site conversions, including both new features such as <a href="https://developers.google.com/web/fundamentals/getting-started/primers/service-workers">service workers</a> for offline support and <a href="https://www.mobify.com/insights/google-web-push-notifications-case-study/">web push notifications</a>, and existing features such as <a href="https://developers.google.com/web/updates/2015/06/checkout-faster-with-autofill">credit card autofill</a> and the <a href="https://developers.google.com/web/updates/2016/04/geolocation-on-secure-contexts-only">HTML5 geolocation API</a> that are <a href="https://www.chromium.org/Home/chromium-security/deprecating-powerful-features-on-insecure-origins">too powerful to be used</a> over non-secure HTTP. As with all major site migrations, there are certain steps webmasters should take to ensure that search ranking transitions are smooth when moving to HTTPS. To help with this, we&#8217;ve posted <a href="https://plus.google.com/+JohnMueller/posts/PY1xCWbeDVC">two</a> <a href="https://plus.google.com/+GoogleWebmasters/posts/eYmUYvNNT5J">FAQs</a> to help sites transition correctly, and will continue to improve our <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/why-https">web fundamentals guidance</a>.</div><div><br /></div><div><br /></div><div>We&#8217;ve seen many sites successfully transition with negligible effect on their search ranking and traffic. Brian Wood, Director of Marketing SEO at Wayfair, a large retail site, commented:&#160;&#8220;We were able to migrate Wayfair.com to HTTPS with no meaningful impact to Google rankings or Google organic search traffic. 
We are very pleased to say that all Wayfair sites are now fully HTTPS.&#8221; CNET, a large tech news site, had a similar experience: &#8220;We successfully completed our move of CNET.com to HTTPS last month,&#8221; said John Sherwood, Vice President of Engineering &#38; Technology at CNET. &#8220;Since then, there has been no change in our Google rankings or Google organic search traffic.&#8221;</div><div><br /></div><div><br /></div><div>Webmasters that include ads on their sites also should carefully monitor ad performance and revenue during large site migrations. The portion of Google ad traffic served over HTTPS has <a href="https://g.co/https">increased dramatically</a> over the past 3 years. All ads that come from any Google source always support HTTPS, including AdWords, AdSense, or DoubleClick Ad Exchange; ads sold directly, such as those through DoubleClick for Publishers, still need to be designed to be HTTPS-friendly. This means there will be no change to the Google-sourced ads that appear on a site after migrating to HTTPS. Many publishing partners have seen this in practice after a successful HTTPS transition. Jason Tollestrup, Director of Programmatic Advertising for the <a href="https://developer.washingtonpost.com/pb/blog/post/2015/12/10/moving-the-washington-post-to-https/">Washington Post</a>, &#8220;saw no material impact to AdX revenue with the transition to SSL.&#8221;</div><div><br /></div><div><br /></div><div>As migrating to HTTPS becomes even easier, <a href="https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html">we&#8217;ll continue</a> working towards a web that&#8217;s secure by default. Don&#8217;t hesitate to start planning your HTTPS migration today!</div><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Adrienne Porter Felt and Emily Schechter, Chrome Security Team</span><br />Security has always been critical to the web, but challenges involved in site migration have inhibited HTTPS adoption for several years. In the interest of a safer web for all, at Google we’ve worked alongside many others across the online ecosystem to better understand and address these challenges, resulting in real change. A web with ubiquitous HTTPS is not the distant future. It’s happening now, with secure browsing becoming standard for users of Chrome.<br /><br />Today, we’re adding a <a href="https://www.google.com/transparencyreport/https/metrics/?hl=en">new section to the HTTPS Report Card in our Transparency Report</a> that includes data about how HTTPS usage has been increasing over time. More than half of pages loaded and two-thirds of total time spent by Chrome desktop users occur via HTTPS, and we expect these metrics to continue their strong upward trajectory.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-3MYXgqBv7NA/WBvDbfmK8uI/AAAAAAAAAYY/XUNpxG4tQnUdtIR0veJD-6rxApS6rgS-QCLcB/s1600/chart.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="436" src="https://3.bp.blogspot.com/-3MYXgqBv7NA/WBvDbfmK8uI/AAAAAAAAAYY/XUNpxG4tQnUdtIR0veJD-6rxApS6rgS-QCLcB/s640/chart.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;">Percentage of pages loaded over HTTPS in Chrome</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">As the remainder of the web transitions to HTTPS, we’ll continue working to ensure that migrating to HTTPS is a no-brainer, providing 
business benefit beyond increased security. HTTPS currently enables the <a href="https://istlsfastyet.com/">best</a> <a href="https://blog.chromium.org/2013/11/making-web-faster-with-spdy-and-http2.html">performance</a> the web offers and powerful features that <a href="https://developers.google.com/web/showcase/">benefit</a> site conversions, including both new features such as <a href="https://developers.google.com/web/fundamentals/getting-started/primers/service-workers">service workers</a> for offline support and <a href="https://www.mobify.com/insights/google-web-push-notifications-case-study/">web push notifications</a>, and existing features such as <a href="https://developers.google.com/web/updates/2015/06/checkout-faster-with-autofill">credit card autofill</a> and the <a href="https://developers.google.com/web/updates/2016/04/geolocation-on-secure-contexts-only">HTML5 geolocation API</a> that are <a href="https://www.chromium.org/Home/chromium-security/deprecating-powerful-features-on-insecure-origins">too powerful to be used</a> over non-secure HTTP. As with all major site migrations, there are certain steps webmasters should take to ensure that search ranking transitions are smooth when moving to HTTPS. To help with this, we’ve posted <a href="https://plus.google.com/+JohnMueller/posts/PY1xCWbeDVC">two</a> <a href="https://plus.google.com/+GoogleWebmasters/posts/eYmUYvNNT5J">FAQs</a> to help sites transition correctly, and will continue to improve our <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/why-https">web fundamentals guidance</a>.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We’ve seen many sites successfully transition with negligible effect on their search ranking and traffic. 
Brian Wood, Director of Marketing SEO at Wayfair, a large retail site, commented:&nbsp;“We were able to migrate Wayfair.com to HTTPS with no meaningful impact to Google rankings or Google organic search traffic. We are very pleased to say that all Wayfair sites are now fully HTTPS.” CNET, a large tech news site, had a similar experience: “We successfully completed our move of CNET.com to HTTPS last month,” said John Sherwood, Vice President of Engineering &amp; Technology at CNET. “Since then, there has been no change in our Google rankings or Google organic search traffic.”</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Webmasters who include ads on their sites should also carefully monitor ad performance and revenue during large site migrations. The portion of Google ad traffic served over HTTPS has <a href="https://g.co/https">increased dramatically</a> over the past 3 years. All ads that come from any Google source always support HTTPS, including AdWords, AdSense, and DoubleClick Ad Exchange; ads sold directly, such as those through DoubleClick for Publishers, still need to be designed to be HTTPS-friendly. This means there will be no change to the Google-sourced ads that appear on a site after migrating to HTTPS. Many publishing partners have seen this in practice after a successful HTTPS transition. 
Jason Tollestrup, Director of Programmatic Advertising for the <a href="https://developer.washingtonpost.com/pb/blog/post/2015/12/10/moving-the-washington-post-to-https/">Washington Post</a>, “saw no material impact to AdX revenue with the transition to SSL.”</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">As migrating to HTTPS becomes even easier, <a href="https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html">we’ll continue</a> working towards a web that’s secure by default. Don’t hesitate to start planning your HTTPS migration today!</div><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/heres-to-more-https-on-the-web/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Distrusting WoSign and StartCom Certificates</title>
		<link>https://googledata.org/google-online-security/distrusting-wosign-and-startcom-certificates/</link>
		<comments>https://googledata.org/google-online-security/distrusting-wosign-and-startcom-certificates/#comments</comments>
		<pubDate>Mon, 31 Oct 2016 23:14:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=1953542394e890f0277993c3c8ec42b9</guid>
		<description><![CDATA[<span>Posted by Andrew Whalley, Chrome Security</span><br />Certificate Authorities (CAs) play a key role in web security by issuing digital certificates to website operators. These certificates are trusted by browsers to authenticate secure connections to websites. CAs who issue certificates outside the policies required by browsers and industry bodies can put the security and privacy of every web user at risk.<br /><br />Google has determined that two CAs, WoSign and StartCom, have not maintained the high standards expected of CAs and will no longer be trusted by Google Chrome, in accordance with our <a href="https://www.chromium.org/Home/chromium-security/root-ca-policy#TOC-Removal-of-Trust">Root Certificate Policy</a>. This view is similar to the recent announcements by the root certificate programs of both <a href="https://support.apple.com/en-us/HT204132">Apple</a> and <a href="https://blog.mozilla.org/security/2016/10/24/distrusting-new-wosign-and-startcom-certificates/">Mozilla</a>. The rest of this post provides background to that decision and how we plan to minimize disruption while still protecting users.<br /><b><br /></b><b>Background</b><br /><br />On August 17, 2016, Google was notified by GitHub's security team that WoSign had issued a certificate for one of GitHub's domains without their authorization. This prompted an investigation, conducted in public as a collaboration with Mozilla and the security community, which found a number of other cases of <a href="https://wiki.mozilla.org/CA:WoSign_Issues">WoSign misissuance</a>.<br /><br />The investigation concluded that WoSign knowingly and intentionally misissued certificates in order to circumvent browser restrictions and CA requirements. Further, it determined that StartCom, another CA, had been purchased by WoSign, and had replaced infrastructure, staff, policies, and issuance systems with WoSign's. 
When presented with this evidence, WoSign and StartCom management actively attempted to mislead the browser community about the acquisition and the relationship of these two companies. For both CAs, we have concluded there is a pattern of issues and incidents that indicate an approach to security that is not in concordance with the responsibilities of a publicly trusted CA.<br /><b>Action</b><br /><br />Beginning with Chrome 56, certificates issued by WoSign and StartCom after October 21, 2016 00:00:00 UTC will not be trusted. Certificates issued before this date may continue to be trusted, for a time, if they comply with the <a href="https://www.chromium.org/Home/chromium-security/root-ca-policy/CTPolicyMay2016edition.pdf?attredirects=0">Certificate Transparency in Chrome</a> policy or are issued to a limited set of domains known to be customers of WoSign and StartCom.<br /><br />Due to a number of technical limitations and concerns, Google Chrome is unable to trust all pre-existing certificates while ensuring our users are sufficiently protected from further misissuance. As a result of these changes, customers of WoSign and StartCom may find their certificates no longer work in Chrome 56.<br /><br />In subsequent Chrome releases, these exceptions will be reduced and ultimately removed, culminating in the full distrust of these CAs. This staged approach is solely to ensure sites have the opportunity to transition to other Certificate Authorities that are still trusted in Google Chrome, thus minimizing disruption to users of these sites. Sites that find themselves on this whitelist will be able to request early removal once they&#8217;ve transitioned to new certificates. Any attempt by WoSign or StartCom to circumvent these controls will result in immediate and complete removal of trust.<br /><br />We remain committed to ensuring the safety and privacy of Google Chrome users. 
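The staged trust rules above can be paraphrased as a small decision procedure. The sketch below is our own reading of the announced Chrome 56 policy, not Chrome's actual code; the function and parameter names are illustrative:

```python
from datetime import datetime, timezone

# Cutoff stated in the policy: certificates issued after this instant
# by WoSign or StartCom are distrusted outright.
DISTRUST_CUTOFF = datetime(2016, 10, 21, 0, 0, 0, tzinfo=timezone.utc)

def is_trusted(issued_at: datetime, ct_compliant: bool,
               whitelisted_domain: bool) -> bool:
    """Post-cutoff issuance is never trusted; pre-cutoff certificates are
    trusted (for a time) only if they comply with the Certificate
    Transparency in Chrome policy or were issued to a whitelisted
    customer domain."""
    if issued_at > DISTRUST_CUTOFF:
        return False
    return ct_compliant or whitelisted_domain
```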
We appreciate the impact to users visiting sites with affected certificates and to the operators who run these sites, but the nature of these incidents, and the need to protect our users, prevent us from being able to take less disruptive steps.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Andrew Whalley, Chrome Security</span><br />Certificate Authorities (CAs) play a key role in web security by issuing digital certificates to website operators. These certificates are trusted by browsers to authenticate secure connections to websites. CAs who issue certificates outside the policies required by browsers and industry bodies can put the security and privacy of every web user at risk.<br /><br />Google has determined that two CAs, WoSign and StartCom, have not maintained the high standards expected of CAs and will no longer be trusted by Google Chrome, in accordance with our <a href="https://www.chromium.org/Home/chromium-security/root-ca-policy#TOC-Removal-of-Trust">Root Certificate Policy</a>. This view is similar to the recent announcements by the root certificate programs of both <a href="https://support.apple.com/en-us/HT204132">Apple</a> and <a href="https://blog.mozilla.org/security/2016/10/24/distrusting-new-wosign-and-startcom-certificates/">Mozilla</a>. The rest of this post provides background to that decision and how we plan to minimize disruption while still protecting users.<br /><b><br /></b><b>Background</b><br /><br />On August 17, 2016, Google was notified by GitHub's security team that WoSign had issued a certificate for one of GitHub's domains without their authorization. This prompted an investigation, conducted in public as a collaboration with Mozilla and the security community, which found a number of other cases of <a href="https://wiki.mozilla.org/CA:WoSign_Issues">WoSign misissuance</a>.<br /><br />The investigation concluded that WoSign knowingly and intentionally misissued certificates in order to circumvent browser restrictions and CA requirements. Further, it determined that StartCom, another CA, had been purchased by WoSign, and had replaced infrastructure, staff, policies, and issuance systems with WoSign's. 
When presented with this evidence, WoSign and StartCom management actively attempted to mislead the browser community about the acquisition and the relationship of these two companies. For both CAs, we have concluded there is a pattern of issues and incidents that indicate an approach to security that is not in concordance with the responsibilities of a publicly trusted CA.<br /><b>Action</b><br /><br />Beginning with Chrome 56, certificates issued by WoSign and StartCom after October 21, 2016 00:00:00 UTC will not be trusted. Certificates issued before this date may continue to be trusted, for a time, if they comply with the <a href="https://www.chromium.org/Home/chromium-security/root-ca-policy/CTPolicyMay2016edition.pdf?attredirects=0">Certificate Transparency in Chrome</a> policy or are issued to a limited set of domains known to be customers of WoSign and StartCom.<br /><br />Due to a number of technical limitations and concerns, Google Chrome is unable to trust all pre-existing certificates while ensuring our users are sufficiently protected from further misissuance. As a result of these changes, customers of WoSign and StartCom may find their certificates no longer work in Chrome 56.<br /><br />In subsequent Chrome releases, these exceptions will be reduced and ultimately removed, culminating in the full distrust of these CAs. This staged approach is solely to ensure sites have the opportunity to transition to other Certificate Authorities that are still trusted in Google Chrome, thus minimizing disruption to users of these sites. Sites that find themselves on this whitelist will be able to request early removal once they’ve transitioned to new certificates. Any attempt by WoSign or StartCom to circumvent these controls will result in immediate and complete removal of trust.<br /><br />We remain committed to ensuring the safety and privacy of Google Chrome users. 
We appreciate the impact to users visiting sites with affected certificates and to the operators who run these sites, but the nature of these incidents, and the need to protect our users, prevent us from being able to take less disruptive steps.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/distrusting-wosign-and-startcom-certificates/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Disclosing vulnerabilities to protect users</title>
		<link>https://googledata.org/google-online-security/disclosing-vulnerabilities-to-protect-users/</link>
		<comments>https://googledata.org/google-online-security/disclosing-vulnerabilities-to-protect-users/#comments</comments>
		<pubDate>Mon, 31 Oct 2016 17:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=ce6ed0942f7ac6b46e74cc3d8b73afb5</guid>
		<description><![CDATA[<span>Posted by Neel Mehta and Billy Leonard, Threat Analysis Group</span><br />On Friday, October 21st, we reported 0-day vulnerabilities &#8212; previously publicly-unknown vulnerabilities &#8212; to Adobe and Microsoft. Adobe updated Flash&#160;<a href="https://helpx.adobe.com/security/products/flash-player/apsb16-36.html">on October 26th</a>&#160;to address CVE-2016-7855; this update is available via Adobe's updater and Chrome auto-update.<br /><br />After 7 days, per our&#160;<a href="https://security.googleblog.com/2013/05/disclosure-timeline-for-vulnerabilities.html">published policy for actively exploited critical vulnerabilities</a>, we are today disclosing the existence of a remaining critical vulnerability in Windows for which no advisory or fix has yet been released. This vulnerability is particularly serious because we know it is being actively exploited.<br /><br />The Windows vulnerability is a local privilege escalation in the Windows kernel that can be used as a security sandbox escape. It can be triggered via the win32k.sys system call NtSetWindowLongPtr() for the index GWLP_ID on a window handle with GWL_STYLE set to WS_CHILD. Chrome's sandbox blocks win32k.sys system calls using the <a href="https://docs.google.com/document/d/1gJDlk-9xkh6_8M_awrczWCaUuyr0Zd2TKjNBCiPO_G4/edit">Win32k lockdown</a> mitigation on Windows 10, which prevents exploitation of this sandbox escape vulnerability.<br /><br />We encourage users to verify that auto-updaters have already updated Flash &#8212; and to manually update if not &#8212; and to apply Windows patches from Microsoft when they become available for the Windows vulnerability.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Neel Mehta and Billy Leonard, Threat Analysis Group</span><br />On Friday, October 21st, we reported 0-day vulnerabilities — previously publicly-unknown vulnerabilities — to Adobe and Microsoft. Adobe updated Flash&nbsp;<a href="https://helpx.adobe.com/security/products/flash-player/apsb16-36.html">on October 26th</a>&nbsp;to address CVE-2016-7855; this update is available via Adobe's updater and Chrome auto-update.<br /><br />After 7 days, per our&nbsp;<a href="https://security.googleblog.com/2013/05/disclosure-timeline-for-vulnerabilities.html">published policy for actively exploited critical vulnerabilities</a>, we are today disclosing the existence of a remaining critical vulnerability in Windows for which no advisory or fix has yet been released. This vulnerability is particularly serious because we know it is being actively exploited.<br /><br />The Windows vulnerability is a local privilege escalation in the Windows kernel that can be used as a security sandbox escape. It can be triggered via the win32k.sys system call NtSetWindowLongPtr() for the index GWLP_ID on a window handle with GWL_STYLE set to WS_CHILD. Chrome's sandbox blocks win32k.sys system calls using the <a href="https://docs.google.com/document/d/1gJDlk-9xkh6_8M_awrczWCaUuyr0Zd2TKjNBCiPO_G4/edit">Win32k lockdown</a> mitigation on Windows 10, which prevents exploitation of this sandbox escape vulnerability.<br /><br />We encourage users to verify that auto-updaters have already updated Flash — and to manually update if not — and to apply Windows patches from Microsoft when they become available for the Windows vulnerability.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/disclosing-vulnerabilities-to-protect-users/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>OnHub: Powerful protection for peace of mind</title>
		<link>https://googledata.org/google-online-security/onhub-powerful-protection-for-peace-of-mind/</link>
		<comments>https://googledata.org/google-online-security/onhub-powerful-protection-for-peace-of-mind/#comments</comments>
		<pubDate>Tue, 27 Sep 2016 17:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e3bf7a0694cbcec1e540df5e086458de</guid>
		<description><![CDATA[<span>Posted by Chris Millikin, Public Defender (Security Engineering Manager)</span><br /><i>[Cross-posted from the <a href="https://on.google.com/hub/blog/2016-09-27/onhub-powerful-protection-for-peace-of-mind/">Official OnHub Blog</a>]</i><br />Since OnHub launched, we've highlighted a variety of features that enable users to do the things they love online without having to deal with the annoying router issues that we've all experienced at one time or another. These include: Fast, reliable Wi-Fi for more than 100 devices at a time, easy streaming and sharing, and wide-ranging coverage that helps eliminate dead zones.<br /><br />We haven't, however, highlighted one of OnHub's most powerful features: Industry-leading security. Your router is the first line of defense for your online world. Because bad actors are aware of the critical position routers occupy in the network, routers are frequently the target of security attacks.<br /><div><a href="https://4.bp.blogspot.com/-yJqcOIpuP5s/V-oEyTGp7yI/AAAAAAAAAX8/Whi0GfF4SncWVrSBtrm0X6B3c41Fmk1zgCLcB/s1600/router-network3.png"><img border="0" height="160" src="https://4.bp.blogspot.com/-yJqcOIpuP5s/V-oEyTGp7yI/AAAAAAAAAX8/Whi0GfF4SncWVrSBtrm0X6B3c41Fmk1zgCLcB/s400/router-network3.png" width="400"></a></div>OnHub&#8217;s security features go beyond those of the typical router: OnHub is hardened against a variety of attacks, protecting your home network from many online threats. Three features in particular help ensure OnHub protects your data and devices from a variety of threats.<br /><br /><b>Three Security features that set OnHub apart</b><br /><b><br /></b><b>1. 
Defense in Depth</b><br /><div><a href="https://4.bp.blogspot.com/-vxGj9-G2_as/V-oEtEURvSI/AAAAAAAAAX4/kEJIchi05UgI3choIhpX0_2i1TcWZM63ACLcB/s1600/security-graphic.png"><img border="0" height="320" src="https://4.bp.blogspot.com/-vxGj9-G2_as/V-oEtEURvSI/AAAAAAAAAX4/kEJIchi05UgI3choIhpX0_2i1TcWZM63ACLcB/s320/security-graphic.png" width="320"></a></div>There are many elements that go into creating a robust defense in depth.<br /><br />Auto updates: OnHub regularly downloads automatic updates without you having to do anything--a long-established practice on mobile devices and software like Chrome, but one that appliances haven&#8217;t caught up with yet. These updates provide regular maintenance fixes and address critical vulnerabilities. They&#8217;re like the seatbelts of online security&#8212; <a href="https://security.googleblog.com/2015/07/new-research-comparing-how-security.html">internet security experts</a> recommend that users always accept updates.<br /><br />However, when updates don&#8217;t happen automatically, many people don&#8217;t bother. OnHub communicates directly with Google, and makes sure all software is signed and verified. For instance, when a vulnerability was found in a software library (<a href="https://security.googleblog.com/2016/02/cve-2015-7547-glibc-getaddrinfo-stack.html">glibc</a>) earlier this year, we were able to update OnHub&#8217;s entire fleet of devices within just a few days. In comparison, the vast majority of other routers require active user intervention to protect against such threats.<br /><br />Verified Boot: Verified Boot protects you by preventing compromised OnHubs from booting. We use this technology in Chromebooks, <a href="https://security.googleblog.com/2016/09/keeping-android-safe-security.html">strictly enforce it in Android Nougat</a>, and we implemented it in OnHub from the very beginning. This makes OnHub extremely difficult to attack or compromise. 
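In outline, the signed-and-verified idea behind both auto updates and Verified Boot is: refuse to run any image whose signature does not verify against key material the device trusts. A toy sketch of that check (an HMAC stands in here for the real public-key verified-boot chain; the key and names are our own illustration, not OnHub's code):

```python
import hashlib
import hmac

SIGNING_KEY = b"device-provisioned-secret"  # stand-in for real key material

def sign_image(payload: bytes) -> bytes:
    """Produce a signature over an update or boot image."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def verify_and_apply(payload: bytes, signature: bytes) -> bool:
    """Apply (or boot) the image only if its signature verifies;
    a tampered or unsigned image is rejected, as Verified Boot would."""
    return hmac.compare_digest(sign_image(payload), signature)
```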
For instance, the device runs software that has been cryptographically signed by Google.<br /><br />Cloud administration: A traditional router is commonly attacked through its local administration web interface, where attackers have taken advantage of exploits like <a href="https://en.wikipedia.org/wiki/Cross-site_request_forgery">CSRF</a> to remotely take control and change critical settings like DNS, so we eliminated that from the beginning. Instead OnHub is managed through the cloud, with strong authentication and authorization, using a simple phone app. A read-only API is available only on the internal network, to provide important data to the OnHub app during setup and when troubleshooting.<br /><br />Process isolation: We also layer multiple techniques such as process isolation (uid/gid separation, namespaces, capability whitelists) and <a href="https://en.wikipedia.org/wiki/Seccomp">seccomp</a> filtering to isolate network-facing services, which helps reduce potential attack scenarios in a given application by preventing an attacker from making lateral movements in the system.<br /><br /><b>2. Hardware Provenance</b><br />Modern hardware devices include many types of chips, drivers, and firmware. It&#8217;s important to know what each part is doing and where it came from. Our security team works to track the origins of all hardware, software, and firmware that goes into OnHub, including those from third-party sources. If a vulnerability is ever found, OnHub security works to fix the problem immediately.<br /><br />The same goes for the open source components of OnHub. Before shipping, we do comprehensive code reviews of critical attack surfaces (i.e. network facing daemons), looking for security vulnerabilities. For example, we reviewed miniupnpd, hostapd, and dnsmasq. As a result of those reviews, Google reported security bugs to the open source project maintainers and offered patches. 
Here are three that we fixed: <a href="http://w1.fi/security/2015-2/">bugs</a> <a href="http://w1.fi/security/2015-3/">in</a> <a href="http://w1.fi/security/2015-4/">hostapd</a>.<br /><br /><b>3. Cloud Intelligence</b><br />We use anonymized metrics from our fleet of OnHubs to quickly detect and counter potential threats. For example, since we know that DNS is often a target of attacks, we monitor DNS settings on all OnHub routers for activity that could indicate a security compromise. This is &#8220;cloud intelligence&#8221; &#8211; a benefit that Google is uniquely able to deliver. By connecting OnHub to the Google cloud, we provide the same level of protection you expect across all your Google apps and devices. Because you manage your router through the cloud using your secure Google identity, you don&#8217;t have to remember yet another password for managing your OnHub, and you don&#8217;t have to be at home to control it.<br /><br /><b>Security Improvements, Automatically</b><br />OnHub also participates in <a href="https://security.googleblog.com/2016/01/google-security-rewards-2015-year-in.html">Google&#8217;s Vulnerability Reward Program</a>, which started in 2010 to honor all of the cutting-edge external contributions that help us keep our users safe. Through this program, if you can find a qualifying bug in OnHub&#8217;s security, rewards range from $100 to $20,000. Click <a href="https://www.google.com/about/appsecurity/reward-program/">here</a> for an outline of the rewards for the most common classes of bugs.<br /><br />When it comes to security, not all routers are created equal. OnHub protects you and your network with security that continues to adapt to threats. We&#8217;re always improving OnHub security, and updates are applied automatically without users having to take any action. As cybersecurity evolves and new threats emerge, OnHub will be ready to meet the latest challenges for years to come.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Chris Millikin, Public Defender (Security Engineering Manager)</span><br /><i>[Cross-posted from the <a href="https://on.google.com/hub/blog/2016-09-27/onhub-powerful-protection-for-peace-of-mind/">Official OnHub Blog</a>]</i><br />Since OnHub launched, we've highlighted a variety of features that enable users to do the things they love online without having to deal with the annoying router issues that we've all experienced at one time or another. These include: Fast, reliable Wi-Fi for more than 100 devices at a time, easy streaming and sharing, and wide-ranging coverage that helps eliminate dead zones.<br /><br />We haven't, however, highlighted one of OnHub's most powerful features: Industry-leading security. Your router is the first line of defense for your online world. Because bad actors are aware of the critical position routers occupy in the network, routers are frequently the target of security attacks.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-yJqcOIpuP5s/V-oEyTGp7yI/AAAAAAAAAX8/Whi0GfF4SncWVrSBtrm0X6B3c41Fmk1zgCLcB/s1600/router-network3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="160" src="https://4.bp.blogspot.com/-yJqcOIpuP5s/V-oEyTGp7yI/AAAAAAAAAX8/Whi0GfF4SncWVrSBtrm0X6B3c41Fmk1zgCLcB/s400/router-network3.png" width="400" /></a></div>OnHub’s security features go beyond those of the typical router: OnHub is hardened against a variety of attacks, protecting your home network from many online threats. Three features in particular help ensure OnHub protects your data and devices from a variety of threats.<br /><br /><b>Three Security features that set OnHub apart</b><br /><b><br /></b><b>1. 
Defense in Depth</b><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-vxGj9-G2_as/V-oEtEURvSI/AAAAAAAAAX4/kEJIchi05UgI3choIhpX0_2i1TcWZM63ACLcB/s1600/security-graphic.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://4.bp.blogspot.com/-vxGj9-G2_as/V-oEtEURvSI/AAAAAAAAAX4/kEJIchi05UgI3choIhpX0_2i1TcWZM63ACLcB/s320/security-graphic.png" width="320" /></a></div>There are many elements that go into creating a robust defense in depth.<br /><br />Auto updates: OnHub regularly downloads automatic updates without you having to do anything--a long-established practice on mobile devices and software like Chrome, but one that appliances haven’t caught up with yet. These updates provide regular maintenance fixes and address critical vulnerabilities. They’re like the seatbelts of online security— <a href="https://security.googleblog.com/2015/07/new-research-comparing-how-security.html">internet security experts</a> recommend that users always accept updates.<br /><br />However, when updates don’t happen automatically, many people don’t bother. OnHub communicates directly with Google, and makes sure all software is signed and verified. For instance, when a vulnerability was found in a software library (<a href="https://security.googleblog.com/2016/02/cve-2015-7547-glibc-getaddrinfo-stack.html">glibc</a>) earlier this year, we were able to update OnHub’s entire fleet of devices within just a few days. In comparison, the vast majority of other routers require active user intervention to protect against such threats.<br /><br />Verified Boot: Verified Boot protects you by preventing compromised OnHubs from booting. We use this technology in Chromebooks, <a href="https://security.googleblog.com/2016/09/keeping-android-safe-security.html">strictly enforce it in Android Nougat</a>, and we implemented it in OnHub from the very beginning. 
This makes OnHub extremely difficult to attack or compromise. For instance, the device runs software that has been cryptographically signed by Google.<br /><br />Cloud administration: A traditional router is commonly attacked through its local administration web interface, where attackers have taken advantage of exploits like <a href="https://en.wikipedia.org/wiki/Cross-site_request_forgery">CSRF</a> to remotely take control and change critical settings like DNS, so we eliminated that from the beginning. Instead OnHub is managed through the cloud, with strong authentication and authorization, using a simple phone app. A read-only API is available only on the internal network, to provide important data to the OnHub app during setup and when troubleshooting.<br /><br />Process isolation: We also layer multiple techniques such as process isolation (uid/gid separation, namespaces, capability whitelists) and <a href="https://en.wikipedia.org/wiki/Seccomp">seccomp</a> filtering to isolate network-facing services, which helps reduce potential attack scenarios in a given application by preventing an attacker from making lateral movements in the system.<br /><br /><b>2. Hardware Provenance</b><br />Modern hardware devices include many types of chips, drivers, and firmware. It’s important to know what each part is doing and where it came from. Our security team works to track the origins of all hardware, software, and firmware that goes into OnHub, including those from third-party sources. If a vulnerability is ever found, OnHub security works to fix the problem immediately.<br /><br />The same goes for the open source components of OnHub. Before shipping, we do comprehensive code reviews of critical attack surfaces (i.e. network facing daemons), looking for security vulnerabilities. For example, we reviewed miniupnpd, hostapd, and dnsmasq. As a result of those reviews, Google reported security bugs to the open source project maintainers and offered patches. 
Here are three that we fixed: <a href="http://w1.fi/security/2015-2/">bugs</a> <a href="http://w1.fi/security/2015-3/">in</a> <a href="http://w1.fi/security/2015-4/">hostapd</a>.<br /><br /><b>3. Cloud Intelligence</b><br />We use anonymized metrics from our fleet of OnHubs to quickly detect and counter potential threats. For example, since we know that DNS is often a target of attacks, we monitor DNS settings on all OnHub routers for activity that could indicate a security compromise. This is “cloud intelligence” – a benefit that Google is uniquely able to deliver. By connecting OnHub to the Google cloud, we provide the same level of protection you expect across all your Google apps and devices. Because you manage your router through the cloud using your secure Google identity, you don’t have to remember yet another password for managing your OnHub, and you don’t have to be at home to control it.<br /><br /><b>Security Improvements, Automatically</b><br />OnHub also participates in <a href="https://security.googleblog.com/2016/01/google-security-rewards-2015-year-in.html">Google’s Vulnerability Reward Program</a>, which started in 2010 to honor all of the cutting-edge external contributions that help us keep our users safe. Through this program, if you can find a qualifying bug in OnHub’s security, rewards range from $100 to $20,000. Click <a href="https://www.google.com/about/appsecurity/reward-program/">here</a> for an outline of the rewards for the most common classes of bugs.<br /><br />When it comes to security, not all routers are created equal. OnHub protects you and your network with security that continues to adapt to threats. We’re always improving OnHub security, and updates are applied automatically without users having to take any action. As cybersecurity evolves and new threats emerge, OnHub will be ready to meet the latest challenges for years to come.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/onhub-powerful-protection-for-peace-of-mind/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Reshaping web defenses with strict Content Security Policy</title>
		<link>https://googledata.org/google-online-security/reshaping-web-defenses-with-strict-content-security-policy/</link>
		<comments>https://googledata.org/google-online-security/reshaping-web-defenses-with-strict-content-security-policy/#comments</comments>
		<pubDate>Mon, 26 Sep 2016 18:21:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=f437c9c9dcb9f571cb012904d4c9e682</guid>
		<description><![CDATA[<span>Posted by Artur Janc, Michele Spagnuolo, Lukas Weichselbaum, and David Ross, Information Security Engineers</span><br /><br /><a href="https://www.google.com/about/appsecurity/learning/xss/">Cross-site scripting</a>&#160;&#8212; the ability to inject undesired scripts into a trusted web application &#8212; has been one of the top web security vulnerabilities for over a decade. Just in the past 2 years Google has awarded researchers over $1.2 million for reporting XSS bugs in our applications via the <a href="https://www.google.com/about/appsecurity/reward-program/index.html">Vulnerability Reward Program</a>. Modern web technologies such as <a href="https://queue.acm.org/detail.cfm?id=2663760">strict contextual auto-escaping</a> help developers avoid mistakes which lead to XSS, and <a href="https://cloudplatform.googleblog.com/2015/02/using-google-cloud-platform-for.html">automated scanners</a> can catch classes of vulnerabilities during the testing process. However, in complex applications bugs inevitably slip by, allowing attacks ranging from harmless pranks to malicious <a href="https://security.googleblog.com/2011/03/mhtml-vulnerability-under-active.html">targeted exploits</a>.<br /><br />Content Security Policy (CSP) is a mechanism designed to step in precisely when such bugs happen; it provides developers the ability to restrict which scripts are allowed to execute so that even if attackers can inject HTML into a vulnerable page, they should not be able to load malicious scripts and other types of resources. CSP is a flexible tool allowing developers to set a wide range of policies; it is supported &#8212; though not always in its entirety &#8212; by all modern browsers.<br /><br />However, the flexibility of CSP also leads to its biggest problem: it makes it easy to set policies which appear to work, but offer no real security benefit. 
In a <a href="https://research.google.com/pubs/pub45542.html">recent Internet-wide study</a> we analyzed over 1 billion domains and found that 95% of deployed CSP policies are ineffective as a protection against XSS. One of the underlying reasons is that out of the 15 domains most commonly whitelisted by developers for loading external scripts, as many as 14 expose patterns which allow attackers to bypass CSP protections.&#160;We believe it's important to improve this, and help the web ecosystem make full use of the potential of CSP.<br /><br /><b>Towards safer CSP policies</b><br />To help developers craft policies which meaningfully protect their applications, today we&#8217;re releasing the <a href="https://csp-evaluator.withgoogle.com/">CSP Evaluator</a>, a tool to visualize the effect of setting a policy and detect subtle misconfigurations. CSP Evaluator is used by security engineers and developers at Google to make sure policies provide a meaningful security benefit and cannot be subverted by attackers.<br /><div><a href="https://1.bp.blogspot.com/-2jTX5XO-mmk/V-i93HvnYoI/AAAAAAAAAXc/Vo9vANUBHpYq7WrKuDsS_Prpe698Y7UlwCLcB/s1600/pasted%2Bimage%2B0%2B%25288%2529.png"><img border="0" height="398" src="https://1.bp.blogspot.com/-2jTX5XO-mmk/V-i93HvnYoI/AAAAAAAAAXc/Vo9vANUBHpYq7WrKuDsS_Prpe698Y7UlwCLcB/s640/pasted%2Bimage%2B0%2B%25288%2529.png" width="640"></a></div>Even with such a helpful tool, building a safe script whitelist for a complex application is often all but impossible due to the number of popular domains with resources that allow CSP to be bypassed. Here&#8217;s where the idea of a nonce-based CSP policy comes in. 
Instead of whitelisting all allowed script locations, it&#8217;s often simpler to modify the application to prove that a script is trusted by the developer by giving it a nonce -- an unpredictable, single-use token which has to match a value set in the policy:<br /><br /><span>Content-Security-Policy: script-src 'nonce-random123'</span><br /><span><br /></span><span>&#60;script nonce='random123'&#62;alert('This script will run')&#60;/script&#62;</span><br /><span>&#60;script&#62;alert('Will not run: missing nonce')&#60;/script&#62;</span><br /><span>&#60;script nonce='bad123'&#62;alert("Won't run: invalid nonce")&#60;/script&#62;</span><br /><span><br /></span>With '<a href="https://www.w3.org/TR/CSP3/#strict-dynamic-usage">strict-dynamic'</a>, a part of the upcoming CSP3 specification already <a href="https://www.chromestatus.com/feature/5633814718054400">supported</a> by Chrome and Opera (and coming soon to Firefox), adopting such policies in complex, modern applications becomes much easier. Developers can now set a single, short policy such as:<br /><br /><span>script-src 'nonce-random123' 'strict-dynamic'; object-src 'none'</span><br /><span><br /></span>and make sure that all static <span>&#60;script&#62;</span> elements contain a matching nonce attribute &#8212; in many cases this is all that&#8217;s needed to enjoy added protection against XSS since &#8216;strict-dynamic&#8217; will take care of loading any trusted scripts added at runtime. 
This approach allows setting policies which are <a href="https://csp.withgoogle.com/docs/strict-csp.html#example">backwards-compatible</a> with all CSP-aware browsers, and <a href="https://csp.withgoogle.com/docs/faq.html#strict-dynamic-with-whitelists">plays well</a> with applications which already use a traditional CSP policy; it also simplifies the process of adopting CSP and doesn&#8217;t require changing the policy as the application evolves.<br /><br /><b>Adopting strict CSP</b><br />In the past months we&#8217;ve deployed this approach in several large Google applications, including <a href="https://console.developers.google.com/">Cloud Console</a>, <a href="https://photos.google.com/">Photos</a>, <a href="https://myactivity.google.com/myactivity">History</a>, <a href="https://www.google.com/about/careers/jobs">Careers Search</a>, <a href="https://www.google.com/maps/timeline">Maps Timeline</a>, <a href="https://www.google.com/culturalinstitute/">Cultural Institute</a> and are working on many more. We believe this approach can also help other developers so today we&#8217;re publishing documentation discussing the <a href="https://csp.withgoogle.com/docs/strict-csp.html">best strategies for implementing CSP</a>, including an overview of the <a href="https://csp.withgoogle.com/docs/why-csp.html">benefits of CSP</a>, sample policies, and examples of common <a href="https://csp.withgoogle.com/docs/adopting-csp.html#code-changes">code changes</a>.<br /><br />Further, today we&#8217;re releasing <a href="https://chrome.google.com/webstore/detail/csp-mitigator/gijlobangojajlbodabkpjpheeeokhfa">CSP Mitigator</a>, a Chrome extension that helps developers review an application for compatibility with nonce-based CSP. The extension can be enabled for any URL prefix and will collect data about any programming patterns that need to be refactored to support CSP. 
This includes identifying scripts which do not have the correct nonce attribute, detecting inline event handlers, javascript: URIs, and several other more subtle patterns which might need attention.<br /><div><a href="https://3.bp.blogspot.com/-FUp4l0GCGOk/V-i9wBSAgxI/AAAAAAAAAXY/rLhV3aYC5Lwp1MUw5YC7erF0Wi7Zhj3OwCLcB/s1600/pasted%2Bimage%2B0%2B%25289%2529.png"><img border="0" height="336" src="https://3.bp.blogspot.com/-FUp4l0GCGOk/V-i9wBSAgxI/AAAAAAAAAXY/rLhV3aYC5Lwp1MUw5YC7erF0Wi7Zhj3OwCLcB/s640/pasted%2Bimage%2B0%2B%25289%2529.png" width="640"></a></div>As with the CSP Evaluator, we use the extension with our applications to help speed up the process of adopting nonce-based CSP policies across Google.<br /><br /><b>Encouraging broader use of strict CSP</b><br />Finally, today we&#8217;re including CSP adoption efforts in the scope of the <a href="https://www.google.com/about/appsecurity/patch-rewards/">Patch Reward Program</a>; proactive work to help make popular open-source web frameworks compatible with nonce-based CSP can qualify for rewards (but please read the&#160;<a href="https://www.google.com/about/appsecurity/patch-rewards/#content">program rules</a> and <a href="https://csp.withgoogle.com/docs/faq.html#csp-patches">CSP refactoring tips</a> first). We hope that increased attention to this area will also encourage researchers to find new, creative ways to circumvent CSP restrictions, and help us further improve the mechanism so that we can better protect Internet users from web threats.<br /><br />To reach out to us, email more-csp@google.com.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Artur Janc, Michele Spagnuolo, Lukas Weichselbaum, and David Ross, Information Security Engineers</span><br /><br /><a href="https://www.google.com/about/appsecurity/learning/xss/">Cross-site scripting</a>&nbsp;— the ability to inject undesired scripts into a trusted web application — has been one of the top web security vulnerabilities for over a decade. Just in the past 2 years Google has awarded researchers over $1.2 million for reporting XSS bugs in our applications via the <a href="https://www.google.com/about/appsecurity/reward-program/index.html">Vulnerability Reward Program</a>. Modern web technologies such as <a href="https://queue.acm.org/detail.cfm?id=2663760">strict contextual auto-escaping</a> help developers avoid mistakes which lead to XSS, and <a href="https://cloudplatform.googleblog.com/2015/02/using-google-cloud-platform-for.html">automated scanners</a> can catch classes of vulnerabilities during the testing process. However, in complex applications bugs inevitably slip by, allowing attacks ranging from harmless pranks to malicious <a href="https://security.googleblog.com/2011/03/mhtml-vulnerability-under-active.html">targeted exploits</a>.<br /><br />Content Security Policy (CSP) is a mechanism designed to step in precisely when such bugs happen; it provides developers the ability to restrict which scripts are allowed to execute so that even if attackers can inject HTML into a vulnerable page, they should not be able to load malicious scripts and other types of resources. CSP is a flexible tool allowing developers to set a wide range of policies; it is supported — though not always in its entirety — by all modern browsers.<br /><br />However, the flexibility of CSP also leads to its biggest problem: it makes it easy to set policies which appear to work, but offer no real security benefit. 
In a <a href="https://research.google.com/pubs/pub45542.html">recent Internet-wide study</a> we analyzed over 1 billion domains and found that 95% of deployed CSP policies are ineffective as a protection against XSS. One of the underlying reasons is that out of the 15 domains most commonly whitelisted by developers for loading external scripts, as many as 14 expose patterns which allow attackers to bypass CSP protections.&nbsp;We believe it's important to improve this, and help the web ecosystem make full use of the potential of CSP.<br /><br /><b>Towards safer CSP policies</b><br />To help developers craft policies which meaningfully protect their applications, today we’re releasing the <a href="https://csp-evaluator.withgoogle.com/">CSP Evaluator</a>, a tool to visualize the effect of setting a policy and detect subtle misconfigurations. CSP Evaluator is used by security engineers and developers at Google to make sure policies provide a meaningful security benefit and cannot be subverted by attackers.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-2jTX5XO-mmk/V-i93HvnYoI/AAAAAAAAAXc/Vo9vANUBHpYq7WrKuDsS_Prpe698Y7UlwCLcB/s1600/pasted%2Bimage%2B0%2B%25288%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="398" src="https://1.bp.blogspot.com/-2jTX5XO-mmk/V-i93HvnYoI/AAAAAAAAAXc/Vo9vANUBHpYq7WrKuDsS_Prpe698Y7UlwCLcB/s640/pasted%2Bimage%2B0%2B%25288%2529.png" width="640" /></a></div>Even with such a helpful tool, building a safe script whitelist for a complex application is often all but impossible due to the number of popular domains with resources that allow CSP to be bypassed. Here’s where the idea of a nonce-based CSP policy comes in. 
Instead of whitelisting all allowed script locations, it’s often simpler to modify the application to prove that a script is trusted by the developer by giving it a nonce -- an unpredictable, single-use token which has to match a value set in the policy:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Content-Security-Policy: script-src 'nonce-random123'</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&lt;script nonce='random123'&gt;alert('This script will run')&lt;/script&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&lt;script&gt;alert('Will not run: missing nonce')&lt;/script&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&lt;script nonce='bad123'&gt;alert("Won't run: invalid nonce")&lt;/script&gt;</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span>With '<a href="https://www.w3.org/TR/CSP3/#strict-dynamic-usage">strict-dynamic'</a>, a part of the upcoming CSP3 specification already <a href="https://www.chromestatus.com/feature/5633814718054400">supported</a> by Chrome and Opera (and coming soon to Firefox), adopting such policies in complex, modern applications becomes much easier. 
Developers can now set a single, short policy such as:<br /><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">script-src 'nonce-random123' 'strict-dynamic'; object-src 'none'</span><br /><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><br /></span>and make sure that all static <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">&lt;script&gt;</span> elements contain a matching nonce attribute — in many cases this is all that’s needed to enjoy added protection against XSS since ‘strict-dynamic’ will take care of loading any trusted scripts added at runtime. This approach allows setting policies which are <a href="https://csp.withgoogle.com/docs/strict-csp.html#example">backwards-compatible</a> with all CSP-aware browsers, and <a href="https://csp.withgoogle.com/docs/faq.html#strict-dynamic-with-whitelists">plays well</a> with applications which already use a traditional CSP policy; it also simplifies the process of adopting CSP and doesn’t require changing the policy as the application evolves.<br /><br /><b>Adopting strict CSP</b><br />In the past months we’ve deployed this approach in several large Google applications, including <a href="https://console.developers.google.com/">Cloud Console</a>, <a href="https://photos.google.com/">Photos</a>, <a href="https://myactivity.google.com/myactivity">History</a>, <a href="https://www.google.com/about/careers/jobs">Careers Search</a>, <a href="https://www.google.com/maps/timeline">Maps Timeline</a>, <a href="https://www.google.com/culturalinstitute/">Cultural Institute</a> and are working on many more. 
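The nonce handshake described above can be sketched end to end on the server side. This is a minimal illustrative Python sketch (render_page and init() are hypothetical names, not code from this post):

```python
import secrets

def render_page():
    # Generate a fresh, unpredictable nonce for every response.
    nonce = secrets.token_urlsafe(16)
    headers = {
        "Content-Security-Policy":
            f"script-src 'nonce-{nonce}' 'strict-dynamic'; object-src 'none'"
    }
    # Only <script> tags carrying the matching nonce attribute may run;
    # 'strict-dynamic' then extends trust to scripts they load at runtime.
    body = f"<script nonce='{nonce}'>init();</script>"
    return headers, body

headers, body = render_page()
```

The key property is that the nonce is generated per response and never reused, so an attacker injecting HTML cannot predict it.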
We believe this approach can also help other developers, so today we’re publishing documentation discussing the <a href="https://csp.withgoogle.com/docs/strict-csp.html">best strategies for implementing CSP</a>, including an overview of the <a href="https://csp.withgoogle.com/docs/why-csp.html">benefits of CSP</a>, sample policies, and examples of common <a href="https://csp.withgoogle.com/docs/adopting-csp.html#code-changes">code changes</a>.<br /><br />Further, today we’re releasing <a href="https://chrome.google.com/webstore/detail/csp-mitigator/gijlobangojajlbodabkpjpheeeokhfa">CSP Mitigator</a>, a Chrome extension that helps developers review an application for compatibility with nonce-based CSP. The extension can be enabled for any URL prefix and will collect data about any programming patterns that need to be refactored to support CSP. This includes identifying scripts which do not have the correct nonce attribute, detecting inline event handlers, javascript: URIs, and several other more subtle patterns which might need attention.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-FUp4l0GCGOk/V-i9wBSAgxI/AAAAAAAAAXY/rLhV3aYC5Lwp1MUw5YC7erF0Wi7Zhj3OwCLcB/s1600/pasted%2Bimage%2B0%2B%25289%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="336" src="https://3.bp.blogspot.com/-FUp4l0GCGOk/V-i9wBSAgxI/AAAAAAAAAXY/rLhV3aYC5Lwp1MUw5YC7erF0Wi7Zhj3OwCLcB/s640/pasted%2Bimage%2B0%2B%25289%2529.png" width="640" /></a></div>As with the CSP Evaluator, we use the extension with our applications to help speed up the process of adopting nonce-based CSP policies across Google.<br /><br /><b>Encouraging broader use of strict CSP</b><br />Finally, today we’re including CSP adoption efforts in the scope of the <a href="https://www.google.com/about/appsecurity/patch-rewards/">Patch Reward Program</a>; proactive work to help make popular open-source web frameworks 
compatible with nonce-based CSP can qualify for rewards (but please read the&nbsp;<a href="https://www.google.com/about/appsecurity/patch-rewards/#content">program rules</a> and <a href="https://csp.withgoogle.com/docs/faq.html#csp-patches">CSP refactoring tips</a> first). We hope that increased attention to this area will also encourage researchers to find new, creative ways to circumvent CSP restrictions, and help us further improve the mechanism so that we can better protect Internet users from web threats.<br /><br />To reach out to us, email more-csp@google.com.]]></content:encoded>
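As a rough illustration of the kinds of refactoring patterns CSP Mitigator flags (scripts without a nonce, inline event handlers, javascript: URIs), a scan along these lines can surface the common blockers. This is an illustrative Python sketch, not the extension's actual detection logic, which is far more thorough:

```python
import re

# Heuristic patterns that typically block a nonce-based CSP rollout.
INLINE_HANDLER = re.compile(r'\son\w+\s*=', re.IGNORECASE)
JS_URI = re.compile(r'(href|src)\s*=\s*["\']javascript:', re.IGNORECASE)
SCRIPT_NO_NONCE = re.compile(r'<script(?![^>]*\bnonce=)[^>]*>', re.IGNORECASE)

def csp_refactoring_findings(html: str) -> list:
    # Returns a list of pattern names found in the given HTML fragment.
    findings = []
    if INLINE_HANDLER.search(html):
        findings.append("inline event handler")
    if JS_URI.search(html):
        findings.append("javascript: URI")
    if SCRIPT_NO_NONCE.search(html):
        findings.append("script without nonce")
    return findings
```

Regex-based scanning like this is only a first pass; real HTML needs a proper parser, which is one reason a browser extension is the right place for this kind of audit.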
			<wfw:commentRss>https://googledata.org/google-online-security/reshaping-web-defenses-with-strict-content-security-policy/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Even More Safe Browsing on Android!</title>
		<link>https://googledata.org/google-online-security/even-more-safe-browsing-on-android/</link>
		<comments>https://googledata.org/google-online-security/even-more-safe-browsing-on-android/#comments</comments>
		<pubDate>Thu, 15 Sep 2016 16:36:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=f07de3cb07a8df6a23a0780d078f32a8</guid>
		<description><![CDATA[<span>Posted by Stephan Somogyi, Safe Browsing Team &#38; William Luh, Android Security Team</span><br />During Google I/O in June, we <a href="https://www.youtube.com/watch?v=gwbmGvur5VE&#38;t=29m16s">told everyone</a> that we were going to make a device-local Safe Browsing API available to all Android developers later in the year. That time has come!<br /><br />Starting with <a href="https://developers.google.com/android/guides/releases#august_1_-_v94">Google Play Services version 9.4</a>, all Android developers can use our privacy-preserving, network-efficient, and power-efficient on-device Safe Browsing infrastructure to protect all of their apps&#8217; users. Even better, <a href="https://developer.android.com/training/safebrowsing/index.html">the API is simple and straightforward to use</a>.<br /><br />Since we introduced client-side Safe Browsing on Android, updated our <a href="https://developers.google.com/safe-browsing/">documentation for Safe Browsing Protocol Version 4</a> (pver4), and also released our <a href="https://github.com/google/safebrowsing/">reference pver4 implementation in Go</a>, we&#8217;ve been able to see how much protection this new technology provides to all our users. Since <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">our initial launch</a> we&#8217;ve shown hundreds of millions of warnings, actively warning many millions of mobile users about badness before they&#8217;re exposed to it.<br /><br />We look forward to all Android developers extending this same protection to their users, too.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Stephan Somogyi, Safe Browsing Team &amp; William Luh, Android Security Team</span><br />During Google I/O in June, we <a href="https://www.youtube.com/watch?v=gwbmGvur5VE&amp;t=29m16s">told everyone</a> that we were going to make a device-local Safe Browsing API available to all Android developers later in the year. That time has come!<br /><br />Starting with <a href="https://developers.google.com/android/guides/releases#august_1_-_v94">Google Play Services version 9.4</a>, all Android developers can use our privacy-preserving, network-efficient, and power-efficient on-device Safe Browsing infrastructure to protect all of their apps’ users. Even better, <a href="https://developer.android.com/training/safebrowsing/index.html">the API is simple and straightforward to use</a>.<br /><br />Since we introduced client-side Safe Browsing on Android, updated our <a href="https://developers.google.com/safe-browsing/">documentation for Safe Browsing Protocol Version 4</a> (pver4), and also released our <a href="https://github.com/google/safebrowsing/">reference pver4 implementation in Go</a>, we’ve been able to see how much protection this new technology provides to all our users. Since <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">our initial launch</a> we’ve shown hundreds of millions of warnings, actively warning many millions of mobile users about badness before they’re exposed to it.<br /><br />We look forward to all Android developers extending this same protection to their users, too.]]></content:encoded>
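For developers comparing the on-device API with the network-based flavor of pver4, a v4 Lookup request body has roughly the following shape. This is an illustrative Python sketch: clientId, clientVersion, and the threat lists are placeholder values, and the authoritative schema lives in the Safe Browsing v4 documentation linked above.

```python
import json

def build_lookup_request(url: str) -> str:
    # Sketch of a v4 Lookup API (threatMatches:find) request body.
    # Field names follow the public Safe Browsing v4 docs; verify
    # against the current schema before relying on this.
    request = {
        "client": {"clientId": "example-app", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANDROID"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    return json.dumps(request)
```

The on-device API exists precisely so that apps do not have to send URLs over the network like this; the sketch is only to show what the client-side infrastructure saves you from doing.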
			<wfw:commentRss>https://googledata.org/google-online-security/even-more-safe-browsing-on-android/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Moving towards a more secure web</title>
		<link>https://googledata.org/google-online-security/moving-towards-a-more-secure-web/</link>
		<comments>https://googledata.org/google-online-security/moving-towards-a-more-secure-web/#comments</comments>
		<pubDate>Thu, 08 Sep 2016 15:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=cce10bedb68b2f06220d9120ae5788fb</guid>
		<description><![CDATA[<span>Posted by Emily Schechter, Chrome Security Team</span><br /><i>[Updated on 12/5/16 with instructions for developers]</i><br /><b>Developers</b>: Read more about how to update your sites&#160;<a href="https://developers.google.com/web/updates/2016/10/avoid-not-secure-warn">here</a>.<br /><br />To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we&#8217;ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.<br /><div><a href="https://4.bp.blogspot.com/-rBbNGiLQzMw/V9CudVXYkjI/AAAAAAAAAWk/SIol_AChYQITBcYJ34xcGsC0a7_VP755gCLcB/s1600/blog%2Bimage%2B1.png"><img border="0" height="248" src="https://4.bp.blogspot.com/-rBbNGiLQzMw/V9CudVXYkjI/AAAAAAAAAWk/SIol_AChYQITBcYJ34xcGsC0a7_VP755gCLcB/s640/blog%2Bimage%2B1.png" width="640"></a></div><div><br /></div><div>Chrome currently indicates HTTP connections with a neutral indicator. This doesn&#8217;t reflect the true lack of security for HTTP connections. When you load a website over HTTP, someone else on the network can look at or <a href="http://webpolicy.org/2015/08/25/att-hotspots-now-with-advertising-injection/">modify</a> the site before it gets to you.</div><div><br /></div><div><br /></div><div>A substantial portion of web traffic has transitioned to HTTPS so far, and HTTPS usage is consistently increasing. We recently hit a milestone with more than half of Chrome desktop page loads now served over HTTPS. 
In addition, since the time we <a href="https://www.google.com/transparencyreport/https/grid/">released our HTTPS report</a> in February, 12 more of the top 100 websites have changed their serving default from HTTP to HTTPS.</div><div><br /></div><div><br /></div><div>Studies show that users <a href="https://www.usenix.org/system/files/conference/soups2016/soups2016-paper-porter-felt.pdf">do not perceive</a> the lack of a &#8220;secure&#8221; icon as a warning, but also that users become blind to warnings that occur too frequently. <a href="https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure">Our plan</a> to label HTTP sites more clearly and accurately as non-secure will take place in gradual steps, based on increasingly stringent criteria. Starting January 2017, Chrome 56 will label HTTP pages with password or credit card form fields as "not secure," given their particularly sensitive nature.</div><div><br /></div><div><br /></div><div>In following releases, we will continue to extend HTTP warnings, for example, by labelling HTTP pages as &#8220;not secure&#8221; in Incognito mode, where users may have higher expectations of privacy. Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.</div><div><a href="https://3.bp.blogspot.com/-DG70U0Y-y9w/V9Cwuym53AI/AAAAAAAAAW0/6zO81T_hqWMjdAF_YYK7dfXV-26DL7OYACLcB/s1600/blog%2Bimage%2B2.png"><img border="0" height="193" src="https://3.bp.blogspot.com/-DG70U0Y-y9w/V9Cwuym53AI/AAAAAAAAAW0/6zO81T_hqWMjdAF_YYK7dfXV-26DL7OYACLcB/s400/blog%2Bimage%2B2.png" width="400"></a></div><div><br /></div><div>We will publish updates to this plan as we approach future releases, but don&#8217;t wait to get started moving to HTTPS. 
HTTPS is <a href="https://www.youtube.com/watch?v=e6DUrH56g14">easier and cheaper than ever before</a>, and enables both the <a href="https://istlsfastyet.com/">best</a> <a href="https://www.httpvshttps.com/">performance</a> the web offers and <a href="https://developer.mozilla.org/en-US/docs/Web/API/Navigator/getUserMedia">powerful</a> <a href="http://www.html5rocks.com/en/tutorials/service-worker/introduction/">new</a> <a href="https://www.chromium.org/Home/chromium-security/deprecating-powerful-features-on-insecure-origins">features</a> that are too sensitive for HTTP. Check out our <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/?hl=en">set-up guides</a> to get started.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Emily Schechter, Chrome Security Team</span><br /><i>[Updated on 12/5/16 with instructions for developers]</i><br /><b>Developers</b>: Read more about how to update your sites&nbsp;<a href="https://developers.google.com/web/updates/2016/10/avoid-not-secure-warn">here</a>.<br /><br />To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-rBbNGiLQzMw/V9CudVXYkjI/AAAAAAAAAWk/SIol_AChYQITBcYJ34xcGsC0a7_VP755gCLcB/s1600/blog%2Bimage%2B1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="248" src="https://4.bp.blogspot.com/-rBbNGiLQzMw/V9CudVXYkjI/AAAAAAAAAWk/SIol_AChYQITBcYJ34xcGsC0a7_VP755gCLcB/s640/blog%2Bimage%2B1.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Chrome currently indicates HTTP connections with a neutral indicator. This doesn’t reflect the true lack of security for HTTP connections. When you load a website over HTTP, someone else on the network can look at or <a href="http://webpolicy.org/2015/08/25/att-hotspots-now-with-advertising-injection/">modify</a> the site before it gets to you.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">A substantial portion of web traffic has transitioned to HTTPS so far, and HTTPS usage is consistently increasing. 
We recently hit a milestone with more than half of Chrome desktop page loads now served over HTTPS. In addition, since the time we <a href="https://www.google.com/transparencyreport/https/grid/">released our HTTPS report</a> in February, 12 more of the top 100 websites have changed their serving default from HTTP to HTTPS.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Studies show that users <a href="https://www.usenix.org/system/files/conference/soups2016/soups2016-paper-porter-felt.pdf">do not perceive</a> the lack of a “secure” icon as a warning, but also that users become blind to warnings that occur too frequently. <a href="https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure">Our plan</a> to label HTTP sites more clearly and accurately as non-secure will take place in gradual steps, based on increasingly stringent criteria. Starting January 2017, Chrome 56 will label HTTP pages with password or credit card form fields as "not secure," given their particularly sensitive nature.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In following releases, we will continue to extend HTTP warnings, for example, by labelling HTTP pages as “not secure” in Incognito mode, where users may have higher expectations of privacy. 
Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-DG70U0Y-y9w/V9Cwuym53AI/AAAAAAAAAW0/6zO81T_hqWMjdAF_YYK7dfXV-26DL7OYACLcB/s1600/blog%2Bimage%2B2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="193" src="https://3.bp.blogspot.com/-DG70U0Y-y9w/V9Cwuym53AI/AAAAAAAAAW0/6zO81T_hqWMjdAF_YYK7dfXV-26DL7OYACLcB/s400/blog%2Bimage%2B2.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">We will publish updates to this plan as we approach future releases, but don’t wait to get started moving to HTTPS. HTTPS is <a href="https://www.youtube.com/watch?v=e6DUrH56g14">easier and cheaper than ever before</a>, and enables both the <a href="https://istlsfastyet.com/">best</a> <a href="https://www.httpvshttps.com/">performance</a> the web offers and <a href="https://developer.mozilla.org/en-US/docs/Web/API/Navigator/getUserMedia">powerful</a> <a href="http://www.html5rocks.com/en/tutorials/service-worker/introduction/">new</a> <a href="https://www.chromium.org/Home/chromium-security/deprecating-powerful-features-on-insecure-origins">features</a> that are too sensitive for HTTP. Check out our <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/?hl=en">set-up guides</a> to get started.</div>]]></content:encoded>
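The Chrome 56 criterion described above, an HTTP page that collects passwords, can be approximated with a toy check. This is an illustrative Python sketch; Chrome's real detection also covers credit card fields and is considerably more sophisticated:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class PasswordFieldFinder(HTMLParser):
    # Flags <input type="password"> fields while parsing HTML.
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type") == "password":
            self.found = True

def would_be_marked_not_secure(url: str, html: str) -> bool:
    # Approximation of the announced Chrome 56 behavior: warn on
    # http:// pages containing a password field.
    if urlparse(url).scheme != "http":
        return False
    finder = PasswordFieldFinder()
    finder.feed(html)
    return finder.found
```

Site owners can run a check like this across their own pages to find the forms most urgently in need of an HTTPS migration.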
			<wfw:commentRss>https://googledata.org/google-online-security/moving-towards-a-more-secure-web/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Keeping Android safe: Security enhancements in Nougat</title>
		<link>https://googledata.org/google-online-security/keeping-android-safe-security-enhancements-in-nougat-2/</link>
		<comments>https://googledata.org/google-online-security/keeping-android-safe-security-enhancements-in-nougat-2/#comments</comments>
		<pubDate>Tue, 06 Sep 2016 23:43:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=bf75278478cedc9ff7e00982cf483995</guid>
		<description><![CDATA[<span>Posted by Xiaowen Xin, Android Security Team</span><br /><i>[Cross-posted from the <a href="https://android-developers.blogspot.com/2016/09/security-enhancements-in-nougat.html">Android Developers Blog</a>]</i><br /><br />Over the course of the summer, we previewed a variety of security enhancements in Android 7.0 Nougat: an increased focus on security with our <a href="https://android-developers.blogspot.com/2016/06/one-year-of-android-security-rewards.html">vulnerability rewards program</a>, a new <a href="https://android-developers.blogspot.com/2016/04/developing-for-direct-boot.html">Direct Boot mode</a>, re-architected mediaserver and <a href="https://android-developers.blogspot.com/2016/05/hardening-media-stack.html">hardened media stack</a>, apps that are protected from <a href="https://android-developers.blogspot.com/2016/04/protecting-against-unintentional.html">accidental regressions to cleartext traffic</a>, an update to the way Android handles <a href="https://android-developers.blogspot.com/2016/07/changes-to-trusted-certificate.html">trusted certificate authorities</a>, strict enforcement of <a href="https://android-developers.blogspot.com/2016/07/strictly-enforced-verified-boot-with.html">verified boot</a> with error correction, and <a href="https://android-developers.blogspot.com/2016/07/protecting-android-with-more-linux.html">updates to the Linux kernel to reduce the attack surface and increase memory protection</a>. Phew!<br /><br />Now that Nougat has begun to roll out, we wanted to recap these updates in a single overview and highlight a few new improvements.<br /><b>Direct Boot and encryption</b><br /><b><br /></b>In previous versions of Android, users with encrypted devices would have to enter their PIN/pattern/password by default during the boot process to decrypt their storage area and finish booting. 
With Android 7.0 Nougat, we&#8217;ve updated the underlying encryption scheme and streamlined the boot process to speed up rebooting your phone. Now your phone&#8217;s main features, like the phone app and your alarm clock, are ready right away before you even type your PIN, so people can call you and your alarm clock can wake you up. We call this feature <a href="https://android-developers.blogspot.com/2016/04/developing-for-direct-boot.html">Direct Boot</a>.<br /><br />Under the hood, file-based encryption enables this improved user experience. With this new encryption scheme, the system storage area, as well as each user profile storage area, are all encrypted separately. Unlike with full-disk encryption, where all data was encrypted as a single unit, per-profile-based encryption enables the system to reboot normally into a functional state using just device keys. Essential apps can opt in to run in a limited state after reboot, and when you enter your lock screen credential, these apps then get access to your user data to provide full functionality.<br /><br />File-based encryption better isolates and protects individual users and profiles on a device by encrypting data at a finer granularity. Each profile is encrypted using a unique key that can only be unlocked by your PIN or password, so that your data can only be decrypted by you.<br /><br /><div><a href="https://1.bp.blogspot.com/-M1hMaG1w3hw/V89PovCOdYI/AAAAAAAAAWI/ePEsP5f1ABs9ib5kWSeCoJyYxy4XtnRygCLcB/s1600/image00.png"><img border="0" height="381" src="https://1.bp.blogspot.com/-M1hMaG1w3hw/V89PovCOdYI/AAAAAAAAAWI/ePEsP5f1ABs9ib5kWSeCoJyYxy4XtnRygCLcB/s400/image00.png" width="400"></a></div><div>Encryption support is getting stronger across the Android ecosystem as well. Starting with Marshmallow, all capable devices were required to support encryption. Many devices, like the Nexus 5X and 6P, also use unique keys that are accessible only with trusted hardware, such as the ARM TrustZone. 
Now with 7.0 Nougat, all new capable Android devices must also have this kind of hardware support for key storage and provide brute force protection while verifying your lock screen credential before these keys can be used. This way, all of your data can only be decrypted on that exact device and only by you.</div><div><br /></div><div><b><br /></b></div><div><b>The media stack and platform hardening</b></div><div><b><br /></b></div><div>In Android Nougat, we&#8217;ve both hardened and <a href="https://android-developers.blogspot.com/2016/05/hardening-media-stack.html">re-architected</a> mediaserver, one of the main system services that processes untrusted input. First, by incorporating integer overflow sanitization, part of Clang&#8217;s <a href="http://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html">UndefinedBehaviorSanitizer</a>, we prevent an entire class of vulnerabilities, which comprise the majority of reported libstagefright bugs. As soon as an integer overflow is detected, we shut down the process so an attack is stopped. Second, we&#8217;ve modularized the media stack to put different components into individual sandboxes and tightened the privileges of each sandbox to have the minimum privileges required to perform its job. 
With this containment technique, a compromise in many parts of the stack grants the attacker significantly fewer permissions and a significantly reduced kernel attack surface.</div><div><br /></div><div>In addition to hardening the mediaserver, we&#8217;ve added a large list of protections for the platform, including:</div><div></div><ul><li>Verified Boot: Verified Boot is now strictly enforced to prevent compromised devices from booting; it supports <a href="https://android-developers.blogspot.com/2016/07/strictly-enforced-verified-boot-with.html">error correction</a> to improve reliability against non-malicious data corruption.</li><li>SELinux: Updated SELinux configuration and increased Seccomp coverage further locks down the application sandbox and reduces attack surface.</li><li>Library load order randomization and improved ASLR: Increased randomness makes some code-reuse attacks less reliable.</li><li><a href="https://android-developers.blogspot.com/2016/07/protecting-android-with-more-linux.html">Kernel hardening</a>: Added additional memory protection for newer kernels by <a href="https://android-review.googlesource.com/#/q/status:merged+project:kernel/common+branch:android-3.18+topic:arm64-ronx">marking portions of kernel memory as read-only</a>, <a href="https://android-review.googlesource.com/#/q/status:merged+project:kernel/common+branch:android-4.1+topic:sw_PAN">restricting kernel access to userspace addresses</a>, and further reducing the existing attack surface.</li><li><a href="https://developer.android.com/preview/api-overview.html?utm_campaign=android_discussion_security_090616&#38;utm_source=anddev&#38;utm_medium=blog#apk_signature_v2">APK signature scheme v2</a>: Introduced a whole-file signature scheme that improves <a href="https://source.android.com/security/apksigning/v2.html#verification">verification speed</a> and strengthens integrity guarantees.</li></ul><div><b>App security improvements</b></div><div><b><br
/></b></div><div>Android Nougat is the safest and easiest version of Android for application developers to use.</div><div><ul><li>Apps that want to share data with other apps now must explicitly opt in by offering their files through a <a href="https://developer.android.com/guide/topics/providers/content-providers.html?utm_campaign=android_discussion_security_090616&#38;utm_source=anddev&#38;utm_medium=blog">Content Provider</a>, like <a href="https://developer.android.com/reference/android/support/v4/content/FileProvider.html?utm_campaign=android_discussion_security_090616&#38;utm_source=anddev&#38;utm_medium=blog">FileProvider</a>. The application private directory (usually /data/data/) is now set to Linux permission 0700 for apps targeting API Level 24+.</li><li>To make it easier for apps to control access to their secure network traffic, user-installed certificate authorities and those installed through Device Admin APIs are <a href="https://android-developers.blogspot.com/2016/07/changes-to-trusted-certificate.html">no longer trusted by default</a> for apps targeting API Level 24+. Additionally, all new Android devices must ship with the <a href="https://source.android.com/security/overview/app-security.html#certificate-authorities">same trusted CA store</a>.</li><li>With <a href="https://developer.android.com/preview/features/security-config.html?utm_campaign=android_discussion_security_090616&#38;utm_source=anddev&#38;utm_medium=blog">Network Security Config</a>, developers can more easily configure network security policy through a declarative configuration file. 
This includes blocking cleartext traffic, configuring the set of trusted CAs and certificates, and setting up a separate debug configuration.</li></ul><div>We&#8217;ve also continued to refine app permissions and capabilities to protect you from potentially harmful apps.</div></div><div><ul><li>To improve device privacy, we have further restricted and removed access to persistent device identifiers such as MAC addresses.</li><li>User interface overlays can no longer be displayed on top of permissions dialogs. This &#8220;clickjacking&#8221; technique was used by some apps to attempt to gain permissions improperly.</li><li>We&#8217;ve reduced the power of device admin applications so they can no longer change your lockscreen if you have a lockscreen set, and device admin will no longer be notified of impending disable via <a href="https://developer.android.com/reference/android/app/admin/DeviceAdminReceiver.html?utm_campaign=android_discussion_security_090616&#38;utm_source=anddev&#38;utm_medium=blog#onDisableRequested(android.content.Context,%20android.content.Intent)">onDisableRequested()</a>. These were tactics used by some ransomware to gain control of a device.</li></ul><div><b>System Updates</b></div></div><div><br /></div><div>Lastly, we've made significant enhancements to the OTA update system to keep your device up-to-date much more easily with the latest system software and security patches. We've made the install time for OTAs faster, and the OTA size smaller for security updates. 
You no longer have to wait for the optimizing apps step, which was one of the slowest parts of the update process, because the new JIT compiler has been <a href="https://developer.android.com/about/versions/nougat/android-7.0.html?utm_campaign=android_discussion_security_090616&#38;utm_source=anddev&#38;utm_medium=blog#doze_on_the_go">optimized</a> to make installs and updates lightning fast.</div><div><br /></div><div>The update experience is even faster for new Android devices running Nougat with updated firmware. As on Chromebooks, updates are applied in the background while the device continues to run normally. These updates are applied to a different system partition, and when you reboot, the device seamlessly switches to the new partition running the new system software version.</div><div><br /></div><div>We&#8217;re constantly working to improve Android security, and Android Nougat brings significant security improvements across all fronts. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at <a href="mailto:security@android.com">security@android.com</a>.</div><br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Xiaowen Xin, Android Security Team</span><br /><i>[Cross-posted from the <a href="https://android-developers.blogspot.com/2016/09/security-enhancements-in-nougat.html">Android Developers Blog</a>]</i><br /><br />Over the course of the summer, we previewed a variety of security enhancements in Android 7.0 Nougat: an increased focus on security with our <a href="https://android-developers.blogspot.com/2016/06/one-year-of-android-security-rewards.html">vulnerability rewards program</a>, a new <a href="https://android-developers.blogspot.com/2016/04/developing-for-direct-boot.html">Direct Boot mode</a>, re-architected mediaserver and <a href="https://android-developers.blogspot.com/2016/05/hardening-media-stack.html">hardened media stack</a>, apps that are protected from <a href="https://android-developers.blogspot.com/2016/04/protecting-against-unintentional.html">accidental regressions to cleartext traffic</a>, an update to the way Android handles <a href="https://android-developers.blogspot.com/2016/07/changes-to-trusted-certificate.html">trusted certificate authorities</a>, strict enforcement of <a href="https://android-developers.blogspot.com/2016/07/strictly-enforced-verified-boot-with.html">verified boot</a> with error correction, and <a href="https://android-developers.blogspot.com/2016/07/protecting-android-with-more-linux.html">updates to the Linux kernel to reduce the attack surface and increase memory protection</a>. Phew!<br /><br />Now that Nougat has begun to roll out, we wanted to recap these updates in a single overview and highlight a few new improvements.<br /><b>Direct Boot and encryption</b><br /><b><br /></b>In previous versions of Android, users with encrypted devices would have to enter their PIN/pattern/password by default during the boot process to decrypt their storage area and finish booting. 
With Android 7.0 Nougat, we’ve updated the underlying encryption scheme and streamlined the boot process to speed up rebooting your phone. Now your phone’s main features, like the phone app and your alarm clock, are ready right away before you even type your PIN, so people can call you and your alarm clock can wake you up. We call this feature <a href="https://android-developers.blogspot.com/2016/04/developing-for-direct-boot.html">Direct Boot</a>.<br /><br />Under the hood, file-based encryption enables this improved user experience. With this new encryption scheme, the system storage area, as well as each user profile storage area, are all encrypted separately. Unlike with full-disk encryption, where all data was encrypted as a single unit, per-profile-based encryption enables the system to reboot normally into a functional state using just device keys. Essential apps can opt in to run in a limited state after reboot, and when you enter your lock screen credential, these apps then get access to your user data to provide full functionality.<br /><br />File-based encryption better isolates and protects individual users and profiles on a device by encrypting data at a finer granularity. Each profile is encrypted using a unique key that can only be unlocked by your PIN or password, so that your data can only be decrypted by you.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-M1hMaG1w3hw/V89PovCOdYI/AAAAAAAAAWI/ePEsP5f1ABs9ib5kWSeCoJyYxy4XtnRygCLcB/s1600/image00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="381" src="https://1.bp.blogspot.com/-M1hMaG1w3hw/V89PovCOdYI/AAAAAAAAAWI/ePEsP5f1ABs9ib5kWSeCoJyYxy4XtnRygCLcB/s400/image00.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">Encryption support is getting stronger across the Android ecosystem as well. 
Starting with Marshmallow, all capable devices were required to support encryption. Many devices, like the Nexus 5X and 6P, also use unique keys that are accessible only with trusted hardware, such as the ARM TrustZone. Now with 7.0 Nougat, all new capable Android devices must also have this kind of hardware support for key storage and provide brute force protection while verifying your lock screen credential before these keys can be used. This way, all of your data can only be decrypted on that exact device and only by you.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b><br /></b></div><div class="separator" style="clear: both; text-align: left;"><b>The media stack and platform hardening</b></div><div class="separator" style="clear: both; text-align: left;"><b><br /></b></div><div class="separator" style="clear: both; text-align: left;">In Android Nougat, we’ve both hardened and <a href="https://android-developers.blogspot.com/2016/05/hardening-media-stack.html">re-architected</a> mediaserver, one of the main system services that processes untrusted input. First, by incorporating integer overflow sanitization, part of Clang’s <a href="http://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html">UndefinedBehaviorSanitizer</a>, we prevent an entire class of vulnerabilities, which comprise the majority of reported libstagefright bugs. As soon as an integer overflow is detected, we shut down the process so an attack is stopped. Second, we’ve modularized the media stack to put different components into individual sandboxes and tightened the privileges of each sandbox to have the minimum privileges required to perform its job. 
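To illustrate the kind of arithmetic the sanitizer guards, here is a small sketch; ComputeFrameBytes and its parameters are invented for illustration and are not actual mediaserver code:

```cpp
#include <cstdint>

// Hypothetical media-parser buffer-size computation (NOT real
// mediaserver code). Compiled with Clang's
// -fsanitize=signed-integer-overflow (part of UBSan), an unchecked
// width * height * bpp would abort the process the moment it
// overflows -- the "detect and shut down" behavior described above.
// Here the overflow is also caught explicitly so the caller can
// reject the malformed input instead of crashing.
bool ComputeFrameBytes(int32_t width, int32_t height, int32_t bpp,
                       int64_t *out_bytes) {
  if (width <= 0 || height <= 0 || bpp <= 0) return false;
  int64_t pixels = 0;
  if (__builtin_mul_overflow(static_cast<int64_t>(width),
                             static_cast<int64_t>(height), &pixels))
    return false;  // would overflow: treat input as malformed
  if (__builtin_mul_overflow(pixels, static_cast<int64_t>(bpp), out_bytes))
    return false;
  return true;
}
```

Attacker-controlled header fields feeding a size computation like this are exactly where the reported libstagefright overflows lived; sanitization turns silent wraparound into an immediate, attack-stopping process exit.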
With this containment technique, a compromise in many parts of the stack grants the attacker significantly fewer permissions and a significantly reduced kernel attack surface.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In addition to hardening the mediaserver, we’ve added a large list of protections for the platform, including:</div><div class="separator" style="clear: both; text-align: left;"></div><ul><li>Verified Boot: Verified Boot is now strictly enforced to prevent compromised devices from booting; it supports <a href="https://android-developers.blogspot.com/2016/07/strictly-enforced-verified-boot-with.html">error correction</a> to improve reliability against non-malicious data corruption.</li><li>SELinux: Updated SELinux configuration and increased Seccomp coverage further locks down the application sandbox and reduces attack surface.</li><li>Library load order randomization and improved ASLR: Increased randomness makes some code-reuse attacks less reliable.</li><li><a href="https://android-developers.blogspot.com/2016/07/protecting-android-with-more-linux.html">Kernel hardening</a>: Added additional memory protection for newer kernels by <a href="https://android-review.googlesource.com/#/q/status:merged+project:kernel/common+branch:android-3.18+topic:arm64-ronx">marking portions of kernel memory as read-only</a>, <a href="https://android-review.googlesource.com/#/q/status:merged+project:kernel/common+branch:android-4.1+topic:sw_PAN">restricting kernel access to userspace addresses</a>, and further reducing the existing attack surface.</li><li><a href="https://developer.android.com/preview/api-overview.html?utm_campaign=android_discussion_security_090616&amp;utm_source=anddev&amp;utm_medium=blog#apk_signature_v2">APK signature scheme v2</a>: Introduced a whole-file signature scheme that improves <a 
href="https://source.android.com/security/apksigning/v2.html#verification">verification speed</a> and strengthens integrity guarantees.</li></ul><div><b>App security improvements</b></div><div><b><br /></b></div><div>Android Nougat is the safest and easiest version of Android for application developers to use.</div><div><ul><li>Apps that want to share data with other apps now must explicitly opt in by offering their files through a <a href="https://developer.android.com/guide/topics/providers/content-providers.html?utm_campaign=android_discussion_security_090616&amp;utm_source=anddev&amp;utm_medium=blog">Content Provider</a>, like <a href="https://developer.android.com/reference/android/support/v4/content/FileProvider.html?utm_campaign=android_discussion_security_090616&amp;utm_source=anddev&amp;utm_medium=blog">FileProvider</a>. The application private directory (usually /data/data/) is now set to Linux permission 0700 for apps targeting API Level 24+.</li><li>To make it easier for apps to control access to their secure network traffic, user-installed certificate authorities and those installed through Device Admin APIs are <a href="https://android-developers.blogspot.com/2016/07/changes-to-trusted-certificate.html">no longer trusted by default</a> for apps targeting API Level 24+. Additionally, all new Android devices must ship with the <a href="https://source.android.com/security/overview/app-security.html#certificate-authorities">same trusted CA store</a>.</li><li>With <a href="https://developer.android.com/preview/features/security-config.html?utm_campaign=android_discussion_security_090616&amp;utm_source=anddev&amp;utm_medium=blog">Network Security Config</a>, developers can more easily configure network security policy through a declarative configuration file. 
This includes blocking cleartext traffic, configuring the set of trusted CAs and certificates, and setting up a separate debug configuration.</li></ul><div>We’ve also continued to refine app permissions and capabilities to protect you from potentially harmful apps.</div></div><div><ul><li>To improve device privacy, we have further restricted and removed access to persistent device identifiers such as MAC addresses.</li><li>User interface overlays can no longer be displayed on top of permissions dialogs. This “clickjacking” technique was used by some apps to attempt to gain permissions improperly.</li><li>We’ve reduced the power of device admin applications so they can no longer change your lockscreen if you have a lockscreen set, and device admin will no longer be notified of impending disable via <a href="https://developer.android.com/reference/android/app/admin/DeviceAdminReceiver.html?utm_campaign=android_discussion_security_090616&amp;utm_source=anddev&amp;utm_medium=blog#onDisableRequested(android.content.Context,%20android.content.Intent)">onDisableRequested()</a>. These were tactics used by some ransomware to gain control of a device.</li></ul><div><b>System Updates</b></div></div><div><br /></div><div>Lastly, we've made significant enhancements to the OTA update system to keep your device up-to-date much more easily with the latest system software and security patches. We've made the install time for OTAs faster, and the OTA size smaller for security updates. 
You no longer have to wait for the optimizing apps step, which was one of the slowest parts of the update process, because the new JIT compiler has been <a href="https://developer.android.com/about/versions/nougat/android-7.0.html?utm_campaign=android_discussion_security_090616&amp;utm_source=anddev&amp;utm_medium=blog#doze_on_the_go">optimized</a> to make installs and updates lightning fast.</div><div><br /></div><div>The update experience is even faster for new Android devices running Nougat with updated firmware. As on Chromebooks, updates are applied in the background while the device continues to run normally. These updates are applied to a different system partition, and when you reboot, the device seamlessly switches to the new partition running the new system software version.</div><div><br /></div><div>We’re constantly working to improve Android security, and Android Nougat brings significant security improvements across all fronts. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at <a href="mailto:security@android.com">security@android.com</a>.</div><br /><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/keeping-android-safe-security-enhancements-in-nougat-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>More Safe Browsing Help for Webmasters</title>
		<link>https://googledata.org/google-online-security/more-safe-browsing-help-for-webmasters/</link>
		<comments>https://googledata.org/google-online-security/more-safe-browsing-help-for-webmasters/#comments</comments>
		<pubDate>Tue, 06 Sep 2016 20:02:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=bffa9dea5f0c3afa890eab852571bcfd</guid>
		<description><![CDATA[<span>Posted by Kelly Hope Harrington, Safe Browsing Team</span><br />For more than <a href="https://webmasters.googleblog.com/2007/08/malware-reviews-via-webmaster-tools.html">nine years</a>, Safe Browsing has helped webmasters via Search Console with information about how to fix security issues with their sites. This includes relevant Help Center articles, example URLs to assist in diagnosing the presence of harmful content, and a process for webmasters to request reviews of their site after security issues are addressed. Over time, Safe Browsing has expanded its protection to cover additional threats to user safety such as <a href="https://security.googleblog.com/2015/11/safe-browsing-protection-from-even-more.html">Deceptive Sites</a> and <a href="https://security.googleblog.com/2015/02/more-protection-from-unwanted-software.html">Unwanted Software</a>.<br /><br />To help webmasters be even more successful in resolving issues, we&#8217;re happy to announce that we&#8217;ve updated the information available in Search Console in the Security Issues report.<br /><br /><div><a href="https://2.bp.blogspot.com/-VXTCgPpwD7I/V88OWDqXAgI/AAAAAAAAAVs/W4Dd__T7ZnoxugEyqY0kT42xwtQlouSJACLcB/s1600/Screen%2BShot%2B2016-08-26%2Bat%2B1.04.29%2BPM.png"><img border="0" height="400" src="https://2.bp.blogspot.com/-VXTCgPpwD7I/V88OWDqXAgI/AAAAAAAAAVs/W4Dd__T7ZnoxugEyqY0kT42xwtQlouSJACLcB/s640/Screen%2BShot%2B2016-08-26%2Bat%2B1.04.29%2BPM.png" width="640"></a></div><div><br /></div>The updated information provides more specific explanations of six different security issues detected by Safe Browsing, including <a href="https://developers.google.com/webmasters/hacked/">malware</a>, <a href="https://support.google.com/webmasters/answer/6350487">deceptive pages</a>, <a href="https://support.google.com/webmasters/answer/3258249">harmful downloads</a>, and <a href="https://support.google.com/webmasters/answer/3258249#if_uncommon">uncommon downloads</a>. 
These explanations give webmasters more context and detail about what Safe Browsing found. We also offer tailored recommendations for each type of issue, including sample URLs that webmasters can check to identify the source of the issue, as well as specific remediation actions webmasters can take to resolve the issue.<br /><br />We on the Safe Browsing team definitely recommend <a href="https://support.google.com/webmasters/topic/4564315">registering</a> your site in Search Console even if it is not currently experiencing a security issue. We send notifications through Search Console so webmasters can address any issues that appear <a href="http://research.google.com/pubs/pub44924.html">as quickly as possible</a>.<br /><br />Our goal is to help webmasters provide a safe and secure browsing experience for their users. We welcome any questions or feedback about the new features on the <a href="https://support.google.com/webmasters/go/community">Google Webmaster Help Forum</a>, where <a href="https://topcontributor.withgoogle.com/">Top Contributors</a> and Google employees are available to help.<br /><br />For more information about Safe Browsing&#8217;s ongoing work to shine light on the state of web security and encourage safer web security practices, check out our summary of trends and findings on the <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing Transparency Report</a>. If you&#8217;re interested in the tools Google provides for webmasters and developers dealing with hacked sites, this <a href="https://www.youtube.com/watch?v=mbJvL61DOZg">video</a> provides a great overview.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Kelly Hope Harrington, Safe Browsing Team</span><br />For more than <a href="https://webmasters.googleblog.com/2007/08/malware-reviews-via-webmaster-tools.html">nine years</a>, Safe Browsing has helped webmasters via Search Console with information about how to fix security issues with their sites. This includes relevant Help Center articles, example URLs to assist in diagnosing the presence of harmful content, and a process for webmasters to request reviews of their site after security issues are addressed. Over time, Safe Browsing has expanded its protection to cover additional threats to user safety such as <a href="https://security.googleblog.com/2015/11/safe-browsing-protection-from-even-more.html">Deceptive Sites</a> and <a href="https://security.googleblog.com/2015/02/more-protection-from-unwanted-software.html">Unwanted Software</a>.<br /><br />To help webmasters be even more successful in resolving issues, we’re happy to announce that we’ve updated the information available in Search Console in the Security Issues report.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-VXTCgPpwD7I/V88OWDqXAgI/AAAAAAAAAVs/W4Dd__T7ZnoxugEyqY0kT42xwtQlouSJACLcB/s1600/Screen%2BShot%2B2016-08-26%2Bat%2B1.04.29%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://2.bp.blogspot.com/-VXTCgPpwD7I/V88OWDqXAgI/AAAAAAAAAVs/W4Dd__T7ZnoxugEyqY0kT42xwtQlouSJACLcB/s640/Screen%2BShot%2B2016-08-26%2Bat%2B1.04.29%2BPM.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>The updated information provides more specific explanations of six different security issues detected by Safe Browsing, including <a href="https://developers.google.com/webmasters/hacked/">malware</a>, <a href="https://support.google.com/webmasters/answer/6350487">deceptive pages</a>, <a 
href="https://support.google.com/webmasters/answer/3258249">harmful downloads</a>, and <a href="https://support.google.com/webmasters/answer/3258249#if_uncommon">uncommon downloads</a>. These explanations give webmasters more context and detail about what Safe Browsing found. We also offer tailored recommendations for each type of issue, including sample URLs that webmasters can check to identify the source of the issue, as well as specific remediation actions webmasters can take to resolve the issue.<br /><br />We on the Safe Browsing team definitely recommend <a href="https://support.google.com/webmasters/topic/4564315">registering</a> your site in Search Console even if it is not currently experiencing a security issue. We send notifications through Search Console so webmasters can address any issues that appear <a href="http://research.google.com/pubs/pub44924.html">as quickly as possible</a>.<br /><br />Our goal is to help webmasters provide a safe and secure browsing experience for their users. We welcome any questions or feedback about the new features on the <a href="https://support.google.com/webmasters/go/community">Google Webmaster Help Forum</a>, where <a href="https://topcontributor.withgoogle.com/">Top Contributors</a> and Google employees are available to help.<br /><br />For more information about Safe Browsing’s ongoing work to shine light on the state of web security and encourage safer web security practices, check out our summary of trends and findings on the <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing Transparency Report</a>. If you’re interested in the tools Google provides for webmasters and developers dealing with hacked sites, this <a href="https://www.youtube.com/watch?v=mbJvL61DOZg">video</a> provides a great overview.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/more-safe-browsing-help-for-webmasters/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Guided in-process fuzzing of Chrome components</title>
		<link>https://googledata.org/google-online-security/guided-in-process-fuzzing-of-chrome-components/</link>
		<comments>https://googledata.org/google-online-security/guided-in-process-fuzzing-of-chrome-components/#comments</comments>
		<pubDate>Fri, 05 Aug 2016 15:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=0c43d614dd7382a5c1ca3e97c8a64903</guid>
		<description><![CDATA[<span>Posted by Max Moroz, <a href="https://www.chromium.org/Home/chromium-security/bugs">Chrome Security Engineer</a> and Kostya Serebryany, <a href="https://github.com/google/sanitizers">Sanitizer Tsar</a></span><br /><br />In the past, we&#8217;ve posted about innovations in fuzzing, a software testing technique used to discover coding errors and security vulnerabilities. The topics have included <a href="http://blog.chromium.org/2011/06/testing-chromium-addresssanitizer-fast.html">AddressSanitizer</a>, <a href="http://blog.chromium.org/2012/04/fuzzing-for-security.html">ClusterFuzz</a>, <a href="http://blog.chromium.org/2013/05/testing-chromium-syzyasan-lightweight.html">SyzyASAN</a>, <a href="http://blog.chromium.org/2014/04/testing-chromium-threadsanitizer-v2.html">ThreadSanitizer</a> and <a href="https://www.google.com/search?q=site%3Asecurity.googleblog.com%20fuzzing">others</a>.<br /><br />Today we'd like to talk about <a href="http://llvm.org/docs/LibFuzzer.html">libFuzzer</a> (part of the <a href="http://llvm.org/">LLVM</a> project), an engine for <b>in-process, coverage-guided, white-box fuzzing</b>:<br /><br /><ul><li>By <b>in-process</b>, we mean that we don&#8217;t launch a new process for every test case, and that we mutate inputs directly in memory.</li><li>By <b>coverage-guided</b>, we mean that we measure code coverage for every input, and accumulate test cases that increase overall coverage.</li><li>By <b>white-box</b>, we mean that we use compile-time instrumentation of the source code.</li></ul><div><br />LibFuzzer makes it possible to fuzz individual components of Chrome. This means you don&#8217;t need to generate an HTML page or network payload and launch the whole browser, which adds overhead and flakiness to testing. Instead, you can fuzz any function or internal API directly. 
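To make the three bullets above concrete, here is a toy, self-contained sketch of an in-process, coverage-guided loop. It is not libFuzzer&#8217;s implementation: the TargetFunction and its hand-rolled &#8220;edge&#8221; bookkeeping are invented for illustration, whereas real coverage signals come from compile-time instrumentation.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <set>
#include <string>
#include <vector>

// Toy stand-ins: a "coverage" signal and a target under test. In real
// libFuzzer, coverage comes from compile-time instrumentation (white-box).
static std::set<int> g_edges_seen;

static void TargetFunction(const uint8_t* data, size_t size) {
  // Pretend each matched prefix of "FUZZ" is a distinct code edge.
  const char magic[] = "FUZZ";
  for (size_t i = 0; i < size && i < 4 && data[i] == magic[i]; ++i)
    g_edges_seen.insert(static_cast<int>(i));
}

// One round of the in-process, coverage-guided loop: pick a corpus entry,
// mutate it directly in memory, run the target in the same process, and
// keep the mutated input if it increased overall coverage.
static void FuzzOnce(std::vector<std::string>& corpus, std::mt19937& rng) {
  std::string input = corpus[rng() % corpus.size()];
  if (input.empty()) input.push_back('\0');
  input[rng() % input.size()] = static_cast<char>(rng() % 256);  // mutate
  size_t before = g_edges_seen.size();
  TargetFunction(reinterpret_cast<const uint8_t*>(input.data()), input.size());
  if (g_edges_seen.size() > before) corpus.push_back(input);  // new coverage
}
```

Calling FuzzOnce repeatedly grows a corpus of inputs that each unlock a new toy edge ("F", then "FU", and so on) - the essence of coverage guidance.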
Based on our experience, libFuzzer-based fuzzing is extremely efficient, more reliable, and usually thousands of times faster than traditional out-of-process fuzzing.</div><div><br /></div><div>Our goal is to have fuzz testing for every component of Chrome where fuzzing is applicable, and we hope all Chromium developers and external security researchers will contribute to this effort.</div><div><br /></div><div><b>How to write a fuzz target</b></div><div><br /></div><div>With libFuzzer, you need to write only one function, which we call a target function or a fuzz target. It accepts a data buffer and length as input and then feeds it into the code we want to test. And... that&#8217;s it!</div><div><br /></div><div>The fuzz targets are not specific to libFuzzer. Currently, we also run them with <a href="http://lcamtuf.coredump.cx/afl/">AFL</a>, and we expect to use other fuzzing engines in the future.<br /><b><a href="https://chromium.googlesource.com/chromium/src/+/master/testing/libfuzzer/fuzzers/convert_woff2ttf_fuzzer.cc">Sample Fuzzer</a></b><br /><br /><div></div><span><span><span>extern</span><span> </span><span>"</span><span>C</span><span>"</span><span> </span><span>int</span><span> </span><span>LLVMFuzzerTestOneInput</span><span>(</span><span>const</span><span> uint8_t</span><span>*</span><span> </span><span>data</span><span>,</span><span> </span><span>size_t</span><span> </span><span>size</span><span>)</span><span> </span><span>{</span><span><br /></span><span> &#160;std</span><span>::</span><span>string buf</span><span>;</span><span><br /></span><span> &#160;woff2</span><span>::</span><span>WOFF2StringOut out</span><span>(&#38;</span><span>buf</span><span>)</span><span>;</span><span><br /></span><span> &#160;out</span><span>.</span><span>SetMaxSize</span><span>(</span><span>30</span><span> </span><span>*</span><span> </span><span>1024</span><span> </span><span>*</span><span> </span><span>1024</span><span>)</span><span>;</span><span><br /></span><span> 
&#160;woff2</span><span>::</span><span>ConvertWOFF2ToTTF</span><span>(</span><span>data</span><span>,</span><span> </span><span>size</span><span>,</span><span> </span><span>&#38;</span><span>out</span><span>)</span><span>;</span><span><br /></span><span> &#160;</span><span>return</span><span> </span><span>0</span><span>;</span><span><br /></span><span>}</span></span></span><br />See also the <a href="https://cs.chromium.org/chromium/src/testing/libfuzzer/fuzzers/BUILD.gn?type=cs&#38;q=convert_woff2ttf_fuzzer+file:BUILD.gn&#38;sq=package:chromium&#38;l=269">build rule</a>.<br /><b><a href="https://bugs.chromium.org/p/chromium/issues/detail?id=609042">Sample Bug</a></b><br /><br /><div></div><div dir="ltr"><span><span>==9896==ERROR: AddressSanitizer: </span><span>heap-buffer-overflow</span><span> on address 0x62e000022836 at pc 0x000000499c51 bp 0x7fffa0dc1450 sp 0x7fffa0dc0c00</span></span></div><div dir="ltr"><span><span>WRITE of size 41994 at 0x62e000022836 thread T0</span></span></div><div dir="ltr"><span><span>SCARINESS: 45 (multi-byte-write-heap-buffer-overflow)</span></span></div><div dir="ltr"><span><span> &#160;&#160;&#160;#0 0x499c50 in __asan_memcpy</span></span></div><div dir="ltr"><span><span> &#160;&#160;&#160;#1 0x4e6b50 in Read third_party/woff2/src/buffer.h:86:7</span></span></div><div dir="ltr"><span><span> &#160;&#160;&#160;#2 0x4e6b50 in ReconstructGlyf third_party/woff2/src/woff2_dec.cc:500</span></span></div><div dir="ltr"><span><span> &#160;&#160;&#160;#3 0x4e6b50 in ReconstructFont third_party/woff2/src/woff2_dec.cc:917</span></span></div><div dir="ltr"><span><span> &#160;&#160;&#160;#4 0x4e6b50 in woff2::ConvertWOFF2ToTTF(unsigned char const*, unsigned long, woff2::WOFF2Out*) third_party/woff2/src/woff2_dec.cc:1282</span></span></div><span><span><span> &#160;&#160;&#160;#5 0x4dbfd6 in LLVMFuzzerTestOneInput testing/libfuzzer/fuzzers/convert_woff2ttf_fuzzer.cc:15:3</span></span></span><br /><br />Check out <a 
href="https://chromium.googlesource.com/chromium/src/+/master/testing/libfuzzer/README.md">our documentation</a> for additional information.<br /><br /></div><b>Integrating LibFuzzer with ClusterFuzz</b><br /><br /><a href="http://blog.chromium.org/2012/04/fuzzing-for-security.html">ClusterFuzz</a> is Chromium&#8217;s infrastructure for large scale fuzzing. It automates crash detection, report deduplication, test minimization, and other tasks. Once you commit a fuzz target into the Chromium codebase (<a href="https://cs.chromium.org/chromium/src/testing/libfuzzer/fuzzers/">examples</a>), ClusterFuzz will automatically pick it up and fuzz it with libFuzzer and AFL.&#160; <br /><br />ClusterFuzz supports most of the libFuzzer features like dictionaries, seed corpus and custom options for different fuzzers. Check out our <a href="https://chromium.googlesource.com/chromium/src/+/master/testing/libfuzzer/efficient_fuzzer.md">Efficient Fuzzer Guide</a> to learn how to use them. <br /><br />Besides the initial seed corpus, we store, minimize, and synchronize the corpora for every fuzzer and across all bots. This allows us to continuously increase code coverage over time and find interesting bugs along the way.<br /><br />ClusterFuzz uses the following memory debugging tools with libFuzzer-based fuzzers:<br /><ul><li><b>AddressSanitizer (ASan):</b> 500 GCE VMs</li><li><b>MemorySanitizer (MSan):</b> 100 GCE VMs</li><li><b>UndefinedBehaviorSanitizer (UBSan):</b> 100 GCE VMs</li></ul><div><b><br /></b><b>Sample Fuzzer Statistics</b></div><div><br /><br />It&#8217;s important to track and analyze performance of fuzzers. 
To that end, we have a dashboard that tracks fuzzer statistics and is accessible to all Chromium developers:</div><div><br /></div><div><a href="https://1.bp.blogspot.com/-k89lJb-TR4w/V6QfUWIIafI/AAAAAAAAAVU/TutPOba471A6qjYW4glsvOcCwQsq8K5XACEw/s1600/statistics.png"><img border="0" height="395" src="https://1.bp.blogspot.com/-k89lJb-TR4w/V6QfUWIIafI/AAAAAAAAAVU/TutPOba471A6qjYW4glsvOcCwQsq8K5XACEw/s640/statistics.png" width="640"></a></div><div><br /><b>Overall statistics for the last 30 days:</b><br /><ul><li><a href="https://cs.chromium.org/search/?q=%22+int+LLVMFuzzerTestOneInput%22+-libFuzzer/src+-llvm/+-buildtools+-file:.md&#38;sq=package:chromium&#38;type=cs">120</a>&#160;fuzzers</li><li><a href="https://bugs.chromium.org/p/chromium/issues/list?can=1&#38;q=label%3AStability-LibFuzzer+opened%3E2016%2F7%2F05+opened%3C2016%2F8%2F04&#38;colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&#38;x=m&#38;y=releaseblock&#38;cells=ids">112</a> bugs filed</li><li>Aaaaaand&#8230;. <b>14,366,371,459,772 unique test inputs!</b></li></ul><div><b><br /></b><b>Analysis of the bugs found so far</b></div><div><a href="https://3.bp.blogspot.com/-Nbz4-k4sAr8/V6QfT9nBvlI/AAAAAAAAAVU/jcEn9ewRk5EGah-7ZOHYq22kiYnraWyPQCEw/s1600/chart.png"><img border="0" height="251" src="https://3.bp.blogspot.com/-Nbz4-k4sAr8/V6QfT9nBvlI/AAAAAAAAAVU/jcEn9ewRk5EGah-7ZOHYq22kiYnraWyPQCEw/s640/chart.png" width="640"></a></div><div><b><br /></b></div><div>Looking at the <a href="https://bugs.chromium.org/p/chromium/issues/list?can=1&#38;q=label%3AStability-LibFuzzer&#38;colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&#38;x=m&#38;y=releaseblock&#38;cells=ids">324</a> bugs found so far, we can say that ASan and MSan have been very effective memory-debugging tools for finding security vulnerabilities. They give us comparable numbers of crashes, though ASan crashes usually are more severe than MSan ones. 
LSan (part of ASan) and UBSan have had a great impact on Stability - another of our <a href="https://www.chromium.org/developers/core-principles">4 core principles</a>.</div><div><br /></div><div><b><br /></b></div><div><b>Extending Chrome&#8217;s Vulnerability Reward Program</b></div><div><br /></div><div>Under Chrome&#8217;s Trusted Researcher Program, we have invited the submission of fuzzers; we run them for you on ClusterFuzz and automatically nominate the bugs they find for reward payments.</div><div><br /></div><div>Today we&#8217;re pleased to announce that the invite-only Trusted Researcher Program is being replaced with the Chrome Fuzzer Program, which encourages fuzzer submissions from everyone and also covers libFuzzer-based fuzzers! Full guidelines are listed on <a href="https://www.google.com/about/appsecurity/chrome-rewards/">Chrome&#8217;s Vulnerability Reward Program page</a>.</div></div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Max Moroz, <a href="https://www.chromium.org/Home/chromium-security/bugs">Chrome Security Engineer</a> and Kostya Serebryany, <a href="https://github.com/google/sanitizers">Sanitizer Tsar</a></span><br /><br />In the past, we’ve posted about innovations in fuzzing, a software testing technique used to discover coding errors and security vulnerabilities. The topics have included <a href="http://blog.chromium.org/2011/06/testing-chromium-addresssanitizer-fast.html">AddressSanitizer</a>, <a href="http://blog.chromium.org/2012/04/fuzzing-for-security.html">ClusterFuzz</a>, <a href="http://blog.chromium.org/2013/05/testing-chromium-syzyasan-lightweight.html">SyzyASAN</a>, <a href="http://blog.chromium.org/2014/04/testing-chromium-threadsanitizer-v2.html">ThreadSanitizer</a> and <a href="https://www.google.com/search?q=site%3Asecurity.googleblog.com%20fuzzing">others</a>.<br /><br />Today we'd like to talk about <a href="http://llvm.org/docs/LibFuzzer.html">libFuzzer</a> (part of the <a href="http://llvm.org/">LLVM</a> project), an engine for <b>in-process, coverage-guided, white-box fuzzing</b>:<br /><br /><ul><li>By <b>in-process</b>, we mean that we don’t launch a new process for every test case, and that we mutate inputs directly in memory.</li><li>By <b>coverage-guided</b>, we mean that we measure code coverage for every input, and accumulate test cases that increase overall coverage.</li><li>By <b>white-box</b>, we mean that we use compile-time instrumentation of the source code.</li></ul><div><br />LibFuzzer makes it possible to fuzz individual components of Chrome. This means you don’t need to generate an HTML page or network payload and launch the whole browser, which adds overhead and flakiness to testing. Instead, you can fuzz any function or internal API directly. 
Based on our experience, libFuzzer-based fuzzing is extremely efficient, more reliable, and usually thousands of times faster than traditional out-of-process fuzzing.</div><div><br /></div><div>Our goal is to have fuzz testing for every component of Chrome where fuzzing is applicable, and we hope all Chromium developers and external security researchers will contribute to this effort.</div><div><br /></div><div><b>How to write a fuzz target</b></div><div><br /></div><div>With libFuzzer, you need to write only one function, which we call a target function or a fuzz target. It accepts a data buffer and length as input and then feeds it into the code we want to test. And... that’s it!</div><div><br /></div><div>The fuzz targets are not specific to libFuzzer. Currently, we also run them with <a href="http://lcamtuf.coredump.cx/afl/">AFL</a>, and we expect to use other fuzzing engines in the future.<br /><b><a href="https://chromium.googlesource.com/chromium/src/+/master/testing/libfuzzer/fuzzers/convert_woff2ttf_fuzzer.cc">Sample Fuzzer</a></b><br /><br /><div class="separator" style="clear: both; text-align: center;"></div><span id="docs-internal-guid-afe2ba01-5b62-12e1-4de9-4856e15b4e80"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><span style="background-color: white; color: maroon; font-size: 14.666666666666666px; font-weight: 700; vertical-align: baseline; white-space: pre-wrap;">extern</span><span style="background-color: white; font-size: 14.666666666666666px; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-ligatures: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;"> </span><span style="background-color: white; color: maroon; font-size: 14.666666666666666px; vertical-align: baseline; white-space: pre-wrap;">"</span><span style="background-color: white; color: #0000e6; font-size: 14.666666666666666px; vertical-align: 
C</span>">
baseline; white-space: pre-wrap;">C</span><span style="background-color: white; font-size: 14.666666666666666px; vertical-align: baseline; white-space: pre-wrap;">" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {<br /> &nbsp;std::string buf;<br /> &nbsp;woff2::WOFF2StringOut out(&amp;buf);<br /> &nbsp;out.SetMaxSize(30 * 1024 * 1024);<br /> &nbsp;woff2::ConvertWOFF2ToTTF(data, size, &amp;out);<br /> &nbsp;return 0;<br /></span><span style="background-color: white; color: purple; font-size: 14.666666666666666px; vertical-align: baseline; white-space: 
pre-wrap;">}</span></span></span><br />See also the <a href="https://cs.chromium.org/chromium/src/testing/libfuzzer/fuzzers/BUILD.gn?type=cs&amp;q=convert_woff2ttf_fuzzer+file:BUILD.gn&amp;sq=package:chromium&amp;l=269">build rule</a>.<br /><b><a href="https://bugs.chromium.org/p/chromium/issues/detail?id=609042">Sample Bug</a></b><br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">==9896==ERROR: AddressSanitizer: </span><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">heap-buffer-overflow</span><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> on address 0x62e000022836 at pc 0x000000499c51 bp 0x7fffa0dc1450 sp 0x7fffa0dc0c00</span></span></div><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">WRITE of size 41994 at 0x62e000022836 thread T0</span></span></div><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 700; 
text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">SCARINESS: 45 (multi-byte-write-heap-buffer-overflow)</span></span></div><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> &nbsp;&nbsp;&nbsp;#0 0x499c50 in __asan_memcpy</span></span></div><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> &nbsp;&nbsp;&nbsp;#1 0x4e6b50 in Read third_party/woff2/src/buffer.h:86:7</span></span></div><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> &nbsp;&nbsp;&nbsp;#2 0x4e6b50 in ReconstructGlyf third_party/woff2/src/woff2_dec.cc:500</span></span></div><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> &nbsp;&nbsp;&nbsp;#3 0x4e6b50 
in ReconstructFont third_party/woff2/src/woff2_dec.cc:917</span></span></div><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-size: 14.666666666666666px; font-style: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> &nbsp;&nbsp;&nbsp;#4 0x4e6b50 in woff2::ConvertWOFF2ToTTF(unsigned char const*, unsigned long, woff2::WOFF2Out*) third_party/woff2/src/woff2_dec.cc:1282</span></span></div><span id="docs-internal-guid-afe2ba01-5b61-9516-803f-a0c4d563ffe7"><span style="font-size: 14.666666666666666px; font-variant-alternates: normal; font-variant-east-asian: normal; font-variant-ligatures: normal; font-variant-numeric: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;"><span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> &nbsp;&nbsp;&nbsp;#5 0x4dbfd6 in LLVMFuzzerTestOneInput testing/libfuzzer/fuzzers/convert_woff2ttf_fuzzer.cc:15:3</span></span></span><br /><br />Check out <a href="https://chromium.googlesource.com/chromium/src/+/master/testing/libfuzzer/README.md">our documentation</a> for additional information.<br /><br /></div><b>Integrating LibFuzzer with ClusterFuzz</b><br /><br /><a href="http://blog.chromium.org/2012/04/fuzzing-for-security.html">ClusterFuzz</a> is Chromium’s infrastructure for large scale fuzzing. It automates crash detection, report deduplication, test minimization, and other tasks. Once you commit a fuzz target into the Chromium codebase (<a href="https://cs.chromium.org/chromium/src/testing/libfuzzer/fuzzers/">examples</a>), ClusterFuzz will automatically pick it up and fuzz it with libFuzzer and AFL.&nbsp; <br /><br />ClusterFuzz supports most of the libFuzzer features like dictionaries, seed corpus and custom options for different fuzzers. 
Check out our <a href="https://chromium.googlesource.com/chromium/src/+/master/testing/libfuzzer/efficient_fuzzer.md">Efficient Fuzzer Guide</a> to learn how to use them. <br /><br />Besides the initial seed corpus, we store, minimize, and synchronize the corpora for every fuzzer and across all bots. This allows us to continuously increase code coverage over time and find interesting bugs along the way.<br /><br />ClusterFuzz uses the following memory debugging tools with libFuzzer-based fuzzers:<br /><ul><li><b>AddressSanitizer (ASan):</b> 500 GCE VMs</li><li><b>MemorySanitizer (MSan):</b> 100 GCE VMs</li><li><b>UndefinedBehaviorSanitizer (UBSan):</b> 100 GCE VMs</li></ul><div><b><br /></b><b>Sample Fuzzer Statistics</b></div><div><br /><br />It’s important to track and analyze the performance of fuzzers, so we maintain a dashboard of fuzzer statistics that is accessible to all Chromium developers:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-k89lJb-TR4w/V6QfUWIIafI/AAAAAAAAAVU/TutPOba471A6qjYW4glsvOcCwQsq8K5XACEw/s1600/statistics.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="395" src="https://1.bp.blogspot.com/-k89lJb-TR4w/V6QfUWIIafI/AAAAAAAAAVU/TutPOba471A6qjYW4glsvOcCwQsq8K5XACEw/s640/statistics.png" width="640" /></a></div><div><br /><b>Overall statistics for the last 30 days:</b><br /><ul><li><a href="https://cs.chromium.org/search/?q=%22+int+LLVMFuzzerTestOneInput%22+-libFuzzer/src+-llvm/+-buildtools+-file:.md&amp;sq=package:chromium&amp;type=cs">120</a>&nbsp;fuzzers</li><li><a href="https://bugs.chromium.org/p/chromium/issues/list?can=1&amp;q=label%3AStability-LibFuzzer+opened%3E2016%2F7%2F05+opened%3C2016%2F8%2F04&amp;colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&amp;x=m&amp;y=releaseblock&amp;cells=ids">112</a> bugs filed</li><li>Aaaaaand…. 
<b>14,366,371,459,772 unique test inputs!</b></li></ul><div><b><br /></b><b>Analysis of the bugs found so far</b></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-Nbz4-k4sAr8/V6QfT9nBvlI/AAAAAAAAAVU/jcEn9ewRk5EGah-7ZOHYq22kiYnraWyPQCEw/s1600/chart.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="251" src="https://3.bp.blogspot.com/-Nbz4-k4sAr8/V6QfT9nBvlI/AAAAAAAAAVU/jcEn9ewRk5EGah-7ZOHYq22kiYnraWyPQCEw/s640/chart.png" width="640" /></a></div><div><b><br /></b></div><div>Looking at the <a href="https://bugs.chromium.org/p/chromium/issues/list?can=1&amp;q=label%3AStability-LibFuzzer&amp;colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&amp;x=m&amp;y=releaseblock&amp;cells=ids">324</a> bugs found so far, we can say that ASan and MSan have been very effective memory tools for finding security vulnerabilities. They give us comparable numbers of crashes, though ASan crashes are usually more severe than MSan ones. LSan (part of ASan) and UBSan have a great impact on Stability - another one of our <a href="https://www.chromium.org/developers/core-principles">4 core principles</a>.</div><div><br /></div><div><b><br /></b></div><div><b>Extending Chrome’s Vulnerability Reward Program</b></div><div><br /></div><div>Under Chrome's Trusted Researcher Program, we have invited the submission of fuzzers. We run them for you on ClusterFuzz and automatically nominate bugs they find for reward payments.</div><div><br /></div><div>Today we're pleased to announce that the invite-only Trusted Researcher Program is being replaced with the Chrome Fuzzer Program, which encourages fuzzer submissions from all and also covers libFuzzer-based fuzzers! Full guidelines are listed on <a href="https://www.google.com/about/appsecurity/chrome-rewards/">Chrome’s Vulnerability Reward Program page</a>.</div></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/guided-in-process-fuzzing-of-chrome-components/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New research: Zeroing in on deceptive software installations</title>
		<link>https://googledata.org/google-online-security/new-research-zeroing-in-on-deceptive-software-installations/</link>
		<comments>https://googledata.org/google-online-security/new-research-zeroing-in-on-deceptive-software-installations/#comments</comments>
		<pubDate>Thu, 04 Aug 2016 16:31:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=4647c8dc43de966cdf1b8e738013c1c6</guid>
		<description><![CDATA[<span>Posted by Kurt Thomas, Research Scientist and Juan A. Elices Crespo, Software Engineer</span><br /><br />As part of Google&#8217;s ongoing effort to <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">protect users from unwanted software</a>, we have been zeroing in on the deceptive installation tactics and actors that play a role in unwanted software delivery. This software includes <a href="https://security.googleblog.com/2015/05/new-research-ad-injection-economy.html">unwanted ad injectors</a> that insert unintended ads into webpages and <a href="https://security.googleblog.com/2013/10/dont-mess-with-my-browser.html">browser settings hijackers</a> that change search settings without user consent.<br /><br />Every week, Google Safe Browsing generates over 60 million warnings to help users avoid installing unwanted software--that&#8217;s more than 3x the number of warnings we show for malware. Many of these warnings appear when users unwittingly download software bundles laden with several additional applications, a business model known as pay-per-install that earns up to $1.50 for each successful install. Recently, we finished the first in-depth investigation with <a href="http://engineering.nyu.edu/people/damon-mccoy">NYU Tandon School of Engineering</a> into multiple pay-per-install networks and the unwanted software families purchasing installs. The full report, which you can <a href="http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45487.pdf">read here</a>, will be presented next week at the <a href="https://www.usenix.org/conference/usenixsecurity16">USENIX Security Symposium</a>.<br /><br />Over a year-long period, we found four of the largest pay-per-install networks routinely distributed unwanted ad injectors, browser settings hijackers, and scareware flagged by over 30 anti-virus engines. 
These bundles were deceptively promoted through fake software updates, phony content lockers, and spoofed brands--techniques openly discussed on underground forums as ways to trick users into unintentionally downloading software and accepting the installation terms. While not all software bundles lead to unwanted software, critically, it takes only one deceptive party in a chain of web advertisements, pay-per-install networks, and application developers for abuse to manifest.<br /><b>Behind the scenes of unwanted software distribution</b><br /><div></div><div><a href="https://1.bp.blogspot.com/-qaQ9CW2nxsA/V6KMEHbcVJI/AAAAAAAAAUc/4HpSXKKviCozOnXxP4UDAWtq6qfqmo5BACEw/s1600/blurred.png"><img border="0" height="426" src="https://1.bp.blogspot.com/-qaQ9CW2nxsA/V6KMEHbcVJI/AAAAAAAAAUc/4HpSXKKviCozOnXxP4UDAWtq6qfqmo5BACEw/s640/blurred.png" width="640"></a></div><div><i>Software bundle installation dialogue.&#160;Accepting the express install option will cause eight other programs to be installed with no indication of each program&#8217;s functionality.</i></div><div><br /></div>If you have ever encountered an installation dialog like the one above, then you are already familiar with the pay-per-install distribution model. Behind the scenes there are a few different players:<br /><ul><li><b>Advertisers</b>: In pay-per-install lingo, advertisers are software developers, including unwanted software developers, paying for installs via bundling. In our example above, these advertisers include Plus-HD and Vuupc among others. The cost per install ranges anywhere from $0.10 in South America to $1.50 in the United States. Unwanted software developers will recoup this loss via ad injection, selling search traffic, or levying subscription fees. 
During our investigation, we identified 1,211 advertisers paying for installs.</li></ul><ul><li><b>Affiliate networks</b>: Affiliate networks serve as middlemen between advertisers looking to buy installs and popular software packages willing to bundle additional applications in return for a fee. These affiliate networks provide the core technology for tracking successful installs and billing. Additionally, they provide tools that attempt to thwart Google Safe Browsing or anti-virus detection. We spotted at least 50 affiliate networks fueling this business.</li></ul><ul><li><b>Publishers</b>: Finally, popular software applications re-package their binaries to include several advertiser offers. Publishers are then responsible for getting users to download and install their software through whatever means possible: download portals, organic page traffic, or oftentimes deceptive ads. Our study uncovered 2,518 publishers distributing through 191,372 webpages.</li></ul><div>This decentralized model encourages advertisers to focus solely on monetizing users upon installation and publishers to maximize conversion, irrespective of the final user experience. It takes only one bad actor anywhere in the distribution chain for unwanted installs to manifest.</div><div><b><br /></b></div><div><b><br /></b></div><div><b>What gets bundled?</b></div><div><b><br /></b></div><div>We monitored the offers bundled by four of the largest pay-per-install affiliate networks on a daily basis for over a year. In total, we collected 446K offers related to 843 unique software packages. The most commonly bundled software included unwanted ad injectors, browser settings hijackers, and scareware purporting to fix urgent issues with a victim&#8217;s machine for $30-40. 
Here&#8217;s an example of an ad injector impersonating an anti-virus alert to scam users into fixing non-existent system issues:</div><div><br /></div><div></div><div><a href="https://1.bp.blogspot.com/-9m5t4j5IMv0/V6KOKtjifqI/AAAAAAAAAUk/M5IjDgRfD1IReqSeRIFl2qv0xFG_YkgAQCLcB/s1600/Screen%2BShot%2B2016-08-03%2Bat%2B5.png"><img border="0" height="368" src="https://1.bp.blogspot.com/-9m5t4j5IMv0/V6KOKtjifqI/AAAAAAAAAUk/M5IjDgRfD1IReqSeRIFl2qv0xFG_YkgAQCLcB/s640/Screen%2BShot%2B2016-08-03%2Bat%2B5.png" width="640"></a></div><div><br /></div><div><b>Deceptive practices</b></div><div><br /></div><div>Taken as a whole, we found 59% of weekly offers bundled by pay-per-install affiliate networks were flagged by at least one anti-virus engine as potentially unwanted. In response, software bundles will first fingerprint a user&#8217;s machine prior to installation to detect the presence of &#8220;hostile&#8221; anti-virus engines. Furthermore, in response to protections provided by Google Safe Browsing, publishers have resorted to increasingly convoluted tactics to try to avoid detection, like the defunct technique shown below of password protecting compressed binaries:</div><div><br /></div><div><a href="https://1.bp.blogspot.com/-r_m6sMzf_0U/V6KMD0G5aII/AAAAAAAAAUQ/dvInBvywLco1TlaoM6ys9jqz_1x4B28HwCEw/s1600/pasted%2Bimage%2B0%2B%25287%2529.png"><img border="0" height="328" src="https://1.bp.blogspot.com/-r_m6sMzf_0U/V6KMD0G5aII/AAAAAAAAAUQ/dvInBvywLco1TlaoM6ys9jqz_1x4B28HwCEw/s640/pasted%2Bimage%2B0%2B%25287%2529.png" width="640"></a></div><div><br /></div><div>Paired with deceptive promotional tools like <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">fake video codecs, software updates, or misrepresented brands</a>, there are a multitude of deceptive behaviors currently pervasive in software bundling.</div><div><b><br /></b></div><div><b><br /></b></div><div><b>Cleaning up the ecosystem</b></div><div><b><br /></b></div><div>We 
are <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">constantly improving Google Safe Browsing</a> defenses and the <a href="https://www.google.com/chrome/cleanup-tool/">Chrome Cleanup Tool</a> to protect users from unwanted software installs. When it comes to our <a href="https://support.google.com/adwordspolicy/answer/6020954?hl=en">ads policy</a>, we take quick action to block and remove advertisers who misrepresent downloads or distribute software that violates Google&#8217;s <a href="https://www.google.com/about/company/unwanted-software-policy.html">unwanted software policy</a>.</div><div><br /></div><div>Additionally, Google is pushing for real change from businesses involved in the pay-per-install market to address the deceptive practices of some participants. As part of this, Google recently hosted a Clean Software Summit bringing together members of the anti-virus industry, bundling platforms, and the <a href="http://www.cs-alliance.org/">Clean Software Alliance</a>. Together, we laid the groundwork for an industry-wide initiative to provide users with clear choices when installing software and to block deceptive actors pushing unwanted installs. We continue to advocate on behalf of users to ensure they remain safe while downloading software online.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Kurt Thomas, Research Scientist and Juan A. Elices Crespo, Software Engineer</span><br /><br />As part of Google’s ongoing effort to <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">protect users from unwanted software</a>, we have been zeroing in on the deceptive installation tactics and actors that play a role in unwanted software delivery. This software includes <a href="https://security.googleblog.com/2015/05/new-research-ad-injection-economy.html">unwanted ad injectors</a> that insert unintended ads into webpages and <a href="https://security.googleblog.com/2013/10/dont-mess-with-my-browser.html">browser settings hijackers</a> that change search settings without user consent.<br /><br />Every week, Google Safe Browsing generates over 60 million warnings to help users avoid installing unwanted software--that’s more than 3x the number of warnings we show for malware. Many of these warnings appear when users unwittingly download software bundles laden with several additional applications, a business model known as pay-per-install that earns up to $1.50 for each successful install. Recently, we finished the first in-depth investigation with <a href="http://engineering.nyu.edu/people/damon-mccoy">NYU Tandon School of Engineering</a> into multiple pay-per-install networks and the unwanted software families purchasing installs. The full report, which you can <a href="http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45487.pdf">read here</a>, will be presented next week at the <a href="https://www.usenix.org/conference/usenixsecurity16">USENIX Security Symposium</a>.<br /><br />Over a year-long period, we found four of the largest pay-per-install networks routinely distributed unwanted ad injectors, browser settings hijackers, and scareware flagged by over 30 anti-virus engines. 
These bundles were deceptively promoted through fake software updates, phony content lockers, and spoofed brands--techniques openly discussed on underground forums as ways to trick users into unintentionally downloading software and accepting the installation terms. While not all software bundles lead to unwanted software, critically, it takes only one deceptive party in a chain of web advertisements, pay-per-install networks, and application developers for abuse to manifest.<br /><b>Behind the scenes of unwanted software distribution</b><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-qaQ9CW2nxsA/V6KMEHbcVJI/AAAAAAAAAUc/4HpSXKKviCozOnXxP4UDAWtq6qfqmo5BACEw/s1600/blurred.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="426" src="https://1.bp.blogspot.com/-qaQ9CW2nxsA/V6KMEHbcVJI/AAAAAAAAAUc/4HpSXKKviCozOnXxP4UDAWtq6qfqmo5BACEw/s640/blurred.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><i>Software bundle installation dialogue.&nbsp;Accepting the express install option will cause eight other programs to be installed with no indication of each program’s functionality.</i></div><div class="separator" style="clear: both; text-align: left;"><br /></div>If you have ever encountered an installation dialog like the one above, then you are already familiar with the pay-per-install distribution model. Behind the scenes there are a few different players:<br /><ul><li><b>Advertisers</b>: In pay-per-install lingo, advertisers are software developers, including unwanted software developers, paying for installs via bundling. In our example above, these advertisers include Plus-HD and Vuupc among others. The cost per install ranges anywhere from $0.10 in South America to $1.50 in the United States. 
Unwanted software developers will recoup this loss via ad injection, selling search traffic, or levying subscription fees. During our investigation, we identified 1,211 advertisers paying for installs.</li></ul><ul><li><b>Affiliate networks</b>: Affiliate networks serve as middlemen between advertisers looking to buy installs and popular software packages willing to bundle additional applications in return for a fee. These affiliate networks provide the core technology for tracking successful installs and billing. Additionally, they provide tools that attempt to thwart Google Safe Browsing or anti-virus detection. We spotted at least 50 affiliate networks fueling this business.</li></ul><ul><li><b>Publishers</b>: Finally, popular software applications re-package their binaries to include several advertiser offers. Publishers are then responsible for getting users to download and install their software through whatever means possible: download portals, organic page traffic, or oftentimes deceptive ads. Our study uncovered 2,518 publishers distributing through 191,372 webpages.</li></ul><div>This decentralized model encourages advertisers to focus solely on monetizing users upon installation and publishers to maximize conversion, irrespective of the final user experience. It takes only one bad actor anywhere in the distribution chain for unwanted installs to manifest.</div><div><b><br /></b></div><div><b><br /></b></div><div><b>What gets bundled?</b></div><div><b><br /></b></div><div>We monitored the offers bundled by four of the largest pay-per-install affiliate networks on a daily basis for over a year. In total, we collected 446K offers related to 843 unique software packages. The most commonly bundled software included unwanted ad injectors, browser settings hijackers, and scareware purporting to fix urgent issues with a victim’s machine for $30-40. 
Here’s an example of an ad injector impersonating an anti-virus alert to scam users into fixing non-existent system issues:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-9m5t4j5IMv0/V6KOKtjifqI/AAAAAAAAAUk/M5IjDgRfD1IReqSeRIFl2qv0xFG_YkgAQCLcB/s1600/Screen%2BShot%2B2016-08-03%2Bat%2B5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="368" src="https://1.bp.blogspot.com/-9m5t4j5IMv0/V6KOKtjifqI/AAAAAAAAAUk/M5IjDgRfD1IReqSeRIFl2qv0xFG_YkgAQCLcB/s640/Screen%2BShot%2B2016-08-03%2Bat%2B5.png" width="640" /></a></div><div><br /></div><div><b>Deceptive practices</b></div><div><br /></div><div>Taken as a whole, we found 59% of weekly offers bundled by pay-per-install affiliate networks were flagged by at least one anti-virus engine as potentially unwanted. In response, software bundles will first fingerprint a user’s machine prior to installation to detect the presence of “hostile” anti-virus engines. 
Furthermore, in response to protections provided by Google Safe Browsing, publishers have resorted to increasingly convoluted tactics to try to avoid detection, like the defunct technique shown below of password protecting compressed binaries:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-r_m6sMzf_0U/V6KMD0G5aII/AAAAAAAAAUQ/dvInBvywLco1TlaoM6ys9jqz_1x4B28HwCEw/s1600/pasted%2Bimage%2B0%2B%25287%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="328" src="https://1.bp.blogspot.com/-r_m6sMzf_0U/V6KMD0G5aII/AAAAAAAAAUQ/dvInBvywLco1TlaoM6ys9jqz_1x4B28HwCEw/s640/pasted%2Bimage%2B0%2B%25287%2529.png" width="640" /></a></div><div><br /></div><div>Paired with deceptive promotional tools like <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">fake video codecs, software updates, or misrepresented brands</a>, there are a multitude of deceptive behaviors currently pervasive in software bundling.</div><div><b><br /></b></div><div><b><br /></b></div><div><b>Cleaning up the ecosystem</b></div><div><b><br /></b></div><div>We are <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">constantly improving Google Safe Browsing</a> defenses and the <a href="https://www.google.com/chrome/cleanup-tool/">Chrome Cleanup Tool</a> to protect users from unwanted software installs. When it comes to our <a href="https://support.google.com/adwordspolicy/answer/6020954?hl=en">ads policy</a>, we take quick action to block and remove advertisers who misrepresent downloads or distribute software that violates Google’s <a href="https://www.google.com/about/company/unwanted-software-policy.html">unwanted software policy</a>.</div><div><br /></div><div>Additionally, Google is pushing for real change from businesses involved in the pay-per-install market to address the deceptive practices of some participants. 
As part of this, Google recently hosted a Clean Software Summit bringing together members of the anti-virus industry, bundling platforms, and the <a href="http://www.cs-alliance.org/">Clean Software Alliance</a>. Together, we laid the groundwork for an industry-wide initiative to provide users with clear choices when installing software and to block deceptive actors pushing unwanted installs. We continue to advocate on behalf of users to ensure they remain safe while downloading software online.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/new-research-zeroing-in-on-deceptive-software-installations/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Adding YouTube and Calendar to the HTTPS Transparency Report</title>
		<link>https://googledata.org/google-online-security/adding-youtube-and-calendar-to-the-https-transparency-report/</link>
		<comments>https://googledata.org/google-online-security/adding-youtube-and-calendar-to-the-https-transparency-report/#comments</comments>
		<pubDate>Mon, 01 Aug 2016 18:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=fa27f7a462aa748ce6dfaac54440588a</guid>
		<description><![CDATA[<span>Posted by Emily Schechter, HTTPS Enthusiast</span><br /><br />Earlier this year, we <a href="https://security.googleblog.com/2016/03/securing-web-together_15.html">launched</a> a new section of our Transparency Report dedicated to HTTPS encryption. This report shows how much traffic is encrypted for Google products and popular sites across the web. Today, we&#8217;re adding two Google products to the report: YouTube and Calendar. The traffic for both products is currently more than 90% encrypted via HTTPS.<br /><br /><div><a href="https://2.bp.blogspot.com/-T_wAqk-ECjE/V56kBVvMalI/AAAAAAAAATk/3Y6HLD80y3MN37eLYUhcP-SoRLoBiWyqACLcB/s1600/transparency%2Breport%2Bupdate.png"><img border="0" src="https://2.bp.blogspot.com/-T_wAqk-ECjE/V56kBVvMalI/AAAAAAAAATk/3Y6HLD80y3MN37eLYUhcP-SoRLoBiWyqACLcB/s1600/transparency%2Breport%2Bupdate.png"></a></div><b>Case study: YouTube</b><br />As we&#8217;ve implemented HTTPS across products over the years, we&#8217;ve worked through a wide variety of technical obstacles. Below are some of the challenges we faced during <a href="https://youtube-eng.blogspot.com/2016/08/youtubes-road-to-https.html">YouTube&#8217;s two year road to HTTPS</a>:<br /><br /><ul><li><b>Lots of traffic!</b> Our CDN, the <a href="https://peering.google.com/#/options/google-global-cache">Google Global Cache</a>, serves a massive amount of video, and migrating it all to HTTPS is no small feat. Luckily, hardware acceleration for AES is widespread, so we were able to encrypt virtually all video serving without adding machines. (Yes, <a href="https://istlsfastyet.com/">HTTPS is fast now</a>.)</li><li><b>Lots of devices!</b> You can watch YouTube videos on everything from flip phones to smart TVs. We A/B tested HTTPS on every device to ensure that users would not be negatively impacted. 
We found that HTTPS improved quality of experience on most clients: by ensuring content integrity, we virtually eliminated many types of streaming errors.</li><li><b>Lots of requests!</b> Mixed content&#8212;any insecure request made in a secure context&#8212;poses a challenge for any large website or app. We get an alert when an insecure request is made from any of our clients and eventually will block all mixed content using <a href="https://en.wikipedia.org/wiki/Content_Security_Policy">Content Security Policy</a> on the web, <a href="https://developer.apple.com/library/ios/releasenotes/General/WhatsNewIniOS/Articles/iOS9.html#//apple_ref/doc/uid/TP40016198-SW14">App Transport Security on iOS</a>, and <a href="https://developer.android.com/guide/topics/manifest/application-element.html#usesCleartextTraffic">usesCleartextTraffic</a> on Android. Ads on YouTube have used HTTPS <a href="https://security.googleblog.com/2015/04/ads-take-step-towards-https-everywhere.html">since 2014</a>.</li></ul><div><br /></div><div>We're also proud to be using <a href="https://security.googleblog.com/2016/07/bringing-hsts-to-wwwgooglecom.html">HTTP Strict Transport Security (HSTS)</a> on youtube.com to cut down on HTTP to HTTPS redirects. This improves both security and latency for end users. Our HSTS lifetime is one year, and we hope to preload this soon in web browsers.</div><div><br /></div><div>97% for YouTube is pretty good, but why isn't YouTube at 100%? In short, some devices do not fully support modern HTTPS. Over time, to keep YouTube users as safe as possible, we will gradually phase out insecure connections.</div><div><br /></div><div><br /></div><div>We know that any non-secure HTTP traffic could be vulnerable to attackers. 
All websites and apps should be protected with HTTPS &#8212; if you&#8217;re a developer who hasn&#8217;t yet migrated, <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/?hl=en">get started</a> today.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Emily Schechter, HTTPS Enthusiast</span><br /><br />Earlier this year, we <a href="https://security.googleblog.com/2016/03/securing-web-together_15.html">launched</a> a new section of our Transparency Report dedicated to HTTPS encryption. This report shows how much traffic is encrypted for Google products and popular sites across the web. Today, we’re adding two Google products to the report: YouTube and Calendar. The traffic for both products is currently more than 90% encrypted via HTTPS.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-T_wAqk-ECjE/V56kBVvMalI/AAAAAAAAATk/3Y6HLD80y3MN37eLYUhcP-SoRLoBiWyqACLcB/s1600/transparency%2Breport%2Bupdate.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-T_wAqk-ECjE/V56kBVvMalI/AAAAAAAAATk/3Y6HLD80y3MN37eLYUhcP-SoRLoBiWyqACLcB/s1600/transparency%2Breport%2Bupdate.png" /></a></div><b>Case study: YouTube</b><br />As we’ve implemented HTTPS across products over the years, we’ve worked through a wide variety of technical obstacles. Below are some of the challenges we faced during <a href="https://youtube-eng.blogspot.com/2016/08/youtubes-road-to-https.html">YouTube’s two year road to HTTPS</a>:<br /><br /><ul><li><b>Lots of traffic!</b> Our CDN, the <a href="https://peering.google.com/#/options/google-global-cache">Google Global Cache</a>, serves a massive amount of video, and migrating it all to HTTPS is no small feat. Luckily, hardware acceleration for AES is widespread, so we were able to encrypt virtually all video serving without adding machines. (Yes, <a href="https://istlsfastyet.com/">HTTPS is fast now</a>.)</li><li><b>Lots of devices!</b> You can watch YouTube videos on everything from flip phones to smart TVs. We A/B tested HTTPS on every device to ensure that users would not be negatively impacted. 
We found that HTTPS improved quality of experience on most clients: by ensuring content integrity, we virtually eliminated many types of streaming errors.</li><li><b>Lots of requests!</b> Mixed content—any insecure request made in a secure context—poses a challenge for any large website or app. We get an alert when an insecure request is made from any of our clients and eventually will block all mixed content using <a href="https://en.wikipedia.org/wiki/Content_Security_Policy">Content Security Policy</a> on the web, <a href="https://developer.apple.com/library/ios/releasenotes/General/WhatsNewIniOS/Articles/iOS9.html#//apple_ref/doc/uid/TP40016198-SW14">App Transport Security on iOS</a>, and <a href="https://developer.android.com/guide/topics/manifest/application-element.html#usesCleartextTraffic">usesCleartextTraffic</a> on Android. Ads on YouTube have used HTTPS <a href="https://security.googleblog.com/2015/04/ads-take-step-towards-https-everywhere.html">since 2014</a>.</li></ul><div><br /></div><div>We're also proud to be using <a href="https://security.googleblog.com/2016/07/bringing-hsts-to-wwwgooglecom.html">HTTP Strict Transport Security (HSTS)</a> on youtube.com to cut down on HTTP to HTTPS redirects. This improves both security and latency for end users. Our HSTS lifetime is one year, and we hope to preload this soon in web browsers.</div><div><br /></div><div>97% for YouTube is pretty good, but why isn't YouTube at 100%? In short, some devices do not fully support modern HTTPS. Over time, to keep YouTube users as safe as possible, we will gradually phase out insecure connections.</div><div><br /></div><div>We know that any non-secure HTTP traffic could be vulnerable to attackers. All websites and apps should be protected with HTTPS — if you’re a developer who hasn’t yet migrated, <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/?hl=en">get started</a> today.</div>]]></content:encoded>
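The mixed-content blocking mentioned above can be illustrated with a small sketch. This is a hypothetical helper, not YouTube's actual configuration; `block-all-mixed-content` and `report-uri` are standard CSP directives, while the report URL is an invented example.

```python
# Sketch (assumed values): building a Content-Security-Policy header value
# that blocks mixed content, as the post says will eventually be enforced
# on the web. The report_uri parameter is an illustrative option.
def csp_block_mixed_content(report_uri=None):
    """Return a CSP header value that blocks insecure subresource loads."""
    directives = ["block-all-mixed-content"]
    if report_uri:
        # Optionally ask browsers to report violations as well.
        directives.append("report-uri %s" % report_uri)
    return "; ".join(directives)

print(csp_block_mixed_content())
print(csp_block_mixed_content(report_uri="https://example.com/csp-report"))
```

In practice a site would send this value in the `Content-Security-Policy` response header (or `Content-Security-Policy-Report-Only` while still measuring breakage).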
			<wfw:commentRss>https://googledata.org/google-online-security/adding-youtube-and-calendar-to-the-https-transparency-report/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Bringing HSTS to www.google.com</title>
		<link>https://googledata.org/google-online-security/bringing-hsts-to-www-google-com/</link>
		<comments>https://googledata.org/google-online-security/bringing-hsts-to-www-google-com/#comments</comments>
		<pubDate>Fri, 29 Jul 2016 18:20:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=a555e846efd3fe1ef280b5d6b40409f5</guid>
		<description><![CDATA[<span>Posted by Jay Brown, Sr. Technical Program Manager, Security</span><br /><br />For many years, we&#8217;ve worked to increase the use of encryption between our users and Google. Today, the <a href="https://www.google.com/transparencyreport/https/">vast majority of these connections</a> are encrypted, and our work continues on this effort.<br /><br />To further protect users, we've taken another step to strengthen how we use encryption for data in transit by implementing HTTP Strict Transport Security&#8212;<a href="https://tools.ietf.org/html/rfc6797">HSTS</a> for short&#8212;on the <a href="http://www.google.com/">www.google.com</a> domain. HSTS prevents people from accidentally navigating to HTTP URLs by automatically converting insecure HTTP URLs into secure HTTPS URLs. Users might navigate to these HTTP URLs by manually typing a protocol-less or HTTP URL in the address bar, or by following HTTP links from other websites.<br /><b><br /></b><b>Preparing for launch</b><br /><br />Ordinarily, implementing HSTS is a relatively basic process. However, due to Google's particular complexities, we needed to do some extra prep work that most other domains wouldn't have needed to do. For example, we had to address <a href="https://developers.google.com/web/fundamentals/security/prevent-mixed-content/what-is-mixed-content?hl=en#mixed-content-weakens-https">mixed content</a>, bad HREFs, redirects to HTTP, and other issues like updating legacy services which could cause problems for users as they try to access our core domain.<br /><br />This process wasn&#8217;t without its pitfalls. 
Perhaps most memorably, we accidentally broke Google&#8217;s Santa Tracker just before Christmas last year (don&#8217;t worry &#8212; we fixed it before Santa and his reindeer made their trip).<br /><br /><b>Deployment and next steps</b><br /><br />We&#8217;ve turned on HSTS for <a href="http://www.google.com/">www.google.com</a>, but some work remains on our deployment checklist.<br /><br />In the immediate term, we&#8217;re focused on increasing the duration that the header is active (&#8216;max-age&#8217;). We've initially set the header&#8217;s max-age to one day; the short duration helps mitigate the risk of any potential problems with this roll-out. By increasing the max-age, however, we reduce the likelihood that an initial request to <a href="http://www.google.com/">www.google.com</a> happens over HTTP. Over the next few months, we will ramp up the max-age of the header to at least one year.<br /><br />Encrypting data in transit helps keep our users and their data secure. We&#8217;re excited to be implementing HSTS and will continue to extend it to more domains and Google products in the coming months.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jay Brown, Sr. Technical Program Manager, Security</span><br /><br />For many years, we’ve worked to increase the use of encryption between our users and Google. Today, the <a href="https://www.google.com/transparencyreport/https/">vast majority of these connections</a> are encrypted, and our work continues on this effort.<br /><br />To further protect users, we've taken another step to strengthen how we use encryption for data in transit by implementing HTTP Strict Transport Security—<a href="https://tools.ietf.org/html/rfc6797">HSTS</a> for short—on the <a href="http://www.google.com/">www.google.com</a> domain. HSTS prevents people from accidentally navigating to HTTP URLs by automatically converting insecure HTTP URLs into secure HTTPS URLs. Users might navigate to these HTTP URLs by manually typing a protocol-less or HTTP URL in the address bar, or by following HTTP links from other websites.<br /><b><br /></b><b>Preparing for launch</b><br /><br />Ordinarily, implementing HSTS is a relatively basic process. However, due to Google's particular complexities, we needed to do some extra prep work that most other domains wouldn't have needed to do. For example, we had to address <a href="https://developers.google.com/web/fundamentals/security/prevent-mixed-content/what-is-mixed-content?hl=en#mixed-content-weakens-https">mixed content</a>, bad HREFs, redirects to HTTP, and other issues like updating legacy services which could cause problems for users as they try to access our core domain.<br /><br />This process wasn’t without its pitfalls. 
Perhaps most memorably, we accidentally broke Google’s Santa Tracker just before Christmas last year (don’t worry — we fixed it before Santa and his reindeer made their trip).<br /><br /><b>Deployment and next steps</b><br /><br />We’ve turned on HSTS for <a href="http://www.google.com/">www.google.com</a>, but some work remains on our deployment checklist.<br /><br />In the immediate term, we’re focused on increasing the duration that the header is active (‘max-age’). We've initially set the header’s max-age to one day; the short duration helps mitigate the risk of any potential problems with this roll-out. By increasing the max-age, however, we reduce the likelihood that an initial request to <a href="http://www.google.com/">www.google.com</a> happens over HTTP. Over the next few months, we will ramp up the max-age of the header to at least one year.<br /><br />Encrypting data in transit helps keep our users and their data secure. We’re excited to be implementing HSTS and will continue to extend it to more domains and Google products in the coming months.]]></content:encoded>
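The max-age ramp-up described above can be sketched as header construction. This is an illustrative helper under the values the post states (one day initially, at least one year eventually), not Google's production code.

```python
# Hypothetical sketch of emitting the Strict-Transport-Security header
# (RFC 6797) with the roll-out values described in the post: max-age
# starts at one day and ramps up to at least one year.
ONE_DAY = 24 * 60 * 60          # 86400 seconds: initial conservative value
ONE_YEAR = 365 * 24 * 60 * 60   # 31536000 seconds: eventual target

def hsts_header(max_age, include_subdomains=False, preload=False):
    """Build an HSTS header value; directives beyond max-age are optional."""
    parts = ["max-age=%d" % max_age]
    if include_subdomains:
        parts.append("includeSubDomains")
    if preload:
        # Browser preload lists (mentioned as a later goal for youtube.com)
        # require this token alongside a long max-age.
        parts.append("preload")
    return "; ".join(parts)

print(hsts_header(ONE_DAY))    # initial roll-out value
print(hsts_header(ONE_YEAR))   # after the ramp-up
```

A server would send this value on every HTTPS response; browsers then rewrite future HTTP navigations to HTTPS for the header's lifetime.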
			<wfw:commentRss>https://googledata.org/google-online-security/bringing-hsts-to-www-google-com/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Protecting Android with more Linux kernel defenses</title>
		<link>https://googledata.org/google-online-security/protecting-android-with-more-linux-kernel-defenses-2/</link>
		<comments>https://googledata.org/google-online-security/protecting-android-with-more-linux-kernel-defenses-2/#comments</comments>
		<pubDate>Thu, 28 Jul 2016 00:20:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=2da30ccf1335c576d70061b80bef4136</guid>
		<description><![CDATA[<span>Posted by Jeff Vander Stoep, Android Security team</span> <br /><br /><i>[Cross-posted from the <a href="http://android-developers.blogspot.com/2016/07/protecting-android-with-more-linux.html">Android Developers Blog</a>]</i><br /><br />Android relies heavily on the Linux kernel for enforcement of its security model. To better protect the kernel, we&#8217;ve enabled a number of mechanisms within Android. At a high level these protections are grouped into two categories&#8212;memory protections and attack surface reduction.<br /><b><u>Memory Protections</u></b><br />One of the major security features provided by the kernel is memory protection for userspace processes in the form of address space separation. Unlike userspace processes, the kernel&#8217;s various tasks live within one address space and a vulnerability anywhere in the kernel can potentially impact unrelated portions of the system&#8217;s memory. Kernel memory protections are designed to maintain the integrity of the kernel in spite of vulnerabilities.<br /><b>Mark Memory As Read-Only/No-Execute</b><br />This feature segments kernel memory into logical sections and sets restrictive page access permissions on each section. Code is marked as read only + execute. Data sections are marked as no-execute and further segmented into read-only and read-write sections. This feature is enabled with config option CONFIG_DEBUG_RODATA. It was put together by Kees Cook and is based on a subset of <a href="https://grsecurity.net/">Grsecurity&#8217;s</a> KERNEXEC feature by Brad Spengler and Qualcomm&#8217;s CONFIG_STRICT_MEMORY_RWX feature by Larry Bassel and Laura Abbott. 
CONFIG_DEBUG_RODATA landed in the upstream kernel for arm/arm64 and has been backported to Android&#8217;s 3.18+ arm/<a href="https://android-review.googlesource.com/#/c/174947/">arm64</a> common kernel.<br /><b>Restrict Kernel Access to User Space</b><br />This feature improves protection of the kernel by preventing it from directly accessing userspace memory. This can make a number of attacks more difficult because attackers have significantly less control over kernel memory that is executable, particularly with CONFIG_DEBUG_RODATA enabled. Similar features were already in existence, the earliest being Grsecurity&#8217;s UDEREF. This feature is enabled with config option CONFIG_CPU_SW_DOMAIN_PAN and was implemented by Russell King for ARMv7 and backported to <a href="https://android-review.googlesource.com/#/q/topic:sw_PAN">Android&#8217;s 4.1</a> kernel by Kees Cook.<br /><b>Improve Protection Against Stack Buffer Overflows</b><br />Much like its predecessor, stack-protector, stack-protector-strong protects against <a href="https://en.wikipedia.org/wiki/Stack_buffer_overflow">stack buffer overflows</a>, but additionally provides coverage for <a href="https://outflux.net/blog/archives/2014/01/27/fstack-protector-strong/">more array types</a>, as the original only protected character arrays. Stack-protector-strong was implemented by Han Shan and <a href="https://gcc.gnu.org/ml/gcc-patches/2012-06/msg00974.html">added to the gcc 4.9 compiler</a>.<br /><b><u><br /></u></b><b><u>Attack Surface Reduction</u></b><br />Attack surface reduction attempts to expose fewer entry points to the kernel without breaking legitimate functionality. Reducing attack surface can include removing code, removing access to entry points, or selectively exposing features.<br /><b>Remove Default Access to Debug Features</b><br />The kernel&#8217;s perf system provides infrastructure for performance measurement and can be used for analyzing both the kernel and userspace applications. 
Perf is a valuable tool for developers, but adds unnecessary attack surface for the vast majority of Android users. In Android Nougat, access to perf will be blocked by default. Developers may still access perf by enabling developer settings and using adb to set a property: &#8220;adb shell setprop security.perf_harden 0&#8221;.<br />The patchset for blocking access to perf may be broken down into kernel and userspace sections. The <a href="https://android-review.googlesource.com/#/c/234573/">kernel patch</a> is by <a href="https://lkml.org/lkml/2016/1/11/587">Ben Hutchings</a> and is derived from Grsecurity&#8217;s CONFIG_GRKERNSEC_PERF_HARDEN by Brad Spengler. The userspace changes were <a href="https://android-review.googlesource.com/#/q/topic:perf_harden">contributed by Daniel Micay</a>. Thanks to <a href="https://conference.hitb.org/hitbsecconf2016ams/sessions/perf-from-profiling-to-kernel-exploiting/">Wish Wu</a> and others for responsibly disclosing security vulnerabilities in perf.<br /><b>Restrict App Access to IOCTL Commands</b><br />Much of the Android security model is described and enforced by SELinux. The ioctl() syscall represented a major gap in the granularity of enforcement via SELinux. <a href="http://kernsec.org/files/lss2015/vanderstoep.pdf">Ioctl command whitelisting with SELinux</a> was added to provide per-command control over the ioctl syscall.<br />Most of the kernel vulnerabilities reported on Android occur in drivers and are reached using the ioctl syscall, for example <a href="https://source.android.com/security/bulletin/2016-03-01.html#elevation_of_privilege_vulnerability_in_mediatek_wi-fi_kernel_driver">CVE-2016-0820</a>. Some ioctl commands are needed by third-party applications; however, most are not, and access can be restricted without breaking legitimate functionality. In Android Nougat, only a small whitelist of socket ioctl commands is available to applications. 
For select devices, applications&#8217; access to GPU ioctls has been similarly restricted.<br /><b>Require SECCOMP-BPF</b><br />Seccomp provides an additional sandboxing mechanism allowing a process to restrict the syscalls and syscall arguments available using a configurable filter. Restricting the availability of syscalls can dramatically cut down on the exposed attack surface of the kernel. Since seccomp was first introduced on Nexus devices in Lollipop, its availability across the Android ecosystem has steadily improved. With Android Nougat, seccomp support is a requirement for all devices. On Android Nougat we are using seccomp on the mediaextractor and mediacodec processes as part of the <a href="http://android-developers.blogspot.com/2016/05/hardening-media-stack.html">media hardening effort</a>.<br /><b><u><br /></u></b><b><u>Ongoing Efforts</u></b><br />There are other projects underway aimed at protecting the kernel:<br /><br /><ul><li>The <a href="http://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project">Kernel Self Protection Project</a> is developing runtime and compiler defenses for the upstream kernel.</li><li>Further sandbox tightening and attack surface reduction with SELinux is ongoing in AOSP.</li><li><a href="https://www.chromium.org/chromium-os/developer-guide/chromium-os-sandboxing#h.l7ou90opzirq">Minijail</a> provides a convenient mechanism for applying many containment and sandboxing features offered by the kernel, including seccomp filters and namespaces.</li><li>Projects like <a href="https://www.kernel.org/doc/Documentation/kasan.txt">kasan</a> and <a href="https://www.kernel.org/doc/Documentation/kcov.txt">kcov</a> help fuzzers discover the root cause of crashes and intelligently construct test cases that increase code coverage&#8212;ultimately resulting in a more efficient bug hunting process.</li></ul><div>Due to these efforts and others, we expect the security of the kernel to continue improving. 
As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at <a href="mailto:security@android.com">security@android.com</a>.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jeff Vander Stoep, Android Security team</span> <br /><br /><i>[Cross-posted from the <a href="http://android-developers.blogspot.com/2016/07/protecting-android-with-more-linux.html">Android Developers Blog</a>]</i><br /><br />Android relies heavily on the Linux kernel for enforcement of its security model. To better protect the kernel, we’ve enabled a number of mechanisms within Android. At a high level these protections are grouped into two categories—memory protections and attack surface reduction.<br /><b><u>Memory Protections</u></b><br />One of the major security features provided by the kernel is memory protection for userspace processes in the form of address space separation. Unlike userspace processes, the kernel’s various tasks live within one address space and a vulnerability anywhere in the kernel can potentially impact unrelated portions of the system’s memory. Kernel memory protections are designed to maintain the integrity of the kernel in spite of vulnerabilities.<br /><b>Mark Memory As Read-Only/No-Execute</b><br />This feature segments kernel memory into logical sections and sets restrictive page access permissions on each section. Code is marked as read only + execute. Data sections are marked as no-execute and further segmented into read-only and read-write sections. This feature is enabled with config option CONFIG_DEBUG_RODATA. It was put together by Kees Cook and is based on a subset of <a href="https://grsecurity.net/">Grsecurity’s</a> KERNEXEC feature by Brad Spengler and Qualcomm’s CONFIG_STRICT_MEMORY_RWX feature by Larry Bassel and Laura Abbott. 
CONFIG_DEBUG_RODATA landed in the upstream kernel for arm/arm64 and has been backported to Android’s 3.18+ arm/<a href="https://android-review.googlesource.com/#/c/174947/">arm64</a> common kernel.<br /><b>Restrict Kernel Access to User Space</b><br />This feature improves protection of the kernel by preventing it from directly accessing userspace memory. This can make a number of attacks more difficult because attackers have significantly less control over kernel memory that is executable, particularly with CONFIG_DEBUG_RODATA enabled. Similar features were already in existence, the earliest being Grsecurity’s UDEREF. This feature is enabled with config option CONFIG_CPU_SW_DOMAIN_PAN and was implemented by Russell King for ARMv7 and backported to <a href="https://android-review.googlesource.com/#/q/topic:sw_PAN">Android’s 4.1</a> kernel by Kees Cook.<br /><b>Improve Protection Against Stack Buffer Overflows</b><br />Much like its predecessor, stack-protector, stack-protector-strong protects against <a href="https://en.wikipedia.org/wiki/Stack_buffer_overflow">stack buffer overflows</a>, but additionally provides coverage for <a href="https://outflux.net/blog/archives/2014/01/27/fstack-protector-strong/">more array types</a>, as the original only protected character arrays. Stack-protector-strong was implemented by Han Shan and <a href="https://gcc.gnu.org/ml/gcc-patches/2012-06/msg00974.html">added to the gcc 4.9 compiler</a>.<br /><b><u><br /></u></b><b><u>Attack Surface Reduction</u></b><br />Attack surface reduction attempts to expose fewer entry points to the kernel without breaking legitimate functionality. Reducing attack surface can include removing code, removing access to entry points, or selectively exposing features.<br /><b>Remove Default Access to Debug Features</b><br />The kernel’s perf system provides infrastructure for performance measurement and can be used for analyzing both the kernel and userspace applications. 
Perf is a valuable tool for developers, but adds unnecessary attack surface for the vast majority of Android users. In Android Nougat, access to perf will be blocked by default. Developers may still access perf by enabling developer settings and using adb to set a property: “adb shell setprop security.perf_harden 0”.<br />The patchset for blocking access to perf may be broken down into kernel and userspace sections. The <a href="https://android-review.googlesource.com/#/c/234573/">kernel patch</a> is by <a href="https://lkml.org/lkml/2016/1/11/587">Ben Hutchings</a> and is derived from Grsecurity’s CONFIG_GRKERNSEC_PERF_HARDEN by Brad Spengler. The userspace changes were <a href="https://android-review.googlesource.com/#/q/topic:perf_harden">contributed by Daniel Micay</a>. Thanks to <a href="https://conference.hitb.org/hitbsecconf2016ams/sessions/perf-from-profiling-to-kernel-exploiting/">Wish Wu</a> and others for responsibly disclosing security vulnerabilities in perf.<br /><b>Restrict App Access to IOCTL Commands</b><br />Much of the Android security model is described and enforced by SELinux. The ioctl() syscall represented a major gap in the granularity of enforcement via SELinux. <a href="http://kernsec.org/files/lss2015/vanderstoep.pdf">Ioctl command whitelisting with SELinux</a> was added to provide per-command control over the ioctl syscall.<br />Most of the kernel vulnerabilities reported on Android occur in drivers and are reached using the ioctl syscall, for example <a href="https://source.android.com/security/bulletin/2016-03-01.html#elevation_of_privilege_vulnerability_in_mediatek_wi-fi_kernel_driver">CVE-2016-0820</a>. Some ioctl commands are needed by third-party applications; however, most are not, and access can be restricted without breaking legitimate functionality. In Android Nougat, only a small whitelist of socket ioctl commands is available to applications. 
For select devices, applications’ access to GPU ioctls has been similarly restricted.<br /><b>Require SECCOMP-BPF</b><br />Seccomp provides an additional sandboxing mechanism allowing a process to restrict the syscalls and syscall arguments available using a configurable filter. Restricting the availability of syscalls can dramatically cut down on the exposed attack surface of the kernel. Since seccomp was first introduced on Nexus devices in Lollipop, its availability across the Android ecosystem has steadily improved. With Android Nougat, seccomp support is a requirement for all devices. On Android Nougat we are using seccomp on the mediaextractor and mediacodec processes as part of the <a href="http://android-developers.blogspot.com/2016/05/hardening-media-stack.html">media hardening effort</a>.<br /><b><u><br /></u></b><b><u>Ongoing Efforts</u></b><br />There are other projects underway aimed at protecting the kernel:<br /><br /><ul><li>The <a href="http://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project">Kernel Self Protection Project</a> is developing runtime and compiler defenses for the upstream kernel.</li><li>Further sandbox tightening and attack surface reduction with SELinux is ongoing in AOSP.</li><li><a href="https://www.chromium.org/chromium-os/developer-guide/chromium-os-sandboxing#h.l7ou90opzirq">Minijail</a> provides a convenient mechanism for applying many containment and sandboxing features offered by the kernel, including seccomp filters and namespaces.</li><li>Projects like <a href="https://www.kernel.org/doc/Documentation/kasan.txt">kasan</a> and <a href="https://www.kernel.org/doc/Documentation/kcov.txt">kcov</a> help fuzzers discover the root cause of crashes and intelligently construct test cases that increase code coverage—ultimately resulting in a more efficient bug hunting process.</li></ul><div>Due to these efforts and others, we expect the security of the kernel to continue improving. 
As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at <a href="mailto:security@android.com">security@android.com</a>.</div>]]></content:encoded>
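The perf gating described above can be sketched as a simple property check. This is illustrative only; the real enforcement lives in the kernel patch and userspace changes linked in the post, and the default value shown here is an assumption.

```python
# Illustrative sketch of the gate the post describes: in Android Nougat,
# perf is blocked by default, and developers re-enable it by clearing the
# security.perf_harden system property over adb:
#   adb shell setprop security.perf_harden 0
def perf_allowed(system_properties):
    """Return True if unprivileged perf access should be permitted."""
    # Hardened by default (assumed): a missing or non-zero property value
    # keeps perf blocked; only an explicit "0" opens it up.
    return system_properties.get("security.perf_harden", "1") == "0"

print(perf_allowed({}))                              # blocked by default
print(perf_allowed({"security.perf_harden": "0"}))   # developer override
```

The design choice mirrors attack surface reduction generally: the risky entry point is off unless a developer deliberately opts in.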
			<wfw:commentRss>https://googledata.org/google-online-security/protecting-android-with-more-linux-kernel-defenses-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Changes to Trusted Certificate Authorities in Android Nougat</title>
		<link>https://googledata.org/google-online-security/changes-to-trusted-certificate-authorities-in-android-nougat-2/</link>
		<comments>https://googledata.org/google-online-security/changes-to-trusted-certificate-authorities-in-android-nougat-2/#comments</comments>
		<pubDate>Fri, 08 Jul 2016 22:23:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=2384509498f2761c6bbd927c92675872</guid>
		<description><![CDATA[<span>Posted by Chad Brubaker, Android Security team</span> <br /><i>[Cross-posted from the <a href="http://android-developers.blogspot.com/2016/07/changes-to-trusted-certificate.html">Android Developers Blog</a>]</i><br />In Android Nougat, we&#8217;ve changed how Android handles trusted certificate authorities (CAs) to provide safer defaults for secure app traffic. Most apps and users should not be affected by these changes or need to take any action. The changes include:<br /><ul><li>Safe and easy APIs to trust custom CAs. </li><li>Apps that target API Level 24 and above no longer trust user or admin-added CAs for secure connections, by default. </li><li>All devices running Android Nougat offer the same standardized set of system CAs&#8212;no device-specific customizations.</li></ul>For more details on these changes and what to do if you&#8217;re affected by them, read on. <br /><h3>Safe and easy APIs</h3>Apps have always been able to customize which certificate authorities they trust. However, we saw apps making mistakes due to the complexities of the Java TLS APIs. To address this, we <a href="https://developer.android.com/preview/features/security-config.html">improved the APIs</a> for customizing trust. <br /><h3>User-added CAs</h3>Protection of all application data is a key goal of the Android application sandbox. Android Nougat changes how applications interact with user- and admin-supplied CAs. By default, apps that target API level 24 will&#8212;by design&#8212;not honor such CAs unless the app explicitly opts in. This safe-by-default setting reduces application attack surface and encourages consistent handling of network and file-based application data. <br /><h3>Customizing trusted CAs</h3>Customizing the CAs your app trusts on Android Nougat is easy using the Network Security Config. Trust can be specified across the whole app or only for connections to certain domains, as needed. 
Below are some examples for trusting a custom or user-added CA, in addition to the system CAs. For more examples and details, see <a href="https://developer.android.com/preview/features/security-config.html">the full documentation</a>. <br /><h4>Trusting custom CAs for debugging</h4>To allow your app to trust custom CAs only for local debugging, include something like this in your Network Security Config. The CAs will only be trusted while your app is marked as debuggable. <br /><pre>&#60;network-security-config&#62;  <br />      &#60;debug-overrides&#62;  <br />           &#60;trust-anchors&#62;  <br />                &#60;!-- Trust user added CAs while debuggable only --&#62;<br />                &#60;certificates src="user" /&#62;  <br />           &#60;/trust-anchors&#62;  <br />      &#60;/debug-overrides&#62;  <br /> &#60;/network-security-config&#62;</pre><h4>Trusting custom CAs for a domain</h4>To allow your app to trust custom CAs for a specific domain, include something like this in your Network Security Config. <br /><pre>&#60;network-security-config&#62;  <br />      &#60;domain-config&#62;  <br />           &#60;domain includeSubdomains="true"&#62;internal.example.com&#60;/domain&#62;  <br />           &#60;trust-anchors&#62;  <br />                &#60;!-- Only trust the CAs included with the app  <br />                     for connections to internal.example.com --&#62;  <br />                &#60;certificates src="@raw/cas" /&#62;  <br />           &#60;/trust-anchors&#62;  <br />      &#60;/domain-config&#62;  <br /> &#60;/network-security-config&#62;</pre><h4>Trusting user-added CAs for some domains</h4>To allow your app to trust user-added CAs for multiple domains, include something like this in your Network Security Config. 
<br /><pre>&#60;network-security-config&#62;  <br />      &#60;domain-config&#62;  <br />           &#60;domain includeSubdomains="true"&#62;userCaDomain.com&#60;/domain&#62;  <br />           &#60;domain includeSubdomains="true"&#62;otherUserCaDomain.com&#60;/domain&#62;  <br />           &#60;trust-anchors&#62;  <br />                  &#60;!-- Trust preinstalled CAs --&#62;  <br />                  &#60;certificates src="system" /&#62;  <br />                  &#60;!-- Additionally trust user added CAs --&#62;  <br />                  &#60;certificates src="user" /&#62;  <br />           &#60;/trust-anchors&#62;  <br />      &#60;/domain-config&#62;  <br /> &#60;/network-security-config&#62;</pre><h4>Trusting user-added CAs for all domains except some</h4>To allow your app to trust user-added CAs for all domains, except for those specified, include something like this in your Network Security Config. <br /><pre>&#60;network-security-config&#62;  <br />      &#60;base-config&#62;  <br />           &#60;trust-anchors&#62;  <br />                &#60;!-- Trust preinstalled CAs --&#62;  <br />                &#60;certificates src="system" /&#62;  <br />                &#60;!-- Additionally trust user added CAs --&#62;  <br />                &#60;certificates src="user" /&#62;  <br />           &#60;/trust-anchors&#62;  <br />      &#60;/base-config&#62;  <br />      &#60;domain-config&#62;  <br />           &#60;domain includeSubdomains="true"&#62;sensitive.example.com&#60;/domain&#62;  <br />           &#60;trust-anchors&#62;  <br />                &#60;!-- Only allow sensitive content to be exchanged  <br />                     with the real server and not any user or  <br />                     admin configured MiTMs --&#62;  <br />                &#60;certificates src="system" /&#62;  <br />           &#60;/trust-anchors&#62;  <br />      &#60;/domain-config&#62;  <br /> &#60;/network-security-config&#62;</pre><h4>Trusting user-added CAs for all secure connections</h4>To allow your app 
to trust user-added CAs for all secure connections, add this in your Network Security Config. <br /><pre>&#60;network-security-config&#62;  <br />      &#60;base-config&#62;  <br />            &#60;trust-anchors&#62;  <br />                &#60;!-- Trust preinstalled CAs --&#62;  <br />                &#60;certificates src="system" /&#62;  <br />                &#60;!-- Additionally trust user added CAs --&#62;  <br />                &#60;certificates src="user" /&#62;  <br />           &#60;/trust-anchors&#62;  <br />      &#60;/base-config&#62;  <br /> &#60;/network-security-config&#62;</pre><h3>Standardized set of system-trusted CAs</h3>To provide a more consistent and more secure experience across the Android ecosystem, beginning with Android Nougat, compatible devices trust only the standardized system CAs maintained in <a href="https://android.googlesource.com/platform/system/ca-certificates/">AOSP</a>. <br />Previously, the set of preinstalled CAs bundled with the system could vary from device to device. This could lead to compatibility issues when some devices did not include CAs that apps needed for connections as well as potential security issues if CAs that did not meet our security requirements were included on some devices. <br /><h3>What if I have a CA I believe should be included on Android?</h3>First, be sure that your CA needs to be included in the system. The preinstalled CAs are <strong>only</strong> for CAs that meet our security requirements because they affect the secure connections of most apps on the device. If you need to add a CA for connecting to hosts that use that CA, you should instead customize your apps and services that connect to those hosts. For more information, see the <em>Customizing trusted CAs</em> section above. 
<br />If you operate a CA that you believe should be included in Android, first complete the <a href="https://wiki.mozilla.org/CA:How_to_apply">Mozilla CA Inclusion Process</a> and then file a <a href="https://code.google.com/p/android/issues/entry">feature request</a> against Android to have the CA added to the standardized set of system CAs.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Chad Brubaker, Android Security team</span> <br /><i>[Cross-posted from the <a href="http://android-developers.blogspot.com/2016/07/changes-to-trusted-certificate.html">Android Developers Blog</a>]</i><br />In Android Nougat, we’ve changed how Android handles trusted certificate authorities (CAs) to provide safer defaults for secure app traffic. Most apps and users should not be affected by these changes or need to take any action. The changes include:<br /><ul><li>Safe and easy APIs to trust custom CAs. </li><li>Apps that target API level 24 and above no longer trust user- or admin-added CAs for secure connections, by default. </li><li>All devices running Android Nougat offer the same standardized set of system CAs, with no device-specific customizations.</li></ul>For more details on these changes and what to do if you’re affected by them, read on. <br /><h3>Safe and easy APIs</h3>Apps have always been able to customize which certificate authorities they trust. However, we saw apps making mistakes due to the complexities of the Java TLS APIs. To address this, we <a href="https://developer.android.com/preview/features/security-config.html">improved the APIs</a> for customizing trust. <br /><h3>User-added CAs</h3>Protection of all application data is a key goal of the Android application sandbox. Android Nougat changes how applications interact with user- and admin-supplied CAs. By default, apps that target API level 24 will, by design, not honor such CAs unless the app explicitly opts in. This safe-by-default setting reduces application attack surface and encourages consistent handling of network and file-based application data. <br /><h3>Customizing trusted CAs</h3>Customizing the CAs your app trusts on Android Nougat is easy using the Network Security Config. Trust can be specified across the whole app or only for connections to certain domains, as needed. 
Below are some examples for trusting a custom or user-added CA, in addition to the system CAs. For more examples and details, see <a href="https://developer.android.com/preview/features/security-config.html">the full documentation</a>. <br /><h4>Trusting custom CAs for debugging</h4>To allow your app to trust custom CAs only for local debugging, include something like this in your Network Security Config. The CAs will only be trusted while your app is marked as debuggable. <br /><pre class="pretty print">&lt;network-security-config&gt;  <br />      &lt;debug-overrides&gt;  <br />           &lt;trust-anchors&gt;  <br />                &lt;!-- Trust user added CAs while debuggable only --&gt;<br />                &lt;certificates src="user" /&gt;  <br />           &lt;/trust-anchors&gt;  <br />      &lt;/debug-overrides&gt;  <br /> &lt;/network-security-config&gt;</pre><h4>Trusting custom CAs for a domain</h4>To allow your app to trust custom CAs for a specific domain, include something like this in your Network Security Config. <br /><pre class="pretty print">&lt;network-security-config&gt;  <br />      &lt;domain-config&gt;  <br />           &lt;domain includeSubdomains="true"&gt;internal.example.com&lt;/domain&gt;  <br />           &lt;trust-anchors&gt;  <br />                &lt;!-- Only trust the CAs included with the app  <br />                     for connections to internal.example.com --&gt;  <br />                &lt;certificates src="@raw/cas" /&gt;  <br />           &lt;/trust-anchors&gt;  <br />      &lt;/domain-config&gt;  <br /> &lt;/network-security-config&gt;</pre><h4>Trusting user-added CAs for some domains</h4>To allow your app to trust user-added CAs for multiple domains, include something like this in your Network Security Config. 
<br /><pre class="pretty print">&lt;network-security-config&gt;  <br />      &lt;domain-config&gt;  <br />           &lt;domain includeSubdomains="true"&gt;userCaDomain.com&lt;/domain&gt;  <br />           &lt;domain includeSubdomains="true"&gt;otherUserCaDomain.com&lt;/domain&gt;  <br />           &lt;trust-anchors&gt;  <br />                  &lt;!-- Trust preinstalled CAs --&gt;  <br />                  &lt;certificates src="system" /&gt;  <br />                  &lt;!-- Additionally trust user added CAs --&gt;  <br />                  &lt;certificates src="user" /&gt;  <br />           &lt;/trust-anchors&gt;  <br />      &lt;/domain-config&gt;  <br /> &lt;/network-security-config&gt;</pre><h4>Trusting user-added CAs for all domains except some</h4>To allow your app to trust user-added CAs for all domains except those specified, include something like this in your Network Security Config. <br /><pre class="pretty print">&lt;network-security-config&gt;  <br />      &lt;base-config&gt;  <br />           &lt;trust-anchors&gt;  <br />                &lt;!-- Trust preinstalled CAs --&gt;  <br />                &lt;certificates src="system" /&gt;  <br />                &lt;!-- Additionally trust user added CAs --&gt;  <br />                &lt;certificates src="user" /&gt;  <br />           &lt;/trust-anchors&gt;  <br />      &lt;/base-config&gt;  <br />      &lt;domain-config&gt;  <br />           &lt;domain includeSubdomains="true"&gt;sensitive.example.com&lt;/domain&gt;  <br />           &lt;trust-anchors&gt;  <br />                &lt;!-- Only allow sensitive content to be exchanged  <br />                     with the real server and not any user or  <br />                     admin configured MiTMs --&gt;  <br />                &lt;certificates src="system" /&gt;  <br />           &lt;/trust-anchors&gt;  <br />      &lt;/domain-config&gt;  <br /> &lt;/network-security-config&gt;</pre><h4>Trusting user-added CAs for all secure connections</h4>To allow your app to trust user-added CAs 
for all secure connections, add this to your Network Security Config. <br /><pre class="pretty print">&lt;network-security-config&gt;  <br />      &lt;base-config&gt;  <br />           &lt;trust-anchors&gt;  <br />                &lt;!-- Trust preinstalled CAs --&gt;  <br />                &lt;certificates src="system" /&gt;  <br />                &lt;!-- Additionally trust user added CAs --&gt;  <br />                &lt;certificates src="user" /&gt;  <br />           &lt;/trust-anchors&gt;  <br />      &lt;/base-config&gt;  <br /> &lt;/network-security-config&gt;</pre><h3>Standardized set of system-trusted CAs</h3>To provide a more consistent and more secure experience across the Android ecosystem, beginning with Android Nougat, compatible devices trust only the standardized system CAs maintained in <a href="https://android.googlesource.com/platform/system/ca-certificates/">AOSP</a>. <br />Previously, the set of preinstalled CAs bundled with the system could vary from device to device. This could lead to compatibility issues when some devices did not include CAs that apps needed for connections, as well as potential security issues if CAs that did not meet our security requirements were included on some devices. <br /><h3>What if I have a CA I believe should be included on Android?</h3>First, be sure that your CA needs to be included in the system. The preinstalled CAs are <strong>only</strong> for CAs that meet our security requirements because they affect the secure connections of most apps on the device. If you need to add a CA for connecting to hosts that use that CA, you should instead customize your apps and services that connect to those hosts. For more information, see the <em>Customizing trusted CAs</em> section above. 
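<br />One wiring detail the examples above assume: a Network Security Config only takes effect once your app’s manifest points at it. A minimal sketch, assuming the config is saved as <code>res/xml/network_security_config.xml</code> (the file name is a convention, not a requirement): <br /><pre class="pretty print">&lt;!-- AndroidManifest.xml --&gt;  <br />&lt;application android:networkSecurityConfig="@xml/network_security_config"  <br />             ... &gt;  <br />    ...  <br />&lt;/application&gt;</pre>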
<br />If you operate a CA that you believe should be included in Android, first complete the <a href="https://wiki.mozilla.org/CA:How_to_apply">Mozilla CA Inclusion Process</a> and then file a <a href="https://code.google.com/p/android/issues/entry">feature request</a> against Android to have the CA added to the standardized set of system CAs. ]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/changes-to-trusted-certificate-authorities-in-android-nougat-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Experimenting with Post-Quantum Cryptography</title>
		<link>https://googledata.org/google-online-security/experimenting-with-post-quantum-cryptography/</link>
		<comments>https://googledata.org/google-online-security/experimenting-with-post-quantum-cryptography/#comments</comments>
		<pubDate>Thu, 07 Jul 2016 15:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=22ea3222c749645c215adcb15ab1a767</guid>
		<description><![CDATA[<span>Posted by Matt Braithwaite, Software Engineer</span><br />Quantum computers are a fundamentally different sort of computer that take advantage of aspects of quantum physics to solve certain sorts of problems dramatically faster than conventional computers can. While they will, no doubt, be of huge benefit in some areas of study, some of the problems that they are effective at solving are the ones that we use to secure digital communications. Specifically, if large quantum computers can be built then they may be able to break the asymmetric cryptographic primitives that are currently used in TLS, the security protocol behind HTTPS.<br /><br />Quantum computers exist today but, for the moment, they are small and experimental, containing only a handful of quantum bits. It's not even certain that large machines will ever be built, although <a href="https://research.googleblog.com/2015/03/a-step-closer-to-quantum-computation.html">Google</a>, <a href="http://www.research.ibm.com/quantum/">IBM</a>, <a href="https://www.microsoft.com/en-us/research/research-area/hardware-devices-quantum-computing/">Microsoft</a>, <a href="https://newsroom.intel.com/news-releases/intel-invests-us50-million-to-advance-quantum-computing/">Intel</a> and others are working on it. (Adiabatic quantum computers, like the D-Wave computer that Google operates with NASA, can have large numbers of quantum bits, but currently solve fundamentally different problems.)<br /><br />However, a hypothetical, future quantum computer would be able to retrospectively decrypt any internet communication that was recorded today, and many types of information need to remain confidential for decades. 
Thus even the possibility of a future quantum computer is something that we should be thinking about today.<br /><b>Experimenting with Post-quantum cryptography in Chrome</b><br />The study of cryptographic primitives that remain secure even against quantum computers is called &#8220;post-quantum cryptography&#8221;. Today we're announcing an experiment in Chrome where a small fraction of connections between desktop Chrome and Google's servers will use a post-quantum key-exchange algorithm in addition to the elliptic-curve key-exchange algorithm that would typically be used. By adding a post-quantum algorithm on top of the existing one, we are able to experiment without affecting user security. The post-quantum algorithm might turn out to be breakable even with today's computers, in which case the elliptic-curve algorithm will still provide the best security that today&#8217;s technology can offer. Alternatively, if the post-quantum algorithm turns out to be secure then it'll protect the connection even against a future, quantum computer.<br /><br />Our aims with this experiment are to highlight an area of research that Google believes to be important and to gain real-world experience with the larger data structures that post-quantum algorithms will likely require.<br /><br />We're indebted to Erdem Alkim, L&#233;o Ducas, Thomas P&#246;ppelmann and Peter Schwabe, the researchers who developed &#8220;<a href="https://eprint.iacr.org/2015/1092">New Hope</a>&#8221;, the post-quantum algorithm that we selected for this experiment. Their scheme looked to be the most promising post-quantum key-exchange when we investigated in December 2015. Their work builds upon <a href="https://eprint.iacr.org/2014/599.pdf">earlier work</a> by Bos, Costello, Naehrig and Stebila, and also on <a href="https://eprint.iacr.org/2012/230.pdf">work</a> by Lyubashevsky, Peikert and Regev.<br /><br />We explicitly do not wish to make our selected post-quantum algorithm a de-facto standard. 
To this end we plan to discontinue this experiment within two years, hopefully by replacing it with something better. Since we selected New Hope, we've noted two <a href="https://eprint.iacr.org/2016/413">promising</a> <a href="https://ntruprime.cr.yp.to/ntruprime-20160511.pdf">papers</a> in this space, which are welcome. Additionally, Google researchers, in collaboration with researchers from NXP, Microsoft, Centrum Wiskunde &#38; Informatica and McMaster University, have just published <a href="https://eprint.iacr.org/2016/659">another paper</a> in this area. Practical research papers, such as these, are critical if cryptography is to have real-world impact.<br /><br />This experiment is currently enabled in <a href="https://www.google.com/chrome/browser/canary.html">Chrome Canary</a> and you can tell whether it's being used by opening the recently introduced <a href="https://developers.google.com/web/updates/2015/12/security-panel?hl=en">Security Panel</a> and looking for &#8220;CECPQ1&#8221;, for example on <a href="https://play.google.com/store">https://play.google.com/store</a>. Not all Google domains will have it enabled and the experiment may appear and disappear a few times if any issues are found.<br /><div><a href="https://3.bp.blogspot.com/-q_7S4AI2qAw/V32YDMPlb7I/AAAAAAAAATA/ew-zoC4tSkYLxh3mvdppJVLIcEVCSh-ZgCLcB/s1600/play%2Bscreenshot.png"><img border="0" height="363" src="https://3.bp.blogspot.com/-q_7S4AI2qAw/V32YDMPlb7I/AAAAAAAAATA/ew-zoC4tSkYLxh3mvdppJVLIcEVCSh-ZgCLcB/s640/play%2Bscreenshot.png" width="640"></a></div><br />While it's still very early days for quantum computers, we're excited to begin preparing for them, and to help ensure our users' data will remain secure long into the future.<br /><b><br /></b><b><br /></b><b><br /></b>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Matt Braithwaite, Software Engineer</span><br />Quantum computers are a fundamentally different sort of computer that take advantage of aspects of quantum physics to solve certain sorts of problems dramatically faster than conventional computers can. While they will, no doubt, be of huge benefit in some areas of study, some of the problems that they are effective at solving are the ones that we use to secure digital communications. Specifically, if large quantum computers can be built then they may be able to break the asymmetric cryptographic primitives that are currently used in TLS, the security protocol behind HTTPS.<br /><br />Quantum computers exist today but, for the moment, they are small and experimental, containing only a handful of quantum bits. It's not even certain that large machines will ever be built, although <a href="https://research.googleblog.com/2015/03/a-step-closer-to-quantum-computation.html">Google</a>, <a href="http://www.research.ibm.com/quantum/">IBM</a>, <a href="https://www.microsoft.com/en-us/research/research-area/hardware-devices-quantum-computing/">Microsoft</a>, <a href="https://newsroom.intel.com/news-releases/intel-invests-us50-million-to-advance-quantum-computing/">Intel</a> and others are working on it. (Adiabatic quantum computers, like the D-Wave computer that Google operates with NASA, can have large numbers of quantum bits, but currently solve fundamentally different problems.)<br /><br />However, a hypothetical, future quantum computer would be able to retrospectively decrypt any internet communication that was recorded today, and many types of information need to remain confidential for decades. 
Thus even the possibility of a future quantum computer is something that we should be thinking about today.<br /><b>Experimenting with Post-quantum cryptography in Chrome</b><br />The study of cryptographic primitives that remain secure even against quantum computers is called “post-quantum cryptography”. Today we're announcing an experiment in Chrome where a small fraction of connections between desktop Chrome and Google's servers will use a post-quantum key-exchange algorithm in addition to the elliptic-curve key-exchange algorithm that would typically be used. By adding a post-quantum algorithm on top of the existing one, we are able to experiment without affecting user security. The post-quantum algorithm might turn out to be breakable even with today's computers, in which case the elliptic-curve algorithm will still provide the best security that today’s technology can offer. Alternatively, if the post-quantum algorithm turns out to be secure then it'll protect the connection even against a future, quantum computer.<br /><br />Our aims with this experiment are to highlight an area of research that Google believes to be important and to gain real-world experience with the larger data structures that post-quantum algorithms will likely require.<br /><br />We're indebted to Erdem Alkim, Léo Ducas, Thomas Pöppelmann and Peter Schwabe, the researchers who developed “<a href="https://eprint.iacr.org/2015/1092">New Hope</a>”, the post-quantum algorithm that we selected for this experiment. Their scheme looked to be the most promising post-quantum key-exchange when we investigated in December 2015. Their work builds upon <a href="https://eprint.iacr.org/2014/599.pdf">earlier work</a> by Bos, Costello, Naehrig and Stebila, and also on <a href="https://eprint.iacr.org/2012/230.pdf">work</a> by Lyubashevsky, Peikert and Regev.<br /><br />We explicitly do not wish to make our selected post-quantum algorithm a de-facto standard. 
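<br /><br />As an aside, the layered key exchange described above can be sketched in a few lines. This is a toy illustration of the principle only, not CECPQ1's actual construction (which lives inside the TLS key schedule), and the secret values are placeholders: the session key is derived from both shared secrets, so an attacker would have to recover both the elliptic-curve secret and the post-quantum secret to read the traffic.

```python
import hashlib

def combined_key(classical_secret: bytes, post_quantum_secret: bytes) -> bytes:
    # Toy sketch: hash both shared secrets together so the derived key
    # depends on each of them. Breaking only one of the two exchanges
    # reveals nothing useful about the session key.
    return hashlib.sha256(classical_secret + post_quantum_secret).digest()

# Placeholder values standing in for the X25519 and New Hope outputs.
session_key = combined_key(b"elliptic-curve-shared-secret",
                           b"post-quantum-shared-secret")

# Guessing the post-quantum half wrong yields an unrelated key.
assert session_key != combined_key(b"elliptic-curve-shared-secret", b"guess")
```

Even if the post-quantum half were later broken, a key derived this way is no weaker than one based on the elliptic-curve secret alone.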
To this end we plan to discontinue this experiment within two years, hopefully by replacing it with something better. Since we selected New Hope, we've noted two <a href="https://eprint.iacr.org/2016/413">promising</a> <a href="https://ntruprime.cr.yp.to/ntruprime-20160511.pdf">papers</a> in this space, which are welcome. Additionally, Google researchers, in collaboration with researchers from NXP, Microsoft, Centrum Wiskunde &amp; Informatica and McMaster University, have just published <a href="https://eprint.iacr.org/2016/659">another paper</a> in this area. Practical research papers, such as these, are critical if cryptography is to have real-world impact.<br /><br />This experiment is currently enabled in <a href="https://www.google.com/chrome/browser/canary.html">Chrome Canary</a> and you can tell whether it's being used by opening the recently introduced <a href="https://developers.google.com/web/updates/2015/12/security-panel?hl=en">Security Panel</a> and looking for “CECPQ1”, for example on <a href="https://play.google.com/store">https://play.google.com/store</a>. Not all Google domains will have it enabled and the experiment may appear and disappear a few times if any issues are found.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-q_7S4AI2qAw/V32YDMPlb7I/AAAAAAAAATA/ew-zoC4tSkYLxh3mvdppJVLIcEVCSh-ZgCLcB/s1600/play%2Bscreenshot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="363" src="https://3.bp.blogspot.com/-q_7S4AI2qAw/V32YDMPlb7I/AAAAAAAAATA/ew-zoC4tSkYLxh3mvdppJVLIcEVCSh-ZgCLcB/s640/play%2Bscreenshot.png" width="640" /></a></div><br />While it's still very early days for quantum computers, we're excited to begin preparing for them, and to help ensure our users' data will remain secure long into the future.<br /><b><br /></b><b><br /></b><b><br /></b>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/experimenting-with-post-quantum-cryptography/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>One Year of Android Security Rewards</title>
		<link>https://googledata.org/google-online-security/one-year-of-android-security-rewards-2/</link>
		<comments>https://googledata.org/google-online-security/one-year-of-android-security-rewards-2/#comments</comments>
		<pubDate>Thu, 16 Jun 2016 20:31:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=70e22a833010d6bd253393de969c9447</guid>
		<description><![CDATA[<span>Posted by Quan To, Program Manager, Android Security</span><br /><br /><a href="https://security.googleblog.com/2015/06/announcing-security-rewards-for-android.html">A year ago</a>, we added <a href="https://www.google.com/about/appsecurity/android-rewards/index.html">Android Security Rewards</a> to the long-standing <a href="https://www.google.com/about/appsecurity/reward-program/">Google Vulnerability Rewards Program</a>. We offered up to $38,000 per report that we used to fix vulnerabilities and protect Android users.<br /><br />Since then, we have received over 250 qualifying vulnerability reports from researchers that have helped make Android and mobile security stronger. More than a third of them were reported in Media Server, which has been <a href="http://android-developers.blogspot.com/2016/05/hardening-media-stack.html">hardened in Android N</a> to make it more resistant to vulnerabilities.<br /><br />While the program is focused on Nexus devices and has a primary goal of improving Android security, more than a quarter of the issues were reported in code that is developed and used outside of the Android Open Source Project. Fixing these kernel and device driver bugs helps improve security of the broader mobile industry (and even some non-mobile platforms).<br /><b>By the Numbers</b><br />Here&#8217;s a quick rundown of the Android VRP&#8217;s first year:<br /><br /><ul><li>We paid over $550,000 to 82 individuals. 
That&#8217;s an average of $2,200 per reward and $6,700 per researcher.</li><li>We paid our top researcher, <a href="https://twitter.com/heisecode">@heisecode</a>, $75,750 for 26 vulnerability reports.</li><li>We paid 15 researchers $10,000 or more.</li><li>There were no payouts for the top reward for a complete remote exploit chain leading to TrustZone or Verified Boot compromise.</li></ul><div>Thank you to <a href="https://source.android.com/security/overview/acknowledgements.html">those</a> who submitted high quality <a href="https://code.google.com/p/android/issues/entry?template=Security%20bug%20report">vulnerability reports</a> to us last year.</div><div><br /></div><div><b><br /></b></div><div><b><br /></b></div><div><b>Improvements to Android VRP</b></div><div><b><br /></b></div><div>We&#8217;re constantly working to improve the program and today we&#8217;re making a few changes to all vulnerability reports filed after June 1, 2016.</div><div><br /></div><div>We&#8217;re paying more!</div><div><ul><li>We will now pay 33% more for a high-quality vulnerability report with proof of concept. For example, the reward for a Critical vulnerability report with a proof of concept increased from $3000 to $4000.</li><li>A high quality vulnerability report with a proof of concept, a CTS Test, or a patch will receive an additional 50% more.</li><li>We&#8217;re raising our rewards for a remote or proximal kernel exploit from $20,000 to $30,000.</li><li>A remote exploit chain or exploits leading to TrustZone or Verified Boot compromise increase from $30,000 to $50,000.</li></ul><div>All of the changes, as well as the additional terms of the program, are explained in more detail in our <a href="https://www.google.com/about/appsecurity/android-rewards/">Program Rules</a>. 
If you&#8217;re interested in helping us find security vulnerabilities, take a look at <a href="https://sites.google.com/site/bughunteruniversity/">Bug Hunter University</a> and learn how to submit high quality vulnerability reports. Remember, the better the report, the more you&#8217;ll get paid. We also recently updated our <a href="https://source.android.com/security/overview/updates-resources.html">severity ratings</a>, so make sure to check those out, too.</div></div><div><br /></div><div><br /></div><div>Thank you to everyone who helped us make Android safer. Together, we made a huge investment in security research that has made Android stronger. We&#8217;re just getting started and are looking forward to doing even more in the future.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Quan To, Program Manager, Android Security</span><br /><br /><a href="https://security.googleblog.com/2015/06/announcing-security-rewards-for-android.html">A year ago</a>, we added <a href="https://www.google.com/about/appsecurity/android-rewards/index.html">Android Security Rewards</a> to the long-standing <a href="https://www.google.com/about/appsecurity/reward-program/">Google Vulnerability Rewards Program</a>. We offered up to $38,000 per report that we used to fix vulnerabilities and protect Android users.<br /><br />Since then, we have received over 250 qualifying vulnerability reports from researchers that have helped make Android and mobile security stronger. More than a third of them were reported in Media Server, which has been <a href="http://android-developers.blogspot.com/2016/05/hardening-media-stack.html">hardened in Android N</a> to make it more resistant to vulnerabilities.<br /><br />While the program is focused on Nexus devices and has a primary goal of improving Android security, more than a quarter of the issues were reported in code that is developed and used outside of the Android Open Source Project. Fixing these kernel and device driver bugs helps improve security of the broader mobile industry (and even some non-mobile platforms).<br /><b>By the Numbers</b><br />Here’s a quick rundown of the Android VRP’s first year:<br /><br /><ul><li>We paid over $550,000 to 82 individuals. 
That’s an average of $2,200 per reward and $6,700 per researcher.</li><li>We paid our top researcher, <a href="https://twitter.com/heisecode">@heisecode</a>, $75,750 for 26 vulnerability reports.</li><li>We paid 15 researchers $10,000 or more.</li><li>There were no payouts for the top reward for a complete remote exploit chain leading to TrustZone or Verified Boot compromise.</li></ul><div>Thank you to <a href="https://source.android.com/security/overview/acknowledgements.html">those</a> who submitted high quality <a href="https://code.google.com/p/android/issues/entry?template=Security%20bug%20report">vulnerability reports</a> to us last year.</div><div><br /></div><div><b><br /></b></div><div><b><br /></b></div><div><b>Improvements to Android VRP</b></div><div><b><br /></b></div><div>We’re constantly working to improve the program and today we’re making a few changes to all vulnerability reports filed after June 1, 2016.</div><div><br /></div><div>We’re paying more!</div><div><ul><li>We will now pay 33% more for a high-quality vulnerability report with proof of concept. For example, the reward for a Critical vulnerability report with a proof of concept increased from $3000 to $4000.</li><li>A high quality vulnerability report with a proof of concept, a CTS Test, or a patch will receive an additional 50% more.</li><li>We’re raising our rewards for a remote or proximal kernel exploit from $20,000 to $30,000.</li><li>A remote exploit chain or exploits leading to TrustZone or Verified Boot compromise increase from $30,000 to $50,000.</li></ul><div>All of the changes, as well as the additional terms of the program, are explained in more detail in our <a href="https://www.google.com/about/appsecurity/android-rewards/">Program Rules</a>. If you’re interested in helping us find security vulnerabilities, take a look at <a href="https://sites.google.com/site/bughunteruniversity/">Bug Hunter University</a> and learn how to submit high quality vulnerability reports. 
Remember, the better the report, the more you’ll get paid. We also recently updated our <a href="https://source.android.com/security/overview/updates-resources.html">severity ratings</a>, so make sure to check those out, too.</div></div><div><br /></div><div><br /></div><div>Thank you to everyone who helped us make Android safer. Together, we made a huge investment in security research that has made Android stronger. We’re just getting started and are looking forward to doing even more in the future.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/one-year-of-android-security-rewards-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="" length="" type="" />
		</item>
		<item>
		<title>Evolving the Safe Browsing API</title>
		<link>https://googledata.org/google-online-security/evolving-the-safe-browsing-api/</link>
		<comments>https://googledata.org/google-online-security/evolving-the-safe-browsing-api/#comments</comments>
		<pubDate>Fri, 20 May 2016 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e7eae86a2212e99b29cafdf564b0246c</guid>
		<description><![CDATA[<span>Posted by Emily Schechter and Alex Wozniak, Safe Browsing Team&#160;</span><br /><span><br /></span>We're excited to announce the launch of the new <a href="https://developers.google.com/safe-browsing/">Safe Browsing API version 4</a>. Version 4 replaces the existing Safe Browsing API version 3. With the launch of v4, we&#8217;re now starting the deprecation process for v2-3: please transition off of these older Safe Browsing protocol versions as soon as possible and onto protocol version 4.<br /><br /><a href="https://www.google.com/transparencyreport/safebrowsing/?hl=en">Safe Browsing</a> protects well over two billion internet-connected devices from threats like malware and phishing, and has done so for over a decade. We launched v1 of the Safe Browsing API <a href="https://security.googleblog.com/2007/06/phishers-and-malware-authors-beware.html">in 2007</a> to give developers a simple mechanism to access Google&#8217;s lists of suspected unsafe sites.<br /><br />The web has evolved since then and users are now increasingly using the web from their mobile devices. These devices have constraints less common to traditional desktop computing environments: mobile devices have very limited power and network bandwidth, and often poor quality of service. Additionally, cellular data costs our users money, so we have a responsibility to use it judiciously.<br /><br />With protocol version 4, we&#8217;ve optimized for this new environment with a clear focus on maximizing protection per bit, which benefits all Safe Browsing users, mobile and desktop alike. Version 4 clients can now define constraints such as geographic location, platform type, and data caps to use bandwidth and device resources as efficiently as possible. 
This allows us to function well within the much stricter mobile constraints without sacrificing protection.<br /><br />We&#8217;ve been using the new protocol since December via the <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">Safe Browsing client on Android</a>, which is part of Google Play Services. The first app to use the client is Chrome, starting with version 46: we&#8217;re already protecting hundreds of millions of Android Chrome users by default.<br /><br /><b>We&#8217;ve Done Most Of The Work For You Already</b><br /><b><br /></b>A single device should only have a single, up-to-date instance of Safe Browsing data, so we&#8217;re taking care of that for all Android developers. Please don&#8217;t implement your own Version 4 client on Android: we&#8217;re working on making a simple, device-local API available to prevent any resource waste on device. We&#8217;ll announce the availability of this new device-local API as soon as possible; in the meantime, there&#8217;s no need to develop a Version 4 client on your own. For those who operate in less resource-constrained environments, using the Safe Browsing Version 4 API directly allows you to:<br /><br /><ul><li>Check pages against the Safe Browsing lists based on platform and threat types.</li><li>Warn users before they click links that may lead to infected pages.</li><li>Prevent users from posting links to known infected pages.</li></ul><div><br /></div><div>To make Safe Browsing integration as simple as possible, we&#8217;re also releasing a <a href="https://github.com/google/safebrowsing">reference client implementation</a> of the new API today, written in Go. It also provides a Safe Browsing HTTP proxy server, which supports JSON.</div><div><br /></div><div><br /></div><div>It&#8217;s easy to start protecting users with the new Version 4 of the Safe Browsing API. 
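For a sense of what calling v4 directly looks like, here is a minimal sketch of the JSON body for a URL lookup. The request shape follows the v4 <code>threatMatches:find</code> method; the client id, client version, and URL below are placeholder values.

```python
import json

def lookup_request(urls):
    # Body for POST https://safebrowsing.googleapis.com/v4/threatMatches:find
    # (your API key goes in the ?key= query parameter). The clientId,
    # clientVersion, and URLs here are placeholders.
    return {
        "client": {"clientId": "examplecompany", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = json.dumps(lookup_request(["http://example.com/"]))
```

A non-empty <code>matches</code> array in the response means the URL appears on one of the requested threat lists.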
<a href="https://developers.google.com/safe-browsing/v4/get-started#general-set-up">Sign up for a key</a> and <a href="https://groups.google.com/forum/#!forum/google-safe-browsing-api">let us know what you think</a>!</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Emily Schechter and Alex Wozniak, Safe Browsing Team&nbsp;</span><br /><span class="byline-author"><br /></span>We're excited to announce the launch of the new <a href="https://developers.google.com/safe-browsing/">Safe Browsing API version 4</a>. Version 4 replaces the existing Safe Browsing API version 3. With the launch of v4, we’re now starting the deprecation process for v2-3: please transition from these older Safe Browsing protocol versions to protocol version 4 as soon as possible.<br /><br /><a href="https://www.google.com/transparencyreport/safebrowsing/?hl=en">Safe Browsing</a> protects well over two billion internet-connected devices from threats like malware and phishing, and has done so for over a decade. We launched v1 of the Safe Browsing API <a href="https://security.googleblog.com/2007/06/phishers-and-malware-authors-beware.html">in 2007</a> to give developers a simple mechanism to access Google’s lists of suspected unsafe sites.<br /><br />The web has evolved since then, and users increasingly reach it from mobile devices. These devices have constraints less common in traditional desktop computing environments: mobile devices have very limited power and network bandwidth, and often poor quality of service. Additionally, cellular data costs our users money, so we have a responsibility to use it judiciously.<br /><br />With protocol version 4, we’ve optimized for this new environment with a clear focus on maximizing protection per bit, which benefits all Safe Browsing users, mobile and desktop alike. Version 4 clients can now define constraints such as geographic location, platform type, and data caps to use bandwidth and device resources as efficiently as possible. 
This allows us to function well within the much stricter mobile constraints without sacrificing protection.<br /><br />We’ve been using the new protocol since December via the <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">Safe Browsing client on Android</a>, which is part of Google Play Services. The first app to use the client is Chrome, starting with version 46: we’re already protecting hundreds of millions of Android Chrome users by default.<br /><br /><b>We’ve Done Most Of The Work For You Already</b><br /><b><br /></b>A single device should only have a single, up-to-date instance of Safe Browsing data, so we’re taking care of that for all Android developers. Please don’t implement your own Version 4 client on Android: we’re working on making a simple, device-local API available to prevent any resource waste on device. We’ll announce the availability of this new device-local API as soon as possible; in the meantime, there’s no need to develop a Version 4 client on your own. For those who operate in less resource-constrained environments, using the Safe Browsing Version 4 API directly allows you to:<br /><br /><ul><li>Check pages against the Safe Browsing lists based on platform and threat types.</li><li>Warn users before they click links that may lead to infected pages.</li><li>Prevent users from posting links to known infected pages.</li></ul><div><br /></div><div>To make Safe Browsing integration as simple as possible, we’re also releasing a <a href="https://github.com/google/safebrowsing">reference client implementation</a> of the new API today, written in Go. It also provides a Safe Browsing HTTP proxy server, which supports JSON.</div><div><br /></div><div><br /></div><div>It’s easy to start protecting users with the new Version 4 of the Safe Browsing API. 
<a href="https://developers.google.com/safe-browsing/v4/get-started#general-set-up">Sign up for a key</a> and <a href="https://groups.google.com/forum/#!forum/google-safe-browsing-api">let us know what you think</a>!</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/evolving-the-safe-browsing-api/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Hardening the media stack</title>
		<link>https://googledata.org/google-online-security/hardening-the-media-stack-2/</link>
		<comments>https://googledata.org/google-online-security/hardening-the-media-stack-2/#comments</comments>
		<pubDate>Thu, 05 May 2016 20:12:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=87387ffb5c725456561afda2d8c71c86</guid>
		<description><![CDATA[<span>Posted by Dan Austin and Jeff Vander Stoep, Android Security team</span>  <br /><br /><i>[Cross-posted from the <a href="https://android-developers.blogspot.com/2016/05/hardening-media-stack.html">Android Developers Blog</a>]</i><br /><br />To help make Android more secure, we encourage and <a href="https://www.google.com/about/appsecurity/android-rewards/">reward</a> researchers who discover vulnerabilities. In 2015, a series of bugs in mediaserver&#8217;s libstagefright were disclosed to Google. We released updates for these issues with our August and September 2015 <a href="http://source.android.com/security/bulletin/index.html">security bulletins</a>.<br />In addition to addressing issues on a monthly basis, we&#8217;ve also been working on new security features designed to enhance the existing security model and provide additional defense in depth. These defense measures attempt to achieve two goals:<br /><ul><li> <strong>Prevention</strong>: Stop bugs from becoming vulnerabilities   </li><li> <strong>Containment</strong>: Protect the system by de-privileging and isolating components that handle untrusted content </li></ul><br /><h3><span>Prevention</span></h3>Most of the vulnerabilities found in libstagefright were heap overflows resulting from unsigned <a href="https://www.owasp.org/index.php/Integer_overflow">integer overflows</a>. A number of integer overflows in libstagefright allowed an attacker to allocate a buffer with less space than necessary for the incoming data, resulting in a buffer overflow in the heap.<br />The result of an unsigned integer overflow is well defined, but the ensuing behavior could be unexpected or unsafe. In contrast, signed integer overflow is undefined behavior in C/C++, which means the result of an overflow is not guaranteed, and the compiler author may choose the resulting behavior&#8212;typically what is fastest or simplest. 
We have added compiler changes that are designed to provide safer defaults for both signed and unsigned integer overflows.<br />The UndefinedBehaviorSanitizer (<a href="http://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html">UBSan</a>) is part of the LLVM/Clang compiler toolchain and detects undefined or unintended behavior. UBSan can check for multiple types of undefined and unsafe behavior, including signed and unsigned integer overflow. These checks add code to the resulting executable, testing for integer overflow conditions at runtime. For example, figure 1 shows source code for the <code>parseChunk</code> function in the <code>MPEG4Extractor</code> component of libstagefright after the original researcher-supplied patch was applied. The modification, which is contained in the black box below, appears to prevent integer overflows from occurring. Unfortunately, while <code>SIZE_MAX</code> and <code>size</code> are 32-bit values, <code>chunk_size</code> is a 64-bit value, resulting in an incomplete check and the potential for integer overflow. In the line within the red box, the addition of <code>size</code> and <code>chunk_size</code> may result in an integer overflow and the creation of a buffer smaller than <code>size</code> elements. The subsequent <code>memcpy</code> could then lead to exploitable memory corruption, as <code>size</code> + <code>chunk_size</code> could be less than <code>size</code>, which is highlighted in the blue box. 
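The flawed pattern can be reduced to a small standalone sketch (hypothetical code, not the actual parseChunk source): with a 32-bit size and a 64-bit chunk size, the overflow must be ruled out before the addition, not after.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Hypothetical sketch, not the actual libstagefright code.
 * size is 32-bit and chunk_size is 64-bit, mirroring parseChunk.
 * An incomplete variant would compare only chunk_size against SIZE_MAX,
 * which still lets size + chunk_size wrap around in a 32-bit size_t.
 */
void *alloc_buffer(uint32_t size, uint64_t chunk_size) {
    /* Complete check: reject before the addition can overflow. */
    if (chunk_size > (uint64_t)SIZE_MAX - size)
        return NULL; /* would overflow -- refuse to allocate */
    return malloc((size_t)(size + chunk_size));
}
```

Compiled with clang's -fsanitize=unsigned-integer-overflow, an unchecked version of this addition would instead trap at runtime, which is the behavior the sanitized build in figure 2 exhibits.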
The mechanics of a potential exploit vector for this vulnerability are explained in more detail by <a href="http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html">Project Zero</a>.<br /><div><a href="https://1.bp.blogspot.com/-q1wloCizVXs/VyuNQkSyhWI/AAAAAAAAC5U/_ENUgwJd-OUZywTaJWBZVdrW1Cj_sLz6ACLcB/s1600/image01.png"><img border="0" src="https://1.bp.blogspot.com/-q1wloCizVXs/VyuNQkSyhWI/AAAAAAAAC5U/_ENUgwJd-OUZywTaJWBZVdrW1Cj_sLz6ACLcB/s640/image01.png"></a></div><strong>Figure 1.</strong> Source code demonstrating a subtle unsigned integer overflow.<br />Figure 2 compares assembly generated from the code segment above with a second version compiled with integer sanitization enabled. The add operation that results in the integer overflow is contained in the red box. <br />In the unsanitized version, <code>size</code> (<code>r6</code>) and <code>chunk_size</code> (<code>r7</code>) are added together, potentially resulting in <code>r0</code> overflowing and being less than <code>size</code>. Then, <code>buffer</code> is allocated with the <code>size</code> specified in <code>r0</code>, and <code>size</code> bytes are copied to it. If <code>r0</code> is less than <code>r6</code>, this results in memory corruption.<br />In the sanitized version, <code>size</code> (<code>r7</code>) and <code>chunk_size</code> (<code>r5</code>) are added together with the result stored in <code>r0</code>. Later, <code>r0</code> is checked against <code>r7</code>; if <code>r0</code> is less than <code>r7</code>, as indicated by the <code>CC</code> condition code, <code>r3</code> is set to 1. If <code>r3</code> is 1 and the carry bit was set, then an integer overflow occurred, and an abort is triggered, preventing memory corruption.<br />Note that the incomplete check provided in the patch was not included in figure 2. The overflow occurs in the <code>buffer</code> allocation&#8217;s <code>add</code> operation. 
This addition triggers an integer sanitization check, which turns this exploitable flaw into a harmless abort. <br /><div><a href="https://3.bp.blogspot.com/-4oXE0qKB5VE/VyuNZhQXdiI/AAAAAAAAC5Y/LOTqWfulNMoE7hexzrFEBffIgk7f8mMRwCLcB/s1600/image00.png"><img border="0" src="https://3.bp.blogspot.com/-4oXE0qKB5VE/VyuNZhQXdiI/AAAAAAAAC5Y/LOTqWfulNMoE7hexzrFEBffIgk7f8mMRwCLcB/s640/image00.png"></a></div><strong>Figure 2.</strong> Comparing unsanitized and sanitized compiler output.<br />While the integer sanitizers were originally intended as code hygiene tools, they effectively prevent the majority of reported libstagefright vulnerabilities. Turning on the integer overflow checks was just the first step. Preventing the runtime abort by finding and fixing integer overflows, most of which are not exploitable, represented a large effort by Android's media team. Most of the discovered overflows were fixed and those that remain (mostly for performance reasons) were verified and marked as safe to prevent the runtime abort.<br />In Android N, signed and unsigned integer overflow detection is enabled on the entire media stack, including libstagefright. This makes it harder to exploit integer overflows, and also helps to prevent future additions to Android from introducing new integer overflow bugs. <br /><h3><span>Containment</span></h3>For Android M and earlier, the mediaserver process in Android was responsible for most media-related tasks. This meant that it required access to all permissions needed by those responsibilities and, although mediaserver ran in its own sandbox, it still had access to a lot of resources and capabilities. 
This is why the libstagefright bugs from 2015 were significant&#8212;mediaserver could access several important resources on an Android device including camera, microphone, graphics, phone, Bluetooth, and internet.<br />A root cause analysis showed that the libstagefright bugs primarily occurred in code responsible for parsing file formats and media codecs. This is not surprising&#8212;parsing complex file formats and codecs while trying to optimize for speed is hard, and the large number of edge cases makes such code susceptible to both accidental and malicious malformed inputs. <br />However, media parsers do not require access to most of the privileged permissions held by mediaserver. Because of this, the media team re-architected mediaserver in Android N to better adhere to the principle of least privilege. Figure 3 illustrates how the monolithic mediaserver and its permissions have been divided, using the following heuristics:<br /><ul><li> parsing code moved into unprivileged sandboxes that have few or no permissions   </li><li> components that require sensitive permissions moved into separate sandboxes that only grant access to the specific resources the component needs. For example, only the cameraserver may access the camera, only the audioserver may access Bluetooth, and only the drmserver may access DRM resources. </li></ul><div><a href="https://1.bp.blogspot.com/-D59NoibUBw4/VyuNgVFoSdI/AAAAAAAAC5c/c5qwM6mdSzAw_So4y0PFmXV1-AYNIP3ggCLcB/s1600/image02.png"><img border="0" src="https://1.bp.blogspot.com/-D59NoibUBw4/VyuNgVFoSdI/AAAAAAAAC5c/c5qwM6mdSzAw_So4y0PFmXV1-AYNIP3ggCLcB/s640/image02.png"></a></div><strong>Figure 3</strong>. How mediaserver and its permissions have been divided in Android N.<br />Comparing the potential impact of the libstagefright bugs on Android N and older versions demonstrates the value of this strategy. 
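In policy terms, heuristics like these are expressed as per-domain SELinux rules; a hypothetical fragment (not the actual Android sepolicy files) capturing the camera rule above might read:

```text
# Hypothetical sketch -- not the actual Android sepolicy.
# Only the cameraserver domain may open the camera device node:
allow cameraserver camera_device:chr_file { read write open ioctl };
# The codec sandbox gets neither the camera nor network sockets:
neverallow mediacodec camera_device:chr_file *;
neverallow mediacodec domain:{ tcp_socket udp_socket } *;
```

The neverallow lines are compile-time assertions: a policy change that granted the codec sandbox those permissions would fail to build. 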
Gaining code execution in libstagefright previously granted access to all the permissions and resources available to the monolithic mediaserver process including graphics driver, camera driver, or sockets, which present a rich kernel attack surface.<br />In Android N, libstagefright runs within the mediacodec sandbox with access to very few permissions. Access to camera, microphone, photos, phone, Bluetooth, and internet as well as dynamic code loading are disallowed by <a href="https://source.android.com/security/selinux/">SELinux</a>. Interaction with the kernel is further restricted by <a href="https://en.wikipedia.org/wiki/Seccomp">seccomp</a>. This means that compromising libstagefright would grant the attacker access to significantly fewer permissions and also mitigates privilege escalation by reducing the attack surface exposed by the kernel.<br /><h3><span>Conclusion</span></h3>The media hardening project is an ongoing effort focused on moving functionality into less privileged sandboxes and further reducing the permissions granted to those sandboxes. While the techniques discussed here were applied to the Android media framework, they are suitable across the Android codebase. These hardening techniques&#8212;and others&#8212;are being actively applied to additional components within Android. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at <a href="mailto:security@android.com">security@android.com</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Dan Austin and Jeff Vander Stoep, Android Security team</span>  <br /><br /><i>[Cross-posted from the <a href="https://android-developers.blogspot.com/2016/05/hardening-media-stack.html">Android Developers Blog</a>]</i><br /><br />To help make Android more secure, we encourage and <a href="https://www.google.com/about/appsecurity/android-rewards/">reward</a> researchers who discover vulnerabilities. In 2015, a series of bugs in mediaserver’s libstagefright were disclosed to Google. We released updates for these issues with our August and September 2015 <a href="http://source.android.com/security/bulletin/index.html">security bulletins</a>.<br />In addition to addressing issues on a monthly basis, we’ve also been working on new security features designed to enhance the existing security model and provide additional defense in depth. These defense measures attempt to achieve two goals:<br /><ul><li> <strong>Prevention</strong>: Stop bugs from becoming vulnerabilities   </li><li> <strong>Containment</strong>: Protect the system by de-privileging and isolating components that handle untrusted content </li></ul><br /><h3 id="prevention"><span style="font-size: large;">Prevention</span></h3>Most of the vulnerabilities found in libstagefright were heap overflows resulting from unsigned <a href="https://www.owasp.org/index.php/Integer_overflow">integer overflows</a>. A number of integer overflows in libstagefright allowed an attacker to allocate a buffer with less space than necessary for the incoming data, resulting in a buffer overflow in the heap.<br />The result of an unsigned integer overflow is well defined, but the ensuing behavior could be unexpected or unsafe. In contrast, signed integer overflow is undefined behavior in C/C++, which means the result of an overflow is not guaranteed, and the compiler author may choose the resulting behavior—typically what is fastest or simplest. 
We have added compiler changes that are designed to provide safer defaults for both signed and unsigned integer overflows.<br />The UndefinedBehaviorSanitizer (<a href="http://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html">UBSan</a>) is part of the LLVM/Clang compiler toolchain and detects undefined or unintended behavior. UBSan can check for multiple types of undefined and unsafe behavior, including signed and unsigned integer overflow. These checks add code to the resulting executable, testing for integer overflow conditions at runtime. For example, figure 1 shows source code for the <code>parseChunk</code> function in the <code>MPEG4Extractor</code> component of libstagefright after the original researcher-supplied patch was applied. The modification, which is contained in the black box below, appears to prevent integer overflows from occurring. Unfortunately, while <code>SIZE_MAX</code> and <code>size</code> are 32-bit values, <code>chunk_size</code> is a 64-bit value, resulting in an incomplete check and the potential for integer overflow. In the line within the red box, the addition of <code>size</code> and <code>chunk_size</code> may result in an integer overflow and the creation of a buffer smaller than <code>size</code> elements. The subsequent <code>memcpy</code> could then lead to exploitable memory corruption, as <code>size</code> + <code>chunk_size</code> could be less than <code>size</code>, which is highlighted in the blue box. 
The mechanics of a potential exploit vector for this vulnerability are explained in more detail by <a href="http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html">Project Zero</a>.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-q1wloCizVXs/VyuNQkSyhWI/AAAAAAAAC5U/_ENUgwJd-OUZywTaJWBZVdrW1Cj_sLz6ACLcB/s1600/image01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-q1wloCizVXs/VyuNQkSyhWI/AAAAAAAAC5U/_ENUgwJd-OUZywTaJWBZVdrW1Cj_sLz6ACLcB/s640/image01.png" /></a></div><strong>Figure 1.</strong> Source code demonstrating a subtle unsigned integer overflow.<br />Figure 2 compares assembly generated from the code segment above with a second version compiled with integer sanitization enabled. The add operation that results in the integer overflow is contained in the red box. <br />In the unsanitized version, <code>size</code> (<code>r6</code>) and <code>chunk_size</code> (<code>r7</code>) are added together, potentially resulting in <code>r0</code> overflowing and being less than <code>size</code>. Then, <code>buffer</code> is allocated with the <code>size</code> specified in <code>r0</code>, and <code>size</code> bytes are copied to it. If <code>r0</code> is less than <code>r6</code>, this results in memory corruption.<br />In the sanitized version, <code>size</code> (<code>r7</code>) and <code>chunk_size</code> (<code>r5</code>) are added together with the result stored in <code>r0</code>. Later, <code>r0</code> is checked against <code>r7</code>; if <code>r0</code> is less than <code>r7</code>, as indicated by the <code>CC</code> condition code, <code>r3</code> is set to 1. If <code>r3</code> is 1 and the carry bit was set, then an integer overflow occurred, and an abort is triggered, preventing memory corruption.<br />Note that the incomplete check provided in the patch was not included in figure 2. 
The overflow occurs in the <code>buffer</code> allocation’s <code>add</code> operation. This addition triggers an integer sanitization check, which turns this exploitable flaw into a harmless abort. <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-4oXE0qKB5VE/VyuNZhQXdiI/AAAAAAAAC5Y/LOTqWfulNMoE7hexzrFEBffIgk7f8mMRwCLcB/s1600/image00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-4oXE0qKB5VE/VyuNZhQXdiI/AAAAAAAAC5Y/LOTqWfulNMoE7hexzrFEBffIgk7f8mMRwCLcB/s640/image00.png" /></a></div><strong>Figure 2.</strong> Comparing unsanitized and sanitized compiler output.<br />While the integer sanitizers were originally intended as code hygiene tools, they effectively prevent the majority of reported libstagefright vulnerabilities. Turning on the integer overflow checks was just the first step. Preventing the runtime abort by finding and fixing integer overflows, most of which are not exploitable, represented a large effort by Android's media team. Most of the discovered overflows were fixed and those that remain (mostly for performance reasons) were verified and marked as safe to prevent the runtime abort.<br />In Android N, signed and unsigned integer overflow detection is enabled on the entire media stack, including libstagefright. This makes it harder to exploit integer overflows, and also helps to prevent future additions to Android from introducing new integer overflow bugs. <br /><h3 id="containment"><span style="font-size: large;">Containment</span></h3>For Android M and earlier, the mediaserver process in Android was responsible for most media-related tasks. This meant that it required access to all permissions needed by those responsibilities and, although mediaserver ran in its own sandbox, it still had access to a lot of resources and capabilities. 
This is why the libstagefright bugs from 2015 were significant—mediaserver could access several important resources on an Android device including camera, microphone, graphics, phone, Bluetooth, and internet.<br />A root cause analysis showed that the libstagefright bugs primarily occurred in code responsible for parsing file formats and media codecs. This is not surprising—parsing complex file formats and codecs while trying to optimize for speed is hard, and the large number of edge cases makes such code susceptible to both accidental and malicious malformed inputs. <br />However, media parsers do not require access to most of the privileged permissions held by mediaserver. Because of this, the media team re-architected mediaserver in Android N to better adhere to the principle of least privilege. Figure 3 illustrates how the monolithic mediaserver and its permissions have been divided, using the following heuristics:<br /><ul><li> parsing code moved into unprivileged sandboxes that have few or no permissions   </li><li> components that require sensitive permissions moved into separate sandboxes that only grant access to the specific resources the component needs. For example, only the cameraserver may access the camera, only the audioserver may access Bluetooth, and only the drmserver may access DRM resources. </li></ul><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-D59NoibUBw4/VyuNgVFoSdI/AAAAAAAAC5c/c5qwM6mdSzAw_So4y0PFmXV1-AYNIP3ggCLcB/s1600/image02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-D59NoibUBw4/VyuNgVFoSdI/AAAAAAAAC5c/c5qwM6mdSzAw_So4y0PFmXV1-AYNIP3ggCLcB/s640/image02.png" /></a></div><strong>Figure 3</strong>. How mediaserver and its permissions have been divided in Android N.<br />Comparing the potential impact of the libstagefright bugs on Android N and older versions demonstrates the value of this strategy. 
Gaining code execution in libstagefright previously granted access to all the permissions and resources available to the monolithic mediaserver process including graphics driver, camera driver, or sockets, which present a rich kernel attack surface.<br />In Android N, libstagefright runs within the mediacodec sandbox with access to very few permissions. Access to camera, microphone, photos, phone, Bluetooth, and internet as well as dynamic code loading are disallowed by <a href="https://source.android.com/security/selinux/">SELinux</a>. Interaction with the kernel is further restricted by <a href="https://en.wikipedia.org/wiki/Seccomp">seccomp</a>. This means that compromising libstagefright would grant the attacker access to significantly fewer permissions and also mitigates privilege escalation by reducing the attack surface exposed by the kernel.<br /><h3 id="conclusion"><span style="font-size: large;">Conclusion</span></h3>The media hardening project is an ongoing effort focused on moving functionality into less privileged sandboxes and further reducing the permissions granted to those sandboxes. While the techniques discussed here were applied to the Android media framework, they are suitable across the Android codebase. These hardening techniques—and others—are being actively applied to additional components within Android. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at <a href="mailto:security@android.com">security@android.com</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/hardening-the-media-stack-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Bringing HTTPS to all blogspot domain blogs</title>
		<link>https://googledata.org/google-online-security/bringing-https-to-all-blogspot-domain-blogs/</link>
		<comments>https://googledata.org/google-online-security/bringing-https-to-all-blogspot-domain-blogs/#comments</comments>
		<pubDate>Tue, 03 May 2016 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e8f9e69b7b52cb2d46a894e46efe8635</guid>
		<description><![CDATA[<span>Posted by Milinda Perera, Software Engineer, Security</span><br /><br />HTTPS is fundamental to internet security; it protects the integrity and confidentiality of data sent between websites and visitors' browsers. Last September, we <a href="https://security.googleblog.com/2015/09/https-support-coming-to-blogspot.html">began</a> rolling out HTTPS support for blogspot domain blogs so you could try it out. Today, we&#8217;re launching another milestone: an HTTPS version for every blogspot domain blog. With this change, visitors can access any blogspot domain blog over an encrypted channel.<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="https://4.bp.blogspot.com/-vkk_dtL8mGI/VyPWO-cgszI/AAAAAAAACbI/3do96MUOTE8DAdDKPhPy-CWNm7sb8Q9UQCLcB/s1600/fig1.png"><img border="0" height="148" src="https://4.bp.blogspot.com/-vkk_dtL8mGI/VyPWO-cgszI/AAAAAAAACbI/3do96MUOTE8DAdDKPhPy-CWNm7sb8Q9UQCLcB/s640/fig1.png" width="640"></a></td></tr><tr><td>The HTTPS indicator in the Chrome browser</td></tr></tbody></table>As part of this launch, we're removing the HTTPS Availability setting. Even if you did not previously turn on this setting, your blogs will have an HTTPS version enabled.<br /><br />We&#8217;re also adding a <a href="https://support.google.com/blogger/?p=blog_security_https">new setting called HTTPS Redirect</a> that lets you opt in to redirecting HTTP requests to HTTPS. While all blogspot blogs will have an HTTPS version enabled, if you turn on this new setting, all visitors will be redirected to the HTTPS version of your blog at <b>https</b>://&#60;your-blog&#62;.blogspot.com even if they go to <b>http</b>://&#60;your-blog&#62;.blogspot.com. 
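On the wire, the HTTPS Redirect setting behaves like an ordinary redirect. The exchange looks roughly like this (an illustrative sketch with a placeholder blog name; the exact status code may differ):

```http
GET / HTTP/1.1
Host: yourblog.blogspot.com

HTTP/1.1 301 Moved Permanently
Location: https://yourblog.blogspot.com/
```
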
If you choose to turn off this setting, visitors will have two options for viewing your blog: the unencrypted version at <b>http</b>://&#60;your-blog&#62;.blogspot.com or the encrypted version at <b>https</b>://&#60;your-blog&#62;.blogspot.com.<br /><table align="center" cellpadding="0" cellspacing="0"><tbody><tr><td><a href="https://3.bp.blogspot.com/-uj6bGEqKWnA/VyPWeOLsadI/AAAAAAAACbM/108cizn_DsIfw9My3E3fRm8tRGOOjUyPACLcB/s1600/fig2.png"><img border="0" height="545" src="https://3.bp.blogspot.com/-uj6bGEqKWnA/VyPWeOLsadI/AAAAAAAACbM/108cizn_DsIfw9My3E3fRm8tRGOOjUyPACLcB/s640/fig2.png" width="640"></a></td></tr><tr><td>The new HTTPS Redirect setting in the Blogger dashboard</td></tr></tbody></table>Please be aware that <a href="https://developers.google.com/web/fundamentals/security/prevent-mixed-content/">mixed content</a> may cause some of your blog's functionality not to work in the HTTPS version. Mixed content is often caused by incompatible templates, gadgets, or post content. While we're proactively fixing most of these errors, some of them can only be fixed by you, the blog authors. To help <a href="https://support.google.com/blogger/?p=mixed_content">spot and fix</a> these errors, we recently <a href="https://blogger.googleblog.com/2016/04/an-update-to-blogger-post-editor-to.html">released</a> a mixed content warning tool that alerts you to possible mixed content issues in your posts, and gives you the option to fix them automatically before saving.<br /><br />Existing links and bookmarks to your blogs are not affected by this launch, and will continue to work. Please note that blogs on custom domains will not yet have HTTPS support.<br /><br />This update expands Google's <a href="https://youtu.be/cBhZ6S0PFCY">HTTPS Everywhere</a> mission to all blogspot domain blogs. We appreciate your <a href="https://productforums.google.com/forum/#!forum/blogger">feedback</a> and will use it to make future improvements.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Milinda Perera, Software Engineer, Security</span><br /><br />HTTPS is fundamental to internet security; it protects the integrity and confidentiality of data sent between websites and visitors' browsers. Last September, we <a href="https://security.googleblog.com/2015/09/https-support-coming-to-blogspot.html">began</a> rolling out HTTPS support for blogspot domain blogs so you could try it out. Today, we’re launching another milestone: an HTTPS version for every blogspot domain blog. With this change, visitors can access any blogspot domain blog over an encrypted channel.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-vkk_dtL8mGI/VyPWO-cgszI/AAAAAAAACbI/3do96MUOTE8DAdDKPhPy-CWNm7sb8Q9UQCLcB/s1600/fig1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="148" src="https://4.bp.blogspot.com/-vkk_dtL8mGI/VyPWO-cgszI/AAAAAAAACbI/3do96MUOTE8DAdDKPhPy-CWNm7sb8Q9UQCLcB/s640/fig1.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The HTTPS indicator in the Chrome browser</td></tr></tbody></table>As part of this launch, we're removing the HTTPS Availability setting. Even if you did not previously turn on this setting, your blogs will have an HTTPS version enabled.<br /><br />We’re also adding a <a href="https://support.google.com/blogger/?p=blog_security_https">new setting called HTTPS Redirect</a> that lets you opt in to redirecting HTTP requests to HTTPS. While all blogspot blogs will have an HTTPS version enabled, if you turn on this new setting, all visitors will be redirected to the HTTPS version of your blog at <b>https</b>://&lt;your-blog&gt;.blogspot.com even if they go to <b>http</b>://&lt;your-blog&gt;.blogspot.com. 
If you choose to turn off this setting, visitors will have two options for viewing your blog: the unencrypted version at <b>http</b>://&lt;your-blog&gt;.blogspot.com or the encrypted version at <b>https</b>://&lt;your-blog&gt;.blogspot.com.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-uj6bGEqKWnA/VyPWeOLsadI/AAAAAAAACbM/108cizn_DsIfw9My3E3fRm8tRGOOjUyPACLcB/s1600/fig2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="545" src="https://3.bp.blogspot.com/-uj6bGEqKWnA/VyPWeOLsadI/AAAAAAAACbM/108cizn_DsIfw9My3E3fRm8tRGOOjUyPACLcB/s640/fig2.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The new HTTPS Redirect setting in the Blogger dashboard</td></tr></tbody></table>Please be aware that <a href="https://developers.google.com/web/fundamentals/security/prevent-mixed-content/">mixed content</a> may cause some of your blog's functionality not to work in the HTTPS version. Mixed content is often caused by incompatible templates, gadgets, or post content. While we're proactively fixing most of these errors, some of them can only be fixed by you, the blog authors. To help <a href="https://support.google.com/blogger/?p=mixed_content">spot and fix</a> these errors, we recently <a href="https://blogger.googleblog.com/2016/04/an-update-to-blogger-post-editor-to.html">released</a> a mixed content warning tool that alerts you to possible mixed content issues in your posts, and gives you the option to fix them automatically before saving.<br /><br />Existing links and bookmarks to your blogs are not affected by this launch, and will continue to work. 
Please note that blogs on custom domains will not yet have HTTPS support.<br /><br />This update expands Google's <a href="https://youtu.be/cBhZ6S0PFCY">HTTPS Everywhere</a> mission to all blogspot domain blogs. We appreciate your <a href="https://productforums.google.com/forum/#!forum/blogger">feedback</a> and will use it to make future improvements.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/bringing-https-to-all-blogspot-domain-blogs/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Protecting against unintentional regressions to cleartext traffic in your Android apps</title>
		<link>https://googledata.org/google-online-security/protecting-against-unintentional-regressions-to-cleartext-traffic-in-your-android-apps-2/</link>
		<comments>https://googledata.org/google-online-security/protecting-against-unintentional-regressions-to-cleartext-traffic-in-your-android-apps-2/#comments</comments>
		<pubDate>Mon, 25 Apr 2016 22:11:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=95680d41a40e1c518a5c8601c1ce5959</guid>
		<description><![CDATA[<span>Posted by Alex Klyubin, Android Security team</span><br /><br /><i>[Cross-posted from the <a href="https://android-developers.blogspot.com/2016/04/protecting-against-unintentional.html">Android Developers Blog</a>]</i><br /><br />When your app communicates with servers using cleartext network traffic, such as HTTP, the traffic risks being eavesdropped upon and tampered with by third parties. This may leak information about your users and open your app up to injection of unauthorized content or exploits. Ideally, your app should use secure traffic only, such as by using <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/">HTTPS instead of HTTP</a>. Such traffic is protected against eavesdropping and tampering.<br /><br />Many Android apps already use secure traffic only. However, some of them occasionally regress to cleartext traffic by accident. For example, an inadvertent change in one of the server components could make the server provide the app with HTTP URLs instead of HTTPS URLs. The app would then proceed to communicate in cleartext, without any user-visible symptoms. This situation may go unnoticed by the app&#8217;s developer and users.<br /><br />Even if you believe your app is only using secure traffic, make sure to use the new mechanisms provided by Android Marshmallow (Android 6.0) to catch and prevent accidental regressions.<br /><b>New Protection Mechanisms</b><br /><b><br /></b>For apps that only use secure traffic, Android 6.0 Marshmallow (API Level 23) introduced two mechanisms to address regressions to cleartext traffic: (1) in production / installed base, block cleartext traffic, and (2) during development / QA, log or crash whenever non-TLS/SSL traffic is encountered. 
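Mechanism (1) boils down to a single manifest attribute. A minimal sketch of the declaration, assuming a hypothetical package name:

```xml
<!-- Minimal AndroidManifest.xml sketch; the package name is illustrative. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <!-- Asks platform network stacks (Android 6.0+) to block cleartext
         traffic such as plain HTTP; ignored on older platforms. -->
    <application android:usesCleartextTraffic="false">
        <!-- activities, services, providers, receivers -->
    </application>
</manifest>
```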
The following sections provide more information about these mechanisms.<br /><b><br /></b><b>Block cleartext traffic in production</b><br /><b><br /></b>To protect the installed base of your app against regressions to cleartext traffic, declare the <span>android:usesCleartextTraffic="false"</span> attribute on the <span>application</span> element in your app&#8217;s AndroidManifest.xml. This declares that the app is not supposed to use cleartext network traffic and makes the platform network stacks of Android Marshmallow block cleartext traffic in the app. For example, if your app accidentally attempts to sign in the user via a cleartext HTTP request, the request will be blocked and the user&#8217;s identity and password will not leak to the network.<br /><br />You don&#8217;t have to set minSdkVersion or targetSdkVersion of your app to 23 (Android Marshmallow) to use <span>android:usesCleartextTraffic</span>. On older platforms, this attribute is simply ignored and thus has no effect.<br /><br />Please note that WebView does not yet honor this feature.<br /><br />Under certain circumstances, cleartext traffic may still leave or enter the app. For example, the Socket API ignores the cleartext policy because it does not know whether the data it transmits or receives can be classified as cleartext. Android platform HTTP stacks, on the other hand, honor the policy because they know whether traffic is cleartext.<br /><br />Google AdMob is also built to honor this policy. When your app declares that it does not use cleartext traffic, only HTTPS-only ads should be served to the app.<br /><br />Third-party network, ad, and analytics libraries are encouraged to add support for this policy. 
They can query the cleartext traffic policy via the <a href="https://developer.android.com/reference/android/security/NetworkSecurityPolicy.html?utm_campaign=android_discussion_security_042516&#38;utm_source=anddev&#38;utm_medium=blog">NetworkSecurityPolicy</a> class.<br /><br /><b>Detect cleartext traffic during development</b><br /><b><br /></b>To spot cleartext traffic during development or QA, the <a href="https://developer.android.com/reference/android/os/StrictMode.html?utm_campaign=android_discussion_security_042516&#38;utm_source=anddev&#38;utm_medium=blog">StrictMode API</a> lets you modify your app to detect non-TLS/SSL traffic and then either log violations to the system log or crash the app (see <a href="https://developer.android.com/reference/android/os/StrictMode.VmPolicy.Builder.html?utm_campaign=android_discussion_security_042516&#38;utm_source=anddev&#38;utm_medium=blog#detectCleartextNetwork()">StrictMode.VmPolicy.Builder.detectCleartextNetwork()</a>). This is a useful tool for identifying which bits of the app are using non-TLS/SSL (and DTLS) traffic. Unlike the <span>android:usesCleartextTraffic</span> attribute, this feature is not meant to be enabled in app builds distributed to users.<br /><br />This feature has some limitations. Firstly, it may flag secure traffic that is not TLS/SSL. More importantly, TLS/SSL traffic sent via an HTTP proxy may also be flagged. This is an issue because as a developer, you have no control over whether a particular user of your app may have configured their Android device to use an HTTP proxy. Finally, the implementation of the feature is not future-proof and thus may reject future TLS/SSL protocol versions. 
Thus, this feature is intended to be used only during the development and QA phase.<br /><br /><b>Declare finer-grained cleartext policy in Network Security Config</b><br /><b><br /></b><a href="https://developer.android.com/preview/index.html?utm_campaign=android_discussion_security_042516&#38;utm_source=anddev&#38;utm_medium=blog">Android N</a> offers finer-grained control over cleartext traffic policy. Unlike the <span>android:usesCleartextTraffic</span> attribute, which applies to all destinations with which an app communicates, Android N&#8217;s <a href="https://developer.android.com/preview/features/security-config.html?utm_campaign=android_discussion_security_042516&#38;utm_source=anddev&#38;utm_medium=blog">Network Security Config</a> lets an app specify cleartext policy for specific destinations. For example, to facilitate a more gradual transition towards a policy that does not allow cleartext traffic, an app can at first block accidental cleartext only for communication with its most important backends and permit cleartext to be used for other destinations.<br /><br /><b>Next Steps</b><br /><b><br /></b>It is a security best practice to only use secure network traffic for communication between your app and its servers. Android Marshmallow enables you to enforce this practice, so give it a try!<br /><br />As always, we appreciate feedback and welcome suggestions for improving Android. Contact us at <a href="mailto:security@android.com">security@android.com</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Alex Klyubin, Android Security team</span><br /><br /><i>[Cross-posted from the <a href="https://android-developers.blogspot.com/2016/04/protecting-against-unintentional.html">Android Developers Blog</a>]</i><br /><br />When your app communicates with servers using cleartext network traffic, such as HTTP, the traffic risks being eavesdropped upon and tampered with by third parties. This may leak information about your users and open your app up to injection of unauthorized content or exploits. Ideally, your app should use secure traffic only, such as by using <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/">HTTPS instead of HTTP</a>. Such traffic is protected against eavesdropping and tampering.<br /><br />Many Android apps already use secure traffic only. However, some of them occasionally regress to cleartext traffic by accident. For example, an inadvertent change in one of the server components could make the server provide the app with HTTP URLs instead of HTTPS URLs. The app would then proceed to communicate in cleartext, without any user-visible symptoms. This situation may go unnoticed by the app’s developer and users.<br /><br />Even if you believe your app is only using secure traffic, make sure to use the new mechanisms provided by Android Marshmallow (Android 6.0) to catch and prevent accidental regressions.<br /><b>New Protection Mechanisms</b><br /><b><br /></b>For apps that only use secure traffic, Android 6.0 Marshmallow (API Level 23) introduced two mechanisms to address regressions to cleartext traffic: (1) in production / installed base, block cleartext traffic, and (2) during development / QA, log or crash whenever non-TLS/SSL traffic is encountered. 
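To make mechanism (1) concrete, the opt-out is a single attribute on the manifest's application element. A minimal sketch, with a hypothetical package name:

```xml
<!-- Minimal AndroidManifest.xml sketch; the package name is illustrative. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <!-- Requests that platform network stacks (Android 6.0+) block
         cleartext traffic such as plain HTTP; ignored on older platforms. -->
    <application android:usesCleartextTraffic="false">
        <!-- activities, services, providers, receivers -->
    </application>
</manifest>
```

Because older platforms ignore the attribute, it is safe to ship in apps whose minSdkVersion is below 23.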
The following sections provide more information about these mechanisms.<br /><b><br /></b><b>Block cleartext traffic in production</b><br /><b><br /></b>To protect the installed base of your app against regressions to cleartext traffic, declare the <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">android:usesCleartextTraffic="false"</span> attribute on the <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">application</span> element in your app’s AndroidManifest.xml. This declares that the app is not supposed to use cleartext network traffic and makes the platform network stacks of Android Marshmallow block cleartext traffic in the app. For example, if your app accidentally attempts to sign in the user via a cleartext HTTP request, the request will be blocked and the user’s identity and password will not leak to the network.<br /><br />You don’t have to set minSdkVersion or targetSdkVersion of your app to 23 (Android Marshmallow) to use <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">android:usesCleartextTraffic</span>. On older platforms, this attribute is simply ignored and thus has no effect.<br /><br />Please note that WebView does not yet honor this feature.<br /><br />Under certain circumstances, cleartext traffic may still leave or enter the app. For example, the Socket API ignores the cleartext policy because it does not know whether the data it transmits or receives can be classified as cleartext. Android platform HTTP stacks, on the other hand, honor the policy because they know whether traffic is cleartext.<br /><br />Google AdMob is also built to honor this policy. When your app declares that it does not use cleartext traffic, only HTTPS-only ads should be served to the app.<br /><br />Third-party network, ad, and analytics libraries are encouraged to add support for this policy. 
They can query the cleartext traffic policy via the <a href="https://developer.android.com/reference/android/security/NetworkSecurityPolicy.html?utm_campaign=android_discussion_security_042516&amp;utm_source=anddev&amp;utm_medium=blog">NetworkSecurityPolicy</a> class.<br /><br /><b>Detect cleartext traffic during development</b><br /><b><br /></b>To spot cleartext traffic during development or QA, the <a href="https://developer.android.com/reference/android/os/StrictMode.html?utm_campaign=android_discussion_security_042516&amp;utm_source=anddev&amp;utm_medium=blog">StrictMode API</a> lets you modify your app to detect non-TLS/SSL traffic and then either log violations to the system log or crash the app (see <a href="https://developer.android.com/reference/android/os/StrictMode.VmPolicy.Builder.html?utm_campaign=android_discussion_security_042516&amp;utm_source=anddev&amp;utm_medium=blog#detectCleartextNetwork()">StrictMode.VmPolicy.Builder.detectCleartextNetwork()</a>). This is a useful tool for identifying which bits of the app are using non-TLS/SSL (and DTLS) traffic. Unlike the <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">android:usesCleartextTraffic</span> attribute, this feature is not meant to be enabled in app builds distributed to users.<br /><br />This feature has some limitations. Firstly, it may flag secure traffic that is not TLS/SSL. More importantly, TLS/SSL traffic sent via an HTTP proxy may also be flagged. This is an issue because as a developer, you have no control over whether a particular user of your app may have configured their Android device to use an HTTP proxy. Finally, the implementation of the feature is not future-proof and thus may reject future TLS/SSL protocol versions. 
Thus, this feature is intended to be used only during the development and QA phase.<br /><br /><b>Declare finer-grained cleartext policy in Network Security Config</b><br /><b><br /></b><a href="https://developer.android.com/preview/index.html?utm_campaign=android_discussion_security_042516&amp;utm_source=anddev&amp;utm_medium=blog">Android N</a> offers finer-grained control over cleartext traffic policy. Unlike the <span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">android:usesCleartextTraffic</span> attribute, which applies to all destinations with which an app communicates, Android N’s <a href="https://developer.android.com/preview/features/security-config.html?utm_campaign=android_discussion_security_042516&amp;utm_source=anddev&amp;utm_medium=blog">Network Security Config</a> lets an app specify cleartext policy for specific destinations. For example, to facilitate a more gradual transition towards a policy that does not allow cleartext traffic, an app can at first block accidental cleartext only for communication with its most important backends and permit cleartext to be used for other destinations.<br /><br /><b>Next Steps</b><br /><b><br /></b>It is a security best practice to only use secure network traffic for communication between your app and its servers. Android Marshmallow enables you to enforce this practice, so give it a try!<br /><br />As always, we appreciate feedback and welcome suggestions for improving Android. Contact us at <a href="mailto:security@android.com">security@android.com</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/protecting-against-unintentional-regressions-to-cleartext-traffic-in-your-android-apps-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Android Security 2015 Annual Report</title>
		<link>https://googledata.org/google-online-security/android-security-2015-annual-report/</link>
		<comments>https://googledata.org/google-online-security/android-security-2015-annual-report/#comments</comments>
		<pubDate>Tue, 19 Apr 2016 13:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=626af01944b7d1e645e3c876ec4076d1</guid>
		<description><![CDATA[<span>Posted by Adrian Ludwig, Lead Engineer, Android Security</span><br />Today, for the <a href="https://security.googleblog.com/2015/04/android-security-state-of-union-2014.html">second year in a row</a>, we&#8217;re releasing our Android Security Annual report. This detailed summary includes: a look at how Google services protect the Android ecosystem, an overview of new security protections introduced in 2015, and our work with Android partners and the security research community at large. The full report is <a href="http://source.android.com/security/reports/Google_Android_Security_2015_Report_Final.pdf">here</a>, and an overview is below.<br /><br />One important goal of releasing this report is to drive an informed conversation about Android security. We hope to accomplish this by providing more information about what we are doing, and what we see happening in the ecosystem. We strongly believe that rigorous, data-driven discussion about security will help guide our efforts to make the Android ecosystem safer.<br /><b>Enhancing Google's services to protect Android users</b><br /><br />In the last year, we&#8217;ve significantly improved our machine learning and event correlation to detect potentially harmful behavior.<br /><br /><ul><li>We protected users from malware and other Potentially Harmful Apps (PHAs), checking over 6 billion installed applications per day.</li><li>We protected users from network-based and on-device threats by scanning 400 million devices per day.</li><li>And we <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">protected hundreds of millions of Chrome users on Android</a> from unsafe websites with Safe Browsing.</li></ul><div>We continued to make it even more difficult to get PHAs into Google Play. Last year&#8217;s enhancements reduced the probability of installing a PHA from Google Play by over 40% compared to 2014. 
Within Google Play, install attempts of most categories of PHAs declined, including:</div><div><ul><li>Data Collection: decreased over 40% to 0.08% of installs</li><li>Spyware: decreased 60% to 0.02% of installs</li><li>Hostile Downloader: decreased 50% to 0.01% of installs</li></ul><div>Overall, PHAs were installed on fewer than 0.15% of devices that only get apps from Google Play. About 0.5% of devices that install apps from both Play and other sources had a PHA installed during 2015, similar to the data in last year&#8217;s report.</div></div><div><br /></div><div>It&#8217;s critical that we also protect users who install apps from sources other than Google Play. Our <a href="http://officialandroid.blogspot.com/2014/04/expanding-googles-security-services-for.html">Verify Apps service</a> protects these users and we improved the effectiveness of the PHA warnings provided by Verify Apps by over 50%. In 2015, we saw an increase in the number of PHA install attempts outside of Google Play, and we disrupted several coordinated efforts to install PHAs onto user devices from outside of Google Play.</div><div><br /></div><div><b><br /></b></div><div><b>New security features in the Android platform</b></div><div><br /></div><div>Last year, we <a href="http://officialandroid.blogspot.com/2015/10/get-ready-for-sweet-taste-of-android-60.html">launched Android 6.0 Marshmallow</a>, introducing a variety of new security protections and controls:</div><div><ul><li>Full disk encryption is now a requirement for all new Marshmallow devices with adequate hardware capabilities and is also extended to allow encryption of data on SD cards.</li><li>Updated app permissions enable you to manage the data you share with specific apps with more granularity and precision.</li><li>New verified boot ensures your phone is healthy from the bootloader all the way up to the operating system.</li><li><a href="https://support.google.com/nexus/answer/4457705">Android security patch level</a> enables 
you to check and make sure your device has the most recent security updates.</li><li>And much more, including support for fingerprint scanners, and SELinux enhancements.</li></ul><div><b>Deeper engagement with the Android ecosystem</b></div></div><div><b><br /></b></div><div>We&#8217;re working to foster Android security research and making investments to strengthen protections across the ecosystem now and in the long run.</div><div><br /></div><div>In June, <a href="https://security.googleblog.com/2015/06/announcing-security-rewards-for-android.html">Android joined Google&#8217;s Vulnerability Rewards Program</a>, which pays security researchers when they find and report bugs to us. We fixed over 100 vulnerabilities reported this way and paid researchers more than $200,000 for their findings.</div><div><br /></div><div>In August, we launched our <a href="http://officialandroid.blogspot.com/2015/08/an-update-to-nexus-devices.html">monthly public security update program</a> to the Android Open Source Project, as well as a security update lifecycle for Nexus devices. We intend the update lifecycle for Nexus devices to be a model for all Android manufacturers going forward and have been actively working with ecosystem partners to facilitate similar programs. Since then, manufacturers have provided monthly security updates for hundreds of unique Android device models and hundreds of millions of users have installed monthly security updates to their devices. Despite this progress, many Android devices are still not receiving monthly updates&#8212;we are increasing our efforts to help partners update more devices in a timely manner.</div><div><br /></div><div><br /></div><div>Greater transparency, well-informed discussions about security, and ongoing innovation will help keep users safe. 
We'll continue our ongoing efforts to improve Android&#8217;s protections, and we look forward to engaging with the ecosystem and security community in 2016 and beyond.</div><br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Adrian Ludwig, Lead Engineer, Android Security</span><br />Today, for the <a href="https://security.googleblog.com/2015/04/android-security-state-of-union-2014.html">second year in a row</a>, we’re releasing our Android Security Annual report. This detailed summary includes: a look at how Google services protect the Android ecosystem, an overview of new security protections introduced in 2015, and our work with Android partners and the security research community at large. The full report is <a href="http://source.android.com/security/reports/Google_Android_Security_2015_Report_Final.pdf">here</a>, and an overview is below.<br /><br />One important goal of releasing this report is to drive an informed conversation about Android security. We hope to accomplish this by providing more information about what we are doing, and what we see happening in the ecosystem. We strongly believe that rigorous, data-driven discussion about security will help guide our efforts to make the Android ecosystem safer.<br /><b>Enhancing Google's services to protect Android users</b><br /><br />In the last year, we’ve significantly improved our machine learning and event correlation to detect potentially harmful behavior.<br /><br /><ul><li>We protected users from malware and other Potentially Harmful Apps (PHAs), checking over 6 billion installed applications per day.</li><li>We protected users from network-based and on-device threats by scanning 400 million devices per day.</li><li>And we <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">protected hundreds of millions of Chrome users on Android</a> from unsafe websites with Safe Browsing.</li></ul><div>We continued to make it even more difficult to get PHAs into Google Play. Last year’s enhancements reduced the probability of installing a PHA from Google Play by over 40% compared to 2014. 
Within Google Play, install attempts of most categories of PHAs declined, including:</div><div><ul><li>Data Collection: decreased over 40% to 0.08% of installs</li><li>Spyware: decreased 60% to 0.02% of installs</li><li>Hostile Downloader: decreased 50% to 0.01% of installs</li></ul><div>Overall, PHAs were installed on fewer than 0.15% of devices that only get apps from Google Play. About 0.5% of devices that install apps from both Play and other sources had a PHA installed during 2015, similar to the data in last year’s report.</div></div><div><br /></div><div>It’s critical that we also protect users who install apps from sources other than Google Play. Our <a href="http://officialandroid.blogspot.com/2014/04/expanding-googles-security-services-for.html">Verify Apps service</a> protects these users and we improved the effectiveness of the PHA warnings provided by Verify Apps by over 50%. In 2015, we saw an increase in the number of PHA install attempts outside of Google Play, and we disrupted several coordinated efforts to install PHAs onto user devices from outside of Google Play.</div><div><br /></div><div><b><br /></b></div><div><b>New security features in the Android platform</b></div><div><br /></div><div>Last year, we <a href="http://officialandroid.blogspot.com/2015/10/get-ready-for-sweet-taste-of-android-60.html">launched Android 6.0 Marshmallow</a>, introducing a variety of new security protections and controls:</div><div><ul><li>Full disk encryption is now a requirement for all new Marshmallow devices with adequate hardware capabilities and is also extended to allow encryption of data on SD cards.</li><li>Updated app permissions enable you to manage the data you share with specific apps with more granularity and precision.</li><li>New verified boot ensures your phone is healthy from the bootloader all the way up to the operating system.</li><li><a href="https://support.google.com/nexus/answer/4457705">Android security patch level</a> enables you to 
check and make sure your device has the most recent security updates.</li><li>And much more, including support for fingerprint scanners, and SELinux enhancements.</li></ul><div><b>Deeper engagement with the Android ecosystem</b></div></div><div><b><br /></b></div><div>We’re working to foster Android security research and making investments to strengthen protections across the ecosystem now and in the long run.</div><div><br /></div><div>In June, <a href="https://security.googleblog.com/2015/06/announcing-security-rewards-for-android.html">Android joined Google’s Vulnerability Rewards Program</a>, which pays security researchers when they find and report bugs to us. We fixed over 100 vulnerabilities reported this way and paid researchers more than $200,000 for their findings.</div><div><br /></div><div>In August, we launched our <a href="http://officialandroid.blogspot.com/2015/08/an-update-to-nexus-devices.html">monthly public security update program</a> to the Android Open Source Project, as well as a security update lifecycle for Nexus devices. We intend the update lifecycle for Nexus devices to be a model for all Android manufacturers going forward and have been actively working with ecosystem partners to facilitate similar programs. Since then, manufacturers have provided monthly security updates for hundreds of unique Android device models and hundreds of millions of users have installed monthly security updates to their devices. Despite this progress, many Android devices are still not receiving monthly updates—we are increasing our efforts to help partners update more devices in a timely manner.</div><div><br /></div><div><br /></div><div>Greater transparency, well-informed discussions about security, and ongoing innovation will help keep users safe. We'll continue our ongoing efforts to improve Android’s protections, and we look forward to engaging with the ecosystem and security community in 2016 and beyond.</div><br /><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/android-security-2015-annual-report/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Helping webmasters re-secure their sites</title>
		<link>https://googledata.org/google-online-security/helping-webmasters-re-secure-their-sites-2/</link>
		<comments>https://googledata.org/google-online-security/helping-webmasters-re-secure-their-sites-2/#comments</comments>
		<pubDate>Mon, 18 Apr 2016 16:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=f6f7e6fdd1ed282ce487d7bbe61dbf6c</guid>
		<description><![CDATA[<span>Posted by Kurt Thomas and Yuan Niu, Spam &#38; Abuse Research</span><br />Every week, <a href="https://www.google.com/transparencyreport/safebrowsing/">over 10 million users encounter harmful websites</a> that deliver malware and scams. Many of these sites are compromised personal blogs or small business pages that have fallen victim due to a weak password or outdated software. Safe Browsing and Google Search protect visitors from dangerous content by displaying browser warnings and labeling search results with <a href="https://support.google.com/websearch/answer/45449?hl=en">'this site may harm your computer'</a>. While this helps keep users safe in the moment, the compromised site remains a problem that needs to be fixed.<br /><br />Unfortunately, many webmasters of compromised sites are unaware anything is amiss. Worse yet, even when they learn of an incident, they may lack the security expertise to take action and address the root cause of compromise. Quoting one webmaster from a survey we conducted, &#8220;our daily and weekly backups were both infected&#8221; and even after seeking the help of a specialist, after &#8220;lots of wasted hours/days&#8221; the webmaster abandoned all attempts to restore the site and instead refocused his efforts on &#8220;rebuilding the site from scratch&#8221;.<br /><br />In order to find the best way to help webmasters clean up after a compromise, we recently teamed up with the University of California, Berkeley to explore how to quickly contact webmasters and expedite recovery while minimizing the distress involved. We&#8217;ve summarized our key lessons below. 
The full study, which you can read <a href="http://research.google.com/pubs/pub44924.html">here</a>, was recently presented at the <a href="http://www2016.ca/">International World Wide Web Conference</a>.<br /><br />When Google works directly with webmasters during critical moments like security breaches, we can help 75% of webmasters re-secure their content. The whole process takes a median of 3 days. This is a better experience for webmasters and their audience.<br /><b>How many sites get compromised?</b><br /><b><br /></b><div><a href="https://4.bp.blogspot.com/-30oj7v-8ing/VxRmwPth9YI/AAAAAAAAASs/EpcfPJrQnJIdGJuo4OrB5Zg4fL3dIRqOgCLcB/s1600/webmaster%2Bresearch.jpg"><img border="0" height="312" src="https://4.bp.blogspot.com/-30oj7v-8ing/VxRmwPth9YI/AAAAAAAAASs/EpcfPJrQnJIdGJuo4OrB5Zg4fL3dIRqOgCLcB/s640/webmaster%2Bresearch.jpg" width="640"></a></div><div><i>Number of freshly compromised sites Google detects every week.</i></div><br />Over the last year Google detected nearly 800,000 compromised websites&#8212;roughly 16,500 new sites every week from around the globe. Visitors to these sites are exposed to low-quality scam content and malware via <a href="https://security.googleblog.com/2008/02/all-your-iframe-are-point-to-us.html">drive-by downloads</a>. While browser and search warnings help protect visitors from harm, these warnings can at times feel punitive to webmasters who learn only after-the-fact that their site was compromised. To balance the safety of our users with the experience of webmasters, we set out to find the best approach to help webmasters recover from security breaches and ultimately reconnect websites with their audience.<br /><b>Finding the most effective ways to aid&#160;webmasters</b><br /><br /><ol><li><b>Getting in touch with webmasters:</b> One of the hardest steps on the road to recovery is first getting in contact with webmasters. We tried three notification channels: email, browser warnings, and search warnings. 
For webmasters who proactively registered their site with <a href="https://www.google.com/webmaster">Search Console</a>, we found that email communication led to 75% of webmasters re-securing their pages. When we didn&#8217;t know a webmaster&#8217;s email address, browser warnings and search warnings helped 54% and 43% of sites clean up respectively.</li><li><b>Providing tips on cleaning up harmful content:</b> Attackers rely on hidden files, easy-to-miss redirects, and remote inclusions to serve scams and malware. This makes clean-up increasingly tricky. When we emailed webmasters, we included tips and samples of exactly which pages contained harmful content. This, combined with expedited notification, helped webmasters clean up 62% faster compared to no tips&#8212;usually within 3 days.</li><li><b>Making sure sites stay clean:</b> Once a site is no longer serving harmful content, it&#8217;s important to make sure attackers don&#8217;t reassert control. We monitored recently cleaned websites and found 12% were compromised again in 30 days. This illustrates the challenge involved in identifying the root cause of a breach versus dealing with the side-effects.</li></ol><div><b>Making security issues less painful for webmasters&#8212;and everyone</b></div><div><br /></div><div>We hope that webmasters never have to deal with a security incident. If you are a webmaster, there are some quick steps you can take to reduce your risk. We&#8217;ve made it easier to <a href="https://security.googleblog.com/2015/02/safe-browsing-and-google-analytics.html">receive security notifications through Google Analytics</a> as well as through <a href="https://www.google.com/webmaster">Search Console</a>. Make sure to register for both services. 
Also, we have laid out helpful tips for <a href="https://webmasters.googleblog.com/2015/07/nohacked-how-to-avoid-being-target-of.html">updating your site&#8217;s software</a> and <a href="https://webmasters.googleblog.com/2015/08/nohacked-using-two-factor.html">adding additional authentication</a> that will make your site safer.</div><div><br /></div><div>If you&#8217;re a hosting provider or building a service that needs to notify victims of compromise, understand that the entire process is distressing for users. Establish a reliable communication channel before a security incident occurs, make sure to provide victims with clear recovery steps, and promptly reply to inquiries so the process feels helpful, not punitive.</div><div><br /></div><div>As we work to make the web a safer place, we think it&#8217;s critical to empower webmasters and users to make good security decisions. It&#8217;s easy for the security community to be pessimistic about incident response being &#8216;too complex&#8217; for victims, but as our findings demonstrate, even just starting a dialogue can significantly expedite recovery.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Kurt Thomas and Yuan Niu, Spam &amp; Abuse Research</span><br />Every week, <a href="https://www.google.com/transparencyreport/safebrowsing/">over 10 million users encounter harmful websites</a> that deliver malware and scams. Many of these sites are compromised personal blogs or small business pages that have fallen victim due to a weak password or outdated software. Safe Browsing and Google Search protect visitors from dangerous content by displaying browser warnings and labeling search results with <a href="https://support.google.com/websearch/answer/45449?hl=en">'this site may harm your computer'</a>. While this helps keep users safe in the moment, the compromised site remains a problem that needs to be fixed.<br /><br />Unfortunately, many webmasters of compromised sites are unaware anything is amiss. Worse yet, even when they learn of an incident, they may lack the security expertise to take action and address the root cause of the compromise. Quoting one webmaster from a survey we conducted, “our daily and weekly backups were both infected”, and even after seeking the help of a specialist and “lots of wasted hours/days”, the webmaster abandoned all attempts to restore the site and instead refocused his efforts on “rebuilding the site from scratch”.<br /><br />To find the best way to help webmasters clean up after a compromise, we recently teamed up with the University of California, Berkeley to explore how to quickly contact webmasters and expedite recovery while minimizing the distress involved. We’ve summarized our key lessons below. 
The full study, which you can read <a href="http://research.google.com/pubs/pub44924.html">here</a>, was recently presented at the <a href="http://www2016.ca/">International World Wide Web Conference</a>.<br /><br />When Google works directly with webmasters during critical moments like security breaches, we can help 75% of webmasters re-secure their content. The whole process takes a median of 3 days. This is a better experience for webmasters and their audience.<br /><b>How many sites get compromised?</b><br /><b><br /></b><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-30oj7v-8ing/VxRmwPth9YI/AAAAAAAAASs/EpcfPJrQnJIdGJuo4OrB5Zg4fL3dIRqOgCLcB/s1600/webmaster%2Bresearch.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="312" src="https://4.bp.blogspot.com/-30oj7v-8ing/VxRmwPth9YI/AAAAAAAAASs/EpcfPJrQnJIdGJuo4OrB5Zg4fL3dIRqOgCLcB/s640/webmaster%2Bresearch.jpg" width="640" /></a></div><div style="text-align: center;"><i>Number of freshly compromised sites Google detects every week.</i></div><br />Over the last year Google detected nearly 800,000 compromised websites—roughly 16,500 new sites every week from around the globe. Visitors to these sites are exposed to low-quality scam content and malware via <a href="https://security.googleblog.com/2008/02/all-your-iframe-are-point-to-us.html">drive-by downloads</a>. While browser and search warnings help protect visitors from harm, these warnings can at times feel punitive to webmasters who learn only after-the-fact that their site was compromised. 
To balance the safety of our users with the experience of webmasters, we set out to find the best approach to help webmasters recover from security breaches and ultimately reconnect websites with their audience.<br /><b>Finding the most effective ways to aid&nbsp;webmasters</b><br /><br /><ol><li><b>Getting in touch with webmasters:</b> One of the hardest steps on the road to recovery is first getting in contact with webmasters. We tried three notification channels: email, browser warnings, and search warnings. For webmasters who proactively registered their site with <a href="https://www.google.com/webmaster">Search Console</a>, we found that email communication led to 75% of webmasters re-securing their pages. When we didn’t know a webmaster’s email address, browser warnings and search warnings helped 54% and 43% of sites clean up respectively.</li><li><b>Providing tips on cleaning up harmful content:</b> Attackers rely on hidden files, easy-to-miss redirects, and remote inclusions to serve scams and malware. This makes clean-up increasingly tricky. When we emailed webmasters, we included tips and samples of exactly which pages contained harmful content. This, combined with expedited notification, helped webmasters clean up 62% faster compared to no tips—usually within 3 days.</li><li><b>Making sure sites stay clean:</b> Once a site is no longer serving harmful content, it’s important to make sure attackers don’t reassert control. We monitored recently cleaned websites and found 12% were compromised again in 30 days. This illustrates the challenge involved in identifying the root cause of a breach versus dealing with the side-effects.</li></ol><div><b>Making security issues less painful for webmasters—and everyone</b></div><div><br /></div><div>We hope that webmasters never have to deal with a security incident. If you are a webmaster, there are some quick steps you can take to reduce your risk. 
We’ve made it easier to <a href="https://security.googleblog.com/2015/02/safe-browsing-and-google-analytics.html">receive security notifications through Google Analytics</a> as well as through <a href="https://www.google.com/webmaster">Search Console</a>. Make sure to register for both services. Also, we have laid out helpful tips for <a href="https://webmasters.googleblog.com/2015/07/nohacked-how-to-avoid-being-target-of.html">updating your site’s software</a> and <a href="https://webmasters.googleblog.com/2015/08/nohacked-using-two-factor.html">adding additional authentication</a> that will make your site safer.</div><div><br /></div><div>If you’re a hosting provider or building a service that needs to notify victims of compromise, understand that the entire process is distressing for users. Establish a reliable communication channel before a security incident occurs, make sure to provide victims with clear recovery steps, and promptly reply to inquiries so the process feels helpful, not punitive.</div><div><br /></div><div>As we work to make the web a safer place, we think it’s critical to empower webmasters and users to make good security decisions. It’s easy for the security community to be pessimistic about incident response being ‘too complex’ for victims, but as our findings demonstrate, even just starting a dialogue can significantly expedite recovery.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/helping-webmasters-re-secure-their-sites-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Growing Eddystone with Ephemeral Identifiers: A Privacy Aware &amp; Secure Open Beacon Format</title>
		<link>https://googledata.org/google-online-security/growing-eddystone-with-ephemeral-identifiers-a-privacy-aware-secure-open-beacon-format/</link>
		<comments>https://googledata.org/google-online-security/growing-eddystone-with-ephemeral-identifiers-a-privacy-aware-secure-open-beacon-format/#comments</comments>
		<pubDate>Thu, 14 Apr 2016 15:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=a48b566d270b647a79dcf759d971835e</guid>
		<description><![CDATA[<span>Posted by&#160;</span><span>Nirdhar Khazanie, Product Manager     and&#160;</span><span>Yossi Matias, VP     Engineering</span><br /><br />Last July, we <a href="https://googledevelopers.blogspot.com/2015/07/lighting-way-with-ble-beacons.html">    launched</a> Eddystone, an open and extensible Bluetooth Low Energy (BLE)     beacon format from Google, supported by Android, iOS, and Chrome. Beacons     mark important places and objects in a way that your phone can understand.     To do this, they typically broadcast public one-way signals &#8210; such as an     Eddystone-UID or -URL.<br /><br />Today, we're introducing Ephemeral IDs (EID), a beacon frame in the     Eddystone format that gives developers more power to control who can make     use of the beacon signal. Eddystone-EID enables a new set of use cases     where it is important for users to be able to exchange information securely     and privately. Since the beacon frame changes periodically, the signal is     only useful to clients with access to a resolution service that maps the     beacon&#8217;s current identifier to stable data. In other words, the signal is     only recognizable to a controlled set of users. In this post we&#8217;ll provide     a bit more detail about this feature, as well as Google&#8217;s implementation of     <a href="https://developers.google.com/beacons/eddystone-eid">Eddystone-EID</a>    with Google Cloud Platform&#8217;s <a href="https://developers.google.com/beacons/proximity/guides">Proximity Beacon     API</a> and the Nearby API for Android and CocoaPod for iOS.<br /><b><br /></b><b>Technical Specifications</b><br /><br />To an observer of an Eddystone-EID beacon, the AES-encrypted     eight byte beacon identifier changes pseudo-randomly with an average period     that is set by the developer &#8210; over a range from 1 second to just over 9     hours. The identifier is generated using a key and timer running on the     beacon. 
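The key-plus-timer rotation just described can be sketched in a few lines. The following Python is an illustrative toy only, not the Eddystone-EID algorithm: the actual specification derives the 8-byte identifier with AES under the beacon's key, whereas this sketch substitutes a hash so it stays self-contained, and every name in it is hypothetical.

```python
import hashlib
import struct

def ephemeral_id(identity_key: bytes, rotation_exponent: int, clock_seconds: int) -> bytes:
    """Illustrative sketch only -- NOT the real Eddystone-EID computation.

    The spec derives the identifier with AES under the beacon's key;
    a hash stands in here so the example is self-contained.
    """
    # Quantize the clock: the identifier stays constant for one
    # 2**rotation_exponent-second window, then rotates to a new,
    # unlinkable value.
    window = clock_seconds - (clock_seconds % (1 << rotation_exponent))
    material = identity_key + struct.pack(">BQ", rotation_exponent, window)
    return hashlib.sha256(material).digest()[:8]  # 8-byte beacon identifier

key = b"\x00" * 16                  # hypothetical 16-byte identity key
k = 10                              # rotation period of 2**10 = 1024 seconds
eid_a = ephemeral_id(key, k, 5000)
eid_b = ephemeral_id(key, k, 5100)  # same 1024-second window -> same ID
eid_c = ephemeral_id(key, k, 9000)  # later window -> different ID
```

Only a resolution service that shares the key (exchanged during provisioning, as described below) can run the same computation and map a sighted identifier back to the registered beacon; to everyone else the broadcast is an opaque, periodically changing value.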
When the beacon is provisioned, or set up, the key is generated and     exchanged with a resolution service such as Proximity Beacon API using an     Elliptic Curve Diffie-Hellman key agreement <a href="https://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman">protocol</a>,     and the timer is synchronized with the service. This way, only the beacon     and the service that it is registered with have access to the key. You can     read more about the technical details of Eddystone-EID from the <a href="https://github.com/google/eddystone/tree/master/eddystone-eid">specification</a>    &#8210; including the provisioning process &#8210; on GitHub, or from our recent     <a href="https://developers.google.com/beacons/eddystone-eid-preprint.pdf">    preprint</a>.<br /><br />An Eddystone-EID contains measures designed to prevent a variety of nuanced     attacks. For example, the rotation period for a single beacon varies     slightly from identifier to identifier, meaning that an attacker cannot use     a consistent period to identify a particular beacon. Eddystone-EID also     enables safety features such as proximity awareness, device authentication,     and data encryption on packet transmission. The <a href="https://github.com/google/eddystone/tree/master/eddystone-tlm/tlm-encrypted.md">    Eddystone-TLM</a> frame has also been extended with a new version that     broadcasts battery level also encrypted with the shared key, meaning that     an attacker cannot use the battery level as an identifying feature     either.<br /><br />When correctly implemented and combined with a service that supports a     range of access control checks, such as Proximity Beacon API, this pattern     has several advantages:<br /><ul><li>The beacon&#8217;s location cannot be spoofed, except by a real-time         relay of the beacon signal. 
This makes it ideal for use cases where a developer wishes to enable premium features for a user at a location.</li><li>Beacons provide a high-quality and precise location signal that is valuable to the deployer. Eddystone-EID enables deployers to decide which developers/businesses can make use of that signal.</li><li>Eddystone-EID beacons can be integrated into devices that users carry with them without leaving users vulnerable to tracking.</li></ul><div><b>Integrating Seamlessly with the Google Beacon Platform</b><br /><br />Launching today on <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.location.beacon.beacontools">Android</a> and <a href="https://itunes.apple.com/us/app/beacon-tools/id1094371356?ls=1&#38;mt=8">iOS</a> is a new addition to the wider Google beacon platform: Beacon Tools. Beacon Tools allows you to provision and register an Eddystone-EID beacon, as well as associate content with your beacon through the Google Cloud Platform.<br /><br />In addition to Eddystone-EID and the new encrypted version of the previously available Eddystone-TLM, we&#8217;re also adding a common configuration protocol to the Eddystone family. The <a href="https://github.com/google/eddystone/tree/master/configuration-service">Eddystone GATT service</a> allows any Eddystone beacon to be provisioned by any tool that supports the protocol. This encourages the development of an open ecosystem of beacon products, both in hardware and software, removing restrictions for developers.<br /><b><br /></b><b>Eddystone-EID Support in the Beacon Industry</b><br /><br />We&#8217;re excited to have worked with a variety of industry players as Eddystone-EID develops. 
Over the past year, Eddystone <a href="https://developers.google.com/beacons/eddystone#beacon_manufacturers">manufacturers</a>        in the beacon space have grown from 5 to over 25. The following 15         manufacturers will be supporting Eddystone-EID, with more to follow:<br /><br /><div><table><colgroup><col><col><col></colgroup><tbody><tr><th><div><span><a href="http://accent-systems.com/ble-beacons/?utm_source=google_developers_eddystone&#38;utm_medium=referral_google&#38;utm_campaign=eddystone_beacon_manufacturers">Accent Systems</a></span></div></th>    <th><div><span><a href="https://bluvision.com/">Bluvision</a></span></div></th>    <th><div><span><a href="http://reco2.me/?utm_source=googledevelopers&#38;utm_medium=referral&#38;utm_campaign=eddystone_EID">Reco/Perples</a></span></div></th>  </tr><tr><td><div><a href="http://www.beaconinside.com/">Beacon Inside</a></div></td>    <td><div><a href="http://www.estimote.com/">Estimote</a></div></td>    <td><div><a href="https://www.sensoro.com/en/order">Sensoro</a></div></td>  </tr><tr><td><div><a href="https://www.blesh.com/eddystone-hacker-guide/">Blesh</a></div></td>    <td><div><a href="https://www.gimbal.com/">Gimbal</a></div></td>    <td><div><a href="http://www.signal360.com/">Signal360</a></div></td>  </tr><tr><td><div><a href="http://www.bluebite.com/">BlueBite</a></div></td>    <td><div><a href="https://www.nordicsemi.com/">Nordic</a></div></td>    <td><div><a href="https://www.swirl.com/">Swirl</a></div></td>  </tr><tr><td><div><a href="http://bluecats.com/">Bluecats</a></div></td>    <td><div><a href="http://www.radiusnetworks.com/">Radius Networks</a></div></td>    <td><div><a href="https://www.zebra.com/">Zebra</a></div></td>  </tr></tbody></table></div><div><br /></div><br /></div><div><a href="https://1.bp.blogspot.com/-2n3BMRgXckc/Vw7XDK7a1QI/AAAAAAAAASY/HrrPe8rA50EXbqqnT841amcF1yzPeJh1ACLcB/s1600/EID%2Bfinal%2B2%2Bimages-02.jpg"><img border="0" height="360" 
src="https://1.bp.blogspot.com/-2n3BMRgXckc/Vw7XDK7a1QI/AAAAAAAAASY/HrrPe8rA50EXbqqnT841amcF1yzPeJh1ACLcB/s640/EID%2Bfinal%2B2%2Bimages-02.jpg" width="640"></a>    </div><div><br />In addition to beacon manufacturers, we&#8217;ve been working with a range of         innovative companies to demonstrate Eddystone-EID in a variety of         different scenarios.</div><div><ul><li>                <a href="https://www.samsonite.com/home/home.html">Samsonite</a> and                 <a href="http://accent-systems.com/">Accent Systems</a> have                 developed a suitcase with Eddystone-EID where users can                 securely keep track of their personal luggage.             </li><li>                <a href="http://hk.k11.com/en/Home.aspx">K11</a> is a Hong Kong                 museum and retail experience using <a href="https://www.sensoro.com/en/order">Sensoro</a> Eddystone-EID                 beacons for visitor tours and customer promotions.             </li><li>                <a href="https://www.monumentalsportsnetwork.com/">Monumental                 Sports</a> in Washington, DC, uses <a href="http://www.radiusnetworks.com/">Radius Networks</a>                Eddystone-EID beacons for delivering customer rewards during                 Washington Wizards and Capitals sporting events.             </li><li>                <a href="http://www.spartadigital.co.uk/">Sparta Digital</a>                has produced an app called Buzzin that uses Eddystone-EID                 beacons deployed in Manchester, UK to enable a more seamless                 transit experience.</li></ul>You can get started with Eddystone-EID by creating a Google Cloud             Platform project and purchasing compatible hardware through one of             our <a href="https://developers.google.com/beacons/eddystone#beacon_manufacturers">            manufacturers</a>. 
Best of all, Eddystone-EID works transparently             to beacon subscriptions created through the Google Play Services             Nearby Messages API, allowing you to run combined networks of             Eddystone-EID and Eddystone-UID transparently in your client code!</div><div dir="ltr"></div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by&nbsp;</span><span class="byline-author">Nirdhar Khazanie, Product Manager     and&nbsp;</span><span class="byline-author">Yossi Matias, VP     Engineering</span><br /><br />Last July, we <a href="https://googledevelopers.blogspot.com/2015/07/lighting-way-with-ble-beacons.html">    launched</a> Eddystone, an open and extensible Bluetooth Low Energy (BLE)     beacon format from Google, supported by Android, iOS, and Chrome. Beacons     mark important places and objects in a way that your phone can understand.     To do this, they typically broadcast public one-way signals ‒ such as an     Eddystone-UID or -URL.<br /><br />Today, we're introducing Ephemeral IDs (EID), a beacon frame in the     Eddystone format that gives developers more power to control who can make     use of the beacon signal. Eddystone-EID enables a new set of use cases     where it is important for users to be able to exchange information securely     and privately. Since the beacon frame changes periodically, the signal is     only useful to clients with access to a resolution service that maps the     beacon’s current identifier to stable data. In other words, the signal is     only recognizable to a controlled set of users. In this post we’ll provide     a bit more detail about this feature, as well as Google’s implementation of     <a href="https://developers.google.com/beacons/eddystone-eid">Eddystone-EID</a>    with Google Cloud Platform’s <a href="https://developers.google.com/beacons/proximity/guides">Proximity Beacon     API</a> and the Nearby API for Android and CocoaPod for iOS.<br /><b><br /></b><b>Technical Specifications</b><br /><br />To an observer of an Eddystone-EID beacon, the AES-encrypted     eight byte beacon identifier changes pseudo-randomly with an average period     that is set by the developer ‒ over a range from 1 second to just over 9     hours. 
The identifier is generated using a key and timer running on the     beacon. When the beacon is provisioned, or set up, the key is generated and     exchanged with a resolution service such as Proximity Beacon API using an     Elliptic Curve Diffie-Hellman key agreement <a href="https://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman">protocol</a>,     and the timer is synchronized with the service. This way, only the beacon     and the service that it is registered with have access to the key. You can     read more about the technical details of Eddystone-EID from the <a href="https://github.com/google/eddystone/tree/master/eddystone-eid">specification</a>    ‒ including the provisioning process ‒ on GitHub, or from our recent     <a href="https://developers.google.com/beacons/eddystone-eid-preprint.pdf">    preprint</a>.<br /><br />An Eddystone-EID contains measures designed to prevent a variety of nuanced     attacks. For example, the rotation period for a single beacon varies     slightly from identifier to identifier, meaning that an attacker cannot use     a consistent period to identify a particular beacon. Eddystone-EID also     enables safety features such as proximity awareness, device authentication,     and data encryption on packet transmission. The <a href="https://github.com/google/eddystone/tree/master/eddystone-tlm/tlm-encrypted.md">    Eddystone-TLM</a> frame has also been extended with a new version that     broadcasts battery level also encrypted with the shared key, meaning that     an attacker cannot use the battery level as an identifying feature     either.<br /><br />When correctly implemented and combined with a service that supports a     range of access control checks, such as Proximity Beacon API, this pattern     has several advantages:<br /><ul><li>The beacon’s location cannot be spoofed, except by a real-time         relay of the beacon signal. 
This makes it ideal for use cases where a developer wishes to enable premium features for a user at a location.</li><li>Beacons provide a high-quality and precise location signal that is valuable to the deployer. Eddystone-EID enables deployers to decide which developers/businesses can make use of that signal.</li><li>Eddystone-EID beacons can be integrated into devices that users carry with them without leaving users vulnerable to tracking.</li></ul><div><b>Integrating Seamlessly with the Google Beacon Platform</b><br /><br />Launching today on <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.location.beacon.beacontools">Android</a> and <a href="https://itunes.apple.com/us/app/beacon-tools/id1094371356?ls=1&amp;mt=8">iOS</a> is a new addition to the wider Google beacon platform: Beacon Tools. Beacon Tools allows you to provision and register an Eddystone-EID beacon, as well as associate content with your beacon through the Google Cloud Platform.<br /><br />In addition to Eddystone-EID and the new encrypted version of the previously available Eddystone-TLM, we’re also adding a common configuration protocol to the Eddystone family. The <a href="https://github.com/google/eddystone/tree/master/configuration-service">Eddystone GATT service</a> allows any Eddystone beacon to be provisioned by any tool that supports the protocol. This encourages the development of an open ecosystem of beacon products, both in hardware and software, removing restrictions for developers.<br /><b><br /></b><b>Eddystone-EID Support in the Beacon Industry</b><br /><br />We’re excited to have worked with a variety of industry players as Eddystone-EID develops. 
Over the past year, Eddystone <a href="https://developers.google.com/beacons/eddystone#beacon_manufacturers">manufacturers</a> in the beacon space have grown from 5 to over 25. The following 15 manufacturers will be supporting Eddystone-EID, with more to follow:<br /><br /><div><table style="text-align: left; table-layout: fixed; width: 633px;"><colgroup><col style="width: 211px;" /><col style="width: 211px;" /><col style="width: 211px;" /></colgroup><tbody><tr><th><div style="text-align: left;"><span style="font-weight: normal;"><a href="http://accent-systems.com/ble-beacons/?utm_source=google_developers_eddystone&amp;utm_medium=referral_google&amp;utm_campaign=eddystone_beacon_manufacturers">Accent Systems</a></span></div></th><th><div style="text-align: left;"><span style="font-weight: normal;"><a href="https://bluvision.com/">Bluvision</a></span></div></th><th><div style="text-align: left;"><span style="font-weight: normal;"><a href="http://reco2.me/?utm_source=googledevelopers&amp;utm_medium=referral&amp;utm_campaign=eddystone_EID">Reco/Perples</a></span></div></th></tr><tr><td><div style="text-align: left;"><a href="http://www.beaconinside.com/">Beacon Inside</a></div></td><td><div style="text-align: left;"><a href="http://www.estimote.com/">Estimote</a></div></td><td><div style="text-align: left;"><a href="https://www.sensoro.com/en/order">Sensoro</a></div></td></tr><tr><td><div style="text-align: left;"><a href="https://www.blesh.com/eddystone-hacker-guide/">Blesh</a></div></td><td><div style="text-align: left;"><a href="https://www.gimbal.com/">Gimbal</a></div></td><td><div style="text-align: left;"><a href="http://www.signal360.com/">Signal360</a></div></td></tr><tr><td><div style="text-align: left;"><a href="http://www.bluebite.com/">BlueBite</a></div></td><td><div style="text-align: left;"><a href="https://www.nordicsemi.com/">Nordic</a></div></td><td><div 
style="text-align: left;"><a href="https://www.swirl.com/">Swirl</a></div></td>  </tr><tr>    <td><div style="text-align: left;"><a href="http://bluecats.com/">Bluecats</a></div></td>    <td><div style="text-align: left;"><a href="http://www.radiusnetworks.com/">Radius Networks</a></div></td>    <td><div style="text-align: left;"><a href="https://www.zebra.com/">Zebra</a></div></td>  </tr></tbody></table></div><div style="text-align: left;"><br /></div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-2n3BMRgXckc/Vw7XDK7a1QI/AAAAAAAAASY/HrrPe8rA50EXbqqnT841amcF1yzPeJh1ACLcB/s1600/EID%2Bfinal%2B2%2Bimages-02.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="360" src="https://1.bp.blogspot.com/-2n3BMRgXckc/Vw7XDK7a1QI/AAAAAAAAASY/HrrPe8rA50EXbqqnT841amcF1yzPeJh1ACLcB/s640/EID%2Bfinal%2B2%2Bimages-02.jpg" width="640" /></a>    </div><div><br />In addition to beacon manufacturers, we’ve been working with a range of         innovative companies to demonstrate Eddystone-EID in a variety of         different scenarios.</div><div><ul><li>                <a href="https://www.samsonite.com/home/home.html">Samsonite</a> and                 <a href="http://accent-systems.com/">Accent Systems</a> have                 developed a suitcase with Eddystone-EID where users can                 securely keep track of their personal luggage.             </li><li>                <a href="http://hk.k11.com/en/Home.aspx">K11</a> is a Hong Kong                 museum and retail experience using <a href="https://www.sensoro.com/en/order">Sensoro</a> Eddystone-EID                 beacons for visitor tours and customer promotions.             
</li><li>                <a href="https://www.monumentalsportsnetwork.com/">Monumental                 Sports</a> in Washington, DC, uses <a href="http://www.radiusnetworks.com/">Radius Networks</a>                Eddystone-EID beacons for delivering customer rewards during                 Washington Wizards and Capitals sporting events.             </li><li>                <a href="http://www.spartadigital.co.uk/">Sparta Digital</a>                has produced an app called Buzzin that uses Eddystone-EID                 beacons deployed in Manchester, UK to enable a more seamless                 transit experience.</li></ul>You can get started with Eddystone-EID by creating a Google Cloud             Platform project and purchasing compatible hardware through one of             our <a href="https://developers.google.com/beacons/eddystone#beacon_manufacturers">            manufacturers</a>. Best of all, Eddystone-EID works transparently             to beacon subscriptions created through the Google Play Services             Nearby Messages API, allowing you to run combined networks of             Eddystone-EID and Eddystone-UID transparently in your client code!</div><div dir="ltr" style="margin-left: 0pt;"></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/growing-eddystone-with-ephemeral-identifiers-a-privacy-aware-secure-open-beacon-format/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Improvements to Safe Browsing Alerts for Network Administrators</title>
		<link>https://googledata.org/google-online-security/improvements-to-safe-browsing-alerts-for-network-administrators/</link>
		<comments>https://googledata.org/google-online-security/improvements-to-safe-browsing-alerts-for-network-administrators/#comments</comments>
		<pubDate>Wed, 06 Apr 2016 19:06:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=204cb047e71ec00712aab1d83e2aec27</guid>
		<description><![CDATA[<span>Posted by Nav Jagpal, Software Engineer</span><br /><br />We <a href="https://googleonlinesecurity.blogspot.com/2010/09/safe-browsing-alerts-for-network.html">launched</a> Safe Browsing Alerts for Network Administrators over 5 years ago. Just as Safe Browsing warns users about dangerous sites, this service sends notifications to network administrators when our systems detect harmful URLs on their networks.<br /><br />We&#8217;ve made good progress:<br /><br /><ul><li>22k ASNs are being monitored, or roughly 40% of active networks</li><li>1300 network administrators are actively using the tool</li><li>250 reports are sent daily to these administrators</li></ul><div>Today, to provide Network Admins with even more useful information for protecting their users, we&#8217;re adding URLs related to Unwanted Software, Malicious Software, and Social Engineering to the set of information we share.</div><div><br /></div><div><br /></div><div>Here&#8217;s the full set of data we share with network administrators:</div><div><ul><li><b>Compromised</b>: Pages harming users through <a href="https://en.wikipedia.org/wiki/Drive-by_download">drive-by-download</a> or exploits.</li><li><b>Distribution</b>: Domains that are responsible for launching exploits and serving malware. Unlike compromised sites, which are often run by innocent webmasters, distribution domains are typically set up with the primary purpose of serving malicious content.</li><li><b>Social Engineering</b>: Deceptive websites that trick users into performing unwanted actions such as downloading software or divulging private information. Social engineering includes phishing sites that trick users into revealing passwords.</li><li><b>Unwanted Software</b>: URLs which lead to software that violates our <a href="https://www.google.com/about/company/unwanted-software-policy.html">Unwanted Software Policy</a>. 
This kind of software is often distributed through deceptive means such as social engineering, and has harmful traits such as modifying users&#8217; browsing experience in unexpected ways and performing unwanted ad injections. You can learn more about Unwanted Software, or UwS, <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">here</a>.</li><li><b>Malware</b>: Traditional malware downloads, such as trojans and viruses.</li></ul><div>Network administrators can use the data provided by our service to gain insights into the security and quality of their network. By working together, we can make it more challenging and expensive for attackers to profit from user harm.</div><div><br /></div><div><br /></div><div>If you&#8217;re a network administrator and haven&#8217;t yet registered your AS, you can do so <a href="https://www.google.com/safebrowsing/alerts/">here</a>. If you are experiencing problems verifying ownership, please <a href="https://productforums.google.com/forum/#!topicsearchin/webmasters/category%243Amalware--hacked-sites%7Csort:relevance%7Cspell:false">contact us</a>.</div></div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Nav Jagpal, Software Engineer</span><br /><br />We <a href="https://googleonlinesecurity.blogspot.com/2010/09/safe-browsing-alerts-for-network.html">launched</a> Safe Browsing Alerts for Network Administrators over 5 years ago. Just as Safe Browsing warns users about dangerous sites, this service sends notifications to network administrators when our systems detect harmful URLs on their networks.<br /><br />We’ve made good progress:<br /><br /><ul><li>22k ASNs are being monitored, or roughly 40% of active networks</li><li>1300 network administrators are actively using the tool</li><li>250 reports are sent daily to these administrators</li></ul><div>Today, to provide Network Admins with even more useful information for protecting their users, we’re adding URLs related to Unwanted Software, Malicious Software, and Social Engineering to the set of information we share.</div><div><br /></div><div><br /></div><div>Here’s the full set of data we share with network administrators:</div><div><ul><li><b>Compromised</b>: Pages harming users through <a href="https://en.wikipedia.org/wiki/Drive-by_download">drive-by-download</a> or exploits.</li><li><b>Distribution</b>: Domains that are responsible for launching exploits and serving malware. Unlike compromised sites, which are often run by innocent webmasters, distribution domains are typically set up with the primary purpose of serving malicious content.</li><li><b>Social Engineering</b>: Deceptive websites that trick users into performing unwanted actions such as downloading software or divulging private information. Social engineering includes phishing sites that trick users into revealing passwords.</li><li><b>Unwanted Software</b>: URLs which lead to software that violates our <a href="https://www.google.com/about/company/unwanted-software-policy.html">Unwanted Software Policy</a>. 
This kind of software is often distributed through deceptive means such as social engineering, and has harmful traits such as modifying users’ browsing experience in unexpected ways and performing unwanted ad injections. You can learn more about Unwanted Software, or UwS, <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">here</a>.</li><li><b>Malware</b>: Traditional malware downloads, such as trojans and viruses.</li></ul><div>Network administrators can use the data provided by our service to gain insights into the security and quality of their network. By working together, we can make it more challenging and expensive for attackers to profit from user harm.</div><div><br /></div><div><br /></div><div>If you’re a network administrator and haven’t yet registered your AS, you can do so <a href="https://www.google.com/safebrowsing/alerts/">here</a>. If you are experiencing problems verifying ownership, please <a href="https://productforums.google.com/forum/#!topicsearchin/webmasters/category$3Amalware--hacked-sites%7Csort:relevance%7Cspell:false">contact us</a>.</div></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/improvements-to-safe-browsing-alerts-for-network-administrators/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>More Encryption, More Notifications, More Email Security</title>
		<link>https://googledata.org/google-online-security/more-encryption-more-notifications-more-email-security/</link>
		<comments>https://googledata.org/google-online-security/more-encryption-more-notifications-more-email-security/#comments</comments>
		<pubDate>Thu, 24 Mar 2016 14:59:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=ee1fdeab4feebc8127de833ecf4a3026</guid>
		<description><![CDATA[<span>Posted by Nicolas Lidzborski, Gmail Security Engineering Lead and Jonathan Pevarnek, <a href="https://jigsaw.google.com/">Jigsaw</a> Engineer</span><br /><br />Today, we&#8217;re announcing a variety of new protections that will help keep Gmail users even safer and promote email security best practices across the Internet as a whole.<br /><br /><b>New tools and industry standards make email even safer</b><br /><br />On <a href="https://googleblog.blogspot.com/2016/02/building-safer-web-for-everyone.html">Safer Internet Day</a> this year, we introduced a new visual element to Gmail that lets users know when they&#8217;ve received a message that wasn&#8217;t delivered using encryption or if they&#8217;re composing a message to a recipient whose email service doesn&#8217;t support TLS encryption. It&#8217;s the red lock icon featured below:<br /><div><a href="https://2.bp.blogspot.com/-BWUeYw74ojo/VvODIeVrYQI/AAAAAAAAARw/tibrDy77f5sqdfFTrI7_6E24PuAEzXSXA/s1600/red%2Block.jpg"><img border="0" height="160" src="https://2.bp.blogspot.com/-BWUeYw74ojo/VvODIeVrYQI/AAAAAAAAARw/tibrDy77f5sqdfFTrI7_6E24PuAEzXSXA/s640/red%2Block.jpg" width="640"></a></div>This has had an immediate, positive effect on Gmail security. In the 44 days since we introduced it, the amount of inbound mail sent over an encrypted connection <a href="https://www.google.com/transparencyreport/saferemail/">increased by 25%</a>. We&#8217;re very encouraged by this progress! Given the relative ease of implementing encryption and its significant benefits for users, we expect to see this progress continue.<br /><br />However, as our recent <a href="https://security.googleblog.com/2015/11/new-research-encouraging-trends-and.html">research</a> with the University of Michigan and University of Illinois shows, misconfigured or malicious parts of the Internet can still tamper with email encryption. 
To help ensure TLS encryption works as intended, we&#8217;ve teamed up with a variety of industry partners &#8212; including Comcast, Microsoft, and Yahoo! &#8212; to submit a <a href="https://datatracker.ietf.org/doc/draft-margolis-smtp-sts/">draft IETF specification</a> for &#8220;SMTP Strict Transport Security.&#8221; With this new proposed standard, companies can ensure that mail will only be delivered through encrypted channels, and that any encryption failures are reported for further analysis, helping shine the spotlight on any malfeasance occurring around the Internet.<br /><br /><b>Safe Browsing makes Gmail more secure</b><br /><br />Since 2007, <a href="https://googleblog.blogspot.com/2015/03/protecting-people-across-web-with.html">Safe Browsing</a> has protected users across the web by warning them before they visit dangerous sites known for phishing, malware, and <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">Unwanted Software</a>. Over the years, we&#8217;ve brought the protections afforded by Safe Browsing to other Google products as well, including Chrome, <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">Android</a>, Ads, Google Analytics, and more.<br /><br />Safe Browsing already protects Gmail users by identifying potentially dangerous links in messages. Starting this week, Gmail users will begin to see warnings if they click these links, further extending this protection to different web browsers and email apps. 
The full-page warning will look like this:<br /><div><a href="https://3.bp.blogspot.com/-mMSfbO0BAl4/VvODTRoLzeI/AAAAAAAAAR0/1EF5UWYuYDIm2NqnZgwRwM98srVbgxqQQ/s1600/new%2Bsb%2Bwarning.jpg"><img border="0" height="209" src="https://3.bp.blogspot.com/-mMSfbO0BAl4/VvODTRoLzeI/AAAAAAAAAR0/1EF5UWYuYDIm2NqnZgwRwM98srVbgxqQQ/s640/new%2Bsb%2Bwarning.jpg" width="640"></a></div><br /><b>Enhancing state-sponsored attack warnings</b><br /><br /><a href="https://security.googleblog.com/2012/06/security-warnings-for-suspected-state.html">Since 2012</a>, we&#8217;ve warned Gmail users when we suspect they&#8217;ve been targeted by state-sponsored attackers:<br /><div><a href="https://1.bp.blogspot.com/-t9M8MUKmsn0/VvOEHIVm2xI/AAAAAAAAAR8/uWNc8JeF1n4fH0IgKOCExI_R08nx3781w/s1600/old%2Bwarning.jpg"><img border="0" height="100" src="https://1.bp.blogspot.com/-t9M8MUKmsn0/VvOEHIVm2xI/AAAAAAAAAR8/uWNc8JeF1n4fH0IgKOCExI_R08nx3781w/s640/old%2Bwarning.jpg" width="640"></a></div>These warnings are rare&#8212;fewer than 0.1% of users ever receive them&#8212;but they are critically important. The users that receive these warnings are often activists, journalists, and policy-makers taking bold stands around the world.<br /><br />Today, we&#8217;re launching a new, full-page warning with instructions about how these users can stay safe. They may see these new warnings instead of, or in addition to, the existing ones.<br /><div><a href="https://2.bp.blogspot.com/-nuTZy1YiVYs/VvOEN-CAQVI/AAAAAAAAASA/nrHIVC4pW4Eck8j-cPDPkCo2kt5yBVCCA/s1600/new%2Bwarning.png"><img border="0" height="587" src="https://2.bp.blogspot.com/-nuTZy1YiVYs/VvOEN-CAQVI/AAAAAAAAASA/nrHIVC4pW4Eck8j-cPDPkCo2kt5yBVCCA/s640/new%2Bwarning.png" width="640"></a></div><br />The security of our users and their data is paramount. We&#8217;ll continue to build new protections, and work closely with the broader email ecosystem to support and improve standards such as TLS, that keep users safe.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Nicolas Lidzborski, Gmail Security Engineering Lead and Jonathan Pevarnek, <a href="https://jigsaw.google.com/">Jigsaw</a> Engineer</span><br /><br />Today, we’re announcing a variety of new protections that will help keep Gmail users even safer and promote email security best practices across the Internet as a whole.<br /><br /><b>New tools and industry standards make email even safer</b><br /><br />On <a href="https://googleblog.blogspot.com/2016/02/building-safer-web-for-everyone.html">Safer Internet Day</a> this year, we introduced a new visual element to Gmail that lets users know when they’ve received a message that wasn’t delivered using encryption or if they’re composing a message to a recipient whose email service doesn’t support TLS encryption. It’s the red lock icon featured below:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-BWUeYw74ojo/VvODIeVrYQI/AAAAAAAAARw/tibrDy77f5sqdfFTrI7_6E24PuAEzXSXA/s1600/red%2Block.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="160" src="https://2.bp.blogspot.com/-BWUeYw74ojo/VvODIeVrYQI/AAAAAAAAARw/tibrDy77f5sqdfFTrI7_6E24PuAEzXSXA/s640/red%2Block.jpg" width="640" /></a></div>This has had an immediate, positive effect on Gmail security. In the 44 days since we introduced it, the amount of inbound mail sent over an encrypted connection <a href="https://www.google.com/transparencyreport/saferemail/">increased by 25%</a>. We’re very encouraged by this progress! 
Given the relative ease of implementing encryption and its significant benefits for users, we expect to see this progress continue.<br /><br />However, as our recent <a href="https://security.googleblog.com/2015/11/new-research-encouraging-trends-and.html">research</a> with the University of Michigan and University of Illinois shows, misconfigured or malicious parts of the Internet can still tamper with email encryption. To help ensure TLS encryption works as intended, we’ve teamed up with a variety of industry partners — including Comcast, Microsoft, and Yahoo! — to submit a <a href="https://datatracker.ietf.org/doc/draft-margolis-smtp-sts/">draft IETF specification</a> for “SMTP Strict Transport Security.” With this new proposed standard, companies can ensure that mail will only be delivered through encrypted channels, and that any encryption failures are reported for further analysis, helping shine the spotlight on any malfeasance occurring around the Internet.<br /><br /><b>Safe Browsing makes Gmail more secure</b><br /><br />Since 2007, <a href="https://googleblog.blogspot.com/2015/03/protecting-people-across-web-with.html">Safe Browsing</a> has protected users across the web by warning them before they visit dangerous sites known for phishing, malware, and <a href="https://security.googleblog.com/2015/12/year-one-progress-in-fight-against.html">Unwanted Software</a>. Over the years, we’ve brought the protections afforded by Safe Browsing to other Google products as well, including Chrome, <a href="https://security.googleblog.com/2015/12/protecting-hundreds-of-millions-more.html">Android</a>, Ads, Google Analytics, and more.<br /><br />Safe Browsing already protects Gmail users by identifying potentially dangerous links in messages. Starting this week, Gmail users will begin to see warnings if they click these links, further extending this protection to different web browsers and email apps. 
The full-page warning will look like this:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-mMSfbO0BAl4/VvODTRoLzeI/AAAAAAAAAR0/1EF5UWYuYDIm2NqnZgwRwM98srVbgxqQQ/s1600/new%2Bsb%2Bwarning.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="209" src="https://3.bp.blogspot.com/-mMSfbO0BAl4/VvODTRoLzeI/AAAAAAAAAR0/1EF5UWYuYDIm2NqnZgwRwM98srVbgxqQQ/s640/new%2Bsb%2Bwarning.jpg" width="640" /></a></div><br /><b>Enhancing state-sponsored attack warnings</b><br /><br /><a href="https://security.googleblog.com/2012/06/security-warnings-for-suspected-state.html">Since 2012</a>, we’ve warned Gmail users when we suspect they’ve been targeted by state-sponsored attackers:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-t9M8MUKmsn0/VvOEHIVm2xI/AAAAAAAAAR8/uWNc8JeF1n4fH0IgKOCExI_R08nx3781w/s1600/old%2Bwarning.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="100" src="https://1.bp.blogspot.com/-t9M8MUKmsn0/VvOEHIVm2xI/AAAAAAAAAR8/uWNc8JeF1n4fH0IgKOCExI_R08nx3781w/s640/old%2Bwarning.jpg" width="640" /></a></div>These warnings are rare—fewer than 0.1% of users ever receive them—but they are critically important. The users that receive these warnings are often activists, journalists, and policy-makers taking bold stands around the world.<br /><br />Today, we’re launching a new, full-page warning with instructions about how these users can stay safe. 
They may see these new warnings instead of, or in addition to, the existing ones.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-nuTZy1YiVYs/VvOEN-CAQVI/AAAAAAAAASA/nrHIVC4pW4Eck8j-cPDPkCo2kt5yBVCCA/s1600/new%2Bwarning.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="587" src="https://2.bp.blogspot.com/-nuTZy1YiVYs/VvOEN-CAQVI/AAAAAAAAASA/nrHIVC4pW4Eck8j-cPDPkCo2kt5yBVCCA/s640/new%2Bwarning.png" width="640" /></a></div><br />The security of our users and their data is paramount. We’ll continue to build new protections, and work closely with the broader email ecosystem to support and improve standards such as TLS, that keep users safe.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/more-encryption-more-notifications-more-email-security/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Certificate Transparency for Untrusted CAs</title>
		<link>https://googledata.org/google-online-security/certificate-transparency-for-untrusted-cas/</link>
		<comments>https://googledata.org/google-online-security/certificate-transparency-for-untrusted-cas/#comments</comments>
		<pubDate>Mon, 21 Mar 2016 19:25:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e88635fccbd396db9a06028bf16af7ce</guid>
		<description><![CDATA[<span>Posted by Martin Smith, Software Engineer, Certificate Transparency</span><br /><br />Today we are announcing a new Certificate Transparency log for a different set of root certificates: those that were once, or are not yet, trusted by browsers.<br /><br /><a href="https://www.certificate-transparency.org/">Certificate Transparency</a> (CT) data has a number of different uses, including protecting users from mis-issued certificates and providing webmasters and other interested parties with a public record of what certificates have been issued for domains.<br /><br /><a href="https://www.chromium.org/Home/chromium-security/certificate-transparency">Initially</a>, our logs included browser-trusted Certificate Authorities (CAs). However, there are two main classes of CA that can&#8217;t easily be included in the existing logs:<br /><br /><ol><li>Those that were once trusted and have since been withdrawn from the root programs.</li><li>New CAs that are on the path to inclusion in browser-trusted roots.</li></ol><div><br /></div><div>Including these in trusted logs is problematic for several reasons, including uncertainties around revocation policies and the possibility of cross-signing attacks being attempted by malicious third parties.</div><div><br /></div><div><br /></div><div>However, visibility of these CAs&#8217; activities is still useful, so we have created a new CT log for these certificates. This log will not be trusted by Chrome, and will provide a public record of certificates that are not accepted by the existing Google-operated logs.</div><div><br /></div><div><br /></div><div>The new log is accessible at <span>ct.googleapis.com/submariner</span> and is listed on our <a href="https://www.certificate-transparency.org/known-logs">Known Logs</a> page. 
It has the same API as the existing logs.</div><div><br /></div><div><br /></div><div>Initially, Submariner includes certificates chaining up to the set of root certificates that <a href="https://knowledge.symantec.com/support/ssl-certificates-support/index?page=content&#38;id=ALERT1941&#38;actp=LIST&#38;viewlocale=en_US">Symantec</a> recently announced it had discontinued, as well as a <a href="https://mozillacaprogram.secure.force.com/CA/PendingCACertificateReport">collection of additional roots</a> suggested to us that are pending inclusion in Mozilla.</div><div><br /></div><div><br /></div><div>Once Symantec&#8217;s affected certificates are no longer trusted by browsers, we will be withdrawing them from the trusted roots accepted by our existing logs (Aviator, Pilot, and Rocketeer).</div><div><br /></div><div><br /></div><div>Third parties are invited to suggest additional roots for potential inclusion in the new log by email to <a href="mailto:google-ct-logs@googlegroups.com">google-ct-logs@googlegroups.com</a>.</div><div><br /></div><div><br /></div><div>Everyone is welcome to make use of the log to submit certificates and query data. We hope it will prove useful and help to improve web security.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Martin Smith, Software Engineer, Certificate Transparency</span><br /><br />Today we are announcing a new Certificate Transparency log for a different set of root certificates: those that were once, or are not yet, trusted by browsers.<br /><br /><a href="https://www.certificate-transparency.org/">Certificate Transparency</a> (CT) data has a number of different uses, including protecting users from mis-issued certificates and providing webmasters and other interested parties with a public record of what certificates have been issued for domains.<br /><br /><a href="https://www.chromium.org/Home/chromium-security/certificate-transparency">Initially</a>, our logs included browser-trusted Certificate Authorities (CAs). However, there are two main classes of CA that can’t easily be included in the existing logs:<br /><br /><ol><li>Those that were once trusted and have since been withdrawn from the root programs.</li><li>New CAs that are on the path to inclusion in browser-trusted roots.</li></ol><div><br /></div><div>Including these in trusted logs is problematic for several reasons, including uncertainties around revocation policies and the possibility of cross-signing attacks being attempted by malicious third parties.</div><div><br /></div><div><br /></div><div>However, visibility of these CAs’ activities is still useful, so we have created a new CT log for these certificates. This log will not be trusted by Chrome, and will provide a public record of certificates that are not accepted by the existing Google-operated logs.</div><div><br /></div><div><br /></div><div>The new log is accessible at <span style="font-family: Courier New, Courier, monospace;">ct.googleapis.com/submariner</span> and is listed on our <a href="https://www.certificate-transparency.org/known-logs">Known Logs</a> page. 
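A CT log commits to everything it has accepted by publishing a signed Merkle tree head, as specified in RFC 6962. As a rough sketch of ours (not code from this announcement), the tree hash a log publishes can be computed like this:

```python
# Sketch of the RFC 6962 Merkle Tree Hash used by CT logs.
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # Leaves are hashed with a 0x00 prefix so a leaf can never
    # collide with an interior node (RFC 6962, section 2.1).
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # Interior nodes are hashed with a 0x01 prefix.
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_tree_hash(entries: list) -> bytes:
    # The hash of the empty tree is the hash of the empty string.
    if not entries:
        return hashlib.sha256(b"").digest()
    if len(entries) == 1:
        return leaf_hash(entries[0])
    # Split at the largest power of two smaller than n, then recurse.
    k = 1
    while k * 2 < len(entries):
        k *= 2
    return node_hash(merkle_tree_hash(entries[:k]),
                     merkle_tree_hash(entries[k:]))
```

An auditor can recompute this root from the log's get-entries responses and compare it against the signed tree head returned by get-sth.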
It has the same API as the existing logs.</div><div><br /></div><div><br /></div><div>Initially, Submariner includes certificates chaining up to the set of root certificates that <a href="https://knowledge.symantec.com/support/ssl-certificates-support/index?page=content&amp;id=ALERT1941&amp;actp=LIST&amp;viewlocale=en_US">Symantec</a> recently announced it had discontinued, as well as a <a href="https://mozillacaprogram.secure.force.com/CA/PendingCACertificateReport">collection of additional roots</a> suggested to us that are pending inclusion in Mozilla.</div><div><br /></div><div><br /></div><div>Once Symantec’s affected certificates are no longer trusted by browsers, we will be withdrawing them from the trusted roots accepted by our existing logs (Aviator, Pilot, and Rocketeer).</div><div><br /></div><div><br /></div><div>Third parties are invited to suggest additional roots for potential inclusion in the new log by email to <a href="mailto:google-ct-logs@googlegroups.com">google-ct-logs@googlegroups.com</a>.</div><div><br /></div><div><br /></div><div>Everyone is welcome to make use of the log to submit certificates and query data. We hope it will prove useful and help to improve web security.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/certificate-transparency-for-untrusted-cas/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>BinDiff now available for free</title>
		<link>https://googledata.org/google-online-security/bindiff-now-available-for-free/</link>
		<comments>https://googledata.org/google-online-security/bindiff-now-available-for-free/#comments</comments>
		<pubDate>Fri, 18 Mar 2016 16:36:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=6a17bf512831091398809bc1d208ca1f</guid>
		<description><![CDATA[<span>Posted by Christian Blichmann, Software Engineer</span><br /><br />BinDiff is a comparison tool for binary files that helps to quickly find differences and similarities in disassembled code. It is used by security researchers and engineers across the globe to identify and isolate fixes for vulnerabilities in vendor-supplied patches and to analyze multiple versions of the same binary. Another common use case is to transfer analysis results from one binary to another, helping to prevent duplicate analyses of, for example, malware binaries. This also helps to retain knowledge across teams of binary analysts where the individual workflows might vary from analyst to analyst.<br /><br />More specifically, BinDiff can be used to:<br /><br /><ul><li>Compare binary files for x86, MIPS, ARM/AArch64, PowerPC, and other architectures.</li><li>Identify identical and similar functions in different binaries.</li><li>Port function names, comments and local variable names from one disassembly to another.</li><li>Detect and highlight changes between two variants of the same function.</li></ul><div><br /></div><div>Here is a screenshot demonstrating what using BinDiff to display per-function differences looks like:</div><div><a href="https://3.bp.blogspot.com/-ztHVR44Udlc/Vuwbf9qpazI/AAAAAAAAARc/O74mF-47dU4uBkmLBk2Nxntdc1TF3qmjg/s1600/bindiff.jpg"><img border="0" height="446" src="https://3.bp.blogspot.com/-ztHVR44Udlc/Vuwbf9qpazI/AAAAAAAAARc/O74mF-47dU4uBkmLBk2Nxntdc1TF3qmjg/s640/bindiff.jpg" width="640"></a></div><div><br /></div><div>At Google, the BinDiff core engine powers a large-scale malware processing pipeline helping to protect both internal and external users. 
BinDiff provides the underlying comparison results needed to cluster the world's malware into related families, with billions of comparisons performed so far.</div><div><br /></div><div><br /></div><div>Ever since zynamics <a href="http://blog.zynamics.com/2011/03/01/zynamics-acquired-by-google/">joined Google</a> in 2011, we have been committed to keeping our most valuable tools available to the security research community. We first lowered the price, and today we are taking the next logical step by making it available free of charge.</div><div><br /></div><div><br /></div><div>You can download BinDiff from the <a href="http://zynamics.com/software.html">zynamics website</a>. The current version, BinDiff 4.2, is available for both Linux and Windows. To use it, you also need the commercial Hex-Rays IDA Pro disassembler, version 6.8 or later.</div><div><br /></div><div><br /></div><div>Happy BinDiff-ing!</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Christian Blichmann, Software Engineer</span><br /><br />BinDiff is a comparison tool for binary files that helps to quickly find differences and similarities in disassembled code. It is used by security researchers and engineers across the globe to identify and isolate fixes for vulnerabilities in vendor-supplied patches and to analyze multiple versions of the same binary. Another common use case is to transfer analysis results from one binary to another, helping to prevent duplicate analyses of, for example, malware binaries. This also helps to retain knowledge across teams of binary analysts where the individual workflows might vary from analyst to analyst.<br /><br />More specifically, BinDiff can be used to:<br /><br /><ul><li>Compare binary files for x86, MIPS, ARM/AArch64, PowerPC, and other architectures.</li><li>Identify identical and similar functions in different binaries.</li><li>Port function names, comments and local variable names from one disassembly to another.</li><li>Detect and highlight changes between two variants of the same function.</li></ul><div><br /></div><div>Here is a screenshot demonstrating what using BinDiff to display per-function differences looks like:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-ztHVR44Udlc/Vuwbf9qpazI/AAAAAAAAARc/O74mF-47dU4uBkmLBk2Nxntdc1TF3qmjg/s1600/bindiff.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="446" src="https://3.bp.blogspot.com/-ztHVR44Udlc/Vuwbf9qpazI/AAAAAAAAARc/O74mF-47dU4uBkmLBk2Nxntdc1TF3qmjg/s640/bindiff.jpg" width="640" /></a></div><div><br /></div><div>At Google, the BinDiff core engine powers a large-scale malware processing pipeline helping to protect both internal and external users. 
BinDiff provides the underlying comparison results needed to cluster the world's malware into related families, with billions of comparisons performed so far.</div><div><br /></div><div><br /></div><div>Ever since zynamics <a href="http://blog.zynamics.com/2011/03/01/zynamics-acquired-by-google/">joined Google</a> in 2011, we have been committed to keeping our most valuable tools available to the security research community. We first lowered the price, and today we are taking the next logical step by making it available free of charge.</div><div><br /></div><div><br /></div><div>You can download BinDiff from the <a href="http://zynamics.com/software.html">zynamics website</a>. The current version, BinDiff 4.2, is available for both Linux and Windows. To use it, you also need the commercial Hex-Rays IDA Pro disassembler, version 6.8 or later.</div><div><br /></div><div><br /></div><div>Happy BinDiff-ing!</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/bindiff-now-available-for-free/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Securing the web, together</title>
		<link>https://googledata.org/google-online-security/securing-the-web-together/</link>
		<comments>https://googledata.org/google-online-security/securing-the-web-together/#comments</comments>
		<pubDate>Tue, 15 Mar 2016 16:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=75a42de29db22ffd2a6f93a1d09ef540</guid>
		<description><![CDATA[<span>Posted by Rutledge Chin Feman and Tim Willis, HTTPS Evangelists</span><br /><br />Encryption keeps people&#8217;s information safe as it moves between their devices and Google, protecting it from interception and unauthorized access by attackers. With a modern encrypted connection, you can be confident that your data will be private and secure.<br /><br />Today we are launching a new section of our <a href="https://www.google.com/transparencyreport/https">Transparency Report</a> to track the progress of encryption efforts&#8212;both at Google and on some of the web's most trafficked sites. Our aim with this project is to hold ourselves accountable and encourage others to encrypt so we can make the web even safer for everyone.<br /><br />Here's an overview of what is included in the new report:<br /><b>Google sites</b><br />Every week, we&#8217;ll update this report with progress we've made towards implementing HTTPS by default across Google&#8217;s services. We&#8217;ve long offered Gmail, Drive, and Search over HTTPS, and in the last year, we&#8217;ve begun to add traffic from more products, like <a href="https://googleonlinesecurity.blogspot.com/2015/04/ads-take-step-towards-https-everywhere.html">ads</a> and <a href="https://buzz.blogger.com/2015/09/https-support-coming-to-blogspot.html">Blogger</a> as well.<br /><br />We're making positive strides, but we still have a ways to go.<br /><div><a href="https://3.bp.blogspot.com/-lpFk0biE57k/VudMtbw8ZSI/AAAAAAAAARM/z4_yvRAx9xwPmYTLFvigIrS06o_RRMSbg/s1600/google%2Boverall.jpg"><img border="0" height="196" src="https://3.bp.blogspot.com/-lpFk0biE57k/VudMtbw8ZSI/AAAAAAAAARM/z4_yvRAx9xwPmYTLFvigIrS06o_RRMSbg/s640/google%2Boverall.jpg" width="640"></a><i>This chart represents the percentage of requests to Google's servers that used encrypted connections. 
YouTube traffic is currently not included in this data.</i></div><div><br /></div><div>We plan on adding additional Google products over time to increase the scope of this report.</div><div><b><br /></b></div><div><b><br /></b></div><div><b>Popular third-party sites</b></div><div><br /></div><div>Our report also includes data about the <a href="https://www.google.com/transparencyreport/https/grid/">HTTPS connections on many popular sites across the web</a>, beyond Google. We've chosen these sites based on a combination of publicly available <a href="https://www.alexa.com/topsites">Alexa data</a> and our own Google internal data; we estimate they account for approximately 25% of all web traffic on the Internet.</div><div><b><br /></b></div><div><b><br /></b></div><div><b>Certificate Transparency</b></div><div><br /></div><div>Websites use certificates to assert to users that they are legitimate, so browsers need to be able to check whether the certificate that you&#8217;re being presented with is valid and appropriately issued. That is why this report also offers a <a href="https://www.google.com/transparencyreport/https/ct/">Certificate Transparency log viewer</a>, providing a web interface for users and site administrators to easily check and see who has issued a certificate for a website. For example, if you use this log viewer and search for google.com with &#8216;include expired&#8217; checked, you'll see the <a href="https://security.googleblog.com/2015/09/improved-digital-certificate-security.html">mis-issued google.com certificate from September 2015</a>.</div><div><br /></div><div><b><br /></b></div><div><b>Encryption for everyone</b></div><div><br /></div><div>Implementing HTTPS can be difficult&#8212;we know from experience! 
Some common obstacles include:&#160;</div><div></div><ul><li>Older hardware and/or software that doesn&#8217;t support modern encryption technologies.</li><li>Governments and organizations that may block or otherwise degrade HTTPS traffic.</li><li>Organizations that may not have the desire or technical resources to implement HTTPS.</li></ul>While there&#8217;s no one-size-fits-all solution to these challenges, <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/">we&#8217;ve put together a resource for webmasters to use</a> as they work through this process. We also support industry-wide efforts, like EFF's &#8216;<a href="https://www.eff.org/encrypt-the-web-report">Encrypt the Web</a>&#8217; report, that aim to bring more of the web to HTTPS.<br /><div><br /></div><div>Implementing encryption is not easy work. But, as more people spend more of their time on the web, it&#8217;s an increasingly essential element of online security. We hope this report will provide a snapshot of our own encryption efforts and will encourage everyone to make HTTPS the default on the web, even faster.</div><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Rutledge Chin Feman and Tim Willis, HTTPS Evangelists</span><br /><br />Encryption keeps people’s information safe as it moves between their devices and Google, protecting it from interception and unauthorized access by attackers. With a modern encrypted connection, you can be confident that your data will be private and secure.<br /><br />Today we are launching a new section of our <a href="https://www.google.com/transparencyreport/https">Transparency Report</a> to track the progress of encryption efforts—both at Google and on some of the web's most trafficked sites. Our aim with this project is to hold ourselves accountable and encourage others to encrypt so we can make the web even safer for everyone.<br /><br />Here's an overview of what is included in the new report:<br /><b>Google sites</b><br />Every week, we’ll update this report with progress we've made towards implementing HTTPS by default across Google’s services. We’ve long offered Gmail, Drive, and Search over HTTPS, and in the last year, we’ve begun to add traffic from more products, like <a href="https://googleonlinesecurity.blogspot.com/2015/04/ads-take-step-towards-https-everywhere.html">ads</a> and <a href="https://buzz.blogger.com/2015/09/https-support-coming-to-blogspot.html">Blogger</a> as well.<br /><br />We're making positive strides, but we still have a ways to go.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-lpFk0biE57k/VudMtbw8ZSI/AAAAAAAAARM/z4_yvRAx9xwPmYTLFvigIrS06o_RRMSbg/s1600/google%2Boverall.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="196" src="https://3.bp.blogspot.com/-lpFk0biE57k/VudMtbw8ZSI/AAAAAAAAARM/z4_yvRAx9xwPmYTLFvigIrS06o_RRMSbg/s640/google%2Boverall.jpg" width="640" /></a><i>This chart represents the percentage of requests to Google's servers that used encrypted connections. 
YouTube traffic is currently not included in this data.</i></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We plan on adding additional Google products over time to increase the scope of this report.</div><div class="separator" style="clear: both; text-align: left;"><b><br /></b></div><div class="separator" style="clear: both; text-align: left;"><b><br /></b></div><div class="separator" style="clear: both; text-align: left;"><b>Popular third-party sites</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Our report also includes data about the <a href="https://www.google.com/transparencyreport/https/grid/">HTTPS connections on many popular sites across the web</a>, beyond Google. We've chosen these sites based on a combination of publicly-available <a href="https://www.alexa.com/topsites">Alexa data</a> and our own Google internal data; we estimate they account for approximately 25% of all web traffic on the Internet.</div><div class="separator" style="clear: both; text-align: left;"><b><br /></b></div><div class="separator" style="clear: both; text-align: left;"><b><br /></b></div><div class="separator" style="clear: both; text-align: left;"><b>Certificate Transparency</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Websites use certificates to assert to users that they are legitimate, so browsers need to be able to check whether the certificate that you’re being presented is valid and appropriately issued. That is why this report also offers a <a href="https://www.google.com/transparencyreport/https/ct/">Certificate Transparency log viewer</a>, providing a web interface for users and site administrators to easily check and see who has issued a certificate for a website. 
For example, if you use this log viewer and search for google.com with ‘include expired’ checked, you'll see the <a href="https://security.googleblog.com/2015/09/improved-digital-certificate-security.html">mis-issued google.com certificate from September 2015</a>.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b><br /></b></div><div class="separator" style="clear: both; text-align: left;"><b>Encryption for everyone</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Implementing HTTPS can be difficult—we know from experience! Some common obstacles include:&nbsp;</div><div class="separator" style="clear: both; text-align: left;"></div><ul><li>Older hardware and/or software that doesn’t support modern encryption technologies.</li><li>Governments and organizations that may block or otherwise degrade HTTPS traffic.</li><li>Organizations that may not have the desire or technical resources to implement HTTPS.</li></ul>While there’s no one-size-fits-all solution to these challenges, <a href="https://developers.google.com/web/fundamentals/security/encrypt-in-transit/">we’ve put together a resource for webmasters to use</a> as they work through this process. We also support industry-wide efforts, like EFF's ‘<a href="https://www.eff.org/encrypt-the-web-report">Encrypt the Web</a>’ report, that aim to bring more of the web to HTTPS.<br /><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Implementing encryption is not easy work. But, as more people spend more of their time on the web, it’s an increasingly essential element of online security. 
We hope this report will provide a snapshot of our own encryption efforts and will encourage everyone to make HTTPS the default on the web, even faster.</div><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/securing-the-web-together/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Get Rich or Hack Tryin’</title>
		<link>https://googledata.org/google-online-security/get-rich-or-hack-tryin/</link>
		<comments>https://googledata.org/google-online-security/get-rich-or-hack-tryin/#comments</comments>
		<pubDate>Mon, 14 Mar 2016 16:31:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5d42df85f5c9a33f2e13d5dc95906bef</guid>
		<description><![CDATA[<span>Posted by Nathan Parker, Chrome Defender and Tim Willis, Hacker Philanthropist</span><br /><br />Since 2010, we've happily rewarded researchers who find and report security issues to us through Google&#8217;s Security Reward Program. Last year, <a href="https://security.googleblog.com/2016/01/google-security-rewards-2015-year-in.html">Google paid researchers more than $2,000,000 for their work</a> to make Google users safer.<br /><br />It's no secret that Chrome takes security seriously. Today, we&#8217;re introducing two new changes to expand the Chrome Reward Program even further:<br /><br /><ul><li><b>Increasing our top reward from $50,000 to $100,000.</b> Last year we introduced a $50,000 reward for the persistent compromise of a Chromebook in guest mode. Since we introduced the $50,000 reward, we haven&#8217;t had a successful submission. That said, great research deserves great awards, so we&#8217;re putting up a standing six-figure sum, available all year round with no quotas and no maximum reward pool.</li></ul><ul><li><b>Adding a Download Protection Bypass bounty.</b> We&#8217;re extending our reward program scope to include rewards for methods that bypass Chrome&#8217;s <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing</a> <a href="https://www.chromium.org/developers/design-documents/safebrowsing#TOC-Download-Protection">download protection features</a>. There&#8217;s much more detail on this new category on <a href="https://www.google.com/about/appsecurity/chrome-rewards/index.html#rewards">our rewards page</a> - be sure to take a look if you&#8217;re interested.</li></ul><div><br /></div><div>We look forward to seeing some amazing bugs and continuing to work with the security research community.</div><div><br /></div><div>Happy hacking!</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Nathan Parker, Chrome Defender and Tim Willis, Hacker Philanthropist</span><br /><br />Since 2010, we've happily rewarded researchers who find and report security issues to us through Google’s Security Reward Program. Last year, <a href="https://security.googleblog.com/2016/01/google-security-rewards-2015-year-in.html">Google paid researchers more than $2,000,000 for their work</a> to make Google users safer.<br /><br />It's no secret that Chrome takes security seriously. Today, we’re introducing two new changes to expand the Chrome Reward Program even further:<br /><br /><ul><li><b>Increasing our top reward from $50,000 to $100,000.</b> Last year we introduced a $50,000 reward for the persistent compromise of a Chromebook in guest mode. Since we introduced the $50,000 reward, we haven’t had a successful submission. That said, great research deserves great awards, so we’re putting up a standing six-figure sum, available all year round with no quotas and no maximum reward pool.</li></ul><ul><li><b>Adding a Download Protection Bypass bounty.</b> We’re extending our reward program scope to include rewards for methods that bypass Chrome’s <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing</a> <a href="https://www.chromium.org/developers/design-documents/safebrowsing#TOC-Download-Protection">download protection features</a>. There’s much more detail on this new category on <a href="https://www.google.com/about/appsecurity/chrome-rewards/index.html#rewards">our rewards page</a> - be sure to take a look if you’re interested.</li></ul><div><br /></div><div>We look forward to seeing some amazing bugs and continuing to work with the security research community.</div><div><br /></div><div>Happy hacking!</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/get-rich-or-hack-tryin/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Scalable vendor security reviews</title>
		<link>https://googledata.org/google-online-security/scalable-vendor-security-reviews/</link>
		<comments>https://googledata.org/google-online-security/scalable-vendor-security-reviews/#comments</comments>
		<pubDate>Mon, 07 Mar 2016 18:02:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5991957d725c56a4a0775c9dc1c3eb60</guid>
		<description><![CDATA[<span>Posted by Lukas Weichselbaum and Daniel Fabian, Google Security</span><br /><br /><i>[Cross-posted on the <a href="https://google-opensource.blogspot.com/2016/03/scalable-vendor-security-reviews.html">Google Open Source Blog</a>]</i><br /><br />At Google, we assess the security of hundreds of vendors every year. We scale our efforts by automating much of the initial information gathering and triage portions of the vendor review process. To do this, we've developed the Vendor Security Assessment Questionnaire (VSAQ), a collection of self-adapting questionnaires for evaluating multiple aspects of a vendor's security and privacy posture.<br /><br />We've received feedback from many vendors who completed the questionnaires. Most vendors found them intuitive and flexible &#8212; and, even better, they've been able to use the embedded tips and recommendations to improve their security posture. Some also expressed interest in using the questionnaires to assess their own suppliers.<br /><br />Based on this positive response, we've decided to open source the VSAQ Framework (Apache License Version 2) and the generally applicable parts of our questionnaires on GitHub: <a href="https://github.com/google/vsaq">https://github.com/google/vsaq</a>. We hope it will help companies spin up, or further improve, their own vendor security programs. 
We also hope the base questionnaires can serve as a self-assessment tool for security-conscious companies and developers looking to improve their security posture.<br /><br />The VSAQ Framework comes with four security questionnaire templates that can be used with the VSAQ rendering engine:<br /><br /><ul><li><a href="https://vsaq-demo.withgoogle.com/vsaq.html?qpath=questionnaires/webapp.json">Web Application Security Questionnaire</a></li><li><a href="https://vsaq-demo.withgoogle.com/vsaq.html?qpath=questionnaires/security_privacy_programs.json">Security &#38; Privacy Program Questionnaire</a></li><li><a href="https://vsaq-demo.withgoogle.com/vsaq.html?qpath=questionnaires/infrastructure.json">Infrastructure Security Questionnaire</a></li><li><a href="https://vsaq-demo.withgoogle.com/vsaq.html?qpath=questionnaires/physical_and_datacenter.json">Physical &#38; Data Center Security Questionnaire</a></li></ul><div><br /></div><div>All four base questionnaire templates can be readily extended with company-specific questions. Using the same questionnaire templates across companies may help to scale assessment efforts. Common templates can also minimize the burden on vendor companies, by facilitating the reuse of responses.</div><div><br /></div><div>The <a href="https://github.com/google/vsaq">VSAQ Framework</a> comes with a simple client-side-only reference implementation that's suitable for self-assessments, for vendor security programs with a moderate throughput, and for just trying out the framework. For a high-throughput vendor security program, we recommend using the VSAQ Framework with a custom server-side component that fits your needs (the interface is quite simple).</div><div><br /></div><div>Give VSAQ a try! 
A demo version of the VSAQ Framework is available here: <a href="https://vsaq-demo.withgoogle.com/">https://vsaq-demo.withgoogle.com</a></div><div><br /></div><div><a href="https://3.bp.blogspot.com/-SdCHKXwn87g/VtzaGu5WOfI/AAAAAAAAAQ0/J-TD4llV5vY/s1600/sec%2Band%2Bprivacy%2Bquestionnaire.jpg"><img border="0" height="492" src="https://3.bp.blogspot.com/-SdCHKXwn87g/VtzaGu5WOfI/AAAAAAAAAQ0/J-TD4llV5vY/s640/sec%2Band%2Bprivacy%2Bquestionnaire.jpg" width="640"></a><i>Excerpt from Security and Privacy Programs Questionnaire</i></div><div><br /></div><div>Let us know how VSAQ works for you: <a href="https://github.com/google/vsaq/blob/master/AUTHORS">contact us</a>. We look forward to getting your feedback and continuing to make vendor reviews scalable &#8212; and maybe even fun!</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Lukas Weichselbaum and Daniel Fabian, Google Security</span><br /><br /><i>[Cross-posted on the <a href="https://google-opensource.blogspot.com/2016/03/scalable-vendor-security-reviews.html">Google Open Source Blog</a>]</i><br /><br />At Google, we assess the security of hundreds of vendors every year. We scale our efforts by automating much of the initial information gathering and triage portions of the vendor review process. To do this, we've developed the Vendor Security Assessment Questionnaire (VSAQ), a collection of self-adapting questionnaires for evaluating multiple aspects of a vendor's security and privacy posture.<br /><br />We've received feedback from many vendors who completed the questionnaires. Most vendors found them intuitive and flexible — and, even better, they've been able to use the embedded tips and recommendations to improve their security posture. Some also expressed interest in using the questionnaires to assess their own suppliers.<br /><br />Based on this positive response, we've decided to open source the VSAQ Framework (Apache License Version 2) and the generally applicable parts of our questionnaires on GitHub: <a href="https://github.com/google/vsaq">https://github.com/google/vsaq</a>. We hope it will help companies spin up, or further improve, their own vendor security programs. 
We also hope the base questionnaires can serve as a self-assessment tool for security-conscious companies and developers looking to improve their security posture.<br /><br />The VSAQ Framework comes with four security questionnaire templates that can be used with the VSAQ rendering engine:<br /><br /><ul><li><a href="https://vsaq-demo.withgoogle.com/vsaq.html?qpath=questionnaires/webapp.json">Web Application Security Questionnaire</a></li><li><a href="https://vsaq-demo.withgoogle.com/vsaq.html?qpath=questionnaires/security_privacy_programs.json">Security &amp; Privacy Program Questionnaire</a></li><li><a href="https://vsaq-demo.withgoogle.com/vsaq.html?qpath=questionnaires/infrastructure.json">Infrastructure Security Questionnaire</a></li><li><a href="https://vsaq-demo.withgoogle.com/vsaq.html?qpath=questionnaires/physical_and_datacenter.json">Physical &amp; Data Center Security Questionnaire</a></li></ul><div><br /></div><div>All four base questionnaire templates can be readily extended with company-specific questions. Using the same questionnaire templates across companies may help to scale assessment efforts. Common templates can also minimize the burden on vendor companies, by facilitating the reuse of responses.</div><div><br /></div><div>The <a href="https://github.com/google/vsaq">VSAQ Framework</a> comes with a simple client-side-only reference implementation that's suitable for self-assessments, for vendor security programs with a moderate throughput, and for just trying out the framework. For a high-throughput vendor security program, we recommend using the VSAQ Framework with a custom server-side component that fits your needs (the interface is quite simple).</div><div><br /></div><div>Give VSAQ a try! 
A demo version of the VSAQ Framework is available here: <a href="https://vsaq-demo.withgoogle.com/">https://vsaq-demo.withgoogle.com</a></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-SdCHKXwn87g/VtzaGu5WOfI/AAAAAAAAAQ0/J-TD4llV5vY/s1600/sec%2Band%2Bprivacy%2Bquestionnaire.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="492" src="https://3.bp.blogspot.com/-SdCHKXwn87g/VtzaGu5WOfI/AAAAAAAAAQ0/J-TD4llV5vY/s640/sec%2Band%2Bprivacy%2Bquestionnaire.jpg" width="640" /></a><i>Excerpt from Security and Privacy Programs Questionnaire</i></div><div><br /></div><div>Let us know how VSAQ works for you: <a href="https://github.com/google/vsaq/blob/master/AUTHORS">contact us</a>. We look forward to getting your feedback and continuing to make vendor reviews scalable — and maybe even fun!</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/scalable-vendor-security-reviews/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow</title>
		<link>https://googledata.org/google-online-security/cve-2015-7547-glibc-getaddrinfo-stack-based-buffer-overflow/</link>
		<comments>https://googledata.org/google-online-security/cve-2015-7547-glibc-getaddrinfo-stack-based-buffer-overflow/#comments</comments>
		<pubDate>Tue, 16 Feb 2016 14:14:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=91c92198070a094cdc570409818d2ab7</guid>
		<description><![CDATA[<span>Posted by Fermin J. Serna, Staff Security Engineer and Kevin Stadmeyer, Technical Program Manager</span><br /><br />Have you ever been deep in the mines of debugging and suddenly realized that you were staring at something far more interesting than you were expecting? You are not alone! Recently a Google engineer noticed that their SSH client segfaulted every time they tried to connect to a specific host. That engineer filed a ticket to investigate the behavior, and after an intense investigation we discovered the issue lay in glibc and not in SSH as we were expecting.<br /><br />Thanks to this engineer&#8217;s keen observation, we were able to determine that the issue could result in remote code execution. We immediately began an in-depth analysis of the issue to determine whether it could be exploited, and to identify possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit!<br /><br />In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted to the issue via their bug tracker in July 2015 (<a href="https://sourceware.org/bugzilla/show_bug.cgi?id=18665">bug</a>). We couldn't immediately tell whether a bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O&#8217;Donell of <a href="https://www.redhat.com/en">Red Hat</a> had also been studying the bug&#8217;s impact, albeit completely independently! 
Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued &#8220;off-bug.&#8221;<br /><br />This was an amazing coincidence, and thanks to their hard work and cooperation, we were able to translate both teams&#8217; knowledge into a comprehensive patch and regression test to protect glibc users.<br /><br />That patch is available <a href="https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html">here</a>.<br /><br /><b>Issue Summary:</b><br /><br />Our initial investigations showed that the issue affected all versions of glibc since 2.9. Versions before 2.9 are not affected by this particular bug, but you should definitely update if you are on an older version anyway. If the vulnerability is detected, machine owners may wish to take steps to mitigate the risk of an attack.<br /><br />The glibc DNS client-side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack.<br /><br />Google has found some mitigations that may help prevent exploitation if you are not able to immediately patch your instance of glibc. The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response, which is followed by another response that will overwrite the stack. 
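To make the exposure concrete, here is a minimal, self-contained C sketch of the affected call path; the hostname and port are illustrative placeholders, not values from the advisory. Any program that resolves a name this way goes through the glibc stub resolver, so on an unpatched glibc a malicious or man-in-the-middle DNS server answering the lookup could trigger the overflow:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* request both IPv4 (A) and IPv6 (AAAA) results */
    hints.ai_socktype = SOCK_STREAM;

    /* This single call is the vulnerable entry point: on an unpatched
     * glibc, the DNS responses it processes can overflow a stack buffer. */
    int err = getaddrinfo("localhost", "80", &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    int n = 0;
    for (p = res; p != NULL; p = p->ai_next)
        n++;                          /* count the returned addresses */
    printf("resolved %d address(es)\n", n);

    freeaddrinfo(res);
    return 0;
}
```

Nothing about this code is unusual; that is the point: ordinary name resolution, as performed by ssh, sudo, and curl, is enough to reach the vulnerable resolver.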
Our suggested mitigation is to limit the response sizes accepted by the local DNS resolver (e.g., via DNSMasq or similar programs), and to ensure that DNS queries are sent only to DNS servers which limit the response size for UDP responses with the truncation bit set.<br /><br /><b>Technical information:</b><br /><br />glibc reserves 2048 bytes on the stack through alloca() in _nss_dns_gethostbyname4_r() for hosting the response to a DNS query.<br /><br />Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated.<br /><br />Under certain conditions, a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response, even though the response is larger than the stack buffer and a heap buffer was allocated. This behavior leads to the stack buffer overflow.<br /><br />The vectors to trigger this buffer overflow are very common and can include ssh, sudo, and curl. We are confident that the exploitation vectors are diverse and widespread; we have not attempted to enumerate these vectors further.<br /><br /><b>Exploitation:</b><br /><br />Remote code execution is possible, but not straightforward. It requires bypassing the security mitigations present on the system, such as ASLR. We will not release our exploit code, but a non-weaponized Proof of Concept has been made available simultaneously with this blog post. 
With this <a href="https://github.com/fjserna/CVE-2015-7547">Proof of Concept</a>, you can verify if you are affected by this issue, and verify any mitigations you may wish to enact.<br /><br />As you can see in the below debugging session, we are able to reliably control EIP/RIP.<br /><br />(gdb) x/i $rip<br />=&#62; 0x7fe156f0ccce &#60;_nss_dns_gethostbyname4_r+398&#62;: ret<br />(gdb) x/a $rsp<br />0x7fff56fd8a48: 0x4242424242424242 0x4242424242420042<br /><br />When code crashes unexpectedly, it can be a sign of something much more significant than it appears; ignore crashes at your peril!<br /><br />Failed exploit indicators, due to ASLR, can include:<br /><br /><ul><li>Crash on free(ptr) where ptr is controlled by the attacker.</li><li>Crash on free(ptr) where ptr is semi-controlled by the attacker, since ptr has to be a valid readable address.</li><li>Crash reading from memory pointed to by a local overwritten variable.</li><li>Crash writing to memory on an attacker-controlled pointer.</li></ul><br />We would like to thank Neel Mehta, Thomas Garnier, Gynvael Coldwind, Michael Schaller, Tom Payne, Michael Haro, Damian Menscher, Matt Brown, Yunhong Gu, Florian Weimer, Carlos O&#8217;Donell and the rest of the glibc team for their help figuring out all details about this bug, exploitation, and patch development.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Fermin J. Serna, Staff Security Engineer and Kevin Stadmeyer, Technical Program Manager</span><br /><br />Have you ever been deep in the mines of debugging and suddenly realized that you were staring at something far more interesting than you were expecting? You are not alone! Recently a Google engineer noticed that their SSH client segfaulted every time they tried to connect to a specific host. That engineer filed a ticket to investigate the behavior, and after an intense investigation we discovered the issue lay in glibc and not in SSH as we were expecting.<br /><br />Thanks to this engineer’s keen observation, we were able to determine that the issue could result in remote code execution. We immediately began an in-depth analysis of the issue to determine whether it could be exploited, and to identify possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit!<br /><br />In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted to the issue via their bug tracker in July 2015 (<a href="https://sourceware.org/bugzilla/show_bug.cgi?id=18665">bug</a>). We couldn't immediately tell whether a bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O’Donell of <a href="https://www.redhat.com/en">Red Hat</a> had also been studying the bug’s impact, albeit completely independently! 
Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued “off-bug.”<br /><br />This was an amazing coincidence, and thanks to their hard work and cooperation, we were able to translate both teams’ knowledge into a comprehensive patch and regression test to protect glibc users.<br /><br />That patch is available <a href="https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html">here</a>.<br /><br /><b>Issue Summary:</b><br /><br />Our initial investigations showed that the issue affects all versions of glibc since 2.9. Versions older than 2.9 are not vulnerable to this particular bug, but you should definitely update them anyway. If the vulnerability is detected, machine owners may wish to take steps to mitigate the risk of an attack.<br /><br />The glibc DNS client-side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack.<br /><br />Google has found some mitigations that may help prevent exploitation if you are not able to immediately patch your instance of glibc. The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response, which is followed by another response that will overwrite the stack. 
Our suggested mitigation is to limit the response sizes accepted by the local DNS resolver (e.g., via DNSMasq or similar programs), and to ensure that DNS queries are sent only to DNS servers that limit the response size for UDP responses with the truncation bit set.<br /><br /><b>Technical information:</b><br /><br />glibc reserves 2048 bytes in the stack through alloca() for the DNS answer at _nss_dns_gethostbyname4_r() for hosting responses to a DNS query.<br /><br />Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated.<br /><br />Under certain conditions, a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response, even though the response is larger than the stack buffer and a heap buffer was allocated. This behavior leads to the stack buffer overflow.<br /><br />The vectors to trigger this buffer overflow are very common and can include ssh, sudo, and curl. We are confident that the exploitation vectors are diverse and widespread; we have not attempted to enumerate these vectors further.<br /><br /><b>Exploitation:</b><br /><br />Remote code execution is possible, but not straightforward. It requires bypassing the security mitigations present on the system, such as ASLR. We will not release our exploit code, but a non-weaponized Proof of Concept has been made available simultaneously with this blog post. 
With this <a href="https://github.com/fjserna/CVE-2015-7547">Proof of Concept</a>, you can verify whether you are affected by this issue, and verify any mitigations you may wish to enact.<br /><br />As you can see in the debugging session below, we are able to reliably control EIP/RIP.<br /><br />(gdb) x/i $rip<br />=&gt; 0x7fe156f0ccce &lt;_nss_dns_gethostbyname4_r+398&gt;: retq<br />(gdb) x/a $rsp<br />0x7fff56fd8a48: 0x4242424242424242 0x4242424242420042<br /><br />When code crashes unexpectedly, it can be a sign of something much more significant than it appears; ignore crashes at your peril!<br /><br />Failed exploit indicators, due to ASLR, can include:<br /><br /><ul><li>Crash on free(ptr) where ptr is controlled by the attacker.&nbsp;</li><li>Crash on free(ptr) where ptr is semi-controlled by the attacker since ptr has to be a valid readable address.&nbsp;</li><li>Crash reading from memory pointed to by a local overwritten variable.&nbsp;</li><li>Crash writing to memory on an attacker-controlled pointer.</li></ul><br />We would like to thank Neel Mehta, Thomas Garnier, Gynvael Coldwind, Michael Schaller, Tom Payne, Michael Haro, Damian Menscher, Matt Brown, Yunhong Gu, Florian Weimer, Carlos O’Donell and the rest of the glibc team for their help figuring out all details about this bug, exploitation, and patch development.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/cve-2015-7547-glibc-getaddrinfo-stack-based-buffer-overflow/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Building a safer web, for everyone</title>
		<link>https://googledata.org/google-online-security/building-a-safer-web-for-everyone-2/</link>
		<comments>https://googledata.org/google-online-security/building-a-safer-web-for-everyone-2/#comments</comments>
		<pubDate>Tue, 09 Feb 2016 21:13:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=b18a3ff956fcfd2b15f880584bac1193</guid>
		<description><![CDATA[<span>Posted by Gerhard Eschelbeck, VP, Security and Privacy</span><br /><br /><i>[Cross-posted from the <a href="https://googleblog.blogspot.com/2016/02/building-safer-web-for-everyone.html">Official Google Blog</a>]</i><br /><br />Today is <a href="https://www.saferinternetday.org/web/usa/sid">Safer Internet Day</a>, a moment for technology companies, nonprofit organizations, security firms, and people around the world to focus on online safety, together. To mark the occasion, we&#8217;re rolling out new tools, and some useful reminders, to help protect you from online dangers of all stripes&#8212;phishing, malware, and other threats to your personal information.<br /><br /><b>1. Keeping security settings simple</b><br /><br />The <a href="https://security.google.com/settings/security/secureaccount?utm_source=ogb-blog&#38;utm_medium=blog-pr&#38;utm_campaign=safer-internet-day">Security Checkup</a> is a quick way to control the security settings for your Google Account. You can add a recovery phone number so we can help if you&#8217;re ever locked out of your account, strengthen your password settings, see which devices are connected to your account, and more. If you complete the Security Checkup by February 11, you&#8217;ll also get <a href="https://googledrive.blogspot.com/2016/02/safer-internet-day-2016.html">2GB of extra Google Drive storage</a>, which can be used across Google Drive, Gmail, and Photos.<br /><div><a href="https://3.bp.blogspot.com/-fKs2OklswPw/VrpS6QE1TII/AAAAAAAAAQA/6GwWZ4qB9jw/s1600/SID_Blog_001.gif"><img border="0" height="225" src="https://3.bp.blogspot.com/-fKs2OklswPw/VrpS6QE1TII/AAAAAAAAAQA/6GwWZ4qB9jw/s400/SID_Blog_001.gif" width="400"></a></div>Safer Internet Day is a great time to do it, but you can&#8212;and should!&#8212;take a Security Checkup on a regular basis. Start your Security Checkup by visiting <a href="https://myaccount.google.com/">My Account</a>.<br /><br /><b>2. 
Informing Gmail users about potentially unsafe messages</b><br /><br />If you and your Grandpa both use Gmail to exchange messages, your connections are <a href="https://gmailblog.blogspot.com/2014/03/staying-at-forefront-of-email-security.html">encrypted</a> and <a href="https://googleonlinesecurity.blogspot.com/2013/12/internet-wide-efforts-to-fight-email.html">authenticated</a>. That means no peering eyes can read those emails as they zoom across the web, and you can be confident that the message from your Grandpa in size 48 font (with no punctuation and a few misspellings) is really from him!<br /><br />However, as our <a href="https://www.google.com/transparencyreport/saferemail/">Safer Email Transparency Report</a> explains, these things are not always true when Gmail interacts with other mail services. Today, we&#8217;re introducing changes in Gmail on the web to let people know when a received message was not encrypted, if you&#8217;re composing a message to a recipient whose email service doesn&#8217;t support <a href="https://www.google.com/transparencyreport/saferemail/tls/?hl=en">TLS encryption</a>, or when the sender&#8217;s domain couldn&#8217;t be authenticated.<br /><br />Here&#8217;s the notice you&#8217;ll see in Gmail before you send a message to a service that doesn&#8217;t support TLS encryption. 
You&#8217;ll also see the broken lock icon if you receive a message that was sent without TLS encryption.<br /><div><a href="https://3.bp.blogspot.com/-T6X5Elx3i0U/VrpTKhBEuuI/AAAAAAAAAQE/62NwfaFUd_k/s1600/notice%2Bone.gif"><img border="0" height="362" src="https://3.bp.blogspot.com/-T6X5Elx3i0U/VrpTKhBEuuI/AAAAAAAAAQE/62NwfaFUd_k/s400/notice%2Bone.gif" width="400"></a></div>If you receive a message that can&#8217;t be authenticated, you&#8217;ll see a question mark where you might otherwise see a profile photo or logo:<br /><br /><div><a href="https://1.bp.blogspot.com/-J73uPFlG11g/VrpTPyGswMI/AAAAAAAAAQI/CKi5w-YrzCY/s1600/notice%2B2.png"><img border="0" height="200" src="https://1.bp.blogspot.com/-J73uPFlG11g/VrpTPyGswMI/AAAAAAAAAQI/CKi5w-YrzCY/s400/notice%2B2.png" width="400"></a></div><br /><b>3. Protecting you from bad apps</b><br /><br />Dangerous apps that phish and steal your personal information, or hold your phone hostage and make you pay to unlock it, have no place on your smartphone&#8212;or any device, for that matter.<br /><br />Google Play helps protect your Android device by rejecting bad apps that don&#8217;t comply with our <a href="https://www.android.com/us/developer-content-policy.html#showlanguages">Play policies</a>. It also conducts more than 200 million daily security scans of devices, in tandem with our <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing</a> system, for any signs of trouble. 
Last year, bad apps were installed on fewer than 0.13% of Android devices that install apps only from Google Play.<br /><br />Learn more about these and other Android security features&#8212;like app sandboxing, <a href="https://officialandroid.blogspot.com/2015/08/an-update-to-nexus-devices.html">monthly security updates</a> for Nexus and other devices, and our <a href="https://googleonlinesecurity.blogspot.com/2015/06/announcing-security-rewards-for-android.html">Security Rewards Program</a>&#8212;in new research we&#8217;ve made public on our <a href="https://officialandroid.blogspot.com/2016/02/safety-first-four-easy-android-security.html">Android blog</a>.<br /><br /><b>4. Busting bad advertising practices</b><br /><br />Malicious advertising &#8220;botnets&#8221; try to send phony visitors to websites to make money from online ads. Botnets threaten the businesses of honest advertisers and publishers, and because they&#8217;re often made up of devices infected with malware, they put users in harm&#8217;s way too.<br /><br />We've worked to keep botnets out of our ads systems, cutting them out of advertising revenue, and making it harder to make money from distributing malware and <a href="https://googleonlinesecurity.blogspot.com/2015/12/year-one-progress-in-fight-against.html">Unwanted Software</a>. Now, as part of our effort to <a href="https://googleblog.blogspot.com/2016/01/better-ads-report.html">fight bad ads online</a>, we&#8217;re reinforcing our existing botnet defenses by automatically filtering traffic from three of the top ad fraud botnets, comprising more than 500,000 infected user machines. Learn more about this update on the <a href="https://doubleclickadvertisers.blogspot.com/2016/02/booting-bots-new-botnet-protections.html">Doubleclick blog</a>.<br /><br /><b>5. 
Moving the security conversation forward</b><br /><br />Recent events&#8212;<a href="https://en.wikipedia.org/wiki/Edward_Snowden#Global_surveillance_disclosures">Edward Snowden&#8217;s disclosures</a>, the <a href="https://en.wikipedia.org/wiki/Sony_Pictures_Entertainment_hack">Sony Hack</a>, the <a href="https://www.washingtonpost.com/news/the-switch/wp/2015/12/08/you-already-use-encryption-heres-what-you-need-to-know-about-it/">current conversation around encryption</a>, and more&#8212;have made online safety a truly mainstream issue. This is reflected both in news headlines, and popular culture: &#8220;<a href="https://en.wikipedia.org/wiki/Mr._Robot_(TV_series)">Mr. Robot</a>,&#8221; a TV series about hacking and cybersecurity, just won a Golden Globe for Best Drama, and <a href="https://twitter.com/SwiftOnSecurity">@SwiftOnSecurity</a>, a popular security commentator, is named after Taylor Swift.<br /><br />But despite this shift, security remains a complex topic that lends itself to lively debates between experts...that are often unintelligible to just about everyone else. We need to simplify the way we talk about online security to enable everyone to understand its importance and participate in this conversation.<br /><br />To that end, we&#8217;re teaming up with <a href="https://medium.com/">Medium</a> to host a virtual roundtable about online security, present and future. Moderated by journalist and security researcher <a href="https://en.wikipedia.org/wiki/Kevin_Poulsen">Kevin Poulsen</a>, this project aims to present fresh perspectives about online security in a time when our attention is increasingly ruled by the devices we carry with us constantly. We hope you&#8217;ll <a href="https://me.dm/roundtable">tune in</a> and check it out.<br /><br />Online security and safety are being discussed more often, and with more urgency, than ever before. 
We hope you&#8217;ll take a few minutes today to learn how Google protects your data and how we can work toward a safer web, for everyone.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Gerhard Eschelbeck, VP, Security and Privacy</span><br /><br /><i>[Cross-posted from the <a href="https://googleblog.blogspot.com/2016/02/building-safer-web-for-everyone.html">Official Google Blog</a>]</i><br /><br />Today is <a href="https://www.saferinternetday.org/web/usa/sid">Safer Internet Day</a>, a moment for technology companies, nonprofit organizations, security firms, and people around the world to focus on online safety, together. To mark the occasion, we’re rolling out new tools, and some useful reminders, to help protect you from online dangers of all stripes—phishing, malware, and other threats to your personal information.<br /><br /><b>1. Keeping security settings simple</b><br /><br />The <a href="https://security.google.com/settings/security/secureaccount?utm_source=ogb-blog&amp;utm_medium=blog-pr&amp;utm_campaign=safer-internet-day">Security Checkup</a> is a quick way to control the security settings for your Google Account. You can add a recovery phone number so we can help if you’re ever locked out of your account, strengthen your password settings, see which devices are connected to your account, and more. If you complete the Security Checkup by February 11, you’ll also get <a href="https://googledrive.blogspot.com/2016/02/safer-internet-day-2016.html">2GB of extra Google Drive storage</a>, which can be used across Google Drive, Gmail, and Photos.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-fKs2OklswPw/VrpS6QE1TII/AAAAAAAAAQA/6GwWZ4qB9jw/s1600/SID_Blog_001.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="https://3.bp.blogspot.com/-fKs2OklswPw/VrpS6QE1TII/AAAAAAAAAQA/6GwWZ4qB9jw/s400/SID_Blog_001.gif" width="400" /></a></div>Safer Internet Day is a great time to do it, but you can—and should!—take a Security Checkup on a regular basis. 
Start your Security Checkup by visiting <a href="https://myaccount.google.com/">My Account</a>.<br /><br /><b>2. Informing Gmail users about potentially unsafe messages</b><br /><br />If you and your Grandpa both use Gmail to exchange messages, your connections are <a href="https://gmailblog.blogspot.com/2014/03/staying-at-forefront-of-email-security.html">encrypted</a> and <a href="https://googleonlinesecurity.blogspot.com/2013/12/internet-wide-efforts-to-fight-email.html">authenticated</a>. That means no peering eyes can read those emails as they zoom across the web, and you can be confident that the message from your Grandpa in size 48 font (with no punctuation and a few misspellings) is really from him!<br /><br />However, as our <a href="https://www.google.com/transparencyreport/saferemail/">Safer Email Transparency Report</a> explains, these things are not always true when Gmail interacts with other mail services. Today, we’re introducing changes in Gmail on the web to let people know when a received message was not encrypted, if you’re composing a message to a recipient whose email service doesn’t support <a href="https://www.google.com/transparencyreport/saferemail/tls/?hl=en">TLS encryption</a>, or when the sender’s domain couldn’t be authenticated.<br /><br />Here’s the notice you’ll see in Gmail before you send a message to a service that doesn’t support TLS encryption. 
You’ll also see the broken lock icon if you receive a message that was sent without TLS encryption.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-T6X5Elx3i0U/VrpTKhBEuuI/AAAAAAAAAQE/62NwfaFUd_k/s1600/notice%2Bone.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="362" src="https://3.bp.blogspot.com/-T6X5Elx3i0U/VrpTKhBEuuI/AAAAAAAAAQE/62NwfaFUd_k/s400/notice%2Bone.gif" width="400" /></a></div>If you receive a message that can’t be authenticated, you’ll see a question mark where you might otherwise see a profile photo or logo:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-J73uPFlG11g/VrpTPyGswMI/AAAAAAAAAQI/CKi5w-YrzCY/s1600/notice%2B2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://1.bp.blogspot.com/-J73uPFlG11g/VrpTPyGswMI/AAAAAAAAAQI/CKi5w-YrzCY/s400/notice%2B2.png" width="400" /></a></div><br /><b>3. Protecting you from bad apps</b><br /><br />Dangerous apps that phish and steal your personal information, or hold your phone hostage and make you pay to unlock it, have no place on your smartphone—or any device, for that matter.<br /><br />Google Play helps protect your Android device by rejecting bad apps that don’t comply with our <a href="https://www.android.com/us/developer-content-policy.html#showlanguages">Play policies</a>. It also conducts more than 200 million daily security scans of devices, in tandem with our <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing</a> system, for any signs of trouble. 
Last year, bad apps were installed on fewer than 0.13% of Android devices that install apps only from Google Play.<br /><br />Learn more about these and other Android security features—like app sandboxing, <a href="https://officialandroid.blogspot.com/2015/08/an-update-to-nexus-devices.html">monthly security updates</a> for Nexus and other devices, and our <a href="https://googleonlinesecurity.blogspot.com/2015/06/announcing-security-rewards-for-android.html">Security Rewards Program</a>—in new research we’ve made public on our <a href="https://officialandroid.blogspot.com/2016/02/safety-first-four-easy-android-security.html">Android blog</a>.<br /><br /><b>4. Busting bad advertising practices</b><br /><br />Malicious advertising “botnets” try to send phony visitors to websites to make money from online ads. Botnets threaten the businesses of honest advertisers and publishers, and because they’re often made up of devices infected with malware, they put users in harm’s way too.<br /><br />We've worked to keep botnets out of our ads systems, cutting them out of advertising revenue, and making it harder to make money from distributing malware and <a href="https://googleonlinesecurity.blogspot.com/2015/12/year-one-progress-in-fight-against.html">Unwanted Software</a>. Now, as part of our effort to <a href="https://googleblog.blogspot.com/2016/01/better-ads-report.html">fight bad ads online</a>, we’re reinforcing our existing botnet defenses by automatically filtering traffic from three of the top ad fraud botnets, comprising more than 500,000 infected user machines. Learn more about this update on the <a href="https://doubleclickadvertisers.blogspot.com/2016/02/booting-bots-new-botnet-protections.html">Doubleclick blog</a>.<br /><br /><b>5. 
Moving the security conversation forward</b><br /><br />Recent events—<a href="https://en.wikipedia.org/wiki/Edward_Snowden#Global_surveillance_disclosures">Edward Snowden’s disclosures</a>, the <a href="https://en.wikipedia.org/wiki/Sony_Pictures_Entertainment_hack">Sony Hack</a>, the <a href="https://www.washingtonpost.com/news/the-switch/wp/2015/12/08/you-already-use-encryption-heres-what-you-need-to-know-about-it/">current conversation around encryption</a>, and more—have made online safety a truly mainstream issue. This is reflected both in news headlines, and popular culture: “<a href="https://en.wikipedia.org/wiki/Mr._Robot_(TV_series)">Mr. Robot</a>,” a TV series about hacking and cybersecurity, just won a Golden Globe for Best Drama, and <a href="https://twitter.com/SwiftOnSecurity">@SwiftOnSecurity</a>, a popular security commentator, is named after Taylor Swift.<br /><br />But despite this shift, security remains a complex topic that lends itself to lively debates between experts...that are often unintelligible to just about everyone else. We need to simplify the way we talk about online security to enable everyone to understand its importance and participate in this conversation.<br /><br />To that end, we’re teaming up with <a href="https://medium.com/">Medium</a> to host a virtual roundtable about online security, present and future. Moderated by journalist and security researcher <a href="https://en.wikipedia.org/wiki/Kevin_Poulsen">Kevin Poulsen</a>, this project aims to present fresh perspectives about online security in a time when our attention is increasingly ruled by the devices we carry with us constantly. We hope you’ll <a href="https://me.dm/roundtable">tune in</a> and check it out.<br /><br />Online security and safety are being discussed more often, and with more urgency, than ever before. We hope you’ll take a few minutes today to learn how Google protects your data and how we can work toward a safer web, for everyone.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/building-a-safer-web-for-everyone-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>No More Deceptive Download Buttons</title>
		<link>https://googledata.org/google-online-security/no-more-deceptive-download-buttons/</link>
		<comments>https://googledata.org/google-online-security/no-more-deceptive-download-buttons/#comments</comments>
		<pubDate>Wed, 03 Feb 2016 20:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=40262de795bc8ea3dec23206ce84a6ff</guid>
		<description><![CDATA[<span>Posted by Lucas Ballard, Safe Browsing Team</span><br /><br />In <a href="https://googleonlinesecurity.blogspot.com/2015/11/safe-browsing-protection-from-even-more.html">November</a>, we announced that Safe Browsing would protect you from social engineering attacks - deceptive tactics that try to trick you into doing something dangerous, like installing <a href="https://www.google.com/about/company/unwanted-software-policy.html">unwanted software</a> or <a href="https://googleonlinesecurity.blogspot.com/2010/03/phishing-phree.html">revealing your personal information</a> (for example, passwords, phone numbers, or credit cards). You may have encountered social engineering in a deceptive download button, or an image ad that falsely claims your system is out of date. Today, we&#8217;re expanding Safe Browsing protection to cover such deceptive embedded content, like social engineering ads.<br /><div><a href="http://1.bp.blogspot.com/-Zuc5HLTPkVc/VrJHDI7OwmI/AAAAAAAAAPs/slv4rkY2g0I/s1600/SBWarnBlur.jpg"><img border="0" height="265" src="http://1.bp.blogspot.com/-Zuc5HLTPkVc/VrJHDI7OwmI/AAAAAAAAAPs/slv4rkY2g0I/s400/SBWarnBlur.jpg" width="400"></a></div>Consistent with the social engineering policy we announced in November, embedded content (like ads) on a web page will be considered social engineering when it either:<br /><br /><ul><li>Pretends to act, or look and feel, like a trusted entity &#8212; like your own device or browser, or the website itself.&#160;</li><li>Tries to trick you into doing something you&#8217;d only do for a trusted entity &#8212; like sharing a password or calling tech support.</li></ul><div><br /></div><div>Below are some examples of deceptive content, shown via ads:</div><div><a href="http://2.bp.blogspot.com/-aRKhBscoV2s/VrI_reE05CI/AAAAAAAAAPY/i7rAsE-vMG8/s1600/pasted%2Bimage%2B0%2B%25284%2529.png"><img border="0" height="226" 
src="http://2.bp.blogspot.com/-aRKhBscoV2s/VrI_reE05CI/AAAAAAAAAPY/i7rAsE-vMG8/s320/pasted%2Bimage%2B0%2B%25284%2529.png" width="320"></a></div><div>This image claims that your software is out of date to trick you into clicking &#8220;update&#8221;.&#160;</div><br /><div><a href="http://3.bp.blogspot.com/-sZzCG6GTVRc/VrI_rdhOgQI/AAAAAAAAAPU/EOM5eZh-JoE/s1600/pasted%2Bimage%2B0%2B%25285%2529.png"><img border="0" height="201" src="http://3.bp.blogspot.com/-sZzCG6GTVRc/VrI_rdhOgQI/AAAAAAAAAPU/EOM5eZh-JoE/s320/pasted%2Bimage%2B0%2B%25285%2529.png" width="320"></a></div><div>This image mimics a dialogue from the FLV software developer -- but it does not actually originate from this developer.</div><div><br /></div><div><a href="http://3.bp.blogspot.com/-vSEPKTittRY/VrI_nm6oWeI/AAAAAAAAAPQ/4rblSPCnLGk/s1600/pasted%2Bimage%2B0.png"><img border="0" height="48" src="http://3.bp.blogspot.com/-vSEPKTittRY/VrI_nm6oWeI/AAAAAAAAAPQ/4rblSPCnLGk/s320/pasted%2Bimage%2B0.png" width="320"></a></div><div>These buttons seem like they will produce content that relates to the site (like a TV show or sports video stream) by mimicking the site&#8217;s look and feel. They are often indistinguishable from the rest of the page.</div><div><br /></div><div>Our fight against unwanted software and social engineering is still just beginning. We'll continue to improve Google's <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing</a> protection to help more people stay safe online.</div><div><br /></div><div><b>Will my site be affected?</b></div><div><br /></div><div>If visitors to your web site consistently see social engineering content, Google Safe Browsing may warn users when they visit the site. If your site is flagged for containing social engineering content, you should troubleshoot with Search Console. Check out our <a href="https://support.google.com/webmasters/answer/6350487">social engineering help for webmasters</a>.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Lucas Ballard, Safe Browsing Team</span><br /><br />In <a href="https://googleonlinesecurity.blogspot.com/2015/11/safe-browsing-protection-from-even-more.html">November</a>, we announced that Safe Browsing would protect you from social engineering attacks - deceptive tactics that try to trick you into doing something dangerous, like installing <a href="https://www.google.com/about/company/unwanted-software-policy.html">unwanted software</a> or <a href="https://googleonlinesecurity.blogspot.com/2010/03/phishing-phree.html">revealing your personal information</a> (for example, passwords, phone numbers, or credit cards). You may have encountered social engineering in a deceptive download button, or an image ad that falsely claims your system is out of date. Today, we’re expanding Safe Browsing protection to cover such deceptive embedded content, like social engineering ads.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Zuc5HLTPkVc/VrJHDI7OwmI/AAAAAAAAAPs/slv4rkY2g0I/s1600/SBWarnBlur.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="265" src="http://1.bp.blogspot.com/-Zuc5HLTPkVc/VrJHDI7OwmI/AAAAAAAAAPs/slv4rkY2g0I/s400/SBWarnBlur.jpg" width="400" /></a></div>Consistent with the social engineering policy we announced in November, embedded content (like ads) on a web page will be considered social engineering when it either:<br /><br /><ul><li>Pretends to act, or look and feel, like a trusted entity — like your own device or browser, or the website itself.&nbsp;</li><li>Tries to trick you into doing something you’d only do for a trusted entity — like sharing a password or calling tech support.</li></ul><div><br /></div><div>Below are some examples of deceptive content, shown via ads:</div><div class="separator" style="clear: both; text-align: center;"><a 
href="http://2.bp.blogspot.com/-aRKhBscoV2s/VrI_reE05CI/AAAAAAAAAPY/i7rAsE-vMG8/s1600/pasted%2Bimage%2B0%2B%25284%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="226" src="http://2.bp.blogspot.com/-aRKhBscoV2s/VrI_reE05CI/AAAAAAAAAPY/i7rAsE-vMG8/s320/pasted%2Bimage%2B0%2B%25284%2529.png" width="320" /></a></div><div style="text-align: center;">This image claims that your software is out of date to trick you into clicking “update”.&nbsp;</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-sZzCG6GTVRc/VrI_rdhOgQI/AAAAAAAAAPU/EOM5eZh-JoE/s1600/pasted%2Bimage%2B0%2B%25285%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="201" src="http://3.bp.blogspot.com/-sZzCG6GTVRc/VrI_rdhOgQI/AAAAAAAAAPU/EOM5eZh-JoE/s320/pasted%2Bimage%2B0%2B%25285%2529.png" width="320" /></a></div><div style="text-align: center;">This image mimics a dialogue from the FLV software developer -- but it does not actually originate from this developer.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-vSEPKTittRY/VrI_nm6oWeI/AAAAAAAAAPQ/4rblSPCnLGk/s1600/pasted%2Bimage%2B0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="48" src="http://3.bp.blogspot.com/-vSEPKTittRY/VrI_nm6oWeI/AAAAAAAAAPQ/4rblSPCnLGk/s320/pasted%2Bimage%2B0.png" width="320" /></a></div><div style="text-align: center;">These buttons seem like they will produce content that relates to the site (like a TV show or sports video stream) by mimicking the site’s look and feel. They are often indistinguishable from the rest of the page.</div><div><br /></div><div>Our fight against unwanted software and social engineering is still just beginning. 
We'll continue to improve Google's <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing</a> protection to help more people stay safe online.</div><div><br /></div><div><b>Will my site be affected?</b></div><div><br /></div><div>If visitors to your web site consistently see social engineering content, Google Safe Browsing may warn users when they visit the site. If your site is flagged for containing social engineering content, you should troubleshoot with Search Console. Check out our <a href="https://support.google.com/webmasters/answer/6350487">social engineering help for webmasters</a>.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/no-more-deceptive-download-buttons/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Google Security Rewards &#8211; 2015 Year in Review</title>
		<link>https://googledata.org/google-online-security/google-security-rewards-2015-year-in-review/</link>
		<comments>https://googledata.org/google-online-security/google-security-rewards-2015-year-in-review/#comments</comments>
		<pubDate>Thu, 28 Jan 2016 18:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=ec8cc879a3080720a96f336aa4868469</guid>
		<description><![CDATA[<span>Posted by Eduardo Vela Nava, Google Security</span><br /><br />We launched our <a href="https://www.google.com/about/appsecurity/reward-program/">Vulnerability Reward Program</a> in 2010 because rewarding security researchers for their hard work benefits everyone. These financial rewards help make our services, and the web as a whole, safer and more secure.<br /><br />With an open approach, we&#8217;re able to consider a broad diversity of expertise for individual issues. We can also offer incentives for external researchers to work on challenging, time-consuming projects that otherwise may not receive proper attention.<br /><br />Last January, we summarized these efforts in our first-ever <a href="https://googleonlinesecurity.blogspot.com/2015/01/security-reward-programs-year-in-review.html">Security Reward Program &#8216;Year in Review&#8217;</a>. Now, at the beginning of another new year, we wanted to look back at 2015 and again show our appreciation for researchers&#8217; important contributions.<br /><br /><b>2015 at a Glance</b><br /><br />Once again, researchers from around the world&#8212;Great Britain, Poland, Germany, Romania, Israel, Brazil, the United States, China, Russia, and India, to name a few&#8212;participated in our program.<br /><br />Here's an overview of the rewards they received and broader milestones for the program as a whole.<br /><div><a href="http://3.bp.blogspot.com/-jx8qGdOpcjM/Vqo5uMFNBEI/AAAAAAAAAO8/T4dsD7hZvkc/s1600/Security%2BRewards%2B2015.jpg"><img border="0" height="400" src="https://3.bp.blogspot.com/-jx8qGdOpcjM/Vqo5uMFNBEI/AAAAAAAAAO8/T4dsD7hZvkc/s400/Security%2BRewards%2B2015.jpg" width="400" /></a></div><div><b>Android Joins Security Rewards</b></div><div><br /></div><div>Android was a newcomer to the Security Reward program in 2015, and it made a significant and immediate impact as soon as it joined.</div><div><br /></div><div>We <a 
href="https://googleonlinesecurity.blogspot.com/2015/06/announcing-security-rewards-for-android.html">launched</a> our Android VRP in June, and by the end of 2015, we had paid more than $200,000 to researchers for their work, including our largest single payment of $37,500 to an Android security researcher.</div><div><br /></div><div><b>New Vulnerability Research Grants Pay Off</b></div><div><br /></div><div>Last year, we began to provide researchers with <a href="https://www.google.com/about/appsecurity/research-grants/">Vulnerability Research Grants</a>, lump sums of money that researchers receive before starting their investigations. The purpose of these grants is to ensure that researchers are rewarded for their hard work, even if they don&#8217;t find a vulnerability.</div><div><br /></div><div>We&#8217;ve already seen positive results from this program; here&#8217;s one example. Kamil Histamullin, a researcher from Kazan, Russia, received a VRP grant early last year. Shortly thereafter, he found an issue in YouTube Creator Studio which would have enabled anyone to delete any video from YouTube by simply changing a parameter in the URL. After the issue was reported, our teams quickly fixed it, and the researcher was rewarded $5,000 in addition to his initial research grant. Kamil detailed his findings on his <a href="http://kamil.hism.ru/posts/about-vrg-and-delete-any-youtube-video-issue.html">personal blog</a> in March.</div><div><br /></div><div><b>Established Programs Continue to Grow</b></div><div><br /></div><div>We continued to see important security research in our established programs in 2015. Here are just a few examples:</div><div></div><ul><li>Tomasz Bojarski found 70 bugs on Google in 2015, and was our most prolific researcher of the year. He found a bug in our vulnerability submission form.</li><li>You may have read about Sanmay Ved, a researcher who was able to buy google.com for one minute on Google Domains. 
Our initial financial reward to Sanmay&#8212;$6,006.13&#8212;spelled out Google, numerically (squint a little and you&#8217;ll see it!). We then doubled this amount when <a href="https://www.businessinsider.com/google-rewarded-the-guy-who-bought-googlecom-2015-10">Sanmay donated his reward to charity</a>.</li></ul><div>We also injected some new energy into these existing research programs and grants. In December, we <a href="https://googledrive.blogspot.com/2015/12/keeping-things-safe-should-be-rewarded.html">announced</a> that we'd be dedicating one million dollars specifically for security research related to Google Drive.</div><div><br /></div><div>We&#8217;re looking forward to continuing the Security Reward Program&#8217;s growth in 2016. Stay tuned for more exciting reward program changes throughout the year.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Eduardo Vela Nava, Google Security</span><br /><br />We launched our <a href="https://www.google.com/about/appsecurity/reward-program/">Vulnerability Reward Program</a> in 2010 because rewarding security researchers for their hard work benefits everyone. These financial rewards help make our services, and the web as a whole, safer and more secure.<br /><br />With an open approach, we’re able to consider a broad diversity of expertise for individual issues. We can also offer incentives for external researchers to work on challenging, time-consuming projects that otherwise may not receive proper attention.<br /><br />Last January, we summarized these efforts in our first-ever <a href="https://googleonlinesecurity.blogspot.com/2015/01/security-reward-programs-year-in-review.html">Security Reward Program ‘Year in Review’</a>. Now, at the beginning of another new year, we wanted to look back at 2015 and again show our appreciation for researchers’ important contributions.<br /><br /><b>2015 at a Glance</b><br /><br />Once again, researchers from around the world—Great Britain, Poland, Germany, Romania, Israel, Brazil, the United States, China, Russia, and India, to name a few—participated in our program.<br /><br />Here's an overview of the rewards they received and broader milestones for the program as a whole.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-jx8qGdOpcjM/Vqo5uMFNBEI/AAAAAAAAAO8/T4dsD7hZvkc/s1600/Security%2BRewards%2B2015.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://3.bp.blogspot.com/-jx8qGdOpcjM/Vqo5uMFNBEI/AAAAAAAAAO8/T4dsD7hZvkc/s400/Security%2BRewards%2B2015.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;"><b>Android Joins Security Rewards</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div 
class="separator" style="clear: both; text-align: left;">Android was a newcomer to the Security Reward program in 2015, and it made a significant and immediate impact as soon as it joined.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We <a href="https://googleonlinesecurity.blogspot.com/2015/06/announcing-security-rewards-for-android.html">launched</a> our Android VRP in June, and by the end of 2015, we had paid more than $200,000 to researchers for their work, including our largest single payment of $37,500 to an Android security researcher.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b>New Vulnerability Research Grants Pay Off</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Last year, we began to provide researchers with <a href="https://www.google.com/about/appsecurity/research-grants/">Vulnerability Research Grants</a>, lump sums of money that researchers receive before starting their investigations. The purpose of these grants is to ensure that researchers are rewarded for their hard work, even if they don’t find a vulnerability.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We’ve already seen positive results from this program; here’s one example. Kamil Histamullin, a researcher from Kazan, Russia, received a VRP grant early last year. Shortly thereafter, he found an issue in YouTube Creator Studio which would have enabled anyone to delete any video from YouTube by simply changing a parameter in the URL. After the issue was reported, our teams quickly fixed it, and the researcher was rewarded $5,000 in addition to his initial research grant. 
Kamil detailed his findings on his <a href="http://kamil.hism.ru/posts/about-vrg-and-delete-any-youtube-video-issue.html">personal blog</a> in March.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b>Established Programs Continue to Grow</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We continued to see important security research in our established programs in 2015. Here are just a few examples:</div><div class="separator" style="clear: both; text-align: left;"></div><ul><li>Tomasz Bojarski found 70 bugs on Google in 2015, and was our most prolific researcher of the year. He found a bug in our vulnerability submission form.</li><li>You may have read about Sanmay Ved, a researcher who was able to buy google.com for one minute on Google Domains. Our initial financial reward to Sanmay—$6,006.13—spelled out Google, numerically (squint a little and you’ll see it!). We then doubled this amount when <a href="https://www.businessinsider.com/google-rewarded-the-guy-who-bought-googlecom-2015-10">Sanmay donated his reward to charity</a>.</li></ul><div>We also injected some new energy into these existing research programs and grants. In December, we <a href="https://googledrive.blogspot.com/2015/12/keeping-things-safe-should-be-rewarded.html">announced</a> that we'd be dedicating one million dollars specifically for security research related to Google Drive.</div><div><br /></div><div>We’re looking forward to continuing the Security Reward Program’s growth in 2016. Stay tuned for more exciting reward program changes throughout the year.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/google-security-rewards-2015-year-in-review/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why attend USENIX Enigma?</title>
		<link>https://googledata.org/google-online-security/why-attend-usenix-enigma-2/</link>
		<comments>https://googledata.org/google-online-security/why-attend-usenix-enigma-2/#comments</comments>
		<pubDate>Mon, 11 Jan 2016 18:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=39851cc8583a69ac97b2a12129939d79</guid>
		<description><![CDATA[<span>Posted by Parisa Tabriz, Security Princess &#38; Enigma Program Co-Chair</span><br /><br /><i>[Cross-posted from the <a href="http://googleresearch.blogspot.com/2016/01/why-attend-usenix-enigma.html">Google Research Blog</a>]</i><br /><br />Last August, we <a href="http://googleresearch.blogspot.com/2015/08/say-hello-to-enigma-conference.html">announced USENIX Enigma</a>, a new conference intended to shine a light on great, thought-provoking research in security, privacy, and electronic crime. With Enigma beginning in just a few short weeks, I wanted to share a couple of the reasons I&#8217;m personally excited about this new conference.<br /><br />Enigma aims to bridge the divide that exists between experts working in academia, industry, and public service, explicitly bringing researchers from different sectors together to share their work. Our speakers include those spearheading the defense of digital rights (<a href="https://www.eff.org/">Electronic Frontier Foundation</a>, <a href="https://www.accessnow.org/">Access Now</a>), practitioners at a number of well known industry leaders (<a href="https://www.akamai.com/">Akamai</a>, <a href="http://blackberry.com/">Blackberry</a>, <a href="https://www.facebook.com/">Facebook</a>, <a href="https://www.linkedin.com/">LinkedIn</a>, <a href="https://www.netflix.com/">Netflix</a>, <a href="https://twitter.com/">Twitter</a>), and researchers from multiple universities in the U.S. and abroad. With the diverse <a href="https://www.usenix.org/conference/enigma2016/conference-program">session topics and organizations represented</a>, I expect interesting&#8212;and perhaps spirited&#8212;coffee break and lunchtime discussions among the equally diverse list of conference attendees.<br /><br />Of course, I&#8217;m very proud to have some of my Google colleagues speaking at Enigma:<br /><ul><li>Adrienne Porter Felt will talk about blending research and engineering to solve usable security problems. 
You&#8217;ll hear how Chrome&#8217;s usable security team runs user studies and experiments to motivate engineering and design decisions. Adrienne will share the challenges they&#8217;ve faced when trying to adapt existing usable security research to practice, and give insight into how they&#8217;ve achieved successes.</li></ul><ul><li>Ben Hawkes will be speaking about <a href="https://googleonlinesecurity.blogspot.de/2014/07/announcing-project-zero.html">Project Zero</a>, a security research team dedicated to the mission of &#8220;making <a href="https://en.wikipedia.org/wiki/Zero-day_(computing)">0day</a> hard.&#8221; Ben will talk about why Project Zero exists, and some of the recent trends and technologies that make vulnerability discovery and exploitation fundamentally harder.</li></ul><ul><li>Kostya Serebryany will be presenting a 3-pronged approach to securing C++ code based on his many years of experience wrangling complex, buggy software. Kostya will survey multiple dynamic sanitizing tools he and his team have made publicly available, review control-flow and data-flow guided fuzzing, and explain a method to harden your code in the presence of any bugs that remain.</li></ul><ul><li>Elie Bursztein will go through key lessons the Gmail team learned over the past 11 years while protecting users from spam, phishing, malware, and web attacks. Illustrated with concrete numbers and examples from one of the largest email systems on the planet, attendees will gain insight into specific techniques and approaches useful in fighting abuse and securing their online services.</li></ul><div><div></div>In addition to raw content, my Program Co-Chair, <a href="http://users.ece.cmu.edu/~dbrumley/index.html">David Brumley</a>, and I have prioritized talk quality. 
Researchers dedicate months or years of their time to thinking about a problem and conducting the technical work of research, but a common criticism of technical conferences is that the actual presentation of that research seems like an afterthought. Rather than be a regurgitation of a research paper in slide format, a presentation is an opportunity for a researcher to explain the context and impact of their work in their own voice; a chance to inspire the audience to want to learn more or dig deeper. Taking inspiration from the <a href="https://www.ted.com/about/our-organization">TED conference</a>, Enigma will have shorter presentations, and the program committee has worked with each speaker to help them craft the best version of their talk.&#160;</div><div><br /></div><div>Hope to see some of you at <a href="https://www.usenix.org/conference/enigma2016">USENIX Enigma</a> later this month!<br /><br /></div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Parisa Tabriz, Security Princess &amp; Enigma Program Co-Chair</span><br /><br /><i>[Cross-posted from the <a href="http://googleresearch.blogspot.com/2016/01/why-attend-usenix-enigma.html">Google Research Blog</a>]</i><br /><br />Last August, we <a href="http://googleresearch.blogspot.com/2015/08/say-hello-to-enigma-conference.html">announced USENIX Enigma</a>, a new conference intended to shine a light on great, thought-provoking research in security, privacy, and electronic crime. With Enigma beginning in just a few short weeks, I wanted to share a couple of the reasons I’m personally excited about this new conference.<br /><br />Enigma aims to bridge the divide that exists between experts working in academia, industry, and public service, explicitly bringing researchers from different sectors together to share their work. Our speakers include those spearheading the defense of digital rights (<a href="https://www.eff.org/">Electronic Frontier Foundation</a>, <a href="https://www.accessnow.org/">Access Now</a>), practitioners at a number of well known industry leaders (<a href="https://www.akamai.com/">Akamai</a>, <a href="http://blackberry.com/">Blackberry</a>, <a href="https://www.facebook.com/">Facebook</a>, <a href="https://www.linkedin.com/">LinkedIn</a>, <a href="https://www.netflix.com/">Netflix</a>, <a href="https://twitter.com/">Twitter</a>), and researchers from multiple universities in the U.S. and abroad. 
With the diverse <a href="https://www.usenix.org/conference/enigma2016/conference-program">session topics and organizations represented</a>, I expect interesting—and perhaps spirited—coffee break and lunchtime discussions among the equally diverse list of conference attendees.<br /><br />Of course, I’m very proud to have some of my Google colleagues speaking at Enigma:<br /><ul><li>Adrienne Porter Felt will talk about blending research and engineering to solve usable security problems. You’ll hear how Chrome’s usable security team runs user studies and experiments to motivate engineering and design decisions. Adrienne will share the challenges they’ve faced when trying to adapt existing usable security research to practice, and give insight into how they’ve achieved successes.</li></ul><ul><li>Ben Hawkes will be speaking about <a href="https://googleonlinesecurity.blogspot.de/2014/07/announcing-project-zero.html">Project Zero</a>, a security research team dedicated to the mission of “making <a href="https://en.wikipedia.org/wiki/Zero-day_(computing)">0day</a> hard.” Ben will talk about why Project Zero exists, and some of the recent trends and technologies that make vulnerability discovery and exploitation fundamentally harder.</li></ul><ul><li>Kostya Serebryany will be presenting a 3-pronged approach to securing C++ code based on his many years of experience wrangling complex, buggy software. Kostya will survey multiple dynamic sanitizing tools he and his team have made publicly available, review control-flow and data-flow guided fuzzing, and explain a method to harden your code in the presence of any bugs that remain.</li></ul><ul><li>Elie Bursztein will go through key lessons the Gmail team learned over the past 11 years while protecting users from spam, phishing, malware, and web attacks. 
Illustrated with concrete numbers and examples from one of the largest email systems on the planet, attendees will gain insight into specific techniques and approaches useful in fighting abuse and securing their online services.</li></ul><div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/MorFAKn16w8/0.jpg" frameborder="0" height="400" src="https://www.youtube.com/embed/MorFAKn16w8?feature=player_embedded" width="480"></iframe></div>In addition to raw content, my Program Co-Chair, <a href="http://users.ece.cmu.edu/~dbrumley/index.html">David Brumley</a>, and I have prioritized talk quality. Researchers dedicate months or years of their time to thinking about a problem and conducting the technical work of research, but a common criticism of technical conferences is that the actual presentation of that research seems like an afterthought. Rather than be a regurgitation of a research paper in slide format, a presentation is an opportunity for a researcher to explain the context and impact of their work in their own voice; a chance to inspire the audience to want to learn more or dig deeper. Taking inspiration from the <a href="https://www.ted.com/about/our-organization">TED conference</a>, Enigma will have shorter presentations, and the program committee has worked with each speaker to help them craft the best version of their talk.&nbsp;</div><div><br /></div><div>Hope to see some of you at <a href="https://www.usenix.org/conference/enigma2016">USENIX Enigma</a> later this month!<br /><br /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/why-attend-usenix-enigma-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An update on SHA-1 certificates in Chrome</title>
		<link>https://googledata.org/google-online-security/an-update-on-sha-1-certificates-in-chrome/</link>
		<comments>https://googledata.org/google-online-security/an-update-on-sha-1-certificates-in-chrome/#comments</comments>
		<pubDate>Fri, 18 Dec 2015 21:25:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5917b18be69086865e9b583e63499b3f</guid>
		<description><![CDATA[<span>Posted by Lucas Garron, Chrome security and David Benjamin, Chrome networking</span><br /><br />As <a href="https://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">announced last September</a> and supported by <a href="https://sites.google.com/site/itstheshappening/">further recent research</a>, Google Chrome does not treat SHA-1 certificates as secure anymore, and will completely stop supporting them over the next year. Chrome will discontinue support in two steps: first, blocking new SHA-1 certificates; and second, blocking all SHA-1 certificates.<br /><br /><b>Step 1: Blocking new SHA-1 certificates</b><br /><b><br /></b>Starting in early 2016 with Chrome version 48, Chrome will display a certificate error if it encounters a site with a leaf certificate that:<br /><br /><ol><li>is signed with a SHA-1-based signature</li><li>is issued on or after January 1, 2016</li><li>chains to a public CA</li></ol><div><a href="http://1.bp.blogspot.com/-ccPfgfmU-AY/VnR4iHv6GiI/AAAAAAAAAOE/RwQTutHezMU/s1600/Screen%2BShot%2B2015-12-04%2Bat%2B16.01.45.png"><img border="0" height="238" src="http://1.bp.blogspot.com/-ccPfgfmU-AY/VnR4iHv6GiI/AAAAAAAAAOE/RwQTutHezMU/s400/Screen%2BShot%2B2015-12-04%2Bat%2B16.01.45.png" width="400"></a></div><div>We are hopeful that no one will encounter this error, since public CAs must stop issuing SHA-1 certificates in 2016 per the <a href="https://cabforum.org/baseline-requirements-documents/">Baseline Requirements for SSL</a>.</div><div><br /></div><div>In addition, a later version of Chrome in 2016 may extend these criteria in order to help guard against SHA-1 collision attacks on older devices, by displaying a certificate error for sites with certificate chains that:&#160;</div><div><ol><li>contain an intermediate or leaf certificate signed with a SHA-1-based signature</li><li>contain an intermediate or leaf certificate issued on or after January 1, 2016</li><li>chain to a public 
CA</li></ol>(Note that the first two criteria can match different certificates.)</div><div><br /></div><div>Note that sites using new SHA-1 certificates that chain to local trust anchors (rather than public CAs) will continue to work without a certificate error. However, they will still be subject to the UI downgrade specified in our <a href="https://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">original announcement</a>.</div><div><br /></div><div><b>Step 2: Blocking all SHA-1 certificates</b></div><div><br /></div><div>Starting January 1, 2017 at the latest, Chrome will completely stop supporting SHA-1 certificates. At this point, sites that have a SHA-1-based signature as part of the certificate chain (not including the self-signature on the root certificate) will trigger a fatal network error. This includes certificate chains that end in a local trust anchor as well as those that end at a public CA.</div><div><br /></div><div>In line with <a href="https://cabforum.org/pipermail/public/2015-October/006121.html">Microsoft Edge</a> and <a href="https://blog.mozilla.org/security/2015/10/20/continuing-to-phase-out-sha-1-certificates/">Mozilla Firefox</a>, the target date for this step is January 1, 2017, but we are considering moving it earlier to July 1, 2016 in light of ongoing research. 
We therefore urge sites to replace any remaining SHA-1 certificates as soon as possible.</div><div><br /></div><div>Note that Chrome uses the certificate trust settings of the host OS where possible, and that an update such as Microsoft&#8217;s <a href="http://social.technet.microsoft.com/wiki/contents/articles/32288.windows-enforcement-of-authenticode-code-signing-and-timestamping.aspx">planned change</a> will cause a fatal network error in Chrome, regardless of Chrome&#8217;s intended target date.</div><div><br /></div><div><b>Keeping your site safe and compatible</b></div><div><br /></div><div>As individual TLS features are found to be too weak, browsers need to drop support for those features to keep users safe. Unfortunately, SHA-1 certificates are not the only feature that browsers will remove in the near future.</div><div><br /></div><div>As we <a href="https://groups.google.com/a/chromium.org/d/msg/security-dev/kVfCywocUO8/2BW3INFdDwAJ">announced</a> on our security-dev mailing list, Chrome 48 will also stop supporting RC4 cipher suites for TLS connections. This aligns with timelines for <a href="https://blogs.windows.com/msedgedev/2015/09/01/ending-support-for-the-rc4-cipher-in-microsoft-edge-and-internet-explorer-11/">Microsoft Edge</a> and <a href="https://blog.mozilla.org/security/2015/09/11/deprecating-the-rc4-cipher/">Mozilla Firefox</a>.</div><div><br /></div><div>For security and interoperability in the face of upcoming browser changes, site operators should ensure that their servers use SHA-2 certificates, support non-RC4 cipher suites, and follow TLS best practices. In particular, we recommend that most sites support TLS 1.2 and prioritize the ECDHE_RSA_WITH_AES_128_GCM cipher suite. We also encourage site operators to use tools like the <a href="https://www.ssllabs.com/ssltest/">SSL Labs server test</a> and <a href="https://mozilla.github.io/server-side-tls/ssl-config-generator/">Mozilla's SSL Configuration Generator</a>.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Lucas Garron, Chrome security and David Benjamin, Chrome networking</span><br /><br />As <a href="https://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">announced last September</a> and supported by <a href="https://sites.google.com/site/itstheshappening/">further recent research</a>, Google Chrome does not treat SHA-1 certificates as secure anymore, and will completely stop supporting them over the next year. Chrome will discontinue support in two steps: first, blocking new SHA-1 certificates; and second, blocking all SHA-1 certificates.<br /><br /><b>Step 1: Blocking new SHA-1 certificates</b><br /><b><br /></b>Starting in early 2016 with Chrome version 48, Chrome will display a certificate error if it encounters a site with a leaf certificate that:<br /><br /><ol><li>is signed with a SHA-1-based signature</li><li>is issued on or after January 1, 2016</li><li>chains to a public CA</li></ol><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ccPfgfmU-AY/VnR4iHv6GiI/AAAAAAAAAOE/RwQTutHezMU/s1600/Screen%2BShot%2B2015-12-04%2Bat%2B16.01.45.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="http://1.bp.blogspot.com/-ccPfgfmU-AY/VnR4iHv6GiI/AAAAAAAAAOE/RwQTutHezMU/s400/Screen%2BShot%2B2015-12-04%2Bat%2B16.01.45.png" width="400" /></a></div><div>We are hopeful that no one will encounter this error, since public CAs must stop issuing SHA-1 certificates in 2016 per the <a href="https://cabforum.org/baseline-requirements-documents/">Baseline Requirements for SSL</a>.</div><div><br /></div><div>In addition, a later version of Chrome in 2016 may extend these criteria in order to help guard against SHA-1 collision attacks on older devices, by displaying a certificate error for sites with certificate chains that:&nbsp;</div><div><ol><li>contain an intermediate or leaf certificate signed 
with a SHA-1-based signature</li><li>contain an intermediate or leaf certificate issued on or after January 1, 2016</li><li>chain to a public CA</li></ol>(Note that the first two criteria can match different certificates.)</div><div><br /></div><div>Note that sites using new SHA-1 certificates that chain to local trust anchors (rather than public CAs) will continue to work without a certificate error. However, they will still be subject to the UI downgrade specified in our <a href="https://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">original announcement</a>.</div><div><br /></div><div><b>Step 2: Blocking all SHA-1 certificates</b></div><div><br /></div><div>Starting January 1, 2017 at the latest, Chrome will completely stop supporting SHA-1 certificates. At this point, sites that have a SHA-1-based signature as part of the certificate chain (not including the self-signature on the root certificate) will trigger a fatal network error. This includes certificate chains that end in a local trust anchor as well as those that end at a public CA.</div><div><br /></div><div>In line with <a href="https://cabforum.org/pipermail/public/2015-October/006121.html">Microsoft Edge</a> and <a href="https://blog.mozilla.org/security/2015/10/20/continuing-to-phase-out-sha-1-certificates/">Mozilla Firefox</a>, the target date for this step is January 1, 2017, but we are considering moving it earlier to July 1, 2016 in light of ongoing research. 
We therefore urge sites to replace any remaining SHA-1 certificates as soon as possible.</div><div><br /></div><div>Note that Chrome uses the certificate trust settings of the host OS where possible, and that an update such as Microsoft’s <a href="http://social.technet.microsoft.com/wiki/contents/articles/32288.windows-enforcement-of-authenticode-code-signing-and-timestamping.aspx">planned change</a> will cause a fatal network error in Chrome, regardless of Chrome’s intended target date.</div><div><br /></div><div><b>Keeping your site safe and compatible</b></div><div><br /></div><div>As individual TLS features are found to be too weak, browsers need to drop support for those features to keep users safe. Unfortunately, SHA-1 certificates are not the only feature that browsers will remove in the near future.</div><div><br /></div><div>As we <a href="https://groups.google.com/a/chromium.org/d/msg/security-dev/kVfCywocUO8/2BW3INFdDwAJ">announced</a> on our security-dev mailing list, Chrome 48 will also stop supporting RC4 cipher suites for TLS connections. This aligns with timelines for <a href="https://blogs.windows.com/msedgedev/2015/09/01/ending-support-for-the-rc4-cipher-in-microsoft-edge-and-internet-explorer-11/">Microsoft Edge</a> and <a href="https://blog.mozilla.org/security/2015/09/11/deprecating-the-rc4-cipher/">Mozilla Firefox</a>.</div><div><br /></div><div>For security and interoperability in the face of upcoming browser changes, site operators should ensure that their servers use SHA-2 certificates, support non-RC4 cipher suites, and follow TLS best practices. In particular, we recommend that most sites support TLS 1.2 and prioritize the ECDHE_RSA_WITH_AES_128_GCM cipher suite. We also encourage site operators to use tools like the <a href="https://www.ssllabs.com/ssltest/">SSL Labs server test</a> and <a href="https://mozilla.github.io/server-side-tls/ssl-config-generator/">Mozilla's SSL Configuration Generator</a>.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/an-update-on-sha-1-certificates-in-chrome/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>

		</item>
		<item>
		<title>Indexing HTTPS pages by default</title>
		<link>https://googledata.org/google-online-security/indexing-https-pages-by-default/</link>
		<comments>https://googledata.org/google-online-security/indexing-https-pages-by-default/#comments</comments>
		<pubDate>Thu, 17 Dec 2015 16:06:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=471d13d81410bf3113fde2e8f58c106a</guid>
		<description><![CDATA[<span>Posted by <a href="https://plus.sandbox.google.com/+ZinebAitBahajji">Zineb Ait Bahajji</a>, WTA, and the Google Security and Indexing teams</span><br /><br /><i>[Cross-posted from the <a href="http://googlewebmastercentral.blogspot.ch/2015/12/indexing-https-pages-by-default.html">Webmaster Central Blog</a>]</i><br /><br />At Google, user security has always been a top priority. Over the years, we&#8217;ve worked hard to promote a more secure web and to provide a better browsing experience for users. <a href="http://gmailblog.blogspot.ch/2014/03/staying-at-forefront-of-email-security.html">Gmail</a>, <a href="https://googleblog.blogspot.ch/2011/10/making-search-more-secure.html">Google search</a>, and YouTube have had secure connections for some time, and we also started giving a slight <a href="http://googlewebmastercentral.blogspot.ch/2014/08/https-as-ranking-signal.html">ranking boost to HTTPS URLs</a> in search results last year. Browsing the web should be a private experience between the user and the website, and must not be subject to <a href="https://en.wikipedia.org/wiki/Eavesdropping">eavesdropping</a>, <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">man-in-the-middle attacks</a>, or data modification. This is why we&#8217;ve been strongly promoting <a href="https://www.youtube.com/watch?v=cBhZ6S0PFCY">HTTPS everywhere</a>.<br /><br />As a natural continuation of this, today we'd like to announce that we're adjusting our indexing system to look for more HTTPS pages. Specifically, we&#8217;ll start crawling HTTPS equivalents of HTTP pages, even when the former are not linked to from any page. 
When two URLs from the same domain appear to have the same content but are served over different protocol schemes, we&#8217;ll typically choose to index the HTTPS URL if:<br /><br /><ul><li>It doesn&#8217;t contain insecure dependencies.</li><li>It isn&#8217;t blocked from crawling by robots.txt.</li><li>It doesn&#8217;t redirect users to or through an insecure HTTP page.</li><li>It doesn&#8217;t have a rel="canonical" link to the HTTP page.</li><li>It doesn&#8217;t contain a noindex robots meta tag.</li><li>It doesn&#8217;t have on-host outlinks to HTTP URLs.</li><li>The sitemap lists the HTTPS URL, or doesn&#8217;t list the HTTP version of the URL.</li><li>The server has a valid TLS certificate.</li></ul><div><br /></div><div>Although our systems prefer the HTTPS version by default, you can also make this clearer for other search engines by redirecting your HTTP site to your HTTPS version and by implementing the <a href="https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security">HSTS header</a> on your server.</div><div><br /></div><div>We&#8217;re excited about taking another step forward in making the web more secure. By showing users HTTPS pages in our search results, we&#8217;re hoping to decrease the risk of users browsing a website over an insecure connection and leaving themselves vulnerable to content injection attacks. As usual, if you have any questions or comments, please let us know in the comments section below or in our <a href="https://productforums.google.com/forum/#!forum/webmasters">webmaster help forums</a>.</div><div><br /></div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by <a href="https://plus.sandbox.google.com/+ZinebAitBahajji">Zineb Ait Bahajji</a>, WTA, and the Google Security and Indexing teams</span><br /><br /><i>[Cross-posted from the <a href="http://googlewebmastercentral.blogspot.ch/2015/12/indexing-https-pages-by-default.html">Webmaster Central Blog</a>]</i><br /><br />At Google, user security has always been a top priority. Over the years, we’ve worked hard to promote a more secure web and to provide a better browsing experience for users. <a href="http://gmailblog.blogspot.ch/2014/03/staying-at-forefront-of-email-security.html">Gmail</a>, <a href="https://googleblog.blogspot.ch/2011/10/making-search-more-secure.html">Google search</a>, and YouTube have had secure connections for some time, and we also started giving a slight <a href="http://googlewebmastercentral.blogspot.ch/2014/08/https-as-ranking-signal.html">ranking boost to HTTPS URLs</a> in search results last year. Browsing the web should be a private experience between the user and the website, and must not be subject to <a href="https://en.wikipedia.org/wiki/Eavesdropping">eavesdropping</a>, <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">man-in-the-middle attacks</a>, or data modification. This is why we’ve been strongly promoting <a href="https://www.youtube.com/watch?v=cBhZ6S0PFCY">HTTPS everywhere</a>.<br /><br />As a natural continuation of this, today we'd like to announce that we're adjusting our indexing system to look for more HTTPS pages. Specifically, we’ll start crawling HTTPS equivalents of HTTP pages, even when the former are not linked to from any page. 
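Site operators can make the preferred HTTPS version unambiguous to crawlers by redirecting HTTP to HTTPS and sending an HSTS header. A minimal, hypothetical helper for checking a fetched response is sketched below; the function name and representation are illustrative, not part of any Google system:

```python
def https_ready(final_url, response_headers):
    """Heuristic check: did a fetch of the HTTP URL end up on HTTPS,
    with HTTP Strict Transport Security (HSTS) enabled?

    final_url: the URL after following all redirects.
    response_headers: mapping of response header names to values.
    """
    has_hsts = any(name.lower() == "strict-transport-security"
                   for name in response_headers)
    return final_url.startswith("https://") and has_hsts
```

For instance, a site that redirects to `https://example.com/` and sends `Strict-Transport-Security: max-age=31536000` passes this check.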
When two URLs from the same domain appear to have the same content but are served over different protocol schemes, we’ll typically choose to index the HTTPS URL if:<br /><br /><ul><li>It doesn’t contain insecure dependencies.</li><li>It isn’t blocked from crawling by robots.txt.</li><li>It doesn’t redirect users to or through an insecure HTTP page.</li><li>It doesn’t have a rel="canonical" link to the HTTP page.</li><li>It doesn’t contain a noindex robots meta tag.</li><li>It doesn’t have on-host outlinks to HTTP URLs.</li><li>The sitemap lists the HTTPS URL, or doesn’t list the HTTP version of the URL.</li><li>The server has a valid TLS certificate.</li></ul><div><br /></div><div>Although our systems prefer the HTTPS version by default, you can also make this clearer for other search engines by redirecting your HTTP site to your HTTPS version and by implementing the <a href="https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security">HSTS header</a> on your server.</div><div><br /></div><div>We’re excited about taking another step forward in making the web more secure. By showing users HTTPS pages in our search results, we’re hoping to decrease the risk of users browsing a website over an insecure connection and leaving themselves vulnerable to content injection attacks. As usual, if you have any questions or comments, please let us know in the comments section below or in our <a href="https://productforums.google.com/forum/#!forum/webmasters">webmaster help forums</a>.</div><div><br /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/indexing-https-pages-by-default/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Proactive measures in digital certificate security</title>
		<link>https://googledata.org/google-online-security/proactive-measures-in-digital-certificate-security/</link>
		<comments>https://googledata.org/google-online-security/proactive-measures-in-digital-certificate-security/#comments</comments>
		<pubDate>Fri, 11 Dec 2015 16:15:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=a77c19055df5d8743b8ea1c250b7471b</guid>
		<description><![CDATA[<span>Posted by Ryan Sleevi, Software Engineer</span><br /><br />Over the course of the coming weeks, Google will be moving to distrust the &#8220;Class 3 Public Primary CA&#8221; root certificate operated by Symantec Corporation, across Chrome, Android, and Google products. We are taking this action in response to a <a href="https://knowledge.symantec.com/support/ssl-certificates-support/index?page=home">notification by Symantec Corporation</a> that, as of December 1, 2015, Symantec has decided that this root will no longer comply with the <a href="https://cabforum.org/about-the-baseline-requirements/">CA/Browser Forum&#8217;s Baseline Requirements</a>. As these requirements reflect industry best practice and are the foundation for publicly trusted certificates, the failure to comply with these represents an unacceptable risk to users of Google products.<br /><br />Symantec has informed us they intend to use this root certificate for purposes other than publicly-trusted certificates. However, as this root certificate will no longer adhere to the CA/Browser Forum&#8217;s Baseline Requirements, Google is no longer able to ensure that the root certificate, or certificates issued from this root certificate, will not be used to intercept, disrupt, or impersonate the secure communication of Google&#8217;s products or users. As Symantec is unwilling to specify the new purposes for these certificates, and as they are aware of the risk to Google&#8217;s users, they have requested that Google take preventative action by removing and distrusting this root certificate. 
This step is necessary because this root certificate is widely trusted on platforms such as Android, Windows, and versions of OS X prior to OS X 10.11, and thus certificates Symantec issues under this root certificate would otherwise be treated as trustworthy.<br /><br />Symantec has indicated that they do not believe their customers, who are the operators of secure websites, will be affected by this removal. Further, Symantec has also indicated that, to the best of their knowledge, they do not believe customers who attempt to access sites secured with Symantec certificates will be affected by this. Users or site operators who encounter issues with this distrusting and removal should <a href="https://support.symantec.com/">contact Symantec Technical Support</a>.<br /><br /><b>Further Technical Details of Affected Root:</b><br /><b>Friendly Name:</b> Class 3 Public Primary Certification Authority<br /><b>Subject:</b> C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority<br /><b>Public Key Hash (SHA-1):</b> E2:7F:7B:D8:77:D5:DF:9E:0A:3F:9E:B4:CB:0E:2E:A9:EF:DB:69:77<br /><b>Public Key Hash (SHA-256):</b><br />B1:12:41:42:A5:A1:A5:A2:88:19:C7:35:34:0E:FF:8C:9E:2F:81:68:FE:E3:BA:18:7F:25:3B:C1:A3:92:D7:E2<br /><br /><b>MD2 Version</b><br /><b>Fingerprint (SHA-1):</b> 74:2C:31:92:E6:07:E4:24:EB:45:49:54:2B:E1:BB:C5:3E:61:74:E2<br /><b>Fingerprint (SHA-256):</b> E7:68:56:34:EF:AC:F6:9A:CE:93:9A:6B:25:5B:7B:4F:AB:EF:42:93:5B:50:A2:65:AC:B5:CB:60:27:E4:4E:70<br /><br /><b>SHA1 Version</b><br /><b>Fingerprint (SHA-1)</b>: A1:DB:63:93:91:6F:17:E4:18:55:09:40:04:15:C7:02:40:B0:AE:6B<br /><b>Fingerprint (SHA-256)</b>: A4:B6:B3:99:6F:C2:F3:06:B3:FD:86:81:BD:63:41:3D:8C:50:09:CC:4F:A3:29:C2:CC:F0:E2:FA:1B:14:03:05<br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Ryan Sleevi, Software Engineer</span><br /><br />Over the course of the coming weeks, Google will be moving to distrust the “Class 3 Public Primary CA” root certificate operated by Symantec Corporation, across Chrome, Android, and Google products. We are taking this action in response to a <a href="https://knowledge.symantec.com/support/ssl-certificates-support/index?page=home">notification by Symantec Corporation</a> that, as of December 1, 2015, Symantec has decided that this root will no longer comply with the <a href="https://cabforum.org/about-the-baseline-requirements/">CA/Browser Forum’s Baseline Requirements</a>. As these requirements reflect industry best practice and are the foundation for publicly trusted certificates, the failure to comply with these represents an unacceptable risk to users of Google products.<br /><br />Symantec has informed us they intend to use this root certificate for purposes other than publicly-trusted certificates. However, as this root certificate will no longer adhere to the CA/Browser Forum’s Baseline Requirements, Google is no longer able to ensure that the root certificate, or certificates issued from this root certificate, will not be used to intercept, disrupt, or impersonate the secure communication of Google’s products or users. As Symantec is unwilling to specify the new purposes for these certificates, and as they are aware of the risk to Google’s users, they have requested that Google take preventative action by removing and distrusting this root certificate. 
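The fingerprints listed below are hashes of the certificate's DER encoding. A generic sketch of computing one with Python's standard library follows; comparing the SHA-256 result for a candidate certificate against the published value identifies the affected root:

```python
import hashlib

def cert_fingerprint(der_bytes, algo="sha256"):
    """Colon-separated, uppercase fingerprint of a DER-encoded certificate."""
    digest = hashlib.new(algo, der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

A SHA-256 fingerprint has 32 colon-separated byte values; a SHA-1 fingerprint has 20, matching the formats shown below.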
This step is necessary because this root certificate is widely trusted on platforms such as Android, Windows, and versions of OS X prior to OS X 10.11, and thus certificates Symantec issues under this root certificate would otherwise be treated as trustworthy.<br /><br />Symantec has indicated that they do not believe their customers, who are the operators of secure websites, will be affected by this removal. Further, Symantec has also indicated that, to the best of their knowledge, they do not believe customers who attempt to access sites secured with Symantec certificates will be affected by this. Users or site operators who encounter issues with this distrusting and removal should <a href="https://support.symantec.com/">contact Symantec Technical Support</a>.<br /><br /><b>Further Technical Details of Affected Root:</b><br /><b>Friendly Name:</b> Class 3 Public Primary Certification Authority<br /><b>Subject:</b> C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority<br /><b>Public Key Hash (SHA-1):</b> E2:7F:7B:D8:77:D5:DF:9E:0A:3F:9E:B4:CB:0E:2E:A9:EF:DB:69:77<br /><b>Public Key Hash (SHA-256):</b><br />B1:12:41:42:A5:A1:A5:A2:88:19:C7:35:34:0E:FF:8C:9E:2F:81:68:FE:E3:BA:18:7F:25:3B:C1:A3:92:D7:E2<br /><br /><b>MD2 Version</b><br /><b>Fingerprint (SHA-1):</b> 74:2C:31:92:E6:07:E4:24:EB:45:49:54:2B:E1:BB:C5:3E:61:74:E2<br /><b>Fingerprint (SHA-256):</b> E7:68:56:34:EF:AC:F6:9A:CE:93:9A:6B:25:5B:7B:4F:AB:EF:42:93:5B:50:A2:65:AC:B5:CB:60:27:E4:4E:70<br /><br /><b>SHA1 Version</b><br /><b>Fingerprint (SHA-1)</b>: A1:DB:63:93:91:6F:17:E4:18:55:09:40:04:15:C7:02:40:B0:AE:6B<br /><b>Fingerprint (SHA-256)</b>: A4:B6:B3:99:6F:C2:F3:06:B3:FD:86:81:BD:63:41:3D:8C:50:09:CC:4F:A3:29:C2:CC:F0:E2:FA:1B:14:03:05<br /><br />]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/proactive-measures-in-digital-certificate-security/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Year one: progress in the fight against Unwanted Software</title>
		<link>https://googledata.org/google-online-security/year-one-progress-in-the-fight-against-unwanted-software/</link>
		<comments>https://googledata.org/google-online-security/year-one-progress-in-the-fight-against-unwanted-software/#comments</comments>
		<pubDate>Wed, 09 Dec 2015 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5079a6a9c7868e28034b145870788aa3</guid>
		<description><![CDATA[<span>Posted by Moheeb Abu Rajab, Google Security Team</span><br /><i><br /></i><i>&#8220;At least 2 or 3 times a week I get a big blue warning screen with a loud voice telling me that I&#8217;ve a virus and to call the number at the end of the big blue warning.&#8221;</i><br /><i>&#8220;I&#8217;m covered with ads and unwanted interruptions. what&#8217;s the fix?&#8221;</i><br /><i>&#8220;I WORK FROM HOME AND THIS POPING [sic] UP AND RUNNING ALL OVER MY COMPUTER IS NOT RESPECTFUL AT ALL THANK YOU.&#8221;</i><br /><i><br /></i><a href="http://googleonlinesecurity.blogspot.com/2007/05/introducing-googles-anti-malware.html">Launched in 2007</a>, Safe Browsing has long helped protect people across the web from well-known online dangers like phishing and malware. More recently, however, we&#8217;ve seen an increase in user complaints like the ones above. These issues and others&#8212;hijacked browser settings, software installed without users' permission that resists attempts to uninstall&#8212;have signaled the rise of a new type of malware that our systems haven&#8217;t been able to reliably detect.<br /><br /><a href="http://chrome.blogspot.com/2014/08/thats-not-download-youre-looking-for.html">More than a year ago</a>, we began a broad fight against this category of badness that we now call &#8220;Unwanted Software&#8221;, or &#8220;UwS&#8221; (pronounced &#8220;ooze&#8221;). Today, we wanted to share some progress and outline the work that must happen in order to continue protecting users across the web.<br /><br /><b>What is UwS and how does it get on my computer?</b><br /><br />In order to combat UwS, we first needed to define it. 
Despite lots of variety, our research enabled us to develop a <a href="https://www.google.com/about/company/unwanted-software-policy.html">defining list of characteristics</a> that this type of software often displays:<br /><br /><ul><li>It is deceptive, promising a value proposition that it does not meet.</li><li>It tries to trick users into installing it or it piggybacks on the installation of another program.</li><li>It doesn&#8217;t tell the user about all of its principal and significant functions.</li><li>It affects the user&#8217;s system in unexpected ways.</li><li>It is difficult to remove.</li><li>It collects or transmits private information without the user&#8217;s knowledge.</li><li>It is bundled with other software and its presence is not disclosed.</li></ul><div><br /></div><div>Next, we had to better understand how UwS is being disseminated.</div><div><br /></div><div>This varies quite a bit, but time and again, deception is at the heart of these tactics. Common UwS distribution tactics include: <a href="https://googleonlinesecurity.blogspot.com/2015/03/out-with-unwanted-ad-injectors.html">unwanted ad injection</a>, misleading ads such as &#8220;trick-to-click&#8221;, ads disguised as &#8216;download&#8217; or &#8216;play&#8217; buttons, bad software downloader practices, misleading or missing disclosures about what the software does, hijacked browser default settings, annoying system pop-up messages, and more.</div><div><br /></div><div>Here are a few specific examples:</div><div><a href="http://3.bp.blogspot.com/-_VV45FisuUY/VmdwAOTO2-I/AAAAAAAAAM0/iFvPBs-ngb8/s1600/basketball%2Bnew.jpg"><img border="0" height="211" src="http://3.bp.blogspot.com/-_VV45FisuUY/VmdwAOTO2-I/AAAAAAAAAM0/iFvPBs-ngb8/s400/basketball%2Bnew.jpg" width="400"></a></div><div><i>Deceptive ads leading to UwS downloads</i></div><div><a href="http://3.bp.blogspot.com/-yXZAfrR9L9U/Vmh3ZJvw0RI/AAAAAAAAANM/6rFclrgP1nE/s1600/nytimes_bad%2BNEW%2BNEW.jpg"><img border="0" height="260" 
src="http://3.bp.blogspot.com/-yXZAfrR9L9U/Vmh3ZJvw0RI/AAAAAAAAANM/6rFclrgP1nE/s400/nytimes_bad%2BNEW%2BNEW.jpg" width="400"></a></div><div><i>Ads from unwanted ads injector taking over a New York Times page and sending the user to phone scams</i></div><div><a href="http://1.bp.blogspot.com/-fSgTHJpagSc/Vmdt-lb3sUI/AAAAAAAAAMc/3y84TPMUjgM/s1600/google_injected%2B%25281%2529.png"><img border="0" height="216" src="http://1.bp.blogspot.com/-fSgTHJpagSc/Vmdt-lb3sUI/AAAAAAAAAMc/3y84TPMUjgM/s400/google_injected%2B%25281%2529.png" width="400"></a></div><div><i>Unwanted ad injector inserts ads on the Google search results page</i></div><div><a href="http://3.bp.blogspot.com/-NeCoDUD-T3w/VmdvgkXrHKI/AAAAAAAAAMs/a5GKb5dQnkw/s1600/car%2BNEW.jpg"><img border="0" height="320" src="http://3.bp.blogspot.com/-NeCoDUD-T3w/VmdvgkXrHKI/AAAAAAAAAMs/a5GKb5dQnkw/s400/car%2BNEW.jpg" width="400"></a></div><div><i>New tab page is overridden by UwS</i></div><div><a href="http://2.bp.blogspot.com/-3hxcnNNBYRw/Vmh3gnUPbdI/AAAAAAAAANU/gJVzKY8pqF0/s1600/live%2Bhelp.jpg"><img border="0" height="206" src="http://2.bp.blogspot.com/-3hxcnNNBYRw/Vmh3gnUPbdI/AAAAAAAAANU/gJVzKY8pqF0/s400/live%2Bhelp.jpg" width="400"></a></div><div><i>UwS hijacks Chrome navigations and directs users to a scam tech support website</i></div><div><br /></div><div><b>One year of progress</b></div><div><br /></div><div>Because UwS touches so many different parts of people&#8217;s online experiences, we&#8217;ve worked to fight it on many different fronts. 
Weaving UwS detection into Safe Browsing has been critical to this work, and we&#8217;ve pursued other efforts as well&#8212;here&#8217;s an overview:</div><div><ul><li>We now include UwS in <a href="http://googleonlinesecurity.blogspot.com/2014/08/thats-not-download-youre-looking-for.html">Safe Browsing</a> and its <a href="http://googleonlinesecurity.blogspot.com/2015/03/even-more-unwanted-software-protection.html">API</a>, enabling people who use Chrome and other browsers to see warnings before they go to sites that contain UwS. The red warning below appears in Chrome.</li></ul><div><a href="http://2.bp.blogspot.com/-fP2f-Ru84xk/Vmdt_a1ebLI/AAAAAAAAAMg/fpw1XCXs5U8/s1600/pasted%2Bimage%2B0%2B%25283%2529.png"><img border="0" height="283" src="http://2.bp.blogspot.com/-fP2f-Ru84xk/Vmdt_a1ebLI/AAAAAAAAAMg/fpw1XCXs5U8/s400/pasted%2Bimage%2B0%2B%25283%2529.png" width="400"></a></div><ul><li>We launched the <a href="https://www.google.com/chrome/cleanup-tool/">Chrome Cleanup Tool</a>, a one-shot UwS removal tool that has helped clean more than 40 million devices. We shed more light on a common symptom of UwS&#8212;<a href="http://googleonlinesecurity.blogspot.com/2015/03/out-with-unwanted-ad-injectors.html">unwanted ad injectors</a>. 
We outlined <a href="http://googleonlinesecurity.blogspot.com/2015/05/new-research-ad-injection-economy.html">how they make money</a> and <a href="http://googleonlinesecurity.blogspot.com/2015/09/cutting-unwanted-ad-injectors-out-of.html">launched a new filter</a> in DoubleClick Bid Manager that removes impressions generated by unwanted ad injectors before bids are made.</li><li>We started using <a href="http://googleonlinesecurity.blogspot.com/2015/02/more-protection-from-unwanted-software.html">UwS as a signal in search</a> to reduce the likelihood that sites with UwS would appear in search results.</li><li>We started <a href="https://support.google.com/adwordspolicy/answer/6101154?hl=en&#38;ref_topic=29265">disabling</a> Google ads that lead to sites with UwS downloads.</li></ul><div><br /></div><div>It&#8217;s still early, but these changes have already begun to move the needle.</div><div><ul><li>UwS-related Chrome user complaints have fallen. Last year, before we rolled out our new policies, these were 40% of total complaints; now they&#8217;re 20%.</li><li>We&#8217;re now showing more than 5 million Safe Browsing warnings per day on Chrome related to UwS to ensure users are aware of a site&#8217;s potential risks.</li><li>We helped more than 14 million users remove over 190 deceptive Chrome extensions from their devices.</li><li>We <a href="https://googleonlinesecurity.blogspot.com/2015/05/new-research-ad-injection-economy.html">reduced the number of UwS warnings</a> that users see via AdWords by 95%, compared to last year. Even prior to last year, less than 1% of UwS downloads were due to AdWords.</li></ul><div><br /></div>However, there is still a long way to go. 20% of all feedback from Chrome users is related to UwS, and we believe 1 in 10 Chrome users have hijacked settings or unwanted ad injectors on their machines. 
We expect users of other browsers to continue to suffer from similar issues; there is still a lot of work to be done.</div></div><div><br /></div><div><b>Looking ahead: broad industry participation is essential</b></div><div><br /></div><div>Given the complexity of the UwS ecosystem, the involvement of players across the industry is key to making meaningful progress in this fight. This chain is only as strong as its weakest links: everyone must work to develop and enforce strict, clear policies related to major sources of UwS.</div><div><br /></div><div>If we&#8217;re able, as an industry, to enforce these policies, then everyone will be able to provide better experiences for their users. With this in mind, we&#8217;re very pleased to see that the <a href="http://www.onguardonline.gov/blog/unwanted-software-can-cause-unwelcome-problems">FTC recently warned consumers</a> about UwS and characterized UwS as a <a href="http://www.onguardonline.gov/articles/0011-malware">form of malware</a>. This is an important step toward uniting the online community and focusing good actors on the common goal of eliminating UwS.</div><div><br /></div><div>We&#8217;re still in the earliest stages of the fight against UwS, but we&#8217;re moving in the right direction. We&#8217;ll continue our efforts to protect users from UwS and work across the industry to eliminate these bad practices.</div><div><br /></div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Moheeb Abu Rajab, Google Security Team</span><br /><i><br /></i><i>“At least 2 or 3 times a week I get a big blue warning screen with a loud voice telling me that I’ve a virus and to call the number at the end of the big blue warning.”</i><br /><i>“I’m covered with ads and unwanted interruptions. what’s the fix?”</i><br /><i>“I WORK FROM HOME AND THIS POPING [sic] UP AND RUNNING ALL OVER MY COMPUTER IS NOT RESPECTFUL AT ALL THANK YOU.”</i><br /><i><br /></i><a href="http://googleonlinesecurity.blogspot.com/2007/05/introducing-googles-anti-malware.html">Launched in 2007</a>, Safe Browsing has long helped protect people across the web from well-known online dangers like phishing and malware. More recently, however, we’ve seen an increase in user complaints like the ones above. These issues and others—hijacked browser settings, software installed without users' permission that resists attempts to uninstall—have signaled the rise of a new type of malware that our systems haven’t been able to reliably detect.<br /><br /><a href="http://chrome.blogspot.com/2014/08/thats-not-download-youre-looking-for.html">More than a year ago</a>, we began a broad fight against this category of badness that we now call “Unwanted Software”, or “UwS” (pronounced “ooze”). Today, we wanted to share some progress and outline the work that must happen in order to continue protecting users across the web.<br /><br /><b>What is UwS and how does it get on my computer?</b><br /><br />In order to combat UwS, we first needed to define it. 
Despite lots of variety, our research enabled us to develop a <a href="https://www.google.com/about/company/unwanted-software-policy.html">defining list of characteristics</a> that this type of software often displays:<br /><br /><ul><li>It is deceptive, promising a value proposition that it does not meet.</li><li>It tries to trick users into installing it or it piggybacks on the installation of another program.</li><li>It doesn’t tell the user about all of its principal and significant functions.</li><li>It affects the user’s system in unexpected ways.</li><li>It is difficult to remove.</li><li>It collects or transmits private information without the user’s knowledge.</li><li>It is bundled with other software and its presence is not disclosed.</li></ul><div><br /></div><div>Next, we had to better understand how UwS is being disseminated.</div><div><br /></div><div>This varies quite a bit, but time and again, deception is at the heart of these tactics. Common UwS distribution tactics include: <a href="https://googleonlinesecurity.blogspot.com/2015/03/out-with-unwanted-ad-injectors.html">unwanted ad injection</a>, misleading ads such as “trick-to-click”, ads disguised as ‘download’ or ‘play’ buttons, bad software downloader practices, misleading or missing disclosures about what the software does, hijacked browser default settings, annoying system pop-up messages, and more.</div><div><br /></div><div>Here are a few specific examples:</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-_VV45FisuUY/VmdwAOTO2-I/AAAAAAAAAM0/iFvPBs-ngb8/s1600/basketball%2Bnew.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="211" src="http://3.bp.blogspot.com/-_VV45FisuUY/VmdwAOTO2-I/AAAAAAAAAM0/iFvPBs-ngb8/s400/basketball%2Bnew.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><i>Deceptive ads leading to UwS downloads</i></div><div class="separator" 
style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-yXZAfrR9L9U/Vmh3ZJvw0RI/AAAAAAAAANM/6rFclrgP1nE/s1600/nytimes_bad%2BNEW%2BNEW.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="260" src="http://3.bp.blogspot.com/-yXZAfrR9L9U/Vmh3ZJvw0RI/AAAAAAAAANM/6rFclrgP1nE/s400/nytimes_bad%2BNEW%2BNEW.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><i>Ads from unwanted ads injector taking over a New York Times page and sending the user to phone scams</i></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-fSgTHJpagSc/Vmdt-lb3sUI/AAAAAAAAAMc/3y84TPMUjgM/s1600/google_injected%2B%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="216" src="http://1.bp.blogspot.com/-fSgTHJpagSc/Vmdt-lb3sUI/AAAAAAAAAMc/3y84TPMUjgM/s400/google_injected%2B%25281%2529.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><i>Unwanted ad injector inserts ads on the Google search results page</i></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-NeCoDUD-T3w/VmdvgkXrHKI/AAAAAAAAAMs/a5GKb5dQnkw/s1600/car%2BNEW.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-NeCoDUD-T3w/VmdvgkXrHKI/AAAAAAAAAMs/a5GKb5dQnkw/s400/car%2BNEW.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><i>New tab page is overridden by UwS</i></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-3hxcnNNBYRw/Vmh3gnUPbdI/AAAAAAAAANU/gJVzKY8pqF0/s1600/live%2Bhelp.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="206" src="http://2.bp.blogspot.com/-3hxcnNNBYRw/Vmh3gnUPbdI/AAAAAAAAANU/gJVzKY8pqF0/s400/live%2Bhelp.jpg" width="400" 
/></a></div><div class="separator" style="clear: both; text-align: center;"><i>UwS hijacks Chrome navigations and directs users to a scam tech support website</i></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div><b>One year of progress</b></div><div><br /></div><div>Because UwS touches so many different parts of people’s online experiences, we’ve worked to fight it on many different fronts. Weaving UwS detection into Safe Browsing has been critical to this work, and we’ve pursued other efforts as well—here’s an overview:</div><div><ul><li>We now include UwS in <a href="http://googleonlinesecurity.blogspot.com/2014/08/thats-not-download-youre-looking-for.html">Safe Browsing</a> and its <a href="http://googleonlinesecurity.blogspot.com/2015/03/even-more-unwanted-software-protection.html">API</a>, enabling people who use Chrome and other browsers to see warnings before they go to sites that contain UwS. The red warning below appears in Chrome.</li></ul><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-fP2f-Ru84xk/Vmdt_a1ebLI/AAAAAAAAAMg/fpw1XCXs5U8/s1600/pasted%2Bimage%2B0%2B%25283%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="283" src="http://2.bp.blogspot.com/-fP2f-Ru84xk/Vmdt_a1ebLI/AAAAAAAAAMg/fpw1XCXs5U8/s400/pasted%2Bimage%2B0%2B%25283%2529.png" width="400" /></a></div><ul><li>We launched the <a href="https://www.google.com/chrome/cleanup-tool/">Chrome Cleanup Tool</a>, a one-shot UwS removal tool that has helped clean more than 40 million devices. We shed more light on a common symptom of UwS—<a href="http://googleonlinesecurity.blogspot.com/2015/03/out-with-unwanted-ad-injectors.html">unwanted ad injectors</a>. 
We outlined <a href="http://googleonlinesecurity.blogspot.com/2015/05/new-research-ad-injection-economy.html">how they make money</a> and <a href="http://googleonlinesecurity.blogspot.com/2015/09/cutting-unwanted-ad-injectors-out-of.html">launched a new filter</a> in DoubleClick Bid Manager that removes impressions generated by unwanted ad injectors before bids are made.</li><li>We started using <a href="http://googleonlinesecurity.blogspot.com/2015/02/more-protection-from-unwanted-software.html">UwS as a signal in search</a> to reduce the likelihood that sites with UwS would appear in search results.</li><li>We started <a href="https://support.google.com/adwordspolicy/answer/6101154?hl=en&amp;ref_topic=29265">disabling</a> Google ads that lead to sites with UwS downloads.</li></ul><div><br /></div><div>It’s still early, but these changes have already begun to move the needle.</div><div><ul><li>UwS-related Chrome user complaints have fallen. Last year, before we rolled out our new policies, they accounted for 40% of total complaints; now they’re 20%.</li><li>We now show more than 5 million UwS-related Safe Browsing warnings per day in Chrome to ensure users are aware of a site’s potential risks.</li><li>We helped more than 14 million users remove over 190 deceptive Chrome extensions from their devices.</li><li>We <a href="https://googleonlinesecurity.blogspot.com/2015/05/new-research-ad-injection-economy.html">reduced the number of UwS warnings</a> that users see via AdWords by 95% compared to last year. Even prior to last year, less than 1% of UwS downloads were due to AdWords.</li></ul><div><br /></div>However, there is still a long way to go. 20% of all feedback from Chrome users is related to UwS, and we believe 1 in 10 Chrome users have hijacked settings or unwanted ad injectors on their machines. 
We expect that users of other browsers continue to suffer from similar issues; there is still a lot of work to be done.</div></div><div><br /></div><div><b>Looking ahead: broad industry participation is essential</b></div><div><br /></div><div>Given the complexity of the UwS ecosystem, the involvement of players across the industry is key to making meaningful progress in this fight. This chain is only as strong as its weakest links: everyone must work to develop and enforce strict, clear policies related to major sources of UwS.</div><div><br /></div><div>If we’re able, as an industry, to enforce these policies, then everyone will be able to provide better experiences for their users. With this in mind, we’re very pleased to see that the <a href="http://www.onguardonline.gov/blog/unwanted-software-can-cause-unwelcome-problems">FTC recently warned consumers</a> about UwS and characterized it as a <a href="http://www.onguardonline.gov/articles/0011-malware">form of malware</a>. This is an important step toward uniting the online community and focusing good actors on the common goal of eliminating UwS.</div><div><br /></div><div>We’re still in the earliest stages of the fight against UwS, but we’re moving in the right direction. We’ll continue our efforts to protect users from UwS and work across the industry to eliminate these bad practices.</div><div><br /></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/year-one-progress-in-the-fight-against-unwanted-software/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A new version of Authenticator for Android</title>
		<link>https://googledata.org/google-online-security/a-new-version-of-authenticator-for-android/</link>
		<comments>https://googledata.org/google-online-security/a-new-version-of-authenticator-for-android/#comments</comments>
		<pubDate>Mon, 07 Dec 2015 22:02:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=8ecce914d5cc48559291949a66adbc1e</guid>
		<description><![CDATA[Posted by Alexei Czeskis, Software EngineerAuthenticator for Android is used by millions of users and, combined with 2-Step Verification, it provides an extra layer of protection for Google Accounts.Our latest version has some cool new features. You wi...]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Alexei Czeskis, Software Engineer</span><br /><br />Authenticator for Android is used by millions of users and, combined with <a href="https://www.google.com/landing/2step/">2-Step Verification</a>, it provides an extra layer of protection for Google Accounts.<br /><br />Our latest version has some cool new features. You will notice a new icon and a refreshed design. There's also support for <a href="https://www.android.com/wear/">Android Wear</a> devices, so you'll be able to get verification codes from compatible devices, like your watch.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-wdJ7RvAITMk/VmTsIDPwfoI/AAAAAAAAALM/g2zRnVLkIGg/s1600/Screenshot_20151112-223633.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: left;"><img border="0" height="320" src="http://2.bp.blogspot.com/-wdJ7RvAITMk/VmTsIDPwfoI/AAAAAAAAALM/g2zRnVLkIGg/s320/Screenshot_20151112-223633.png" width="180" /></a><a href="http://3.bp.blogspot.com/-9U3_Kbs8bH4/VmTsFUZqGjI/AAAAAAAAALI/pdNCtKzWjx0/s1600/Screenshot_20151112-223540.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-9U3_Kbs8bH4/VmTsFUZqGjI/AAAAAAAAALI/pdNCtKzWjx0/s320/Screenshot_20151112-223540.png" width="180" /></a></div>The new Authenticator also comes with a developer preview of support for NFC Security Key, based on the FIDO Universal 2nd Factor (U2F) protocol via NFC. Play Store will prompt for the NFC permission before you install this version of Authenticator.<br /><br />Developers who want to learn more about U2F can refer to FIDO's <a href="https://fidoalliance.org/specifications/download/">specifications</a>. Additionally, you can try it out at <a href="https://u2fdemo.appspot.com/">https://u2fdemo.appspot.com</a>. 
Note that you'll need an Android device running the latest versions of Google Chrome and Authenticator, as well as a <a href="https://support.google.com/accounts/answer/6103523">Security Key</a> with NFC support.<br /><br />You can find the latest <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2">Authenticator for Android on the Play Store</a>.]]></content:encoded>
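The verification codes that Authenticator produces are standard time-based one-time passwords (TOTP, RFC 6238, built on the HOTP construction of RFC 4226). As an illustrative sketch only, not Google's actual implementation, the derivation fits in a few lines of Python:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238 over RFC 4226 HOTP)."""
    counter = timestamp // step                  # 30-second time window
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 seconds, 8 digits
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Both the app and Google's servers derive the same code from a shared secret and the current time, which is why the codes work offline and expire every 30 seconds.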
			<wfw:commentRss>https://googledata.org/google-online-security/a-new-version-of-authenticator-for-android/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Protecting hundreds of millions more mobile users</title>
		<link>https://googledata.org/google-online-security/protecting-hundreds-of-millions-more-mobile-users/</link>
		<comments>https://googledata.org/google-online-security/protecting-hundreds-of-millions-more-mobile-users/#comments</comments>
		<pubDate>Mon, 07 Dec 2015 17:32:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=ee09bafb3da29322d374b464f669410c</guid>
		<description><![CDATA[<span>Posted by No&#233; Lutz, Nathan Parker, Stephan Somogyi; Google Chrome and Safe Browsing Teams</span><br /><br /><br />  Google Safe Browsing has been protecting well over a billion desktop users against <a href="https://googleblog.blogspot.com/2015/03/protecting-people-across-web-with.html">malware, unwanted software, and social engineering sites</a> on the web for years. Today, we&#8217;re pleased to announce that we&#8217;ve extended our protective umbrella to hundreds of millions of Chrome users on Android.<br /><br /><b>How To Get It</b><br /><b><br /></b>If you&#8217;re an Android user, you probably already have it! This new Safe Browsing client on Android is part of Google Play Services, starting with version 8.1. The first app to use it is Chrome, starting with version 46&#8212;we&#8217;re now protecting all Android Chrome users by default. If you look at Chrome&#8217;s Settings &#62; Privacy menu, you can verify that Safe Browsing is enabled and that you&#8217;re protected. Chrome warns you about dangerous sites as shown below. It does this&#160;<a href="https://www.google.com/chrome/browser/privacy/whitepaper.html#malware">while preserving your privacy, just like on desktop</a>.<br /><br /><b>What Came Before</b><br /><b><br /></b>The Android platform and the Play Store have long had protection against potentially harmful apps. And as our adversaries have improved their skills in trying to evade us, we&#8217;ve improved our detection, keeping Android app users safe. 
But not all dangers to mobile users come from apps.<br /><br /><b>What&#8217;s New</b><br /><div><a href="http://3.bp.blogspot.com/-zPRNoz24Ih4/VmTu1X6KYpI/AAAAAAAAALc/VdvM_K2GF6o/s1600/deceptive_mob_interstitial.png"><img border="0" height="400" src="http://3.bp.blogspot.com/-zPRNoz24Ih4/VmTu1X6KYpI/AAAAAAAAALc/VdvM_K2GF6o/s400/deceptive_mob_interstitial.png" width="232"></a></div><a href="https://googleonlinesecurity.blogspot.com/2015/11/safe-browsing-protection-from-even-more.html">Social engineering</a>&#8212;and phishing in particular&#8212;requires different protection; we need to keep an up-to-date list of bad sites on the device to make sure we can warn people before they browse into a trap. Providing this protection on a mobile device is much more difficult than on a desktop system, in no small part because we have to make sure that list doesn&#8217;t get stale, yet:<br /><br /><ul><li>Mobile data costs money for most users around the world. Data size matters a lot.</li><li>Mobile data speeds are slower than Wi-Fi in much of the world. Data size matters a lot.</li><li>Cellular connectivity quality is much more uneven, so getting the right data to the device quickly is critically important. Data size matters a lot.</li></ul><br /><b>Maximum Protection Per Bit</b><br /><b><br /></b>Bytes are big: our mantra is that every single bit that Safe Browsing sends a mobile device must improve protection. Network bandwidth and battery are the scarcest resources on a mobile device, so we had to carefully rethink how to best protect mobile users. Some social engineering attacks only happen in certain parts of the world, so we only send information that protects devices in the geographic regions they&#8217;re in.<br /><br />We also make sure that we send information about the riskiest sites first: if we can only get a very short update through, as is often the case on lower-speed networks in emerging economies, the update really has to count. 
We also worked with Google&#8217;s compression team to make the little data that we do send as small as possible.<br /><br />Together with the Android Security team, we made the software on the device extra stingy with memory and processor use, and careful about minimizing network traffic. All of these details matter to us; we must not waste our users&#8217; data plans, or a single moment of their battery life.<br /><br /><b>More Mobile</b><br /><br />We hunt badness on the Internet so that you don&#8217;t discover it the hard way, and our protection should never be an undue burden on your networking costs or your device&#8217;s battery. As more of the world relies on the mobile web, we want to make sure you&#8217;re as safe as can be, as efficiently as possible.<br /><br /><br />]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Noé Lutz, Nathan Parker, Stephan Somogyi; Google Chrome and Safe Browsing Teams</span><br /><br /><br />  Google Safe Browsing has been protecting well over a billion desktop users against <a href="https://googleblog.blogspot.com/2015/03/protecting-people-across-web-with.html">malware, unwanted software, and social engineering sites</a> on the web for years. Today, we’re pleased to announce that we’ve extended our protective umbrella to hundreds of millions of Chrome users on Android.<br /><br /><b>How To Get It</b><br /><b><br /></b>If you’re an Android user, you probably already have it! This new Safe Browsing client on Android is part of Google Play Services, starting with version 8.1. The first app to use it is Chrome, starting with version 46—we’re now protecting all Android Chrome users by default. If you look at Chrome’s Settings &gt; Privacy menu, you can verify that Safe Browsing is enabled and that you’re protected. Chrome warns you about dangerous sites as shown below. It does this&nbsp;<a href="https://www.google.com/chrome/browser/privacy/whitepaper.html#malware">while preserving your privacy, just like on desktop</a>.<br /><br /><b>What Came Before</b><br /><b><br /></b>The Android platform and the Play Store have long had protection against potentially harmful apps. And as our adversaries have improved their skills in trying to evade us, we’ve improved our detection, keeping Android app users safe. 
But not all dangers to mobile users come from apps.<br /><br /><b>What’s New</b><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-zPRNoz24Ih4/VmTu1X6KYpI/AAAAAAAAALc/VdvM_K2GF6o/s1600/deceptive_mob_interstitial.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://3.bp.blogspot.com/-zPRNoz24Ih4/VmTu1X6KYpI/AAAAAAAAALc/VdvM_K2GF6o/s400/deceptive_mob_interstitial.png" width="232" /></a></div><a href="https://googleonlinesecurity.blogspot.com/2015/11/safe-browsing-protection-from-even-more.html">Social engineering</a>—and phishing in particular—requires different protection; we need to keep an up-to-date list of bad sites on the device to make sure we can warn people before they browse into a trap. Providing this protection on a mobile device is much more difficult than on a desktop system, in no small part because we have to make sure that list doesn’t get stale, yet:<br /><br /><ul><li>Mobile data costs money for most users around the world. Data size matters a lot.</li><li>Mobile data speeds are slower than Wi-Fi in much of the world. Data size matters a lot.</li><li>Cellular connectivity quality is much more uneven, so getting the right data to the device quickly is critically important. Data size matters a lot.</li></ul><br /><b>Maximum Protection Per Bit</b><br /><b><br /></b>Bytes are big: our mantra is that every single bit that Safe Browsing sends a mobile device must improve protection. Network bandwidth and battery are the scarcest resources on a mobile device, so we had to carefully rethink how to best protect mobile users. 
Some social engineering attacks only happen in certain parts of the world, so we only send information that protects devices in the geographic regions they’re in.<br /><br />We also make sure that we send information about the riskiest sites first: if we can only get a very short update through, as is often the case on lower-speed networks in emerging economies, the update really has to count. We also worked with Google’s compression team to make the little data that we do send as small as possible.<br /><br />Together with the Android Security team, we made the software on the device extra stingy with memory and processor use, and careful about minimizing network traffic. All of these details matter to us; we must not waste our users’ data plans, or a single moment of their battery life.<br /><br /><b>More Mobile</b><br /><br />We hunt badness on the Internet so that you don’t discover it the hard way, and our protection should never be an undue burden on your networking costs or your device’s battery. As more of the world relies on the mobile web, we want to make sure you’re as safe as can be, as efficiently as possible.<br /><br /><br />]]></content:encoded>
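One widely documented way to achieve this kind of bandwidth frugality, and the approach described in the public Safe Browsing Update API, is to keep only short prefixes of the SHA-256 hashes of known-bad URL expressions on the device, contacting the server for full-length hashes only when a prefix matches. The sketch below is a simplification with made-up URLs and function names; real clients also canonicalize URLs and hash several host/path combinations per URL:

```python
import hashlib

def url_prefix(url: str, n: int = 4) -> bytes:
    """Truncated SHA-256 hash of a URL expression (illustrative; real clients
    canonicalize the URL and derive several host/path expressions first)."""
    return hashlib.sha256(url.encode("utf-8")).digest()[:n]

# Hypothetical on-device store: a few bytes per bad URL instead of a full hash.
local_prefixes = {url_prefix("http://evil.example/trap")}

def needs_full_hash_check(url: str) -> bool:
    """True if the URL's prefix matches locally; only then does the client
    spend network traffic asking the server for the full-length hashes."""
    return url_prefix(url) in local_prefixes

print(needs_full_hash_check("http://evil.example/trap"))  # True
print(needs_full_hash_check("http://safe.example/home"))  # False: no prefix match
```

Storing 4-byte prefixes instead of 32-byte hashes cuts the on-device list to an eighth of the size, at the cost of an occasional false prefix hit that the full-hash round trip resolves.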
			<wfw:commentRss>https://googledata.org/google-online-security/protecting-hundreds-of-millions-more-mobile-users/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Safe Browsing protection from even more deceptive attacks</title>
		<link>https://googledata.org/google-online-security/safe-browsing-protection-from-even-more-deceptive-attacks-2/</link>
		<comments>https://googledata.org/google-online-security/safe-browsing-protection-from-even-more-deceptive-attacks-2/#comments</comments>
		<pubDate>Fri, 13 Nov 2015 16:35:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=e6ca1c22a8ab095fa295ac41e45e7856</guid>
		<description><![CDATA[<span>Posted by Emily Schechter, Program Manager and No&#233; Lutz, Software Engineer</span><br /><br /><a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing</a> has been protecting over one billion people from <a href="https://support.google.com/websearch/answer/106318?hl=en&#38;rd=1">traditional phishing attacks</a> on the web for more than eight years. The threat landscape is constantly changing&#8212;bad actors on the web are using more and different types of deceptive behavior to trick you into performing actions that you didn&#8217;t intend or want, so we&#8217;ve expanded protection to include social engineering.<br /><span><br /></span>Social engineering is a much broader category than traditional phishing and encompasses more types of deceptive web content. A social engineering attack happens when either:<br /><br /><ul><li>The content pretends to act, or looks and feels, like a trusted entity &#8212; like a bank or government.</li><li>The content tries to trick you into doing something you&#8217;d only do for a trusted entity &#8212; like sharing a password or calling tech support.</li></ul><br />Below are some examples of social engineering attacks that try to trick you into thinking the content is delivered by Google or Chrome. Other trusted brands are also commonly abused for social engineering attacks.<br /><br /><div><a href="http://4.bp.blogspot.com/-ynZ1AqWr_34/VkVhE2IwJ-I/AAAAAAAAAKI/-wZYWtBhNMg/s1600/pasted%2Bimage%2B0%2B%25281%2529.png"><img border="0" height="217" src="http://4.bp.blogspot.com/-ynZ1AqWr_34/VkVhE2IwJ-I/AAAAAAAAAKI/-wZYWtBhNMg/s400/pasted%2Bimage%2B0%2B%25281%2529.png" width="400"></a></div><div><i>This page tries to trick you into downloading and executing malware or unwanted software. It uses Chrome&#8217;s logo and name to confuse you into believing the site is operated by Google. 
Content like this may include an inconspicuous legal disclaimer that states it is not affiliated with Google. This does not change the deceptive nature of this content&#8212;as always, use caution when downloading files from the web.</i></div><br /><div><a href="http://4.bp.blogspot.com/-1r-KjOaeTj4/VkVhE0H9Q4I/AAAAAAAAAKM/AH5z1GFkAzQ/s1600/alert6.png"><img border="0" height="283" src="http://4.bp.blogspot.com/-1r-KjOaeTj4/VkVhE0H9Q4I/AAAAAAAAAKM/AH5z1GFkAzQ/s400/alert6.png" width="400"></a></div><div><i>This is a fake tech support page. It mimics a warning and may trick you into calling a third-party company that pretends to be Google or some other trusted entity, but charges a fee for support. (Chrome does not offer paid remote support.)</i>&#160;</div><br /><div><a href="http://1.bp.blogspot.com/-AUsGCQ8mFKk/VkVhEwvSAbI/AAAAAAAAAKE/KfqIEiKC0c8/s1600/pasted%2Bimage%2B0%2B%25282%2529.png"><img border="0" height="320" src="http://1.bp.blogspot.com/-AUsGCQ8mFKk/VkVhEwvSAbI/AAAAAAAAAKE/KfqIEiKC0c8/s320/pasted%2Bimage%2B0%2B%25282%2529.png" width="310"></a></div><div><i>This is a fake Google login page. It might trick you into disclosing your account login credentials. Other phishing sites like this could trick you into giving up other personal information, such as credit card details. Phishing sites may look exactly like the real site&#8212;so be sure to look at the address bar to check that the URL is correct, and also check that the website begins with https://. 
See more information <a href="https://www.google.com/safetycenter/everyone/cybercrime/identity-theft/">here</a>.</i></div><div><br /></div><div>If we identify that a web page contains social engineering content, Chrome will warn you by displaying the following warning:</div><div></div><div><a href="http://2.bp.blogspot.com/-EfkkneVbG9g/VkWDCJfAzdI/AAAAAAAAAKw/P2BogvdTq-U/s1600/SocEngWarning.jpg"><img border="0" height="250" src="http://2.bp.blogspot.com/-EfkkneVbG9g/VkWDCJfAzdI/AAAAAAAAAKw/P2BogvdTq-U/s400/SocEngWarning.jpg" width="400"></a></div><div>(If you believe Safe Browsing has classified a web page in error, please report it <a href="https://www.google.com/safebrowsing/report_error/">here</a>.)</div><div><br /></div><div>We'll continue to improve Google's Safe Browsing protection to help more people stay safer online. Check out the <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing Transparency Report</a> to find out more.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Emily Schechter, Program Manager and Noé Lutz, Software Engineer</span><br /><br /><a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing</a> has been protecting over one billion people from <a href="https://support.google.com/websearch/answer/106318?hl=en&amp;rd=1">traditional phishing attacks</a> on the web for more than eight years. The threat landscape is constantly changing—bad actors on the web are using more and different types of deceptive behavior to trick you into performing actions that you didn’t intend or want, so we’ve expanded protection to include social engineering.<br /><span class="byline-author"><br /></span>Social engineering is a much broader category than traditional phishing and encompasses more types of deceptive web content. A social engineering attack happens when either:<br /><br /><ul><li>The content pretends to act, or looks and feels, like a trusted entity — like a bank or government.</li><li>The content tries to trick you into doing something you’d only do for a trusted entity — like sharing a password or calling tech support.</li></ul><br />Below are some examples of social engineering attacks that try to trick you into thinking the content is delivered by Google or Chrome. Other trusted brands are also commonly abused for social engineering attacks.<br /><br /><div style="text-align: center;"><a href="http://4.bp.blogspot.com/-ynZ1AqWr_34/VkVhE2IwJ-I/AAAAAAAAAKI/-wZYWtBhNMg/s1600/pasted%2Bimage%2B0%2B%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="217" src="http://4.bp.blogspot.com/-ynZ1AqWr_34/VkVhE2IwJ-I/AAAAAAAAAKI/-wZYWtBhNMg/s400/pasted%2Bimage%2B0%2B%25281%2529.png" width="400" /></a></div><div style="text-align: center;"><i>This page tries to trick you into downloading and executing malware or unwanted software. 
It uses Chrome’s logo and name to confuse you into believing the site is operated by Google. Content like this may include an inconspicuous legal disclaimer that states it is not affiliated with Google. This does not change the deceptive nature of this content—as always, use caution when downloading files from the web.</i></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-1r-KjOaeTj4/VkVhE0H9Q4I/AAAAAAAAAKM/AH5z1GFkAzQ/s1600/alert6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="283" src="http://4.bp.blogspot.com/-1r-KjOaeTj4/VkVhE0H9Q4I/AAAAAAAAAKM/AH5z1GFkAzQ/s400/alert6.png" width="400" /></a></div><div style="text-align: center;"><i>This is a fake tech support page. It mimics a warning and may trick you into calling a third-party company that pretends to be Google or some other trusted entity, but charges a fee for support. (Chrome does not offer paid remote support.)</i>&nbsp;</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-AUsGCQ8mFKk/VkVhEwvSAbI/AAAAAAAAAKE/KfqIEiKC0c8/s1600/pasted%2Bimage%2B0%2B%25282%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-AUsGCQ8mFKk/VkVhEwvSAbI/AAAAAAAAAKE/KfqIEiKC0c8/s320/pasted%2Bimage%2B0%2B%25282%2529.png" width="310" /></a></div><div class="separator" style="clear: both; text-align: center;"><i>This is a fake Google login page. It might trick you into disclosing your account login credentials. Other phishing sites like this could trick you into giving up other personal information, such as credit card details. Phishing sites may look exactly like the real site—so be sure to look at the address bar to check that the URL is correct, and also check that the website begins with https://. 
See more information <a href="https://www.google.com/safetycenter/everyone/cybercrime/identity-theft/">here</a>.</i></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">If we identify that a web page contains social engineering content, Chrome will warn you by displaying the following warning:</div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-EfkkneVbG9g/VkWDCJfAzdI/AAAAAAAAAKw/P2BogvdTq-U/s1600/SocEngWarning.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://2.bp.blogspot.com/-EfkkneVbG9g/VkWDCJfAzdI/AAAAAAAAAKw/P2BogvdTq-U/s400/SocEngWarning.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">(If you believe Safe Browsing has classified a web page in error, please report it <a href="https://www.google.com/safebrowsing/report_error/">here</a>.)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We'll continue to improve Google's Safe Browsing protection to help more people stay safer online. Check out the <a href="https://www.google.com/transparencyreport/safebrowsing/">Safe Browsing Transparency Report</a> to find out more.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/safe-browsing-protection-from-even-more-deceptive-attacks-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New Research: Encouraging trends and emerging threats in email security</title>
		<link>https://googledata.org/google-online-security/new-research-encouraging-trends-and-emerging-threats-in-email-security/</link>
		<comments>https://googledata.org/google-online-security/new-research-encouraging-trends-and-emerging-threats-in-email-security/#comments</comments>
		<pubDate>Thu, 12 Nov 2015 15:30:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=7f42c75b2ff18d6222b5190b1c6d8838</guid>
		<description><![CDATA[<span>Posted by&#160;</span><span>Elie Bursztein,</span>&#160;Anti-Fraud and Abuse Research and Nicolas Lidzborski, Gmail Security Engineering Lead<br /><span><br /></span><span>We&#8217;re constantly working to help make email more secure for everyone. These efforts are reflected in security protections like <a href="http://gmailblog.blogspot.com/2010/01/default-https-access-for-gmail.html">default HTTPS</a> in Gmail as well as our <a href="http://www.google.com/transparencyreport/saferemail/?hl=en">Safer Email Transparency report</a>, which includes information about email security beyond just Gmail.</span><br /><span><br /></span><span>To that end, in partnership with the University of Michigan and the University of Illinois, we&#8217;re publishing the results of a <a href="http://conferences2.sigcomm.org/imc/2015/papers/p27.pdf">multi-year study</a> that measured how email security has evolved since 2013. While Gmail was the foundation of this research, the study&#8217;s insights apply to email more broadly, not unlike our <a href="http://www.google.com/transparencyreport/saferemail/?hl=en">Safer Email Transparency report</a>. 
It&#8217;s our hope that these findings not only help make Gmail more secure, but also help protect email users everywhere.</span><br /><span><br /></span><span><b>Email security strengthens, industry-wide</b></span><br /><span><br /></span><span>The study showed that email is more secure today than it was two years ago.</span><br /><span><br /></span><span>Here are some specific findings:</span><a href="http://2.bp.blogspot.com/-iAfcKQj2t2g/VkSbeMRc8_I/AAAAAAAAAJw/_esZQo8ZyFI/s1600/gmail%2Bgraphic.jpg"><img border="0" height="640" src="http://2.bp.blogspot.com/-iAfcKQj2t2g/VkSbeMRc8_I/AAAAAAAAAJw/_esZQo8ZyFI/s640/gmail%2Bgraphic.jpg" width="620"></a><br /><div></div><div><b>Newer security challenges and how we can address them</b></div><div><br /></div><div>Our study identified several new security challenges as well.</div><div><br /></div><div>First, we found regions of the Internet actively preventing message encryption by tampering with requests to initiate SSL connections. To mitigate this attack, we are working closely with partners through the industry association <a href="http://www.m3aawg.org/">M3AAWG</a> to strengthen &#8220;opportunistic TLS&#8221; using technologies that we pioneered with Chrome to protect websites against interception.</div><div><br /></div><div>Second, we uncovered malicious DNS servers publishing bogus routing information to email servers looking for Gmail. These nefarious servers are like telephone directories that intentionally list misleading phone numbers for a given name. While this type of attack is rare, it&#8217;s very concerning as it could allow attackers to censor or alter messages before they are relayed to the email recipient.</div><div><br /></div><div>While these threats do not affect Gmail-to-Gmail communication, they may affect messaging between providers. 
To notify our users of potential dangers, we are developing in-product warnings for Gmail users that will appear when they receive a message through a non-encrypted connection. These warnings will begin to roll out in the coming months.</div><div><br /></div><div>All email services&#8212;Gmail included&#8212;depend on the trust of their users. Partnering with top researchers helps us make the email ecosystem as a whole safer and more secure for everyone. Security threats won&#8217;t disappear, but studies like these enable providers across the industry to fight them with better, more powerful protections today and going forward.</div><div><br /></div><div><i>[This work was made possible thanks to the contributions of many Googlers, including Vijay Eranti, Kurt Thomas, John Rae-Grant, and Mark Risher.]</i></div><span><br /></span>]]></description>
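The first attack described above is often called STARTTLS stripping: a network middlebox removes (or mangles) the STARTTLS capability line from an SMTP server's EHLO reply, so the sending server never upgrades the connection to TLS and falls back to cleartext. A minimal, illustrative detector for that symptom, using a made-up EHLO reply rather than any real server's:

```python
def advertises_starttls(ehlo_reply: str) -> bool:
    """Return True if an SMTP EHLO reply advertises the STARTTLS extension
    (RFC 3207). A downgrade attack strips or mangles this capability line
    so the client continues in cleartext."""
    for line in ehlo_reply.splitlines():
        # Capability lines look like "250-KEYWORD", or "250 KEYWORD" for the last one.
        if line[:4] in ("250-", "250 ") and line[4:].strip().upper().startswith("STARTTLS"):
            return True
    return False

genuine = "250-mx.example.com\n250-SIZE 35882577\n250-STARTTLS\n250 SMTPUTF8"
stripped = genuine.replace("250-STARTTLS\n", "")  # simulated downgrade

print(advertises_starttls(genuine), advertises_starttls(stripped))  # True False
```

Because opportunistic TLS silently falls back to cleartext when the capability is absent, a sender that merely checks this flag cannot distinguish "server has no TLS" from "TLS was stripped in transit", which is why stronger mechanisms were being pursued through M3AAWG.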
				<content:encoded><![CDATA[<span class="byline-author">Posted by&nbsp;</span><span class="byline-author">Elie Bursztein,</span>&nbsp;Anti-Fraud and Abuse Research and Nicolas Lidzborski, Gmail Security Engineering Lead<br /><span class="byline-author"><br /></span><span class="byline-author">We’re constantly working to help make email more secure for everyone. These efforts are reflected in security protections like <a href="http://gmailblog.blogspot.com/2010/01/default-https-access-for-gmail.html">default HTTPS</a> in Gmail as well as our <a href="http://www.google.com/transparencyreport/saferemail/?hl=en">Safer Email Transparency report</a>, which includes information about email security beyond just Gmail.</span><br /><span class="byline-author"><br /></span><span class="byline-author">To that end, in partnership with the University of Michigan and the University of Illinois, we’re publishing the results of a <a href="http://conferences2.sigcomm.org/imc/2015/papers/p27.pdf">multi-year study</a> that measured how email security has evolved since 2013. While Gmail was the foundation of this research, the study’s insights apply to email more broadly, not unlike our <a href="http://www.google.com/transparencyreport/saferemail/?hl=en">Safer Email Transparency report</a>. 
It’s our hope that these findings not only help make Gmail more secure, but also help protect email users everywhere.</span><br /><span class="byline-author"><br /></span><span class="byline-author"><b>Email security strengthens, industry-wide</b></span><br /><span class="byline-author"><br /></span><span class="byline-author">The study showed that email is more secure today than it was two years ago.</span><br /><span class="byline-author"><br /></span><span class="byline-author">Here are some specific findings:</span><a href="http://2.bp.blogspot.com/-iAfcKQj2t2g/VkSbeMRc8_I/AAAAAAAAAJw/_esZQo8ZyFI/s1600/gmail%2Bgraphic.jpg" imageanchor="1" style="clear: left; display: inline !important; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="640" src="http://2.bp.blogspot.com/-iAfcKQj2t2g/VkSbeMRc8_I/AAAAAAAAAJw/_esZQo8ZyFI/s640/gmail%2Bgraphic.jpg" width="620" /></a><br /><div class="separator" style="clear: both; text-align: left;"></div><div class="separator" style="clear: both; text-align: left;"><b>Newer security challenges and how we can address them</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Our study identified several new security challenges as well.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">First, we found regions of the Internet actively preventing message encryption by tampering with requests to initiate SSL connections. 
To mitigate this attack, we are working closely with partners through the industry association <a href="http://www.m3aawg.org/">M3AAWG</a> to strengthen “opportunistic TLS” using technologies that we pioneered with Chrome to protect websites against interception.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Second, we uncovered malicious DNS servers publishing bogus routing information to email servers looking for Gmail. These nefarious servers are like telephone directories that intentionally list misleading phone numbers for a given name. While this type of attack is rare, it’s very concerning as it could allow attackers to censor or alter messages before they are relayed to the email recipient.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">While these threats do not affect Gmail to Gmail communication, they may affect messaging between providers. To notify our users of potential dangers, we are developing in-product warnings for Gmail users that will display when they receive a message through a non-encrypted connection. These warnings will begin to roll out in the coming months.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">All email services—Gmail included—depend on the trust of their users. Partnering with top researchers helps us make the email ecosystem as a whole safer and more secure for everyone. 
Security threats won’t disappear, but studies like these enable providers across the industry to fight them with better, more powerful protections today and going forward.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><i>[This work was made possible thanks to the contribution of many Googlers including Vijay Eranti, Kurt Thomas, John Rae-Grant, and Mark Risher.]</i></div><span class="byline-author"><br /></span>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/new-research-encouraging-trends-and-emerging-threats-in-email-security/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Sustaining Digital Certificate Security</title>
		<link>https://googledata.org/google-online-security/sustaining-digital-certificate-security/</link>
		<comments>https://googledata.org/google-online-security/sustaining-digital-certificate-security/#comments</comments>
		<pubDate>Wed, 28 Oct 2015 21:03:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=c94dadb48e16b75de8946e5819b38a18</guid>
		<description><![CDATA[<span>Posted by Ryan Sleevi, Software Engineer</span><br /><br /><i>This post updates our <a href="https://googleonlinesecurity.blogspot.com/2015/09/improved-digital-certificate-security.html">previous notification</a> of a misissued certificate for google.com</i><br /><br />Following our notification, Symantec published <a href="https://www-secure.symantec.com/connect/sites/default/files/Test_Certificates_Incident_Final_Report_10_13_2015v3b.pdf">a report</a> in response to our inquiries and disclosed that 23 test certificates had been issued without the domain owner&#8217;s knowledge, covering five organizations, including Google and Opera.<br /><br />However, we were still able to find several more questionable certificates using only the Certificate Transparency logs and a few minutes of work. We shared these results with other root store operators on October 6th, to allow them to independently assess and verify our research.<br /><br />Symantec performed another audit and, on October 12th, announced that they had found an additional <a href="https://www-secure.symantec.com/connect/sites/default/files/TestCertificateIncidentReportOwnedDomains.pdf">164 certificates</a> over 76 domains and <a href="https://www-secure.symantec.com/connect/sites/default/files/TestCertificateIncidentReportUnregisteredv2.pdf">2,458 certificates</a> issued for domains that were never registered.<br /><span><br /></span>It&#8217;s obviously concerning that a CA would have such a long-running issue and that they would be unable to assess its scope after being alerted to it and conducting an audit. Therefore, as a first step, we will require that, as of June 1st, 2016, all certificates issued by Symantec itself support Certificate Transparency. 
In this case, logging of non-EV certificates would have provided significantly greater insight into the problem and may have allowed the problem to be detected sooner.<br /><span><br /></span>After this date, certificates newly issued by Symantec that do not conform to the Chromium Certificate Transparency policy may result in interstitials or other problems when used in Google products.<br /><span><br /></span>More immediately, we are requesting of Symantec that they further update their public incident report with:<br /><ol><li>A post-mortem analysis that details why they did not detect the additional certificates that we found.</li><li>Details of each of the failures to uphold the relevant Baseline Requirements and EV Guidelines and what they believe the individual root cause was for each failure.</li></ol>We are also requesting that Symantec provide us with a detailed set of steps they will take to correct and prevent each of the identified failures, as well as a timeline for when they expect to complete such work. Symantec may consider this latter information to be confidential and so we are not requesting that this be made public.<br /><div><br /></div><div>Following the implementation of these corrective steps, we expect Symantec to undergo a Point-in-time Readiness Assessment and a third-party security audit. 
The point-in-time assessment will establish Symantec&#8217;s conformance to each of these standards:</div><div><ul><li>WebTrust Principles and Criteria for Certification Authorities</li><li>WebTrust Principles and Criteria for Certification Authorities &#8211; SSL Baseline with Network Security</li><li>WebTrust Principles and Criteria for Certification Authorities &#8211; Extended Validation SSL</li></ul><div><br /></div><div>The third-party security audit must assess:&#160;</div><div><ul><li>The veracity of Symantec&#8217;s claims that at no time private keys were exposed to Symantec employees by the tool.</li><li>That Symantec employees could not use the tool in question to obtain certificates for which the employee controlled the private key.</li><li>That Symantec&#8217;s audit logging mechanism is reasonably protected from modification, deletion, or tampering, as described in Section 5.4.4 of their CPS.</li></ul></div><div><br /></div><div>We may take further action as additional information becomes available to us.</div></div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Ryan Sleevi, Software Engineer</span><br /><br /><i>This post updates our <a href="https://googleonlinesecurity.blogspot.com/2015/09/improved-digital-certificate-security.html">previous notification</a> of a misissued certificate for google.com</i><br /><br />Following our notification, Symantec published <a href="https://www-secure.symantec.com/connect/sites/default/files/Test_Certificates_Incident_Final_Report_10_13_2015v3b.pdf">a report</a> in response to our inquiries and disclosed that 23 test certificates had been issued without the domain owner’s knowledge, covering five organizations, including Google and Opera.<br /><br />However, we were still able to find several more questionable certificates using only the Certificate Transparency logs and a few minutes of work. We shared these results with other root store operators on October 6th, to allow them to independently assess and verify our research.<br /><br />Symantec performed another audit and, on October 12th, announced that they had found an additional <a href="https://www-secure.symantec.com/connect/sites/default/files/TestCertificateIncidentReportOwnedDomains.pdf">164 certificates</a> over 76 domains and <a href="https://www-secure.symantec.com/connect/sites/default/files/TestCertificateIncidentReportUnregisteredv2.pdf">2,458 certificates</a> issued for domains that were never registered.<br /><span class="byline-author"><br /></span>It’s obviously concerning that a CA would have such a long-running issue and that they would be unable to assess its scope after being alerted to it and conducting an audit. Therefore, as a first step, we will require that, as of June 1st, 2016, all certificates issued by Symantec itself support Certificate Transparency. 
In this case, logging of non-EV certificates would have provided significantly greater insight into the problem and may have allowed the problem to be detected sooner.<br /><span class="byline-author"><br /></span>After this date, certificates newly issued by Symantec that do not conform to the Chromium Certificate Transparency policy may result in interstitials or other problems when used in Google products.<br /><span class="byline-author"><br /></span>More immediately, we are requesting of Symantec that they further update their public incident report with:<br /><ol><li>A post-mortem analysis that details why they did not detect the additional certificates that we found.</li><li>Details of each of the failures to uphold the relevant Baseline Requirements and EV Guidelines and what they believe the individual root cause was for each failure.</li></ol>We are also requesting that Symantec provide us with a detailed set of steps they will take to correct and prevent each of the identified failures, as well as a timeline for when they expect to complete such work. Symantec may consider this latter information to be confidential and so we are not requesting that this be made public.<br /><div><br /></div><div>Following the implementation of these corrective steps, we expect Symantec to undergo a Point-in-time Readiness Assessment and a third-party security audit. 
The point-in-time assessment will establish Symantec’s conformance to each of these standards:</div><div><ul><li>WebTrust Principles and Criteria for Certification Authorities</li><li>WebTrust Principles and Criteria for Certification Authorities – SSL Baseline with Network Security</li><li>WebTrust Principles and Criteria for Certification Authorities – Extended Validation SSL</li></ul><div><br /></div><div>The third-party security audit must assess:&nbsp;</div><div><ul><li>The veracity of Symantec’s claims that at no time private keys were exposed to Symantec employees by the tool.</li><li>That Symantec employees could not use the tool in question to obtain certificates for which the employee controlled the private key.</li><li>That Symantec’s audit logging mechanism is reasonably protected from modification, deletion, or tampering, as described in Section 5.4.4 of their CPS.</li></ul></div><div><br /></div><div>We may take further action as additional information becomes available to us.</div></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/sustaining-digital-certificate-security/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Behind the red warning: more info about online site safety</title>
		<link>https://googledata.org/google-online-security/behind-the-red-warning-more-info-about-online-site-safety/</link>
		<comments>https://googledata.org/google-online-security/behind-the-red-warning-more-info-about-online-site-safety/#comments</comments>
		<pubDate>Tue, 20 Oct 2015 22:21:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=5b58adf7421a589c173c979fef83556a</guid>
		<description><![CDATA[<span>Posted by&#160;</span><br /><span>Adrienne Porter Felt, Chrome Security Engineer and Warning Wizard</span><br /><span>Emily Schechter, Safe Browsing Program Manager and Menace to Malware</span><br /><span>Ke Wang, Safe Browsing Engineer and Developer of Defense</span><br /><br />You&#8217;re browsing the web, checking out the latest news on your favorite band, when suddenly you see a red warning screen: &#8220;The site ahead contains malware.&#8221; These warnings aren&#8217;t new&#8212;since 2006, Google Safe Browsing has shown them when you navigate to an unsafe site. The warnings protect you from harms caused by unsafe sites, such as malware infections and phishing attacks. But it hasn&#8217;t always been clear why a specific website triggers a warning, and you may want to learn more.<br /><br />To demystify these warnings, we&#8217;re launching a <a href="https://www.google.com/transparencyreport/safebrowsing/diagnostic/">Site Status section</a> in the Transparency Report. The next time you come across a Safe Browsing warning, you can search for the blocked website in the Transparency Report to learn why it&#8217;s been flagged by our systems.<br /><br />The new Site Status section of the Transparency Report replaces our previous <a href="http://googleonlinesecurity.blogspot.com/2008/05/safe-browsing-diagnostic-to-rescue.html">Safe Browsing diagnostic page</a>. It includes a clearer interface and simpler explanations of the issues, such as details for sites that host <a href="https://www.google.com/about/company/unwanted-software-policy.html">unwanted software</a>. 
We&#8217;ve added it to the Transparency Report so that the Safe Browsing section of the report is a one-stop shop for information to help you understand what Safe Browsing is and how it works.<br /><div><a href="http://2.bp.blogspot.com/-hvm45VU4jPc/ViagY5n4oSI/AAAAAAAAAI0/5EcSgxXdQ5M/s1600/SiteStatus.jpg"><img border="0" height="451" src="http://2.bp.blogspot.com/-hvm45VU4jPc/ViagY5n4oSI/AAAAAAAAAI0/5EcSgxXdQ5M/s640/SiteStatus.jpg" width="640"></a></div><div><br /></div>If a favorite website shows up as &#8220;dangerous,&#8221; it&#8217;s often due to user-uploaded bad content or a temporary malware infection. The Site Status will return to normal once the webmaster has cleaned up the website. To help speed up this process, we automatically give the webmaster a heads-up about the problem via <a href="http://g.co/SearchConsole">Search Console</a>; if you use <a href="https://www.google.com/analytics/">Google Analytics</a>, we&#8217;ll also warn you there if your site has malware on it. (Webmasters, check the <a href="https://www.google.com/webmasters/hacked/">help center</a> to learn how to remove malware from your websites.)<br /><br />We&#8217;re constantly working to keep users safe and informed online. Visit the updated Site Status section in the <a href="https://www.google.com/transparencyreport/safebrowsing/diagnostic/">Transparency Report</a> to experience it yourself.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by&nbsp;</span><br /><span class="byline-author">Adrienne Porter Felt, Chrome Security Engineer and Warning Wizard</span><br /><span class="byline-author">Emily Schechter, Safe Browsing Program Manager and Menace to Malware</span><br /><span class="byline-author">Ke Wang, Safe Browsing Engineer and Developer of Defense</span><br /><br />You’re browsing the web, checking out the latest news on your favorite band, when suddenly you see a red warning screen: “The site ahead contains malware.” These warnings aren’t new—since 2006, Google Safe Browsing has shown them when you navigate to an unsafe site. The warnings protect you from harms caused by unsafe sites, such as malware infections and phishing attacks. But it hasn’t always been clear why a specific website triggers a warning, and you may want to learn more.<br /><br />To demystify these warnings, we’re launching a <a href="https://www.google.com/transparencyreport/safebrowsing/diagnostic/">Site Status section</a> in the Transparency Report. The next time you come across a Safe Browsing warning, you can search for the blocked website in the Transparency Report to learn why it’s been flagged by our systems.<br /><br />The new Site Status section of the Transparency Report replaces our previous <a href="http://googleonlinesecurity.blogspot.com/2008/05/safe-browsing-diagnostic-to-rescue.html">Safe Browsing diagnostic page</a>. It includes a clearer interface and simpler explanations of the issues, such as details for sites that host <a href="https://www.google.com/about/company/unwanted-software-policy.html">unwanted software</a>. 
We’ve added it to the Transparency Report so that the Safe Browsing section of the report is a one-stop shop for information to help you understand what Safe Browsing is and how it works.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-hvm45VU4jPc/ViagY5n4oSI/AAAAAAAAAI0/5EcSgxXdQ5M/s1600/SiteStatus.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="451" src="http://2.bp.blogspot.com/-hvm45VU4jPc/ViagY5n4oSI/AAAAAAAAAI0/5EcSgxXdQ5M/s640/SiteStatus.jpg" width="640" /></a></div><div style="text-align: center;"><br /></div>If a favorite website shows up as “dangerous,” it’s often due to user-uploaded bad content or a temporary malware infection. The Site Status will return to normal once the webmaster has cleaned up the website. To help speed up this process, we automatically give the webmaster a heads-up about the problem via <a href="http://g.co/SearchConsole">Search Console</a>; if you use <a href="https://www.google.com/analytics/">Google Analytics</a>, we’ll also warn you there if your site has malware on it. (Webmasters, check the <a href="https://www.google.com/webmasters/hacked/">help center</a> to learn how to remove malware from your websites.)<br /><br />We’re constantly working to keep users safe and informed online. Visit the updated Site Status section in the <a href="https://www.google.com/transparencyreport/safebrowsing/diagnostic/">Transparency Report</a> to experience it yourself.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/behind-the-red-warning-more-info-about-online-site-safety/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Simplifying the Page Security Icon in Chrome</title>
		<link>https://googledata.org/google-online-security/simplifying-the-page-security-icon-in-chrome/</link>
		<comments>https://googledata.org/google-online-security/simplifying-the-page-security-icon-in-chrome/#comments</comments>
		<pubDate>Tue, 13 Oct 2015 16:27:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=715409a1aba47866ae74cb5b613789ed</guid>
		<description><![CDATA[<span>Posted by Lucas Garron and Chris Palmer, Chrome security team</span><br /><br />Sometimes, websites try to use HTTPS to be secure and get it mostly right, but they have minor errors. Until recently, Chrome marked this security state with a yellow &#8220;caution triangle&#8221; badge on the page security icon in the URL bar.<br /><br />Starting with version 46, Chrome will mark the &#8220;HTTPS with Minor Errors&#8221; state using the same neutral page icon as HTTP pages.<br /><div><a href="http://4.bp.blogspot.com/-6PB1NmSrPb0/VhwmosBiZdI/AAAAAAAAAIg/w_Y3XwZS_wk/s1600/dubious-as-neutral.png"><img border="0" height="336" src="http://4.bp.blogspot.com/-6PB1NmSrPb0/VhwmosBiZdI/AAAAAAAAAIg/w_Y3XwZS_wk/s640/dubious-as-neutral.png" width="640"></a></div><div>There are two reasons for this:</div><div></div><ol><li>This change is a better visual indication of the security state of the page relative to HTTP.</li><li>Chrome users will have fewer security states to learn.</li></ol><div><b><br /></b></div><div><b>(Not) Warning About Mixed Content</b></div><div><br /></div><div>This change will mainly affect HTTPS pages that contain certain <a href="https://developers.google.com/web/fundamentals/discovery-and-distribution/avoid-mixed-content/fixing-mixed-content?hl=en">mixed content</a>, such as HTTP images.</div><div><br /></div><div>Site operators face a dilemma: Switching an HTTP site to HTTPS can initially result in mixed content, which is undesirable in the long term but important for debugging the migration. During this process the site may not be fully secured, but it will usually not be less secure than before.</div><div><br /></div><div>Removing the yellow &#8220;caution triangle&#8221; badge means that most users will not perceive a warning on mixed content pages during such a migration. 
We hope that this will encourage site operators to switch to HTTPS sooner rather than later.</div><div><br /></div><div><b>Fewer Security States</b></div><div><br /></div><div>This change will reduce the number of page security states in Chrome from four to three.</div><div><br /></div><div>We have to strike a balance: representing the security state of a webpage as accurately as possible, while making sure users are not overwhelmed with too many possible states and details. We&#8217;ve come to understand that our yellow &#8220;caution triangle&#8221; badge can be confusing when compared to the HTTP page icon, and we believe that it is better not to emphasize the difference in security between these two states to most users. For developers and other interested users, it will still be possible to tell the difference by checking whether the URL begins with &#8220;https://&#8221;.</div><div><br /></div><div>In the long term, we hope that most sites on the internet will become secure, and we <a href="https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure">plan</a> to reduce the icon to just two states: secure and not secure. The change announced in this post is a small step in that direction.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Lucas Garron and Chris Palmer, Chrome security team</span><br /><br />Sometimes, websites try to use HTTPS to be secure and get it mostly right, but they have minor errors. Until recently, Chrome marked this security state with a yellow “caution triangle” badge on the page security icon in the URL bar.<br /><br />Starting with version 46, Chrome will mark the “HTTPS with Minor Errors” state using the same neutral page icon as HTTP pages.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-6PB1NmSrPb0/VhwmosBiZdI/AAAAAAAAAIg/w_Y3XwZS_wk/s1600/dubious-as-neutral.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="336" src="http://4.bp.blogspot.com/-6PB1NmSrPb0/VhwmosBiZdI/AAAAAAAAAIg/w_Y3XwZS_wk/s640/dubious-as-neutral.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;">There are two reasons for this:</div><div class="separator" style="clear: both; text-align: left;"></div><ol><li>This change is a better visual indication of the security state of the page relative to HTTP.</li><li>Chrome users will have fewer security states to learn.</li></ol><div><b><br /></b></div><div><b>(Not) Warning About Mixed Content</b></div><div><br /></div><div>This change will mainly affect HTTPS pages that contain certain <a href="https://developers.google.com/web/fundamentals/discovery-and-distribution/avoid-mixed-content/fixing-mixed-content?hl=en">mixed content</a>, such as HTTP images.</div><div><br /></div><div>Site operators face a dilemma: Switching an HTTP site to HTTPS can initially result in mixed content, which is undesirable in the long term but important for debugging the migration. 
During this process the site may not be fully secured, but it will usually not be less secure than before.</div><div><br /></div><div>Removing the yellow “caution triangle” badge means that most users will not perceive a warning on mixed content pages during such a migration. We hope that this will encourage site operators to switch to HTTPS sooner rather than later.</div><div><br /></div><div><b>Fewer Security States</b></div><div><br /></div><div>This change will reduce the number of page security states in Chrome from four to three.</div><div><br /></div><div>We have to strike a balance: representing the security state of a webpage as accurately as possible, while making sure users are not overwhelmed with too many possible states and details. We’ve come to understand that our yellow “caution triangle” badge can be confusing when compared to the HTTP page icon, and we believe that it is better not to emphasize the difference in security between these two states to most users. For developers and other interested users, it will still be possible to tell the difference by checking whether the URL begins with “https://”.</div><div><br /></div><div>In the long term, we hope that most sites on the internet will become secure, and we <a href="https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure">plan</a> to reduce the icon to just two states: secure and not secure. The change announced in this post is a small step in that direction.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/simplifying-the-page-security-icon-in-chrome/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>HTTPS support coming to Blogspot</title>
		<link>https://googledata.org/google-online-security/https-support-coming-to-blogspot/</link>
		<comments>https://googledata.org/google-online-security/https-support-coming-to-blogspot/#comments</comments>
		<pubDate>Wed, 30 Sep 2015 17:16:00 +0000</pubDate>
		<dc:creator><![CDATA[Jo-el van Bergen]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=11f061538c79a7bd883f5461447f5b49</guid>
		<description><![CDATA[<span>Posted by Jo-el van Bergen, Software Engineer, Security.</span><br /><br />Since <a href="http://gmailblog.blogspot.com/2008/07/making-security-easier.html" target="_blank">2008</a>, we've worked to encrypt the connections between our users and Google servers. Over the years&#160;we've announced that Search, Gmail, Drive, and many other products have encrypted connections by default, and most recently, we've made a similar announcement for <a href="http://googleonlinesecurity.blogspot.com/2015/04/ads-take-step-towards-https-everywhere.html" target="_blank">our ads products</a>.  <br /><br />In this same vein, today we're expanding on the <a href="https://www.youtube.com/watch?v=cBhZ6S0PFCY&#38;utm_source=wmx_blog&#38;utm_medium=referral&#38;utm_campaign=tls_en_post" target="_blank">HTTPS Everywhere</a> mission and beginning an initial rollout of HTTPS support for Blogspot. HTTPS is a cornerstone of internet security as it provides several important benefits: it makes it harder for bad actors to steal information or track the activities of blog authors and visitors, it helps check that visitors open the correct website and aren&#8217;t being redirected to a malicious location, and it helps detect if a bad actor tries to change any data sent from Blogger to a blog visitor. <br /><br />While this initial rollout won&#8217;t support all of our Blogger users, we wanted to take the first step to make HTTPS available for Blogspot for those users who want to try it early. <br /><br />We&#8217;re rolling this out gradually, and Blogspot authors interested in enabling HTTPS support can begin opting in today. Simply log into <a href="https://www.blogger.com/" target="_blank">https://www.blogger.com</a>, click on the blog you&#8217;d like to enable HTTPS for, navigate to the Settings page, and select "yes" for "HTTPS Availability". Unfortunately, blogs with custom domains are not supported in this first version.  
<br /><div><a href="https://2.bp.blogspot.com/-i2Zz0p3UoX4/VgsPJGm9_fI/AAAAAAAAROA/HoN3rq-s93U/s1600/unnamed.png"><img border="0" height="385" src="http://2.bp.blogspot.com/-i2Zz0p3UoX4/VgsPJGm9_fI/AAAAAAAAROA/HoN3rq-s93U/s640/unnamed.png" width="640"></a></div>Once enabled, your blog will become accessible over both HTTP and HTTPS connections. Blogspot authors should be aware that if they choose to encrypt at this time, some of the current functionality of their blog may not work over HTTPS. This can be a result of the template, gadgets, and blog post content, and is often caused by <i><a href="https://developers.google.com/web/fundamentals/discovery-and-distribution/avoid-mixed-content/what-is-mixed-content" target="_blank">mixed content</a></i> errors, some of which may be <a href="https://support.google.com/blogger/answer/6286127?p=mixed_content" target="_blank">fixable by the author themselves</a>.  <br /><br />We&#8217;ll also be moving some of our own blogs over to HTTPS gradually, beginning with the <a href="https://googleblog.blogspot.com/" target="_blank">Official Google Blog</a> and the <a href="https://googleonlinesecurity.blogspot.com/" target="_blank">Google Online Security Blog</a>.  <br /><br />For the Blogspot authors who try this out, we&#8217;re interested to hear your <a href="https://productforums.google.com/forum/?hl=&#38;pli=1#!forum/blogger" target="_blank">feedback</a> while we continue to improve this feature and its capabilities! For more information, visit our <a href="https://support.google.com/blogger/?p=blog_security_https" target="_blank">Help Center</a>.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Jo-el van Bergen, Software Engineer, Security.</span><br /><br />Since <a href="http://gmailblog.blogspot.com/2008/07/making-security-easier.html" >2008</a>, we've worked to encrypt the connections between our users and Google servers. Over the years, we've announced that Search, Gmail, Drive, and many other products have encrypted connections by default, and most recently, we've made a similar announcement for <a href="http://googleonlinesecurity.blogspot.com/2015/04/ads-take-step-towards-https-everywhere.html" >our ads products</a>.  <br /><br />In this same vein, today we're expanding on the <a href="https://www.youtube.com/watch?v=cBhZ6S0PFCY&amp;utm_source=wmx_blog&amp;utm_medium=referral&amp;utm_campaign=tls_en_post" >HTTPS Everywhere</a> mission and beginning an initial rollout of HTTPS support for Blogspot. HTTPS is a cornerstone of internet security as it provides several important benefits: it makes it harder for bad actors to steal information or track the activities of blog authors and visitors, it helps check that visitors open the correct website and aren’t being redirected to a malicious location, and it helps detect if a bad actor tries to change any data sent from Blogger to a blog visitor. <br /><br />While this initial rollout won’t support all of our Blogger users, we wanted to take the first step and make HTTPS available on Blogspot for those users who want to try it early. <br /><br />We’re rolling this out gradually, and Blogspot authors interested in enabling HTTPS support can begin opting in today. Simply log into <a href="https://www.blogger.com/" >https://www.blogger.com</a>, click on the blog you’d like to enable HTTPS for, navigate to the Settings page, and select "yes" for "HTTPS Availability". Unfortunately, blogs with custom domains are not supported in this first version.  
<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-i2Zz0p3UoX4/VgsPJGm9_fI/AAAAAAAAROA/HoN3rq-s93U/s1600/unnamed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="385" src="http://2.bp.blogspot.com/-i2Zz0p3UoX4/VgsPJGm9_fI/AAAAAAAAROA/HoN3rq-s93U/s640/unnamed.png" width="640" /></a></div>Once enabled, your blog will become accessible over both HTTP and HTTPS connections. Blogspot authors should be aware that if they choose to encrypt at this time, some of the current functionality of their blog may not work over HTTPS. This can be a result of templates, gadgets, and blog post content, and is often caused by <i><a href="https://developers.google.com/web/fundamentals/discovery-and-distribution/avoid-mixed-content/what-is-mixed-content" >mixed content</a></i> errors, some of which may be <a href="https://support.google.com/blogger/answer/6286127?p=mixed_content" >fixable by authors themselves</a>.  <br /><br />We’ll also be moving some of our own blogs over to HTTPS gradually, beginning with the <a href="https://googleblog.blogspot.com/" >Official Google Blog</a> and the <a href="https://googleonlinesecurity.blogspot.com/" >Google Online Security Blog</a>.  <br /><br />For the Blogspot authors who try this out, we’re interested to hear your <a href="https://productforums.google.com/forum/?hl=&amp;pli=1#!forum/blogger" >feedback</a> while we continue to improve this feature and its capabilities! For more information, visit our <a href="https://support.google.com/blogger/?p=blog_security_https" >Help Center</a>.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/https-support-coming-to-blogspot/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New research: The underground market fueling for-profit abuse</title>
		<link>https://googledata.org/google-online-security/new-research-the-underground-market-fueling-for-profit-abuse/</link>
		<comments>https://googledata.org/google-online-security/new-research-the-underground-market-fueling-for-profit-abuse/#comments</comments>
		<pubDate>Thu, 24 Sep 2015 17:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=857d71fb1f5b332c4be44ddb152d525c</guid>
		<description><![CDATA[<span>Posted by Kurt Thomas and Elie Bursztein, Google Anti-Fraud and Abuse Research</span><br /><span><br /></span><span>Recently, we teamed up with top researchers exploring innovative anti-abuse strategies to build a holistic understanding of for-profit abuse. The full report, which you can read <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/43798.pdf">here</a>, was presented in June at the <a href="http://weis2015.econinfosec.org/program.html">Workshop on the Economics of Information Security</a>&#160;2015.</span><br /><span><br /></span><span>Over the last decade, Internet crime has matured into an underground economy where a large number of globally distributed criminals trade in data, knowledge, and services specifically geared towards defrauding users and businesses. Within this black market, criminals buy and sell compromised machines, scam hosting, exploit kits, and wholesale access to pilfered user records including usernames and passwords, credit card numbers, and other sensitive personal data. The availability of such specialized resources has transformed for-profit abuse into a cooperative effort among criminals, each acting as a cog in a supply chain.</span><br /><span><br /></span><span><b>Profiting from abuse: a bird&#8217;s eye view</b></span><br /><span><b><br /></b></span><span>Here&#8217;s an example of the underground value chain required to make money from spamming knock-off luxury products:</span><br /><br /><div></div><span><span><br /></span>In aggregate, the problem may appear intractable. However, if we view this scenario in an economic light, then increasing the cost of fake accounts, phone numbers, or compromised websites cuts into the profitability of abuse. 
In the end, abuse propped up by cost-ineffective resources will crumble.</span><br /><span><br /></span><span><b>Collaborating to better understand the underground</b></span><br /><span><b><br /></b></span><span>Given the complex underbelly of abuse, we pulled together experts from industry and academia to build a systematic understanding of how criminals operate. Our previous example represents just one configuration of a value chain. In our example, revenue originates solely from victims buying counterfeit products. Criminals could adapt this strategy to scam users into paying for fake anti-virus, defraud advertisers via clickbots, or liquidate a victim&#8217;s banking assets. Regardless of the composition, we argue there is always a profit center through which victims transfer new capital into the underground. These schemes form a spectrum ranging from selling products to unwitting victims to outright theft. A medley of alternatives such as dating scams, call-center scams, premium SMS fraud, DDoS extortion, or even stealing and re-selling gaming assets all fall within this spectrum and ultimately derive a payout from victims outside the underground.</span><br /><span><br /></span><span>These profit centers are in turn propped up by an ecosystem of support infrastructure that can be configured arbitrarily by criminals per their requirements. This infrastructure includes compromised hosts, human labor, networking and hosting, and accounts and engagement&#8212;all available for a fee. For example, 1,000 Google accounts cost on the order of $170, compared to CAPTCHAs, which cost $1 per thousand. These costs reflect socio-economic factors as well as the impact of technical, legal, and law enforcement interventions on the availability of resources.</span><br /><span><br /></span><span><b>Redefining the abuse arms race</b></span><br /><span><br /></span><span>Client and server-side security has dominated industry&#8217;s response to digital abuse over the last decade. 
The spectrum of solutions&#8212;automated software updates, personal anti-virus, network packet scanners, firewalls, spam filters, password managers, and two-factor authentication to name a few&#8212;all attempt to reduce the attack surface that criminals can penetrate.</span><br /><span><br /></span><span>While these safeguards have significantly improved user security, they create an arms race: criminals adapt or find the subset of systems that remain vulnerable and resume operation.</span><br /><span><br /></span><span>To overcome this reactive defense cycle, we are improving our approach to abuse fighting to also strike at the support infrastructure, financial centers, and actors that incentivize abuse. By exploring the value chain required to bulk register accounts, we were able to make Google accounts <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/43134.pdf">30&#8211;40% more expensive on the black market</a>. Success stories from our academic partners include <a href="http://cseweb.ucsd.edu/~savage/papers/CCS12Priceless.pdf">disrupting payment processing</a> for illegal pharmacies and counterfeit software outlets advertised by spam, <a href="http://www.icir.org/vern/papers/twitter-acct-purch.usesec13.pdf">cutting off access to fake accounts</a> that pollute online services, and <a href="https://seclab.cs.ucsb.edu/media/uploads/papers/torpig.pdf">disabling the command and control</a> infrastructure of botnets.</span>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Kurt Thomas and Elie Bursztein, Google Anti-Fraud and Abuse Research</span><br /><span class="byline-author"><br /></span><span class="byline-author">Recently, we teamed up with top researchers exploring innovative anti-abuse strategies to build a holistic understanding of for-profit abuse. The full report, which you can read <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/43798.pdf">here</a>, was presented in June at the <a href="http://weis2015.econinfosec.org/program.html">Workshop on the Economics of Information Security</a>&nbsp;2015.</span><br /><span class="byline-author"><br /></span><span class="byline-author">Over the last decade, Internet crime has matured into an underground economy where a large number of globally distributed criminals trade in data, knowledge, and services specifically geared towards defrauding users and businesses. Within this black market, criminals buy and sell compromised machines, scam hosting, exploit kits, and wholesale access to pilfered user records including usernames and passwords, credit card numbers, and other sensitive personal data. 
The availability of such specialized resources has transformed for-profit abuse into a cooperative effort among criminals, each acting as a cog in a supply chain.</span><br /><span class="byline-author"><br /></span><span class="byline-author"><b>Profiting from abuse: a bird’s eye view</b></span><br /><span class="byline-author"><b><br /></b></span><span class="byline-author">Here’s an example of the underground value chain required to make money from spamming knock-off luxury products:</span><br /><br /><div style="text-align: center;"><iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/sZ-RWIy7dkw" width="560"></iframe></div><span class="byline-author"><span class="byline-author"><br /></span>In aggregate, the problem may appear intractable. However, if we view this scenario in an economic light, then increasing the cost of fake accounts, phone numbers, or compromised websites cuts into the profitability of abuse. In the end, abuse propped up by cost-ineffective resources will crumble.</span><br /><span class="byline-author"><br /></span><span class="byline-author"><b>Collaborating to better understand the underground</b></span><br /><span class="byline-author"><b><br /></b></span><span class="byline-author">Given the complex underbelly of abuse, we pulled together experts from industry and academia to build a systematic understanding of how criminals operate. Our previous example represents just one configuration of a value chain. In our example, revenue originates solely from victims buying counterfeit products. Criminals could adapt this strategy to scam users into paying for fake anti-virus, defraud advertisers via clickbots, or liquidate a victim’s banking assets. Regardless of the composition, we argue there is always a profit center through which victims transfer new capital into the underground. These schemes form a spectrum ranging from selling products to unwitting victims to outright theft. 
A medley of alternatives such as dating scams, call-center scams, premium SMS fraud, DDoS extortion, or even stealing and re-selling gaming assets all fall within this spectrum and ultimately derive a payout from victims outside the underground.</span><br /><span class="byline-author"><br /></span><span class="byline-author">These profit centers are in turn propped up by an ecosystem of support infrastructure that can be configured arbitrarily by criminals per their requirements. This infrastructure includes compromised hosts, human labor, networking and hosting, and accounts and engagement—all available for a fee. For example, 1,000 Google accounts cost on the order of $170, compared to CAPTCHAs, which cost $1 per thousand. These costs reflect socio-economic factors as well as the impact of technical, legal, and law enforcement interventions on the availability of resources.</span><br /><span class="byline-author"><br /></span><span class="byline-author"><b>Redefining the abuse arms race</b></span><br /><span class="byline-author"><br /></span><span class="byline-author">Client and server-side security has dominated industry’s response to digital abuse over the last decade. The spectrum of solutions—automated software updates, personal anti-virus, network packet scanners, firewalls, spam filters, password managers, and two-factor authentication to name a few—all attempt to reduce the attack surface that criminals can penetrate.</span><br /><span class="byline-author"><br /></span><span class="byline-author">While these safeguards have significantly improved user security, they create an arms race: criminals adapt or find the subset of systems that remain vulnerable and resume operation.</span><br /><span class="byline-author"><br /></span><span class="byline-author">To overcome this reactive defense cycle, we are improving our approach to abuse fighting to also strike at the support infrastructure, financial centers, and actors that incentivize abuse. 
By exploring the value chain required to bulk register accounts, we were able to make Google accounts <a href="http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/43134.pdf">30–40% more expensive on the black market</a>. Success stories from our academic partners include <a href="http://cseweb.ucsd.edu/~savage/papers/CCS12Priceless.pdf">disrupting payment processing</a> for illegal pharmacies and counterfeit software outlets advertised by spam, <a href="http://www.icir.org/vern/papers/twitter-acct-purch.usesec13.pdf">cutting off access to fake accounts</a> that pollute online services, and <a href="https://seclab.cs.ucsb.edu/media/uploads/papers/torpig.pdf">disabling the command and control</a> infrastructure of botnets.</span>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/new-research-the-underground-market-fueling-for-profit-abuse/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Improved Digital Certificate Security</title>
		<link>https://googledata.org/google-online-security/improved-digital-certificate-security/</link>
		<comments>https://googledata.org/google-online-security/improved-digital-certificate-security/#comments</comments>
		<pubDate>Sat, 19 Sep 2015 01:23:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=a9a166f079e67ba1459eadeb83235940</guid>
		<description><![CDATA[<span>Posted by Stephan Somogyi, Security &#38; Privacy PM, and Adam Eijdenberg, Certificate Transparency PM</span><br /><br />On September 14, around 19:20 GMT, Symantec&#8217;s Thawte-branded CA issued an Extended Validation (EV) pre-certificate for the domains <a href="http://google.com/">google.com</a> and <a href="http://www.google.com/">www.google.com</a>. This pre-certificate was neither requested nor authorized by Google.<br /><br />We discovered this issuance via <a href="http://www.certificate-transparency.org/">Certificate Transparency</a> logs, which Chrome has required for EV certificates starting January 1st of this year. The issuance of this pre-certificate was recorded in both Google-operated and DigiCert-operated logs.<br /><br />During our ongoing discussions with Symantec we determined that the issuance occurred during a Symantec-internal testing process.<br /><br />We have updated Chrome&#8217;s revocation metadata to include the public key of the misissued certificate. Additionally, the issued pre-certificate was valid only for one day.<br /><br />Our primary consideration in these situations is always the security and privacy of our users; we currently do not have reason to believe they were at risk.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Stephan Somogyi, Security &amp; Privacy PM, and Adam Eijdenberg, Certificate Transparency PM</span><br /><br />On September 14, around 19:20 GMT, Symantec’s Thawte-branded CA issued an Extended Validation (EV) pre-certificate for the domains <a href="http://google.com/">google.com</a> and <a href="http://www.google.com/">www.google.com</a>. This pre-certificate was neither requested nor authorized by Google.<br /><br />We discovered this issuance via <a href="http://www.certificate-transparency.org/">Certificate Transparency</a> logs, which Chrome has required for EV certificates starting January 1st of this year. The issuance of this pre-certificate was recorded in both Google-operated and DigiCert-operated logs.<br /><br />During our ongoing discussions with Symantec we determined that the issuance occurred during a Symantec-internal testing process.<br /><br />We have updated Chrome’s revocation metadata to include the public key of the misissued certificate. Additionally, the issued pre-certificate was valid only for one day.<br /><br />Our primary consideration in these situations is always the security and privacy of our users; we currently do not have reason to believe they were at risk.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/improved-digital-certificate-security/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Disabling SSLv3 and RC4</title>
		<link>https://googledata.org/google-online-security/disabling-sslv3-and-rc4/</link>
		<comments>https://googledata.org/google-online-security/disabling-sslv3-and-rc4/#comments</comments>
		<pubDate>Fri, 18 Sep 2015 00:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=2bb0f8f2502433614a7a5bdb43941fd0</guid>
		<description><![CDATA[<span>Posted by Adam Langley, Security Engineer</span><br /><span><br /></span><span>As the <a href="http://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">previously</a> <a href="http://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">announced</a> transition to SHA-256 certificates is nearing completion, we are planning the next changes to Google&#8217;s TLS configuration. As part of those changes, we expect to disable support for SSLv3 and RC4 in the medium term.</span><br /><span><br /></span><span>SSLv3 has been <a href="https://tools.ietf.org/html/rfc2246">obsolete</a> for over 16 years and is so full of known problems that the IETF has decided that it <a href="https://tools.ietf.org/html/rfc7568">must no longer be used</a>. RC4 is a 28-year-old cipher that has done remarkably well, but is now the subject of <a href="https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/vanhoef">multiple</a> <a href="https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/garman">attacks</a> at security conferences. The IETF has decided that RC4 also warrants a statement that it too <a href="https://tools.ietf.org/html/rfc7465">must no longer be used</a>.</span><br /><span><br /></span><span>Because of these issues, we expect to disable both SSLv3 and RC4 support at Google&#8217;s frontend servers and, over time, across our products in general, including Chrome, Android, our webcrawlers and our SMTP servers. (Indeed, SSLv3 support has already been removed from Chrome.) 
The <a href="https://www.trustworthyinternet.org/ssl-pulse/">SSL Pulse</a> survey of the top 200,000 HTTPS sites finds that, already, 42% of sites have disabled RC4 and 65% of sites have disabled SSLv3.</span><br /><span><br /></span><span>If your TLS client, webserver or email server requires the use of SSLv3 or RC4 then the time to update was some years ago, but better late than never. However, note that just because you might be using RC4 today doesn&#8217;t mean that your client or website will stop working: TLS can negotiate cipher suites and problems will only occur if you don&#8217;t support anything but RC4. (Although if you&#8217;re using SSLv3 today then things will stop working when we disable it because SSLv3 is already a last resort.)</span><br /><span><br /></span><span><b>Minimum standards for TLS clients</b></span><br /><span><b><br /></b></span><span>Google's frontend servers do a lot more than terminate connections for browsers these days; there are also lots of embedded systems talking to Google using TLS. In order to reduce the amount of work that the deprecation of outdated cryptography causes, we are also announcing suggested minimum standards for TLS clients today. This applies to TLS clients in general: certainly those that are using TLS as part of HTTPS, but also, for example, SMTP servers using STARTTLS.</span><br /><span><br /></span><span>We can't predict the future, but devices that meet these requirements are likely to be able to continue functioning without changes to their TLS configuration up to 2020. 
You should expect these standards to be required in cases where Google runs certification programs, but it&#8217;s a very good idea to meet them anyway.</span><br /><span><br /></span><span>Devices that don&#8217;t meet these standards aren&#8217;t going to stop working anytime soon (unless they depend on RC4 or SSLv3&#8212;see above), but they might be affected by further TLS changes in the coming years.</span><br /><span><br /></span><span>Specifically, we are requiring:</span><br /><br /><ol><li>TLS 1.2 must be supported.</li><li>A Server Name Indication (SNI) extension must be included in the handshake and must contain the domain that's being connected to.</li><li>The cipher suite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 must be supported with P-256 and uncompressed points.</li><li>At least the certificates in <a href="https://pki.google.com/roots.pem">https://pki.google.com/roots.pem</a> must be trusted.</li><li>Certificate handling must be able to support DNS Subject Alternative Names and those SANs may include a single wildcard as the left-most label in the name.</li></ol><div><br /></div><div>In order to make testing as easy as possible, we have set up <a href="https://cert-test.sandbox.google.com/">https://cert-test.sandbox.google.com</a>, which requires points 1&#8211;3 to be met in order to make a successful connection. Thus, if your TLS client can&#8217;t connect to that host then you need to update your libraries or configuration.</div><div><br /></div><div><b>No longer serving a cross-sign to Equifax</b></div><div><br /></div><div>At the moment the certificate chains that Google properties serve most often include a cross-sign from our CA, GeoTrust, to our previous CA, Equifax. This allows clients that only trust our previous CA to continue to function. However, this cross-sign is only a transitional workaround for such clients and we will be removing it in the future. 
Clients that include our required set of root CAs (at <a href="https://pki.google.com/roots.pem">https://pki.google.com/roots.pem</a>) will not be affected, but any that don&#8217;t include the needed GeoTrust root may stop working.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Adam Langley, Security Engineer</span><br /><span class="byline-author"><br /></span><span class="byline-author">As the <a href="http://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">previously</a> <a href="http://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html">announced</a> transition to SHA-256 certificates is nearing completion, we are planning the next changes to Google’s TLS configuration. As part of those changes, we expect to disable support for SSLv3 and RC4 in the medium term.</span><br /><span class="byline-author"><br /></span><span class="byline-author">SSLv3 has been <a href="https://tools.ietf.org/html/rfc2246">obsolete</a> for over 16 years and is so full of known problems that the IETF has decided that it <a href="https://tools.ietf.org/html/rfc7568">must no longer be used</a>. RC4 is a 28-year-old cipher that has done remarkably well, but is now the subject of <a href="https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/vanhoef">multiple</a> <a href="https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/garman">attacks</a> at security conferences. The IETF has decided that RC4 also warrants a statement that it too <a href="https://tools.ietf.org/html/rfc7465">must no longer be used</a>.</span><br /><span class="byline-author"><br /></span><span class="byline-author">Because of these issues, we expect to disable both SSLv3 and RC4 support at Google’s frontend servers and, over time, across our products in general, including Chrome, Android, our webcrawlers and our SMTP servers. (Indeed, SSLv3 support has already been removed from Chrome.) 
The <a href="https://www.trustworthyinternet.org/ssl-pulse/">SSL Pulse</a> survey of the top 200,000 HTTPS sites finds that, already, 42% of sites have disabled RC4 and 65% of sites have disabled SSLv3.</span><br /><span class="byline-author"><br /></span><span class="byline-author">If your TLS client, webserver or email server requires the use of SSLv3 or RC4 then the time to update was some years ago, but better late than never. However, note that just because you might be using RC4 today doesn’t mean that your client or website will stop working: TLS can negotiate cipher suites and problems will only occur if you don’t support anything but RC4. (Although if you’re using SSLv3 today then things will stop working when we disable it because SSLv3 is already a last resort.)</span><br /><span class="byline-author"><br /></span><span class="byline-author"><b>Minimum standards for TLS clients</b></span><br /><span class="byline-author"><b><br /></b></span><span class="byline-author">Google's frontend servers do a lot more than terminate connections for browsers these days; there are also lots of embedded systems talking to Google using TLS. In order to reduce the amount of work that the deprecation of outdated cryptography causes, we are also announcing suggested minimum standards for TLS clients today. This applies to TLS clients in general: certainly those that are using TLS as part of HTTPS, but also, for example, SMTP servers using STARTTLS.</span><br /><span class="byline-author"><br /></span><span class="byline-author">We can't predict the future, but devices that meet these requirements are likely to be able to continue functioning without changes to their TLS configuration up to 2020. 
You should expect these standards to be required in cases where Google runs certification programs, but it’s a very good idea to meet them anyway.</span><br /><span class="byline-author"><br /></span><span class="byline-author">Devices that don’t meet these standards aren’t going to stop working anytime soon (unless they depend on RC4 or SSLv3—see above), but they might be affected by further TLS changes in the coming years.</span><br /><span class="byline-author"><br /></span><span class="byline-author">Specifically, we are requiring:</span><br /><br /><ol><li>TLS 1.2 must be supported.</li><li>A Server Name Indication (SNI) extension must be included in the handshake and must contain the domain that's being connected to.</li><li>The cipher suite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 must be supported with P-256 and uncompressed points.</li><li>At least the certificates in <a href="https://pki.google.com/roots.pem">https://pki.google.com/roots.pem</a> must be trusted.</li><li>Certificate handling must be able to support DNS Subject Alternative Names and those SANs may include a single wildcard as the left-most label in the name.</li></ol><div><br /></div><div>In order to make testing as easy as possible, we have set up <a href="https://cert-test.sandbox.google.com/">https://cert-test.sandbox.google.com</a>, which requires points 1–3 to be met in order to make a successful connection. Thus, if your TLS client can’t connect to that host then you need to update your libraries or configuration.</div><div><br /></div><div><b>No longer serving a cross-sign to Equifax</b></div><div><br /></div><div>At the moment the certificate chains that Google properties serve most often include a cross-sign from our CA, GeoTrust, to our previous CA, Equifax. This allows clients that only trust our previous CA to continue to function. However, this cross-sign is only a transitional workaround for such clients and we will be removing it in the future. 
Clients that include our required set of root CAs (at <a href="https://pki.google.com/roots.pem">https://pki.google.com/roots.pem</a>) will not be affected, but any that don’t include the needed GeoTrust root may stop working.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/disabling-sslv3-and-rc4/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cutting unwanted ad injectors out of advertising</title>
		<link>https://googledata.org/google-online-security/cutting-unwanted-ad-injectors-out-of-advertising/</link>
		<comments>https://googledata.org/google-online-security/cutting-unwanted-ad-injectors-out-of-advertising/#comments</comments>
		<pubDate>Thu, 10 Sep 2015 16:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=93849c7a48eab8229559d95be9a32224</guid>
		<description><![CDATA[<span>Posted by Vegard Johnsen, Product Manager, Google Ads Traffic Quality</span><br /><br />For the last few months, we&#8217;ve been raising awareness of the ad injection economy, showing how unwanted ad injectors can <a href="http://googleonlinesecurity.blogspot.com/2015/03/out-with-unwanted-ad-injectors.html">hurt user experience</a>, <a href="http://googleonlinesecurity.blogspot.com/2015/04/beyond-annoyance-security-risks-of.html">jeopardize user security</a>, and <a href="http://googleonlinesecurity.blogspot.com/2015/05/new-research-ad-injection-economy.html">generate significant volumes of unwanted ads</a>. We&#8217;ve used learnings from <a href="http://research.google.com/pubs/pub43346.html">our research</a> to prevent and remove unwanted ad injectors from Google services and improve our policies and technologies to make it more difficult to spread this unwanted software.<br /><br />Today, we&#8217;re announcing a new measure to remove injected ads from the advertising ecosystem, including an automated filter in DoubleClick Bid Manager that removes impressions generated by ad injectors before any bid is made.<br /><br /><b>Unwanted ad injectors: disliked by users, advertisers, and publishers</b><br />Unwanted ad injectors are programs that insert new ads, or replace existing ones, in the pages users visit while browsing the web. Unwanted ad injectors aren&#8217;t part of a healthy ads ecosystem. They&#8217;re part of an environment where bad practices hurt users, advertisers, and publishers alike.<br /><br />We&#8217;ve received almost 300,000 user complaints about them in Chrome since the beginning of 2015&#8212;more than any other issue, and it&#8217;s no wonder. Ad injectors affect all sites equally. 
You wouldn&#8217;t be happy if you tried to get the morning news and saw this:<br /><div><a href="http://2.bp.blogspot.com/-PtIZF7QCZu4/VfDRipUP6NI/AAAAAAAAAHc/afBg4qoklQU/s1600/image%2B%25281%2529.png"><img border="0" height="300" src="http://2.bp.blogspot.com/-PtIZF7QCZu4/VfDRipUP6NI/AAAAAAAAAHc/afBg4qoklQU/s400/image%2B%25281%2529.png" width="400"></a></div>Not only are they intrusive, but people are often tricked into installing them in the first place, via deceptive advertising, or software &#8220;bundles.&#8221; Ad injection can also be a security risk, as the <a href="http://www.nytimes.com/2015/03/02/technology/how-superfishs-security-compromising-adware-came-to-inhabit-lenovos-pcs.html">recent &#8220;Superfish&#8221; incident</a> showed.<br /><br />Ad injectors are problematic for advertisers and publishers as well. Advertisers often don&#8217;t know their ads are being injected, which means they don&#8217;t have any idea where their ads are running. Publishers, meanwhile, aren&#8217;t being compensated for these ads, and more importantly, they unknowingly may be putting their visitors in harm&#8217;s way, via spam or malware in the injected ads.<br /><br /><b>Removing injected inventory from advertising</b><br />Earlier this quarter, we launched an automated filter on DoubleClick Bid Manager to prevent advertisers from buying injected ads across the web. This new system detects ad injection and proactively creates a blacklist that prevents our systems from bidding on injected inventory. Advertisers and agencies using our platforms are already protected. No adjustments are needed. No settings to change.<br /><br />We currently blacklist 1.4% of the inventory accessed by DoubleClick Bid Manager across exchanges. However, we&#8217;ve found this percentage varies widely by provider. 
Below is a breakdown showing the filtered percentages across some of the largest exchanges:<br /><div><a href="http://2.bp.blogspot.com/-qiAnH4o2lRo/VfDRsewnD0I/AAAAAAAAAHk/y_z-2WqROFc/s1600/Ad%2BInjection%2Bper%2Bexchange%2Bview.png"><img border="0" height="360" src="http://2.bp.blogspot.com/-qiAnH4o2lRo/VfDRsewnD0I/AAAAAAAAAHk/y_z-2WqROFc/s640/Ad%2BInjection%2Bper%2Bexchange%2Bview.png" width="640"></a></div>We&#8217;ve always enforced <a href="http://www.google.com/doubleclick/adxseller/guidelines.html">policies</a> <a href="https://support.google.com/platformspolicy/answer/3013851?rd=1">against</a> the sale of injected inventory on our ads platforms, including the DoubleClick Ad Exchange. Now advertisers using DoubleClick Bid Manager can avoid injected inventory across the web.<br /><br /><b>No more injected ads?</b><br />We don&#8217;t expect the steps we&#8217;ve outlined above to solve the problem overnight, but we hope others across the industry take action to cut ad injectors out of advertising. With the tangle of different businesses involved&#8212;knowingly, or unknowingly&#8212;in the ad injector ecosystem, progress will only be made if we all work together. We strongly encourage all members of the ads ecosystem to review their policies and practices and take actions to tackle this issue.]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by Vegard Johnsen, Product Manager, Google Ads Traffic Quality</span><br /><br />For the last few months, we’ve been raising awareness of the ad injection economy, showing how unwanted ad injectors can <a href="http://googleonlinesecurity.blogspot.com/2015/03/out-with-unwanted-ad-injectors.html">hurt user experience</a>, <a href="http://googleonlinesecurity.blogspot.com/2015/04/beyond-annoyance-security-risks-of.html">jeopardize user security</a>, and <a href="http://googleonlinesecurity.blogspot.com/2015/05/new-research-ad-injection-economy.html">generate significant volumes of unwanted ads</a>. We’ve used learnings from <a href="http://research.google.com/pubs/pub43346.html">our research</a> to prevent and remove unwanted ad injectors from Google services and improve our policies and technologies to make it more difficult to spread this unwanted software.<br /><br />Today, we’re announcing a new measure to remove injected ads from the advertising ecosystem, including an automated filter in DoubleClick Bid Manager that removes impressions generated by ad injectors before any bid is made.<br /><br /><b>Unwanted ad injectors: disliked by users, advertisers, and publishers</b><br />Unwanted ad injectors are programs that insert new ads, or replace existing ones, in the pages users visit while browsing the web. Unwanted ad injectors aren’t part of a healthy ads ecosystem. They’re part of an environment where bad practices hurt users, advertisers, and publishers alike.<br /><br />We’ve received almost 300,000 user complaints about them in Chrome since the beginning of 2015—more than any other issue, and it’s no wonder. Ad injectors affect all sites equally. 
You wouldn’t be happy if you tried to get the morning news and saw this:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-PtIZF7QCZu4/VfDRipUP6NI/AAAAAAAAAHc/afBg4qoklQU/s1600/image%2B%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="http://2.bp.blogspot.com/-PtIZF7QCZu4/VfDRipUP6NI/AAAAAAAAAHc/afBg4qoklQU/s400/image%2B%25281%2529.png" width="400" /></a></div>Not only are they intrusive, but people are often tricked into installing them in the first place, via deceptive advertising, or software “bundles.” Ad injection can also be a security risk, as the <a href="http://www.nytimes.com/2015/03/02/technology/how-superfishs-security-compromising-adware-came-to-inhabit-lenovos-pcs.html">recent “Superfish” incident</a> showed.<br /><br />Ad injectors are problematic for advertisers and publishers as well. Advertisers often don’t know their ads are being injected, which means they don’t have any idea where their ads are running. Publishers, meanwhile, aren’t being compensated for these ads, and more importantly, they unknowingly may be putting their visitors in harm’s way, via spam or malware in the injected ads.<br /><br /><b>Removing injected inventory from advertising</b><br />Earlier this quarter, we launched an automated filter on DoubleClick Bid Manager to prevent advertisers from buying injected ads across the web. This new system detects ad injection and proactively creates a blacklist that prevents our systems from bidding on injected inventory. Advertisers and agencies using our platforms are already protected. No adjustments are needed. No settings to change.<br /><br />We currently blacklist 1.4% of the inventory accessed by DoubleClick Bid Manager across exchanges. However, we’ve found this percentage varies widely by provider. 
Below is a breakdown showing the filtered percentages across some of the largest exchanges:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-qiAnH4o2lRo/VfDRsewnD0I/AAAAAAAAAHk/y_z-2WqROFc/s1600/Ad%2BInjection%2Bper%2Bexchange%2Bview.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="360" src="http://2.bp.blogspot.com/-qiAnH4o2lRo/VfDRsewnD0I/AAAAAAAAAHk/y_z-2WqROFc/s640/Ad%2BInjection%2Bper%2Bexchange%2Bview.png" width="640" /></a></div>We’ve always enforced <a href="http://www.google.com/doubleclick/adxseller/guidelines.html">policies</a> <a href="https://support.google.com/platformspolicy/answer/3013851?rd=1">against</a> the sale of injected inventory on our ads platforms, including the DoubleClick Ad Exchange. Now advertisers using DoubleClick Bid Manager can avoid injected inventory across the web.<br /><br /><b>No more injected ads?</b><br />We don’t expect the steps we’ve outlined above to solve the problem overnight, but we hope others across the industry take action to cut ad injectors out of advertising. With the tangle of different businesses involved—knowingly, or unknowingly—in the ad injector ecosystem, progress will only be made if we all work together. We strongly encourage all members of the ads ecosystem to review their policies and practices and take actions to tackle this issue.]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/cutting-unwanted-ad-injectors-out-of-advertising/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Say hello to the Enigma conference</title>
		<link>https://googledata.org/google-online-security/say-hello-to-the-enigma-conference-2/</link>
		<comments>https://googledata.org/google-online-security/say-hello-to-the-enigma-conference-2/#comments</comments>
		<pubDate>Tue, 18 Aug 2015 17:09:00 +0000</pubDate>
		<dc:creator><![CDATA[Elie Bursztein]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=2d35b7743a59a92ef57032f9a57777b9</guid>
		<description><![CDATA[Posted by Elie Bursztein - Anti-abuse team, Parisa Tabriz - Chrome Security and Niels Provos - Security team. USENIX Enigma is a new conference focused on security, privacy and electronic crime through the lens of emerging threats and novel attacks. The ...]]></description>
				<content:encoded><![CDATA[<div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 13.333333333333332px; font-style: italic; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Posted by Elie Bursztein - Anti-abuse team, Parisa Tabriz - Chrome Security and Niels Provos - Security team</span></div><b id="docs-internal-guid-3c5ef7e3-41c8-b90f-7ac0-4e8f4ffd3775" style="font-weight: normal;"><br /></b><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><a href="http://enigma.usenix.org/" style="text-decoration: none;"><span style="background-color: transparent; color: #1155cc; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">USENIX Enigma</span></a><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> is a new conference focused on security, privacy and electronic crime through the lens of emerging threats and novel attacks. The goal of this conference is to help industry, academic, and public-sector practitioners better understand the threat landscape. 
Enigma will have a single track of 30-minute talks that are curated by a panel of experts, featuring strong technical content with practical applications to current and emerging threats.</span></div><b style="font-weight: normal;"><br /></b><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><img height="395px;" src="https://lh6.googleusercontent.com/HXrluCoEXyWHzuEny6JHZ2sjgZJDXQz9nChmBn3ByBfk-IggnxH4XFfWlP1rI7LG6FPSH_LdTJpDphddDKcoIr5eQoqbfME07QpZG8wd7B5KXzz0CcdTk4KUBKWgrtv1ZjjzGdE" style="-webkit-transform: rotate(0.00rad); border: none; transform: rotate(0.00rad);" width="527px;" /></span></div><b style="font-weight: normal;"><br /></b><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: #002c40; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Google is excited to both sponsor and help USENIX build Enigma, since we share many of its core principles: transparency, openness, and cutting-edge security research. 
Furthermore, we are proud to provide Enigma with engineering and design support, as well as volunteer participation in program and steering committees.</span></div><br /><div dir="ltr" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"><span style="background-color: transparent; color: #002c40; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The first instantiation of Enigma will be held January 25-27 in San Francisco. You can sign up for more information about the conference or propose a talk through the official conference site at </span><a href="http://enigma.usenix.org/" style="text-decoration: none;"><span style="background-color: transparent; color: #1155cc; font-family: Arial; font-size: 14.666666666666666px; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">http://enigma.usenix.org</span></a></div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/say-hello-to-the-enigma-conference-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New research: Comparing how security experts and non-experts stay safe online</title>
		<link>https://googledata.org/google-online-security/new-research-comparing-how-security-experts-and-non-experts-stay-safe-online/</link>
		<comments>https://googledata.org/google-online-security/new-research-comparing-how-security-experts-and-non-experts-stay-safe-online/#comments</comments>
		<pubDate>Thu, 23 Jul 2015 09:00:00 +0000</pubDate>
		<dc:creator><![CDATA[Google Security PR]]></dc:creator>
				<category><![CDATA[Google Online Security]]></category>
		<category><![CDATA[google security]]></category>
		<category><![CDATA[online security]]></category>

		<guid isPermaLink="false">https://googledata.org/?guid=06de8d93a43f86927e68b85be5bb8e46</guid>
		<description><![CDATA[<span>Posted by</span><br /><span>Iulia Ion, Software Engineer</span><br /><span>Rob Reeder, Research Scientist</span><br /><span>Sunny Consolvo, User Experience Researcher</span><br /><br />Today, you can find more online security tips in a few seconds than you could use in a lifetime. While this collection of best practices is rich, it&#8217;s not always useful; it can be difficult to know which ones to prioritize, and why.<br /><br />Questions like &#8216;Why do people make some security choices (and not others)?&#8217; and &#8216;How effectively does the security community communicate its best practices?&#8217; are at the heart of a new paper called <i>&#8220;...no one can hack my mind&#8221;: Comparing Expert and Non-Expert Security Practices</i> that we&#8217;ll present this week at the <a href="https://cups.cs.cmu.edu/soups/2015/">Symposium on Usable Privacy and Security</a>.<br /><br />This paper outlines the results of two surveys&#8212;one with 231 security experts, and another with 294 web users who aren&#8217;t security experts&#8212;in which we asked both groups what they do to stay safe online. We wanted to compare and contrast responses from the two groups, and better understand differences and why they may exist.<br /><br /><b>Experts&#8217; and non-experts&#8217; top 5 security practices</b><br /><br />Here are experts&#8217; and non-experts&#8217; top security practices, according to our study. We asked each participant to list 3 practices:<br /><div><a href="http://2.bp.blogspot.com/-BYD2O7mTI9E/VbB5_O_OdFI/AAAAAAAAAGs/NMkvxIUuq_8/s1600/Beutler_Google_Security-practices-v6.png"><img border="0" height="294" src="http://2.bp.blogspot.com/-BYD2O7mTI9E/VbB5_O_OdFI/AAAAAAAAAGs/NMkvxIUuq_8/s400/Beutler_Google_Security-practices-v6.png" width="400"></a></div><b>Common ground: careful password management</b><br /><div><div><br /></div><div>Clearly, careful password management is a priority for both groups. 
But, they differ on their approaches.</div><div><br /></div><div>Security experts rely heavily on password managers, services that store and protect all of a user&#8217;s passwords in one place. Experts reported using password managers, for at least some of their accounts, three times more frequently than non-experts. As one expert said, <i>&#8220;Password managers change the whole calculus because they make it possible to have both strong and unique passwords.&#8221;</i></div></div><div><br /></div><div>On the other hand, only 24% of non-experts reported using password managers for at least some of their accounts, compared to 73% of experts. Our findings suggested this was due to lack of education about the benefits of password managers and/or a perceived lack of trust in these programs. <i>&#8220;I try to remember my passwords because no one can hack my mind,&#8221;</i> one non-expert told us.</div><div><br /></div><div><b>Key differences: software updates and antivirus software</b></div><div><br /></div><div>Despite some overlap, experts&#8217; and non-experts&#8217; top answers were remarkably different.</div><div><br /></div><div>35% of experts and only 2% of non-experts said that installing software updates was one of their top security practices. Experts recognize the benefits of updates&#8212;<i>&#8220;Patch, patch, patch,&#8221;</i> said one expert&#8212;while non-experts not only aren&#8217;t clear on them, but are concerned about the potential risks of software updates. A non-expert told us: <i>&#8220;I don&#8217;t know if updating software is always safe. What [if] you download malicious software?&#8221;</i> and <i>&#8220;Automatic software updates are not safe in my opinion, since it can be abused to update malicious content.&#8221;</i></div><div><br /></div><div>Meanwhile, 42% of non-experts vs. only 7% of experts said that running antivirus software was one of the top three things they do to stay safe online. 
Experts acknowledged the benefits of antivirus software, but expressed concern that it might give users a false sense of security since it&#8217;s not a bulletproof solution.</div><div><br /></div><div><b>Next Steps</b></div><div><br /></div><div>In the immediate term, we encourage everyone to read the <a href="https://www.usenix.org/system/files/conference/soups2015/soups15-paper-ion.pdf">full research paper</a>, borrow experts&#8217; top practices, and also check out our tips for <a href="https://privacy.google.com/#google-online">keeping your information safe on Google</a>.</div><div><br /></div><div>More broadly, our findings highlight fundamental misunderstandings about basic online security practices. Software updates, for example, are the seatbelts of online security; they make you safer, period. And yet, many non-experts not only overlook these as a best practice, but also mistakenly worry that software updates are a security <i>risk</i>.</div><div><br /></div><div>No practice on either list&#8212;expert or non-expert&#8212;makes users less secure. But, there is clearly room to improve how security best practices are prioritized and communicated to the vast majority of (non-expert) users. We&#8217;re looking forward to tackling that challenge.</div>]]></description>
				<content:encoded><![CDATA[<span class="byline-author">Posted by</span><br /><span class="byline-author">Iulia Ion, Software Engineer</span><br /><span class="byline-author">Rob Reeder, Research Scientist</span><br /><span class="byline-author">Sunny Consolvo, User Experience Researcher</span><br /><br />Today, you can find more online security tips in a few seconds than you could use in a lifetime. While this collection of best practices is rich, it’s not always useful; it can be difficult to know which ones to prioritize, and why.<br /><br />Questions like ‘Why do people make some security choices (and not others)?’ and ‘How effectively does the security community communicate its best practices?’ are at the heart of a new paper called <i>“...no one can hack my mind”: Comparing Expert and Non-Expert Security Practices</i> that we’ll present this week at the <a href="https://cups.cs.cmu.edu/soups/2015/">Symposium on Usable Privacy and Security</a>.<br /><br />This paper outlines the results of two surveys—one with 231 security experts, and another with 294 web users who aren’t security experts—in which we asked both groups what they do to stay safe online. We wanted to compare and contrast responses from the two groups, and better understand differences and why they may exist.<br /><br /><b>Experts’ and non-experts’ top 5 security practices</b><br /><br />Here are experts’ and non-experts’ top security practices, according to our study. 
We asked each participant to list 3 practices:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-BYD2O7mTI9E/VbB5_O_OdFI/AAAAAAAAAGs/NMkvxIUuq_8/s1600/Beutler_Google_Security-practices-v6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="294" src="http://2.bp.blogspot.com/-BYD2O7mTI9E/VbB5_O_OdFI/AAAAAAAAAGs/NMkvxIUuq_8/s400/Beutler_Google_Security-practices-v6.png" width="400" /></a></div><b>Common ground: careful password management</b><br /><div><div><br /></div><div>Clearly, careful password management is a priority for both groups. But, they differ on their approaches.</div><div><br /></div><div>Security experts rely heavily on password managers, services that store and protect all of a user’s passwords in one place. Experts reported using password managers, for at least some of their accounts, three times more frequently than non-experts. As one expert said, <i>“Password managers change the whole calculus because they make it possible to have both strong and unique passwords.”</i></div></div><div><br /></div><div>On the other hand, only 24% of non-experts reported using password managers for at least some of their accounts, compared to 73% of experts. Our findings suggested this was due to lack of education about the benefits of password managers and/or a perceived lack of trust in these programs. <i>“I try to remember my passwords because no one can hack my mind,”</i> one non-expert told us.</div><div><br /></div><div><b>Key differences: software updates and antivirus software</b></div><div><br /></div><div>Despite some overlap, experts’ and non-experts’ top answers were remarkably different.</div><div><br /></div><div>35% of experts and only 2% of non-experts said that installing software updates was one of their top security practices. 
Experts recognize the benefits of updates—<i>“Patch, patch, patch,”</i> said one expert—while non-experts not only aren’t clear on them, but are concerned about the potential risks of software updates. A non-expert told us: <i>“I don’t know if updating software is always safe. What [if] you download malicious software?”</i> and <i>“Automatic software updates are not safe in my opinion, since it can be abused to update malicious content.”</i></div><div><br /></div><div>Meanwhile, 42% of non-experts vs. only 7% of experts said that running antivirus software was one of the top three things they do to stay safe online. Experts acknowledged the benefits of antivirus software, but expressed concern that it might give users a false sense of security since it’s not a bulletproof solution.</div><div><br /></div><div><b>Next Steps</b></div><div><br /></div><div>In the immediate term, we encourage everyone to read the <a href="https://www.usenix.org/system/files/conference/soups2015/soups15-paper-ion.pdf">full research paper</a>, borrow experts’ top practices, and also check out our tips for <a href="https://privacy.google.com/#google-online">keeping your information safe on Google</a>.</div><div><br /></div><div>More broadly, our findings highlight fundamental misunderstandings about basic online security practices. Software updates, for example, are the seatbelts of online security; they make you safer, period. And yet, many non-experts not only overlook these as a best practice, but also mistakenly worry that software updates are a security <i>risk</i>.</div><div><br /></div><div>No practice on either list—expert or non-expert—makes users less secure. But, there is clearly room to improve how security best practices are prioritized and communicated to the vast majority of (non-expert) users. We’re looking forward to tackling that challenge.</div>]]></content:encoded>
			<wfw:commentRss>https://googledata.org/google-online-security/new-research-comparing-how-security-experts-and-non-experts-stay-safe-online/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
