Google Data


Google Summer of Code 2016 wrap-up: Oppia

January 6th, 2017  |  Published in Google Open Source

Google Summer of Code (GSoC) is an annual program that encourages university students to become open source contributors. This guest post is part of a series of blog posts from the open source projects and organizations that participated in GSoC 2016.

The Oppia project makes it easy for anyone to create lightweight, interactive online lessons that simulate personal tutoring. These activities, called “explorations,” can be shared with others around the world as standalone tutorials (such as Programming with Carla and Quadratic Equations), or embedded in websites to supplement an existing course (such as “Take Your Medicine” on edX and Computational Thinking for Educators).

2016 was Oppia’s first year participating in GSoC and it was a blast! More students flocked to our ideas page than we had expected, and our Gitter channel was full of people saying hello and looking for starter projects. Over the course of the summer, with the help of two capable and enthusiastic students, we were able to bring the following new features to the Oppia codebase:

A new creator dashboard -- Avijit Gupta


An important principle of Oppia is that lessons can be easily improved over time -- it’s hard to figure out all the possible ways a student can go wrong at the outset, but it’s much easier to respond appropriately to a new misconception that arises.

Each creator on Oppia has a “creator dashboard” which allows them to see the lessons they’ve created, as well as the feedback they’ve received from learners. Avijit completed a full revamp of this page, updating its design (for both desktop and mobile) and finding ways to display all the necessary information in an intuitive way so that creators can easily improve their lessons while getting feedback on their teaching.

The new creator dashboard.

In addition, Avijit added functionality allowing creators to view student misconceptions that were not well-addressed, to make it easier for them to improve the feedback for those answers. He has continued to help out with the Oppia open source project as a maintainer and reviewer, even after GSoC, and is mentoring other contributors who are working on further improvements to the creator dashboard. You can read more about the project in his GSoC writeup!

Speed improvements -- Vishal Gupta


In order to improve the accessibility of lessons for students with poor internet connectivity, Vishal’s project aimed to make Oppia speedier and less bandwidth-intensive. He started by implementing a performance testing framework to benchmark his efforts, and also integrated it with our continuous integration system in order to protect against performance regressions. He then turned his efforts to caching as many static resources as possible, implementing a cache slug system that causes new files to be downloaded only after a new release is made.

In addition, Vishal removed JavaScript code that was inlined in the main templates, and refactored it out into an external script which could then be cached for better performance. You can read more about this project in his post on the Oppia blog.

We’d like to extend our grateful thanks not only to Avijit and Vishal, but also to our many willing and enthusiastic mentors, and to Google for supporting our open source work with GSoC.

Join us in helping improve educational opportunities for students around the world. If you’d like to subscribe to news and updates about Oppia’s participation in GSoC, you can sign up to the oppia-gsoc-announce mailing list -- or, if you’re already feeling enthusiastic, you can start helping out with the project right away!

By Ben Henning and Sean Lip, Organization Administrators for Oppia

Google Brain Residency Program – 7 months in and looking ahead

January 5th, 2017  |  Published in Google Research

Posted by Jeff Dean, Google Senior Fellow and Leslie Phillips, Google Brain Residency Program Manager

“Beyond being incredibly instructive, the Google Brain Residency program has been a truly affirming experience. Working alongside people who truly love what they do--and are eager to help you develop your own passion--has vastly increased my confidence in my interests, my ability to explore them, and my plans for the near future.”
-Akosua Busia, B.S. Mathematical and Computational Science, Stanford University ‘16
2016 Google Brain Resident

In October 2015 we launched the Google Brain Residency, a 12-month program focused on jumpstarting a career for those interested in machine learning and deep learning research. This program is an opportunity to get hands-on experience using the state-of-the-art infrastructure available at Google, and offers the chance to work alongside top researchers within the Google Brain team.

Our first group of residents arrived in June 2016, working with researchers on problems at the forefront of machine learning. The wide array of topics studied by residents reflects the diversity of the residents themselves — some come to the program as new graduates with degrees ranging from BAs to PhDs, in fields from computer science and physics to mathematics, biology and neuroscience, while other residents come with years of industry experience under their belts. All of them arrive with a passion for learning how to conduct machine learning research.

The breadth of research being done by the Google Brain Team along with resident-mentorship pairing flexibility ensures that residents with interests in machine learning algorithms and reinforcement learning, natural language understanding, robotics, neuroscience, genetics and more, are able to find good mentors to help them pursue their ideas and publish interesting work. And just seven months into the program, the Residents are already making an impact in the research field.

To date, Google Brain Residents have submitted a total of 21 papers to leading machine learning conferences, spanning topics from enhancing low resolution images to building neural networks that in turn design novel, task specific neural network architectures. Of those 21 papers, 5 were accepted in the recent BayLearn Conference (two of which, “Mean Field Neural Networks” and “Regularizing Neural Networks by Penalizing Their Output Distribution’’, were presented in oral sessions), 2 were accepted in the NIPS 2016 Adversarial Training workshop, and another in ISMIR 2016 (see the full list of papers, including the 14 submissions to ICLR 2017, after the figures below).
An LSTM Cell (Left) and a state of the art RNN Cell found using a neural network (Right). This is an example of a novel architecture found using the approach presented in “Neural Architecture Search with Reinforcement Learning” (B. Zoph and Q. V. Le, submitted to ICLR 2017). This paper uses a neural network to generate novel RNN cell architectures that outperform the widely used LSTM on a variety of different tasks.
The training accuracy for neural networks, colored from black (random chance) to red (high accuracy). Overlaid in white dashed lines are the theoretical predictions showing the boundary between trainable and untrainable networks. (a) Networks with no dropout. (b)-(d) Networks with dropout rates of 0.01, 0.02, 0.06 respectively. This research explores whether theoretical calculations can replace large hyperparameter searches. For more details, read “Deep Information Propagation” (S. S. Schoenholz, J. Gilmer, S. Ganguli, J. Sohl-Dickstein, submitted to ICLR 2017).

Accepted conference papers
(Google Brain Residents marked with asterisks)

Unrolled Generative Adversarial Networks
Luke Metz*, Ben Poole, David Pfau, Jascha Sohl-Dickstein
NIPS 2016 Adversarial Training Workshop (oral presentation)

Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena*, Chris Olah, Jon Shlens
NIPS 2016 Adversarial Training Workshop (oral presentation)

Regularizing Neural Networks by Penalizing Their Output Distribution
Gabriel Pereyra*, George Tucker, Lukasz Kaiser, Geoff Hinton
BayLearn 2016 (oral presentation)

Mean Field Neural Networks
Samuel S. Schoenholz*, Justin Gilmer*, Jascha Sohl-Dickstein
BayLearn 2016 (oral presentation)

Learning to Remember
Aurko Roy, Ofir Nachum*, Łukasz Kaiser, Samy Bengio
BayLearn 2016 (poster session)

Towards Generating Higher Resolution Images with Generative Adversarial Networks
Augustus Odena*, Jonathon Shlens
BayLearn 2016 (poster session)

Multi-Task Convolutional Music Models
Diego Ardila, Cinjon Resnick*, Adam Roberts, Douglas Eck
BayLearn 2016 (poster session)

Audio DeepDream: Optimizing Raw Audio With Convolutional Networks
Diego Ardila, Cinjon Resnick*, Adam Roberts, Douglas Eck
ISMIR 2016 (poster session)

Papers under review (Google Brain Residents marked with asterisks)

Learning to Remember Rare Events
Lukasz Kaiser, Ofir Nachum*, Aurko Roy, Samy Bengio
Submitted to ICLR 2017

Neural Combinatorial Optimization with Reinforcement Learning
Irwan Bello*, Hieu Pham*, Quoc V. Le, Mohammad Norouzi, Samy Bengio
Submitted to ICLR 2017

HyperNetworks
David Ha*, Andrew Dai, Quoc V. Le
Submitted to ICLR 2017

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Noam Shazeer, Azalia Mirhoseini*, Krzysztof Maziarz, Quoc Le, Jeff Dean
Submitted to ICLR 2017

Neural Architecture Search with Reinforcement Learning
Barret Zoph* and Quoc Le
Submitted to ICLR 2017

Deep Information Propagation
Samuel Schoenholz*, Justin Gilmer*, Surya Ganguli, Jascha Sohl-Dickstein
Submitted to ICLR 2017

Capacity and Learnability in Recurrent Neural Networks
Jasmine Collins*, Jascha Sohl-Dickstein, David Sussillo
Submitted to ICLR 2017

Unrolled Generative Adversarial Networks
Luke Metz*, Ben Poole, David Pfau, Jascha Sohl-Dickstein
Submitted to ICLR 2017

Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena*, Chris Olah, Jon Shlens
Submitted to ICLR 2017

Generating Long and Diverse Responses with Neural Conversation Models
Louis Shao, Stephan Gouws, Denny Britz*, Anna Goldie, Brian Strope, Ray Kurzweil
Submitted to ICLR 2017

Intelligible Language Modeling with Input Switched Affine Networks
Jakob Foerster, Justin Gilmer*, Jan Chorowski, Jascha Sohl-dickstein, David Sussillo
Submitted to ICLR 2017

Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra*, George Tucker*, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton
Submitted to ICLR 2017

Unsupervised Perceptual Rewards for Imitation Learning
Pierre Sermanet, Kelvin Xu*, Sergey Levine
Submitted to ICLR 2017

Improving policy gradient by exploring under-appreciated rewards
Ofir Nachum*, Mohammad Norouzi, Dale Schuurmans
Submitted to ICLR 2017

Protein Secondary Structure Prediction Using Deep Multi-scale Convolutional Neural Networks and Next-Step Conditioning
Akosua Busia*, Jasmine Collins*, Navdeep Jaitly

The diverse and collaborative atmosphere fostered by the Brain team has resulted in a group of researchers making great strides in a wide range of research areas, which we are excited to share with the broader community. We look forward to even more innovative research from our 2016 residents, and are excited for the program to continue into its second year!

We are currently accepting applications for the 2017 Google Brain Residency Program. To learn more about the program and to submit your application, visit g.co/brainresidency. Applications close January 13th, 2017.

A new guide to testing and experimentation in AdWords

January 5th, 2017  |  Published in Google Adwords

Google Best Practices for AdWords
Digital marketing is always evolving. It’s crucial to evolve your approach as well. This ongoing process of optimization should be built upon a solid framework of testing and experimentation.

By using all of the tools available to you, including campaign drafts and experiments in AdWords, you can test and understand whether any proposed changes to your account will help you reach your business goals.

We’ve published a new guide that helps you understand whether any proposed changes to your account will help you improve performance. It covers topics such as:

  • Testing changes to your AdWords account
  • Creating experiments that produce clear results
  • Analyzing results to choose experiment winners


As you experiment with your own account, use this guide to take full advantage of AdWords campaign drafts and experiments.

Want to stay on top of even more Best Practices? Sign up to receive our monthly newsletter.

Posted by Matt Lawson, Director, Performance Ads Marketing

Page setup and ODF and EPUB support in the Google Docs, Sheets, and Slides Android apps

January 4th, 2017  |  Published in Google Apps, Google Docs

New versions of the Docs, Sheets, and Slides apps for Android are now available on Google Play, including the following new features to help you get more done on the move:

Page setup in Google Docs and image support improvements in Sheets
You can now change the orientation, paper size, and page color of your documents in Docs on Android. In Sheets, you can use the IMAGE function to view images inside cells.
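For reference, a minimal use of the IMAGE function looks like the line below (the URL is just a placeholder); optional mode, height, and width arguments control how the image is sized within the cell.

=IMAGE("https://www.example.com/logo.png")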





EPUB and ODF support on Android
Docs now supports exporting your files in the EPUB (.epub) and OpenDocument Text (.odt) file formats. Additionally, you can now import OpenDocument Text (.odt) files from the Android app. Similarly, you will now be able to import and export OpenDocument Spreadsheets (.ods) in Sheets and OpenDocument Presentations (.odp) in Slides.

Please note: these features are only available when the device is online. When the device is offline, an error dialog will appear saying "Data connection needed to open this file type".


Launch Details
Release track:
Launching to both Rapid release and Scheduled release

Editions:
Available to all G Suite editions

Rollout pace:
Full rollout (1-3 days for feature visibility)

Impact:
All end users

Action:
Change management suggested/FYI


More Information
Help Center: Page setup
How to use Google Slides
How to use Google Docs
How to use Google Sheets


Launch release calendar
Launch detail categories
Get these product update alerts by email
Subscribe to the RSS feed of these updates

Manage email effectively with unified routing settings in Admin console

January 4th, 2017  |  Published in Google Apps

G Suite customers use Gmail because it's simple to use and reliable, and because it allows them to customize their enterprise-specific routing settings in the Admin console. However, as we expanded these features to cover more use cases, it became clear that a consolidated view of all routing settings would make them easier to manage at a glance. That's why today, we're beginning to unify Gmail's routing settings in the Gmail Routing section of the Admin console that was launched one year ago.





Beginning the week of January 23, 2017, your existing Sending routing, Receiving routing, Catch-all address, and Domain-level routing settings will become read-only. These settings are still active, but in order to make changes, you must click the CONVERT button next to the existing rule. We've taken steps to make this process as simple as possible by converting the previous setting and mapping it to the new fields automatically. The converted setting is then placed in the unified Routing section, where all of your rules can be managed from one place. The old setting will no longer be visible in the old user interface, so you don't need to clean up old rules afterward.




Action Suggested
  1. To make changes to an existing routing setting, click the CONVERT button next to the rule. Since the user interface has changed slightly, we suggest re-familiarizing yourself with the new look and feel.

  2. To add new routing settings, use the Routing section from Gmail's advanced settings section. The previous way to create routing settings will soon be removed.
  3. To migrate your existing rules to the new section ahead of time, you can use the CONVERT button for all of your rules. If you decide not to make any changes to your routing rules, then your rules will be automatically converted to the new Routing format and section on March 6, 2017.

Launch Details
Release track:
Settings converted to read-only on both Rapid release and Scheduled release the week of January 23, 2017.
Automatic routing settings conversion occurring after March 6, 2017.

Rollout pace:
Gradual rollout (potentially longer than 3 days for feature visibility)

Editions:
Available to all G Suite editions

Impact:
Admins only

Action:
Admin action suggested/FYI


More Information
Set up routing for your domain or organization


Launch release calendar
Launch detail categories
Get these product update alerts by email
Subscribe to the RSS feed of these updates

Grumpy: Go running Python!

January 4th, 2017  |  Published in Google Open Source

Google runs millions of lines of Python code. The front-end server that drives youtube.com and YouTube’s APIs is primarily written in Python, and it serves millions of requests per second! YouTube’s front-end runs on CPython 2.7, so we’ve put a ton of work into improving the runtime and adapting our application to work optimally within it. These efforts have borne a lot of fruit over the years, but we always run up against the same issue: it's very difficult to make concurrent workloads perform well on CPython.

To solve this problem, we investigated a number of other Python runtimes. Each had trade-offs and none solved the concurrency problem without introducing other issues.
So we asked ourselves a crazy question: What if we were to implement an alternative runtime optimized for real-time serving? Once we started going down the rabbit hole, Go seemed like an obvious choice of platform since its operational characteristics align well with our use case (e.g. lightweight threads). We wanted first-class language interoperability, and Go's powerful runtime type reflection system made this straightforward. Python in Go felt very natural, and so Grumpy was born.

Grumpy is an experimental Python runtime for Go. It translates Python code into Go programs, and those transpiled programs run seamlessly within the Go runtime. We needed to support a large existing Python codebase, so it was important to have a high degree of compatibility with CPython (quirks and all). The goal is for Grumpy to be a drop-in replacement runtime for any pure-Python project.

Two design choices we made had big consequences. First, we decided to forgo support for C extension modules. This means that Grumpy cannot leverage the wealth of existing Python C extensions but it gave us a lot of flexibility to design an API and object representation that scales for parallel workloads. In particular, Grumpy has no global interpreter lock, and it leverages Go’s garbage collection for object lifetime management instead of counting references. We think Grumpy has the potential to scale more gracefully than CPython for many real world workloads. Results from Grumpy’s synthetic Fibonacci benchmark demonstrate some of this potential:



Second, Grumpy is not an interpreter. Grumpy programs are compiled and linked just like any other Go program. The downside is less development and deployment flexibility, but it offers several advantages. For one, it creates optimization opportunities at compile time via static program analysis. But the biggest advantage is that interoperability with Go code becomes very powerful and straightforward: Grumpy programs can import Go packages just like Python modules! For example, the Python snippet below uses Go’s standard net/http package to start a simple server:

from __go__.net.http import ListenAndServe, RedirectHandler

handler = RedirectHandler('http://github.com/google/grumpy', 303)
ListenAndServe('127.0.0.1:8080', handler)

We’re excited about the prospects for Grumpy. Although it’s still alpha software, most of the language constructs and many core built-in types work like you’d expect. There are still holes to fill — many built-in types are missing methods and attributes, built-in functions are absent and the standard library is virtually empty. If you find things that you wish were working, file an issue so we know what to prioritize. Or better yet, submit a pull request.

Stay Grumpy!

By Dylan Trotter, YouTube Engineering

What’s New in G Suite newsletter – December 2016

January 3rd, 2017  |  Published in Google Apps


Check out the latest "What's New in G Suite" newsletter [pdf] for a roundup of all G Suite launches from December 2016.

Newsletter Archive & Translated Versions (coming soon for December issue)

Launch release calendar
Launch detail categories
Get these product update alerts by email
Subscribe to the RSS feed of these updates

Rails Girls Summer of Code: Changing the face of tech

January 3rd, 2017  |  Published in Google Open Source

This is a guest post from Laura Gaetano who organizes Rails Girls Summer of Code, a global fellowship program inspired by Google Summer of Code.

Have you seen that picture of Margaret Hamilton, the NASA engineer who worked on the computer systems for the Apollo 11 launch? She’s standing next to the human-sized pile of listings of the Apollo Guidance Computer source code that she worked on. Do you know about Ada Lovelace, often cited as the very first computer programmer?

From World War II until the 1980s, women engineers and women computer operators were fairly common. There was a steady rise in women entering STEM fields, and young girls had role models and strong women to look up to. We're well acquainted with the drop in female engineering graduates worldwide after this time period, and the subsequent drop in the percentage of women entering the world of tech. We're here to help change that, and reverse the trend.

Rails Girls Summer of Code (RGSoC) aims to bring more diversity into the world of tech — specifically, into the world of open source software, where women make up a mere 11% of the community. The global program offers 3-month scholarships to teams of women to allow them to work full-time on an open source project of their choice – aided by local coaches and guided by the project maintainer (or a core contributor). The scholarships are funded through the support of the community as well as our sponsors, via a crowdfunding campaign.

Local vs. Global
We all cherish our local community and understand how strong of a support network it can be, especially for newcomers. The Rails Girls chapters worldwide emphasize that need: most coaches and organisers are local, and many alums go on to create their own study groups, or become coaches or organisers themselves. RGSoC also relies strongly on a global network of user groups — both Rails Girls chapters and similar organisations such as PyLadies or DjangoGirls.

Thanks to our connections with these different groups, we are able to reach people in remote or unlikely locations, and build the most diverse group of applicants possible. This is very important to us. Since the beginning, the program has provided the opportunity to bring together women with different experiences, backgrounds, locales and age groups to come together and be part of the same global initiative.

Our Structure
Last year, we received over 90 team applications. When applying, each two-person team chooses from a list of pre-selected projects. These projects are maintained by people we either personally know, or who have reached out to us prior to the application period. We look for projects with patient, open-minded contributors who are active in their community, and projects that provide a lot of learning opportunities for applicants.

Project maintainers (also called mentors) are in touch with students in order to adapt the roadmap throughout the summer to the students' needs and check up on their progress. On a daily basis, students spend the majority of their time with coaches. The coaches help, support, and teach the students throughout the summer. Each team is also appointed a supervisor, who supports students on the organisational side of things. They are the glue that keeps the whole team together, and a way for the core RGSoC team to keep track of how every team is doing.

Our Stats
Our program started in 2013 with 18 teams, 10 of which were sponsored and 8 of which were volunteer teams. The following year, 16 teams participated with 10 sponsored spots. The real breakthrough came in 2015 when we were able to fund 16 sponsored teams, a substantial increase from previous years. Not only did this enable us to have more impact — with potentially 12 more women entering the tech world and STEM workforce than in previous years — but it also showed the community's trust in the program.

In 2016, the Ruby community awarded us with a Ruby Hero Award, and we managed to collect enough money to sponsor 16 teams from five continents with another 4 teams joining as volunteers. This year was also the first time we had teams based in Uganda, Egypt, Singapore and the Czech Republic.
Our stats from 2016 (Image: Laura Gaetano/RGSoC)
In 2015, we contacted our alums from 2013 and 2014 to find out what they were doing after the program. The responses were impressive: out of 64 graduates, over 90% are now working in the tech field. A fair number of graduates have even founded their own startups. Not only have some of these women found their calling, but we may also have made a small difference in the open source community, and we are on the right track to really shake things up.

Where do we go from here
On the first of July last year, we kicked off our program with over 130 people participating — including coaches, supervisors, designers, helpdesk coaches and project mentors. We were incredibly excited to have 20 teams in 16 cities and 11 different countries, spanning time zones from UTC+10 to UTC-7.
Our 2016 sponsored and volunteer teams! (Image: Ana Sofia Pinho/RGSoC)
We’ve seen in the past just how much of an impact we’ve had in our participants’ lives, and are hoping that this trend will continue to rise. We hope that some of our previous editions' teams graduated with the skills and confidence to become NASA engineers, web developers, or anything else they want to be. Hopefully someday they will become a young woman’s role model, and realise the important role they served in changing the future of engineering and of open source software.

By Laura Gaetano, Organizer of Rails Girls Summer of Code

Other sites are using my ad code. What should I do?

December 28th, 2016  |  Published in Google Adsense

Hello AdSense Publishers, if you are seeing sites on your performance reports or under your sites list that don’t belong to you, this post is for you!

First off, you are not under attack. Although you may feel some alarm seeing another site show up in your account, there's nothing to worry about. The most common scenario is that another site has copied your site's code and pasted it onto their own site, copying the ad code in the process. Since your publisher ID is still in the ad code, that site will now appear in your account.

To address this, you’ll want to make two quick changes to your account.

  1. Verify which sites in your account are yours.
     • Sign in to your AdSense account.
     • Click Settings in the sidebar.
     • In the dropdown, select My sites.
     • In your site list, locate the site you want to verify and click the Down arrow.
     • Next to "Verified site", turn the switch on.

  2. Once verified, enable site authorization. Site authorization is an optional feature that lets you identify your verified sites as the only sites that are permitted to use your Google ad code.
     • Click Settings in the sidebar.
     • In the dropdown, select My sites.
     • On the "Manage sites" page, click More.
     • Click Site authorization.
     • Next to "Only authorize my verified sites to use my ad code", turn the switch on.
     • Click Save. Your changes should take effect within 48 hours.

Once your sites are verified and site authorization is turned on, ads will still show on the unverified site(s), and impressions and clicks will be recorded. However, advertisers will not be charged, and you won't receive any earnings for that site. Any policy violations that occur on these sites also won’t count against you. 

The URLs will still show up in your list of sites so you are aware of who else is using your ad code. The sites may also still show in your performance report in order to keep a definitive record of activity. If you are seeing the sites in your reports but don’t want to, consider setting your report type to “Verified Sites” to ensure you are only seeing data related to your verified sites. More on managing your sites here.

Post by Brandon Canniff, AdSense Support Specialist

Open source down under: Linux.conf.au 2017

December 28th, 2016  |  Published in Google Open Source

It’s a new year and open source enthusiasts from around the globe are preparing to gather at the edge of the world for Linux.conf.au 2017. Among those preparing are Googlers, including some of us from the Open Source Programs Office.

This year Linux.conf.au is returning to Hobart, the riverside capital of Tasmania, home of Australia’s famous Tasmanian devils, running five days between January 16 and 20.
Tuz, a Tasmanian devil sporting a penguin beak, is the Linux.conf.au mascot.
(Artwork by Tania Walker licensed under CC BY-SA.)
The conference, which began in 1999 and is community organized, is well equipped to explore the theme, "the Future of Open Source," which is reflected in the program schedule and miniconfs.

You’ll find Googlers speaking (listed below) as well as participating in the hallway track. Don’t miss our Birds of a Feather session if you’re a student, educator, project maintainer, or otherwise interested in talking about outreach and student programs like Google Summer of Code and Google Code-in.

Monday, January 16th
12:20pm The Sound of Silencing by Julien Goodwin

Tuesday, January 17th
All day    Community Leadership Summit X at LCA

Wednesday, January 18th
2:15pm   Community Building Beyond the Black Stump by Josh Simmons

Thursday, January 19th
4:35pm   Using Python for creating hardware to record FOSS conferences! by Tim Ansell

Friday, January 20th
1:20pm   Linux meets Kubernetes by Vishnu Kannan

Not able to make it to the conference? Keynotes and sessions will be livestreamed, and you can always find the session recordings online after the event.

We’ll see you there!

By Josh Simmons, Open Source Programs Office

Taking the pulse of Google Code-in 2016

December 23rd, 2016  |  Published in Google Open Source


Today is the official midpoint of this year’s Google Code-in contest and we are delighted to announce this is our most popular year ever! 930 teenagers from 60 countries have completed 3,503 tasks with 17 open source organizations. The number of students successfully completing tasks has already almost matched the total from the 2015 contest.

Tasks that the students have completed include:
  • writing test suites
  • improving mobile UI 
  • writing documentation and creating videos to help new users 
  • working on internationalization efforts
  • finding and fixing bugs in the organizations' software
Participants from all over the world
In total, over 2,800 students from 87 countries have registered for the contest and we look forward to seeing great work from these (and more!) students over the next few weeks. 2016 has also seen a huge increase in student participation in places such as Indonesia, Vietnam and the Philippines.

Google Code-in participants by country

Please welcome two new countries to the GCI family: Mauritius and Moldova! Mauritius made a very strong debut to the contest and currently has 13 registered students who have completed 31 tasks.

The top five countries with the most completed tasks are:
  1. India: 982
  2. United States: 801
  3. Singapore: 202
  4. Vietnam: 119
  5. Canada: 117
Students, there is still plenty of time to get started with Google Code-in. New tasks are being added daily to the contest site — there are over 1,500 tasks available for students to choose from right now! If you don’t see something that interests you today, check back again every couple of days for new tasks.

The last day to register for the contest and claim a task is Friday, January 13, 2017 with all work being due on Monday, January 16, 2017 at 9:00 am PT.

Good luck to all of the students participating this year in Google Code-in!

By Stephanie Taylor, Google Code-in Program Manager

All numbers reported as of 8:00 PM Pacific Time, December 22, 2016.

Introducing the ExifInterface Support Library

December 21st, 2016  |  Published in Google Android

Posted by Ian Lake, Developer Advocate

With the release of the 25.1.0 Support Library, there's a new entry in the family: the ExifInterface Support Library. With significant improvements introduced in Android 7.1 to the framework's ExifInterface, it only made sense to make those available to all API 9+ devices via the Support Library's ExifInterface.

The basics are still the same: the ability to read and write Exif tags embedded within image files. There are now 140 different attributes (almost 100 of them new to Android 7.1 and this Support Library!), including information about the camera itself, the camera settings, orientation, and GPS coordinates.

Camera Apps: Writing Exif Attributes

For camera apps, writing is probably the most important capability, although writing attributes is still limited to JPEG image files. Normally you wouldn't need to use this during the actual camera capture itself - you'd instead call the Camera2 API CaptureRequest.Builder.set() with JPEG_ORIENTATION, JPEG_GPS_LOCATION or the equivalents in the Camera1 Camera.Parameters. However, using ExifInterface allows you to make changes to the file after the fact (say, removing the location information at the user's request).
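As a rough illustration of that kind of after-the-fact edit, the sketch below clears the GPS tags from a captured JPEG. The stripLocation helper is hypothetical (not part of the API), and it assumes that setting a tag to null removes it - check the library documentation before relying on that behavior.

import android.support.media.ExifInterface;

import java.io.File;
import java.io.IOException;

// Hypothetical helper, not part of the ExifInterface API.
void stripLocation(File jpegFile) throws IOException {
    ExifInterface exif = new ExifInterface(jpegFile.getAbsolutePath());
    // Assumption: passing null clears the tag.
    exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE, null);
    exif.setAttribute(ExifInterface.TAG_GPS_LATITUDE_REF, null);
    exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE, null);
    exif.setAttribute(ExifInterface.TAG_GPS_LONGITUDE_REF, null);
    exif.saveAttributes(); // writing attributes only works for JPEG files
}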

Reading Exif Attributes

For the rest of us though, reading those attributes is going to be our bread-and-butter; this is where we see the biggest improvements.

Firstly, you can read Exif data from JPEG and raw images (specifically, DNG, CR2, NEF, NRW, ARW, RW2, ORF, PEF, SRW and RAF files). Under the hood, this was a major restructuring, removing all native dependencies and building an extensive test suite to ensure that everything actually works.

For apps that receive images from other apps with a content:// URI (such as those sent by apps that target API 24 or higher), ExifInterface now works directly off of an InputStream; this allows you to easily extract Exif information directly out of content:// URIs you receive without having to create a temporary file.

Uri uri; // the URI you've received from the other app
InputStream in;
try {
  in = getContentResolver().openInputStream(uri);
  ExifInterface exifInterface = new ExifInterface(in);
  // Now you can extract any Exif tag you want
  // Assuming the image is a JPEG or supported raw format
} catch (IOException e) {
  // Handle any errors
} finally {
  if (in != null) {
    try {
      in.close();
    } catch (IOException ignored) {}
  }
}

Note: ExifInterface will not work with remote InputStreams, such as those returned from a HttpURLConnection. It is strongly recommended to only use them with content:// or file:// URIs.

For most attributes, you'd simply use the getAttributeInt(), getAttributeDouble(), or getAttribute() (for Strings) methods as appropriate.

One of the most important attributes when it comes to displaying images is the image orientation, stored in the aptly-named TAG_ORIENTATION, which returns one of the ORIENTATION_ constants. To convert this to a rotation angle, you can post-process the value.

int rotation = 0;
int orientation = exifInterface.getAttributeInt(
    ExifInterface.TAG_ORIENTATION,
    ExifInterface.ORIENTATION_NORMAL);
switch (orientation) {
  case ExifInterface.ORIENTATION_ROTATE_90:
    rotation = 90;
    break;
  case ExifInterface.ORIENTATION_ROTATE_180:
    rotation = 180;
    break;
  case ExifInterface.ORIENTATION_ROTATE_270:
    rotation = 270;
    break;
}

There are some helper methods to extract values from specific Exif tags. For location data, the getLatLong() method gives you the latitude and longitude as floats and getAltitude() will give you the altitude in meters. Some images also embed a small thumbnail. You can check for its existence with hasThumbnail() and then extract the byte[] representation of the thumbnail with getThumbnail() - perfect to pass to BitmapFactory.decodeByteArray().
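A minimal sketch tying those helpers together is shown below. The logLocationAndThumbnail method is hypothetical, and it assumes the framework-style getLatLong(float[]) signature that returns true when location tags are present.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.support.media.ExifInterface;

// Hypothetical helper, not from the post.
void logLocationAndThumbnail(ExifInterface exifInterface) {
    float[] latLong = new float[2];
    if (exifInterface.getLatLong(latLong)) {
        float latitude = latLong[0];
        float longitude = latLong[1];
        double altitude = exifInterface.getAltitude(0.0); // 0.0 is returned if the tag is absent
        // use latitude, longitude and altitude here
    }
    if (exifInterface.hasThumbnail()) {
        byte[] thumb = exifInterface.getThumbnail();
        if (thumb != null) {
            Bitmap bitmap = BitmapFactory.decodeByteArray(thumb, 0, thumb.length);
            // display or cache the thumbnail bitmap
        }
    }
}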

Working with Exif: Everything is optional

One thing that is important to understand with Exif data is that there are no required tags: each and every tag is optional - some services even specifically strip Exif data. Therefore throughout your code, you should always handle cases where there is no Exif data, either due to no data for a specific attribute or an image format that doesn't support Exif data at all (say, the ubiquitous PNGs or WebP images).

Add the ExifInterface Support Library to your project with the following dependency:

compile "com.android.support:exifinterface:25.1.0"

But when an Exif attribute is exactly what's needed to prevent a mis-rotated image in your app, the ExifInterface Support Library is just what you need to #BuildBetterApps

Geolocation and Firebase for the Internet of Things

December 21st, 2016  |  Published in Google Maps

Posted by Ken Nevarez, Industry Solutions Lead at Google

GPS is the workhorse of location based services, but there are use cases where you may want to avoid the cost and power consumption of GPS hardware or locate devices in places where GPS lacks accuracy, such as in urban environments or buildings.

We've seen recent growth in Internet of Things (IoT) applications using the Google Maps Geolocation API instead of GPS for asset tracking, theft prevention, usage optimization, asset servicing, and more. As part of my 20 percent project at Industry Solutions, I created a prototype IoT device that can locate itself using surrounding WiFi networks and the Google Maps Geolocation API. In this post, I’ll discuss some interesting implementation features and outline how you can create the prototype yourself.

I built a device that scans for local WiFi and writes results (WiFi hotspots and their signal strength) to a Firebase Realtime Database. A back-end service then reads this data and uses the Google Maps Geolocation API to turn this into a real-world location, which can be plotted on a map.

Set up the Device & Write Locally

For this proof of concept, I used the Intel Edison as a Linux-based computing platform and augmented it with Sparkfun’s Edison Blocks. To build the device, you will need an Intel Edison, a Base Block, a Battery Block and a Hardware pack.

Developing for the Edison is straightforward using the Intel XDK IDE. We will be creating a simple Node.js application in JavaScript. I relied on 3 libraries: Firebase for the database connection, wireless-tools/iwlist to capture WiFi networks, and macaddress to capture the device MAC. Installation instructions can be found on the linked pages.
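If you want to reproduce this setup, installing those three dependencies from npm typically looks something like the following, run in the Node.js app directory on the Edison:

npm install firebase wireless-tools macaddress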

Step 1: get the device MAC address and connect to Firebase:
function initialize() {
    macaddress.one('wlan0', function (err, mac) {
        mac_address = mac;
        if (mac === null) {
            console.log('exiting due to null mac Address');
            process.exit(1);
        }
        firebase.initializeApp({
            serviceAccount: '/node_app_slot/.json',
            databaseURL: 'https://.firebaseio.com/'
        });
        var db = firebase.database();
        ref_samples = db.ref('/samples');
        locationSample();
    });
}
The above code contains two placeholders:

  1. The service-account-key is a private key you create in the Firebase Console. Click the gear icon in the upper left of the console, select “settings”, and click Generate New Private Key. Place this key on your Edison in the directory /node_app_slot/. See this Firebase documentation for more information.
  2. The project-id in the database URL is found in the Firebase console database page after you have linked your Google project with Firebase.


Step 2: scan for WiFi networks every 10 seconds and write locally:
function locationSample() {
    var t = new Date();
    iwlist.scan('wlan0', function(err, networks) {
        if(err === null) {
            ref_samples.push({
                mac: mac_address,
                t_usec: t.getTime(),
                t_locale_string: t.toLocaleString(),
                networks: networks,
            });
        } else {
            console.log(err);
        }        
    });
    setTimeout(locationSample, 10000);
}

Write to the cloud

The locationSample() function above writes detectable WiFi networks to a Firebase database that syncs to the cloud when connected to a network.

Caveat: To configure access rights and authentication to Firebase, I set up the device as a “server”. Instructions for this configuration are on the Firebase website. For this proof of concept, I made the assumption that the device was secure enough to house our credentials. If this is not the case for your implementation you should instead follow the instructions for setting up the client JavaScript SDK.

The database uses 3 queues to manage workload: a WiFi samples queue, a geolocation results queue and a visualization data queue. The workflow will be: samples from the device go into a samples queue, which gets consumed to produce geolocations that are put into a geolocations queue. Geolocations are consumed and formatted for presentation, organized by device, and the output is stored in a visualizations bucket for use by our front end website.

Below is an example of a sample, a geolocation, and our visualization data written by the device and seen in the Firebase Database Console.
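Since the console screenshot is not reproduced here, the rough shape of the data looks like the sketch below. The sample fields come from the device code above; the geolocation and visualization entries are illustrative placeholders.

{
  "samples": {
    "<pushId>": {
      "mac": "<device MAC>",
      "t_usec": ...,
      "t_locale_string": "...",
      "networks": [ { "address": "<AP MAC>", "quality": ..., "signal": ... } ]
    }
  },
  "geolocations": { "<pushId>": { "lat": ..., "lng": ..., "accuracy": ... } },
  "visualization": { "<device key>": { "<pushId>": { "lat": ..., "lng": ..., "accuracy": ... } } }
}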

Processing the Data with Google App Engine

To execute the processing of the sample data I used a long running Google App Engine Backend Module and a custom version of the Java Client for Google Maps Services.

Caveat: To use Firebase with App Engine, you must use manual scaling. Firebase uses background threads to listen for changes and App Engine only allows long-lived background threads on manually scaled backend instances.
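For reference, manual scaling for a backend module is declared in that module's appengine-web.xml along these lines (the module name and instance count here are placeholders):

<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <module>geolocation-backend</module>
  <threadsafe>true</threadsafe>
  <manual-scaling>
    <instances>1</instances>
  </manual-scaling>
</appengine-web-app>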

The Java Client for Google Maps Services takes care of a lot of the communications code required to use the Maps APIs and follows our published best practices for error handling and retry strategies that respect rate limits. The GeolocateWifiSample() function below is registered as an event listener with Firebase. It loops over each network reported by the device and incorporates it into the geolocation request.
private void GeolocateWifiSample(DataSnapshot sample,  Firebase db_geolocations, Firebase db_errors) {
    // initialize the context and request
    GeoApiContext context = new GeoApiContext(new GaeRequestHandler()).setApiKey("");
    GeolocationApiRequest request = GeolocationApi.newRequest(context)
            .ConsiderIp(false);
    // for every network that was reported in this sample...
    for (DataSnapshot wap : sample.child("networks").getChildren()) {
        // extract the network data from the database so it’s easier to work with
        String wapMac = wap.child("address").getValue(String.class);
        int wapSignalToNoise = wap.child("quality").getValue(int.class);
        int wapStrength = wap.child("signal").getValue(int.class);
        // include this network in our request
        request.AddWifiAccessPoint(new WifiAccessPoint.WifiAccessPointBuilder()
                .MacAddress(wapMac)
                .SignalStrength(wapStrength)
                .SignalToNoiseRatio(wapSignalToNoise)
                .createWifiAccessPoint());
    }
    ...
    try {
        // call the api
        GeolocationResult result = request.CreatePayload().await();
        ...
        // write results to the database and remove the original sample
    } catch (final NotFoundException e) {
        ...
    } catch (final Throwable e) {
        ...
    }
}
Register the GeolocateWifiSample() function as an event handler. The other listeners that process geolocation results and create the visualization data are built in a similar pattern.
ChildEventListener samplesListener = new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot dataSnapshot, String previousChildName) {
        // geolocate and write to new location
        GeolocateWifiSample(dataSnapshot, db_geolocations, db_errors);
    }
    ...
};
db_samples.addChildEventListener(samplesListener);

Visualize the Data

To visualize the device locations I used Google App Engine to serve stored data from Firebase and the Google Maps JavaScript API to create a simple web page that displays the results. The index.html page contains an empty
with id “map”. I initialized this
to contain the Google Map object. I also added “child_added” and “child_removed” event handlers to update the map as the data changes over time.
function initMap() {
    // attach listeners
    firebase.database().ref('/visualization').on('child_added', function(data) {
        ...
        data.ref.on('child_added', function(vizData) {
            circles[vizData.key]= new CircleRoyale(map,
                                vizData.val().lat,
                                vizData.val().lng,
                                vizData.val().accuracy,
                                color);
          set_latest_position(data.key, vizData.val().lat, vizData.val().lng);
        });
        data.ref.on('child_removed', function(data) {
            circles[data.key].removeFromMap();
        });
    });
    // create the map
    map = new google.maps.Map(document.getElementById('map'), {
      center: get_next_device(),
      zoom: 20,
      scaleControl: true,
    });
    ...
}
Since the API returns not only a location but also an indication of accuracy, I’ve created a custom marker that has a pulsing radius to indicate the accuracy component.
Two devices (red and blue) and their last five known positions

What’s next?

In this post I’ve outlined how you can build an IoT device that uses the Google Maps Geolocation API to track any internet-connected device - from robotics to wearables. The App Engine processing module can be expanded to use other Google Maps Web Service APIs that provide geographic data such as directions, elevation, places, or time zone information. Happy building!

As an alternative, you can achieve a similar solution using Google Cloud Platform as a replacement for Firebase—this article shows you how.

About Ken: Ken is a Lead on the Industry Solutions team. He works with customers to bring innovative solutions to market.

Google Summer of Code 2016 wrap-up: Public Lab

December 21st, 2016  |  Published in Google Open Source

This post is part of our series of guest posts from students, mentors and organization administrators who participated in Google Summer of Code 2016.


How we made this our best Google Summer of Code ever

This was our fourth year doing Google Summer of Code (GSoC), and it was our best year ever by a wide margin! We had five hard-working students who contributed over 17,000 new lines of (very useful) code to our high-priority projects.

Students voluntarily started coding early and hit the ground running, with full development environments and a working knowledge of the GitHub Flow-style pull request process. They communicated with one another and provided peer support. They wrote tests. Hundreds of them! They blogged about their work as they went, and chatted with other community members about how to design features.

All of that was amazing, and it was made better by the fact that we were accepting pull requests with new code twice weekly. Tuesdays and Fridays, I went through new submissions, provided feedback, and pulled new code into our master branch, usually publishing it to our production site once a week.

I don't know how other projects do things, but this was very new for us, and it's revolutionized how we work together. In past years, students would work on their forks, slowly building up features. Then in a mad dash at the end, we’d try to merge them into trunk, with lots of conflicts and many hours (weeks!) of work on the part of project maintainers.

What made this year so good?

Many things aligned to make this summer great, and basically none of them are our ideas. I'm sure plenty of you are cringing at how we used to do things, but I also don't think that it's that unusual for projects not "born" in the fast-paced world of modern code collaboration.

We used ideas and learned from Nicolas Bevacqua, author of JavaScript Application Design and of the woofmark and horsey libraries which I've contributed to. We've also learned a great deal from the Hoodie community, particularly Gregor Martynus, who we ran into at a BostonJS meetup. Lastly, we learned from SpinachCon, organized by Shauna Gordon McKeon and Deb Nicholson, where people refine their install process by actually going through the process while sitting next to each other.

Broadly, our strategies were:

  • Good documentation for newcomers (duh)
  • Short and sweet install process that you've tried yourself (thanks, SpinachCon!)
  • Predictable, regular merge schedule
  • Thorough test suite, and requiring tests with each pull request
  • Modularity, insisting that projects be broken into small, independently testable parts and merged as they’re written

Installation and pull requests

Most of the above sound kind of obvious or trivial, but we saw a lot of changes when we put it all together. Having a really fast install process, and guidance on getting it running in a completely consistent environment like the virtualized Cloud9 service, meant that many students were able to get the code running the same day they found the project. We aimed for an install time of 15 minutes max, and supplied a video of this for one of our codebases.

We also asked students to make a small change (even just add a space to a file) and walk through the GitHub Flow pull request (PR) submission process. We had clear step-by-step guidance for this, and we took it as a good sign when students were able to read through it and do this.

Importantly, we really tried to make each step welcoming, not demanding or dismissive, of folks who weren’t familiar with this process. This ultimately meant that all five students already knew the PR process when they began coding.

Twice-weekly merge schedule

We were concerned that, in past years, students only tried merging a few times and typically towards the end of the summer. This meant really big conflicts (with each other, often) and frustration.

This year we decided that, even though we’re a tiny organization with just one staff coder, we’d try merging on Tuesday and Friday mornings, and we mostly succeeded. Any code that wasn’t clearly presented, with commits squashed, tests passing, and new tests included, still got reviewed: I left friendly comments and requests so it could be merged the following week.

At first I felt bad rejecting PRs, but we had such great students that they got used to the strictness. They got really good at separating out features, demonstrating their features through clear tests, and some began submitting more than two PRs per week - always rebasing on top of the latest master to ensure a linear commit history. Sweet!

Wrap-up and next steps

The last thing we did was to ask each student, essentially as their documentation, to write a series of new issues which clearly described the problem and/or desired behavior, leave suggestions and links to specific lines of code or example code, and mark them with the special “help-wanted” tag which was so helpful to them when they first started out. We asked each to also make one extra-welcoming “first-timers-only” issue which walks a new contributor through every step of making a commit and even provides suggested code to be inserted.

This final requirement was key. While I personally made each of the initial set of “help-wanted” and “first-timers-only” issues before GSoC, now five students were offloading their unfinished to-dos as very readable and inviting issues for others. The effect was immediate, in part because these special tags are syndicated on some sites. Newcomers began picking them up within hours and our students were very helpful in guiding them through their first contributions to open source.

I want to thank everyone who made this past summer so great, from our champion mentors and community members, to our stellar students, to all our inspirations in this new process, to the dozen or so new contributors we’ve attracted since the end of August.

By Jeff Warren, Organization Administrator for PublicLab.org

Get the guide to finding success in new markets on Google Play

December 20th, 2016  |  Published in Google Android

Posted by Lily Sheringham, Developer Marketing at Google Play


With just a few clicks, you can publish an app to Google Play and access a global audience of more than 1 billion 30-day active users. Finding success in global markets means considering how each market differs, planning for high quality localization, and tailoring your activity to the local audience. The new Going Global Playbook provides best practices and tips, with advice from developers who've successfully gone global.

This guide includes advice to help you plan your approach to going global, prepare your app for new markets, and take your app to market. It also includes data and insights for key countries and other useful resources.

This ebook joins others that we've recently published including The Building for Billions Playbook and The News Publisher Playbook. All of our ebooks are promoted in the Playbook for Developers app, which is where you can stay up to date with all the news and best practices you need to find success on Google Play.


Previously


Jan 5, 2017
Google Brain Residency Program – 7 months in and looking ahead

by Research Blog | Read | No Comments

Posted by Jeff Dean, Google Senior Fellow and Leslie Phillips, Google Brain Residency Program Manager

“Beyond being incredibly instructive, the Google Brain Residency program has been a truly affirming experience. Working alongside people who truly love what they do–and are eager to help you develop your own passion–has vastly increased my confidence in my interests, my ability to explore them, and my plans for the near future.”
-Akosua Busia, B.S. Mathematical and Computational Science, Stanford University ‘16
2016 Google Brain Resident

In October 2015 we launched the Google Brain Residency, a 12-month program focused on jumpstarting a career for those interested in machine learning and deep learning research. This program is an opportunity to get hands on experience using the state-of-the-art infrastructure available at Google, and offers the chance to work alongside top researchers within the Google Brain team.

Our first group of residents arrived in June 2016 and began working with researchers on problems at the forefront of machine learning. The wide array of topics studied by residents reflects the diversity of the residents themselves: some come to the program as new graduates, with degrees ranging from BAs to PhDs in fields spanning computer science, physics, mathematics, biology, and neuroscience, while others arrive with years of industry experience under their belts. All of them share a passion for learning how to conduct machine learning research.

The breadth of research being done by the Google Brain team, along with the flexibility of resident-mentor pairings, ensures that residents with interests in machine learning algorithms, reinforcement learning, natural language understanding, robotics, neuroscience, genetics, and more are able to find good mentors to help them pursue their ideas and publish interesting work. And just seven months into the program, the residents are already making an impact in the research field.

To date, Google Brain Residents have submitted a total of 21 papers to leading machine learning conferences, spanning topics from enhancing low-resolution images to building neural networks that in turn design novel, task-specific neural network architectures. Of those 21 papers, 5 were accepted at the recent BayLearn conference (two of which, “Mean Field Neural Networks” and “Regularizing Neural Networks by Penalizing Their Output Distribution”, were presented in oral sessions), 2 were accepted at the NIPS 2016 Adversarial Training workshop, and another at ISMIR 2016 (see the full list of papers, including the 14 submissions to ICLR 2017, after the figures below).

An LSTM cell (left) and a state-of-the-art RNN cell found using a neural network (right). This is an example of a novel architecture found using the approach presented in “Neural Architecture Search with Reinforcement Learning” (B. Zoph and Q. V. Le, submitted to ICLR 2017). This paper uses a neural network to generate novel RNN cell architectures that outperform the widely used LSTM on a variety of different tasks.
The training accuracy for neural networks, colored from black (random chance) to red (high accuracy). Overlaid in white dashed lines are the theoretical predictions showing the boundary between trainable and untrainable networks. (a) Networks with no dropout. (b)-(d) Networks with dropout rates of 0.01, 0.02, 0.06 respectively. This research explores whether theoretical calculations can replace large hyperparameter searches. For more details, read “Deep Information Propagation” (S. S. Schoenholz, J. Gilmer, S. Ganguli, J. Sohl-Dickstein, submitted to ICLR 2017).


Accepted conference papers
(Google Brain Residents marked with asterisks)

Unrolled Generative Adversarial Networks
Luke Metz*, Ben Poole, David Pfau, Jascha Sohl-Dickstein
NIPS 2016 Adversarial Training Workshop (oral presentation)

Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena*, Chris Olah, Jon Shlens
NIPS 2016 Adversarial Training Workshop (oral presentation)

Regularizing Neural Networks by Penalizing Their Output Distribution
Gabriel Pereyra*, George Tucker, Lukasz Kaiser, Geoff Hinton
BayLearn 2016 (oral presentation)

Mean Field Neural Networks
Samuel S. Schoenholz*, Justin Gilmer*, Jascha Sohl-Dickstein
BayLearn 2016 (oral presentation)

Learning to Remember
Aurko Roy, Ofir Nachum*, Łukasz Kaiser, Samy Bengio
BayLearn 2016 (poster session)

Towards Generating Higher Resolution Images with Generative Adversarial Networks
Augustus Odena*, Jonathon Shlens
BayLearn 2016 (poster session)

Multi-Task Convolutional Music Models
Diego Ardila, Cinjon Resnick*, Adam Roberts, Douglas Eck
BayLearn 2016 (poster session)

Audio DeepDream: Optimizing Raw Audio With Convolutional Networks
Diego Ardila, Cinjon Resnick*, Adam Roberts, Douglas Eck
ISMIR 2016 (poster session)

Papers under review (Google Brain Residents marked with asterisks)

Learning to Remember Rare Events
Lukasz Kaiser, Ofir Nachum*, Aurko Roy, Samy Bengio
Submitted to ICLR 2017

Neural Combinatorial Optimization with Reinforcement Learning
Irwan Bello*, Hieu Pham*, Quoc V. Le, Mohammad Norouzi, Samy Bengio
Submitted to ICLR 2017

HyperNetworks
David Ha*, Andrew Dai, Quoc V. Le
Submitted to ICLR 2017

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Noam Shazeer, Azalia Mirhoseini*, Krzysztof Maziarz, Quoc Le, Jeff Dean
Submitted to ICLR 2017

Neural Architecture Search with Reinforcement Learning
Barret Zoph* and Quoc Le
Submitted to ICLR 2017

Deep Information Propagation
Samuel Schoenholz*, Justin Gilmer*, Surya Ganguli, Jascha Sohl-Dickstein
Submitted to ICLR 2017

Capacity and Learnability in Recurrent Neural Networks
Jasmine Collins*, Jascha Sohl-Dickstein, David Sussillo
Submitted to ICLR 2017

Unrolled Generative Adversarial Networks
Luke Metz*, Ben Poole, David Pfau, Jascha Sohl-Dickstein
Submitted to ICLR 2017

Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena*, Chris Olah, Jon Shlens
Submitted to ICLR 2017

Generating Long and Diverse Responses with Neural Conversation Models
Louis Shao, Stephan Gouws, Denny Britz*, Anna Goldie, Brian Strope, Ray Kurzweil
Submitted to ICLR 2017

Intelligible Language Modeling with Input Switched Affine Networks
Jakob Foerster, Justin Gilmer*, Jan Chorowski, Jascha Sohl-dickstein, David Sussillo
Submitted to ICLR 2017

Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra*, George Tucker*, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton
Submitted to ICLR 2017

Unsupervised Perceptual Rewards for Imitation Learning
Pierre Sermanet, Kelvin Xu*, Sergey Levine
Submitted to ICLR 2017

Improving policy gradient by exploring under-appreciated rewards
Ofir Nachum*, Mohammad Norouzi, Dale Schuurmans
Submitted to ICLR 2017

Protein Secondary Structure Prediction Using Deep Multi-scale Convolutional Neural Networks and Next-Step Conditioning
Akosua Busia*, Jasmine Collins*, Navdeep Jaitly

The diverse and collaborative atmosphere fostered by the Brain team has resulted in a group of researchers making great strides in a wide range of research areas, which we are excited to share with the broader community. We look forward to even more innovative research from our 2016 residents, and are excited for the program to continue into its second year!

We are currently accepting applications for the 2017 Google Brain Residency Program. To learn more about the program and to submit your application, visit g.co/brainresidency. Applications close January 13th, 2017.


Jan 5, 2017
A new guide to testing and experimentation in AdWords

by Charissa Yee | Read | No Comments

Google Best Practices for AdWords

Digital marketing is always evolving. It’s crucial to evolve your approach as well. This ongoing process of optimization should be built upon a solid framework of testing and experimentation.

By using all of the tools available to you, including campaign drafts and experiments in AdWords, you can test and understand whether any proposed changes to your account will help you reach your business goals.

We’ve published a new guide to help you build that framework and understand whether proposed changes to your account actually improve performance. It covers topics such as:

  • Testing changes to your AdWords account
  • Creating experiments that produce clear results
  • Analyzing results to choose experiment winners

As you experiment with your own account, use this guide to take full advantage of AdWords campaign drafts and experiments.
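
The guide itself focuses on tools in the AdWords interface, but the statistics behind choosing experiment winners can be sketched in a few lines. The snippet below is not from the guide: the click and conversion counts are hypothetical, and it simply applies a two-proportion z-test to compare a control campaign against an experiment arm.

# A minimal sketch, not from the guide; the click and conversion counts
# below are hypothetical. It compares the conversion rates of a control
# campaign (A) and an experiment arm (B) with a two-proportion z-test.
from math import sqrt, erfc

def two_proportion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    # Pooled conversion rate under the null hypothesis of "no difference".
    p_a = float(conv_a) / clicks_a
    p_b = float(conv_b) / clicks_b
    p_pool = float(conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1.0 / clicks_a + 1.0 / clicks_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value for a normal z
    return z, p_value

# Hypothetical results: 10,000 clicks each for control and experiment.
z, p = two_proportion_z_test(conv_a=420, clicks_a=10000,
                             conv_b=495, clicks_b=10000)
print('z = %.2f, p = %.4f' % (z, p))
# A small p-value (conventionally < 0.05) suggests the lift is unlikely to
# be noise; otherwise keep the experiment running or treat it as inconclusive.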

Want to stay on top of even more Best Practices? Sign up to receive our monthly newsletter.

Posted by Matt Lawson, Director, Performance Ads Marketing


Jan 4, 2017
Page setup and ODF and EPUB support in the Google Docs, Sheets, and Slides Android apps

by Jane Smith | Read | No Comments

New versions of the Docs, Sheets, and Slides apps for Android are now available on Google Play, including the following new features to help you get more done on the move: page setup in Google Docs and image support improvements in Sheets. You can now ch…


Jan 4, 2017
Manage email effectively with unified routing settings in Admin console

by Jane Smith | Read | No Comments

G Suite customers use Gmail because it’s simple to use and reliable, and because it allows them to customize their enterprise-specific routing settings in the Admin console. However, as we expanded these features to cover more use-cases, it became clea…


Jan 4, 2017
Grumpy: Go running Python!

by Open Source Programs Office | Read | No Comments

Google runs millions of lines of Python code. The front-end server that drives youtube.com and YouTube’s APIs is primarily written in Python, and it serves millions of requests per second! YouTube’s front-end runs on CPython 2.7, so we’ve put a ton of work into improving the runtime and adapting our application to work optimally within it. These efforts have borne a lot of fruit over the years, but we always run up against the same issue: it’s very difficult to make concurrent workloads perform well on CPython.

To solve this problem, we investigated a number of other Python runtimes. Each had trade-offs and none solved the concurrency problem without introducing other issues.
So we asked ourselves a crazy question: What if we were to implement an alternative runtime optimized for real-time serving? Once we started going down the rabbit hole, Go seemed like an obvious choice of platform since its operational characteristics align well with our use case (e.g. lightweight threads). We wanted first-class language interoperability, and Go's powerful runtime type reflection system made this straightforward. Python in Go felt very natural, and so Grumpy was born.

Grumpy is an experimental Python runtime for Go. It translates Python code into Go programs, and those transpiled programs run seamlessly within the Go runtime. We needed to support a large existing Python codebase, so it was important to have a high degree of compatibility with CPython (quirks and all). The goal is for Grumpy to be a drop-in replacement runtime for any pure-Python project.

Two design choices we made had big consequences. First, we decided to forgo support for C extension modules. This means that Grumpy cannot leverage the wealth of existing Python C extensions but it gave us a lot of flexibility to design an API and object representation that scales for parallel workloads. In particular, Grumpy has no global interpreter lock, and it leverages Go’s garbage collection for object lifetime management instead of counting references. We think Grumpy has the potential to scale more gracefully than CPython for many real world workloads. Results from Grumpy’s synthetic Fibonacci benchmark demonstrate some of this potential.
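
As a rough illustration (this sketch is not from the post; the thread count, the fib(30) workload, and the worker function are arbitrary choices), the snippet below is the kind of CPU-bound, multi-threaded program that a Fibonacci benchmark exercises. It is plain Python and runs on CPython 2.7 as well; the difference is that CPython's GIL serializes the threads, while Grumpy's claim is that the same program can spread across cores.

# A rough sketch, not from the post: the thread count and workload are
# arbitrary. Each worker burns CPU on a naive recursive Fibonacci, the kind
# of concurrent load where the lack of a GIL matters.
import threading
import time

def fib(n):
    # Deliberately naive recursion so each worker is CPU-bound.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def worker(results, i):
    results[i] = fib(30)

num_threads = 4
results = [None] * num_threads
threads = [threading.Thread(target=worker, args=(results, i))
           for i in range(num_threads)]

start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
print('%d workers finished in %.2fs' % (num_threads, time.time() - start))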

Second, Grumpy is not an interpreter. Grumpy programs are compiled and linked just like any other Go program. The downside is less development and deployment flexibility, but it offers several advantages. For one, it creates optimization opportunities at compile time via static program analysis. But the biggest advantage is that interoperability with Go code becomes very powerful and straightforward: Grumpy programs can import Go packages just like Python modules! For example, the Python snippet below uses Go’s standard net/http package to start a simple server:

from __go__.net.http import ListenAndServe, RedirectHandler

handler = RedirectHandler('http://github.com/google/grumpy', 303)
ListenAndServe('127.0.0.1:8080', handler)
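
The from __go__.net.http import line maps directly onto Go's standard net/http package: RedirectHandler(url, 303) is the stock Go handler that answers every request with an HTTP 303 redirect, and ListenAndServe blocks while serving on 127.0.0.1:8080. So, once the program is built with the Grumpy toolchain, visiting that address in a browser should send you straight to github.com/google/grumpy.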

We’re excited about the prospects for Grumpy. Although it’s still alpha software, most of the language constructs and many core built-in types work like you’d expect. There are still holes to fill — many built-in types are missing methods and attributes, built-in functions are absent and the standard library is virtually empty. If you find things that you wish were working, file an issue so we know what to prioritize. Or better yet, submit a pull request.

Stay Grumpy!

By Dylan Trotter, YouTube Engineering


Jan 3, 2017
What’s New in G Suite newsletter – December 2016

by Jane Smith | Read | No Comments

Check out the latest “What’s New in G Suite” newsletter [pdf] for a roundup of all G Suite launches from December 2016. Newsletter Archive & Translated Versions (coming soon for December issue) | Launch release calendar | Launch detail categories | Get these…

What is Google Data?

Google Data is the only site where you can get news from 60+ official Google blogs all in one place. We have published 24,243 official posts since January 2005.







Copyright © 2017 GoogleData.org, All Rights Reserved.
This site is not operated by, sponsored by, endorsed by, or affiliated with Google Inc. in any way.