Working together to filter automated data-center traffic
July 21st, 2015 | by Google Security PR | published in Google Online Security
Posted by Vegard Johnsen, Product Manager, Google Ad Traffic Quality
Today the Trustworthy Accountability Group (TAG) announced a new pilot blacklist to protect advertisers across the industry. This blacklist comprises data-center IP addresses associated with non-human ad requests. We’re happy to support this effort along with other industry leaders—Dstillery, Facebook, MediaMath, Quantcast, Rubicon Project, The Trade Desk, TubeMogul and Yahoo—and contribute our own data-center blacklist. As mentioned to Ad Age and in our recent call to action, we believe that if we work together we can raise the fraud-fighting bar for the whole industry.
Data-center traffic is one of many types of non-human or illegitimate ad traffic. The newly shared blacklist identifies web robots or “bots” that are being run in data centers but that avoid detection by the IAB/ABC International Spiders & Bots List. Well-behaved bots announce that they’re bots as they surf the web by including a bot identifier in their declared User-Agent strings. The bots filtered by this new blacklist are different. They masquerade as human visitors by using User-Agent strings that are indistinguishable from those of typical web browsers.
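Because the User-Agent string cannot be trusted, filtering of this kind keys on the request's source IP address instead. The sketch below is illustrative only; the sample addresses, the exact-match lookup, and the data structures are assumptions, not the actual TAG list format.

```typescript
// Minimal sketch: classify an ad request as data-center traffic by its source IP.
// A real list would contain IP ranges (CIDR blocks) and far more entries.
const dataCenterIps: Set<string> = new Set([
  "203.0.113.7",    // placeholder addresses from a documentation range,
  "198.51.100.23",  // not real blacklist entries
]);

interface AdRequest {
  ip: string;
  userAgent: string; // looks like a normal browser, so it cannot be trusted
}

function isDataCenterTraffic(req: AdRequest): boolean {
  return dataCenterIps.has(req.ip);
}

const example: AdRequest = {
  ip: "203.0.113.7",
  userAgent: "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko",
};
console.log(isDataCenterTraffic(example)); // true: filtered despite a browser-like User-Agent
```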
In this post, we take a closer look at a few examples of data-center traffic to show why it’s so important to filter this traffic across the industry.
Impact of the data-center blacklist
When observing the traffic generated by the IP addresses in the newly shared blacklist, we found significantly distorted click metrics. In May of 2015 on DoubleClick Campaign Manager alone, we found the blacklist filtered 8.9% of all clicks. Without filtering these clicks from campaign metrics, advertiser click-through rates would have been incorrect and for some advertisers this error would have been very large.
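To see why even a single-digit share of filtered clicks matters, here is a small worked example with made-up numbers (not campaign data):

```typescript
// Illustrative numbers only: how clicks from data-center bots inflate click-through rate (CTR).
const humanImpressions = 1_000_000;
const humanClicks = 2_000;      // true CTR: 0.20%
const botImpressions = 5_000;   // bots request relatively few ads...
const botClicks = 600;          // ...but "click" at a far higher rate than humans

const reportedCtr = (humanClicks + botClicks) / (humanImpressions + botImpressions);
const actualCtr = humanClicks / humanImpressions;

console.log(`reported CTR: ${(100 * reportedCtr).toFixed(2)}%`); // ~0.26%
console.log(`actual CTR:   ${(100 * actualCtr).toFixed(2)}%`);   // 0.20%
console.log(`overstated by ${(100 * (reportedCtr / actualCtr - 1)).toFixed(0)}%`); // ~29%
```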
Below is a plot that shows how much click-through rates in May would have been inflated across the most impacted of DoubleClick Campaign Manager’s larger advertisers.
Two examples of bad data-center traffic
There are two distinct types of invalid data-center traffic: where the intent is malicious and where the impact on advertisers is accidental. In this section we consider two interesting examples where we’ve observed traffic that was likely generated with malicious intent.
Publishers use many different strategies to increase the traffic to their sites. Unfortunately, some are willing to use any means necessary to do so. In our investigations we’ve seen instances where publishers have been running software tools in data centers to intentionally mislead advertisers with fake impressions and fake clicks.
First example
UrlSpirit is just one example of software that some unscrupulous publishers have been using to collaboratively drive automated traffic to their websites. Participating publishers install the UrlSpirit application on Windows machines and they each submit up to three URLs through the application’s interface. Submitted URLs are then distributed to other installed instances of the application, where Internet Explorer is used to automatically visit the list of target URLs. Publishers who have not installed the application can also leverage the network of installations by paying a fee.
At the end of May more than 82% of the UrlSpirit installations were being run on machines in data centers. There were more than 6,500 data-center installations of UrlSpirit, with each data-center installation running in a separate virtual machine. In aggregate, the data-center installations of UrlSpirit were generating a monthly rate of at least half a billion ad requests—an average of 2,500 fraudulent ad requests per installation per day.
Second example
HitLeap is another example of software that some publishers are using to collaboratively drive automated traffic to their websites. The software also runs on Windows machines, and each instance uses the Chromium Embedded Framework to automatically browse the websites of participating publishers—rather than using Internet Explorer.
Before publishers can use the network of installations to drive traffic to their websites, they need browsing minutes. Participating publishers earn browsing minutes by running the application on their computers. Alternatively, they can simply buy browsing minutes—with bundles starting at $9 for 10,000 minutes or up to 1,000,000 minutes for $625.
Publishers can specify as many target URLs as they like. The number of visits they receive from the network of installations is a function of how long they want the network of bots to spend on their sites. For example, ten browsing minutes will get a publisher five visits if the publisher requests two-minute visit durations.
In mid-June, at least 4,800 HitLeap installations were being run in virtual machines in data centers, with a unique IP address associated with each HitLeap installation. These data-center installations made up 16% of the total HitLeap network, which itself was substantially larger than the UrlSpirit network.
In aggregate, the data-center installations of HitLeap were generating a monthly rate of at least a billion fraudulent ad requests—or an average of roughly 7,000 ad requests per installation per day.
Not only were these publishers collectively responsible for billions of automated ad requests, but their websites were also often extremely deceptive. For example, of the top ten webpages visited by HitLeap bots in June, nine included hidden ad slots — meaning that not only was the traffic fake, but the ads couldn’t have been seen even if the visitors had been legitimate humans.
http://vedgre.com/7/gg.html is illustrative of these nine webpages with hidden ad slots. The webpage has no visible content other than a single 300×250px ad. This visible ad is actually in a 300×250px iframe that includes two ads, the second of which is hidden. There are also twenty-seven 0×0px hidden iframes on this page, with each hidden iframe including two ad slots. In total there are fifty-five hidden ads on this page and one visible ad. Finally, the ads served on http://vedgre.com/7/gg.html appear to advertisers as though they have been served on legitimate websites like indiatimes.com, scotsman.com, autotrader.co.uk, allrecipes.com, dictionary.com and nypost.com, because the tags used on http://vedgre.com/7/gg.html to request the ad creatives have been deliberately spoofed.
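For readers who want to inspect a page themselves, the following is a rough sketch (not code from our detection systems) of how zero-size iframes of the kind described above could be counted from a browser's developer console; the function name and the size check are illustrative assumptions.

```typescript
// Run in a browser console: count iframes that are collapsed to zero size.
// Such iframes can still request and render ads that no visitor will ever see.
function countHiddenIframes(doc: Document = document): { total: number; zeroSize: number } {
  const iframes = Array.from(doc.querySelectorAll("iframe"));
  const zeroSize = iframes.filter((frame) => {
    const rect = frame.getBoundingClientRect();
    return rect.width === 0 || rect.height === 0;
  }).length;
  return { total: iframes.length, zeroSize };
}

console.log(countHiddenIframes());
// On a page structured like the one above, this would report twenty-seven 0x0 iframes
// alongside the single visible 300x250 iframe.
```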
An example of collateral damage
Unlike the traffic described above, there is also automated data-center traffic that impacts advertising campaigns but that hasn’t been generated for malicious purposes. An interesting example of this is an advertising competitive intelligence company that is generating a large volume of undeclared non-human traffic.
This company uses bots to scrape the web to find out which ad creatives are being served on which websites and at what scale. The company’s scrapers also click ad creatives to analyze the landing page destinations. To provide its clients with the most accurate possible intelligence, this company’s scrapers operate at extraordinary scale, and they do so without including bot identifiers in their User-Agent strings.
While the aim of this company is not to cause advertisers to pay for fake traffic, the company’s scrapers do waste advertiser spend. They not only generate non-human impressions; they also distort the metrics that advertisers use to evaluate campaign performance—in particular, click metrics. Looking at the data across DoubleClick Campaign Manager, this company’s scrapers were responsible for 65% of the automated data-center clicks recorded in the month of May.
Going forward
Google has always invested to prevent this and other types of invalid traffic from entering our ad platforms. By contributing our data-center blacklist to TAG, we hope to help others in the industry protect themselves.
We’re excited by the collaborative spirit we’ve seen working with other industry leaders on this initiative. This is an important, early step toward tackling fraudulent and illegitimate inventory across the industry and we look forward to sharing more in the future. By pooling our collective efforts and working with industry bodies, we can create strong defenses against those looking to take advantage of our ecosystem. We look forward to working with the TAG Anti-fraud working group to turn this pilot program into an industry-wide tool.
July 20th, 2015 | by Google Public Policy Blog | published in Google Public Policy
Cross-posted on the Google Online Security Blog
Posted by Neil Martin, Export Compliance Counsel, Google Legal, and Tim Willis, Hacker Philanthropist, Chrome Security Team
As the usage and complexity of software grows, the importance of security research has grown with it. It’s through diligent research that we uncover and fix bugs — like Heartbleed and POODLE — that can cause serious security issues for web users around the world.
The time and effort it takes to uncover bugs is significant, and the marketplace for these vulnerabilities is competitive. That’s why we provide cash rewards for quality security research that identifies problems in our own products or proactive improvements to open-source products. We’ve paid more than $4 million to researchers from all around the world – our current Hall of Fame includes researchers from Germany, the U.S., Japan, Brazil, and more than 30 other countries.
Problematic new export controls
With the benefits of security research in mind, there has been some public head scratching and analysis around proposed export control rules put forth by the U.S. Department of Commerce that would negatively affect vulnerability research.
The Commerce Department’s proposed rules stem from U.S. membership in the Wassenaar Arrangement, a multilateral export control association. Members of the Wassenaar Arrangement have agreed to control a wide range of goods, software, and information, including technologies relating to “intrusion software” (as they’ve defined that term).
We believe that these proposed rules, as currently written, would have a significant negative impact on the open security research community. They would also hamper our ability to defend ourselves and our users, and to make the web safer. It would be a disastrous outcome if an export regulation intended to make people more secure resulted in billions of users across the globe becoming persistently less secure.
Google comments on proposed rules
Earlier today, we formally submitted comments on the proposed rules to the United States Commerce Department’s Bureau of Industry and Security (BIS). Our comments are lengthy, but we wanted to share some of the main concerns and questions that we have officially expressed to the U.S. government today:
We’re committed to working with BIS to make sure that both white hat security researchers’ interests and Google users’ interests are front of mind. The proposed BIS rule for public comment is available here, and comments can also be sent directly to [email protected]. If BIS publishes another proposed rule on intrusion software, we’ll make sure to come back and update this blog post with details.
July 20th, 2015 | by Maps Devel | published in Google Maps
Posted by Selena Salazar, Product Marketing Manager, Google Maps APIs
When Dan Vanderkam, a developer and self-confessed history lover, moved to New York in 2011, he found himself wondering what the city used to look like. He figured other people probably shared his curiosity, so he built a website, oldnyc.org, that overlays the city’s streets with more than 80,000 images from its past.
“I’ve always enjoyed historic photos because they capture so many details that you’d never write about,” Vanderkam says, guessing it’s something his father, a historian, passed down to him. “They really are a window into the past.”
Vanderkam approached Matt Knutzen, a geospatial librarian at the New York Public Library, whom he met a couple of years earlier at an event at the Google New York office.
At the time, Vanderkam was developing a website that mapped old pictures of San Francisco.
“We immediately began talking about historical photos and the possibility of overlaying images in 3D space,” Knutzen recalls.
For the San Francisco project, Vanderkam scraped photos from the San Francisco Public Library’s online archive, and only asked for permission to use them once he’d completed the site.
“I realized later how risky it was to ask them that late,” he says. “So, when I went to New York and started oldnyc, I did things differently by approaching the library first.”
Knutzen, who studied geography and cartography in college, was immediately taken with the idea and knew exactly what resources to direct Vanderkam to. He introduced Vanderkam to the Milstein division of the library, which included a collection called Photographic Views of New York City, 1870s – 1970s.
The images, which are mounted on brown paper, include horse-drawn carts on barely developed dirt roads in the Bronx, skaters at Central Park in 1905, and a giant alligator balloon at Columbus Circle for Macy’s Thanksgiving Parade.
On oldnyc.org, clicking on a red dot marker will open a popup box containing photos of the location sorted from the earliest date to the most recent. Some of the images are undated. The site also shows the descriptions that Vanderkam says were written on the backs of some pictures.
Luckily for Vanderkam, Knutzen had already created a metadata spreadsheet containing unique identifiers for each photo corresponding to cross street locations.
“Plugging the data into Google Maps APIs was made easier because I didn’t have to scrape information or hunt for street names,” said Vanderkam.
To convert the addresses to latitudes and longitudes, Vanderkam used the Google Maps Geocoding API.
In order to customize his map, he used the Google Maps JavaScript API to create location markers, and adjust hue, lightness, saturation, gamma, invert lightness, and visibility. He also specified that his map only include roads and not other elements.
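As an illustration of those two steps (a minimal sketch, not code from the oldnyc project), the snippet below geocodes a cross-street description and drops a marker on a roads-focused styled map using the Maps JavaScript API; the element ID, address, and style values are assumptions.

```typescript
// Assumes the Google Maps JavaScript API script is loaded on a page containing <div id="map"></div>.
declare const google: any; // keeps the sketch self-contained without the @types/google.maps package

const map = new google.maps.Map(document.getElementById("map"), {
  center: { lat: 40.7128, lng: -74.006 }, // New York City
  zoom: 13,
  styles: [
    // Keep roads, hide most other map elements, roughly as described above.
    { featureType: "poi", stylers: [{ visibility: "off" }] },
    { featureType: "transit", stylers: [{ visibility: "off" }] },
    { featureType: "road", stylers: [{ saturation: -80 }, { lightness: 20 }] },
  ],
});

// Turn a cross-street description from the photo metadata into coordinates, then add a marker.
const geocoder = new google.maps.Geocoder();
geocoder.geocode({ address: "Broadway and W 42nd St, New York, NY" }, (results: any, status: string) => {
  if (status === "OK" && results && results.length > 0) {
    new google.maps.Marker({
      position: results[0].geometry.location,
      map,
      title: "Photos taken near this corner",
    });
  }
});
```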
Vanderkam wanted other people to be able to engage with the pictures of the city, so he used a comments API from Facebook to allow site visitors to share their personal experiences.
Getting the photos on his site was also a challenge. Some of the photos were mounted individually, but others were mounted in pairs or more. To automate the task of matching an image with the coordinates in Knutzen’s spreadsheet and then uploading it to the site, Vanderkam wrote code so that the computer could recognise the brown paper and isolate the photos. This was no simple task. To display them, Vanderkam used an expandable image grid library that enlarges a photo when it’s clicked on. The code is published on GitHub by danvk.
Vanderkam also created “rotate” and “improve this transcription” buttons to encourage site visitors to correct an image’s orientation or the transcription of the descriptive text on the backs of photos.
“Oldnyc is an incredibly valuable research tool. It’s now the best way into the photo collection,” says Knutzen. “Having the information of places aligned spatially combines information to give us new knowledge to make new discoveries.”
Vanderkam doesn’t have plans to work on maps for more cities, but hopes that someone else would be interested in picking up the baton. “The code for it is all open source, so if anyone is interested in building a version for their hometown, they should get in touch,” he says.
July 20th, 2015 | by Maps Devel | published in Google Maps
Posted by Chris Huff, Vice President of Mobile Development, The Weather Channel
Editor’s note: Today’s guest blogger is Chris Huff, Vice President of Mobile Development at The Weather Channel. Read how The Weather Channel uses Google Maps APIs to power their popular Android app. The Weather Channel is just one of many customers who shared their story as part of our cross-country road trip, Code the Road.
We know from experience that the combination of weather, mapping and community input can result in ideas that keep people safe and informed. Our Android app goes far beyond basic weather forecasting, which is why we chose Google Maps. We use Google Maps Android API, Google Maps JavaScript API and ImageOverlays to place geodata, such as weather alerts, hurricanes, and storm tracks, and weather tiles, such as radar maps and clouds, on top of Google Maps.
Radar maps are one of the app’s main features, and we work hard to get them right. We get radar imagery from multiple sources and produce raster images from them. Then we take hundreds of the images and animate them in a frame-based animation sequence. The Google Maps Android API gives us overlays to place the animation on top of maps, and also lets us add additional objects such as pins and polygons to visualize lightning strikes or a storm’s direction. You can see an example below.
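As a rough sketch of the overlay approach (illustrative only, using the Maps JavaScript API rather than The Weather Channel's Android code), a pre-rendered radar frame can be draped over its geographic bounding box and frames rotated on a timer; the URLs, bounds, and timing below are assumptions, not real radar endpoints.

```typescript
// Assumes the Google Maps JavaScript API is loaded and the page contains <div id="map"></div>.
declare const google: any;

const map = new google.maps.Map(document.getElementById("map"), {
  center: { lat: 33.75, lng: -84.39 },
  zoom: 7,
});

// One GroundOverlay per pre-rendered radar frame (placeholder image URLs).
const bounds = { north: 35.0, south: 32.5, east: -82.5, west: -86.5 };
const frames = [0, 1, 2, 3].map(
  (i) =>
    new google.maps.GroundOverlay(`https://example.com/radar/frame-${i}.png`, bounds, {
      opacity: 0.6,
    })
);

// Frame-based animation: show one frame at a time and cycle through them.
let current = 0;
setInterval(() => {
  frames.forEach((frame, i) => frame.setMap(i === current ? map : null));
  current = (current + 1) % frames.length;
}, 500);
```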
The more local weather reporting is, the more accurate it is; a thunderstorm may skip one neighborhood but hit another. So to improve accuracy and to build a community around our app, we’ve been working to make it more social. People send us information about weather near them, and we use the Google Maps Android API to add a pin to the map for each user-created report. Anyone can tap a pin to see the detailed report. Here’s an example of social weather reporting.
With more than 68 million downloads, the app has been a tremendous success. We get 2 billion requests for radar maps every year. There’s an old saying that everyone talks about the weather but no one does anything about it. We beg to disagree. With the Google Maps APIs we’re giving people detailed, useful, live information about the weather, and we believe that’s doing quite a bit.
As part of the Code the Road series we hosted the 24-hour hackathon event, “Storm the Road: Hack for Safety with The Weather Channel and Google Maps”, on June 23. The event gave developers an opportunity to come together to create a new app or feature for mobile or web that helps keep the public safe and informed.
July 17th, 2015 | by Google Blogs | published in Google Blog
It’s Friday, which means it’s time for a good, old-fashioned Search trends round up. Read on to see what the world wanted to know this week.
To Pluto and Beyond
It may have lost its planetary title, but Pluto officially became the most distant solid object to be visited by spacecraft as NASA’s New Horizons successfully completed its “flyby” this week. The mission to explore the dwarf planet took a mere nine years and 3 billion miles of space travel and has already returned some truly incredible high-resolution images of Pluto and its moons, including the most epic vacation Instagram of all time. Searches for news about the mission topped 5 million, plus another 100,000+ searches for photos.
Oh baby
In other historic and out of this world news, Louis Tomlinson is the first member of One Direction to announce he’s going to be a dad. While this was likely upsetting news for many 1D fans, it didn’t stop the searches–Louis racked up more than 500,000 of them this week. And, we’ve all got Jimmy Fallon to thank for this week’s most disgusting search trend, a truly horrifying finger injury called ring avulsion. Fallon, who was sporting a bandaged hand from behind “The Tonight Show” desk, explained that he had surgery following a freak accident involving a kitchen counter and his wedding ring. Searchers took to Google to find out more and with search interest in images of ring avulsion up 50x over the last week, we’re willing to bet at least a few of them probably wish they hadn’t.
Awards season…again
Nominations for the 2015 Emmy Awards were announced this week, inspiring more than 200,000 searches. Gone but definitely not forgotten “Parks and Recreation” was the most searched Emmy nominee for a comedy series, though Amy Schumer (whose new movie “Trainwreck” opens today) edged out Amy Poehler as the most searched lead actress in a comedy. In a potential preview of the 2016 Emmy Awards, Rachel McAdams, who co-stars in the new season of HBO’s “True Detective,” was searched more than 50,000 times this week.
It was a good week all around, and to play us out, we recommend Wilco’s new album, Star Wars. The surprise, free-to-download release this week resulted in more than 50,000 searches–and a lot of excited Tweedy fans.
Posted by Megan Slack, who searched this week for [ring avulsion].
July 17th, 2015 | by Maps Devel | published in Google Maps
We had a fantastic day on June 26 at Epcot where we hosted an event with the Disney team for 40 Tech Sassy Girlz, an Orlando-based non-profit program designed to provide girls in grades 6 through 12 exposure and access to STEM (Science, Technology, Engineering and Mathematics) fields and careers.
The day began with a private tour of the Code the Road bus where the girls were able to see technology first-hand, ask questions, and get a great team photo in front of our bus.
Rebecca Moore, computer scientist and longtime software professional at Google, opened the day with a presentation and discussion about how engineering and technology have changed her life. At Google, she conceived and leads the Google Earth Outreach program, which supports nonprofits, communities and indigenous peoples around the world in applying Google’s mapping tools to the world’s pressing problems in areas such as environmental conservation, human rights and creating a sustainable society.
The girls also heard from Disney engineer Dan Soto who introduced them to the My Disney Experience app and spoke about prototyping and everything that goes into building an app. James McGill, Engineering Manager for the Google Maps APIs gave a talk about being an engineer and how fun it is to work with code and technology every day.
The girls got a hands-on introduction to coding with Made with Code, where they learned the basics of iteration, counters, and more. Made with Code is a project that creates engaging tutorials to help women get interested in and learn how to code.
Once the girls had learned all about engineering and the basics of coding, we went outside into the park for a technology-focused tour of Epcot. The stops on this science-themed exploration were Spaceship Earth, Test Track, Soarin’, Journey Into Imagination, The Seas with Nemo and Friends, and the Google Liquid Galaxy.
Our final surprise was a visit from Mickey Mouse, himself, for a photo op and even a few hugs.
Although we’ve ended our 30-day journey and parked the bus, we’ll be sharing our Code the Road summary video and blog post soon.
Posted by Ashley Smith, Developer Marketing, Google Maps APIs
July 17th, 2015 | by Yamini Gupta | published in Google DoubleClick
Publishers’ growth in programmatic revenue is outpacing traditional direct sales for desktop and mobile across display and video advertising. New technologies like “programmatic guaranteed” are further blurring the lines between direct and programmatic channels.
A new study by The Boston Consulting Group, commissioned by Google, found that despite this trend, many publishers are failing to appropriately capitalize on the programmatic opportunity. For example, the study found that less than 25 percent of programmatic team time is spent on value-creating activities, causing publishers to miss out on significant revenues.
The study also closely analyzed the operations of those publishers that consistently outperform their peers in terms of value creation and efficiency, and arrived at best practices and approaches other publishers can follow to achieve similar success. Using simpler ad tech stack configurations, best-in-class publishers were on average 30% more efficient, had up to 24% higher CPMs, and delivered 10% more impressions otherwise lost to discrepancies.
Head over to DoubleClick.com to read the full study.
Posted by Yamini Gupta
Product Marketing, DoubleClick
July 16th, 2015 | by The Gmail Team | published in Gmail (Google Mail)
Posted by Vijay Umapathy, Product Manager
Wouldn’t it be nice if your concert ticket appeared at the top of your inbox just before the event started, or your hotel confirmation came back right before you needed to check in?
Starting today, Snooze in Inbox is getting a bit more convenient—when you snooze an email that has dates and times (like event confirmations or shipping notifications), you’ll get a one-tap option to snooze to exactly the right time, like the day of your concert or the morning your package will arrive.
You’ll see these new Snooze options on most of these types of emails:
So next time, try snoozing that email to the day before the event. Then sit back and relax, knowing your email will come back at just the right time.
July 16th, 2015 | by Sarah H | published in Google Student Blog
From Google Glass to Project Loon, AdWords to Docs, our interns have the opportunity to work on some of Google’s most cutting-edge and innovative projects. Interns also work across sales and other business functions, bringing a fresh perspective to the work done at Google. To show you just how much of an impact interns make and to highlight their unique experiences, we’re bringing you a special blog series: Google Intern Spotlights. This week, the spotlight is on Matt Walker, who is a Data Center Intern in our Council Bluffs, Iowa office. He spent six years in the Marine Corps before going to school to become a computer engineer. Currently, he attends Cal Poly San Luis Obispo.
Matt and his wife after a helicopter ride over Badlands National Park in South Dakota on 4th of July weekend.
Give us one fun, outlandish fact about yourself.
I have ridden an ostrich.
What inspired you to apply to this internship? What about Google made you want to be an intern here?
I applied for this internship after seeing an ad on the recommended-video feed of YouTube titled “Google likes Vets.” I watched the video, saw that they were looking for veterans going to school for engineering…and bingo-bango…I am an intern.
As for what made me want to intern, I think it was the notion of being a part of a company/program that affects so many people. It sounds cheesy, but it’s true.
What was your path to Google like?
I was in the Marine Corps for six years. Three years as an Information System Specialist (classified Outlook administrator), and three years as a Marine Security Guard in four embassies overseas. After being honorably discharged in 2011, I began pursuing a degree in Computer Engineering, first at Reedley Community College, then at Cal Poly SLO. I have held various jobs as a student (security guard, front desk at a gym, tutor, programmer, and RA) that all helped fill out my resume.
What team are you on? What stands out to you about your co-workers (besides that they’re smart)?
I am on the Hardware Operations Team (HwOps) at the Council Bluffs, IA Data Center (CBF). I would have to say the biggest point of interest about all the people that I work with is the diversity of skillsets and backgrounds that surrounds me. It seems like everyone is not only good at their job, but also really talented in some other (usually useful and applicable) area.
Can you give us a high-level overview of your project? What part of the project do you find most interesting and why?
Aside from being bounced around from team to team every couple weeks, I am working on a personnel management tool. In a place such as CBF, where the buildings are not only huge on their own, but there are also many of them spread across two sites nine miles apart, knowing who is where at a glance is important. My app will allow users to transmit what building they are currently in, so that program managers can make more informed decisions in regards to personnel, especially in urgent situations (critical service failure or something of that nature).
What’s your typical day like?
Well, I wake up in the morning, feelin’ like…well, like I do every other morning.
When I am not working on a task that is unique to whatever team I am on that week, I am either working on my personal project or asking other sections to allow me to shadow them for a couple of hours. I talk to a lot of FTEs and contractors alike. I do a lot of walking, a good amount of scootering (primary mode of transportation in a data center) and my fair share of sitting.
Has there been anything that has surprised you about working at Google?
I have found that when working with large groups of people, usually there’s a few individuals who you wind up going out of your way to avoid; however, at Google, I have not encountered any of those types of people. Everyone I have met or trained with has been so warm, positive and supportive. I believe that can only come from the fact that they are happy to be doing what they are doing and excited to share that feeling with those around them.
You work in the Council Bluffs, Iowa office. Is there a particular place or room on campus that you really like?
I don’t think I really have a favorite room or place, but there is a sitting area that I really like because of the coffee table, as seen below.
If you could only follow one account on Twitter, who/what would it be?
For this to happen, I would have to get a Twitter account. Once I did that, I would probably follow a comedian who is somewhat active…if I am going to be bothered by further notifications on my phone, they might as well be funny.
On the weekends, what’s your go-to place?
My bed. I do enjoy sleeping in quite a bit. When I am not doing that, I try to go somewhere new. I don’t think that I have been to the same place twice…yet.
Do you have any words of advice to aspiring Google interns, specifically for engineering students?
If you have not already learned to do so, learn to humble yourself. Being “good” at something at school is not the same as “good” on-site. The level of knowledge and skill that I am surrounded with is so humbling. With that said, I could not be in a better position to learn from some of the most experienced people.
What does “being Googley” mean to you?
Being as courteous to the people I interact with as they have been to me.
Best conversation in a MK/cafe/elevator?
Starting a conversation with a fellow intern about work led to another conversation with a FTE that concluded with him essentially solving a problem I was having with my car. I would never have expected he would have been able to help me!
Posted by Steven Claunch, Online Hiring and Insights Team