Tuesday 28 June 2011

Google Developer Day coming to a city near you in 2011


By Monica Tran, Developer Marketing Team

As Vic Gundotra announced previously, Google Developer Day (GDD) will be coming to eight cities in 2011. Please save the date, as we prepare to bring our world tour of GDDs to a city near you.

  • September 16: São Paulo, Brazil
  • September 19-20: Buenos Aires, Argentina
  • October 10: Moscow, Russia
  • October 18: Prague, Czech Republic
  • November 1: Tokyo, Japan
  • November 8: Sydney, Australia
  • November 13: Tel-Aviv, Israel
  • November 19: Berlin, Germany
Google Developer Days are a chance to learn about our latest developer products and meet the engineers who work on them. As in years past, we will have an application process when registration opens, so stay tuned, as we will continue to bring you updates on this blog.

In the meantime, get the conversation started with our official Google Developer Day 2011 hashtag, #gdd11.

In the past four years, Monica Tran has been around the world, working as a Product Marketing Manager in Mountain View, London, and Tokyo. After a good run on Google I/O, Monica is back to lead the charge on Google Developer Day, happening in 8 cities worldwide in 2011.

Posted by Scott Knaster, Editor

Swiffy: convert SWF files to HTML5


By Marcel Gordon, Product Manager, Swiffy

Some Google projects really do start from one person hacking around. Last summer, an engineering intern named Pieter Senster joined the mobile advertising team to explore how we could display Flash animations on devices that don’t support Adobe Flash Player. Pieter made such great progress that Google hired him full time and formed a team to work on the project. Swiffy was born!

Today we’re making the first version of Swiffy available on Google Labs. You can upload a SWF file, and Swiffy will produce an HTML5 version that runs in modern browsers with a high level of SVG support, such as Chrome and Safari. It’s still an early version, so it won’t convert all Flash content, but it already works well on ads and animations. We have some examples of converted SWF files if you want to see it in action.

Swiffy uses a compact JSON representation of the animation, which is rendered using SVG and a bit of HTML5 and CSS3. ActionScript 2.0 is also present in the JSON object, and is interpreted in JavaScript in the browser. This representation makes the Swiffy animations almost as compact as the original SWF files.
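
To make that concrete, here is a purely illustrative sketch (not Swiffy's actual format or code) of the general idea: a compact JSON descriptor for a shape and a tween, interpreted by a few lines of JavaScript that drive an SVG element.

// Hypothetical descriptor, just to illustrate the approach --
// the real Swiffy JSON format is more involved.
var animation = {
  shape: { type: 'rect', width: 40, height: 40, fill: '#c00' },
  frames: 60,
  tween: { property: 'x', from: 0, to: 200 }
};

var svgNS = 'http://www.w3.org/2000/svg';
var svg = document.createElementNS(svgNS, 'svg');
var rect = document.createElementNS(svgNS, 'rect');
rect.setAttribute('width', animation.shape.width);
rect.setAttribute('height', animation.shape.height);
rect.setAttribute('fill', animation.shape.fill);
svg.appendChild(rect);
document.body.appendChild(svg);

var frame = 0;
(function tick() {
  // Interpolate the tweened property for the current frame and apply it.
  var t = frame / animation.frames;
  rect.setAttribute(animation.tween.property,
      animation.tween.from + (animation.tween.to - animation.tween.from) * t);
  if (++frame <= animation.frames) {
    window.setTimeout(tick, 1000 / 30);  // roughly 30 frames per second
  }
})();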

Swiffy is a great example of how far the web platform has come. Swiffy animations benefit from the recent advancements in JavaScript execution speed and hardware accelerated 2D graphics in the browser. Viva la Web!

Update Sep. 8, 2011: links updated to reflect Swiffy's move out of Labs.

Marcel Gordon is the Product Manager for Swiffy, doing the make-up and carrying the drinks. On the weekends he dresses up as a pirate and takes his hamster for short walks.

Posted by Scott Knaster, Editor

Monday 27 June 2011

19 companies create innovative products with Google technologies


By Becky Chappell, Developer Marketing Team

Every year at Google I/O, companies arrive excited to demo their coolest new products in the Developer Sandbox. In the past, these demos were only accessible to I/O attendees. This year, we wanted to help spread the love and knowledge from these innovative companies to developers who were unable to attend the conference and to attendees who want to relive the excitement or check out companies they missed.

In addition to the "Who’s at Google I/O" guest posts you may have seen from some of our developers on this blog, we’ve created video case studies from 19 of the companies that demoed their products in this year’s I/O Sandbox. These videos highlight innovations that span the entire range of Google product areas, from Android to Google Apps and Google Web Toolkit to Google TV.

Did you know:
  • that CardinalCommerce is working with Google to enable alternative payments on the web and mobile devices?
  • that you can see what your town looked like in the 1800s using HistoryPin’s photo layer on Google Maps?
  • that you can now play Angry Birds in Chrome? (OK, you probably knew that!)

Here are a few videos to get you started:







Watch the full playlist of videos here. As always, you can visit Google Code to learn about Google technologies you can use to build your own amazing new product.

Becky Chappell is an Associate Product Marketing Manager on the YouTube Product Marketing team, but she’s been spending 20% of her time with the super awesome Developer Marketing team. When she isn’t at work, Becky likes to test out new restaurants in San Francisco and attempt to replicate tasty recipes in her own kitchen.

Posted by Scott Knaster, Editor

Friday 24 June 2011

Fridaygram


By Scott Knaster, Google Code Blog Editor

Even by the standards of web software, the Google App Engine team moves pretty quickly, with a new version every 6 weeks or so. (I learned how fast they go when I was writing App Engine technical docs and trying to keep up with what the engineers were creating.) This week, the team launched App Engine 1.5.1, with a nice collection of new features, including support in the SDK for testing the High Replication Datastore, an API to use ProtoRPC from Python, and support for user presence in the Channel API. You can see the complete list and more details in the App Engine Blog.
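
If you haven't used the Channel API, here's a rough client-side sketch of how a page opens a channel (the token value is a placeholder your server would generate); the new presence support is configured on the server side by listing channel_presence under inbound_services in app.yaml, after which App Engine notifies your app at /_ah/channel/connected/ and /_ah/channel/disconnected/ as clients come and go.

// Requires <script src="/_ah/channel/jsapi"></script> on the page.
// The token below is a placeholder produced by the server's create_channel call.
var channel = new goog.appengine.Channel('CHANNEL_TOKEN_FROM_SERVER');
var socket = channel.open();
socket.onopen = function() {
  console.log('channel open');
};
socket.onmessage = function(message) {
  console.log('server says: ' + message.data);
};
socket.onerror = function(error) {
  console.log('channel error: ' + error.description);
};
socket.onclose = function() {
  console.log('channel closed');
};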

Nature does new releases too, although it usually takes a lot longer to add new features and fix bugs. Can the current version of humans sense a magnetic field? Scientists in Massachusetts found indirect clues by replacing a fly protein with a human one. In a study, the human cryptochrome protein restored the fly’s ability to respond to a magnetic field. Not only is that pretty cool, but the BBC Science News page also has an awesome picture of a fly.

Finally, if you have a few minutes this weekend, check out this nifty video that takes an original and fun look at multiplication.




Fridaygram posts are lighter than our usual fare. They're designed for your Friday afternoon and weekend enjoyment. Each Fridaygram item must pass only one test: it has to be interesting to us nerds.

Friday 17 June 2011

Fridaygram


By Scott Knaster, Google Code Blog Editor

When you search with Google, you use words to create your query. That basic fact changed earlier this week when we launched Search by Image. This new feature is well-described by its name: you just hand Google an image, and it searches for similar images on the web. You can try it out by going to Google Image Search and dragging one of the sample images into the search box. This is handy for figuring out the names of things, such as buildings, paintings, and logos, or for finding images that are similar to one you already have.



Search by Image is a great example of magical technology that helps people learn or do their jobs. Here’s one that improves your health: spray-on skin. For people with burns and other skin damage, ReCell is a technology that allows doctors to take a sample of healthy skin cells, culture them to grow more, and then literally spray the cells onto the injury. This process speeds healing and reduces the risk of complications. Although it sounds like science fiction, it’s already being used in much of the world.

Finally, you probably saw our playable guitar doodle in honor of Les Paul’s birthday. And you might have seen that the guitar used a lot of cool web technology, including HTML5 Canvas, CSS, JavaScript, Flash, Google Font API, goo.gl, and Google App Engine. But you might not have known that people recorded themselves playing everything from Lady Gaga to Led Zeppelin. Enjoy!

Fridaygram posts are lighter than our usual fare. They're designed for your Friday afternoon and weekend enjoyment. Each Fridaygram item must pass only one test: it has to be interesting to us nerds.

Wednesday 15 June 2011

Working with Chrome's file browser handler


By Jeremy Glassenberg, Platform Manager, Box

This post is part of Who's at Google I/O, a series of guest blog posts written by developers who appeared in the Developer Sandbox at Google I/O 2011.


During the day 2 keynote of Google I/O, I was excited to see Box's integration with the Chromebook's file browser handler getting demoed on the big stage. The integration makes local files and files you encounter on the web easily accessible to cloud services inside Chrome OS.

Chrome's file browser handler utilizes the new HTML5 file system API, designed to enable web applications to interact with local files. This API lets web applications read files, edit files, and create new files within a designated local space on a user's machine. This includes creating binary files for application data, and in Box's case, accessing user-created files to let people easily move their content to the cloud.
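
As a rough sketch of what that API looks like from a web app (using the webkit-prefixed form Chrome shipped at the time), requesting a sandboxed filesystem, creating a file in it, and reading the file back goes something like this:

// Ask for 5 MB of temporary, app-private sandboxed storage.
window.webkitRequestFileSystem(window.TEMPORARY, 5 * 1024 * 1024, function(fs) {
  // Create (or open) a file in the app's designated local space.
  fs.root.getFile('notes.txt', {create: true}, function(fileEntry) {
    // Resolve the FileEntry to a File and read its contents.
    fileEntry.file(function(file) {
      var reader = new FileReader();
      reader.onloadend = function() {
        console.log('contents: ' + reader.result);
      };
      reader.readAsText(file);
    });
  }, function(err) {
    console.log('getFile failed, error code ' + err.code);
  });
}, function(err) {
  console.log('requestFileSystem failed, error code ' + err.code);
});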

As mentioned during the Google I/O keynote, the integration between Box and the Chrome OS file browser handler took our team only a weekend to build. We were able to build it so quickly because of the simplicity of Chrome's file browser platform and Box's API, both of which were designed to make content integrations like this easy for developers to implement.

In this case, the Quick Importer tool from the Box API reduced the entire development process to just a few steps:

1. We created a Chrome extension manifest to work with Box.
{
  "name": "Box Uploader",
  ...
  "file_browser_handlers": [
    {
      "id": "upload",
      "default_title": "Save to Gallery",  // What the button will display
      "file_filters": [
      ]
    }
  ],
2. In the Chrome manifest, we specified the relevant file types to which the service applies. In our case, that's most file types, as seen below. Specialized services may just want certain types, such as images for Picasa.
"file_browser_handlers": [
{
"id": "upload",
"default_title": "Save to Box",
"file_filters": [
"filesystem:*.*"
]
}
],
3. With some JavaScript code connecting to the file browser handler, we set up a way to upload files through Box’s Quick Importer.
var fm = new FileManager();
fm.uploadServer = 'https://www.box.net/<...>';

if (bgPage && bgPage.filesToUpload.length) {
  var entry;
  // Drain the queue of files collected by the extension's background page.
  while ((entry = bgPage.filesToUpload.pop())) {
    // Resolve the FileEntry to a File object, then upload it to Box.
    entry.file(function(file) {
      fm.uploadFile(file);
    });
  }
}
That's actually all there was to the integration.
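
For context, the bgPage object above is the extension's background page (reached via chrome.extension.getBackgroundPage()), and its filesToUpload queue is filled by the file browser handler's onExecute event. A hedged sketch of that background-page side, with the handler id matching the manifest above and the page name upload.html being an assumption, might look like this:

// Background page (sketch): collect the files the user selected in the
// Chrome OS file browser so the upload page can process them.
var filesToUpload = [];

chrome.fileBrowserHandler.onExecute.addListener(function(id, details) {
  if (id !== 'upload') {
    return;  // 'upload' is the id declared in the manifest
  }
  // details.entries is an array of FileEntry objects for the selected files.
  details.entries.forEach(function(entry) {
    filesToUpload.push(entry);
  });
  // Open the page that authenticates with Box and performs the upload
  // (upload.html is a hypothetical name for that page).
  window.open(chrome.extension.getURL('upload.html'));
});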

Once the file is uploaded to the Box API's Quick Import URL, our page is displayed to authenticate the user, let the user select a Box folder for the file, and then complete the upload.


Quick Import provided an easy and fast way to connect the two platforms, but developers who prefer more control can make direct calls to our API to customize the integration and take advantage of additional features such as automatic sharing.

Thanks to the simplicity of Chrome's file browser handler and some extra tools in the Box API, our development time was very short (just a weekend), but it could have actually been even quicker. We had a couple of unusual complications that weekend:

1. The Google Chrome team was still experimenting with the file browser, so development from both sides was happening in parallel, which can be a bit tricky. Now that the file browser has been thoroughly tested, you should have an even easier time.

2. I took my girlfriend out a couple times, since her final exams were coming up soon afterward. I love you, Hayley!

Once the content has been uploaded to Box, it’s accessible to many Google services, including Gmail, Google Docs, and Google Calendar, through additional integrations on our site with Google Apps. Ah, the wonders of open platforms.


Jeremy Glassenberg is the Platform Manager at Box, where he oversees partner integrations, API and platform product management, and Box’s community of several thousand developers. In addition to managing Box's developer platform, Jeremy is a part-time blogger at ProgrammableWeb, and a contributor to several open-source projects.

Posted by Scott Knaster, Editor

Tracking performance with HTTP Archive


By Arvind Jain, Make the Web Faster Team

At Google, we put a lot of effort into making the web faster. To understand the impact of our work, we need to track the speed of the web over time. HTTP Archive allows us to do that.

HTTP Archive generates regular reports illustrating trends such as page size and Page Speed score of the top pages on the web. Interested users can download the raw dataset for free, modify the source code to perform their own analyses, and unearth valuable trends.

HTTP Archive crawls the world’s top 18,000 URLs, with a plan to increase that number to a million or more in the coming months.

Google engineers built HTTP Archive as an open source service. We are now transitioning the ownership and maintenance of it to the Internet Archive. Google is proud to support the continued development of HTTP Archive and to help create a rich repository of data that developers can use to conduct performance research.

Arvind Jain founded and leads the Make the Web Faster initiative at Google. As part of that initiative, Arvind also started the Instant Pages effort, just announced yesterday.

Posted by Scott Knaster, Editor

Wednesday 8 June 2011
Facebook StumbleUpon Twitter Google+ Pin It

Add Gesture Search to your Android apps


By Yang Li, Research Scientist

Gesture Search from Google Labs now has an API. You can use the API to easily integrate Gesture Search into your Android apps, so your users can gesture to write text and search for application-specific data. For example, a mobile ordering application for a restaurant might have a long list of menu items; with Gesture Search, users can draw letters to narrow their search.


Another way to use Gesture Search is to enable users to select options using gestures that correspond to specific app functions, like a touch screen version of keyboard shortcuts, rather than forcing hierarchical menu navigation.

In this post, I’ll demonstrate how we can embed Gesture Search (1.4.0 or later) into an Android app that enables a user to find information about a specific country. To use Gesture Search, we first need to create a content provider named CountryProvider, in the format required by the Android search framework. This content provider consists of 238 country names.

Then, in GestureSearchAPIDemo, the main activity of the app, we invoke Gesture Search when a user selects a menu item. (Gesture Search can be invoked in other ways depending on specific applications.) To do this, we create an Intent with the action "com.google.android.apps.gesturesearch.SEARCH" and the URI of the content provider. If the data is protected (for example, see AndroidManifest.xml), we also need to grant read permission for the content URI to Gesture Search. We then call startActivityForResult to invoke Gesture Search.
public boolean onCreateOptionsMenu(Menu menu) {
  super.onCreateOptionsMenu(menu);
  menu.add(0, GESTURE_SEARCH_ID, 0, R.string.menu_gesture_search)
      .setShortcut('0', 'g').setIcon(android.R.drawable.ic_menu_search);
  return true;
}

@Override
public boolean onOptionsItemSelected(MenuItem item) {
  switch (item.getItemId()) {
    case GESTURE_SEARCH_ID:
      try {
        Intent intent = new Intent();
        intent.setAction("com.google.android.apps.gesturesearch.SEARCH");
        intent.setData(SuggestionProvider.CONTENT_URI);
        intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
        intent.putExtra(SHOW_MODE, SHOW_ALL);
        intent.putExtra(THEME, THEME_LIGHT);
        startActivityForResult(intent, GESTURE_SEARCH_ID);
      } catch (ActivityNotFoundException e) {
        Log.e("GestureSearchExample", "Gesture Search is not installed");
      }
      break;
  }
  return super.onOptionsItemSelected(item);
}
In the code snippet above, we also specify, via intent.putExtra(SHOW_MODE, SHOW_ALL), that we want to show all of the country names when Gesture Search is brought up. The parameter name and its possible values are defined as follows:
/**
 * Optionally, specify what should be shown when launching Gesture Search.
 * If this is not specified, SHOW_HISTORY will be used as a default value.
 */
private static String SHOW_MODE = "show";

/** Possible values for SHOW_MODE */
// Show the visited items
private static final int SHOW_HISTORY = 0;
// Show nothing (a blank screen)
private static final int SHOW_NONE = 1;
// Show all of the data items
private static final int SHOW_ALL = 2;

/**
 * The theme of Gesture Search can be light or dark.
 * By default, Gesture Search will use a dark theme.
 */
private static final String THEME = "theme";
private static final int THEME_LIGHT = 0;
private static final int THEME_DARK = 1;

/** Keys for results returned by Gesture Search */
private static final String SELECTED_ITEM_ID = "selected_item_id";
private static final String SELECTED_ITEM_NAME = "selected_item_name";
As you can see in the code, we can have Gesture Search launch showing recently selected items, nothing at all, or, as in this example, the full list of country names. The user can draw gestures directly on top of the list, and a target item will pop up at the top. When the user taps a country name, Gesture Search exits and returns the result to the calling app. The following method is invoked to process the user’s selection, reading the ID and name of the chosen data item.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
  if (resultCode == Activity.RESULT_OK) {
    switch (requestCode) {
      case GESTURE_SEARCH_ID:
        long selectedItemId = data.getLongExtra(SELECTED_ITEM_ID, -1);
        String selectedItemName = data.getStringExtra(SELECTED_ITEM_NAME);
        // Print out the Id and name of the item that is selected
        // by the user in Gesture Search
        Log.d("GestureSearchExample", selectedItemId + ": " + selectedItemName);
        break;
    }
  }
}
To use the Gesture Search API, you must be sure Gesture Search is installed. To test this condition, catch ActivityNotFoundException as shown in the code snippet above and display a dialog asking the user to install Gesture Search.

You can download the sample code at http://code.google.com/p/gesture-search-api-demo.

Yang Li builds interactive systems to make information easily accessible anywhere anytime. He likes watching movies and spending quality time with his family.

Posted by Scott Knaster, Editor

Tuesday 7 June 2011

+1'ing our API docs


By Ashleigh Rentz, API Docs Program Manager

"Hey Scott, how do I format this API call so the data comes back as a string instead of an object?"

Sometimes it’s hard to find the right doc at the right time. Lots of web pages mention the terms you’re looking for, but which ones actually have them in the right context? We ask our friends and coworkers these questions because we bet they’ve seen the problem before. We trust their technical judgment and we know they can skip straight to the right answer.

That’s why we’ve just added the +1 button to the top of most code.google.com API docs:


Whenever you find the key information you need, we hope you’ll +1 that page and let the world know! It’s a simple way to help point the people you code with in the right direction and make RTFM’ing a bit easier for everyone.

Ashleigh Rentz is a Program Manager supporting the team of technical writers who tirelessly document Google’s developer APIs. She can often be seen skating down the halls between meetings.

Posted by Scott Knaster, Editor

Friday 3 June 2011

Spring cleaning for some of our APIs


By Adam Feldman, APIs Product Manager

UPDATE June 3: In the days since we announced the deprecation of the Translate API, we’ve seen the passion and interest expressed by so many of you, through comments here (believe me, we read every one of them) and elsewhere. I’m happy to share that we’re working hard to address your concerns, and will be releasing an updated plan to offer a paid version of the Translate API. Please stay tuned; we’ll post a full update as soon as possible.


Google I/O is always an exciting time for developers. This year we made 34 separate announcements, including 7 new APIs.
With all of the recent API announcements, our API directory is getting quite long! However, some of our older APIs have been superseded by bigger and better things and others may not be receiving the necessary love.

As the web evolves and priorities change, we sometimes deprecate APIs – that is, remove them from active development – to free up resources and concentrate on moving forward. Today we're announcing a spring cleaning for some of our APIs.

Note that the vast majority of Google APIs are not affected by this announcement.

Following the standard deprecation period – often as long as three years – some of the deprecated APIs will be shut down. The rest have no scheduled shutdown date, but won’t get any new features. The policy for each deprecated API is specified in its documentation.
Wherever possible, the documentation includes suggested alternatives designed to help you achieve similar functionality — whether it’s a new version or related offering. We hope these alternatives, along with lengthy deprecation periods, will help minimize the impact and allow us to focus on building great products together.

Adam Feldman is a Product Manager, focusing on all of Google's APIs and making sure Google provides the best possible platform to developers.

Posted by Scott Knaster, Editor

Thursday 2 June 2011

Now open source: 20 Things I Learned about Browsers and the Web


By Min Li Chan, Google Chrome Team

Late last year, we released an illustrated online guidebook for everyday users who are curious about how browsers and the web work. In building 20 Things I Learned about Browsers and the Web in HTML5, JavaScript, and CSS with our friends at Fi, we heard from many of you that you’d like to get your hands on the source code. Today, we’re open sourcing all the code for this web book at http://code.google.com/p/20thingsilearned, so that you can use and tinker with the code for your own projects.


20 Things I Learned was celebrated this year as an Official Honoree at the 15th Annual Webby Awards in the categories of Education, Best Visual Design (Function), and Best Practices. For those of you who missed our initial release last year, here’s a quick recap of the APIs behind some of the web book’s popular features:
  • The book uses the HTML5 canvas element to animate some of the illustrations in the book and enhance the experience with transitions between the hard cover and soft pages of the book. The page flips, including all shadows and highlights, are generated procedurally through JavaScript and drawn on canvas. You can read more about the page flips on this HTML5rocks tutorial.
  • The book takes advantage of the Application Cache API so that it can be read offline after a user’s first visit.
  • With the Local Storage API, readers can resume reading where they left off (see the sketch after this list).
  • The History API provides a clutter-free URL structure that can be indexed by search engines.
  • CSS3 features such as web fonts, animations, gradients and shadows are used to enhance the visual appeal of the app.
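
As a small illustration of two of the items above (a hedged sketch, not the book's actual code), resuming the reader's position with Local Storage while keeping clean URLs with the History API could look roughly like this:

// Remember the last page read and reflect the current page in a clean URL.
// showPage() is assumed to be the app's own rendering function.
function savePosition(pageNumber) {
  localStorage.setItem('lastPage', String(pageNumber));
  history.pushState({page: pageNumber}, '', '/things/' + pageNumber);
}

function restorePosition() {
  var saved = localStorage.getItem('lastPage');
  return saved ? parseInt(saved, 10) : 1;  // default to the first page
}

// Handle back/forward navigation between pages.
window.addEventListener('popstate', function(event) {
  if (event.state && event.state.page) {
    showPage(event.state.page);
  }
});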


With this open source release, we’ve also taken the opportunity to translate 20 Things I Learned into 15 languages: Bahasa Indonesia, Brazilian Portuguese, Chinese (Simplified and Traditional), Czech, Dutch, English, French, German, Italian, Japanese, Polish, Russian, Spanish, and Tagalog.


We hope that web books like 20 Things I Learned continue to inspire web developers to find compelling ways to bring the power of open web technologies to education. 20 Things I Learned is best experienced in Chrome or any up-to-date, HTML5-compliant modern browser. For those of you who’ve previously read this web book, don’t forget to hit refresh on your browser to see the new language options.

Min Li Chan is a Product Marketing Manager on the Google Chrome Team and the project curator/author for 20 Things I Learned about Browsers and the Web.

Posted by Scott Knaster, Editor

Wednesday 1 June 2011

Bringing more context to Gmail contextual gadgets


As part of the launch of Gmail contextual gadgets, Google released a set of predefined extractors that developers could use. These extractors allow developers to match content within a single part of an email message, such as the subject, and use that content to display relevant information to the current user.

Many Gmail contextual gadget developers have expressed a desire to match on more complex patterns than is possible with the predefined extractors. Today, with the launch of the Google Apps extensions console, these complex patterns, known as custom extractors, are now available to drive contextual gadgets.

Custom extractors allow developers to trigger their gadget when a series of conditions are met. For example, a developer could write an extractor that triggered a gadget only when “Hello world” appeared in the subject and “john@example.com” was the sender of the email. This allows developers to more finely tune their gadgets, and provide even more relevant contextual information.

If you’re interested in writing a custom extractor you can get started by reading our documentation. If you have questions, please post them in the forum.

Introducing a new way for IT developers to extend Google Apps

Today we're introducing the Google Apps extensions console, a new tool to help IT departments and in-house application developers integrate with Google Apps.

In-house developers can now access the same Google Apps extension points first introduced in the Google Apps Marketplace. Applications can create links in the navigation bar (alongside “Calendar” and “Documents”), share a single sign-on with Google accounts, and run inside Gmail using rich contextual gadgets.

The extensions console helps in-house developers create new projects, manage team permissions, retrieve OAuth credentials, and upload their application manifest. Once the app is ready to deploy, administrators can install the app to their domain control panel for wider release.

You can get started with the console documentation to learn more.