Wednesday 25 May 2011

How to get from weekend idea to funded startup


By Don Dodge, Developer Advocate

Almost every developer has an idea and might want to start a company. Where do you start? Entrepreneurs Paul Buchheit, Joe Kraus, and Seth Priebatsch explained how to go from hacking on nights and weekends to building an investor-funded startup. We also discussed how to find co-founders, attract investors, and focus on the key decisions. You can watch the complete Google I/O session on YouTube. Here are some highlights.



Should I have a co-founder? Having strong co-founders join you in transforming your idea into a real company is critical to success. There is a positive correlation between the number of co-founders and successful outcomes up to about four co-founders. Beyond four co-founders there isn’t much data. But having more co-founders on your team definitely improves your chances of success.

What are important characteristics of a co-founder? It helps if you have worked together before, know each other well, have complementary expertise, and can communicate openly and honestly. Joe Kraus said you should be able to settle arguments with great answers, not the compromise middle position. What else should you look for in a co-founder?
  1. Experience starting a company
  2. Domain experience and an understanding of the market
  3. Balance and different experience than your own
  4. Passion about the company vision
How do you get started? Paul Buchheit knew he wanted to start a company but didn’t know how. So, he decided to join a startup to get some experience. That startup was Google. Paul learned how startups grow, and worked with some great people who would later become his co-founders at FriendFeed. Having experience at a startup earns you credibility with potential co-founders, employees, and investors.

What matters most: team, traction, idea, or market segment? They all matter, but the people on the team are the number one consideration. The founding team shapes the product vision and sets the direction for the company. Potential employees and investors are attracted, or not, by the members of the founding team. The idea matters, but will probably change significantly over time, so most investors don’t fixate on the idea. The market segment is important, but only as a gauge of the range of successful outcomes. Traction from early users or customers makes it much easier to raise money.

How do you find investors? People invest in businesses they understand, or people they know. Look for investors that have started companies in your area, or have invested in similar companies in the past. Talk to everyone you know about your idea. Joe Kraus, co-founder of Excite, tells the story of how he read a book about starting companies, called the author, got introduced to other people, who introduced him to other people, and finally ended up with a $3M investment from Kleiner Perkins, one of the top VCs in the world.

Should you raise money from VCs or Angels? The first consideration is who can help you most. You want more than just money from investors. You want help, advice, introductions to other people who can help, and maybe access to press. Aside from help, it depends on how much money you need to raise. Friends and Family is the best place to start to raise small amounts of money. Angel investors can fund anywhere from $100K to $1M or more. Venture Capitalists (VCs) usually invest $1M to $3M in a first round Series A investment.

Incubators, Angels, and VCs - Seth Priebatsch, founder of SCVNGR.com, did all three in starting his company. Seth entered a business plan competition at Princeton...and won. He used that to get the initial product built, and then applied to DreamIT, a startup incubator. That experience at the incubator allowed him to build and refine the product. Next he raised a small amount of money from Angels and brought on advisers to help him grow the company. That led to a small round from VCs. Seth believes the more investors you have, the more help, advice, and experience you get.

How do you arrive at a valuation for the company? Joe Kraus says it is an art, not a science. It depends on the stage of the company, the competition, and how fast the market segment is growing. Most early stage startups don’t have revenue and don’t have many users so the valuation is typically between $1M and $3M, and depends on the experience of the founding team, how much progress you have made on the product, and the relative success of competitors. The best way to determine a fair valuation is by having several competing investors give you proposals.

Do I need a business plan? No, but you do need a good slide deck that explains what you want to do, what problem it solves, why it will be successful, and how your team can execute on the vision. Here is a link to a post that explains how to pitch your company to investors. A good pitch deck and a product demo are what most investors are looking for. A business plan might be useful for helping you refine your ideas and vision, but most investors will never read one.

Are patents, IP, and trademarks important? Paul Buchheit says in most cases they don’t matter for early stage startups. Joe Kraus added, patents might be of some value to a potential acquirer, but probably just as a defense against patent infringement cases. Patents are very expensive to obtain (legal bills) and they take two to four years, sometimes longer, to actually get issued. By that time most startups are out of business, acquired, or moving on to something else. Even if you have a patent, most startups can’t afford to defend it in court against potential infringers. The legal expense of defending a patent, and time lost away from your business, make it nearly impossible for a small startup.

Don Dodge is a Developer Advocate at Google helping developers build new applications on Google platforms and technologies. Prior to joining Google Don was a startup evangelist at Microsoft. He is also a veteran of five start-ups including Forte Software, AltaVista, Napster, Bowstreet, and Groove Networks.

Posted by Scott Knaster, Editor

Tuesday 24 May 2011

Hacking for humanity in Silicon Valley and around the globe


By Carlos Cuesta, Developer Marketing Team

Cross-posted with the Official Google Blog

Two years ago, representatives from Google, Microsoft, Yahoo!, Hewlett-Packard, NASA and the World Bank came together to form the Random Hacks of Kindness (RHoK) program. The idea was simple: technology can and should be used for good. RHoK brings together subject matter experts, volunteer software developers and designers to create open source, technology-agnostic software solutions that address challenges facing humanity. On June 4-5, 2011 we’ll hold the third Random Hacks of Kindness global event at five U.S. locations and 13 international sites, giving local developer communities the opportunity to collaborate on problems in person.


The RHoK community has already developed some applications focused on crisis response such as I’mOK, a mobile messaging application for disaster response that was used on the ground in Haiti and Chile; and CHASM, a visual tool to map landslide risk currently being piloted by the World Bank in landslide affected areas in the Caribbean. Person Finder, a tool created by Google’s crisis response team to help people find friends and loved ones after a natural disaster, was also refined at RHoK events and effectively deployed in Haiti, Chile and Japan.

We’re inviting all developers, designers and anyone else who wants to help “hack for humanity,” to attend one of the local events on June 4-5. There, you’ll meet other open source developers, work with experts in disaster and climate issues and contribute code to exciting projects that make a difference. If you’re in Northern California, come join us at the Silicon Valley RHoK event at Google headquarters.

And if you’re part of an organization that works in the fields of crisis response or climate change, you can submit a problem definition online, so that developers and volunteers can work on developing technology to address the challenge.

Visit http://www.rhok.org/ for more information and to sign up for your local event, and get set to put your hacking skills to good use.



Carlos Cuesta is a Product Marketing Manager in Mountain View, focusing on Google Geo APIs (Maps, Earth, Fusion Tables, and Places). When he's not at work, he spends his spare time hunting down the latest food trucks and street vendors, using Google Maps, of course.

Posted by Scott Knaster, Editor

Friday 20 May 2011

Fridaygram


By Scott Knaster, Google Code Blog Editor

Did you participate in Google I/O last week? Nearly 1 million people did, by attending in person in San Francisco, gathering at dozens of I/O Extended events around the world, or watching the live streamed keynotes and sessions on YouTube.


Google I/O pushed an enormous amount of information out into the world. Here on this blog, we did our part by publishing many posts about new Google announcements, along with a bunch of guest posts written by developers. Because there were so many posts last week, I figured you might have missed some, so I want to highlight a couple of them here.

In this post, Cameron Henneke writes about his experience developing GQueues Mobile, a task manager app. Cameron discusses the trade-offs developers have to think about when coding for mobile platforms. Should you develop in HTML or go native? What are the advantages to each? How will that choice affect development? What do your users really want? Cameron’s post contains a thorough and candid discussion of his decision-making process.

Another post describes a versatile new technology called near field communication (NFC) and how doubleTwist uses it to share information from one Android device to another. NFC provides a super-low overhead way for two devices to exchange a small amount of data, and doubleTwist’s post not only demonstrates a practical use of NFC in an app, but also provides a lot of sample code to show how they did it.


Finally, I was pretty busy during Google I/O and I didn’t get to see all the sessions I wanted. Luckily, it’s not too late for anybody to experience more of I/O by watching session videos on YouTube. For my weekend nerd fun, I plan to grab some popcorn and go here. When it’s time to take a break, I can even rock out with Jane's Addiction on the After Hours video. Party!

Thursday 19 May 2011

ChromeVox: built-in spoken feedback for Chrome OS


By T.V. Raman, Research Scientist

Cross-posted with the Chromium Blog

We recently unveiled ChromeVox — a built-in screen reader for Chrome OS — during Google I/O 2011. This is an early developer beta that is designed to help authors of web applications come up to speed with platform accessibility on Chrome OS.

ChromeVox is built as a Chrome extension. This means that unlike most accessibility software, it is built using only web technologies like HTML5, CSS and JavaScript. As the built-in accessibility solution for Chrome OS, it can help users with special needs access modern web apps, including those that utilize W3C ARIA (Accessible Rich Internet Applications) to provide a rich, desktop-like experience.

ChromeVox leverages two of Chrome's experimental extension APIs, the experimental.tts API for cross-platform text-to-speech, and the experimental.accessibility API that lets an extension listen for accessibility events in Chrome's menus and toolbars. In turn, ChromeVox exposes a simple screen reader API to web developers who want to further customize the ChromeVox user experience. Thus, within your application, you can:
  • Automatically generate spoken messages and earcons.
  • Set ChromeVox to synchronize with your application's current focus.
ChromeVox also comes with an interactive online tutorial that demonstrates how users of spoken feedback interact with webpages. Examples range from static content to interactive applications. You can test these same navigation techniques within your own applications to quickly verify users can reach all portions of your application using the keyboard and obtain meaningful feedback. You can then annotate your application with the necessary ARIA properties and other accessibility enhancements to ensure that blind and visually impaired users gain complete access to your application. Please see our Google I/O 2011 talk for more.
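
To make that concrete, here is a minimal sketch (ours, not ChromeVox code) of the kind of ARIA annotation this process leads to; the element IDs are hypothetical:

// Sketch: annotate a custom widget with standard ARIA properties so
// spoken-feedback tools like ChromeVox can describe it meaningfully.
// The element IDs below are hypothetical.
var results = document.getElementById('search-results');
results.setAttribute('role', 'listbox');
results.setAttribute('aria-label', 'Search results');
results.tabIndex = 0;  // make the widget reachable with the keyboard

// A polite live region announces updates without stealing focus.
var status = document.getElementById('status-message');
status.setAttribute('role', 'status');
status.setAttribute('aria-live', 'polite');
status.textContent = '12 results loaded';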

Details on enabling accessibility in Chrome OS can be found on the Accessibility help page, and the Chrome extension is available for download from our Wiki page. For now, ChromeVox is targeted at end-users on Chrome OS, but it may also prove a useful tool to web developers using Chrome on all major platforms. We welcome your feedback via our Open Source project website at http://google-axs-chrome.googlecode.com.


T. V. Raman is a research scientist at Google. He leads a team of engineers building innovative user interfaces on Android and Chrome OS, and researches creating highly efficient eyes-free interfaces.

Posted by Scott Knaster, Editor

Friday 13 May 2011

Keynote and session videos from Google I/O now live


By Mike Winton, Director of Developer Relations

With Google I/O 2011 just two days behind us, we wanted to thank the nearly one million developers who joined us at Moscone Center, attended I/O Extended events, and watched online via I/O Live from 161 countries around the world. In addition to the announcements made at the keynote presentations, we had more than 30 announcements in our 110 sessions. HD recordings of all these sessions are now available online.
Highlights from this year’s event are posted at www.google.com/io, where we are featuring photos, announcements, and the latest videos. Also, stay tuned for a feature on “Backstage at Google I/O” where we will highlight the developers and artists who helped to make the event possible this year.

Google I/O kicks off the year as our biggest developer event, but we’re only getting started. As of today, we are announcing locations for our eight Google Developer Days (GDDs), which will take place all over the world with more than a few DevFests in between. Stay tuned for the 2011 event details; we look forward to seeing you in Brazil, Argentina, Prague, Moscow, Tokyo, Sydney, Israel and Germany for our Google Developer Team world tour.


Mike Winton founded and leads Google's global Developer Relations organization. He also enjoys spending time with his family and DJing electronic music.

Posted by Scott Knaster, Editor

Dreams in 3D: a WebGL experience for the modern browser


By Ricardo Cabello (aka Mr.doob), Google Data Arts Team



Last August, we released “The Wilderness Downtown”, a music experience that brought together HTML5 and JavaScript, as well as the Google Maps and Street View APIs. Today, we’re excited to introduce our newest project, “3 Dreams of Black”, made with WebGL, HTML5 and JavaScript, and designed for modern browsers like Google Chrome. We previewed this music experience yesterday with web developers at Day 2 of the Google I/O keynote.

“3 Dreams of Black” takes you on a journey through three dream worlds constructed through a combination of rich 2D drawings and animations interwoven with interactive 3D sequences. Throughout various points in these dream worlds, you can grab your mouse and guide the protagonist’s point of view through the experience. This music experience also includes a 3D model creator that allows you to create your own relics and contribute to the shared collective dream. “3 Dreams of Black” is written and directed by Chris Milk, and developed with a few folks here at Google.



In creating “3 Dreams of Black”, we’ve had the opportunity to build many tools, libraries, and models. We’ve fully opened up the source code and made it available for web developers to tinker with at www.ro.me/tech. Beyond the code, highlights include eight WebGL demos, a fun model viewer for interacting with some of the animals from the web experience, and the Three.js 3D library used for building the experience. A big part of the project was also defining a good pipeline for getting all the animals and environment models right in WebGL; for this, we extended Blender with custom plugins so we could manipulate and export the data with ease.
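
If you haven’t used Three.js before, a minimal scene looks something like this. This is a sketch against the current Three.js API, not code from the project itself:

// A minimal Three.js scene: a spinning cube rendered with WebGL.
// Assumes three.js has been loaded via a script tag.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(
    75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 3;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
scene.add(cube);

(function animate() {
  requestAnimationFrame(animate);
  cube.rotation.y += 0.01;  // spin the cube a little each frame
  renderer.render(scene, camera);
})();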







“3 Dreams of Black” is set to the song “Black” off the album ROME, presented by Danger Mouse & Daniele Luppi, featuring Jack White and Norah Jones on vocals, to be released soon on the record label EMI. Because it’s built with WebGL, it requires a browser that supports WebGL, such as Chrome, running on Windows Vista or Mac OS X 10.6 and above, which helps ensure that your computer has the necessary, up-to-date graphics drivers. We hope you’ll take a moment to dive into the experience and the developer resources at www.ro.me.

Ricardo Cabello is a designer/developer in the Google Data Arts Team. He is the creator of several popular Chrome Experiments, including Google Gravity, Ball Pool, and Harmony.

Posted by Scott Knaster, Editor

Wednesday 11 May 2011

Making money with Google In-App Payments for the Web

Cross-posted from the Google Checkout Blog

By Amit Fulay, Product Manager and Mikhail Seregine, Software Engineer

Today at Google I/O, we launched the developer API of Google In-App Payments for the web. In-App Payments enables any web application to receive payments from users and keep them engaged in your application. It is available to all US developers in sandbox today and will be followed by a consumer launch and an international rollout over the summer.

The team started building Google In-App Payments soon after Jambool was acquired by Google in August 2010, combining Social Gold technology and expertise with Google scale. For the payments platform that we’re announcing today, the theme is simplicity:

The simple API makes integration fast so you can start getting paid sooner. Implementing In-App Payments requires only two API calls: one to initiate the payment, and one to accept the notification when a payment is made.

The simple user experience will let your users pay without leaving the app or entering billing details. Users who have previously completed a purchase on Google Checkout, Android Market, the Google eBookstore, YouTube Movies and more can use that same account to pay for items in your app in minimal steps. Fast payments that keep users in your game or site can boost customer conversion rates, increasing your revenue.

The simple pricing model is a flat payment processing fee of 5%. It’s 5% whether you distribute your app yourself or via the Chrome Web Store. There are no fixed purchase fees, setup costs, or monthly minimums.

The JavaScript version of the In-App Payments API is available to developers today. A Flash version of the API will be available in the coming weeks.
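
To give a feel for the client side of the two-call model, here is a rough sketch of the initiate-payment call. The function name and its parameters are our assumptions about the API’s shape, not names confirmed in this post; check the In-App Payments documentation for the real ones.

// Hypothetical sketch of the client-side "initiate payment" call.
// The goog.payments.inapp.buy name and its parameters are assumptions,
// illustrating the two-call model rather than the confirmed API.
// Assumes the In-App Payments JavaScript library has been loaded.
goog.payments.inapp.buy({
  'jwt': signedPurchase,  // purchase details signed by your server
  'success': function(result) { unlockPurchasedItem(); },
  'failure': function(result) { showPaymentError(); }
});
// The second API call is server-side: your server accepts an HTTP POST
// notification from Google when the payment completes.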

We invite you to sign up, start integrating your apps and send us feedback. Let’s work together to delight consumers this summer with amazing web apps, monetized effectively, all in the app.

Getting organized with the Tasks API

Cross-posted from the Google Apps Developer Blog

Google Tasks helps many of us to remember all those things that keep us busy. Towards the end of last year we asked our users what they wanted to see improved with Google Tasks and an overwhelming request was for the ability to access tasks from anywhere — be it on the move, on the desktop, or through their favorite Web apps.

Today, we’re checking off a big to-do from our list and are inviting you to try out the new Google Tasks API. Using the Google Tasks API, developers can — for the very first time — create rich applications which integrate directly with Google Tasks.

The Google Tasks API provides developers with a powerful set of API endpoints for retrieving and modifying Google Tasks content and metadata. It offers a simple, RESTful interface and supports all basic operations required to query, manage and sync a user’s tasks and task lists. The API uses JSON for data representation and works with multiple authentication mechanisms including OAuth 2.0.

Plain HTTP using JSON:

POST /tasks/v1/lists/<list-ID>/tasks
Content-Type: application/json
...
{ title: "Publish blog post" }

Using the Google API Client Library for Java:

Task task = new Task();
task.setTitle("Publish blog post");
client.tasks.insert("list-ID", task).execute();
Client libraries are provided for several major programming environments and should help you get up and running quickly.

The API is available in Labs and can be activated for your project through the API Console. Get started today by trying the Tasks API yourself using the API Explorer and taking a look at the documentation.
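
For instance, listing the authorized user’s task lists is a single authorized GET. Here’s a sketch in plain JavaScript, assuming you already hold an OAuth 2.0 access token for the Tasks scope:

// Sketch: list the authorized user's task lists via the REST interface.
// Assumes accessToken holds a valid OAuth 2.0 token for the Tasks scope.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://www.googleapis.com/tasks/v1/users/@me/lists');
xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
xhr.onload = function() {
  var taskLists = JSON.parse(xhr.responseText).items;
  // Each entry has an id and a title, e.g. taskLists[0].title
};
xhr.send();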


If you want to see the API in action check out the Google Tasks Chrome Extension. If you are at Google I/O we invite you to come along and hear the Google Tasks team talk about the new API today.

We thank the early adopters that have worked with us and built their own Google Tasks integrations over the last weeks. We’d like to highlight a few of them:
  • Producteev is a task management platform that lets teams and individuals access their to-dos from many different locations (web, mobile, email, calendars...). You will now have all your Producteev tasks available in Google Tasks and vice versa!
  • Mavenlink's project collaboration suite allows you to communicate, share files, track time, invoice, and make or receive payments in one place. With its Google Tasks integration, your Mavenlink project tasks & Google Tasks always stay in sync.
  • Manymoon is the top installed social task and project management app in the Google Apps Marketplace and makes it simple to get work done online with co-workers, partners, and customers. Manymoon's users can now create and view tasks with Gmail and Google Calendar through Google Tasks.
  • Zoho offers a suite of online business, collaboration and productivity applications for small businesses. So far they have integrated Zoho CRM & Zoho Projects with the Tasks API.

Get Started with the Google Tasks API today!



Posted by Fabian Schlup & Nicolas Garnier
Google Tasks API Team

A new kind of computer: Chromebook

By Linus Upson, Vice President of Engineering, and Sundar Pichai, Senior Vice President, Chrome

Cross-posted from the Official Google Blog

Update 5/11 3:00 PM: Added video of keynote

A little less than two years ago we set out to make computers much better. Today, we’re announcing the first Chromebooks from our partners, Samsung and Acer. These are not typical notebooks. With a Chromebook you won’t wait minutes for your computer to boot and browser to start. You’ll be reading your email in seconds. Thanks to automatic updates the software on your Chromebook will get faster over time. Your apps, games, photos, music, movies and documents will be accessible wherever you are and you won't need to worry about losing your computer or forgetting to back up files. Chromebooks will last a day of use on a single charge, so you don’t need to carry a power cord everywhere. And with optional 3G, just like your phone, you’ll have the web when you need it. Chromebooks have many layers of security built in so there is no anti-virus software to buy and maintain. Even more importantly, you won't spend hours fighting your computer to set it up and keep it up to date.

At the core of each Chromebook is the Chrome web browser. The web has millions of applications and billions of users. Trying a new application or sharing it with friends is as easy as clicking a link. A world of information can be searched instantly and developers can embed and mash-up applications to create new products and services. The web is on just about every computing device made, from phones to TVs, and has the broadest reach of any platform. With HTML5 and other open standards, web applications will soon be able to do anything traditional applications can do, and more.

Chromebooks will be available online June 15 in the U.S., U.K., France, Germany, Netherlands, Italy and Spain. More countries will follow in the coming months. In the U.S., Chromebooks will be available from Amazon and Best Buy and internationally from leading retailers.

Even with dedicated IT departments, businesses and schools struggle with the same complex, costly and insecure computers as the rest of us. To address this, we’re also announcing Chromebooks for Business and Education. This service from Google includes Chromebooks and a cloud management console to remotely administer and manage users, devices, applications and policies. Also included is enterprise-level support, device warranties and replacements as well as regular hardware refreshes. Monthly subscriptions will start at $28/user for businesses and $20/user for schools.

There are over 160 million active users of Chrome today. Chromebooks bring you all of Chrome's speed, simplicity and security without the headaches of operating systems designed 20 to 30 years ago. We're very proud of what the Chrome team, along with our partners, has built, and with seamless updates, it will just keep getting better.

For more details please visit www.google.com/chromebook.



Watch the keynote here:




Linus Upson is Vice President of Engineering and Sundar Pichai is Senior Vice President, Chrome

Posted by Scott Knaster, Editor

Tuesday 10 May 2011

Android Meet App Engine, App Engine Meet Android

By Chris Ramsdale, Product Manager, GWT and GPE

Imagine this: you've spent the past few months hammering away at the latest mobile game sensation, Mystified Birds, and you are one level away from complete mastery. And then it happens. In a fit of excitement you throw your hands up, and along with them your Nexus S, which settles nicely at the bottom of the pool you happen to be relaxing next to. The phone is rendered useless. Luckily, your insurance policy covers replacing the device, and the Android Market handles replacing your apps. Unluckily, though, all of your Mystified Birds data went the way of your device, leaving you to start from scratch.

Wouldn't it be great if your new device not only contained all of your apps, but all of your valuable data as well? We think so. With Google Plugin for Eclipse (GPE) v2.4 it's much easier to build native Android apps that can take data with them wherever they go. And there's no better place to host your backend service and store your data than Google's cloud service, App Engine.

With the latest release of GPE, we're bringing together these two great Google platforms, Android and App Engine, with a set of easy-to-use developer tools. Diving a bit deeper, here are some of the features offered in GPE 2.4:

Project Creation

With GPE 2.4, you now have the ability to create App Engine-connected Android projects. This new Eclipse project wizard generates fully functioning Android and GWT clients that are capable of talking to the same App Engine backend using the same RPC code and business logic.

Cloud to Device Messaging Support

Polling for backend changes on a mobile device is inefficient and will result in poor app performance and battery drain. As a solution for Android developers, the Android team built Cloud to Device Messaging (C2DM), a service for sending lightweight pings to notify apps when they have pending data. We heard back from developers that integrating with C2DM results in a lot of boilerplate (and sometimes fragile) code that they would rather not maintain. With the 2.4 release of GPE, when you create a new App Engine connected Android project, you'll get this code for free. All you have to do is hook up the app-specific code to customize the handling of the C2DM notification.

RPC Generation and Tooling

Writing and maintaining RPC code (code that allows your app to communicate with backend servers) is monotonous and error prone. Let's face it, you're a mobile developer and the last thing you want to be spending time on is writing (or debugging) this type of code. In GPE 2.4 we're introducing tooling that removes this task for you, and will generate all of the underlying RPC boilerplate code within a few clicks. You specify the model objects that will be used between client and server, and GPE generates the RPC service, DTOs, and client-side calling code. To make this even better, the generated code works across Android and GWT apps, so any future changes that you make will only need to be made once.

Want to get started? Download GPE 2.4 Beta here. Note that you'll need to install the Android Developer Tools (ADT) plugin as a prerequisite, which can be found here.

If you have any feedback, we'd love to hear it and the GPE Group is the right place to submit it. The App Engine and Android Developer Groups are also great sources of information.


Chris Ramsdale is Product Manager for GWT and GPE: cramsdale@google.com

Posted by Scott Knaster, Editor

Streamline your web font requests: introducing “text=”


By Raph Levien, Engineer, Google Web Fonts

Last week, the Google Web Fonts team announced a new feature on the Google Web Fonts Blog. Since we’re discussing this feature today at Google I/O, we’d like to share this news with Google Code Blog readers as well.

Oftentimes, when you want to use a web font on your website or application, you know in advance which letters you’ll need. This often occurs when you’re using a web font in a logo or heading.

That’s why we’re introducing a new beta feature to the Google Web Fonts API. The feature is called “text=”, and allows you to specify which characters you’ll need. To use it, simply add “text=” to your Google Web Fonts API requests. Here’s an example:

<link href='http://fonts.googleapis.com/css?family=Special+Elite&text=MyText' rel='stylesheet' type='text/css'>

Google will optimize the web font served based on the contents of this parameter. For example, if you only require a few letters for a logo, such as “MyText”, Google will return a font file that is optimized to those letters. Typically, that means Google will return a font file that contains only the letters you requested. Other times, Google might return a more complete font file, especially when that will lead to better caching performance.

The “text=” parameter has the potential to dramatically cut down web font file size. In some preliminary studies, web fonts can be cut from 35k down to just 5k (or even smaller) if only short strings of text are required. If you have a longer string, you can shorten the request by removing duplicate characters, as the order of characters in the string doesn’t matter. Of course, the font you get back is optimized even if there are duplicate characters in the request.
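
Because duplicates and ordering don’t matter, a small helper can build the parameter value for you. Here’s a sketch (the helper name is ours):

// Sketch: build a deduplicated, URL-encoded value for the text= parameter.
function fontTextParam(text) {
  var seen = {};
  var unique = '';
  for (var i = 0; i < text.length; i++) {
    var ch = text.charAt(i);
    if (!seen[ch]) {
      seen[ch] = true;
      unique += ch;
    }
  }
  // encodeURIComponent URL-encodes the UTF-8 representation,
  // so international characters work too.
  return encodeURIComponent(unique);
}
// Example: fontTextParam('Need letters') returns 'Ned%20ltrs'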

The effect of this feature is even more pronounced on mobile devices, where connection speeds are limited. Using the text= parameter, you can ensure your users will have a great, quick loading experience.

We’re happy to say that the feature also works for international fonts. There’s no need to also specify the subset= parameter, as text= has access to all the characters in the original font. To access Unicode characters, use the standard technique of URL-encoding the UTF-8 representation of the string. Therefore, ¡Hola! is represented as text=%C2%A1Hola!.

We hope you enjoy this new feature.


Raph Levien is an expert on fonts and graphics technologies. Raph designed Inconsolata, one of the fonts available on the Web Font API. Raph enjoys photography and spending time with his family.

Posted by Scott Knaster, Editor

Google Prediction API helps all apps to adapt and learn


By Travis Green, Product Manager

Now your apps can get smarter with as little as a single line of code. They can learn to continually adapt to changing conditions and to integrate new information. This week at Google I/O, we’re making the Google Prediction API generally available, meaning you can create apps with these capabilities for yourself. Additionally, we’re introducing several significant new features, including:
  • The ability to stream data and tune your predictive models
  • A forthcoming gallery of user-developed, pre-built models to add smarts even faster.
The Google Prediction API can be used by almost any app to recommend the useful, extract the essential, and automate the repetitive. For example:
  • Recommend a new movie to a customer.
  • Identify your most important customers.
  • Automatically tag posts with relevant flags.
Ford Motor Co. Research, for instance, is working to use the Prediction API to optimize plug-in hybrid vehicle fuel efficiency by optionally providing users with likely destinations to choose from, and soon, optimizing driving controls to conserve fuel. Because the API is a cloud-hosted RESTful service, Ford has been able to access its computationally intensive machine learning algorithms to find patterns that rank potential destinations based on previous driving paths. Ford will be demonstrating their work at the API’s I/O session.

Here’s a summary of the features we added to the API today:
  • Streaming training data: Continually incorporate feedback for fast-adapting systems (e.g. user-chosen tags vs predicted ones, final purchases vs expected).
  • General availability: Anyone can now sign up to use the API. Paid users also receive a 99.9% SLA with increased quota.
  • New JavaScript library: Now deploy the Prediction API in your JavaScript – in addition to our updated Python and Java libraries.
Today, we are also announcing the Prediction API’s forthcoming gallery of pre-trained third party predictive models (try these demo models right now), and we will be adding more constantly (maybe yours – waitlist). Once complete, all Prediction API users will be able to:
  • Subscribe to others’ models: improve your apps with others’ predictive data tools.
  • Sell access to your models (e.g. sentiment analysis on social media).
  • Import customized models through the open-standard PMML encoding.
See our recent blog post for even more ideas, and get started at the Google APIs Console.

Thanks to our community of preview developers, who have played a crucial role in helping us make the Google Prediction API simpler and more powerful since its announcement last year at I/O 2010. We are thrilled to invite all developers to join them.


Travis Green's favorite part about his job is designing smart applications. In his spare time, he is in the great outdoors (looking for trouble).

Posted by Scott Knaster, Editor

Benetech: enabling reading for all


By Gerardo Capiel, VP of Engineering, Benetech

This post is part of Who's at Google I/O, a series of guest blog posts written by developers who are appearing in the Developer Sandbox at Google I/O.


Benetech is a different kind of technology company, where we measure our success not on ROI, but Return to Humanity. Benetech is a non-profit organization that builds software solutions to address large scale and global social needs in literacy, human rights and the environment. Many of our software solutions are delivered via an open source model.

One of our latest literacy projects has been to develop an Android-based book e-reader for people with print disabilities. Print disabilities, such as blindness, paralysis, or dyslexia, effectively prevent a person from reading traditional print books. Many of those people qualify to have free or inexpensive access to books thanks to an exemption in U.S. copyright law called the Chafee Exemption. Bookshare, one of Benetech’s largest solutions, provides over 100,000 e-books in the accessible DAISY format (similar to ePub) to over 130,000 Chafee-qualified people in the U.S. Once downloaded from Bookshare, DAISY books can be consumed using Assistive Technology (AT), which employs Text to Speech technology (TTS), electronic refreshable braille, or large fonts for low vision users. Bookshare was originally built 10 years ago on a PHP architecture, was migrated to a Java/Hibernate/MyBatis framework and we recently migrated the content repository to S3.

Bookshare has a public REST-based API, which enables AT developers to integrate their applications directly with Bookshare. Through the API, an AT application can let a user search for books, browse books by category or see recently added titles, and download a book packaged as either a DAISY file or a BRF file commonly used by an electronic refreshable Braille display, such as HumanWare’s BrailleNote. Being able to download directly from the AT application simplifies the potentially frustrating experience of having to manually transfer books from a PC to the AT software or device. The API supports anonymous use, which provides access to freely available books and open educational resources that have no copyright restrictions. Only qualified Bookshare members can access the copyrighted books and periodicals. To learn more about the Bookshare API and obtain a developer key, visit http://developer.bookshare.org/.
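
For a flavor of what an integration looks like, here is a hypothetical sketch of a search request. The path and parameter names below are illustrative only, not the documented endpoints; consult developer.bookshare.org for the real definitions.

// Hypothetical sketch of a Bookshare API title search. The URL path and
// parameters are illustrative, not the documented endpoints.
// Assumes apiKey holds your developer key.
var url = 'https://api.bookshare.org/book/search/' +
    encodeURIComponent('pride and prejudice') +
    '/format/json?api_key=' + apiKey;
var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.onload = function() {
  var results = JSON.parse(xhr.responseText);
  // Anonymous requests see only freely available books; qualified
  // Bookshare members can also download copyrighted titles.
};
xhr.send();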

Recently Benetech challenged a group of volunteers to build a free, open source, mobile and accessible e-reader that leverages the Bookshare API. The volunteers chose to extend FBReaderJ, a popular open source e-reader for Android that leverages Android’s TTS API (android.speech.tts.TextToSpeech). The project is a work in progress, but so far the volunteers have added DAISY format support and Bookshare API integration. They are now working on improving the accessibility of the application and are evaluating different user interaction experiences to make it easy for print disabled users to access books. To learn more about Android accessibility, check out http://eyes-free.googlecode.com/, and to see or contribute to the project, visit http://github.com/amahule/fbreaderj.

Ultimately, we believe this Android e-reader could also benefit people who don’t qualify under Chafee, but who have other disabilities, such as Attention Deficit and Hyperactivity Disorder (ADHD), which makes it hard for them to enjoy printed or even traditional e-books. Furthermore, TTS technology and the Google Translate API may help us use inexpensive Android devices to distribute valuable knowledge locked up in print to illiterate populations in developing countries. Accessing knowledge for illiterate populations will be critical to the success of emerging democracies.

We welcome ideas you may have about our app or Benetech in general. We particularly welcome anyone interested in contributing product development skills to our Android e-reader project or any other open source projects Benetech is working on regarding literacy, human rights or the environment. To learn more about how you can volunteer your skills and time, please go to http://benetech.org/join_us/volunteer_opportunities.shtml.


Come see Benetech in the Developer Sandbox at Google I/O on May 10-11.

Gerardo Capiel is a two-time Internet entrepreneur turned social entrepreneur. When he's not geeking out for humanity, he's looking for inside tips on the best food in San Francisco.

Posted by Scott Knaster, Editor

doubleTwist’s Adventures in NFC Land


By Jason LeBrun, Senior Android Engineer at doubleTwist

This post is part of Who's at Google I/O, a series of guest blog posts written by developers who are appearing in the Developer Sandbox at Google I/O.

One of the most exciting aspects of Google's Android platform is the rapid release pace, so support for cutting-edge technologies can be included in the platform early on. As engineers, we're excited about such features, because we like to tinker and test the limits of new technologies. But the real challenge is using these new capabilities to integrate easy-to-use features for end-users, so that everyone can be excited. The introduction of the Near-Field Communication (NFC) API in Android 2.3, accompanying the release of an NFC hardware feature in Google's Nexus S, gave us the opportunity to do just that.

NFC is meant to send and receive small amounts of data. This data can be read from passive (non-powered) devices (e.g. credit cards or interactive posters), or active devices (e.g. payment kiosks). NFC can also be used to communicate between two NFC-equipped devices thanks to a protocol introduced by Google which defines a way for two active devices to exchange NFC messages that follow the NFC Data Exchange Format (NDEF) specification. This protocol (NDEF Push Protocol (NPP)) is implemented in Android 2.3.3 and beyond.

The NPP is very simple: it allows us to send an NDEF message to another device, which will then process the message as if it had been read from a passive tag. Thus, to understand how to use NPP, we need only to understand the NDEF message. An NDEF message is a collection of NDEF records. An NDEF record is a short header describing the contents of the record’s data payload, and then the data payload.

This new NFC/NPP capability enables the exchange of a few hundred bytes of information in a well-defined format between devices, using proximity as a method for instigation and authentication. In other words, we can now share a small amount of data between two devices without going through the normal steps of pairing or association that are required by more conventional (and higher-bandwidth) avenues like Bluetooth or WiFi—the fact that the two phones are in close proximity is enough evidence to convince the NFC software that the devices are eligible to receive information from each other.

The NFC API for Android handles all of the details of receiving and parsing NFC messages. It then decides what to do with the message by investigating the intent filters registered for applications on the device. In some cases the messages may have characteristics that can lead to finer-grained dispatching. In the case of an NDEF message, you can register for NFC messages at varying levels of detail: from as vague as any NDEF message to as detailed as an NDEF message containing a URI matching a given pattern. A pleasant result of this approach is that the operating system can have "catch-all" applications for messages that have unknown details.

Let’s consider a possible application of the NFC technology in the context of the doubleTwist app. If you’re listening to a song and you want to share the details of the song with your friend, how can NFC help with this? Well, we can use NFC to bundle up a small bit of information about the song in a well-defined format, which the other phone can then parse and use to display information about the song. We will pass the metadata in a single NDEF Record containing a JSON object.

To ensure that our message is routed to doubleTwist on the receiving phone, we rely on Android's ability to dispatch messages based on the data type of the payload, signaled by setting the type name format field to TNF_MIME_MEDIA. This indicates that the type field of the NDEF record should be a MIME type as specified by RFC 2046. To take advantage of this capability, we create our own custom MIME type to use in the type field. Since the NDEF message must be composed of bytes, we’ll also need to convert the strings into byte arrays. The code for all of this looks like this:

//Assume that musicService is an interface to our music
//playback service
JSONObject songMetadata = new JSONObject();
String title = musicService.getCurrentTrackTitle();
String artist = musicService.getCurrentTrackArtist();
String album = musicService.getCurrentTrackAlbum();
songMetadata.put("title", title);
songMetadata.put("artist", artist);
songMetadata.put("album", album);

String mimeType = "application/x-doubletwist-taptoshare";
byte[] mimeBytes = mimeType.getBytes(Charset.forName("UTF-8"));
String data = songMetadata.toString();
byte[] dataBytes = data.getBytes(Charset.forName("UTF-8"));
byte[] id = new byte[0]; // We don't use the id field
NdefRecord r = new NdefRecord(NdefRecord.TNF_MIME_MEDIA, mimeBytes, id, dataBytes);
NdefMessage m = new NdefMessage(new NdefRecord[]{r});

Similarly, our application registers an Intent filter to define the Activity that will handle NFC messages that match this mime type. So, when another phone running doubleTwist receives our message, doubleTwist will have priority in processing the message. To do this, we add an <intent-filter> element to AndroidManifest.xml's entry for the activity that should launch to handle this NDEF message:

<activity android:name="SongInfoShare">
  <intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <data android:mimeType="application/x-doubletwist-taptoshare" />
    <category android:name="android.intent.category.DEFAULT"/>
  </intent-filter>
</activity>

The SongInfoShare activity will be started whenever the NFC radio receives our special message from another NFC-capable phone. In the onCreate method for this activity, we can check the action of the intent that started the activity to verify that it’s the NDEF_DISCOVERED action, and then use the normal intent helper methods to get the NFC message data from the Intent. Here’s what the code will look like for our info-sharing example:

public class SongInfoShare extends Activity {

  @Override
  public void onCreate(Bundle b) {
    super.onCreate(b);
    if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(getIntent().getAction())) {
      // The NDEF messages arrive as an extra on the launching intent
      Parcelable[] msgs =
          getIntent().getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
      if (msgs != null && msgs.length > 0) {
        NdefMessage m = (NdefMessage) msgs[0];
        NdefRecord[] r = m.getRecords();
        JSONObject sharedInfo = null;
        if (r != null && r.length > 0) {
          try {
            String payload = new String(r[0].getPayload());
            sharedInfo = new JSONObject(payload);
          } catch (JSONException e) {
            Log.d(TAG, "Couldn't get JSON: ", e);
          }
        }

        // At this point, we can use the contents of
        // sharedInfo to set up the contents of the activity.
      }
    }
  }
}
So, now we’re able to easily pass along song information to a friend by simply bringing the NFC radios close together. No cutting, pasting, tapping, or searching for options in menus!

So, what happens if our friend isn’t running doubleTwist? Well, the stock Android NFC handler app can help us. It will display any text or link records contained in the NDEF message it receives. So, for example, we can also share a pre-formatted link that will do a Google search for the song information. To achieve this, we can modify the end of our first code example as follows:

NdefRecord r1 = new NdefRecord(NdefRecord.TNF_MIME_MEDIA, mimeBytes, id, dataBytes);
String query = URLEncoder.encode("\"" + artist + "\" \"" + title + "\"", "UTF-8");
String searchLink = "http://www.google.com/search?q=" + query;
byte[] searchBytes = searchLink.getBytes(Charset.forName("UTF-8"));
// For TNF_ABSOLUTE_URI records, the URI itself goes in the type field
NdefRecord r2 = new NdefRecord(
    NdefRecord.TNF_ABSOLUTE_URI, searchBytes, id, new byte[0]);
NdefMessage m = new NdefMessage(new NdefRecord[]{r1, r2});

Now, in the case where the receiving user doesn’t have doubleTwist installed, they will at least be presented with a link that they can click on to begin a Google search for more information about the song.

The first version of Google’s NDEF Push Protocol API has allowed us to bring some novel enhancements to our app using one of the newest available handset technologies. We are looking forward to further development of device-to-device NFC APIs that will allow even richer inter-device communication using just a tap!



Come see doubleTwist in the Developer Sandbox at Google I/O on May 10-11.

Jason LeBrun is Senior Android Engineer at doubleTwist, a digital media startup based in San Francisco. doubleTwist was co-founded by Jon Lech Johansen and Monique Farantzos in 2007 with the mission to create a unifying media platform that enables users to enjoy and organize their media across all their home devices. Android, which launched later that year, has enabled doubleTwist to deliver on this promise.

Google Storage for Developers open to all, with new features


By Navneet Joneja, Product Manager

For those of you who have been waiting to use Google Storage, we’re happy to announce that effective immediately, you can get a Google Storage for Developers account without needing to request an invitation.

We’ve also launched several significant enhancements to the service, including more flexible and powerful security features, simplified sharing, the ability to store data in Europe, support for larger objects (up to 5 TB), team-oriented accounts and a completely free promotional tier.

OAuth 2.0 Support
OAuth 2.0 is the new recommended Google Storage authentication and authorization scheme. OAuth 2.0 is an industry standard that we’ve adopted across Google, offering many benefits:
  • Simpler - never sign a request again! OAuth 2.0 uses access tokens for authentication, which obviate the need for complicated signature schemes. We recommend that all OAuth 2.0-authenticated requests be made over SSL.
  • More flexible and powerful - OAuth 2.0 allows for three-legged authentication, where a user can grant an application permission to access Google Storage on their behalf (and revoke the grant at any time if necessary).
  • Secure
    • Your user name and password are used once to create a long-lived (refresh) token which is scoped to allow access to Google Storage on behalf of that account. Your user name and password never need to live on disk.
    • You don’t need to distribute your long-lived credentials to all the servers that need to access Google Storage. Instead, you can manage your long-lived credential centrally, and use it to create short-lived access tokens that you pass to your host cluster. This means that your long-lived tokens are safer.
    • You can revoke credentials at any time.
Learn more about using OAuth 2.0 with Google Storage for Developers here.

Simplified Sharing
We’re happy to announce that your customers and partners no longer need to sign up for Google Storage in order to be able to access data that you’ve shared with them; you can now share data with anyone who has a Google account.

New Storage Region: Europe
In response to popular demand, effective immediately, we’re adding a new option for data location. You can now choose to store your data in Europe, and get the same high-performance, massively scalable, reliable service as you do when you store your data in the United States.

Regardless of which region you choose, your data will be replicated to multiple geographically diverse Google data centers within the region in order to provide high levels of availability and reliability.

Learn more here.

Team-oriented accounts
Google Storage now uses a team-oriented account model, which allows you to create and manage your account in a more natural way.

We’ve moved to the Google API console for signup and administration, and adopted the same project-based account model that is used by other APIs. After creating a project, you configure billing for it once, and can then add any number of developers to it without requiring them to sign up for separate Google Storage accounts. This means individual developers on a team don’t need to configure billing in order to use Google Storage. This change also makes Google Storage account management consistent with other Google APIs.

Since buckets are now associated with projects rather than individuals, you will need to specify the new x-goog-project-id header with your list-buckets and create-bucket API calls. However, your existing code will continue to work using your new “default” project that already owns all the buckets you created before we implemented project-based accounts. Learn more about projects and the backwards-compatible default project mechanism here.
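
For example, a version 2 list-buckets request is an ordinary authorized GET carrying the new headers. Here’s a sketch, assuming an OAuth 2.0 access token and your numeric project ID:

// Sketch: list buckets under API version 2 with an explicit project ID.
// Assumes accessToken (OAuth 2.0) and projectId are already available.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://commondatastorage.googleapis.com/');
xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
xhr.setRequestHeader('x-goog-api-version', '2');
xhr.setRequestHeader('x-goog-project-id', projectId);
xhr.onload = function() {
  // The response is an XML listing of the buckets owned by the project.
};
xhr.send();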

Support for chunked transfer encoding
Google Storage now allows you to upload your data to Google Storage without knowing the object size in advance, using the standard HTTP chunked transfer encoding mechanism. Using this feature, you can stream data into Google Storage instead of buffering it on your own server before sending it to Google.

New API Version
In order to continue to innovate and deliver high-impact features, we’ve found the need to make some backwards-incompatible changes. In order to deliver these features while retaining backwards compatibility for existing code that doesn’t need the new features, we’re introducing a new API version header. The new API version that implements this change is version 2. From version 2 onwards, all API calls will be versioned using the new x-goog-api-version header.

Version 2 of the Google Storage API does not support HMAC signature-based authentication. Although we strongly recommend that you move your code to use OAuth 2.0, explicitly specify the project ID when creating and listing buckets, and use the new API version header, your existing code will continue to work; if you don’t specify the API version in your request, your requests will still be handled by our old API.

Free Trial Use
Starting right now, all new accounts will receive a monthly free quota of:
  • 5 GB of free storage,
  • 25 GB of free data transfer into Google Storage,
  • 25 GB of free data transfer out of Google Storage,
  • 2,500 free PUT, POST and LIST requests, and
  • 25,000 free GET, HEAD and other requests.
Existing accounts will continue to receive 100 GB of free storage until July 1, 2011, and, in addition, will also receive the same free bandwidth and requests as new accounts. On July 1, 2011, they will automatically move to the new plan.

The new promotional plan will be effective until Dec 31, 2011. Please note that promotional usage will only apply to a user’s first project that uses Google Storage (for existing users, this is the project that was automatically created for them during the account migration).

If you’re an existing Google Storage for Developers user, thank you for using our product and for your valuable feedback that continues to help us evolve the service to meet your needs. As always, we continue to welcome your feedback in our discussion group. If you haven’t yet tried Google Storage, get your Google Storage account and get started for free today.


Navneet Joneja loves being at the forefront of the next generation of simple and reliable software infrastructure, the foundation on which next-generation technology is being built. When not working, he can usually be found dreaming up new ways to entertain his intensely curious one-year-old.

Posted by Scott Knaster, Editor

The Go programming language is coming to Google App Engine

By David Symonds, Nigel Tao, and Andrew Gerrand of the Go Team

The growing ranks of Go programmers will soon have another platform: Go will be the third language, after Python and Java, in which to write Google App Engine apps. This is an exciting new experimental feature of App Engine and a major milestone for Go.

Go is an open source language, initially designed at Google, that was released in November 2009 and has seen significant development since launch. It is a statically typed, compiled language with a dynamic and lightweight feel. It’s also an interesting new option for App Engine because Go apps will be compiled to native code, making Go a good choice for more CPU-intensive tasks. Plus the garbage collection and concurrency features of the language, combined with excellent libraries, make it a great fit for web apps.

As of today, the App Engine SDK for Go is available for download, and we will soon enable deployment of Go apps into the App Engine infrastructure. If you’re interested in starting early, sign up to be first through the door when we open it up to early testers. Once it proves solid, we’ll open it up to everyone, although it will remain an experimental App Engine feature for a while.

You don’t need an existing Go installation; the SDK is fully self-contained, so it’s very simple to get a local web app up and running. The SDK is a really easy way to start playing with Go.

More details can be found at the Go Programming Language blog.

Posted by Scott Knaster, Editor

Making stories with social media and YouTube videos


By Xavier Damman, co-founder of Storify

This post is part of Who's at Google I/O, a series of guest blog posts written by developers who are appearing in the Developer Sandbox at Google I/O.


Storify is part of the Google I/O Sandbox. Please come say hi to find out more about how you can leverage our APIs so your users can remix your content to create stories to share on social networks.

Storify provides a super simple drag-and-drop user experience for creating stories from elements around the web: tweets, YouTube videos, Facebook updates, SlideShare presentations, Audioboo files, and so on (see Storify in action here). This post explains how we incorporate videos into Storify using the YouTube Data API and Player API. All the code snippets are in JavaScript. In fact, our full stack is JavaScript: we use NodeJS and MongoDB, which we think is an über cool mix.


The source of the source

To create a Storify source, we need to be able to get a feed of results using JSONP (essentially JSON wrapped in a callback function, so you can make cross-domain requests; from the YouTube API’s perspective, this is the JSON-C format). For YouTube, the main search API endpoint looks like this:
request: function(formdata) {
  return {
    'url' : 'http://gdata.youtube.com/feeds/api/videos',
    'params': {
      'v' : 2,                  // version 2 of the Data API
      'max-results' : 20,
      'alt' : 'jsonc',          // ask for the JSON-C format
      'q' : formdata.keywords   // the user's search terms
    }
  };
}
This function is called when the user clicks Submit in the search tab of the YouTube source in the Storify Editor. The main controller executes the request and sends the JSON result to the results method, which returns an array of normalized results:
results: function(json) {

  // totalItems is 0 when the search matched no videos.
  if (json.data && json.data.totalItems === 0) {
    throw "No results found";
  }

  var videos = json.data.items;
  var results_array = [];

  for (var i = 0; i < videos.length; i++) {
    var normalizedResult = {
      permalink : 'http://www.youtube.com/watch?v=' + videos[i].id,
      source : 'youtube',
      elementClass : 'video',
      metadata : videos[i],
      thumbnail : videos[i].thumbnail.sqDefault,
      title : videos[i].title,
      description : videos[i].description.substr(0, 140),
      author : { username: videos[i].uploader },
      created_at : videos[i].uploaded,
      // oEmbed HTML used to render the video when it is added to a story.
      oembed : {
        html : '<iframe id="youtube-' + videos[i].id + '" type="text/html"' +
               ' width="360" height="294"' +
               ' src="http://www.youtube.com/embed/' + videos[i].id +
               '?enablejsapi=1&origin=storify.com" frameborder="0"></iframe>'
      }
    };

    results_array.push(normalizedResult);
  }
  return results_array;
}
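As an aside, “executing the request” is just JSONP at work: the controller builds a query string from the request spec and injects a script tag. Here is a minimal sketch (this is not our actual controller code; the callback name is hypothetical, and depending on the endpoint you may need a script-friendly variant of the alt parameter):

// Build the query string from a request spec like the one above and
// inject a <script> tag; the API responds with callbackName({...}).
function executeJsonp(spec, callbackName) {
  var pairs = [];
  for (var key in spec.params) {
    pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(spec.params[key]));
  }
  var script = document.createElement('script');
  script.src = spec.url + '?' + pairs.join('&') + '&callback=' + callbackName;
  document.getElementsByTagName('head')[0].appendChild(script);
}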

Thanks to this normalized representation of a story element – in this case, a video object – we can build an object-oriented story as the user drags and drops any of these elements. This technique has multiple benefits: we maintain attribution to the original content creator, and we can track the content as it spreads across the web (how many times it has been seen, and from where).

The story element also provides the oEmbed HTML code, which is used to render the video embed when the video is added to the story. For that, we use the YouTube Player API with its new iframe embed.
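Because the embed above sets enablejsapi=1, the Player API can attach to the iframe and listen for playback events. A minimal sketch (the video id is hypothetical; we load the iframe API script that YouTube documents):

// Load the iframe Player API asynchronously.
var tag = document.createElement('script');
tag.src = 'https://www.youtube.com/iframe_api';
document.getElementsByTagName('head')[0].appendChild(tag);

// Called automatically by the API script once it has loaded.
function onYouTubeIframeAPIReady() {
  // 'youtube-VIDEO_ID' matches the id given to the iframe in the oEmbed HTML.
  new YT.Player('youtube-VIDEO_ID', {
    events: {
      onStateChange: function(event) {
        if (event.data === YT.PlayerState.PLAYING) {
          console.log('video started playing');
        }
      }
    }
  });
}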

Story.json

We have a very simple way to get any data out of our platform: just append .json to any storify.com URL and you get the JSON representation of the content of that page.

For example, appending .json to a story’s permalink returns the complete story as JSON.
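Since our stack is NodeJS, here is a minimal Node sketch of fetching a story server-side (the story permalink and the fields read from it are hypothetical):

var http = require('http');

// Fetch the JSON representation of a story by appending .json to its URL.
http.get({ host: 'storify.com', path: '/xdamman/my-story.json' }, function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    var story = JSON.parse(body);
    console.log(story.title); // field name shown for illustration only
  });
});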

Add the Storify Editor to your site

The Storify Editor can be embedded in an iframe. You just need to provide a callback parameter, like this: http://storify.com/story/new?callback=yoursiteurlcallback. The user will be asked to authenticate with Twitter and will then be able to create a new story. Once the user is done and hits “Publish”, we call you back, passing the permalink of the newly created story:
yoursiteurlcallback?permalink=storyPermalink.

You can then either fetch the JSON of the story by appending ".json" to the storyPermalink, or embed the story by loading <script src="storyPermalink.js"></script>. This is a great way to let your community create stories right from your site.
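Putting the pieces together, a host page might look like this sketch (all URLs and element ids here are hypothetical):

// On your site: open the Storify Editor in an iframe, asking to be
// called back at your own URL when the story is published.
var editor = document.createElement('iframe');
editor.src = 'http://storify.com/story/new?callback=' +
    encodeURIComponent('http://example.com/story-created');
document.getElementById('editor-container').appendChild(editor);

// On http://example.com/story-created: read the permalink parameter
// and embed the published story by loading its .js version.
var match = /[?&]permalink=([^&]+)/.exec(window.location.search);
if (match) {
  var embed = document.createElement('script');
  embed.src = decodeURIComponent(match[1]) + '.js';
  document.body.appendChild(embed);
}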



This is only the start. We plan to open a Sources API so that any developer can build a source for any service. Please come see us at our booth at the Google I/O Sandbox if you’re interested in joining our developer community. And check out this article in the New York Times to learn more.


Come see Storify in the Developer Sandbox at Google I/O on May 10-11.

Xavier Damman is the co-founder of Storify. He is also the founder of HackDemocracy, a community of hackers who want to improve our democracies using technology.

Posted by Scott Knaster, Editor

New Books API for developers


By Kevin Landry, Google Books API Team

Cross-posted from Inside Google Books

Today, we are releasing a new Google Books API in Code Labs. The new Books API will help you write applications that query the more than 15 million books searchable on books.google.com, including book metadata, pricing, and more. This API replaces the Google Book Search Data and JavaScript APIs.

The previous GData API and JavaScript API will still be available, but ongoing development work will be focused on the new Google Books API, so we encourage you to make the transition and take advantage of its new features.
We’re excited about some of the new capabilities this API gives our developer partners. Using OAuth 2.0, a developer can allow users to retrieve their stored information, including bookshelves in their library, ratings and reviews. With the new JSON structure, accessing this information can be done with ease. Whether you want to use the API on a web page, in a Chrome extension, or on a device, the new Books API gives you a way to access the world’s largest selection of ebooks.
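To give a flavor of the new JSON structure, here is a minimal, unauthenticated volume search against the API’s public endpoint (a sketch: the query, callback name, and response fields shown are illustrative):

// JSONP handler: log the title of each matching volume.
function handleBooks(response) {
  for (var i = 0; i < response.items.length; i++) {
    console.log(response.items[i].volumeInfo.title);
  }
}

// Search for volumes; no OAuth is needed for public data.
var script = document.createElement('script');
script.src = 'https://www.googleapis.com/books/v1/volumes' +
    '?q=' + encodeURIComponent('flowers') +
    '&callback=handleBooks';
document.getElementsByTagName('head')[0].appendChild(script);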

Kevin Landry is the technical lead for personalization and the API for the Google Books team. He has previously worked on the Google Audio and Google Analytics teams.

Posted by Scott Knaster, Editor

Cooliris builds visual search for Chrome with next-generation web APIs


By Austin Shoemaker, CTO, Cooliris

This post is part of Who's at Google I/O, a series of guest blog posts written by developers who are appearing in the Developer Sandbox at Google I/O.

At Cooliris, we develop media applications targeting two broad categories – web applications that run in any standards-compliant web browser, and native applications that target specific platforms with tailored user experiences (e.g. iOS, Android, Windows, Mac OS X).

When we developed the Cooliris Extension for Google Chrome, we set out to create an efficient and visually stunning media browsing experience for the web. To deliver on this promise, we developed a native runtime specialized for multiple operating systems and browsers. While we were delighted with the result, the cost of developing native code for multiple platforms was significant. Ideally we could target a single cross-platform API that provided the performance, flexibility, and native feel of platform-specific code.

The new generation of high-performance graphics APIs supported by Google Chrome enables developers to create cross-platform web applications with a presentation layer that looks and feels more like a native application. Our latest Chrome extension, Image Search by Cooliris, leverages these APIs to provide an immersive visual search experience. The following technologies were key to making this product possible:
  1. WebGL: The ability to render our user interface with OpenGL ES enabled us to recreate our original 3D Wall experience without the use of plug-ins. Because it is a web standard, our extension is already available on both Windows and Mac versions of Google Chrome with no additional effort on our part. The new requestAnimationFrame API enables the app to render frames only when the display refreshes, significantly improving performance and power usage (a minimal render loop is sketched just after this list).
  2. Google Image Search API: The AJAX search API brings a rich source of relevant content into the wall. Users begin typing in the search box and instantly see results appear on the wall, allowing for effortless discovery and navigation.
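For example, a requestAnimationFrame-driven loop draws a frame only when the browser is about to repaint, instead of on a fixed timer. A minimal sketch (drawWall is a hypothetical draw routine, and depending on the browser the API may still be vendor-prefixed, e.g. webkitRequestAnimationFrame):

// Issue WebGL draw calls for the current frame (stubbed out here).
function drawWall(timestamp) {
  // ... render the wall for this frame ...
}

// Render only when the display refreshes rather than polling with setTimeout.
function renderLoop(timestamp) {
  drawWall(timestamp);
  window.requestAnimationFrame(renderLoop); // schedule the next frame
}

window.requestAnimationFrame(renderLoop);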
Now that we have built the Image Search by Cooliris extension, we are using next-generation web APIs to push the envelope even further. LiveShare, our next-generation communication platform, is built to take advantage of these capabilities from the beginning – for example, the demo of our upcoming product built on WebGL already uses Chrome's file drag-and-drop API to optimize upload performance.
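As an illustration of the drag-and-drop piece, the HTML5 drag-and-drop and File APIs let a page inspect dropped files before uploading them. A sketch (the element id is hypothetical; this is not LiveShare code):

// Accept files dropped onto a drop zone element.
var dropZone = document.getElementById('drop-zone');

dropZone.addEventListener('dragover', function(e) {
  e.preventDefault(); // required to allow the drop
});

dropZone.addEventListener('drop', function(e) {
  e.preventDefault();
  var files = e.dataTransfer.files;
  for (var i = 0; i < files.length; i++) {
    console.log('dropped ' + files[i].name + ' (' + files[i].size + ' bytes)');
  }
});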

We have never been more excited about the potential of the web platform to deliver great applications – with HTML5, WebGL, and other emerging standards, it has become possible to create products that rival the performance and user experience of native software.


Come see Cooliris in the Developer Sandbox at Google I/O on May 10-11.

Austin Shoemaker is CTO and co-founder of Cooliris. Austin studied Computer Science at Stanford and leads product development for the Cooliris platform. Previously, Austin worked at Apple developing consumer applications.

Posted by Scott Knaster, Editor

Google App Engine: a new version and a preview of future plans


By Greg D’Alesandre, Senior Product Manager, Google App Engine

Google App Engine, which provides the ability to develop and host applications on Google’s infrastructure, has grown tremendously since it launched in preview status in 2008. More than 100,000 developers use App Engine every month to deliver apps that dynamically scale with usage without the need to manage hardware or software. App Engine now hosts more than 200,000 active apps that serve over 1.5 billion site views daily.

Over the last three years, we’ve collected great feedback from our customers and now believe that the biggest thing we can do to help our customers is to graduate App Engine from preview status. When App Engine graduates from preview status, which we expect to do in the second half of this year, we’ll add additional enterprise-grade features that allow us to support many more business application scenarios. Graduation from preview status also indicates a longer term commitment by Google to the product and provides a deprecation policy whereby we will support prior versions of product APIs for a guaranteed amount of time, allowing applications written to prior API specifications to continue to function.

Today, we’re moving forward with our business focus with the release of App Engine 1.5.0 which includes Backends, improved Task Queues, a new experimental Go runtime, and more:
  • With Backends (for both Python and Java), App Engine can now support applications that require long running and high memory processes. This feature allows for new classes of applications such as report generation apps and custom search engines to be hosted on the platform.
  • The improvements to Task Queues allow for applications to control how tasks are executed and easily share the work using the new REST-based APIs. This API access expands App Engine’s compatibility with other on-premise and cloud services, furthering our commitment to an open development platform.
  • With 1.5.0 we are launching an experimental runtime for the Go Programming Language. Go is an open source, statically typed, compiled language with a dynamic and lightweight feel. It’s also an interesting new option for App Engine because Go apps will be compiled to native code, making Go a good choice for more CPU-intensive tasks. As of today, the App Engine SDK for Go is available for download, and we will soon enable deployment of Go apps into the App Engine infrastructure. If you’re interested in starting early, sign up to be first through the door when we open it up to early testers. If you’d like to learn more, read about it on the Go Blog.
In addition, when we take App Engine out of preview in the second half of this year, we will provide a 99.95% uptime service level agreement, operational and developer support, offline billing, and a new Terms of Service agreement geared towards businesses. We will also introduce a new pricing structure for App Engine based on more transparent usage-based pricing.

We’re announcing these features and pricing changes in advance so that our customers have time to review them. Adding business features will help App Engine meet a broader set of needs and the new, more transparent pricing model will help customers better align their App Engine investment with their business goals. Learn more about these changes on the App Engine blog.

Finally, we are really excited to graduate from preview, and we want to let everyone know how much we appreciate the support, commitment, and feedback we have received. We look forward to working with you for years to come.


Greg D'Alesandre is now the Senior Product Manager for App Engine after coming back from riding the Google Wave in Sydney. And he's obsessed with chocolate, no, seriously, obsessed.

Posted by Scott Knaster, Editor