At the I/O conference, Google provided an update on its operating system, launched a tablet, demonstrated its Glass eyewear with a skydiving demo, and generally delighted a room full of developers. The company shared its latest stats. Over 400 million Android-based devices (phones and tablets) have been sold, compared to 100 million at last year's conference. Over 1 million Android devices are activated every day, up from 400,000 a year ago. This translates to roughly 11.5 Android device activations per second. Compare this to Nokia, which claimed at CES in 2010 and again on Twitter in 2011 that it sells 13 phones per second. Clearly, there has been a rapid ramp in Android that is hurting Nokia and RIM. But unlike Nokia, Android's volume is spread across a wide range of device manufacturers. This fragmented market will create challenges for many of those manufacturers. However, the operating system will expand what a majority of consumers will be able to do in the future. Some takeaways from the morning keynote:
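The activation-rate arithmetic is easy to verify with a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the activation rate cited above:
# 1 million activations per day, divided over the seconds in a day.
activations_per_day = 1_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

rate = activations_per_day / seconds_per_day
print(f"{rate:.2f} activations per second")  # → 11.57 activations per second
```

That works out to about 11.57 per second, or roughly 11.5 as cited.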
The operating system will anticipate your next action to improve your experience. For example, both Google and RIM have demonstrated an OS that predicts the words you will type next and offers them automatically. In the newest version of Android, the OS predicts where you will touch the screen next to make the touch interface more responsive. Android 4.1 Jelly Bean is incremental but necessary: it improves the performance of scrolling and swiping by anticipating where the user is most likely to touch, which speeds up response time. It might sound trivial, but touch response time is a critical part of the mobile experience.
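To make the idea concrete, here is a deliberately simplified sketch of touch prediction by linear extrapolation. This is not Android's actual implementation, just an illustration of the general technique: use the finger's recent motion to guess where it will be next, so the system can start rendering ahead of the event.

```python
# Simplified sketch of touch prediction via linear extrapolation.
# NOT Android's actual algorithm -- just the general idea: use recent
# motion to estimate the finger's position one sample interval ahead.

def predict_next_touch(samples):
    """Given two or more (x, y) touch samples taken at a fixed
    interval, extrapolate the position at the next interval."""
    (x1, y1), (x2, y2) = samples[-2], samples[-1]
    vx, vy = x2 - x1, y2 - y1      # velocity per sample interval
    return (x2 + vx, y2 + vy)      # one interval ahead

# A finger swiping right and slightly down:
trace = [(100, 200), (110, 202), (121, 205)]
print(predict_next_touch(trace))   # → (132, 208)
```

A real input pipeline would also smooth noisy samples and cap how far ahead it extrapolates, but the payoff is the same: shaving perceived latency off every swipe.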
Search gets a remake. The biggest new feature consists of various improvements to search. In Jelly Bean, Google overhauled search, making the interface more visual with what it calls cards. It's faster, with more natural-language voice search. Cards contain pictures, maps, and shortcuts, and web search results are presented as a backup set of data sources behind each card.
Voice typing works without a data connection. In a move that makes Siri look outdated, Google demonstrated voice-to-text transcription while in airplane mode. The service will start with English and add other languages.
Google Now is a set of context services that I've been calling Right Time Experiences. For some time I've said that applications would become predictive, adaptive, and semantic. Google announced Google Now, which is supposed to get you the right information at the right time automatically. The Google Now services will learn over time and become more customized to an individual user. Google illustrated several examples of this at I/O, including places, traffic, and appointments. What do I mean by this? In the past, an application or a service was a self-contained entity. For example, your calendar only knew about your appointments, and a CRM system only had access to a limited set of customer data. Today, data sources such as weather and traffic feeds have APIs that allow an application to connect to them. Location and social graphs can also be accessed if the user allows it. Applications will tap into available sources such as your calendar, your location, and a traffic feed. With this data, a map application such as Google Maps could alert you that there is a road closure and you'll need to leave earlier to make your appointment. Services will gather data from multiple sources and make recommendations or deliver content based on what you need.
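The traffic-and-appointment example above can be sketched in a few lines. This is a hedged illustration, not Google's implementation: the calendar entry, travel time, and traffic delay are stubbed constants, where a real service would pull them from calendar, maps, and traffic APIs.

```python
# Sketch of a "right time experience": combine a calendar appointment
# with a traffic feed to compute when to alert the user to leave.
# All inputs are stubbed; a real service would query live APIs.
from datetime import datetime, timedelta

appointment = datetime(2012, 6, 27, 10, 0)   # 10:00 meeting (stub)
normal_drive = timedelta(minutes=25)          # baseline travel time (stub)
traffic_delay = timedelta(minutes=15)         # delay from traffic feed (stub)
safety_margin = timedelta(minutes=5)          # padding so you arrive early

leave_by = appointment - (normal_drive + traffic_delay + safety_margin)
print("Leave by", leave_by.strftime("%H:%M"))  # → Leave by 09:15
```

The interesting part is not the arithmetic but the plumbing: the service watches several data sources on your behalf and volunteers the answer before you think to ask.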