Google Has an AI Lead and Is Putting It to Good Use

May 18, 2017 | 8:48 PM EDT
At this point, most of those following the product announcements and R&D efforts of consumer tech giants know that the subset of AI known as machine learning -- broadly defined as the use of algorithms that can learn on their own by taking in relevant data -- is a big deal for practically all of them. And that it's being used to do things like field voice commands, detect objects within photos and get cars to drive themselves.

But there are a couple of facets to this trend that aren't as well-appreciated:

  1. The usefulness of a machine learning algorithm for handling tasks normally done by humans doesn't necessarily improve at a linear pace. Once enough data has been run through the algorithm, a tipping point can be reached, after which its performance improves exponentially or close to it.
  2. Machine learning R&D work can often be applied to many different tasks, including some that don't have much in common at first glance.

Both of these phenomena work very much in Alphabet/Google's (GOOGL) favor. Though Amazon (AMZN) , Microsoft  (MSFT) and others have also made tremendous progress in delivering AI-powered products and services on a large scale, Google arguably remains a step ahead when it comes to many of the tasks that both Google and rivals are trying to address. And as the announcements made at this week's Google I/O developers conference show, a lot of these offerings seem to be hitting a tipping point in terms of what they can do.


During his Wednesday I/O keynote talk, CEO Sundar Pichai pointed out that the word error rate for Google's speech recognition services had fallen to 4.9% from 8.5% just over the course of the last ten months. He also pointed out that the error rate for Google's image-recognition algorithms was now below the human error rate. Along the same lines, Google had previously disclosed that Google Translate's services now at times deliver human-level accuracy due to their adoption of an AI-based translation technique.
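To put that speech-recognition figure in relative terms, a quick back-of-the-envelope calculation (the rates are from the keynote; the variable names are mine):

```python
# Word error rates cited by Pichai at I/O 2017.
wer_before = 8.5  # percent, roughly ten months earlier
wer_after = 4.9   # percent, as of the keynote

# Relative reduction: the service now makes about 42% fewer
# word errors than it did ten months before.
relative_reduction = (wer_before - wer_after) / wer_before
print(f"{relative_reduction:.1%}")  # → 42.4%
```

A drop of that size in under a year is what makes the "tipping point" framing credible.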

Pichai's disclosures were followed by the unveiling of a slew of new AI-powered Google services. Among them:

  1. Google Lens, an augmented reality (AR) tool that can recognize objects, buildings and text picked up by a phone's rear camera, and then act on that information. Examples shown included providing information on a detected restaurant or plant, translating a sign from Japanese to English and automatically entering a Wi-Fi network's login info after reading it off the router's label.

    Lens will be integrated with Google Assistant and Photos, with the former able to use what Lens has detected to continue an interaction with a user. Indirectly, it competes with the AR developer platform that Facebook (FB) just launched.
  2. New Google Photos features. Photos will now be able to automatically detect the presence of contacts within pictures, and recommend the sharing of photos with them. It will also be able to automatically remove unwanted items from a photo.
  3. The arrival on Android phones of Actions on Google, a service that lets third-party developers build interactions for Google Assistant. It's Google's version of Amazon's Alexa Skills.
  4. The addition of Smart Replies -- suggested replies that are based on Google's analysis of a message, and which a user can send with a couple of taps -- to Gmail. Smart Replies were previously available on the much less popular Allo messaging app.

Such services often depend on core research done by the well-funded Google Brain AI research division. Google said last year that it was applying machine learning to over 100 projects, and has indicated that many of these projects involve engineers taking the same basic research and extending it to address a specific task.

This approach undoubtedly has much to do with the launch of Google.ai, which was also announced during the keynote. Google.ai isn't a new division so much as a cross-company initiative to bring together the company's core and applied AI work, as well as its efforts to create AI-related computing hardware, software libraries and cloud services that can be leveraged by both Google and third-party developers.

The "computing hardware" aspect of this effort took a big step forward with the unveiling of a second-gen tensor processor, a proprietary chip meant for machine learning work. Each new tensor processor can deliver an impressive 45 teraflops of performance, and is placed on a four-chip tensor processing unit (TPU). 64 TPUs can be rigged together to create an 11.5-petaflop supercomputer. Importantly, whereas the original tensor processor was only meant for running machine learning algorithms to deliver real-world services (inferencing), the new one can also handle the more demanding task of training an algorithm for a particular job.
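The performance figures quoted above compose straightforwardly; a minimal sketch of the arithmetic, using the chip and pod sizes stated in the keynote:

```python
TFLOPS_PER_CHIP = 45   # second-gen tensor processor
CHIPS_PER_TPU = 4      # one TPU carries four chips
TPUS_PER_POD = 64      # TPUs rigged together into a supercomputer

tflops_per_tpu = TFLOPS_PER_CHIP * CHIPS_PER_TPU      # 180 teraflops per TPU
pod_petaflops = tflops_per_tpu * TPUS_PER_POD / 1000  # teraflops -> petaflops

print(pod_petaflops)  # → 11.52, the ~11.5 petaflops cited
```

In other words, the 11.5-petaflop headline number is just 64 boards of four 45-teraflop chips each.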

In addition to using its new TPUs for its own projects, Google plans to sell access to them through the Google Cloud Platform (GCP). That makes the product to some degree a threat to Nvidia (NVDA) , whose Tesla server GPUs are widely used for AI training work. It should be noted, however, that Google (like other cloud giants) remains a big Nvidia client, and that Pichai gave a shout-out during his keynote to Nvidia's powerful new Tesla V100 GPU.

A slew of other AI-related efforts were also discussed during Google's I/O keynote. These included using machine learning to improve Google Search, to automatically detect signs in Google Maps' Street View imagery and to help pathologists analyze high-resolution imagery for signs of cancer.

Ultimately, however, the biggest takeaway from the keynote isn't how useful Google's AI algorithms are for one particular task or another. It's how the company's AI work in general has become a core competitive advantage whose presence is being felt company-wide.
