From a Minor Role to the Center of Attention, AI Has Become the Mainstay of Google I/O

The countdown to Google I/O 2019 has begun, and year after year the conference's AI lineup has grown more powerful. Today, let's take a look at the AI products, both familiar and less familiar, that were born at Google I/O.
This year marks the fourth year of Google's "AI First" strategy at I/O. At every conference since 2016, Google has shown off its latest cutting-edge AI technology to the world.
In 2016, AI elements officially entered the show with Google Assistant and the Google Home smart speaker; in 2017, the theme shifted from "Mobile First" to "AI First".
By 2018, the I/O conference was completely dominated by AI, becoming a veritable "AI Only" event.
Before this year's conference kicks off, let's take stock of the AI products born at past I/O conferences, and at the same time get ready for this year's new releases.
Google Assistant is constantly evolving
At the 2016 Google I/O conference, Google launched its artificial intelligence flagship, Google Assistant: an intelligent voice assistant built on Google Now, artificial intelligence, and deep learning.
Users can ask Google Assistant to do many things by voice, such as looking up show reservations and ticketing information, remembering things for you, or adding notes to your calendar. Notably, Google Assistant keeps improving itself: the more you use it, the smarter it gets.
In 2017, the new Google Assistant launched on the iPhone, with more powerful features and broader use cases. Google CEO Sundar Pichai also stated at the conference that Google Assistant is the most important vehicle for Google's machine learning technology.
In line with the "AI First" theme, Google Assistant was updated along three fronts: voice, text, and images. Where the previous version supported only voice input, the updated version also accepts text and image input. The new version is also more open and no longer limited to Google's own products: Google released an SDK to developers and partnered with other smart home manufacturers.
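To give a flavor of what the SDK opened up, here is a minimal sketch based on Google's published Python sample for the Google Assistant Library. The credentials file and the device model ID are placeholders you would obtain from the Actions Console; this is an illustration under those assumptions, not a complete integration.

```python
import json

from google.assistant.library import Assistant
from google.assistant.library.event import EventType
from google.oauth2.credentials import Credentials

# "credentials.json" and "my-device-model-id" are placeholders: the OAuth2
# credentials and device model ID come from registering a device in the
# Actions Console.
with open("credentials.json") as f:
    credentials = Credentials(token=None, **json.load(f))

with Assistant(credentials, "my-device-model-id") as assistant:
    # start() yields events as the Assistant listens and responds.
    for event in assistant.start():
        if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
            print("Listening...")
        elif event.type == EventType.ON_CONVERSATION_TURN_FINISHED:
            print("Turn finished.")
```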
At the 2018 I/O conference, a new version of Google Assistant was launched. It handles multi-part requests and supports natural-language, multi-turn conversation with contextual awareness.
The new "voice" is one of the biggest highlights of the new version. Thanks to DeepMind's Wavenet technology, it can provide 6 very natural "human voices", and the timbre is so perfect that it can be mistaken for the real thing. It also learns to imitate more human tones, such as a super realistic "MmHmm" is enough.
Technological breakthroughs have taken Google Assistant’s recognition accuracy to a higher level, giving it the potential to play a role in a wider range of AI scenarios.
The Google Home smart speaker, built on Google Assistant, has likewise been continuously updated and iterated, becoming an excellent smart home product.
Google's powerful core: TPU
Google first announced the first generation of TPU (Tensor Processing Unit) at the 2016 I/O conference, and has released upgraded versions every year since then.
TPU is a chip designed specifically for machine learning. In fact, before it was announced, it had already been successfully used in AlphaGo as the basis for its prediction and decision-making technologies.
The first-generation TPU was aimed at inference, focusing on raw compute performance, and its efficiency far exceeded that of GPUs.
The second-generation TPU, officially announced at the 2017 I/O conference, pushed AI learning and inference capabilities further: unlike the first-generation TPU, which could only run inference, it can also be used to train neural networks.
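From a developer's point of view, "training on a TPU" has since become a fairly small change to ordinary TensorFlow code. The sketch below assumes a recent TensorFlow 2.x release and a Cloud TPU; the TPU name and the toy model are placeholders.

```python
import tensorflow as tf

# "my-tpu" is a placeholder for the name or address of a real Cloud TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables created inside the strategy scope are replicated across TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# train_ds would be a batched tf.data.Dataset of (features, labels).
# model.fit(train_ds, epochs=5)
```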
In 2018, the TPU took center stage at the entire I/O conference: this tensor processor, built by Google for deep learning, has become Google's strongest muscle in the field of artificial intelligence.
Compared with TPU 2.0, which was released the previous year but was not yet broadly available, TPU 3.0 delivers eight times the peak compute performance.
The performance gain comes at a considerable cost: the chips generate so much heat that engineers had to swap the usual heatsinks for coolant-carrying tubing. As Sundar Pichai acknowledged, this generation of TPU is liquid-cooled, the first time Google has introduced liquid cooling.
Google Lens, the camera that solves problems
Google Lens, first launched in 2017, is an artificial intelligence application based on image recognition and OCR that identifies objects simply by looking at them. Its use cases include identifying tourist landmarks, extracting text and phone numbers from photos, and real-time translation integrated with Google Translate.
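Google Lens itself has no public API, but the text-extraction side of what it does can be approximated with the OCR in Google's Cloud Vision API. A minimal sketch with the google-cloud-vision Python client follows; "photo.jpg" is a placeholder for any photo containing text.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Load any photo that contains text, e.g. a sign, menu, or business card.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)

# The first annotation is the full block of detected text; the remaining
# annotations are individual words with bounding boxes.
if response.text_annotations:
    print(response.text_annotations[0].description)
```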
A year later, Google Lens added style matching and real-time scene matching on top of plain recognition: users can point Google Lens at an item they like and get products in similar styles.
There are also some fun extras. For example, when it recognizes a singer's portrait, it can play that singer's music video in picture-in-picture mode.
Support has also been extended to devices from many more manufacturers, including OnePlus, ASUS, and Xiaomi.
Details worth noting
Beyond the headline products, several core products have grown smarter at each year's conference, and the attention to detail shows Google's dedication.
Gmail
Last year, Gmail added Smart Compose, which uses machine learning to suggest the rest of a sentence as you type, helping users write emails faster with intelligent text completion.
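Smart Compose is backed by a neural language model trained on email text, but the user-facing behavior, completing a sentence from its prefix, can be illustrated with a toy sketch. The phrase list and lookup below are purely illustrative and are in no way Google's implementation.

```python
# Toy prefix completion: mimics the Smart Compose UX with canned phrases.
COMMON_PHRASES = [
    "thanks for your help",
    "thanks for the quick reply",
    "looking forward to hearing from you",
    "please let me know if you have any questions",
]

def suggest(prefix):
    """Return the suggested remainder of the first phrase matching `prefix`."""
    prefix = prefix.lower().rstrip()
    for phrase in COMMON_PHRASES:
        if phrase.startswith(prefix) and len(phrase) > len(prefix):
            return phrase[len(prefix):]
    return None

print(suggest("thanks for y"))  # -> "our help"
```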
Google Maps
Google now combines AI with satellite imagery to add more businesses and new addresses to the map. Drawing on big data, the AI-powered Google Maps also makes personalized recommendations, such as suggesting restaurants you are likely to enjoy based on the places you visit often.
Google News
Google News automatically adds and translates video subtitles, and introduced a new visual format called Newscasts, which uses natural language understanding to recommend articles and videos on a single topic to users.
We won't go into further detail here. To give you a bird's-eye view, we have compiled the AI products and their updates and upgrades released at Google I/O since 2016 as follows:
Judging by the trajectory so far, I believe AI will be everywhere at this year's conference. I/O kicks off in just a few hours, so let's wait and see!
P.S.: If you can't contain your curiosity, save the live stream links below and tune in at 1:00 a.m. Beijing time on May 8.
Official live stream: http://t.cn/Eo6MWU1
iQiyi live stream: http://t.cn/Eoiz60J
Sina live stream (Chinese simultaneous interpretation): http://t.cn/Eo6xurx
Sina live stream (original English audio): http://t.cn/Eo6JlcJ