TensorFlow's Fourth Year: What Matters Most? @GDD 2019

On the second day of GDD, developers were still enthusiastic and the TensorFlow RoadShow was packed. It has been four years since TensorFlow was launched in 2015. Google has built a whole ecosystem around TensorFlow, and the user base is growing. So what has the Google TensorFlow team brought us this time?
Yesterday, on the first day of GDD, Google walked through its recent developments and new products in detail; on the second day, the focus turned to TensorFlow, which has now been out for four years.
Early this morning, Apple held its new product launch event and unveiled the triple-camera iPhone 11 series, along with a new iPad, Apple Arcade, Apple TV+, and the Apple Watch Series 5.
While Apple was courting consumers with new hardware, on the other side of the ocean the Google Developer Conference proceeded in a lower-key, steadier fashion, walking through its latest technical advances in detail and offering developers genuinely practical help.
The special TensorFlow RoadShow filled the entire day's schedule. So what highlights did the TensorFlow team bring to today's GDD?
TensorFlow: The most popular machine learning framework
At the TensorFlow RoadShow, Asia-Pacific product manager Liang Xinping took the stage first with a talk on "The Present and Future of Machine Learning," giving an overview of how TensorFlow has developed.

There are three key drivers behind machine learning today: datasets, computing power, and models. Riding this trend, TensorFlow has become the most successful machine learning platform.
Since its release in 2015, TensorFlow has been continuously improved and updated. To date, it has accumulated more than 41 million downloads, more than 50,000 commits, 9,900 pull requests, and more than 1,800 contributors.

Thanks to its powerful functionality, real-world use cases of TensorFlow keep multiplying, and many companies and institutions rely on it for research and development. The TensorFlow Chinese-language website has also launched, and the Chinese community and its technical resources are expanding by the day.
After this overview came a comprehensive tour of TensorFlow itself, with the team's engineers walking through its progress in detail.
Key Points: TensorFlow 2.0
The much-anticipated 2.0 release finally arrived in 2019: the TensorFlow 2.0 Beta shipped in June, and at today's GDD the engineers announced that TensorFlow 2.0 RC is now available. Compared with 1.x, the new version has been upgraded around three themes: ease of use, performance, and scalability.
The most attractive changes are the adoption of Keras as the high-level API, eager execution as the optimized default, the removal of duplicated functionality, and a unified API.

TensorFlow 2.0 combines Keras and eager execution, which makes it easy to build models and to deploy them robustly in production on any platform.
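As a rough illustration of what this looks like in practice, here is a minimal sketch of model building in TensorFlow 2.0 with Keras as the high-level API and eager execution on by default. The layer sizes and the random data are purely illustrative assumptions, not something shown at the talk.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 2.x

# Keras is the recommended high-level API in TF 2.0.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Illustrative random data; in practice you would load a real dataset.
x = np.random.random((256, 20)).astype("float32")
y = np.random.randint(0, 10, size=(256,))

model.fit(x, y, epochs=2, batch_size=32)

# Because eager execution is the default, ops run immediately and
# tensors can be inspected directly, without building a graph first:
print(tf.reduce_sum(tf.constant([1.0, 2.0, 3.0])))  # prints a concrete value
```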
After introducing the details of 2.0, Google engineer Liang Yanhui also gave a detailed introduction to the method of upgrading from version 1.0 to 2.0.
Google has already begun migrating its internal code, and the official website provides detailed code migration guides and tools. Users who still need or depend on a 1.x API can follow the guide and move to 2.0 with little friction.
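The talk did not walk through the migration code itself, but as a rough sketch of the documented path: TensorFlow 2.0 ships a conversion script, tf_upgrade_v2, that rewrites most 1.x calls, and a tf.compat.v1 module that lets legacy graph-and-session code keep running while it is migrated incrementally. The file names below are placeholders.

```python
# Step 1 (outside Python): run the official upgrade script over existing code,
#   tf_upgrade_v2 --infile old_model.py --outfile new_model.py
# (file names here are placeholders for your own scripts).

# Step 2: code that still depends on 1.x behaviour can run on TF 2.0
# through the compatibility module while being migrated piece by piece.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores graph mode / sessions for legacy code

x = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.reduce_mean(x)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```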
So what are the specific aspects of TensorFlow 2.0 that deserve attention? Google engineers have made a detailed introduction from the following perspectives.
TF.Text: Training NLP Models
As an important direction in machine learning, natural language processing has huge market demand. The TensorFlow team has officially launched and upgraded TF.Text, which brings powerful text processing capabilities to TensorFlow 2.0 and is compatible with eager (dynamic graph) mode.

TF.Text is a TensorFlow 2.0 library that can be installed easily with pip. It handles the routine preprocessing needed by text-based models and provides features and ops for language modeling that the TensorFlow core does not include.
Its most common function is text tokenization. Tokenization is the process of breaking a string into tokens, which may be words, numbers, punctuation marks, or combinations of these.
TF.Text's tokenizers return their results in a new tensor type designed for text, the RaggedTensor, and ship as three new tokenizers. The most basic is the WhitespaceTokenizer, which splits a UTF-8 string on whitespace characters defined by ICU (such as space, tab, and newline).
The TF.Text library also covers normalization, n-grams, and token sequence constraints. Using TF.Text has many benefits: users no longer have to worry about keeping training-time and serving-time preprocessing consistent, and they no longer have to maintain preprocessing scripts themselves.
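A minimal sketch of the whitespace tokenization described above, assuming the tensorflow_text pip package (installed with `pip install tensorflow-text`); the sample sentences are made up.

```python
import tensorflow_text as tf_text

# WhitespaceTokenizer splits UTF-8 strings on ICU-defined whitespace
# (space, tab, newline) and returns a RaggedTensor, since each sentence
# yields a different number of tokens.
tokenizer = tf_text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(["everything not saved will be lost.",
                             "TensorFlow 2.0 RC is out"])

print(tokens)            # a tf.RaggedTensor of byte strings
print(tokens.to_list())  # nested Python lists, one per input sentence
```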
TensorFlow Lite: Deploying machine learning on the edge
Two senior Google software engineers, Wang Tiezhen and Liu Renjie, introduced the functional updates and technical details of TensorFlow Lite.

TensorFlow Lite is a framework for deploying machine learning applications on mobile phones and embedded devices. The main reasons to deploy on-device come down to three points:
First, near-zero latency, which gives a stable and responsive user experience;
Second, no network connection is required, so it works where connectivity is absent or poor;
Third, privacy protection: data is never sent to the cloud, and all processing happens on the device.
Given these advantages, there is already a sizable market of applications that deploy machine learning on-device with TensorFlow Lite, and version 2.0 further strengthens its model deployment capabilities.
For example, the Xianyu app uses TensorFlow Lite in its rental scenario to label images automatically, improving rental efficiency; Ecovacs Robotics has deployed TensorFlow Lite in its robot vacuums for automatic obstacle avoidance. TensorFlow Lite is also widely used inside Google products such as Google Photos, the input method, and the voice assistant.
According to statistics, applications built on TensorFlow Lite are installed on more than 2 billion mobile devices.
There are still plenty of challenges in deploying machine learning on-device, however: compared with the cloud, the device has less computing power and memory, and deployment must also account for power consumption. TensorFlow Lite has been optimized around these constraints to make on-device machine learning easier.
TensorFlow Lite can ultimately be deployed not only on Android and iOS, but also on embedded systems (such as the Raspberry Pi), hardware accelerators (such as the Edge TPU), and microcontrollers (MCUs).

It is currently applied to image classification, object detection, pose estimation, speech recognition, and gesture recognition; support for BERT, style transfer, voice wake-up, and other features will follow later.
How do you deploy your own model with TensorFlow Lite? Liu Renjie explained that it takes only three steps: train a TF model, convert it to the TF Lite format, and deploy it to the end device. With the libraries integrated in TF 2.0, this requires only a few calls.
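The three steps can be sketched roughly like this in TF 2.0. The tiny Keras model is just a stand-in for a real trained model, and the output file name is a placeholder; the Python interpreter at the end stands in for the Android/iOS/MCU runtimes used on real devices.

```python
import numpy as np
import tensorflow as tf

# Step 1: train a TensorFlow model (a trivial Keras model as a stand-in).
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")
model.fit([[1.0, 2.0, 3.0, 4.0]], [[1.0]], epochs=1, verbose=0)

# Step 2: convert it to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:   # placeholder file name
    f.write(tflite_model)

# Step 3: on the device, load the .tflite model with the TF Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index,
                       np.array([[1.0, 2.0, 3.0, 4.0]], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_index))
```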
TensorFlow.js: A platform for building WeChat mini programs
TensorFlow.js is a deep learning platform tailored for JavaScript. With it you can run existing models, retrain existing models, and train new ones.

To make it more practical, TensorFlow.js supports multiple platforms: browsers, wireless terminals (such as WeChat mini programs), servers, and desktops. Beyond running machine learning models on all of them, you can also train models, and it is GPU-accelerated with automatic WebGL support.
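The session itself demonstrated browser-side JavaScript, but as a rough Python-side sketch of how a trained Keras model can be handed over to TensorFlow.js, the tensorflowjs pip package provides a converter. The model and the output directory below are placeholders.

```python
import tensorflow as tf
import tensorflowjs as tfjs  # pip install tensorflowjs

# A trivial Keras model standing in for a real trained one.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

# Writes model.json plus binary weight shards that TensorFlow.js can
# load in a browser or a WeChat mini program environment.
tfjs.converters.save_keras_model(model, "tfjs_model_dir")  # placeholder path
```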
In the live demonstration, they showed ModiFace, a virtual makeup try-on program built on TensorFlow.js and billed as the smallest and fastest app of its kind. Features such as hairstyle swapping, age simulation, and skin analysis are reportedly planned for the future.

Google engineers also noted that TensorFlow.js suits both websites and wireless terminals and has a long list of machine learning use cases: augmented reality (AR), gesture- and body-based interaction, speech recognition, accessible websites, semantic analysis, intelligent conversation, and web page optimization.
Currently, TensorFlow.js already supports image classification, object recognition, pose recognition, voice command recognition, text classification, and more. The WeChat mini program plug-in that has been released, for example, exposes this rich functionality through a single API.
Expect more surprises from Google and TensorFlow
Beyond the features above, the talks also covered tf.distribute, the TensorFlow model optimization toolkit, and several enterprise use cases of TensorFlow. Finally, Liang Xinping returned to the stage to talk about the TensorFlow community.

More than 2,135 contributors have taken part in building the core of TensorFlow; there are 109 Google Developer Experts in machine learning and more than 46 TensorFlow User Groups. He also explained in detail how to join the TensorFlow community.
With the TensorFlow RoadShow over, the Google Developer Conference wrapped up its full schedule on a high note. For hands-on developers, the practical material from this event is arguably far more useful than watching Apple's launch event.
Let's look forward to TensorFlow's next breakthrough, and hope Google keeps pushing forward in AI. See you at GDD next year!
