Transfer Learning: Where It Is Used Today

Artificial intelligence (AI), and deep learning (DL) in particular, is among the most significant technological advances in recent history. It has become an indispensable assistant in everyday life, making our experience of various services and platforms more comfortable.

Transfer learning (TL) is the reuse of a pre-trained model to solve a new problem. It is currently popular in DL because it lets you train deep neural networks on a relatively small amount of data. That makes it valuable in data science, where most real-world problems do not come with the millions of labeled data points needed to train a complex model from scratch. Check this detailed post from Serokell to understand how transfer learning works.
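To make the idea concrete, here is a minimal sketch in PyTorch (our illustration, not taken from the Serokell post): a network pre-trained on ImageNet is reused for a new task by freezing its weights and training only a small replacement head. The class count, dummy batch, and hyperparameters are placeholders for your own task.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained weights so the learned features are kept.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier with a head for the new task
# (here, a hypothetical 5-class problem). Only this layer is trained.
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch; swap in your own DataLoader.
images = torch.randn(8, 3, 224, 224)   # batch of 8 RGB images
labels = torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the final layer is trained, a few thousand labeled examples are often enough, whereas training the whole network from scratch would need far more.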

TL lets data science professionals reuse the knowledge captured by a previously trained machine learning model to solve new problems. Let’s look at a few examples.

NLP

TL uses the knowledge of pre-trained AI models capable of understanding linguistic structures to solve cross-domain problems. Everyday NLP tasks, such as predicting the next word, answering questions, and machine translation, use DL models such as XLNet, ALBERT, and BERT.
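As a hedged illustration of how little code it takes to reuse such a model, the sketch below loads a pre-trained BERT through the Hugging Face transformers library (an assumed dependency, not mentioned above) and uses its masked-word objective to fill in a blank:

```python
from transformers import pipeline

# Download a pre-trained BERT and wrap it in a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was pre-trained to predict masked words, so it can do this
# out of the box, with no task-specific training at all.
for prediction in fill_mask("Transfer learning reuses a [MASK] model."):
    print(prediction["token_str"], round(prediction["score"], 3))
```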

Computer vision


DL networks are widely used for image-related tasks because they excel at identifying complex image features. Image recognition, object detection, and image denoising are typical applications of TL, since all of them build on the same basic ability to detect edges, textures, and other familiar visual patterns.
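Besides fine-tuning, a second common pattern in computer vision is to use a pre-trained network as a fixed feature extractor and feed its embeddings to any downstream model. A minimal sketch with torchvision follows; the input tensor is a stand-in for a real image:

```python
import torch
from torchvision import models

# Load a pre-trained network and drop its ImageNet classifier head,
# leaving a generic image-to-embedding feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)  # placeholder input image
    features = backbone(image)           # 2048-dimensional embedding

print(features.shape)  # torch.Size([1, 2048])
```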

Audio/Speech Recognition

TL algorithms are essential to solving audio/speech-related tasks, such as speech recognition or speech-to-text translation. When we say “Siri” or “OK, Google!”, an AI model originally developed for English speech recognition is busy processing our commands in the background.
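For illustration, speech-to-text with a pre-trained model takes only a few lines, assuming the transformers library is installed and an audio file command.wav exists (a hypothetical path); the small Whisper checkpoint used here is just one of many openly released speech models:

```python
from transformers import pipeline

# Reuse a pre-trained speech recognition model as-is.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")

# Transcribe a local audio file (hypothetical path).
result = asr("command.wav")
print(result["text"])
```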

The Chinese search engine Baidu is also investing in AI-enabled applications. One of the most fascinating developments from Baidu’s research laboratory is what the company calls Deep Voice: a deep neural network capable of generating synthetic voices that are very difficult to distinguish from natural human speech. The network analyzes the unique subtleties of rhythm, accent, pronunciation, and pitch to create realistic speech.

Deep Voice 2, the latest version of the technology, could have a crucial impact on natural language processing, which underlies voice search and speech recognition systems. And yes, it uses TL.

Gaming Industry

The introduction of AI has taken games to a whole new level. Besides a substantial leap in the intelligence of NPCs, computers have learned to beat even professional players. DeepMind’s AlphaGo neural network program is proof of this: it successfully defeated a professional Go player.

AlphaGo is a master of this particular game, but it is useless when assigned to play other titles, because its algorithm is tailored to the game of Go. Thanks to TL, however, developers can teach such an algorithm to play different games: the knowledge gained on Go is carried over, and the network then adapts to the rules and techniques of the new game.
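In code, this kind of cross-game transfer often amounts to copying the shared feature-extraction layers of one policy network into another and retraining only the new game’s output head. The sketch below is purely illustrative; the architecture and shapes are invented and have nothing to do with AlphaGo’s actual design:

```python
import torch.nn as nn

def make_policy(num_actions: int) -> nn.Sequential:
    # A toy policy network: a conv feature extractor plus an action head.
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 9 * 9, num_actions),  # board encoded as 1x9x9
    )

old_policy = make_policy(num_actions=81)  # trained on the old game
new_policy = make_policy(num_actions=4)   # new game, new action space

# Transfer every parameter whose name and shape match (the shared
# backbone); the mismatched action head stays freshly initialized.
old_state = old_policy.state_dict()
new_state = new_policy.state_dict()
new_state.update({k: v for k, v in old_state.items()
                  if k in new_state and v.shape == new_state[k].shape})
new_policy.load_state_dict(new_state)
```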

The main reason for using TL


Training a model on a massive amount of data requires not only the data itself but also significant resources and time. For example, when Google was developing its Xception image classification model, it trained two versions: one on the ImageNet dataset (14 million images) and the other on the JFT dataset (350 million images). Training on 60 NVIDIA K80 GPUs with various optimizations took three days for a single ImageNet experiment. The JFT experiment took more than a month.

However, now that the pre-trained Xception has been released, teams can refine their versions much faster using TL. For example, a team from the University of Illinois and Argonne National Laboratory recently prepared a model for classifying images of galaxies.

Although their dataset consisted of only 35,000 labeled images, they were able to fine-tune Xception in just eight minutes. The resulting version classifies galaxies with 99.8% accuracy at superhuman speed. This kind of speedup is the main reason for using TL.
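A sketch of this fine-tuning recipe with Xception in Keras (TensorFlow) looks roughly like the following; dataset loading is omitted, and the five-class head is a placeholder rather than the galaxy team’s actual setup:

```python
import tensorflow as tf

# Load Xception pre-trained on ImageNet, without its original classifier.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3),
)
base.trainable = False  # keep the expensively learned features frozen

# Attach a small task-specific head and train only that.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # placeholder classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=3)  # your labeled data
```

Freezing the backbone is what makes minutes-long fine-tuning on tens of thousands of images possible: only the small head’s weights are updated.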

Transfer Learning Today

In recent years, transfer learning has seen a lot of success in many fields. One popular application of the method is image recognition, where a model trained on one set of pictures can be reused to recognize similar pictures later on.

Transfer learning has also had great success within deep learning itself. Deep learning is a type of machine learning that builds artificial neural networks (ANNs) that are very complex and require large amounts of data. Traditionally, ANNs have been trained using supervised methods, in which we provide examples of the correct answer and the computer learns from these examples how to produce the correct answer for future cases. However, supervised methods are often time-consuming and require large amounts of data.

Transfer learning can be used to overcome these limitations: an ANN is first trained to perform one task, and it then reuses that knowledge to learn new tasks from a much smaller, purpose-built dataset, with little additional input from human trainers.

One such example is Google’s AutoML project, which uses transfer learning to train deep neural networks automatically and is available commercially through services such as Google Cloud AutoML. After training an initial network on some predetermined datasets, AutoML can then, in a self-taught fashion, learn how to train other deep neural networks on a wider range of data.

Where can transfer learning be used?


Transfer learning is a method of learning where the learner does not have to start from scratch. Instead, what has been learned in one context (the source task) is applied in another context (the target task).

There are many different applications for transfer learning, including:

Natural language processing: Companies like Google use transfer learning to improve their natural language processing ability. By pre-training their models on large amounts of text and speech data, they are able to better understand human language.

Robotics: Companies like Airbus and Boeing use transfer learning to create more efficient robots. Rather than designing one robot that is reused across hundreds of products, companies can train their machines on examples from different products. This allows for more customized robots that are more effective in specific contexts.

Conclusion

More and more companies are creating ML models, and developers are using them to design new tools. As companies like OpenAI, Google, Facebook, and other tech giants release powerful open-source models, the tools available to machine learning developers become more powerful and stable.

Instead of spending time building a model from scratch in PyTorch or TensorFlow, data scientists can take these openly available pre-trained models and use TL to create products, which means the emergence of a new generation of machine-learning-based software.