In the Battle for Deep Learning Proliferation
(This is an introductory post to the world of distributed learning. Watch out for our upcoming three-post series, where we break down the different methods of training AI on the edge.)
Deep Learning is one of the most significant aspects of artificial intelligence (AI), as it enables computers to learn on their own through pattern recognition, rather than through pre-programming by humans.
Moore’s Law, formulated by Gordon Moore in 1965, states that “the number of transistors in a dense integrated circuit doubles approximately every two years”. This prediction has proved incredibly accurate and insightful. In tandem, the development of computer processors and memory, along with ever-faster network speeds, continues unabated.
Everything is Interconnected
Mobile phones, the Internet of Things (IoT), autonomous cars, industrial robots, precision agricultural machines, skimmers, smart homes, and smart electricity and water meters are just a few examples. Smart devices not only communicate with humans, but also amongst themselves: traffic lights that report to municipal control rooms and security cameras that feed information to war rooms are already here.
With the increased penetration and proliferation of connected devices, the amount of data collected from the world is growing exponentially. Coupled with the computational demands of Deep Learning algorithms, this growth is driving a new generation of processors. These new processors and new hardware are expected to deliver breakthrough results in power consumption, heat dissipation, size and cost, and to be readily available via mass production.
What is Deep Learning?
Deep Learning is a computational model consisting of a set of algorithms that mimic the learning processes of the human brain. The field has captured the imagination of computer scientists and the business world and has become an integral part of the global high-tech industry. One reason it is so significant is that it provides, and promises, solutions to everyday problems: identifying spoken words on smartphones, upgrading medical techniques, advancing medical discoveries, developing autonomous transportation and introducing smarter, more efficient industrial and financial tools.
Deep Learning is based on a method that was identified several decades ago. However, only in recent years have conditions sufficiently matured to implement this type of AI.
Nvidia, formerly known as a manufacturer of gaming graphics processors, was the first to exploit this opportunity. It realised that its Graphics Processing Units (GPUs), originally designed for personal computers, were extremely well suited to AI workloads, being far more efficient and cost-effective than traditional processors.
The Growth of Edge Devices
Deep Learning continues to drive companies to create new solutions that provide better hardware acceleration and optimisation for edge devices. Today, even typical edge devices have impressive processing power and, in some cases, GPUs or dedicated processing units ideal for AI.
Some companies now focus on designing new architecture-based chips that offer an optimal solution for AI tasks. There are several types of such AI chips, such as chips designed for data centres and chips designed for edge devices (for example, smartphones, cars, IoT devices and more). Some AI chips are designed specifically for inference (prediction/recognition/recommendation) and focus on the implementation of a new capability acquired during the AI training phase. That said, we already see some AI chips that are intended specifically for the training phase and can acquire new capabilities from existing data.
It is clear that progress in the AI arena is significant and will continue its upsurge.
Advances in AI Chips
AI chips are only part of the current revival in the chip market. In recent years, many companies, from start-ups to high-tech giants, have developed hardware and chips for Deep Learning. We see the momentum throughout the entire ecosystem:
- Samsung released a Neural Processing Unit (NPU) chip, which is optimised for Deep Learning computations and designed to process thousands of them simultaneously and efficiently.
- Hailo, a startup company, developed a specialised Deep Learning processor that delivers the performance of a data-centre-class computer to edge devices.
- Intel acquired the deep-learning startup Nervana.
- Qualcomm races to overtake Nvidia in the development of AI chips.
- Google incorporates AI processors called TPUs in its data centres, where they help with search and translation tasks as well as Google Assistant.
- AI chips also enable organisations to use the latest processors through cloud services. Amazon launched a new AI processor called Inferentia, which enables its cloud customers to run machine learning applications. This processor was developed by Israel’s Annapurna Labs, which was acquired by Amazon in 2015.
- Facebook and Microsoft are developing such a chip together, and Huawei, Baidu of China and IBM are working on their own AI chips.
- Tesla recently launched an AI chip for its electric vehicles for autonomous driving, and Apple builds AI chips for its mobile devices.
The Future of Deep Learning on the Edge
Chips intended for edge devices are designed to be as small as possible and therefore have considerably less power than a server. To reach server-level performance, new algorithms are needed that harness multiple chips together, combining their power to match or exceed that of a server. Such algorithms present a significant opportunity for software companies to develop a new generation of distributed Deep Learning methods, such as federated learning.
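To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg), one well-known distributed learning scheme. The toy model, the data and the learning rate are all hypothetical, invented purely for illustration: each device trains on its own private data, and only model weights (never raw data) leave the device.

```python
# Hypothetical FedAvg sketch: two edge devices jointly learn y = w * x
# without ever sharing their raw samples with the coordinator.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on a device's private data.
    Toy model: fit y = w * x with squared-error loss."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(device_weights):
    """Coordinator combines device models by simple averaging."""
    return sum(device_weights) / len(device_weights)

# Each device holds private samples of y = 2x; the data never leaves it.
device_data = [[(1.0, 2.0), (2.0, 4.0)],
               [(3.0, 6.0), (4.0, 8.0)]]

w_global = 0.0
for _ in range(50):  # communication rounds
    local = [local_update(w_global, d) for d in device_data]
    w_global = federated_average(local)  # broadcast back next round
```

In a real system the local models are neural networks and the coordinator averages full weight tensors, but the flow is the same: local update on-device, upload weights, average, broadcast.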
The implementation of Deep Learning and AI on edge devices can significantly reduce a company’s cloud costs, while also providing increased data privacy, by removing the need to send data to an external server for calculations and training.
The symbiotic relationship between connected applications and high-power chips is primed for expansion.
The future is here!
The rush to create a new generation of algorithm solutions is happening today.