AI chip wars begin: who will dominate among Nvidia, AMD and Intel?

Special correspondent Zhang Han reports from Las Vegas and New York

Editor's note: Optimism about artificial intelligence and its explosive potential keeps growing, and whether chips with very high computing power that also meet market demand can be built has become the key question for any AI platform. As a result, 2016 became the year in which chip companies and Internet giants deployed fully into the chip field. Among them, Nvidia holds an absolute lead, but with giants including Google, Facebook, Microsoft, Amazon, and Baidu joining the decisive battle, the future landscape of artificial intelligence remains unsettled.

In 2016, everyone saw the prospects of artificial intelligence and its potential explosive power. Yet whether it is AlphaGo or the self-driving car, what makes any subtle algorithm possible is the underlying hardware's computing power: whether chips with high computational capability that meet market demand can be built has become the key to the artificial intelligence platform. There is thus no doubt that 2016 was also a year in which chip companies and Internet giants deployed fully into the chip field: CPU giant Intel made three acquisitions of artificial intelligence and GPU-related companies within the year; Google subsequently announced that it had developed its own processing system; and Apple, Microsoft, Facebook, and Amazon all joined in.

Among them, the leader Nvidia has become the absolute darling of the capital market thanks to its advantages in artificial intelligence: over the past year, the share price of Nvidia, a company once known mainly for gaming chips, soared from around 30 US dollars, where it had hovered for more than ten years, to 120 US dollars. Just as the capital markets were wondering whether enthusiasm for AI had pushed Nvidia's share price too high, the company released its fourth-quarter 2016 earnings report on February 10, showing revenue up 55% year on year and net profit of 655 million US dollars, a year-on-year increase of 216%.

"As Intel, Microsoft, and other giants invest in artificial intelligence chip technology, Nvidia's Q4 report already shows that this chip company, which has invested nearly 12 years in the field of artificial intelligence, has begun to reap considerable profits," veteran technology columnist Therese Poletti pointed out after the report's release.

Research firm Tractica LLC estimates that hardware spending driven by deep learning projects will grow from 43.6 million US dollars in 2015 to 4.1 billion US dollars in 2024, while related enterprise software spending will grow from 109 million to 10 billion US dollars over the same period. It is this large market that has attracted giants such as Google, Facebook, Microsoft, Amazon, and Baidu to announce, one after another, their technological shifts toward artificial intelligence.

"In artificial intelligence-related technologies, Nvidia still holds an absolute lead, but as technologies such as Google's TPU continue to reach the market, the future AI hardware landscape remains unsettled," a senior employee at a European technology company told the 21st Century Business Herald.

Nvidia Leads Significantly in the GPU Field

According to Nvidia's latest annual report, its most important business areas all saw double-digit growth.
Beyond the gaming business, where it has long held a leading edge, most of the gains actually came from two newer segments: data center operations and autonomous driving. Annual report data show that the data center business grew 138%, while the automotive business grew 52%.

"In fact, this is the most telling content in the entire Nvidia financial report, because the growth of the data center and autonomous driving businesses is driven almost entirely by the development of artificial intelligence and deep learning," an American computer hardware analyst told the 21st Century Business Herald.

In deep learning today, putting a neural network into practical use goes through two phases: first training, then inference. In the current environment, the training phase demands massive data processing from GPUs (graphics processing units, the same below), the chips that lead in graphics rendering for games and other graphics-intensive applications. The inference phase relies on CPUs that can handle complex programs, the field Intel has led for more than a decade.

"Nvidia's current success represents the success of the GPU; it is one of the earliest GPU leaders," the industry analyst said.

Deep neural networks, especially those with hundreds or even thousands of layers, place very high demands on high-performance computing, and the GPU has a natural advantage in handling such complex operations: its excellent parallel matrix computation can dramatically accelerate both the training and the classification of neural networks. For example, rather than defining a human face by hand from the start, researchers can show the computer millions of face images and let it learn what a face should look like. When learning from such examples, GPUs can work far faster than traditional processors, greatly speeding up training. GPU-powered supercomputers have therefore become the best choice for training all kinds of deep neural networks; Google Brain, for example, used Nvidia GPUs for deep learning.

"We are building a camera with tracking capabilities, so we needed to find the most suitable chip, and the GPU was our first choice," Gunleik Groven, CEO of the European startup Quine, told this reporter at CES (the International Consumer Electronics Show) in January.

Currently, Internet giants such as Google, Facebook, Microsoft, Twitter, and Baidu all use GPUs to let their servers learn from massive quantities of photos, videos, audio documents, and social media information, improving functions such as search and automatic photo tagging. Some automakers are also using the technology to develop self-driving cars that can sense their surroundings and avoid danger.

Besides its long-term leadership in GPUs and graphics computing, Nvidia was also one of the first technology companies to invest in artificial intelligence. In 2008, Andrew Ng, then at Stanford, published a paper on training neural networks with CUDA on GPUs. In 2012, Alex Krizhevsky, a student of Geoffrey Hinton, one of the "big three" of deep learning, dramatically improved image recognition accuracy on ImageNet using Nvidia GeForce graphics cards. That was the beginning of Nvidia's focus on deep learning. According to reports, there are currently more than 3,000 AI start-ups worldwide, and most of them build on hardware platforms provided by Nvidia.
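To make the GPU's training-phase role concrete, here is a minimal sketch of a single neural-network training step, written in PyTorch (a framework not mentioned in the article, chosen purely for illustration; the model and batch sizes are arbitrary). Every stage of the step is dominated by matrix multiplications, which is exactly the parallel workload GPUs accelerate; the same code runs on a GPU whenever one is available.

```python
import torch

# Illustrative toy model; layer sizes are arbitrary assumptions.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# A random batch standing in for real training data (e.g. face images).
inputs = torch.randn(256, 1024, device=device)
labels = torch.randint(0, 10, (256,), device=device)

# One training step: forward pass, loss, backward pass, weight update.
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
```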
"Deep learning proved to be very effective." Huang Renxun said in the quarterly press release on February 10th. While enumerating current GPU computing platforms that are rapidly being applied in the fields of artificial intelligence, cloud computing, games, and automated driving, Huang Renxun said that in the coming years, deep learning will become a basic core tool for computer computing. AMD and Intel giants' AI evolution Investors and chip makers are watching all the Internet giants' every move. Just taking Nvidia’s data center business as an example, the company has been providing data services for Google for a long time. Nvidia is not the only leader in GPUs. Giants Intel and AMD have different advantages in this area. In November 2016, Intel Corporation released an AI processor called Nervana, which they claimed will test the prototype in the middle of next year. If all goes well, the final form of Nervana chips will be in the end of 2017. The chip name is based on a company Nervana bought earlier from Intel. According to Intel’s people, this company is the first company in the world to create chips specifically for AI. Intel company disclosed some details about this chip. According to them, this project code is "Lake Crest" and Nervana Engine and Neon DNN related software will be used. This chip can speed up various neural networks, such as the Google TensorFlow framework. The chip consists of an array of so-called "processing clusters" that deal with simplified mathematical operations called "active points." Compared to floating-point arithmetic, this method requires less data and therefore brings a 10x performance improvement. Lake Crest uses private data connections to create larger, faster clusters with a torus or other topology. This helps users create larger and more diversified neural network models. This data connection contains 12 100Gbps bidirectional connections, and its physical layer is based on 28G serial-to-parallel conversion. TPU and FPGA may counterattack In addition to the above-mentioned chip giant's advancement in the GPU field, more companies are attempting to trigger a full round of subversion. Its representative for Google announced in 2016 that it will independently develop a new processing system called TPU. TPU is a dedicated chip designed specifically for machine learning applications. By reducing the computational precision of the chip and reducing the number of transistors needed to implement each computational operation, the number of operations per second that the chip can run can be higher, so that a finely tuned machine learning model can run on the chip. Faster and faster for users to get smarter results. Google embedded the TPU accelerator chip in the circuit board and used the existing hard disk PCI-E interface to access the data center server. According to Google senior vice president Urs Holzle, the current use of Google TPU and GPU will continue for some time, but he said that the GPU can perform drawing operations with multiple purposes. The TPU is an ASIC, which is designed for specific applications. The special specification logic IC, because it only performs a single job, is faster, but the disadvantage is higher cost. In addition to the above-mentioned Google, Microsoft is also using a new type of processor called the Field Variable Programmable Gate Array (FPGA). According to reports, this FPGA already supports Microsoft Bing. 
In addition to Google, Microsoft is also turning to a new type of processor, the field-programmable gate array (FPGA). According to reports, FPGAs already power Microsoft Bing, and in the future they will drive new search algorithms based on deep neural networks, artificial intelligence modeled on the structure of the human brain, executing this AI several orders of magnitude faster than an ordinary chip. With it, your computer screen freezes for only 23 milliseconds instead of 4 seconds.

In the third-generation prototype, the chip sits at the edge of each server, plugged directly into the network, while still forming an FPGA pool that any machine can access. That begins to look like something Office 365 could use, and Project Catapult is now ready to go live. Moreover, the Catapult hardware accounts for less than 30% of the cost of all the other components in a server and draws less than 10% of the operating energy, yet doubles processing speed.

Besides these efforts, some companies, such as Nervana and Movidius, imitate the GPU's parallel mode but focus on moving data faster and omit the features needed for graphics. Others, including IBM with its TrueNorth chip, have developed chip designs inspired by other features of the brain, such as neurons and synapses.

Because deep learning and artificial intelligence hold such great promise, all the giants are striving for a technical edge. If any one of these companies, Google for instance, were to replace today's chips with a new chip, it would essentially upend the entire chip industry.

"Whether it is Nvidia, Intel, Google, or Baidu, all are searching for the foundation on which artificial intelligence will be broadly applied in the future," Therese Poletti said. Many share the view of Google vice president Urs Holzle: for the foreseeable future of artificial intelligence, GPUs will not replace CPUs, and TPUs will not replace GPUs; the chip market will simply see ever greater demand and prosperity. (Editor: Xin Ling)
