Artificial intelligence (AI) is transforming industries and changing how data is managed, interpreted and, most importantly, used to solve real problems for people and businesses faster than ever.
Today’s Microsoft Bing* Intelligent Search news demonstrates how Intel® FPGA (field programmable gate array) technology is powering some of the world’s most advanced AI platforms. Advances to the Bing search engine with real-time AI will help people do more and learn more by going beyond standard search results. Bing Intelligent Search will provide answers instead of web pages, and enable a system that understands words, the meaning behind them, and the context and intent of a search.
In today’s data-centric world, users are asking more of their search engines than ever. Advanced Intel technology gives Bing the power of real-time AI to deliver more Intelligent Search to users every day. This requires incredibly compute-intensive workloads that are accelerated by Microsoft’s AI platform for deep learning, Project Brainwave, running on Intel® Arria® and Intel® Stratix® FPGAs.
How are FPGAs delivering Intelligent Search? Intel FPGAs power the technology that allows Bing to quickly process millions of articles across the web to get you contextual answers. Using machine learning and reading comprehension, Bing will now rapidly provide intelligent answers that help users find what they’re looking for faster, instead of a list of links to check manually. You can try it yourself with many types of questions, such as “Is coffee good for you?” or “What are the mental health benefits of yoga?” which generate multiple perspectives. Or ask “How many calories are in a hot dog?” and Bing will share the calorie count along with the minutes of running required to burn those calories off.
In applications like Bing Intelligent Search, Intel FPGAs make real-time AI possible by providing fully customizable hardware acceleration that complements Intel® Xeon® CPUs on the computationally heavy parts of deep neural networks. At the same time, FPGAs retain the flexibility to evolve with rapidly changing AI models and can be tuned for high throughput and maximum performance to deliver real-time AI.
This is an exciting example of how Intel FPGAs enable developers to design accelerator functions directly in the processing hardware to reduce latency, increase throughput and improve power efficiency. The FPGA’s efficient and flexible architecture accelerates the performance of AI workloads, including machine learning and deep learning, along with a wide range of other workloads, such as networking, storage, data analytics and high-performance computing.
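To make the CPU-plus-accelerator pattern described above concrete, here is a minimal, purely illustrative sketch (not Microsoft’s or Intel’s actual code). The `FPGAStub` class, `run_layer` dispatcher, and `offload_threshold` parameter are all hypothetical names invented for this example: the compute-heavy kernel of a neural-network layer (a matrix-vector product) is routed to an accelerator stub standing in for an FPGA, while small workloads stay on the CPU.

```python
def matvec(weights, x):
    """Dense-layer core: y = W · x (the compute-heavy kernel)."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in weights]

class FPGAStub:
    """Hypothetical accelerator interface. A real FPGA deployment would
    invoke a hardware-compiled kernel here; this stub just models the
    same math in software."""
    def offload(self, weights, x):
        return matvec(weights, x)

def run_layer(weights, x, accelerator=None, offload_threshold=64):
    """Dispatch large matrices to the accelerator, small ones to the CPU.

    The threshold is an assumed tuning knob: offloading only pays off
    when the computation is heavy enough to amortize transfer overhead.
    """
    if accelerator is not None and len(weights) >= offload_threshold:
        return accelerator.offload(weights, x)
    return matvec(weights, x)

# A small layer stays on the CPU; a large one is offloaded.
small = run_layer([[1, 2], [3, 4]], [1, 1])            # CPU path → [3, 7]
big_w = [[1] * 128 for _ in range(128)]
big = run_layer(big_w, [1] * 128, FPGAStub())          # accelerator path
```

The design choice sketched here mirrors the article’s point: the CPU handles general logic and small operations, while the heavy, regular computation is handed to customizable hardware, and the dispatch policy can be retuned as models change.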
Microsoft and Intel have a decades-long collaboration to maximize the performance and capabilities of data center infrastructure across a wide range of use cases. Intelligent Search is another example of the way our two companies work together, selecting the right tools for the job and rising to the challenges of today’s cloud data centers. Today, that collaboration has yielded another sophisticated and exciting AI deployment. Unlike the debate over whether coffee is good for you, one thing is clear: the power of Intel FPGAs to rapidly change how data is processed and make searches smarter will have long-term benefits for everything and everyone.
Reynette K. Au is vice president of marketing in the Programmable Solutions Group at Intel Corporation.