Why We Invested in Analog Inference

Overview

The mission of TDK Ventures is to accelerate technologies and empower entrepreneurs whose ideas will further productivity, sustainability, and social responsibility globally. Such hard-tech problems require hard-tech solutions, and many of the startups we support are tackling fundamental scientific challenges that are not confined to a single application but could inspire an entire new generation of technology — megatrends. One key avenue we hope to usher in is global digital transformation: fusing human and machine interactions to streamline and optimize operations, increasing output, reducing energy demand, and benefiting both human productivity and the global environment.

In partnering with and supporting the incredible team at Analog Inference, together we believe we can help move the needle on computing. Their game-changing inroads in areas like –

  • Design and engineering of mobile, robotic, wearable, and extended reality devices
  • Extending the limits of lightweight artificial intelligence on the internet of things (IoT)
  • More powerful sensor and actuation fusion

– have the potential not only to change the industry, but to deliver extraordinary monetary returns in the process, while strategically strengthening product development within the greater TDK Group. A win-win-win. We conduct rigorous, deep exploration, and such decisions are not made lightly; our backing reflects extremely high confidence in the vision of our Analog Inference colleagues. This article details the comprehensive evaluation of Analog Inference’s business model, technology development, and commercialization potential that led us to conclude that a TDK Ventures/Analog Inference collaboration will deliver valuable synergies and a strong return on investment.

Introduction: Evolution of Computing

The new currency in the exchange of human ideas and information is not money, but computing power. More and more, almost every aspect of our lives is (at least in part) digital, and as a result a vast computing infrastructure is necessary to process all the resulting data and information for our use. As Rhonda Dirvi, Arm’s Automotive & IoT senior director of marketing programs, put it: “As the data becomes more useful, more data will be collected. Collecting the data means taking our analog world and converting it to digital” [1]. It is not enough for computing infrastructure to handle the astonishing amount of data generated; it must also process that data quickly, with minimal (ideally no observable) delay, for any number of users at once — concepts often referred to as responsiveness, latency, and bandwidth.

For decades, progress in computing has kept pace with global demand, spurred on by the incredible trend Moore’s Law describes: that computing power on an integrated circuit doubles every year and a half or so — a rule of thumb that held true for an incredible 50 years. However, all good things must come to an end, and continued miniaturization and efficiency gains have recently seen diminishing returns. In stark contrast, the volume of data to process has only continued to skyrocket. Computers are no longer the only, or even the primary, concern. The development of smartphones, mobile devices, wearables, and internet of things (IoT) applications has created a need for computing power everywhere, at the touch of a button. These devices at the edge of human-machine interaction are particularly demanding from a processing perspective and have required special innovation to meet demand — known as edge computing — one of the primary challenges driving modern computing innovation.

Edge computing is increasingly viewed as a “must have” for advancement of the fourth industrial revolution and mobilization of the IoT. To make it work, industry, governments, healthcare systems, retailers, and other organizations will require deployment of “general purpose infrastructure…to run cloud-like workloads to manage existent and emergent use cases” [2]. Unconfigured hardware that sidesteps the need for virtualization and other resource-consuming processes (“bare metal” machines) simplifies edge compute “with firmware that allows remote autoconfiguration…a combination of performance and ease of delivery. (U)nified control planes will be key to leveraging edge computing hardware” [3]. However, as Chris Bergey, senior vice president and general manager of infrastructure at Arm, explained during TDK Ventures’ Digital Transformation Week 2022, “edge computing has disappointed because the expense and compute limitations has limited its rollout to verticals employing massive bandwidth. There is so much energy, money, intelligence being put into it that those things are going to happen, but (digital) prototype to reality is probably something like 20 years away. The metaverse and virtual worlds drive a different level of computing and have different latency requirements. Those things drive edge computing, but you need to have it under a high percentage of use cases to be able to depend on it, otherwise you must put all that compute into the device itself” [4].

Initially, while Moore’s law still held, the heavy demands of computing — particularly in edge applications — were handled by (conceptually) outsourcing the problem: sending the compute task to a central server with beefier capability, then downloading the computed “solution” to act on. This is what cloud computing is all about and is the basis for the now-widespread concept of the cloud. In many cases this has been a good solution, but as our world has become more and more inundated with edge devices — where fast response times and massive bandwidth are king — cloud computing has begun to strain. With this technical challenge becoming more evident with each passing year, the industry has been hard at work searching for a new paradigm that can save the day: AI edge computing.

Figure 1. The progression of computing node types since the birth of the digital age in the late 1940s. Cloud ML (machine learning) driven by AI nodes is projected to be the next generation and megatrend in computing power.

Analog as an Alternative Pathway to AI Edge Computing

While Moore’s law has begun to dwindle, artificial intelligence (AI) has emerged as a key driver of computing processes. A significant portion of the growth achieved via AI can be attributed to edge applications, where responsiveness and efficiency are most necessary. Digital computing is the most common form of computing today and is the foundation of traditional computers and digital devices. In digital computing, information is represented using discrete symbols, typically binary digits (bits), which can take values of 0 or 1. These bits are manipulated using logical operations (AND, OR, NOT, etc.) and arithmetic operations (addition, subtraction, multiplication, division) to perform computations. Key characteristics of digital computing include: 1) discrete representation, 2) high precision, 3) robustness, and 4) standardization.

In contrast to digital computing, analog computing uses “native” electrical signals and their unique interactions with circuits to combine memory, logic, and computation into integrated tasks performed in massively parallel arrays. The signals mirror the behavior of digital operations, producing unique outputs for specific inputs, but offer several important advantages over digital, including improved speed, lower power demands, and no memory movement: each memory element is a physical, resistor-equivalent device that stays in place. Parallel design enables analog to perform millions of multiply-accumulate operations simultaneously [5]. “The most resource-intensive aspect of AI is data transfer. Transferring data often takes more time and power than actually computing with it. With analog AI, data is stored and processed in the same place (allowing us to) eliminate data movement,” removing the von Neumann bottleneck that limits system throughput due to “the relative speed of data processing versus movement” [6].
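
The in-memory multiply-accumulate idea can be sketched numerically. In the toy model below — illustrative values only, not real device parameters — weights live in the array as conductances, inputs arrive as voltages, Ohm’s law performs each multiplication in place, and Kirchhoff’s current law sums every column’s currents in parallel:

```python
import numpy as np

# Toy sketch of analog in-memory multiply-accumulate (MAC): weights are
# conductances G (siemens), inputs are voltages V. Ohm's law (I = G * V)
# multiplies in place; Kirchhoff's current law sums each column wire's
# currents. All values here are illustrative assumptions.

rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1e-6, size=(4, 3))  # 4x3 conductance array (the "weights")
V = rng.uniform(0.0, 0.5, size=4)        # input voltages, one per row wire

# One "analog step": every column's output current is the dot product of the
# input voltages with that column's conductances, computed in parallel.
I_out = V @ G

# The same result computed digitally, one multiply-add at a time:
I_check = np.array([sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])

assert np.allclose(I_out, I_check)
print(I_out.shape)  # (3,) -- one summed current per column
```

The dot product that a digital processor builds from many sequential multiply-adds falls out of the circuit physics in a single step, which is the source of the parallelism claimed above.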

By leveraging analog computing technologies, as many industry experts hypothesize, it may be possible to mitigate or avoid the inefficiencies found in digital computing methods and fully utilize the speed and efficiency that AI computing offers — particularly in a variety of edge use cases, where the AI inference can be executed onboard edge devices.

TDK Ventures Strategic Outlook & Analog Inference

In summary, the demand for AI processing at the edge of networks is naturally bottlenecked by digital computing architectures. An analog approach enables computing to be performed in-memory, with conversion to and from digital signals only at the interfaces, dropping power needs by more than 90 percent. Just as important, this reduction in energy consumption does not require a tradeoff in performance: functions can be completed at an efficiency of 100 tera-operations per second per watt (TOPS/W). TOPS/W appears to be one of the most accurate and telling measures of low-power edge-application performance.
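
Some back-of-the-envelope unit conversion shows what 100 TOPS/W implies. The 100 TOPS/W figure is from the text; the 1-billion-MAC model size below is an illustrative assumption:

```python
# Unit-conversion arithmetic on the 100 TOPS/W efficiency figure.
# Everything derived below follows from 1 W = 1 J/s.

tops_per_watt = 100.0                 # tera-operations per second per watt
ops_per_joule = tops_per_watt * 1e12  # ops/J = TOPS/W * 1e12
joules_per_op = 1.0 / ops_per_joule   # energy cost of a single operation

print(f"{joules_per_op * 1e15:.0f} fJ per operation")  # 10 fJ per operation

# Energy for a hypothetical 1-billion-MAC inference at this efficiency
# (the model size is an illustrative assumption):
macs_per_inference = 1e9
energy_j = macs_per_inference * joules_per_op
print(f"{energy_j * 1e6:.0f} uJ per inference")  # 10 uJ per inference
```

At tens of microjoules per inference, always-on battery-powered operation becomes plausible, which is the practical meaning of the efficiency claim.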

Figure 2. Analog solves the energy problem of AI.

With this bigger picture of the technical landscape in mind, as we continued our deep explorations into the computing innovation space, we learned how critical AI computing was to the industry as a whole and what potential solutions analog methodologies represented. However, before analog can deliver on its promise, a few key technical limitations must be addressed:

  • Precision — Any analog equipment, simply through its operation, generates electrical and electronic noise that is picked up and included in the signal. This “noise floor” limits the precision with which the signal can be delivered and interpreted. The type of neural network involved amplifies or mitigates the noise floor, making some network types more suitable than others for edge applications.
  • Stochasticity — Neural networks all exhibit degrees of random, non-predictable errors and variations. When computing in the analog domain, however, the architecture can inherently self-cancel these errors. This is evident in biological computing, remarkable machinery that tolerates quite a bit of “noise”. Analog computing takes maximal advantage of this natural error annealing.
  • Variance — Self-generated noise, as well as heat, vibration, and other factors, can render the inferencing signal less than deterministic and untrustworthy. Designing analog circuits to include tracking bit lines minimizes variation errors, though it introduces additional space overhead [7].
  • Analog/Digital Integration — The interface between analog and digital signals (i.e., converting between the two formats) necessarily introduces tradeoffs. Among the most troublesome is the forced choice between conversion speed and signal accuracy. Both must be maintained in some critical edge-computing use cases, such as self-driving vehicles, precision manufacturing, and life-safety applications.
  • Transparency — Analog systems cannot easily be investigated or examined to determine the cause of unexpected or inaccurate results. This lack of transparency into the information flow makes it difficult to determine how the system arrives at a result, leaving no means of verifying its accuracy [8].
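
The precision and variance limitations above can be made concrete with a toy simulation. Here additive noise on an analog dot product caps the effective bit precision of the result; the 5 percent noise level and array sizes are illustrative assumptions, not data from any real hardware:

```python
import numpy as np

# Toy simulation of a "noise floor": additive Gaussian noise on an analog
# dot product limits the effective precision of the output. The 5% noise
# level and sizes are illustrative assumptions only.

rng = np.random.default_rng(42)

w = rng.standard_normal(256)            # weights of one neuron
x = rng.standard_normal((10_000, 256))  # a batch of input activations

ideal = x @ w                           # noiseless (digital-exact) outputs

sigma = 0.05 * ideal.std()              # noise floor at 5% of signal std
noisy = ideal + rng.normal(0.0, sigma, size=ideal.shape)

# Signal-to-noise ratio and the standard SNR-to-effective-bits conversion
snr_db = 10 * np.log10(ideal.var() / sigma**2)
effective_bits = (snr_db - 1.76) / 6.02

print(f"SNR {snr_db:.1f} dB -> ~{effective_bits:.1f} effective bits")
```

A 5 percent noise floor leaves only about four effective bits, which illustrates why low-precision-tolerant neural networks are the natural early fit for analog hardware and why circuit techniques that suppress noise and variance matter so much.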

In considering potential partnerships in the computing industry, and more specifically compute technologies that we could successfully accelerate to market while scaling their positive societal benefits, we evaluated solutions based on how they addressed each of these limitations. It was in this pursuit that we had the pleasure of getting to know the Analog Inference team and the incredible solutions they offer.

Analog Inference leads the pack in harnessing analog computing for AI at the edge. Its power-efficiency advantage in in-memory computing will expedite and streamline many edge AI applications. Its in-memory analog-computing capabilities meet all future requirements: energy efficiency, low latency, high performance, and low cost. The chips eliminate the need for power-sapping SRAM and DRAM because weights are stored in non-volatile, multi-level memory. These memory cells perform the computations themselves, freeing up space because separate multipliers and adders are not necessary.

Figure 3. Analog solves the physics of AI.

The company has completed digital and analog designs for its test chips and documented the tapeout of its AI system-on-chip (SoC). Demonstration and release of this AI4 SoC processor is scheduled soon. On the software side, Analog Inference has already released alpha versions of its estimator, optimizer, and compiler, with the production stack slated for release next year. In anticipation of these software and hardware rollouts, the firm has doubled its number of full-time employees over the past two years and has engaged with national laboratories, big-data corporations, and server, switch, and sensor manufacturers.

In strategic alignment, TDK Corporation is particularly interested in Analog Inference’s work in further miniaturizing chips and customizing them for far-edge applications, such as computer vision, feature extraction, preprocessing, sensor fusion, and MEMS microphonics. Analog Inference’s best-in-class power-to-performance rating, high-resolution, high-definition AI capabilities, extremely low latency, and cloud-grade AI models position it to dominate this segment. These differentiators provide the company with the tools it needs to establish its products in beachhead markets, including edge servers, smart retail, security, smart cities, factory automation, product inspection, and other expanding industries. In fact, Analog Inference already has built foundational ecosystems with original equipment and original design manufacturers, systems integrators, and independent AI analytics vendors.

Analog Inference can penetrate a number of markets with an initial focus on smart retail, smart city, and machine monitoring and sensing to leverage the technology’s superior performance in object recognition, heat mapping, behavior prediction, and more. These industries present stable, high-volume markets that will enable the company to scale quickly with mitigated risk. The initially targeted industries also have infrastructure in place that will benefit from Analog Inference’s strong power performance and cost mitigation.

While the efficiency and performance increases are undeniable, it shouldn’t go unnoticed that this also plays a huge role in sustainability. Thousands, even millions, of nodes add up, and the power draw is not insignificant — for the planet or for humanity’s collective checkbook. Analog Inference’s solution helps both.

Figure 4. Analog Inference applications

Why We Invested in Analog Inference

TDK Ventures invests in companies it determines occupy “king of the hill” status in their technology fields. The thinking is that when these technologies mature and use cases develop, the hill will expand into a mountain with our portfolio companies still at the top. Analog Inference’s position atop the edge AI computing space starts with its coprocessing chips’ performance and cost advantages. The products’ large capacity enables neural nets at the edge powerful and secure enough to drive use-case-specific activities. The company’s mature process node outperforms digital GPUs on speed and accuracy.

A major mobile-device OEM has confirmed that Analog Inference’s AI chipsets run at an order of magnitude higher efficiency (100 TOPS/W) than those of the other 200-plus companies it evaluated. OEMs recognize that these speed, performance, and energy-consumption advances will help them realize greater margins by deploying neural fabric that scales from low power. The applications in video surveillance and monitoring are obvious, but as AI on the edge becomes indispensable, Analog Inference is poised to integrate into dozens of other applications. And while many startups in the computing industry are doing “near-memory” computing, which brings processing closer to memory, Analog Inference is the only company enabling true in-memory computing, placing processing in the memory itself to maximize performance.

The company’s leadership team also rises to king-of-the-hill status. The C-suite boasts deep expertise across microchips, AI, and edge applications. Founder Vishal Sarin is a veteran creator of in-memory compute and AI solutions. With more than 100 patents to his name, Sarin led analog memory development at Micron, ISD, and other top companies. Analog Inference’s executive team members have held leadership positions at Wave, Blaize, Mentor Graphics, Cadence, Ikanos Communications, and other major hardware and software players.

We are convinced this team and the technology it is perfecting will quickly overcome the myriad issues involved in harnessing the power of analog for AI on the edge. Development and commercialization efforts are not without risk, but Analog Inference is tackling these challenges head-on:

  • Neural Fabric Limitations — With finite input layers, a network’s pattern-recognition ability is limited to fixed outputs. Analog Inference is working to redesign neural networks and customize them to specific AI models within market segments.
  • Reliability Inconsistencies — Analog Inference is conducting product qualification using silicon-extracted noise-aware simulation at full-chip model level with a provision for reliability-based layer spin.
  • Production Chip Tapeout — By outsourcing some routine design tasks to a trusted partner, Analog Inference is expediting design and back-end integration. The company will also reduce re-spin time by staging silicon at various layers during simulation in the software stack.

When the edge AI hill grows to mountainous proportions — a $156-billion market valuation by 2030 by one estimate, thanks to a remarkable and consistent 35 percent CAGR — cameras, sensors, drones, and other single-vision IoT devices will be the driving force [9]. Analog technologies will advance beyond today’s small, precision-tolerant tasks to datacenter-grade AI at the edge.

Other industry observers expect an even steeper adoption of AI chipsets over the next few years. “The edge computing segment is expected to contribute significantly to the AI chipsets market through 2026, rapidly growing at over 40 percent,” predicts Global Market Insights. “The market growth can be credited to its feature to run without any cloud connection through AI chipsets processing on edge. The growing adoption of robotics for general purpose, military and defense, industrial applications, and households worldwide will accelerate opportunities for the AI chipsets market growth” [10]. Technological evolution, with Analog Inference at the forefront, will lead to the mobilization of edge AI into every aspect of communications, manufacturing, graphics, entertainment, and business through fully battery-powered speech recognition and instantaneous response.

As a final note, a strong vote of confidence in the technology comes in the form of the investors who have already backed the Analog Inference team — one in particular being Khosla Ventures. Vinod Khosla, co-founder of Sun Microsystems, is himself a pioneer in modern computing technology and a strong proponent of paradigm-shifting startups.

Figure 5. Analog Inference team with TDK Ventures Investment Director Henry Huang

Moving Forward

Analog Inference is enabling the machine learning and artificial intelligence — the building blocks of edge computing and IoT proliferation — that will transform industries. The company is pioneering low-power computing using analog technology that can deliver the performance edge devices require and end users demand. TDK’s “expertise in manufacturing and strategic implementation” will help the company realize its vision of “commercializing technologies that will enable our customers to harness the full power of AI,” Analog Inference President Sarin said. We look forward to continuing to assist the company in extending the reach of its AI compute across broad edge applications.

References

  1. Bailey, B. (2019, March 14). Using Analog for AI. Semiconductor Engineering. https://semiengineering.com/using-analog-for-ai/
  2. Kang, Y. W., Wu, C. F., Chang, Y. H., Kuo, T. W., & Ho, S. Y. (2020). On minimizing analog variation errors to resolve the scalability issue of reram-based crossbar accelerators. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39(11), 3856–3867. https://ieeexplore.ieee.org/document/9211412
  3. Linux Foundation (2021). State of the edge: a market and ecosystem report for edge computing. https://www.linuxfoundation.org/press/press-release/lf-edges-state-of-the-edge-2021-report-predicts-global-edge-computing-infrastructure-market-to-be-worth-up-to-800-billion-by-2028
  4. TDK Ventures (2022, July 22). Edge devices will leverage autonomy and connectivity; TDK Ventures’ DX Week panelists eagerly anticipate smart dust applications. Medium. https://medium.com/tdk-ventures/edge-devices-will-leverage-autonomy-and-connectivity-tdk-ventures-dx-week-panelists-eagerly-e0599035a70d
  5. Dormehl, L. (2022, April 10). Analog A.I.? It sounds crazy, but it might be the future. Digital Trends. https://www.digitaltrends.com/computing/mythic-ai-analog-artificial-intelligence/
  6. Yastremsky, D. (2022, March 29). The promise of analog AI. Towards Data Science. https://towardsdatascience.com/the-promise-of-analog-ai-e3a8c0daf146
  7. Kang, Y. W., Wu, C. F., Chang, Y. H., Kuo, T. W., & Ho, S. Y. (2020). On minimizing analog variation errors to resolve the scalability issue of reram-based crossbar accelerators. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39(11), 3856–3867. https://ieeexplore.ieee.org/document/9211412
  8. Brilliant (2021, Feb. 13). The AI hardware problem. https://www.youtube.com/watch?v=owe9cPEdm7k
  9. Grandview Research (2022, June). Edge computing market size worth $155.90 billion by 2030. https://www.grandviewresearch.com/press-release/global-edge-computing-market
  10. Wadhwani, P. and Saha, P. (2020, March 30). AI chipsets market worth over $70bn by 2026. Global Market Insights. https://www.gminsights.com/pressrelease/ai-chipsets-market
