My notes from the GSA European Executive Forum – part 1: AI, hyenas and cheetahs

 

Moshe Zalcberg

CEO at Veriest

 

The GSA European Executive Forum is a yearly get-together of some of the leading minds in electronics and adjacent industries, in Europe and worldwide. Executives from different companies, big established corporations and small innovative startups, meet for 24 hours of presentations, discussions, panels and analysis, as well as simply good catch-up and chat. The 2018 edition took place in Munich in the first week of June.

In this post, I’d like to outline a few ideas, data points, concepts and trends I heard during the presentations, illustrated by some of the slides displayed. Not everything is really new, nor was it voiced for the first time at GSA EEF, but – as many people asked me “how was it?” – I think it’s worth summarizing some key points and adding my own perspective.

I’ll start with the first session, which focused on Artificial Intelligence (AI) processors.

No doubt, AI processors are a very hot (hyped?) topic these days – they were even featured in this week’s edition of The Economist: “New Street, a research firm, estimates that the market for AI chips could reach $30bn by 2022. That would exceed the $22bn of revenue that Intel is expected to earn this year from selling processors for server computers,” says the report. And, to explain to the layman the difference between general CPUs and AI-specific processors, the article quotes Andrew Feldman, chief executive of Cerebras: “One sort of chip resembles hyenas: they are generalists designed to tackle all kinds of computing problems, much as the hyenas eat all kinds of prey. The other type is like cheetahs: they are specialists which do one thing very well, such as hunting a certain kind of gazelle.” So let’s see what the guys had to say about these gazelles at GSA.

 

Nigel Toon, CEO of Graphcore, a UK-based AI startup, had some of the slides with the most startling graphics. He was very light on details about his own solution, but added to the already considerable hype around “AI everywhere”. As an example, he brought up the (mostly unknown) Chinese company Toutiao, which uses AI to personalize the news feed for each user. Look at their user-base growth, and at the even more amazing minutes-spent-per-day numbers. And you thought Facebook was getting too much screen time, eh?

Although only marginally related to AI, Naveed Sherwani, CEO of SiFive, challenged the audience in his presentation to re-think the silicon design ecosystem. How come Instagram had only 13 employees when it was acquired by Facebook for $1B? Can we think of a hardware company achieving a similar feat? The answer, said Naveed, is that in the software business you can rely on an ecosystem of software stacks and open-source elements, while in HW design, beyond some limited re-use of IPs, every team expensively “re-invents the wheel”.

Most AI deployed today is cloud-based, such as Google Translate, Alexa and many other applications. Taking AI to the edge, or to the very edge, as Loic Lietar, CEO of GreenWaves, calls it, requires different architectures that support different form factors, cost, power envelopes and feature sets. For example, always-on cameras that are constantly vigilant and extract not only full images but insights. Coincidentally, this is the key technology featured in the futuristic (is it really that far ahead?) movie The Circle, which I saw on my flight back home.

 

 

In the open panel that followed these three presentations, two topics drew my interest:

Nigel said that AI in general and Deep Learning in particular raise the bar in their requirements for simulation/verification, for several reasons: (a) the complexity of the system; (b) the fact that these models are mostly a black box where only the inputs and outputs have any meaning; (c) the criticality of many of the intended applications, such as autonomous driving.

Naveed was asked how it helps to optimize and trim ASIC design costs if mask costs are so high anyway and only justifiable for high-volume parts. He answered that this could facilitate the prototype phase (maybe implying that these can be covered by shuttle runs), not necessarily the full production phase. In my view, this goes beyond that: compared to the investment in engineering teams throughout the design cycle – architecture, design, verification, layout, etc. – which often amounts to millions of dollars, mask costs – as expensive an item as they are – are almost negligible. So re-engineering the chip design process could move the bar and make additional devices economically viable.
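To make that comparison concrete, here is a rough back-of-envelope sketch in Python. The team size, per-engineer cost, project duration and mask-set price below are purely my own illustrative assumptions (not numbers quoted at the forum), and the conclusion obviously shifts at advanced nodes where mask sets cost much more:

# Back-of-envelope comparison of ASIC engineering cost vs. mask cost.
# All figures are illustrative assumptions, not numbers quoted at the forum.

team_size = 30                 # assumed engineers across architecture, design,
                               # verification and layout
cost_per_engineer = 200_000    # assumed fully loaded annual cost, USD
project_years = 2              # assumed project duration

engineering_cost = team_size * cost_per_engineer * project_years
mask_set_cost = 1_000_000      # assumed full mask set at a mature node, USD

total = engineering_cost + mask_set_cost
print(f"Engineering: ${engineering_cost / 1e6:.1f}M")
print(f"Mask set:    ${mask_set_cost / 1e6:.1f}M")
print(f"Masks are {mask_set_cost / total:.0%} of the total budget")

With these assumed numbers, the mask set is well under 10% of the budget, so a design process that shrinks the engineering effort moves the needle far more than shaving mask costs would.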

To end on a high note: the last presentation in the AI section was by Dave Aron, VP and Analyst at the research group Gartner. He dared to expose the Ten Things Everyone Is Getting Wrong About AI, although in my humble opinion this is the most interesting – and comforting – point he made:

Although similar trends were experienced in past technological revolutions (the steam engine, electricity, computers, etc.), we’ve been hearing mostly doom-laden predictions that this time, with AI, it will be different and bad. So it’s good news to hear that, if I may put it in my own words:

 

AI will not out-smart people,
but make people smarter!

 

(I’ll come back with a similar review of the other sessions when I have a chance.)