
What’s on the blog?

10.4.2019 blog

Strive big and you may achieve it

Credits: International Space Station

Milos Mirosavljevic
Verification Team Leader at Veriest Venture Ltd

“So, what’s your next wild goal?”, the CEO of the company I work at (Veriest) asked me shortly after I attended the DVCon conference in Munich last year.

Up to that point, attending, let alone presenting at, technical conferences had seemed like a distant goal for me. Gradually, however, I managed to work towards it, and I was sent to present at DVCon. Naturally, this only set the bar for my next goal even higher.

“I’ve always wanted to visit Silicon Valley. After all, isn’t it every engineer’s dream? I would like to present at CDNLive in Silicon Valley.” And that is how my tour began.

Soon after submitting the abstract for evaluation, I was notified that the paper I co-authored had been accepted to the conference. I was going to Silicon Valley!

After quite a lengthy trip along the West Coast with my wife prior to the conference itself, I arrived at the destination.

As at other conferences, I presume, a few buzzwords kept circling around this one as well: 5G, Internet of Things, machine learning, automotive and AI. Some say the only one missing was blockchain.

A local joke going around the conference said that in order to achieve the required latency and data rates for 5G, all you have to do is change the speed of light.

During the first day of the conference I attended a lecture held by Cadence on verification throughput optimization for advanced designs. The speed at which verification is done has become the key challenge of today’s and next-generation System-on-Chip verification. Users need to employ smart verification practices to find as many bugs as early as possible, per dollar, per day. The session highlighted the use of the new tools available today and the combination of different levels of abstraction for smart bug hunting.

This was followed by the keynote session held by Cadence CEO Lip-Bu Tan. The speech focused on technical predictions for the future and on Cadence’s intention to further enhance design and verification processes. Interestingly, aside from verification, software development costs are rapidly increasing in the VLSI world (shown as the green line in one of the keynote slides).

As the time for my presentation drew near, I was starting to get excited and slightly nervous, but not as much as prior to my first appearance last year in Munich.

The session went great, and I was happy to achieve the goal I had set some time ago. The general conclusion, which is well known, is that in the USA SystemVerilog is much more predominant in usage than Specman.

The next session was related to the ISequenceSpec™ (ISS) tool, which can be used with Perspec, and which utilizes the new Portable Stimulus Standard (PSS) to automatically generate non-redundant stimulus and coverage for the verification of individual block-level scenarios.

Perspec is quite a trending topic nowadays; it was mentioned in many presentations at this conference, as well as back at DVCon.

After the first day of the conference ended, I decided to explore the Intel museum. It’s quite an amazing place, despite everything being in only one large room.

On the second day of the conference, for the first session I picked something different from the usual, more academic in nature: a session on speech recognition from scalp brain signals. The speaker presented an overview of man-machine symbiosis and an example of how the combination is better than man or machine alone. For instance, when radiologists were presented with various images, the disease detection error rate was around 4.5%; a machine performed the same task with a 3.5% error rate; combined, however, the detection error rate was only 2.5%. This idea was expanded to show how speech can be detected using EEG (electroencephalography) with no audible speech as input. Quite an interesting topic, and a nice change from the other sessions.

Back to verification-related stuff. The next session I picked was called Transaction-level Stimulus Optimization in Functional Verification using Machine Learning Predictors. A big one, indeed. In this session the audience could hear (yet again) how verification is becoming the bottleneck of the overall chip design cycle. This problem can be alleviated by machine-learning-guided stimulus generation that attains verification coverage with a considerable reduction in the number of simulation cycles. Feature extraction is performed on transaction attributes, which are then fed into a machine learning model in order to predict the behavior of incoming transactions. Experimental results show a reduction of about 70% in coverage closure time, but this is all still heavily in the theoretical domain. Still, that’s how breakthroughs in the industry are made.
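
To illustrate the flow, here is a toy sketch (not the presenters’ actual method; the transaction attributes and the coverage condition are invented): train a predictor on the outcomes of an initial simulated batch, then spend simulation cycles only on candidates the model scores as promising.

```python
# A toy sketch of ML-guided stimulus selection. The attributes and the
# "coverage" function below are invented stand-ins for a real testbench.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def random_transactions(n):
    """Hypothetical transactions: (address, length, burst_type)."""
    return np.column_stack([
        rng.integers(0, 2**16, n),  # address
        rng.integers(1, 256, n),    # burst length
        rng.integers(0, 4, n),      # burst type
    ])

def hits_new_coverage(txn):
    """Stand-in for a simulator run: did the transaction hit a new bin?"""
    addr, length, burst = txn
    return (addr % 4096 < 64) or (length > 240 and burst == 3)

# Phase 1: simulate a small random batch and record the outcomes.
train = random_transactions(2000)
labels = np.array([hits_new_coverage(t) for t in train])
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(train, labels)

# Phase 2: generate many candidates, but only simulate the ones the
# model predicts are likely to advance coverage, saving cycles.
candidates = random_transactions(20000)
scores = model.predict_proba(candidates)[:, 1]
selected = candidates[scores > 0.5]
print(f"simulating {len(selected)} of {len(candidates)} candidates")
```

The real challenge, of course, is choosing features and models that generalize to actual designs, which is exactly what the presented research explored.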

Finally, I decided to visit something automotive related – Cadence’s session about the (in)famous ISO26262 standard. This standard is a Functional Safety Standard related to automotive industry. Increasing complexity throughout the automotive industry is resulting in increased efforts to provide safety-compliant systems. For example, modern automobiles use by-wire systems such as throttle-by-wire. This is when the driver pushes on the accelerator and a sensor in the pedal sends a signal to an electronic control unit. This control unit analyzes several factors such as engine speed, vehicle speed, and pedal position. It then relays a command to the throttle body. It is a challenge of the automotive industry to test and validate systems like throttle-by-wire. The goal of ISO 26262 is to provide a unifying safety standard for all automotive systems.
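
To get a feel for why such logic needs a safety standard, here is an entirely hypothetical sketch of by-wire decision logic; real ECU software, and the ISO 26262 processes around it, are of course far more involved.

```python
# A hypothetical throttle-by-wire sketch: decision logic now sits
# between the driver and the actuator, and it must fail safely.
def throttle_command(pedal_pos: float, engine_rpm: float,
                     vehicle_kmh: float) -> float:
    """Map pedal position (0..1) to a throttle-body opening (0..1)."""
    if not 0.0 <= pedal_pos <= 1.0:
        return 0.0                 # implausible sensor value: fail safe
    cmd = pedal_pos ** 1.5         # progressive pedal feel
    if engine_rpm > 6000:          # rev limiter
        cmd = min(cmd, 0.2)
    if vehicle_kmh < 5:            # gentle launch from standstill
        cmd = min(cmd, 0.6)
    return cmd

# Example: half-pressed pedal at cruising speed.
print(throttle_command(pedal_pos=0.5, engine_rpm=2500, vehicle_kmh=80))
```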

The main issue is that ISO 26262 was written by the automotive industry for the automotive industry, so chip manufacturers didn’t really understand what exactly should or should not be done. Edition 2 should address this; the session was about the implications of Edition 2, which was recently released.

I would like to emphasize that all the presentations will be available for download from the CDNLive website, so if you are interested in the topics I mentioned, or some others, feel free to explore.

As the conference was coming to an end, I decided to fulfill my other desire, which was a sightseeing tour around Silicon Valley.

Stanford Campus

This venture reminded me of one very peculiar experience from my childhood when it comes to setting goals. It happened around 20 years ago, during primary school. Each student was scheduled to have a conversation with the school psychologist, and among many other questions, he asked me what I would like to do when I grow up. I said that I would like to go to space. Albeit written in Cyrillic, the highlighted part in the picture is this exact quote, published in a local magazine two decades ago.

Who knows: with people like Elon Musk around (yes, I’m a fan), maybe ordinary people, myself included, will actually be able to go to space in the not-so-distant future, as part of a regular tourist attraction. For less than a million dollars, of course.

Needless to say, I am grateful to Veriest, the company I have been with for seven years now, for giving me this opportunity. And while speaking at a conference is not a dream per se, it is still a significant career goal, and it is important to never stop chasing either dreams or goals, no matter how big they are.

08.10.2018 blog

It’s much more than mere formality…

Published on October 8, 2018

Moshe Zalcberg

CEO at Veriest

American humorist and author Erma Bombeck is quoted as saying:

“When your mother asks, ‘Do you want a piece of advice?’ it is a mere formality. It doesn’t matter if you answer yes or no. You’re going to get it anyway.”

As a matter of fact, writing a blog, I feel pretty much the same way: I will give my piece of advice whether you, the reader, want it or not. And once we’re done with the formalities, I’d like to speak exactly about formalities, or rather, about Formal Verification.

And my advice is: you should consider Formal Verification as part of your next ASIC project, alongside Functional Verification.

A short background for those not familiar with the concept: the most common methodology for verification is functional (dynamic) verification, where we run a series of scenarios through the design and check whether the logic behaves correctly in each case. The good thing is that this imitates the real-life modus operandi of the chip: if it functions flawlessly under such circumstances, that is a good predictor of its real-life performance.

The problem is that it’s often hard to really encompass all the different corner cases the device may operate in: because it’s challenging to scope them, because it’s hard to document the different modes, because it takes time to model the verification environment, or because the time it would take to run the different test cases is prohibitive.

Formal Verification addresses the problem from a completely different angle. In this technique, the engineer writes certain assertions that should always (or never) hold true, irrespective of the specific case. Formal Verification tools then go ahead and try to prove mathematically that this is the case.

As a basic example, consider a busy junction with traffic lights. To check the correct functionality of the signaling system, you can simulate the pattern by which cars arrive at the junction, at different times of the day, on different days of the week, under different weather conditions, etc.

In parallel, you may want to validate that at any given time (hour/day/weather) there are no more than X open lanes, and that open lanes never cross one another. And that at no time are all lanes red, as that would create a traffic blockage.
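
To make this concrete, here is a minimal sketch of a formal check in Python, using the open-source Z3 solver (pip install z3-solver) rather than the commercial RTL tools used in practice; the two-lane controller and the signal names are invented for illustration. We state each property and ask the solver to search exhaustively for a violating state; finding none amounts to a proof.

```python
# A minimal formal-check sketch with the Z3 solver. The "controller"
# is a made-up constraint standing in for a real design's state space.
from z3 import And, Bools, Not, Or, Solver, sat

# One boolean per crossing lane: True means that lane's light is green.
ns_green, ew_green = Bools("ns_green ew_green")

# Hypothetical controller encoded as a constraint: it only ever drives
# states where exactly one of the two crossing directions is green.
controller = Or(And(ns_green, Not(ew_green)),
                And(Not(ns_green), ew_green))

s = Solver()
s.add(controller)

# Property 1: crossing lanes must never be green at the same time.
# We ask for a reachable state that VIOLATES the property; if the
# query is unsatisfiable, no such state exists - the property holds.
s.push()
s.add(And(ns_green, ew_green))
print("both green reachable?", s.check() == sat)  # False: proven safe
s.pop()

# Property 2: the junction must never be fully blocked (all lanes red).
s.push()
s.add(And(Not(ns_green), Not(ew_green)))
print("all red reachable?", s.check() == sat)     # False: no blockage
s.pop()
```

Note the contrast with simulation: instead of sampling traffic patterns, the solver considers every state the controller can reach.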

As you can see, these two methods – the dynamic and the formal/static – are complementary. One models the natural behavior of the system, and the other certifies its holistic integrity.

The fact is that at Veriest, we’ve been increasingly leveraging Formal Verification alongside Functional Verification, with great results.

From our experience, there are two key benefits to Formal Verification.

1) “The earlier, the better”: It’s well known that finding a bug earlier in the design cycle is all goodness: it’s simpler and cheaper to fix bugs before the circuit gets too complex, before the bug gets masked by layers of other functionality, and before the fix can impact other areas of the design. The problem is that in the early days, the functional verification environment may not be ready yet, for the reasons mentioned above.

By comparison, a relatively simple Formal Verification environment can find simple but meaningful mistakes early on. This approach is consistent with the “shift-left” methodology often used in Agile software development, where an effort is made to find and prevent as many issues as possible early in the process, before they become “hairy”.

In fact, a paper presented by ARM at a Formal Verification conference documented the following pattern: the same number of bugs was found, but much earlier in the process.

You may notice that the graphs at the bottom of the slide show the usual pattern of bug finding without Formal (graph on the left) and with Formal (right).

2) Efficiency: Orthogonal to the “shift-left” effect, properly modelling the functional behavior of the circuit in the different scenarios often requires deep understanding of the system and a sizable investment in people, time and simulation cycles to achieve good coverage.

For certain types of circuits (but definitely not all!), Formal Verification can take a short-cut and reach comparable coverage with smaller teams. In some projects, a Formal Verification team representing under 10% of the total team is responsible for unearthing a much higher proportion of the total bugs in the design (and earlier…).

Elchanan Rappaport, Veriest’s Formal Verification Tech Lead, often compares such techniques to checking whether a car tire is punctured. You could start looking for the hole, but that is not very efficient; you’d rather sink it in a water basin and just watch for bubbles.

Remember, Formal Verification can’t solve all the verification problems and has its own limitations. So our advice is to use ALL the tools at your disposal – Functional AND Formal Verification – but to use the right tool for the right job!

This week, on October 10-11, Veriest will be attending the Jasper User Group conference in San Jose, CA, one of the major conferences in this field – where Elchanan will be presenting a paper on “A Methodology for Confirming the Safety of Reductions”. Formal experts from companies such as ARM, STMicro, Marvell, HPE, Samsung, Broadcom, Cadence and others will be presenting and discussing their experiences using Formal techniques with other users.

30.5.2018 blog

Semi market not losing focus despite age, says Wally. What about EDA?

Big fish and small fish in the ocean...

Published on May 30, 2018

Moshe Zalcberg

It’s always interesting to hear what Wally Rhines – CEO of Mentor Graphics (now a Siemens company) and a semiconductor/EDA veteran executive – has to say. Last week, at the Mentor Forum in Israel, Wally presented a well-articulated speech trying to demystify what seems like major consolidation in the semiconductor field, as if the industry were maturing and “dying”.

In this post, I’d like to briefly describe my understanding, as well as my misunderstandings, of Wally’s views.

Let’s start by summarizing his take on the semiconductor market:

1) Wally showed that the market share of the 50 largest (or 10, 5, or single largest) semi companies has either decreased or stayed flat over many years.

2) Most of the larger companies that expanded (such as Samsung and TSMC) did so through organic growth rather than through active M&A.

3) The extra market share is increasingly captured by start-up companies that are benefiting from unprecedented funding rounds.

4) Here we get to Wally’s key message: the companies benefiting the most from the active M&A trend are those that are not just becoming bigger, but also more specialized.

For example, compare the following two types of companies:

a) Today’s Broadcom, which also incorporates Avago and LSI Logic, is an example of specialization in data center/networking and RF wireless. Note its earnings moving from 10% a decade ago to ~40% today!

Other examples Wally brought in this category of specialization are Texas Instruments (Wally’s alma mater before joining Mentor), which specialized in analog chips, and NXP, which focuses on automotive and security. But those two companies divested divisions almost as much as, or more than, they acquired new ones.

b) Intel, on the other hand, has been buying all over the application spectrum and is moving in the opposite direction: from a razor-sharp focus on CPU chips to a very diversified portfolio. To Wally’s point, note its profitability’s ups and downs.

Wally’s bottom line: the semiconductor industry is alive and kicking! It’s not more consolidated than it was in the past (maybe even less!), it’s just growing & specializing and there is plenty of room (and $’s) for start-ups!

What about the EDA industry? Interestingly enough, it seems here we’re moving in the opposite direction.

a) Although I haven’t seen historical figures, the EDA market doesn’t seem to be undergoing any de-consolidation. The larger players – Synopsys, Cadence, Mentor (now part of Siemens) and Ansys – continue to capture the lion’s share of the market (68% according to this report), with a very healthy appetite for acquisitions.

b) Nor do I see any signs of specialization in EDA: Mentor itself is now part of Siemens’ large engineering software group, encompassing solutions for CAM, PLM and manufacturability besides traditional electronic design. Synopsys long ago diversified into software and security solutions, beyond its classic EDA tools and IP. Cadence is still primarily playing the same (specialized?) game, with some minor stretches into system design and verification.

True, the EDA market is a tiny ~$10B pond compared to the ~$400B semiconductor ocean – so the dynamics may be completely different.

Be that as it may, I believe that at least one aspect of the semi market will be closely followed by the EDA industry. Increasingly, major system and even software companies are designing their own customized chips – including Facebook, Google, Microsoft, Amazon and others. Therefore, the EDA expansion into system-level functions could simply be a mirror reaction to who their “new” customers are and what they do.

12.6.2018 blog

My notes from the GSA European Executive forum – part 1: AI, hyenas and cheetahs

(Picture credit: The Economist. All other slides credited to the corresponding speakers.)

Published on June 12, 2018

Moshe Zalcberg

The GSA European Executive Forum is a yearly get-together of some of the leading minds in electronics and adjacent industries, in Europe and worldwide. Executives from different companies, big established corporations and small innovative startups alike, meet for 24 hours of presentations, discussions, panels and analysis, as well as simply good catching up and chat. The 2018 edition took place in Munich in the first week of June.

In this post, I’d like to outline a few ideas, data points, concepts and trends I heard during the presentations, illustrated by some of the slides displayed. While not everything is really new or was voiced for the first time at GSA EEF, still – as many people have asked me “how was it?” – I think it’s worth summarizing some key points and adding my own perspective.

Here, I’ll cover the first session, which focused on Artificial Intelligence (AI) processors.

No doubt, AI processors are a very hot (hyped?) topic these days – the subject was even featured in this week’s edition of The Economist: “New Street, a research firm, estimates that the market for AI chips could reach $30bn by 2022. That would exceed the $22bn of revenue that Intel is expected to earn this year from selling processors for server computers,” says the report. And, to explain to the layman the difference between general CPUs and AI-specific processors, the article quotes Andrew Feldman, chief executive of Cerebras: “One sort of chip resembles hyenas: they are generalists designed to tackle all kinds of computing problems, much as the hyenas eat all kinds of prey. The other type is like cheetahs: they are specialists which do one thing very well, such as hunting a certain kind of gazelle.” So let’s see what the speakers had to say about these gazelles at GSA.

Nigel Toon, CEO of Graphcore, a UK-based AI startup, had some of the slides with the most startling graphics. He was very light on details about his own solution, but added to the already high enough hype of “AI everywhere”. As an example, he brought up the (mostly unknown) Chinese company Toutiao, which uses AI to personalize the news feed for each user. Look at their user-base growth, and the even more amazing minutes-spent-per-day numbers. And you thought Facebook was getting too much screen time, eh?

Although only marginally related to AI, the presentation by Naveed Sherwani, CEO of SiFive, challenged the audience to rethink the silicon design ecosystem. How come Instagram had only 13 employees when it was acquired by Facebook for $1B? Can we think of a hardware company achieving a similar feat? The answer, said Naveed, is that in the software business you can rely on an ecosystem of software stacks and open-source elements, while in HW design, besides some limited reuse of IPs, every team expensively “re-invents the wheel”.

Most AI deployed today is cloud-based: Google Translate, Alexa and many other applications. Taking AI to the edge, or to the “very edge” as Loic Lietar, CEO of GreenWaves, calls it, requires different architectures that support different form factors, costs, power envelopes and feature sets. For example, always-on cameras that can be constantly vigilant and extract not only full images but insights. Coincidentally, this is the key technology featured in the futuristic (is it really that far ahead?) movie The Circle, which I saw on my flight back home.

In the open panel that followed these 3 presentations, two topics drew my interest:

Nigel said that AI in general, and Deep Learning in particular, raise the bar in their requirements for simulation/verification, for several reasons: (a) the complexity of the systems; (b) the fact that they are mostly a black box, where only the inputs and outputs have any meaning; and (c) the criticality of many of the intended applications, such as autonomous driving.

Naveed was asked how it helps to optimize and trim ASIC design costs if mask costs are so high anyway, and only justifiable for high-volume parts. He answered that this could facilitate the prototype phase (maybe implying that these can be covered by shuttle runs), not necessarily the full production phase. In my view, this goes beyond that: compared to the investments in engineering teams throughout the design cycle – architecture, design, verification, layout, etc. – which often amount to millions of dollars, mask costs, expensive as they are, are almost negligible. So re-engineering the chip design process could move the bar to include additional devices.

To end on a high note: the last presentation in the AI section was by Dave Aron, VP and Analyst at the research group Gartner. He dared to expose the Ten Things Everyone Is Getting Wrong About AI; in my humble opinion, this is the most interesting – and comforting – fact he brought:

Although similar trends were experienced in past technological revolutions (steam engine, electricity, computer, etc.), we’ve been hearing mostly doom predictions that this time, with AI, it will be different and bad. So it’s good news to hear that, if I may put it in my own words:

AI will not out-smart people,
but make people smarter!

(I’ll come back with a similar review of the other sessions, when I have a chance).

01.11.2018 blog

I may be a twin, but I am one of a kind

Published on November 1, 2018

Moshe Zalcberg
CEO at Veriest

This quote, attributed to the American footballer Jerry Smith, reflects a universal truth: we all like to see ourselves as special and unique individuals.

But there are cases where we wish we had a twin:

Imagine a patient facing a serious health condition. He is offered a few different treatments, each one with its pros and cons, and with different success probabilities. Wouldn’t it be great if this patient had a way to “run” a simulation of the different procedures on a “copy” of himself – a copy that would have his exact physical parameters and personal history? After running the several alternatives, the real patient could choose the one with the best result. Awesome, isn’t it?

As Joan Rivers, the American comedian and television host, once said:

I wish I had a twin, so I could know what I look like without plastic surgery.

Science fiction?

Different industries are increasingly leveraging “digital twins”: a computerized model of the real “thing”, used to design, optimize, test and simulate “real life” in a safe and harmless environment.

This video by Philips describes exactly the above scenario, using a digital twin for healthcare benefit:


True, human health is the most critical of all concerns. But “digital twin” applications go beyond that: we are increasingly surrounded by machines, robots and systems that control more and more aspects of our lives. And it’s critical that these machines work properly, on their own and with each other.

At his keynote speech at the DVCon Europe conference last week in Munich, Dr. Stefan Jockusch, VP of Strategy for Siemens PLM Software, explained how Siemens envisions the future of machinery and factories: each machine has a “digital twin”, a computerized model that is as identical as possible to the real machine and is used to design the system’s controls, test them, and simulate how to best use the machine and how it may interact with the digital twins of other machines. This could and should be used before the factory is up and running and, once it is operational, as a preventive-maintenance mechanism and to analyze ongoing changes and updates to the flow.
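
As a minimal sketch of the preventive-maintenance idea (the thermal model and every number below are invented for illustration), a twin can run in lock-step with the physical machine’s measured load, and a growing gap between the twin’s prediction and the measured value flags a problem early:

```python
# A toy digital twin: the same model runs in software while the
# "physical" machine develops a fault, and divergence raises an alarm.
import random

class MotorTwin:
    """First-order thermal model of a motor - the 'digital twin'."""
    def __init__(self, ambient=25.0, heat_per_load=0.8, cooling=0.1):
        self.temp = ambient
        self.ambient = ambient
        self.heat_per_load = heat_per_load
        self.cooling = cooling

    def step(self, load):
        # Heating proportional to load, cooling toward ambient.
        self.temp += self.heat_per_load * load
        self.temp -= self.cooling * (self.temp - self.ambient)
        return self.temp

def sensor_stream(minutes=120):
    """Simulated physical machine: behaves like the model, except a
    bearing fault appears at minute 60 and adds runaway heating."""
    real = MotorTwin()
    for minute in range(minutes):
        load = random.uniform(0.5, 1.0)
        temp = real.step(load)
        if minute >= 60:
            temp += 0.2 * (minute - 60)
        yield load, temp

# The twin consumes the same load measurements as the real machine.
twin = MotorTwin()
for minute, (load, measured) in enumerate(sensor_stream()):
    predicted = twin.step(load)
    if abs(measured - predicted) > 5.0:
        print(f"minute {minute}: twin diverges by "
              f"{measured - predicted:+.1f} degC - schedule inspection")
        break
```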


The “twin” concept apparently dates back to the early days of space travel, when NASA built replica models of their spacecraft systems that stayed on Earth, to help analyze, monitor and fix units that were up in the skies and not accessible. As computer technology advanced, these “twin” systems turned increasingly digital, and today they incorporate very complex models, including big-data and machine learning algorithms.

If we can apply this concept to the human body and to machines, why not move onward to the more miniaturized level of microelectronics? Here again, computer chips are at the core of many critical applications, sometimes even controlling machines or human bodies! It is therefore in our interest to create accurate enough models of such chips that can be used to simulate their behavior in many different scenarios.

In the DVCon second-day keynote presentation, Philippe Magarshack, Microcontrollers & Digital ICs Group VP at STMicroelectronics, developed the concept of a “Virtual Twin” for semiconductors.

For a start, such Virtual Twins enable verifying algorithms and “hard-to-test” conditions using top-speed C tests, and support HW/SW co-debug as well as diagnosis of, and feedback on, failures of the physical hardware, just like a healthcare “digital twin”.

As an example, Philippe presented a “virtual twin” model of the STM32 microcontroller for IoT applications, based on SystemC, complete with an ARM Cortex-M4 instruction set simulator and other peripherals.
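
To give a feel for what an instruction set simulator does at its core, here is a toy sketch; the real model is a SystemC-based Cortex-M4 ISS, while this hypothetical three-instruction machine just shows the principle of executing the target’s code on a software model of the CPU:

```python
# A toy instruction-set simulator: fetch, decode, execute, repeat.
def run(program, max_steps=100):
    regs = [0] * 4                  # register file
    pc = 0                          # program counter
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, *args = program[pc]
        if op == "movi":            # movi rd, imm
            regs[args[0]] = args[1]
        elif op == "add":           # add rd, rs1, rs2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "out":           # out rs - a stand-in peripheral
            print(f"r{args[0]} = {regs[args[0]]}")
        pc += 1
    return regs

# Firmware logic can be exercised at top speed, with full visibility,
# before the physical chip even exists.
run([("movi", 0, 2), ("movi", 1, 3), ("add", 2, 0, 1), ("out", 2)])
```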

For many years, and no less now, with the increased complexity of such systems, the accurate modelling of the full system continues to be a major challenge, and it was the topic of many different presentations. Such models represent, by definition, a higher level of abstraction, or else one wouldn’t be able to achieve their main benefits: a model that is fast to run, and a representation generic enough to be used in different scenarios, reflect different HW/SW partitions and be flexible enough to be modified to explore alternative architectures.

On the other hand, if the model is too high-level, it might not catch some of the issues it was intended to flag – defeating its main purpose.

This topic was covered from many angles at DVCon, at several sessions, under different names: Virtual Platforms, TLM & SystemC models and PSS (Portable Stimulus), all standards and technologies well known and used in the semiconductor industry for the development of system-on-chips and now growing into the development of systems and systems-of-systems.

As a matter of fact, if the subject is challenging enough at the chip level, imagine scaling it up to the system level, or to the “system-of-systems” domain. As in the example given by ST’s Philippe: what if you want to simulate the interoperability of 10,000 IoT sensors in a smart city? “Digital twins” will have to scale up to solve such problems too.

True, the 19th-century American humorist Josh Billings said that

There are two things in life for which we are never truly prepared: twins.

but be prepared: we’re all going to be cloned and get a digital twin – from humans, to machines, down to chips.

*My thanks to Eyck Jentzsch from MINRES for reviewing an earlier version of this article. Eyck presented at DVCon a paper on Virtual Platforms for RISC-V systems.