Dec 6, 2008

metrics

Listened to Brian Bailey talking about the missing metrics for verification coverage at the Certess booth on Wednesday. This was a really interesting talk that went through the standard code coverage and functional coverage metrics and pointed out that we don't currently have a good metric for tracking whether we are doing useful verification or not. Functional coverage is used for this, but care needs to be taken to ensure we are covering the right things.

There's a danger that functional coverage, which was originally designed to track the quality of random stimulus, ends up being used only to track that stimulus, so we never consider whether the behaviour is actually checked. The same ideas came up in David Robinson's verification planning presentation at DAC: requirements and functional coverage without checkers are a waste of time and effort. It is possible to hit 100% coverage with lots of cover points while having no enabled or implemented checkers in the design. Careful testbench design, or tools like Certess's Certitude, can address these issues, but only once you realise that functional coverage isn't a magic bullet.
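To make that concrete, here's a minimal Python sketch (a toy stand-in for real SystemVerilog covergroups and assertions; the buggy adder and the disabled checker flag are invented for illustration). Random stimulus hits every cover point, so coverage reports 100%, yet the deliberate bug is exercised and never caught because nothing checks the result.

```python
import random

def buggy_adder(a, b):
    # Toy DUT: deliberately wrong for one input combination.
    if a == 3 and b == 1:
        return 0          # the bug
    return a + b

covered = set()            # cover points: every (a, b) pair seen
CHECKER_ENABLED = False    # checking was never wired up

for _ in range(10_000):
    a, b = random.randrange(4), random.randrange(4)
    result = buggy_adder(a, b)
    covered.add((a, b))    # stimulus coverage is collected...
    if CHECKER_ENABLED:    # ...but the result is never compared
        assert result == a + b, f"bug at ({a}, {b})"

# Reports 100% coverage even though the bug was hit and missed.
print(f"coverage: {len(covered)}/16 = {len(covered)/16:.0%}")
```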

I also saw a demo of the Certess product, Certitude. It randomly inserts errors into the device under test (DUT) as a way to evaluate the quality of the testbench surrounding the DUT. The basic idea is that if the design is mutated in a particular way, the testbench is good if it can detect the error, and has a hole if that failure mode is invisible to the checkers. This artificial bug insertion is something I do in an ad-hoc way when developing a testbench, and Certitude automates and expands on this commonly used approach.
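As a rough sketch of the concept in Python (the adder, the hand-written mutant, and the deliberately weak testbench are all hypothetical; Certitude works on the actual HDL with its own fault models): mutate the design, rerun the testbench, and if the mutant still passes you've found a hole.

```python
def adder(a, b):
    return a + b

def mutated_adder(a, b):
    # Artificially inserted error: '+' mutated to '-'.
    return a - b

def testbench(dut):
    # A weak testbench: one directed test, and it happens to use
    # b == 0, where the '+' -> '-' mutation is invisible.
    return dut(5, 0) == 5

# Qualify the testbench: it passes the good design, but does it
# catch the mutant? If the mutant also passes, there is a hole.
assert testbench(adder)
if testbench(mutated_adder):
    print("hole: testbench cannot detect '+' -> '-' mutation")
else:
    print("mutation detected: testbench covers this failure mode")
```

The point of running the original design first is the same as in the ad-hoc manual approach: a testbench that fails the good design tells you nothing about mutants, so only mutants that survive a passing testbench count as holes.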

You can read a paper from Brian on this topic.
