Jul 28, 2013

Coverage Visualization

I've been thinking recently about better ways to visualize coverage results. The new UCIS API provides ways to access coverage data more easily in third-party tools. I've mostly finished wrapping this API in Python, using SWIG, to create PyUCIS. That Python layer provides iterators and other Pythonic idioms to let you use the UCIS more efficiently. So far I'm seeing about 66% less code than the equivalent C functions. Also, a large part of the UCIS implementation is tied up in XML, and the Python libraries for XML support are excellent. I've found it a useful way to write tools around coverage data quickly. That efficiency and quick development is key to providing a flexible toolkit to analyze coverage data. Rather than building an all-singing, all-dancing tool, I'd rather have a toolkit of features that lets me inspect and visualize the data and can be tuned to the particular job at hand. Matplotlib and other visualization tools, such as NetworkX, are great, free libraries that become available once you get the data into a Python environment. Doing similar visualizations in C is possible, but generally prohibitive in time and effort.
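As a rough sketch of the kind of iteration I mean (the module and method names here are illustrative placeholders, not the final PyUCIS API), walking a database for uncovered items looks something like this:

    import pyucis  # hypothetical import name

    # Open a UCIS database and walk the hierarchy looking for empty bins.
    db = pyucis.open("coverage.ucis")
    for scope in db.scopes():             # iterate design/coverage scopes
        for item in scope.cover_items():  # covergroups, bins, statements...
            if item.count == 0:
                print("uncovered: %s.%s" % (scope.path, item.name))

The equivalent C code needs explicit callbacks and handle management, which is where most of the code saving comes from.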

So now that I have easy access to coverage data and a wide range of visualization tools available, I've been thinking about how I'd like to look at that coverage. Normally we just get some sort of traffic-light view, typically in something that looks like a spreadsheet or tree view. These views might group the coverage model in terms of the implemented model hierarchy - rolling up percentages, flagging things in red if they have low hit counts, green if they are close to 100% coverage. This is useful, but it doesn't really show much about the scope of the coverage model. It would be very easy to write a functional coverage model with a few easily reached bins, declare '100% coverage' and move on.
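For reference, the roll-up itself is trivial - a sketch, with made-up data structures, of how those percentages and colours get computed:

    # Aggregate (hits, bins) over a hierarchy and assign traffic-light
    # colours. The node dictionaries are invented placeholders.
    def rollup(node):
        hits, bins = node.get("hits", 0), node.get("bins", 0)
        for child in node.get("children", []):
            h, b = rollup(child)
            hits, bins = hits + h, bins + b
        pct = 100.0 * hits / bins if bins else 100.0
        print("%-12s %6.1f%% %s" % (node["name"], pct,
                                    "green" if pct >= 90.0 else "red"))
        return hits, bins

    rollup({"name": "top", "children": [
        {"name": "top.alu",  "hits": 48, "bins": 50},
        {"name": "top.fifo", "hits":  3, "bins": 20}]})

Note that the FIFO's 15% shows up clearly, but nothing in this view says whether 20 bins was a reasonable model for that block in the first place.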

The normal way to avoid that scenario is careful review of the coverage model, inspection of code and meetings to discuss what is in the spec that needs coverage. That is certainly a necessary part of planning for verification. I just think there might be additional metrics that could be used and visualised to give a better indication of the current state of the design. I started thinking about coverage density - in particular, whether there is a metric that could show, at least as a first-order approximation, that a block has sufficient coverage. I think this could be useful for simple code coverage - 100% on a small block is less interesting than 100% on a large block, for example - as well as for functional coverage. Again, a large, complex module with only a few functional coverage points and 100% coverage should be seen as a worse result than a small block, with many functional coverage points, that is reporting 100%. We should be more worried about that large, sparsely covered part of our design. I'd like the visualization of any coverage results to highlight those sorts of regions that are completely, but only thinly, covered.

This starts to lead to an idea of density, based on the size of a module and the amount of coverage collected on it. I haven't found any examples of metrics like this. The only reference to coverage density that I have seen is in Functional Verification Coverage Measurement and Analysis by Andrew Piziali. There, though, coverage density refers to the number of coverage hits per simulation cycle and is a measure of the efficiency of a given test.

[Image: NetworkX sample]

I think it might be reasonable to consider lines of code as a measure of the size of a module, although synthesized gate count would probably be a better indicator if it were available. Given that coverage data should be collected earlier in the process, before synthesis, line count seems a good starting point. Similarly for the coverage model, we need a metric to estimate the 'size' or complexity of the coverage within a given block. Now, a coverage model can span multiple parts of a design and be somewhat orthogonal to the implementation, but I think there is often a fairly direct mapping of some functional coverage to design blocks. When we have reasonable metrics for these, we can come up with a view of the coverage density. I need to do some more investigation into what might be useful metrics here. It is quite straightforward for code coverage, but less of a one-to-one relationship for functional coverage.
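As a first cut, the density metric could be as simple as bins per line of code. A sketch with made-up numbers, to show how it separates two blocks that both report 100%:

    # First-order coverage density: functional bins per line of RTL.
    # Block names and figures are invented for illustration.
    def density(bins, loc):
        return float(bins) / loc

    blocks = [("big_dma_engine", 12, 4500),  # (name, bins, lines of code)
              ("small_arbiter",  60,  300)]
    for name, bins, loc in blocks:
        print("%-16s 100.0%% covered, density %.4f bins/line"
              % (name, density(bins, loc)))

Both blocks are 'done' by the usual measure, but the DMA engine's density of roughly 0.003 against the arbiter's 0.2 is exactly the thin coverage I'd want flagged.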

I think two-dimensional graphs of that coverage data might become a useful way to see the coverage in a design, using treemaps or network diagrams to let a user get a better feel for how coverage closure is moving forward. These could even be animated over time to give a better view into the progress of the verification. This sample treemap shows a way of visualizing a source code repository, but the colour coding isn't really showing additional information. A display like this seems well suited to being extended to visualize coverage data. It could also be turned into a 3D visualization of the source code, with the density metric shown on the Z axis.
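To give a flavour, here is a minimal NetworkX/matplotlib sketch that draws a design hierarchy with node colour keyed to a density metric rather than a raw percentage (the hierarchy and values are placeholders):

    import networkx as nx
    import matplotlib.pyplot as plt

    # Design hierarchy as a graph, nodes coloured by coverage density.
    G = nx.DiGraph()
    G.add_edges_from([("top", "top.alu"), ("top", "top.dma"),
                      ("top", "top.fifo")])
    density = {"top": 0.05, "top.alu": 0.2, "top.dma": 0.003,
               "top.fifo": 0.1}

    nx.draw(G, with_labels=True, node_size=1200, font_size=8,
            node_color=[density[n] for n in G.nodes()],
            cmap=plt.cm.RdYlGn)
    plt.show()

Animating this over a series of regression snapshots would just mean redrawing with updated density values.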

In the same way that code coverage is useful right up until you reach 100% and then tells you nothing more, these sorts of views won't really help in saying 'we have enough functional coverage'. But they might be useful in showing that there isn't enough coverage in particular areas of the design.
