Sep 18, 2013

UVM Report Servers

I'm a terrible procrastinator when it comes to writing things up or getting around to releasing finished work. As something of a commitment device to force me to finish something, I submitted an abstract to the Austin Synopsys Users Group about some XML logging functionality I added to the UVM. The commitment device worked: my abstract was accepted and, with the submission deadline looming, I finally got around to writing the paper.

Applications of Custom UVM Report Servers describes a way to convert UVM logging from plain text into a more reusable XML format. Many tools attempt to extract meaning from plain text log files using regular expressions and other potentially fragile parsing methods. Instead, this approach records the log in a structured format from which views can be reconstructed on demand. There is also an example XML parsing GUI that dynamically reformats and collapses log messages for simpler debug. The source code from the paper is available online.
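As a sketch of what consuming a structured log looks like, the snippet below filters a hypothetical XML message format with the standard library; the element and attribute names are illustrative guesses, not the actual schema from the paper.

```python
import xml.etree.ElementTree as ET

# A hypothetical structured log in the spirit of the paper; the real
# report server's schema may differ.
sample_log = """
<log>
  <message severity="UVM_INFO" id="TEST" time="10">starting test</message>
  <message severity="UVM_ERROR" id="CHK" time="55">data mismatch</message>
</log>
"""

def messages_by_severity(xml_text, severity):
    '''Return the text of every message matching the given severity.'''
    root = ET.fromstring(xml_text)
    return [m.text for m in root.iter('message')
            if m.get('severity') == severity]

print(messages_by_severity(sample_log, 'UVM_ERROR'))
```

Filtering like this works on the log structure directly, with no regular expressions over free-form text.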

This paper won the Technical Committee Award at the Austin Synopsys User Group.


Aug 18, 2013

Serving dynamic charts with Flask

I've written a little application to track FlexLM status. It's a script that runs as a cronjob and records the status of various FlexLM licenses to an SQLite database. In addition to logging the data, I've written a simple plotting application in matplotlib, but I really want to add some controls around it to select which license to plot, what date range to query from the database and other similarly straightforward features. The easiest way to do this seems to be setting up a basic web form and rendering the images as needed.
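The recording half of that cronjob can be sketched with the standard sqlite3 module; the table layout and names here are my guesses for illustration, not the application's actual schema.

```python
import sqlite3
import time

def record_usage(conn, feature, used, total, stamp=None):
    '''Append one FlexLM status sample to a usage table.'''
    conn.execute('''CREATE TABLE IF NOT EXISTS usage
                    (stamp REAL, feature TEXT, used INTEGER, total INTEGER)''')
    conn.execute('INSERT INTO usage VALUES (?, ?, ?, ?)',
                 (stamp or time.time(), feature, used, total))
    conn.commit()

# The real script would use a file-backed database; in-memory here.
conn = sqlite3.connect(':memory:')
record_usage(conn, 'VCS', 3, 10)
rows = conn.execute('SELECT feature, used, total FROM usage').fetchall()
print(rows)
```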

Towards this end, I've been playing around with the Flask web framework which makes setting up a micro web application remarkably easy. The code below is all that's required to set up a web server that will dynamically generate and serve images. The graph is rendered using matplotlib and then served up via Flask.

from flask import Flask
import numpy as np
import cStringIO
import matplotlib.pyplot as plt

app = Flask(__name__)

@app.route('/')
def build_plot():

  # Generate the plot
  x = np.linspace(0, 10)
  line, = plt.plot(x, np.sin(x))

  f = cStringIO.StringIO()
  plt.savefig(f, format='png')

  # Serve up the data
  header = {'Content-type': 'image/png'}
  data = f.getvalue()

  return data, 200, header

if __name__ == '__main__':
  app.run()

This just generates the same plot each time it is loaded, but it is quite easy to move beyond this to generating different views based on user selections.
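For instance, a route along these lines could accept the license name and date range from the URL; the route and parameter names are hypothetical, and the database query and matplotlib rendering from the snippet above are elided.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/plot/<license_name>')
def plot(license_name):
    # Select the date range from a query parameter, defaulting to a week;
    # a real handler would query the database and render with matplotlib.
    days = int(request.args.get('days', 7))
    return 'plotting %s over %d days' % (license_name, days)

# Flask's test client exercises the route without starting a server.
resp = app.test_client().get('/plot/VCS?days=30')
print(resp.data)
```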

flask plot


Aug 16, 2013

Serving local web pages

I've been trying to improve some SystemVerilog constraints, in Synopsys VCS. One useful approach is to use the profiling features of the constraint solver to explore particular constraints that might be burning lots of cycles or proving hard to solve. There's a VCS runtime switch to turn on constraint profiling:


This creates a profile report in the simv.cst directory, in html/profile.xml. The documentation mentions that this file is viewable in Chrome and Firefox but doesn't work in IE. However, it now seems it isn't viewable in Chrome either. Instead, it hits a security issue:

Unsafe attempt to load URL file:///projects/simv.cst/html/cstrProf.xsl 
from frame with URL file:///projects/simv.cst/html/profile.xml. 
Domains, protocols and ports must match.

Now, the file does load in Firefox, but I've been trying to avoid running multiple browsers. There is, however, a simple fix: run a web server and serve the profile over HTTP. That may sound like a lot of work, but Python ships with a web server in the standard library that can be run as a command-line module. So, simply change directory to your build directory (or close to it - the server lets you navigate the file hierarchy too) and run

python -m SimpleHTTPServer

Then, in a browser on the same machine, open http://localhost:8000. For example, if you run the server in the directory containing the profile, you can go directly to http://localhost:8000/profile.xml


Aug 12, 2013

Language Specification Length

Language Complexity

Now, just as the number of keywords isn't the sole indicator of the complexity of a language, the length of the specification doesn't tell the entire story. However, SystemVerilog in the new 2012 version clocks in at 1315 pages.


Aug 7, 2013

DVT Eclipse License under OS X

I find the DVT Eclipse SystemVerilog editor to be exceptionally useful when developing testbenches. One tricky thing to set up when installing these tools in OS X, is how to set an environment variable to point to the license file, for a GUI application.

Environment variables need to be defined before the GUI executes. In OS X this is achieved in the /etc/launchd.conf file (which you probably have to create):

sudo vim /etc/launchd.conf

In that file, set the environment variables using csh syntax. For the DVT license file, the environment variable is DVT_LICENSE_FILE:

setenv DVT_LICENSE_FILE /Path/to/dvt.license

Then, reboot your system and the environment variable should be set for all applications. You can check by opening a shell after the reboot and echoing the variable:

echo $DVT_LICENSE_FILE



Jul 31, 2013

Playing with TreeMaps

Just started trying to implement a TreeMap visualization in Python, using Matplotlib. I found some sample code that got me started and led to some of the original academic papers on the topic.

I've got the basics working, with a simple Tree class that I wrote to track nodes and leaves, along with weights and a place to store metadata. The Tree class is just a wrapper around an array, but it helps in tracking parent/child relationships and working out the relative weights of nodes.

class Tree(object):

    def __init__(self, parent=None, weight=None, name=None):
        self.parent = parent
        self.children = []
        self.name = name
        self.weight = weight
        self.changed = False
        if self.parent:
            self.parent.add_child(self)

    def add_child(self, child):
        self.children.append(child)
        self.changed = True

    def __iter__(self):
        for child in self.children:
            yield child

    def is_leaf(self):
        return len(self.children) == 0

    def get_weight(self, recalculate = False):

        if (recalculate and self.children) or not self.weight or self.changed:
            self.weight = 0
            self.changed = False
            for child in self.children:
                self.weight += child.get_weight(recalculate)

        return self.weight

    def get_normalized_weight(self):
        return self.weight / float(self.parent.weight)

The actual drawing algorithm is straightforward too - for each level in the hierarchy, divide up the available space between all the nodes, weighted by their size and then flip between horizontal or vertical packing at each level of the hierarchy. Currently I'm putting the leaf node weight into the center of each rectangle as an annotation. This will probably need to change with additional nodes, as they will get too small to see. Mouseover tooltips or a datacursor that updates when a node is selected will probably be more useful.

def add_node(self, node, lower=[0.005,0.005], upper=[0.995,0.995], axis = 0):
    axis = axis % 2
    self.draw_rectangle(lower, upper, node)

    width = upper[axis] - lower[axis]

    for branch in node:
        upper[axis] = lower[axis] + (width * float(branch.get_weight())) / node.get_weight()
        self.add_node(branch, list(lower), list(upper), axis + 1)
        lower[axis] = upper[axis]

def draw_rectangle(self, lower, upper, node):
    r = Rectangle(lower, upper[0] - lower[0], upper[1] - lower[1],
                  facecolor=(0, 0, 0))
    self.ax.add_patch(r)

    if node.is_leaf():
        rx, ry = r.get_xy()
        cx = rx + r.get_width() / 2.0
        cy = ry + r.get_height() / 2.0
        r.set_facecolor(node.get_colour())
        self.ax.annotate(str(node.get_weight()), (cx, cy), color=(0, 0, 0),
                         fontsize=10, ha='center', va='center')
        print rx, ry, cx, cy

The full source is available on github. Trees can be built by creating the nodes one instance at a time.

from Tree import HueTree as t

a = t()
b = t(a)
e = t(b, 1, 'e')
f = t(b, 2, 'f')

c = t(a, 3, 'c')

d = t(a)
g = t(d)
j = t(g, 1, 'j')
k = t(g, 1, 'k')
h = t(d, 4, 'h')
l = t(d)
lprime = t(l, 1, 'l')
m = t(l, 1, 'm')
n = t(l, 1, 'n')
o = t(l, 1, 'o')


Simple Treemap

Here I'm using the relative weight of the node within a given point in the hierarchy to specify the Hue value from an Hue/Saturation/Value triplet that gets converted to an RGB colour.
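That hue mapping is a one-liner with the standard colorsys module; saturation and value are pinned at 1.0 here purely for illustration.

```python
import colorsys

def weight_to_rgb(normalized_weight):
    '''Use a node's 0..1 relative weight as the hue of an HSV triplet
    and convert it to the RGB colour matplotlib expects.'''
    return colorsys.hsv_to_rgb(normalized_weight, 1.0, 1.0)

print(weight_to_rgb(0.0))
```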

The long-form way of instantiating a tree gets quite cumbersome, so it is also possible to define a tree as a nested set of tuples and then use a helper function, make_tree, to construct the actual Tree objects.

import Tree
from TreeMap import TreeMap

short_map = (((1, 2), 3, ((1, 1), 4, ((32,34,1,2), 1, 1, 
                    (1, 2, 4, 5,(2,2,(2,(2,(1,1,1,1,(3,2,12),1)))))))))

x=Tree.make_tree(short_map, TreeType = Tree.Tree)

Things look good with a small number of nodes, but as you add more leaves, skinny rectangles start to dominate the plot. In this case, I'm using the relative weight as a gray value, rather than varying the hue. There are other algorithms for subdividing the space that tend to give a more square structure to the resulting map - I'm starting to investigate implementing the map with those algorithms to see how it looks.
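One such algorithm is the squarified treemap (Bruls, Huizing and van Wijk), which greedily keeps the worst aspect ratio in the current row as close to 1 as possible. Its decision function can be sketched as:

```python
def worst_ratio(areas, side):
    '''Worst (largest) aspect ratio among the rectangles formed by laying
    the given areas out in a single row along an edge of length `side`.
    The squarified algorithm only adds an area to a row while this value
    keeps improving.'''
    total = float(sum(areas))
    return max(max(side * side * a / (total * total),
                   total * total / (side * side * a))
               for a in areas)

print(worst_ratio([1, 1], 2))   # two unit squares in a 2-wide strip
```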

Skinny Treemap


Jul 28, 2013

Coverage Visualization

I've been thinking recently about better ways to visualize coverage results. The new UCIS API provides ways to access coverage data more easily in third-party tools. I've mostly finished wrapping this API in Python, using SWIG, to create PyUCIS. That Python layer provides iterators and other Pythonic idioms to let you use the UCIS more efficiently. So far I'm seeing about 66% less code than the equivalent C functions. Also, a large part of the UCIS implementation is tied up in XML, and the Python libraries for XML support are excellent. I've found it a useful way to write tools around coverage data, quickly.

That efficiency and quick development is key to providing a flexible toolkit for analyzing coverage data. Rather than building an all-singing, all-dancing tool, I'd rather have a toolkit of features that allow me to inspect and visualize the data, tuned to the particular job at hand. Matplotlib and other visualization tools, such as NetworkX, are great, free libraries that become available once you get the data into a Python environment. Trying to do similar visualizations in C is possible, but generally prohibitive in time and effort.
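The shape of those Pythonic wrappers can be illustrated with a generator over a C-style start/next iteration API. The FakeUcis class below is a stand-in so the example runs on its own; the real UCIS functions have different names and signatures.

```python
class FakeUcis(object):
    '''Stand-in for a SWIG-wrapped C API that iterates via an opaque handle.'''
    def __init__(self, names):
        self._names = names
    def iterate_start(self):
        return {'remaining': list(self._names)}
    def iterate_next(self, handle):
        return handle['remaining'].pop(0) if handle['remaining'] else None

def scopes(db):
    '''Hide the start/next handle dance behind a normal Python iterator.'''
    handle = db.iterate_start()
    while True:
        scope = db.iterate_next(handle)
        if scope is None:
            return
        yield scope

db = FakeUcis(['top', 'top.dut', 'top.tb'])
print(list(scopes(db)))
```

Callers then just write `for scope in scopes(db):` rather than managing iterator handles by hand.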

So now that I have easy access to coverage data and a wide range of visualization tools, I've been thinking about how I'd like to look at that coverage. Normally we just get some sort of traffic-light view, typically in something that looks like a spreadsheet or tree view. These views might group the coverage model in terms of the implemented model hierarchy - rolling up percentages, flagging things in red for low numbers of hits and green for close to 100% coverage. This is useful, but it doesn't really show much about the scope of the coverage model. It would be very easy to write a functional coverage model with a few easily reached bins, declare '100% coverage' and move on.

The normal way to avoid that scenario is careful review of the coverage model, inspection of code and meetings to discuss what is in the spec that needs coverage. That is certainly a necessary part of planning for verification. I just think there might be additional metrics that could be used and visualised to give a better indication of the current state of the design. I started thinking about coverage density - in particular, whether there are ways to come up with a metric showing that a block has sufficient coverage, at least as a first-order approximation. I think this could be useful both for simple code coverage - 100% on a small block is less interesting than 100% on a large block, for example - and for functional coverage. Again, a large, complex module with only a few functional coverage points and 100% coverage should be seen as a worse result than a small block, with many functional coverage points, that is reporting 100%. We should be more worried about the large, sparsely covered parts of our design. I'd like the visualization of any coverage results to highlight those sorts of regions that are completely but only thinly covered.

This then starts to lead to an idea of density, based on the size of a module and the amount of coverage collected. I haven't found any examples of metrics like this. The only reference to coverage density that I have seen is in Functional Verification Coverage Measurement and Analysis by Andrew Piziali. There, though, coverage density refers to the number of coverage hits per simulation cycle and is a measure of the efficiency of a given test.

NetworkX sample

I think it might be reasonable to consider lines of code as a measure of the size of a module, although synthesized gate count would probably be a better indicator if it were available. Given that coverage data should be collected earlier in the process, before synthesis, the line count might be a good starting point. Similarly for the coverage model, we need a metric to estimate the 'size' or complexity of the coverage within a given block. Now, a coverage model can span multiple parts of a design and be somewhat orthogonal to the implementation, but I think there is often a fairly direct mapping of some functional coverage to design blocks. When we have reasonable metrics for these, we can come up with a view of the coverage density. I need to do some more investigation into what might be useful metrics here. It is quite straightforward for code coverage, but less of a one-to-one relationship for functional coverage.
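A first cut of that density idea can be written down in a few lines; the function and the example numbers below are hypothetical, purely to illustrate the metric being proposed.

```python
def coverage_density(coverage_items, lines_of_code):
    '''Naive density metric: coverage items (bins, points) per line of code.'''
    return coverage_items / float(lines_of_code)

# The case the text worries about: a large module with a handful of points
# scores lower than a small, thoroughly covered one, even if both report
# 100% coverage.
sparse = coverage_density(5, 5000)
dense = coverage_density(20, 200)
print(sparse < dense)
```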

I think two-dimensional graphs of that coverage data might become a useful way to see the coverage in a design, using treemaps or network diagrams to give a better feel for how coverage closure is moving forward. Those could even be animated over time for a better view into the progress of the verification. This sample treemap shows a way of visualizing a source code repository, though the colour coding isn't really showing additional information. A display like this seems well suited to being extended to visualize coverage data. It could also be turned into a 3D visualization of the source code, with the density metric shown on the Z axis.

Just as code coverage is useful right up until you reach 100% and useless thereafter, these sorts of views won't really help in saying 'we have enough functional coverage'. But they might be useful in showing that there isn't enough coverage in particular areas of the design.


Jul 23, 2013

Much ado about nothing

Having wasted a few hours debugging a UVM issue with `uvm_do macros and a pre_do task in a parent class, I decided to dig in to starting sequences without using the macro implementations. Some of the main EDA companies recommend this approach and there was a good paper at DVCon discussing the pros and cons of the various UVM macros.

So while several people recommend using the methods directly and avoiding the macros, I didn't find a great deal of clear information describing the correct set of method calls to use for a sequence_item and a sequence. The examples in Adam's paper above are somewhat confusing, mixing sequence_items and sequences within the same code in some cases and suffering from typos in others. After I worked out the information below, I took another look at the paper: the examples are correct (ignoring the typos), but the interleaving of the sequence and sequence_item start operations makes them more complex than they need to be.

Luckily enough the UVM source code is all available and I found the simplest way to resolve this was just to go and look at the implementation. In particular, two files are useful: the macro implementations themselves in src/macros/uvm_sequence_defines.svh and the class definitions for sequences and sequence_item in src/seq/uvm_sequence_base.svh

My first confusion stemmed from the class inheritance:

    uvm_object
      uvm_transaction
        uvm_sequence_item
          uvm_sequence_base
            uvm_sequence #(REQ, RSP)

At first glance, this tends to imply that a uvm_sequence is just another type of uvm_sequence_item and they can be treated as interchangeable. However, it quickly becomes apparent that this is only superficially true. In reality, sequences (uvm_sequence) have a distinct API from sequence items (uvm_sequence_item) and only share a few common features.

In particular, sequences run on a sequencer without any arbitration controlling their execution. Multiple sequences can be launched on a sequencer and will execute in parallel. In contrast, sequence_items are subject to arbitration that controls access to the sequencer's downstream port. A multi-step arbitration handshake is performed, but only for sequence items, not sequences. This is the fundamental difference in the API and the reason for the different methods used to start items and sequences.

Sequences and sequence items can both be launched using the various uvm_do macros, so the common base does help provide this abstraction to a single interface, but it really just hides the multiple APIs that are being used behind the scenes. I agree with the view that it is better to just understand what is going on and use the function calls directly. That way you will tend not to be surprised by the various hooks provided for callbacks within the macro invocations.

From uvm_sequence_defines.svh, the methods used to start a sequence item are:

    `uvm_create(item)
    start_item(item);
    item.randomize();
    finish_item(item);

A sequence, however, is launched with:

    `uvm_create(seq)
    seq.randomize();
    seq.start(sequencer, this);

Now, the `uvm_create macro call can be replaced with a direct call to the type_id::create factory method, for both the sequence_item and the sequence - something like this for a sequence_item,

    item = custom_sequence_item::type_id::create("item",, get_full_name());

or this for a sequence

    seq = custom_sequence::type_id::create("seq",, get_full_name());

In the sequence_item startup, the various calls after the create can be replaced by two functions, start_item and finish_item, that encapsulate the arbitration for the driver port and actually launch a transaction.

    item = custom_sequence_item::type_id::create("item", , get_full_name());
    start_item(item);
    item.randomize();
    finish_item(item);

Breaking up the uvm_do in this way provides more control over the randomization of the item - you can disable constraints or assign values after start_item, rather than trying to insert code in pre_do. One fundamental problem with using pre_do to modify constraints is that there is no clear indication of which call to `uvm_do in the sequence body triggered the callback. The only indication in the pre_do API is whether it is being called before a sequence_item or a sequence [via the is_item flag]. If you have multiple calls to `uvm_do (or start_item) in your sequence body, there isn't a reliable way to differentiate which call triggered the callback (other than perhaps checking whether a member variable has been created yet, which fails with loops or threads of execution in the body).

Similarly to start a sequence, there is a simpler API that encapsulates the various calls to pre_start, pre_do etc:

    seq = custom_sequence::type_id::create("seq",, get_full_name());
    seq.randomize();
    seq.start(target_sequencer, this);

These 4 calls for a sequence_item and 3 calls for a sequence can then be further enhanced with control over constraints, randomize with {} constructs, the prioritization arguments to the tasks, or a different target sequencer. The end result is much more flexible than the 18 flavours of `uvm_do macros and makes the flow of execution easier to work out.


Jul 21, 2013

A sample of SV

I've recently migrated my blog from Typepad to Pelican, hosted on GitHub. One nice feature of that is that I can write simple posts in Markdown syntax and it uses the Pygments syntax highlighting engine. A while ago I added SystemVerilog support to Pygments and looking below, it seems to be working!

module top;
    import user_pkg::*;
    import uvm_pkg::*;
    env e;

    initial begin
        `uvm_info("top","In top initial block",UVM_MEDIUM)
        e = new("env", null);
        run_test();
    end
endmodule


Jul 18, 2013

A VCD Parser

A co-worker mentioned that a generic VCD parser might be useful. So I wrote one. It is mostly functional, after a few tweaks and some valuable suggestions from colleagues.

The tokenizer got particular attention, due to the rather neat generator expression assigned to tokeniser:

def extract(self, fh):
    '''Tokenize and parse the VCD file'''
    # open the VCD file and create a token generator
    tokeniser = (word for line in fh for word in line.split() if word)

    for count, token in enumerate(tokeniser):
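The generator expression is easy to exercise on its own; any iterable of lines works in place of the file handle:

```python
# Tokenizing a small VCD fragment with the same one-line generator
# expression used in the parser above.
vcd_fragment = [
    '$timescale 1ns $end\n',
    '#0\n',
    'b101 !\n',
]

tokeniser = (word for line in vcd_fragment for word in line.split() if word)
tokens = list(tokeniser)
print(tokens)
```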

