Jun 17, 2008

visual commitment


I've had an on-again/off-again interest in visualisation tools to enhance design and verification for many years. I've written log file parsers that show data in a friendlier way to aid debug, and Tcl/Tk widgets that display activity on the bus ports of a SystemC AMBA switch model. Tools that at first glance might seem like pointless visual trinkets can really enhance debug, by letting the brain search for patterns in the data more easily. For example, with the bus switch it was visually obvious which bus wasn't getting any traffic; the same information could have been extracted from a log, but would have required more thought. Patterns of burstiness or busyness are also easy to spot. Similar results can usually be achieved with grep or clever regular expressions, but then I end up having to keep a lot more data in my head, which pushes out the brainpower I might apply to actually working on the real problem.
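The kind of log parser I mean can be tiny. Here's a minimal sketch in Python, assuming a made-up log format of `<cycle> <bus> <operation>` per line (the format, bus names, and sample data are illustrative, not from any real tool), that buckets transactions by cycle and renders per-bus activity as a crude text bar:

```python
import re
from collections import defaultdict

# Hypothetical log: one "<cycle> <bus> <operation>" record per line.
SAMPLE_LOG = """\
0 bus0 READ
1 bus0 WRITE
1 bus1 READ
2 bus0 READ
5 bus0 WRITE
"""

def activity_histogram(log_text, bucket=2):
    """Count transactions per bus, grouped into cycle buckets."""
    counts = defaultdict(lambda: defaultdict(int))
    for line in log_text.splitlines():
        m = re.match(r"(\d+)\s+(\S+)\s+(\S+)", line)
        if m:
            cycle, bus = int(m.group(1)), m.group(2)
            counts[bus][cycle // bucket] += 1
    return counts

def sparkline(bus_counts, buckets):
    """Render bucket counts as text: blank = idle, '#' = saturated."""
    glyphs = " .:#"
    return "".join(glyphs[min(bus_counts.get(b, 0), 3)]
                   for b in range(buckets))

hist = activity_histogram(SAMPLE_LOG)
for bus in sorted(hist):
    print(bus, "|" + sparkline(hist[bus], 3) + "|")
```

Even something this crude makes a starved bus jump out: its row stays mostly blank while the busy ones fill with marks, with no grepping required.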

Well-written visualisations present the data in a more accessible way, letting you get to the problems more quickly. A good example from the recent DAC is a pair of OCP tools by Duolog. The tools present typical OCP information in a more easily interpretable form, colouring related transactions in a log file or showing bus bandwidth. It is much simpler to trace a series of transactions this way, or to find buses that are being starved or overloaded. The user could have extracted the same information directly from the log file, but the visualisations make things easier and quicker.

I've been playing around with Flash and also just started reading about the Processing language over the weekend. Coincidentally, I happened across these visualisations of version control commits for several large projects, written in Processing. They give a good indication of how visualisations can make a lot of very complex data more accessible. The Python visualisation was particularly interesting, showing how few contributors there were until an explosion of new contributors wanders onto the stage around the year 2000.
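The raw material behind such a visualisation is simple to gather. A sketch, assuming commit records have already been dumped one per line as `author|year` (the field layout, separator, and sample names here are my own invention, not taken from Processing or the linked projects):

```python
from collections import Counter

# Hypothetical one-commit-per-line dump from a version control log.
SAMPLE = """\
ada|1998
ada|1999
ada|2000
bea|2000
carl|2000
bea|2001
"""

def commits_per_author_year(log_text):
    """Tally commits by (author, year) - the input to a commit plot."""
    tally = Counter()
    for line in log_text.splitlines():
        author, year = line.split("|")
        tally[(author.strip(), int(year))] += 1
    return tally

tally = commits_per_author_year(SAMPLE)
print(tally.most_common())
```

From a table like this, the lone-leader-then-explosion shape is already visible: one author for the early years, then several new (author, year) pairs appearing at once.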

As our SoC designs keep getting larger, I believe we will need more accessible ways to interpret verification and design data and results, beyond just waveforms and log traces. I would be interested to see what the ebb and flow of check-ins looks like for a complex, modern SoC. We tend to think that hardware design is just software design by another name, but you might start to see structural differences in how the code and modifications organise themselves.

The Python case shows one leader, banging away on their own, suddenly joined en masse when the code base becomes popular. SoCs would probably also demonstrate Conway's Law visually, mirroring the organisational structure that put the design together: pairs of verification and design engineers working together, sub-assemblies, clustering around organisational and functional boundaries. The vast majority of changes would be much more localised than in a large software project such as Python. Quite possibly other software projects follow similarly modular designs, but it would be instructive to see the visualisations side by side - perhaps with the organisational human structure (or lack of it) overlaid.
