Dec 21, 2012

A simple turtle graphics Domain Specific Language (DSL) parser

I've been working through some of the exercises in 'The Pragmatic Programmer' while using it to mentor a colleague.  In particular, I spent a bit of time working on the Domain Specific Language section and considering creating little DSLs to provide flexible control. I've painfully implemented DSLs in SystemVerilog/ UVM sequences in the past and thought about how I typically do this: build a parser, create tokens, then dispatch various execution functions to implement the commands.  Often, the pain of building all these pieces in a language like C or SV is enough of a barrier that I wouldn't even start.  

The example in the book is a simple Logo/ Turtle control language: pen up, pen down, draw, turn, etc. It keeps things simple by using single-letter command codes and at most one optional argument.

P 2
D
W 2
N 1
E 1
S 1
U

I know Python has a Logo/ Turtle engine built in, so I decided to write a command parser/ dispatcher that would work with it, letting me write commands in a text file and have them execute Python turtle commands.

Here's an example script (you can see the results of this at the end of this post):

color "green"
up
goto 0 -50 # comments are allowed
down
circle 50
up
color "red"
write "Done!"
sleep 2
exit

I thought initially about having to parse out all the commands and arguments, then writing a large switch/ case statement (or the Python equivalent with a dictionary).  After thinking about it a little longer, I realised I didn't have to do that at all.  I could use introspection in Python to look up the available methods and, if they exist, call them. In fact, since not finding the method just causes an exception, I can try executing any command and, if it fails, catch the error and move on.
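For contrast, here's roughly the dispatch-table version I first had in mind (a hypothetical sketch, not code from the final script): every command has to be registered by hand.

from turtle import up, down, goto, circle

# every supported command needs its own entry in the table
dispatch = {'up': up, 'down': down, 'goto': goto, 'circle': circle}

command = ['goto', '0', '-50']
dispatch[command[0]](*map(eval, command[1:]))  # KeyError for anything unregistered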

So after parsing the input text stream (throwing away comments, breaking up tokens using whitespace), I just try to execute the command in the global namespace.  I've pulled all the turtle functions into that namespace, so any turtle function is a valid command in my little parser.  The globals()[command[0]] in the code below looks up the function in the global namespace and then calls it, using the other parts of the command (command[1:]) after they've each been passed through the Python eval function to convert them from strings to whatever they represent (mainly numbers or strings).  The final trick is the * operator, which takes a list and passes its elements as the individual arguments to the function:

globals()[command[0]](*(map(eval, command[1:])))
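To make that concrete, here's how the goto line from the example script is handled (a worked sketch, assuming the from turtle import * used in the full listing below):

from turtle import *                 # puts goto() and friends into globals()

command = ['goto', '0', '-50']       # after comment stripping and splitting
args = map(eval, command[1:])        # ['0', '-50'] becomes [0, -50]
globals()[command[0]](*args)         # equivalent to calling goto(0, -50)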


And that's all that's needed to implement a full Logo language parser and execution engine. The command handling is fault tolerant and reports errors with line numbers.  New commands can be added easily by defining new functions; they're automatically supported as soon as they're added to the namespace.

Being able to pull something like this together quickly makes writing little Domain Specific Languages a realistic, low-cost option.  Doing something similar in C is more daunting, and worse still in a language like SystemVerilog, with its poor string and file handling libraries.  There's real value in being able to program at such a high level: it greatly expands what's possible, or at least what's likely to be attempted, in a verification environment. You could do this in SystemVerilog, but how often would you even think to attempt it, without rejecting it as too much work?


# pull all the turtle commands into the global namespace, so they are valid commands
from turtle import *

# Use these imports to make 'sleep' and 'exit' usable commands (argv is used below to read the script file)
from time import sleep
from sys import exit, argv

# Given a handle to a series of strings of commands, do them
def parse_and_draw(commands):
    for (line_number, line) in enumerate(commands, 1):  # start at 1 so reported line numbers match the file
        line = line.split('#')[0]  # throw away comments

        if line:  # if there is anything left after getting rid of comments

            command = line.strip().split(' ')  # parse using spaces to delimit tokens  
                                               # a big limitation of this, we can't have strings with spaces
                                               # e.g.,  "hello world" won't work as it'll get split up
                                               # into ['"hello', 'world"'] neither of which bits are valid when eval'ed

            if command[0]:  # if we have any command left 
                            # (e.g., an indented comment would disappear)

                try:
                    # The meat of the dispatcher is the next line
                    # using a try/ except means we can always try to run any command
                    # and assume that it is valid and catch if it isn't.
                    # globals() returns a dictionary of every function defined in the global namespace
                    # including all the turtle commands because of the from turtle import *
                    # command[0] is used as a key to look up the function name
                    # we then pass all of the other tokens (command[1:]) through eval (using map)
                    # and pass them as arguments to the function we looked up

                    globals()[command[0]](*(map(eval, command[1:])))

                    # a key error occurs if we don't find command[0] in the global namespace
                except KeyError:
                    print 'Unknown command', command[0], 'on line', line_number

                    # Some other error occurred (e.g., the called function raised an Exception)
                    # report it here and continue on (if we didn't catch it, the program would end)
                except Exception as e:
                    print 'Invalid command', command[0], 'on line', line_number, e


if __name__ == '__main__':
    # open the first file on the command line, get commands from that and run them
    commands = open(argv[1])  # a file object can be iterated over line by line

    # note that the parse_and_draw routine works on a list of commands; it doesn't know about files
    # or anything else.  This separation is useful, as we can get commands from anywhere
    parse_and_draw(commands)




Feb 5, 2012

Hobgoblins

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines

- R. W. Emerson

Looking over the VPI iterator sample code I posted last night from PyHVL, I noticed something. I've defined the API as follows:

def vpi_iterator(handle, type=vpiNet):

With the handle first and an optional type parameter second, taking the default value of vpiNet. Looking again at the C implementation, I see that the method signature is the reverse:

itr = vpi_iterate(vpiNet, mod);

Now I'm faced with a decision: I could make the Python vpi_iterator() call consistent with the similar C API and reverse the parameters, or I could keep it as it is. Keeping it this way allows the use of the default argument for vpiNet. Optional arguments must follow required arguments, so I couldn't reverse the order and keep type=vpiNet optional; it would have to become (type, handle). In this case, I don't think the default type is particularly intuitive from the name, so I tend to think there is more value in reversing the arguments, so that the signature is consistent with the similar C API function:
def vpi_iterator(type, handle):
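To make the constraint concrete, here's a quick hypothetical sketch (with a stand-in value for the vpiNet constant):

vpiNet = 36  # stand-in constant, purely for illustration

def vpi_iterator(handle, type=vpiNet):  # legal: the optional parameter is last
    pass

# def vpi_iterator(type=vpiNet, handle):  # SyntaxError: non-default argument
#                                         # follows default argument
def vpi_iterator(type, handle):  # the C-style ordering: both arguments required
    pass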
Some people might think this is a totally arbitrary consideration. I believe this sort of detail is important. My favourite languages are internally consistent, so that when you start using a new library or part of the language, the structures and idioms are so familiar you can often code things correctly without having fully read the documentation. Most of the Python standard library is like that: you can pick up a new library and be productive quickly, because there aren't jarring inconsistencies. If I want people coming from C to use this vpi_iterator method, it will help if it matches the general argument patterns used there. Less friction to getting code working the first time.
Each of these is just a small thing that you have to remember, but each time it catches you out it is annoying and maybe knocks you off your train of thought to fix it. The details and consistency are important to people being productive in a language.

I wish SystemVerilog had more of that consistency. Semicolons to mark the end of a function definition, begin/end to delimit some blocks, {} for other parts of the language, and then end, endfunction and endtask as separate block terminators. Just extra details that get in the way of writing clean code the first time.


Jan 5, 2012

Simplified VPI iterators using PyHVL generators

I've been using PyHVL for a variety of verification tasks in the past few years. PyHVL is an open source Python integration for Verilog and SystemVerilog simulators. To give a quick taste of what it can do for you, consider the following SystemVerilog VPI C code.

void display_nets(vpiHandle mod)
{
    vpiHandle net;
    vpiHandle itr;

    vpi_printf("Nets declared in module %s\n",
               vpi_get_str(vpiFullName, mod));
    itr = vpi_iterate(vpiNet, mod);

    while ((net = vpi_scan(itr))) {
        vpi_printf("\t%s", vpi_get_str(vpiName, net));
        if (vpi_get(vpiVector, net)) {
            vpi_printf(" of size %d\n", vpi_get(vpiSize, net));
        }
        else {
            vpi_printf("\n");
        }
    }
}

Here is the equivalent VPI code, this time written in Python, using PyHVL.

def display_nets(module):
    print 'Nets declared in module', get_str(vpiFullName, module) 
    for net in vpi_iterator(module, vpiNet):
        print '\t%s %s' % ( get_str(vpiName, net), 
                get(vpiVector, net) and 'of size %d' % get(vpiSize, net) or '')

The magic happens in the implementation of the vpi_iterator() method, which uses a Python yield statement to turn the method into a generator. Generators are much like functions, except they maintain a frozen stack frame at the point where the yield occurs. All existing variables within the method keep their state, and execution picks up where it left off, just after the yield. The example also uses the short-circuit and/or idiom on the result of get(vpiVector, net), so that get(vpiSize, net) is only called, and the 'of size' string only printed, for vectors.

def vpi_iterator(handle, type=vpiNet):
    iterator = iterate(type, handle)
    if iterator:
        handle = scan(iterator)
        while handle:
            yield handle
            handle = scan(iterator)

This lets you write loops 'inside out', as one of my colleagues aptly put it. The outcome is that you can simplify the management of loops and indices and focus on the point of the loop. You write less code and you introduce fewer bugs. The code is easier to read and maintain as a result. This is just a very small example of the power of using a modern scripting language like Python as an adjunct to a SystemVerilog simulator.
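If generators are new to you, the same scan-until-done pattern works on anything, not just VPI handles. A self-contained sketch with made-up names:

def scan_until_false(next_item):
    # keep calling next_item() until it returns a false value,
    # yielding each result; the stack frame freezes at every yield
    item = next_item()
    while item:
        yield item
        item = next_item()

data = [10, 20, 30]
for value in scan_until_false(lambda: data.pop(0) if data else None):
    print value  # prints 10, then 20, then 30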

If you spend much time writing VPI code, you should take a look at PyHVL. It could make your life much simpler; get in touch and I can help you with it.


Aug 4, 2009

creating a codeswarm movie

code swarm frame

Download video (3Mb)

A codeswarm is a visualization of the activity within a source code repository. The image and linked video above show the lifetime of one of Verilab's source repositories. You can see code being created, the check-ins as they happen and an indication of which users are doing the work at any given time. It is an example of an 'organic information visualization' and is created using the Processing toolkit. The original visualization tools were developed by Michael Ogawa and the source code is available on Google code.

In this particular case I created the animation on OS X 10.5, using a combination of codeswarm, ffmpeg and LAME. If you are interested in doing something like this yourself:

First you'll need to make sure you have a recent version of the Java Development Kit installed (JDK 1.5 or later). You'll also need a recent version of Ant installed. (I have version 1.7.0, which ships with OS X as default). Download the code_swarm source and install it. Then execute 'ant run'. If all is well, you should get a dialog box prompting you for the source repository, user name and password.

At this point, I put in the svn+ssh URL for the Verilab repository that I wanted to visualize. Everything fell over, with a Java error (NoClassDefFoundError within com/trilead/ssh2). From this I realised I needed to install the SSH libraries for Java, from Trilead. I downloaded those, unpacked them and added the jar file to my CLASSPATH. Along the way I found out the default OS X CLASSPATH definition is in /System/Library/Java/JavaConfig.plist which may be useful as a starting point.

With that fixed, I again ran 'ant run' and put in the relevant information. A bit of time passed while the checkin information was extracted from the repository, then the visualisation ran. The extracted repository information is saved under the ./data directory (look for the latest realtime_sample.*.xml file). This is useful for the next stages, as you don't have to fetch the information again. If you want to create a video of the visualisation, there are a few more hoops to jump through.

You will need to configure codeswarm to save the frames for each stage of the visualisation. You do this by editing the ./data/sample.config file. First off, copy it to a new version for your particular project. Then edit these values:

  • InputFile= [Point it at the new realtime_sample<number>.xml file in the data directory, that contains the checkin information for your project]
  • TakeSnapshots=true

That's all you really need to change. You can also change the other values, to alter the visualisation. The ColorAssignX= statements use regexp values to differentiate different types of checkin and colour code them accordingly. Play around with the other values, with TakeSnapshots set to false and re-run the visualisation until you get something you are satisfied with. Then run one more time with TakeSnapshots=true to save off the frame images. You can run with the new configuration by running 'ant run data/your_project.config'
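For reference, the relevant fragment of my project config ended up something like this (a sketch: <number> stands in for your own file's timestamp, and SnapshotLocation shows the stock default, if I remember it correctly):

InputFile=data/realtime_sample.<number>.xml
TakeSnapshots=true
SnapshotLocation=frames/code_swarm-#####.png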

After running with TakeSnapshots enabled, you'll have a set of images in the ./frames directory, (controlled by the SnapshotLocation option in the config file). The final step is to assemble those into a movie. The easiest way I found to do this is to use the command-line utility, ffmpeg. There are a variety of ways to install ffmpeg, but the simplest way seems to be to install ffmpegX and then extract the binary from the application bundle. You can also get it using Fink or MacPorts. If you want to use an audio track with your visualisation, you will also probably require LAME. With ffmpeg working, it is simple to point it towards the image files from codeswarm and produce the final movie. The finishing touch was adding some music from an mp3 file, then limiting the duration via the -t switch, to end when the video frames ran out, rather than playing all of the music.

ffmpeg -i frames/code_swarm-%05d.png -i 6_sym.mp3 -qmax 15 -t 100 -f image2 -r 24 <output_filename>.mpg

You can run 'ffmpeg' without any switches to get help on the options. If all goes well, you should end up with an MPEG format video in the file <output_filename>.mpg.


Dec 6, 2008

metrics

I listened to Brian Bailey talking on the missing metrics for verification coverage at the Certess booth on Wednesday. This was a really interesting talk, discussing the standard code coverage and functional coverage metrics and pointing out that we don't currently have a good metric to track whether we are doing useful verification. Functional coverage is used for this, but care needs to be taken to ensure we are covering the right things.

There's a danger that functional coverage, which was originally designed to track the quality of random stimulus, is used only to track that stimulus. In this way, we might never consider whether the behaviour is actually checked. The same ideas appeared in David Robinson's verification planning presentation at DAC: requirements and functional coverage without checkers are a waste of time and effort. It is possible to get 100% coverage with lots of cover points and no enabled or implemented checkers in a design. Careful testbench design, or tools like Certess's, can be used to address these issues, but only once you realise that functional coverage isn't a magic bullet.

I also saw a demo of the Certess product, Certitude. It randomly inserts errors into the device under test (DUT) to give a way to evaluate the quality of the testbench surrounding the DUT. The basic idea is that if the design is mutated in a particular way, the testbench is good if it can detect the error, and has a hole if that failure mode cannot be seen by the checkers. This artificial bug insertion is something I do in an ad-hoc way when developing a testbench; Certitude automates and expands on this commonly used approach.

You can read a paper from Brian on this topic.


Aug 13, 2008

a few good men

jackson pollock

Interesting post over on 43Folders on planning and task estimation. I've certainly experienced these problems in the past but hadn't heard them described as the planning fallacy or optimism bias. The suggested ~44% buffer to add is quite depressing, but probably close to true based on my experience. That means in a typical 5 day work week, you'll be lucky to do useful work for even 3 of them.

The flip side of planning is probably something we've all experienced too. You do your best to come up with a realistic schedule, then you are told that that is unacceptable and to reduce it to a wildly unreasonable, unrealistic estimate. Everyone knows it will be unachievable, but that's what you work towards. No wonder so many projects slip - they finish right around when they would have finished with realistic planning. Now that isn't to excuse projects on a terminal death march or those that have gone wildly out of control. But so often, I've been on projects where the end date was set before the planning was done.

In fact, I think in those cases a realistic schedule might mean the project would never start, so everyone puts on their rose-coloured glasses and agrees to an insane schedule that meets the deadline. Then the project starts to slip; often before anyone has done any work, it is already late. Over my career I've collected metrics on the various projects I've worked on, formally and informally. I've found quite a bit of resistance to using this real-world project data for subsequent planning sessions. Mostly that's been the case when a certain schedule was required with a set number of people, and the nasty reality of past experience meant the numbers wouldn't add up correctly. Those projects finished when we originally thought they would, too.

So how do we move beyond this? Is it really that we are so bad at estimating how long things will take? Or that we can't accept the numbers that we come up with because of the time to market pressures and realities of the market place? Do the planners want to know the truth right at the start, or deal with the successive revelation of the nasty reality over time? Execution by one thousand slips or just planning business as usual?


Jun 26, 2008

Git - version control done right

concert hall

As I've started working on this small CPU project, one of the first decisions I've been considering has been which version control system to use. I've been a user of subversion for most of my personal projects for several years now and am currently using it at a client. As a result I'm quite familiar with the ins and outs of using it on a variety of sizes of projects. I've become more aware of distributed systems, such as Git and Mercurial over the last year, but haven't really been able to get my head around the advantages of them. In particular, the quote below from Linus Torvalds has been in the back of my mind.

"The slogan of Subversion for a while was 'CVS done right', or something like that, and if you start with that kind of slogan, there's nowhere you can go. There is no way to do CVS right."

- Linus Torvalds

One of the main source control issues I've seen on several of the projects I've worked on has been the aversion most users have to branches. Typically there is a big central source repository that everyone checks out from. You then develop in your own little world. When the particular piece of work is complete, you check it back in. Usually there is a fairly high barrier or cost to those commits, with sets of test suites that you must pass before you can commit your code back to the central repository. The checks take hours to run and you cannot check back in until your code passes all the tests, otherwise everyone else is at risk. But I always found that if I was working on something non-trivial, I'd like to checkpoint my progress half way, committing it to just a local branch, then work on further. That would give me the confidence to make larger changes, safe in the knowledge I could revert to a midway working point. That's what a branch would be for, after all, but not if branches are hard to make and not if the commit cost is so high. So we never did that, working for days or weeks before committing any changes.

The second common frustration I've seen with a centralized repository occurs when two people are working closely together on a piece of the system. This happens to characterise almost every verification endeavor, for example. By common definition, the verification and design work should be done by two different people, just to get extra eyes on the spec. This avoids duplicating erroneous assumptions about the design and is fundamental to the whole process. As a consequence, we are almost always faced with the situation where changes need to be made by two or more people, in distinct parts of the code (e.g., testbench and rtl) but cannot be checked in because of mutual dependencies. The changes depend on each other and all the commit checks will fail for either change on its own. Various ways around this exist, disabling affected checks in the commit scripts, copying files into each others workspaces and other hacks. All because fundamentally the centralised server approach, with costly branches and high commit costs, doesn't really let this sort of work proceed in an effective way.

The third frustration is the general speed of the repository. Time to check things out, time to do merges, how long it takes to do a diff or an update. These operations can usually mean a break for coffee or a walk around while the tool fetches the changes, compares them and attempts to merge it all together. Compound that by working in remote sites or across multiple geographic locations.

Git claims to solve these problems and be a whole lot faster at the same time.

The key is in breaking away from a centralised server. The database is distributed to every developer. As a result, everyone works on their own branch by default. Making further branches is trivial, because they don't get sent to every other developer. Fewer issues with namespace collisions when naming a branch, no real concern about checking code in and someone else getting your partially finished work. Earlier today I'd listened to Joel Spolsky and Jeff Atwood talking about the fact that Git makes branching trivial, but I didn't really understand why until I watched a really interesting presentation from Linus Torvalds on the subject. It is supposed to be a talk about Git, but really he focuses almost exclusively on the advantages of a distributed repository. I'd initially thought the real advantage was the 'always available' nature of a distributed repository, so that you could work on a plane or generally away from a network and still be able to check in, look at histories and all the things you normally need the central server access for. That's certainly part of the reason why it is interesting, but the branching and merging cost reduction that Git claims to offer is a much bigger deal.

For my second source of frustration above, Git also provides a solution. As there is no central repository, everyone can pull and push data to each other. The verification engineer and designer can exchange files more easily, through a tracked, version controlled system, rather than the usual sideband exchanges or hacks to the check-in scripts. Git also addresses that third issue, because all of the files are local and it has been designed for performance. Network overhead isn't an issue for a diff or history request as you have all the data locally. Merges are similarly less painful. The claimed performance is impressive and part of the reason why I want to try Git out.

Now, the most glaring problem with all this is that it sounds like anarchy. There is no central organisation and check-ins can happen any time, so where did all the quality assurance go? Linus talks about the network of trust relationships in his presentation. You can still have acceptance tests on when you actually pull data from a particular user or set of users. You can require them to run a battery of tests before they are allowed to share their work with the rest of the project. The usual checks and balances can be put back in place for when the whole database gets reassembled, but individual developers or groups of designers can work more efficiently in a sub-repository. Git also supports hierarchical projects that combine various blocks of code; in fact, that seems to be the preferred use model. Each sub-system on a design would be a unique Git repository. It could be broken down even further, with each IP block in its own repository. The general approach that has been used in the past, with quality checks, can still be used with some changes, as a gate on when larger merges take place. This probably requires some trusted people in the organisation to act as gatekeepers or guardians for each level, but the basic methodology shouldn't be too difficult to layer on top.

You can read a lot more about Git on the homepage, including conversion documents from other common source control systems and details on the actual commands to use. Looking through the SVN conversion document, the git command syntax appears a bit cleaner and generally more intuitive to me. I also played around with the merge and diff tools and they seem powerful. It was very easy to create and populate a repository, for example. I plan on using it for the next few projects I work on to get a feel for how really useful it is and where the issues are hidden.
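For a flavour of that last point, creating a repository and a cheap local checkpoint branch is just a handful of commands (standard git commands, sketched from memory):

git init
git add .
git commit -m "initial import"
git checkout -b checkpoint    # a local branch nobody else ever needs to see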

Edit to add: I found this draft version of the differences between Git and Subversion quite useful.


Jun 20, 2008

a somewhat crazy notion

Death Valley, after the storm

Many ideas have been whirling around in my head since being at DAC. I've been inspired to learn some new things, starting with the Open Verification Methodology but also revisiting some of the Electronic System Level tools and flows that I've worked with in the past. I'm interested in exploring visualization techniques and tools and how they might be applied to verification and design. I'd also like to learn more about a few of the more interesting formal verification tools, like OneSpin 360 MV and maybe explore what is possible with ESL tools like Bluespec's SystemVerilog flow or the various other similar tools that are out there.

I have a difficult time learning things just for the sake of it, tending to be more driven by necessity rather than idle curiosity. I've been doing some work based around a small CPU core and started getting frustrated with the way the CPU was architected. This led me to start considering designing my own CPU, just for fun. Partly as a motivation to crack open a Hennessy & Patterson book that I've been meaning to read for a few years, partly to see if I can do it, partly as a vehicle to hang all those other ideas upon.

I've been looking around the web, browsing on OpenCores and finding humbling projects, such as HomebrewCPU, which is a Minix-compatible CPU entirely constructed from discrete gates. You can even browse the web pages that it is serving or telnet in to it! To my way of thinking, that is slightly nuts - impressive, but nuts all the same - five wire-wrapped boards to debug. My background is in FPGAs and that seems the perfect technology for this sort of exploration - I'm also thinking along the way that I might be able to play with synthesisable verification or FPGA-enhanced verification/ emulation, as well as possibly using this as a platform for a reconfigurable architecture. Lots of ideas, potential and possibilities. It will also give me a chance to re-engage with FPGA technologies and learn more about the state of those tools. The various tools are getting to a fairly mature point, and a simple pipelined CPU shouldn't require too much work but should still be complex enough to do interesting things with.

I've been looking at Xilinx and Altera to get an understanding of their current tool flows, trying to work out language support and maturity - which would be the best option for SystemVerilog, where the best simulation options are, and that kind of thing. No real conclusions yet, but both have trial versions of what appears to be a complete tool chain, so I will probably drive a small example through both flows as a pipe cleaner.

Then of course there are the more fundamental religious issues - CISC or RISC, what ISA to use. Roll my own, pick up an already defined but open architecture, or something in between? I'm looking for suggestions in this respect - I know ARM are quite litigious when it comes to cloning their ISA, so I'll be avoiding that, but OpenSPARC might well be a good option. Any other suggestions? I'm not sure if the early MIPS ISAs are cloneable without problems? Maybe I could go really back to my roots and implement a Z80 architecture. The advantage of picking on an existing ISA is that the tools come mostly for free. While porting gas and gcc to my own ISA could also be an interesting experiment and learning experience, it would probably be more of a distraction than I want.

I am a fan of the Python language and tend to write most of my personal projects in it. As a result, I'm intrigued by the potential for writing the core in Python, using some of the available extensions and libraries. Two packages seem to already exist: MyHDL and PyHVL. MyHDL is a Python extension that lets you express parallel behaviour that can then be automatically translated to Verilog or VHDL. PyHVL provides the other piece of the Python puzzle, enabling high-level verification in Python. So potentially I could do the design and verification in Python, then drive through into an FPGA flow for implementation. JL jokingly mentioned the potential for an OVM port to Python, but maybe it isn't such a crazy notion. The thing Python is fantastic for is expressing complex ideas quickly and without a lot of fuss or housekeeping. From the verification perspective it seems a perfect match, as I can focus more on the testing and less on the language. I'm a bit more skeptical about using it on the design side, but I think it might be worth a look.

To kick things off, I found the description for a minimal CPU on opencores. This is a really basic 8-bit processor, 4 op codes, couple of registers and a very simple, small architecture, yet it can still do some useful function. This evening I wrote a Python ISS for it, just to prove out some ideas. Took about an hour to get a working cycle-based ISS together for the architecture. Of course, immediately the next thing you need is an assembler and that was fairly easy to put together in Python too. Nothing particularly complex, but a two pass assembler that supports labels and constants and generates code that runs on the core. I'm able to assemble and run the example greatest common divisor (Dijkstra's algorithm) described in the paper and it has given me a good indication of the direction to go. So far, my couple of hour project can assemble and execute the following code:

start:
    NOR allone      ; akku == 0
    NOR b
    ADD one         ; akku = -b

    ADD a           ; akku = a - b
                    ; Carry set when akku >= 0
    JCC neg

    STA a

    ADD allone      
    JCC end         ; A=0 ? -> end, result in b

    JCC start

neg:
    NOR zero
    ADD one         ; Akku = -Akku

    STA b
    JCC start       ; carry not altered

end:
    JCC end

a: 10  ; a & b are the two values to consider
b: 30

allone: 0xff  ; various constants
zero:   0
one:    1
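The ISS itself isn't much bigger than the sketch below. This is a reconstruction rather than the actual code, assuming the encoding from the opencores description (two opcode bits, six address bits) and that JCC always clears the carry flag, which the loop structure above appears to rely on:

def run(mem, max_cycles=1000):
    # mem is a list of 8-bit values holding both code and data
    acc, carry, pc = 0, 0, 0
    for _ in range(max_cycles):
        op, addr = mem[pc] >> 6, mem[pc] & 0x3f
        pc += 1
        if op == 0:                          # NOR: acc = ~(acc | mem[addr])
            acc = ~(acc | mem[addr]) & 0xff
        elif op == 1:                        # ADD: carry set on overflow
            total = acc + mem[addr]
            acc, carry = total & 0xff, total >> 8
        elif op == 2:                        # STA: store accumulator
            mem[addr] = acc
        else:                                # JCC: jump if carry clear
            if not carry:
                pc = addr
            carry = 0                        # assumed: JCC always clears carry
    return mem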

Next step is to describe this trivial core in a synthesisable form and see how I get on running it through one or two FPGA flows. A few tests and some verification could be useful too! For FPGAs I'm much more used to the general suck it and see style of testing that is the norm. Synthesize, place and route and see if it works. In the last several years I've been working on much larger ASICs so have certainly seen the value of more robust verification and I think FPGA technology has probably spent too much time as the wild frontier of design robustness and testing. As this project progresses I want to explore what the best balance is for testing and how the test environments can use the FPGA to accelerate testing along the way.

So plenty of things up in the air but I think this could be fun.



Jun 6, 2008

DAC for verification

Each year, John Cooley does a great job previewing the companies at DAC. From that list, here's the ones that sound interesting to me.

OneSpin's 360MV tool does something called "gap-detection" plus timing diagrams on your design's System Verilog Assertions. (booth 625)

Real Intent Meridian CDC for clock domain crossing verification. Formal analysis and interfaces to simulation. (booth 2540)

NuSym DeNibulator's "intelligent testbench". It hunts down your hard-to-find coverage points and automatically tweaks your TB to reach them. It's next-gen constrained random. (booth 379)

Certess Certitude testbench error injection. (booth 324)

Another "intelligent testbench" is Mentor inFact - graphical tool that generates, grades, and then upgrades TBs. It now drives existing e, Vera, SV, or C/C++ TBs. Does OVM & VMM. (booth 2301)

CebaTech is showing their C2R Compiler, which takes untimed ANSI C and outputs Verilog RTL. (booth 760)

Forte Cynthesizer v3.4 (SystemC design) adds support for Power Compiler for "best-in-class area, performance, and now power results" and "management of ECOs by graphically mapping RTL back to the original SystemC design" and inter-block interfaces. (booth 1645)

Mentor's Catapult C synth & Vista ESL tools. (booth 2301)

Carbon's Model Studio does Verilog-RTL-to-C conversion for simulation, to get early models for architectural work. (booth 2467)

Synfora Pico Extreme C synthesis tool (booth 329)

Bluespec: "general purpose high-level synthesis & simulation for modeling, verification and implementation". (booth 2367)

Imperas: yet-another-ISS tool, OVPsim, for embedded SW. (booth 467)

Mirabilis VisualSim does "graphical SystemC TLM 2.0 import without any code development; and power estimation of the full system." (booth 778)

Steve Golson's "Four Principles of Flow Engineering" DAC Tuesday at 10:30 AM, Room 206AB

Dassault Synchronicity DesignSync Cadence data management tools. (booth 620)

EVE ZeBu is showcasing PCIe and AXI synthesizable transactors and its System Verilog support for custom transactors. (booth 301)

Mentor Veloce "using mixed System Verilog and SystemC based upon System Verilog DPI standard 2.0" with Nucleus embedded RTOS (booth 2301)

Synfora Pico Extreme FPGA (booth 329)

VeriEZ EZVerify covers your design, assertions and testbench, with added full System Verilog support this year plus VMM/OVM checking. (booth 1936)

Veritools usually has linters, code coverage and waveform viewers. Supports SV dynamic objects. (booth 1334)

Denali PureSpec System Verilog methodology support (OVM,VMM, AVM) in all IP blocks. (booth 1611)


May 29, 2008

is this thing on...?

Thomas J. Watson, the president of IBM, once famously didn't say "I think there is a world market for maybe five computers." He is still widely quoted as having said it, and it is usually trotted out as a good example of why we shouldn't make predictions about the future of technology, mainly because those predictions will almost always make us look entirely foolish. The title of this blog is based on that quote, in the hope that it'll discourage me from making too many painful statements about what I think the future of EDA and verification might be, but that's what this blog is going to be about.

Electronic design automation and functional verification are two pieces of the puzzle aiming to help close the design gap in the semiconductor industry. That's the gap between the number of transistors we can put on a piece of silicon and the number of transistors we can usefully put together to produce a working system that does something useful, in a reasonable period of time. The device physics guys have done a great job of getting well ahead of what we can usefully design. The main gap doesn't really seem to be what can be designed, though; it is what can be tested and verified to actually do what it is supposed to do.

There are more challenges further down the pipe too: timing closure looms ever larger as a problem, and further reductions in geometry threaten the basic assumptions that let us typically ignore the nasty analog reality and pretend we are in some digital fantasy of ones and zeros. Those are all big problems, or at least getting bigger, but functional verification is swallowing vast amounts of engineering time on projects right now and we seem to be getting ever further behind the curve. (I feel I've already made two potential five-computers kinds of statements in just this one paragraph.) EDA tools keep promising great leaps forward, but we still seem to be seeing the same promises and not so much progress. Raising the abstraction level of the design languages, increasing the quality of the verification, more reuse and large amounts of money invested in creating IP - but largely the industry still appears to be where it was 10 years ago, just with more people working ever harder on each product.

The one saving grace in all this is that there is quite the demand for semiconductor devices. If you start counting up all the computers, portable devices, smart cars and embedded processors in use around your life, you'll probably quite quickly realise you've maxed out that world market for 5 computers all on your own. In fact it is probably closer to 50 computers or computing devices in use around you. So at least the demand for products is there, even if we aren't quite sure how to design them all effectively, yet.

