Dec 21, 2012

A simple turtle graphics Domain Specific Language (DSL) parser

I've been working through some of the exercises in 'The Pragmatic Programmer' while using it to mentor a colleague. In particular, I spent some time on the Domain Specific Language section, considering how little DSLs can provide flexible control. I've painfully implemented DSLs in SystemVerilog/UVM sequences in the past, and my typical approach goes: build a parser, create tokens, then dispatch to various execution functions that implement the commands. Often, the pain of building all those pieces in a language like C or SystemVerilog is enough of a barrier that I wouldn't even start.

The example in the book is a simple Logo/Turtle control language: pen up, pen down, draw, turn, etc. It keeps things simple by using single-letter command codes and at most one optional argument.

P 2
D
W 2
N 1
E 1
S 1
U

I know Python has a Logo/Turtle engine built in, so I decided to try writing a command parser/dispatcher that would work with it, letting me write commands in a text file and have them execute Python turtle commands.

Here's an example script (you can see the results of this at the end of this post):

color "green"
up
goto 0 -50 # comments are allowed
down
circle 50
up
color "red"
write "Done!"
sleep 2
exit

I thought initially about having to parse out all the commands and arguments, then writing a large switch/case statement (or the Python equivalent with a dictionary). After thinking about it a little longer, I realised I didn't have to do that at all. I could use introspection in Python to look up the available functions and, if they exist, call them. In fact, since a missing function just raises an exception, I can try executing any command and, if it fails, catch the error and move on.

So after parsing the input text stream (throwing away comments and splitting tokens on whitespace), I just try to execute each command in the global namespace. I've pulled all the turtle functions into that namespace, so any turtle function is a valid command in my little parser. The globals()[command[0]] in the code below looks up the function by name in the global namespace and then calls it with the remaining parts of the command (command[1:]), after each has been passed through Python's eval function to convert it from a string to whatever it happens to be (mainly numbers or strings). The final trick is using the * operator to take a list and pass its elements as individual arguments to the function:

globals()[command[0]](*(map(eval, command[1:])))
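For example, here's what that line amounts to for the goto 0 -50 command from the script above (a small illustrative sketch, not part of the parser itself):

command = ['goto', '0', '-50']     # the tokens after splitting the line on whitespace
args = map(eval, command[1:])      # ['0', '-50'] becomes [0, -50]
globals()[command[0]](*args)       # equivalent to calling goto(0, -50)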


And that's all that's needed to implement a full Logo-style parser and execution engine. The command syntax is fault tolerant and errors are reported with line numbers. New commands can be added easily, by defining new functions; they're automatically supported as soon as they're added to the namespace.
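For example, a new command can be added with an ordinary function definition (a sketch only; the square command here is my own illustration and assumes the turtle functions are in the namespace, as in the code below):

def square(size):
    # after defining this, 'square 50' becomes a valid script command
    for _ in range(4):
        forward(size)
        right(90)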

Being able to pull something like this together quickly lowers the bar for writing little Domain Specific Languages. Doing something similar in C is often more daunting, and it's even worse in a language like SystemVerilog, with its poor string and file handling libraries. There's real value in being able to program at such a high level; it greatly expands what's possible, or at least likely to be attempted, in a verification environment. You could do this in SystemVerilog, but how often would you even consider attempting it, rather than rejecting it as too much work?

 

# pull all the turtle commands into the global namespace, so they are valid commands
from turtle import *

# Use these imports to make 'sleep' and 'exit' usable commands ('argv' is used below to read the script filename)
from time import sleep
from sys import exit, argv

# Given a handle to a series of strings of commands, do them
def parse_and_draw(commands):
    for (line_number, line) in enumerate(commands):
        line = line.split('#')[0]  # throw away comments

        if line:  # if there is anything left after getting rid of comments

            command = line.strip().split(' ')  # parse using spaces to delimit tokens  
                                               # a big limitation of this, we can't have strings with spaces
                                               # e.g.,  "hello world" won't work as it'll get split up
                                               # into ['"hello', 'world"'] neither of which bits are valid when eval'ed

            if command[0]:  # if we have any command left 
                            # (e.g., an indented comment would disappear)

                try:
                    # The meat of the dispatcher is the next line
                    # using a try/ except means we can always try to run any command
                    # and assume that it is valid and catch if it isn't.
                    # globals() returns a dictionary of every function defined in the global namespace
                    # including all the turtle commands because of the from turtle import *
                    # command[0] is used as a key to look up the function name
                    # we then pass all of the other tokens (command[1:]) through eval (using map)
                    # and pass them as arguments to the function we looked up

                    globals()[command[0]](*(map(eval, command[1:])))

                    # a key error occurs if we don't find command[0] in the global namespace
                except KeyError:
                    print 'Unknown command', command[0], 'on line ', line_number

                    # Some other error occurred (e.g., the called function raised an Exception)
                    # report it here and continue on (if we didn't catch it, the program would end)
                except Exception as e:
                    print 'Invalid command', command[0], 'on line ', line_number,  e


if __name__ == '__main__':
    # open the first file on the command line, get commands from that and run them
    commands = open(argv[1]).xreadlines()

    # note that the parse and draw routine works on a list of commands, it doesn't know about files
    # or anything else.  This separation is useful, as we can get commands from anywhere
    parse_and_draw(commands)
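(For reference: saved as a script - say turtle_dsl.py, a name I've picked just for illustration - it takes the command file as its first argument, so something like python turtle_dsl.py example.logo would run the script shown at the top of this post.)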

 



Jan 8, 2009

maker's schedule, manager's schedule

I've always had a dread of mid-afternoon status meetings. Now maybe I understand it a bit better, thanks to Paul Graham's excellent essay on the difference between being on a maker's schedule and being on a manager's schedule. It seems to share a lot of ideas with Csikszentmihalyi's work on creativity and flow states.


Dec 6, 2008

metrics

Metal

Listened to Brian Bailey talking about the missing metrics for verification coverage at the Certess booth on Wednesday. This was a really interesting talk discussing the standard code coverage and functional coverage metrics, and pointing out that we don't currently have a good metric to track whether we are doing useful verification or not. Functional coverage is used for this, but care needs to be taken to ensure we are covering the right things.

There's a danger that functional coverage, which was originally designed to track the quality of random stimulus, is used only to track that stimulus, without considering whether the behaviour is actually checked. The same ideas came up in David Robinson's verification planning presentation at DAC: requirements and functional coverage without checkers are a waste of time and effort. It is possible to get 100% coverage with lots of cover points and no enabled or implemented checkers in the design. Careful testbench design, or tools like Certess, can address these issues, but only once you realise that functional coverage isn't a magic bullet.

I also saw a demo of the Certess product, Certitude. It randomly inserts errors into the device under test (DUT) to give a way to evaluate the quality of the testbench surrounding the DUT. The basic idea is that if the design is mutated in a particular way, the testbench is good if it can detect the error, or has a hole if that failure mode cannot be seen by the checkers. This artificial bug insertion is something I do in an ad-hoc way when developing a testbench; Certitude automates and expands on this commonly used approach.

You can read a paper from Brian on this topic.


Nov 6, 2008

Where are they now?


DAC started early with a Mentor sponsored breakfast at 7:30. Got to see a bit about some of their testbench automation tools that help steer coverage to more interesting parts of the design. There are a few other companies here that have similar products. The aim is to move beyond user driven constrained random testing and let the tools help in solving the problem.

After that, I listened to Gary Smith on the state of the EDA industry. Highlights:

  • It's all about the software
  • Threads are dead
  • C is finished, to be replaced by some useful concurrent language - brush up on Occam, I say
  • Verification is a solved problem and will be automated - don't bother being a verification engineer
  • Almost 38% of companies roll their own EDA tools

His talk made for an interesting start to the day, at least.

I spent the rest of the day attending various sessions and tool demos, which I'll write more about later. I've been surprised how low attendance is. There are a lot of people from the tool companies in their booths, but not many customers for them to talk to - and that was on Monday, the free attendance day when there aren't that many technical sessions or other distractions.

The Open Verification Methodology (OVM) is a hot item this year, with many sessions discussing issues around its use and interoperability. Many vendors are showing products or claiming compatibility with this new standard. SystemVerilog is the other dominant theme, though SystemC still has some traction. The last time I was at DAC, SystemC was the new kid on the block that everyone expected to take over the world; not so much any more, and it has settled into a system-level modelling niche.

The disturbing thing I am seeing is how little progress has been made in the last decade in the ESL tool industry. Many years ago I was working on ARM926 designs that were not on the bleeding edge, even then. Today, many ESL companies are still just showing support for ARM9, or getting excited about the performance of their ARM7 models. That's at least two or three generations behind the latest processor technologies, and keeping up means a huge leap in tool complexity. Synthesis capacities are rising much more slowly than process technologies, and tool companies are stalling out, reimplementing the same technologies to support SystemC, SystemVerilog or whatever the new hot language or methodology is, without apparently making any significant progress.

The other problem the ESL companies face is the lack of models for their particular product lines. Many seem to be falling into the same trap - expecting customer demand to dictate which models to develop. But that isn't going to work: system designers need the models right around the time they decide they need them. There is no lead time. There certainly isn't the six months to a year it takes to find the right ESL company, negotiate and convince them to start creating a model, get the team together to write and verify it, then deliver it back to the system architect. When the architect wants the model, there might be a one-week lead time. Maybe a month before serious work needs to start, and the whole investigation had better be mostly finished within six months. ESL companies need to seed their model libraries in anticipation of customer need. If they wait for the customer, they'll always be too late to be of any use, and they keep going round that loop. In spite of that glaring hole, ESL is another hot ticket at DAC this year. Software design and implementation continues to increase in cost and complexity, and HW/SW co-design companies are proliferating as a result.

With the apparent dearth of progress and the lack of useful, up-to-date models, I'm not too surprised that almost 40% of companies are looking inwards for their EDA tools. That might also explain where everyone is this year.

The one bright light in the ESL space is that although many of the tools and methodologies haven't changed much in the last 5 years, they do seem to have moved on from being all smoke and mirrors and snake oil to perhaps actually working. So maybe things haven't changed, they've just become real.


Aug 28, 2008

Zerstreutheit and the hardware design flow


Recently I've been reading about the various open-source Continuous Integration servers that are available. This chart gives a good feature comparison of many of the systems that are out there. We've been evaluating CruiseControl on some internal projects and generally trying to understand what the issues are with deploying a CI server on a hardware design project. One of the things I've been struggling with is the real, meaningful difference between continuous integration and the more typical daily build and check-in smoke tests. Scheduled builds are often described as an anti-pattern when considering CI, but as far as I can tell the only practical difference is the frequency of the builds. Certainly, you have the potential to find out about a broken build more quickly with CI, so it has less chance to impact other users. Also, you are always doing useful work, rather than re-running a build when no check-in has happened since the previous one. However, these distinctions are maybe more significant in the software world than for the typically longer build times found in hardware design projects.

This then is the key point: build time is the significant factor in CI. The real benefit of using CI successfully is that it forces you to refine your processes to keep the build as quick as possible, striving for something close to a ten-minute turnaround time, to stop things getting backed up. The consequence is that the entire Checkout-Build-Test loop keeps being optimised and refined. This doesn't just help the automated processes; it can significantly improve productivity for the developers who do these steps manually every day. That's great for the software world, but is it really practical for hardware design with current compilers and the speed of typical RTL simulation? And if not, is it worth even bothering with?

The source control tools you choose can have a big impact on the first phase of the Checkout-Build-Test loop. When an update can take a quarter of an hour or longer, the source control system becomes a significant productivity drain and stymies any chance of a quick turnaround. If merging changes, updating source and checking in a new revision takes hours, then there are real problems with the process, and you certainly won't be doing multiple check-ins per day (another fundamental CI axiom is at minimum daily check-ins for all developers; even more frequent is preferable). In Linus' talk about Git, he describes being able to do a diff and merge on an entire kernel source tree (22k source files) in less than a second. I've used other SCM environments with similar amounts of code where an update might take 30 minutes. These sorts of differences can significantly change how you work. It isn't just a matter of the time the automated task takes, but what the developer does while waiting: reading email, writing documentation, switching context to other distractions.

Similarly, if the build takes half an hour or longer before it fails with a trivial syntax error, you'll have switched to something else and then have to mentally context switch back again to work on the problem. Each of these switches has an associated, measurable attention and productivity cost. Improving the build step can have a big impact on how you manage your attention and stay engaged with the development process. A faster turnaround can stave off the onset of Zerstreutheit.

The final Test step is significantly slower for hardware design. Often this is used as a justification not to bother optimising the Checkout and Build phases, because they are comparatively much shorter. A multi-day or week-long regression might fool you into thinking that an hour-long build is relatively good. However, simulation and testing is the one step where the developer can be out of the loop with less impact; typically the user is not so tightly coupled into the testing loop once the initial bugs are ironed out. Automation can certainly help here too, re-running failing tests with waveform dumping enabled or increased logging, to present a useful working environment for debug when the developer does come back to look at the failures.
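As a rough illustration of that last point (my own sketch; run_sim and its plus-args are hypothetical stand-ins, not any particular simulator's interface):

import subprocess

def rerun_failures(failing_tests):
    # re-run each failing test with waveform dumping and verbose logging enabled,
    # so a useful debug environment is waiting when the developer comes back to it
    for test in failing_tests:
        subprocess.call(['run_sim', '+test=' + test, '+dump_waves=1', '+log_level=debug'])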

The point really is that there is still a significant advantage to be had in spending effort to optimise the SCM and compile stages in a development flow, to maximise designer productivity and attention, even if the simulation time is large. A build pipeline can also be used in the CI server to stage the build and testing feedback, to further mitigate the length of time that running tests takes. Deploying CI brings attention to how long these processes take and can help improve the entire development environment. Having fast enough tools helps developers stay focused on what they are doing, without breaks for swordfights or reading email. Optimising the build is still important, even in a hardware design environment where the runtime for regressions might be measured in days or weeks. You might think the build process is only a comparatively small part of the overall runtime for a regression, but the designers spend most of their time looping through that comparatively small part.

So what is the takeaway? Does CI have a place in a hardware design flow? I think CI servers can certainly be used to manage regressions and nightly builds, and smoke tests and scheduled-build approaches can be controlled with most of them. However, the truly continual building required to move from scheduled builds to CI seems hard to map onto hardware design, simply because of the length of the checkout/build/test loop. Tool improvements and generally faster hardware seem to be the key to increasing the frequency of integration tests, at least for now. Still, optimising the interaction between the users and the automated tools is a key and often overlooked part of developing an effective design flow, whether you plan on using CI or not.


Jun 26, 2008

Git - version control done right

concert hall

As I've started working on this small CPU project, one of the first decisions I've been considering has been which version control system to use. I've been a user of subversion for most of my personal projects for several years now and am currently using it at a client. As a result I'm quite familiar with the ins and outs of using it on a variety of sizes of projects. I've become more aware of distributed systems, such as Git and Mercurial over the last year, but haven't really been able to get my head around the advantages of them. In particular, the quote below from Linus Torvalds has been in the back of my mind.

"The slogan of Subversion for a while was 'CVS done right', or something like that, and if you start with that kind of slogan, there's nowhere you can go. There is no way to do CVS right."

- Linus Torvalds

One of the main source control issues I've seen on several of the projects I've worked on has been the aversion to branches that most users have. Typically there is a big central source repository that everyone checks out from. You then develop in your own little world. When the particular piece of work is complete, you check it back in. Usually there is a fairly high barrier or cost to those commits, with sets of test suites that you must pass before you can commit your code back to the central repository, otherwise everyone else is at risk. The checks take hours to run, and you cannot check back in until your code passes all the tests. But I always found that if I was working on something non-trivial, I'd like to make some progress and checkpoint it halfway, committing to a local branch and then carrying on. That would give me the confidence to make larger changes, safe in the knowledge that I could revert to a midway working point. That's what a branch is for, after all, but not if branches are hard to make and not if the commit cost is so high. So we never did that, working for days or weeks before committing any changes.

The second common frustration I've seen with a centralised repository occurs when two people are working closely together on a piece of the system. This characterises almost every verification effort, for example: by common definition, the verification and design work should be done by two different people, just to get extra eyes on the spec. That avoids duplicating erroneous assumptions about the design and is fundamental to the whole process. As a consequence, we are almost always faced with the situation where changes need to be made by two or more people in distinct parts of the code (e.g., testbench and RTL) but cannot be checked in because of mutual dependencies. The changes depend on each other, and all the commit checks will fail for either change on its own. Various ways around this exist - disabling affected checks in the commit scripts, copying files into each other's workspaces and other hacks - all because, fundamentally, the centralised server approach, with costly branches and high commit costs, doesn't really let this sort of work proceed in an effective way.

The third frustration is the general speed of the repository. Time to check things out, time to do merges, how long it takes to do a diff or an update. These operations can usually mean a break for coffee or a walk around while the tool fetches the changes, compares them and attempts to merge it all together. Compound that by working in remote sites or across multiple geographic locations.

Git claims to solve these problems and be a whole lot faster at the same time.

The key is in breaking away from a centralised server. The database is distributed to every developer. As a result, everyone works on their own branch by default. Making further branches is trivial, because they don't get sent to every other developer. Fewer issues with namespace collisions when naming a branch, no real concern about checking code in and someone else getting your partially finished work. Earlier today I'd listened to Joel Spolsky and Jeff Atwood talking about the fact that Git makes branching trivial, but I didn't really understand why until I watched a really interesting presentation from Linus Torvalds on the subject. It is supposed to be a talk about Git, but really he focuses almost exclusively on the advantages of a distributed repository. I'd initially thought the real advantage was the 'always available' nature of a distributed repository, so that you could work on a plane or generally away from a network and still be able to check in, look at histories and all the things you normally need the central server access for. That's certainly part of the reason why it is interesting, but the branching and merging cost reduction that Git claims to offer is a much bigger deal.

For my second source of frustration above, Git also provides a solution. As there is no central repository, everyone can pull and push data to each other. The verification engineer and designer can exchange files more easily, through a tracked, version controlled system, rather than the usual sideband exchanges or hacks to the check-in scripts. Git also addresses that third issue, because all of the files are local and it has been designed for performance. Network overhead isn't an issue for a diff or history request as you have all the data locally. Merges are similarly less painful. The claimed performance is impressive and part of the reason why I want to try Git out.

Now, the most glaring problem with all this is that it sounds like anarchy. There is no central organisation and check-ins can happen at any time, so where did all the quality assurance go? Linus talks about the network of trust relationships in his presentation. You can still have acceptance tests that gate when you actually pull data from a particular user or set of users, and you can require them to run a battery of tests before they are allowed to share their work with the rest of the project. The usual checks and balances can be put back in place for when the whole database gets reassembled, while individual developers or groups of designers work more efficiently in a sub-repository. Git also supports hierarchical projects that combine various blocks of code; in fact, that seems to be the preferred use model. Each sub-system on a design would be a unique Git repository, and it could even be broken down further, with each IP block in its own repository. The general approach that has been used in the past, with quality checks, can still be applied with some changes, as a gate on when larger merges take place. This probably requires some trusted people in the organisation to act as gatekeepers or guardians at each level, but the basic methodology shouldn't be too difficult to layer on top.

You can read a lot more about Git on the homepage, including conversion documents from other common source control systems and details on the actual commands to use. Looking through the SVN conversion document, the git command syntax appears a bit cleaner and generally more intuitive to me. I also played around with the merge and diff tools and they seem powerful. It was very easy to create and populate a repository, for example. I plan on using it for the next few projects I work on to get a feel for how really useful it is and where the issues are hidden.

Edit to add: I found this draft version of the differences between Git and Subversion quite useful.


Jun 20, 2008

a somewhat crazy notion

Death Valley, after the storm

Many ideas have been whirling around in my head since being at DAC. I've been inspired to learn some new things, starting with the Open Verification Methodology but also revisiting some of the Electronic System Level tools and flows that I've worked with in the past. I'm interested in exploring visualization techniques and tools and how they might be applied to verification and design. I'd also like to learn more about a few of the more interesting formal verification tools, like OneSpin 360 MV and maybe explore what is possible with ESL tools like Bluespec's SystemVerilog flow or the various other similar tools that are out there.

I have a difficult time learning things just for the sake of it, tending to be driven more by necessity than by idle curiosity. I've been doing some work based around a small CPU core and started getting frustrated with the way the CPU was architected. This led me to start considering designing my own CPU, just for fun: partly as motivation to crack open a Hennessy & Patterson book that I've been meaning to read for a few years, partly to see if I can do it, and partly as a vehicle to hang all those other ideas upon.

I've been looking around the web, browsing on OpenCores and finding humbling projects, such as HomebrewCPU, which is a Minix-compatible CPU entirely constructed from discrete gates. You can even browse the web pages it is serving or telnet in to it! To my way of thinking, that is slightly nuts - impressive, but nuts all the same - five wire-wrapped boards to debug. My background is in FPGAs and that seems the perfect technology for this sort of exploration. I'm also thinking that along the way I might be able to play with synthesisable verification or FPGA-enhanced verification/emulation, as well as possibly using this as a platform for a reconfigurable architecture. Lots of ideas, potential and possibilities. It will also give me a chance to re-engage with FPGA technologies and learn more about the state of those tools. The various tools are getting to a fairly mature point, and a simple pipelined CPU shouldn't require too much work but should still be complex enough to do interesting things with.

I've been looking at Xilinx and Altera to get an understanding of their current tool flows and trying to work out language support and maturity - which would be the best option for SystemVerilog, where the best simulation options are, and that kind of thing. No real conclusions yet, but both have trial versions of what appears to be a complete tool chain, so I will probably drive a small example through both flows as a pipe cleaner.

Then of course there are the more fundamental religious issues - CISC or RISC, and which ISA to use. Roll my own, pick up an already-defined but open architecture, or something in between? I'm looking for suggestions in this respect - I know ARM are quite litigious when it comes to cloning their ISA, so I'll be avoiding that, but OpenSPARC might well be a good option. Any other suggestions? I'm not sure whether the early MIPS ISAs can be cloned without problems. Maybe I could go right back to my roots and implement a Z80 architecture. The advantage of picking an existing ISA is that the tools come mostly for free. While porting gas and gcc to my own ISA could also be an interesting experiment and learning experience, it would probably be more of a distraction than I want.

I am a fan of the Python language and tend to write most of my personal projects in it. As a result, I'm intrigued by the potential for writing the core in Python, using some of the available extensions and libraries. Two packages already seem to exist: MyHDL and PyHVL. MyHDL is a Python extension that lets you express parallel behaviour which can then be automatically translated to Verilog or VHDL. PyHVL provides the other piece of the Python puzzle, enabling high-level verification in Python. So potentially I could do the design and verification in Python, then drive through into an FPGA flow for implementation. JL jokingly mentioned the potential for an OVM port to Python, but maybe it isn't such a crazy notion. The thing Python is fantastic for is expressing complex ideas quickly and without a lot of fuss or housekeeping. From the verification perspective it seems a perfect match, as I can focus more on the testing and less on the language. I'm a bit more skeptical about using it on the design side, but I think it might be worth a look.
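As a flavour of what that might look like, here's a minimal sketch in MyHDL's generator-based style (the counter example and signal widths are my own, and the exact API details should be checked against the MyHDL docs rather than taken from here):

from myhdl import Signal, intbv, always, toVerilog

def counter(clk, reset, count):
    # parallel hardware behaviour is expressed as a decorated Python generator
    @always(clk.posedge, reset.posedge)
    def logic():
        if reset:
            count.next = 0
        else:
            count.next = (count + 1) % 256
    return logic

clk, reset = Signal(bool(0)), Signal(bool(0))
count = Signal(intbv(0)[8:])
toVerilog(counter, clk, reset, count)  # writes out an equivalent Verilog module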

To kick things off, I found the description of a minimal CPU on OpenCores. This is a really basic 8-bit processor: four opcodes, a couple of registers and a very simple, small architecture, yet it can still do useful work. This evening I wrote a Python ISS for it, just to prove out some ideas. It took about an hour to get a working cycle-based ISS together for the architecture. Of course, the next thing you immediately need is an assembler, and that was fairly easy to put together in Python too. Nothing particularly complex, but a two-pass assembler that supports labels and constants and generates code that runs on the core. I'm able to assemble and run the example greatest common divisor (Dijkstra's algorithm) described in the paper, and it has given me a good indication of the direction to go. So far, my couple-of-hours project can assemble and execute the following code:

start:
    NOR allone      ; akku == 0
    NOR b
    ADD one         ; akku = -b

    ADD a           ; akku = a - b
                    ; Carry set when akku >= 0
    JCC neg

    STA a

    ADD allone      
    JCC end         ; A=0 ? -> end, result in b

    JCC start

neg:
    NOR zero
    ADD one         ; Akku = -Akku

    STA b
    JCC start       ; carry not altered

end:
    JCC end

a: 10  ; a & b are the two values to consider
b: 30

allone: 0xff  ; various constants
zero:   0
one:    1
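For a sense of scale, here's a rough sketch of the sort of cycle-based ISS loop I described above (an illustrative reconstruction rather than my actual code; the two-bit opcode encoding and the carry behaviour of JCC are my assumptions from the MCPU description):

def run(mem, max_cycles=10000):
    # mem is a list of bytes: the top two bits are the opcode, the lower six bits the address
    acc, carry, pc = 0, 0, 0
    for _ in range(max_cycles):
        op, addr = mem[pc] >> 6, mem[pc] & 0x3f
        pc = (pc + 1) & 0x3f
        if op == 0:                        # NOR addr
            acc = ~(acc | mem[addr]) & 0xff
        elif op == 1:                      # ADD addr, updates the carry flag
            total = acc + mem[addr]
            acc, carry = total & 0xff, total >> 8
        elif op == 2:                      # STA addr
            mem[addr] = acc
        else:                              # JCC addr: jump if carry clear, then clear carry
            if not carry:
                pc = addr
            carry = 0
    return mem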

Next step is to describe this trivial core in a synthesisable form and see how I get on running it through one or two FPGA flows. A few tests and some verification could be useful too! For FPGAs, I'm much more used to the general 'suck it and see' style of testing that is the norm: synthesise, place and route, and see if it works. In the last several years I've been working on much larger ASICs, so I've certainly seen the value of more robust verification, and I think FPGA technology has probably spent too much time as the wild frontier of design robustness and testing. As this project progresses I want to explore what the best balance is for testing, and how the test environments can use the FPGA to accelerate testing along the way.

So plenty of things up in the air but I think this could be fun.



Mar 6, 2008

visualising complex datasets

A great example of the sort of insight that you can glean from a lot of data, with the right sort of visualisation.
