
DAC started early with a Mentor-sponsored breakfast at 7:30. I got to see a bit about some of their testbench automation tools, which help steer coverage toward the more interesting parts of the design. A few other companies here have similar products. The aim is to move beyond user-driven constrained-random testing and let the tools help in solving the problem.
After that, I listened to Gary Smith on the state of the EDA industry. Highlights:
- It's all about the software
- Threads are dead
- C is finished, to be replaced by some useful concurrent language - brush up on your Occam, I say
- Verification is a solved problem and will be automated - don't bother being a verification engineer
- Almost 38% of companies roll their own EDA tools
His talk made for an interesting start to the day, at least.
I spent the rest of the day attending various sessions and tool demos, which I'll write more about later. I've been surprised how low attendance is. There are a lot of people from the tool companies in their booths, but not many customers for them to talk to - and that was on Monday, the free attendance day when there aren't that many technical sessions or other distractions.
The Open Verification Methodology (OVM) is a hot item this year, with many sessions discussing issues around use and interoperability. Many vendors are showing products or claiming compatibility with this new standard. SystemVerilog is the other dominant theme, though SystemC still has some traction. The last time I was at DAC, SystemC was the new kid on the block that everyone expected to take over the world; not so much any more - it has settled into a system-level modeling niche.
The disturbing thing I am seeing is how little progress has been made in the ESL tool industry over the last decade. Many years ago I was working on ARM926 designs that were not on the bleeding edge, even then. Today, many ESL companies are still just showing support for ARM9 or getting excited about the performance of their ARM7 models. That's at least 2 or 3 generations behind the latest processor technologies, and a huge leap in tool complexity to catch up. Synthesis capacities are rising much more slowly than the process technologies, and tool companies are stalling out, reimplementing the same technologies to support SystemC, SystemVerilog or whatever the new hot language or methodology is, without apparently making any significant progress.
The other problem the ESL companies face is the lack of models for their particular product lines. Many seem to be falling into the same trap - expecting customer demand to dictate which models to develop. That isn't going to work: the system designers need the models right around the time they decide they need them. There is no lead time. There certainly isn't the 6 months to a year it takes to find the right ESL company, negotiate and convince them to start creating a model, get the team together to write and verify it, then deliver it back to the system architect. When the architect wants the model, there might be a week of lead time - maybe a month - before serious work needs to start, and the whole investigation had better be mostly finished within 6 months. ESL companies need to seed their model libraries in anticipation of customer need. If they wait for the customer, they'll always be too late to be of any use, and they'll keep going round that loop. In spite of that glaring hole, ESL is another hot ticket at DAC this year. Software design and implementation continues to increase in cost and complexity, and HW/SW co-design companies are proliferating as a result.
With the apparent dearth of progress and the lack of useful, up-to-date models, I'm not too surprised that almost 40% of companies are looking inwards for their EDA tools. It might also explain where everyone is this year.
The one bright light in the ESL space is that although many of the tools and methodologies haven't changed much in the last 5 years, they do seem to have moved on from being all smoke and mirrors and snake oil to perhaps actually working. So maybe things haven't changed; they've just become real.
Many ideas have been whirling around in my head since being at DAC. I've been inspired to learn some new things, starting with the Open Verification Methodology but also revisiting some of the Electronic System Level tools and flows that I've worked with in the past. I'm interested in exploring visualization techniques and tools and how they might be applied to verification and design. I'd also like to learn more about a few of the more interesting formal verification tools, like OneSpin 360 MV and maybe explore what is possible with ESL tools like Bluespec's SystemVerilog flow or the various other similar tools that are out there.
I have a difficult time learning things just for the sake of it, tending to be driven more by necessity than idle curiosity. I've been doing some work based around a small CPU core and started getting frustrated with the way the CPU was architected. This led me to start considering designing my own CPU, just for fun: partly as motivation to crack open the Hennessy & Patterson book I've been meaning to read for a few years, partly to see if I can do it, and partly as a vehicle to hang all those other ideas upon.
I've been looking around the web, browsing OpenCores and finding humbling projects, such as HomebrewCPU, a Minix-compatible CPU constructed entirely from discrete gates. You can even browse the web pages it is serving or telnet into it! To my way of thinking, that is slightly nuts - impressive, but nuts all the same - five wire-wrapped boards to debug. My background is in FPGAs, and that seems the perfect technology for this sort of exploration. Along the way I might be able to play with synthesisable verification or FPGA-enhanced verification/emulation, as well as possibly using this as a platform for a reconfigurable architecture. Lots of ideas, potential and possibilities. It will also give me a chance to re-engage with FPGA technologies and learn more about the state of those tools. The various tools are getting to a fairly mature point, and a simple pipelined CPU shouldn't require too much work, while still being complex enough to do interesting things with.
I've been looking at Xilinx and Altera to get an understanding of their current tool flows, and trying to work out language support and maturity - which would be the best option for SystemVerilog, where the best simulation options are, and that kind of thing. No real conclusions yet, but both have trial versions of what appears to be a complete tool chain, so I will probably drive a small example through both flows as a pipe cleaner.
Then of course there are the more fundamental religious issues - CISC or RISC, and what ISA to use. Roll my own, pick up an already-defined but open architecture, or something in between? I'm looking for suggestions here - I know ARM are quite litigious when it comes to cloning their ISA, so I'll be avoiding that, but OpenSPARC might well be a good option. Any other suggestions? I'm not sure whether the early MIPS ISAs can be cloned without problems. Maybe I could go right back to my roots and implement a Z80 architecture. The advantage of picking an existing ISA is that the tools come mostly for free. While porting gas and gcc to my own ISA could be an interesting experiment and learning experience, it would probably be more of a distraction than I want.
I am a fan of the Python language and tend to write most of my personal projects in it. As a result, I'm intrigued by the potential for writing the core in Python, using some of the available extensions and libraries. Two packages already exist: MyHDL and PyHVL. MyHDL is a Python extension that lets you express parallel behaviour which can then be automatically translated to Verilog or VHDL. PyHVL provides the other piece of the Python puzzle, enabling high-level verification in Python. So potentially I could do the design and verification in Python, then drive through into an FPGA flow for implementation. JL jokingly mentioned the potential for an OVM port to Python, but maybe it isn't such a crazy notion. The thing Python is fantastic for is expressing complex ideas quickly and without a lot of fuss or housekeeping. From the verification perspective it seems to be a perfect match, as I can focus more on the testing and less on the language. I'm a bit more skeptical about using it on the design side, but I think it might be worth a look.
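To give a flavour, here's a minimal MyHDL-style sketch, based on the examples in its manual rather than anything from this project: a clocked counter written as ordinary Python, which toVerilog() turns into synthesisable Verilog.

from myhdl import Signal, intbv, always, toVerilog

def counter(clk, reset, count):
    # A clocked process is just a decorated Python function; assigning to
    # .next models the non-blocking update semantics of an HDL register.
    @always(clk.posedge, reset.posedge)
    def logic():
        if reset:
            count.next = 0
        else:
            count.next = (count + 1) % 256
    return logic

clk, reset = Signal(bool(0)), Signal(bool(0))
count = Signal(intbv(0)[8:])            # an 8-bit register
toVerilog(counter, clk, reset, count)   # emits counter.v

The same description simulates directly in Python, which is the appeal: one source for the model, the testbench and the eventual RTL.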
To kick things off, I found the description of a minimal CPU on OpenCores. It's a really basic 8-bit processor: 4 opcodes, a couple of registers and a very simple, small architecture, yet it can still do useful work. This evening I wrote a Python ISS for it, just to prove out some ideas; it took about an hour to get a working cycle-based simulator together (a sketch of its core loop follows the listing below). Of course, the next thing you immediately need is an assembler, and that was fairly easy to put together in Python too. Nothing particularly complex, but a two-pass assembler that supports labels and constants and generates code that runs on the core. I'm able to assemble and run the example greatest common divisor program (Dijkstra's algorithm) described in the paper, and it has given me a good indication of the direction to go. So far, my couple of hours of work can assemble and execute the following code:
start:
NOR allone ; akku == 0
NOR b
ADD one ; akku = -b
ADD a ; akku = a - b
; Carry set when akku >= 0
JCC neg
STA a
ADD allone
JCC end ; A=0 ? -> end, result in b
JCC start
neg:
NOR zero
ADD one ; Akku = -Akku
STA b
JCC start ; carry not altered
end:
JCC end
a: 10 ; a & b are the two values to consider
b: 30
allone: 0xff ; various constants
zero: 0
one: 1
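Roughly, the heart of the ISS looks something like this - a simplified sketch rather than the real thing, assuming my reading of the encoding in the description: a 2-bit opcode in the top bits, a 6-bit operand below it, and JCC always clearing the carry.

NOR, ADD, STA, JCC = 0, 1, 2, 3     # 2-bit opcodes, operand in the low 6 bits

def run(mem, max_cycles=1000):
    # mem is the 64-byte image from the assembler; instructions and data
    # share the same memory.
    pc, akku, carry = 0, 0, 0
    for _ in range(max_cycles):
        op, addr = mem[pc] >> 6, mem[pc] & 0x3F
        pc = (pc + 1) & 0x3F
        if op == NOR:                       # akku = akku NOR mem[addr]
            akku = ~(akku | mem[addr]) & 0xFF
        elif op == ADD:                     # akku += mem[addr], sets carry
            total = akku + mem[addr]
            akku, carry = total & 0xFF, total >> 8
        elif op == STA:                     # mem[addr] = akku
            mem[addr] = akku
        else:                               # JCC: jump if carry clear
            if not carry:
                pc = addr
            carry = 0                       # taken or not, carry is cleared
    return mem

The two-pass assembler is much the same sort of thing: one pass to collect label and constant addresses, a second to emit the 8-bit words.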
The next step is to describe this trivial core in a synthesisable form and see how I get on running it through one or two FPGA flows. A few tests and some verification could be useful too! For FPGAs I'm much more used to the general suck-it-and-see style of testing that is the norm: synthesize, place and route, and see if it works. In the last several years I've been working on much larger ASICs, so I've certainly seen the value of more robust verification, and I think FPGA technology has probably spent too long as the wild frontier of design robustness and testing. As this project progresses I want to explore what the best balance is for testing, and how the test environments can use the FPGA to accelerate testing along the way.
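Even the ISS could be made self-checking as a start. Something like the sketch below - assuming the assembler hands back a symbol table along with the memory image (an assumption on my part), so a test can poke a and b and compare the result left in b against Python's own gcd():

from math import gcd

def check_gcd(image, symbols):
    # image: assembled 64-byte program; symbols: label -> address map
    for a, b in [(10, 30), (21, 14), (9, 28), (100, 75)]:
        mem = list(image)                   # fresh copy of memory per run
        mem[symbols['a']] = a
        mem[symbols['b']] = b
        mem = run(mem, max_cycles=5000)     # run() from the ISS sketch above
        assert mem[symbols['b']] == gcd(a, b), (a, b)

The idea would be to reuse the same checks against the RTL later, with the ISS acting as the reference model.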
So plenty of things up in the air but I think this could be fun.