Saturday, April 12, 2014

System Level Verification: What, Another Framework?


Frameworks have been enormously helpful in making testing and verification productive and reusable. In the RTL verification space, AVM, VMM, OVM, and UVM have all contributed to standardizing a pattern for RTL verification that encompasses encapsulation and reuse, automation for stimulus generation, and mechanisms for results checking. In the software space, frameworks such as JUnit and CPPUnit have likewise provided structure around standardizing how test results are checked and how test suites are composed and executed in an automated fashion.

As more focus is given to verification issues at the SoC and system level, it makes sense to ask: are the requirements for a verification framework in a system-level environment met by existing (primarily RTL-centric) verification frameworks, or is there something fundamentally different at the system level? As it turns out, there are some rather unique aspects of system-level verification that make existing verification frameworks unsuitable in this space.

Perhaps the most visible difference in a system-level design is the presence of embedded processors that are simultaneously part of the design and part of the verification environment. The verification framework must enable automation for test generation, as well as facilitate managing the processors as design and verification resources in configurations ranging from 'bare metal', to an OS plus a test, to an OS running application software alongside a test.
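As a rough illustration, here is one way such a framework might model the per-processor software configuration. This is a minimal sketch; all of the names (proc_sw_config, proc_resource, system_env) are hypothetical rather than taken from any existing framework:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of how a framework might describe each embedded
// processor as both a design resource and a verification resource.
// All names here are illustrative, not from an existing framework.

// The software stack provisioned on a given processor.
enum class proc_sw_config {
    bare_metal,   // test code only, no OS
    os_test,      // an OS is booted and the test runs as a process
    os_app_test   // OS plus application software plus a test
};

struct proc_resource {
    std::string    name;    // e.g. "cpu0"
    proc_sw_config config;  // how this processor is provisioned
};

// A system-level environment manages a collection of processors,
// each potentially provisioned differently for a given test run.
struct system_env {
    std::vector<proc_resource> procs;
};
```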

Another difference in a system-level environment is that verification must look both forward and backward in the verification process. It is a requirement that the same test scenario be able to run in an SoC-level simulation/emulation context that includes a SystemVerilog/UVM testbench, in a prototype context (FPGA or first silicon), and potentially on the end product. Now, that certainly doesn't mean that the same verification will be done at each step: the purpose and goals of verification in each context are quite different. However, being able to re-run the same scenario in two verification contexts provides continuity between the contexts and avoids having to start from scratch when changing contexts. For example, consider an SoC-level environment in simulation. Much of the test stimulus is still hardware-centric, but early embedded-software scenarios are being executed. When moving to emulation, it is enormously valuable to be able to run the same scenario that was proven to run in simulation rather than develop a test from scratch. Continuity is even more helpful when moving from emulation (which still largely maintains the appearance of simulation) to an FPGA prototype environment that is radically different.
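One way to achieve this kind of portability is to write scenarios against an abstract platform interface, so that only the interface implementation changes between simulation, emulation, and prototype. Below is a minimal sketch of the idea; platform_if, the DMA register map, and the scenario itself are all hypothetical:

```cpp
#include <cstdint>

// Each verification context (UVM testbench, emulation transactor,
// prototype board driver) provides its own implementation of this
// hypothetical interface; the scenario code itself never changes.
class platform_if {
public:
    virtual ~platform_if() = default;
    virtual void     write32(uint64_t addr, uint32_t data) = 0;
    virtual uint32_t read32(uint64_t addr) = 0;
};

// Illustrative register map for an imaginary DMA engine.
static const uint64_t DMA_SRC  = 0x40000000;
static const uint64_t DMA_DST  = 0x40000004;
static const uint64_t DMA_CTRL = 0x40000008;
static const uint64_t DMA_STAT = 0x4000000c;

// The same scenario runs unmodified in any context that supplies a
// platform_if implementation.
bool dma_copy_scenario(platform_if &plat) {
    plat.write32(DMA_SRC,  0x1000);
    plat.write32(DMA_DST,  0x2000);
    plat.write32(DMA_CTRL, 1);                    // start the transfer
    while ((plat.read32(DMA_STAT) & 1) == 0) {}   // poll for completion
    return true;
}
```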

As mentioned in a previous post, system-level environments tend to look more like cooperating islands of test functionality than the monolithic testbench used for block- and subsystem-level RTL verification. A system-level verification framework must enable reuse of verification infrastructure across these islands, as well as facilitate the cooperation of these islands in carrying out the broader verification task.

Just because the requirements are different for a system-level verification framework doesn't mean that its design must start from first principles. Hardware- and software-centric test frameworks have been in development for over a decade (some would argue much, much longer), and there is much to be gained from the features that, over time, were found to be useful for verifying hardware and software.

I've recently started work on a lightweight C++-based verification framework that targets the requirements of system-level verification. The framework borrows features and patterns from existing verification frameworks, and adds new ones to address some of the unique requirements of driving embedded-software tests and coordinating the multiple 'islands' of a system-level verification environment. The framework is designed to be encapsulated in environments as diverse as UVM and special-purpose verification hardware accelerators. The next few blog posts will explore some of the specifics of this developing framework.

~Everything is a system -- some are just smaller than others

Saturday, March 29, 2014

System-level verification: Islands vs Continents

As the saying goes, "There is nothing permanent except change". However, even with constant change, similarities with what came before abound, and true and complete discontinuities are relatively rare. This is certainly true in the functional verification space as we begin looking beyond the methodologies that served us well for unit-level and subsystem-level verification and ask "what's next?". There are many similarities between the requirements for a system-level verification framework and those for frameworks targeting unit-level and SoC-level environments. In both cases, encapsulation, reuse, abstraction, and test automation are important. However, there are also fairly significant differences, the biggest of which involve testbench structure, time-correlation expectations and requirements, and modeling languages.

In unit-level and SoC-level verification, testbench environments tend to be monolithic. The very fact that it is common to refer to 'the testbench' highlights the all-encompassing nature of the testbench environment in unit- and subsystem-level verification. By contrast, system-level (and to a certain extent SoC-level) verification tends to be more distributed -- more like loosely-connected islands than the all-encompassing continent of a unit-level testbench environment.

In an SoC-level verification environment, the primary 'island' is the verification-centric embedded software running on the processor or processors, effectively verifying the system from the inside out. This software may run autonomously relative to the testbench that is testing the design from the outside in, or it may be coordinated -- either loosely or quite tightly -- with the activity of the testbench surrounding the design.
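One common way to implement this kind of coordination is a shared-memory mailbox that both sides poll. The sketch below is purely illustrative: the mailbox layout, its address, and the checkpoint protocol are assumptions, not a description of any particular environment.

```cpp
#include <cstdint>

// Hypothetical mailbox at an address agreed upon by the embedded test
// software and the external testbench. Layout and address are
// illustrative assumptions.
struct mailbox {
    volatile uint32_t sw_status;   // written by the embedded software
    volatile uint32_t tb_command;  // written by the external testbench
};

static mailbox *const MBOX = reinterpret_cast<mailbox *>(0x50000000);

// Embedded-software side: announce a checkpoint, then wait for the
// outside-in testbench to acknowledge before proceeding. The testbench
// side does the mirror image through its own access path.
void sync_checkpoint(uint32_t id) {
    MBOX->sw_status = id;               // "I have reached checkpoint id"
    while (MBOX->tb_command != id) {}   // loose synchronization: busy-wait
}
```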

In a system-level verification environment, the 'island' effect is even more pronounced. System-level verification is typically carried out in the lab with either an FPGA prototype or first silicon. In this case, the testbench will be divided into islands such as test equipment connected to the design interfaces, test code running on the embedded processors, and specialized test hardware built into the chip.

A key requirement in unit- and SoC-level testbench environments has historically been tight time synchronization and correlation of activity. Given the simulation- and emulation-centric nature of unit-, subsystem-, and SoC-level verification, this makes perfect sense: since the execution engine maintains a coherent view of events and time, the testbench environment can maximize modeling ease and the repeatability and predictability of results. However, this global view of time comes at the cost of execution speed. Simulation-based testbench environments are by and large single-threaded, and remain largely unable to take advantage of the recent explosion in the availability of multi-core machines to accelerate simulation speed.

By contrast, a system-level verification environment cannot afford to sacrifice the much higher execution speed delivered by an FPGA or first-silicon prototype in order to maintain lock-step synchronization across the entire environment. Even if the higher execution speed could be sacrificed, maintaining full time synchronization would artificially constrain the system and make the results of system-level verification impossible to trust.

Finally, verification frameworks designed for use at the unit to SoC level have typically been written in a verification-centric language such as Vera, 'e', or SystemVerilog. This makes sense, of course, since these languages provide features specific to RTL verification. However, the fact that these languages are typically tightly tied to simulation environments makes it nearly impossible to reuse verification IP and tests created with them in a system-level environment. A system-level verification framework is essentially constrained to use a 'lowest common denominator' language in order to ensure maximum reuse.

A verification framework that seeks to provide value for system-level verification must be designed from the ground up with these requirements in mind. Over the next few posts, we'll have a look at how these requirements are being addressed in a new system level verification framework currently being developed.

~Everything is a system -- some are just smaller than others

Tuesday, March 18, 2014

Verification Frameworks and System-Level Verification

It seems the last decade or so has been the decade of the verification language and the verification framework. From SystemC, Vera, and 'e' to SystemVerilog, and from VMM and AVM to OVM and UVM, a lot of focus has been placed on making design verification engineers productive in creating verification environments and tests. By and large, these verification languages and frameworks focused on block-level and subsystem-level verification -- areas where automating application of test stimulus at the signal level was the name of the game. The world changes, however. Today, there is growing interest in verification at the SoC and system level. At this level, where embedded software and high-level test stimulus are important, frameworks that are tied to RTL-centric language features are a bit of an impedance mismatch -- despite how relevant and valuable they are for unit- and subsystem-level verification.

Looking forward, the question must be raised: given the importance of verification frameworks in enabling productive, modular, and reusable verification in RTL-centric environments, might a verification framework focused on system-level verification bring the same benefits? As you might guess, I believe in the value of a verification framework that addresses the requirements of system-level environments, which differ in important ways from those of RTL-centric environments.

Now, just because there might be value in a different verification framework focused on the needs of system-level verification doesn't mean that we have to start from scratch in designing this framework. The experiences from the past decade in terms of general requirements for and useful attributes of a verification framework are invaluable in informing the core elements of a new system-level verification framework.

So, the past informs the future even as the environment and requirements change. Over the course of the next few blog posts, I'll outline more details on key attributes of a system-level verification framework.

Are you doing verification at the system level? If so, what verification framework are you using?



Sunday, March 16, 2014

SVEditor: What's that reference? (Part 1)


One key feature of integrated development environments -- especially those for object-oriented languages -- is the ability to find references to a data type, a method, or a data field. A few months back, I implemented initial reference-finding functionality focused on module declarations and module instantiations. This functionality was used to support the design hierarchy view. Being able to quickly identify root modules by finding module declarations that were not referenced in the project was key to making the design hierarchy view efficient on large designs. Now, I've started work on more general support for locating references to types and fields.

SVEditor creates an AST (abstract syntax tree) for each source file that it parses. The ASTs are stored on the filesystem, and the most recently used are cached in memory. This enables SVEditor to manage large projects without being limited by the amount of memory available to Java, as well as avoid re-parsing source files on startup (provided they haven't been modified). Bringing an AST in from disk is faster than re-parsing the source, but is still a time-consuming operation. Consequently, all SVEditor operations seek to minimize the number of ASTs that must be traversed.
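Mechanically, this amounts to a least-recently-used cache sitting in front of the on-disk AST store. As a rough sketch of the idea (in illustrative C++; SVEditor itself is written in Java, and this is not its actual implementation):

```cpp
#include <cstddef>
#include <list>
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

struct ast {};  // stand-in for a parsed syntax tree

// Rough sketch of an LRU cache in front of an on-disk AST store.
class ast_cache {
public:
    explicit ast_cache(std::size_t capacity) : m_capacity(capacity) {}

    std::shared_ptr<ast> get(const std::string &path) {
        auto it = m_map.find(path);
        if (it != m_map.end()) {
            // Hit: move the entry to the front (most recently used).
            m_lru.splice(m_lru.begin(), m_lru, it->second);
            return it->second->second;
        }
        // Miss: bring the AST in from disk (still cheaper than
        // re-parsing the source), insert it, and evict if over capacity.
        std::shared_ptr<ast> tree = load_from_disk(path);
        m_lru.emplace_front(path, tree);
        m_map[path] = m_lru.begin();
        if (m_lru.size() > m_capacity) {
            m_map.erase(m_lru.back().first);
            m_lru.pop_back();
        }
        return tree;
    }

private:
    static std::shared_ptr<ast> load_from_disk(const std::string &) {
        // Placeholder: a real implementation would deserialize the
        // stored AST for this file.
        return std::make_shared<ast>();
    }

    using entry = std::pair<std::string, std::shared_ptr<ast>>;
    std::size_t m_capacity;
    std::list<entry> m_lru;  // front = most recently used
    std::unordered_map<std::string, std::list<entry>::iterator> m_map;
};
```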

Finding references is one of those global operations that requires information from all (or nearly all) the files in the environment. When performing a single reference lookup, waiting a moment for the result is not a huge problem. However, reference searching also underpins other operations. As noted above, building the design hierarchy requires reference lookups for all modules in a design -- often hundreds at least. In cases like these, reference lookups must be very fast.

The approach currently being implemented within SVEditor has two steps: coarse data collection during parsing and fine-grained data analysis during a lookup.

During parsing, all identifiers within a file are collected into a set that is associated with the file id. This per-file identifier set allows a quick check to see whether any references to a given element are likely.

During a reference-finding operation, a set of files on which to perform more-involved analysis is built based on the per-file identifier sets. This first-pass filter enables the more-detailed analysis to be performed on a much smaller set of files, while requiring very little storage overhead.
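A rough sketch of the two phases (again in illustrative C++ rather than SVEditor's actual Java code):

```cpp
#include <set>
#include <string>
#include <unordered_map>
#include <vector>

// Phase 1, built during parsing: every identifier that appears in each
// file, keyed by file id. (Sketch only; not SVEditor's actual code.)
std::unordered_map<int, std::set<std::string>> file_idents;

// Phase 2: only files whose identifier set contains the sought name
// are candidates for the expensive AST traversal.
std::vector<int> find_candidate_files(const std::string &name) {
    std::vector<int> candidates;
    for (const auto &entry : file_idents) {
        if (entry.second.count(name)) {
            candidates.push_back(entry.first);
        }
    }
    return candidates;
}
```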

Next time: a closer look at the AST analysis used to identify class field and method references.