
Sunday, April 18, 2021

SoC Integration Testing: Hw/Sw Coordination (Part 2)


Controlling the outside world -- specifically interface BFMs -- from embedded software is critical to SoC integration tests that exercise interface IP. In the last post, we showed how to pass data from embedded software to Python by tracing execution of the processor core and reading the mirrored values of registers and memory to obtain parameter values. While functional, that approach is tied to one specific message-passing scheme and is fairly labor-intensive. In this post, we'll add some abstraction and automation to improve usability and scalability.


Design Goals



While we're initially focused on providing a nice automated way to communicate between embedded software and the test harness in a simulation environment, the design goals go beyond this. The diagram above shows the basic architecture. Endpoints provide a portal for one environment to call APIs in another environment. Each endpoint supports a known set of APIs, and different endpoints will support different sets of APIs. 

Each environment interacts with APIs on an endpoint without needing to know how communication is implemented. For example, execution trace might be used to implement processor-to-Python communication in a simulation-based environment. When the design is synthesized to an FPGA, communication might be implemented via an external interface. With appropriate abstraction, neither the test software running on the processor nor the Python test code should need to change, despite the fact that data is being moved in very different ways. 

In order for this to be feasible, we'll need to collect some meta-data about the APIs.
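
Before diving into an example, here is a minimal Python sketch of what an endpoint abstraction along these lines might look like. The class and method names (Endpoint, register_api, dispatch) are purely illustrative and are not the actual library API.

```
from abc import ABC, abstractmethod

class Endpoint(ABC):
    """Hypothetical portal through which one environment calls APIs in another."""

    def __init__(self):
        # API implementations exposed to the remote environment, keyed by name
        self.api_impls = {}

    def register_api(self, api_obj):
        """Expose an API implementation so the remote environment can call it."""
        self.api_impls[type(api_obj).__name__] = api_obj

    @abstractmethod
    async def dispatch(self, api_name: str, method: str, args: list):
        """Route a call received from the remote environment to the local API.

        How the call arrives -- execution trace in simulation, an external
        interface on FPGA -- is up to the concrete endpoint implementation.
        """
```

A simulation endpoint and an FPGA endpoint would each implement dispatch differently, while the API definitions and the code that calls them stay the same.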


Example

I always find an example helpful, so let's look at the enhancements to the flow in the context of a simple design.
 


The diagram above shows the key elements of a very small SoC called Tiny SoC. We can test many aspects of integration using just software on the processor. For example, we can read registers in the peripheral devices and check that they are correct. We can carry out DMA transfers. But, we need to control the outside world when testing the full path from software through the UART and SPI devices.

Bus functional models (BFMs) or Verification IP (VIP) provide very effective ways to interact with interface protocols from testbench code. What we need in addition is a way to control these BFMs from the software running on the core in the design.


Capturing the API

Let's focus on the UART for now. Our UART BFM provides a detailed API for configuring individual attributes of the UART protocol (e.g., baud-rate divisor) and for interacting with the UART protocol a byte at a time. That's fine for IP-level testing, but is a bit too low-level for software-driven testing.

For software-driven testing, we want to instruct the BFM to do some reasonable amount of work and let it go. To help with this, the UART BFM defines a higher-level API intended for use by software. 

An example of that higher-level API is shown above. Calling the uart_bfm_tx_bytes_incr API causes the BFM to begin sending a stream of bytes starting with a specific value and incrementing. There is another API that instructs the BFM to expect to receive a stream of bytes sent by the software running on the processor.

To enable automation, we describe the Python API that we will call from embedded software using special annotations. We collect related APIs together in a class, and identify whether the methods are exported by the Python environment and called by the embedded software, or are imported by the Python environment and called by Python code. 


Since we want embedded software to call this API, the API is considered to be exported by Python. You can also see the configuration function that updates the UART's configuration (e.g., baud rate).

Each of the method parameters is given a Python3 type annotation. This enables the Python libraries to know the type of each parameter and collect the right data to pass when the functions are called. 
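
The API definition itself only appears as an image in the original post, so here is a rough sketch of the shape such a class might take. The sw_api/sw_export decorators are stand-ins for whatever annotation mechanism the library actually provides (defined as no-ops here so the sketch stands alone), and the configuration parameters are illustrative; the method names follow the ones described in the post.

```
# Placeholder decorators so this sketch is self-contained; the real library
# supplies its own annotation mechanism for collecting API metadata.
def sw_api(cls):
    return cls

def sw_export(f):
    return f

@sw_api
class uart_bfm_sw_api:
    """Software-facing UART BFM API, exported by Python to embedded software."""

    def __init__(self, uart_bfm=None):
        # Low-level UART BFM that the implementation of these methods drives
        self.uart_bfm = uart_bfm

    @sw_export
    def uart_bfm_config(self, divisor: int, parity: int, stop_bits: int):
        """Configure the BFM to match the UART IP's register settings."""
        pass

    @sw_export
    def uart_bfm_tx_bytes_incr(self, initial: int, count: int):
        """Have the BFM transmit 'count' bytes, starting at 'initial' and incrementing."""
        pass

    @sw_export
    def uart_bfm_rx_bytes_incr(self, initial: int, count: int):
        """Have the BFM expect 'count' incrementing bytes from the design, starting at 'initial'."""
        pass
```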

On the C side, we simply need functions whose signatures match those captured in the Python API definition.


While the code shown above (link) is hand-coded, we could generate it automatically based on what is specified in the Python API definition. 


Connecting to Implementation: Python

Connecting all of this up on the Python side involves wiring the relevant BFMs and API implementations together. 


The snippet above is from the cocotb test that runs when a baremetal software test is run (link). At the beginning of simulation, the test locates the relevant BFMs. The u_dbg_bfm is the tracer BFM that monitors execution of software on the processor core. This BFM implements an Endpoint, as shown in the diagram at the beginning of the post. The u_uart_bfm is the BFM connected to the UART interface on TinySoC. 

Once we have all the BFMs, we can create an instance of the higher-level UART BFM API (uart_bfm_sw) and tell the debug BFM that it should handle the embedded software calling these APIs.
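
As a rough sketch of that connection step (the find_bfm lookup, register_api, and wait_test_done calls below are assumptions about the supporting library rather than its actual API; the instance names match those in the post):

```
import cocotb

@cocotb.test()
async def entry(dut):
    # Locate the BFMs bound into the testbench. 'find_bfm' is a hypothetical
    # lookup helper standing in for the BFM library's discovery mechanism.
    u_dbg_bfm = find_bfm(".*u_dbg_bfm")    # tracer BFM watching sw execution
    u_uart_bfm = find_bfm(".*u_uart_bfm")  # BFM on the UART interface

    # Create the higher-level, software-facing UART API around the low-level
    # UART BFM, and register it with the debug BFM's endpoint so that calls
    # made by the embedded software are routed to it.
    uart_bfm_sw = uart_bfm_sw_api(u_uart_bfm)
    u_dbg_bfm.register_api(uart_bfm_sw)

    # Run until the embedded-software test signals completion (assumed API)
    await u_dbg_bfm.wait_test_done()
```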


Example C-Test
With the BFMs connected on the Python side, we can now focus on how to interact with the BFM from the software test.

The software test snippet above transmits data via the UART to the waiting UART BFM, which checks it (link). Before we can send data, both the UART IP and the external BFM need to be configured in the same way. We program the UART IP via its registers, then call the uart_bfm_config function to invoke the corresponding Python method and configure the UART BFM to match.

Next, we call uart_bfm_rx_bytes_incr to tell the UART BFM that it should expect to receive 20 bytes, with the first byte having a value of 10 and subsequent bytes incrementing by one. By telling the BFM what to expect, our test is self-checking and the required amount of interaction is small.

Finally, we again interact with the UART IP to actually send the data that the BFM is expecting. 

Next Steps
The API definition and Endpoint architecture described above provide a modular way to capture the APIs used to communicate across environments. Because the API signatures are captured in a machine-readable form, automation can also be applied when implementing the APIs for different environments. 

As I mentioned at the beginning of the post, the API and Endpoint architecture is designed so it can be applied in many verification environments -- it's certainly not restricted to just communicating between embedded software test and the test harness. I've been interested for a while in methodology for creating and verifying firmware along with the IP that it controls such that it's ready to go when SoC-integration testing begins. My next post will begin exploring how to create, verify, and deliver firmware along with an IP.

Disclaimer
The views and opinions expressed above are solely those of the author and do not represent those of my employer or any other party.

Saturday, April 12, 2014

System Level Verification: What, Another Framework?


Frameworks have been enormously helpful in making testing and verification productive and reusable. In the RTL verification space, AVM, VMM, OVM, and UVM have all contributed to standardizing a pattern for RTL verification that encompasses encapsulation and reuse, automation for stimulus generation, and mechanisms for results checking. In the software space, frameworks such as JUnit and CppUnit have, likewise, provided structure around how test results are checked and how test suites are composed and executed in an automated fashion.

As more focus is given to verification issues at SoC and system level, it makes sense to ask: are the requirements for a verification framework in a system-level environment met by existing (primarily RTL-centric) verification frameworks, or is there something fundamentally different at system level? As it turns out, there are some rather unique aspects of system-level verification that make existing verification frameworks unsuitable for application in this space.

Perhaps the most visible difference in a system-level design is the presence of embedded processors that are simultaneously part of the design and part of the verification environment. The verification framework must enable automation for test generation, as well as facilitate managing the processors as design and verification resources in configurations ranging from 'bare metal', to OS and test, to an OS running application software and a test.

Another difference with a system-level environment is that verification must look both forward and backward in the verification process. The same test scenario must be able to run in an SoC-level simulation/emulation context that includes a SystemVerilog/UVM testbench, in a prototype context (FPGA or first silicon), and potentially on the end product. Now, that certainly doesn't mean that the same verification will be done at each step: the purpose and goals of verification in each context are quite different. However, being able to re-run the same scenario in two verification contexts provides some continuity between the contexts and avoids having to start from scratch when changing contexts. For example, consider an SoC-level environment in simulation. Much of the test stimulus is still hardware-centric, but early embedded-software scenarios are being executed. When moving to emulation, it is enormously valuable to be able to run the same scenario that was proven to run in simulation and not have to start developing a test from scratch. Continuity is even more helpful when moving from emulation (which still largely maintains the appearance of simulation) to an FPGA prototype environment that is radically different.

As mentioned in a previous post, system-level environments tend to look more like cooperating islands of test functionality rather than the monolithic testbench used for block and subsystem RTL verification. A system-level verification framework must enable reuse of verification infrastructure across these islands, as well as facilitating the cooperation of these islands in carrying out the broader verification task.

Just because the requirements are different for a system-level verification framework doesn't mean that design of a system-level verification framework must start from first principles. Hardware- and software-centric test frameworks have been in development for over a decade (some would argue much, much longer), and there is much to be gained from the features that, over time, were found to be useful for verifying hardware and software.

I've recently started work on a lightweight C++-based verification framework that targets the requirements of system-level verification. The framework borrows features and patterns from existing verification frameworks, and adds features and patterns to address some of the unique requirements of driving embedded software tests and coordinating the multiple 'islands' of a system-level verification environment. The framework is designed to be encapsulated in environments as diverse as UVM and special-purpose verification hardware accelerators. The next few blog posts will explore some of the specifics of this developing framework.

~Everything is a system -- some are just smaller than others

Saturday, March 29, 2014

System-level verification: Islands vs Continents

As the saying goes, "There is nothing permanent except change". However, even with constant change, similarities with what came before abound and true and complete discontinuities are relatively rare. This is certainly true in the functional verification space as we begin looking beyond the methodologies that served us well for unit-level and subsystem-level verification and ask "what's next?". There are many similarities between the requirements for a system-level verification framework and requirements for frameworks targeting unit-level and SoC-level environments. In both cases, encapsulation, reuse, abstraction, and test automation are important. However, there are also fairly significant differences in requirements as well. The biggest differences involve testbench structure, time correlation expectations and requirements, and modeling languages.

In unit-level and SoC-level environments, the testbench tends to be monolithic. The very fact that it is common to refer to 'the testbench' highlights the all-encompassing nature of the testbench environment in unit- and subsystem-level verification. By contrast, system-level (and to a certain extent SoC-level) verification tends to be more distributed -- more like loosely-connected islands than the all-encompassing continent of a unit-level testbench environment.

In an SoC-level verification environment, the primary 'island' is the verification-centric embedded software running on the processor or processors that is effectively verifying the system from the inside out. This software may run autonomously relative to the testbench that is testing the design from the outside in, or it may be coordinated -- either loosely or quite tightly -- with the activity of the testbench surrounding the design.

In a system-level verification environment, the 'island' effect is even more pronounced. System-level verification is typically carried out in the lab with either an FPGA prototype or first silicon. In this case, the testbench will be divided into islands such as test equipment connected to the design interfaces, test code running on the embedded processors, and specialized test hardware built into the chip.

A key requirement in unit- and SoC-level testbench environments has historically been tight time synchronization and correlation of activity. Given the simulation- and emulation-centric nature of unit, subsystem, and SoC-level verification, this makes perfect sense: since the execution engine maintains a coherent view of events and time, the testbench environment can maximize modeling ease and the repeatability and predictability of results. However, this global view of time comes at the cost of execution speed. Simulation-based testbench environments are by and large single threaded, and remain largely unable to take advantage of the recent explosion in the availability of multi-core machines to accelerate simulation speed.

By contrast, a system-level verification environment cannot afford to sacrifice the much higher execution speed delivered by an FPGA or first-silicon prototype in order to maintain lock-step synchronization across the entire environment. Even if the higher execution speed could be sacrificed, maintaining full time synchronization would artificially constrain the system and make the results of system-level verification impossible to trust.

Finally, verification frameworks designed for use at unit to SoC level have typically been written in a verification-centric language such as Vera, 'e', or SystemVerilog. This makes sense, of course, since these languages provide features specific to RTL verification. However, the fact that these languages are typically tightly tied to simulation environments makes reusing verification IP and tests created with them in a system-level environment nearly impossible. A system-level verification framework is essentially constrained to use a 'lowest common denominator' language in order to ensure maximum reuse.

A verification framework that seeks to provide value for system-level verification must be designed from the ground up with these requirements in mind. Over the next few posts, we'll have a look at how these requirements are being addressed in a new system level verification framework currently being developed.

~Everything is a system -- some are just smaller than others

Tuesday, March 18, 2014

Verification Frameworks and System-Level Verification

It seems the last decade or so has been the decade of the verification language and the verification framework. From SystemC, Vera, and 'e' to SystemVerilog, and from VMM and AVM to OVM and UVM, a lot of focus has been placed on making design verification engineers productive in creating verification environments and tests. By and large, these verification languages and frameworks focused on block-level and subsystem-level verification -- areas where automating the application of test stimulus at the signal level was the name of the game. The world changes, however. Today, there is a growing interest in verification at the SoC and system level. At this level, where embedded software and high-level test stimulus are important, frameworks that are tied to RTL-centric language features are a bit of an impedance mismatch -- despite how relevant and valuable they are for unit- and subsystem-level verification.

Looking forward, the question must be raised: given the importance of verification frameworks in enabling productive, modular, and reusable verification in RTL-centric environments, might a verification framework focused on system-level verification bring the same benefits? As you might guess, I believe in the value of having a verification framework that addresses the somewhat-unique (at least as compared to those of RTL-centric environments) requirements of system-level environments.

Now, just because there might be value in a different verification framework focused on the needs of system-level verification doesn't mean that we have to start from scratch in designing this framework. The experiences from the past decade in terms of general requirements for and useful attributes of a verification framework are invaluable in informing the core elements of a new system-level verification framework.

So, the past informs the future even as the environment and requirements change. Over the course of the next few blog posts, I'll outline more details on key attributes of a system-level verification framework.

Are you doing verification at the system level? If so, what verification framework are you using?