FPGA Design Verification in a Nutshell

Alex Gnusin, Verification Methodology Specialist

FPGA Design Verification (Planning) in a Nutshell

Before wading into this topic, I’d like to state why I felt compelled to write about FPGA design verification. I recently presented a very well attended three-part webinar series, during which many attendees asked for book recommendations. I was at a loss to recommend any, though; none sprang to mind. I don’t have the time to write a book, so I decided to turn the first of the three webinars, entitled ‘FPGA Design Verification in a Nutshell (Part 1): Verification Planning’, into this blog. Enjoy.

 

Let’s start by comparing ASIC and FPGA verification (see Figure 1). With ASICs, full chip verification/validation and signoff must be 100% complete before tape-out so, not surprisingly, between 60 and 70% of the ASIC project investment will be spent on verification. Why? Because the task requires a large verification team, strict planning and the use of comprehensive/advanced testbenches. Emulation is often required too.

 

For an FPGA there is no tape-out. Granted, there still needs to be an ‘end date’ to the project, but the verification investment is much lower and needs fewer dedicated verification engineers. Conversely, designers will typically be more involved in verification because testbenches (initially basic ones) will be needed quickly - far sooner than with an ASIC - in order to start lab testing the FPGA.

 

There still needs to be an overall verification strategy, the implementation of which will involve both simulation-based and lab-based verification. Also, because the team is smaller than for an ASIC, efficiency will be key and designers will need to be more involved with block-level verification, and possibly top-level verification too.

 

Figure 1 – One major difference between ASIC and FPGA verification is that (initial) testbench development is much shorter with the latter.

 

FPGA Design Verification Stages

 

The first stage is planning, covering both RTL verification through simulation and lab testing. It is important to understand the interfaces within the design: not only the external interfaces but also the internal interfaces between blocks.

 

All interfaces should be described in the design spec, so it should be easy to develop the verification components (VCs) for them. Each VC will be either a driver, a responder or a monitor/checker. With these VCs we can build block- and top-level testbenches that should be capable of at least providing basic stimulus.
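By way of illustration, here is a minimal sketch of what a driver VC for an AXI-Stream-style interface could look like in plain Verilog. The module name, signal names and the send_packet task are my own illustrative choices, not code from any particular design.

// Minimal sketch of a driver VC for an AXI-Stream-style interface.
// Signal names and the send_packet task are illustrative, not taken
// from any specific design.
module axis_driver #(
  parameter DATA_WIDTH = 32
) (
  input                       clk,
  output reg [DATA_WIDTH-1:0] tdata,
  output reg                  tvalid,
  output reg                  tlast,
  input                       tready
);

  initial begin
    tvalid = 1'b0;
    tlast  = 1'b0;
    tdata  = {DATA_WIDTH{1'b0}};
  end

  // Drive one packet of 'len' words with incrementing data.
  task send_packet(input integer len);
    integer i;
    begin
      for (i = 0; i < len; i = i + 1) begin
        @(negedge clk);                  // drive on the falling edge to avoid races
        tdata  = i;
        tvalid = 1'b1;
        tlast  = (i == len - 1);
        @(posedge clk);
        while (!tready) @(posedge clk);  // hold the word until it is accepted
      end
      @(negedge clk);
      tvalid = 1'b0;
      tlast  = 1'b0;
    end
  endtask

endmodule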

 

It is important to run at least some test cases and exercise the basic design features before going to lab testing. Finding bugs through simulation and fixing them at this stage will help in the long run. The tests used for verifying basic design features can carry over into lab testing, and there should be few if any surprises when transitioning from simulation to lab testing (physical hardware). More advanced tests and testbenches can then be created to exercise the design’s more advanced features, leading to regression testing. Figure 2 summarizes the verification stages and flow.

 

Figure 2 – The use of VCs in simulation runs before embarking on lab testing pays dividends in the long run.

 

Teamwork

Though I mentioned earlier that FPGA design verification teams are smaller than those working on an ASIC, structurally there are similarities, and Figure 3 illustrates a typical team structure.

 

Figure 3 – Design verification is a team effort.

 

There will, for example, be a design verification (DV) prime who, as soon as the design spec is complete (or at least nearly complete), will develop the verification plan. Against this plan, the prime’s DV team will develop the VCs. Next, the DV team and designers will develop testbenches for verifying blocks and units. Both parties will then develop system-level testbenches and tests, ideally before the RTL is ready (i.e. before lab testing can begin). Also, writing easy-to-understand testbenches together with the designers makes the whole verification process more efficient.

 

Test Case Scenarios

Most designs will require a number of different types of tests: those that verify behavior well within the acceptable design space (i.e. well within the circle in Figure 4a), those that probe borderline valid behavior, and those that apply invalid stimulus.

 

 Figure 4a – Green = basic tests, red = stress tests, blue = random tests, and yellow = error tests.

 

Basic test cases send packets of data (of varying lengths) into the design, and the outputs are inspected. They prove that the VCs and the testbench work, and that everything runs together properly. As for stress tests, these exercise the design with valid but ‘extreme’ data, such as packets of maximum or minimum size. These sit within, but close to the edge of, the circle in Figure 4a.

 

For random tests, we can forget about specific test scenarios and simply throw valid data (different parts of different control packets) at the design. This can be done after, or in parallel with, stress testing. Lastly, error testing checks the design’s reaction to invalid stimuli. You can also think about the test cases as a flow; see Figure 4b.

 

Figure 4b – Test case scenarios as a flow. Each block is a verification task, which can include directed stimulus (e.g. Dir stim1 above). Not shown, but stress testing could also be included.
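To make the distinction concrete, here is a rough sketch of how stimulus selection might differ between basic, stress and random tests. It assumes a send_packet(len) driver task (such as the sketch earlier) and MIN_LEN/MAX_LEN testbench parameters bounding the legal packet length; all of these names are illustrative.

// Illustrative stimulus selection, assuming a send_packet(len) driver task
// and MIN_LEN / MAX_LEN parameters that bound the legal packet length.
integer k;
initial begin
  // Basic test: a few ordinary packets prove the VCs and testbench run.
  send_packet(16);
  send_packet(64);

  // Stress test: the legal extremes of the packet length range.
  send_packet(MIN_LEN);
  send_packet(MAX_LEN);

  // Random test: any valid length, repeated many times.
  for (k = 0; k < 1000; k = k + 1)
    send_packet(MIN_LEN + ({$random} % (MAX_LEN - MIN_LEN + 1)));
end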

 

Plotting test cases in graphical form (different flows) allows for better optimization, as unique paths (i.e. test cases) can be taken and redundancies can be removed.

 

Verification Planning: The ‘What’ and the ‘How’

 

Understandably, completeness is important: if something is missing from the plan, it is likely to be missed during verification. We need to know what we are verifying.

 

In this respect the hierarchical development and prioritization of verification properties (VPs) helps. For example, start with high-priority issues, such as protocol compliance, and (if possible) provide references to the design specification and RTL code. If a property is violated, it will then be easy to identify the relevant part of both the specification and the design. Also, software configurations and hardware modes can affect the VPs (see Figure 5).

 

Figure 5. Verification properties can depend not only on various software configurations but also on hardware modes (which tends to be the case with off-the-shelf IP, and possibly with in-house IP that is being reused). It is important to know which SW configurations and HW modes affect each VP.

 

As for how to check each of the identified properties, we must first of all define the verification methods: where and when to use simulation, formal verification and lab testing. Also, it tends to be much easier to verify properties at the block level.

 

Stimulus generation methods, such as directed, exhaustive and random, will be needed, as will checking methods. The latter will either be a single (point-in-time) check or be performed constantly. A single check is embedded in the test case and checks functional correctness at a given simulation time for that test case (see Figure 6). A constant check verifies the function’s correctness at any time; it tends to be used for more complex functions such as property verification (again, see Figure 6) and is usually paired with random and exhaustive tests.

 

Figure 6. On the left, a single check to verify register access through a simple write and read, generating an error if the data read back is not the same as was written; the check is performed at a specific time. On the right, a property check for a data bus: packets in (onto the bus) should equal packets out (off the bus).
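As a hedged Verilog sketch of the two styles (not the exact code shown in Figure 6, and with the reg_write/reg_read tasks, the packet counters and the end_of_test flag assumed to exist in the testbench):

// Single (point-in-time) check: write a register, read it back and compare
// once. reg_write/reg_read are assumed testbench tasks.
task check_reg_access(input [31:0] addr, input [31:0] wdata);
  reg [31:0] rdata;
  begin
    reg_write(addr, wdata);
    reg_read(addr, rdata);
    if (rdata !== wdata)
      $display("ERROR: reg 0x%h: wrote 0x%h, read back 0x%h", addr, wdata, rdata);
  end
endtask

// Constant check: evaluated on every clock for the whole run.
// pkt_in_cnt / pkt_out_cnt are assumed monitor counters; end_of_test is an
// assumed testbench flag.
always @(posedge clk) begin
  if (pkt_out_cnt > pkt_in_cnt)
    $display("ERROR: more packets have left the bus than entered it");
  if (end_of_test && (pkt_in_cnt != pkt_out_cnt))
    $display("ERROR: packets in (%0d) != packets out (%0d)", pkt_in_cnt, pkt_out_cnt);
end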

 

AXI Stream Switch Verification

Let’s consider something more complex: the verification of a frame-aware AXI stream switch with parameterizable data width and port count. In the 4 x 4 example shown in Figure 7 (the RTL for which is available at https://github.com/alexforencich/verilog-axis) there is no software interface in the switch, as the software configurations are implemented as parameters. I also added an interrupt component (the IRQ Interface).

 

Figure 7 – Example 4 x 4 AXI stream switch. Packets can be routed from any input port to any output port.

 

As mentioned above, we need to start with a verification plan. It will detail which VCs will be needed and which testbenches must be developed. We will also need randomization, checking components and test scenarios; we will need to decide on prioritization (i.e. making sure we don’t leave important checks until late in the flow); and we must have some rough coverage metrics so that we can track verification progress.

 

The following figures show the different parts of an example plan, with emphasis on the word ‘example’ as it does not cover every aspect of AXI4.

 

Figure 8a – Hardware modes. These are deployed as parameters (that can be changed) and are defined through transaction IDs (TIDs), arbitration modes, connect masks and master BASE and TOP parameters.
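To illustrate the idea of hardware modes being fixed as parameters, a block along the following lines could sit at the top of a testbench. The module and parameter names below are placeholders of my own, not the exact names used in the verilog-axis code.

// Illustrative only: module and parameter names are placeholders, not the
// exact verilog-axis names. The hardware modes are fixed at elaboration time.
localparam PORTS      = 4;
localparam DATA_WIDTH = 32;

axis_switch_4x4 #(
  .DATA_WIDTH   (DATA_WIDTH),
  .ARB_MODE     ("ROUND_ROBIN"),           // arbitration mode under test
  .CONNECT_MASK ({(PORTS*PORTS){1'b1}}),   // full input-to-output connectivity
  .M_BASE       (0),                       // output-port address decode base
  .M_TOP        (PORTS-1)                  // output-port address decode top
) dut (
  /* port connections omitted in this sketch */
);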

 

Figure 8b - Verification properties. These are best developed in a hierarchical way. Also, packet arbitration has to be fair, i.e. no input port should be starved (waiting more than, say, 10 clock cycles before its packet is accepted), and we can also set priorities. Note the tie-in with hardware and software modes (though our example contains only hardware modes), the verification method(s), whether they apply at top level (TL) or block level (BL), the stimulus method (directed [D], random [R], or both), and the checking method (single or constant).

 

Figure 8c – Test scenarios. We can identify test goals, how they will be verified (the scenario), their priorities, the stimuli (directed [D], random [R], or both), and the checking method (single [S] or constant [C]).

 

Figure 8d - Verification components. These include a driver, a responder and a monitor/checker, plus we need a scoreboard.

 

Block-level verification of the AXI Switch Arbiter

 

Arbitration is a critical feature of the switch, so it must be verified. However, a top-level testbench cannot provide sufficient controllability, so we need to develop a block-level testbench. Thankfully, the protocol is not too difficult.

 

Figure 9 - A request goes high and will remain so until a one-clock-cycle grant pulse is received. The requester then holds the arbiter while packets are sent, after which a one-clock-cycle acknowledgement must be received.

 

The same protocol must be implemented at the module level. On a personal note, I like to implement the testbenches in a randomized way. This is because arbitration is usually quite a complex function, and it is difficult to envisage all the possibilities. Figure 10 shows a state machine and code.

 

Figure 10 - rnd1 and rnd2 are two random variables, controlled by the Req_prob and Ack_prob parameters; the FSM runs by itself, producing valid stimulus for the arbiter.
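A sketch of such a self-running, randomized requester is shown below. The state names, the REQ_PROB/ACK_PROB parameters and the signal names are illustrative; the real code in Figure 10 differs in the details.

// Randomized requester FSM for the arbiter (illustrative sketch).
// REQ_PROB and ACK_PROB control how often a request is raised and how
// quickly the transfer is acknowledged (percent, 0-100).
module arb_requester #(
  parameter REQ_PROB = 50,
  parameter ACK_PROB = 30
) (
  input      clk,
  input      rst_n,
  output reg req,
  output reg ack,
  input      grant
);

  localparam IDLE = 2'd0, REQUEST = 2'd1, GRANTED = 2'd2;
  reg [1:0] state;
  integer rnd1, rnd2;

  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      state <= IDLE;
      req   <= 1'b0;
      ack   <= 1'b0;
    end else begin
      rnd1 = {$random} % 100;   // controls request generation
      rnd2 = {$random} % 100;   // controls acknowledge generation
      ack <= 1'b0;
      case (state)
        IDLE: if (rnd1 < REQ_PROB) begin
          req   <= 1'b1;        // raise a request and hold it
          state <= REQUEST;
        end
        REQUEST: if (grant) begin
          req   <= 1'b0;        // grant is a one-cycle pulse
          state <= GRANTED;
        end
        GRANTED: if (rnd2 < ACK_PROB) begin
          ack   <= 1'b1;        // one-cycle ack ends the transfer
          state <= IDLE;
        end
      endcase
    end
  end

endmodule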

 

We also need to implement arbiter checks, which will include mutually exclusive (mutex) grants, no grants without requests (waste) and no starvation (no request should wait more than 10 clock cycles for a grant). Starvation checking is really important for arbiters because, in some cases, requesters can starve if the waiting time is too long. Figure 11 shows Verilog code for the mutex, waste and starvation checks.

 

Figure 11 – Above we are checking for mutual exclusiveness in a very simple way, by creating a signal that should remain at ground (0). In simulation, if that signal goes high, it is a violation.
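The checks themselves can be very compact. The following is an illustrative sketch only (req and gnt as N-bit request and grant vectors are my assumptions, and the 10-cycle bound comes from the description above), not the exact code of Figure 11.

// Illustrative arbiter checks; req and gnt are assumed N-bit request and
// grant vectors in the enclosing testbench.
localparam N = 4;

wire mutex_violation = (gnt & (gnt - 1)) != 0;  // more than one grant bit set
wire waste_violation = |(gnt & ~req);           // a grant with no matching request

always @(posedge clk) begin
  if (mutex_violation) $display("ERROR: grants are not mutually exclusive");
  if (waste_violation) $display("ERROR: grant issued without a request");
end

// Starvation: no request may wait more than 10 clock cycles for its grant.
integer i;
reg [7:0] wait_cnt [0:N-1];
always @(posedge clk or negedge rst_n) begin
  if (!rst_n) begin
    for (i = 0; i < N; i = i + 1) wait_cnt[i] <= 0;
  end else begin
    for (i = 0; i < N; i = i + 1) begin
      if (!req[i] || gnt[i]) wait_cnt[i] <= 0;
      else                   wait_cnt[i] <= wait_cnt[i] + 1;
      if (wait_cnt[i] > 10)  $display("ERROR: input port %0d is starving", i);
    end
  end
end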

 

Arbiter protocol checks must also be performed and, if anything goes wrong, the result must be an error. Figure 12 shows these checks as an FSM.

 

Figure 12 – Arbiter protocol checks as a finite state machine.
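A per-requester protocol checker along these lines might look as follows; the state and signal names are illustrative, and the real check in Figure 12 will cover more cases.

// Illustrative per-port arbiter protocol checker.
// Legal sequence: req rises and holds -> one-cycle gnt -> transfer -> one-cycle ack.
localparam P_IDLE = 2'd0, P_REQ = 2'd1, P_XFER = 2'd2;
reg [1:0] pstate;

always @(posedge clk or negedge rst_n) begin
  if (!rst_n)
    pstate <= P_IDLE;
  else begin
    case (pstate)
      P_IDLE: begin
        if (gnt) $display("ERROR: grant without a pending request");
        if (req) pstate <= P_REQ;
      end
      P_REQ: begin
        if (!req && !gnt) $display("ERROR: request dropped before grant");
        if (gnt) pstate <= P_XFER;
      end
      P_XFER: begin
        if (gnt) $display("ERROR: second grant during an active transfer");
        if (ack) pstate <= P_IDLE;   // one-cycle ack closes the transfer
      end
    endcase
  end
end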

 

Summary

Design verification requires a solid investment in planning, and the plan must define all further verification activity (both in simulation and in the lab). In terms of implementation, SystemVerilog can be used, but it is worth noting that Verilog is sufficient for most testbenches. Indeed, object-oriented approaches (apart from dynamic object generation and polymorphism) may be implemented with Verilog modules.

 

Lastly, an important take-home from this blog is to make sure you choose the correct stimulus generation and checking methods.

 

As mentioned, this blog is based on the first webinar of a three-part series called FPGA Design Verification in a Nutshell. I have reproduced a little over half the content from that webinar here, but if you would like to know more you can access the entire series here:

 

Part 1 - Verification Planning

Part 2 - Advanced Testbench Implementation

Part 3 - Advanced Verification Methods

Alex Gnusin is Aldec’s ALINT-PRO Product Manager. He has 28 years of hands-on experience in various aspects of ASIC and FPGA design and verification. In a previous role, and as Verification Prime on a multi-million gate project, Alex combined various verification methods including linting, formal property checking, dynamic simulation and hardware-assisted acceleration, all to achieve the design verification goals. Alex’s former employers include IBM, Nortel and Ericsson, and he has an M.S. in Electronics from the Technion, Israel Institute of Technology.
