Verification Effectiveness with Riviera-PRO: SystemVerilog Randomized Layered Testbench

Understanding the SystemVerilog Layered Testbench

Vatsal Choksi, Applications Engineer

In this blog, I will discuss the randomized layered testbench used in SystemVerilog: why we need it, and why verification engineers moved away from the direct-testing-oriented testbenches of Verilog and VHDL.

The direct testing approach used in Verilog or VHDL does not guarantee that all test cases have been covered. In a Verilog or VHDL testbench, you create a reference model, generate test vectors within the testbench, and apply them to the DUT (Design Under Test) and the reference model at the same time. You then compare the outputs from the DUT and the reference model to see whether the data matches. Although you can generate randomized data inside a Verilog testbench, you have little control over it and no way to measure what it actually exercised. What matters most is that the randomized data keeps changing, covering all relevant test vectors and scenarios, so that the thoroughness of the verification can be measured. That is not really practical in testbenches created using Verilog and VHDL. Recognizing the need for efficient verification and validation, verification engineers developed a methodology that supports controlled, randomized test generation.

SystemVerilog offers something different from the conventional testbench, called a 'layered testbench'. The overall idea behind a layered testbench is to create an environment that is easy to adopt, follow and verify. SystemVerilog is built on OOP (Object-Oriented Programming) concepts, so it is important to know the basics of OOP in order to use SystemVerilog. A testbench in SystemVerilog is layered because the process of verification is distributed into segments, each performing a different task. The main SystemVerilog elements/classes include:

- Transaction
- Generator
- Driver
- Monitor
- Agent
- Scoreboard
- Environment
- Test
- Top

The transaction class defines the data item that flows through the testbench: the fields that the generator randomizes, the driver applies to the DUT, and the monitor reconstructs from the DUT's outputs. It provides the coverage model with important information about the generated stimulus, and it defines the activity that the agent generates, drives to the DUT, and collects back from it. All DUT-facing activity is expressed in terms of transactions.

The generator produces the stimulus required for the Design Under Test. The stimulus is sent to the DUT via the driver (see below), so the data is first received by the driver class. The exchange of data from one class to another can be done using a mailbox or a queue (for parallel execution). Stimulus generation can then be shaped to the needs of the verification engineer: constrained random generation or simple directed testing, automated or manual. The key benefit is that the verification engineer has full control from the test case. Assertions can also be used here if an action should be taken when an execution fails.

The driver drives the stimulus to the DUT. It receives the data from the generator via the mailbox or queue, converts it into DUT inputs, and applies them to the Design Under Test.

The monitor class observes the activity on the interface signals. It converts the outputs of the DUT back to a transaction-level abstraction, reports transaction failures, and passes the collected information on to the scoreboard.

The scoreboard is where the comparison takes place: expected data versus actual data. Since the data paths of the DUT and the reference model may differ, the times at which their outputs arrive may differ too; hence the data needs to be stored, and synchronization needs to be established between the expected output and the actual output. Once everything is in sync, the two outputs can be compared to determine whether the DUT behaves as the reference model predicts.

The environment is where all the class instances are created and connected.

On top of OOP-based verification, SystemVerilog also offers functional coverage. You can say that all the test cases have been covered when you have 100% functional coverage. Code coverage, by contrast, only tells us which parts of the design the generated stimulus exercised; it does not tell us whether all functionally relevant test cases have been covered. You might argue that you do not need it because you covered all the possible test cases yourself, but complexity grows along with the design. As design size and complexity increase, it becomes virtually impossible to generate all possible test cases manually. Functional coverage is therefore a great tool for every verification engineer.
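To make the layering concrete, the sketches below show a minimal version of each class in plain SystemVerilog (not UVM). They assume a hypothetical ALU-style DUT with operands a and b, an opcode op and a result; every name is illustrative, not a fixed API. First, a transaction class with randomized fields and an example constraint:

```systemverilog
class transaction;
  rand bit [7:0] a;            // randomized operands
  rand bit [7:0] b;
  rand bit [1:0] op;           // randomized opcode
  bit      [8:0] result;       // filled in by the monitor, never randomized

  // Example constraint: bias 30% of 'a' values toward small operands
  constraint c_small { a dist { [0:15] :/ 30, [16:255] :/ 70 }; }

  function void print(string tag);
    $display("[%s] a=%0d b=%0d op=%0d result=%0d", tag, a, b, op, result);
  endfunction
endclass
```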
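A generator sketch: it randomizes transactions and hands them to the driver through a parameterized mailbox, one of the class-to-class communication options mentioned above:

```systemverilog
class generator;
  mailbox #(transaction) gen2drv;   // channel to the driver
  int unsigned count;               // number of transactions to produce

  function new(mailbox #(transaction) gen2drv, int unsigned count);
    this.gen2drv = gen2drv;
    this.count   = count;
  endfunction

  task run();
    repeat (count) begin
      transaction tr = new();
      if (!tr.randomize())
        $fatal(1, "transaction randomization failed");
      gen2drv.put(tr);              // blocks if a bounded mailbox is full
    end
  endtask
endclass
```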
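A driver sketch. The alu_if interface is an assumption made for these examples; it bundles the DUT pins that the driver and monitor touch:

```systemverilog
// Hypothetical DUT interface assumed throughout these sketches
interface alu_if(input logic clk);
  logic [7:0] a, b;
  logic [1:0] op;
  logic [8:0] result;
endinterface

class driver;
  virtual alu_if vif;               // handle to the DUT pins
  mailbox #(transaction) gen2drv;

  function new(virtual alu_if vif, mailbox #(transaction) gen2drv);
    this.vif     = vif;
    this.gen2drv = gen2drv;
  endfunction

  task run();
    transaction tr;
    forever begin
      gen2drv.get(tr);              // wait for the next transaction
      @(posedge vif.clk);
      vif.a  <= tr.a;               // transaction fields become pin values
      vif.b  <= tr.b;
      vif.op <= tr.op;
    end
  endtask
endclass
```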
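A monitor sketch that samples the pins back into a transaction and forwards it to the scoreboard; a real monitor would also account for the DUT's pipeline latency before sampling the result:

```systemverilog
class monitor;
  virtual alu_if vif;
  mailbox #(transaction) mon2scb;   // channel to the scoreboard

  function new(virtual alu_if vif, mailbox #(transaction) mon2scb);
    this.vif     = vif;
    this.mon2scb = mon2scb;
  endfunction

  task run();
    forever begin
      transaction tr;
      @(posedge vif.clk);
      tr = new();
      tr.a      = vif.a;            // rebuild a transaction from the pins
      tr.b      = vif.b;
      tr.op     = vif.op;
      tr.result = vif.result;       // assumes result is valid this cycle
      mon2scb.put(tr);
    end
  endtask
endclass
```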
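A scoreboard sketch. The golden() function is a stand-in for whatever reference model the project uses; here it simply recomputes the expected result of the hypothetical ALU:

```systemverilog
class scoreboard;
  mailbox #(transaction) mon2scb;

  function new(mailbox #(transaction) mon2scb);
    this.mon2scb = mon2scb;
  endfunction

  // Stand-in reference model for the hypothetical ALU
  function bit [8:0] golden(transaction tr);
    case (tr.op)
      2'b00:   return tr.a + tr.b;
      2'b01:   return tr.a - tr.b;
      2'b10:   return tr.a & tr.b;
      default: return tr.a | tr.b;
    endcase
  endfunction

  task run();
    transaction tr;
    forever begin
      mon2scb.get(tr);              // actual output, already synchronized
      if (tr.result !== golden(tr))
        $error("Mismatch: expected %0d, got %0d", golden(tr), tr.result);
    end
  endtask
endclass
```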
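An environment sketch that builds the mailboxes, constructs every component, and runs them in parallel:

```systemverilog
class environment;
  generator  gen;
  driver     drv;
  monitor    mon;
  scoreboard scb;
  mailbox #(transaction) gen2drv;
  mailbox #(transaction) mon2scb;

  function new(virtual alu_if vif);
    gen2drv = new();
    mon2scb = new();
    gen = new(gen2drv, 100);        // e.g. 100 random transactions
    drv = new(vif, gen2drv);
    mon = new(vif, mon2scb);
    scb = new(mon2scb);
  endfunction

  task run();
    fork                            // all components execute concurrently;
      gen.run();                    // the test waits for completion elsewhere
      drv.run();
      mon.run();
      scb.run();
    join_none
  endtask
endclass
```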
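Finally, a functional-coverage sketch for the same hypothetical transaction: a covergroup with coverpoints on the opcode and operand ranges, plus a cross. The monitor or scoreboard would call cov.cg.sample(tr) for every observed transaction, and cov.cg.get_coverage() reports the percentage achieved:

```systemverilog
class coverage;
  covergroup cg with function sample(transaction tr);
    cp_op : coverpoint tr.op;                    // hit every opcode
    cp_a  : coverpoint tr.a {
      bins lo = {[0:127]};                       // operand ranges
      bins hi = {[128:255]};
    }
    op_x_a : cross cp_op, cp_a;                  // opcode x operand cross
  endgroup

  function new();
    cg = new();   // an embedded covergroup is constructed in new()
  endfunction
endclass
```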
Overall, SystemVerilog design and verification per the IEEE 1800-2005, 1800-2009 and 1800-2012 standards is supported in Riviera-PRO (Aldec's functional verification simulator). This blog has presented the topic from a conceptual viewpoint; for a practical approach, please stay tuned. To learn how to use SystemVerilog with Riviera-PRO, please check out the demonstration videos available on our website: https://www.aldec.com/en/support/resources/multimedia/presentations

Please let me know if you have any questions or doubts; I would be more than glad to answer them. Also, please feel free to leave some feedback.

Tags: Riviera-PRO, ASIC, Assertions, Co-simulation, Coverage, Debugging, Design, Documentation, FPGA, HDL, IEEE, OS-VVM, Randomization, Simulation, standards, SystemVerilog, UVM, Verification, Verilog