Pseudo random generator – Part II

Chapter 3 – Saving the output data

In the previous chapters of this tutorial, we saw how to build a simple LFSR module in VHDL and a testbench to verify its output.

In this chapter, we will update our testbench to add data-saving capabilities. This way we can:

  • Compare the VHDL output data with data generated by our reference design (in our case, a Python script)
  • Analyze the output data with other tools (again, in our case, we will use Python to produce an FFT analysis of the block output).

At the end of the chapter, you will be able to find a link to a GitHub repository with the complete files.

In the testbench, a new process was added for saving data. Several new libraries and declarations are also needed to write to files from a testbench. See the code below; after it, you will find an explanation of the changes from tb_lfsr_v1 to tb_lfsr_v2:

library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;
  use ieee.std_logic_textio.all;
  use std.textio.all;
    
entity tb_lfsr_v2 is
end entity;

architecture test of tb_lfsr_v2 is
    constant PERIOD   : time   := 10 ns;
    constant log_file : string := "c:\sim_out_files\res.log";

    signal clk    : std_logic := '0';
    signal rstn   : std_logic := '0';
    signal rand   : std_logic;
    signal endSim : boolean   := false;

    component lfsr_v1 is
    port (
      rstn : in  std_logic;
      clk  : in  std_logic;
      rand : out std_logic
    );
    end component;
    
begin
  clk     <= not clk after PERIOD/2;
  rstn    <= '1' after  PERIOD*10;

  -- End the simulation when the file-writing process asserts endSim
  process
  begin
    if endSim then
      assert false
        report "End of simulation."
        severity failure;
    end if;
    wait until clk = '1';
  end process;

  -- Save data to file
  file_pr : process
    file     file_id : text;
    variable fline   : line;
  begin
    -- Open the file
    file_open(file_id, log_file, WRITE_MODE);
    wait until rstn = '1';
    wait until rising_edge(clk);

    -- Double loop to output groups of 15 values to the file
    for i in 0 to 6 loop
      for j in 0 to 14 loop
        if rand = '0' then
          write(fline, 0);
        else
          write(fline, 1);
        end if;
        wait until rising_edge(clk);
      end loop;
      writeline(file_id, fline);
    end loop;
		
    file_close(file_id);
    endSim <= true;
    wait until clk = '1';
  end process file_pr;

  lfsr_inst : lfsr_v1
  port map (
      clk  => clk,
      rstn => rstn,
      rand => rand
  );

end architecture;

The two new use clauses, ieee.std_logic_textio and std.textio, pull in the packages needed for file and text handling. The constant log_file defines the file (and path) used to store the results of the simulation. Personally, I prefer an absolute path to a location that is easy to remember. You may prefer a relative path, so the result files are stored together with the rest of the project files; note, however, that Vivado's simulation working directory is buried deep inside the project folder hierarchy.
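If you do go the relative-path route, the constant shrinks to just a file name; with Vivado's xsim the file then lands in the simulation run directory (the path in the comment below is an assumption based on Vivado's default project layout):

    -- Hypothetical relative-path variant: the file is created in the
    -- simulator's working directory, for Vivado typically something like
    -- <project>.sim/sim_1/behav/xsim/
    constant log_file : string := "res.log";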

The process file_pr saves the simulation output data to a file. We need a file handle, file_id, and a variable fline of type line, in which we assemble the values before writing them out.
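As an optional hardening step, std.textio also provides a file_open overload that returns a file_open_status, so the simulation can stop with a clear message if the path is wrong. A minimal sketch of how the opening step could look (fstatus would be declared in the process as variable fstatus : file_open_status;):

    -- Variant of the opening step with an explicit status check
    file_open(fstatus, file_id, log_file, WRITE_MODE);
    assert fstatus = open_ok
      report "Cannot open log file: " & log_file
      severity failure;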

The file is built line by line. The write procedure appends a value to the current line buffer; it is not limited to integers, as std.textio provides overloads for several types (integer, bit, boolean, string, time), and ieee.std_logic_textio adds std_logic. Here we translate each sample to the integer 0 or 1, so the file contains plain digits.
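For illustration, a few ways the same sample could be appended to the line buffer (the string'() qualification avoids the string/bit_vector ambiguity some tools report):

    write(fline, 1);              -- integer overload: appends the digit 1
    write(fline, rand);           -- std_logic overload from ieee.std_logic_textio
    write(fline, string'(" "));   -- a separator, if a downstream parser needs one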

First, we fill a line: the inner for loop (index j) appends fifteen values, and when it completes we issue writeline, which writes the buffered line to the file and empties the buffer for the next round. The outer loop (index i) repeats this seven times, so we capture seven full 15-bit periods of the LFSR output.

The contents of the results file are as follows:

000011101100101
000011101100101
000011101100101
000011101100101
000011101100101
000011101100101
000011101100101

We can easily see the 15-bit pseudo-random sequence repeating itself on every line.

Saving the data to a file has many advantages. We can process huge amounts of data and analyze it in other tools such as Excel, Python, or Matlab. We can compare the output of our testbench against a set of values known to be good. This capability is priceless once our code is debugged and functional and we are asked to add new functionality: inspecting waveforms visually every time we change the code is time-consuming and error-prone, so in practice we neither want to nor can rely on it.

Instead of checking visually, we can compare the testbench output to a “golden” file generated on a previous run that we know to be good, or to a set of values produced by another designer, an algorithm engineer, or the system engineer of our project. This comparison can also be automated as a step that runs each time we want to release a new version of the design.
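To give an idea of what such a self-checking process can look like, here is a minimal sketch (not the final flow of this series) that reads a golden file and compares it bit by bit against rand; the file name golden.log and the one-digit-per-sample format are assumptions:

  -- Hypothetical self-checking process: compares rand against a golden
  -- file written in the same format as res.log (15 digits per line).
  check_pr : process
    file     gold_id : text;
    variable gline   : line;
    variable gchar   : character;
  begin
    file_open(gold_id, "c:\sim_out_files\golden.log", READ_MODE);
    wait until rstn = '1';
    wait until rising_edge(clk);
    while not endfile(gold_id) loop
      readline(gold_id, gline);
      while gline'length > 0 loop
        read(gline, gchar);                  -- one digit at a time
        assert (gchar = '1') = (rand = '1')
          report "Mismatch against golden file"
          severity error;
        wait until rising_edge(clk);
      end loop;
    end loop;
    file_close(gold_id);
    wait;                                    -- checking done
  end process check_pr;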

We will see how to do this by comparing the output of our testbench to a Python-generated “golden” file in the next entry of this series.
