Renaming VM Coverage

This commit is contained in:
Huda-10xe 2024-10-28 00:34:20 -07:00
commit a6de72b5ac
22 changed files with 520 additions and 359 deletions


@@ -67,6 +67,12 @@ jobs:
os: ubuntu-latest
image: null
riscv_path: /home/riscv
# Custom location user level installation
- name: custom-user-install
os: ubuntu-latest
image: null
user: true
riscv_path: $HOME/riscv-toolchain
# run on selected version of ubuntu or on ubuntu-latest with docker image
runs-on: ${{ matrix.os }}
@@ -108,6 +114,7 @@ jobs:
fi
# Set environment variables for the rest of the job
- name: Set Environment Variables
if: always()
run: |
if [ ! -z ${{ matrix.riscv_path }} ]; then
sed -i 's,exit 1,export RISCV=${{ matrix.riscv_path }},g' setup.sh
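The `sed` step above rewrites `setup.sh` so that its `exit 1` fallback becomes an export of the matrix-supplied installation path. A minimal Python sketch of the same text substitution (the `setup.sh` snippet here is a hypothetical stand-in):

```python
# Sketch: the effect of `sed -i 's,exit 1,export RISCV=<path>,g' setup.sh`.
riscv_path = "/home/riscv"  # plays the role of ${{ matrix.riscv_path }}

# Hypothetical fragment of setup.sh that bails out when RISCV is unset.
script = 'echo "RISCV not set"\nexit 1\n'

# Replace every occurrence of "exit 1" with the export, as the sed command does.
patched = script.replace("exit 1", f"export RISCV={riscv_path}")

print(patched)
```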
@@ -121,11 +128,11 @@ jobs:
with:
name: installation-logs-${{ matrix.name }}
path: ${{ env.RISCV }}/logs/
# Make riscof and zsbl only, as those are the only test suites used by standard regression
- name: make tests
run: |
source setup.sh
make riscof zsbl --jobs $(nproc --ignore 1)
# Only the linux-testvectors are needed, so remove the rest of the buildroot to save space
- name: Remove Buildroot to Save Space
run: |
@@ -137,6 +144,13 @@ jobs:
run: |
source setup.sh
regression-wally
- name: Lint + wsim Test Only (for distros with broken Verilator sim)
if: ${{ matrix.name == 'ubuntu-20.04' || matrix.name == 'rocky-8' || matrix.name == 'almalinux-8'}}
run: |
source setup.sh
mkdir -p $WALLY/sim/verilator/logs/
lint-wally
wsim rv32i arch32i --sim verilator | tee $WALLY/sim/verilator/logs/rv32i_arch32i.log
# Upload regression logs for debugging
- name: Upload regression logs
uses: actions/upload-artifact@v4

README.md

@@ -1,3 +1,5 @@
![Installation CI](https://github.com/openhwgroup/cvw/actions/workflows/install.yml/badge.svg?branch=main)
# core-v-wally
Wally is a 5-stage pipelined processor configurable to support all the standard RISC-V options, including RV32/64, A, B, C, D, F, M, Q, and Zk* extensions, virtual memory, PMP, and the various privileged modes and CSRs. It provides optional caches, branch prediction, and standard RISC-V peripherals (CLINT, PLIC, UART, GPIO). Wally is written in SystemVerilog. It passes the [RISC-V Arch Tests](https://github.com/riscv-non-isa/riscv-arch-test) and boots Linux on an FPGA. Configurations range from a minimal RV32E core to a fully featured RV64GC application processor with all of the RVA22S64 profile extensions. Wally is part of the OpenHWGroup family of robust open RISC-V cores.
@@ -14,58 +16,66 @@ Wally is presently at Technology Readiness Level 4, passing the RISC-V compatibi
New users may wish to do the following setup to access the server via a GUI and use a text editor.
- Git started with Git configuration and authentication: C.1 (replace with your name and email)
```bash
$ git config --global user.name "Ben Bitdiddle"
$ git config --global user.email "ben_bitdiddle@wally.edu"
$ git config --global pull.rebase false
```
- Optional: Download and install x2go - B.1.1
- Optional: Download and install VSCode - B.4.2
- Optional: Make sure you can log into your server via x2go and via a terminal
  - Terminal on Mac, cmd on Windows, xterm on Linux
  - See B.1 about ssh -Y login from a terminal
Then fork and clone the repo, source setup, make the tests, and run regression:
1. If you don't already have a Github account, create one
2. In a web browser, visit https://github.com/openhwgroup/cvw
3. In the upper right part of the screen, click on Fork
4. Create a fork, choosing the owner as your github account and the repository as cvw.
5. On the Linux computer where you will be working, log in.
6. Clone your fork of the repo. Change `<yourgithubid>` to your github id.
```bash
$ git clone --recurse-submodules https://github.com/<yourgithubid>/cvw
$ cd cvw
$ git remote add upstream https://github.com/openhwgroup/cvw
```
> [!NOTE]
> If you are installing on a new system without any tools installed, please jump to the next section, [Toolchain Installation](#toolchain-installation-and-configuration-sys-admin), then come back here.
7. Run the setup script to update your `PATH` and activate the python virtual environment.
```bash
$ source ./setup.sh
```
8. Add the following lines to your `.bashrc` or `.bash_profile` to run the setup script each time you log in.
```bash
if [ -f ~/cvw/setup.sh ]; then
source ~/cvw/setup.sh
fi
```
9. Build the tests and run a regression simulation to prove everything is installed. Building tests may take a while.
```bash
$ make --jobs
$ regression-wally
```
# Toolchain Installation and Configuration (Sys Admin)

> This section describes the open source toolchain installation.

### Compatibility
The current version of the toolchain has been tested on Ubuntu (versions 20.04 LTS, 22.04 LTS, and 24.04 LTS) and on Red Hat/Rocky/AlmaLinux (versions 8 and 9).

> [!WARNING]
> - Ubuntu 22.04 LTS is incompatible with Synopsys Design Compiler.
> - Verilator currently fails to simulate correctly on Ubuntu 20.04 LTS and Red Hat/Rocky/AlmaLinux 8.

### Overview
The toolchain installation script installs the following tools:
@@ -74,32 +84,37 @@ The toolchain installation script installs the following tools:
- [QEMU](https://www.qemu.org/docs/master/system/target-riscv.html): emulator
- [Spike](https://github.com/riscv-software-src/riscv-isa-sim): functional RISC-V model
- [Verilator](https://github.com/verilator/verilator): open-source Verilog simulator
- [RISC-V Sail Model](https://github.com/riscv/sail-riscv): golden reference model for RISC-V
- [OSU Skywater 130 cell library](https://foss-eda-tools.googlesource.com/skywater-pdk/libs/sky130_osu_sc_t12): standard cell library
- [RISCOF](https://github.com/riscv-software-src/riscof.git): RISC-V compliance test framework

Additionally, Buildroot Linux is built for Wally and linux test-vectors are generated for simulation. See the [Linux README](linux/README.md) for more details. This can be skipped using the `--no-buildroot` flag.
### Installation
The tools can be installed by running
```bash
$ $WALLY/bin/wally-tool-chain-install.sh
```
If this script is run as root or using `sudo`, it will also install all of the prerequisite packages using the system package manager. The default installation directory when run in this manner is `/opt/riscv`.

If a user-level installation is desired, the script can instead be run by any user without `sudo` and the installation directory will be `~/riscv`. In this case, the prerequisite packages must first be installed by running
```bash
$ sudo $WALLY/bin/wally-package-install.sh
```
In either case, the installation directory can be overridden by passing the desired directory as the last argument to the installation script. For example,
```bash
$ sudo $WALLY/bin/wally-tool-chain-install.sh /home/riscv
```
See `wally-tool-chain-install.sh` for a detailed description of each component, or to issue the commands one at a time to install on the command line.

> [!NOTE]
> The complete installation process requires ~55 GB of free space. If the `--clean` flag is passed to the installation script then the final consumed space is only ~26 GB, but upgrading the tools will reinstall everything from scratch.
### Configuration
`$WALLY/setup.sh` sources `$RISCV/site-setup.sh`. If the toolchain was installed in either of the default locations (`/opt/riscv` or `~/riscv`), `$RISCV` will automatically be set to the correct path when `setup.sh` is run. If a custom installation directory was used, then `$WALLY/setup.sh` must be modified to set the correct path.
@@ -108,12 +123,13 @@ See `wally-tool-chain-install.sh` for a detailed description of each component,
Change the following lines to point to the paths and license servers for your Siemens Questa and Synopsys Design Compiler and VCS installations. If you only have Questa or VCS, you can still simulate but cannot run logic synthesis. If Questa, VCS, or Design Compiler are already set up on this system, then don't set these variables.
```bash
export MGLS_LICENSE_FILE=..    # Change this to your Siemens license server
export SNPSLMD_LICENSE_FILE=.. # Change this to your Synopsys license server
export QUESTA_HOME=..          # Change this for your path to Questa
export DC_HOME=..              # Change this for your path to Synopsys Design Compiler
export VCS_HOME=..             # Change this for your path to Synopsys VCS
```
# Installing EDA Tools

@@ -127,39 +143,48 @@ Although most EDA tools are Linux-friendly, they tend to have issues when not in
### Siemens Questa
Siemens Questa simulates behavioral, RTL and gate-level HDL. To install Siemens Questa, first go to a web browser and navigate to https://eda.sw.siemens.com/en-US/ic/questa/simulation/advanced-simulator/. Click Sign In and log in with your credentials; the product can then easily be downloaded and installed. Some Windows-based installations also require gcc libraries that are typically provided as a compressed zip download through Siemens.
### Synopsys Design Compiler (DC)
Many commercial synthesis and place and route tools require a common installer. These installers are provided by the EDA vendor; Synopsys has one called Synopsys Installer. To use Synopsys Installer, you will need to acquire a license through Synopsys that is typically called Synopsys Common Licensing (SCL). The Synopsys Installer, license key file, and Design Compiler can all be downloaded through Synopsys Solvnet. First open a web browser, log into Synopsys Solvnet, and download the installer and Design Compiler installation files. Then, install the Installer.
```bash
$ firefox &
```
- Navigate to https://solvnet.synopsys.com
- Log in with your institution's username and password
- Click on Downloads, then scroll down to Synopsys Installer
- Select the latest version (currently 5.4). Click Download Here and agree
- Click on SynopsysInstaller_v5.4.run
- Return to downloads and also get the latest version of Design Compiler (synthesis), and any others you want
- Click on all parts and the .spf file, then click Download Files near the top
- Move the SynopsysInstaller into `/cad/synopsys/Installer_5.4` with 755 permission for cad
- Move other files into `/cad/synopsys/downloads` and work as user cad from here on
```bash
$ cd /cad/synopsys/installer_5.4
$ ./SynopsysInstaller_v5.4.run
```
- Accept the default installation directory
```bash
$ ./installer
```
- Enter source path as `/cad/synopsys/downloads`, and installation path as `/cad/synopsys`
- When prompted, enter your site ID
- Follow prompts
Installer can be utilized in graphical or text-based modes. It is far easier to use the text-based installation tool. To install DC, navigate to the location where your downloaded DC files are and type `installer`. You should be prompted with questions related to where you wish to have your files installed.

The Synopsys Installer automatically installs all downloaded product files into a single top-level target directory. You do not need to specify the installation directory for each product. For example, if you specify `/import/programs/synopsys` as the target directory, your installation directory structure might look like this after installation:
```bash
/import/programs/synopsys/syn/S-2021.06-SP1
```
> [!NOTE]
> Although most parts of Wally, including the Questa simulator, will work on most modern Linux platforms, as of 2022, the Synopsys CAD tools for SoC design are only supported on RedHat Enterprise Linux 7.4 or 8 or SUSE Linux Enterprise Server (SLES) 12 or 15. Moreover, the RISC-V formal specification (sail-riscv) does not build gracefully on RHEL7.
The Verilog simulation has been tested with Siemens Questa/ModelSim. This package is available to universities worldwide as part of the Design Verification Bundle through the Siemens Academic Partner Program for $990/year.
@@ -174,7 +199,7 @@ If you want to add a cronjob you can do the following:
1) Set up the email client `mutt` for your distribution
2) Enter `crontab -e` into a terminal
3) Add this code to test building CVW and then running `regression-wally --nightly` at 9:30 PM each day
```bash
30 21 * * * bash -l -c "source ~/PATH/TO/CVW/setup.sh; PATH_TO_CVW/cvw/bin/wrapper_nightly_runs.sh --path {PATH_TO_TEST_LOCATION} --target all --tests nightly --send_email harris@hmc.edu,kaitlin.verilog@gmail.com"
```
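The leading five fields of the crontab entry encode the schedule; `30 21 * * *` fires at minute 30 of hour 21 (9:30 PM) every day. A quick sketch naming the fields:

```python
# Sketch: naming the five cron schedule fields of the entry above.
entry = "30 21 * * *"
fields = dict(zip(["minute", "hour", "day_of_month", "month", "day_of_week"],
                  entry.split()))
print(fields)
```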
@@ -182,44 +207,57 @@ If you want to add a cronjob you can do the following:
wsim runs one of multiple simulators (Questa, VCS, or Verilator) using a specific configuration and either a suite of tests or a specific elf file.
The general syntax is
`wsim <config> <suite or elf file or directory> [--options]`

Parameters and options:
```
-h, --help show this help message and exit
--sim {questa,verilator,vcs}, -s {questa,verilator,vcs} Simulator
--tb {testbench,testbench_fp}, -t {testbench,testbench_fp} Testbench
--gui, -g Simulate with GUI
--coverage, -c Code & Functional Coverage
--fcov, -f Code & Functional Coverage
--args ARGS, -a ARGS Optional arguments passed to simulator via $value$plusargs
--vcd, -v Generate testbench.vcd
--lockstep, -l Run ImperasDV lock, step, and compare.
--locksteplog LOCKSTEPLOG, -b LOCKSTEPLOG Retired instruction number to begin logging.
--covlog COVLOG, -d COVLOG Log coverage after n instructions.
--elfext ELFEXT, -e ELFEXT When searching for elf files, only include ones which end in this extension
```
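The option listing above maps naturally onto an argparse-style interface. The following is a sketch reconstructed from that listing, not the actual wsim source; the positional-argument names and defaults are assumptions:

```python
import argparse

# Sketch of a subset of wsim's CLI, reconstructed from the option listing
# above. Names mirror the listing; defaults are assumptions.
parser = argparse.ArgumentParser(prog="wsim")
parser.add_argument("config")      # e.g. rv64gc
parser.add_argument("testsuite")   # suite name, elf file, or directory
parser.add_argument("--sim", "-s", choices=["questa", "verilator", "vcs"], default="questa")
parser.add_argument("--gui", "-g", action="store_true")
parser.add_argument("--lockstep", "-l", action="store_true")
parser.add_argument("--elfext", "-e", default=None)

# Parse an example invocation equivalent to: wsim rv64gc arch64i --sim verilator --lockstep
args = parser.parse_args(["rv64gc", "arch64i", "--sim", "verilator", "--lockstep"])
print(args)
```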
Run a basic test with Questa:
```bash
wsim rv64gc arch64i
```
Run Questa with the GUI:
```bash
wsim rv64gc wally64priv --gui
```
Run lockstep against ImperasDV with a single elf file in the GUI. Lockstep requires a single elf.
```bash
wsim rv64gc ../../tests/riscof/work/riscv-arch-test/rv64i_m/I/src/add-01.S/ref/ref.elf --lockstep --gui
```
Run lockstep against ImperasDV with a single elf file. Compute coverage.
```bash
wsim rv64gc ../../tests/riscof/work/riscv-arch-test/rv64i_m/I/src/add-01.S/ref/ref.elf --lockstep --coverage
```
Run lockstep against ImperasDV on a directory of elf files.
```bash
wsim rv64gc ../../tests/riscof/work/riscv-arch-test/rv64i_m/I/src/ --lockstep
```
Run lockstep against ImperasDV on a directory of elf files, including only files with a specific extension.
```bash
wsim rv64gc ../../tests/riscof/work/riscv-arch-test/rv64i_m/I/src/ --lockstep --elfext ref.elf
```


@@ -84,7 +84,7 @@ from pathlib import Path
class FolderManager:
    """A class for managing folders and repository cloning."""

    def __init__(self, basedir):
        """
        Initialize the FolderManager instance.
@@ -92,8 +92,12 @@ class FolderManager:
            base_dir (str): The base directory where folders will be managed and repository will be cloned.
        """
        env_extract_var = 'WALLY'
        if os.environ.get(env_extract_var):
            self.base_dir = os.environ.get(env_extract_var)
            self.base_parent_dir = os.path.dirname(self.base_dir)
        else:
            self.base_dir = basedir
            self.base_parent_dir = self.base_dir

        # logger.info(f"Base directory: {self.base_dir}")
        # logger.info(f"Parent Base directory: {self.base_parent_dir}")
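The `__init__` change above prefers the `WALLY` environment variable and only falls back to the caller-supplied `basedir`. The same pattern in isolation (the function name is illustrative, not part of the source):

```python
import os

# Sketch of the fallback added to FolderManager.__init__: use $WALLY if set,
# otherwise use the basedir argument (which is treated as its own parent).
def resolve_base_dir(basedir, env_var="WALLY"):
    env_val = os.environ.get(env_var)
    if env_val:
        return env_val, os.path.dirname(env_val)
    return basedir, basedir

os.environ.pop("WALLY", None)  # ensure the fallback branch is taken
base, parent = resolve_base_dir("/tmp/nightly")
print(base, parent)
```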
@@ -313,7 +317,7 @@ class TestRunner:
            self.logger.error(f"Error making the tests. Target: {target}")
            return False

    def run_tests(self, test_type=None, test_name=None, test_extensions=None):
        """
        Run a script through the terminal and save the output to a file.
@@ -329,12 +333,12 @@ class TestRunner:
        output_file = self.log_dir.joinpath(f"{test_name}-output.log")
        os.chdir(self.sim_dir)

        if test_extensions:
            command = [test_type, test_name] + test_extensions
            self.logger.info(f"Command used to run tests in directory {self.sim_dir}: {test_type} {test_name} {' '.join(test_extensions)}")
        else:
            command = [test_type, test_name]
            self.logger.info(f"Command used to run tests in directory {self.sim_dir}: {test_type} {test_name}")

        # Execute the command using subprocess and save the output into a file
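Switching `test_extension` (a single string) to `test_extensions` (a list) lets each extra argument reach the subprocess as its own token. The list construction in isolation (the example values are illustrative):

```python
# Sketch: building the argument list the way run_tests now does, with
# test_extensions as a list of extra tokens rather than one string.
test_type = "wsim"
test_name = "rv64gc"
test_extensions = ["--sim", "verilator"]

if test_extensions:
    command = [test_type, test_name] + test_extensions
else:
    command = [test_type, test_name]

print(command)
```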
@@ -348,10 +352,10 @@ class TestRunner:
            self.logger.error(f"There was an error in running the tests in the run_tests function: {e}")

        # Check if the command executed successfully
        if result.returncode == 0:
            self.logger.info(f"Test ran successfully. Test type: {test_type}, test name: {test_name}, test extension: {' '.join(test_extensions or [])}")
            return True, output_file
        else:
            self.logger.error(f"Error making test. Test type: {test_type}, test name: {test_name}, test extension: {' '.join(test_extensions or [])}")
            return False, output_file
@@ -406,23 +410,31 @@ class TestRunner:
            # Remove ANSI escape codes
            line = re.sub(r'\x1b\[[0-9;]*[mGK]', '', lines[index])

            if "Success" in line:  # test succeeds
                passed_configs.append(line.split(':')[0].strip())
            elif "passed lint" in line:
                passed_configs.append(f"Lint: {line.split(' ')[0].strip()}")
                #passed_configs.append(line) # potentially use a space
            elif "failed lint" in line:
                failed_configs.append([f"Lint: {line.split(' ')[0].strip()}", "No Log File"])
                #failed_configs.append(line)
            elif "Failures detected in output" in line:  # Test explicitly fails
                try:
                    config_name = line.split(':')[0].strip()
                    log_file = os.path.abspath(os.path.join("logs", config_name + ".log"))
                    failed_configs.append((config_name, log_file))
                except:
                    failed_configs.append((config_name, "Log file not found"))
            elif "Timeout" in line:  # Test times out
                try:
                    config_name = line.split(':')[0].strip()
                    log_file = os.path.abspath(os.path.join("logs", config_name + ".log"))
                    failed_configs.append((f"Timeout: {config_name}", log_file))
                except:
                    failed_configs.append((f"Timeout: {config_name}", "No Log File"))

            index += 1
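The branches above classify each regression-log line after stripping ANSI color codes. A condensed sketch of the same classification run on made-up sample lines (the lines are illustrative, not real regression output):

```python
import re

# Sketch: classify sample regression-log lines as the parser above does.
lines = [
    "\x1b[32mrv64gc_arch64i: Success\x1b[0m",  # colored success line
    "rv32e failed lint",
    "buildroot: Timeout",
]

passed_configs, failed_configs = [], []
for raw in lines:
    line = re.sub(r'\x1b\[[0-9;]*[mGK]', '', raw)  # remove ANSI escape codes
    if "Success" in line:
        passed_configs.append(line.split(':')[0].strip())
    elif "failed lint" in line:
        failed_configs.append([f"Lint: {line.split(' ')[0].strip()}", "No Log File"])
    elif "Timeout" in line:
        failed_configs.append((f"Timeout: {line.split(':')[0].strip()}", "No Log File"))

print(passed_configs, failed_configs)
```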
@@ -535,7 +547,7 @@ class TestRunner:
        md_file.write(f"\n**Total failed tests: {total_number_failures}**")
        for (test_item, item) in zip(test_list, failed_tests):
            md_file.write(f"\n\n### {test_item[1]} test")
            md_file.write(f"\n**Command used:** {test_item[0]} {test_item[1]} {' '.join(test_item[2])}\n\n")
            md_file.write("**Failed Tests:**\n")
@ -558,7 +570,7 @@ class TestRunner:
md_file.write(f"\n**Total successful tests: {total_number_success}**") md_file.write(f"\n**Total successful tests: {total_number_success}**")
for (test_item, item) in zip(test_list, passed_tests): for (test_item, item) in zip(test_list, passed_tests):
md_file.write(f"\n\n### {test_item[1]} test") md_file.write(f"\n\n### {test_item[1]} test")
md_file.write(f"\n**Command used:** {test_item[0]} {test_item[1]} {test_item[2]}\n\n") md_file.write(f"\n**Command used:** {test_item[0]} {test_item[1]} {' '.join(test_item[2])}\n\n")
md_file.write(f"\n**Successful Tests:**\n") md_file.write(f"\n**Successful Tests:**\n")
@ -619,7 +631,7 @@ class TestRunner:
# check if there are any emails # check if there are any emails
if not receiver_emails: if not receiver_emails:
self.logger.ERROR("No receiver emails provided.") self.logger.error("No receiver emails provided.")
return return
# grab the html file # grab the html file
@ -660,7 +672,7 @@ def main():
parser.add_argument('--path',default = "nightly", help='specify the path for where the nightly repositories will be cloned ex: "nightly-runs') parser.add_argument('--path',default = "nightly", help='specify the path for where the nightly repositories will be cloned ex: "nightly-runs')
parser.add_argument('--repository',default = "https://github.com/openhwgroup/cvw", help='specify which github repository you want to clone') parser.add_argument('--repository',default = "https://github.com/openhwgroup/cvw", help='specify which github repository you want to clone')
parser.add_argument('--target', default = "all", help='types of tests you can make are: all, wally-riscv-arch-test, no') parser.add_argument('--target', default = "--jobs", help='types of tests you can make are: all, wally-riscv-arch-test, no')
parser.add_argument('--tests', default = "nightly", help='types of tests you can run are: nightly, test, test_lint') parser.add_argument('--tests', default = "nightly", help='types of tests you can run are: nightly, test, test_lint')
parser.add_argument('--send_email',default = "", nargs="+", help='What emails to send test results to. Example: "[email1],[email2],..."') parser.add_argument('--send_email',default = "", nargs="+", help='What emails to send test results to. Example: "[email1],[email2],..."')
@ -682,7 +694,7 @@ def main():
log_file_path = log_path.joinpath("nightly_build.log") log_file_path = log_path.joinpath("nightly_build.log")
previous_cvw_path = Path.home().joinpath(args.path,f"{yesterday}/cvw") previous_cvw_path = Path.home().joinpath(args.path,f"{yesterday}/cvw")
# creates the object # creates the object
folder_manager = FolderManager() folder_manager = FolderManager(basedir=args.path)
# setting the path on where to clone new repositories of cvw # setting the path on where to clone new repositories of cvw
folder_manager.create_folders([cvw_path, results_path, log_path]) folder_manager.create_folders([cvw_path, results_path, log_path])
@ -691,14 +703,18 @@ def main():
folder_manager.clone_repository(cvw_path, args.repository) folder_manager.clone_repository(cvw_path, args.repository)
# Define tests that we can run # Define tests that we can run
if (args.tests == "nightly"): #
test_list = [["python", "regression-wally", "--nightly --buildroot"]] # flags are a list
elif (args.tests == "test"): if (args.tests == "all"):
test_list = [["python", "regression-wally", ""]] test_list = [["python", "./regression-wally", ["--nightly", "--buildroot"]]]
elif (args.tests == "test_lint"): elif (args.tests == "nightly"):
test_list = [["bash", "lint-wally", "-nightly"]] test_list = [["python", "./regression-wally", ["--nightly"]]]
elif (args.tests == "regression"):
test_list = [["python", "./regression-wally", []]]
elif (args.tests == "lint"):
test_list = [["bash", "./lint-wally", ["--nightly"]]]
else: else:
print(f"Error: Invalid test '"+args.test+"' specified") print(f"Error: Invalid test {args.tests} specified")
raise SystemExit raise SystemExit
############################################# #############################################
@ -747,12 +763,12 @@ def main():
if args.target != "no": if args.target != "no":
test_runner.execute_makefile(target = args.target, makefile_path=test_runner.cvw) test_runner.execute_makefile(target = args.target, makefile_path=test_runner.cvw)
if args.target == "all": # TODO: remove vestigial code if no longer wanted
# Compile Linux for local testing # if args.target == "all":
test_runner.set_env_var("RISCV",str(test_runner.cvw)) # # Compile Linux for local testing
linux_path = test_runner.cvw / "linux" # test_runner.set_env_var("RISCV",str(test_runner.cvw))
test_runner.execute_makefile(target = "all_nosudo", makefile_path=linux_path) # linux_path = test_runner.cvw / "linux"
test_runner.execute_makefile(target = "dumptvs_nosudo", makefile_path=linux_path) # test_runner.execute_makefile(target = "all", makefile_path=linux_path)
############################################# #############################################
# RUN TESTS # # RUN TESTS #
@ -766,9 +782,9 @@ def main():
total_failures = [] total_failures = []
total_success = [] total_success = []
for test_type, test_name, test_extension in test_list: for test_type, test_name, test_extensions in test_list:
check, output_location = test_runner.run_tests(test_type=test_type, test_name=test_name, test_extension=test_extension) check, output_location = test_runner.run_tests(test_type=test_type, test_name=test_name, test_extensions=test_extensions)
try: try:
if check: # this checks if the test actually ran successfuly if check: # this checks if the test actually ran successfuly
output_log_list.append(output_location) output_log_list.append(output_location)
@ -778,7 +794,7 @@ def main():
passed, failed = test_runner.clean_format_output(input_file = output_location) passed, failed = test_runner.clean_format_output(input_file = output_location)
logger.info(f"{test_name} has been formatted to markdown") logger.info(f"{test_name} has been formatted to markdown")
except: except:
logger.ERROR(f"Error occured with formatting {test_name}") logger.error(f"Error occured with formatting {test_name}")
logger.info(f"The # of failures are for {test_name}: {len(failed)}") logger.info(f"The # of failures are for {test_name}: {len(failed)}")
total_number_failures+= len(failed) total_number_failures+= len(failed)
@ -789,14 +805,18 @@ def main():
total_success.append(passed) total_success.append(passed)
test_runner.rewrite_to_markdown(test_name, passed, failed) test_runner.rewrite_to_markdown(test_name, passed, failed)
newlinechar = "\n"
logger.info(f"Failed tests: \n{newlinechar.join([x[0] for x in failed])}")
except Exception as e: except Exception as e:
logger.error("There was an error in running the tests: {e}") logger.error(f"There was an error in running the tests: {e}")
logger.info(f"The total sucesses for all tests ran are: {total_number_success}") logger.info(f"The total sucesses for all tests ran are: {total_number_success}")
logger.info(f"The total failures for all tests ran are: {total_number_failures}") logger.info(f"The total failures for all tests ran are: {total_number_failures}")
# Copy actual test logs from sim/questa, sim/verilator # Copy actual test logs from sim/questa, sim/verilator, sim/vcs
test_runner.copy_sim_logs([test_runner.cvw / "sim/questa/logs", test_runner.cvw / "sim/verilator/logs"]) if not args.tests == "test_lint":
test_runner.copy_sim_logs([test_runner.cvw / "sim/questa/logs", test_runner.cvw / "sim/verilator/logs", test_runner.cvw / "sim/vcs/logs"])
############################################# #############################################
# FORMAT TESTS # # FORMAT TESTS #
@@ -507,7 +507,7 @@ def main():
        TIMEOUT_DUR = 20*60 # seconds
        os.system('rm -f questa/cov/*.ucdb')
    elif args.fcov:
-        TIMEOUT_DUR = 2*60
+        TIMEOUT_DUR = 8*60
        os.system('rm -f questa/fcov_ucdb/* questa/fcov_logs/* questa/fcov/*')
    elif args.buildroot:
        TIMEOUT_DUR = 60*1440 # 1 day

@@ -534,6 +534,7 @@ def main():
        try:
            num_fail+=result.get(timeout=TIMEOUT_DUR)
        except TimeoutError:
+            pool.terminate()
            num_fail+=1
            print(f"{bcolors.FAIL}%s: Timeout - runtime exceeded %d seconds{bcolors.ENDC}" % (config.cmd, TIMEOUT_DUR))
@@ -48,28 +48,64 @@ ENDC='\033[0m' # Reset to default color
 error() {
     echo -e "${FAIL_COLOR}Error: $STATUS installation failed"
     echo -e "Error on line ${BASH_LINENO[0]} with command $BASH_COMMAND${ENDC}"
+    if [ -e "$RISCV/logs/$STATUS.log" ]; then
         echo -e "Please check the log in $RISCV/logs/$STATUS.log for more information."
+    fi
     exit 1
 }

 # Check if a git repository exists, is up to date, and has been installed
-# Clones the repository if it doesn't exist
+# clones the repository if it doesn't exist
+# $1: repo name
+# $2: repo url to clone from
+# $3: file to check if already installed
+# $4: upstream branch, optional, default is master
 git_check() {
     local repo=$1
     local url=$2
     local check=$3
     local branch="${4:-master}"
-    if [[ ((! -e $repo) && ($(git clone "$url") || true)) || ($(cd "$repo"; git fetch; git rev-parse HEAD) != $(cd "$repo"; git rev-parse origin/"$branch")) || (! -e $check) ]]; then
-        return 0
+
+    # Clone repo if it doesn't exist
+    if [[ ! -e $repo ]]; then
+        for ((i=1; i<=5; i++)); do
+            git clone "$url" && break
+            echo -e "${WARNING_COLOR}Failed to clone $repo. Retrying.${ENDC}"
+            rm -rf "$repo"
+            sleep $i
+        done
+        if [[ ! -e $repo ]]; then
+            echo -e "${ERROR_COLOR}Failed to clone $repo after 5 attempts. Exiting.${ENDC}"
+            exit 1
+        fi
+    fi
+
+    # Get the current HEAD commit hash and the remote branch commit hash
+    cd "$repo"
+    git fetch
+    local local_head=$(git rev-parse HEAD)
+    local remote_head=$(git rev-parse origin/"$branch")
+
+    # Check if the git repository is not up to date or the specified file does not exist
+    if [[ "$local_head" != "$remote_head" ]]; then
+        echo "$repo is not up to date. Updating now."
+        true
+    elif [[ ! -e $check ]]; then
+        true
     else
-        return 1
+        false
     fi
 }

 # Log output to a file and only print lines with keywords
 logger() {
-    local log="$RISCV/logs/$1.log"
-    cat < /dev/stdin | tee -a "$log" | (grep -iE --color=never "(\bwarning|\berror|\bfail|\bsuccess|\bstamp|\bdoesn't work)" || true) | (grep -viE --color=never "(_warning|warning_|_error|error_|-warning|warning-|-error|error-|Werror|error\.o|warning flags)" || true)
+    local log_file="$RISCV/logs/$1.log"
+    local keyword_pattern="(\bwarning|\berror|\bfail|\bsuccess|\bstamp|\bdoesn't work)"
+    local exclude_pattern="(_warning|warning_|_error|error_|-warning|warning-|-error|error-|Werror|error\.o|warning flags)"
+
+    cat < /dev/stdin | tee -a "$log_file" | \
+        (grep -iE --color=never "$keyword_pattern" || true) | \
+        (grep -viE --color=never "$exclude_pattern" || true)
 }
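The include-then-exclude grep pipeline in `logger` can be mirrored in Python to see which lines survive. The two regex patterns are copied from the script; the sample lines are invented for illustration:

```python
import re

# keep lines containing one of these keywords...
KEYWORDS = re.compile(r"(\bwarning|\berror|\bfail|\bsuccess|\bstamp|\bdoesn't work)", re.IGNORECASE)
# ...then drop lines matching one of these exclusions (e.g. -Werror compiler flags)
EXCLUDE = re.compile(r"(_warning|warning_|_error|error_|-warning|warning-|-error|error-|Werror|error\.o|warning flags)", re.IGNORECASE)

def filter_log(lines):
    return [l for l in lines if KEYWORDS.search(l) and not EXCLUDE.search(l)]

shown = filter_log([
    "Error: qemu install failed",    # keyword match, kept
    "touching stamps/error_log",     # keyword "stamp" but excluded by "error_"
    "make: nothing to be done",      # no keyword, dropped
])
print(shown)
```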
 set -e # break on error

@@ -111,6 +147,10 @@ fi
 export PATH=$PATH:$RISCV/bin:/usr/bin
 export PKG_CONFIG_PATH=$RISCV/lib64/pkgconfig:$RISCV/lib/pkgconfig:$RISCV/share/pkgconfig:$RISCV/lib/x86_64-linux-gnu/pkgconfig:$PKG_CONFIG_PATH

+if (( RHEL_VERSION != 8 )); then
+    retry_on_host_error="--retry-on-host-error"
+fi
+
 # Check for incompatible PATH environment variable before proceeding with installation
 if [[ ":$PATH:" == *::* || ":$PATH:" == *:.:* ]]; then
     echo -e "${FAIL_COLOR}Error: You seem to have the current working directory in your \$PATH environment variable."

@@ -191,11 +231,13 @@ if (( RHEL_VERSION == 8 )) || (( UBUNTU_VERSION == 20 )); then
     section_header "Installing glib"
     pip install -U meson # Meson is needed to build glib
     cd "$RISCV"
-    curl --location https://download.gnome.org/sources/glib/2.70/glib-2.70.5.tar.xz | tar xJ
+    wget -nv --retry-connrefused $retry_on_host_error https://download.gnome.org/sources/glib/2.70/glib-2.70.5.tar.xz
+    tar -xJf glib-2.70.5.tar.xz
+    rm -f glib-2.70.5.tar.xz
     cd glib-2.70.5
     meson setup _build --prefix="$RISCV"
-    meson compile -C _build
-    meson install -C _build
+    meson compile -C _build -j "${NUM_THREADS}" 2>&1 | logger $STATUS; [ "${PIPESTATUS[0]}" == 0 ]
+    meson install -C _build 2>&1 | logger $STATUS; [ "${PIPESTATUS[0]}" == 0 ]
     cd "$RISCV"
     rm -rf glib-2.70.5
     echo -e "${SUCCESS_COLOR}glib successfully installed!${ENDC}"

@@ -208,11 +250,13 @@ if (( RHEL_VERSION == 8 )); then
 if [ ! -e "$RISCV"/include/gmp.h ]; then
     section_header "Installing gmp"
     cd "$RISCV"
-    curl --location https://ftp.gnu.org/gnu/gmp/gmp-6.3.0.tar.xz | tar xJ
+    wget -nv --retry-connrefused $retry_on_host_error https://ftp.gnu.org/gnu/gmp/gmp-6.3.0.tar.xz
+    tar -xJf gmp-6.3.0.tar.xz
+    rm -f gmp-6.3.0.tar.xz
     cd gmp-6.3.0
     ./configure --prefix="$RISCV"
-    make -j "${NUM_THREADS}"
-    make install
+    make -j "${NUM_THREADS}" 2>&1 | logger $STATUS; [ "${PIPESTATUS[0]}" == 0 ]
+    make install 2>&1 | logger $STATUS; [ "${PIPESTATUS[0]}" == 0 ]
     cd "$RISCV"
     rm -rf gmp-6.3.0
     echo -e "${SUCCESS_COLOR}gmp successfully installed!${ENDC}"

@@ -231,7 +275,7 @@ STATUS="riscv-gnu-toolchain"
 cd "$RISCV"
 # Temporarily pin riscv-gnu-toolchain to use GCC 13.2.0. GCC 14 does not work with the Q extension.
 if git_check "riscv-gnu-toolchain" "https://github.com/riscv/riscv-gnu-toolchain" "$RISCV/riscv-gnu-toolchain/stamps/build-gcc-newlib-stage2"; then
-    cd riscv-gnu-toolchain
+    cd "$RISCV"/riscv-gnu-toolchain
     git reset --hard && git clean -f && git checkout master && git pull
     ./configure --prefix="${RISCV}" --with-multilib-generator="rv32e-ilp32e--;rv32i-ilp32--;rv32im-ilp32--;rv32iac-ilp32--;rv32imac-ilp32--;rv32imafc-ilp32f--;rv32imafdc-ilp32d--;rv64i-lp64--;rv64ic-lp64--;rv64iac-lp64--;rv64imac-lp64--;rv64imafdc-lp64d--;rv64im-lp64--;"
     make -j "${NUM_THREADS}" 2>&1 | logger $STATUS; [ "${PIPESTATUS[0]}" == 0 ]

@@ -257,7 +301,7 @@ STATUS="elf2hex"
 cd "$RISCV"
 export PATH=$RISCV/bin:$PATH
 if git_check "elf2hex" "https://github.com/sifive/elf2hex.git" "$RISCV/bin/riscv64-unknown-elf-elf2bin"; then
-    cd elf2hex
+    cd "$RISCV"/elf2hex
     git reset --hard && git clean -f && git checkout master && git pull
     autoreconf -i
     ./configure --target=riscv64-unknown-elf --prefix="$RISCV"

@@ -279,7 +323,7 @@ section_header "Installing/Updating QEMU"
 STATUS="qemu"
 cd "$RISCV"
 if git_check "qemu" "https://github.com/qemu/qemu" "$RISCV/include/qemu-plugin.h"; then
-    cd qemu
+    cd "$RISCV"/qemu
     git reset --hard && git clean -f && git checkout master && git pull --recurse-submodules -j "${NUM_THREADS}"
     git submodule update --init --recursive
     ./configure --target-list=riscv64-softmmu --prefix="$RISCV"

@@ -301,7 +345,7 @@ section_header "Installing/Updating SPIKE"
 STATUS="spike"
 cd "$RISCV"
 if git_check "riscv-isa-sim" "https://github.com/riscv-software-src/riscv-isa-sim" "$RISCV/lib/pkgconfig/riscv-riscv.pc"; then
-    cd riscv-isa-sim
+    cd "$RISCV"/riscv-isa-sim
     git reset --hard && git clean -f && git checkout master && git pull
     mkdir -p build
     cd build

@@ -327,7 +371,7 @@ STATUS="verilator"
 cd "$RISCV"
 if git_check "verilator" "https://github.com/verilator/verilator" "$RISCV/share/pkgconfig/verilator.pc"; then
     unset VERILATOR_ROOT
-    cd verilator
+    cd "$RISCV"/verilator
     git reset --hard && git clean -f && git checkout master && git pull
     autoconf
     ./configure --prefix="$RISCV"

@@ -352,7 +396,9 @@ section_header "Installing/Updating Sail Compiler"
 STATUS="Sail Compiler"
 if [ ! -e "$RISCV"/bin/sail ]; then
     cd "$RISCV"
-    curl --location https://github.com/rems-project/sail/releases/latest/download/sail.tar.gz | tar xvz --directory="$RISCV" --strip-components=1
+    wget -nv --retry-connrefused $retry_on_host_error --output-document=sail.tar.gz https://github.com/rems-project/sail/releases/latest/download/sail.tar.gz
+    tar xz --directory="$RISCV" --strip-components=1 -f sail.tar.gz
+    rm -f sail.tar.gz
     echo -e "${SUCCESS_COLOR}Sail Compiler successfully installed/updated!${ENDC}"
 else
     echo -e "${SUCCESS_COLOR}Sail Compiler already installed.${ENDC}"

@@ -363,7 +409,7 @@ fi
 section_header "Installing/Updating RISC-V Sail Model"
 STATUS="riscv-sail-model"
 if git_check "sail-riscv" "https://github.com/riscv/sail-riscv.git" "$RISCV/bin/riscv_sim_RV32"; then
-    cd sail-riscv
+    cd "$RISCV"/sail-riscv
     git reset --hard && git clean -f && git checkout master && git pull
     ARCH=RV64 make -j "${NUM_THREADS}" c_emulator/riscv_sim_RV64 2>&1 | logger $STATUS; [ "${PIPESTATUS[0]}" == 0 ]
     ARCH=RV32 make -j "${NUM_THREADS}" c_emulator/riscv_sim_RV32 2>&1 | logger $STATUS; [ "${PIPESTATUS[0]}" == 0 ]

@@ -386,7 +432,7 @@ STATUS="OSU Skywater 130 cell library"
 mkdir -p "$RISCV"/cad/lib
 cd "$RISCV"/cad/lib
 if git_check "sky130_osu_sc_t12" "https://foss-eda-tools.googlesource.com/skywater-pdk/libs/sky130_osu_sc_t12" "$RISCV/cad/lib/sky130_osu_sc_t12" "main"; then
-    cd sky130_osu_sc_t12
+    cd "$RISCV"/sky130_osu_sc_t12
     git reset --hard && git clean -f && git checkout main && git pull
     echo -e "${SUCCESS_COLOR}OSU Skywater library successfully installed!${ENDC}"
 else

@@ -428,8 +474,8 @@ section_header "Downloading Site Setup Script"
 STATUS="site-setup scripts"
 cd "$RISCV"
 if [ ! -e "${RISCV}"/site-setup.sh ]; then
-    wget https://raw.githubusercontent.com/openhwgroup/cvw/main/site-setup.sh
-    wget https://raw.githubusercontent.com/openhwgroup/cvw/main/site-setup.csh
+    wget -nv --retry-connrefused $retry_on_host_error https://raw.githubusercontent.com/openhwgroup/cvw/main/site-setup.sh
+    wget -nv --retry-connrefused $retry_on_host_error https://raw.githubusercontent.com/openhwgroup/cvw/main/site-setup.csh
     echo -e "${SUCCESS_COLOR}Site setup script successfully downloaded!${ENDC}"
     echo -e "${WARNING_COLOR}Make sure to edit the environment variables in $RISCV/site-setup.sh (or .csh) to point to your installation of EDA tools and license files.${ENDC}"
 else
@@ -5,13 +5,21 @@
 // This file is needed in the config subdirectory for each config supporting coverage.
 // It defines which extensions are enabled for that config.

+// Unprivileged extensions
 `include "RV32I_coverage.svh"
 `include "RV32M_coverage.svh"
 `include "RV32F_coverage.svh"
+`include "RV32D_coverage.svh"
+`include "RV32ZfhD_coverage.svh"
 `include "RV32Zfh_coverage.svh"
 `include "RV32Zicond_coverage.svh"
 `include "RV32Zca_coverage.svh"
 `include "RV32Zcb_coverage.svh"
 `include "RV32ZcbM_coverage.svh"
 `include "RV32ZcbZbb_coverage.svh"
+`include "RV32Zcf_coverage.svh"
+`include "RV32Zcd_coverage.svh"
+
+// Privileged extensions
+`include "ZicsrM_coverage.svh"
 `include "VM_coverage.svh"
@@ -59,6 +59,21 @@
 #--override cpu/instret_undefined=T
 #--override cpu/hpmcounter_undefined=T

+# context registers not implemented
+--override cpu/scontext_undefined=T
+--override cpu/mcontext_undefined=T
+
+--override no_pseudo_inst=T # For code coverage, don't produce pseudoinstructions
+
+# mcause and scause only have 4 lsbs of code and 1 msb of interrupt flag
+#--override cpu/ecode_mask=0x8000000F # for RV32
+--override cpu/ecode_mask=0x800000000000000F # for RV64
+
+# Debug mode not yet supported
+--override cpu/debug_mode=none
+
 --override cpu/reset_address=0x80000000
 --override cpu/unaligned=F # Zicclsm (should be true)
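 
The `ecode_mask` values added above are just the interrupt flag in the most significant bit plus the 4 low exception-code bits, which can be checked arithmetically:

```python
# mcause/scause layout: 1 msb interrupt flag + 4 lsbs of exception code
interrupt_flag = 1 << 63       # msb on RV64
code_bits = 0xF                # low 4 code bits
rv64_mask = interrupt_flag | code_bits

rv32_mask = (1 << 31) | 0xF    # same layout with a 32-bit register

print(hex(rv64_mask), hex(rv32_mask))
```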
@@ -5,18 +5,25 @@
 // This file is needed in the config subdirectory for each config supporting coverage.
 // It defines which extensions are enabled for that config.

+// Unprivileged extensions
 `include "RV64I_coverage.svh"
 `include "RV64M_coverage.svh"
 `include "RV64F_coverage.svh"
+`include "RV64D_coverage.svh"
+`include "RV64ZfhD_coverage.svh"
 `include "RV64Zfh_coverage.svh"
-`include "VM_coverage.svh"
-`include "RV64VM_PMP_coverage.svh"
-`include "RV64CBO_VM_coverage.svh"
-`include "RV64CBO_PMP_coverage.svh"
-`include "RV64Zicbom_coverage.svh"
 `include "RV64Zicond_coverage.svh"
 `include "RV64Zca_coverage.svh"
 `include "RV64Zcb_coverage.svh"
 `include "RV64ZcbM_coverage.svh"
 `include "RV64ZcbZbb_coverage.svh"
 `include "RV64ZcbZba_coverage.svh"
+`include "RV64Zcd_coverage.svh"
+
+// Privileged extensions
+`include "RVVM_coverage.svh"
+`include "ZicsrM_coverage.svh"
+// `include "RV64VM_PMP_coverage.svh"
+// `include "RV64CBO_VM_coverage.svh"
+// `include "RV64CBO_PMP_coverage.svh"
+// `include "RV64Zicbom_coverage.svh"
@@ -57,15 +57,24 @@
 #--override cpu/instret_undefined=T
 #--override cpu/hpmcounter_undefined=T

+# context registers not implemented
 --override cpu/scontext_undefined=T
 --override cpu/mcontext_undefined=T

+--override no_pseudo_inst=T # For code coverage, don't produce pseudoinstructions
+
+# nonratified mnoise register not implemented
 --override cpu/mnoise_undefined=T

-# *** how to override other undefined registers: seed, mphmevent, mseccfg, debugger registers
-#--override cpu/seed_undefined=T
-#--override mhpmevent3_undefined=T
-#--override cpu/mseccfg_undefined=T
-#--override cpu/tselect_undefined=T
-#--override cpu/tdata1_undefined=T
+# mcause and scause only have 4 lsbs of code and 1 msb of interrupt flag
+#--override cpu/ecode_mask=0x8000000F # for RV32
+--override cpu/ecode_mask=0x800000000000000F # for RV64
+
+# Debug mode not yet supported
+--override cpu/debug_mode=none
+
+# Zkr entropy source and seed register not supported.
+--override cpu/Zkr=F

 --override cpu/reset_address=0x80000000
@@ -1,41 +0,0 @@
-### Cross-Compile Buildroot Linux
-
-Building Linux is only necessary for exploring the boot process in Chapter 17. Building and generating a trace is a time-consuming operation that could be skipped for now; you can return to this section later if you are interested in the Linux details.
-
-Buildroot depends on configuration files in riscv-wally, so the cad user must install Wally first according to the instructions in Section 2.2.2. However, don't source ~/wally-riscv/setup.sh because it will set LD_LIBRARY_PATH in a way that causes make to fail on buildroot.
-
-To configure and build Buildroot:
-
-$ cd $RISCV
-$ export WALLY=~/riscv-wally # make sure you haven't sourced ~/riscv-wally/setup.sh by now
-$ git clone https://github.com/buildroot/buildroot.git
-$ cd buildroot
-$ git checkout 2021.05 # last tested working version
-$ cp -r $WALLY/linux/buildroot-config-src/wally ./board
-$ cp ./board/wally/main.config .config
-$ make --jobs
-
-To generate disassembly files and the device tree, run another make script. Note that you can expect some warnings about phandle references while running dtc on wally-virt.dtb.
-
-Depending on your system configuration, this makefile may need a bit of tweaking. It places the output buildroot images in $RISCV/linux-testvectors and the buildroot object dumps in $RISCV/buildroot/output/images/disassembly. If these directories are owned by root, the makefile will likely fail. You can either change the makefile's target directories or temporarily change the owner of the two directories.
-
-$ source ~/riscv-wally/setup.sh
-$ cd $WALLY/linux/buildroot-scripts
-$ make all
-
-Note: When the make tasks complete, you'll find source code in $RISCV/buildroot/output/build and the executables in $RISCV/buildroot/output/images.
-
-### Generate load images for linux boot
-
-The Questa linux boot uses preloaded bootram and ram memory. We use QEMU to generate these preloaded memory files. Files are output in $RISCV/linux-testvectors.
-
-cd cvw/linux/testvector-generation
-./genInitMem.sh
-
-This may require changing file permissions to the linux-testvectors directory.
-
-### Generate QEMU linux trace
-
-The linux testbench can compare, instruction by instruction, Wally's committed instructions against QEMU. To do this, QEMU outputs a log file consisting of all instructions executed. Interrupts are handled by forcing the testbench to generate an interrupt at the same cycle as in QEMU. Generating this trace will take more than 24 hours.
-
-cd cvw/linux/testvector-generation
-./genTrace.sh
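The lockstep comparison described in the removed README can be sketched as a simple walk over two trace logs. The trace format and field contents below are invented for illustration; the real testbench compares far richer per-instruction state:

```python
def compare_traces(wally_trace, qemu_trace):
    """Return the index of the first diverging instruction, or None if the traces agree."""
    for i, (w, q) in enumerate(zip(wally_trace, qemu_trace)):
        if w != q:
            return i
    return None

wally = ["80000000 auipc", "80000004 addi", "80000008 jal"]
qemu  = ["80000000 auipc", "80000004 addi", "8000000c jal"]
divergence = compare_traces(wally, qemu)
print(divergence)  # first mismatch at index 2
```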
@ -6,29 +6,39 @@ Wally supports the following boards
# Quick Start # Quick Start
## build FPGA ## Build FPGA
`cd generator ```bash
make <board name>` cd generator
make <board name>
```
example Example:
`make vcu108` ```bash
make vcu108
```
## Make flash card image ## Make flash card image
ls /dev/sd* or ls /dev/mmc* to see which flash card devices you have. `ls /dev/sd*` or `ls /dev/mmc*` to see which flash card devices you have.
Insert the flash card into the reader and ls /dev/sd* or /dev/mmc* again. The new device is the one you want to use. Make sure you select the root device (i.e. /dev/sdb) not the partition (i.e. /dev/sdb1). Insert the flash card into the reader and `ls /dev/sd*` or `/dev/mmc*` again. The new device is the one you want to use. Make sure you select the root device (i.e. `/dev/sdb`) not the partition (i.e. `/dev/sdb1`).
`cd $WALLY/linux/sd-card` ```bash
cd $WALLY/linux/sd-card
```
This following script requires root. This following script requires root.
`./flash-sd.sh -b <path to buildroot> -d <path to compiled device tree file> <flash card device>` ```bash
./flash-sd.sh -b <path to buildroot> -d <path to compiled device tree file> <flash card device>
```
example with vcu108, buildroot installed to /opt/riscv/buildroot, and the flash card is device /dev/sdc Example with vcu108, buildroot installed to `/opt/riscv/buildroot`, and the flash card is device `/dev/sdc`
`./flash-sd.sh -b /opt/riscv/buildroot -d /opt/riscv/buildroot/output/images/wally-vcu108.dtb /dev/sdc` ```bash
./flash-sd.sh -b /opt/riscv/buildroot -d /opt/riscv/buildroot/output/images/wally-vcu108.dtb /dev/sdc
```
Wait until the the script completes then remove the car. Wait until the the script completes then remove the card.
## FPGA setup ## FPGA setup
@ -36,22 +46,26 @@ For the Arty A7 insert the PMOD daughter board into the right most slot and inse
For the VCU108 and VCU118 boards insert the PMOD daughter board into the only PMOD slot on the right side of the boards. For the VCU108 and VCU118 boards insert the PMOD daughter board into the only PMOD slot on the right side of the boards.
Power on the boards. For the Arty A7, just plug in the USB connector. For the VCU boards, make sure the power supply is connected and the two USB cables are connected, then flip on the switch.
The VCU118's on-board UART converter does not work. Use a SparkFun FTDI USB-to-UART adapter and plug it into the main PMOD on the right side of the board. Also, the level shifters on the
VCU118 do not work correctly with the Digilent SD PMOD board. We have a custom board which works instead.
```bash
cd $WALLY/fpga/generator
vivado &
```
Open the design in the current directory, `WallyFPGA.xpr`.
Then click "Open Target" under "PROGRAM AND DEBUG" and program the device.
## Connect to UART
In another terminal, run `ls /dev/ttyUSB*`. One of these devices will be the UART connected to Wally. You may have to experiment by running the following command multiple times.
```bash
screen /dev/ttyUSB1 115200
```
Swap out `USB1` for `USB0` or another `ttyUSB` device as needed.
@@ -27,9 +27,16 @@ BINARIES := fw_jump.elf vmlinux busybox
OBJDUMPS := $(foreach name, $(BINARIES), $(basename $(name) .elf))
OBJDUMPS := $(foreach name, $(OBJDUMPS), $(DIS)/$(name).objdump)
.PHONY: all generate disassemble install clean cleanDTB check_write_permissions check_environment

all: check_environment check_write_permissions clean download Image disassemble install dumptvs
check_environment: $(RISCV)
ifeq ($(findstring :$(RISCV)/lib:,:$(LD_LIBRARY_PATH):),)
@(echo "ERROR: Your environment variables are not set correctly." >&2 \
&& echo "Make sure to source setup.sh or install buildroot using the wally-tool-chain-install.sh script." >&2 \
&& exit 1)
endif
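The `check_environment` guard above uses a colon-delimiter membership test on `LD_LIBRARY_PATH`. A minimal Python sketch of the same check (the paths below are illustrative, not taken from any particular install):

```python
# Sketch of the colon-delimiter membership test used by check_environment.
# Wrapping both the search path and the entry in ':' avoids false positives
# on substring matches (e.g. /opt/riscv/lib2 matching /opt/riscv/lib).

def lib_path_configured(ld_library_path: str, riscv: str) -> bool:
    """Return True if $RISCV/lib appears as a complete entry in LD_LIBRARY_PATH."""
    return f":{riscv}/lib:" in f":{ld_library_path}:"

print(lib_path_configured("/opt/riscv/lib:/usr/lib", "/opt/riscv"))    # True
print(lib_path_configured("/opt/riscv/lib64:/usr/lib", "/opt/riscv"))  # False
```

The same trick is why the Makefile compares against `:$(LD_LIBRARY_PATH):` rather than the raw variable.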
check_write_permissions:
ifeq ($(SUDO), sudo)
@@ -41,17 +48,17 @@ endif
	&& exit 1)
	@$(SUDO) rm -r $(RISCV)/.test

Image: check_environment
	bash -c "unset LD_LIBRARY_PATH; $(MAKE) -C $(BUILDROOT)"
	$(MAKE) generate
	@echo "Buildroot Image successfully generated."
install: check_write_permissions check_environment
	$(SUDO) rm -rf $(RISCV)/$(BUILDROOT)
	$(SUDO) mv $(BUILDROOT) $(RISCV)/$(BUILDROOT)
	@echo "Buildroot successfully installed."
dumptvs: check_write_permissions check_environment
	$(SUDO) mkdir -p $(RISCV)/linux-testvectors
	cd testvector-generation; ./genInitMem.sh
	@echo "Testvectors successfully generated."
@@ -70,7 +77,7 @@ $(RISCV):
	@ echo "and sourced setup.sh"

# Disassembly rules ---------------------------------------------------

disassemble: check_environment
	rm -rf $(BUILDROOT)/output/images/disassembly
	find $(BUILDROOT)/output/build/linux-* -maxdepth 1 -name "vmlinux" | xargs cp -t $(BUILDROOT)/output/images/
	mkdir -p $(DIS)
@@ -114,9 +121,6 @@ $(BUILDROOT):
# ---------------------------------------------------------------------
cleanDriver:
rm -f $(DRIVER)
cleanDTB:
	rm -f $(IMAGES)/*.dtb
@@ -12,23 +12,31 @@
In order to generate the Linux and boot stage binaries compatible with Wally, Buildroot is used for cross-compilation.
To set up a Buildroot directory, configuration files for Buildroot, Linux, and Busybox must be copied into the correct locations inside the main Buildroot directory. Buildroot and device tree binaries must be generated as well.

This can all be done automatically using the Makefile inside Wally's Linux subdirectory (this one). The main Wally installation script (`bin/wally-tool-chain-install.sh`) runs this by default, so buildroot is likely already set up. Otherwise, to install a new buildroot directory, build the Buildroot binaries, generate the device tree binaries, and generate testvectors for simulation, run:

```bash
$ make
```
This installs to the `$RISCV` directory. Buildroot itself is installed to `$RISCV/buildroot` and the test-vectors are installed to `$RISCV/linux-testvectors`.

Optionally, you can override the `BUILDROOT` variable to install a different buildroot source directory.
```bash
$ make install BUILDROOT=<path/to/buildroot>
```
## Generating Device Tree Binaries <a name="devicetree"></a>

The device tree files for the various FPGAs Wally supports, as well as QEMU's device tree for the virt machine, are located in the `./devicetree` subdirectory. These device tree files are necessary for the boot process.

They are built automatically using the main `make` command. To build the device tree binaries (.dtb) from the device tree sources (.dts) separately, we can build all of them at once using:
```bash
$ make generate # optionally override BUILDROOT
```
The .dtb files will end up in the `<BUILDROOT>/output/images` folder of your chosen buildroot directory.
@@ -38,23 +46,30 @@ By using the `riscv64-unknown-elf-objdump` utility, we can disassemble the binar
The disassembled binaries are built automatically using the main `make` command. To create the disassembled binaries separately, run:
```bash
$ make disassemble # optionally override BUILDROOT
```
You'll find the resulting disassembled files in `<BUILDROOT>/output/images/disassembly`.

## Generate Memory Files for Linux Boot <a name="testvectors"></a>
Running a Linux boot simulation uses a preloaded bootrom and ram memory. We use QEMU to generate these preloaded memory files. The files are output to `$RISCV/linux-testvectors`. The memory files are generated automatically when using the main `make` command. Alternatively, they can be generated by running:
```bash
$ make dumptvs
```
## Creating a Bootable SD Card <a name="sdcard"></a>

To flash a bootable SD card for Wally's bootloader, use the `flash-sd.sh` script located in `<WALLY>/linux/sdcard`. The script allows you to specify which buildroot directory you would like to use and to specify the device tree. By default it is set up for the default location of buildroot in `$RISCV` and uses the vcu108 device tree. To use the script with your own buildroot directory and device tree, type:
```bash
$ cd sdcard
$ ./flash-sd.sh -b <path/to/buildroot> -d <device tree name> <DEVICE>
```
For example:
```bash
$ ./flash-sd.sh -b ~/repos/buildroot -d wally-vcu118.dtb /dev/sdb
```
@@ -181,7 +181,7 @@ if {$DEBUG > 0} {
# suppress spurious warnings about
# "Extra checking for conflicts with always_comb done at vopt time"
# because vsim will run vopt
set INC_DIRS "+incdir+${CONFIG}/${CFG} +incdir+${CONFIG}/deriv/${CFG} +incdir+${CONFIG}/shared +incdir+${FCRVVI} +incdir+${FCRVVI}/rv32 +incdir+${FCRVVI}/rv64 +incdir+${FCRVVI}/rv64_priv +incdir+${FCRVVI}/priv +incdir+${FCRVVI}/common +incdir+${FCRVVI}"
set SOURCES "${SRC}/cvw.sv ${TB}/${TESTBENCH}.sv ${TB}/common/*.sv ${SRC}/*/*.sv ${SRC}/*/*/*.sv ${WALLY}/addins/verilog-ethernet/*/*.sv ${WALLY}/addins/verilog-ethernet/*/*/*/*.sv"
vlog -permissive -lint -work ${WKDIR} {*}${INC_DIRS} {*}${FCvlog} {*}${FCdefineCOVER_EXTS} {*}${lockstepvlog} {*}${SOURCES} -suppress 2282,2583,7053,7063,2596,13286
@@ -8,7 +8,6 @@
// See RISC-V Privileged Mode Specification 20190608 3.1.10-11
//
// Documentation: RISC-V System on Chip Design
// MHPMEVENT is not supported
//
// A component of the CORE-V-WALLY configurable RISC-V project.
// https://github.com/openhwgroup/cvw
@@ -66,7 +65,8 @@ module csrc import cvw::*; #(parameter cvw_t P) (
localparam MTIME = 12'hB01; // this is a memory-mapped register; no such CSR exists, and access should fault
localparam MHPMCOUNTERHBASE = 12'hB80;
localparam MTIMEH = 12'hB81; // this is a memory-mapped register; no such CSR exists, and access should fault
localparam MHPMEVENTBASE = 12'h323;
localparam MHPMEVENTLAST = 12'h33F;
localparam HPMCOUNTERBASE = 12'hC00;
localparam HPMCOUNTERHBASE = 12'hC80;
localparam TIME = 12'hC01;
@@ -156,6 +156,9 @@ module csrc import cvw::*; #(parameter cvw_t P) (
if (PrivilegeModeW == P.M_MODE |
    MCOUNTEREN_REGW[CounterNumM] & (!P.S_SUPPORTED | PrivilegeModeW == P.S_MODE | SCOUNTEREN_REGW[CounterNumM])) begin
  IllegalCSRCAccessM = 1'b0;
if (CSRAdrM >= MHPMEVENTBASE & CSRAdrM <= MHPMEVENTLAST) begin
CSRCReadValM = '0; // mhpmevent[3:31] tied to read-only zero
end else begin
if (P.XLEN==64) begin // 64-bit counter reads
  // Verilator doesn't realize this only occurs for XLEN=64
  /* verilator lint_off WIDTH */
@@ -188,6 +191,7 @@ module csrc import cvw::*; #(parameter cvw_t P) (
end else begin
  IllegalCSRCAccessM = 1'b1; // requested CSR doesn't exist
end
end
end else begin
  CSRCReadValM = '0;
  IllegalCSRCAccessM = 1'b1; // no privileges for this csr
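The decode added in this hunk can be sketched as a small behavioral model (Python, not the RTL; the address range comes from the `MHPMEVENTBASE`/`MHPMEVENTLAST` localparams above, and the `counters` map is a stand-in for the real counter file):

```python
# Behavioral sketch: reads of mhpmevent3..31 (0x323-0x33F) now return zero
# instead of trapping; other addresses either hit a counter or are illegal.
MHPMEVENTBASE = 0x323
MHPMEVENTLAST = 0x33F

def csr_read(addr: int, counters: dict) -> tuple:
    """Return (read value, illegal-access flag) for a counter-related CSR."""
    if MHPMEVENTBASE <= addr <= MHPMEVENTLAST:
        return 0, False   # mhpmevent[3:31] tied to read-only zero
    if addr in counters:
        return counters[addr], False
    return 0, True        # requested CSR doesn't exist

print(csr_read(0x323, {}))  # (0, False)
```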
@@ -128,10 +128,13 @@ module csrs import cvw::*; #(parameter cvw_t P) (
else
  assign STimerInt = 1'b0;
logic [1:0] LegalizedCBIE;
assign LegalizedCBIE = CSRWriteValM[5:4] == 2'b10 ? SENVCFG_REGW[5:4] : CSRWriteValM[5:4]; // Assume WARL for reserved CBIE = 10, keeps old value
assign SENVCFG_WriteValM = {
  {(P.XLEN-8){1'b0}},
  CSRWriteValM[7] & P.ZICBOZ_SUPPORTED,
  CSRWriteValM[6] & P.ZICBOM_SUPPORTED,
LegalizedCBIE & {2{P.ZICBOM_SUPPORTED}},
  3'b0,
  CSRWriteValM[0] & P.VIRTMEM_SUPPORTED
};
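The `LegalizedCBIE` logic above implements a WARL field: writing the reserved CBIE encoding `2'b10` keeps the old value, while any other write takes effect. A minimal sketch of that legalization rule:

```python
# Sketch of the WARL legalization above: senvcfg.CBIE is a 2-bit WARL field
# where encoding 0b10 is reserved, so a write of 0b10 preserves the old value.
def legalize_cbie(write_val: int, old_val: int) -> int:
    """Legalize a 2-bit CBIE write; reserved encoding 0b10 keeps the old value."""
    return old_val if write_val == 0b10 else write_val

print(legalize_cbie(0b10, 0b01))  # 1 (reserved write keeps old value)
print(legalize_cbie(0b11, 0b01))  # 3 (legal write takes effect)
```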
@@ -221,8 +221,8 @@ module spi_apb import cvw::*; #(parameter cvw_t P) (
SPI_DELAY0: Dout <= {8'b0, Delay0[15:8], 8'b0, Delay0[7:0]};
SPI_DELAY1: Dout <= {8'b0, Delay1[15:8], 8'b0, Delay1[7:0]};
SPI_FMT: Dout <= {12'b0, Format[4:1], 13'b0, Format[0], 2'b0};
SPI_TXDATA: Dout <= {TransmitFIFOWriteFull, 23'b0, 8'b0};
SPI_RXDATA: Dout <= {ReceiveFIFOReadEmpty, 23'b0, ReceiveData[7:0]};
SPI_TXMARK: Dout <= {29'b0, TransmitWatermark};
SPI_RXMARK: Dout <= {29'b0, ReceiveWatermark};
SPI_IE: Dout <= {30'b0, InterruptEnable};
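The `SPI_TXDATA`/`SPI_RXDATA` fix above moves the FIFO status flag from bit 8 to bit 31, with data remaining in bits 7:0. A sketch of the corrected packing, assuming that layout:

```python
# Sketch of the corrected register layouts above: the FIFO status flag sits
# in bit 31 and the data byte in bits 7:0; the middle bits read as zero.
def pack_rxdata(empty: bool, data: int) -> int:
    """{ReceiveFIFOReadEmpty, 23'b0, ReceiveData[7:0]}"""
    return (int(empty) << 31) | (data & 0xFF)

def pack_txdata(full: bool) -> int:
    """{TransmitFIFOWriteFull, 23'b0, 8'b0}"""
    return int(full) << 31

print(hex(pack_rxdata(True, 0xA5)))  # 0x800000a5
```

Software can therefore test the flag with a signed comparison or a bit-31 mask and still read the data byte from the low bits of the same word.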
@@ -234,9 +234,9 @@ module spi_apb import cvw::*; #(parameter cvw_t P) (
// SPI enable generation, where SCLK = PCLK/(2*(SckDiv + 1))
// Asserts SCLKenable at the rising and falling edge of SCLK by counting from 0 to SckDiv
// Active at 2x SCLK frequency to account for implicit half cycle delays and actions on both clock edges depending on phase
// When SckDiv is 0, count doesn't work and SCLKenable is simply PCLK *** dh 10/26/24: this logic is seriously broken. SCLK is not scaled to PCLK/(2*(SckDiv + 1)). SCLKenableEarly doesn't work right for SckDiv=0
assign ZeroDiv = ~|(SckDiv[10:0]);
assign SCLKenable = ZeroDiv ? 1 : (DivCounter == SckDiv);
assign SCLKenableEarly = ((DivCounter + 12'b1) == SckDiv);
always_ff @(posedge PCLK)
  if (~PRESETn) DivCounter <= '0;
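The comments above give the intended divider relationship, SCLK = PCLK/(2*(SckDiv + 1)), while also noting that the current logic does not meet it for all SckDiv values. A sketch of the intended relationship (the 100 MHz PCLK is just an example):

```python
# Intended SPI clock scaling from the comment above:
#   SCLK = PCLK / (2 * (SckDiv + 1))
def sclk_hz(pclk_hz: float, sck_div: int) -> float:
    """SCLK frequency for a given PCLK and SckDiv setting."""
    return pclk_hz / (2 * (sck_div + 1))

print(sclk_hz(100e6, 0))  # 50000000.0 (SckDiv=0 should halve PCLK)
print(sclk_hz(100e6, 3))  # 12500000.0
```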
@@ -1,42 +1,39 @@
# Synthesis for RISC-V Microprocessor System-on-Chip Design

This subdirectory contains synthesis scripts for use with Synopsys
(snps) Design Compiler (DC). Synthesis commands are found in
`scripts/synth.tcl`.
## Example Usage

```bash
make synth DESIGN=wallypipelinedcore FREQ=500 CONFIG=rv32e
```
## Environment Variables

- `DESIGN`
  - Design provides the name of the output log. Default is synth.
- `FREQ`
  - Frequency in MHz. Default is 500.
- `CONFIG`
  - The Wally configuration file. The default is rv32e.
  - Examples: rv32e, rv64gc, rv32gc
- `TECH`
  - The target standard cell library. The default is sky130.
  - Options:
    - sky90: skywater 90nm TT 25C
    - sky130: skywater 130nm TT 25C
- `SAIFPOWER`
  - Controls if power analysis is driven by switching factor or RTL modelsim simulation. When enabled requires a saif file named power.saif. The default is 0.
  - Options:
    - 0: switching factor power analysis
    - 1: RTL simulation driven power analysis.

## Extra Tool (PPA)
To run ppa analysis that hones into target frequency, you can type
`python3 ppa/ppaSynth.py` from the synthDC directory. This runs a sweep
across all modules listed at the bottom of the `ppaSynth.py` file.
There are two options for running the sweep. The first runs all modules for
all techs around a given frequency (i.e., freqs). The second option
@@ -44,19 +41,21 @@ will run all designs for the specific module based on bestSynths.csv
values. Since the second option is listed second, it takes priority. If the
second set of values is commented out, it will run all widths.
**WARNING:** The first option may launch many runs that could exhaust all available licenses. Therefore, care must be taken to ensure that enough licenses are available before using this option.
### Run specific syntheses
```python
widths = [8, 16, 32, 64, 128]
modules = ['mul', 'adder', 'shifter', 'flop', 'comparator', 'binencoder', 'csa', 'mux2', 'mux4', 'mux8']
techs = ['sky90', 'sky130', 'tsmc28', 'tsmc28psyn']
freqs = [5000]
synthsToRun = allCombos(widths, modules, techs, freqs)
```
### Run a sweep based on best delay found in existing syntheses
```python
module = 'adder'
width = 32
tech = 'tsmc28psyn'
synthsToRun = freqSweep(module, width, tech)
```
@@ -540,7 +540,6 @@ module testbench;
always @(posedge clk) begin
  if (LoadMem) begin
    $readmemh(memfilename, dut.core.lsu.dtim.dtim.ram.ram.RAM);
$display("Read memfile %s", memfilename);
end
if (CopyRAM) begin
  LogXLEN = (1 + P.XLEN/32); // 2 for rv32 and 3 for rv64
@@ -28,6 +28,12 @@
// The PMP tests are sensitive to the exact addresses in this code, so unfortunately
// modifying anything breaks those tests.
// Provides simple firmware services through ecall. Place argument in a0 and issue ecall:
// 0: change to user mode
// 1: change to supervisor mode
// 3: change to machine mode
// 4: terminate program
.section .text.init
.global rvtest_entry_point
@@ -1,7 +1,7 @@
james.stine@okstate.edu 14 Jan 2022\
jcarlin@hmc.edu Sept 2024
# TestFloat for CVW
The CVW floating point unit is tested using testvectors from the Berkeley TestFloat suite, written originally by John Hauser.
@@ -9,7 +9,7 @@ TestFloat and SoftFloat can be found as submodules in the addins directory, and
- TestFloat: https://github.com/ucb-bar/berkeley-testfloat-3
- SoftFloat: https://github.com/ucb-bar/berkeley-softfloat-3
## Compiling SoftFloat/TestFloat and Generating Testvectors
The entire testvector generation process can be performed by running make in this directory.
@@ -17,7 +17,7 @@ The entire testvector generation process can be performed by running make in thi
make --jobs
```
This compiles SoftFloat for an x86_64 environment in its `build/Linux-x86_64-GCC` directory using the `SPECIALIZE_TYPE=RISCV` flag to get RISC-V behavior. TestFloat is then compiled in its `build/Linux-x86_64-GCC` directory using this SoftFloat library.

The Makefile in the vectors subdirectory of this directory is then called to generate testvectors for each rounding mode and operation. It also puts an underscore between each vector instead of a space to allow SystemVerilog `$readmemh` to read correctly.
@@ -25,7 +25,7 @@ Testvectors for the combined integer floating-point divider are also generated.
Although not needed, a `case.sh` script is included to change the case of the hex output. This is for those that do not like to see hexadecimal capitalized :P.
## Running TestFloat Vectors on Wally
TestFloat is run using the standard Wally simulation commands.
@@ -40,15 +40,15 @@ wsim <config> <test> --tb testbench_fp
```

The choices for `<test>` are as follows:
cvtint - test integer conversion unit (fcvtint)
cvtfp - test floating-point conversion unit (fcvtfp)
cmp - test comparison unit's LT, LE, EQ operations (fcmp)
add - test addition
fma - test fma
mul - test mult with fma
sub - test subtraction
div - test division
sqrt - test square root
Any config that includes floating point support can be used. Each test will test all its vectors for all precisions supported by the given config.
@@ -32,59 +32,59 @@
00000003
00000074 # spi_burst_send
00000063 # spi_burst_send
00000052 # spi_burst_send
00000041 # spi_burst_send
000000A1 # spi_burst_send
00000003
000000B2 # spi_burst_send
00000001
000000C3 # spi_burst_send
000000D4 # spi_burst_send
00000003
000000A4 # tx_data write test
00000001
000000B4 # tx_data write test
000000A5 # spi_burst_send
000000B5 # spi_burst_send
000000C5 # spi_burst_send
000000D5 # spi_burst_send
000000A7 # spi_burst_send
000000B7 # spi_burst_send
000000C7 # spi_burst_send
00000002
000000D7 # spi_burst_send
00000000
00000011 #basic read write
000000FF # first test sck_div
000000AE # min sck_div first spi_burst_send
000000AD