Using Docker with Riviera-PRO

Docker is an open source platform that allows users to package and run software in isolated user spaces called containers.

Using the analogy of a computer program, the key components of Docker can be presented as follows:

  • Dockerfile

    Equivalent of source code. Contains a set of instructions that are used to build the docker image.

  • Docker Image

An immutable template for creating docker containers. It may be created from a Dockerfile with the docker build command, or downloaded from a docker registry such as Docker Hub. In the software analogy, it corresponds to a compiled executable file.

  • Docker Container

Created from a docker image, it provides the isolated environment in which the application runs. Similarly to software, where multiple running processes can be created from a single executable, docker run can be called several times to create multiple containers from a single image.

Containers have their own file system, processes, and network capabilities. They are created from templates known as images. Docker containers may look similar to virtual machines (VMs), but there are key differences. A VM, as the name implies, creates virtual hardware such as a virtual CPU, virtual memory, a virtual disk, a virtual network controller, and so on; isolation happens at the hardware level, and each VM contains its own complete operating system. The Docker engine, in contrast, isolates containers at the process level and provides the capability to run an application on any OS. When the same program is run in a docker container and in a VM, the container consumes far fewer resources. Since docker containers do not include their own operating system, they also start much faster than VMs.

Benefits of Docker

The Docker cache and image layers help minimize disk usage. For example, if an image occupies 5 GB and 10 containers are started from it, the disk usage will still be about 5 GB. Note that if containers create files in their internal file systems at runtime, disk usage will of course increase. Another benefit is a large central repository called Docker Hub. What's more, any of its thousands of images can be used as a base image to create a new one with additional content.
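
As a quick check of how much disk space images, containers, and volumes actually occupy on a given host, the docker system df command can be used:

# Summarize disk usage of images, containers, and local volumes
docker system df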

Docker tags make it easy to manage versions and variants of images when pulling them from a repository or using them in a CI pipeline, for example:

ubuntu:latest
ubuntu:22.04
ubuntu:23.10
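
A specific variant can then be requested explicitly by its tag, for example:

# Pull a specific Ubuntu release instead of relying on the latest tag
docker pull ubuntu:22.04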

Besides the Docker Hub central repository, Docker provides tools to set up a free on-premise private registry with similar functionality.
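
A minimal sketch of such a setup, assuming the official registry image and its default port 5000:

# Start a local private registry (official registry image, default port 5000)
docker run -d -p 5000:5000 --name registry registry:2

# Retag a local image so it points at the private registry, then push it
docker tag ubuntu:22.04 localhost:5000/ubuntu:22.04
docker push localhost:5000/ubuntu:22.04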

Manually managing and maintaining docker containers becomes difficult once their number exceeds a few dozen. This is where another tool from the container ecosystem, Kubernetes (K8s), comes in handy. It is a production-grade open source container orchestration system. Its capabilities include the following (see the kubectl sketch after this list):

  • Monitoring container health

  • Autoscaling based on server load or traffic

  • Load balancing

  • Deploying images with rollback possibility

  • Managing resources at the container level
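
As an illustration, the sketch below shows a few kubectl commands touching on these capabilities. It assumes a working cluster and a hypothetical image named my-registry/app:1.0:

# Create a deployment from an image and scale it to three replicas
kubectl create deployment app --image=my-registry/app:1.0
kubectl scale deployment app --replicas=3

# Deploy a new image version, then roll it back if something goes wrong
kubectl set image deployment/app app=my-registry/app:1.1
kubectl rollout undo deployment/app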

Docker containers are widely utilized in CI/CD platforms like GitLab, Jenkins or CircleCI. Among the most popular use cases are:

  • Build Environment

    Dockerfiles allow specifying the exact operating system, dependencies and tool versions required to build an application.

  • Test Environment

Containers provide stable and consistent results across different machines. They prevent interference between tests and make it easy to parallelize test execution.

  • Development Environment

Containers allow standardizing and unifying the environment used by multiple developers. They help shorten the learning curve for new employees, as all required dependencies and tools are assembled in a single package. Containers also make it easier to work with different versions of dependencies, since they are isolated from the host and from each other.

  • Deploy

There is a special case in which a docker container builds a new docker image. It is not very useful in chip design projects, but worth mentioning: deploying an application as docker image(s) allows the user to easily take advantage of cloud features such as autoscaling or infrastructure as code.

Thanks to its many benefits, Docker has been adopted by many cloud services, among them:

  • Amazon ECS, Amazon EKS, AWS Fargate

AWS services for running docker containers; used by GitLab to provide autoscaling of runners.

  • Amazon ECR

A private container repository in the cloud. It can be used to store images consumed by CI/CD pipelines.

  • Azure Container Instances

A fully managed container service in the Azure cloud.

  • Google Kubernetes Engine (GKE)

    Cluster management and container orchestration in the Google Cloud.

Another popular use case is experimenting with new technologies, since applications in docker images are delivered with all their dependencies. A perfect example is our GitLab demo, which demonstrates a fully functional CI/CD workflow for EDA projects using just two docker containers (a GitLab instance and a GitLab runner).

Extending the Riviera-PRO image with additional software

Riviera-PRO Dockerfiles and docker images for selected Linux distributions can be requested from Aldec's support team.

The distributed images usually do not contain any software beyond the tool they were built for; this keeps their size down. Using such images in a CI/CD platform almost always requires adding software to them, often very common tools like perl or make. Extending the Riviera-PRO image is demonstrated below using the example of adding the RISC-V GNU Compiler Toolchain.

Docker builds an image by reading and executing instructions from a file named Dockerfile.

The following steps create a Riviera-PRO image containing a simple modification, such as defining an environment variable for the license server:

  1. Create a file named Dockerfile.

  2. Append the following code to it:

    FROM aldec/riviera-pro:latest
    ENV ALDEC_LICENSE_FILE=27000@127.0.0.1
    
  3. Execute the following command inside the directory with Dockerfile:

    docker build -t aldec/riviera-pro:my_modified_image .
    

The FROM instruction starts a new build stage and specifies the base docker image. The ENV instruction, in turn, sets a permanent environment variable in the image; it will persist even after restarting the container or changing the user. The docker build command starts the build process, and the -t switch sets a tag for the newly created image. The dot (.) at the end specifies the path to the context. The context is a set of files that can be referenced during the build process; for example, they can be copied into the image by using the COPY instruction.
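
For example, a file located next to the Dockerfile (i.e., in the context) can be baked into the image with COPY; the file name below is just a hypothetical example:

FROM aldec/riviera-pro:latest
# Copy a file from the build context into the image
# (setup.tcl is a hypothetical example file)
COPY setup.tcl /opt/setup.tcl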

A docker image is composed of immutable layers. Every instruction creates a new layer containing the changes relative to the previous layer. This implies that if two Dockerfiles start with exactly the same instructions, the layers produced by those instructions will be created and stored on the file system only once. However, this approach requires care when writing Dockerfiles, as it may result in an unintentional increase in occupied storage space. Consider the example of installing the make package in Ubuntu. The typical console commands would be:

sudo apt update
sudo apt install -y make

After translating them into Dockerfile instructions, we get:

RUN apt update
RUN apt install -y make

As all Dockerfile instructions are executed as root, we do not need sudo. The first command downloads the package lists from the repositories and stores them inside the image. The second command downloads the make package and installs it. As a result, we end up with two additional layers. The first one contains the Ubuntu repository data, which is completely useless with regard to the installed make. In a non-container environment, a natural solution would be to execute one more command and remove all the unnecessary data. In a Dockerfile it would look like this:

RUN apt update
RUN apt install -y make
RUN rm -rf /var/lib/apt/lists/*

However, this approach is wrong. As mentioned before, the layers in a docker image are immutable: once created, a layer cannot be modified by subsequent instructions. In the example above, the files with repository data will no longer be visible in the image, but they will still occupy disk space. To resolve this problem, creating and removing the apt data should happen in a single instruction. Consider the following snippet:

RUN apt update \
    && apt install -y make \
    && rm -rf /var/lib/apt/lists/*

It results in the creation of a single layer containing only the installed make package, without the repository data.
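
The effect can be inspected with the docker history command, which lists every layer of an image together with the instruction that created it and its size:

# Show the layers of the image and the size contributed by each one
docker history aldec/riviera-pro:my_modified_image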

In some instances, the tools added to images require compilation from sources, which may involve downloading a lot of additional dependencies that are not required by the tools themselves. An example of such a tool is the RISC-V GNU compiler toolchain. Docker provides a mechanism named multi-stage build for these cases. In short, it involves the use of temporary images from which a tool is copied to the final image. Let's start the Dockerfile by using Ubuntu as the base image:

FROM ubuntu:23.04 as toolchain-build

The name after the as keyword is a label; it allows the image to be referenced in subsequent Dockerfile instructions.

Next, set the environment variable with the desired toolchain location:

ENV RISCV=/opt/riscv

After that, install all the build dependencies:

RUN export DEBIAN_FRONTEND=noninteractive \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
        autoconf \
        automake \
        autotools-dev \
        curl \
        python3 \
        python3-pip \
        libmpc-dev \
        libmpfr-dev \
        libgmp-dev \
        gawk \
        build-essential \
        bison \
        flex \
        texinfo \
        gperf \
        libtool \
        patchutils \
        bc \
        zlib1g-dev \
        libexpat-dev \
        ninja-build \
        git \
        cmake \
        libglib2.0-dev \
    && rm -rf /var/lib/apt/lists/*

Then, the RISC-V GNU toolchain can be cloned and built (note that this process may take several hours):

RUN git clone https://github.com/riscv/riscv-gnu-toolchain
RUN cd riscv-gnu-toolchain \
    && ./configure --prefix=$RISCV \
    && make
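
On a multi-core host, the build can usually be sped up by running make in parallel. A sketch of the same instruction with parallel jobs (whether a parallel build succeeds may depend on the toolchain version, so treat this as an optional optimization):

# Same build, but with one make job per available CPU core
RUN cd riscv-gnu-toolchain \
    && ./configure --prefix=$RISCV \
    && make -j"$(nproc)"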

Now we can move to the specification of the final image. It starts with the use of the Riviera-PRO image as a base:

FROM aldec/riviera-pro:latest

Set the same toolchain location as in the build stage and add the toolchain executables to PATH:

ENV RISCV=/opt/riscv
ENV PATH=$RISCV/bin:$PATH

Any required project dependencies must be installed again in the second build stage:

RUN export DEBIAN_FRONTEND=noninteractive \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
        make \
        perl \
        python3 \
    && rm -rf /var/lib/apt/lists/*

Then just copy the toolchain from the build image:

COPY --from=toolchain-build $RISCV $RISCV

That's it. The final image will contain the Riviera-PRO installation and the RISC-V toolchain itself, but none of the toolchain's source code, compilers, or build dependencies.

The image build process can be started by executing the command below:

docker build -t aldec/riviera-pro:demo .

Note that compiling the toolchain from source code may take several hours. Using the same tag as in the image from the GitLab demo project allows the user to re-run the jobs and see how the toolchain is automatically used by the makefile.
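
A quick way to verify the resulting image is to start a temporary container and query the toolchain. The executable name below, riscv64-unknown-elf-gcc, is what a default toolchain configuration typically produces, so treat it as an assumption:

# Run a throwaway container from the new image and print the cross-compiler version
docker run --rm aldec/riviera-pro:demo riscv64-unknown-elf-gcc --version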

Creating and running a docker container

Besides using containers in CI/CD platforms or in Kubernetes clusters, it is possible to run them manually with the docker run command. Typical arguments for running a container from the image created in the previous section may look as follows:

docker run -i -t \
    -e ALDEC_LICENSE_FILE=27000@127.0.0.1 \
    -w $PWD \
    -v $PWD:$PWD \
    aldec/riviera-pro:latest

where:

-i / --interactive

Connects the STDIN of the command to the STDIN of the container.

-t / --tty

Allocates a pseudo-TTY. Together with the -i switch, it allows the user to interact with the Linux terminal properly.

-e / --env

Sets an environment variable; may occur multiple times. It may be used, for example, to set the license server address.

-v / --volume

Mounts a volume, i.e., creates a link-like mapping between a location on the host file system and the container. May be specified multiple times. The path before the colon is on the host, and the value after the colon specifies the path in the container. In the above example, both paths are identical and point to the directory where the docker run command was executed.

-w / --workdir

Sets the working directory inside the container.

aldec/riviera-pro:latest

A positional argument that specifies the name and tag of an image from which the container will be created.

The above example uses the Riviera-PRO image, mounts the current working directory (CWD) in the docker container, sets it as the working directory inside the container, and runs the default image command (bash in this case). The container starts in interactive mode and the user can execute commands there.
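
A sketch of a possible interactive session inside such a container (the commands are only examples):

# Inside the container: list the mounted project files
ls
# Start the Riviera-PRO command-line executable used later in this document
vsimsa
# Leave the container when done
exit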

The docker run command allows the user to override the default command and execute any program from the image. This is useful when the container is used in a makefile or shell script. Consider a situation where we want to run the run_tests.do macro from the host file system. The following command can be used to achieve this:

docker run \
    -it \
    --rm \
    -u $(id -u):$(id -g) \
    -w $PWD \
    -v $PWD:$PWD \
    aldec/riviera-pro:latest \
    vsimsa -do run_tests.do

There are additional arguments in comparison with the previous example:

-u $(id -u):$(id -g)

By default, docker containers run as the root user. This may be problematic, as all the files created by containers on the mounted host volumes (such as libraries, logs, coverage reports, etc.) will be owned by root. This argument prevents the issue, since the container UID/GID are overridden with the values of the current host user.

--rm

Automatically removes the container on exit. Useful when the docker run command is used in a shell script or makefile, as usually there is no need to keep the containers after the script completion.

vsimsa -do run_tests.do

Overrides the default container command. In this example, it is the execution of a *.do script in the Riviera-PRO CLI.
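
A minimal sketch of how this invocation could be wrapped for reuse in a shell script (the script and macro names are just examples):

#!/bin/sh
# run_tests.sh - execute the regression macro inside a Riviera-PRO container
docker run --rm \
    -u "$(id -u):$(id -g)" \
    -w "$PWD" \
    -v "$PWD:$PWD" \
    aldec/riviera-pro:latest \
    vsimsa -do run_tests.do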

Although running a GUI application from a docker container is not a recommended use case, it is possible with the following command:

docker run --net=host \
    -e DISPLAY \
    -e ALDEC_LICENSE_FILE=27000@10.0.0.2 \
    -v "$HOME/.Xauthority:/root/.Xauthority:rw" \
    -w $PWD \
    -v $PWD:$PWD \
    aldec/riviera-pro:latest \
    riviera

where:

--net=host

Connects the container's network to the host's network.

-e DISPLAY

Sets the DISPLAY environment variable in the container. If no value is assigned, the variable takes the same value as on the host.

-v "$HOME/.Xauthority:/root/.Xauthority:rw"

Mounts the credential file used by xauth for authentication of the X session. Note that using an account other than root requires a different mount point on the container side, as shown in the sketch below.
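
For instance, when the container is started with the -u switch described earlier, the credentials could be mounted into that user's home directory and pointed to explicitly; the in-container path /home/user is an assumption here:

# GUI run as a non-root user: mount .Xauthority into an assumed home directory
docker run --net=host \
    -e DISPLAY \
    -e XAUTHORITY=/home/user/.Xauthority \
    -u "$(id -u):$(id -g)" \
    -v "$HOME/.Xauthority:/home/user/.Xauthority:rw" \
    -w "$PWD" \
    -v "$PWD:$PWD" \
    aldec/riviera-pro:latest \
    riviera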

The following commands and keyboard shortcuts may come in handy when working with Docker containers:

  • docker ps -a
    

    Shows both running and stopped containers.

  • docker rm <name or hash>
    

    Removes a specified container.

  • docker rmi <tag or hash>
    

    Removes an image. The command reports an error if there is at least one container that uses the specified image.

  • docker attach <name or hash>
    

Attaches the terminal's standard input, output, and error streams to a running container.

  • Ctrl+P and then Ctrl+Q

    The default key sequence to detach from a container interactive session; after hitting this sequence, the container will still be running in the background.
