<div align="center">
  <h1 align="center">Getting Started with TVM on Xilinx Zynq UltraScale+ MPSoC Devices</h1>
</div>

# Introduction
This document describes how to set up a host environment for cross-compiling ML models with the TVM framework, how to install the TVM runtime on a target MPSoC device, and how to run an ML-based Python application on the target device.

# Prerequisites

+ Linux host machine
+ Docker

# Overview
The structure of this document is organized as follows:

- <a href="#part-1-setting-up-the-host">Part 1: Setting up the host</a>
- <a href="#part-2-cross-compile-the-ml-model-on-the-host">Part 2: Cross-compile the ML model on the host</a>
- <a href="#part-3-setting-up-the-target-device">Part 3: Setting up the target device</a>
- <a href="#part-4-running-on-the-target-hardware">Part 4: Running on the target hardware</a>

# Part 1: Setting up the host

- Download the TVM directory contents from the Vitis-AI repository on your host machine
  ```bash
  mkdir -p ~/vai_1.2_tvm/tvm
  cd ~/vai_1.2_tvm/tvm
  wget https://raw.githubusercontent.com/Xilinx/Vitis-AI/v1.2.1/tvm/Dockerfile.ci_vai_1x
  wget https://raw.githubusercontent.com/Xilinx/Vitis-AI/v1.2.1/tvm/bash.sh
  wget https://raw.githubusercontent.com/Xilinx/Vitis-AI/v1.2.1/tvm/build.sh
  wget https://raw.githubusercontent.com/Xilinx/Vitis-AI/v1.2.1/tvm/user_setup.sh
  wget https://raw.githubusercontent.com/Xilinx/Vitis-AI/v1.2.1/tvm/vai_patch.diff
  ```

- Build and install the TVM docker image on the host machine
  ```bash
  cp Dockerfile.ci_vai_1x Dockerfile.ci_vai_1.2
  bash ./build.sh ci_vai_1.2 bash
  ```

- Test the TVM docker image
  ```bash
  bash ./bash.sh tvm.ci_vai_1.2
  ```

- Once the docker image is loaded, activate the conda TensorFlow environment
  ```bash
  conda activate vitis-ai-tensorflow
  ```

- Verify the installation of TVM and the Python XIR packages
  ```bash
  python3 -c "import tvm; import pyxir"
  ```
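
  For a slightly more informative check, the one-liner below prints the installed TVM version and degrades gracefully if the import fails (a sketch, assuming you are inside the activated conda environment):

  ```shell
  # Print the installed TVM version; fall back to a message if the import
  # fails (run inside the vitis-ai-tensorflow conda environment)
  python3 -c "import tvm; print('tvm', tvm.__version__)" 2>/dev/null \
    || echo "tvm not importable in this environment"
  ```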

# Part 2: Cross-compile the ML model on the host
- From inside the ``tvm.ci_vai_1.2`` docker container execute the following commands to change the target device from Alveo to Zynq UltraScale+ MPSoC
  ```bash
  cd tvm/tutorials/accelerators/compile
  sed -i "s/target         = 'DPUCADX8G'/target = 'DPUCZDX8G-zcu104'/" mxnet_resnet_18.py
  ```

- Execute the following commands to compile the model
  ```bash
  python3 mxnet_resnet_18.py
  cp -rp mxnet_resnet_18 /workspace/.
  ```

- Copy the Python application script from the docker container
  ```bash
  cp -p /opt/tvm-vai/tvm/tutorials/accelerators/run/mxnet_resnet_18.py /workspace/mxnet_resnet_18/.
  ```

- Exit the docker container
  ```bash
  exit
  ```  

# Part 3: Setting up the target device

## Set up the target with PYNQ image
  + Download the PYNQ v2.5 board image from http://www.pynq.io/board.html
  + Burn the .img file to a blank SD card.  The example command below uses ``dd``, but feel free to use your favorite disk imaging tool. 
  
    **Note:** ``/dev/sde`` is the enumeration for the SD card on my Linux host - it will most likely be different on your host machine.  Modify the example command below accordingly.
    
    ```bash
    sudo dd if=zcu104_v2.5.img of=/dev/sde bs=4k status=progress
    ```
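
    Because ``dd`` silently overwrites whatever ``of=`` points at, it is worth confirming the device node with ``lsblk`` first and rehearsing the invocation on scratch files. A minimal sketch using throwaway paths (no real device involved):

    ```shell
    # Rehearse dd on scratch files: if= is the source, of= the destination,
    # bs= the block size; the copy is byte-identical to the source
    printf 'image-bytes' > /tmp/fake.img
    dd if=/tmp/fake.img of=/tmp/fake_copy.img bs=4k status=none
    cat /tmp/fake_copy.img
    ```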

## Upgrade the PYNQ image to use the DPU-PYNQ overlay
- Insert the SD card into the target device and boot

- Execute the following commands on the **target**:
  ```bash
  git clone --recursive --shallow-submodules https://github.com/Xilinx/DPU-PYNQ.git
  cd DPU-PYNQ/upgrade
  sudo make
  sudo pip3 install pynq-dpu
  cd $PYNQ_JUPYTER_NOTEBOOKS
  sudo pynq get-notebooks pynq-dpu -p .
  ```

  **Note:** The password for the ``sudo`` commands is ``xilinx``

## Install TVM on the target

- Create a TVM directory on the target device by executing the following commands in a terminal (serial console or SSH) on the target

   ```bash
   mkdir -p ~/vai_1.2_tvm/tvm
   cd ~/vai_1.2_tvm/tvm
   ```

- Download the TVM target build files from the Vitis-AI repository by executing the following commands in a terminal on the target 
   ```bash
   wget https://raw.githubusercontent.com/Xilinx/Vitis-AI/v1.2.1/tvm/vai_patch.diff
   wget https://raw.githubusercontent.com/Xilinx/Vitis-AI/v1.2.1/tvm/zynq_setup.sh
   ```

- Modify the build script to fix two issues (the install step must run from the TVM ``python`` directory, and the malformed ``--install`` flag should be the ``install`` command) by executing the following commands in a terminal on the target
   ```bash
   sed -i '39icd \${TVM_HOME}\/python' zynq_setup.sh
   sed -i 's/python3 "\${TVM_HOME}"\/python\/setup.py --install --user/python3 "\${TVM_HOME}"\/python\/setup.py install --user/' zynq_setup.sh
   ```
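
   If the ``sed`` syntax above is unfamiliar: an address followed by ``i`` inserts a new line before that line number, and ``s/old/new/`` substitutes in place. A scratch-file illustration of both forms (throwaway file, GNU sed assumed):

   ```shell
   # 'Ni' inserts text before line N; 's/old/new/' substitutes in place
   printf 'alpha\nbeta\n' > /tmp/sed_demo.txt
   sed -i '2icd somewhere' /tmp/sed_demo.txt   # insert before line 2
   sed -i 's/beta/gamma/' /tmp/sed_demo.txt    # beta -> gamma
   cat /tmp/sed_demo.txt
   ```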

- Run the build script to compile the TVM runtime and install on the target device by executing the following command in a terminal on the target 
   ```bash
   sudo bash zynq_setup.sh
   ```

# Part 4: Running on the target hardware

- Copy the compiled TVM model and Python application script from the **host** to the target device by executing the following command on the **host**
  ```bash
  scp -rp ~/vai_1.2_tvm/tvm/mxnet_resnet_18 xilinx@$TARGET_IP:~/.
  ```

  **Note 1:** Replace the ``$TARGET_IP`` string in the command above with the IP address of your target device

  **Note 2:** The password for the xilinx user account on the target hardware is ``xilinx``

- Load the DPU-PYNQ bitstream by executing the following command in a terminal on the **target**
  ```bash
  sudo python3 -c 'from pynq_dpu import DpuOverlay; overlay = DpuOverlay("dpu.bit")'
  ```

- Create a symbolic link in the ``/usr/local/lib`` directory to the n2cube library by executing the following command in a terminal on the **target**
  ```bash
  sudo ln -sf /usr/lib/libn2cube.so /usr/local/lib
  ```
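
  The ``-s`` flag makes the link symbolic and ``-f`` replaces any existing file at the destination, so the command is safe to re-run. A scratch illustration (throwaway paths, not the real library):

  ```shell
  # ln -sf is idempotent: -s makes a symlink, -f replaces an existing one
  touch /tmp/libdemo.so
  ln -sf /tmp/libdemo.so /tmp/libdemo_link.so
  ln -sf /tmp/libdemo.so /tmp/libdemo_link.so   # re-running succeeds
  readlink /tmp/libdemo_link.so
  ```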

- Run the test application by executing the following commands in a terminal on the **target**
  ```bash
  cd ~/mxnet_resnet_18
  sudo python3 mxnet_resnet_18.py -f ./ -d libdpu
  ```

- The first time the application runs, it downloads some test data. The output from a successful run should look like:
  ```txt
  xilinx@pynq:~/mxnet_resnet_18$ sudo python3 mxnet_resnet_18.py -f ./ -d libdpu
  Downloading from url https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true to /root/.tvm_test_data/data/cat.png
  ...100%, 0.12 MB, 135 KB/s, 0 seconds passed
  Downloading from url https://gist.githubusercontent.com/zhreshold/4d0b62f3d01426887599d4f7ede23ee5/raw/596b27d23537e5a1b5751d2b0481ef172f58b539/imagenet1000_clsid_to_human.txt to /root/.tvm_test_data/data/imagenet1000_clsid_to_human.txt
  ...100%, 0.03 MB, 117 KB/s, 0 seconds passed
  [224, 224]
  VAI iteration: 1/2, run time: 0.2587130069732666
  -----------------------
  TVM-VAI prediction top-5:
  282 tiger cat
  285 Egyptian cat
  281 tabby, tabby cat
  287 lynx, catamount
  356 weasel
  VAI iteration: 2/2, run time: 0.015643835067749023
  -----------------------
  TVM-VAI prediction top-5:
  282 tiger cat
  285 Egyptian cat
  281 tabby, tabby cat
  287 lynx, catamount
  356 weasel
  ```

# References
Information in this document was drawn from several sources. For more details, see the links below.

- <a href="http://www.pynq.io/home.html">PYNQ: PYTHON PRODUCTIVITY </a>
- <a href="https://github.com/Xilinx/DPU-PYNQ">DPU-PYNQ</a>
- <a href="https://github.com/Xilinx/Vitis-AI/blob/master/tvm/README.md">TVM host setup</a>
- <a href="https://github.com/Xilinx/Vitis-AI/blob/master/tvm/docs/compiling_a_model.md">Compiling a model with the TVM framework</a>
- <a href="https://github.com/Xilinx/Vitis-AI/blob/master/tvm/docs/running_on_zynq.md">TVM target setup</a>