7 min read

Dual Mine Ethereum & Chia with Proxmox 7.0-11

Introduction

Let's say you have finished plotting and you either 1) have Raspberry Pis with a load of external hard drives connected through huge USB hubs, or 2) have full PC builds acting as harvesters, doing nothing but harvesting Chia. I wondered whether it would be possible to add a graphics card to these harvester machines and mine some Ethereum at the same time. Of course it is possible, but can it be done with little to no negative effect on the Chia harvester, keeping partials submitted at 100%? Doing things like plotting on the same machine will hurt the harvester because you will get invalid partials. One of the best ways to avoid invalid partials is to divide the machine up into its own restricted virtual machines: a Windows 10 machine mining Ethereum and a Linux machine farming Chia.

I will be outlining how I got my system working using Proxmox 7. As usual the specifications are low because that is all I have to work with, plus I like seeing what minimal specifications are capable of. My setup is built around a 3rd generation Intel CPU, specifically the i7 3770 (not the K variant, which lacks the VT-d support needed for passthrough), to allow us to pass through the AMD RX 580 graphics card. Everything else is as basic as you can really go, with 22GB of total RAM for the system.

Setup

PC Specifications:

  • CPU: Intel i7 3770.
  • Motherboard: Gigabyte Sniper G3.
  • RAM: 22GB total of standard 1066MHz sticks, because they are mixed sizes.
  • GFX Card: AMD RX 580 ASUS STRIX OC'd with RedBios mods & Ubermix 3.1 Straps.
  • CPU Cooling: Corsair H100 Cooler.
  • Case: Corsair.
  • PSU: Corsair CX 450 watt.

As you can see my specs are low, but this is easily enough computing power to run a Chia harvester and an Ethereum miner at the same time without any real problems.


About Proxmox

Proxmox is a type 1 hypervisor: it installs as an operating system in its own right and lets you create multiple isolated virtual machines. "Type 1" means it runs directly on the bare metal rather than on top of another operating system, as VirtualBox does.

https://www.proxmox.com/en/proxmox-ve/get-started

The main point I am trying to make with this post: when searching for information on GPU passthrough I found a few decent results, but most are outdated and include steps that no longer need to be performed, which means the installation is now much simpler. GPU passthrough used to be a tricky one, but things have come a long way in the last couple of years. This is also a reference for myself to look back on, since I forget everything.

Okay, enough talking, let's begin.

The guide assumes you have Proxmox running and have installed your graphics card. You do not need to install any drivers on Proxmox for your GPU.

Enabling IOMMU

On your Proxmox machine we need to enable IOMMU to allow us to pass through our GPU. We enable this by editing the /etc/default/grub file. Open a terminal and type:

nano /etc/default/grub

Edit the line that begins with GRUB_CMDLINE_LINUX_DEFAULT= and add intel_iommu=on (or amd_iommu=on for AMD CPUs) after quiet. Example below.

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox VE"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=vesafb:off,efifb:off"
GRUB_CMDLINE_LINUX=""

The video=vesafb:off,efifb:off part disables the framebuffer. Some processors require it, some don't, so try with and without it. Xeon processors, for example, have horrible GPU passthrough capabilities (if any), so extra steps may be necessary depending on your hardware. Some people suggest adding even more parameters; here's an example, but personally I have not needed any beyond what is above.

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"

Now update grub:

update-grub

then reboot

reboot

Now we can check whether IOMMU is enabled by issuing the following command:

dmesg | grep -e DMAR -e IOMMU

The result should look something like this

root@pve:~# dmesg | grep -e DMAR -e IOMMU
[    0.040380] ACPI: DMAR 0x00000000BE1D5080 0000B8 (v01 INTEL  SNB      00000001 INTL 00000001)
[    0.040394] ACPI: Reserving DMAR table memory at [mem 0xbe1d5080-0xbe1d5137]
[    0.083456] DMAR: IOMMU enabled
[    0.164597] DMAR: Host address width 36
[    0.164598] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.164602] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap c0000020e60262 ecap f0101a
[    0.164604] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.164606] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap c9008020660262 ecap f0105a
[    0.164608] DMAR: RMRR base: 0x000000bded3000 end: 0x000000bdefdfff
[    0.164609] DMAR: RMRR base: 0x000000bf800000 end: 0x000000cf9fffff
[    0.164611] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.164612] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.164613] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.164991] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.953003] DMAR: No ATSR found
[    0.953004] DMAR: dmar0: Using Queued invalidation
[    0.953010] DMAR: dmar1: Using Queued invalidation
[    1.048547] DMAR: Intel(R) Virtualization Technology for Directed I/O
[    3.680221] AMD-Vi: AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
[    3.680224] AMD-Vi: AMD IOMMUv2 functionality not available on this system
[    3.699773] i915 0000:00:02.0: [drm] DMAR active, disabling use of stolen memory
root@pve:~# 
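With IOMMU confirmed, it's also worth checking how devices are grouped, since a GPU can only be passed through cleanly when its IOMMU group doesn't contain unrelated devices. A minimal sketch (the IOMMU_ROOT variable is my addition, so the path can be overridden for testing):

```shell
# List each PCI device with the IOMMU group it belongs to. For trouble-free
# passthrough, the GPU and its HDMI audio function should ideally share a
# group with nothing else.
IOMMU_ROOT="${IOMMU_ROOT:-/sys/kernel/iommu_groups}"
for dev in "$IOMMU_ROOT"/*/devices/*; do
    [ -e "$dev" ] || continue   # no groups found (IOMMU off, or wrong path)
    group="$(basename "$(dirname "$(dirname "$dev")")")"
    echo "IOMMU group $group: $(basename "$dev")"
done
```

On my machine the RX 580's two functions (09:00.0 and 09:00.1) show up together in their own group, which is the ideal case.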

Set up our Chia harvester VM

Create a new virtual machine. You can use Windows if you like, but for this example I will be using Ubuntu 20.04.

Leave all the settings at their defaults during VM creation, but don't boot the VM just yet; we need to pass through our hard drives first.

Typing the command below into the Proxmox shell lists the unique device IDs of our hard drives:

lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'

This will show your hard drive device IDs and will look like the output below:

NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)
sda            8:0    0 465.8G  0 disk   /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_S2RBNCBJ117641V /dev/disk/by-id/wwn-0x5002538d704def7f
sdb            8:16   0   7.3T  0 disk   /dev/disk/by-id/wwn-0x5000c500db5a7de5 /dev/disk/by-id/ata-ST8000DM004-2CX188_ZSC00DXP
sdc            8:32   0   7.3T  0 disk   /dev/disk/by-id/ata-ST8000DM004-2CX188_ZR10SNCD /dev/disk/by-id/wwn-0x5000c500db13bd02
sdd            8:48   0   2.7T  0 disk   /dev/disk/by-id/wwn-0x50014ee20acf42e2 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4NNPTEPX1
sde            8:64   0   3.6T  0 disk   /dev/disk/by-id/wwn-0x50014ee20d44bb5e /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4EDRV9V
sdf            8:80   0   1.8T  0 disk   /dev/disk/by-id/ata-ST2000DL001-9VT156_5YD065TN /dev/disk/by-id/wwn-0x5000c5002a6f1ec4
sdg            8:96   1   5.5T  0 disk   /dev/disk/by-id/wwn-0x5000c500baa79278 /dev/disk/by-id/ata-ST6000DM003-2CY186_WF208GM8
sdh            8:112  1 931.5G  0 disk   /dev/disk/by-id/ata-Samsung_SSD_850_EVO_1TB_S21DNXAG629517E /dev/disk/by-id/wwn-0x500253884004fb8d
sdi            8:128  1   3.6T  0 disk   /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6YEXHLK /dev/disk/by-id/wwn-0x50014ee20bfa6c35
sdj            8:144  1   3.6T  0 disk   /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4UJZ60V /dev/disk/by-id/wwn-0x50014ee20bfb2e13
sdk            8:160  0   2.7T  0 disk   /dev/disk/by-id/usb-Seagate_Backup+_Hub_BK_01CB1182B2LK-0:0
root@pve:~# 

We need to add our hard drives to the VM, but this can only be done at the command line, not via the GUI. We will use the device ID to attach each drive to the VM.

To attach a drive, type the following into the Proxmox shell, replacing the disk ID with one of your own disks that contains your plots (100 is the VM's ID):

qm set 100 -scsi5 /dev/disk/by-id/ata-ST8000DM004-2CX188_ZSC00DXP

Do this for each hard drive that you want to attach to your VM. This is what my VM hardware tab looks like
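With a lot of drives this gets tedious, so the repetition can be scripted. A small sketch that prints the qm set command for each disk (the disk IDs and VM ID below are examples from my output; remove the echo to actually attach them):

```shell
# Example disk IDs - replace with the by-id names from your own lsblk output.
DISKS="ata-ST8000DM004-2CX188_ZSC00DXP ata-ST8000DM004-2CX188_ZR10SNCD"

VMID=100   # your harvester VM's ID
BUS=5      # first free SCSI slot (scsi5, scsi6, ...)

for disk in $DISKS; do
    # Dry run: print each command. Remove the echo to attach for real.
    echo qm set "$VMID" -scsi"$BUS" "/dev/disk/by-id/$disk"
    BUS=$((BUS + 1))
done
```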

After adding all your hard drives to the VM you can boot it up, complete the install, then install the Chia software from HERE.

At this point we should have Chia set up and running in a minimal virtual machine. For more Chia information check my old post here. Now we can go back to configuring the system for GPU passthrough.

Enabling Proxmox Modules

We need to enable some extra kernel modules in Proxmox. Open the Proxmox shell and type:

nano /etc/modules

Then add the following:

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

IOMMU Interrupt Remapping

This can be a bit overly complicated, so just bear with me. First, check whether your system has interrupt remapping enabled. In the Proxmox terminal, type:

dmesg | grep 'remapping'

If you see one of the following lines, remapping is enabled

"AMD-Vi: Interrupt remapping enabled"
"DMAR-IR: Enabled IRQ remapping in x2apic mode"

If you don't see either of the above, you can still allow passthrough by enabling "unsafe interrupts":

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf

Blacklisting The Drivers

We need to blacklist the GPU drivers to make sure Proxmox itself does not try to use the graphics card. Copy and paste the commands below to blacklist both the NVIDIA and AMD drivers.

echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf 
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf 
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf 
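On my system the module and blacklist changes only apply at boot after rebuilding the initramfs with update-initramfs -u -k all and rebooting. A quick sanity check (a sketch, grepping the same file we just appended to) confirms all three entries landed:

```shell
# Confirm all three drivers are present in the blacklist file. After this,
# run 'update-initramfs -u -k all' and reboot so the changes take effect.
BLACKLIST=/etc/modprobe.d/blacklist.conf
for drv in radeon nouveau nvidia; do
    if grep -q "^blacklist $drv" "$BLACKLIST" 2>/dev/null; then
        echo "$drv: blacklisted"
    else
        echo "$drv: MISSING from $BLACKLIST"
    fi
done
```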

Finding our GPU

We need to find our GPU's PCI address so we can add it to our VM. To be clear, you can simply add the GPU using the Proxmox GUI, but I will explain both methods.

In your Proxmox terminal type:

lspci

This will give you the device addresses. Check out the example below, trimmed to the relevant devices:

root@pve:~# lspci
08:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI Express Gen 3 (8.0 GT/s) Switch (rev ba)
09:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev e7)
09:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590]
root@pve:~# 

As you can see, the GPU is my AMD RX 580, at addresses 09:00.0 and 09:00.1: one is the GPU itself and the other is its HDMI audio output. You do not need to add both, only the GPU.

Make a note of your device address, 09:00.0 in my case.
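Optionally, lspci can also report the numeric vendor:device IDs for the card, which come in handy if you ever need to bind it to vfio-pci by ID instead of by address. A sketch (09:00 is my card's address; substitute your own):

```shell
# Print numeric vendor:device IDs for every function at PCI address 09:00.
# For an RX 580 this typically shows 1002:67df (GPU) and 1002:aaf0 (audio).
if command -v lspci >/dev/null 2>&1; then
    lspci -n -s 09:00
else
    echo "lspci not found - run this on the Proxmox host"
fi
```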

Building our Mining Virtual Machine

I will be using a Windows 10 VM to mine Ethereum, purely because AMD graphics drivers on Linux are a nightmare.

We will need to change some settings while creating the VM this time. Create a new VM with the following settings.

Work through each tab of the creation wizard: General, Operating System, System, Hard Disk, CPU, Memory and Network. [Screenshots of my settings for each tab appeared here.]

Now that our Windows VM is built, we need to attach the GPU before booting it.

Adding our GPU

You can add the GPU manually to your VM by editing its conf file, which is named after the VM's ID with a .conf extension. See below:

nano /etc/pve/qemu-server/<vmid>.conf

# Example
nano /etc/pve/qemu-server/101.conf

Open up this file and at the bottom add:

hostpci0: 09:00.0

Save and close the file, and you should see the GPU listed under the VM's hardware in the Proxmox GUI.

You can also add the GPU using the GUI. Go to Hardware > Add > PCI Device.
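If the VM boots but the card misbehaves, there are extra flags that can go on the hostpci0 line. I didn't need them on this build, so treat these as things to try rather than required settings: pcie=1 presents the card as PCI Express (the VM must use the q35 machine type), x-vga=1 marks it as the primary GPU, and rombar=0 hides the card's option ROM from the guest.

```
hostpci0: 09:00.0,pcie=1,x-vga=1
```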

Complete!

Your Proxmox machine is now set up and ready to start mining!