Proxmox 7.3 GPU Hardware Acceleration for Jellyfin

A complete guide to GPU passthrough for hardware acceleration in virtual machines on a Proxmox host.

Revision 2 – Updated 17th January 2023

The revision includes updated information relevant to Proxmox's latest release, 7.3 (pve-kernel 5.15, November 2022).

Intro

GPU passthrough in virtualized environments is nothing new, but as operating systems evolve, the methods for installing and configuring these systems change. I recently had to set up hardware acceleration on an Ubuntu 22.04 Server virtual machine using an Nvidia 960M GPU, and there is still so much outdated information around that this guide aims to be relevant for 2023. It will work with Ubuntu 20.04/22.04 LTS and probably Debian 11.

  • Host OS: Proxmox 7.3 – Debian 11 “Bullseye” based
  • VM OS: Ubuntu 22.04 LTS

The guide will cover everything from the Proxmox configuration to the final virtual machine test of hardware acceleration using Jellyfin Media Server. We will also patch the driver to bypass Nvidia's transcode limits and install Jellyseerr, a movie and TV show request service that works with Sonarr and Radarr.

No single guide could possibly cover every system out there, so this one aims to give you as much information as possible; not all of it will be relevant to you and your hardware.

Enable BIOS Features

In your PC/laptop/server BIOS, make sure CPU virtualization (Intel VT-x / AMD-V) and IOMMU support (Intel VT-d / AMD-Vi) are enabled.

Also check your GPU's capabilities against Nvidia's Video Encode and Decode GPU Support Matrix. Find your GPU in the list and it will tell you what it's capable of and how many simultaneous streams it supports.

https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new

Proxmox Host Configuration

The commands below are to be entered on the Proxmox host machine. The virtual machine configuration is covered later in this guide.

Getting GPU Device IDs

Listing the PCI devices on the host will give you your device IDs. There will be many results; look for your GPU.
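
A plain lspci lists every PCI device on the host; the grep filter below is just a convenience and assumes an Nvidia card, so drop it if you want the full list.

lspci | grep -iE "nvidia|vga"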

You may see two devices: one is the GPU itself and the other is the audio device attached to it. Only one device is listed here because this system is a laptop.

Now that you have the GPU's PCI address (01:00 in this example), we can use it to narrow down the output with the command below.

lspci -n -s 01:00

The output will look something like this

01:00.0 0300: 10de:139b (rev a2)              # <------------ Main GPU
01:00.1 0403: 10de:0fbc (rev a1)              # <------------ GPU Sound Card

The parts we need are 10de:139b and 10de:0fbc. These are our GPU device IDs; whenever you need them again, just rerun the lspci -n -s 01:00 command.

Enable IOMMU in GRUB

On Proxmox 7.3 this step is only required on Intel CPUs running Proxmox kernels older than 5.15; on AMD it should be enabled automatically.

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough

Boot your Proxmox host with the BIOS features enabled, then edit the GRUB configuration so the kernel options are applied on every boot.

Open /etc/default/grub and edit the GRUB_CMDLINE_LINUX_DEFAULT line. I prefer to comment out the original line so I can always revert to exactly how it was.

nano /etc/default/grub

If you have an Intel CPU, use the following line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

If you have an AMD CPU, use the following line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

If your system supports passthrough mode, we can also add iommu=pt. This can improve performance because devices that stay on the host skip DMA remapping, while passed-through devices are still handled by the IOMMU.

The final line would look like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

Once done, press CTRL+X to exit, then y to save.

Update grub and reboot

update-grub
reboot

Once booted, with everything set up, we can use a few commands to check that IOMMU is working.

dmesg | grep -e DMAR -e IOMMU

# or

dmesg | grep -e DMAR -e IOMMU -e VT-d 

# For AMD, use "AMD-Vi" instead of VT-d

You should see output similar to the example below. It will not be identical on every system; all you are looking for is a line confirming that the IOMMU is enabled.
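
On an Intel system, the line you are looking for will read something like this (exact wording varies with kernel version):

[    0.050525] DMAR: IOMMU enabled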

Enable Unsafe Interrupts (optional)

Some systems may require unsafe interrupts to be enabled. For this, we create another file with the following command. Only enable unsafe interrupts if your system actually needs them, as they can degrade performance.

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf

Enable VFIO Modules

Edit the /etc/modules file

nano /etc/modules

Add the following to the file

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Press CTRL+X to exit, then y to save.

Finally, use update-initramfs -u -k all and reboot

update-initramfs -u -k all
reboot

After a reboot we can check whether the VFIO modules are loaded with

dmesg | grep -i vfio

You should get some output like

[    4.254246] VFIO - User Level meta-driver version: 0.3
[    4.275657] vfio_pci: add [10de:139b[ffffffff:ffffffff]] class 0x000000/00000000
[   62.519947] vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
[   62.519959] vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x19@0x900

We can also check with lspci which kernel driver the GPU is currently using. We got our device IDs earlier.

lspci -nnk -d 10de:139b

The output shows which kernel driver currently has claimed the GPU. Before the isolation steps in the next section it may show nouveau or no driver at all; after isolation and a reboot you want to see vfio-pci.
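
A rough example of what this might look like for a GTX 960M once vfio-pci has claimed the card (subsystem and module lines will vary per system):

01:00.0 3D controller [0302]: NVIDIA Corporation GM107M [GeForce GTX 960M] [10de:139b] (rev a2)
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau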

GPU Isolation From the Host

We need to make sure that the host does not try to use the GPU. There are two ways to do this: isolate the GPU with a vfio.conf file, or blacklist the drivers entirely. Use one or the other; if that doesn't work, you can try both at the same time.

GPU Device ID Isolation

While we have our device ID we can create a vfio.conf file that tells Proxmox to isolate that specific device on boot. Also, because we will be using the OVMF BIOS in our VM instead of SeaBIOS, the VM will be able to boot in UEFI mode instead of CSM.

echo "options vfio-pci ids=10de:1381,10de:0fbc disable_vga=1" > /etc/modprobe.d/vfio.conf

Driver Blacklisting

In Proxmox 7.3, blacklisting drivers is not absolutely necessary.

Now that we have isolated the GPU, we can also blacklist the drivers so that Proxmox doesn't load any that would conflict with our passthrough. This used to be mandatory, but if the vfio.conf file above works for you, blacklisting may not be needed.

echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf 
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf 
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf 
echo "blacklist nvidiafb" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia_drm" >> /etc/modprobe.d/blacklist.conf

Testing GPU vBIOS for UEFI Support

Dumping the vBIOS is not strictly necessary in Proxmox 7.3.

Dumping the GPU vBIOS is pretty straightforward and gives us two things: we can test whether the GPU is UEFI compatible, and we can pass the dumped vBIOS file through to our VM so the VM has a preloaded copy of it.

For now, we will dump and test the GPU vBIOS file. Start by installing the dependencies and building rom-parser.

apt update
apt install gcc git build-essential -y
git clone https://github.com/awilliam/rom-parser
cd rom-parser
make

Now to dump the vBIOS. The vbios will be dumped to /tmp/image.rom.

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /tmp/image.rom
echo 0 > rom

Now we can test the vbios rom with

./rom-parser /tmp/image.rom

The output is something similar to this

Valid ROM signature found @0h, PCIR offset 190h
 PCIR: type 0, vendor: 10de, device: 1280, class: 030000
 PCIR: revision 0, vendor revision: 1
Valid ROM signature found @f400h, PCIR offset 1ch
 PCIR: type 3, vendor: 10de, device: 1280, class: 030000
 PCIR: revision 3, vendor revision: 0
  EFI: Signature Valid
 Last image

To be UEFI compatible, you need a “type 3” in the result.

You can download the file from your Proxmox machine or copy it to another location such as your home folder; you may need it in the future. As long as you get output like the above with a type 3 image, the vBIOS is valid and UEFI capable.
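
For example, you might simply copy the dump somewhere safe under a more descriptive (hypothetical) name:

cp /tmp/image.rom ~/gtx960m-vbios.rom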

A note on Windows 7/10/11

This guide is not designed for Windows VMs, but if you absolutely must use Windows and you have an Nvidia GPU, use the commands below to avoid crashes with GeForce Experience.

https://pve.proxmox.com/wiki/Pci_passthrough#NVIDIA_Tips

echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

If you get a lot of errors in dmesg you can add the following line instead

echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf

VM Creation

I will be using Ubuntu 22.04.1 Server. We need to be quite specific about the options we enable and the order we enable them in, because as soon as we attach the GPU we will no longer be able to use the console through the Proxmox GUI.

Start with a Virtual Machine with the following settings (images below)

# VM Creation Tab 1
Name: ubuntuvm
# VM Creation Tab 2
Operating System: Ubuntu 22.04.1 Live Server
Linux Kernel: 5.x - 2.6
# VM Creation Tab 3
Graphic Card: Default
Machine: q35
BIOS: OVMF (UEFI)
Add EFI Disk: YES
Pre-Enroll keys: YES
SCSI Controller: VirtIO SCSI
# VM Creation Tab 4
SSD Emulation: YES  # Use if installing to SSD or NVME
Discard: YES        # Use if installing to SSD or NVME
# VM Creation Tab 5
Cores: 4
Type: host
# VM Creation Tab 6
Memory: 4096MB
Ballooning Device: NO   # Ballooning affects performance
# VM Creation Tab 7
Bridge: vmbr0
Network: VirtIO (paravirtualized)
# VM Creation Tab 8
Start after created: YES
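
If you prefer the command line, roughly the same VM can be created with qm. This is only a sketch: the VM ID, storage name (local-lvm), disk size and ISO filename are assumptions, so adjust them to your setup.

qm create 100 --name ubuntuvm --ostype l26 \
  --machine q35 --bios ovmf --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32,ssd=1,discard=on \
  --cores 4 --cpu host --memory 4096 --balloon 0 \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/ubuntu-22.04.1-live-server-amd64.iso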

VM Creation – General

VM Creation – OS

VM Creation – System

VM Creation – Disks

Discard and SSD Emulation should be enabled if installing to an SSD or NVMe drive

VM Creation – CPU

VM Creation – Memory

Give the VM as much memory as you can afford; around 4GB is usually the minimum. Ballooning allows the RAM allocation to grow and shrink as needed, but it affects performance, so we disable it for stability and performance reasons.

VM Creation – Network

VM Creation – Confirm

Ubuntu Server OS Installation

The server OS is no different from any other server install. Follow the prompts and reboot as necessary until your system is fully up and running.

Ubuntu VM Configuration

The commands below are to be entered on the virtual machine. The Proxmox host setup is covered in the sections above.

First, let's make sure we have some basic packages, services and updates in the virtual machine. Use the following commands. Some of this may already be installed, but the important part is a working SSH connection, because it will be our only way of accessing the virtual machine once the GPU is attached.

sudo apt update && sudo apt upgrade -y
sudo apt install git build-essential gcc openssh-server

From another computer, test that you can get an SSH connection before we start adding the GPU to the VM.

ssh user@<VM IP>

If the connection succeeds, we can move on to the next step.

Dump GPU BIOS to KVM Directory

Back on the Proxmox host, use the same method as earlier to dump the vBIOS, but this time save it to the /usr/share/kvm/ directory so QEMU can find it (the romfile path in the VM config is relative to this directory).

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /usr/share/kvm/vbios.bin
echo 0 > rom

We can now use the bios when we add the GPU to the VM.

Adding the GPU to the VM

From this point on, your VM will have NO GUI IN PROXMOX. Before you add the GPU, make sure you can SSH into the VM.

First, shut down the VM if it is running.

shutdown now

On the Proxmox host, edit the VM config file /etc/pve/qemu-server/<VMID>.conf

nano /etc/pve/qemu-server/<VMID>.conf        # < Replace <VMID> with the VM ID NUMBER

We add the GPU this way, rather than through the GUI, because we are passing through our GPU vBIOS file, which cannot be set via the Proxmox GUI.

Add the GPU by adding the line below, just after the disk entries.

hostpci0: 0000:01:00,romfile=vbios.bin,x-vga=1

Press CTRL+X, then y to save and exit.

In the Proxmox GUI the PCI device will now appear on the VM's Hardware tab, with our vbios.bin ROM file loaded.
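
For reference, the relevant lines of the <VMID>.conf might end up looking roughly like this (disks, network and other lines are omitted, and your values will differ):

bios: ovmf
cores: 4
cpu: host
hostpci0: 0000:01:00,romfile=vbios.bin,x-vga=1
machine: q35
memory: 4096
scsihw: virtio-scsi-pci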

Booting the VM & Installing Drivers

Boot the virtual machine and see if the GPU is recognised with

lspci

You should see your card in the list of PCI devices
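
Inside the guest the passed-through card might appear something like this (the bus address matches what nvidia-smi reports later in this guide):

06:10.0 VGA compatible controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)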

Install Nvidia Drivers Method #1 (Manually)

There are a few ways of doing this and the end result is largely the same; some methods simply work better on some systems than others.

We can use apt to search for available drivers. We won't be using apt to install the driver, but it's good to know what is packaged for this release.

apt search nvidia-driver

apt returns the available drivers as a list, which shows the newest driver packaged for this release.

Full Text Search... Done
nvidia-384/jammy-updates,jammy-security 390.157-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-390

nvidia-384-dev/jammy-updates,jammy-security 390.157-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-390

nvidia-driver-390/jammy-updates,jammy-security 390.157-0ubuntu0.22.04.1 amd64
  NVIDIA driver metapackage

nvidia-driver-418/jammy 430.50-0ubuntu3 amd64
  Transitional package for nvidia-driver-430

nvidia-driver-418-server/jammy 418.226.00-0ubuntu4 amd64
  NVIDIA Server Driver metapackage

nvidia-driver-430/jammy 440.100-0ubuntu1 amd64
  Transitional package for nvidia-driver-440

nvidia-driver-435/jammy 455.45.01-0ubuntu1 amd64
  Transitional package for nvidia-driver-455

nvidia-driver-440/jammy 450.119.03-0ubuntu1 amd64
  Transitional package for nvidia-driver-450

nvidia-driver-440-server/jammy-updates,jammy-security 450.216.04-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-450-server

nvidia-driver-450/jammy 460.91.03-0ubuntu1 amd64
  Transitional package for nvidia-driver-460

nvidia-driver-450-server/jammy-updates,jammy-security 450.216.04-0ubuntu0.22.04.1 amd64
  NVIDIA Server Driver metapackage

nvidia-driver-455/jammy 460.91.03-0ubuntu1 amd64
  Transitional package for nvidia-driver-460

nvidia-driver-460/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-470

nvidia-driver-460-server/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-470-server

nvidia-driver-465/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-470

nvidia-driver-470/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  NVIDIA driver metapackage

nvidia-driver-470-server/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  NVIDIA Server Driver metapackage

nvidia-driver-495/jammy-updates,jammy-security 510.108.03-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-510

nvidia-driver-510/jammy-updates,jammy-security 510.108.03-0ubuntu0.22.04.1 amd64
  NVIDIA driver metapackage

nvidia-driver-510-server/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-515-server

nvidia-driver-515/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA driver metapackage

nvidia-driver-515-open/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA driver (open kernel) metapackage

nvidia-driver-515-server/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA Server Driver metapackage

nvidia-driver-520/jammy-updates,jammy-security 525.60.11-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-525

nvidia-driver-520-open/jammy-updates,jammy-security 525.60.11-0ubuntu0.22.04.1 amd64
  Transitional package for nvidia-driver-525

nvidia-driver-525/jammy-updates,jammy-security 525.60.11-0ubuntu0.22.04.1 amd64
  NVIDIA driver metapackage

nvidia-driver-525-open/jammy-updates,jammy-security 525.60.11-0ubuntu0.22.04.1 amd64
  NVIDIA driver (open kernel) metapackage

nvidia-driver-525-server/jammy-updates,jammy-security 525.60.13-0ubuntu0.22.04.1 amd64
  NVIDIA Server Driver metapackage

nvidia-headless-390/jammy-updates,jammy-security 390.157-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-418-server/jammy 418.226.00-0ubuntu4 amd64
  NVIDIA headless metapackage

nvidia-headless-450-server/jammy-updates,jammy-security 450.216.04-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-470/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-470-server/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-510/jammy-updates,jammy-security 510.108.03-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-515/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-515-open/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage (open kernel module)

nvidia-headless-515-server/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-525/jammy-updates,jammy-security 525.60.11-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-525-open/jammy-updates,jammy-security 525.60.11-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage (open kernel module)

nvidia-headless-525-server/jammy-updates,jammy-security 525.60.13-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage

nvidia-headless-no-dkms-390/jammy-updates,jammy-security 390.157-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-418-server/jammy 418.226.00-0ubuntu4 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-450-server/jammy-updates,jammy-security 450.216.04-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-470/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-470-server/jammy-updates,jammy-security 470.161.03-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-510/jammy-updates,jammy-security 510.108.03-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-515/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-515-open/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS (open kernel module)

nvidia-headless-no-dkms-515-server/jammy-updates,jammy-security 515.86.01-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-525/jammy-updates,jammy-security 525.60.11-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

nvidia-headless-no-dkms-525-open/jammy-updates,jammy-security 525.60.11-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS (open kernel module)

nvidia-headless-no-dkms-525-server/jammy-updates,jammy-security 525.60.13-0ubuntu0.22.04.1 amd64
  NVIDIA headless metapackage - no DKMS

xserver-xorg-video-nvidia-390/jammy-updates,jammy-security 390.157-0ubuntu0.22.04.1 amd64
  NVIDIA binary Xorg driver

xserver-xorg-video-nvidia-418-server/jammy 418.226.00-0ubuntu4 amd64
  NVIDIA binary Xorg driver

xserver-xorg-video-nvidia-450-server/jammy-updates,jammy-security 450.216.04-0ubuntu0.22.04.1 amd64
  NVIDIA binary Xorg driver

From this list, the latest available driver is 525. We could install it with apt (nvidia-headless-no-dkms-525-server plus nvidia-utils-525-server), but instead we will download the driver from Nvidia's website and install it manually so we know exactly what is installed.
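
For completeness, the apt route mentioned above would look something like this (not the method used in the rest of this guide):

sudo apt install nvidia-headless-no-dkms-525-server nvidia-utils-525-server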

Go to https://www.nvidia.com/en-gb/drivers/unix/linux-amd64-display-archive/ and find the latest driver for your GPU, then use wget to download it to your virtual machine.

wget https://uk.download.nvidia.com/XFree86/Linux-x86_64/525.60.11/NVIDIA-Linux-x86_64-525.60.11.run

Make the installer executable, then run it as root:

chmod +x NVIDIA-Linux-x86_64-525.60.11.run
sudo ./NVIDIA-Linux-x86_64-525.60.11.run --no-questions --ui=none

Once the installer completes you can check the status of the driver with nvidia-smi

dazeb@jellyfin:~$ nvidia-smi
Sun Jan  8 23:31:28 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03   Driver Version: 510.108.03   CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:06:10.0 Off |                  N/A |
| N/A   50C    P8    N/A /  N/A |      0MiB /  2048MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

or nvidia-smi -L

dazeb@jellyfin:~$ nvidia-smi -L
GPU 0: NVIDIA GeForce GTX 960M (UUID: GPU-432dca14-a3c7-c2c2-8e60-9f635b5fc2ad)

If nothing comes up or you get an error, you may need to try a different driver version or a different method. I ended up using driver version 510, which is why the output above shows 510.

Install Nvidia Drivers Method #2 (Ubuntu Recommended Drivers)

Alternatively, you can install the drivers recommended by Ubuntu.

Install the official PPA (this step may not be necessary, as Ubuntu LTS releases already ship recent Nvidia drivers):

sudo add-apt-repository ppa:graphics-drivers/ppa

See the recommended device drivers

sudo apt upgrade -y
sudo ubuntu-drivers devices

Automatically install the drivers

sudo ubuntu-drivers autoinstall

To remove the repo

sudo add-apt-repository --remove ppa:graphics-drivers/ppa

Remove Nvidia drivers completely

sudo apt purge nvidia-*
sudo apt autoremove

Installing nvtop GPU Process Viewer

A nice little app like htop but for Nvidia GPUs, showing GPU processes and utilisation.

Install the repository

sudo add-apt-repository ppa:flexiondotorg/nvtop

There is no need to run apt update; it is done automatically when adding a repository on 22.04.

sudo apt install nvtop

Now you can run the app with either nvtop or sudo nvtop

Jellyfin Install

We can now install Jellyfin to test our transcoding. The installation is simple enough; you can find the full instructions in the Jellyfin docs.

Install some needed packages.

sudo apt install curl gnupg

Enable the universe repository for FFmpeg.

sudo add-apt-repository universe

Create a keyrings folder and install the Jellyfin keyring.

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.jellyfin.org/$( awk -F'=' '/^ID=/{ print $NF }' /etc/os-release )/jellyfin_team.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/jellyfin.gpg

Add a repository configuration at /etc/apt/sources.list.d/jellyfin.sources (paste the whole block as one command)

cat <<EOF | sudo tee /etc/apt/sources.list.d/jellyfin.sources
Types: deb
URIs: https://repo.jellyfin.org/$( awk -F'=' '/^ID=/{ print $NF }' /etc/os-release )
Suites: $( awk -F'=' '/^VERSION_CODENAME=/{ print $NF }' /etc/os-release )
Components: main
Architectures: $( dpkg --print-architecture )
Signed-By: /etc/apt/keyrings/jellyfin.gpg
EOF

Update the packages and install Jellyfin

sudo apt update && sudo apt install jellyfin

Restart the Jellyfin service

sudo systemctl restart jellyfin

See the service status

sudo service jellyfin status
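
If the install went well, the status output should report the service as active, roughly like this (trimmed):

● jellyfin.service - Jellyfin Media Server
     Loaded: loaded (/lib/systemd/system/jellyfin.service; enabled; ...)
     Active: active (running) ...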

Transcode Testing with Jellyfin

The simplest way to test transcoding is to play a video that requires it. When playing locally you may have to force the server to transcode by reducing the playback quality while the video is playing.

First make sure you have added some media; Samba and NFS shares work great. Open Jellyfin (by default at http://<VM IP>:8096), go to Dashboard > Playback and select Nvidia NVENC as the hardware acceleration method.

While the video plays, open the player's quality settings and select a very low quality.

Go back to your terminal and run the nvidia-smi command again.

dazeb@jellyfin:~$ nvidia-smi
Sun Jan  8 23:52:51 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03   Driver Version: 510.108.03   CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:06:10.0 Off |                  N/A |
| N/A   57C    P0    N/A /  N/A |     97MiB /  2048MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2693      C   ...ib/jellyfin-ffmpeg/ffmpeg       94MiB |# <--GPU!
+-----------------------------------------------------------------------------+

If everything was done correctly you should now see a running ffmpeg process using the GPU. This means hardware acceleration is working in your virtual machine.

Patch the Driver for More Transcode Streams

We can patch the Nvidia driver to allow more simultaneous transcode streams than consumer-grade cards normally permit. The project is on GitHub: https://github.com/keylase/nvidia-patch

Look up your card in the support matrix at https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new#Encoder to see how many streams it supports. If you need more streams, you can install the patch by downloading and running the patch script.
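
Roughly, that means cloning the repository on the VM and running its patch script as root; check the project's README first, since supported driver versions change (sketch below, assuming the driver from earlier is installed):

git clone https://github.com/keylase/nvidia-patch.git
cd nvidia-patch
sudo bash ./patch.sh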

Install Jellyseerr

Jellyseerr is a fork of Overseerr for Jellyfin, Plex and Emby. It is a movie and TV show request system that you can use to send media requests to Sonarr and Radarr for download.

The best way to set up Jellyseerr is in a Docker container inside an unprivileged Proxmox LXC container.

An easy way to create an LXC container with Docker installed and ready to go is to use the Proxmox Helper Scripts by tteck.

Below is a one-liner that you can paste into your Proxmox terminal. It will walk you through all the steps and options. If you create the container as privileged it will have access to USB devices automatically; I will not be creating a privileged container because I do not need full access to the host, as I will be using a Samba share for my media folder.

During installation you will be asked whether you want to install Portainer and the new Docker Compose plugin (invoked as docker compose, not the old docker-compose). I'd suggest selecting yes if you want a GUI to manage your Docker containers.

bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/docker-v5.sh)"

Now that we have an LXC container with Docker installed, we can go ahead with the Jellyseerr install. If you are running a homelab I'd suggest setting up a Cloudflare Tunnel or Nginx Proxy Manager. Nginx Proxy Manager is also available in the Proxmox Helper Scripts; the one-liner is below. Paste it into your Proxmox host terminal and a container will be created.

bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/nginxproxymanager-v5.sh)"

You can install using the Portainer GUI or via the command line. Here are both methods.

Install Jellyseerr – Portainer

Open Portainer on the Docker LXC which will be port 9000 on the LXC IP.

Go to Stacks > New Stack

Give the stack a name and paste in the following content, remembering to change details like the timezone and the config directory (/path/to/appdata/config) to match your own setup.

version: '3'
services:
  jellyseerr:
    image: fallenbagel/jellyseerr:latest
    container_name: jellyseerr
    environment:
      - LOG_LEVEL=debug
      - TZ=Europe/London
    ports:
      - 5055:5055
    volumes:
      - /path/to/appdata/config:/app/config
    restart: unless-stopped

Click Deploy the stack and Jellyseerr will be created. Move on to the next part, setting up Jellyseerr.

Install Jellyseerr – Terminal Commands

Quick commands to install Jellyseerr from the command line.

Create a folder and a docker-compose.yml file for the project

mkdir jellyseerr
cd jellyseerr
nano docker-compose.yml

Paste in the following, changing values such as TZ and the config folder to your own.

version: '3'
services:
  jellyseerr:
    image: fallenbagel/jellyseerr:latest
    container_name: jellyseerr
    environment:
      - LOG_LEVEL=debug
      - TZ=Europe/London
    ports:
      - 5055:5055
    volumes:
      - /path/to/appdata/config:/app/config
    restart: unless-stopped

Press CTRL+X, then y to save and close.

Bring up the stack

docker compose up -d

The stack will download and you will be able to reach Jellyseerr on the LXC IP at port 5055.

