Note: This is not intended to be a best practice guide. It is just what “works for me”. Any and all comments/improvements are very welcome.
Table of Contents
- Install Proxmox
- Updating repositories using the web interface
- Connecting using a public key
- Update the system
- Storage
- Configure Backups
- Download Guest Virtual Machine ISOs
- Download Container Templates
- Network card bonding
Install Proxmox
Boot the machine from a USB stick containing the Proxmox 7.x installation files and follow the installation wizard. I am not going to detail the steps here because they are well documented elsewhere – truthfully, it is pretty simple anyway 🙂
Note: This section will be updated in the future to include information about BTRFS.
Updating repositories using the web interface
Proxmox 7.x now supports modifying repositories using the web interface. This method just automates the process of editing /etc/apt/sources.list; either method can be used.
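For reference, the manual equivalent on Proxmox 7 (Debian bullseye) is to comment out the enterprise entry and add the no-subscription one – a quick sketch:
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" >> /etc/apt/sources.list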
Open the Proxmox Web Interface
Open a web browser, navigate to https://xxx.xxx.xxx.xxx:8006 and go to Server / Updates / Repositories.
Disable the Enterprise Repository
If you do not have an Enterprise subscription, select the Enterprise repository and click the disable button.

Add the No-Subscription Repository
Click add and from the drop-down select “No-Subscription” and finally click the Add button.
Note: Whilst the disclaimer says that this repo should not be used for production environments, I have never experienced any issues.

Review configured repositories
Finally review the configured repositories.

You will now need to run updates. This can be done using the “Updates” section in the web interface or from the terminal over SSH – I prefer the latter …
Connecting using a public key
You can connect to your Proxmox server using public key authentication, which is based on digital signatures and is both more secure and more convenient than traditional password authentication. Use the following command to copy your public key to the server:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<proxmox-ip-address>
You can now connect to the server using the following command, without being prompted for a password.
ssh -i ~/.ssh/id_rsa root@<proxmox-ip-address>
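Note: If you do not already have a key pair on your client machine, generate one first (this creates the default ~/.ssh/id_rsa used above):
ssh-keygen -t rsa -b 4096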
For further details please see the following post – How to Set Up SSH Keys on Debian 10.
Note: This article is still valid even though Proxmox 7.x is based on Debian 11.
Update the system
Now that we can easily SSH into our server, we need to update the system –
apt update
apt -y upgrade
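Note: On a Proxmox host, the official recommendation is to use dist-upgrade rather than plain upgrade, as it will also pull in new dependencies such as updated kernel packages:
apt update
apt -y dist-upgrade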
Storage
I have a large number of disks in my Proxmox server: 1 NVMe, 5 SSD & 4 SATA. To maximize performance I normally configure them as follows –
| Drive | Name | Format | Purpose | Content Type |
|---|---|---|---|---|
| 1 x 512GB SSD | | | Debian 11 / Proxmox | |
| 3 x 512GB SSD + 1 x 1TB SSD | zfs-hot | ZFS | High performance VM disks | Disk Images & Containers |
| 3 x 1TB SATA | zfs-warm | ZFS | Lower performance VM disks | Disk Images & Containers |
| 1 x 4TB SATA | Backup | ext4 | Backup / ISOs / Container Templates | VZDump, ISOs & Container Templates |
| 1 x 512GB NVMe | | NTFS | Passed through directly to a VM to test performance | |
VM Storage
ZFS Pool Creation
The ZFS pools that I created are optimized for speed, not reliability – the disks are simply striped together with no redundancy, so if one disk fails, I lose the whole pool. I do not really care about this because I take nightly backups to a separate drive – the 4TB SATA.
When creating the pools, it is always best to reference the disks by their ID because this does not change if they are re-connected differently for some reason.
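You can list the available IDs (and see which device each one maps to) with:
ls -l /dev/disk/by-id/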
Create the zfs-hot pool:
zpool create -f -o ashift=12 zfs-hot \
  /dev/disk/by-id/ata-SanDisk_Ultra_II_480GB_161803800213 \
  /dev/disk/by-id/ata-SanDisk_Ultra_II_480GB_161803800899 \
  /dev/disk/by-id/ata-SanDisk_Ultra_II_480GB_161803801332 \
  /dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_S3Z6NB0K305604K
Create the zfs-warm pool:
zpool create -f -o ashift=12 zfs-warm \
  /dev/disk/by-id/ata-WDC_WD10EZEX-60ZF5A0_WD-WCC1S3374777 \
  /dev/disk/by-id/ata-WDC_WD10EZEX-60ZF5A0_WD-WMC1S3070843 \
  /dev/disk/by-id/ata-WDC_WD10EZEX-60ZF5A0_WD-WMC1S3071084
Set compression for both pools:
zfs set compression=lz4 zfs-hot
zfs set compression=lz4 zfs-warm
Note: If you are re-installing and the pools have already been created, they can be imported into your system using:
zpool import -a -f
Check ZFS pool status
zpool status
Add ZFS Pools to Proxmox
pvesm add zfspool zfs-hot --pool zfs-hot
pvesm add zfspool zfs-warm --pool zfs-warm
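You can confirm that both pools are now registered and active with:
pvesm status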
Backup Drive
First mount the drive to a folder on the system:
mkdir /backup
mount /dev/disk/by-id/ata-TOSHIBA_HDWE140_58D3Y018FBRG-part1 /backup
Now use lsblk to determine the UUID of the drive we mounted:
lsblk -f
Now use mtab to find the mount information:
cat /etc/mtab | grep backup
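Alternatively, blkid prints the UUID for the partition directly:
blkid /dev/disk/by-id/ata-TOSHIBA_HDWE140_58D3Y018FBRG-part1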
Now modify /etc/fstab to make this mount permanent:
UUID="04188fa7-87d1-4e25-b948-2ac51ea1d4ed" /backup ext4 rw,relatime 0 0
We can now unmount the drive and use mount -a to test that our fstab entries are correct without needing to reboot.
umount /backup
mount -a
Add to Backup drive to Proxmox as a directory:
pvesm add dir backup --path /backup
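The content types (configured via the web interface in the next section) can also be set from the command line – for example, to limit the backup storage to the types I use it for:
pvesm set backup --content backup,iso,vztmpl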
Storage Content Types
Using the web interface, we can now define the content types for each of our storage pools. The table above lists the content types I use each of my storage pools for.
Navigate to Datacenter/Storage:

Double click a pool (backup for example) and from the content drop down, select the storage types that you would like to use this pool for.

I typically store my ISOs and Container Templates on the Backup pool so that they persist when I decide to re-install the system volume.
The ZFS pools are only used for VM Disk Images and Containers.
I typically now remove the local-lvm pool to eliminate the risk of any content being stored on it accidentally (again, to allow easy re-installation). It does not seem possible to remove the local pool, so I just restrict it to snippets (which I currently do not use).
Note: I should probably use a custom partitioning scheme during installation so that I am not wasting disk space for the local-lvm pool.
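Alternatively, the space can be reclaimed after installation. The following is only a sketch, assuming the default “pve” volume group with an ext4 root – it destroys the local-lvm thin pool, so only run it on a fresh install with nothing stored there:
pvesm remove local-lvm
lvremove -y /dev/pve/data
lvresize -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root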
Here is my final storage configuration:

Configure Backups
Now that we have all of our storage pools configured, we can set up automatic backups. Normally I schedule the backups to run daily during off hours.
Navigate to Datacenter/Backup:

Click the Add button to configure your backup job –
- Storage Pool: Select your storage pool – in our case this is “backup”
- Day of Week: Multi-select the days you would like the backup to occur on
- Start time: 3:00am
- Selection mode: Which VMs to back up – All
- Send email to: Enter the email address for notifications
- Email notification: Always or Failures only
- Compression: The default ZSTD (fast and good) works for me
- Mode: Snapshot / Suspend / Stop – I normally use Snapshot
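Before the schedule first fires, the same options can be exercised with a one-off manual backup from the shell – a sketch, using a placeholder email address:
vzdump --all --storage backup --mode snapshot --compress zstd --mailto admin@example.com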


Click Create and you have completed your automatic backup configuration.

Download Guest Virtual Machine ISOs
Proxmox 7.x includes a new feature that allows you to download ISOs for guest operating systems directly to the server, simplifying the old process of downloading ISOs to a client machine and then uploading them.
Navigate to the storage pool that you have configured to store ISO Images and select ISO Images.

Example ISO download : Debian 11
As an example we will start by downloading the Debian 11 Network install CD. Go to Debian — Network install from a minimal CD and copy the URL for amd64 – https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-11.0.0-amd64-netinst.iso
Now, click the download from URL button, paste the URL into the URL field and click the Query URL button. The dialog will be populated with information about the ISO you are about to download.

You can also use this dialog to verify the download by selecting a hash algorithm and entering the expected checksum. Once you are happy, click the Download button.
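If you prefer to check the hash yourself once the download completes, the ISO is stored in the storage pool’s template/iso directory – assuming the backup directory storage from earlier, compare the output against the checksum published on the Debian site:
sha256sum /backup/template/iso/debian-11.0.0-amd64-netinst.iso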
A task dialog will appear for you to monitor the status.

When it is complete and you close the dialog, the ISO will be listed in your available images.

Other ISOs to Download
You can repeat this process to download any other ISOs that you desire. Two other ISOs I always download are –
- Windows 10 direct from the Microsoft site – https://www.microsoft.com/en-ca/software-download/windows10
- Windows VirtIO drivers – https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso – an explanation of why you need these can be found at Windows VirtIO Drivers – Proxmox VE
However, to make installation of Windows into virtual machines easier, I suggest “slipstreaming” the drivers into the Windows source. I have created a walkthrough of this process: Slipstreaming Proxmox Virtio Drivers into Windows 10 – gareth.com.
Download Container Templates
Proxmox provides a variety of basic templates for most common Linux distributions. They can be downloaded using the web interface or by using the command line via ssh.
Using the web interface
Navigate to the storage pool that you have configured to store Container Templates and select CT Templates.

Click the templates button to show a list of available templates:

Select the template you want and click the download button. A download task viewer is then displayed.

Sadly, you cannot multi-select templates, so you will need to repeat this process for each template you need. Downloading templates in bulk is easier using the command line.
Once you have downloaded all the templates you require they are displayed under the storage pool.

Using the command line (pveam)
The Proxmox VE Appliance Manager (pveam) can also be used to perform these functions.
The list of available templates is updated automatically on a daily basis, but you can also update it manually by executing:
# pveam update
To list the templates that are available:
# pveam available
This list is pretty extensive as it includes a number of templates from TurnKey Linux. If you would like to filter the list to display only the templates provided by Proxmox, use the following command:
# pveam available --section system
Downloading the templates is as simple as passing the storage pool and the name of the template to download. These templates are reasonably small, so I download the current version for all the popular distributions:
# pveam download local archlinux-base_20200508-1_amd64.tar.gz
# pveam download local centos-8-default_20191016_amd64.tar.xz
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
# pveam download local fedora-32-default_20200430_amd64.tar.xz
# pveam download local gentoo-current-default_20200310_amd64.tar.xz
# pveam download local opensuse-15.1-default_20190719_amd64.tar.xz
# pveam download local ubuntu-20.04-standard_20.04-1_amd64.tar.gz
Note: Whilst the version numbers are correct at the time of writing, they will have probably changed by the time you are reading this!
Network card bonding
My Proxmox server’s motherboard has 4 network cards. To increase performance and reliability, I combine all 4 of them using a Linux Bond. To achieve this you will need a switch that supports 802.3ad link aggregation (LACP).
This is a two-stage process: first we configure the Proxmox server, then we configure the switch. As both steps are required, the server is temporarily disconnected whilst step 2 is being performed.
Note: It is important to perform these steps in the exact order they are described. If done incorrectly you could lose access to your server.
Create Proxmox Linux Bond
Navigate to System/Network on the Proxmox server you are configuring.

Make a note of the network devices you are going to bond together, in this case they are enp5s0, enp6s0, enp10s0 and enp11s0.
Edit the existing Linux Bridge and remove the existing Bridge port, leaving the IP address and Gateway configured.

Next, create a new Linux Bond by selecting it from the Create menu.
- Slaves: Enter the list of network devices we gathered earlier separated by spaces.
- Mode: Set to LACP (802.3ad)
Note: I am not sure if there is any major benefit to setting the Hash Policy. Leaving it blank seems to work.

Modify Linux Bridge to use this bond by entering bond0 into the Bridge port field.
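For reference, the resulting /etc/network/interfaces should end up looking roughly like the following – a sketch, with placeholder address and gateway values:
auto bond0
iface bond0 inet manual
	bond-slaves enp5s0 enp6s0 enp10s0 enp11s0
	bond-miimon 100
	bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
	address 192.168.1.2/24
	gateway 192.168.1.1
	bridge-ports bond0
	bridge-stp off
	bridge-fd 0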

The network configuration screen now tells us – Pending changes (Either reboot or use ‘Apply Configuration’)

IMPORTANT: When you click the Apply Configuration button, the server will be disconnected from the network and connectivity will not be resumed until the switch has been configured in the next step.
Configure Network Switch
Port aggregation needs to be turned on between the ports that are connected to the Proxmox server. I am not going into much detail here because this configuration will be dependent on your switch manufacturer.
Note: On the Ubiquiti switch these ports have to be next to each other because it uses port ranges.
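Once the switch side has been configured and connectivity restored, you can verify that the 802.3ad aggregation has formed correctly from the Proxmox shell:
cat /proc/net/bonding/bond0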

Note: I have experienced some slowdown in the initial SSH connection to Linux virtual machines; I am currently unsure what causes this.
Job done.
References:
1. How to Set Up SSH Keys on Debian 10 – https://linuxize.com/post/how-to-set-up-ssh-keys-on-debian-10/
2. Slipstreaming Proxmox Virtio Drivers into Windows 10 – http://blog.gareth.com/index.php/2020/07/17/slipstreaming-proxmox-virtio-drivers-into-windows-10/