Summary: Restore the full root filesystem of a backed-up Proxmox node - a
use case with ZFS as an example, but one that can be appropriately adjusted
for other systems. An approach without obscure tools: simple tar, sgdisk and
chroot. This is a follow-up to the previous post on backing up the
entire root filesystem offline from a rescue boot.
Previously, we created a full root filesystem backup of our
host. It's time to
create a freshly restored host from it - one that may or may not share
the exact same disk capacity, partitions or even filesystems. This is
also a perfect opportunity to change e.g. filesystem properties
that cannot be changed the same way once the system is installed.
Full restore principle
We have the most important part of a system - the contents of the root
filesystem in an archive created with the stock tar 1 tool - with
preserved permissions and correct symbolic links. There is absolutely NO
need to go about attempting to recreate some low-level disk structures
according to the original, let alone clone actual blocks of data. If
anything, our restored backup should result in a defragmented system.
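For context, a minimal sketch of the kind of archive assumed here - the exact
invocation was covered in the previous post and may differ; the paths below
are placeholders only:

# taken from a rescue boot with the original root filesystem mounted at /mnt (placeholder paths)
tar --one-file-system --xattrs --numeric-owner -cvpzf /backup/root.tar.gz -C /mnt .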
IMPORTANT This guide assumes you have backed up the non-root parts
of your system (such as guests) separately and/or that they reside on
shared storage anyhow, which should be a regular setup for any
serious, certainly any production-like, system.
Only two components are missing to get us running:
- a partition to restore it onto; and
- a bootloader that will bootstrap the system.
NOTE The origin of the backup in terms of configuration does NOT
matter. If we were e.g. changing mountpoints, we might, at worst, need
to adjust a configuration file here or there after the restore.
The original bootloader is also of little interest to us, as we had NOT
even backed it up.
UEFI system with ZFS
We will take a UEFI boot with ZFS on root as an example of our target
system; we will, however, make a few changes and add a SWAP partition
compared to what a stock PVE install would provide.
A live system to boot into is needed to make this happen. This could
be - generally speaking - regular Debian, 2 but for consistency, we
will boot with the not-so-intuitive option of the ISO installer, 3
exactly as before when making the
backup - that part is
not repeated here.
WARNING We are about to destroy ANY AND ALL original data
structures on a disk of our choice where we intend to deploy our
backup. It is prudent to only have the necessary storage attached so
as not to inadvertently perform this on the "wrong" target device.
Further, it would be unfortunate to detach the "wrong" devices by
mistake to begin with, so always check targets by e.g. UUID,
PARTUUID, PARTLABEL with blkid 4 before proceeding.
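For example, a quick way to list those identifiers for all currently
attached block devices (the output will obviously differ per system):

# list filesystem and partition identifiers of everything attached
blkid
# or narrow it down to a single candidate device and its partitions
blkid /dev/sda*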
Once booted up into the live system, we set up network and SSH access as
before - this is more comfortable, but not strictly necessary. As our
example backup resides on a remote system, we will need it for that
purpose; alternatively, everything including e.g. pre-prepared scripts can be
stored on a locally attached and mounted backup disk instead.
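A minimal sketch of such a setup, assuming a DHCP-capable network and an
interface named ens18 - both assumptions; the exact tools available depend
on the live environment:

# check the actual interface name first with: ip link
ip link set ens18 up
dhclient ens18
# set a root password and start the SSH daemon (service name may vary in the live environment)
passwd
systemctl start ssh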
Disk structures
This is a UEFI system and we will make use of disk /dev/sda as the
target in our case.
CAUTION You want to adjust this according to your case; sda is
typically the sole attached SATA disk on any system. Partitions are
then numbered with a suffix, e.g. the first one as sda1. In case of an
NVMe disk, it would be a bit different, with nvme0n1 for the entire
device and the first partition designated nvme0n1p1. The first 0
refers to the controller.
Be aware that these names are NOT fixed across reboots,
i.e. what was designated as sda before might appear as sdb upon a
live system boot.
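One way to double-check which physical disk currently hides behind a given
name is to look at the persistent identifiers, e.g.:

# persistent symlinks point back at the current sdX assignment
ls -l /dev/disk/by-id/
# serial numbers and models help confirm the right target
lsblk -o NAME,SIZE,MODEL,SERIAL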
We can check with lsblk 5 what is available at first, but ours is a
virtually empty system:
lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
loop0 squashfs 4.0
loop1 squashfs 4.0
sr0 iso9660 PVE 2024-11-20-21-45-59-00 0 100% /cdrom
sda
Another view of the disk itself with sgdisk: 6
sgdisk -p /dev/sda
Creating new GPT entries in memory.
Disk /dev/sda: 134217728 sectors, 64.0 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 83E0FED4-5213-4FC3-982A-6678E9458E0B
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 134217694
Partitions will be aligned on 2048-sector boundaries
Total free space is 134217661 sectors (64.0 GiB)
Number Start (sector) End (sector) Size Code Name
NOTE We will make use of sgdisk as this allows for good
reusability and is more error-proof, but if you prefer the interactive
way, plain gdisk 7 is at your disposal to achieve the same.
Although our target appears empty, we want to make sure there will not be
any confusing filesystem or partition table structures left behind from
before:
WARNING The below is destructive to ALL PARTITIONS on the
disk. If you only need to wipe some existing partitions or their
content, skip this step and adjust the rest according to your use
case.
wipefs -ab /dev/sda
sgdisk -Zo /dev/sda
Creating new GPT entries in memory.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
The wipefs 8 helps with destroying anything not known to sgdisk.
You can use wipefs /dev/sda* (without the -a option) to actually see
what is about to be deleted. In any case, the -b option creates
backups of the deleted signatures in the home directory.
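Should one of those signatures ever be needed again, the backup files can be
written back with dd - a sketch with a placeholder device name and offset,
which will differ on your system:

# the backup file name encodes the device and the byte offset of the wiped signature
dd if=~/wipefs-sda-0x00000438.bak of=/dev/sda seek=$((0x00000438)) bs=1 conv=notrunc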
Partitioning
Time to create the partitions. We do NOT need a BIOS boot partition on
an EFI system, so we will skip it, but in line with Proxmox designations,
we will make partition 2 the EFI partition and partition 3 the ZFS pool
partition. We do, however, want an extra partition at the end, for SWAP.
sgdisk -n "2:1M:+1G" -t "2:EF00" /dev/sda
sgdisk -n "3:0:-16G" -t "3:BF01" /dev/sda
sgdisk -n "4:0:0" -t "4:8200" /dev/sda
The EFI System Partition is numbered as 2, offset by 1M from the
beginning, sized 1G, and it has to have type EF00. Partition 3
immediately follows it, fills up the entire space in between except
for the last 16G and is marked (not entirely correctly, but as per
Proxmox nomenclature) as BF01, a Solaris (ZFS) partition type. The final
partition 4 is our SWAP and designated as such by type 8200.
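It does not hurt to let sgdisk print the resulting table back, just to
confirm the three partitions ended up where intended:

# print the new partition table and the resulting block devices
sgdisk -p /dev/sda
lsblk /dev/sda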
TIP You can list all types with sgdisk -L - these are the short
designations; partition types are also marked by PARTTYPE and can be
seen with e.g. lsblk -o+PARTTYPE - NOT to be confused with
PARTUUID. It is also possible to assign partition labels
(PARTLABEL) with sgdisk -c, but that is of little functional use
unless used for identification via /dev/disk/by-partlabel/, which
is less common.
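If you do want such labels anyway, a hypothetical example naming our three
partitions - the names themselves are arbitrary:

# assign optional partition labels (PARTLABEL)
sgdisk -c "2:esp" -c "3:rpool" -c "4:swap" /dev/sda
# they then show up as stable symlinks
ls -l /dev/disk/by-partlabel/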
As for the SWAP partition, this is just an example we are adding in
here; you may completely ignore it. Further, the spinning disk
aficionados will point out that the best practice for a SWAP partition is
to reside at the beginning of the disk due to performance considerations,
and they would be correct - though that is of less practical relevance
nowadays. We want to keep with Proxmox stock numbering to avoid confusion.
That said, partitions do NOT have to be numbered in the order they are laid
out on disk. We just want to keep everything easy to orient in, (not only)
for ourselves.
TIP If you take to the idea of adding a regular SWAP partition to your
existing ZFS install, you may use it to your benefit; if you are
making a new install, you can instead leave yourself some free space at the
end in the advanced options of the installer 9 and simply create
that one additional partition later.
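A minimal sketch of putting such a partition to use on an already installed
system - the device name is an assumption and the UUID is a placeholder,
take the real one from the mkswap output or blkid:

# initialise and activate the swap partition
mkswap /dev/sda4
swapon /dev/sda4
# make it permanent - replace the placeholder with the real UUID
echo "UUID=REPLACE-WITH-REAL-UUID none swap sw 0 0" >> /etc/fstab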
We will now create a FAT filesystem on our EFI System Partition and
prepare the SWAP space:
mkfs.vfat /dev/sda2
mkswap /dev/sda4
Let's check, specifically for PARTUUID and FSTYPE, after our setup:
lsblk -o+PARTUUID,FSTYPE
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS PARTUUID FSTYPE
loop0 7...
***
Content cut off. Read original on https://old.reddit.com/r/selfhosted/comments/1hy9i7k/restore_entire_proxmox_ve_host_from_backup/