SSD upgrade

anoo222

Member
Feb 21, 2023
Hi,

I'm running a little home server on a Dell OptiPlex.
Since trying Proxmox I've just been tinkering and trying more stuff :)
I'm now running ~30 containers, and the root disk is a 256 GB SSD.
It's almost full, and for Jellyfin transcoding of remuxed 4K files (~70 GB), for example, I had to offload the transcoding to an IronWolf storage disk,
which in turn resulted in a higher IO delay than I would like.

Now I'm looking to upgrade from 16 GB to 32 GB of RAM, and to upgrade the SSD as well.
The SSD Proxmox is running on is just the one that came with the OptiPlex, and I'm noticing the wearout going up ~2% a month.
I would like to upgrade the SSD to a WD Black SN770 1TB.
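For reference, this is how I'm reading the wearout; a minimal check with smartmontools (the device paths are just what they are on my box, adjust as needed):

Code:
# NVMe: SMART reports wear as 'Percentage Used'
smartctl -a /dev/nvme0 | grep -i 'percentage used'
# SATA: check the wear leveling / media wearout attributes instead
smartctl -A /dev/sda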

First I would like to ask if this is a good choice, or are there better alternatives at this price point?
Note that I'm the only user on my home server, so mostly there isn't really a high load.

Secondly, and most importantly: what would be the best way to upgrade that SSD without losing any data on the host?
I have a weekly backup of all my containers, but I would like to 'back up' the SSD as well so everything stays as it is.

Thank you
 
Regular SSDs are only recommended if the filesystem is ext4 (with lvm-thin for VMs) or XFS.
If you use ZFS, you should only use SSDs with power-loss protection, which is only found in datacenter-grade drives.
Even an old used SATA enterprise drive is better (no fast wearout) than the shiniest, newest, fastest NVMe.
 
I am not a Proxmox expert, so take this information with a grain of salt. I am running Proxmox in a single-node configuration, and my server isn't always on 24x7 (some days I turn it off at night, some days I don't, for various reasons; I'd say it's about 50/50 that it stays on). I am running an old HP Z640 workstation with an E5-2690 v3 CPU and 64 GB of RAM.

I run 15 or so apps: WordPress, Nextcloud, OpenMediaVault, PhotoPrism, Grocy, Tracks, Leantime, Mealie, Home Assistant, Heimdall, Guacamole, Cloudflare Tunnels, Portainer, and a couple of other things, as a mix of VMs, LXC containers, and Docker containers. I have two TeamGroup SATA SSDs in a ZFS mirror for my boot drive, ISOs, backups, etc., and two TeamGroup NVMe SSDs (via an ASUS Hyper M.2 PCIe adapter) in a ZFS mirror for my VMs and containers.

TeamGroup is not enterprise grade by any means, but so far, in the last 9-10 months, I have zero wearout. None. Maybe it's the combination of apps, maybe it's because I have disabled corosync, pve-ha-crm, and pve-ha-lrm; I don't know. But at least in my situation (and I am not sure why; this is not an endorsement of the TeamGroup brand), the notion of Proxmox eating up consumer-grade SSDs has proven to be a non-issue for me. I had a problem a few months back, and it was a bad SATA cable, not a drive issue.
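In case anyone wants to try the same, this is roughly what I ran to disable those services; a sketch, and only sensible on a standalone node, since it kills clustering and HA:

Code:
# single standalone node only: stop and disable the cluster/HA services
systemctl disable --now pve-ha-crm pve-ha-lrm corosync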
 
I guess I do; here's my lvs output:

Code:
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <141.59g             77.99  3.72                           
  root          pve -wi-ao----  <69.50g                                                   
  swap          pve -wi-ao----    8.00g                                                   
  vm-100-disk-0 pve Vwi-aotz--    2.00g data        87.27                                 
  vm-101-disk-0 pve Vwi-aotz--    2.00g data        99.11                                 
  vm-102-disk-0 pve Vwi-aotz--    2.00g data        97.91                                 
  vm-103-disk-1 pve Vwi-aotz--   10.00g data        64.20                                 
  vm-104-disk-1 pve Vwi-aotz--   56.00g data        92.80                                 
  vm-105-disk-0 pve Vwi-aotz--    2.00g data        98.67                                 
  vm-106-disk-0 pve Vwi-aotz--    8.00g data        99.15                                 
  vm-107-disk-0 pve Vwi-aotz--    2.00g data        98.97                                 
  vm-108-disk-0 pve Vwi-aotz--    4.00g data        80.25                                 
  vm-109-disk-0 pve Vwi-aotz--    2.00g data        80.77                                 
  vm-113-disk-0 pve Vwi-a-tz--    4.00g data        85.05                                 
  vm-114-disk-1 pve Vwi-aotz--    3.00g data        84.13                                 
  vm-118-disk-0 pve Vwi-aotz--    2.00g data        87.24                                 
  vm-120-disk-1 pve Vwi-aotz--    2.00g data        99.08                                 
  vm-121-disk-0 pve Vwi-aotz--    8.00g data        93.51                                 
  vm-122-disk-0 pve Vwi-aotz--    2.50g data        94.84                                 
  vm-124-disk-0 pve Vwi-aotz--    1.50g data        99.27                                 
  vm-125-disk-0 pve Vwi-aotz--    3.00g data        84.16                                 
  vm-128-disk-0 pve Vwi-aotz--    1.50g data        88.41                                 
  vm-129-disk-0 pve Vwi-aotz--    2.00g data        66.52                                 
  vm-130-disk-1 pve Vwi-a-tz--    2.00g data        67.23
 
I have two TeamGroup SATA SSDs in a ZFS mirror for my boot drive, ISOs, backups, etc.
No problem in that case, even as a boot drive IMO.
But for VM usage, wearout will be quick unless the VMs are mostly idle, of course ... plus performance is bad because of the slow fsync rate.
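You can measure that yourself with pveperf, which reports FSYNCS/SECOND for a given path (the mount points below are examples, adjust to your storage):

Code:
# fsync rate of the SSD-backed root storage
pveperf /var/lib/vz
# compare with the HDD-backed storage
pveperf /mnt/ironwolf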
 
Secondly, and most importantly: what would be the best way to upgrade that SSD without losing any data on the host?
I have a weekly backup of all my containers, but I would like to 'back up' the SSD as well so everything stays as it is.

Hi, my filesystem is ext4.
There isn't such a way because, IIRC, LVM isn't clone-friendly.
The easiest, safest, error-free way is: back up your VMs, install PVE on the fresh SSD, then restore the VMs.
If the PVE host itself is heavily customized, backing up/restoring the ext4 partition and then copying/moving the old VM vDisks from the old SSD also works,
but only for an experienced Linux user, because mistakes in terminal commands can be catastrophic.
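A rough sketch of the backup/restore path with vzdump and pct restore (the VMID, storage names, and archive name are only examples):

Code:
# on the old install: back up container 104 to a mounted 'backups' storage
vzdump 104 --storage backups --mode snapshot --compress zstd

# on the fresh install: restore it onto the new storage
pct restore 104 /mnt/pve/backups/dump/vzdump-lxc-104-<timestamp>.tar.zst --storage local-lvm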
 
I wouldn't consider myself an experienced Linux user; I'm just an enthusiast learning every day.
I mostly understand the concepts, and then research how to implement them.

Would the following be a realistic option?

1) Install fresh Proxmox on the new SSD
2) Restore containers & VMs from backup
3) Compare apt list --installed between the current host & the new install, and reinstall the missing packages on the fresh install (I forgot what I installed manually; see the sketch after this list)
4) Copy the entire /etc folder from the current host to the fresh install?
(Is this even possible, or do I need to copy /etc/pve, /etc/subgid, /etc/fstab, etc. separately and not the whole /etc folder?)
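For step 3, something like this should work (a sketch; the file names are arbitrary):

Code:
# on the current host
apt list --installed 2>/dev/null | sort > packages.old
# on the fresh install
apt list --installed 2>/dev/null | sort > packages.new
# packages present on the old host but missing on the new one
comm -23 packages.old packages.new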
 
4) Copy the entire /etc folder from the current host to the fresh install?
(Is this even possible, or do I need to copy /etc/pve, /etc/subgid, /etc/fstab, etc. separately and not the whole /etc folder?)
You need to know which of those hundreds of files you are allowed to copy and which not. And sometimes you only want to copy a few lines of a config file. By just replacing the whole folder, you may not even be able to boot, because of wrong UUIDs in fstab and so on.
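The fstab point is easy to check, for example; compare the UUIDs the new install actually has against what a copied fstab expects (nothing destructive here):

Code:
# UUIDs of the block devices on the new install
blkid
# UUIDs the (copied) fstab refers to
grep UUID /etc/fstab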
 
Consider keeping the existing disk's rootfs & swap, and retain or repurpose the existing vz volume on that drive.
Format the new disk, the whole drive, as an lvm-thin volume and add it as dedicated guest storage. Restore your guests to the new storage.
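Roughly like this; the device name and storage ID are only examples, and this wipes the new disk, so double-check the device first (a sketch, not a recipe):

Code:
# WARNING: destroys everything on /dev/nvme0n1
pvcreate /dev/nvme0n1
vgcreate vmdata /dev/nvme0n1
# leave some free space in the VG for thin-pool metadata
lvcreate -l 95%FREE -T vmdata/vmstore
# register it in PVE as dedicated guest storage
pvesm add lvmthin vmstore --vgname vmdata --thinpool vmstore --content rootdir,images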
 
Consider keeping the existing disk's rootfs & swap, and retain or repurpose the existing vz volume on that drive.
Format the new disk, the whole drive, as an lvm-thin volume and add it as dedicated guest storage. Restore your guests to the new storage.
The problem with using an OptiPlex is that I only have 1 M.2 slot on the mobo...

You need to know which of those hundreds of files you are allowed to copy and which not. And sometimes you only want to copy a few lines of a config file. By just replacing the whole folder, you may not even be able to boot, because of wrong UUIDs in fstab and so on.
So copying the whole /etc is stupid, lol. I guess I'll try just copying /etc/pve/lxc and /etc/subgid, adjusting fstab to match the new host, and see from there.
Are there any other folders to keep in mind?
 
Are there any other folders to keep in mind?
That really depends on what you installed and what you changed from the defaults. If you haven't documented what you did and what not in the past, it's hard to tell what needs to be copied.
Some things you might also want to copy are the firewall rules in /etc/pve/firewall, VMs in /etc/pve/qemu-server, storage definitions in /etc/pve/storage.cfg, the network config in /etc/network/interfaces, /etc/resolv.conf, /etc/hosts, ...
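If it helps, you can grab just those in one go; something like this (tar strips the leading '/' and warns about it, which is fine; drop any paths that don't exist on your host):

Code:
tar czf /root/host-config.tgz \
  /etc/pve/firewall /etc/pve/qemu-server /etc/pve/storage.cfg \
  /etc/network/interfaces /etc/resolv.conf /etc/hosts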

I would highly recommend creating some proper documentation next time, even if it's very time-consuming to note down every single bit you change/do on the server.
 
Noted!

Initially it was just a fun project, as I obtained an OptiPlex with an i5-8500 for 50 euro, just to run Jellyfin for my movie library and to use as my IPTV box so I can record as well. At that time I didn't really think about documenting. But ever since, it kept expanding, and literally months of work went in.
In the future I will for sure document the changes I make to the host, so in case of a host SSD failure I don't lose all the config on the host.

Luckily I'm aware of this now because I want to upgrade the host SSD, and not because of a host SSD failure.
 
IIRC, be careful if you restore an existing LXC from a vzdump/PBS backup: if you've excluded mountpoints from the backup, the current mountpoints of the existing LXC will be deleted.
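To be clear, I mean mountpoints defined like this (the VMID, storage name, and paths are made up), where backup=0 excludes the volume from vzdump:

Code:
# storage-backed mountpoint on container 104, excluded from backups
pct set 104 -mp0 ironwolf:vm-104-disk-1,mp=/media,backup=0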
 
I've declared most of my mountpoints in /etc/pve/lxc/xxx.conf, so if I copy those config files from the current host to the new SSD, should I be good?
 
Actually, no: just restore all my containers from the vzdump backup on the new SSD and then change xxx.conf accordingly. Or won't this work?
 
Actually, no: just restore all my containers from the vzdump backup on the new SSD and then change xxx.conf accordingly. Or won't this work?
Of course that works, and it is the way. I was thinking of you copying/pasting the .conf files manually ...
 
That really depends on what you installed and what you changed from the defaults. If you haven't documented what you did and what not in the past, it's hard to tell what needs to be copied.
Some things you might also want to copy are the firewall rules in /etc/pve/firewall, VMs in /etc/pve/qemu-server, storage definitions in /etc/pve/storage.cfg, the network config in /etc/network/interfaces, /etc/resolv.conf, /etc/hosts, ...

I would highly recommend creating some proper documentation next time, even if it's very time-consuming to note down every single bit you change/do on the server.

@anoo222 Re: config and non-default packages. In the absence of good notes, I've had some success doing something like this:
Code:
# on the first system
# backup etc
tar cvzf etc.tgz /etc
# list of installed packages that placed files in etc
dpkg -S /etc | tr ', ' '\n' | sort >packages.old

# On new system do the same
dpkg -S /etc | tr ', ' '\n' | sort >packages.new
diff packages.old packages.new

Use that to determine what you might need to install.
Once they're installed, initialise a git repo in a copy of this new /etc.
Extract your etc backup over the top.

Then use something like VS Code to browse the changed files and inform the configuration of the new system. Hopefully there's not too much to wade through. Also, bear in mind that /etc/pve is a virtual filesystem. Next time, take notes ;-)
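Concretely, the git part could look like this (a sketch; it assumes GNU tar and the etc.tgz from the commands above, copied to /root on the new system):

Code:
# on the new system: work on a copy of the fresh /etc
cp -a /etc /root/etc-work
cd /root/etc-work
git init -q
git add -A && git commit -qm 'fresh install'   # set git user.name/user.email first if unset
# unpack the old backup over the top (the archive stores paths as etc/...)
tar xzf /root/etc.tgz --strip-components=1 -C /root/etc-work
git status   # files the old host added or changed
git diff     # line-level differences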
 
