Regarding live migration, yes.
Right, that one just failed.
Well, as already mentioned in #2: you lose the data written since the last replication.
But yes, HA restarts that VM on the surviving node. Automatically!
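Since the possible data loss depends on how recent the last sync is, it can help to check the replication jobs from the CLI. A minimal sketch using the standard pvesr tool (no job IDs from this thread are assumed):

```shell
# List all configured replication jobs cluster-wide
pvesr list

# Show the replication status on the current node, including
# the time of the last successful sync per job -- this tells you
# how much data you could lose if the node dies right now
pvesr status
```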
No, migrating between nodes and back should work. For the re-setup, migrating the guests off the node, removing it from the cluster and reinstalling is probably the best course of action...
This is as expected. At least 80% of the remotes need to have a basic subscription to avoid these messages; see this post by a staff member:
https://forum.proxmox.com/threads/proxmox-datacenter-manager-1-0-stable.177321/post-821945
My guess is...
Few workloads will be able to saturate your NVMe lanes. Unless you have that kind of workload, I would suggest 2x6 RAIDZ2 if you only have one server - it is safer to have fewer disks per vdev, and it is still plenty fast. Don’t...
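For illustration, the suggested 2x6 RAIDZ2 layout would be created roughly like this (the pool name and device names are placeholders, not from the post):

```shell
# Two RAIDZ2 vdevs of 6 disks each; each vdev survives two disk failures.
# "tank" and the sdX names are placeholders - use stable
# /dev/disk/by-id/ paths in practice.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl
```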
Are you actually running PBS version 2.2? It's out of support, so I would upgrade to at least the latest PBS 3 release. I wouldn't expect an outdated PBS to be able to back up the current PVE release.
UdoB,
Thanks for your response. I am using Enterprise Class drives in my current setup and happy to hear I don't need to spend a stupid amount of money for more drives.
I am an IT Systems Engineer by profession and home hobby. PVE is the most...
Found the issue: during my upgrade I mistakenly thought that I had migrated the Debian base repositories to the new format, but no, I had ended up just commenting them out :/
When I re-enabled the repositories in /etc/apt/sources.list and ran apt update...
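In case someone else runs into this: the "new format" is the deb822 style used by .sources files. A minimal sketch of a migrated Debian base repository entry (suite names assume Trixie; adjust to your release):

```
# /etc/apt/sources.list.d/debian.sources
Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie trixie-updates
Components: main contrib
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```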
I use RustDesk to access my VMs; the advantage is that it also works from outside.
However, I run my own RustDesk server for that.
I'd recommend setting up a scheduled trim instead.
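A sketch of both common variants (the pool name rpool is a placeholder):

```shell
# For mounted filesystems: enable the weekly systemd timer
systemctl enable --now fstrim.timer

# For ZFS pools: either run a manual/scheduled trim...
zpool trim rpool
# ...or let the pool issue discards continuously
zpool set autotrim=on rpool
```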
Please share
zfs list -ospace,refreservation -rS used
qm config VMIDHEREOFAVMWITHLOTSOFUSEDDATA
Also read this about how to properly use fstrim/discard. Pretty sure a zpool trim does not affect...
You would have to provide a lot more information than what you posted here. Otherwise we have to make educated guesses ;)
So just cluster, no HA?
So you use ZFS and VMs use local RAW drives on ZFS?
There are a few problems with that.
Short:
A...
There have been so many changes across subsystems from Debian Buster --> Bullseye --> Bookworm --> Trixie.
While I do have some systems (not PVE) that were upgraded all the way from Stretch until now, I would choose to create a new install and restore VMs...
To separate "the OS" from "the data" is still best practice, yes! For PVE the OS should reside on a mirror, of course.
Given the constraints of "Mini-PCs", I am fine with just installing PVE on two (or more) devices in a ZFS mirror. The resulting...
Hello,
You are mixing two VLAN models:
A VLAN-aware trunk bridge (vmbr0), which carries a VLAN trunk and transports tagged VLAN traffic
Per-VLAN access bridges (vmbr50, vmbr200), where tagging is done at the interface level (eno1.50 /...
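To illustrate the first model, a minimal /etc/network/interfaces sketch of a VLAN-aware trunk bridge (only eno1 and vmbr0 are taken from the post; the rest is generic boilerplate):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With this model the VLAN tag (e.g. 50 or 200) is set on the guest's virtual NIC, and separate per-VLAN bridges like vmbr50/vmbr200 are not needed.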
One older thread on this:
https://forum.proxmox.com/threads/bad-certificate-for-https-download-proxmox-com.126047/post-550240
The basic argument of the Proxmox developers is that all their packages are signed with GPG anyway, so HTTPS doesn't add...
Yes, technically everything is possible as "everything is a file". You can stack filesystems on top of block devices as usual and then repeat this step as often as you want to. (Or an LVM physical volume on top of a ZVOL.) The complexity...
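For example, the LVM-on-ZVOL stacking mentioned above would look roughly like this (pool, volume and VG names are placeholders):

```shell
# Create a 100G ZVOL, i.e. a block device backed by ZFS
zfs create -V 100G rpool/lvmbacking

# Stack an LVM physical volume and volume group on top of it
pvcreate /dev/zvol/rpool/lvmbacking
vgcreate vg_on_zvol /dev/zvol/rpool/lvmbacking
```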
Unfortunately I have no NFS storage configured, so I can't confirm this.
But my only "cifs" share does confirm your observation: no explicit "shared 1" but it is shared.
Or using pve-zsync, but then you can't use your NFS share on your NAS and you won't have migration or failover like with a cluster. Instead you would replicate the vms/lxcs from one node to the other and would be able to launch them in case one...
For VM disks not to be copied, the underlying storage needs to be officially tagged as "shared". Compare your settings with the table: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_storage_types
And it needs to be actually configured...
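For comparison, this is roughly what such entries look like in /etc/pve/storage.cfg (server address, share name and path are placeholders):

```
cifs: mynas
    server 192.168.1.10
    share backup
    content images

dir: shareddir
    path /mnt/shared
    content images
    shared 1
```

Network storage types like CIFS/NFS are treated as shared implicitly, while e.g. a dir storage needs the explicit "shared 1" flag.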