Figured it out. Had to delete the node directory on the remaining node(s).
https://forum.proxmox.com/threads/cluster-node-stuck-in-ui.42330/#post-203778
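In case it helps anyone else, the node directories live under /etc/pve/nodes, so on the remaining node it was roughly this (node names are from my cluster, adjust to yours):
root@pve01:~# rm -r /etc/pve/nodes/pve02
root@pve01:~# rm -r /etc/pve/nodes/pve03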
I had left behind template(s) and retired test VM(s) that I didn't need to migrate.
Thanks.
Thanks. I had to run these commands.
root@pve01:~# pvecm expected 1
root@pve01:~# pvecm delnode pve02
root@pve01:~# pvecm expected 1
root@pve01:~# pvecm delnode pve03
Setting it to expected 1 wouldn't remove the old nodes, so I had to run delnode. The first time this reset the quorum and...
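If you want to double-check the old nodes are really gone, the membership can be confirmed afterwards on the surviving node:
root@pve01:~# pvecm status
root@pve01:~# pvecm nodes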
I've got a PVE 6.2-10 cluster with 3 nodes. It's using local ZFS with storage replication between nodes.
A number of things are changing in the environment, so I need to break the cluster apart and run everything on one node.
I moved everything to node 1. I then turned off nodes 2 and 3. At...
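For reference, moving everything over was just ordinary migrations, roughly along these lines (the IDs are examples):
root@pve02:~# qm migrate 100 pve01 --online
root@pve03:~# pct migrate 101 pve01 --restart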
I guess performing the manual backup of my VMs must have fixed it because it worked correctly from the scheduled backup this time.
Either that or deleting the unused secondary datastore. That's the only other thing I did.
I did find this information;
https://lists.proxmox.com/pipermail/pve-user/2020-July/171884.html
Which states that if you try to back up a VM to multiple datastores, you'll essentially invalidate the QEMU dirty-bitmap, as PBS doesn't track this per datastore currently.
I did try this (pbs-store-01...
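In other words, alternating the same guest between two PBS targets, something like this (storage names here are just placeholders), throws the bitmap away and forces a full read each time, as I understand it:
root@pve01:~# vzdump 100 --storage pbs-a --mode snapshot
root@pve01:~# vzdump 100 --storage pbs-b --mode snapshot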
Not sure if this feature was mentioned yet;
Would be very beneficial to those with large datasets that need to sync offsite.
Sometimes it's faster to transport the data instead of sending it over a WAN.
Another use case to keep in mind might be rotation of backup media. Similar to...
I have PBS running as a VM on node 1 of my 3 node Proxmox Cluster.
I set up a backup job within PVE to back up my running VMs every day at 1:30.
The backup occurs, but inspecting the logs it appears to be reading all blocks and not using the dirty-bitmap.
I'm at a loss as to why.
I've tested...
In my test case both Linux KVM VMs and the Proxmox host have tons of free CPU and RAM while running iperf3. And the bandwidth is so close to exactly 1Gbit/sec that it clearly looks like a limit is being imposed somewhere. So I'm a bit at a loss as to why I'm not seeing over 1Gbit/sec if this is...
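The runs were nothing fancy, just along these lines (host names and addresses are placeholders), including a parallel-stream variant to rule out a single-stream limit:
root@vm01:~# iperf3 -s
root@vm02:~# iperf3 -c 10.0.0.11
root@vm02:~# iperf3 -c 10.0.0.11 -P 4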
VirtIO
CPU and RAM usage didn't spike at all in the host or VMs when I ran iperf3. It's effortlessly hitting 1Gbit/sec, so I figured it had to be a limit of the bridge. I read somewhere on these forums (years ago) that the interface was limited by the speed of the physical NIC it's attached to...
I'm curious, does Proxmox have default mechanisms that route traffic between VMs and containers that reside on the same host without the traffic leaving the host for a physical switch? Basically a virtual switch. And if so, are there limitations on the speed?
Doing some quick iperf3 tests seems...
Found another approach to make this work.
In FreeNAS go to Service > NFS
Enable NFSv3 ownership model for NFSv4
Description for this setting is;
Set Maproot User=root
Set Maproot Group=wheel
I also set ACL permissions on the ZFS dataset in FreeNAS that I'm using for the backups. I just set...
So it was the permissions issue. I "made" it work by creating a user and group on FreeNAS called backup with ID 34 and assigning it permission to the ZFS dataset that I'm presenting over NFS. Now the permissions show backup:backup on the NFS share in PBS.
root@pbs:~# ls -alh /mnt/
total 13K...
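After that, adding the share as a datastore was just the usual step. Something like this (the local mount point and datastore name are placeholders for my setup):
root@pbs:~# mount -t nfs freenas:/mnt/store/pve/pbs /mnt/pbs
root@pbs:~# proxmox-backup-manager datastore create store1 /mnt/pbs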
I suspect you're using Linux as your NFS server.
FreeNAS is a bit different when setting up NFS shares.
Export file that's generated by the FreeNAS GUI;
/mnt/store/pve/pbs -quiet -maproot="root":"wheel" -network X.X.X.X/24
Output of mount on PBS server;
freenas:/mnt/store/pve/pbs on...
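For comparison, on a plain Linux NFS server the equivalent would normally just be an /etc/exports entry along these lines (path and subnet are examples), with no_root_squash playing the role of maproot=root, then an exportfs -ra to apply it:
/srv/pbs-store 192.168.1.0/24(rw,sync,no_root_squash)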
I'm not totally up to speed on how PBS functions. But I noticed that ZFS seems to be the preferred repository to use, and was wondering if a specific record size or other tweaks should be considered in this case to combat fragmentation or write amplification. From what I understand...
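To make the question concrete, I mean tweaks along these lines (dataset name and values are just examples of what I've seen suggested, not a recommendation):
root@pbs:~# zfs create -o recordsize=1M -o compression=lz4 -o atime=off tank/pbs-store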
I thought I read that PBS will have external Authentication (AD/LDAP) like PVE has. Is that accurate? How about the 2FA functionality as well?
I installed the beta and didn't see it there yet. Thanks.
Well, it looks like this was mentioned and commented on previously here by the Proxmox staff;
https://forum.proxmox.com/threads/lxc-and-live-migration.35908/
So it would seem like the request here would be to just "force" a snap/replication of a running container if using ZFS (or other...
Okay, that was what I was asking...what exact scenarios lead to this behavior. So specific to containers. I don't use containers much so probably why I don't see this.
I support the proposed goal. I wonder how to get there in a way that the Proxmox team would actually support.
I know that for KVM...
So if I understand you correctly, the current behavior is that when performing a container or VM migration from local storage to another node's local storage, the container or VM goes down for an extended period while the data is being migrated if there isn't already a replication job that has...
I don't migrate containers or VMs all that often, so maybe I'm missing something.
The proposal would only apply to local storage using storage migration, correct?
And it would only be possible with filesystems that support snapshots, correct? So ZFS and...?
The few times I tested "online live...
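For containers on ZFS, the pieces that exist today look roughly like this as far as I can tell (IDs, node names and schedule are examples), so the request is basically to trigger that replication automatically as part of the migration:
root@pve01:~# pvesr create-local-job 101-0 pve02 --schedule '*/15'
root@pve01:~# pct migrate 101 pve02 --restart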