Search results

  1. J

    [SOLVED] Can't remove nodes, cfs-locked 'file-replication_cfg' operation: no quorum!

    Figured it out. Had to delete the node directory on the remaining node(s). https://forum.proxmox.com/threads/cluster-node-stuck-in-ui.42330/#post-203778 I had left behind template(s) and retired test VM(s) that I didn't need to migrate. Thanks.
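
    For reference, a minimal sketch of that cleanup, assuming the stale node is named pve02 (hypothetical) and the standard pmxcfs layout under /etc/pve/nodes/; run it on a remaining cluster member only after the node has been removed with pvecm delnode:

        # removes the leftover node directory, including any guest configs still inside it
        rm -rf /etc/pve/nodes/pve02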
  2. J

    [SOLVED] Can't remove nodes, cfs-locked 'file-replication_cfg' operation: no quorum!

    Thanks. I had to run these commands:
        root@pve01:~# pvecm expected 1
        root@pve01:~# pvecm delnode pve02
        root@pve01:~# pvecm expected 1
        root@pve01:~# pvecm delnode pve03
    Setting it to expected 1 wouldn't remove the old nodes, so I had to run delnode. The first time this reset the quorum and...
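
    To double-check that the remaining node is quorate again after a cleanup like this, the standard checks should be enough (shown here as a sketch, not output from the thread):

        pvecm status   # corosync membership and quorum information
        pvecm nodes    # list of nodes the cluster still knows about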
  3. J

    [SOLVED] Can't remove nodes, cfs-locked 'file-replication_cfg' operation: no quorum!

    I've got a PVE 6.2-10 cluster with 3 nodes. It's using local ZFS with storage replication between nodes. A number of things are changing in the environment, so I need to break the cluster apart and run everything on a single node. I moved everything to node 1. I then turned off nodes 2 and 3. At...
  4. J

    Backups are not using dirty-bitmap like I thought

    I guess performing the manual backup of my VMs must have fixed it because it worked correctly from the scheduled backup this time. Either that or deleting the unused secondary datastore. That's the only other thing I did.
  5. J

    Backups are not using dirty-bitmap like I thought

    I did find this information: https://lists.proxmox.com/pipermail/pve-user/2020-July/171884.html It states that if you try to back up a VM to multiple datastores, you'll essentially invalidate the QEMU dirty-bitmap, as PBS doesn't currently track this per datastore. I did try this (pbs-store-01...
  6. J

    Proxmox Backup Server (beta)

    Not sure if this feature was mentioned yet: it would be very beneficial to those with large datasets that need to sync offsite. Sometimes it's faster to transport the data instead of sending it over a WAN. Another use case to keep in mind/consider might be rotation of backup media. Similar to...
  7. J

    Backups are not using dirty-bitmap like I thought

    I have PBS running as a VM on node 1 of my 3-node Proxmox cluster. I set up a backup job within PVE to back up my running VMs every day at 1:30. The backup occurs, but inspecting the logs it appears to be reading all blocks and not using the dirty-bitmap. I'm at a loss as to why. I've tested...
  8. J

    inter-VM traffic on same node

    In my test case both Linux KVM VMs and the Proxmox host have tons of free CPU and RAM while running iperf3. And the bandwidth is so close to exactly 1 Gbit/sec that it clearly looks like a limit is being imposed somewhere. So I'm a bit at a loss as to why I'm not seeing over 1 Gbit/sec if this is...
  9. J

    inter-VM traffic on same node

    With VirtIO, CPU and RAM usage didn't spike at all in the host or the VMs when I ran iperf3. It's effortlessly hitting 1 Gbit/sec, so I figured it had to be a limit of the bridge. I read somewhere on these forums (years ago) that the interface was limited by the speed of the physical NIC it's attached to...
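
    A quick way to rule out a per-NIC cap, sketched here with a hypothetical VMID of 101: check the guest's network line on the host and confirm the model is virtio and no rate= limit is set on it.

        qm config 101 | grep ^net
        # expected shape: net0: virtio=<MAC>,bridge=vmbr0   (no rate=... entry)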
  10. J

    inter-VM traffic on same node

    I'm curious, does Proxmox have default mechanisms that route traffic between VMs and containers that reside on the same host without the traffic leaving the host to a physical switch? Basically a virtual switch. And if so, are there limitations on the speed? Doing some quick iperf3 tests seems...
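
    For what it's worth, a minimal sketch of the kind of test being described, assuming two guests on the same bridge with the hypothetical addresses 10.0.0.11 and 10.0.0.12:

        # on the first VM
        iperf3 -s
        # on the second VM: 30-second run with 4 parallel streams to rule out a single-stream limit
        iperf3 -c 10.0.0.11 -t 30 -P 4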
  11. J

    NFS Datastore: EINVAL: Invalid argument

    Found another approach to make this work. In FreeNAS go to Service > NFS and enable "NFSv3 ownership model for NFSv4". Set Maproot User=root and Maproot Group=wheel. I also set ACL permissions on the ZFS dataset in FreeNAS that I'm using for the backups. I just set...
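
    With those share settings in place, the PBS side would look roughly like this; the mount point /mnt/pbs-nfs and the datastore name nfs-store are hypothetical, and the export path matches the one quoted in another post in this thread:

        mount -t nfs4 freenas:/mnt/store/pve/pbs /mnt/pbs-nfs
        chown backup:backup /mnt/pbs-nfs   # PBS runs as the backup user (uid/gid 34)
        proxmox-backup-manager datastore create nfs-store /mnt/pbs-nfs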
  12. J

    NFS Datastore: EINVAL: Invalid argument

    So it was the permissions issue. I "made" it work by creating a user and group on FreeNAS called backup with ID 34 and assigning it permission to the ZFS dataset that I'm presenting over NFS. Now the permissions show backup:backup on the NFS share in PBS.
        root@pbs:~# ls -alh /mnt/
        total 13K...
  13. J

    NFS Datastore: EINVAL: Invalid argument

    I suspect you're probably using Linux as your NFS server. FreeNAS is a bit different when setting up NFS shares. Export file that's generated by the FreeNAS GUI:
        /mnt/store/pve/pbs -quiet -maproot="root":"wheel" -network X.X.X.X/24
    Output of mount on the PBS server:
        freenas:/mnt/store/pve/pbs on...
  14. J

    NFS Datastore: EINVAL: Invalid argument

    I'm seeing the same thing. Is this an NFS permission issue? The share is NFSv4. The same shares work fine when added to PVE for vzdump.
  15. J

    Backup Repo ZFS record size considerations?

    I'm not totally up to speed on how PBS functions. But I noticed that ZFS seems to be the preferred repository to use, and was wondering if a specific record size or other tweaks should be considered in this case to combat fragmentation or write amplification. From what I understand...
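
    As a sketch of the kind of tweak being asked about (a common community suggestion, not official PBS guidance; the pool name tank is hypothetical): create a dedicated dataset with a larger record size for the chunk store and point a datastore at it.

        zfs create -o recordsize=1M -o compression=lz4 tank/pbs-store
        proxmox-backup-manager datastore create zfs-store /tank/pbs-store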
  16. J

    External Authentication?

    I thought I read that PBS will have external authentication (AD/LDAP) like PVE has. Is that accurate? How about the 2FA functionality as well? I installed the beta and didn't see it there yet. Thanks.
  17. J

    Improvement: Reduce migration downtime to seconds with two step transfer

    Well, it looks like this was mentioned and commented on previously by the Proxmox staff here: https://forum.proxmox.com/threads/lxc-and-live-migration.35908/ So it would seem like the request here would be to just "force" a snap/replication of a running container if using ZFS (or other...
  18. J

    Improvement: Reduce migration downtime to seconds with two step transfer

    Okay, that was what I was asking...what exact scenarios lead to this behavior. So it's specific to containers. I don't use containers much, so that's probably why I don't see this. I support the proposed goal. I wonder how to get to it in a way that the Proxmox team would actually support. I know that for KVM...
  19. J

    Improvement: Reduce migration downtime to seconds with two step transfer

    So if I understand you correctly, the current behavior is that when performing a container or VM migration from local storage to another node's local storage, the container or VM goes down for an extended period while the data is being migrated if there isn't already a replication job that has...
  20. J

    Improvement: Reduce migration downtime to seconds with two step transfer

    I don't migrate containers or VMs all that often, so maybe I'm missing something. The proposal would only fit local storage using storage migration, correct? And it would only be possible with filesystems that support snapshots, correct? So ZFS and...? The few times I tested "online live...
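
    To make the two-step idea concrete, here is a rough manual sketch using ZFS send/receive; the dataset, container ID, and target node name are hypothetical, and this is not how pct migrate works today:

        # step 1: full copy while the container keeps running
        zfs snapshot rpool/data/subvol-101-disk-0@migrate1
        zfs send rpool/data/subvol-101-disk-0@migrate1 | ssh pve02 zfs recv rpool/data/subvol-101-disk-0
        # step 2: brief downtime, send only the changes since step 1, then start on the target
        pct stop 101
        zfs snapshot rpool/data/subvol-101-disk-0@migrate2
        zfs send -i @migrate1 rpool/data/subvol-101-disk-0@migrate2 | ssh pve02 zfs recv -F rpool/data/subvol-101-disk-0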
