Search results

  1. Restrict pvemanager default_views per-user

    With rare exception we give some third party partners a login to our PVE cluster. Of course we restrict their permissions to which resource pools they can see and give them only the bare minimum permissions on their VMs. However, when these restricted users login to the PVE GUI, the default...
  2. nodes eventually lose access to CephFS and have to unmount

    Update: rebooted all nodes Friday, and by Monday the CephFS mounts had been lost again.
  3. nodes eventually lose access to CephFS and have to unmount

    15-node cluster, just upgraded all to 6.4-13. Did not have this problem with previous 6.x PVE. I can always "fix it" by running umount /mnt/pve/CephFS, but within an hour or so at least 4 or 5 of the nodes will lose it again. On the PVE GUI the error is "unable to activate storage 'CephFS' -...
  4. Lost quorum and all nodes were reset during a disjoin!

    Hello, I have a 15-node cluster with about 50 VM/CTs running, all configured for HA and using the software watchdog. It's hyperconverged, using Ceph RBD for the VM storage. All nodes run PVE 6.4-13 and Ceph 15.2.9. The hostnames in the cluster are "node36" through "node50." Yesterday, I removed...
  5. pve 6.3 and 7.0 mixed environment?

    I have 15 hosts running 6.3. I want to upgrade them to 7.0 and at the same time switch from a single ext4 root FS to a mirrored ZFS setup, so I want to do a clean install on all the hosts, one at a time, migrating VMs around so they are not affected. Ceph version is 15.2.9 on all nodes. If I...
  6. fdisk and sfdisk missing from pve 7.0 installation iso?

    sfdisk and fdisk were both included in 6.x; neither is included in 7.x. I am looking at it right now: proxmox-ve_7.0-1.iso from July.
  7. fdisk and sfdisk missing from pve 7.0 installation iso?

    I'm just pointing this out for others, since I use the 6.x ISOs as well, but it is nice sometimes to be able to boot the ISO into debug mode and have all the usual disk utilities there. On the new ISO, fdisk and sfdisk are missing.... just saying! Was it necessary to remove them?
  8. ZFS rebuild questions

    I'm experimenting with installing Proxmox on a ZFS RAID1 setup, and have successfully tested failing and rebuilding drives. The host remains operational and responsive during the loss of the drive. However, I am noticing the rebuilt drive is missing the "BIOS Boot" and "EFI" partitions. It only...
  9. PVE and Ceph on Mellanox infiniband parts, what is the current state of support?

    Your application must provide native, robust, and ongoing support for it. Ceph is not one of those applications; RDMA support was abandoned ages ago.
  10. PVE and Ceph on Mellanox infiniband parts, what is the current state of support?

    56Gb IPoIB performance was atrocious, less than 10GbE. I ignored the advice from everyone and had to discover this for myself to believe it. This thread remains for others to do the same: take it or leave it!
  11. HA failover issues reconfigures VM after upgrade to 6.3

    At the time of the fence, some were on 6.3-3 (or maybe even 6.2) and some were on 6.3-6. So it's understandable. I used to make a habit of slowly staggering host updates, moving VMs around to reboot only a subset of hosts, etc... not any more lol. I will do all of them the same day. I noticed...
  12. PVE system mail list

    I am aware of that as an external solution; I just figured that since all users have an email address field, there might be something provided by PVE.
  13. HA failover issues reconfigures VM after upgrade to 6.3

    OK, this is due to the version of the machine chipset defaulting to "latest" even if that is not currently the active version. The issue has nothing to do with failover; you can also trigger it just by shutting down and restarting the VM on the same host. Proxmox will automatically and...
  14. PVE system mail list

    By default, only the email address assigned to root@pam receives system alerts about host fencing. In my cluster there are 6 other users who are administrators (although they use the pve realm), and their email addresses are configured, but they do not receive email notifications about host...
  15. HA failover issues reconfigures VM after upgrade to 6.3

    A host was fenced today and all of the VMs (all Windows 2016 or 2019) were successfully powered up on other hosts. The only trouble was that none of them were reachable on the network, and they all had services that failed to start upon boot. After investigating, we found that all of the network...
  16. Scheduled VM-snapshots from GUI

    There is a lot of stuff to consider. Anyone who hasn't done it before is going to oversimplify it; they can't help it. If you want to know more about it, you should come up with a solution that works for you. What works for you is unlikely to work for everyone, and that's why it isn't...
  17. mysterious reboots

    I have had the same node mysteriously reboot twice over the weekend: once on a Saturday morning and once on a Sunday morning. It was fenced properly and VMs were brought up elsewhere, and I have roughly the exact time the host rebooted (or reset). How would I go about tracking down the root...
  18. After 6.3 upgrade, VM boot device missing if member of an HA group

    This issue is not confined to a single VM. It is affecting all the machines in the cluster. Here is a config from one affected VM: root@virtual38:~# qm config 103 agent: enabled=1,fstrim_cloned_disks=1 balloon: 3072 boot: order=ide2;scsi0 cores: 8 cpu: host ide2: none,media=cdrom machine: q35...
  19. After 6.3 upgrade, VM boot device missing if member of an HA group

    If a virtual machine is not a member of an HA group, you can add a disk, ISO, or network boot device and expect it to boot from them in the order given on the Options screen. If you power on the machine and interrupt POST with the escape key, you can make a one-time selection to manually boot...
  20. Performance decrease after Octopus upgrade

    Very good: back above 1 GB/s on write. Removing the SSD emulation gave a small boost as well.
