Search results

  1. Stuck on "Welcome to GRUB", but only on reboot, not power-up

    Exactly the same problem here on an Intel NUC 12. I have tested extensively with different BIOS settings, but that makes no difference. It is also not a Proxmox-only problem, because a fresh installation of Debian 12.2 shows the same behaviour, and even trixie/sid is affected. I don't use HW...
  2. Ceph help - will pay

    /dev/sdb2 was just an example; in your case it’s /dev/cciss/c0d1p1. You mean that when you stop and remove the OSD using the GUI, it times out? The OSD is mounted (and visible to the server), so I think there is no hardware issue, probably just a faulty filesystem. When you don't have much know-how about this kind of...
  3. Ceph help - will pay

    Are you sure the disk is still available in the server? Check with “lsblk”. If you see the disk with “lsblk” (in this example as /dev/sdb) and the disk is not mounted (not showing up with “df -h”), try to manually mount the disk: # mount /dev/sdb2 /var/lib/ceph/osd/ceph-0 # start ceph-osd id=0...
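
    A minimal sketch of that check-and-remount sequence, assuming the data partition is /dev/sdb2 and the OSD is osd.0 (both are just example names; substitute your own device, e.g. /dev/cciss/c0d1p1, and OSD id). On a systemd-based Ceph install the old "start ceph-osd id=0" becomes a systemctl call:

      lsblk                                      # is the disk still visible to the kernel?
      df -h | grep ceph-0                        # is the OSD data dir already mounted?
      mount /dev/sdb2 /var/lib/ceph/osd/ceph-0   # mount the OSD data partition manually
      systemctl start ceph-osd@0                 # try to start the OSD daemon again
      ceph osd tree                              # check whether osd.0 comes back up/in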
  4. S.M.A.R.T Current Pending?

    Yes, it seems to be a bad disk to me when this problem occurs that fast. I only work with WD RAID Edition and WD Gold for HDDs, and in the few times I had issues like this (with new drives or drives under warranty), it was never a problem to replace the drive (when using CC even with advance...
  5. S.M.A.R.T Current Pending?

    For SSDs, some bad/dead sectors, especially after some time of use, are “normal”. The pending sectors will (after some time) be reallocated on the disk and then disappear (you can find them under reallocated sectors after they are reallocated). From my experience: for normal HDDs (like your...
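
    If you want to watch whether pending sectors really get reallocated over time, a quick check with smartctl (from smartmontools; /dev/sda is just an example device) could look like this:

      smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

    A Current_Pending_Sector count that drops back to 0 while Reallocated_Sector_Ct grows matches the behaviour described above.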
  6. Loses node in cluster

    Probably a network issue. Do you use bonding? And if so, what mode? Maybe after the updelay another uplink goes active that can’t talk via multicast to the servers on the other switch (caused by a wrong config on the switches), which causes issues. Try to omping the other nodes just after reboot (when it’s...
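
    A hedged example of that omping test (node1/node2/node3 are placeholders for your cluster hostnames or ring addresses); run the same command on all nodes at roughly the same time:

      omping -c 10000 -i 0.001 -F -q node1 node2 node3   # short burst test of multicast between the nodes
      omping -c 600 -i 1 -q node1 node2 node3            # ~10 minute test, catches IGMP snooping timeouts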
  7. Proxmox VE 5.1 released!

    Good job guys! I did the upgrade on a 3-node cluster from Ceph Jewel (10.2.10) to Luminous (12.2.1) and from PVE 4.4 to 5.1, and also changed from Ceph Filestore to Bluestore. No serious problems found so far.
  8. Software Watchdog Initial Countdown 600s?

    Upgraded to PVE 5.1, same problem.
    proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
    pve-manager: 5.1-36 (running version: 5.1-36/131401db)
    pve-kernel-4.13.4-1-pve: 4.13.4-25
    libpve-http-server-perl: 2.0-6
    lvm2: 2.02.168-pve6
    corosync: 2.4.2-pve3
    libqb0: 1.0.1-1
    pve-cluster: 5.0-15...
  9. Software Watchdog Initial Countdown 600s?

    Today I installed some minor updates on a small PVE 4.4 cluster; since the updates, I've noticed the Software Watchdog Initial Countdown has changed from 120 seconds to 600 seconds. Isn't this value a bit high? Is it possible for us to change it back to 120 seconds? Timer Use...
  10. Software watchdog: Disable NMI Watchdog?

    If I want to use the software watchdog, is it a good idea to disable the NMI watchdog, like you should when using a hardware watchdog? Why I'm asking: I use the software watchdog (everything default) and sometimes when I do a reboot of the node, it comes back online with the...
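
    For reference, disabling the NMI watchdog itself is simple; this is a general sketch (not a statement that it is required for the software watchdog), either at runtime or via a kernel boot parameter on Debian/PVE:

      cat /proc/sys/kernel/nmi_watchdog   # 1 = NMI watchdog enabled, 0 = disabled
      sysctl -w kernel.nmi_watchdog=0     # disable at runtime (lost after reboot)
      # permanent: add nmi_watchdog=0 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
      update-grub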
  11. Different physical CPUs in one cluster: VM CPU speed unchanged after live migration

    We had 5 identical physical nodes in our PVE 4.4 cluster. Now we needed to expand our capacity, so we bought 2 more servers, but these new servers aren't the same as the first ones. All 7 servers have dual Intel Xeon CPUs, but different models. The clock speed of the first servers is...
  12. Ceph - Monitor clock skew

    Also see https://forum.proxmox.com/threads/pve-4-1-systemd-timesyncd-and-ceph-clock-skew.27043/
  13. Hotplug memory limits total memory to 44GB

    Also works without any problem. The VM booted with 1 GB and was changed to 91 GB while running. In the VM I now see 91 GB of memory available. :)
  14. Hotplug memory limits total memory to 44GB

    Okay, well, it works for me. I have tested with 91 GB max. and it boots without any problem now. When I move the VM to a host without this change applied, it will not boot. So this seems to be the solution. :)
  15. Hotplug memory limits total memory to 44GB

    Works! Thanks. Will this be the default in PVE in the future, or do I need to change it on all my nodes manually?
  16. Hotplug memory limits total memory to 44GB

    Situation: the physical host has 2 CPU sockets, each with a 6-core CPU and HT enabled, so 2 x 6 x 2 = 24 vCPUs. Each CPU has 48 GB of memory installed in a dual-channel configuration: [ 16 GB ] [ 8 GB ] [ 16 GB ] [ 8 GB ]. So total system memory is 96 GB. The host is running PVE 4.3-9. When I have a VM...
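
    For context, enabling memory hotplug on a PVE VM generally looks like the sketch below (VM id 100 is just an example; the guest also needs a kernel/udev setup that onlines hotplugged memory). This shows only the standard configuration, not the host-side change discussed later in the thread:

      qm set 100 --numa 1                                # memory hotplug requires NUMA to be enabled
      qm set 100 --hotplug disk,network,usb,memory,cpu   # enable memory (and other) hotplug
      qm set 100 --memory 93184                          # target memory in MiB (the 91 GB from this thread)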
  17. Really weird proxmox issue

    No, this isn't possible. As Udo wrote: a node can only be a member of one cluster. But of course you can make clusters bigger than just 3 nodes (that is only a minimum). Always keep an odd number of nodes in a cluster (3, 5, 7, 9, 11, etc.).
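
    To check how many nodes and votes a cluster currently has (and whether it is quorate), a quick look is (output fields vary a bit between PVE versions):

      pvecm status   # shows expected votes and quorum state
      pvecm nodes    # lists the cluster members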
