Search results

  1. High latency in VMs, No latency on backend?

    3-Node Cluster with CEPH RBD backend. Note, this should not be due to Proxmox or CEPH updates, since neither changed when this issue started to occur. Header Proxmox Virtual Environment 8.0.3 Search Node 'VMHost2' Day (maximum) CPU usage 2.20% of 24 CPU(s) IO delay 0.13% Load average...
  2. Dell R520 Recurring Problem CPU locking

    I've been working with a Dell R520 with dual Xeon(R) CPU E5-2430 v2 @ 2.50GHz and I've had this recurring issue (for multiple years) after various kernel updates where the virtual machines (both windows and linux) lock up. Previously, I was somehow able to rectify this issue by disabling as...
  3. [SOLVED] Corosync redundancy - corosync.conf

    I have an existing 3 node cluster that was originally created in Proxmox 5.x. proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve) pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1) pve-kernel-helper: 7.2-2 pve-kernel-5.15: 7.2-1 pve-kernel-5.13: 7.1-9 pve-kernel-5.15.30-2-pve: 5.15.30-3...
  4. [SOLVED] Ceph Octopus to Pacific - PG is stuck inactive

    3 node cluster. I just upgraded to the newest Proxmox 7.2 and Ceph from Octopus 15.2.16 to Pacific 16.2.7, and after I upgraded and rebooted the second node, I am getting stuck PGs. At this time I have one last node to upgrade but [global] auth_client_required = none auth_cluster_required = none...
  5. [SOLVED] Cannot Down and Remove the last remaining CEPH OSD from previous pool

    Info: proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve) pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3) pve-kernel-helper: 7.1-14 pve-kernel-5.13: 7.1-9 pve-kernel-5.11: 7.0-10 pve-kernel-5.13.19-6-pve: 5.13.19-15 pve-kernel-5.13.19-3-pve: 5.13.19-7 pve-kernel-5.13.19-2-pve: 5.13.19-4...
  6. [SOLVED] VM Stuck Locked

    I shut down a VM (Windows 11) and I wanted to take a snapshot. I realized the VM was not yet shut down because it was taking a snapshot of the RAM, so I cancelled it while it was snapshotting. Unfortunately, this caused the VM to get stuck in a locked state, which I cannot get it out of...
  7. [SOLVED] Database Sync Error

    This is a pretty new 3-host cluster: version 7.1-1 pmg01 (master) pmg02 (node) pmg03 (node). I am seeing the following message in the syslogs for pmg01 and pmg03: <27>1 2021-12-30T08:49:16.624941-06:00 pmg01 pmgmirror 926 - - database sync 'pmg02' failed - DBI...
  8. CIFS/SMB Unable to activate storage - directory does not exist or is unreachable

    PVE Versions: proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve) pve-manager: 7.0-13 (running version: 7.0-13/7aa7e488) pve-kernel-helper: 7.1-2 pve-kernel-5.11: 7.0-8 pve-kernel-5.4: 6.4-5 pve-kernel-5.11.22-5-pve: 5.11.22-10 pve-kernel-5.11.22-4-pve: 5.11.22-9 pve-kernel-5.4.128-1-pve...
  9. [SOLVED] How to remove orphaned snapshots?

    I have a couple of snapshots that show up in their respective Ceph pools: vm-105-state-ERPNextReady and vm-101-state-ubuntu20. These snapshots do not exist in the VM snapshot list: vm-105: vm-101: Does anyone know how I can remove them? When I try to remove both snapshots I get the...
  10. Backup Logic

    I have a question that I haven't been able to find an answer to, either on the wiki or elsewhere on the web. When you create a backup job and select multiple VMs, do all the VMs back up at the same time, or are they staggered?
  11. CEPH Nautilus mon_host in [global] vs mon_host in [client]

    At this time, here is the way monitors are registered in ceph.conf (an excerpt only): [client] [mon.VMHost4] host = VMHost4 mon addr = 10.10.1.14:6789 [mon.VMHost3] host = VMHost3 mon addr = 10.10.1.13:6789 [mon.VMHost2] host = VMHost2...
  12. [SOLVED] Ceph Cannot add OSD

    PVE Versions: proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve) pve-manager: 5.2-5 (running version: 5.2-5/eb24855a) pve-kernel-4.15: 5.2-4 pve-kernel-4.13: 5.2-2 pve-kernel-4.15.18-1-pve: 4.15.18-15 pve-kernel-4.15.17-1-pve: 4.15.17-9 pve-kernel-4.15.15-1-pve: 4.15.15-6...
  13. CEPH Device Class Incorrect

    We have a number of Dell Hosts with PERC H700 raid controllers. We just installed some SSDs so we can use the ssd class ceph replicated rule, but the disks show a class of hdd instead of ssd. Anyone run into this problem?
  14. [SOLVED] Backup Error - No Such volume

    We just updated to the versions below and now our backups have stopped working. proxmox-ve: 5.1-43 (running kernel: 4.15.15-1-pve) pve-manager: 5.1-52 (running version: 5.1-52/ba597a64) pve-kernel-4.13: 5.1-44 pve-kernel-4.15: 5.1-3 pve-kernel-4.15.15-1-pve: 4.15.15-6 pve-kernel-4.13.16-2-pve...
  15. [SOLVED] LVM Error? Cannot boot

    I'm not sure when or why this error started appearing but it never prevented a boot, so I figured I could safely ignore it. Kernel version: proxmox-ve: 5.1-43 (running kernel: 4.13.16-2-pve) pve-manager: 5.1-52 (running version: 5.1-52/ba597a64) For the sake of search engines I'll write...
  16. CephFS MDS Failover

    Hello everyone, There is a fully functional ceph fs running on a 3 node cluster. It was created very simply, here is the conf related to mds: [mds] keyring = /var/lib/ceph/mds/54da8900-a9db-4a57-923c-a62dbec8c82a/keyring mds data =...
  17. [SOLVED] CephFS Mount Connection Timed Out

    This is on a 3 node cluster. The versions: Proxmox Kernel Version Linux 4.13.13-1-pve #1 SMP PVE 4.13.13-31 Ceph: 12.2.2 Successfully created a CephFS as far as I can tell. Cephx is disabled. Though I did create the CephFS while the cluster still had cephx enabled, and I disabled it shortly...
  18. Ceph - Network File Share Drive

    I am aware that CEPH is not ideal for storing files, but I do not want to run both Gluster and CEPH on the nodes. Right now we are running the file shares from FreeNAS, and I'd like to remove single points of failure as much as possible. Are there any strategies for creating redundant network...
  19. Mysteriously Solved: Proxmox-VE 5.1 No route to gateway?

    I cannot get network access past the default gateway. I cannot ping the default gateway. I can ping and make connections within the network cluster, so that works ok. Version: proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve) pve-manager: 5.1-36 (running version: 5.1-36/131401db)...
  20. Planning Ceph 3 Nodes - 6 OSD vs 3 Hardware Raid

    Hello all, I don't have any experience with Ceph and I wanted to get some people's opinions on this: 3 Node Cluster (2x R510, 1xR520) Storage to be used for light VM usage. Here are the two options I'm considering: OPTION 1 6 spinning hard disks for OSDs, 2 per node. (6 total OSDs) 3 SSD...
