3-node cluster with a Ceph RBD backend. Note: this should not be caused by Proxmox or Ceph updates, since nothing was updated around the time this issue started to occur.
[Screenshot: Proxmox VE 8.0.3 summary for node 'VMHost2' (day, maximum) — CPU usage 2.20% of 24 CPU(s), IO delay 0.13%, load average...]
I've been working with a Dell R520 with dual Xeon(R) CPU E5-2430 v2 @ 2.50GHz, and I've had a recurring issue (for multiple years) after various kernel updates where the virtual machines (both Windows and Linux) lock up. Previously, I was somehow able to rectify this issue by disabling as...
I have an existing 3 node cluster that was originally created in Proxmox 5.x.
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-helper: 7.2-2
pve-kernel-5.15: 7.2-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.30-2-pve: 5.15.30-3...
3 node cluster.
I just upgraded to the newest Proxmox 7.2 and Ceph from Octopus 15.2.16 to Pacific 16.2.7, and after I upgraded and rebooted the second node, I am getting stuck PGs.
At this time I have one last node to upgrade but
[global]
auth_client_required = none
auth_cluster_required = none...
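For a situation like this, a few read-only Ceph CLI commands can help narrow down why PGs are stuck after a partial upgrade (these are standard commands and change nothing on the cluster):

```
ceph -s              # overall cluster health and PG summary
ceph versions        # confirm which daemons are still running Octopus
ceph pg dump_stuck   # list stuck PGs and the states they are stuck in
ceph osd tree        # verify all OSDs on the rebooted node came back up
```

If `ceph versions` shows a mix of Octopus and Pacific daemons, that alone is often the explanation until the remaining node is upgraded.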
I shut down a VM (Windows 11) and wanted to take a snapshot. I then realized the VM had not actually shut down yet because it was taking a snapshot of the RAM, so I cancelled it mid-snapshot.
Unfortunately, this left the VM stuck in a locked state, which I cannot get it out of...
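A sequence that often clears this kind of stuck lock, sketched with a hypothetical VMID of 101 (substitute the real one), uses standard `qm` commands:

```
qm status 101                         # confirm the VM's current state
qm unlock 101                         # clear the lock left by the cancelled snapshot
qm listsnapshot 101                   # check whether a partial snapshot remains
qm delsnapshot 101 <snapname> --force # remove the leftover snapshot if one is listed
```

The `--force` flag tells Proxmox to drop the snapshot from the config even if the storage-side cleanup partially fails; use it only after the unlock succeeds.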
This is a pretty new 3-host cluster:
version 7.1-1
pmg01 (master)
pmg02 (node)
pmg03 (node).
I am seeing the following message in the syslogs for pmg01 and pmg03:
<27>1 2021-12-30T08:49:16.624941-06:00 pmg01 pmgmirror 926 - - database sync 'pmg02' failed - DBI...
I have a couple of snapshots that show up in their respective Ceph pools:
vm-105-state-ERPNextReady
and
vm-101-state-ubuntu20
These snapshots do not exist in the vm snapshot list:
vm-105:
vm-101:
Does anyone know how I can remove them?
When I try to remove both snapshots I get the...
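If these turn out to be leftover standalone RBD state images (the `vm-<id>-state-<name>` objects Proxmox creates to hold RAM state) rather than snapshots, they can be inspected and removed with the standard `rbd` CLI. The pool name `rbd` below is an assumption; substitute the actual pool:

```
rbd -p rbd ls | grep state                    # list leftover vm-*-state-* images
rbd -p rbd snap ls vm-105-state-ERPNextReady  # check for snapshots blocking removal
rbd -p rbd rm vm-105-state-ERPNextReady       # remove the orphaned state image
```

An image cannot be removed while it still has snapshots, which is why the `snap ls` check comes first.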
I have a question that I haven't been able to find an answer to, either on the wiki or elsewhere on the web.
When you create a backup job and select multiple VMs, do all the VMs back up at the same time, or are they staggered?
At this time, here is the way monitors are registered in ceph.conf (an excerpt only):
[client]
[mon.VMHost4]
host = VMHost4
mon addr = 10.10.1.14:6789
[mon.VMHost3]
host = VMHost3
mon addr = 10.10.1.13:6789
[mon.VMHost2]
host = VMHost2...
We have a number of Dell Hosts with PERC H700 raid controllers.
We just installed some SSDs so we can use the SSD device-class Ceph replicated rule, but the disks show a device class of hdd instead of ssd.
Anyone run into this problem?
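Assuming the RAID controller is masking the drives' rotational flag (a common effect of the PERC H700 presenting single-disk virtual disks), the device class can be overridden per OSD with standard Ceph commands; `osd.0` here is a placeholder:

```
ceph osd crush rm-device-class osd.0       # clear the auto-detected class first
ceph osd crush set-device-class ssd osd.0  # assign the correct class
ceph osd crush tree --show-shadow          # verify the ssd shadow tree now includes it
```

The existing class must be removed before a new one can be set; Ceph refuses to overwrite a device class in place.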
We just updated to the versions below and now our backups have stopped working.
proxmox-ve: 5.1-43 (running kernel: 4.15.15-1-pve)
pve-manager: 5.1-52 (running version: 5.1-52/ba597a64)
pve-kernel-4.13: 5.1-44
pve-kernel-4.15: 5.1-3
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.16-2-pve...
I'm not sure when or why this error started appearing but it never prevented a boot, so I figured I could safely ignore it.
Kernel version:
proxmox-ve: 5.1-43 (running kernel: 4.13.16-2-pve)
pve-manager: 5.1-52 (running version: 5.1-52/ba597a64)
For the sake of search engines I'll write...
Hello everyone,
There is a fully functional ceph fs running on a 3 node cluster.
It was created very simply, here is the conf related to mds:
[mds]
keyring = /var/lib/ceph/mds/54da8900-a9db-4a57-923c-a62dbec8c82a/keyring
mds data =...
This is on a 3 node cluster.
The versions:
Proxmox Kernel Version
Linux 4.13.13-1-pve #1 SMP PVE 4.13.13-31
Ceph: 12.2.2
Successfully created a CephFS as far as I can tell.
Cephx is disabled. Though I did create the CephFS while the cluster still had cephx enabled, and I disabled it shortly...
I am aware that Ceph is not ideal for storing files, but I do not want to run both Gluster and Ceph on the same nodes.
Right now we are running the file shares from FreeNAS, and I'd like to remove single points of failure as much as possible.
Are there any strategies for creating redundant network...
I cannot get network access past the default gateway; I cannot even ping the default gateway.
I can ping and make connections within the cluster network, so that part works OK.
Version:
proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)...
Hello all,
I don't have any experience with Ceph and I wanted to get some people's opinions on this:
3 Node Cluster (2x R510, 1xR520)
Storage to be used for light VM usage.
Here are the two options I'm considering:
OPTION 1
6 spinning hard disks for OSDs, 2 per node. (6 total OSDs)
3 SSD...