We're experimenting with various Ceph features on the new PVE 6.2 with a view to deployment later in the year.
One of the Ceph features that we're very interested in is pool replication for disaster recovery purposes (rbd mirror). This seems to work fine with "images" (like PVE VM images within...
I would like to get CephFS working (I have already been using RBD for quite some time with great success).
I've followed the guidance in the wiki to set up CephFS, and things seem fine.
I then tried to mount it on an Ubuntu 18.04 client VM, like:
root@server2:~# mount -t ceph...
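The command above is cut off; a minimal mount sketch for an Ubuntu client (the monitor address, mount point, and secret-file path are placeholders, not details from the thread):

```shell
# Kernel CephFS client tools on the Ubuntu 18.04 guest
apt-get install -y ceph-common

# Mount CephFS; 10.0.0.1:6789 is a placeholder monitor address and
# /etc/ceph/admin.secret must contain the client's key
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```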
I use a 3-node cluster set up with Ceph. Over the weekend, node 3's system disk (SSD, no RAID) failed. I replaced the disk, removed the node from the cluster, re-added it per the instructions, and all is well - the cluster is complete again.
Now I'm having trouble with ceph. I removed all the OSDs and...
We're using Proxmox Virtual Environment version 5.4-3 and have upgraded our cluster to Mellanox Connectx-3 40G NICs.
We would like to use the Mellanox drivers if possible, but their download page only provides drivers for Debian up to version 9.5 and I believe PVE 5.4-3 runs on top of Debian...
I have an issue where I have 1 pg in my ceph cluster marked as:
pg 2.3d is active+clean+inconsistent, acting [1,5,3]
I have tried doing ceph pg repair 2.3d but no success. I am following this guide to fix it:
https://ceph.com/geen-categorie/ceph-manually-repair-object/
I have identified the...
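The usual workflow from the linked guide, as a hedged sketch (the OSD id 1 comes from the acting set quoted above; the object names would come from the first command's output):

```shell
# List which object(s) in pg 2.3d the scrub flagged as inconsistent
rados list-inconsistent-obj 2.3d --format=json-pretty

# Trigger a repair scrub; check "ceph -s" afterwards
ceph pg repair 2.3d

# Watch the acting primary's log for the repair outcome (OSD 1 here)
tail -f /var/log/ceph/ceph-osd.1.log
```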
I have a 3-way Proxmox VE cluster. Due to lack of PCIe slots on the servers' motherboards, I need to replace them. I will be changing the motherboards, but retaining all existing storage, CPU, RAM, network cards etc.
Is it better for me to completely reinstall Proxmox VE and re-add it to the...
I have a VM running on one PVE server which I backed up while it was running (snapshot). I've restored to another PVE server, but when I start it, it won't boot.
It's a standard KVM VM running Ubuntu Mate 18.04 and has two virtual disks, / and one for /home. All my other VMs restore and run...
I've got quite a fresh install of Proxmox with Ceph and things are working well. However, on the ceph Health page it reports "mon sb2 is low on available space". But when I check the OSDs, no disk is more than 6% used. Any idea what's going on?
Note that I have Ceph set up with two crush...
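For what it's worth, that warning usually refers to free space on the filesystem holding the monitor's data directory (the default warning threshold is around 30% free), not to OSD utilisation. A quick check, with the mon id taken from the warning:

```shell
# The mon store lives on the node's system disk, not on the OSDs
df -h /var/lib/ceph/mon

# Compacting the mon store can reclaim space if it has grown large
ceph tell mon.sb2 compact
```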
I have an issue where I get the following error when trying to create a new HA resource in my cluster (Datacenter -> HA -> Resources -> Add).
parse error - unexpected '}' (500)
I am sure this is my own fault - I modified corosync.conf to add a separate corosync network as backup as per the...
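The `(500)` parse error usually means corosync.conf itself no longer parses. A well-formed two-ring totem stanza looks roughly like this (the addresses are placeholders; every `{` must have a matching `}`):

```
totem {
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.1.0.0
  }
}
```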
I was reading the thread on recent Ceph benchmarks stickied in this forum, and saw some comments from PigLover about how the author of the benchmarks "makes the claim about being able to run a 3-node cluster and still access the data with a node OOS. While it is "true", it is also dangerous...
I've deployed Proxmox VE 5.1 to a bare metal cloud instance that required me to deploy a pre-configured qcow2 image of Proxmox with cloud-init installed. It worked fine, things are up and running.
However, because I wanted to keep the image size small, I made the whole disk only 8 GB, which is...
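If the instance's block device can be enlarged, one hedged route is to grow the image and then the partition on it (the path, device names, and the ext4 assumption are all placeholders):

```shell
# From outside the running system (e.g. a rescue image):
# grow the qcow2 image by 24 GiB
qemu-img resize /path/to/proxmox-root.qcow2 +24G

# After booting, grow partition 1 and the filesystem on it
growpart /dev/sda 1
resize2fs /dev/sda1
```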
We're using backup / restore scripts to move our VMs to a hot spare Proxmox box each day in case our main cluster goes down - in that case, we would simply start the restored VMs on the backup server and carry on.
Here's a typical restore script we use:
/sbin/lvremove -f...
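The script is cut off above; a minimal sketch of the usual pattern (the VMID, archive path, and storage names are assumptions):

```shell
# Drop yesterday's restored copy of the disk on the spare box...
/sbin/lvremove -f /dev/pve/vm-100-disk-1

# ...then restore today's vzdump archive over VM 100
qmrestore /mnt/backup/vzdump-qemu-100.vma.lzo 100 --storage local-lvm --force
```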
I have a dedicated server on which I have Proxmox running. I have installed pfSense as a VM, which is working fine. I have assigned pfSense a public IP on its WAN interface, which is separate from the Proxmox host.
For LAN, on the proxmox host I created a virtual bridge on eth0.1 and assigned it...
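A sketch of the relevant /etc/network/interfaces stanza on the host (the bridge name and VLAN id follow the description above, but are unverified):

```
auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth0.1
    bridge_stp off
    bridge_fd 0
```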
I'm testing the new storage replication framework, specifically trying to make it work with HA.
I've created a 3-node cluster and have a VM on server2. I set up replication. I let the first sync complete, then watch replication continue minute by minute. It's working nicely. I can migrate the...
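The replication jobs can also be inspected and created from the CLI; a hedged example (the VMID, job id, and target node are placeholders):

```shell
# Show replication state for all jobs on this node
pvesr status

# Replicate VM 100 to server3 every 15 minutes (job id 100-0 assumed free)
pvesr create-local-job 100-0 server3 --schedule "*/15"
```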
We had one of our Proxmox VE servers crash today, and it wouldn't boot back up.
I've attached screenshots. Hard disk related? We have 2 hard disks in RAID1, so I feel it's unlikely that both would have failed.
The cluster did its job and the VMs were moved to remaining servers automatically...
We have a 3-server PVE cluster using Ceph running on SSDs.
Now we would like to add a second, separate Ceph pool to the same cluster using slow HDDs (only for CCTV DVR duties).
What is the recommended procedure for configuring that these days? I've seen these approaches...
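On Luminous and later, device classes make this straightforward without hand-editing the CRUSH map; a sketch under that assumption (the pool name and PG counts are placeholders):

```shell
# CRUSH rule that only selects OSDs with device class "hdd"
ceph osd crush rule create-replicated replicated_hdd default host hdd

# New pool bound to that rule, so it never lands on the SSDs
ceph osd pool create cctv 128 128 replicated replicated_hdd
ceph osd pool application enable cctv rbd
```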
I followed the tutorial to move Ceph from Hammer to Jewel here:
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel
All the steps went OK aside from starting the monitor daemon:
root@smiles2:~# systemctl start ceph-mon@ceph-mon.1.1500178214.095217502.service
root@smiles2:~# systemctl status...
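The instance name after the `@` should normally be just the monitor's short id, not the full data-directory name; a hedged check (the id "1" is guessed from the unit name above and may differ):

```shell
# The data directories here are named <cluster>-<mon-id>, e.g. ceph-1
ls /var/lib/ceph/mon/

# Start and inspect the unit using only the id
systemctl start ceph-mon@1
systemctl status ceph-mon@1
```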
Very interested in the new Storage Replication functionality in PVE 5.00, and I had a few questions regarding the documentation page on the wiki:
https://pve.proxmox.com/wiki/Storage_Replication
It seems like only 2 nodes would be required for this to work well, but in the documentation it...
I installed a DVR virtual machine to record some CCTV cameras on my PVE 4.4-13/7ea56165 cluster, using Ceph (Hammer) as the storage backend. I created a 1.5 TB disk on Ceph, allocated to the VM.
Over time, the disk began to fill with recordings (over 1 TB), and I found myself lower on storage...
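If deleted recordings aren't returning space to the pool, enabling discard on the virtual disk and trimming inside the guest is one hedged option (the VMID and storage/disk names are placeholders, and the guest needs a bus that passes discard through, e.g. virtio-scsi):

```shell
# On the PVE host: re-attach the disk with discard enabled
qm set 100 --scsi0 rbd-storage:vm-100-disk-1,discard=on

# Inside the Linux guest: release unused blocks back to Ceph
fstrim -av
```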
I'm trying to increase the size of the C drive of a Windows 10 guest VM, which uses VirtIO drivers.
I shut down the Windows guest and then resized the disk within Proxmox web UI, which went fine. After starting the guest back up, the disk still shows as the old size though, so I can't extend...
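For reference, the resize can also be done from the CLI, and Windows then typically needs a rescan before the new space appears (the VMID and disk key are assumptions):

```shell
# Grow the virtio disk of VM 100 by 20 GiB; the guest sees the change
# after a reboot or a rescan in Disk Management
qm resize 100 virtio0 +20G
```

In the Windows guest, open Disk Management, choose Action → Rescan Disks, then use Extend Volume on C:.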