Hello, everyone.
I have to set up a Proxmox VE 6.2 cluster that uses local disks as storage for VMs (KVM). The local storage is a hardware RAID array (8 HDDs in RAID0 on a Dell PERC H70 mini). We would like to use the live migration feature with this local storage, which is available on ZFS only...
Hello everyone,
We are using virtual servers with drives so large that a backup takes more than 24 hours. I wonder if there is a way to increase the backup speed.
The configuration is a Proxmox VE 4.4.13 cluster; VM storage is a 7-node Ceph cluster (2 OSDs per node, with journals on Intel SSD DC drives)...
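The post above is cut off, but for reference, vzdump's throughput is often limited by its defaults. The usual knobs live in /etc/vzdump.conf; the values below are illustrative assumptions, not settings taken from the thread:

```
# /etc/vzdump.conf (illustrative values)
bwlimit: 0     # KiB/s; 0 removes vzdump's bandwidth cap
ionice: 5      # I/O priority of the backup process (lower = more aggressive)
pigz: 4        # use pigz with 4 threads instead of single-threaded gzip (pigz must be installed)
```

Whether these help depends on where the bottleneck actually is (Ceph read speed, compression CPU, or the backup target's write speed), so it is worth measuring each stage before tuning.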
Hello, everyone.
We ran into a problem when adding a new node to our cluster: pveproxy refuses to work on all nodes.
root@prox249:~# service pveproxy status
● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled)
Active: failed...
Hello,
After the last update, I have a problem with the VNC console. Each time I move the mouse or touch a key, or sometimes seemingly at random, it just disconnects. I rebooted all nodes twice, with no effect.
The problem is that I need to access some VMs that require fsck due to partition corruption (another old...
Hello, everyone.
After a lot of reading on the web and trying to tune Ceph, we were not able to make it highly available. If one of the nodes is turned off, after some time we get partition corruption on the VMs.
The idea is that if a node (2 OSDs) goes down, or if 2 OSDs on different nodes go down, the VM...
We just finished the update from Ceph Hammer to Jewel according to the tutorial. We ran into an OSD/journal problem that was solved (I noticed the tutorial was also updated, nice), and an SNMP problem (OSD graphs inside Cacti not working) that was also solved by adding snmp near ceph...
I just updated one of our Ceph nodes using this tutorial, from Hammer to Jewel. Unfortunately, after the upgrade the OSDs won't start. We use Proxmox 4.4.5. The OSDs have their journal mounted on an SSD. The error is:
root@ceph03:~# systemctl status ceph-osd@2.service
● ceph-osd@2.service - Ceph object...
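The thread is truncated here, but a common cause of OSDs failing to start right after the Hammer → Jewel upgrade is ownership: Jewel runs ceph-osd as the `ceph` user, while Hammer-era data directories and journal partitions are still owned by root. A minimal sketch of the fix follows; the OSD id (2, matching the unit above) and the journal device are assumptions, and the commands are printed rather than executed so the sketch is safe to run anywhere:

```shell
# Assumed values: OSD id 2 and a journal partition on /dev/sdb1 (example device).
OSD_ID=2
OSD_DIR="/var/lib/ceph/osd/ceph-${OSD_ID}"
JOURNAL_DEV="/dev/sdb1"

# Jewel's ceph-osd drops privileges to the "ceph" user, so it must own its data and journal.
echo "chown -R ceph:ceph ${OSD_DIR}"
echo "chown ceph:ceph ${JOURNAL_DEV}"

# Once ownership is fixed, the failed unit can be started again.
echo "systemctl start ceph-osd@${OSD_ID}.service"
```

Alternatively, Jewel can be told to keep running as root via the `setuser match path` option in ceph.conf, but fixing ownership is the usual route.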
We had just updated to the latest version of Proxmox 4.4.5 when the problem started.
Our configuration is a Ceph cluster of 6 servers, 3 of which have Intel Skylake CPUs. On those Skylake-based servers we see this:
Jan 4 09:32:20 ceph07 kernel: [139775.594411] Purging GPU memory, 0...