Search results

  1. RAID0 ZFS over hardware RAID

    Hello, everyone. I have to set up a Proxmox VE 6.2 cluster that uses local disks as storage for VMs (KVM). The local storage is a hardware RAID array (8 HDDs in RAID0 on a Dell PERC H70 mini). We would like to use the live migration feature with this local storage, which is available on ZFS only...
  2. Activate Ceph Object Storage

    Hello, community. Is there a way to activate and use Ceph Object Storage in a Proxmox Ceph Cluster? Thanks
  3. Proxmox VE Backup Speed (vzdump) on NFS server

    Hello everyone, we are using virtual servers with large drives, so large that a backup takes more than 24 hours. I wonder if there is a way to increase the backup speed. The configuration is a Proxmox VE 4.4.13 cluster; VM storage is a 7-node (2 OSDs per node, with Intel SSD DC journals) Ceph...
  4. pveproxy problem after adding a new node

    Hello, everyone. We encountered a problem when we added a new node to our cluster: pveproxy refuses to work on all nodes. root@prox249:~# service pveproxy status ● pveproxy.service - PVE API Proxy Server Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled) Active: failed...
  5. VNC Problem, Server disconnected (code: 1006)

    Hello, after the last update I encounter a problem with the VNC console. Each time I move the mouse or touch a key (or maybe at random), it just disconnects. I rebooted all nodes twice, with no effect. The problem is that I need to access some VMs that require fsck due to partition corruption (another old...
  6. Ceph Optimization for HA. 7 nodes, 2 osd each

    Hello, everyone. After a lot of reading on the web and trying to tune Ceph, we were not able to make it HA. If one of the nodes is turned off, after some time we get partition corruption on the VMs. The idea is that if a node (2 OSDs) goes down, or if 2 OSDs on different nodes go down, the VM...
  7. After Ceph update from Hammer to Jewel, Ceph logs are not working

    We just finished the update from Ceph Hammer to Jewel according to the tutorial. We encountered an OSD/journal problem that was solved (I noticed that the tutorial was also updated, nice), and an SNMP problem (OSD graphs inside Cacti not working) that was also solved by adding snmp near ceph...
  8. OSD won't start after Ceph upgrade from Hammer to Jewel

    I just updated one of our Ceph nodes using this tutorial, from Hammer to the Jewel version. Unfortunately, after the upgrade the OSDs won't start. We use Proxmox 4.4.5. The OSDs have their journals mounted on SSD. The error is: root@ceph03:~# systemctl status ceph-osd@2.service ● ceph-osd@2.service - Ceph object...
  9. Intel Skylake video memory purge kills OSD process

    We had just updated to the latest version of Proxmox 4.4.5 when the problem started. Our configuration uses a Ceph cluster with 6 servers, 3 of them with Intel Skylake CPUs. On those Skylake-based servers we see this: Jan 4 09:32:20 ceph07 kernel: [139775.594411] Purging GPU memory, 0...
