Search results

  1. Ceph: Erasure coded pools planned?

    Ceph has provided erasure coded pools for several years now (they were introduced in 2013), and according to many sources the technology is quite stable. (Erasure coded pools provide much more effective storage utilization for the same number of drives that can fail in a pool, quite similar to RAID5...
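
    A minimal sketch of how an erasure coded pool can be created with the Ceph CLI; the profile name, k/m values and placement group count are illustrative, not taken from the thread:

        # define an erasure code profile: k data chunks plus m coding chunks per object
        ceph osd erasure-code-profile set ec-example k=2 m=1
        # create a pool that uses the profile (128 PGs here; size this for your own cluster)
        ceph osd pool create ecpool-example 128 128 erasure ec-example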
  2. recommended setup for small size network

    Really @mir? There is no other way than using "proper hardware"? @Lirio has two more identical nodes, but you think they belong in the trash and there is nothing that can be done with them? Honestly, that's your best advice? From your posts I reckon you want some kind of high availability. For...
  3. [SOLVED] ZFS vs Other FS system using NVME SSD as cache

    Mirroring the ZIL and using L2ARC from both SSDs is how it should be done. But I'm not sure how on Earth the ZIL would be even 10 GB when used in a pool of 2 mirrored hard drives. I would say a 5 GB ZIL (mirrored) would be more than enough; even that would cache 25+...
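
    As a sketch of the layout discussed above, a mirrored SLOG (ZIL) plus L2ARC could be attached from two SSDs roughly like this; the pool name and partition paths are hypothetical:

        # small mirrored partition pair as the SLOG (separate ZIL device)
        zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
        # remaining space of both SSDs as L2ARC (cache devices are striped, never mirrored)
        zpool add tank cache /dev/nvme0n1p2 /dev/nvme1n1p2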
  4. [SOLVED] ZFS vs Other FS system using NVME SSD as cache

    Your approach sounds a bit backwards to me. You talk about hardware (which can be changed) instead of your workload (which is a given). So why not tell us how many VMs you have, what the sizes of your virtual disks are, what kind of IO is expected on them, and how much data you can safely lose...
  5. HowTo: Upgrade Ceph Hammer to Jewel

    Upon further examination, it looks like it's running, albeit very slowly. It has taken more than 2 hours to complete on an SSD-based RAIDZ. Thanks for your help.
  6. HowTo: Upgrade Ceph Hammer to Jewel

    Doing the upgrade step by step. When trying to set the following permission, the command hangs on all nodes that have an OSD: chown ceph: -R /var/lib/ceph/ I can't get past this command, even though no OSD or MON processes are running on the node. I have since stopped the entire Ceph cluster, but this...
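
    A sketch of that upgrade step; stopping every Ceph daemon and checking for leftover processes before the recursive chown is my assumption about why it hangs, not something confirmed in the thread:

        # stop all Ceph daemons on this node (unit/init script depends on the installed release)
        systemctl stop ceph.target || /etc/init.d/ceph stop
        # make sure nothing Ceph-related is still running
        ps aux | grep '[c]eph'
        # then hand the state directories over to the ceph user
        chown -R ceph: /var/lib/ceph/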
  7. default net.netfilter.nf_conntrack_max is too low.

    Wanted to check the value myself, but sysctl came up empty: the variable does not exist. Upon further examination, in our recently installed Proxmox 4 cluster none of the servers have connection tracking enabled in the kernel or as a module (or it's not exposed in /proc or sysctl). There is a...
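
    Once the conntrack module is available, the value can be checked and raised like this; the 262144 figure is only an example, not a recommendation from the thread:

        # load the connection tracking module if it is not built into the kernel
        modprobe nf_conntrack
        # read the current limit
        sysctl net.netfilter.nf_conntrack_max
        # raise it for the running system
        sysctl -w net.netfilter.nf_conntrack_max=262144
        # persist the change across reboots
        echo 'net.netfilter.nf_conntrack_max = 262144' >> /etc/sysctl.conf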
  8. 5.0 based on kernel 4.10?

    Both are true, actually. The Proxmox "distribution" is based on Debian, but the kernel is compiled from Ubuntu sources AFAIK.
  9. Ceph as data storage unit

    Well, I hope the Proxmox team decides to include CephFS in 5.0; it would be a great addition to the Proxmox feature set.
  10. vzdump error: guest-fsfreeze-freeze failed

    Thanks for clearing that up. Unfortunately, I have qemu-guest-agent installed in many of the Debian 7 VMs, yet they still produce the error on backup, so I had to deselect the flag in the VM options.
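
    Deselecting the agent flag can also be done from the CLI; a sketch with a hypothetical VMID:

        # disable the QEMU guest agent option for VM 101 (hypothetical VMID)
        qm set 101 --agent 0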
  11. Proxmox 4.1 vs 4.4 swap usage

    1. First of all, you decreased total swap space from 12 GB to 8 GB. Why would you do that? There should always be plenty of swap... 2. Also, there is a sysctl variable that controls the aggressiveness of swapping; you can check it with the following command: # sysctl vm.swappiness vm.swappiness = 1 It...
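
    A sketch of checking and persisting that setting; the value 1 simply mirrors the output quoted above:

        # check the current swappiness
        sysctl vm.swappiness
        # lower it for the running system
        sysctl -w vm.swappiness=1
        # make the setting persistent across reboots
        echo 'vm.swappiness = 1' >> /etc/sysctl.conf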
  12. vzdump error: guest-fsfreeze-freeze failed

    When backing up some KVM guests from ZFS to NFS, vzdump gives the following error: As you can see, it takes exactly one hour until vzdump tries this freeze and fails many times; after that the backup completes in normal time. It only happens to a few VMs; most of them are not affected. Any...
  13. Anyone using 10gbe mesh/ring network for Ceph?

    Ok, thanks for clearing that up. So let's say I want to build a dual ring topology, because my test cluster consists of 5 nodes and connecting every node to every other node would be impractical (cabling and available PCIe slots), and also not much cheaper than switched. Also, with 5 nodes a...
  14. Anyone using 10gbe mesh/ring network for Ceph?

    So there is a howto on the wiki that details the setup of a 10 Gbit/s Ethernet network without using a network switch: http://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server If I understand correctly, you would need a two-port 10 GbE NIC (or two NICs) in each of your nodes, and you connect...
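
    A rough sketch of the idea for one node of a 3-node mesh, using plain ip commands; the interface names and addressing are assumptions, and the wiki article linked above is the authoritative guide:

        # node1: one direct link to node2, one to node3, over a dual-port 10 GbE NIC
        ip addr add 10.10.12.1/30 dev ens1f0    # point-to-point subnet shared with node2
        ip addr add 10.10.13.1/30 dev ens1f1    # point-to-point subnet shared with node3
        # node2 would use 10.10.12.2/30 on its port facing node1, node3 uses 10.10.13.2/30, and so on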
  15. KVM disk has disappeared from Ceph pool

    No, actually the problem surfaced right after one of our nodes unexpectedly rebooted Saturday night. During that reboot, storage.cfg somehow got modified, because the pools worked just fine the previous day and no one touched the configuration. (And I only re-added pool2, so even if I made an...
  16. KVM disk has disappeared from Ceph pool

    Ok, that solved it. For some unknown reason, the RBD pool definitions in storage.cfg were overwritten with "rbd":

        rbd: pool3
                monhost 192.168.0.6,192.168.0.7,192.168.0.5
                krbd 0
                username admin
                content images
                pool rbd
        rbd: pool2
                monhost...
  17. After unplanned reboot locked VMs don't start

    Okay, I can imagine that snapshot or migrate locks are unsafe to remove automatically. I can't imagine, though, that backup locks are needed after a reboot. If I'm right, it would be a great feature to remove stale backup locks when Proxmox boots.
  18. After unplanned reboot locked VMs don't start

    Okay, so I can't ask you to read and comprehend my post, and I can't ask you to stay away. Then let's reiterate, in some logical order, the facts which are still unclear to you: - I did experience an unplanned reboot. Many others do; most of them use ZFS. It's likely a kernel issue, it seems...
  19. After unplanned reboot locked VMs don't start

    @tom have you read my post? Where did I ask how to prevent a spontaneous reboot? (Also, there would be no point, as there can be many reasons, from kernel errors to power outages to hardware malfunction.) I was asking whether it is necessary for VM backup locks to persist across reboots. If not, it would...
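
    For what it's worth, a stale lock can also be cleared by hand after such a reboot; a sketch with a hypothetical VMID:

        # remove the stale lock, then start the VM again (VMID 101 is hypothetical)
        qm unlock 101
        qm start 101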
  20. KVM disk has disappeared from Ceph pool

    Okay, so I re-added pool2 in the storage UI (did not touch the Ceph pool itself), and checked the keyrings:

        root@proxmox:~# cat /etc/pve/priv/ceph/pool2.keyring
        [client.admin]
                key = PQDDRU9YX9u7HhAAEo3wLAFVCgVL+JsrEcs6HA==
        root@proxmox:~# cat /etc/pve/priv/ceph/pool3.keyring
        [client.admin]...
