Ceph has provided erasure coded pools for several years now (they were introduced in 2013), and according to many sources the technology is quite stable. (Erasure coded pools provide much more efficient storage utilization for the same number of drives that can fail in a pool, quite similar to RAID5...
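For context, creating an erasure coded pool only takes a couple of commands; the profile and pool names below are just examples, and the k/m values have to fit the number of OSD hosts you actually have:
# define an erasure code profile (4 data + 2 coding chunks) and create a pool with it
ceph osd erasure-code-profile set ec-profile-example k=4 m=2
ceph osd pool create ecpool-example 128 128 erasure ec-profile-example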
Really @mir? There is no other way than using "proper hardware"? @Lirio has two more identical nodes but you think they belong to the trash, there is nothing that can be done with them? Honestly, that's your best advice?
From your posts I reckon you want some kind of high availability. For...
The amount of work given to
Mirroring the ZIL and using L2ARC from both SSDs is how it should be done. But I'm not sure how on Earth the ZIL would ever grow to 10 GB when used in a pool of 2 mirrored hard drives. I would say a 5 GB ZIL (mirrored) would be more than enough, even that would cache 25+...
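A sketch of that layout, assuming each SSD carries a small SLOG partition and a larger L2ARC partition (the pool name tank and the device names sdc/sdd are only examples):
# mirrored SLOG (ZIL) on the first partition of each SSD
zpool add tank log mirror /dev/sdc1 /dev/sdd1
# L2ARC striped across the second partitions of both SSDs
zpool add tank cache /dev/sdc2 /dev/sdd2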
Your approach sounds a bit backwards to me. You talk about the hardware (which can be changed) instead of your workload (which is a given). So why not tell us how many VMs you have, how large their virtual disks are, what kind of IO is expected on them, and how much data you can safely lose...
Upon further examination, it looks like it's running, albeit very slowly. It has taken more than 2 hours to complete on an SSD-based RAIDZ. Thanks for your help.
Doing the upgrade step by step. When trying to set the following permission, the command hangs on all nodes that have an OSD:
chown ceph: -R /var/lib/ceph/
I can't get past this command, even though no OSD or MON processes are running on the node. I have since stopped the entire Ceph cluster, but this...
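Worth noting: the recursive chown has to touch every object file under the OSD data directories, so on well-filled OSDs it can run for a very long time without actually being stuck. A sketch (not from this thread) for seeing where the time goes, by chowning each subtree separately:
for d in /var/lib/ceph/*/; do
    echo "==> $d"
    chown -R ceph:ceph "$d"
done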
I wanted to check the value myself, but sysctl came up empty; the variable does not exist.
Upon further examination, in our recently installed Proxmox 4 cluster none of the servers have connection tracking enabled, either in the kernel or as a module (or it's not exposed in /proc or sysctl).
There is a...
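For reference, a quick way to check whether conntrack is present at all might look like this (a sketch; the exact sysctl names differ between kernel versions):
# is the module loaded?
lsmod | grep nf_conntrack
# limit and current entry count, if connection tracking is active
sysctl net.netfilter.nf_conntrack_max
sysctl net.netfilter.nf_conntrack_count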
Thanks for clearing that up. Unfortunately, I have qemu-guest-agent installed in many of the Debian 7 VMs, yet they still produce the error on backup, so I had to deselect the flag in the VM options.
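For anyone debugging the same thing, it may be worth verifying that the agent actually responds; a sketch (100 is an example VMID, and the qm agent subcommand may not exist on older Proxmox releases):
qm agent 100 ping
# and inside the Debian 7 guest (sysvinit):
service qemu-guest-agent status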
1. First of all, you decreased total swap space from 12 GB to 8 GB. Why would you do that? Swap should always be plenty...
2. Also, there is a sysctl variable that controls how aggressively the kernel swaps; you can check it with the following command (changing it is sketched below):
# sysctl vm.swappiness
vm.swappiness = 1
It...
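To actually change the value, a sketch (1 is just the value shown above; pick whatever fits your workload):
# runtime change
sysctl -w vm.swappiness=1
# persist across reboots
echo 'vm.swappiness = 1' >> /etc/sysctl.conf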
When backing up some KVM guests from ZFS to NFS, vzdump gives the following error:
As you can see, it takes exactly one hour until vzdump attempts this freeze and fails many times; after that, the backup completes in the normal amount of time.
It only happens to a few VMs; most of them are not affected. Any...
Ok, thanks for clearing that up. So let's say I want to build a dual ring topology, because my test cluster consists of 5 nodes and connecting every node to every other node would be impractical (cabling and available PCIe slots), and also not much cheaper than a switched setup. Also, with 5 nodes a...
So there is a howto on the wiki that details the setup of a 10 Gbit/s Ethernet network without using a network switch:
http://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
If I understand correctly, you would need a dual-port 10 GbE NIC (or two NICs) in each of your nodes, and you connect...
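For a three-node mesh, one node's /etc/network/interfaces could look roughly like the sketch below; the interface names and addresses are made up here, and the wiki page above is the authoritative reference:
auto ens1f0
iface ens1f0 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        # direct cable to node 2 (10.15.15.2)
        up ip route add 10.15.15.2/32 dev ens1f0
        down ip route del 10.15.15.2/32

auto ens1f1
iface ens1f1 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        # direct cable to node 3 (10.15.15.3)
        up ip route add 10.15.15.3/32 dev ens1f1
        down ip route del 10.15.15.3/32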
No, actually the problem surfaced right after one of our nodes unexpectedly rebooted on Saturday night. During that reboot storage.cfg somehow got modified, because the pools worked just fine the previous day and no one touched the configuration. (And I only re-added pool2, so even if I made an...
Ok, that solved it. For some unknown reason, the pool names in the RBD storage definitions in storage.cfg were overwritten with "rbd":
rbd: pool3
        monhost 192.168.0.6,192.168.0.7,192.168.0.5
        krbd 0
        username admin
        content images
        pool rbd

rbd: pool2
        monhost...
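Presumably the fix is to point the pool option back at the actual Ceph pool; a sketch of the corrected entry for pool3 (pool2 is analogous):
rbd: pool3
        monhost 192.168.0.6,192.168.0.7,192.168.0.5
        krbd 0
        username admin
        content images
        pool pool3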
Okay, I can imagine that snapshot or migrate locks are unsafe to remove automatically. I can't imagine, though, that backup locks are needed after a reboot. If I'm right, it would be a great feature to remove stale backup locks when Proxmox boots.
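In the meantime, a stale lock can be cleared by hand; a sketch (100 is an example VMID):
qm unlock 100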
Okay, so I can't ask you to read and comprehend my post, and I can't ask you to stay away. Then let's reiterate the facts in some logical order, since they are still unclear to you:
- I did experience an unplanned reboot. Many others do too; most of them use ZFS. It's likely a kernel issue, it seems...
@tom have you read my post? Where did I ask how to prevent a spontaneous reboot? (Also there would be no point, as there can be many reasons, from kernel errors to power outages to hardware malfunction.)
I was asking whether it is necessary for VM backup locks to persist across reboots. If not, it would...
Okay, so I re-added pool2 in the storage UI (I did not touch the Ceph pool itself) and checked the keyrings:
root@proxmox:~# cat /etc/pve/priv/ceph/pool2.keyring
[client.admin]
key = PQDDRU9YX9u7HhAAEo3wLAFVCgVL+JsrEcs6HA==
root@proxmox:~# cat /etc/pve/priv/ceph/pool3.keyring
[client.admin]...
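To confirm that a keyring actually authenticates against the cluster, something like this might help (a sketch; the monitor address is one of those listed in storage.cfg above):
ceph -m 192.168.0.5 --id admin --keyring /etc/pve/priv/ceph/pool2.keyring -s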