If the current number does not fit your future needs, you can find a very good description of how to determine and increase this value here: http://ceph.com/docs/master/rados/operations/placement-groups/ . You need to increase pg_num as well as pgp_num.
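Roughly, the commands are as follows (just a sketch; "rbd" and 512 are placeholders for your own pool name and the pg count you calculated):
ceph osd pool set rbd pg_num 512    # raise the number of placement groups
ceph osd pool set rbd pgp_num 512   # then raise pgp_num to match, otherwise the data does not actually rebalance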
Hi,
I think it is described here http://pve.proxmox.com/wiki/Ceph_Server - you can use GUI or CLI to add new drives. They will be assigned to the default pool automatically.
Hi,
is there anything new in this area? My backups run at around 30 MB/s from ceph storage but at more than 100 MB/s from NFS-based storage to the same location. Rados bench shows reads around 300 MB/s. It is quite inconvenient.
Aren't there any settings or anything else I can do to improve performance? I...
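For reference, the rados bench read numbers come from something along these lines (a sketch; "test" is a placeholder pool name and the 60-second duration is arbitrary):
rados bench -p test 60 write --no-cleanup   # write some objects first so there is something to read
rados bench -p test 60 seq                  # sequential read benchmark
rados -p test cleanup                       # remove the benchmark objects afterwards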
This was my solution to replace that drive, as I got some inconsistent pgs and a scrub error (it was not possible to solve it with the repair or deep-scrub commands):
ceph osd out osd.15 --- wait for the data to move via ceph -w; some active+clean+inconsistent pgs remain
/etc/init.d/ceph stop osd.15
ceph...
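For completeness, the usual removal sequence continues roughly like this (a sketch of the standard commands, not necessarily exactly what I ran; adjust osd.15 / 15 to your own OSD id):
ceph osd crush remove osd.15   # remove the OSD from the crush map
ceph auth del osd.15           # delete its authentication key
ceph osd rm 15                 # remove the OSD from the cluster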
Hi,
smartd warned me about some problems with the drives I use for ceph, so I have two questions.
1] Is there any special fsck command to check a ceph XFS filesystem, or should I just use fsck.xfs /dev/sdX ?
2] What is the correct way to replace a drive, particularly one that is not completely dead yet, as in my case?
I...
You can use local storage for your VMs as well; Proxmox creates a special partition on the root volume. So you could use around 100 GB of the SSDs for VMs in the case of RAID1.
You wrote that you want those HDDs for backups, so I assume cache or speed is not that important for this use.
Anyway, I am not sure...
Upgrade to the latest packages; it is already fixed there. If you need your old packages for some reason, you can edit line 105 of the /usr/bin/pveceph script, but not checking certificates could be a security risk.
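I have not checked what exactly sits on that line in every version, but assuming it is the wget call that fetches the Ceph release key, the workaround is basically adding --no-check-certificate to it, i.e. something like:
wget --no-check-certificate -O release.asc 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
Again, this disables certificate verification for that download, so only do it if you understand the risk.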
Hi,
I am not sure if I understand you correctly.
But if you already bought a raid card, it makes sense to use it.
If the problem is I/O delay, then create a RAID1 setup from the SSD drives and store all the VMs on it, and connect the HDDs directly to the motherboard, as ZFS prefers direct access to the drives and...
Be careful of network lag when using this setup. Regarding ceph, it is possible in a similar way to this: http://forum.proxmox.com/threads/20958-ceph-add-new-hosts-and-create-completely-new-pool
I finally finished this setup. I will make some notes about this procedure.
On the new nodes, after the command
pveceph install -version firefly
it was also necessary to run the following command without parameters, as mentioned on the wiki:
pveceph init
Then I prepared the drives with the zap command and then...
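The drive preparation itself was roughly this (a sketch; /dev/sdb is just a placeholder for the disk being added, and the exact command names may differ between versions):
ceph-disk zap /dev/sdb       # wipe the partition table and any old ceph metadata
pveceph createosd /dev/sdb   # create a new OSD on the freshly zapped disk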
Hi,
is the weekly_backup storage available? Did you receive any error email? If you have not filled in an address, try setting one up. Is any error or info visible in the GUI task console? Any hint in /var/log/syslog, /var/log/messages, or in the pve task logs?
I got the same error after installing ZFS on the last node, as it comes with a special version of GRUB. It was necessary to boot SystemRescueCd and choose "boot from local hard drive" or something like that, and after booting, run grub-install /dev/sdX and the problem was solved.
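In short, from the booted local system it was something like this (assuming /dev/sda is your boot disk; running update-grub afterwards should not hurt):
grub-install /dev/sda   # reinstall the boot loader brought in by the ZFS packages
update-grub             # regenerate the grub configuration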
Yes, you can. But on the new nodes there cannot be any VMs, either KVM or OpenVZ, running at the moment of adding - http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster .
As Wolfgang has written, you will need some non-free products like Nexenta etc. for synchronous HA. There were some tools from Sun for setting up HA ZFS, but I found them very hard to deploy and use. Zreplicate uses send and receive, which means you can take a snapshot every five minutes, for example, and...
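Under the hood it is plain zfs send/receive, roughly like this (a sketch with made-up pool, dataset, and host names; zreplicate wraps this and keeps track of the incremental snapshots for you):
zfs snapshot tank/vmdata@rep-1                 # take a new snapshot on the source
zfs send -i tank/vmdata@rep-0 tank/vmdata@rep-1 | ssh backuphost zfs receive tank/vmdata   # send only the changes since the previous snapshot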
Well, it depends on what you are looking for. It is a distributed filesystem; it has file snapshots as well as volume snapshots. I tested it and it seems trustworthy to me. When there is a split brain, you can solve it easily at the file level rather than the volume level, which seems really great to me as it...
I have exactly the same problem on my servers too, and I have the latest updates installed (the installation CD was 3.1):
Wget complains about the certificate of ceph.com when I run it separately:
wget -O real.asc 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
--2015-02-11...
I am not 100% sure about live migration, but I think that this ZFS option uses iSCSI, while your setup uses LVM.
I had some problems with DRBD previously, and creating every drive manually is a no-go for me, so I would go in the GlusterFS-on-ZFS direction, for example. But it depends on...
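GlusterFS on ZFS basically means creating a ZFS dataset on each node and using it as a brick of a replicated volume, roughly (a sketch with made-up pool, host, and volume names):
zfs create tank/brick1                                                       # dataset that will hold the gluster brick, run on each node
gluster volume create gv0 replica 2 node1:/tank/brick1 node2:/tank/brick1    # replicated volume across two nodes
gluster volume start gv0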
Thank you very much for your answer. I have a few more questions.
I want to add the spinning drives from the older servers cl3 and cl4 to the current SSD ones, cl1 and cl2.
I am glad that step three can be simpler. What if I set ceph osd set noin and ceph osd set noup, as you have written in your...
Sometimes I have problems with the GUI when some storage is not online.
Access logs are stored in /var/log/pveproxy/access.log and the rest in /var/log/syslog - you can try to look for vanished connections there.
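Something like this is usually enough to spot them (just a sketch of a grep, nothing Proxmox-specific):
grep -i pveproxy /var/log/syslog | grep -iE 'error|connection'   # pveproxy messages mentioning errors or dropped connections
tail -f /var/log/pveproxy/access.log                             # watch the access log live while reproducing the problem in the GUI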