When we move the qcow2 image of a VM/guest (from the Hardware tab inside the Proxmox GUI for any VM) from local storage to a shared Ceph storage pool, does it not automatically get converted to raw format?
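For reference, I assume the same conversion can be done manually with qemu-img (the VMID, pool, and image names below are just examples):

qemu-img convert -f qcow2 -O raw /var/lib/vz/images/100/vm-100-disk-1.qcow2 rbd:rbd/vm-100-disk-1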
We have a 3-node Proxmox/Ceph cluster ... each node with 4 x 4 TB disks.
1) If we want to add more disks, what are the things we need to be careful about?
Will the following steps automatically add it to ceph.conf?
ceph-disk zap /dev/sd[X]
pveceph createosd /dev/sd[X] -journal_dev...
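I assume the OSD registration happens in the cluster maps rather than in ceph.conf itself; after the above, the new OSD should at least show up with:

ceph osd tree
ceph -s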
FSYNCS/SECOND did not change much .... strangely, though, disk lag and load inside guest VMs improved by at least 5 times.
It was painfully slow earlier and I had to wait 20 to 60 seconds after entering a command, but after making the changes I don't have to wait more than 5 seconds for simple...
I see the following error in /var/log/ceph/ceph-mon.1.log
Any ideas about it?
2016-03-29 15:21:53.626021 7f1c39513700 1 mon.1@1(peon).auth v325 client did not provide supported auth type
2016-03-29 15:21:56.561817 7f1c39513700 0 mon.1@1(peon) e3 handle_command...
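I assume this relates to the cephx settings in /etc/ceph/ceph.conf; the defaults should look something like:

auth cluster required = cephx
auth service required = cephx
auth client required = cephx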
Do you add the following to /etc/ceph/ceph.conf?
filestore_fd_cache_size = 64
filestore_fd_cache_shards = 32
cephx sign messages = false
cephx require signatures = false
rbd_cache = true
And then enable the cache option = writeback for the QEMU guest/VM under Hardware / Edit Disk?
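For clarity, here is how I assume the options would be placed in /etc/ceph/ceph.conf (the section placement is my guess):

[global]
cephx sign messages = false
cephx require signatures = false

[osd]
filestore_fd_cache_size = 64
filestore_fd_cache_shards = 32

[client]
rbd_cache = true

and the same writeback setting from the CLI (VMID and disk name are just examples):

qm set 100 -virtio0 ceph-pool:vm-100-disk-1,cache=writeback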
Do we have to...
Please check this and advise.
The Ceph cluster is running on a private network with 10 Gbit NICs, but there seems to be some lag in FSYNCS/SECOND; from what I read in other forum posts, I suspect this is the issue.
Please advise any tips to fine-tune this. Each server runs 4 x 4 TB disks ...
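For reference, the FSYNCS/SECOND figure I am quoting comes from pveperf, run against the storage mount point, e.g.:

pveperf /var/lib/vz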
I could load the qcow2 image converted from vmdk when I used local storage, since that created the folder /var/lib/vz/images/VMID/; I could place the converted qcow2 image inside that folder and adjust the image name in /etc/pve/local/qemu-server/VMID.conf.
My Question...
What is the procedure to restore or import a vmdk image into a Proxmox KVM guest?
It seems to be a flat vmdk file:
root@srv1:/var/# file clone-serverNAME-flat.vmdk
clone-serverNAME-flat.vmdk: DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector...
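Since a -flat.vmdk should just be raw disk data (which the file output above suggests), I assume the conversion itself would be something like (VMID 100 is just an example):

qemu-img convert -f raw -O qcow2 clone-serverNAME-flat.vmdk /var/lib/vz/images/100/vm-100-disk-1.qcow2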
Hi,
Replication size 2 vs 3 (3-node Proxmox VE cluster with Ceph running inside the nodes themselves):
which is better and why? Does a higher replication size shrink the storage pool (i.e., make less space available)?
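My rough understanding of the math: usable space is roughly raw space divided by the replication size, so with 12 x 4 TB = 48 TB raw, size 3 leaves about 16 TB usable while size 2 leaves about 24 TB (before overhead and near-full safety margins).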
Can you throw some light on the following?
####################################################
1)
vmbr0 bridged to eth1 (public NIC)
Is there any need to use vmbr0, or can we remove vmbr0 and just set the IP directly on the eth1 interface?
Although it seems a bridge is necessary on the public NIC...
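As far as I understand, the bridge is what lets guest VMs attach to the public NIC, so the IP sits on vmbr0 while eth1 stays enslaved. A sketch of /etc/network/interfaces (the addresses are placeholders):

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0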
OK, I got it working somewhat; please check below and advise what could be wrong.
#########################################################################
root@srv1:~# pvecm status
Quorum information
------------------
Date: Wed Mar 9 07:45:58 2016
Quorum provider...
Hi,
I see the following error before trying to set up a 3-node Proxmox VE cluster:
root@srv1:~# pvecm status
Cannot initialize CMAP service
root@srv1:~#
root@srv1:~# pveversion
pve-manager/4.1-1/2f9650d4 (running kernel: 4.2.6-1-pve)
root@srv1:~#
root@srv1:~# corosync-cmapctl -g...
3 servers
------------
Two NICs on each server: eth1 (public), eth0 to be used as private (internal)
eth1: Proxmox VE IP for each server (public IP), same network but different subnets
eth0: private (internal), to be used for the Ceph shared storage
Each of the 3 servers has 4 x 1 TB...
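My understanding of the basic flow, with placeholder names (the separate-network details are what I am unsure about):

on srv1: pvecm create mycluster
on srv2 and srv3: pvecm add <IP-of-srv1>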
Hello,
I am new to Proxmox and got stuck on the following error.
Three Proxmox VE servers are set up, each with a non-standard SSH port. When I try to add the second node to the cluster (it should be the same for the third node), I see the error:
unable to copy ssh ID
Are you bound to use ssh port 22?
There is...
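One workaround I have seen suggested, assuming for example port 2222, is a per-host entry in /root/.ssh/config on each node:

Host srv2
    Port 2222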