Search results

  1. VM disk format, local to Ceph Storage pool move

    When we move the qcow2 image of a VM/guest (from the Hardware tab in the Proxmox GUI for any VM) from local storage to a shared Ceph storage pool, does it not automatically get converted to raw format?
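
    A minimal sketch of the same move from the CLI, assuming a hypothetical VMID 100, a virtio0 disk, and a Ceph storage named ceph-pool; on RBD-backed storage the resulting disk is raw:

        # move the disk to Ceph; Proxmox converts qcow2 -> raw, since RBD only stores raw images
        qm move_disk 100 virtio0 ceph-pool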
  2. Add new disks

    We have a 3-node Proxmox/Ceph cluster ... each with 4 × 4 TB disks. 1) If we want to add more disks, what are the things that we need to be careful about? Will the following steps automatically add it to ceph.conf?
        ceph-disk zap /dev/sd[X]
        pveceph createosd /dev/sd[X] -journal_dev...
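
    A hedged sketch of that sequence with concrete (hypothetical) device names, a new data disk /dev/sdX and a separate journal device /dev/sdY:

        # wipe any old partition table and signatures from the new disk
        ceph-disk zap /dev/sdX
        # create the OSD through Proxmox; -journal_dev places the filestore journal on another device
        pveceph createosd /dev/sdX -journal_dev /dev/sdY

    The new OSD is registered in the cluster and CRUSH map rather than written into ceph.conf, so typically no per-OSD ceph.conf entry appears.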
  3. too many PGs

        health HEALTH_WARN too many PGs per OSD (442 > max 300)
    How can we decrease PGs per OSD safely? (Though this is a Ceph question.)
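
    In this Ceph generation pg_num cannot be reduced on an existing pool, so the options are migrating data to a new pool created with fewer PGs, or raising the warning threshold. A sketch of the latter, assuming the stock 300-PG limit is what is firing:

        # /etc/ceph/ceph.conf, [global] (or [mon]) section, then restart the monitors
        mon_pg_warn_max_per_osd = 500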
  4. Slow VMs, hard disk lags, load (CPU/mem inside VM good enough)

    FSYNCS/SECOND did not change much ... strangely, though, disk lag and load inside the guest VMs were better by at least 5 times. It was painfully slow earlier and I had to wait 20 to 60 seconds after a command input, but after making the changes I don't have to wait more than 5 secs for simple...
  5. Slow VMs, hard disk lags, load (CPU/mem inside VM good enough)

    Changing from none to cache=writeback in the QEMU guest/VM ... HW->Disk, improved the lag issue to a very good extent.
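
    The same change from the CLI, as a sketch with hypothetical names (VMID 100, a virtio0 disk on a storage called ceph-pool):

        # set the disk cache mode to writeback for the guest
        qm set 100 -virtio0 ceph-pool:vm-100-disk-1,cache=writeback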
  6. Signature check failed. Sender did not set CEPH_MSG_FOOTER_SIGNED

    I see tons of these streaming into /var/log/ceph/ceph-osd.4.log. Any suggestions to fix this?
        2016-03-29 15:27:55.467114 7f9881c35700 0 -- 10.10.10.2:6804/2545 >> 10.10.10.3:0/127961 pipe(0x248b0000 sd=84 :6804 s=2 pgs=24572619 cs=1 l=1 c=0x8345a1b20).Signature check failed
        2016-03-29...
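
    The mismatch goes away when message signing is configured the same on every node; a sketch of the relevant lines (the same ones suggested in result 9), added on all nodes before restarting the Ceph services:

        # /etc/ceph/ceph.conf, [global] section
        cephx sign messages = false
        cephx require signatures = false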
  7. auth v325 client did not provide supported auth type

    I see the following error in /var/log/ceph/ceph-mon.1.log. Any ideas about it?
        2016-03-29 15:21:53.626021 7f1c39513700 1 mon.1@1(peon).auth v325 client did not provide supported auth type
        2016-03-29 15:21:56.561817 7f1c39513700 0 mon.1@1(peon) e3 handle_command...
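
    This message usually means some client is connecting without cephx while the monitors expect it, so it is worth checking that every node's ceph.conf agrees. A sketch of the block to compare, assuming the cephx defaults:

        # /etc/ceph/ceph.conf, [global] section; must be identical on all nodes
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx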
  8. Slow VMs, hard disk lags, load (CPU/mem inside VM good enough)

    All the KVM guest/container images are in qcow2 format ... is that causing any delay? Any thoughts?
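
    If qcow2 overhead is the suspicion, a hedged way to test is converting one image to raw and comparing (hypothetical VMID and file names; stop the VM first and keep the original until verified):

        # offline conversion; raw skips qcow2's allocation metadata on every write
        qemu-img convert -f qcow2 -O raw vm-100-disk-1.qcow2 vm-100-disk-1.raw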
  9. Ceph - Bad performance in qemu-guests

    Do you add the following to /etc/ceph/ceph.conf?
        filestore_fd_cache_size = 64
        filestore_fd_cache_shards = 32
        cephx sign messages = false
        cephx require signatures = false
        rbd_cache = true
    And then enable the cache option = writeback for the QEMU guest/VM under Hardware/Edit Disk? Do we have to...
  10. Slow VMs, hard disk lags, load (CPU/mem inside VM good enough)

    Please check this and advise. The Ceph cluster is running on a private network with a 10 GBit NIC, but it seems there is some lag in FSYNCS/SECOND; from what I read in other forum posts, I suspect this is the issue. Please advise any tips to fine-tune this. Each server runs 4 × 4 TB disks ...
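
    The FSYNCS/SECOND figure comes from Proxmox's pveperf tool; a sketch of re-measuring it against the directory backing the VM images (the path is an assumption):

        # reports CPU, buffered reads and FSYNCS/SECOND for the given path
        pveperf /var/lib/vz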
  11. flat vmdk file restore in proxmox

    I could load the qcow2 image converted from vmdk when I used storage as local ... as that created the folder /var/lib/vz/images/VMID/, and I could replace or put the converted qcow2 image inside that folder and adjust the image name in the file /etc/pve/local/qemu-server/VMID.conf. My question...
  12. flat vmdk file restore in proxmox

    What is the procedure to restore or import a vmdk image into a Proxmox KVM container? It seems to be a flat vmdk file:
        root@srv1:/var/# file clone-serverNAME-flat.vmdk
        clone-serverNAME-flat.vmdk: DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector...
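
    A -flat.vmdk normally holds raw disk sectors (hence the plain MBR in the file output above), so one hedged approach, with a hypothetical VMID 100, is to convert it as raw input and drop the result where result 11 describes:

        # treat the flat extent as raw and produce a qcow2 image Proxmox can attach
        qemu-img convert -f raw -O qcow2 clone-serverNAME-flat.vmdk /var/lib/vz/images/100/vm-100-disk-1.qcow2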
  13. replication size 2 vs 3?

    Hi, replication size 2 vs 3 (3-node Proxmox VE with the Ceph cluster inside the nodes itself): which is better and why? Does the replication size cause the storage pool size to shrink (or make less space available) if the replication size is higher?
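
    Usable capacity is roughly raw capacity divided by the replication size, and size 3 is the commonly recommended default for data safety. Taking the 12 × 4 TB OSDs from result 2 (48 TB raw) as a worked example:

        size 2: 48 TB / 2 = 24 TB usable
        size 3: 48 TB / 3 = 16 TB usable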
  14. New install network questions

    Can you throw some light on the following? 1) vmbr0 bridged to eth1 (public NIC): is there any need to use vmbr0, or can we remove vmbr0 and just set the IP directly on the eth1 interface? Although it seems a bridge is necessary on the public NIC...
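
    If guests need to sit on the public network, the bridge is what lets them share eth1, and the host IP then lives on vmbr0 rather than on eth1. A sketch of the usual /etc/network/interfaces stanza, with placeholder addresses:

        auto vmbr0
        iface vmbr0 inet static
                address 192.0.2.10        # placeholder public IP
                netmask 255.255.255.0
                gateway 192.0.2.1
                bridge_ports eth1         # eth1 itself carries no IP
                bridge_stp off
                bridge_fd 0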
  15. Cannot initialize CMAP service on fresh install of pve-manager/4.1

    I could get past the error on the main node when I created a cluster using:
        pvecm create ClusterName
  16. Cannot initialize CMAP service on fresh install of pve-manager/4.1

    OK, I got it working somewhat. Please check below and advise what could be wrong:
        root@srv1:~# pvecm status
        Quorum information
        ------------------
        Date: Wed Mar 9 07:45:58 2016
        Quorum provider...
  17. Cannot initialize CMAP service on fresh install of pve-manager/4.1

    Hi, I see the following error before trying to set up a 3-node Proxmox VE cluster:
        root@srv1:~# pvecm status
        Cannot initialize CMAP service
        root@srv1:~# pveversion
        pve-manager/4.1-1/2f9650d4 (running kernel: 4.2.6-1-pve)
        root@srv1:~# corosync-cmapctl -g...
  18. New install network questions

    3 servers, two NICs on each server:
        eth1 (public): Proxmox VE IP for each server (public IP), same network but different subnets
        eth0 (private/internal): to be used for the Ceph shared storage
    Each of the 3 servers has 4 × 1 TB...
  19. waiting for quorum Proxmox VE Version: 4.1-1

    On the first node:
        root@main1:~# pvecm create MyCluster
        Corosync Cluster Engine Authentication key generator.
        Gathering 1024 bits for key from /dev/urandom.
        Writing corosync key to /etc/corosync/authkey.
        root@main1:~# pvecm...
  20. unable to copy ssh ID pve-manager 4.1

    Hello, I am new to Proxmox and got stuck on the following error. Three Proxmox VE servers are set up, each with a non-standard ssh port. When I try to add the second node to the cluster (it should be the same for the third node), I see the error: unable to copy ssh ID. Are you bound to use ssh port 22? There is...
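
    pvecm reaches the other nodes over plain ssh, so a common hedged workaround is pinning the port per host in root's ssh client config (peer address and port below are placeholders):

        # /root/.ssh/config on the joining node
        Host 10.10.10.1
                Port 2222    # the non-standard sshd port on the peer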
