Search results

  1. High latency on recently added SSD OSDs

    Hi, I added several dozen SSDs to a Ceph cluster and found that Proxmox reports an Apply latency of 200 to 500 ms for them. I checked with iostat - zero activity. What could be wrong with them? P.S. No migration is happening - they are linked to a separate 'root' container.
  2. New install on mSATA

    Try adding nomodeset to the kernel boot parameters.
  3. drbdmanage license change

    Looks like Linbit chose the path Apple once took: no other players. That said, I did manage to run a master + 2 slaves DRBD cluster in production.
  4. Resize VM disk on Ceph

    Hi, some notes: 1. Resizing from the Proxmox UI failed with the message 'VM 102 qmp command failed - VM 102 qmp command 'block_resize' failed - Could not resize: Invalid argument' 2. Successfully resized the image on Ceph with 'qemu-img resize -f rbd rbd:rbd/vm-102-disk-1 48G' 3. But the Proxmox UI and VM...
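
The workaround in steps 2-3 can be sketched as a few commands. The VM ID and pool name come from the post; qm rescan is an assumption about the usual way to make Proxmox re-read the new image size into the VM config, and the cluster-touching lines are commented out so the sketch is safe to run:

```shell
# Resize workaround sketch (VMID/pool taken from the post above).
VMID=102
IMAGE=rbd/vm-${VMID}-disk-1
NEW_SIZE=48G
# Grow the image directly on Ceph, bypassing the failing UI call:
#   qemu-img resize -f rbd rbd:${IMAGE} ${NEW_SIZE}
# Then have Proxmox re-read image sizes back into the VM config:
#   qm rescan --vmid ${VMID}
echo "resize rbd:${IMAGE} to ${NEW_SIZE}"
```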
  5. [SOLVED] all vms down and lvm-thin not backupable

    Please paste the output of cat /proc/mdstat.
  6. Ceph pool may be deleted easily in UI
  7. Ceph pool may be deleted easily in UI

    Hi, I found a simple way to render all CTs/VMs unusable: just delete a Ceph pool via Ceph->Pools->Remove. There are no warnings or locks, even if the pool is in use. Running pve-manager/4.2-15/6669ad2c (running kernel: 4.4.10-1-pve) Regards, Alex
  8. [SOLVED] Journal was not prepared with ceph-disk

    For those interested in creating the OSD journal on a separate partition (not a whole disk), here are the steps (assume sda3 and sdb3 are 5 GB partitions for the journals, and sdc and sdd are the disks for OSD data): 1. Create a partition of the correct size. If using fdisk, the partition's size should be 10483712 sectors...
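
The 10483712-sector figure in step 1 falls out of simple arithmetic: a 5 GiB journal whose partition starts at the usual 2048-sector (1 MiB) alignment boundary. A small helper (hypothetical, for illustration only):

```shell
# Hypothetical helper: size in 512-byte sectors for an N-GiB journal
# partition that starts at sector 2048 (the standard 1 MiB alignment).
journal_sectors() {
  gib=$1
  # N GiB in 512-byte sectors, minus the 2048-sector start offset
  echo $(( gib * 1024 * 1024 * 2 - 2048 ))
}

journal_sectors 5   # prints 10483712, the size quoted above
```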
  9. [SOLVED] Journal was not prepared with ceph-disk

    Very strange/bad. Even running ceph-disk prepare --fs-type xfs --cluster ceph --cluster-uuid 908ceb45-91b6-4c31-8ede-00acab17c9ef --journal-dev /dev/sdc /dev/sda3 deletes /dev/sda3 :(
  10. [SOLVED] Journal was not prepared with ceph-disk

    Hi, I'm running Proxmox 4.2 and wonder if everything is OK: # pveceph createosd /dev/sdc -journal_dev /dev/sda3 create OSD on /dev/sdc (xfs) using device '/dev/sda3' for journal Caution: invalid backup GPT header, but valid main header; regenerating backup header from main header...
  11. Rbd: couldn't connect to cluster

    It was definitely a problem with the Ceph installation. Using separate network segments via the public/cluster network parameters leads to a problem where the osd daemons can't report to the mon daemon (still investigating why). Using a single network segment works fine.
  12. [SOLVED] GUI displays 'grayed' nodes

    I have successfully resolved the problem. The root of the issue was creating the Ceph cluster with pveceph init --network b.b.b.b/mask, where b.b.b.b/mask is the second (private) segment devoted to Ceph intra-cluster communication. So /etc/pve/ceph.conf had the following lines: cluster...
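
For reference, a minimal sketch of the misconfiguration described above, assuming (as the post implies) that b.b.b.b/mask is the private segment and that a.a.a.a/mask stands in for the public network clients and monitors can actually reach; both placeholders are kept as-is from the post:

```ini
[global]
    # pveceph init --network b.b.b.b/mask put the private segment here,
    # so the monitors bound to a network the GUI/clients could not reach:
    cluster network = b.b.b.b/mask
    public network = b.b.b.b/mask

    # Sketch of the intended split (a.a.a.a/mask is a placeholder):
    # cluster network = b.b.b.b/mask   # replication traffic only
    # public network = a.a.a.a/mask    # monitors, clients, GUI
```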
  13. Ceph or DRBD or else for 3 node cluster?

    If you want to use partitions as OSD disks, you have to create/initialize them manually rather than with the pveceph createosd command (you will need to set the correct partition type GUID and name them 'ceph data').
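
A sketch of that manual initialization, hedged: the device and partition numbers are examples, and the GUIDs are the well-known ceph-disk GPT partition type codes for 'ceph data' and 'ceph journal'. The sgdisk calls are left commented since they rewrite partition tables:

```shell
# Well-known ceph-disk GPT partition type GUIDs (filestore era):
CEPH_DATA_GUID=4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D
CEPH_JOURNAL_GUID=45B0969E-9B03-4F30-B4C6-B4B80CEFF106
# Example (device/partition numbers hypothetical): tag sdc1 as OSD data
# and sda3 as its journal, then re-read the partition tables:
#   sgdisk --change-name=1:'ceph data'    --typecode=1:${CEPH_DATA_GUID}    /dev/sdc
#   sgdisk --change-name=3:'ceph journal' --typecode=3:${CEPH_JOURNAL_GUID} /dev/sda
#   partprobe
echo "data=${CEPH_DATA_GUID} journal=${CEPH_JOURNAL_GUID}"
```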
  14. Rbd: couldn't connect to cluster

    dietmar, you're right. I missed this step in the manual. Now the status looks like: stopped: rbd error: got lock timeout - aborting command. I wonder if it's related to pgmap v67: 64 pgs: 64 stale+active+undersized+degraded; 0 bytes data, 197 MB used, 22315 GB / 22315 GB avail
  15. Rbd: couldn't connect to cluster

    Hi, I have PVE 4.2 with Ceph installed with the help of pveceph. The problem is that when I try to create a CT/VM on an RBD volume, it fails with the error: TASK ERROR: rbd error: rbd: couldn't connect to the cluster! I was able to trace the calls to the rbd binary, which is executed with "--auth_supported none" in...
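
The "--auth_supported none" symptom usually means the storage definition has no keyring to hand to rbd. A hedged sketch of the usual fix - the storage ID 'myrbd' is hypothetical, and Proxmox looks for a keyring named after the storage under /etc/pve/priv/ceph/; the copy itself is commented since it touches cluster-wide config:

```shell
STORAGE=myrbd   # hypothetical storage ID from /etc/pve/storage.cfg
KEYRING=/etc/pve/priv/ceph/${STORAGE}.keyring
# Copy the cluster's admin keyring to where Proxmox will look for it:
#   mkdir -p /etc/pve/priv/ceph
#   cp /etc/ceph/ceph.client.admin.keyring ${KEYRING}
echo "${KEYRING}"
```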
  16. [SOLVED] GUI displays 'grayed' nodes

    I found out what's happening. The browser (web interface) makes calls to https://hv01:8006 like GET /api2/json/nodes/hv02/storage/local/status HTTP/1.1" 500 - these are proxied to node hv02, then proxied locally to tcp/85 where pvedaemon listens. This is where the timeout happens.
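
To narrow down which hop eats the time, the proxied request can be rebuilt by hand. The hostnames hv01/hv02 come from the post; the curl line is commented since a real request additionally needs a valid API ticket cookie to get past authentication:

```shell
# Rebuild the API URL from the post; timing this request against hv01
# versus running it on hv02 directly shows where the delay sits.
NODE=hv02
URL="https://hv01:8006/api2/json/nodes/${NODE}/storage/local/status"
#   curl -k -s -o /dev/null -w '%{http_code} %{time_total}\n' "${URL}"
echo "${URL}"
```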
  17. [SOLVED] GUI displays 'grayed' nodes

    I have even switched from ntpd back to systemd-timesyncd - still no luck. Interestingly, the Ceph cluster on these nodes doesn't complain about time synchronization problems. P.S. By the way, why does Proxmox start systemd-timesyncd even if ntpd is up and running?
  18. [SOLVED] GUI displays 'grayed' nodes

    systemd-timesyncd is running on all nodes without errors. May 24 06:52:03 host3 systemd-timesyncd[982]: interval/delta/delay/jitter/drift 2048s/-0.001s/0.000s/0.001s/-14ppm
  19. Help me choose a new network card

    Looks strange, since your card has been supported by the e1000 kernel module since 2.6.x. Anyway, Intel NICs like the i350 or 82576-based ones look acceptable (for servers).
  20. [SOLVED] GUI displays 'grayed' nodes

    Hi, I installed Proxmox 4.2 on 3 nodes and formed a cluster. CLI tools like pvecm status/nodes show that the cluster is OK. But the GUI randomly 'grays out' nodes and randomly brings them back. No errors, quorum is established. I switched to udpu and back to multicast - no difference. What should I check...

