Search results

  1. Sizing question, dual data center

    Hey all, I have to size a PVE/Ceph environment for two data centers. We need a new home for roughly 300 small VMs (4 cores, 4 GB memory, 100-200 GB storage). I estimate half a year until all 300 VMs are migrated and have calculated 100% growth over the next three years. Storage bandwidth should not be...
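
    A rough back-of-envelope for the 100% growth target (600 VMs), assuming 150 GB as the per-VM midpoint and 3x Ceph replication (both assumptions, not stated in the post):

      600 VMs x 4 vCPUs  = 2400 vCPUs
      600 VMs x 4 GB RAM = 2400 GB RAM (~2.4 TB)
      600 VMs x 150 GB   = ~90 TB of VM data, ~270 TB raw Ceph capacity at size 3
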
  2. how to identify least utilized node via API?

    Thank you, Wolfgang. @LnxBil, usually IO wait and memory are the bottlenecks. Cheers, luphi
  3. how to identify least utilized node via API?

    Hey guys, for the next VM deployment I want to identify the least utilized node in a cluster via the API. Sorry, but I couldn't find it by myself. Cheers, luphi
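
    A minimal sketch of one way to do this with pvesh and jq on a recent PVE release, ranking online nodes by memory utilization (the ranking criterion is an assumption; the thread suggests IO wait and memory matter most):

      # rank cluster nodes by memory utilization via the API (pvesh wraps the REST API)
      pvesh get /nodes --output-format json \
        | jq -r '.[] | select(.status=="online") | "\(.node) \(.mem/.maxmem)"' \
        | sort -k2 -n | head -n 1
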
  4. 15th scsi drive

    Hello Alwin, thank you for your reply. Does that mean we are still limited to 14 SCSI devices? Cheers, Martin
  5. 15th scsi drive

    Hello all, I just want to add the 15th SCSI drive to a VM. I'm aware of the limitation of 14 drives per SCSI bus; therefore I have chosen virtio-scsi-single in the controller option to have a dedicated controller per drive. But the GUI still doesn't allow me to add values higher than 13 for the...
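
    If the GUI caps the index, the CLI may still accept it; a sketch, where VM ID 100, the 100 GB size, and the storage name local-lvm are placeholders:

      # attach a 15th disk (index scsi14); with virtio-scsi-single each disk
      # gets its own dedicated controller
      qm set 100 --scsihw virtio-scsi-single --scsi14 local-lvm:100
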
  6. How to kill a container that doesn't stop

    Hello, I have the same issue but don't want to restart the whole node. pct list is hanging, and stopping the container is also hanging. Unfortunately, the storage seems to be okay, so I need help with further investigation.
    root@pve:~# pvesm status
    Name Type Status Total Used...
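
    When pct stop hangs, one common last resort is killing the container's processes directly; a sketch, with CTID 100 as a placeholder (use with care):

      # try LXC's own forced stop first
      lxc-stop -n 100 --kill
      # if that hangs too, find the container's monitor/init processes and kill them
      ps -ef | grep 'lxc.*100'
      kill -9 <PID-of-the-container-process>
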
  7. OSD issues after migration

    Hey there, I migrated a Ceph node to new hardware. After moving the OSDs, they won't come up again. All PGs are unknown, all OSDs are down. The GUI shows them as filestore, but they are definitely bluestore. The old server was originally installed with PVE 4.4/hammer and migrated from time to...
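
    To check what the OSDs themselves report rather than trusting the GUI (osd.0 is a placeholder id):

      # ask the cluster which backend each OSD actually uses
      ceph osd metadata 0 | grep osd_objectstore
      # and look at why they are reported down
      ceph osd tree
      journalctl -u ceph-osd@0 -n 50
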
  8. ceph and zabbix

    Hey all, I just tried to get Ceph's zabbix module running. I followed the Ceph docs at http://docs.ceph.com/docs/master/mgr/zabbix/ Communication seems to be fine, but no data is sent. Got this in the mgr log:
    2017-11-01 19:07:46.614999 7f7415d00700 20 mgr[zabbix] Waking up for new iteration...
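
    For reference, the basic setup from the linked docs plus a manual test send; note the module shells out to zabbix_sender, which must be installed on the active mgr node (the server name is a placeholder):

      ceph mgr module enable zabbix
      ceph zabbix config-set zabbix_host my-zabbix-server.example.com
      ceph zabbix config-set identifier ceph-cluster
      # force an immediate send instead of waiting for the next interval
      ceph zabbix send
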
  9. trying to acquire lock...TASK ERROR: can't lock file '/var/lock/qemu-server/...

    I'm in the same situation, but before I delete the lock file manually, I want to make sure that it is safe to do so. Which process is using these lock files? What do I have to take care of to be on the safe side? Btw: the lock file is more than a week old and I rebooted the VM several times...
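
    Rather than deleting the file by hand, PVE has a supported way to clear a stale VM lock; you can also check whether anything still holds the file (VM ID 100 is a placeholder):

      # see if any process still holds the lock file
      fuser -v /var/lock/qemu-server/lock-100.conf
      # clear a stale lock the supported way
      qm unlock 100
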
  10. incomplete PGs

    Thank you for the link, will give it a try...
  11. incomplete PGs

    Hello Alwin, yes, that's what I expected. :-( Any idea how to find out which VM is affected? Cheers, luphi
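
    One way to map a PG back to a VM disk, since every RBD image's objects share its block_name_prefix (pool name rbd and PG id 1.28 are placeholders):

      # list the objects that live in the incomplete PG (Luminous and later)
      rados --pgid 1.28 ls
      # match the rbd_data.<prefix> of those objects against each image
      for img in $(rbd -p rbd ls); do
        echo "$img: $(rbd -p rbd info "$img" | grep block_name_prefix)"
      done
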
  12. incomplete PGs

    Hey guys, this time I really need your help :-( First, my setup: 3 nodes (pve0, pve1, pve2), 2 pools (both of size 2/1), all nodes running the latest version 5.0/12.2.0. I just wanted to migrate all OSDs from filestore to bluestore, so I removed all OSDs of node pve2 and recreated them...
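
    To see exactly which PGs are incomplete and where they think their data is (1.28 is a placeholder id):

      ceph health detail | grep incomplete
      ceph pg dump_stuck
      # per-PG detail, including which OSDs it is probing
      ceph pg 1.28 query
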
  13. fresh PVE 5.0/ceph 12.2.0 @home

    I did some tests: rados bench -p test 30 write --no-cleanup

                          journal on SSD   journal on OSD
      Total time run:     30.824673        30.506182
      Total writes made:  485              405
      Write size:         4194304          4194304
      Object size:        4194304          4194304
      ...
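
    The matching read test and cleanup, since --no-cleanup leaves the benchmark objects in the pool:

      # sequential read test against the objects left by the write run
      rados bench -p test 30 seq
      # remove the benchmark objects afterwards
      rados -p test cleanup
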
  14. fresh PVE 5.0/ceph 12.2.0 @home

    Thanks for your reply. But how can I make sure that the primary and replicated PGs are not on OSDs which have their journal on the same SSD? If the SSD fails, I will lose my data. Is my setup not the right way? Cheers, Martin
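
    One approach, as a sketch and not the thread's confirmed answer: group OSDs into one CRUSH bucket per journal SSD and make that bucket the failure domain (bucket and rule names are made up):

      # one bucket per journal SSD; move the OSDs that share that SSD into it
      ceph osd crush add-bucket ssd-a host
      ceph osd crush move ssd-a root=default
      ceph osd crush set osd.0 1.0 host=ssd-a
      # replicate across those buckets so no PG keeps all copies behind one SSD
      ceph osd crush rule create-simple by-journal-ssd default host
      ceph osd pool set rbd crush_rule by-journal-ssd
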
  15. fresh PVE 5.0/ceph 12.2.0 @home

    I did some further research this morning by monitoring "ceph -s" and "ceph osd tree" during startup. (I removed the host bucket since this is an unnecessary layer in my hierarchy.) At the beginning everything seems to be OK: mgr is active, the osd tree is correct, OSDs are just coming up...
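
    A guess worth checking: by default every OSD re-registers its CRUSH location on start ("osd crush update on start"), which would recreate the removed host layer; disabling it in /etc/ceph/ceph.conf is a sketch, not a confirmed fix:

      [osd]
      # keep OSD placement exactly as edited in the CRUSH map
      osd crush update on start = false
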
  16. fresh PVE 5.0/ceph 12.2.0 @home

    Hey all, for a few days I have been planning a new single PVE server for home use. Here is my strategy: I don't care much about availability, so a single host is OK for me. I don't care much about performance, so a single host is OK for me. I care about flexibility; that's why I want Ceph for storage and...
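
    For a single host, the default CRUSH failure domain (host) cannot be satisfied; a common adjustment before creating the OSDs, as a sketch in /etc/ceph/ceph.conf:

      [global]
      # replicate across OSDs instead of hosts on a one-node cluster
      osd crush chooseleaf type = 0
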
  17. hanging CEPH storage

    Hello Dominik, great, that solved my issue. Thank you very much. Cheers, Martin
  18. hanging CEPH storage

    I hope you all had a refreshing weekend, but I still need your help. Cheers, Martin
  19. hanging CEPH storage

    Thank you, fabian, but still the same issue. :( Cheers, Martin
  20. hanging CEPH storage

    I checked "ps -ax" and found a couple of
      31641 ?  Sl  0:00 /usr/bin/rados -p rbd -m pve1,pve2,pve3 -n client.admin --keyring /etc/pve/priv/ceph/ceph.keyring --auth_supported cephx df
      32239 ?  Sl  0:00 /usr/bin/rados -p rbd -m pve1,pve2,pve3 -n client.admin --keyring...
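
    Those look like pvestatd's periodic storage status probes piling up against an unresponsive cluster; a cleanup sketch, not the thread's confirmed fix:

      # kill the stuck status probes, then restart the stats daemon
      pkill -9 -f '/usr/bin/rados .* df'
      systemctl restart pvestatd
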
