Search results

  1. No bootable device - migrated servers cannot be rebooted

    Hello, no, not really. I still have an open ticket with Proxmox support for this. There is a way to make the VMs bootable again, and I wrote up an internal guide for us for it. But what actually happened there I don't really know. In other words, splitting up the backup jobs has...
  2. BUG: soft lockup

    Hi @e100, we have the same issues as in your screenshots above. On SLES and on Ubuntu, the console messages look just the same. Did you fix that problem? And sometimes the VMs freeze during backup; did you have that problem too? Best regards, Roman
  3. USB connection to the APC via a VM?

    Hi Fireon, one question: do you have any experience with DELL UPSs? We would like to get them running with apcupsd, but unfortunately that does not work. Do you have any experience with usbhid-ups? Best regards, Roman
  4. Problem with one node - sporadically not available pve6

    Hello everyone! We have a five-node cluster with Ceph, pve1 to pve5. The problem is that pve1 is sometimes not "available". The OSDs of that node keep working, and pve1 can be reached via ping but not via SSH. Here is a screenshot of the problem node - does anyone know why it is sometimes or...
  5. No bootable device - migrated servers cannot be rebooted

    Hello everyone! We have about 20 VMs running in a three-node cluster with Ceph. Half of the VMs were migrated with Clonezilla (P2V); the other half I converted to raw from VMware. Now the problem: a few weeks ago an Ubuntu 18.04 VM hung, and that one was from VMware...
  6. Converted/migrated servers do not reboot

    Hello! We have a three-node cluster; the storage for the VMs is Ceph. I have migrated a lot of physical servers to PVE with Clonezilla, and I have also converted roughly 15 VMware VMs to PVE, so far without issues. Now we had the problem (the third problem/server after a while) - an Ubuntu...
  7. PVE Cluster Share Store

    I don't know what you want to say with that picture, but yes, of course, Ceph is a shared storage solution that can be used in a production environment. We have been doing that for more than five years! Read what the Ceph documentation says about RAID: Avoid RAID. As Ceph handles data object redundancy and multiple parallel...
  8. PVE Cluster Share Store

    What do you mean by OSD heartbeat? OSDs for Ceph should not be on RAID - that does not work well. For Ceph, every disk should be connected directly via a SATA port. RAID is a bottleneck for Ceph.
  9. PVE Cluster Share Store

    With three nodes, a Ceph min_size of 1 will not work correctly, I guess. Stick to the defaults: size = 3 and min_size = 2. If you change the size to 2, for example, and migrate a VM, the Ceph cluster ends up in read-only mode because of the size 2 in a three-node cluster. (A short pool-setting sketch follows after these results.)
  10. Move from 5.4 to 6.0

    You want to live migrate the running VMs to the new, larger node? I did an upgrade from 5.4 to 6 without any issues. I moved the disks from Ceph storage to local storage (a disk-move sketch follows after these results); after the upgrade to 6 and to Ceph Nautilus I moved them back to Ceph. But I have not tested a 5.4 cluster with one node on PVE 6. It...
  11. PVE Cluster Share Store

    Hi! What is your configuration? The information is a bit sparse. How many nodes do you have, how many OSDs per host, and is the Ceph cluster on a physically separated network? ceph.conf? OSD heartbeat? (Commands to collect this are sketched after these results.) And please change the topic prefix "Tutorial"; I think that is wrong. Regards, Roman
  12. [SOLVED] Proxmox VE 6.0: Ceph Nautilus Extraneous Monitors?

    I changed the config, but Ceph does not work "normally". In my case 1pve5to6 is the active Ceph node; the first node does not work correctly, and starting the OSD manually fails.
  13. [SOLVED] Proxmox VE 6.0: Ceph Nautilus Extraneous Monitors?

    I had this "issue" a long time ago; with commas it did not work (PVE 2.x). OK, I had assumed that the cluster network - that is how our configuration is set up - is the same one the monitors use, but the monitors are not in that cluster network. After I installed Ceph, the Ceph cluster is the same as the...
  14. [SOLVED] Proxmox VE 6.0: Ceph Nautilus Extraneous Monitors?

    And you have a mistake in ceph.conf again: change 70 to 10 and remove the commas (marked in yellow), reboot, and it should work.
  15. [SOLVED] Proxmox VE 6.0: Ceph Nautilus Extraneous Monitors?

    Maybe this? Remove the dots. Your config file ceph.conf:
  16. Very slow iPerf performance from Proxmox VM to VMs on a different host

    The best thing is to use two switches and no Open vSwitch: one switch for the node cluster and the other switch for the Ceph cluster; the two 10 Gbit/s NICs must be physically separated. Please check this guide; then you should get the full network speed. (A ceph.conf sketch for this split follows after these results.) Best regards, Roman
  17. Proxmox and Cockpit

    Great, hitman, I am already looking forward to it! Good work so far, keep it up!
  18. Sharing Storage (Ceph or Gluster) for VMs in a 3 node scenario...

    It is not a requirement. In my environment all OSDs are SSDs, and every SSD has its own journal. Yes, try Gluster and you will see afterwards which one you prefer :-) I can still only recommend Ceph :rolleyes: Best regards
  19. Sharing Storage (Ceph or Gluster) for VMs in a 3 node scenario...

    Maybe this makes sense: one SSD as journal for all the SAS disks in a node; then you get more performance. But be careful - if that journal is broken or down, all the OSDs on that node are "dead". (A sketch of this layout follows after these results.)
  20. Sharing Storage (Ceph or Gluster) for VMs in a 3 node scenario...

    No, it is of course recommended! Three nodes are the minimum, up to n nodes... more nodes may be faster in some scenarios/cases.
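
For result 9 (size/min_size on a three-node cluster), here is a minimal sketch of checking and setting the pool replication with the standard Ceph CLI. The pool name "vm-pool" is only an example; use your own RBD pool.

    # Show the current replication settings of the pool (pool name is an example).
    ceph osd pool get vm-pool size
    ceph osd pool get vm-pool min_size

    # Apply the recommended defaults for a three-node cluster: size 3, min_size 2.
    ceph osd pool set vm-pool size 3
    ceph osd pool set vm-pool min_size 2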
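
For result 10 (moving disks off Ceph before the upgrade and back afterwards), a sketch using qm move_disk as available on PVE 5.x/6.x. The VM ID, disk slot, and storage names are placeholders for whatever your setup uses.

    # Move the disk of VM 100 from the Ceph-backed storage to local storage
    # and drop the old copy (all names here are examples).
    qm move_disk 100 scsi0 local-lvm --delete

    # After the upgrade to PVE 6 / Ceph Nautilus, move it back the same way.
    qm move_disk 100 scsi0 ceph-vm --delete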
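
For result 11, the cluster details being asked for can usually be collected with a few standard commands on any node; this is just a sketch of the obvious ones.

    # Overall Ceph health, plus which OSDs sit on which host.
    ceph -s
    ceph osd tree

    # Proxmox and Ceph package versions and the Ceph configuration in use.
    pveversion -v
    cat /etc/pve/ceph.conf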
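
For result 16 (keeping the Proxmox node cluster and the Ceph cluster on physically separate 10 Gbit networks), a sketch of pinning Ceph to its own subnet in ceph.conf. The subnets are made-up examples; the point is that they are not the subnet the node cluster uses.

    # /etc/pve/ceph.conf (excerpt) - example subnets only.
    [global]
        # Client/monitor traffic on the dedicated Ceph NIC/switch.
        public_network = 10.10.20.0/24
        # OSD replication traffic on the same dedicated network (or a third one).
        cluster_network = 10.10.20.0/24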
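
For result 19 (one SSD carrying the journal for all SAS OSDs in a node), the post talks about filestore journals; the sketch below shows the BlueStore equivalent with ceph-volume, putting the RocksDB/WAL of a spinning-disk OSD on a shared SSD. Device names are placeholders, and the failure domain is the same: if that SSD dies, every OSD whose DB/journal lives on it goes down with it.

    # Create a BlueStore OSD on a SAS disk with its DB on a partition of a shared SSD.
    # /dev/sdc (data) and /dev/nvme0n1p1 (DB partition) are example device names.
    ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p1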
