Search results

  1. PVE6.0-5: Corosync3 segfaults randomly on nodes

    Sure: root@vmb2:~# pveversion -v proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve) pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7) pve-kernel-5.0: 6.0-6 pve-kernel-helper: 6.0-6 pve-kernel-5.0.18-1-pve: 5.0.18-3 pve-kernel-4.15.18-20-pve: 4.15.18-46 pve-kernel-4.4.98-6-pve: 4.4.98-107...
  2. PVE6.0-5: Corosync3 segfaults randomly on nodes

    +1 I have got the same problem with our corosync 3 PVE6 cluster :(
  3. Ceph: same size, same weight -> why different usage?

    According to the calculator I should have 1024, and 1024 is what I have set. I have to add more OSDs soon anyway, so let's see how it develops now. But thanks for now!!
  4. Ceph: same size, same weight -> why different usage?

    You're right, OSD 10 has 66 PGs while OSD 14 has 91. But aren't the PGs also moved around so that the distribution evens out?
  5. Ceph: same size, same weight -> why different usage?

    Hello, I have a few SSDs on one host that are the same size and have the same weight, yet one (OSD 10) is at 58% usage and the other (OSD 14) at 80%. When new data is written, it goes to the nearly full one just the same, see screenshot (and the sketch after this list). Why is that?? Thanks, best regards, Stefan
  6. get sparse space back from once filled HD?

    Well, I can't enable discard via the GUI. Or do you mean I should enable it via the conf? It's a Debian LXC CT.
  7. get sparse space back from once filled HD?

    Thanks, hmm I haven't enabled discard and I am using Virtio Block too.
  8. get sparse space back from once filled HD?

    Hi, I have a CT HD with 100G; it was once filled up to 90%, and now it holds just 14% of data. When I back up the CT now, it takes a lot of time; in the beginning it was faster. Can I somehow clean that HD or get the sparse space back? (See the sketch after this list.)
  9. Proxmox VE Ceph Benchmark 2018/02

    Here are the results of my last benchmarks (Model, Size, TBW, BW, IOPS): Intel DC S4500, 480GB, 900TB, 62,4 MB/s, 15,0k; Samsung PM883, 240GB, 341TB, 67,2 MB/s, 17,2k
  10. Ceph cluster expanding?

    Data redundancy: the performance and memory usage of the 4 nodes is fine. Somehow it seems that it doesn't really make sense to add a 5th node...
  11. Ceph cluster expanding?

    But what if one of the MONs dies? Do I then just make the 4th node a monitor? Note: all my cluster nodes have just a small single boot disk. I wanted to increase redundancy by adding a 5th node even if I don't need the additional performance or disk space yet.
  12. Ceph cluster expanding?

    Hi, I currently run a 4-node Ceph cluster with a 3/2 pool; 3 nodes are monitors with OSDs and 1 holds just OSDs. Now I want to add another node and upgrade both nodes to monitors (see the sketch after this list), but stay with the 3/2 pool to keep the possibility of 2 failing nodes. Does the 5th node need to have the same...
  13. Proxmox VE Ceph Benchmark 2018/02

    Recently I benchmarked Samsung's enterprise SSD 860 DCT with 960GB with my usual benchmark setup, and the result was just horrible (see the sketch after this list). FIO command: fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test...
  14. create OSD -> always "partitions"

    Thanks Alwin!! This did the trick, and after creating the OSD it got the ID 1 ;-)
  15. create OSD -> always "partitions"

    Hmm.. I tried to add the SSD again and checked the output of syslog: Feb 13 11:10:00 ceph8 systemd[1]: Starting Proxmox VE replication runner... Feb 13 11:10:01 ceph8 systemd[1]: Started Proxmox VE replication runner. Feb 13 11:10:52 ceph8 pvedaemon[2536]: <root@pam> starting task...
  16. create OSD -> always "partitions"

    OK, I tried both partprobe and a reboot, but that SSD is not going to be turned into an OSD.. Now I tried to add a completely new SSD, but I get the same result. Could the node be the faulty part? What can I try, besides reinstalling Proxmox? ;-) (See the sketch after this list.)
  17. create OSD -> always "partitions"

    Hi, I wanted to add a new SSD and make an OSD out of it, but after 'Create: OSD' it's marked as 'partitions'. OK, it's not the first time in my life, so I issued dd if=/dev/zero of=/dev/sdd count=1000000 followed by ceph zap /dev/sdd. But still 'partitions'... I deleted all partitions with fdisk...
  18. vzdump: rbd: sysfs write failed ?

    No. Workaround: the root disks of those VMs are stored on local-zfs. Since this workaround (about Oct 2018), backup to the same NFS store works without any issues.
  19. VMware Tools to control VM from Host?

    Hi, I have a virtualized PBX (Askozia) VM which I can't control through the GUI. If I send a shutdown, nothing happens. Now I have seen that I could activate VMware Tools (or Hyper-V Linux Integration Services); it's just a checkbox. Should I try to activate it (see the sketch after this list)? Can KVM use those tools somehow to...
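
Command sketches for some of the threads above follow; each is a hedged illustration, not output taken from the threads themselves.

For the uneven OSD usage in items 3-5: with equal size and weight, CRUSH can still place an unequal number of PGs on the OSDs, and Ceph does not move PGs on its own just because one OSD fills faster. A minimal sketch of how one might inspect and even this out, assuming a Luminous-or-newer cluster with the mgr balancer module available; the OSD numbers come from the thread, everything else is illustrative:

    # Show per-OSD utilisation and PG counts to confirm the skew
    ceph osd df tree

    # Option A: let the mgr balancer redistribute PGs (upmap mode requires
    # all clients to speak Luminous or newer)
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status

    # Option B: a one-off reweight of over-full OSDs (here anything above
    # 120% of the average utilisation)
    ceph osd reweight-by-utilization 120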
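For items 6-8 (reclaiming sparse space): freed blocks only shrink the image when the guest issues discards down to a thin-provisioned volume. A hedged sketch; the VMID 101 and the disk name are placeholders, not taken from the thread:

    # For an LXC container, trimming can be triggered from the host
    # while the CT is running:
    pct fstrim 101

    # For a KVM guest, the virtual disk needs discard enabled first
    # (older QEMU only passes discard through VirtIO SCSI, not VirtIO Block):
    qm set 101 --scsi0 local-zfs:vm-101-disk-0,discard=on
    # ...then trim from inside the guest:
    fstrim -av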
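The benchmark in items 9 and 13 is the usual single-job synchronous 4k write test from the Proxmox Ceph benchmark paper. A sketch of the command as visible in the thread, wrapped for readability, with a caveat added as a comment:

    # WARNING: this writes directly to the raw device and destroys its contents.
    fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test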
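For the cluster expansion in items 10-12: with a 3/2 pool the replica count stays the same however many nodes there are; what the fifth node adds is another failure domain and, if it runs a monitor, a five-member MON quorum that can lose two monitors instead of one. A hedged sketch of adding the new node as a monitor on PVE 6 (on PVE 5 the subcommands were pveceph createmon and pveceph createosd); /dev/sdX is a placeholder:

    # On the new node, after it has joined the PVE cluster:
    pveceph install               # install the Ceph packages
    pveceph mon create            # add a monitor, extending the quorum
    pveceph osd create /dev/sdX   # add its disks as OSDs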
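For the 'partitions' problem in items 14-17: OSD creation refuses disks that still carry partition tables or old Ceph/LVM signatures, and a dd over the first blocks does not remove the backup GPT stored at the end of the disk. A hedged sketch of a fuller wipe, assuming the disk is /dev/sdd as in the thread and holds nothing worth keeping:

    # Remove filesystem/RAID/LVM signatures and both GPT copies
    ceph-volume lvm zap --destroy /dev/sdd
    sgdisk --zap-all /dev/sdd
    wipefs -a /dev/sdd

    # Re-read the partition table, then create the OSD
    partprobe /dev/sdd
    pveceph osd create /dev/sdd   # PVE 6; on PVE 5.x: pveceph createosd /dev/sdd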
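On item 19: VMware Tools and the Hyper-V integration services target other hypervisors; under KVM the analogous mechanism is the QEMU guest agent, which lets the host trigger a clean shutdown inside the guest. A hedged sketch, assuming the PBX image allows installing packages; the VMID 102 is a placeholder:

    # Inside the guest (Debian/Ubuntu based):
    apt install qemu-guest-agent

    # On the PVE host: enable the agent for the VM, then shutdown is routed
    # through the agent instead of ACPI
    qm set 102 --agent enabled=1
    qm shutdown 102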
