Search results

  1. VM cloning is slow

    Please explain: what is a linked clone?
  2. shared WAL between CEPH OSDs?

    Do I need to use one WAL per OSD if I use spinning disks?
  3. bad ceph performance on SSD

    Did you ever get to the bottom of this?
  4. Multiple passthrough disk to VM

    How exactly does one pass through an SSD from the host node to a VM? (a command sketch follows this list)
  5. shared WAL between CEPH OSDs?

    What would happen if the WAL disk fails?
  6. Question regarding network bond config

    So how does one get 2 Gb/s across 2 NICs? (a bond config sketch follows this list)
  7. Hardware compatibility with DELL server

    Which RAID cards do you use? Some Dell RAID cards don't offer HBA mode.
  8. shared WAL between CEPH OSDs?

    Is it possible to share a CEPH WAL between all the OSDs, instead of having to partition the WAL? If I have 12 drives, I have to create 12 equal partitions on the WAL and assign each partition to an OSD. Is there a better way to assign the WAL? (a sketch follows this list)
  9. ZFS and Ceph on same cluster

    Is it possible to move a VM between CEPH and ZFS in a mixed environment like this? (a sketch follows this list)
  10. Redistribute traffic over bond

    Please explain this?
  11. Question on NVME issues

    That's quite a bit of performance loss. Is this expected?
  12. 1 node offline after changing host hardware

    I want/wanted to move CEPH to the 2nd IP subnet, but that failed. Both IP subnets can communicate, and all worked fine until I had to reinstall Proxmox onto another drive. So, shortly after my last reply, I added the 2nd IP subnet (rather, 192.168.11.243) to SRV3, and now all 3 nodes can see...
  13. 1 node offline after changing host hardware

    root@SRV1:~# corosync-cfgtool -s
    Local node ID 1, transport knet
    LINK ID 0
            addr    = 192.168.11.241
            status:
                    nodeid:  1:    localhost
                    nodeid:  2:    connected
                    nodeid:  3:    disconnected
    root@SRV1:~# corosync-cfgtool -s
    Local...
  14. 1 node offline after changing host hardware

    Thanks, I ran the update and rebooted. Now SRV3 is on its own, and SRV1 and SRV2 are in the cluster:
    root@192.168.10.241's password:
    Linux SRV1 5.4.98-1-pve #1 SMP PVE 5.4.98-1 (Mon, 15 Feb 2021 16:33:27 +0100) x86_64
    The programs included with the Debian GNU/Linux system are free software...
  15. 1 node offline after changing host hardware

    root@SRV1:~# pveversion -v
    proxmox-ve: 6.3-1 (running kernel: 5.4.98-1-pve)
    pve-manager: 6.3-4 (running version: 6.3-4/0a38c56f)
    pve-kernel-5.4: 6.3-5
    pve-kernel-helper: 6.3-5
    pve-kernel-5.4.98-1-pve: 5.4.98-1
    pve-kernel-5.4.65-1-pve: 5.4.65-1
    pve-kernel-5.4.34-1-pve: 5.4.34-2
    ceph...
  16. 1 node offline after changing host hardware

    OK, I can now log in. I had to run the following commands on all 3 servers:
    On every node: systemctl stop pve-cluster (this may take a while)
    On every node: sudo rm -f /var/lib/pve-cluster/.pmxcfs.lockfile
    On each node, one by one: systemctl start pve-cluster
    And then it's like...
  17. 1 node offline after changing host hardware

    That's the problem. When I log in to SRV1, only SRV1 is online. When I log in to SRV2 and SRV3, both SRV2 and SRV3 are online, almost as if there are 2 clusters.
    root@SRV1:~# pvecm status
    Cluster information
    -------------------
    Name:              WHZ
    Config Version:    5
    Transport:         knet...
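
On the disk passthrough question (item 4), a minimal sketch of the common approach: hand the whole block device to the guest via its stable /dev/disk/by-id/ path. The VM ID (100), bus slot (scsi1), and disk serial below are placeholders, not taken from the threads above:

    # list stable disk identifiers and pick the SSD to hand to the VM
    ls -l /dev/disk/by-id/
    # attach it to VM 100 as scsi1 (ID, slot, and serial are examples)
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL

This maps the block device into the VM; it is not PCIe passthrough of the disk controller.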
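
On the bond question (item 6): a single TCP stream still tops out at one NIC's speed; a bond reaches 2 Gb/s only in aggregate, across several streams. A minimal /etc/network/interfaces sketch, assuming hypothetical NIC names (eno1/eno2), placeholder addresses, and a switch configured for LACP:

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.241/24
        gateway 192.168.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

The layer3+4 hash policy spreads connections across both links per flow; any single flow still sees at most 1 Gb/s.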
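
On the shared WAL question (items 2, 5, and 8): on Proxmox there is no need to pre-partition the fast device by hand; pveceph carves a logical volume per OSD out of it. A sketch assuming hypothetical device names (/dev/sdb ... /dev/sdm as the twelve spinners, /dev/nvme0n1 as the shared WAL device):

    # each call creates an OSD on a spinner and adds another
    # WAL logical volume on the shared NVMe device
    pveceph osd create /dev/sdb --wal_dev /dev/nvme0n1
    pveceph osd create /dev/sdc --wal_dev /dev/nvme0n1
    # ...repeat for the remaining data disks

As for item 5's question: if the shared WAL device dies, every OSD whose WAL lives on it fails with it, so one fast device shared by 12 OSDs is a 12-OSD failure domain.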
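
On moving a VM between Ceph and ZFS (item 9): storage migration works across storage types. A sketch with a hypothetical VM ID (101), disk (scsi0), and target storage name (local-zfs):

    # move the volume to the ZFS storage; the source copy is
    # kept unless --delete is set
    qm move_disk 101 scsi0 local-zfs --delete 1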
