Search results

  1. transform virtio-blk to scsi-virtio

    a Linux VM? Hard-coded /dev/vdX in the fstab? Try changing the mounts to UUID style and then recreate the initramfs (see the fstab sketch after these results).
  2. cant change the MTU to 9000

    Add "mtu 9000" to the iface definitions of the interfaces:

        iface enp4s0 inet manual
            mtu 9000
        iface enp6s0 inet manual
            mtu 9000
        iface enx8cae4cee5150 inet manual
            mtu 9000

    You also have to change the MTU in the VMs and on the local PCs, and _all_ your switches have to support it. I...
  3. transform virtio-blk to scsi-virtio

    Yes, the extra dummy drive is the way to have it work in Windows.
  4. 3-node HA cluster - what happens if 2 nodes fail?

    Ceph is very dependent on latency. It works very well, even with modest-size clusters, given at least a 10 GbE network and fast SSD OSDs, or HDD OSDs backed by SSD journals.
  5. Proxmox not installing (sp5100_tco: I/O address 0x0cd6 already in use)

    Have a look at the BIOS settings for USB; maybe some change there is possible, as it tries the USB stick as a CD. Maybe you also have a real CD drive to test with, or a USB CD drive? You can also try this method: https://forum.proxmox.com/threads/proxmox-installation-via-pxe-solution.8484/
  6. Proxmox not installing (sp5100_tco: I/O address 0x0cd6 already in use)

    The first error is "could not insert kvm_amd", so check in the BIOS whether virtualization is enabled. And yes, it could also be a problem with a too-new chipset.
  7. SAS or SATA disks

    SAS is always dual-head; you will not see that with just one controller.
  8. SAS or SATA disks

    If capacity rather than speed is the problem, you can also use SATA enterprise disks; we use many in 2 TByte+ sizes. SAS is a must in an HA filer configuration, where you want to fail over disk pools between two heads. SAS is faster for high-speed SSDs, as there is 12 Gbit/s (and 24 Gbit/s coming...
  9. 3-node HA cluster - what happens if 2 nodes fail?

    There is no advantage to Raspberry Pis in your configuration. No, you will not need to restore; just get one of the failed nodes up again. If your hardware is so flaky that you fear that many failures, the hardware should go to the trash bin before you ever put a system on it. With...
  10. [SOLVED] DNS settings in Proxmox changing on reboot

    auto vmbr1
    iface vmbr1 inet manual
        bridge_ports enp2s0
        bridge_stp off
        bridge_fd 0
  11. 3-node HA cluster - what happens if 2 nodes fail?

    This is completely unnecessary, and will not help with Ceph anyway. If you have 3 nodes running Ceph you will need 2 of them up anyway, as the usual rule for pools is min 2 / size 3. That means at least two copies of an object have to be available, and they need to be on different hosts. And be...
  12. [SOLVED] DNS settings in Proxmox changing on reboot

    Solution 3: you should _never_ run a virtualisation host with DHCP; always use a fixed IP where possible. BTW, it is not necessary to give the virtualisation host an IP in a network, as long as there is no special reachability requirement. Just the VMs need to have access to the LAN.
  13. 3-node HA cluster - what happens if 2 nodes fail?

    1 node can fail. An additional witness box does not help, as you need n/2+1 nodes as quorum to avoid split-brain situations (which would be really bad). So a 4-node cluster will need 3 nodes alive, so the witness box does not help in this case. The witness box is a good idea to run a 2-node...
  14. Proxmox / Ceph 5-Nodes Setup

    Ahh, sorry, that was a misreading on my part; of course the first disk should be OS only. So: 2 x 2 TB SATA + journal SSD, or the alternative with 3 disks and the PCIe NVMe adapter (Delock has cheap NVMe adapters). I would not recommend installing the OS on a USB stick; they would not live long enough ...
  15. Proxmox / Ceph 5-Nodes Setup

    Instead of the disk configuration you mentioned: 3 x 2 TB SATA + 1 journal SSD, or alternatively: 4 x 2 TB SATA + 1 M.2 NVMe SSD (on a cheap PCIe M.2 adapter). For the network: 1 GbE is a little bit slow for Ceph; better to use 10 GbE. Maybe the following: 2 x 1 G ports LACP for VM...
  16. Ceph Luminous with Bluestore - slow VM read

    Which driver did you use inside the VMs? For best performance use virtio-scsi or virtio-blk (virtio-scsi also gives you TRIM capabilities).
  17. Low Disk Performance in VM

    No real idea, but some thoughts: -> the HP P800 is a somewhat oldish 3 Gbit SAS controller, maybe not very well suited for SSDs -> a RAID 0 should be fast, but do you really want to live with the risk? If one of your SSDs goes bad, you will lose everything -> better to try to get a cheap SAS HBA (maybe...
  18. Ceph really (!) is slow

    You probably have no journal / block.db on SSD for your OSDs? It would greatly help to add SSDs for a journal (aka block.db with BlueStore); see the sketch after these results. You could combine all journals for one host on one SSD. E.g. I use a Samsung 960 EVO NVMe SSD (256 GB) with a cheap PCIe-to-NVMe adapter board with a...
  19. Port forwarding same port to multiple VMs using same port as well

    The combination of IP + port number is the address of a single service! Think of it like house number + flat number (or do you want people coming into your flat erratically?). Of course it is possible to do some load balancing for some services (probably in one VM), but all VMs behind the balancer... (see the DNAT sketch after these results)
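
Sketch for result 1: a minimal example of switching an fstab entry from a hard-coded /dev/vdX device to UUID style, assuming a Debian-based guest with an ext4 root. The UUID and device names below are placeholders; use what blkid reports on your system.

    # /etc/fstab - replace the hard-coded device with its UUID
    # (find the real UUID with: blkid /dev/vda1)
    UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /  ext4  errors=remount-ro  0  1

    # then recreate the initramfs so the change is picked up at boot
    update-initramfs -u -k all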
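Sketch for result 18: creating a BlueStore OSD with its block.db on a separate SSD, as suggested there. The device paths are examples, and ceph-volume options can vary between Ceph releases, so treat this as a starting point rather than the exact commands.

    # data on the slow HDD, block.db (metadata/journal role) on an NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1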
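Sketch for result 19: since one IP + port pair addresses exactly one service, forwarding the "same" port to several VMs means giving each VM a distinct external port (or putting a load balancer in front). An iptables DNAT illustration, assuming vmbr0 faces the outside and the VM addresses are placeholders:

    # port 80 on two VMs, exposed on two different external ports
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8081 -j DNAT --to-destination 192.168.1.11:80
    iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8082 -j DNAT --to-destination 192.168.1.12:80
    # one external port can only be forwarded once: the first matching rule wins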
