Search results

  1. ha-manager VM's going in error state

    Yep, but it times out because the kernel goes unresponsive with the large NFS writes.
  2. ha-manager VM's going in error state

    It really looks like an NFS issue. As long as I ran vzdump onto an NFS share, I ran into this problem. Even limiting the bandwidth didn't solve it; it just reduced how often it happened. It was even so bad that some nodes were fenced. Now I do the backups via CIFS, and it works...
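
    A minimal sketch of moving the dump target to CIFS, assuming a hypothetical NAS at 192.168.1.20 with a share named "backup" (storage name, VMID and credentials are placeholders):

        # register the CIFS share as a backup storage
        pvesm add cifs nas-backup --server 192.168.1.20 --share backup \
            --username backupuser --password secret --content backup
        # point vzdump at it
        vzdump 100 --storage nas-backup --mode snapshot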
  3. Vlan configuration

    sure, of course you have to configure the switch correctly
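
    For reference, a typical VLAN-aware bridge in /etc/network/interfaces on the Proxmox side, assuming eth0 is the uplink; the matching switch port then has to be a trunk carrying the tagged VLANs:

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_vlan_aware yes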
  4. Proxmox + Ceph Cluster Network Settings

    With non-stacking switches I have always had bad experiences with bonding modes other than Active-Backup (mode 1). Maybe you will have more luck with mode 6. But I doubt it will work by bonding the VLANs together; you will need to bond the underlying interfaces and then define the...
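
    A sketch of that layout in /etc/network/interfaces, assuming eth0/eth1 as the underlying interfaces: the bond carries the physical links, and the VLANs sit on top of it.

        auto bond0
        iface bond0 inet manual
            slaves eth0 eth1
            bond_mode active-backup
            bond_miimon 100

        # example VLAN 100 on top of the bond
        auto bond0.100
        iface bond0.100 inet static
            address 10.0.100.1
            netmask 255.255.255.0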
  5. After upgrade from 5.1 to 5.2 corosync failed to start due to DNS lookup failure

    It is always a good idea to put the host addresses needed for booting into /etc/hosts. Booting a node whose DNS runs in one of its own guests is always a hen-and-egg game (who comes up first?).
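
    For example, with a hypothetical three-node cluster, /etc/hosts on every node would carry all peers so corosync can start without DNS:

        192.168.1.11 pve1.example.com pve1
        192.168.1.12 pve2.example.com pve2
        192.168.1.13 pve3.example.com pve3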
  6. HA across different locations

    There are some showstoppers for this:
    -> Latency will bog down Ceph severely.
    -> How do you want to handle communication failure? You will lose too many OSD hosts at once.
    -> How would you avoid a split-brain situation on communication failure?
    Maybe better to replicate the VMs in some way to...
  7. Install Issues

    By the way, the 9210-8i can handle hundreds of disks; it is just a matter of the SAS backplane in the computer. And as Mihai writes: put the OS (and nothing else) on a 2-disk mirror and use the other disks for data.
  8. Install Issues

    Did you put the system on a 6-disk pool (3 mirror vdevs)? I would recommend putting the OS onto a mirror of 2 disks (they can be small). Also look into the BIOS boot setup: Supermicro BIOSes tend to play dice with boot disks on a SAS HBA.
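
    Assuming the installer already put the OS on a 2-disk ZFS mirror (rpool), the remaining disks can go into a separate data pool, e.g. as striped mirrors (pool and device names are placeholders):

        zpool create -f tank mirror sdc sdd mirror sde sdf
        zpool status tank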
  9. Duplicate address detected (DAD) on guest ipv6 addresses

    Yes, at least at my site layer3+4 works flawlessly, but there is a warning about packet misordering in some cases; I just wanted to note this.
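
    The hash policy is set on the bond itself, e.g. for an LACP (802.3ad) bond (interface names are placeholders):

        auto bond0
        iface bond0 inet manual
            slaves eth0 eth1
            bond_mode 802.3ad
            bond_miimon 100
            bond_xmit_hash_policy layer3+4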
  10. Duplicate address detected (DAD) on guest ipv6 addresses

    The reason is bonding mode "2" (balance-xor). As the switch(es) are not aware of the bonding, they send the multicast packets for neighbor discovery out again on the second port of the bond. From my experience with bonding I would strongly advise against bonds with non-bonding...
  11. [SOLVED] TRIM SSDs

    One more note: if you use ZoL (ZFS on Linux), there is currently no TRIM support in ZFS.
  12. [SOLVED] TRIM SSDs

    I would schedule fstrim on a weekly basis. Depending on the SSD, TRIM can trigger garbage collection and add unwanted latency. The general advice in the Linux world is to trim weekly. The systemd units are in the examples, so you can enable it this way: cp...
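
    On Debian stretch the units usually ship under /usr/share/doc/util-linux/examples/, so enabling the weekly timer looks roughly like this:

        cp /usr/share/doc/util-linux/examples/fstrim.service /etc/systemd/system/
        cp /usr/share/doc/util-linux/examples/fstrim.timer /etc/systemd/system/
        systemctl enable fstrim.timer
        systemctl start fstrim.timer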
  13. How do I convert an IDE disk to VirtIO?

    Uhhm, that is a little bit complicated. But on a fresh install you can do it this way: create both the disk and the network device as VirtIO, and add a second CD-ROM with the VirtIO driver CD. Then you can install both drivers during the install process.
  14. How do I convert an IDE disk to VirtIO?

    Start the VM again with IDE, then add a dummy drive with VirtIO (preferably use VirtIO SCSI!). You can then install the drivers correctly with the Device Manager. Then shut down the VM again, change the drive type to VirtIO (SCSI if you use that!), remove the dummy drive and boot up again. Then it...
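
    The same procedure from the CLI, with a hypothetical VMID 100 and storage local-lvm (volume names will differ):

        qm set 100 --scsihw virtio-scsi-pci    # switch the controller to VirtIO SCSI
        qm set 100 --scsi1 local-lvm:1         # 1 GiB dummy disk so Windows loads the driver
        # boot, install the driver via the Device Manager, shut down, then:
        qm set 100 --delete scsi1,ide0
        qm set 100 --scsi0 local-lvm:vm-100-disk-1   # reattach the real disk as SCSI
        qm set 100 --boot c --bootdisk scsi0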
  15. CEPH Device Class Incorrect

    The H700 is a RAID-only card; I suppose you created a RAID0 for every disk? Ceph then has no idea of the real nature of the device behind it. Keep your hands off MegaRAID cards for Ceph and ZFS. Try to replace the H700 with an HBA.
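
    If swapping the controller has to wait, the class can at least be corrected by hand (osd.3 and ssd are placeholders):

        ceph osd crush rm-device-class osd.3
        ceph osd crush set-device-class ssd osd.3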
  16. Reinstalled Proxmox, How to add ZFS Pool Back Without Losing Data?

    Look into the man page of zpool; "zpool import" is the command you are looking for.
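
    Roughly, with a hypothetical pool named tank (-f forces past the hostid of the old installation):

        zpool import          # scan attached disks and list importable pools
        zpool import -f tank  # import the pool found above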
  17. Host IP Discovery

    One way is to run lldpd on both the host and the VM.
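
    Roughly, on Debian-based hosts and guests:

        apt install lldpd
        lldpcli show neighbors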
  18. qdevice for a scalable cluster

    In a different environment it should be OK; I did it also for migrations. On the same cluster it is stupid, of course. Sincerely, Klaus
  19. Move from LVM to Shared Storage

    You can move a disk image from the GUI without shutting down the VM; e.g. we moved many images from NFS to Ceph online without any hassle during a migration.
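
    The CLI equivalent, with a hypothetical VMID 100, disk virtio0 and target storage ceph-pool; --delete drops the source image after a successful copy:

        qm move_disk 100 virtio0 ceph-pool --delete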
  20. Hyper converged setup

    The journal is per OSD, though you can use one SSD with partitions for many OSDs, and no, you don't need extra redundancy on the journal device. In the ideal case the read speed is the read speed of the OSD device on which the corresponding PGs sit. More OSDs, more available bandwidth. The...
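
    On PVE 5 that looks roughly like the following, pointing several OSDs at the same journal SSD; pveceph partitions the journal device itself (device names are placeholders):

        pveceph createosd /dev/sdd --journal_dev /dev/sdb
        pveceph createosd /dev/sde --journal_dev /dev/sdb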