Search results

  1. jsterr

    Reboot Issue

    Hello! Not sure, but you do not need a vmbr to use corosync. Just put the IP directly on eno2 and reduce unneeded complexity and network layers. Also, please configure a second ring; you don't have any redundancy if eno2 fails. I would also put the qdevice in the same network. Just put 2 IPs on...
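    A minimal sketch of that layout, with placeholder addresses (10.10.10.x for ring0 directly on eno2, 10.10.20.x for a second ring on another NIC):

        # /etc/network/interfaces -- IP directly on the NIC, no bridge needed for corosync
        auto eno2
        iface eno2 inet static
            address 10.10.10.11/24

        # /etc/pve/corosync.conf -- two ring addresses per node for redundancy
        node {
          name pve1
          nodeid 1
          quorum_votes 1
          ring0_addr 10.10.10.11
          ring1_addr 10.10.20.11
        }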
  2. jsterr

    proxmox 8.x Problem using vlan and bonding

    I would recommend starting with a clean network config from scratch and using SDN with VLAN zones to achieve what you want. There's good documentation for this in the pve-docs. https://pve.proxmox.com/pve-docs/chapter-pvesdn.html#pvesdn_zone_plugin_vlan
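    As a rough sketch of the SDN route (zone, vnet, and tag values here are placeholders):

        # create a VLAN zone on the existing bridge, then a vnet with the desired tag
        pvesh create /cluster/sdn/zones --zone vlanz --type vlan --bridge vmbr0
        pvesh create /cluster/sdn/vnets --vnet vnet100 --zone vlanz --tag 100
        # apply the pending SDN configuration cluster-wide
        pvesh set /cluster/sdn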
  3. jsterr

    proxmox 8.x Problem using vlan and bonding

    auto lo
    iface lo inet loopback

    # management lacp
    auto bond0
    iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-downdelay 200
        bond-updelay 200
        bond-mode active-backup

    # management bridge
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.6/24...
  4. jsterr

    Manual VM Migration in Proxmox HA Setup with Failed ZFS

    If you have a replication job up and running and HA enabled for this VM, then your VM should automatically start on the other node. It's important that pool names are identical on both systems. Something in your setup must be wrong, as it should work as I described. If you want some help, post...
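    A minimal sketch of that setup, assuming VM 100 and a second node named pve2 (both placeholders):

        # replicate VM 100 to pve2 every 15 minutes (job id 100-0)
        pvesr create-local-job 100-0 pve2 --schedule "*/15"
        # put the VM under HA management so it is restarted on the surviving node
        ha-manager add vm:100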
  5. jsterr

    proxmox 8.x Problem using vlan and bonding

    Please use the code formatting in the post; it's very difficult to read otherwise.
  6. jsterr

    OVS trunk problem

    Why do you have mtu=1 in your VM's network adapter config?
  7. jsterr

    Genoa Cluster (CPU Bottleneck with Storage)

    1) With two servers you can't do Ceph, so I would go for a dual-node ZFS cluster (you still need a third vote via an external quorum device). 2) You should be fine with those CPUs, but of course it depends how much you want to use for VM/CT resources. 3) ZFS ARC is only for ZFS reads, so those will be...
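    For the third vote, a minimal sketch assuming a QDevice host reachable at 10.10.10.5 (placeholder address):

        # on the external host: apt install corosync-qnetd
        # on both cluster nodes: apt install corosync-qdevice, then from one node:
        pvecm qdevice setup 10.10.10.5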
  8. jsterr

    Reboot Issue

    Please post your /etc/network/interfaces file and your /etc/pve/corosync.conf; this is most likely a corosync issue. Do you have corosync running on its own NIC?
  9. jsterr

    my first Proxmox installation: copying a Windows 10 ISO to the Proxmox server with a web browser

    Ceph only starts to make sense from 3 Proxmox VE nodes and is a scale-out storage, meaning it scales (if desired) nearly without limit. It comes with added complexity, but has been fully integrated into PVE for several versions now for block storage (i.e. storing virtual disks...
  10. jsterr

    Multi Datacenter Management

    Just in case you don't know: you can use qm remote-migrate to live-migrate between two Ceph clusters. Please try it first with a test machine.
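    A rough sketch of the invocation (the VM IDs, host, API token, and fingerprint are all placeholders):

        qm remote-migrate 100 100 \
          'host=203.0.113.10,apitoken=PVEAPIToken=root@pam!mig=xxxx,fingerprint=AA:BB:...' \
          --target-bridge vmbr0 --target-storage ceph-pool --online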
  11. jsterr

    ZFS RAID 10 with 6 Samsung PM893 1.92TB SATA, terrible write speeds

    That's what I mean ;-) Yes, Solidigm is correct. Personally, I only consider SATA relevant as a boot medium for servers that don't have 2x M.2 NVMe onboard.
  12. jsterr

    ZFS RAID 10 with 6 Samsung PM893 1.92TB SATA, terrible write speeds

    False alarm, the customer just had the same name, but it was a different issue (also Samsung SSDs). We still go through a mid-double-digit number of the 960 GB drives per month, and other sizes accordingly. I haven't heard any further reports either. So a problem like this is never...
  13. jsterr

    ZFS RAID 10 with 6 Samsung PM893 1.92TB SATA, terrible write speeds

    I'll post an update on Monday; the customer from back then has been in touch again, so I may be able to share news.
  14. jsterr

    Extending SSD after Cloning

    https://networklessons.com/uncategorized/extend-lvm-partition Check this out; it's a step-by-step tutorial.
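    The usual sequence after enlarging the virtual disk looks roughly like this (assumes /dev/sda3 is the LVM PV and an ext4 LV at /dev/pve/root; adjust to your layout):

        growpart /dev/sda 3                  # grow the partition to fill the disk
        pvresize /dev/sda3                   # let LVM see the new space
        lvextend -l +100%FREE /dev/pve/root  # grow the logical volume
        resize2fs /dev/pve/root              # grow the ext4 filesystem online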
  15. jsterr

    Migrate machine

    Hi @kenny82, no, HA is currently only triggered when your corosync ring0 has a problem. There's a feature request to have multiple conditions that could lead to an HA scenario, e.g. when the VM network is down on one node, use the storage network to live-migrate to the other hosts, where the VM network...
  16. jsterr

    Option bdev_enable_discard & bdev_async_discard in Ceph

    What's the reason you're looking for these options? I haven't found any good documentation on them, and afaik Ceph automatically trims storage if it gets the discard/trim commands from the RBD above (like a Windows or Linux VM that has Discard and SSD emulation enabled in the VM hardware). Or am I...
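    For reference, a minimal sketch of how discard is usually passed down from the guest (VM 100 and the storage/disk names are placeholders):

        # expose the disk with discard and SSD emulation enabled
        qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on,ssd=1
        # inside a Linux guest, trims then propagate down to the RBD layer
        fstrim -av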
  17. jsterr

    migrate en masse within one node to another disk

    You're asking for mass storage migration of multiple disks in a VM to a different storage? If yes, this is not possible atm via the web UI. You need to manually migrate the disks one after another.
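    On the CLI this can at least be scripted, one disk after another (the VM ID, disk names, and target storage are placeholders):

        for disk in scsi0 scsi1 scsi2; do
            qm disk move 100 "$disk" newpool --delete 1
        done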
  18. jsterr

    [TUTORIAL] Creating 2 node Proxmox VE cluster with StorMagic software defined storage

    How do you handle VM snapshots? It looks like svSAN is using iSCSI, which should mean you are not able to do snapshots in the PVE web UI?
  19. jsterr

    Periodic update of a replications server

    Yes. That's how replication works. If server 1 goes down, the VM starts on server 2 with the data from the last sync. Automatically if you use HA; non-automatically if you move the qm file in /etc/pve/qemu-server manually to the right directory (this is usually done by the ha-manager). Needs to be done...
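    A minimal sketch of that manual step, assuming VM 100 and nodes pve1/pve2 (only do this when pve1 is really down; /etc/pve is cluster-wide):

        mv /etc/pve/nodes/pve1/qemu-server/100.conf /etc/pve/nodes/pve2/qemu-server/
        qm start 100   # on pve2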
  20. jsterr

    Cloning VM Template With Cloud Init Fails

    Hello, don't use IDE for the cloud-init drive, there are multiple issues with it. Try this one as a reference: https://www.thomas-krenn.com/de/wiki/Cloud_Init_Templates_in_Proxmox_VE_-_Quickstart and you should use virtio-scsi-single for the scsihw controller.
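    A minimal sketch of those two settings, assuming template VM 9000 on storage local-lvm (both placeholders):

        qm set 9000 --scsihw virtio-scsi-single
        # attach the cloud-init drive on SCSI instead of IDE
        qm set 9000 --scsi1 local-lvm:cloudinit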