Search results

  1. [SOLVED] ZFS replication for VMs with multiple disks

    Definitely the problem was between keyboard and chair :-P After unchecking "skip replication" for the second disk everything went fine; I just ping-ponged the VM back and forth and only the memory was fully transferred (as it should be), so... many thanks for the hint!
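    The same fix can also be checked and applied from the CLI; a minimal sketch, assuming VMID 101 and the local-zfs storage used in this thread (the GUI "skip replication" checkbox maps to the per-disk replicate flag, and the exact volume name below is a placeholder):

      # Spot any disk still excluded from replication (replicate=0)
      qm config 101 | grep -E '^(scsi|virtio|sata|ide)[0-9]+:'

      # Re-enable replication for the second disk (hypothetical volume name)
      qm set 101 --scsi1 local-zfs:vm-101-disk-1,discard=on,replicate=1

      # Check the replication job state afterwards
      pvesr status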
  2. [SOLVED] ZFS replication for VMs with multiple disks

    Just a test machine:

      agent: 1
      boot: c
      bootdisk: scsi0
      cores: 1
      cpu: cputype=host
      memory: 1024
      name: test
      net0: virtio=xxxxxxxxxxxxxxxx,bridge=vmbr0
      numa: 0
      onboot: 1
      ostype: l26
      scsi0: local-zfs:vm-101-disk-0,discard=on,format=raw,size=20G
      scsi1...
  3. [SOLVED] ZFS replication for VMs with multiple disks

    Also, I'm reading https://pve.proxmox.com/wiki/Storage_Replication: "Guests with replication enabled can currently only be migrated offline." Isn't the documentation a little outdated?
  4. [SOLVED] ZFS replication for VMs with multiple disks

    Hello. I was just testing Proxmox 6.2 and I was wondering why only the second disk of the VM was transferred fully, while the first one was just incremental (and the zvol was there). Well, actually it seems that I had just set up a replication (from the GUI) and forgotten about it :p. But the question remains...
  5. Upgrade test scenario (proxmox 5 to 6)

    The second interface was not used in this scenario (not even configured); it was put there because every server now has at least 2 (mostly 4) NICs included. But as long as only ring0 addresses were involved in the tests, it shouldn't matter; maybe in some further test scenarios, like I've said...
  6. Upgrade test scenario (proxmox 5 to 6)

    Prerequisites for this test scenario:
      - 3 VMs with 2 eth (first one configured), installed Proxmox 5.4 on test1, test2, test3, updated to the latest package versions
      - created cluster on test1 (CLI sketch below)
      - test2 and test3 joined cluster through test1
      - VM test4 installed with Proxmox 6.0, latest updates...
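    A minimal CLI sketch of the cluster creation and join steps above, using the standard pvecm tool (the cluster name "testcluster" is an assumption, and test1 stands for that node's resolvable name or IP):

      # On test1: create the cluster
      pvecm create testcluster

      # On test2 and on test3: join through test1
      pvecm add test1

      # On any node: verify membership and quorum
      pvecm status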
  7. Thoughts on coming Proxmox build

    By doing that you lose one of the best strengths and features of ZFS: error auto-detection and auto-healing. Well, in fact error detection will still work, but it will be like: ZFS found an error, ZFS can do nothing, it's your problem now! Remember: hardware non-raid 0 (and non zfs, non raid-0...
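    A minimal sketch of what detection-without-healing looks like, assuming a single-vdev pool named rpool sitting on top of the hardware RAID (so ZFS has no redundancy of its own to repair from):

      # Force a full read-and-verify pass over the pool
      zpool scrub rpool

      # CKSUM counters and the "errors:" section report what ZFS detected;
      # without a mirror or raidz vdev it can only report, not self-heal
      zpool status -v rpool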
  8. Thoughts on coming Proxmox build

    Not quite the best solution; I hope you do have enough cache memory and battery for the P410i, you cannot create more than 2 RAID arrays (even RAID 0) without it. And good luck when replacing hard drives with ZFS over RAID 0, it will really be a PITA. IIRC, from Gen8 of DL380 you have the P420 as raid...
  9. Upgrading to Proxmox 6 by reinstalling servers

    Hi. Generally speaking, I always prefer to reinstall rather than simply upgrade a server. Maybe it's more work, but it's cleaner and "subtle differences" are avoided. But, if I understood correctly, it's not possible to upgrade a pmx5 cluster this way (foreach pmx5, move vms, remove server from...
  10. nftables vs bpfilter

    Thank you very much for the answer!
  11. nftables vs bpfilter

    The new Debian 10 (buster) still has iptables as the main (and default installed) firewall tool, but with nftables support included by default. So even the old firewall will probably work without any change. And you can export the rules with nft list ruleset (after installing the nft utility). It's the best...
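    A minimal sketch of that export path, assuming a Debian 10 / Proxmox 6 host where iptables uses the nft backend by default (the port 8006 rule is just an illustrative example):

      # Install the nft userspace utility
      apt install nftables

      # Dump the complete active ruleset in nft syntax
      nft list ruleset

      # Translate a single legacy iptables rule into nft syntax
      iptables-translate -A INPUT -p tcp --dport 8006 -j ACCEPT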
  12. nftables vs bpfilter

    I already read that article, and also many other articles related to bpfilter, but there is still not enough feedback; this "feature" is not (yet) documented as it should be (i.e. like nftables is). Because Proxmox 6 is the first big project I know of to use bpfilter, I am really curious about the reasons for this...
  13. nftables vs bpfilter

    Disclaimer: I'm not trying to start a religious war :-P As probably everyone knows, there is a general "goodbye iptables (in fact netfilter), you served us well" movement in the Linux community. Red Hat integrated nftables in their firewalld, Debian introduced nftables in the latest buster...
  14. Can't add a Node in a Cluster

    So, this scenario is not possible? (see the sketch below)
      - update all pve5 to corosync 3
      - foreach server in pve5:
        - move all vms/ctx to another machine
        - remove server from cluster (after shutdown)
        - install pve6 (format disks, make local storage modifications)
        - join freshly installed pve6 server...
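    A minimal sketch of the remove/re-join part of that loop, assuming node names pve5-1 (the server being reinstalled) and pve5-2 (a remaining cluster member); the VM moves and storage changes are out of scope here:

      # On a remaining node, once the old node is shut down for good
      pvecm delnode pve5-1

      # On the freshly installed PVE 6 server: join the existing cluster
      pvecm add pve5-2

      # Verify membership and quorum afterwards
      pvecm status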
  15. Migration on LVM-thin

    Thank you for your quick answer. Indeed, the agent was not installed on that test machine, so that's why I needed a manual fstrim. But automatic or manual, I call that mitigation a 'horrible hack' myself, because I need unused space on the destination host just for the 'thickness' and also the...
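    A minimal sketch of both trim paths discussed in this thread, assuming VMID 101, a disk configured with discard=on, and the qemu-guest-agent package installed inside the guest:

      # Enable the guest agent for the VM
      qm set 101 --agent 1

      # Trigger a trim of all mounted guest filesystems from the host side
      qm agent 101 fstrim

      # ...or the manual equivalent, run inside the guest
      fstrim -av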
  16. Migration on LVM-thin

    Hello. A strange (but possibly normal) behavior when moving VMs or changing storage while using local LVM-thin storage (online move, with the VM running):
      - when moving a VM from one host to another (in particular, with multiple disks): qm migrate #ID #destination -migration_type insecure -online...
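    A minimal runnable form of that command, assuming VMID 101 and a target node named pve2; insecure mode sends the migration traffic without the SSH tunnel, so it only belongs on a trusted network:

      # Live-migrate a running VM with local disks to node pve2
      qm migrate 101 pve2 --online --with-local-disks --migration_type insecure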
  17. BUG in qm create after latest updates on pve-storage

    Bug filed, patch made available by the Proxmox team, thank you for your quick support! For anyone interested (until the next pve-storage package version): https://bugzilla.proxmox.com/show_bug.cgi?id=1913 Also the direct link to the patch...
  18. BUG in qm create after latest updates on pve-storage

    Hello. I was using some scripts to create virtual machines. After the latest updates to the pve-storage packages, qm create returns an error when creating machines with more than one virtual disk on local-lvm storage.

      # /usr/sbin/qm create 101 --name test --ostype l26 --cpu cputype=host --sockets 4 --cores...
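    A minimal sketch of a two-disk create call that hits this code path, assuming VMID 101 and the default local-lvm thin pool (sizes, memory and NIC settings are placeholders):

      # Create a VM with two virtual disks on local-lvm (sizes in GiB)
      qm create 101 --name test --ostype l26 --cpu cputype=host \
          --sockets 4 --cores 2 --memory 2048 \
          --scsi0 local-lvm:20 --scsi1 local-lvm:10 \
          --net0 virtio,bridge=vmbr0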
  19. [SOLVED] PVE 5.2 Lets Encrypt: TASK ERROR: validating challenge failed

    Hello! I have some problems registering the account from the GUI. On the "Register Account" page, the "ACME Directory" contains nothing. Falling back to the console:

      # pvenode acme account register default my@email
      !!! only one time per cluster !!! ensure you select 0, because 1 is acme...
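    A minimal sketch of the rest of that console flow, assuming the node's public FQDN is pve.example.com (a placeholder) and that the HTTP challenge can reach it on port 80:

      # Register the ACME account once per cluster (interactive directory selection)
      pvenode acme account register default my@email

      # Tell this node which domain to request a certificate for
      pvenode config set --acme domains=pve.example.com

      # Order the certificate; pveproxy is reloaded with it afterwards
      pvenode acme cert order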
  20. PVE 5.1: KVM broken on old CPUs

    Same problem on an HP ProLiant DL380 G5 server. Reverting to the 4.10 kernel solved the problem.

      Architecture:          x86_64
      CPU op-mode(s):        32-bit, 64-bit
      Byte Order:            Little Endian
      CPU(s):                4
      On-line CPU(s) list:   0-3
      Thread(s) per core:    1
      Core(s) per socket...