Search results

  1. [SOLVED] Network settings for LXC containers: No internet access

    auto enp3s0: this line looks a bit wrong; have you tried whether it works when you remove it or comment it out?
  2. Containers have no network access.

    Without a config of the container at hand I have to guess a bit, but I do think that your vmbr0 config is missing a bridge port. bridge-ports none does not use any of the physical NICs. Can you try to set bridge-ports to either eno1 or eno2, depending on which physical NIC the containers...
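
    For illustration, a minimal sketch of what such a vmbr0 stanza in /etc/network/interfaces could look like with eno1 as the bridge port; the interface name, address and gateway are placeholders, not taken from the thread:

        auto eno1
        iface eno1 inet manual

        # placeholder addresses; bridge-ports attaches the physical NIC to the bridge
        auto vmbr0
        iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
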
  3. Epyc 7402P, 256GB RAM, NVMe SSDs, ZFS = terrible performance

    Some things I would try: install some monitoring tool, especially to know how much of the RAM is going towards ZFS' ARC (cache in RAM) and to see some stats from the disks (avg write delay, queue, ...), but other system stats might give insight as well. How did you configure the disks of the VM...
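
    A few commands that can show these figures directly on the host; a sketch assuming the standard OpenZFS and sysstat tools are installed:

        # current ARC size and configured maximum (bytes)
        awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

        # per-vdev I/O and latency statistics, refreshed every 5 seconds
        zpool iostat -v -l 5

        # general per-disk utilisation and average wait times (sysstat package)
        iostat -x 5
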
  4. ZFS and EXT4 mixed?

    I have been running ZFS on root for years now on my Arch-based laptops/desktops and for quite a while on my Proxmox servers. You should consider that whatever problem you read about here in the forum is only a small fraction of the roughly 300,000 happily working installations out there.
  5. ZFS and EXT4 mixed?

    Given these restrictions, your idea is probably an okay one. Depending on the disk layout and RAID capabilities you will face different issues. ZFS on HW RAID is a bad idea, and PVE only offers software RAID via ZFS. You would probably have to install Debian first on an md RAID to get redundancy...
  6. linux bridge vs ovs Bridge

    I bet that in 99.9% of cases you will be happy with the regular Linux bridge. Can you tell us a bit more about what you actually need?
  7. New 2 node cluster with ZFS replication

    https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_remove_a_cluster_node is probably the right chapter in the docs.
  8. spiceproxy only listening on IPv4 not IPv6

    Do you have an entry in your /etc/hosts file with your node and the IPv6 address? AFAIK the spiceproxy is checking that to determine on which IP to listen.
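
    For illustration, such an /etc/hosts entry could look like the sketch below; hostname and addresses are made-up examples, not taken from the thread:

        # /etc/hosts on the PVE node
        127.0.0.1     localhost
        192.0.2.10    pve1.example.com pve1
        2001:db8::10  pve1.example.com pve1
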
  9. New 2 node cluster with ZFS replication

    This is a feature that is built on top of ZFS snapshots and the send/receive mechanism of ZFS. It is not enabled automagically. You have to enable it per VM and define to which node it should be replicated and at which interval. Luckily, because I only replicate the important VMs in my 2 node...
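
    Such a job can also be created from the shell with pvesr; a minimal sketch assuming VM 100 should be replicated to a node called pve2 every 15 minutes (VMID, node name and schedule are placeholders):

        # create replication job 100-0: VM 100 -> node pve2, every 15 minutes
        pvesr create-local-job 100-0 pve2 --schedule '*/15'

        # show the state of all replication jobs on this node
        pvesr status
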
  10. New 2 node cluster with ZFS replication

    You have to set up the replication, either on the Datacenter level or on the individual VM. It's a separate panel.
  11. [SOLVED] Questions about ZFS and Migration/Backups

    Hmm, let's talk about replication and HA within a cluster first. This is what Proxmox can do out of the box, either with a shared file system (Ceph, NFS, Samba, iSCSI, ...) or with replication (ZFS), though the latter works best in a two node cluster because AFAIK you can only replicate to one other...
  12. [SOLVED] Questions about ZFS and Migration/Backups

    What is your network configuration? Do you have a dedicated physical interface for the Proxmox cluster communication (corosync)? Your problem kinda sounds like corosync might have issues. It does not need a lot of bandwidth but really likes low latency. If you have it on an interface that sees a...
  13. [SOLVED] Questions about ZFS and Migration/Backups

    Ah ok, maybe I should have pointed out that Ceph is a clustered solution while ZFS is local only. The slow HDDs might be a problem, not giving you enough IOPS to be performant. If you did not configure the storage as shared you should be able to select the target storage when migrating a VM...
  14. Cephfs - allow other subnet

    Add a second NIC configured in the same subnet? Route the traffic? (Probably not good latency-wise.)
  15. Share host fs/folder?

    No. Desktop virtualization products that have this feature use some kind of network protocol and make it as transparent as possible for the user with the installed guest tools and such. So you are back to Samba/NFS :)
  16. [SOLVED] Questions about ZFS and Migration/Backups

    Both ZFS and Ceph can be used to back up VM disks between clusters. ZFS has its send/receive mechanism (pve-zsync is a tool built around that), and Ceph has the rados gateway, which can mirror to another Ceph cluster. There should also be an article in the wiki on how to set this up.
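
    As a rough sketch of the pve-zsync side (host address, pool name and VMID below are placeholders, not from the thread):

        # recurring job: send snapshots of VM 100 to pool "backup" on 192.0.2.20,
        # keeping the last 7 sync snapshots
        pve-zsync create --source 100 --dest 192.0.2.20:backup --name nightly --maxsnap 7

        # list configured jobs
        pve-zsync list
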
  17. [SOLVED] Questions about ZFS and Migration/Backups

    Ceph has certain requirements to be usable, mainly its own fast network of 10 Gbit or faster. Live migration of VMs to a different storage is possible no matter what kind of storage it is. This is possible because the live copying of the disks is done via QEMU and is storage type agnostic. The...
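
    Such a migration with a storage change can also be started from the CLI; a sketch assuming VM 100, a target node pve2 and a target storage called local-zfs (all placeholders):

        # live-migrate VM 100 to node pve2 and move its disks to storage local-zfs
        qm migrate 100 pve2 --online --targetstorage local-zfs
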
  18. Zpool attach new disk as master / recovery

    How did you mess up? Maybe it can be fixed.
  19. Default username

    You did use the password that you specified during the installation, right?