Search results

  1. ZFS and EXT4 mixed?

    Given these restrictions, your idea is probably an okay one. Depending on the disk layout and RAID capabilities you will face different issues. ZFS on HW RAID is a bad idea, and PVE only offers software RAID via ZFS. You would probably have to install Debian first on an md RAID to get redundancy...
  2. linux bridge vs ovs Bridge

    I bet that in 99.9% of cases you will be happy with the regular Linux bridge. Can you tell us a bit more about what you actually need?
  3. New 2 node cluster with ZFS replication

    https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_remove_a_cluster_node That is probably the right chapter in the docs.
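    Removing a node, as described in that chapter, boils down to a couple of commands. A rough sketch (the node name nodeB is a placeholder, and the node to be removed must already be powered off for good):

    ```shell
    # Run on one of the remaining cluster nodes.
    pvecm nodes            # list cluster members and their names
    pvecm delnode nodeB    # remove the dead node from the cluster config
    ```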
  4. spiceproxy only listening on IPv4 not IPv6

    Do you have an entry in your /etc/hosts file with your node and the IPv6 address? AFAIK the spiceproxy is checking that to determine on which IP to listen.
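    As a sketch, such an entry in /etc/hosts could look like this (the hostname pve1 and the addresses are placeholders):

    ```
    127.0.0.1      localhost
    192.0.2.10     pve1.example.com pve1
    2001:db8::10   pve1.example.com pve1
    ```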
  5. New 2 node cluster with ZFS replication

    This is a feature that is built on top of ZFS snapshots and the send/receive mechanism of ZFS. It is not enabled automagically. You have to enable it per VM, defining which node it should be replicated to and at what interval. Luckily, because I only replicate the important VMs in my 2 node...
  6. New 2 node cluster with ZFS replication

    You have to set up the replication, either at the Datacenter level or on the individual VM. It's a separate panel.
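    If you prefer the CLI over the replication panel, a job can also be created with pvesr. A sketch (VM ID 100 and target node pve2 are placeholders):

    ```shell
    # Replicate VM 100 to node pve2 every 15 minutes.
    # The job ID has the form <vmid>-<number>.
    pvesr create-local-job 100-0 pve2 --schedule "*/15"

    # Show all configured replication jobs and their last sync:
    pvesr status
    ```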
  7. [SOLVED] Questions about ZFS and Migration/Backups

    Hmm, let's talk about replication and HA within a cluster first. This is what Proxmox can do out of the box, either with a shared file system (Ceph, NFS, Samba, iSCSI, ...) or with replication (ZFS), though this works best in a two-node cluster because AFAIK you can only replicate to one other...
  8. [SOLVED] Questions about ZFS and Migration/Backups

    What is your network configuration? Do you have a dedicated physical interface for the Proxmox cluster communication (corosync)? Your problem kinda sounds like corosync might have issues. It does not need a lot of bandwidth but really likes low latency. If you have it on an interface that sees a...
  9. [SOLVED] Questions about ZFS and Migration/Backups

    Ah ok, maybe I should have pointed out that Ceph is a clustered solution while ZFS is local only. The slow HDDs might be a problem, not giving you enough IOPS to be performant. If you did not configure the storage as shared, you should be able to select the target storage when migrating a VM...
  10. Cephfs - allow other subnet

    Add a second NIC configured in the same subnet? Route the traffic? (probably not good latency wise)
  11. Share host fs/folder?

    No. Desktop virtualization products that have this feature use some kind of network protocol and make it as transparent as possible for the user with the installed guest tools and such. So you are back to Samba/NFS :)
  12. [SOLVED] Questions about ZFS and Migration/Backups

    Both ZFS and Ceph can be used to back up VM disks between clusters: ZFS with its send/receive mechanism (pve-zsync is a tool built around that), and Ceph with the rados gateway, which can mirror to another Ceph cluster. There should also be an article in the wiki on how to set this up.
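    A minimal pve-zsync invocation, as a sketch (the target IP 192.0.2.20, pool name tank, and VM ID 100 are placeholders):

    ```shell
    # One-off: send the disks of VM 100 to the pool "tank" on a remote host.
    pve-zsync sync --source 100 --dest 192.0.2.20:tank --verbose

    # Recurring: create a cron-based job that keeps the last 7 snapshots.
    pve-zsync create --source 100 --dest 192.0.2.20:tank --maxsnap 7
    ```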
  13. [SOLVED] Questions about ZFS and Migration/Backups

    Ceph has certain requirements to be usable, mainly its own fast network of 10 GBit or faster. Live migration of VMs to a different storage is possible no matter what kind of storage it is. This is possible because the live copying of the disks is done via QEMU and is storage type agnostic. The...
  14. Zpool attach new disk as master / recovery

    How did you mess up? Maybe it can be fixed.
  15. Default username

    You did use the password that you entered during the installation, right?
  16. [SOLVED] New Proxmox installation, no updates

    Is your configured DNS server correct? The error messages point to a DNS problem. If you do not have a subscription, you also have to disable the enterprise repo and configure the no-subscription repo, as described in the manual. In the GUI, click Help or Documentation in the top right (...
  17. Connecting 2 servers internally

    To take the example from above: auto vmbr2 iface vmbr2 inet manual bridge_stp off bridge_fd 0 Or, if you create the bridge via the GUI, simply leave the bridge port and IP config empty. That way it is basically just a switch. If you then also put the guests' network cards...
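    Laid out properly, the /etc/network/interfaces stanza from that quote reads (with no bridge ports and no IP config, as described):

    ```
    auto vmbr2
    iface vmbr2 inet manual
        bridge_stp off
        bridge_fd 0
    ```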
  18. SSD not fully used?

    You get the overview of the storages by clicking on Datacenter at the very top of the tree on the left and then on Storage in the menu next to it. Officially there is nothing in that regard. But in general, studying the manual a bit, or at least a few introductory tutorials (on YouTube there are also...
  19. Connecting 2 servers internally

    The bridge port does not have to be defined. In that case the bridge acts as an internal switch. Also, no IP has to be configured on it if the Proxmox host should not be directly part of the internal network.