Search results

  1. LXC Containers Backing Up Incredibly Slow

    I clearly understand the concept of being storage-agnostic and agree with this approach. But on the other hand: Proxmox recommends ZFS, so IMO work with ZFS should be done with native ZFS tools.
  2. LXC Containers Backing Up Incredibly Slow

    @fabian If you have >10 million files, rsync doesn't work well (with the expected performance - no file-based tool will). The default storage-agnostic procedure is OK, but for ZFS we have much better options. If we use ZFS on both sides: - Proxmox->PBS for backup, - PBS->Proxmox for backup...
  3. Different IP addresses for different SMTP servers/domain

    I have a case where one PMG is the gateway for 3 SMTP servers (all of them iredmail). Each server is dedicated to a different company. Problem: one SMTP server could get mail exchange blocked for the others. How? Imagine a situation where an IP lands on an RBL. Resolution: I would like mail exchange...
  4. LXC slow migration workaround

    This was the second main reason I jumped from the LVM stack to ZFS. So for ext4 we should use "lvmsync" and for ZFS "zfs send/recv".
  5. LXC slow migration workaround

    In this example I have two remotes, VPS1 and VPS2. In the past, when the container was <500G, I used rsync and the backup/restore method. Now that the container is almost 2TB, I am not able to migrate it with reasonable downtime. So as a workaround I did: # example1: initial replicate...
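    The truncated workaround above can be sketched roughly like this - a minimal sketch, assuming a hypothetical container dataset rpool/data/subvol-100-disk-0 and that VPS2 is reachable over SSH (dataset, host, and CT ID are all invented, not taken from the post):

    ```shell
    # Run on VPS1. Initial full replication while the container keeps running:
    zfs snapshot rpool/data/subvol-100-disk-0@migrate1
    zfs send rpool/data/subvol-100-disk-0@migrate1 | ssh VPS2 zfs receive rpool/data/subvol-100-disk-0

    # Downtime window: stop the container, snapshot again, and send only
    # the blocks changed since the first snapshot (incremental send):
    pct stop 100
    zfs snapshot rpool/data/subvol-100-disk-0@migrate2
    zfs send -i @migrate1 rpool/data/subvol-100-disk-0@migrate2 | ssh VPS2 zfs receive rpool/data/subvol-100-disk-0
    ```

    Because the incremental stream contains only the churn since @migrate1, the downtime scales with the amount of changed data, not with the ~2TB dataset size.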
  6. Preferred Method to Make ethtool Changes Persistent Across Reboots and Updates?

    It's better to set many changes at once. This:
      pre-up ethtool -K $IFACE rx-checksumming on
      pre-up ethtool -K $IFACE tx-checksumming on
      pre-up ethtool -K $IFACE tx-checksum-ip-generic on
    could be:
      pre-up ethtool -K $IFACE rx-checksumming on tx-checksumming on tx-checksum-ip-generic on
    Good...
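    In context, the combined form would sit in an /etc/network/interfaces stanza like this (interface name and addresses are made up; ifupdown exports $IFACE to pre-up hooks):

    ```
    auto eno1
    iface eno1 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        # one hook instead of three: ethtool -K accepts several feature/value pairs
        pre-up ethtool -K $IFACE rx-checksumming on tx-checksumming on tx-checksum-ip-generic on
    ```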
  7. backup speed for LXC with ZFS backend

    I know the difference, but in the backup process the time for an incremental snapshot using zfs send / zfs receive is also the same. Maybe the PBS/PBS client devs should use other magic spells for backing up LXC?
  8. backup speed for LXC with ZFS backend

    Well - doing incremental backups I see a 10x performance gap (or even more!) when comparing backup speeds for a VM (1TB) and an LXC (800GB) - backing up the LXC is significantly slower even though it's smaller. I did some tests; zfs send/zfs receive to a remote host doesn't differ that much. Local storage...
  9. Proxmox boot errors

    [ 10.984549] VFIO - User Level meta-driver version: 0.3
    [ 11.469416] systemd-journald[873]: Received client request to flush runtime journal.
    [ 12.249335] power_meter ACPI000D:00: Found ACPI power meter.
    [ 12.269933] power_meter ACPI000D:00: Ignoring unsafe software power cap! [...
  10. [TUTORIAL] Proxmox ZFS raid1 performance

    Sorry. The dictionary on my phone sometimes does bad things. I meant PLP technology - a capacitor on the SSD drive. It allows the SSD to report "write done" before the write has actually completed, because the capacitor guarantees the data will be saved. Check the pvestat command on both drives.
  11. [TUTORIAL] Proxmox ZFS raid1 performance

    Most important if you use ZFS: buy an SSD with PLP (2-2.5x performance!!!)
  12. Migration issue - storage 'zfs1-vps1' is not available on node

    https://bugzilla.proxmox.com/show_bug.cgi?id=3148 for --with-local-disks
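    For reference, the flag mentioned belongs to qm migrate; a hypothetical invocation (VM ID, target node, and storage name are invented, not taken from the post):

    ```shell
    # Migrate VM 100 to node vps2, copying its local disks along and
    # mapping them onto that node's storage:
    qm migrate 100 vps2 --online --with-local-disks --targetstorage zfs1-vps2
    ```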
  13. [SOLVED] Passthrough two PCI devices

    This issue only occurs with 2x Mellanox ConnectX-3 cards. I changed the cards to ConnectX-4 - everything works like a charm.
  14. [SOLVED] Passthrough two PCI devices

    I tried juggling the cards between different PCI-E slots - it didn't help. When I turn on the second network device (no matter which), it gives me an error: genirq: Flags mismatch irq 16. 00000000 (vfio-intx(0000:88:00.0)) vs. 00000000 (vfio-intx(0000:05:00.0)) It looks like some race-condition bug when I...
  15. [SOLVED] Passthrough two PCI devices

    When I try to start a VM with two passed-through network devices, there are errors in dmesg:
      root@rtx-proxmox:~# dmesg | grep 'Flags mismatch'
      [ 99.938242] genirq: Flags mismatch irq 16. 00000080 (vfio-intx(0000:81:00.0)) vs. 00000000 (vfio-intx(0000:08:00.0))
      [ 302.258175] genirq: Flags...
  16. [SOLVED] Passthrough two PCI devices

    The other thing I tried was adding another virtual machine and passing through the second device at the same time. It didn't work. Does it matter that the devices have the same IRQs: 88:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3] Physical Slot: 4 IOMMU group: 100...
  17. [SOLVED] Passthrough two PCI devices

    No - I will try. No. All of the available updates are applied.
  18. [SOLVED] Passthrough two PCI devices

    Yes - I checked it twice. The module name matches the blacklisted module inside the conf file. I attached my lspci output for more info. Most important:
      1. Kernel driver in use: vfio-pci / Kernel modules: ixgbe
      2. Kernel driver in use: vfio-pci / Kernel modules: mlx4_core
    So that means that is...
  19. [SOLVED] Passthrough two PCI devices

    @oguz did you read the information in my post about the VM starting when only one device (no matter which) is set for passthrough? In my /etc/modprobe.d/ I've got:
      -rw-r--r-- 1 root root 16 Jun  1 11:36 ixgbe.conf
      -rw-r--r-- 1 root root 20 Jun  1 11:36 mlx4_core.conf
      -rw-r--r--...
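    The listed file sizes are consistent with one-line blacklist files; a sketch of what the /etc/modprobe.d/ setup presumably looks like (the vfio-pci device IDs below are placeholders, not values from the post):

    ```
    # /etc/modprobe.d/ixgbe.conf (16 bytes)
    blacklist ixgbe

    # /etc/modprobe.d/mlx4_core.conf (20 bytes)
    blacklist mlx4_core

    # /etc/modprobe.d/vfio.conf - bind both NICs to vfio-pci at boot
    options vfio-pci ids=8086:10fb,15b3:1003
    ```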
  20. [SOLVED] Passthrough two PCI devices

    I tried to pass through one PCI device alone: I tried to pass through the other PCI device, also alone: Then I tried to pass through the two devices together: And I got: kvm: -device vfio-pci,host=0000:81:00.0,id=hostpci1.0,bus=pci.0,addr=0x11.0,multifunction=on: vfio 0000:81:00.0: Failed to set...