Search results

  1. J

    Mirroring Backups to multiple storages

    @iMer - A dumb way to do it that works is to have a job run that watches the backup files to see when they're done being written to, and then pushes them elsewhere.
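
    A minimal sketch of that watch-and-push job, assuming the default /var/lib/vz/dump directory and a purely hypothetical rsync target:

        #!/bin/bash
        # Copy finished vzdump files to a second storage (run from cron).
        SRC=/var/lib/vz/dump                              # default PVE dump directory
        DEST=backup@mirror.example.com:/srv/pve-mirror    # assumption, not from the post
        # Treat files untouched for 10+ minutes as done being written, then push them.
        find "$SRC" -type f -name 'vzdump-*' -mmin +10 -print0 |
            xargs -0 -r -I{} rsync -a {} "$DEST"/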
  2. J

    [SOLVED] zfs: cannot import rpool after reboot

    @Madhatter The new 4.13 kernel and modules fixed it for me on most of my machines. My other machines are affected by a kernel bug, and an updated 4.10 kernel helped with those. ~J
  3. J

    [SOLVED] Anyone already using "IO Thread" and PVE-Zsync?

    Hey all, Has anyone been using pve-zsync with "IO Thread" on a VirtIO SCSI drive successfully? Any thoughts or feedback before I try it? This would seem to have the best of all worlds - minimally impactful backups + higher performing I/O. Thanks!
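
    For reference, a typical pve-zsync job looks like the sketch below (pve-zsync drives it from cron); the VMID, target host, and dataset are examples only:

        # Replicate VM 100 to a remote ZFS dataset, keeping the last 7 snapshots.
        pve-zsync create --source 100 --dest 192.168.1.2:tank/pve-replica \
            --name vm100-offsite --maxsnap 7 --verbose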
  4. J

    unable to install proxmox 5.1 kernel panic

    Is this a retail Intel processor or an Engineering Sample?
  5. J

    how to improve performance?

    To do that he'll also need to make sure Jumbo Frames are enabled on all hardware. Just enabling it on the storage server won't accomplish much.
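
    As a sketch, jumbo frames are enabled per interface on each PVE node and then verified end to end; the interface name and address below are assumptions:

        # /etc/network/interfaces on every node: add "mtu 9000" to the storage NIC/bridge, e.g.
        #   iface vmbr1 inet static
        #       ...
        #       mtu 9000
        # After bringing the interface back up, confirm 9000-byte frames survive the
        # whole path without fragmentation (8972 = 9000 minus 28 bytes of IP/ICMP headers):
        ping -M do -s 8972 192.168.10.50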
  6. J

    how to improve performance?

    Are you using VirtIO disk or network drivers on the Windows Guests? Recent ones? What about the QEMU Agent? Is it installed?
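
    A quick way to check from the host, assuming VMID 100 as an example: enable the agent option, install qemu-guest-agent plus the VirtIO drivers in the guest (from the virtio-win ISO on Windows), then ping the agent:

        qm set 100 --agent 1      # enable the agent option for the VM
        qm agent 100 ping         # returns without error once the guest agent is running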
  7. J

    Proxmox VE 5.1 released!

    Thanks Fabian. I've been following that thread closely. I'll be happy to try the test kernel when you have one.
  8. J

    Proxmox VE 5.1 released!

    Thanks Fabian, The 4.10/0.7.3 kernel works perfectly on my two misbehaving servers. It seems 4.13 added a regression for the Supermicro SCU on at least the X9SRW-F motherboard (C602 chipset). Attached are two snapshots of 4.13 (not) booting - the first shows the kernel panic...
  9. J

    Cannot access disk on zfs

    Thanks J. Carlos! I suspect you meant: apt-get install libzfs2linux=0.6.5.11-pve17~bpo90 instead of libzfs2linux-dev, as -dev doesn't seem to exist on a standard install. My two servers that are misbehaving on 4.13 are manifesting a kernel panic involving an interesting isci_task_abort_task...
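
    If downgrading, the related ZFS packages likely need to be pinned to the same version so a routine upgrade doesn't pull them forward again; a hedged example reusing the version string above (whether every package carries that exact version is an assumption):

        apt-get install zfsutils-linux=0.6.5.11-pve17~bpo90 \
            libzfs2linux=0.6.5.11-pve17~bpo90 zfs-initramfs=0.6.5.11-pve17~bpo90
        apt-mark hold zfsutils-linux libzfs2linux zfs-initramfs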
  10. J

    Cannot access disk on zfs

    @hardstone - Don't reboot. I was seeing what you are seeing and rebooted. Big mistake. I had to restore all my virtual machines to a different server. (I might have been able to do a ZFS push to the other server, but backups had just run.) There seems to be a bug in 4.13 which is causing...
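
    The "ZFS push" mentioned here would be an ordinary send/receive over SSH; a minimal sketch, with the dataset and host names as assumptions:

        # Snapshot the guest disk and stream it to another node.
        zfs snapshot rpool/data/vm-100-disk-1@evacuate
        zfs send rpool/data/vm-100-disk-1@evacuate | \
            ssh root@other-node zfs receive tank/recovered/vm-100-disk-1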
  11. J

    > 3000 mSec Ping and packet drops with VirtIO under load

    Easy doesn't mean I have the time or inclination to do so on your schedule. Moving the VM between hosts on a mixed cluster is not the problem. Dealing with downtime from failures due to 700ms+ ping times is. Not to mention my time constraints this week. Plus, the issue manifests under NETWORK...
  12. J

    > 3000 mSec Ping and packet drops with VirtIO under load

    Nope, but I could. Just not right now. With all the reboots to test, there's been enough downtime on that server for the day (week!). Going back to SCSI Virtio and changing to E1000 for the network is currently behaving decently. (I have CPU to burn.)
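
    For context, switching a VM's NIC model from VirtIO to E1000 is a single qm call; the VMID, MAC, and bridge below are placeholders:

        # Keep the existing MAC so the guest sees the same interface identity.
        qm set 100 --net0 e1000=DE:AD:BE:EF:00:01,bridge=vmbr0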
  13. J

    > 3000 mSec Ping and packet drops with VirtIO under load

    Changing from SCSI Virtio to IDE did not solve the problem for me. I'm seeing 900ms ping times from the gateway to the KVM guest; it should be 0.36ms. The VM is nothing special: Apache serving lots of static content. The VM's drive is on local-ZFS storage. Latest Proxmox 5 with all updates, VirtIO net...
  14. J

    Random Restarting

    Sorry @Glock24, but the problem we were all having involved ZFS. Yours is something different, and you should probably open a new thread. What sort of storage system are you using? Same disk(s), RAID controller, or other aspects? What sort of error is it? Hard crash, kernel panic, or something...
  15. J

    All cluster-nodes are hanging after nightly backup

    What does your network topology look like? And did any of your storage hang?
  16. J

    zpool failed to import on reboot

    What device names did you use? Something static like /dev/disk/by-id/* or something potentially dynamic like /dev/sd*?
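
    If a pool was built on /dev/sd* names, it can usually be re-pointed at stable IDs by exporting and re-importing; a sketch for a data pool (the root pool can't be exported while in use), with the pool name as an example:

        zpool export tank
        zpool import -d /dev/disk/by-id tank
        zpool status tank        # vdevs should now show by-id paths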
  17. J

    Proxmox 4.4 virtio_scsi regression.

    What kernel is in use on the guests? There are a few virtio-SCSI and general virtio bugs that were introduced in 4.10.6 or 4.10.7 and have been patched in 4.11-rc4.
  18. J

    Recommended SSD for Production??

    Hey Hivemind, What SATA SSD drives do folk recommend for a production cluster? I've been using consumer drives with mostly good success, but the weird ZFS write throttle issue has bitten us enough times (twice!), so I want to get something in our school district's budget for next fiscal year...
  19. J

    Proxmox 4.1 vs 4.4 swap usage

    @eth: You might also want to look at limiting the ZFS ARC size. By default, it will try to use half your RAM.
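
    A minimal sketch of capping the ARC, with the 8 GiB figure purely an example:

        # Persist the cap (value is in bytes) and rebuild the initramfs so it
        # also applies early in boot when the root pool is on ZFS.
        echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
        update-initramfs -u
        # Apply immediately without a reboot:
        echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max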
