Search results

  1. blank page instead of Web GUI

    Thank you for your attention, dcsapak. I did a DHCP release/renew (the IP did not change), without any success. Also, I was able to connect to the other two hosts in the cluster via the web UI. So far the hosts seem to keep reliable time via NTP, as does my laptop. The problem arose when I...
  2. blank page instead of Web GUI

    Hello, I have been struggling with this issue since upgrading one of our PVE hosts last week. After the upgrade I was unable to connect to the web UI from any browser on my laptop. I mainly use Chrome, but FF and IE didn't work either. I was concerned that something had gotten screwed up with...
  3. Speedy ZFS Backups

    So I made a fresh Win2016 VM with a 500 GB OS-and-data virtual disk, then robocopied a bunch of real production files over to it: about 155 GB across 515,000 files, with an average file size of about 300 KB. Both the source VM and the target were on the same host, which enjoys a 10G virtual NIC using...
  4. Speedy ZFS Backups

    Greetings, I did clean up my backup sources. I am just using the root zpools where the production VMs live. Each of the two hosts backs up to the other and to its own destination ZFS dataset on the backup machine. I let them both run at the same time and now I have a good sense of what...
  5. Speedy ZFS Backups

    Hello Group, I think I have stumbled onto what for us seems to be a good solution. I resorted to Googling for a solution that uses ZFS features and tried a few of them. The one that I really liked was Znapzend. The one complaint I have about it is that I have to remember to substitute "z"...
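    For context, a Znapzend plan is defined once with znapzendzetup and then executed by the znapzend daemon. A minimal sketch with made-up pool names, destination host, and retention schedules (none of these values are from the poster's setup):

        # keep local snapshots for a week, replicate to a backup host and keep those for a month
        znapzendzetup create --recursive --tsformat='%Y-%m-%d-%H%M%S' \
            SRC '7d=>1h' zpool_fast/vmdata \
            DST:bk '30d=>1d' root@pve-backup:backup_pool/vmdata

        # the daemon may need a restart to pick up a new or changed plan
        systemctl restart znapzend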
  6. Speedy ZFS Backups

    The server I am using as an example is running during the backup. It resides on a zpool made of two raidz1 vdevs of four 2 TB Samsung Evo SSDs each. There are three virtual drives on the VM, which are zvols in this pool. root@pve-2:~# zpool status zpool_fast pool: zpool_fast state: ONLINE scan: none...
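    For anyone reproducing a similar layout, a pool striped across two 4-disk raidz1 vdevs is created roughly like this (the device names are placeholders; stable /dev/disk/by-id paths are preferable in practice):

        # two raidz1 vdevs of four disks each, striped into one pool
        zpool create zpool_fast \
            raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
            raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh

        # verify the layout
        zpool status zpool_fast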
  7. Speedy ZFS Backups

    Greetings, for several months our small PVE environment was limited to 1 Gbit Ethernet, so that was generally the limiting factor for our backup performance. Then we got old used InfiniBand gear cheap on eBay. When it worked, you could run iperf and see 15+ Gbit performance from memory buffer to...
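    The 15+ Gbit figure is the kind of number a plain memory-to-memory iperf test between the two hosts reports, roughly like this (the address is a placeholder for the receiving host's IPoIB interface):

        # on the receiving host
        iperf -s

        # on the sending host: 4 parallel streams for 30 seconds
        iperf -c 10.10.10.2 -P 4 -t 30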
  8. 10GbaseT Switch Recommendation

    Thank you all for your time and attention! We splurged and went with the NETGEAR ProSAFE 8-Port 10-Gigabit Ethernet Smart Managed Switch (XS708T-100NES). It was around $1,000 USD and has the word "managed" in the description. There is no serial console, so it isn't as managed as you might hope...
  9. 10GbaseT Switch Recommendation

    Greetings, we just had a weird event happen while shutting down our third "quorum" host, which is being replaced. We use IPoIB on some old Mellanox IB cards and a Mellanox IB switch. The result was that one of the two remaining hosts went into a kernel panic and had to be power-cycled. In order...
  10. pve-zsync sync Error

    I see. Thanks, Wolfgang. So just to be absolutely sure, there is no history file somewhere that pve-zsync refers to that causes it to expect a certain snapshot from a previous job? Is there a simpler way, integrated into pve-zsync, to clean up its snapshots when you destroy a job? Thank...
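    Absent a built-in cleanup, leftover replication snapshots can be removed with plain zfs commands once a job is destroyed. A cautious sketch, assuming the old job's name appears in the snapshot names (the dataset and job name below are placeholders; always review the list before destroying anything):

        # list the snapshots that belong to the old job
        zfs list -t snapshot -o name -r pve1zpool2/zsync | grep 'rep_dcfs2'

        # once the list looks right, destroy them one by one
        zfs list -t snapshot -o name -r pve1zpool2/zsync | grep 'rep_dcfs2' | xargs -n 1 zfs destroy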
  11. pve-zsync sync Error

    Update: my new sync job is so far working fine. I have also just noticed that even though I have destroyed the zsynced snapshots from previous jobs, snapshots still linger on the sending side. See below. pve1zpool2/zsync/vm-100-disk-1...
  12. pve-zsync sync Error

    Please excuse the slight inconsistency between commands. I meant for the commands and the error to all reflect the same job, but one of the commands still refers to the job name dcfs2. It should say mqa10. I just attempted to create a brand-new job with a fresh name. Hopefully this time it will work.
  13. pve-zsync sync Error

    I am having similar problems in the following scenario: I run a command like this: pve-zsync create --source 10.0.1.70:104 --dest pve1zpool2/zsync --maxsnap 8 --verbose --name dcfs2 --skip Then, before it runs, I edit the /etc/cron.d/pve-zsync config file to run on my desired schedule. I...
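    The schedule lives in an ordinary cron file, so the edit amounts to changing the time fields on the job's line. A sketch of what that line can look like (the 15-minute schedule is only an example; the sync options mirror the create command above):

        # /etc/cron.d/pve-zsync
        */15 * * * * root pve-zsync sync --source 10.0.1.70:104 --dest pve1zpool2/zsync --name dcfs2 --maxsnap 8 --verbose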
  14. PVE Host seems to go "offline" periodically while moving a virtual disk

    I was able to confirm that migrations were using the vmbr0 network, because the first line of the output of the qm migrate job is the destination hostname and its IP. It is getting this from the /etc/hosts file. I confirmed this by editing my /etc/hosts and giving my host name the IB network's...
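    Since the migration target is resolved through /etc/hosts, pointing each node's name at its IB address is what steers the traffic onto that network. A sketch with placeholder hostnames and a 10.10.10.x subnet standing in for IPoIB:

        # /etc/hosts on each cluster node
        192.168.1.21   pve-1-lan
        10.10.10.21    pve-1    # hostname now resolves to the InfiniBand address
        10.10.10.22    pve-2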
  15. PVE Host seems to go "offline" periodically while moving a virtual disk

    What does "determined over the cluster" mean? I changed the cluster network to IB but the migrations are not using it. They can't be because they are still going at single Gbit speed. I expected them to be as fast as SCP at the least and after setting the "migrations_unsecure:1" I expected it...
  16. Host migration practices without a SAN

    My NFS servers are the PVE hosts themselves. Could that be the underlying issue? Is sharing some of my ZFS filesystems via the "sharenfs" option a bad practice? I am exploring another avenue for doing VM migrations between cluster hosts. I believe it was Fabian who turned me on to the...
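    For reference, the exports in question are controlled by the sharenfs ZFS property on the host. A minimal sketch with a made-up dataset name:

        # export a dataset over NFS and confirm it is visible
        zfs set sharenfs=on zpool_fast/shared
        zfs get sharenfs zpool_fast/shared
        showmount -e localhost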
  17. PVE Host seems to go "offline" periodically while moving a virtual disk

    I should have quoted this first. After re-reading this a bunch of times I see that it says "both corosync AND zfs send/receive to IB". I found a wiki article on the former but have absolutely no idea how to do the latter. How do I move "zfs send/receive" to IB? I hope there is a simpler way...
  18. PVE Host seems to go "offline" periodically while moving a virtual disk

    I cannot find any setting, per se, that defines which network migrations should use. I am now wondering if I need to somehow rename my hosts so that PVE associates the IB network IP address with the host name. Is there a clean way to do this? Will it require rebooting the cluster? I see...
  19. PVE Host seems to go "offline" periodically while moving a virtual disk

    Here is an update. I moved the Corosync network to the IB network and that seems to work. I also renamed the zpools and standardized my datasets, and at last have all my VM images moved back to the fast zpools, but when I do a "migrate" it still seems to use the regular production LAN interface...
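    For completeness, moving corosync as described comes down to pointing the ring addresses at the IB subnet in /etc/pve/corosync.conf and bumping config_version so the change propagates. A fragment with placeholder node names and addresses (the wiki procedure mentioned earlier is the authoritative guide; this is only a sketch):

        nodelist {
          node {
            name: pve-1
            nodeid: 1
            quorum_votes: 1
            # IB address instead of the LAN address
            ring0_addr: 10.10.10.21
          }
        }

        totem {
          cluster_name: example-cluster
          # config_version must be incremented on every edit
          config_version: 5
          interface {
            bindnetaddr: 10.10.10.0
            ringnumber: 0
          }
        }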
