Thank you for your attention, dcsapak.
I did a DHCP release/renew (the IP did not change) without any success. Also, I was able to connect to the other two hosts in the cluster via the webui.
So far the hosts seem to keep reliable time via NTP as does my laptop.
The problem arose when I...
Hello,
I have been struggling with this issue since upgrading one of our PVE hosts last week. After the upgrade I was unable to connect to the webui from any browser on my laptop. I use Chrome mainly but FF and IE didn't work either. I was concerned that something had gotten screwed up with...
So I made a fresh Win2016 VM with a 500 GB OS-and-data virtual disk, then robocopied a bunch of real production files over to it: about 155 GB across 515,000 files, with an average file size of about 300 KB. Both the source VM and the target were on the same host, which enjoys a 10G virtual NIC using...
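For reference, the copy was a plain robocopy run along these lines (the paths and thread count here are just placeholders, not the real ones):

robocopy C:\SourceData D:\Data /E /COPY:DAT /MT:16 /R:1 /W:1

/E copies subdirectories including empty ones, /COPY:DAT keeps data, attributes and timestamps, /MT:16 runs 16 copy threads, and /R:1 /W:1 keep it from stalling on locked files.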
Greetings,
I did clean up my backup sources. I am just using the root zpools where the production VMs live. Each of the two hosts backs up to the other and to its own destination ZFS dataset on the backup machine.
I let them both run at the same time and now I have a good sense of what...
Hello Group,
I think I have stumbled onto what for us seems to be a good solution. I resorted to Googling for a solution that uses ZFS features and tried a few of them. The one that I really liked was Znapzend. The one complaint I have about it is that I have to remember to substitute "z"...
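For anyone curious, a znapzend plan along these lines is what I mean (the dataset names, backup host and retention schedule here are only illustrative):

znapzendzetup create --recursive --tsformat='%Y-%m-%d-%H%M%S' \
  SRC '7d=>1h,30d=>4h,90d=>1d' zpool_fast/vmdata \
  DST:bak '7d=>1h,30d=>4h,90d=>1d,1y=>1w' root@backuphost:backup/vmdata

The SRC plan keeps hourly snapshots for a week, 4-hourly for a month and daily for 90 days on the source; the DST plan does the same on the backup host plus weeklies for a year.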
The server I am using as an example stays running during backup. It resides on a zpool made of two raidz1 vdevs of four 2 TB Samsung Evo SSDs each. There are three virtual drives on the VM, which are zvols in this pool.
root@pve-2:~# zpool status zpool_fast
pool: zpool_fast
state: ONLINE
scan: none...
Greetings,
For several months, our small PVE environment was limited to 1 Gbit Ethernet, so that was generally our backup-performance limiting factor. Then we got some old used InfiniBand gear cheap on eBay. When it worked, you could run iperf and see 15+ Gbit performance from memory buffer to...
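The buffer-to-buffer numbers come from plain iperf runs between the two hosts, roughly like this (the IB address is a placeholder):

iperf -s                (on the first host)
iperf -c 10.10.10.1     (on the second host)

That only measures the network path, memory to memory, not what the disks can actually feed it.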
Thank you all for your time and attention! We splurged and went with
NETGEAR ProSAFE 8-Port 10-Gigabit Ethernet Smart Managed Switch (XS708T-100NES)
It was around $1000 USD and has the word "managed" in the description. There is no serial console so it isn't as managed as you might hope...
Greetings,
We just had a weird event happen when shutting down our third "quorum" host, which is being replaced. We use IPoIB on some old Mellanox IB cards and a Mellanox IB switch. The result was that one of the two remaining hosts went into kernel panic and had to be powercycled. In order...
I see. Thanks Wolfgang. So just to be absolutely sure: there is no history file somewhere that pve-zsync refers to that causes it to expect a certain snapshot from a previous job? Is there a simpler way, integrated into pve-zsync, to clean up its snapshots when you destroy a job?
Thank...
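In case it helps someone later: as far as I can tell, removing a job is just pve-zsync's own destroy command, which drops the cron entry but leaves the snapshots already taken in place, so those have to be cleaned up by hand afterwards. Something like (re-using the job name and source from my commands above):

pve-zsync destroy --source 10.0.1.70:104 --name mqa10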
Update. My new sync job is so far working fine.
I have also just noticed that even though I have destroyed the zsynced snapshots from previous jobs, snapshots still linger on the sending side. See below.
pve1zpool2/zsync/vm-100-disk-1...
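For anyone hitting the same thing, this is the sort of cleanup I do for the leftovers; the exact snapshot name below is made up, but the real ones carry the job name and a timestamp, so they are easy to spot:

zfs list -t snapshot -o name,creation | grep dcfs2
zfs destroy pve1zpool2/zsync/vm-100-disk-1@rep_dcfs2_2017-01-01_00:00:00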
Please excuse the slight inconsistency between commands. I meant for the commands and the error to all reflect the same job, but one of the commands still refers to the job name dcfs2. It should say mqa10.
I just attempted to create a brand new job with a fresh name. Hopefully this time it will work.
I am having similar problems in the following scenario:
Run a command like this:
pve-zsync create --source 10.0.1.70:104 --dest pve1zpool2/zsync --maxsnap 8 --verbose --name dcfs2 --skip
Then before it runs I doctor up the /etc/cron.d/pve-zsync config file to run on my desired schedule. I...
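For context, the generated line in /etc/cron.d/pve-zsync looks something like the one below, and the only part I touch is the schedule at the front (the 15-minute interval shown is just an example):

*/15 * * * * root pve-zsync sync --source 10.0.1.70:104 --dest pve1zpool2/zsync --name dcfs2 --maxsnap 8 --verbose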
I was able to confirm that migrations were using the VMBR0 network because the first line of the output of the qmmigrate job is the destination hostname and its IP. It is getting this from the /etc/hosts file. I confirmed this by editing my /etc/hosts and giving my host name the IB network's...
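To illustrate, the hosts entry I am talking about is just the usual one-liner, with the IB address in place of the LAN address (the 10.10.10.x subnet and the domain here are placeholders):

10.10.10.2   pve-2.mydomain.local   pve-2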
What does "determined over the cluster" mean? I changed the cluster network to IB but the migrations are not using it. They can't be because they are still going at single Gbit speed. I expected them to be as fast as SCP at the least and after setting the "migrations_unsecure:1" I expected it...
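For reference, that option goes in /etc/pve/datacenter.cfg, and at least on my version it is spelled without the extra "s":

migration_unsecure: 1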
My NFS servers are the PVE hosts themselves. Could that be the underlying issue? Is sharing certain of my zfs filesystems via the "sharenfs" option a bad practice? I am exploring another avenue for doing vm migrations between cluster hosts. I believe it was Fabian who turned me on to the...
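To be concrete, the sharing I mean is just the ZFS property, set along these lines (the dataset name and export options here are examples, not my real settings):

zfs set sharenfs='rw,no_root_squash' zpool_fast/nfsshare
zfs get sharenfs zpool_fast/nfsshare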
I should have quoted this first. After re-reading this a bunch of times I see that it says "both corosync AND zfs send/receive to IB". I found a wiki article on the former but have absolutely no idea how to do the latter. How do I move "zfs send/receive" to IB? I hope there is a simpler way...
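My working assumption is that it just means pointing the ssh half of the pipe at the IB address, i.e. something along these lines (the address, snapshot and dataset names are placeholders):

zfs send zpool_fast/vm-100-disk-1@snap1 | ssh root@10.10.10.2 zfs receive -F pve1zpool2/zsync/vm-100-disk-1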
I cannot find any setting per se that will define which network migrations should use. I am now wondering if I need to somehow rename my hosts so that PVE associates the IB network IP address with the host name. Is there a clean way to do this? Will it require rebooting the cluster? I see...
Here is an update.
I moved the Corosync network to the IB network and that seems to work. I also renamed the zpools and standardized my datasets, and at last have all my VM images moved back to the fast zpools, but when I do a "migrate" it still seems to use the regular production LAN interface...
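For completeness, renaming a zpool is just the usual export/import dance while nothing is using the pool (the old name below is an example):

zpool export zpool_slow_old
zpool import zpool_slow_old zpool_fast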