Ran apt-get dist-upgrade on an old node yesterday as part of a migration plan. The upgrade went fine, with no issues afterwards. About an hour ago, we lost external connectivity to a mailserver running in CT106 there. The node is connected to the router with a single GigE cable, with the main node address...
By 'This' do you mean VPLS?
If so, and you do have good speed/latency between the centers, you could still use something like an IPsec tunnel in the meantime.
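For example, a strongSwan site-to-site configuration only needs a few lines. This is a rough sketch; strongSwan is an assumption on my part, and all addresses and subnets below are placeholders:

# /etc/ipsec.conf -- minimal site-to-site sketch
conn dc-a-to-dc-b
    left=198.51.100.1          # site A public address (placeholder)
    leftsubnet=10.1.0.0/16     # LAN behind site A (placeholder)
    right=203.0.113.1          # site B public address (placeholder)
    rightsubnet=10.2.0.0/16    # LAN behind site B (placeholder)
    authby=secret              # pre-shared key from /etc/ipsec.secrets
    auto=start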
30 TB is a fair bit of data, but the bandwidth required really depends on how dynamic it is. What bandwidth (speed) connections are available at each end? Have you tried running iperf between the nodes (both directions)?
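If you haven't used it before, the basic pattern looks like this (iperf3 assumed to be installed on both ends; the hostname is a placeholder):

iperf3 -s                          # on node A, run as the server
iperf3 -c node-a.example.com       # on node B, measure B -> A
iperf3 -c node-a.example.com -R    # reverse the direction, A -> B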
By 'tunnel' I meant something like VPLS (assuming the intervening...
I was not aware that the addresses provided were placeholders.
As I previously suggested, a tunnel might be the best way to handle this. What kind of bandwidth do you have available on these connections, and how much of that do you want to use for the HA traffic?
Sorry - I should have read more carefully before replying. The LSI running RAID10 will probably outperform ZFS on speed, and will use far less memory. As long as I had enough RAM, I would still choose ZFS for its reliability, resiliency, and reduced risk of silent corruption. Several articles...
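On the silent-corruption point: ZFS can verify every block on demand and, given redundant vdevs, repair what it finds, which conventional RAID generally cannot. For example ('tank' is a placeholder pool name):

zpool scrub tank       # re-read and checksum every block in the pool
zpool status -v tank   # per-device read/write/checksum error counters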
What is the WAN comprised of? Is it a private link between two datacenters or is there public Internet involved?
Are there firewalls in place on either end?
I believe RSTe is a BIOS RAID and not a true hardware RAID. ZFS would be my choice there.
http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/ is worth a read before you finalize your configuration.
Please correct me if I'm wrong, but 'dripping' an IP seems like a translation problem.
Can you ping from each of those IP's to the other? If not, you may need to create a tunnel between the sites that will route those prefixes.
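As a rough sketch of what I mean (plain GRE shown for brevity; all addresses and prefixes are placeholders, and the far side would mirror this configuration):

# site A: tunnel to site B's public address, then route B's prefix into it
ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.1 ttl 255
ip addr add 10.255.0.1/30 dev gre1
ip link set gre1 up
ip route add 10.2.0.0/16 via 10.255.0.2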
Fixed by rsyncing to xpool/data/subvol-101-disk-1 instead of /var/lib/lxc/101.
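For anyone who finds this later, the fix amounted to something like this (the source path is a placeholder for wherever the old rootfs lives, and the exact flags are illustrative):

rsync -aHAX --numeric-ids /path/to/old/rootfs/ /xpool/data/subvol-101-disk-1/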
How do these two locations interact? Are they overlaid somehow? What writes to which?
Thank you!
Please forgive me if I'm missing something obvious here, but I'm new to ZFS storage on Proxmox. I installed PVE 4.4 on a small SSD, then added a 4TB ZFS pool built on 4kn SATA drives. I added the storage and assigned it roles, resulting in this:
Enabled: Yes
Active: Yes
Content...
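For reference, creating such a pool and adding it as storage looks roughly like this (pool name, vdev layout, and device names are illustrative; ashift=12 matches the drives' 4k-native sectors):

zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
pvesm add zfspool tank --pool tank --content images,rootdir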
I read through that a while back, and it gave me the impression that running Docker in an LXC container would be mildly difficult and occasionally buggy. The thread is nearly two years old at this point. Has anyone tried running Docker in an LXC on Proxmox 4.4? I have an app I need to deploy which is...
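For anyone replying: the raw LXC keys I've seen suggested for this elsewhere (unverified by me on 4.4, and they do weaken the container's isolation) are:

# appended to /etc/pve/lxc/<CTID>.conf -- loosens confinement so the
# Docker daemon can start; reduces isolation, so use with care
lxc.aa_profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop: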
I must be missing something...
root@pve1:/var/lib/vz/template/ubuntu.16.04# ls -l
total 4
-rw-r--r-- 1 root root 0 Jan 2 16:01 logfile
-rw-r--r-- 1 root root 604 Jan 2 16:18 ubuntu-xenial-standard-64_dab.conf
root@pve1:/var/lib/vz/template/ubuntu.16.04# dab init
no 'architecture'...
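As far as I can tell from the DAB docs, dab looks for a file named exactly dab.conf in the working directory, and the 'no architecture' error is what it reports when it cannot read one. A minimal dab.conf looks roughly like this (all values illustrative):

Suite: xenial
Architecture: amd64
Name: ubuntu-16.04-minimal
Version: 16.04-1
Section: system
Maintainer: Your Name <you@example.com>
Description: Minimal Ubuntu 16.04 server template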
The intent here was to get Proxmox installed on (and booting from) a small but reliable ZFS array. Given that the 4.x installer will install to ZFS, perhaps there is a recipe for moving just the boot to another device?
Thank you.
I did put a dab.conf file (from https://git.proxmox.com/?p=dab-pve-appliances.git;a=summary) into the directory but got the same results. Where is that file supposed to live?
The 16.04 template via pveam is a full system, including X11 and more. I'd like to create a minimal server template and have read the DAB documentation, but I don't quite understand how (or if) I can start from scratch with an ISO to build only what I need. Is this possible, or do I have to...