It will probably work. It depends on what type of behavior you actually want.
You should read the following:
https://docs.oracle.com/cd/E19253-01/819-5461/gazvb/index.html or https://illumos.org/books/zfs-admin/gavwq.html#gazvb
That's why I asked. I couldn't find or think of a way to pipe the output or have it overwrite an existing folder/file. Thought maybe I was missing something, so I asked here to see.
I mean, I found out about a flag (or option, not sure which) that wasn't listed in the man page, so you never know.
I'm working on something and was wondering if vma extract output can be piped. Right now I'm looking at piping to rsync (that could change later).
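In case it helps frame the question, this is the two-step workflow I'm stuck with right now (filenames and paths are just placeholders):

vma extract vzdump-qemu-110.vma /tmp/extract
rsync -a /tmp/extract/ backuphost:/srv/restore/

What I'd like is to cut out the intermediate directory entirely.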
Thanks.
GVT-g is really new, niche, and not exactly a common thing (at least not right now) in the server space. Is it really that shocking this doesn't "100% work"? I've not read about many virtualization platforms supporting it at all. Not even bleeding-edge Linux distros.
Cool, wanted to make sure. Running a current version of QB on Server 2016, published as a RemoteApp, I see approx 150-250MB / user session. Up to 500MB if they are long-lived sessions and are doing something more extensive in the app. Surprisingly, CPU usage is much lower than I expected with only...
Didn't get to read everything you've written, but saw QuickBooks Enterprise and figured I'd offer some input, since I've run it on ESXi, Hyper-V, and Proxmox in production.
The recommended way I've always seen to deploy it is using Windows Terminal Server/RDS. The QB company file is a database...
Hello,
I'm wondering what the Proxmox team's recommendations are for ZFS. I see Proxmox now supports UEFI boot, and supposedly rpool supports all ZFS feature flags in this configuration.
During installation of Proxmox the "rpool" pool is created. The ROOT dataset is there for booting. And rpool/data is...
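For anyone who wants to see what the installer actually creates, it's easy to inspect (names assume a default install):

zfs list -r rpool
zpool get all rpool | grep feature@

The second command lists the pool's feature flags and whether each one is enabled/active.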
I should also mention that I disabled checksums to test performance on Node 1 after upgrading it to PVE 6.1. Remember, this is a mirrored pool with a SLOG, and I'm just testing FSYNC using pveperf. The results did NOT change. So it doesn't appear the issue is related to "lack of...
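For reference, the test was roughly this (dataset name is just an example):

zfs set checksum=off rpool/data
pveperf /rpool/data
zfs set checksum=on rpool/data

Setting checksum=on afterwards just restores the default (fletcher4).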
It appears that's not the whole story. Proxmox patched their kernel with the SIMD patch that was implemented in the OpenZFS/ZoL master branch. This is the "workaround", not a revert of the kernel dev team's changes to the exported symbols.
See here...
Has the Proxmox team patched the kernel for SIMD support? I think I'm seeing some comments on the pve kernel about patching this. If so, is it the ZFS team's proposed workaround, or just patching the kernel back to the old method from before the kernel team removed it?
You'll find a bunch of benchmarks for FreeNAS comparing NFS and iSCSI. Just know that lots of them are unfortunately poorly done. They both perform equally well when doing the same thing. The big difference usually comes from how the ZFS storage looks to the VM and whether it is lying about sync...
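If you want to rule out the "lying about sync" part when benchmarking, check what the dataset is actually configured to do (dataset name is just an example):

zfs get sync tank/vmstore
zfs set sync=standard tank/vmstore

sync=disabled is what makes some of those benchmarks look unrealistically fast, and sync=always is the honest-but-slow end of the scale.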
Well just looking at fletcher4 stats between Node 1 (zfs 0.7.13) and Node 3 (zfs 0.8.2), you can see a clear difference...
NODE1
------
root@pve01:~# cat /proc/spl/kstat/zfs/fletcher_4_bench
0 0 0x01 -1 0 6970133600 45976618831011140
implementation native byteswap
scalar...
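You can also check which fletcher4 implementation the module actually selected at runtime. On a build with working SIMD you'd expect the SSE/AVX variants to show up rather than just scalar/superscalar:

cat /sys/module/zfs/parameters/zfs_fletcher_4_impl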
Doing some more digging in GitHub issues related to the ZFS v0.8 releases, it appears there were multiple reports about drops in performance. Lots of people talking about SIMD, which from what I can tell was disabled moving from v0.7 to v0.8. And even in the latest release of ZFS (0.8.2), which I just...
I have 3 different nodes that I'm messing with. All 3 nodes are the same (servers, CPU, RAM, HBAs, etc.):
Intel R2224GZ4GC4 (barebones)
Intel S2600GZ (motherboard)
Intel E5-2670 x 2 (CPUs)
Hynix HMT31GR7CFR4A-H9 (8GB x 16) (RAM)
LSI 9340-8i (SAS3008 flashed to IT mode)
They all use ZFS. Was...
Not the same type of software; it's not a NAC. Packetfence is much more than a captive portal and guest access.
I installed it as a KVM VM and it worked. I'll go back and test it as a container later. I think I might not have disabled the firewall.
Thanks.
I just realized I didn't disable AppArmor. For some reason I just checked if selinux was disabled inside the container. So I've added;
features: nesting=1
lxc.apparmor.profile = unconfined
to /etc/pve/lxc/[id].conf. This is just for testing anyway, so I'm not too concerned. Will see if that...
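For anyone following along, the container needs to be restarted for those lines to take effect, and you can sanity-check AppArmor from the host (container ID is just an example):

pct stop 105 && pct start 105
aa-status

aa-status shows which profiles are loaded and which processes are confined.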
I spun up a CentOS 7 LXC and walked through the installation of Packetfence (a NAC). I got to the point in setup where it wants to connect to Active Directory and just haven't had any luck, and I'm not even getting anything useful in the logs. So I started researching a bit and found THIS.
Which...
Thanks. I'll do some further testing on my end with this in mind to make sure. I suspect that this will eventually get pushed up into the GUI, and I have the spare hardware right now to test it a bunch.
And now it looks like it is working. o_O
It transferred and cleaned up after itself. I used;
qm migrate 110 pve03 --online --with-local-disks --migration_type insecure
So do you have to use the --migration_type flag?