Hello,
I have a couple of Proxmox 4.4 deployments out in the wild. On occasion, one of them will have its "usedbysnapshots" value grow to an incredible size. If I do a zfs get all on the dataset, it shows usedbysnapshots at 279G. I've listed out all of the snapshots; there are 337 hourly snapshots...
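For reference, a minimal sketch of the commands involved, with tank/vm-100-disk-1 standing in as a placeholder dataset name:

# zfs get usedbysnapshots tank/vm-100-disk-1
# zfs list -t snapshot -o name,used,creation -s creation -r tank/vm-100-disk-1

Note that the per-snapshot "used" column only counts space unique to each snapshot, so those numbers can add up to far less than what usedbysnapshots reports.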
Here we go. I tried to capture arcstat.py output to go along with this, but I didn't realize until after I disconnected that the file it generated was 0 bytes. Oops. I can go back and get that if it's important. Currently arcsz and c are hanging out around 600MB. Here is the output of...
Here it is, though unfortunately on a clean boot, so c still shows 3.7G. I will keep an eye on it and repost whenever c changes value.
# cat /proc/spl/kstat/zfs/arcstats
6 1 0x01 91 4368 2442007230 349203140231
name type data
hits 4...
It looks like I don't have c_min recorded. I know we set "options zfs zfs_arc_max=4000000000" in /etc/modprobe.d/zfs.conf, but it never takes. We have a cron entry @reboot to echo 4000000000 into /sys/module/zfs/parameters/zfs_arc_max. Does that not set it to a static size?
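For what it's worth, a minimal sketch of both approaches, assuming a 4 GB cap; the usual reason the modprobe.d setting doesn't take is that the zfs module is loaded from the initramfs, so the option only applies after rebuilding it and rebooting:

# echo "options zfs zfs_arc_max=4000000000" > /etc/modprobe.d/zfs.conf
# update-initramfs -u
(reboot, then verify)
# grep -E '^(size|c|c_max) ' /proc/spl/kstat/zfs/arcstats

As far as I know, the runtime echo into /sys/module/zfs/parameters/zfs_arc_max is picked up and lowers c_max, but zfs_arc_max is a ceiling rather than a pinned size, so arcsz will still float below it.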
Hello,
I'm running into some confusion with the output of arcstat.py. As I understand it, at the tail end of the output arcsz is the current size of the ARC, and c is the ARC maximum size.
root@myprox:~# arcstat.py 1
time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c...
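In case it helps when comparing those columns against the raw kstats: arcsz corresponds to the "size" line, while "c" in the kstats is the ARC's target size and "c_max" is the configured maximum. A minimal sketch to pull just those three values:

# awk '$1 == "size" || $1 == "c" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats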
Hello,
We've been encountering some performance issues with VMs in general recently, and in particular with those on ZFS. We have a few strikes against us up front: we're using RAID-Z2 and 7200 RPM disks, which I know are bad for VM performance in general, but we're experiencing...
Looks like the 9207 is a proper HBA, so it's perfect for the job. There is one review of that card mentioning problems with firmware version 20 and Linux software RAID, so if you're using mdadm, watch out for that.
Other than that, this looks like one sexy setup.
I've had trouble simply reformatting disks that were in a ZFS pool, probably because ZFS is still protecting them. I've found it's easier to install the ZFS utilities, destroy the zpool the way ZFS wants you to, and then boot into the Proxmox installer.
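A minimal sketch of that sequence, assuming the pool is named rpool and /dev/sdX stands in for a former member disk (both placeholders):

# apt-get install zfsutils-linux    (just "zfsutils" on older releases)
# zpool import -f rpool
# zpool destroy rpool
# zpool labelclear -f /dev/sdX    (optional; repeat for anything that still shows ZFS labels)

With the pool destroyed and the labels cleared, the Proxmox installer should be able to repartition the disks without ZFS getting in the way.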
This sounds similar to a sneaky issue I had with Proxmox and ZFS that I hadn't run into until I set up a test server that was built up, torn down, and built up several times over. The first time, everything worked as expected. But on every subsequent rebuild, I would install Proxmox to a ZFS mirror...
Hello,
We are encountering a bug with the 4.4.35-1-pve kernel where the system reboots when under high load. This is a known problem, and we were able to identify it thanks to a post in the forum (woohoo!). But now we have a different problem. In the thread we were reading, it was suggested to...
So this is a strange little issue. We have a few VMs running on a Proxmox 4.0 machine. We needed a Windows 7 VM to act as a sort of management VM for some stuff running in the office. When this VM is running, after a couple of minutes we will no longer be able to access the web GUI. We will get...
The timing of your problem seems to match the problem this fix addresses. Have you seen this yet?
https://pve.proxmox.com/wiki/Storage:_ZFS#ZFS_mounting_workaround
I'm having trouble running apt-get update on a fresh install of Proxmox 3.1. It gets a good way into it, then hits enterprise.proxmox.com, and then fails with a 401. I put the URL in a browser, and sure 'nuff it's asking for authentication. I'm not sure how best to get around this, but I do...
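The usual workaround, assuming there is no subscription key to plug in, is to disable the enterprise repository and use pve-no-subscription instead; a minimal sketch for Proxmox 3.x (Debian wheezy):

# sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# apt-get update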
A few questions that may help in troubleshooting.
1. Do all of the Windows VMs, including the ones that don't crash, use iSCSI in the same fashion? e.g., all of the Windows VMs have their virtual OS disks on the iSCSI target.
2. Do both of the VMs that BSOD have the same error code?
3. What is...