enp2s0 is your physical NIC, and it reports NO-CARRIER and DOWN. Are you sure a cable is plugged into it, and does it have a link light?
Also, if you move the NIC to a different slot, I think the "enp2s0" name might change. Try something like "sudo dmesg -T | grep -3 enp2s0" to see what the kernel says about it.
OK, so inside my VM I ran
root@testnodepve:~# dd if=/dev/urandom of=/tmp/zerofile1G bs=1M count=1024
and the zvol usage grew, but only by a little:
vm01/vm-401-disk-0 referenced 6.65G
I am not sure how to explain it, but at least we can make careful observations and go from there.
In this case, my zvol has volblocksize 8K (from zfs get all | grep vm-401 | grep -i block), and inside it is actually an ext4 filesystem with a 4k block size (from tune2fs -l...
I guess one way to examine what is happening is to make a new disk of a small but round size, fill it from inside your VM, and see what it looks like on the ZFS side.
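A sketch of that observation loop, using the names from my system (vm01/vm-401-disk-0); adjust for yours:

```shell
# On the host: note the numbers before...
zfs get volsize,volblocksize,used,referenced vm01/vm-401-disk-0

# ...inside the VM: write a known amount of incompressible data...
dd if=/dev/urandom of=/tmp/fill1G bs=1M count=1024 conv=fsync

# ...and on the host again: see how much the zvol actually grew.
zfs get used,referenced vm01/vm-401-disk-0
```

Using /dev/urandom matters here: zeros would compress away and tell you nothing about real block accounting.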
The other thing you really need to keep track of is base2 vs base10 sizing, which you called "overhead"...
I guess I am coming at it from the opposite point of view; I never expected zvols to be "thin provisioned", so a 2TB zvol should take up 2TB in your pool (plus parity). Hence the ~2.7TB usage you see for your 4+2 RAIDZ2 pool.
Does Proxmox make "sparse" zvols by default? I think you want to...
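In fact, the base2/base10 difference plus the parity factor accounts for the number you saw. A quick back-of-the-envelope check (plain arithmetic, not ZFS output):

```shell
# 2 TB (vendor base10) expressed in TiB (base2), then scaled by the
# 4+2 RAIDZ2 factor: 6 disks hold every 4 disks' worth of data.
awk 'BEGIN {
  tib = 2e12 / 2^40;   # 2 TB in TiB
  raw = tib * 6 / 4;   # add 2 parity disks per 4 data disks
  printf "2 TB = %.2f TiB; with 4+2 RAIDZ2 parity ~= %.2f TiB raw\n", tib, raw
}'
```

Which lands right on the ~2.7T you are seeing.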
OK, I think maybe this is just how it's supposed to work, and I didn't have the right mental model of the NFS server traversing filesystems.
https://github.com/zfsonlinux/zfs/issues/8376
OK, I think I isolated the issue to a difference between a zfs dataset/filesystem and a regular directory.
e.g.
root@regulated01:~# systemctl stop nfs-kernel-server
root@regulated01:~# zfs destroy tank/regulated/ukbb
root@regulated01:~# mkdir /tank/regulated/ukbb/
root@regulated01:~# chown...
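That distinction matters for exporting: a child dataset is its own filesystem, so the NFS server won't cross into it unless it is exported separately or the export uses crossmnt. A sketch of the server-side /etc/exports line (the client subnet and options here are assumptions for illustration):

```
/tank/regulated  192.168.0.0/24(rw,no_subtree_check,crossmnt)
```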
If I remove the option
RPCMOUNTDOPTS="--manage-gids"
from /etc/default/nfs-kernel-server on the server side, then the mount operation just hangs indefinitely, and I can't seem to get an error out of either the client or the server.
I have what I think is a very simple setup, but I must be missing something very simple.
On the proxmox server, I have a zpool and a zfs dataset and I export it via NFS.
On the client (which happens to be CentOS 7.x) I mount the NFS share via NFSv3, and it shows different ownership and permissions...
OK, I figured it out, I needed to specify more info for pvesm:
root@pve-c1:~# pvesm add rbd hyperconverged --pool firstpool
And now my storage.cfg section looks like this:
rbd: hyperconverged
pool firstpool
Hi Alwin,
Thanks for your response. I read that section, but it seems to me that I only need to do something there if I have external Ceph. But I have the PVE hyper-converged Ceph.
Here were the commands in my history:
56 pveceph createpool firstpool -pg_num 1024
61 pvesm status
62...
Maybe the GUI doesn't use the correct pool name?
root@pve-c3:~# rbd ls
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.
rbd: list: (2) No such file or directory
root@pve-c3:~# rbd ls firstpool
root@pve-c3:~#
Hey all,
I did a total fresh install of a test cluster on three machines. All the ceph stuff seems to work just fine. I created the pool and added it to pve with
pvesm add rbd firstpool
It shows up in the GUI, but if I look at the contents tab, I get an error
rbd error: rbd: list: (2) No...
If you're having problems with your HBA or with your disks, you're not going to get good ZFS performance. Can you test/benchmark the HBA or the disks separately?
This sounds like a low-level network problem. I recommend upgrading your NIC firmware and your NIC kernel module (Intel's latest tends to be much newer than what ships in the kernel by default), then re-checking all your iperf3 performance. If you can't get reliable / reproducible...
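Something like this, where nodeA and /dev/sdb are placeholders; the idea is to benchmark the network and the raw disks independently of ZFS:

```shell
# Network: run a server on one node, then test from the other.
iperf3 -s                      # on node A
iperf3 -c nodeA -t 30 -P 4     # on node B: 30 seconds, 4 parallel streams

# Disks: raw sequential read of one device, bypassing ZFS and the
# page cache entirely.
dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct
```

If either of those numbers is bad or jumps around between runs, fix that first; ZFS can't outperform the hardware underneath it.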
New stock install of PVE.
I created a couple of test VMs; things work fine. Now I would like to delete them.
Somehow I can't find any "delete" or "destroy" option in the web UI, nor in the PDF guide or the forum search.
What is the procedure to delete a VM? Delete the config...
Are you sure your physical devices are 4Kn? E.g. an INTEL SSDSC2KB960G7 reports:
# smartctl --all /dev/sdb | grep "Sector Sizes"
Sector Sizes: 512 bytes logical, 4096 bytes physical
You can use the 'ashift=12' parameter when creating your zpool if you want to be able to add 4Kn devices in the future.
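For example, something like this (the pool name and devices are placeholders); ashift=12 forces 4 KiB-aligned allocations, so drives with a 4K physical sector, or future 4Kn drives, fit naturally:

```shell
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Confirm what the pool actually got:
zpool get ashift tank
```

Note that ashift is fixed per vdev at creation time, so it's worth getting right up front.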
Yes, it is just the latest download that I got from the Windows VM link in the PVE PDF guide:
root@amd01:/vm01/admin_stuff# ls -alh Windev*
-rw-r--r-- 1 root root 18G Feb 12 23:48 Windev1802Eval-disk1.vmdk
-rw-r--r-- 1 root root 193 Feb 12 23:48 Windev1802Eval.mf
-rw-r--r-- 1 root root 5.8K...
Wanted to follow up and say that instead of importing the OVF, I created a new VM with ID 100, imported only the disk, and that worked.
root@amd01:~# qm importdisk 100 /vm01/admin_stuff/Windev1802Eval-disk1.vmdk localzfspool
(100.00/100%)
Then I modified the vm config to use this disk.
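Roughly like this; the volume name (vm-100-disk-0) and the scsi0 bus are assumptions here, so check 'qm config 100' for the actual unused disk name on your system:

```shell
# Attach the imported disk and make it the boot disk.
qm set 100 --scsi0 localzfspool:vm-100-disk-0
qm set 100 --bootdisk scsi0
```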