Recent content by chalex

  1. No internet on host after hardware change

    enp2s0 is your physical NIC, and it reports NO-CARRIER and DOWN. Are you sure a cable is plugged into it, and does it have a link light? Also, if you moved the NIC to a different slot, the "enp2s0" name might change. Try something like "sudo dmesg -T | grep -3 enp2s0"
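A quick way to check both possibilities (no link, or a renamed interface) is to list every interface with its kernel-reported state; this is a generic sketch, not specific to that machine:

```shell
# List every network interface with its kernel-reported link state.
# NO-CARRIER shows up as operstate "down"; a NIC that moved to a new
# PCI slot simply appears under its new enp<bus>s<slot> name.
for nic in /sys/class/net/*; do
    printf '%-12s %s\n' "$(basename "$nic")" "$(cat "$nic/operstate")"
done
```

If the old name is simply gone from the list, the card was most likely renamed after the slot change rather than dead.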
  2. Understanding disk usage of a VM with ZFS

    OK, so inside my VM I ran root@testnodepve:~# dd if=/dev/urandom of=/tmp/zerofile1G bs=1M count=1024 and the zvol usage grew, but only by a little: vm01/vm-401-disk-0 referenced 6.65G
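The same before/after measurement can be sketched on any Linux box; here a throwaway temp file stands in for the guest disk, and on the host side the analogous check for a zvol would be "zfs get -p referenced" before and after the write:

```shell
# Write 16 MiB of incompressible (random) data and compare the apparent
# file size against the blocks the filesystem actually allocated.
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1M count=16 status=none
stat -c 'apparent=%s bytes  allocated=%b blocks of %B bytes' "$f"
rm -f "$f"
```

Random data defeats compression, which is why it is a better probe than /dev/zero when you want allocation to track the bytes written.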
  3. Understanding disk usage of a VM with ZFS

    I am not sure how to explain it, but at least we can make careful observations and go from there. In this case, my zvol has volblocksize 8K (from zfs get all | grep vm-401 | grep -i block), and inside it is actually an ext4 filesystem with a 4K block size (from tune2fs -l...
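With those two observed numbers (8K volblocksize on the zvol, 4K blocks in the ext4 inside it), a little arithmetic shows that even the smallest guest write dirties a whole volblock:

```shell
# Sizes taken from the observations above: 8 KiB volblocksize, 4 KiB ext4.
volblock=8192
fsblock=4096
# A single 4 KiB guest write still rewrites one full 8 KiB volblock, so the
# minimum unit of allocation (and of parity) on the zvol is volblock-sized.
echo "amplification for a single ${fsblock}-byte write: $(( volblock / fsblock ))x"
```

This mismatch is one reason guest-side and host-side usage numbers drift apart.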
  4. Understanding disk usage of a VM with ZFS

    I guess one way to examine what is happening is to make a new disk of a small but round size, fill it from inside your VM, and see what it looks like on the ZFS side. The other thing you really need to keep track of is base-2 vs base-10 sizing, which you called "overhead"...
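The base-2 vs base-10 gap alone is substantial; as a hypothetical illustration with a nominal 2 TB disk:

```shell
# "2 TB" (base-10, as disks are marketed) vs "2 TiB" (base-2, as ZFS and
# most tools report). Hypothetical size chosen purely for illustration.
tb=$(( 2 * 1000 * 1000 * 1000 * 1000 ))   # 2 TB  = 2,000,000,000,000 bytes
tib=$(( 2 * 1024 * 1024 * 1024 * 1024 ))  # 2 TiB = 2,199,023,255,552 bytes
echo "difference: $(( tib - tb )) bytes"
```

That gap is roughly 199 GB at the 2 TB scale, which is easy to mistake for filesystem "overhead".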
  5. Understanding disk usage of a VM with ZFS

    I guess I am coming at it from the opposite point of view; I never expected zvols to be "thin provisioned", so a 2TB zvol should take up 2TB in your pool (plus parity). Hence the ~2.7TB of usage you see on your 4+2 RAIDZ2 pool. Does Proxmox make "sparse" zvols by default? I think you want to...
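For the parity part, a rough sketch of the raw-to-usable ratio for a 4+2 RAIDZ2 (ignoring padding and metadata, which add more on top):

```shell
# 4 data + 2 parity disks: every 4 usable bytes cost 6 raw bytes.
data=4; parity=2
awk -v d="$data" -v p="$parity" 'BEGIN { printf "raw/usable = %.2f\n", (d + p) / d }'
```

Whether Proxmox made the zvol sparse in the first place should be visible with zfs get refreservation on the zvol: to my understanding, "none" indicates a thin-provisioned (sparse) zvol, while a fully provisioned one has a reservation near its volsize.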
  6. nfs issue, shows different directory ownership/mode on client

    OK, I think this is just how it's supposed to work, and I didn't have the right mental model of the NFS server traversing filesystems. https://github.com/zfsonlinux/zfs/issues/8376
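The linked issue boils down to child ZFS datasets being separate filesystems, which the kernel NFS server does not cross by default; one hedged sketch of an /etc/exports entry that lets clients descend into nested filesystems (the path and client range here are made up):

```
/tank/regulated  192.168.1.0/24(rw,no_root_squash,crossmnt)
```

The alternative is to export (or zfs share) each child dataset explicitly.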
  7. nfs issue, shows different directory ownership/mode on client

    OK, I think I isolated the issue to a difference between a zfs dataset/filesystem and a regular directory. e.g. root@regulated01:~# systemctl stop nfs-kernel-server root@regulated01:~# zfs destroy tank/regulated/ukbb root@regulated01:~# mkdir /tank/regulated/ukbb/ root@regulated01:~# chown...
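The underlying difference is that a dataset is its own mounted filesystem, while mkdir creates a directory on the parent's filesystem; stat's device number makes that visible. Since not every machine has ZFS, /proc stands in here for a child mount:

```shell
# Two paths on the same filesystem share a device number; a mountpoint
# (here /proc, standing in for a ZFS child dataset) gets its own.
stat -c '%n dev=%d' / /proc
```

An NFSv3 client that mounts only the parent export sees the underlying directory, not the filesystem mounted over it, which matches the differing ownership/mode observed.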
  8. nfs issue, shows different directory ownership/mode on client

    If I remove the option RPCMOUNTDOPTS="--manage-gids" from /etc/default/nfs-kernel-server on the server side, the mount operation just hangs indefinitely, and I can't get any error out of either the client or the server.
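For reference, on Debian-based servers the option lives in the defaults file read by the init scripts, roughly like this (shown as it looked before removal; rpc.mountd must be restarted, e.g. via the nfs-kernel-server service, for a change to take effect):

```
# /etc/default/nfs-kernel-server
RPCMOUNTDOPTS="--manage-gids"
```

With --manage-gids, rpc.mountd resolves a user's supplementary groups on the server instead of trusting the (16-group-limited) list sent by the client.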
  9. nfs issue, shows different directory ownership/mode on client

    I have what I think is a very simple setup, but I must be missing something very simple. On the Proxmox server, I have a zpool and a ZFS dataset, and I export it via NFS. On the client (which happens to be CentOS 7.x) I mount the NFS share via NFSv3, and it shows different ownership and permissions...
  10. new pve 5.3 3-node cluster install with ceph, minor issue

    OK, I figured it out, I needed to specify more info for pvesm: root@pve-c1:~# pvesm add rbd hyperconverged --pool firstpool And now my storage.cfg section looks like this: rbd: hyperconverged pool firstpool
  11. new pve 5.3 3-node cluster install with ceph, minor issue

    Hi Alwin, thanks for your response. I read that section, but it seems to me that I only need to do something if I have external Ceph, whereas I have the PVE hyper-converged Ceph. Here are the commands from my history: 56 pveceph createpool firstpool -pg_num 1024 61 pvesm status 62...
  12. new pve 5.3 3-node cluster install with ceph, minor issue

    Maybe the GUI doesn't use the correct pool name? root@pve-c3:~# rbd ls rbd: error opening default pool 'rbd' Ensure that the default pool has been created or specify an alternate pool name. rbd: list: (2) No such file or directory root@pve-c3:~# rbd ls firstpool root@pve-c3:~#
  13. new pve 5.3 3-node cluster install with ceph, minor issue

    Hey all, I did a totally fresh install of a test cluster on three machines. All the Ceph stuff seems to work just fine. I created the pool and added it to PVE with pvesm add rbd firstpool. It shows up in the GUI, but if I look at the Contents tab, I get an error: rbd error: rbd: list: (2) No...