Recent content by athompso

  1. openvswitch permissions missing?

    Further testing reveals that the PVEAuditor role at "/" is adequate to let the user see vmbr0, but VM creation fails with: Permission check failed (/sdn/zones/localnetwork/vmbr0, SDN.Use) (403). Oh, even though I'm not [knowingly!] using SDN in any way, adding them as "SDNUser" at "/" seems...
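    For anyone hitting the same 403, a minimal sketch of that workaround, assuming a user named alice@pve (the username is hypothetical) and a recent pveum (older releases used "pveum aclmod" instead):

      # Grant the built-in SDNUser role at "/" so the SDN.Use check on
      # /sdn/zones/localnetwork/vmbr0 passes.
      pveum acl modify / --users alice@pve --roles SDNUser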
  2. openvswitch permissions missing?

    I'm trying to use pools and roles to allow limited user self-service, but I'm stuck on allowing them to create their own VMs. The sticking point appears, I think(???), to be that I'm using Open vSwitch. Open vSwitch works great for my needs, but I don't see any permissions for it in the PVE...
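    As a sketch of the pool-based setup being attempted here (pool and user names are hypothetical, and "pveum pool add" only exists on newer PVE releases; older ones created pools via the GUI):

      # Create a resource pool and let one user administer VMs inside it.
      pveum pool add devpool --comment "self-service VMs"
      pveum acl modify /pool/devpool --users alice@pve --roles PVEVMAdmin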
  3. Sheepdog rollback deletes disk

    No, it just had the sound of a potential root cause, but if you're sure the sheepdog behaviour described therein only affects iSCSI, then it's probably not relevant here. Also, additional testing reveals that sheepdog is the problem, not PVE: even doing "dog rollback" from the CLI unexpectedly...
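    For reference, the CLI reproduction amounts to roughly this (the VDI and snapshot names are hypothetical):

      # Snapshot a sheepdog VDI, then roll back to that snapshot.
      dog vdi snapshot -s snap1 vm-100-disk-1
      dog vdi rollback -s snap1 vm-100-disk-1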
  4. Sheepdog rollback deletes disk

    As far as we can determine: we took a live snapshot of a powered-on VM running on Sheepdog storage (not including memory). A little while later, we powered off the VM and attempted to roll back. The rollback failed, complaining that the disk already existed. It looks like we attempted to roll back more than...
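    In PVE terms the sequence was roughly the following (the VMID and snapshot name are hypothetical):

      qm snapshot 100 presnap   # live snapshot, RAM not included
      qm shutdown 100           # powered off a little while later
      qm rollback 100 presnap   # failed, complaining the disk already existed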
  5. Sheepdog rollback deletes disk

    We've just encountered a situation where we took a snapshot of a sheepdog-based VDI and later tried to roll it back - the rollback failed, and not only did it not roll back the disk, it deleted the base disk, too. Can anyone else report on their success or failure using snapshots with...
  6. Sheepdog storage not thin

    Still not as stable as CEPH. I just tried rolling back to a snapshot, only to have sheepdog delete the base disk when the rollback failed. Not exactly a graceful failure mode... Sheepdog also fails miserably at online cluster operations like altering the # of copies kept - the entire cluster...
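    For context, sheepdog's copy count is normally fixed when the cluster is formatted, which is part of why changing it on a live cluster is such a disruptive operation; a sketch:

      # The redundancy level is chosen at format time.
      dog cluster format --copies 3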
  7. Sheepdog storage not thin

    Oh, shoot, yes, online-vs-offline would explain the results I'm seeing. I wasn't aware of that limitation. Some of the disks I can trim - and now have, to good effect - but others predate OSes with TRIM support (notably one particular Win2008R2 domain controller with a needlessly large disk). I...
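    The trim workflow referred to above is roughly this, assuming a disk attached with discard enabled (the VMID, bus slot and volume name are hypothetical):

      # On the PVE host: enable discard on the virtual disk.
      qm set 100 --scsi0 sheepdog:vm-100-disk-1,discard=on
      # Inside a guest whose OS supports TRIM:
      fstrim -av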
  8. Sheepdog storage not thin

    I've been migrating VMs from NFS to Sheepdog, and I'm suddenly noticing that some of the virtual disks are consuming 100% of their stated capacity, i.e. thick-provisioned, not thin. Sheepdog is supposed to be able to thin-provision... and in fact does for some other disks - how do I "thin-ify"...
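    One plausible way to "thin-ify" such a disk - offered only as a sketch, since it needs downtime and scratch space, and the volume names are hypothetical - is to copy it through qemu-img, which skips zeroed blocks and so writes a sparse copy (qemu of that era spoke the sheepdog: protocol directly):

      qemu-img convert -p sheepdog:vm-100-disk-1 sheepdog:vm-100-disk-1-thin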
  9. Insane load avg, disk timeouts w/ZFS

    Only consistent kernel panics :-). It was not an acceptable solution.
  10. How to downgrade CEPH from Jewel to Hammer?

    Dietmar, as I've said elsewhere - this wasn't an upgrade. I expect the upgrade to work fine. What's broken for me is a brand-new, fresh install of Jewel on a fresh install of 4.3 immediately updated to 4.4.
  11. How to downgrade CEPH from Jewel to Hammer?

    Since there's a showstopper bug with creating new CEPH clusters in Jewel, I had to figure out how to remove it and reinstall Hammer the hard way:
    0. "pveceph install -version hammer" to reset the apt/sources.list.d/ceph.list repository (or just fix it by hand)
    1. "dpkg-query --list | grep...
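    Fleshing out the shape of those steps as a hedged sketch (the post is truncated here, so the grep pattern and the purge step below are assumptions about what likely followed):

      # 0. reset apt/sources.list.d/ceph.list to the hammer repository
      pveceph install -version hammer
      # 1. find the installed jewel packages
      dpkg-query --list | grep ceph
      # presumably followed by purging them and reinstalling from hammer:
      apt-get purge 'ceph*' && pveceph install -version hammer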
  12. Cannot create jewel OSDs in 4.4.5

    Matthieu, lsblk never reports anything for the second partition. There is no filesystem on the second partition. Partitions do not get UUIDs by themselves; only whole disks and filesystems get UUIDs, so it makes sense that there is no UUID on the second partition. In any case, I can't...
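    For what it's worth, lsblk can show filesystem UUIDs and GPT partition UUIDs side by side, which makes that distinction easy to check:

      # UUID belongs to a filesystem; PARTUUID belongs to the GPT
      # partition entry and exists even with no filesystem present.
      lsblk -o NAME,FSTYPE,UUID,PARTUUID /dev/sdc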
  13. Cannot create jewel OSDs in 4.4.5

    Granted, but recall this was a fresh install of Jewel; Hammer was never installed here. I'll try following some of that anyway (once the site is back up...) and see what happens.
  14. Cannot create jewel OSDs in 4.4.5

    Nope. /dev/sdc1 is owned by ceph/ceph, but /dev/sdc2 is owned by root/root.
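    That mismatch can be fixed by hand, though the change won't survive a reboot unless udev re-applies it; a minimal sketch:

      # Jewel's OSD daemons run as the ceph user, not root, so the
      # journal partition must be readable/writable by ceph.
      chown ceph:ceph /dev/sdc2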
  15. Cannot create jewel OSDs in 4.4.5

    Nothing looks obviously wrong to me:
    root@pve1:~# ls -ld /var/lib/ceph/
    drwxr-x--- 8 ceph ceph 4096 Dec 30 13:07 /var/lib/ceph/
    The error tends to indicate (to me, anyway) that the mountpoint wasn't chown'd after being mounted there. (All disks under linux AFAIK mount as root... is there a...
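    If that theory is right, the corresponding fix - hedged, since the exact OSD path below is hypothetical - would be to chown the mountpoint after it's mounted:

      # After the OSD filesystem is mounted, hand it to the ceph user.
      chown -R ceph:ceph /var/lib/ceph/osd/ceph-0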