Search results

  1. openvswitch permissions missing?

    further testing reveals that PVEAuditor permissions at "/" are adequate to let the user see vmbr0, but VM creation fails with: Permission check failed (/sdn/zones/localnetwork/vmbr0, SDN.Use) (403). Oh, even though I'm not [knowingly!] using SDN in any way, adding them as "SDNUser" at "/" seems... (see the ACL sketch after these results)
  2. openvswitch permissions missing?

    I'm trying to use pools and roles to allow limited user self-service, but I'm stuck on allowing them to create their own VMs. The sticking point appears, I think(???), to be that I'm using Open vSwitch. Open vSwitch works great for my needs, but I don't see any permissions for it in the PVE...
  3. Sheepdog rollback deletes disk

    No, it just had the sound of a potential root cause, but if you're sure the sheepdog behaviour described therein only affects iSCSI, then it's probably not relevant here. Also, additional testing reveals that sheepdog is the problem, not PVE: even doing "dog rollback" from the CLI unexpectedly... (see the dog CLI sketch after these results)
  4. Sheepdog rollback deletes disk

    As far as we can determine: we took a live snapshot of a powered-on VM running on Sheepdog storage (not including memory). A little while later, we powered off the VM and attempted to roll back. Rollback failed, complaining that the disk already existed. Looks like we attempted to roll back more than...
  5. Sheepdog rollback deletes disk

    We've just encountered a situation where we took a snapshot of a sheepdog-based VDI, then later tried to roll it back - the rollback failed, and not only didn't it roll back the disk, it deleted the base disk, too. Can anyone else report on their success or failure using snapshots with...
  6. Sheepdog storage not thin

    Still not as stable as CEPH. I just tried rolling back to a snapshot, only to have sheepdog delete the base disk when the rollback failed. Not exactly a graceful failure mode... Sheepdog also fails miserably at online cluster operations like altering the # of copies kept - the entire cluster...
  7. Sheepdog storage not thin

    Oh, shoot, yes online-vs-offline would explain the results I'm seeing. I wasn't aware of that limitation. Some of the disks I can trim - and now have, to good effect - but others predate OSes with TRIM support (notably one particular Win2008R2 domain controller with a needlessly large disk). I...
  8. Sheepdog storage not thin

    I've been migrating VMs from NFS to Sheepdog, and I'm suddenly noticing that some of the virtual disks are consuming 100% of their stated capacity, i.e. thick-provisioned, not thin. Sheepdog is supposed to be able to thin-provision... and in fact does for some other disks - how do I "thin-ify"... (see the discard/fstrim sketch after these results)
  9. Insane load avg, disk timeouts w/ZFS

    Only consistent kernel panics :-). It was not an acceptable solution.
  10. How to downgrade CEPH from Jewel to Hammer?

    Dietmar, as I've said elsewhere - this wasn't an upgrade. I expect the upgrade to work fine. What's broken for me is a brand-new, fresh install of Jewel on a fresh install of 4.3 immediately updated to 4.4.
  11. How to downgrade CEPH from Jewel to Hammer?

    Since there's a showstopper bug with creating new CEPH clusters in Jewel, I had to figure out how to remove it and reinstall Hammer the hard way: 0. "pveceph install -version hammer" to reset the apt/sources.list.d/ceph.list repository (or just fix it by hand) 1. "dpkg-query --list | grep... (see the downgrade sketch after these results)
  12. Cannot create jewel OSDs in 4.4.5

    Matthieu, lsblk never reports anything for the second partition. There is no filesystem on the second partition. Partitions do not get UUIDs by themselves; only whole disks and filesystems get UUIDs, so it makes sense that there is no UUID on the second partition. In any case, I can't...
  13. Cannot create jewel OSDs in 4.4.5

    Granted, but recall this was a fresh install of Jewel; Hammer was never installed here. I'll try following some of that anyway (once the site is back up...) and see what happens.
  14. Cannot create jewel OSDs in 4.4.5

    Nope. /dev/sdc1 is owned by ceph/ceph, but /dev/sdc2 is owned by root/root.
  15. Cannot create jewel OSDs in 4.4.5

    Nothing obviously wrong to me: root@pve1:~# ls -ld /var/lib/ceph/ drwxr-x--- 8 ceph ceph 4096 Dec 30 13:07 /var/lib/ceph/ The error tends more to indicate (to me, anyway) that the mountpoint wasn't chown'd after being mounted there. (All disks under linux AFAIK mount as root... is there a...
  16. Cannot create jewel OSDs in 4.4.5

    At the last step (ceph-disk activate), I get: root@pve1:~# ceph-disk activate /dev/sdc1 got monmap epoch 1 mount_activate: Failed to activate ceph-disk: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'2', '--monmap', '/var/lib/ceph/tmp/mnt.4Y75ui/activate.monmap', '--osd-data'...
  17. Cannot create jewel OSDs in 4.4.5

    I mean that no symlink for /dev/sdc2 exists in /dev/disk/by-partuuid until I put a filesystem on /dev/sdc2. Should I just change the symlink to the literal "/dev/sdc2"? Right now the symlink points to a non-existent entry in /dev/disk/by-partuuid.
  18. Proxmox 4.4.5 kernel: Out of memory: Kill process 8543 (kvm) score or sacrifice child

    I reformatted one of my two affected servers (both Dell PowerEdge 2950-III systems, one with 28GB RAM, one with 16GB RAM) to not have any ZFS data pools whatsoever. The non-ZFS server now survives the nightly backups. The ZFS server still kills *both* VMs running on it. Just before the OOM... (see the ZFS ARC sketch after these results)
  19. Cannot create jewel OSDs in 4.4.5

    Not unless I put a filesystem on /dev/sdc2. What type should it be?
  20. Cannot create jewel OSDs in 4.4.5

    1. "activate.monmap" is not owned by ceph/ceph, it's owned by root/root. 2. "journal" points to a partition UUID that doesn't exist. 3. "ceph-osd --mkjournal -i 0" fails, with this error: root@pve1:~# ceph-osd --mkjournal -i 0 2017-01-03 10:45:15.398375 7f22b6992800 -1...
