Search results

  1.

    [SOLVED] Proxmox PAM Authentication not working against SSSD

    You can take a look at the files here: https://git.fws.fr/fws/ansible-roles/src/branch/master/roles/sssd_ad_auth/templates (the various deb_xxx files). I'm using those to auth my PVE hosts against a Samba4 domain with sssd. Cheers, Daniel
  2.

    Possibility to set quota for datastores

    You can use ZFS and set a quota on the dataset
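
    The answer above can be sketched as follows; the pool and dataset names are placeholders, assuming one ZFS dataset per datastore:

    ```shell
    # Cap the datastore's dataset at 500 GiB (names are hypothetical):
    zfs set quota=500G tank/pbs-datastore

    # Verify the quota took effect:
    zfs get quota tank/pbs-datastore
    ```

    Using refquota instead of quota would exclude snapshots and child datasets from the limit.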
  3.

    Why I cannot use a stp bridge to do a full mesh?

    You can still enable STP in /etc/network/interfaces if you want to. All the SDN stuff, including L3 routing, is for situations where simple bridges are not an option (e.g. when you do not control the L2 between your nodes, so no VLAN, or when your hoster doesn't provide QinQ, etc.)
  4.

    Why I cannot use a stp bridge to do a full mesh?

    I do not understand. You can't do it, but you've done it? That doesn't make a lot of sense. You can't do it through the GUI? Well, it's an advanced option most users will never need to understand or set. You can always set it in /etc/network/interfaces manually
  5.

    VM drive read only since poweroutage

    Well, you have saturated your LVM-thin storage. You have to free some space (either extend the thin pool or remove some of the volumes) before trying to recover data inside the VM.
  6.

    How to manage file storage connected via iSCSI?

    There's no one true and best way to do this. It depends on a lot of factors, including whether you have another temporary storage available, how much downtime you can afford, the size of the disks, etc. You can for example do offline migrations with qm importdisk. It'll convert from vmdk to raw while copying...
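
    A rough sketch of the offline-migration path mentioned above; the VM ID, source path, and storage name are placeholders:

    ```shell
    # Import a VMware disk into VM 120; qm importdisk converts
    # vmdk to raw while copying onto an LVM-backed storage:
    qm importdisk 120 /mnt/old-san/vm-120.vmdk local-lvm

    # The disk then appears as "unused" on VM 120; attach it, e.g. as scsi0:
    qm set 120 --scsi0 local-lvm:vm-120-disk-0
    ```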
  7.

    How to manage file storage connected via iSCSI?

    I don't understand your question. With a SAN, I'd use LVM over iSCSI.
  8.

    How to manage file storage connected via iSCSI?

    One way to have a managed storage is to stack LVM on top of your iSCSI device. Note that such a setup would provide a block-based storage, so not usable for files (no template/ISO/snippets, nor backup dumps). Also, while the storage can be shared across all your nodes, it doesn't support thin...
  9.

    [SOLVED] HA Cluster with 2 node proxmox.

    Hardware specs can be different, no problem
  10.

    [SOLVED] HA Cluster with 2 node proxmox.

    The 3rd node can be replaced with another qdevice to provide the needed vote. See https://pve.proxmox.com/wiki/Cluster_Manager
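
    A minimal sketch of the QDevice setup described in the linked wiki page; the 10.0.0.5 address is a placeholder for the external vote-provider machine:

    ```shell
    # On the external QDevice host:
    apt install corosync-qnetd

    # On both cluster nodes:
    apt install corosync-qdevice

    # From one cluster node, register the QDevice:
    pvecm qdevice setup 10.0.0.5

    # Expected votes should now be 3:
    pvecm status
    ```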
  11.

    [SOLVED] HA Cluster with 2 node proxmox.

    You need 3 nodes to have HA working
  12.

    Error with pve-firewall after recent update

    An update has been pushed with a fix for this. See https://bugzilla.proxmox.com/show_bug.cgi?id=2719
  13.

    (SOLVED) LVM over iSCSI not working with multipath devices

    Well, it depends on your needs. I'll personally always favor iSCSI+LVM over NFS (for performance and reliability reasons). Block-based storage is usually preferred. But with NFS, you can use qcow2, so thin provisioning and snapshots.
  14.

    (SOLVED) LVM over iSCSI not working with multipath devices

    You need to configure the multipath device in /etc/multipath.conf, and exclude the underlying devices from LVM, so that LVM only uses the multipath "view". See https://pve.proxmox.com/wiki/ISCSI_Multipath
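
    A minimal sketch of such a setup; the wwid and device names below are placeholders, read the real wwid with /lib/udev/scsi_id:

    ```shell
    # Find the LUN's wwid (device name is an example):
    /lib/udev/scsi_id -g -u -d /dev/sdb

    # /etc/multipath.conf fragment: blacklist everything,
    # then whitelist that wwid (placeholder value):
    #   blacklist {
    #       wwid .*
    #   }
    #   blacklist_exceptions {
    #       wwid "3600508b400105e210000900000490000"
    #   }

    systemctl restart multipathd
    multipath -ll   # the LUN should show up once, as a single mpath device
    ```

    The LVM exclusion is typically a global_filter in /etc/lvm/lvm.conf that accepts /dev/mapper/* plus the local boot disk and rejects everything else.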
  15.

    [SOLVED] Getting permission denied on ZFS-shares after upgrade to Proxmox 6.1 (from 5.4)

    If it's an NFS export from a ZFS dataset, it's probably because since ZFS on Linux 0.8.3, the no_root_squash option isn't set anymore by default, and must be set explicitly, like zfs set sharenfs='rw,no_root_squash' zpool/foo/bar
  16.

    [SOLVED] NFS problem after latest upgrade .. vzdump backup status: backup failed

    The issue is most likely the change in ZFS 0.8.3: default NFS exports do not set no_root_squash anymore. You must set it explicitly now, e.g. zfs set sharenfs='rw,no_root_squash' zpool/dataset
  17.

    [SOLVED] Unexpected pool usage of ZVOL created on RAIDZ3 Pool vs. Mirrored Pool

    This is due to padding when using a small volblocksize with raidz. See https://www.reddit.com/r/zfs/comments/b6dm4y/raidz2_used_size_double_logical_size_in_proxmox_53/?utm_source=amp&utm_medium=&utm_content=post_body for example. Try using a 16k volblocksize (or whatever best value for your raidz...
  18.

    /boot partition full after latest kernel upgrade

    You should indeed reboot to run the latest kernel anyway. But I recommend doing so after freeing some space and reinstalling the latest kernel, to be sure
  19.

    /boot partition full after latest kernel upgrade

    You can remove old kernels manually. List them with dpkg -l | grep pve-kernel, then apt remove --purge pve-kernel-xxx. Keep only the last 3 versions. If the disk was full during the last update, reinstall the last one with apt reinstall pve-kernel-xxx
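
    The steps above, as an annotated command sketch (the kernel version strings are examples only):

    ```shell
    df -h /boot                  # confirm the partition is full
    dpkg -l | grep pve-kernel    # list installed kernels

    # Remove an old kernel (example version string):
    apt remove --purge pve-kernel-5.3.10-1-pve

    # If the upgrade ran out of space, reinstall the newest one:
    apt reinstall pve-kernel-5.3.18-2-pve
    ```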
  20.

    Is it possible to use snapshot of one vm for another vm?

    No. You can clone VM 100 from this snapshot but that's it.