Search results

  1.

    All lxc fails to launch after apt update and upgrade.

    It seems as if your container is started before the storage is available (seeing the '865 No such file or directory - Failed to receive the container state' message). Have you checked whether all your storage is available to PVE?
  2.

    Appliances and their shutdown behaviour

    In my setup I am running some appliances which refuse to shut down when I issue a host reboot. As these are 'closed' appliances I am not able to install helpers like the QEMU guest agent to get around this. So the only solution is to open a console and shut them down from there (else Proxmox...
  3.

    Proxmox host locked to my static IP and double authentication

    I use a reverse proxy with authentication and authorisation (including 2FA) to expose the management UI to the public side. This way I control what's being exposed.
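
    Such a setup could look roughly like the following nginx site config. This is a minimal sketch with hypothetical hostnames and addresses; the actual authentication/2FA layer depends on the proxy and identity provider used and is not shown here:

    ```nginx
    # Hypothetical reverse-proxy config for the Proxmox web UI.
    # 'pve.example.com' and the backend IP/port are placeholders.
    server {
        listen 443 ssl;
        server_name pve.example.com;

        ssl_certificate     /etc/ssl/certs/pve.example.com.pem;
        ssl_certificate_key /etc/ssl/private/pve.example.com.key;

        location / {
            proxy_pass https://192.168.1.10:8006;
            # The noVNC/xterm.js consoles need websocket upgrades:
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
    ```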
  4.

    update-initramfs failed - No /etc/kernel/pve-efiboot-uuids found

    I don't see an error, just a piece of information... On my DL360 G7s I also see it when I update the kernel (the pve-efiboot-uuids message), but I have never had any issues.
  5.

    Shared crontab between nodes

    Or set up a VM/LXC for Puppet (or similar) and let that control your nodes' crontabs: one point of administration, and it's all synced.
  6.

    Best practices for setting up physical disks

    I set up Proxmox servers by installing a Debian 10 server first (which allows me to partition as I see fit) and then moving to Proxmox. This gave me the freedom to lay out partitions as I wanted. Below my partition layout: 500M boot (xfs) – primary partition, 20GB root (xfs) – primary partition...
  7.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Offering raw LVM (or LVM-thin) does not give me the capability of taking snapshots, which in my environment is very much required (I test customer cases/issues). So prior to starting to test a case, from the 'base' product I take a snapshot, then tune further into the situation, test, report back on...
  8.

    [SOLVED] NFS Storage Cleanup?

    Hi, good to hear you found out where the issue was :) Could you please mark the topic as 'Solved'?
  9.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Something I had forgotten to mention in all of the previous is that the directory being offered to Proxmox is not set to shared. As the GFS2 filesystem takes care of this by itself, it is not needed to set the directory to 'shared'.
  10.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Another update after a while. I extended my storage, so I now have enough for the future. One of the things I had to do was migrate all VM/CT storage devices from the original raw LVM device I offered to Proxmox to the GFS2-backed volume (offered as a directory). When I had completed the...
  11.

    Upgrade proxmox 5.3-5 to 6.0

    Clusters do require the pre-upgrade of all nodes to Corosync 3.x. From what I have experienced on my 4-node cluster, following the exact procedure described in the upgrade docs, I had no issues at all.
  12.

    Issue on connection between node to cluster

    You could fake the quorum part by setting 'pvecm expected 1'; this would at least get your VMs running, but it won't solve the cluster issues you are facing.
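
    A sketch of the commands involved (these are Proxmox-specific and only make sense on a PVE node; use with care, as lowering the expected votes bypasses the quorum protection):

    ```shell
    # Temporarily tell the cluster stack to expect only 1 vote,
    # so this node regains quorum and can start its guests:
    pvecm expected 1

    # Check the cluster/quorum state afterwards:
    pvecm status
    ```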
  13.

    Access proxmox gui with domain name instead of local ip and port

    To get DNS resolution for your Proxmox UI, the solution depends on some conditions. For one management machine, the easiest way is to create hosts entries in C:\Windows\System32\drivers\etc\hosts. Add a line to the file in the following format: ip (space or tab) fully...
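
    For example, such a hosts-file line could look like this (the IP address and hostnames are hypothetical placeholders for your own values):

    ```
    # C:\Windows\System32\drivers\etc\hosts (or /etc/hosts on Linux)
    192.168.1.10    pve01.example.local    pve01
    ```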
  14.

    Cluster - possible PVE 5.x and PVE 6.x ?

    As stated above, clustering on PVE 5 is not compatible with PVE 6 due to the way in which Corosync version 2 differs from version 3; however, for migration purposes there is a way. My approach would be: move to the latest 5.4 version, check compatibility on all your nodes for migrating towards...
  15.

    Unable to connect after 5.x to 6.x upgrade.

    You are welcome - please mark the topic as solved :)
  16.

    Unable to connect after 5.x to 6.x upgrade.

    Do add them; I myself do not use the subscription repo:
    cat /etc/apt/sources.list.d/pve-install-repo.list
    deb http://download.proxmox.com/debian/pve buster pve-no-subscription
  17.

    Unable to connect after 5.x to 6.x upgrade.

    As you are only listing the Debian repositories in your info... can we also get a list/cat of the Proxmox repos?
  18.

    Proxmox with Storage Server

    Hi, I am a bit confused. I myself have an MSA2040 unit which has 4 SAS ports, connected to 4 separate servers and configured for shared access of the LUNs I have configured. As far as I can derive from your info: you have 3 servers, you have an MSA2000, it is physically connected to ONE...
  19.

    GFS/GFS2

    Hi, shared LVM directly attached has its disadvantages, as it will not hold all the content types required; it only supports disk images and containers. This is why I have been testing a GFS2 shared volume offered to PVE as a directory; please look at my experiences so far, as I have kept a...
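
    For reference, creating such a GFS2 volume could look roughly like the following. This is a hedged sketch, assuming a working corosync/dlm cluster stack and an existing shared LV; the cluster name, filesystem name and device path are placeholders:

    ```shell
    # One journal (-j) per cluster node that will mount the filesystem;
    # 'mycluster' must match the corosync cluster name:
    mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 4 /dev/vg_shared/lv_gfs2

    # Mount it on each node, then offer the mountpoint to PVE
    # as a 'Directory' storage:
    mkdir -p /mnt/gfs2
    mount -t gfs2 /dev/vg_shared/lv_gfs2 /mnt/gfs2
    ```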
  20.

    [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Just an update regarding how this is working out: I have been updating the PVE install (regular software updates) on all nodes without any issues; basically they restarted flawlessly, joined the global lockspace and then mounted the shared LV volumes. So I would say the final configuration is...