Recent content by AndreasS

  1. VM boot issue after full clone between SAS storages (Dell ME4024) – Proxmox VE 8.4.14

    Hey, why should both storages hold the same data? Did you enable replication jobs between the two, or are you using Ceph? If you did enable replication, the two ME4024s are not in sync: they will always lag by at least 30 minutes.
  2. VM boot issue after full clone between SAS storages (Dell ME4024) – Proxmox VE 8.4.14

    Hi, do the ME4024s have enough SAS ports to connect all 6 hosts? Afaik there are only 4 ports per controller, not 6 (keep in mind the 2 controllers per ME4024 are redundant, so you cannot count them as 8 usable ports).
  3. [SOLVED] CheckMK-Installation/Configuration help needed on SSL

    Question 2 is solved now as well: I added the Proxmox pve-root-ca.pem to the CheckMK trusted anchor storage (CheckMK Global Settings -> Trusted certificate authorities for SSL -> copy the .pem cert there), then re-scheduled the Check_MK Agent inventory service, as its usual schedule is every 2 hours. added...
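The GUI steps above assume you already have the cluster CA file at hand. A minimal sketch of fetching and inspecting it first, assuming a PVE node reachable as "pve1" (placeholder hostname); /etc/pve/pve-root-ca.pem is where Proxmox VE keeps the cluster CA:

```shell
# Copy the cluster CA certificate from a PVE node ("pve1" is a placeholder)
scp root@pve1:/etc/pve/pve-root-ca.pem .

# Inspect subject and expiry before pasting the PEM into the CheckMK
# "Trusted certificate authorities for SSL" global setting
openssl x509 -in pve-root-ca.pem -noout -subject -enddate
```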
  4. [SOLVED] CheckMK-Installation/Configuration help needed on SSL

    Good morning, question 2 is solved: create a new rule in CheckMK which selects "Worst Node Wins". The rule points only to the service "Check_MK Agent" and is applied only to the clustered host. The service "only" checks whether the most recent version of the CheckMK agent is installed on the nodes, which is checked on...
  5. [SOLVED] Proxmox VE 9 vm backup with Veeam Backup & Replication 12.3.2

    Hello all, this is working now with the most recent Proxmox VE 9 and the most recent Veeam 12. Even restoring single files works without any helper VM; the backup is simply mounted via the original VM. Regards, Andreas
  6. [SOLVED] CheckMK-Installation/Configuration help needed on SSL

    Hello again, I have just set up monitoring for Proxmox VE (most recent version/patch level) on our CheckMK (agent 2.4.0p18), following the CheckMK how-to (CheckMK Proxmox Monitoring) and the Thomas-Krenn how-to (Thomas-Krenn Proxmox Monitoring). All is working fine apart from two issues: 1. This critical...
  7. [SOLVED] open-iscsi-service failed to start after update to Proxmox 9.0.10 (6.14.11-3-pve)

    Yes, there is only one VM active for testing at the moment, which obviously runs on only one node. There are 4 other VMs, but they are turned off and not using the iSCSI storage either.
  8. [SOLVED] open-iscsi-service failed to start after update to Proxmox 9.0.10 (6.14.11-3-pve)

    Hi all, 1. Dell always assigns all portal IPs of every controller of the storage, no matter whether you tell it to use only specific ports; this is known Dell behaviour. So I made all ports available to my Proxmox cluster. pvesm status comes back fine and working, and the iSCSI config is reboot-consistent...
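To see which portals the array actually advertises (and confirm the behaviour described above), a sendtargets discovery can be run against any one known portal. A sketch, assuming a reachable portal at the placeholder IP 10.0.19.10; every line returned names a portal the initiator may later try to log in to:

```shell
# Query the iSCSI target for its full list of advertised portals.
# 10.0.19.10 is a placeholder; substitute one of your storage's portal IPs.
iscsiadm -m discovery -t sendtargets -p 10.0.19.10:3260
```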
  9. [SOLVED] Windows guests are extremely slow

    Hi, can you try using the recent/latest Q35 machine type version instead of the "downgraded" v9.2 version? Maybe that helps.
  10. [SOLVED] open-iscsi-service failed to start after update to Proxmox 9.0.10 (6.14.11-3-pve)

    iscsiadm discovery against the management IP did not work. I will try opening a proper case with Dell; it would be better to fix it from the right end, I suppose. Will keep you posted.
  11. [SOLVED] open-iscsi-service failed to start after update to Proxmox 9.0.10 (6.14.11-3-pve)

    The difference between the nodes is that the failing node got the update to the latest PVE kernel and the other didn't so far. Apart from that, I wanted to try whether the advertising from the Dell storage end works properly if I use the management IP of the controllers rather than the iSCSI port IPs as...
  12. [SOLVED] open-iscsi-service failed to start after update to Proxmox 9.0.10 (6.14.11-3-pve)

    Hey @bbgeek17, this is indeed a good starting point. Do you know if I can safely replace the portal IP in /etc/pve/storage.cfg with another one via the CLI and just reboot to see if it works? Or do I have to do this in a different place? The cluster is not productive yet, but I don't want to...
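For the portal change asked about above, a hedged sketch using pvesm rather than editing /etc/pve/storage.cfg by hand (the storage ID "dell-iscsi" and the IPs are placeholders, not taken from the thread; `portal` is a documented option of the iscsi storage type):

```shell
# Show the current definition ("dell-iscsi" is a placeholder storage ID)
grep -A4 'iscsi: dell-iscsi' /etc/pve/storage.cfg

# Point the storage at a different portal IP (placeholder), then verify
pvesm set dell-iscsi --portal 10.0.19.11
pvesm status
```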
  13. [SOLVED] open-iscsi-service failed to start after update to Proxmox 9.0.10 (6.14.11-3-pve)

    I know; there are 8 ports on our storage. 4 of them can be connected (10.xxx.19.yyy IP range), while the other 4 ports (10.aaa.20.bbb) are for intra-storage replication. Nevertheless, the appliance announces all 8 corresponding IP addresses as 8 portals instead of 4. This might be bad implementation...
  14. Is PVE9 supported on Veeam 12.3?

    Hi Victor, restore on Hyper-V is working for me in general, but I don't have an ESXi for testing. Do you have a subscription with Veeam? Their support is quite good. Regards, Andreas
  15. [SOLVED] open-iscsi-service failed to start after update to Proxmox 9.0.10 (6.14.11-3-pve)

    Hi Fiona, those two logs are reasonably small but confidential; how can I share them here in full without compromising internal information, e.g. IP addresses and server names?
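One common answer to the question above is to redact the logs before posting. A minimal sed sketch, assuming node hostnames follow a "pve-node" pattern (placeholder; adjust to your real naming scheme):

```shell
# Create a sample log line to demonstrate on (placeholder content)
printf 'Sep 30 pve-node1 iscsid: cannot connect to 10.0.19.15:3260\n' > sample.log

# Mask every IPv4 address and every placeholder node name, then print
sed -E \
    -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/x.x.x.x/g' \
    -e 's/pve-node[0-9]+/NODE/g' \
    sample.log
# prints: Sep 30 NODE iscsid: cannot connect to x.x.x.x:3260
```

Reviewing the redacted output by eye before posting is still advisable, since patterns like internal domain names need their own rules.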