Search results

  1. Perplexing: When a node is turned off, the whole cluster loses its network

    On a 4-node Proxmox cluster, when one node is shut down, access to the network on the others somehow goes away. Here is the configuration: Each node is set up similarly, but with the LAN, corosync and other addresses changed on each node. The enlan2.25 and enlan2.35 are legacy setups...
  2. Windows Server licensing

    It's been a couple of years, but the issue is still the same. I'd like some clarification on this, please. If I have 4 nodes in a pmx cluster with 2 x 10-core CPUs in each and I want to license, for example, a Windows Server 2022, then would I have to: Pay MS for a license for 16 cores for...
  3. enabling ceph image replication: how to set up host addresses?

    The file rbd-mirror-capetown.conf contains the config of the capetown cluster on the remote cluster, so from that I assume that I have to create a VPN link between the two sites so that the replication service on the LAN at the remote site is able to get to the local / LAN address given in that...
  4. enabling ceph image replication: how to set up host addresses?

    I'm attempting to do a test to replicate a ceph image to a remote cluster by following this HOWTO. However, what I'm missing is the detail of how or where to specify where "site-a" is in the examples given, in terms of IP address. When I follow the instructions, I see this in the status logs...
  5. [SOLVED] 2 stuck OSDs in ceph database

    I recreated the manager on a node (after deleting all the managers) and that resolved the issue, so I can now add the OSDs again.
  6. [SOLVED] 2 stuck OSDs in ceph database

    That just hangs, since the OSDs were on a node that doesn't exist anymore. Here it also says: ~# pveceph osd destroy 1 OSD osd.1 does not belong to node pmx2! at /usr/share/perl5/PVE/API2/Ceph/OSD.pm line 952, <DATA> line 960. This zapped the OSDs, but they are still shown in the ceph...
  7. [SOLVED] 2 stuck OSDs in ceph database

    I tried to remove all OSDs from a cluster and recreate them, but 2 of them are still stuck in the ceph configuration database. I have run all the standard commands to remove them, but the reference stays. # ceph osd crush remove osd.1 removed item id 1 name 'osd.1' from crush map # ceph osd...
  8. New install pve 8.2 on Debian 12 certificate blocks GUI

    # cat /etc/hosts
    127.0.0.1 localhost
    154.65.99.47 pmx1
    ::1 localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    # pvecm updatecerts --force
    (re)generate node files
    generate new node certificate
    merge authorized SSH keys
    creating directory...
  9. New install pve 8.2 on Debian 12 certificate blocks GUI

    This host gets a dynamic ip address as per the cloud provider's settings. Do I have to have the address set in the hosts file? inet 154.65.99.47/20 metric 100 brd 154.65.111.255 scope global dynamic ens3
  10. New install pve 8.2 on Debian 12 certificate blocks GUI

    # cat /etc/hosts
    127.0.0.1 localhost
    ::1 localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
  11. New install pve 8.2 on Debian 12 certificate blocks GUI

    I have done a fresh install on a Debian 12 cloud host and all went well, I thought, except that port 8006 is not responding. (I followed the documentation here.) In the logs I find this: Jun 04 17:52:23 pmx1 pveproxy[12734]: /etc/pve/local/pve-ssl.pem: failed to use local certificate chain...
  12. Windows Server 2022 reports disk errors on ceph volume

    Sorry, it's now called retrim, which differs from defrag. However, it still kills the machine when doing it on a 1 TB drive. It's NVMe storage, so it can't be too slow. It doesn't happen on any other machine, only on the new WS 2022 installation.
  13. Windows Server 2022 reports disk errors on ceph volume

    The driver update made no difference. However, the virtio-scsi driver allows thin provisioning of the disk volume, which is what we use, so Windows starts a defrag (Edit: it's actually an "optimization") once a week by default, and it uses 75% of the available RAM. When some users also log on...
  14. Windows Server 2022 reports disk errors on ceph volume

    We installed a new Windows Server 2022 on a cluster that uses an SSD-based ceph volume. All seemed to be going well, when suddenly the Windows event log reported: "An error was detected on device \Device\Harddisk0\DR0 during a paging operation" It's Windows error #51. There are other Windows...
  15. [SOLVED] Remote server doesn't deduplicate

    I found that the garbage collection job wasn't running. Fixed that and now all is well!
  16. [SOLVED] Remote server doesn't deduplicate

    I have set up a remote server in a different city to which I ship all backups using a sync job. The remote PBS datastore, however, doesn't seem to be doing deduplication. The local PBS: Usage: 91.02% (3.97 TB of 4.37 TB) Backup Count CT: 32 Groups, 380 Snapshots Host: 0 Groups, 0...
  17. pvesh and how to list API endpoints

    I had used https://pve.proxmox.com/wiki/Proxmox_VE_API#Using_.27pvesh.27_to_Access_the_API, which basically just touches on how to use pvesh. However, I have now found https://pve.proxmox.com/pve-docs/api-viewer/, which actually documents the whole API. I think it would be a good idea to...
  18. pvesh and how to list API endpoints

    I have seen a couple of blogs out there that claim one can simply use the pvesh command without any parameters and it will drop into an interactive mode where one can show the calls that can be made at a particular level. It doesn't work like that for me, though, and the documentation is really...
  19. Use API to get storage location for VMs

    Here is what I ended up doing:
    #!/bin/bash
    if [ -f vms.config ]; then
      rm vms.config
    fi
    echo 'node,type,name,cores,memory,disk1,virtio1,scsi0,scsi1' >vms.config
    for i in $(pvesh get /nodes/ --noheader --output-format json | jq -r '.[] .node')
    do
      for j in $(/usr/bin/pvesh get /nodes/$i/qemu...
  20. Use API to get storage location for VMs

    So, I have created a basic script to list the storage locations for virtual machines on my Proxmox clusters. I will expand this to include lxc as well, but for now I'm just trying to learn how jq achieves this.
    #!/bin/bash
    #IFS=$'\n'
    rm qemu.config
    touch qemu.config
    for i in $(pvesh get /nodes/...
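For the certificate thread (results 8-11): `pvecm updatecerts` builds the node certificate from the address the node's hostname resolves to, so on a host with a cloud-assigned dynamic address a common workaround is a hosts entry mapping the hostname to the current address. A minimal /etc/hosts sketch using the hostname and address from the snippets (the FQDN is a made-up placeholder):

```
127.0.0.1 localhost
154.65.99.47 pmx1.example.com pmx1
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```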
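For the stuck-OSD thread (results 5-7), the usual full removal sequence looks roughly like the sketch below. This is a generic outline of the standard ceph OSD removal steps, not the poster's exact commands; the OSD id 1 is taken from the snippets.

```shell
# Rough sketch, assuming osd.1 is the stuck OSD; adjust the id to your case.
ceph osd out osd.1           # stop ceph from placing data on it
systemctl stop ceph-osd@1    # stop the daemon (only if its host still exists)
ceph osd crush remove osd.1  # remove it from the CRUSH map
ceph auth del osd.1          # delete its authentication key
ceph osd rm osd.1            # remove it from the OSD map
```

As results 5 and 6 show, when the OSD's host no longer exists, `pveceph osd destroy` refuses to run, and a leftover manager can keep stale entries around; recreating the manager resolved it there.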
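For the deduplication thread (results 15-16): PBS deduplicates at write time via its content-addressed chunk store, but space is only reclaimed once garbage collection runs, which matches the fix in result 15. A sketch for checking and starting it on the remote server (the datastore name `backup` is a placeholder):

```shell
proxmox-backup-manager garbage-collection status backup  # when GC last ran
proxmox-backup-manager garbage-collection start backup   # run GC now
```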
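On the pvesh question (results 17-18): rather than an interactive mode, pvesh exposes the API tree through explicit verbs. A few illustrative invocations, sketched under the assumption of a node named `pmx1`:

```shell
pvesh ls /             # list the child entries at the API root
pvesh ls /nodes        # list the cluster's nodes
pvesh get /nodes/pmx1/qemu --output-format json  # VMs on one node, as JSON
pvesh usage /nodes/pmx1/qemu -v                  # parameters accepted at that path
```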
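The jq pattern in the script from results 19-20 can be tried without a cluster by feeding it canned JSON. The sample document below is an assumed reduction of what `pvesh get /nodes --output-format json` returns, kept to the one field the script's outer loop uses:

```shell
# Hypothetical sample of the pvesh output; only the "node" field matters here.
nodes_json='[{"node":"pmx1","status":"online"},{"node":"pmx2","status":"online"}]'

# Same extraction as the script's outer loop: one node name per line.
echo "$nodes_json" | jq -r '.[].node'
# prints:
# pmx1
# pmx2
```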