Search results

  1. Wrong size reported by CEPH

     Hello, we have been using Proxmox for a long time along with Ceph, but I feel the space usage reported by Proxmox is misleading. For example, we have a 3-node cluster with 3 disks of 7.68TB NVMe on it. Now we have a 3-way replicated pool, which means we should have 1/3rd the capacity of 3...
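     For reference, the replication arithmetic at play here (assuming the truncated post means three 7.68TB OSDs in total, one per node): raw capacity = 3 x 7.68TB = 23.04TB, and a 3-way replicated pool yields roughly 23.04TB / 3 = 7.68TB usable, before Ceph overhead and the near-full reservation are subtracted.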
  2. VM Issue

     Hello, I have a strange problem with the latest update of Proxmox: I have a machine that shuts down and then refuses to start unless the node is rebooted. I get the below error in the Task Viewer: swtpm_setup: Not overwriting existing state file. stopping swtpm instance (pid 36801) due...
  3. Nvidia vGPU-A40 with proxmox

     I have an Nvidia A40 GPU - this is supported. I have Proxmox 8.3.0 with kernel 6.5.13-6-pve pinned, and installed the Nvidia vGPU Linux drivers - 535.161.05. I get the expected response from nvidia-smi, nvidia-smi vgpu, mdevctl types... all OK. I can even assign the PCIe vGPU to the VM but can't start it. I...
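     A minimal sketch of how a vGPU profile is normally attached to a Proxmox VM (the VM ID, PCI address and mdev profile name below are placeholders, not values from the post):

       # mdevctl types
       # qm set 1000 -hostpci0 0000:41:00.0,mdev=nvidia-566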
  4. No space in backup server - help

     2024-09-07T11:27:11+05:30: starting garbage collection on store backup01
     2024-09-07T11:27:11+05:30: Start GC phase1 (mark used chunks)
     2024-09-07T11:27:11+05:30: TASK ERROR: update atime failed for chunk/file...
  5. vGPU - Nvidia vm - vm does not start

     Hi, I have a 48GB A40 GPU running Proxmox 8.2 (pve-manager/8.2.4/faa83925c9641325, running kernel 6.5.13-3-pve) with Nvidia driver ver. 535.161.05. I am running 7 VMs with vGPU successfully (4GB profile). I should be able to run 12 VMs with the video memory I have, but am not able to start...
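     The arithmetic the poster is relying on: a 48GB A40 split into 4GB frame-buffer profiles allows at most 48 / 4 = 12 vGPU instances, assuming every VM uses the same homogeneous profile.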
  6. Garbage Collection - Pending removals

     Hello team, I don't see how I can get the pending removals of 1.34TB - this always shows as 0 bytes removed. I have tried daily garbage collection, hourly, every 2 hours - everything has been tried. Maybe I have not understood what actually happens here; can somebody help?
  7. Datastore Full

     Hi, at a customer site the backup datastore became full. We pruned the number of backups from keep 7 to keep 4 and ran prune; we even deleted the backups of 2-3 VMs, but the storage is still full at 100%. Any ideas on how to get out of this situation? Thanks in advance.
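     Worth noting for this scenario: pruning only drops backup snapshots; the space itself comes back only after garbage collection has run and the now-unreferenced chunks have aged past the access-time cutoff. A hedged sketch of kicking it off manually (the datastore name is a placeholder):

       # proxmox-backup-manager garbage-collection start <datastore>
       # proxmox-backup-manager garbage-collection status <datastore>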
  8. Ceph Add capacity - Slow rebalance

     Hi, we have an EPYC 9554 based cluster with 3x WD SN650 NVMe PCIe Gen4 15.36TB in each server, all backed by 100G Ethernet on Mellanox 2700 CX4 cards. Output of lspci: lspci | grep Ethernet 03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA...
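     If the slow rebalance turns out to be throttled recovery, the commonly adjusted knobs look like the sketch below (the values are illustrative only, and on recent Ceph releases the mClock scheduler can override them):

       # ceph config set osd osd_max_backfills 4
       # ceph config set osd osd_recovery_max_active 8
       # ceph -s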
  9. Proxmox to Proxmox Cluster Migration

     Hi team, Proxmox to Proxmox migration from an old Proxmox RBD to a new Proxmox RBD is supported now, as I managed to do it, and it works flawlessly. Moving a VM from one cluster to another is so seamless that users will not even realize they were moved. Both clusters have...
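     The cross-cluster move described here is presumably the (still experimental) qm remote-migrate command; a hedged sketch, with the VM IDs, endpoint, token, fingerprint, bridge and storage all placeholders:

       # qm remote-migrate 100 100 'host=<target-ip>,apitoken=PVEAPIToken=<user>@<realm>!<tokenid>=<secret>,fingerprint=<target-fingerprint>' --target-bridge vmbr0 --target-storage <rbd-storage> --online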
  10. Slow moving of storage

     Hello team, I am on the latest version of Proxmox with the enterprise repo. We are moving disks from ZFS storage to Ceph storage, all on enterprise NVMe PCIe 4.0 SSDs, 2 per node = 6 SSDs, with a 100G Ethernet network. So the IO should be high and this should not be an issue...
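     For context, the operation in question is presumably a per-disk move along these lines (the VM ID, disk slot and storage name are placeholders):

       # qm disk move 100 scsi0 <ceph-storage> --delete 1

     Each move is a single QEMU mirroring job, so per-disk throughput is often bounded well below what the NVMe devices or the 100G network could deliver.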
  11. Install - Screen issues

     Hi folks, trying to install Proxmox 8 on an EPYC 9554 based machine with a Gigabyte MZ33-AR0 board. The screen is cut off at the bottom right corner (image below) across multiple reboots, and it can't be worked around via IPMI either. Any bright ideas?
  12. Slow CEPH performance

     Hi, I have Proxmox 8.1 running and will be upgrading to Proxmox 8.2.2 soon. We had one of the nodes crash and we are seeing very slow rebuild speeds. We run AMD EPYC 7002 series CPUs with 64 cores * 2, 2TB RAM, 15.36TB WD enterprise-grade SN650 NVMe SSDs * 4 per node, and we have 10G for inter-VM...
  13. Proxmox 8.1 + Secure Boot + A40 (Supported Nvidia vGPU)

     Hi folks, got Proxmox 8.1 running with Secure Boot and Ceph, and added an Nvidia A40 GPU, which supports vGPU. I was following the guide below - I don't need vgpu-unlock as it's a supported card. https://gitlab.com/polloloco/vgpu-proxmox IOMMU is enabled - verified by the below command...
  14. WRMSR messages on Proxmox 8.1

     I have installed Proxmox 8.1 and updated it with the enterprise repo, so I am on the latest stable package versions. I have installed a few VMs and noticed messages like the below on the console: kvm_amd: kvm [processid]: vcpu0 guest rIP: 0xfffff8e8f14d08e6 Unhandled...
  15. rbd error

     Hello, we have a strange issue: the VM was not starting and kept giving errors. We did a # rbd cp RBD-ceph/vm-3242-disk-0 RBD-ceph/vm-3242-disk-1 - this worked, took 15 min and copied the volume data over. We then made a manual change in /etc/pve/qemu-server/3242.conf and changed the...
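     The manual edit referred to is presumably repointing the VM's disk line at the copied image; a hypothetical before/after (the bus type, slot and size are guesses, not taken from the post):

       scsi0: RBD-ceph:vm-3242-disk-0,size=100G    (before)
       scsi0: RBD-ceph:vm-3242-disk-1,size=100G    (after)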
  16. Help with Ceph

     Hello folks, we have a Proxmox cluster running ver. 7.4 - not yet upgraded to 8.1; we will do that shortly after we resolve the Ceph issue. It's a 4-node cluster with Ceph on 3 nodes, with 6.4TB NVMe * 2 in each host for Ceph OSDs, meaning a total of 6.4TB * 6 drives in the pool, 3-way replicated with 2...
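     The raw-to-usable arithmetic for that pool: 6 OSDs x 6.4TB = 38.4TB raw, and with 3-way replication the usable capacity is roughly 38.4TB / 3 = 12.8TB, before overhead and the full-ratio reserve.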
  17. Proxmox 8.1 Upgrade and Secure Boot

     Hi folks, congrats on the release of 8.1. Now that Proxmox 8.1 supports Secure Boot, I understand that we can certainly have Secure Boot enabled with a clean install of Proxmox 8.1. But what about an upgrade to Proxmox 8.1? I don't think going into the BIOS and enabling...
  18. Error in Ceph Reef Repo

     Hi folks, congrats on the new release of Proxmox 8.1. I was following the below guide to upgrade from Quincy to Reef: https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef#Important_Release_Notes

       # apt update
       Hit:1 https://packages.wazuh.com/4.x/apt stable InRelease
       Hit:2 http://ftp.debian.org/debian...
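     If the upgrade trips on the repository step, the Reef repo line expected on a PVE 8 / Debian bookworm system (no-subscription variant shown; the enterprise repo uses a different host and component) is:

       deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription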
  19. Hung Node

     Hi folks, we are on the enterprise repo (just to note that we are on the tested, stable repo) but hit the issue in the screenshot below; I want to know whether it's a Proxmox issue or a hardware issue. The node was pinging, but we were not able to SSH into or otherwise communicate with the host, although a few VMs inside were still working...
  20. Hotplug Support in PVE 8.0.4

     Hello, we have a paid PVE subscription, and all nodes in the cluster are updated using the PVE enterprise repo: pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-18-pve). I have set the number of cores to 16 and vCPUs to 4, and also enabled hotplug for memory, CPU, disk, network and USB...
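     A hedged sketch of the configuration this describes (the VM ID is a placeholder); note that CPU and memory hotplug also require NUMA to be enabled on the VM and a guest OS that supports it:

       # qm set 100 --cores 16 --vcpus 4 --hotplug disk,network,usb,memory,cpu --numa 1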