Search results

  1. Ceph usage space

    We made a test VM with 13 GB utilization in it (Windows Server 2022), checked the space usage in Ceph, then copied some 2 GB of data into it... it now reports 15 GB. I deleted the copied data to verify the usage, but no improvement, it still reports 15 GB. I ran disk optimisation inside the OS to trim...
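
    The usual way to get freed space back to Ceph is to enable discard on the virtual disk and then trim inside the guest; a minimal sketch, assuming a hypothetical VMID 100 with a scsi0 disk on the RBD-ceph storage:

      # enable discard (and SSD emulation) on the virtual disk
      qm set 100 --scsi0 RBD-ceph:vm-100-disk-0,discard=on,ssd=1
      # inside the Windows guest, trigger a retrim via PowerShell:
      #   Optimize-Volume -DriveLetter C -ReTrim -Verbose
      # then check what Ceph actually sees for the image:
      rbd du RBD-ceph/vm-100-disk-0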
  2. WRMSR messages on Proxmox 8.1

    I have installed Proxmox 8.1 and updated it from the enterprise repo, so I am on the latest stable package versions. I have installed a few VMs and noticed messages like the below on the console: kvm_amd: kvm [processid]: vcpu0 guest rIP: 0xfffff8e8f14d08e6 Unhandled...
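
    A commonly used workaround for unhandled WRMSR noise from Windows guests is telling KVM to ignore unknown MSR accesses; a hedged sketch, assuming the option goes into a new modprobe.d file and the kvm modules are reloaded (or the host rebooted) afterwards:

      # /etc/modprobe.d/kvm.conf  (assumed filename; any *.conf under modprobe.d works)
      options kvm ignore_msrs=1 report_ignored_msrs=0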
  3. spice-agent stops after restarting Windows

    Same issue, has anyone found a solution to this?
  4. rbd error

    Hello, could you help me with how to get the info/status of an RBD image?
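
    For reference, a sketch of the relevant commands, using the pool/image name from the follow-up post (RBD-ceph/vm-3242-disk-0) as a placeholder:

      rbd info RBD-ceph/vm-3242-disk-0      # size, features, object layout
      rbd status RBD-ceph/vm-3242-disk-0    # current watchers, i.e. who has the image open
      rbd du RBD-ceph/vm-3242-disk-0        # provisioned vs. actually used space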
  5. rbd error

    Hello, we have a strange issue: the VM was not starting and kept giving errors. We ran # rbd cp RBD-ceph/vm-3242-disk-0 RBD-ceph/vm-3242-disk-1 - this worked - it took 15 minutes and copied the volume data over. We then made a manual change in /etc/pve/qemu-server/3242.conf and changed the...
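
    The excerpt is cut off, but the edit presumably points the VM at the copied image; a purely hypothetical sketch of such a disk line in /etc/pve/qemu-server/3242.conf (bus and size are assumptions):

      # before
      scsi0: RBD-ceph:vm-3242-disk-0,size=100G
      # after, referencing the copy made with rbd cp
      scsi0: RBD-ceph:vm-3242-disk-1,size=100G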
  6. Help with Ceph

    Hello folks, we have a Proxmox cluster running ver. 7.4 - not yet upgraded to 8.1, which we will do shortly after we resolve the Ceph issue. It's a 4-node cluster with Ceph on 3 nodes, with 2 x 6.4 TB NVMe per host for Ceph OSDs, meaning a total of 6 x 6.4 TB drives in the pool, 3-way replicated with 2...
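
    For context, a rough capacity calculation under the stated layout (2 x 6.4 TB NVMe OSDs on each of 3 nodes, 3-way replication); the 20% headroom is only a rule of thumb:

      6 OSDs x 6.4 TB        = 38.4 TB raw
      38.4 TB / 3 replicas   = 12.8 TB usable
      12.8 TB x 0.8 headroom ~ 10 TB comfortably usable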
  7. Proxmox 8.1 Upgrade and Secure Boot

    Hi folks, congrats on the release of 8.1. Now that Proxmox 8.1 supports Secure Boot, I understand that we can certainly have Secure Boot enabled with a clean install of Proxmox 8.1. But what about an upgrade to Proxmox 8.1? I don't think going into the BIOS and enabling...
  8. Error in Ceph Reef Repo

    Yes sir, I do have a subscription for the full cluster, but I still get the error: E: The repository 'https://enterprise.proxmox.com/debian/ceph-reef bookworm Release' does not have a Release file. Concerned about this below... Err:10 https://enterprise.proxmox.com/debian/ceph-reef bookworm...
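
    For comparison, the enterprise Ceph Reef repository entry normally looks like the following (typically kept in /etc/apt/sources.list.d/ceph.list); the commented line is the no-subscription alternative:

      deb https://enterprise.proxmox.com/debian/ceph-reef bookworm enterprise
      # deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription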
  9. Error in Ceph Reef Repo

    Hi folks, congrats on the new release of Proxmox 8.1. I was following the guide below to upgrade from Quincy to Reef: https://pve.proxmox.com/wiki/Ceph_Quincy_to_Reef#Important_Release_Notes # apt update Hit:1 https://packages.wazuh.com/4.x/apt stable InRelease Hit:2 http://ftp.debian.org/debian...
  10. Hung Node

    Hi folks, we are on the enterprise repo (just to note that we are on the tested stable repo) but hit the issue in the screenshot below; I want to know if it's a Proxmox issue or a hardware issue. The node was pinging, but we were not able to SSH or otherwise communicate with the host, though a few VMs inside were working...
  11. Proxmox 8 Breaks Nested Virtualization

    Nested ESXi works like a charm now on the enterprise repo... Thanks folks
  12. Proxmox 8 Breaks Nested Virtualization

    When is proxmox-kernel-6.2.16-19 coming to the enterprise repo? We have a paid subscription on all hosts, so we don't want to try pve-no-subscription unless this is going to take time to reach the enterprise repo.
  13. Hotplug Support in PVE 8.04

    Hello, we have a paid PVE subscription and all nodes in the cluster are updated using the PVE enterprise repo: pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-18-pve). I have set the number of cores to 16 and vCPUs to 4, and also enabled hotplug for memory, CPU, disk, network and USB...
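
    As a sketch, the equivalent CLI settings for a hypothetical VMID 100; memory hotplug in particular also requires NUMA to be enabled on the VM:

      qm set 100 --cores 16 --vcpus 4 --numa 1 \
                 --hotplug disk,network,usb,memory,cpu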
  14. Security Hardening

    Should we not look at securing services on Proxmox? systemd-analyze security rates most of them as UNSAFE... any ideas?
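
    A hedged sketch of how a single unit could be inspected and tightened with a drop-in override; the directives shown are generic systemd hardening options, not PVE recommendations, and may break services if applied blindly:

      systemd-analyze security pvedaemon.service   # per-unit exposure report
      systemctl edit pvedaemon.service             # opens a drop-in override, e.g.:
      #   [Service]
      #   ProtectHome=true
      #   PrivateTmp=true
      #   ProtectKernelTunables=true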
  15. Security Hardening

    We have seen recent ransomware attacks on vSphere ESXi hypervisors and are very concerned about Proxmox being targeted too. We are planning to harden the Proxmox hosts and implement a security audit using lynis. During the course of this audit I am sure to hit many roadblocks...
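
    For reference, a minimal way to run such an audit with lynis as packaged in Debian (the log paths are the tool's defaults, so treat them as assumptions):

      apt install lynis
      lynis audit system
      # findings land in /var/log/lynis.log and /var/log/lynis-report.dat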
  16. Sizing for an HA cluster with Ceph

    Hi Alex, it is inter-VM traffic I am referring to, not inter-VLAN traffic; the two are different. Both VMs in this case are in the same VLAN.
  17. Sizing for an HA cluster with Ceph

    Hi, having 200 VMs does not mean that all of them are transferring huge amounts of data at the same time. It was during benchmarking with iperf3 that we saw only 2-3 Gbps, which I felt was low and could be better.
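
    For what it's worth, a single iperf3 stream often tops out well below line rate; a sketch of a multi-stream test between two VMs (the server IP is a placeholder):

      iperf3 -s                        # on VM A (server)
      iperf3 -c 10.0.0.11 -P 8 -t 30   # on VM B: 8 parallel streams for 30 seconds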
  18. Sizing for an HA cluster with Ceph

    Sorry for the delayed response... the disks are 6.4 TB WD SN640 NVMe PCIe Gen 3. We have about 8 disks per node x 3 nodes = 24 NVMe drives in the Ceph cluster, plus Mellanox CX455/456 100G cards, so it is clearly some driver issue or something of the sort. A new cluster is being planned with...
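
    A hedged sketch of quick host-side checks before settling on a driver issue (the interface name is a placeholder):

      ethtool ens1f0 | grep -i speed      # negotiated link speed, expect 100000Mb/s
      ip link show ens1f0 | grep mtu      # MTU; jumbo frames often help Ceph traffic
      lspci -nnk | grep -iA3 mellanox     # NIC model and the driver actually bound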