Search results

  1. Query Regarding API Permission

    I have created a PVE user and an API token for the user manish@pve in a Proxmox cluster running version 7. When I assign the user permission on a VM as attached in screenshot1, the permission shows propagate=Yes, whereas if I assign the token permission on the same VM with the same role, it is showing...
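
    A minimal CLI sketch of setting both ACLs, assuming VM 111 and the monitoring token from the SPICE thread below (path, role, and IDs are illustrative); pveum acl modify takes the same --propagate flag for users and tokens:

      # grant the role to the user and to the token on the VM path;
      # --propagate 1 applies the ACL to child paths as well
      pveum acl modify /vms/111 --users manish@pve --roles PVEVMAdmin --propagate 1
      pveum acl modify /vms/111 --tokens 'manish@pve!monitoring' --roles PVEVMAdmin --propagate 1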
  2. [SOLVED] Spiceproxy via API

    I have created an API token and wanted to access the spice config through the API:

      curl -f -s -S -k --header 'Authorization: PVEAPIToken=manish@pve!monitoring=b9b0c035-3fe2-46e9-9374-299854b44b12' "https://inc1pve53:8006/api2/spiceconfig/nodes/inc1pve53/qemu/111/spiceproxy"

    When I do this curl...
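
    If the call fails, one plausible cause (assuming the endpoint matches the current API schema) is the HTTP method: /nodes/{node}/qemu/{vmid}/spiceproxy is defined as a POST, so re-sending the same request as a POST is a reasonable first check:

      # same token and URL as above, sent as POST instead of GET
      curl -f -s -S -k -X POST \
        --header 'Authorization: PVEAPIToken=manish@pve!monitoring=b9b0c035-3fe2-46e9-9374-299854b44b12' \
        "https://inc1pve53:8006/api2/spiceconfig/nodes/inc1pve53/qemu/111/spiceproxy"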
  3. Secure Boot Disabled automatically when VM is started by HA

    I have raised a bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=3826
  4. Proxmox 7 Bug in the VM Config

    qm config 126

      agent: 1
      balloon: 8192
      bios: ovmf
      boot: order=scsi0;ide2;net0
      cores: 4
      cpu: kvm64
      efidisk0: vmrepository:vm-126-disk-1,efitype=4m,pre-enrolled-keys=1,size=1M
      hotplug: disk,network,usb,memory,cpu
      ide2: none,media=cdrom
      memory: 32768
      name: Naviadv-Website
      net0...
  5. Procedure followed for Proxmox 6 to Proxmox 7 for No subscription Proxmox

    export LC_ALL="en_US.UTF-8"
    rm -f /etc/apt/sources.list.d/pve-enterprise.list
    apt-get update && apt-get full-upgrade -y
    sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list
    echo "deb http://download.proxmox.com/debian/ceph-octopus bullseye main" >...
  6. PBS issue with TOTP

    I added a Proxmox Backup Server in Proxmox VE, and it was working fine; I could take backups normally. Then I decided to add TOTP in the web interface of PBS, and after that it started showing gray in Proxmox. When I removed the TOTP again, it started showing normally. It seems like a bug to me.
  7. Ceph Enable Local OSD First

    Is there a way, when Ceph selects OSDs for data writes, to have the primary OSD always chosen from the server where the VM is running? The normal pattern is that primary OSDs are selected randomly, and writes are slightly delayed by the latency of the network.
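
    For writes there is no per-VM way to pin the primary to the local host, but primary selection can be biased cluster-wide. A hedged sketch using primary affinity (the OSD ID is hypothetical):

      # lower the probability that osd.0 is chosen as primary (scale 0.0-1.0);
      # this is a global bias, not aware of where the VM runs
      ceph osd primary-affinity osd.0 0.5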
  8. Ceph emulating Raid 1

    Follow these commands to start with single-node Ceph. Assumption: you have a single node, and you have already configured a cluster with that single node as the only member for now.

      pveceph init --network=172.19.X.Y/24  # specify your 10G or higher interface network here
      pveceph mon create

    Find the list of drives...
  9. Ceph emulating Raid 1

    Well, one way to start with a single-node setup is to modify the CRUSH rule and allow replication across OSDs rather than nodes. You can do that easily by manually editing the CRUSH maps. Once you keep adding nodes and have a sufficient number of them, change the CRUSH map again to distribute...
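
    A minimal sketch of that edit using the standard crushtool round-trip (file names are illustrative):

      ceph osd getcrushmap -o crush.bin        # export the compiled CRUSH map
      crushtool -d crush.bin -o crush.txt      # decompile to editable text
      # in the replicated rule, change:
      #   step chooseleaf firstn 0 type host
      # to:
      #   step chooseleaf firstn 0 type osd
      crushtool -c crush.txt -o crush.bin.new  # recompile
      ceph osd setcrushmap -i crush.bin.new    # inject the modified map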
  10. Ceph emulating Raid 1

    How many nodes do you have right now?
  11. Ceph Pacific

    As far as I understand, you need to go to 6.4 first to migrate to 7, as there is no tool like pve6to7 in the repo that can take you directly from 5 to 7.
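
    A sketch of the two-hop check, assuming the per-release checklist tools (pve5to6 ships with late PVE 5.4, pve6to7 with 6.4):

      pve5to6         # run on PVE 5.x before the upgrade to 6.x
      pve6to7 --full  # run on PVE 6.4 before the upgrade to 7.x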
  12. Ceph Pacific

    http://download.proxmox.com/debian/ceph-pacific/
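
    Presumably usable as an apt source on bullseye only, mirroring the ceph-octopus line from the upgrade procedure above (the file name is assumed):

      echo "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" > /etc/apt/sources.list.d/ceph.list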
  13. Ceph Pacific

    I see that Ceph Pacific is now available in the repo, but it is only available for the bullseye release and not for buster. How can we test it? Any documentation will be useful.
  14. Ceph emulating Raid 1

    According to the ceph osd tree output, you only have 1 node, which holds the NVMe disks. Where is the second node at all, or did you clip the output? Otherwise, to make it work, the CRUSH rule needs to be modified, but you will not get any advantage: it will just work, it won't survive a failure.
  15. Ceph emulating Raid 1

    Post the status of:

      ceph osd tree
      ceph -s
  16. Ceph emulating Raid 1

    Yes, configure RF=2, though it is not recommended for a production setup, as a node failure will leave the setup unusable.
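
    A minimal sketch, assuming a replicated pool named vmpool (the name is hypothetical):

      ceph osd pool set vmpool size 2      # two replicas in total (RF=2)
      ceph osd pool set vmpool min_size 1  # keep I/O going with one replica left; risky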
  17. Ceph not detecting disk failure

    proxmox-ve: 6.4-1 (running kernel: 5.4.119-1-pve)
    pve-manager: 6.4-8 (running version: 6.4-8/185e14db)
    pve-kernel-5.4: 6.4-3
    pve-kernel-helper: 6.4-3
    pve-kernel-5.4.119-1-pve: 5.4.119-1
    pve-kernel-5.4.106-1-pve: 5.4.106-1
    ceph: 15.2.13-pve1~bpo10
    ceph-fuse: 15.2.13-pve1~bpo10
    corosync...
  18. Ceph not detecting disk failure

    I am testing Ceph by running Proxmox VMs inside a Proxmox server using nested virtualization. My Ceph setup consists of 3 VMs, each with 2 disks of 100GB configured for Ceph. I detached a disk from one of the VMs to understand how Ceph behaves on disk failure, but to my surprise, Ceph...
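
    While waiting on heartbeat-based failure detection, a hedged way to force the state by hand (the OSD ID is hypothetical):

      ceph osd down 3  # mark osd.3 down in the OSD map (a still-running daemon will re-assert itself)
      ceph osd out 3   # mark it out so PGs rebalance onto the remaining OSDs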
  19. Error copying files to Rados

    Steps to reproduce the issue:

      dd if=/dev/zero of=test4G bs=4M count=1024
      1024+0 records in
      1024+0 records out
      4294967296 bytes (4.3 GB, 4.0 GiB) copied, 3.6726 s, 1.2 GB/s

      rados -p vm2 put o9 test4G
      error putting vm2/o9: (27) File too large

    What is the size limit for rados?
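
    The cap being hit is most likely osd_max_object_size (128 MiB by default on recent releases). A hedged sketch of inspecting it, plus writing through the striper, which chunks large objects:

      ceph config get osd osd_max_object_size  # per-object size limit enforced by the OSDs
      rados --striper -p vm2 put o9 test4G     # libradosstriper splits the upload into smaller objects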
  20. All resources in HA

    You may raise a feature request so that, at the time of VM creation itself, an option to add the VM to an HA group can be configured.
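
    Until such an option exists, it stays a separate step after creation; a minimal sketch, assuming VM 126 and an existing HA group named mygroup (both hypothetical here):

      ha-manager add vm:126 --group mygroup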
