Search results

  1.

    Query Regarding API Permission

    With both user and API token permissions assigned to the VM: curl -f -s -S -k --header 'Authorization: PVEAPIToken=manish@pve!monitoring=b9b0c035-3fe2-46e9-9374-299854b44b12' "https://172.19.200.121:8006/api2/json/access/permissions"...
  2.

    Query Regarding API Permission

    With only the API token permission assigned to the VM: curl -f -s -S -k --header 'Authorization: PVEAPIToken=manish@pve!monitoring=b9b0c035-3fe2-46e9-9374-299854b44b12' "https://172.19.200.121:8006/api2/json/access/permissions"...
  3.

    [SOLVED] Spiceproxy via API

    @fabian, yes, I am able to do it now; I can access my VM with the API token without even going to the web UI. Thanks for the help!
  4.

    Query Regarding API Permission

    I have created a PVE user and an API token for the user manish@pve in a Proxmox cluster running version 7. When I assign the user permission to a VM as attached in screenshot1, the permission shows propagate=Yes, whereas if I assign the token permission to the same VM with the same role, it shows...
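    Since API tokens without "privilege separation" only receive the intersection of the user's and the token's own permissions, a hedged sketch of granting the token its own propagating ACL and then checking what it can see. The path /vms/100 and the role PVEVMUser are assumptions, not values from the thread:

    ```shell
    # Grant the token its own ACL on a VM path, with propagation enabled.
    # /vms/100 and PVEVMUser are placeholders for illustration only.
    pveum acl modify /vms/100 \
        --tokens 'manish@pve!monitoring' \
        --roles PVEVMUser \
        --propagate 1

    # Verify what the token itself is allowed to see:
    curl -f -s -S -k \
        --header 'Authorization: PVEAPIToken=manish@pve!monitoring=b9b0c035-3fe2-46e9-9374-299854b44b12' \
        "https://172.19.200.121:8006/api2/json/access/permissions"
    ```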
  5.

    [SOLVED] Spiceproxy via API

    I have created an API token and wanted to access the SPICE config through the API: curl -f -s -S -k --header 'Authorization: PVEAPIToken=manish@pve!monitoring=b9b0c035-3fe2-46e9-9374-299854b44b12' "https://inc1pve53:8006/api2/spiceconfig/nodes/inc1pve53/qemu/111/spiceproxy". When I run this curl...
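    The spiceproxy endpoint expects a POST rather than a GET, which is a common stumbling block with this call. A hedged sketch using the thread's node and VM ID; the proxy parameter and the remote-viewer step are assumptions about a typical workflow, not something stated in the excerpt:

    ```shell
    # -d makes curl send a POST; the response is a SPICE .vv connection file.
    curl -f -s -S -k \
        --header 'Authorization: PVEAPIToken=manish@pve!monitoring=b9b0c035-3fe2-46e9-9374-299854b44b12' \
        -d 'proxy=inc1pve53' \
        "https://inc1pve53:8006/api2/spiceconfig/nodes/inc1pve53/qemu/111/spiceproxy" \
        -o spice.vv

    # Open the connection file with a SPICE client:
    remote-viewer spice.vv
    ```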
  6.

    Secure Boot Disabled automatically when VM is started by HA

    I have raised a bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=3826
  7.

    Proxmox 7 Bug in the VM Config

    qm config 126
    agent: 1
    balloon: 8192
    bios: ovmf
    boot: order=scsi0;ide2;net0
    cores: 4
    cpu: kvm64
    efidisk0: vmrepository:vm-126-disk-1,efitype=4m,pre-enrolled-keys=1,size=1M
    hotplug: disk,network,usb,memory,cpu
    ide2: none,media=cdrom
    memory: 32768
    name: Naviadv-Website
    net0...
  8.

    Procedure followed for Proxmox 6 to Proxmox 7 for No subscription Proxmox

    export LC_ALL="en_US.UTF-8"
    rm -f /etc/apt/sources.list.d/pve-enterprise.list
    apt-get update && apt-get full-upgrade -y
    sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list
    echo "deb http://download.proxmox.com/debian/ceph-octopus bullseye main" >...
  9.

    PBS issue with TOTP

    I added a Proxmox Backup Server to Proxmox VE and it was working fine; I could take backups normally. Then I decided to add TOTP in the web interface of PBS, and after that it started showing grey in Proxmox. When I removed the TOTP again, it went back to showing normally. It seems like a bug to me.
  10.

    Ceph Enable Local OSD First

    Is there a way to ensure that, when Ceph selects OSDs for data writes, the primary OSD is always chosen from the server where the VM is running? The normal pattern is that primary OSDs are selected randomly, so writes are slightly delayed by network latency.
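    Ceph has no per-client "local primary" selection, but primary affinity can bias which OSDs become primaries cluster-wide. A hedged sketch: down-weight the remote node's OSDs so primaries tend to land on the local node. The OSD IDs are assumptions for illustration, and this biases the whole cluster, not a single VM:

    ```shell
    # Assume osd.0-osd.2 live on the remote node and osd.3 on the local
    # node; an affinity of 0.0 means "never pick as primary if avoidable".
    ceph osd primary-affinity osd.0 0.0
    ceph osd primary-affinity osd.1 0.0
    ceph osd primary-affinity osd.2 0.0
    ceph osd primary-affinity osd.3 1.0
    ```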
  11.

    Ceph emulating Raid 1

    Follow these commands to start with single-node Ceph. Assumptions: you have a single node, and you have already configured a cluster with that single node as its only member so far.
    pveceph init --network=172.19.X.Y/24 (specify your 10G or faster interface network here)
    pveceph mon create
    Find the list of drives...
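    The excerpt above is truncated after the monitor step; a hedged sketch of how the OSD creation typically continues, where the device names are assumptions that must be replaced with your own empty disks:

    ```shell
    # List block devices to find candidate (empty, unpartitioned) disks:
    lsblk

    # Create one OSD per disk; /dev/sdb and /dev/sdc are placeholders.
    pveceph osd create /dev/sdb
    pveceph osd create /dev/sdc
    ```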
  12.

    Ceph emulating Raid 1

    Well, one way to start with a single-node setup is to modify the CRUSH rule and allow replication across OSDs rather than nodes. You can do that easily by manually editing the CRUSH map. As you keep adding nodes, once you have a sufficient number of them, change the CRUSH map again to distribute...
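    A hedged sketch of the manual CRUSH-map edit described above, using the standard export/decompile/recompile cycle; the file names are arbitrary:

    ```shell
    ceph osd getcrushmap -o crush.bin        # export the compiled map
    crushtool -d crush.bin -o crush.txt      # decompile it to text

    # Edit crush.txt: in the replicated rule, change
    #     step chooseleaf firstn 0 type host
    # to
    #     step chooseleaf firstn 0 type osd
    # so replicas may land on different OSDs of the same host.

    crushtool -c crush.txt -o crush-new.bin  # recompile the edited map
    ceph osd setcrushmap -i crush-new.bin    # install the new map
    ```

    When more nodes join, revert the rule to type host so replicas spread across nodes again.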
  13.

    Ceph emulating Raid 1

    How many nodes do you have right now?
  14.

    Ceph Pacific

    As far as I understand, you need to go to 6.4 first in order to migrate to 7, as there is no tool like pve6to7 in the repo that can take you from 5 to 7.
  15.

    Ceph Pacific

    http://download.proxmox.com/debian/ceph-pacific/
  16.

    Ceph Pacific

    I see that Ceph Pacific is now available in the repo, but it is only available for the Bullseye release, not for Buster. How can we test it? Any documentation would be useful.
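    The repo URL in the reply follows the same pattern as the Octopus line used in the 6-to-7 upgrade procedure elsewhere in this list. A hedged sketch of enabling it, which assumes a Bullseye (Proxmox VE 7) host since no Buster build is published; the file name ceph.list is an arbitrary choice:

    ```shell
    echo "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" \
        > /etc/apt/sources.list.d/ceph.list
    apt-get update
    ```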
  17.

    Ceph emulating Raid 1

    According to the ceph osd tree output, you only have one node, which holds the NVMe disks. Where is the second node, or did you clip the output? Otherwise, to make it work the CRUSH rule needs to be modified, but you will not get any advantage; it will just work, and it won't survive a failure.
  18.

    Ceph emulating Raid 1

    Post the status: the output of ceph osd tree and ceph -s.
  19.

    Ceph emulating Raid 1

    Yes, configure RF=2, though it is not recommended for a production setup, as a node failure will leave the setup unusable.
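    A hedged sketch of setting a replication factor of 2 on a pool; the pool name "mypool" is an assumption. Lowering min_size to 1 lets I/O continue with a single surviving replica, which is exactly why this configuration is unsafe for production:

    ```shell
    ceph osd pool set mypool size 2       # keep 2 replicas of each object
    ceph osd pool set mypool min_size 1   # allow I/O with only 1 replica up
    ```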
  20.

    Ceph not detecting disk failure

    proxmox-ve: 6.4-1 (running kernel: 5.4.119-1-pve)
    pve-manager: 6.4-8 (running version: 6.4-8/185e14db)
    pve-kernel-5.4: 6.4-3
    pve-kernel-helper: 6.4-3
    pve-kernel-5.4.119-1-pve: 5.4.119-1
    pve-kernel-5.4.106-1-pve: 5.4.106-1
    ceph: 15.2.13-pve1~bpo10
    ceph-fuse: 15.2.13-pve1~bpo10
    corosync...