Search results

  1. TASK ERROR: cfs-lock 'file-replication_cfg' error: got lock request timeout

    Hi. I'm running a cluster with PVE 8.2.2 and Ceph storage for the VMs. There are two redundant corosync rings and no replication is active. Still, sometimes when I delete VMs via the API I see this error in the task log and the deletion fails, which means I need to unlock the VM to delete it...
  2. Permissions for Backup Management

    Hey, we have a Proxmox cluster with globally (in the datacenter section) configured backups. In some cases, users from a group that has access to a pool as PVEVMAdmin need to reboot VMs during a backup. This is not possible while a backup is running. Currently they need to ask a global admin with...
  3. ceph.log file switched to Unix timestamps

    Hey, after our upgrade to Proxmox 8 with Ceph 17.2.7 we observed a strange change in /var/log/ceph/ceph.log. In this file the format for log timestamps changed to Unix timestamps; we have no idea why and couldn't find anything in the Ceph docs or the release notes. All other Ceph log...
  4. [SOLVED] Traffic limit isn't working properly

    Hey, we have a problem with our PBS setup regarding the traffic limits. We run PBS 2.3.2 and PVE 7.3.4 with Ceph 16.2.9 as storage backend. We have configured a limit of 120 MiB/s for backups from all networks. This works great on the PBS side, but sometimes we observe much higher rates in...
  5. Questions about traffic control because it's quite confusing

    I set up an up-to-date PBS with an up-to-date PVE server. I've limited the incoming rate to 50 MiB/s. Looking at the system metrics, the results look quite correct; that's the good part. But I'm confused because the values shown in the backup job on the PVE server are much higher than the limit (see...
  6. fstrim performance on Ceph RBD with Proxmox

    Hey, we observe major performance issues while running fstrim on VMs backed by an SSD pool (3 replicas, 50 OSDs) with Ceph (16.2.7) on Proxmox. We have a workload that leads to significant data fluctuation on our VMs (CentOS 7). Therefore we have enabled the discard option for the disks and run...
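    For context on the setup this thread describes: discard must be enabled per disk before fstrim inside the guest can release blocks back to the RBD pool. A minimal sketch, where the VM ID, storage name, and disk slot are placeholders, not values from the thread:

    ```shell
    # Enable discard (and hint an SSD) on a SCSI disk backed by Ceph RBD
    # (VM ID 100 and "ceph-pool" are hypothetical examples):
    qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on,ssd=1

    # Inside the guest, trim all mounted filesystems and report what was freed:
    fstrim -av
    ```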
  7. [SOLVED] WebUI: select custom device class on OSD creation

    Hey, I have two SSD device classes in my Proxmox Ceph cluster. One is the default SSD class. The other one contains SSDs of a specific size; it is called SSD4T. At the moment it looks like the device class option for new OSDs (see screenshot) is hardcoded to the three default types. Am I right...
  8. Bulk migration parallel jobs vs. max-workers

    Short question about the meaning of two options: In the web interface under Cluster -> Options there is the option "Maximal Workers/bulk-action". From observation, this value defines the maximum number of parallel bulk migrations. But in the interface for bulk migrations there is the option "Parallel...
  9. [SOLVED] UI Bug at CPU cores with 6.2-10

    After upgrading to 6.2-10, the hardware page of some VMs reports only 1 core for the affected VMs even though they are running and have 20 cores. The unapplied change is not related to this issue and the VMs had 20 cores before, too. There are affected and unaffected VMs on the same host. Is this a known...
  10. Cluster crash on update from 6.1-8 to 6.2-10

    Hey, yesterday I started to upgrade our cluster. Most of the 13 nodes were running 6.1-8, but two or three newer ones already used 6.2-10, as they were added in the last weeks or had already been upgraded. No problem so far. But yesterday I wanted to upgrade another node (node5) and after running...
  11. [SOLVED] Problem with CPU flags

    We have a small problem here with the CPU type option for our Proxmox VMs. We currently run PVE 6.18. The problem is that if we select "host" as CPU type, the flags aren't the same, but they should be. Is there an easy explanation for that? From the docs: > If you want an exact match, you can set...
  12. How to apply ceph.conf change on a running cluster

    I want to set the following options for all OSDs in a running cluster in a persistent way. How do I apply the new ceph.conf to the running cluster and all daemons? I'm not quite sure if I need to set it manually via some config command or if there is a way to deploy the ceph.conf change...
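    Not part of the thread itself, but the two usual mechanisms can be sketched as follows; the option `osd_max_backfills` is only an illustrative example:

    ```shell
    # Since Ceph Nautilus, the centralized config database is the preferred,
    # persistent way to set an option for all OSDs at once:
    ceph config set osd osd_max_backfills 2

    # Verify what a specific daemon is actually using:
    ceph config show osd.0 osd_max_backfills

    # Runtime-only alternative (does not survive a daemon restart):
    ceph tell 'osd.*' injectargs '--osd_max_backfills 2'
    ```

    Plain edits to ceph.conf, by contrast, only take effect on daemon restart.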
  13. [SOLVED] Configure vm shutdown timeout

    After searching the forum, I read that the hard timeout before a VM is killed is 3 minutes. In our setup this is by far not enough for some hosts. So my question is if there is any option to increase this timeout (yes, shutting down the VM from the inside is an option too). Killed databases aren't that great.
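    For reference, `qm shutdown` accepts a per-call timeout; the VM ID and the value below are placeholders, not from the thread:

    ```shell
    # Request a graceful shutdown and wait up to 15 minutes before the
    # task gives up (VM ID 100 and the 900 s value are hypothetical):
    qm shutdown 100 --timeout 900
    ```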
  14. [SOLVED] Release notes for patch releases

    Are there any release notes for the patch releases? For example, 6.0-9 to 6.0-11?
  15. [SOLVED] Is there an easy way to mount the RBD image of a VM on a non-Proxmox host?

    I need to copy some huge VMs. To reduce the downtime and the overall resources needed, I want to mount the RBD image of a VM on the old host and dd it directly to the block device. Currently I'm not able to find a way to mount an image created on the PVE Ceph cluster on another host. Is...
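    One possible approach, sketched under the assumption that the remote host has the cluster's ceph.conf and an authorized keyring; the pool, image, and target device names are placeholders:

    ```shell
    # Map the image as a local block device on the remote host
    # ("ceph-pool/vm-100-disk-0" is a hypothetical name):
    rbd map ceph-pool/vm-100-disk-0
    # The device appears as e.g. /dev/rbd0 and can be copied with dd:
    dd if=/dev/rbd0 of=/dev/target-vg/target-lv bs=4M status=progress

    # Alternative: stream the image without mapping it at all:
    rbd export ceph-pool/vm-100-disk-0 - | dd of=/dev/target-vg/target-lv bs=4M
    ```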
  16. [SOLVED] Bulk live migration possible?

    If I open the context menu for a node, I have the option to migrate all VMs to another node in bulk mode. This migration seems to be an offline migration, as the VMs are just stopped and restarted on the new cluster node. Is it possible at the moment to do a bulk migration but in live mode, so...
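    As a workaround sketch (not from the thread), running VMs can be live-migrated one by one from the shell; the target node name is a placeholder:

    ```shell
    # Live-migrate every running VM on this node to "node2" (hypothetical name):
    for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
        qm migrate "$vmid" node2 --online
    done
    ```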
  17. qm importdisk from remote LVM block device without creating an image

    Hey, we have some VMs on our old cluster (not Proxmox, libvirt-based) that we want to migrate to our new Proxmox cluster. The VM storage on the old cluster consists of LVM volumes. So what I need is a way to import these block devices to the new Proxmox cluster using Ceph as storage backend. Is...
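    One way to avoid an intermediate image file, sketched under the assumption of root SSH access to the old host; all host, volume, pool, and VM names below are placeholders:

    ```shell
    # Stream the LVM volume straight from the old host into a new RBD image
    # (old-host, vg0/vm-disk, ceph-pool, and VM ID 100 are hypothetical):
    ssh root@old-host 'dd if=/dev/vg0/vm-disk bs=4M' | rbd import - ceph-pool/vm-100-disk-0

    # Attach the imported image to the target VM:
    qm set 100 --scsi0 ceph-pool:vm-100-disk-0
    ```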
  18. [SOLVED] PVE 6: Duplicate monitors and managers in ceph overview

    Hey, I have a fresh Proxmox 6.0.5 cluster installation with no changes besides cluster creation, installation of Ceph (via GUI), and the addition of two more monitors and managers (also via GUI). The list under Ceph -> Monitor seems correct, but in the overview every monitor and manager has a...