Here is another one. In this case it worked perfectly.
Resizing image: 100% complete...done.
resize2fs 1.43.4 (31-Jan-2017)
Filesystem at /dev/rbd/vNTDB-Storage/vm-150-disk-0 is mounted on /tmp; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on...
root@vNTDB-host-1:~# rbd du vNTDB-Storage/vm-160-disk-0
warning: fast-diff map is not enabled for vm-160-disk-0. operation may be slow.
NAME PROVISIONED USED
vm-160-disk-0 10GiB 4.96GiB
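By the way, if the slow rbd du bothers you, the warning can probably be avoided by enabling the object-map and fast-diff features on the image. A rough sketch, assuming exclusive-lock is already enabled (the default) and using the image name from above:
rbd feature enable vNTDB-Storage/vm-160-disk-0 object-map fast-diff   # enable the features the warning mentions
rbd object-map rebuild vNTDB-Storage/vm-160-disk-0                    # rebuild the map so fast-diff has data to work with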
Yes, I know it is an old post, but it has the same issue.
I have a similar issue.
ii pve-container 2.0-40 all Proxmox VE Container management tool
The disk size is increased, but inside the container it is unchanged.
root@vNTDB-host-4:~# rbd...
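A rough sketch of how I would finish the grow by hand, assuming the volume from the rbd du output above, the container stopped, and the image not mapped anywhere else (the paths are assumptions):
rbd map vNTDB-Storage/vm-160-disk-0                # udev creates /dev/rbd/vNTDB-Storage/vm-160-disk-0
e2fsck -f /dev/rbd/vNTDB-Storage/vm-160-disk-0     # check the ext4 filesystem before growing it
resize2fs /dev/rbd/vNTDB-Storage/vm-160-disk-0     # grow the filesystem to the new image size
rbd unmap /dev/rbd/vNTDB-Storage/vm-160-disk-0     # release the mapping again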
I have an idea.
The pool distribution is correct. This situation happens when there is a removal process running in the background.
The system starts to remove VM 557 and releases the ID 557. My app picks up the "next free VMID", which is 557.
The cloning process uses the ID 557. But the removal...
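A small guard I could add on my side, sketched with pvesh (the node name is a placeholder and the exact output depends on the PVE version): before trusting the next free VMID, check that no destroy task is still running on the node.
pvesh get /cluster/nextid          # ask the cluster for the next free VMID
pvesh get /nodes/node-1/tasks      # if a qmdestroy/vzdestroy task for that ID is still running, wait before cloning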
I created a test to reproduce the situation.
You can find the log in the attached file.
The 127.0.0.1 address is a link to node-1.
There was a running deletion process during the test.
The process cloned all VMs that are located in the Template-Basic17-v3s1.1 pool.
create new pool
collect the...
In this case the application uses only linked clones.
I only have logs that show what happened.
The last time, the application was connected to node-1 to create a new pool and to create linked clones that are located on node-3.
I don't have logs from single-node activity.
I could not realize...
I tested this function because it is very important to collect all cloned VMs in a pool.
My flow is this:
- create a new pool
- collect all necessary cloned VMs (9-15 virtual machines) and send a clone request over HTTP with the pool parameter (see the sketch below)
After the application created linked clones for all...
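For reference, this is roughly what one such clone call looks like with pvesh (node, VMIDs and pool name here are placeholders, not from my logs); full=0 makes it a linked clone and the pool parameter should place the new VM into the pool directly:
pvesh create /nodes/node-1/qemu/9000/clone --newid 501 --full 0 --pool NewPool   # linked clone of template 9000 into pool NewPool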
Hi,
I use the API interface to manage VMs.
I have a process that clones the collected VM IDs into a new pool.
Some cloned VMs will not be in the pool.
I use a workaround: after the clone processes I check which VMs are located in the pool and add the missing ones.
But the pool members response is sometimes not...
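The workaround itself is simple, sketched here with pvesh (the pool name and VMIDs are placeholders):
pvesh get /pools/NewPool                 # list which VMs actually ended up in the pool
pvesh set /pools/NewPool --vms 501,503   # add the VMIDs that are missing (comma-separated)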
Hi,
I could not analyze the whole process.
I found something.
This task took 1 minute and 2 seconds.
Every 1.0s: ps --forest -o pid,tty,stat,time,cmd -g 2572454...
Hi,
I create and destroy a lot of VMs in a short time.
The destroy task is usually very slow.
In the first 10 seconds the linked disk is removed.
I don't know what's happening in the background, but the whole destroy task takes more than 1 minute.
How can I find out why the destroy process takes...
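What I would try is reading the task log of the destroy job, assuming pvesh is available and you have the UPID of the task (node name and UPID below are placeholders):
pvesh get /nodes/node-1/tasks              # find the UPID of the slow destroy task
pvesh get /nodes/node-1/tasks/<UPID>/log   # dump the task log to see which step it is stuck on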
I had the same issue.
:~# rbd rm vNTDB-Storage/vm-112-disk-0
2019-08-01 14:22:29.567286 7fc2b97fa700 -1 librbd::image::RemoveRequest: 0x560d820f5470 check_image_watchers: image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
This means the...
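To see who is still holding the image, rbd status lists the watchers (same pool/image as above):
rbd status vNTDB-Storage/vm-112-disk-0   # shows the watchers (client address and cookie) that block rbd rm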
Hi,
I use the API interface to manage the VMs. I work with a lot of templates and I use linked clones. But sometimes I need to remove the old templates.
Is there a command that can show me if a template has linked-clone VMs?
Yes, I know I can find it in the disk configuration, but the system has...
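One possible check on the Ceph side, assuming the template's base disk is named like base-<vmid>-disk-0 and the clones hang off a snapshot (I think Proxmox names it __base__, but verify with rbd snap ls first; all names here are assumptions):
rbd snap ls vNTDB-Storage/base-9000-disk-0             # find the snapshot the clones are based on
rbd children vNTDB-Storage/base-9000-disk-0@__base__   # list the linked-clone disks, i.e. the VMs still using the template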
Thank you for your help and hints.
I replaced some SATA disks with SAS 10k disks and added one SSD to every node to handle the Ceph DB.
I got about 10 times better Ceph performance. This configuration is enough for my activities.
It was not easy because I could not shut down the cluster.