Hi,
I am also wondering about this myself.
But in fact, this is what we see in the Proxmox UI:
create full clone of drive ide0 (nfs-server:321/vm-321-cloudinit.qcow2) Formatting '/mnt/pve/nfs-server/images/457/vm-457-cloudinit.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata...
Hi,
thank you for pointing us to this!
After every request we make, we check the status of the task via its UPID before issuing the next command.
So for now that does not seem to be the case.
During our debug we did:
- issue clone -> waiting for task done via code
- issue config ->...
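The polling step described above can be sketched like this. This is a minimal sketch, not the actual code used: the helper name and the injected `get_status` callable are assumptions; the status dict shape follows the response of `GET /api2/json/nodes/{node}/tasks/{upid}/status`.

```python
import time

def wait_for_task(get_status, upid, timeout=300, interval=1.0):
    """Poll a Proxmox task until it leaves the 'running' state.

    get_status: callable taking the UPID and returning the task-status
    dict, e.g. the parsed body of
    GET /api2/json/nodes/{node}/tasks/{upid}/status (hypothetical wrapper).
    Returns the final status dict; raises TimeoutError on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status(upid)
        if status.get("status") != "running":
            # finished: status is 'stopped' and 'exitstatus' holds e.g. 'OK'
            return status
        time.sleep(interval)
    raise TimeoutError(f"task {upid} did not finish within {timeout}s")
```

Note that this only guarantees the task itself reported completion; as discussed below, a `TASK OK` does not necessarily mean the underlying storage operation is fully settled.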
Hi,
OK, after some deeper debugging it turned out that the TASK OK from the clone is actually the problem:
create full clone of drive ide0 (nfs-server:321/vm-321-cloudinit.qcow2) Formatting '/mnt/pve/nfs-server/images/457/vm-457-cloudinit.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off...
Hi Fabian,
thank you for your time!
The POST request from the browser is:
virtio0
"nfs-server:322/base-322-disk-0.qcow2/110/vm-110-disk-0.qcow2,discard=on,iops_rd=50001,iops_wr=50001,mbps_rd=251,mbps_wr=252,size=2G"
digest
"a0ab606e900238de1632c6834b53c5268c18076b"
background_delay...
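For reference, a payload like the one above can be assembled programmatically before POSTing it to `/api2/json/nodes/{node}/qemu/{vmid}/config`. This is a hypothetical helper, not part of any Proxmox library; the comma-separated option formatting mirrors the request shown above.

```python
def build_disk_update(volume, digest, **options):
    """Assemble the form payload for a qemu config POST (illustrative only).

    volume:  the virtio0 volume string, e.g. 'nfs-server:110/vm-110-disk-0.qcow2'
    digest:  the current config digest; Proxmox rejects the update if the
             config changed in the meantime (guards against concurrent edits).
    options: extra disk options such as discard='on', size='2G'.
    """
    opts = ",".join(f"{k}={v}" for k, v in sorted(options.items()))
    return {
        "virtio0": f"{volume},{opts}" if opts else volume,
        "digest": digest,
    }
```

Passing the `digest` back unchanged is what lets the server detect that someone else modified the VM config between your GET and your POST.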
Hi,
this is from:
pve-manager/8.1.10/4b06efb5db453f29 (running kernel: 6.5.13-3-pve)
A qcow2-based template on an NFS server is linked-cloned on the same NFS server.
Doing this via the API will result in tasks with this output:
create full clone of drive ide0...
Hi,
a ZFS dataset with a 1 TB quota was created.
A new datastore using the path of that ZFS dataset was created.
The "df" command on the CLI shows the correct size of 1 TB.
But the PBS UI shows the size of the whole zpool.
Is this a bug or intended behavior?
I would expect that the...
Hi,
it seems that was solved by updating the 7.3.3 install according to the
Proxmox Bugtracker
And yes, the VM in question started to transfer without issues :-)
I hope all of them will.
Hi,
I had a similar issue when migrating from 7.3.3 to 8.0.3.
Rebooting the 8.0.3 node (which was updated recently) changed how the problem presents itself.
Now I receive:
2023-08-09 16:22:10 ERROR: migration aborted (duration 00:00:00): internal error: cannot check version of invalid...
Hi,
I didn't test all LXC images that are downloadable through the Proxmox UI.
But it seems only DEB-based distributions have the OpenSSH server preinstalled.
All others (I tested all RPM-based distributions (CentOS, Fedora, Alma) as well as Arch Linux) seem to have no OpenSSH server...
Hi,
using Backup Server 2.2-1
This is the sync-job:
proxmox-backup-manager sync-job list
┌─────────────────┬───────────────────────────────┬───────────────┬─────────────────┬──────────┬──────────────┬─────────┬─────────┐
│ id │ store │ remote │...
Hi,
OK, after the process ended, the new storage also appears in the GUI/config.
So it seems that even though the task vanished, the job completed successfully.
In the end it is just a cosmetic problem.
Greetings
Oliver
Hi,
during a disk move from a Ceph to a ZFS storage (10 TB disk, already 20% moved), the thin-provisioning checkbox was activated for the ZFS storage.
The result was that the disk-move task vanished. And I mean vanished as in completely removed from the Proxmox UI...
Hi,
no, the root filesystem is _not_ on ZFS! Please excuse me for not clarifying this earlier. The OS is running on independent disks dedicated to the OS.
And yes, I am talking about CLI operations with the zfs / zpool commands. They are fast(er) and don't time out or have any issues...
Hi,
yes
but why should this block the creation/destruction of any other VM on the same storage?
---------
yes, it is of course natural that if you use weak hardware, you get what you paid for.
And without doubt, moving this 2 TB disk over 10G is quite unpleasant for ZFS using...
Hi Fiona,
it's clear that a migration will and must block all operations on the migrating VM to ensure a consistent state of the VM (incl. its disks).
What is not clear to me is why a migration of VM A to node X actually blocks/harms all other operations on the ZFS storage of node X.
As I...
Hello fiona,
as it seems, running migrations blocks several ZFS actions.
When trying to create new VMs by cloning an existing template, you will receive:
()
trying to acquire lock...
TASK ERROR: can't lock file '/var/lock/qemu-server/lock-265.conf' - got timeout
This is
proxmox-ve: 7.2-1...
Hi,
if you are moving a disk from Ceph to a ZFS storage for one VM and you try to remove another VM on the same host that is on ZFS, you will receive:
TASK ERROR: zfs error: cannot destroy 'local-zfs/vm-124-disk-0': dataset is busy
pve-manager/7.2-4/ca9d43cc (running kernel: 5.15.35-1-pve)...
Hi,
yes
The firewall is enabled on the net0 NIC.
Maybe this whole approach of setting the datacenter to accept-all is not how it is meant to work. Maybe it has to be set to deny and then be opened per host/VM. But unfortunately I cannot find any documentation detailing how this whole concept actually...