Backup and Update Task Error

walker13

New Member
Jun 4, 2019
I've been having an issue with backups and updates failing for the past couple of weeks. I started noticing because some of my Windows VMs were not saving changes: I would create folders or change settings, come back a couple of days later, and they were no longer there. I hope these issues are related.

I hope you guys can help me. I'm really new to Linux and Proxmox and am trying to learn as I go, but after trying to figure this out through Google for a week I have not been able to fix it. I did some research and found that I might need to expand the LVM and filesystem sizes. I went ahead and followed a couple of guides, and I think I did expand the right volume, since none of them are full.
[screenshots attached]

Your help on how to fix this is greatly appreciated.
 
Hi,
it seems the storage you are writing your backup to is almost full. Could you please post your storage configuration (`cat /etc/pve/storage.cfg`) and the full output of the task log? Also check the output of `vgs`, `pvs`, and `lvs` to see how much space you have left and how big your volumes are.
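To illustrate how to read that output, here is a minimal sketch of pulling the VFree column out of `vgs`. The sample values below are made up; on the real host you would simply run `vgs` and read the columns directly.

```shell
# Hypothetical sample output; on the real host, run: vgs
sample_vgs='  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1  14   0 wz--n- 931.01g 15.81g'

# VFree is the 7th column on the data line: space in the VG
# not yet allocated to any logical volume.
vfree=$(echo "$sample_vgs" | awk 'NR==2 {print $7}')
echo "Unallocated space in VG pve: $vfree"
```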
 

Sorry for the delay, I ran them.

[screenshots of the requested command output attached]
 

As you can see from these outputs, the volume group is full (see the VFree and PFree columns). You will have to expand your storage space by adding a new disk, or clean up unneeded disk images.

EDIT: This statement is incorrect; VFree and PFree show the space not yet allocated within the VG.
 

OK, I deleted some unused VMs. Of the ones listed in the pictures above, I'm only using 4. How do I get that space back?
 
I am a bit puzzled, as I do not yet understand how you perform your backups. Do you back up to the local storage or to a remote, and run out of space for the temp files? Which storages do you have defined?
If you have leftover disk images, you can easily remove them from the UI by clicking on the corresponding storage, selecting the disk to remove, and clicking the Remove button.
 

I have not intentionally defined any remote or local backup, so it must be the default backup to local. As I said, I am really new to this and am trying to learn as I go. I have deleted some VMs through the UI, but it does not seem to have freed any space.
 
OK, that is strange, as destroying VMs should remove the corresponding disks from the VG (check `pvs`, `vgs`, and `lvs` again). If you are sure that a disk is no longer in use (and does not show up in the storage in the UI), you can try `lvremove /dev/pve/<LV>`, where `<LV>` is the logical volume you want to remove. As you have probably already lost data, you should consider performing a backup from within the running VMs to a remote storage, in order to save what can still be saved.
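For reference, a hedged sketch of how such an `lvremove` call is constructed. Proxmox names VM disk LVs `vm-<VMID>-disk-<N>`; the VM ID and disk index below are examples only, and the sketch only prints the command, since `lvremove` is destructive.

```shell
# Hypothetical VM ID and disk index -- substitute your own.
VMID=115
DISK=0
lv_path="/dev/pve/vm-${VMID}-disk-${DISK}"

# Print the command rather than running it: lvremove is destructive,
# so only run it once you are certain the disk is unused.
echo "Would run: lvremove $lv_path"
```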
 
OK, I ran those commands again after deleting VM 115 and do not see any additional space.

[screenshot of the command output attached]

Also, I'm not sure what you want me to do with `lvremove`. Is that to remove a VM?
 
After talking to a colleague, let me reconsider... As the VG has allocated the full physical disk space, this is OK: you should have enough space left to hold the VM disks, and you are not overprovisioned. So the good news is that you will not need to delete any disk images. But it is strange that the VMs are losing data. Can you check the output of `dmesg` and `journalctl -b` for errors?
The bad news is that the ~60 GB on pve-root is not enough to store your backups, so you will have to get additional storage space.
 
Thank you for the help, and I apologize for the delayed responses; for some reason I stopped getting email notifications of replies after the second one. I went ahead and ran those commands.

[screenshots of the command output attached]

I ran the first command, but the output is extremely long. What is the proper way of posting it here? Also, regarding the 60 GB issue, is this something where I can allocate more storage to pve-root from the 1 TB drive that everything is on? Or do I have to add additional drives to the system?
 
The preferred way to post such output is in code brackets, although I would recommend filtering it with grep, or searching for errors first. How are your disks attached to the VMs? Do you use SCSI or SATA/IDE? The latter might cause issues, as the number of retries is limited.
In general I would recommend using a different drive for backups, as storing them on the same drive will not protect you from data loss if that drive fails.
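A small sketch of the kind of filtering meant here: grep the log for likely problem lines before posting. The sample lines below are made up stand-ins for real `dmesg` output.

```shell
# Made-up sample standing in for real `dmesg` output.
sample_dmesg='[  12.3] ata1.00: status: { DRDY }
[  12.4] blk_update_request: I/O error, dev sda, sector 123456
[  13.0] EXT4-fs (dm-6): mounted filesystem'

# Keep only lines that look like problems before posting them.
filtered=$(echo "$sample_dmesg" | grep -iE 'error|fail|timeout')
echo "$filtered"
```

On the real host the equivalent would be something like `dmesg | grep -iE 'error|fail|timeout'`.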
 
I am using SATA/IDE. Also, once I get external backups set up to my NAS, how do I go about freeing space on Proxmox?
 
It might be better to attach VM/CT disks via SCSI with the VirtIO-SCSI controller instead... See https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_hard_disk
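Switching an existing SATA disk over to SCSI can be sketched roughly as below. The VM ID, storage, and volume names are examples only; adjust them to your setup, and for a Windows guest make sure the VirtIO drivers are installed before switching the boot disk.

```shell
# Example VM ID and volume names -- substitute your own.
# 1) Use the VirtIO SCSI controller for the VM:
qm set 101 --scsihw virtio-scsi-pci

# 2) Detach the SATA disk (it becomes an "unused" disk)...
qm set 101 --delete sata0

# 3) ...and reattach the same volume as a SCSI disk:
qm set 101 --scsi0 local-lvm:vm-101-disk-0
```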
If you perform your backups to the NAS, you should not have a space problem. You can create a CIFS or NFS backed storage for the backups, see https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_cifs
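As a rough sketch of defining such a storage from the CLI (the storage ID, server address, share name, and credentials below are placeholders for your NAS details):

```shell
# Example values -- replace server, share, and credentials with your NAS details.
pvesm add cifs nas-backup \
    --server 192.168.1.50 \
    --share proxmox_backups \
    --username backupuser \
    --password 'secret' \
    --content backup
```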
 

I'm not sure if this thread is still active or not, but I successfully got the server to back up to my NAS. However, I am still having an issue where changes are not saved after restarts. I made a number of changes, then restarted the VM, and they were no longer there. Any idea why this could be happening?
 

Sorry for the late replies. I went ahead and checked the VirtIO drivers, and they are the most recent stable version for Windows 10 (0.1.141). I thought it might be the caching setting for the HDD on the Win10 VM; it was set to Default, so I tried both Write back and Write through with no luck. To test, I make a change (create a folder or change a setting), then restart or shut down and power back on, and when I come back it's gone. What is weird is that I have an additional hard drive set up with SATA and Write through which has no issues; it is just the boot drive.
 
