command lvcreate --size 2048M … failed with exit code 5

hisaltesse

Well-Known Member
Mar 4, 2009
Since we upgraded from Proxmox 1.2 to Proxmox 1.9, the scheduled backup no longer works. We get the following error; any idea why, and what we need to do to fix it?

command 'lvcreate --size 2048M --snapshot --name vzsnap-proxmox1-0 /dev/pve/data' failed with exit code 5
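For reference: exit code 5 from lvcreate only says the command failed; a common cause is that the volume group has no free space left for the snapshot, for example because snapshot volumes from earlier failed backups were never removed. A quick way to check, assuming the volume group is pve as in the error message:

Code:
# show how much unallocated space is left in the volume group
vgs pve

# list all logical volumes, including any leftover vzsnap-* snapshots
lvs pve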
 
Definitely not my lucky day. I started the backup of the OpenVZ containers, but a backup that usually takes less than 6 hours has now been running for over 14 hours and is slowing the server down.

Any idea why I am having these issues?
 
Right now the server will not even reboot. It is showing:

Load: 694.08
CPU: 0%
IODelay: 61%

And all containers are stalled at their maximum memory.

Any idea what could be the cause of this issue?
 
The server was not very responsive and the reboot command wouldn't go through.
I had to hard reboot the server by cutting the power and bringing it back up that way.

I made it skip the fsck, but it still took a very long time to boot the 7 containers we have on there.

Something is wrong, things are a little slow.

Since the backup is failing and the server seems unstable, is there a way I can migrate the containers by copying their files? If so, what exactly should I copy?

I am nervous about running a live migration and running into issues that cause downtime. Is there a way to manually copy all the container files from one node to the other and get them working on the other node?
 
I am nervous about running a live migration and running into issues that cause downtime.

You can also use offline migration (no need to live migrate).

Is there a way to manually copy all the container files from one node to the other and get them working on the other node?

see: man vzmigrate
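If you would rather do it by hand, the usual approach for OpenVZ containers is to stop the container, copy its private area and its config file to the other node, and start it there. A rough sketch (CTID 1195 and TARGET-NODE are only placeholders, and the paths assume the default Proxmox/OpenVZ layout):

Code:
# on the source node: stop the container
vzctl stop 1195

# copy the container's files and its config to the other node
rsync -a /var/lib/vz/private/1195/ root@TARGET-NODE:/var/lib/vz/private/1195/
scp /etc/vz/conf/1195.conf root@TARGET-NODE:/etc/vz/conf/

# on the target node: start the container
vzctl start 1195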
 
I am not sure I fully understand what you are suggesting I do.

Here is the output of lvs:

Code:
lvs
  /dev/dm-5: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-7: read failed after 0 of 4096 at 0: Input/output error
  LV                     VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  data                   pve  owi-ao 583.14G                                      
  root                   pve  -wi-ao  96.00G                                      
  swap                   pve  -wi-ao  15.00G                                      
  vzsnap                 pve  Swi-I-   2.00G data   100.00                        
  vzsnap-proxmox1-0      pve  Swi-I-   1.00G data   100.00

And here is my df -h:


Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   95G  2.1G   88G   3% /
tmpfs                 7.9G     0  7.9G   0% /lib/init/rw
udev                   10M  884K  9.2M   9% /dev
tmpfs                 7.9G     0  7.9G   0% /dev/shm
/dev/mapper/pve-data  574G  157G  418G  28% /var/lib/vz
/dev/sda1             504M   62M  418M  13% /boot
/dev/sdb1             688G  330G  324G  51% /vzbackups

Here is the output of vgscan:

Code:
vgscan
  Reading all physical volumes.  This may take a while...
  /dev/dm-5: read failed after 0 of 4096 at 626142412800: Input/output error
  /dev/dm-5: read failed after 0 of 4096 at 626142470144: Input/output error
  /dev/dm-5: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-5: read failed after 0 of 4096 at 4096: Input/output error
  /dev/dm-5: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-7: read failed after 0 of 4096 at 626142412800: Input/output error
  /dev/dm-7: read failed after 0 of 4096 at 626142470144: Input/output error
  /dev/dm-7: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-7: read failed after 0 of 4096 at 4096: Input/output error
  /dev/dm-7: read failed after 0 of 4096 at 0: Input/output error
  Found volume group "pve" using metadata type lvm2
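Those two vzsnap volumes are marked as invalid snapshots (the Swi-I- attributes) and are 100% full - leftovers from earlier failed backups, and almost certainly the /dev/dm-5 and /dev/dm-7 devices producing the read errors. Removing them should clear the errors and free the space vzdump needs for a new snapshot; a sketch, using the names from the lvs output above (lvremove will ask for confirmation):

Code:
lvremove /dev/pve/vzsnap
lvremove /dev/pve/vzsnap-proxmox1-0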
 
OK, I did it.
1. How can I confirm that my issue is resolved? And what is the actual issue?

2. Also, originally I had set "size: 2048" in my /etc/vzdump.conf. But after the initial backup issues (see the beginning of this thread) I removed that line, and the backup was able to start but stalled the server and never completed.

So I was wondering what I should set the vzdump.conf size to on this server, considering that this particular hardware node runs on a single 750GB SATA2 drive (not RAIDed), and typically during backups everything slows down and IO delay increases.

Let me know. Thanks for your help.
 
I guess my question is: in order to solve my main issue, should I keep the size at the default 1024MB, or should I increase it from the 2048MB I had to, say, 4096MB in /etc/vzdump.conf?
 
OK, I did it.
1. How can I confirm that my issue is resolved? And what is the actual issue?

You need to find the issue first!

So I was wondering what I should set the vzdump.conf size to on this server, considering that this particular hardware node runs on a single 750GB SATA2 drive (not RAIDed), and typically during backups everything slows down and IO delay increases.

I guess that explains the issue - there is simply too much IO load on that single disk.
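On a single SATA disk it can also help to make vzdump throttle itself so the containers stay usable during the backup. A rough sketch of /etc/vzdump.conf - the values are only examples and need tuning, and bwlimit/ionice are only useful if your vzdump version supports them:

Code:
# /etc/vzdump.conf
# LVM snapshot size in MB - must still fit in the volume group's free space
size: 4096
# limit backup read/write bandwidth in KB/s
bwlimit: 20000
# run the backup with low IO priority
ionice: 8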
 
My node is still failing to back up containers over a certain size. For example, I am trying to back up a 60GB container and:


A - it fails with the vzdump.conf size set to 2048 or 4096, with the message:


1195: Mar 13 21:12:01 ERROR: Backup of VM 1195 failed - command 'lvcreate --size 2048M --snapshot --name vzsnap-proxmox1-0 /dev/pve/data' failed with exit code 5


B - when I remove the setting from vzdump.conf, the backup starts but never finishes; the server stalls and I get the LVM errors reported above.
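Before changing the config again it is worth checking two things: whether the volume group really has enough unallocated space for the snapshot size being requested, and whether the disk itself is healthy. A quick sketch (assuming /dev/sda is the system disk, as the df output above suggests; smartctl comes from the smartmontools package):

Code:
# VFree must be larger than the vzdump "size" setting for the snapshot to fit
vgs pve

# check the drive's SMART status for signs of a failing disk
smartctl -a /dev/sda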




Can someone please help me understand what needs to be done here?


1. Do I have to re-install Proxmox, and will this change anything?


2. Do I have to increase some settings somewhere so that the backup can work?


3. What exactly is the problem?


4. What exactly (please include exact commands) should I do to increase the size of my LVM snapshot, which seems to keep running out of space?


5. Or am I instead having a hard drive issue?
 
On the box I am getting this error message:

Code:
__ratelimit: 163 callbacks suppressed
Fatal resource shortage: privvmpages, UB 1195
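That message is not an LVM or disk error - it is the OpenVZ beancounter of container 1195 hitting its privvmpages limit, which would also explain containers sitting at their maximum memory. A sketch of how one might check and raise it (values are in 4 KB pages and the numbers below are only an example):

Code:
# show which beancounters have a non-zero failcnt inside CT 1195
vzctl exec 1195 cat /proc/user_beancounters

# raise the privvmpages barrier:limit (example: roughly 3 GB)
vzctl set 1195 --privvmpages 786432:818912 --save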
 
