Hello,
I've used Proxmox VE for a few years without a problem, but now I'm facing something I don't understand, so I don't know how to solve it.
Virtual Environment 4.4-13/7ea56165 on a dedicated server at SoYouStart. I should mention that I run 4 servers like this one and have this problem on this server only.
A few days ago I ordered a new server, set up a CT (Debian 8 - the CT is running smoothly) and started using it in production. I needed to extend the disk space from 1.5 TB to 2.5 TB. I like Proxmox VE because the WebUI makes this really easy. But I ran into a problem doing it:
Code:
qemu-img: Error resizing image: File too large
TASK ERROR: command '/usr/bin/qemu-img resize -f raw /var/lib/vz/images/112/vm-112-disk-1.raw 2244120444928' failed: exit code 1
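Out of curiosity I did the math on the numbers from the task log and from 112.conf (just arithmetic, nothing Proxmox-specific):

```python
# Sizes copied from the failing qemu-img command and the rootfs line in 112.conf.
target_bytes = 2244120444928           # size qemu-img was asked to resize to
current_bytes = 2139095072 * 1024      # rootfs size=2139095072K from 112.conf
TiB = 1024 ** 4

print(f"current: {current_bytes / TiB:.3f} TiB")  # just under 2 TiB
print(f"target:  {target_bytes / TiB:.3f} TiB")   # just over 2 TiB
print("target exceeds 2 TiB:", target_bytes > 2 * TiB)
```

So the current image sits just under 2 TiB and the failing resize would push it just over. Could there be a 2 TiB per-file limit on the underlying filesystem that I'm hitting?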
I had this problem on the very first disk extend. I could work around it by adding space 10 GB at a time, but I couldn't add 600/70/40 GB at once. I Googled a bit about it but I'm still stuck, so I hope you can help me.
Here are some specs:
ProxMox server
Code:
root@nsxxx:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 6.3G 8.9M 6.3G 1% /run
/dev/md2 20G 2.2G 16G 12% /
tmpfs 16G 40M 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/pve-data 5.4T 665G 4.5T 13% /var/lib/vz
/dev/fuse 30M 16K 30M 1% /etc/pve
root@nsxxx:/var/lib/vz/images/112# ls -lsh
total 663G
663G -rw-r----- 1 root root 2.0T May 20 09:50 vm-112-disk-1.raw
112.conf
Code:
arch: amd64
cores: 8
hostname: yyy
memory: 10240
nameserver: xxxx
net0: name=eth0,bridge=vmbr0,gw=xxx,hwaddr=xxxx,ip=xxx,type=veth
ostype: debian
rootfs: local:112/vm-112-disk-1.raw,size=2139095072K
searchdomain: xxx
swap: 1024
lxc.autodev: 1
lxc.hook.autodev: sh -c "mknod -m 0666 ${LXC_ROOTFS_MOUNT}/dev/fuse c 10 229"
CT
Code:
root@yyy:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 2.0T 778G 1.1T 41% /
none 492K 0 492K 0% /dev
tmpfs 3.2G 64K 3.2G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.2G 0 2.2G 0% /run/shm
BTW, since the last failure the disk size is shown in K instead of GB...
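Mostly a guess on my part, but the size in 112.conf is no longer a round number of GB, which might be why the GUI falls back to K (checked with a couple of lines of Python):

```python
size_k = 2139095072                 # rootfs size=2139095072K from 112.conf
# Is it a whole number of GiB? (1 GiB = 1024*1024 K)
print(size_k % (1024 * 1024) == 0)  # False: 32 K left over
print(size_k / (1024 * 1024))       # ~2040 GiB, but not exactly
```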
EDIT: I tried to repair the filesystem with fsck.ext3, but it doesn't seem to have helped.
Code:
root@nsxxx:/var/lib/vz/images/112# fsck.ext3 -pcfv vm-112-disk-1.raw
MMP interval is 5 seconds and total wait time is 22 seconds. Please wait...
vm-112-disk-1.raw: Updating bad block inode.
78053 inodes used (0.06%, out of 132644864)
101 non-contiguous files (0.1%)
25 non-contiguous directories (0.0%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 70987/484/3
166859639 blocks used (31.45%, out of 530579456)
0 bad blocks
53 large files
63967 regular files
7347 directories
0 character device files
0 block device files
1 fifo
380 links
6729 symbolic links (6570 fast symbolic links)
0 sockets
------------
78424 files
Thanks,