Problem with 3.3 local storage

driv3l

I just upgraded my 2 nodes to 3.3.... FYI, one of the nodes didn't upgrade correctly and would hang on reboot, so I ended up wiping and reinstalling.

The problem I am having is on the node that was wiped and reinstalled, I can't seem to upload anything to the local storage. I can download openvz templates just fine, but when I try uploading an ISO, I get "Error 500: can't activate storage local on node xyz".

I have tried just about every way to get storage working on that node: I have a second hard disk in the node which I set up and tried adding as LVM storage, and I also tried adding a local directory as storage. Every time I try uploading an ISO to that node, I get the same error. It doesn't matter whether it's to the default local storage, a newly created LVM or local directory, a shared directory from node 2, or anything else.

Anyone have any suggestions on how to solve this?

Btw, this problem does not occur on node2 which was upgraded to 3.3. I also noticed that node1 (which was wiped and reinstalled) has a partition table in GPT format whereas the node that was upgraded has a partition table in MBR format. I am not sure if that has any bearing on the problem.

Thanks.
 
Anyone? I am kinda stuck at the moment not being able to upload any ISOs. Any pointers in the right direction would help.

Thanks!
 
post your:

> pveversion -v

and:

> df -h
 
Hi,

Thanks for the replies. Here is the information you requested. Note: this is from a clean install using the Proxmox 3.3 ISO burned to a disk. This node (which has the upload problems) joined an existing node in a cluster. The existing node was upgraded to 3.3 and does not have any problems with uploads.

pveversion -v:
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


df -h:
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 1.6G 376K 1.6G 1% /run
/dev/mapper/pve-root 95G 966M 89G 2% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.2G 53M 3.1G 2% /run/shm
/dev/mapper/pve-data 792G 197M 792G 1% /var/lib/vz
/dev/sda2 494M 36M 434M 8% /boot
/dev/fuse 30M 24K 30M 1% /etc/pve




cat /etc/pve/storage.cfg is empty (no contents)

pvs:
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 931.01g 16.00g

vgs:
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 931.01g 16.00g

lvs:
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
data pve -wi-ao--- 804.02g
root pve -wi-ao--- 96.00g
swap pve -wi-ao--- 15.00g
 
Add these lines to your storage.cfg file:
Code:
dir: local
          path /var/lib/vz
          content images,iso
          maxfiles 0
Somehow your storage.cfg file lost its configuration.
Or add the local storage back through the GUI.
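If you prefer the command line, something like the following should recreate the same entry (a sketch; the exact options assume the default /var/lib/vz path and that no storage named "local" is currently defined):
Code:
# Sketch: recreate the default "local" directory storage from the CLI.
# Assumes /var/lib/vz exists and no storage named "local" is defined yet.
pvesm add dir local --path /var/lib/vz --content images,iso

# Confirm the storage is known and can be activated:
pvesm status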
 
Hi Wasim,

I don't believe that is the problem. I did add the lines back to storage.cfg, but it made no difference. The storage.cfg file did have those settings before, but I'm not sure when they got erased/deleted. I have been trying various ways to get the local storage on the new node working, so somewhere along the line that file seems to have ended up empty.

The Web GUI does show the local storage, and even after adding the lines back to the storage.cfg file, I still get the exact same error on the new node (this problem never occurs on the upgraded node), so something weird is going on. I am not sure if it's related to the existing node which was upgraded. Unfortunately I can't afford to wipe and reinstall the node that was upgraded since I have a number of important VMs on there that I don't want to take the chance of losing.
 
I just reread your original post. What you are trying to do is upload an ISO to the local storage through the web GUI, correct? If so, then I think that error is to be expected when uploading a large ISO. I never had success with ISO files larger than 300 MB through the web GUI. I always used FileZilla instead: faster, hassle-free and guaranteed to work.
 
Some more info... I ran the same diagnostic commands you guys asked for on the node that was upgraded. It's showing an issue, so this upgraded node (which is serving as the master in the cluster) might be causing the problem:

pveversion -v:
proxmox-ve-2.6.32: not correctly installed (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-24-pve: 2.6.32-111
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


df -h:
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 1.6G 412K 1.6G 1% /run
/dev/mapper/pve-root 74G 2.7G 67G 4% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.2G 53M 3.1G 2% /run/shm
/dev/mapper/pve-data 190G 21G 169G 11% /var/lib/vz
/dev/sda1 495M 104M 366M 23% /boot
/dev/mapper/storage-storage 3.6T 274G 3.2T 8% /pool
/dev/fuse 30M 24K 30M 1% /etc/pve
/storage/private/103 8.0G 913M 7.2G 12% /var/lib/vz/root/103
/pool 3.6T 274G 3.2T 8% /var/lib/vz/root/103/mnt/pool
none 103M 1.3M 102M 2% /var/lib/vz/root/103/run
none 5.0M 0 5.0M 0% /var/lib/vz/root/103/run/lock
none 512M 0 512M 0% /var/lib/vz/root/103/run/shm




/etc/pve/storage.cfg (added these back per Wasim's response... was empty previously):
dir: local
path /var/lib/vz
content images,iso,vztmpl,backup,rootdir
maxfiles 0

pvs:
PV VG Fmt Attr PSize PFree
/dev/sda2 pve lvm2 a-- 297.59g 16.00g
/dev/sdb1 storage lvm2 a-- 931.51g 0
/dev/sdc1 storage lvm2 a-- 931.51g 0
/dev/sdd1 storage lvm2 a-- 931.51g 0
/dev/sde1 storage lvm2 a-- 931.51g 0




vgs:
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 297.59g 16.00g
storage 4 1 0 wz--n- 3.64t 0


lvs:
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
data pve -wi-ao--- 192.09g
root pve -wi-ao--- 74.50g
swap pve -wi-ao--- 15.00g
storage storage -wi-ao--- 3.64t


How do I solve the "not correctly installed" per pveversion -v without potentially losing any VMs or data?
 
Yes... I am uploading an ISO. The ISO is 360 MB. It uploads fine on the upgraded node, and I did not have any problems uploading large ISOs with previous versions of Proxmox. I've even uploaded ISOs several gigabytes in size without a problem. It's only this cleanly installed 3.3 that is complaining.
 
Hi Wasim,

So this problem seems to be related to web uploads on a freshly installed 3.3 (it was not an issue with Proxmox previously, or on a node that was upgraded to 3.3).

I can scp the files over to the newly installed node, and installing and running them through the web UI then works fine.

It sucks that the web UI is broken for uploads now. I hope they fix it soon. At least I have a workaround.
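For anyone hitting the same thing, a minimal sketch of that workaround, assuming the default directory layout of the local storage (ISO images live under template/iso); the ISO name and hostname are placeholders:
Code:
# Copy an ISO directly into the local storage of the affected node.
# "my-installer.iso" and "node1" are placeholders; the path assumes the
# default /var/lib/vz directory storage.
scp my-installer.iso root@node1:/var/lib/vz/template/iso/

# OpenVZ templates would go to /var/lib/vz/template/cache/ instead.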
 
> It sucks that the web UI is broken for uploads now. I hope they fix it soon. At least I have a workaround.
I am not sure what exactly my issue was, but I had the very same issue back in Proxmox 2. I never really looked into it deeply since I was still learning Proxmox then, and I found an alternate way to upload ISOs. scp/FileZilla turned out to be much more convenient for me. I think I even opened a thread for this issue and dietmar provided some explanation some time ago.
 
> Some more info... I ran the same diagnostic commands you guys asked for on the node that was upgraded. It's showing an issue, so this upgraded node (which is serving as the master in the cluster) might be causing the problem:
Hi,
there isn't a master in the cluster anymore - all nodes that have quorum can act in the cluster; nodes without quorum cannot!
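As a quick sanity check of that, the relevant lines can be pulled straight out of pvecm status on each node (a sketch, matching the output posted further down in this thread):
Code:
# Show only the quorum-related lines of the cluster status:
pvecm status | grep -E 'Quorum|Expected votes|Total votes'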
> [pveversion -v and df -h output from the upgraded node, quoted above]
> /etc/pve/storage.cfg (added these back per Wasim's response... was empty previously):
> dir: local
> path /var/lib/vz
> content images,iso,vztmpl,backup,rootdir
> maxfiles 0
This should not happen with a cluster node! The content of /etc/pve is the same on all nodes (only the symlinks differ).
This means that on your node with the upload trouble you have issues with your cluster.
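One quick way to verify whether the two copies really differ (a sketch; "node2" stands for the other node's hostname and assumes root SSH access between the nodes):
Code:
# Compare storage.cfg between this node and the other one; no output means they match.
# "node2" is a placeholder for the other node's hostname.
ssh root@node2 cat /etc/pve/storage.cfg | diff - /etc/pve/storage.cfg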
> [pvs, vgs and lvs output from the upgraded node, quoted above]
> How do I solve the "not correctly installed" per pveversion -v without potentially losing any VMs or data?
On which nodes do you have VMs and data?

Please post the following from all nodes in the cluster:
Code:
pvecm status
pvecm nodes
Udo
 
Hi Udo,

Here's the output of pvecm status for both nodes:

Version: 6.2.0
Config Version: 27
Cluster Name: Primary
Cluster Id: 12057
Cluster Member: Yes
Cluster Generation: 1596
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: Server1
Node ID: 1
Multicast addresses: 239.192.47.72
Node addresses: 192.168.1.128


Version: 6.2.0
Config Version: 27
Cluster Name: Primary
Cluster Id: 12057
Cluster Member: Yes
Cluster Generation: 1596
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: Server2
Node ID: 2
Multicast addresses: 239.192.47.72
Node addresses: 192.168.1.129



pvecm nodes:
1 M 1596 2014-09-18 17:06:20 Server1
2 M 1596 2014-09-18 17:06:20 Server2


1 M 1596 2014-09-18 17:06:19 Server1
2 M 1580 2014-09-18 14:54:32 Server2
 
> [pvecm status and pvecm nodes output from both nodes, quoted above]
Hi,
that doesn't look bad!
And the content of /etc/pve/storage.cfg is different on both nodes?!

Can you try a
Code:
service pve-cluster restart
to rebuild the content of /etc/pve from /var/lib/pve-cluster/config.db? See http://pve.proxmox.com/wiki/Proxmox_Cluster_file_system_%28pmxcfs%29
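A minimal sequence for trying that, with a check before and after (a sketch; restarting pve-cluster briefly remounts the /etc/pve FUSE filesystem, so best done when nothing else is touching the cluster):
Code:
# Check cluster membership first, then restart the pmxcfs daemon
# and verify that /etc/pve is populated again afterwards.
pvecm status
service pve-cluster restart
cat /etc/pve/storage.cfg
pvesm status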

Udo
 
