[SOLVED] Shared local Storage.

mctimber
Sep 11, 2018
Hello,

I built a Proxmox cluster for my company (first time), but I have a problem with the storage design, so I need some advice.

I built a cluster of 3 OVH servers with 2 x 2 TB disks (RAID 1) each; we also have a storage server with 4 x 4 TB disks, and the network is managed by a 1 Gbps vRack (an OVH service). Now, the problem: I thought that during migration the VM and its disk would move between servers. That is not the case; only the vm.conf moves, while the disk stays on the local storage of the original node. So I can't start the VM because it doesn't find the right local storage.
After some research, I understand that this is normal behaviour, but I'm kind of stuck because I'm not sure what to do.

I'm not sure putting all the VM images on the storage server is a great idea at the moment. The LAN network (vRack) is 1 Gbps and limited to one link per server. I'm afraid that would introduce too much latency, and I'm not sure how it would work either. 10 Gbps is overpriced for our needs at the moment, so I'll try to avoid that solution. We thought about using Ceph, but I don't know how to use it and I'm not sure our infrastructure is built for it.

I think it's a shame not to use the 2 TB of each server. I was wondering if it would be a good idea (good practice and clean) to maybe make an NFS partition on each node's local storage and share it.

All links, advice and experience are welcome.

Thank you for your time.
 
Hi,
After some research, I understand that this is normal behaviour, but I'm kind of stuck because I'm not sure what to do.
This behavior is only normal if you mark the storage as shared.

Please send the storage.cfg, which is located in /etc/pve/.

The LAN network (vRack) is 1 Gbps and limited to one link per server.

The main problem is that if you do a storage migration, the cluster can become unstable because network latency rises.
If you use HA, this can end in a fenced node.

We thought about using Ceph, but I don't know how to use it and I'm not sure our infrastructure is built for it.
Ceph needs 10 GBit networking and SSD/NVMe drives in such small setups.
 
Here is the storage.cfg file:

nfs: pve-stockage
        export /pve-stockage
        path /mnt/pve/pve-stockage
        server 192.168.50.2
        content backup,images
        maxfiles 3
        options vers=3,soft

dir: local
        path /var/lib/vz
        content vztmpl,images,iso,rootdir
        maxfiles 0
        shared 1

But when I disabled the shared option, I wasn't able to clone templates or find the ISOs.
I'll try to put the ISOs on the NFS storage and do some tests. Is putting the VM templates on the shared storage a good idea?
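If I understand the storage model correctly, that would just mean extending the content line of the existing NFS entry, something like this (a sketch based on my storage.cfg above; iso is for ISO images and vztmpl is for container templates, while VM templates are stored as regular images, which this storage already allows):

nfs: pve-stockage
        export /pve-stockage
        path /mnt/pve/pve-stockage
        server 192.168.50.2
        content backup,images,iso,vztmpl
        maxfiles 3
        options vers=3,soft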

The main problem is that if you do a storage migration, the cluster can become unstable because network latency rises.
If you use HA, this can end in a fenced node.
We don't need HA for now.
Thanks
 
Is putting the VM templates on the shared storage a good idea?
If you use linked clones, you can't migrate that easily.
So if you use templates on local storage, I would use full clones.

The shared flag has nothing to do with clones.
The shared flag tells PVE that this dir storage is the same on all nodes.
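So for a dir storage that is actually a different local disk on every node, the stanza should simply not carry the flag. A sketch of the corrected entry (same as in your storage.cfg, just without the shared line):

dir: local
        path /var/lib/vz
        content vztmpl,images,iso,rootdir
        maxfiles 0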
 
Hello,
Sorry for the delayed reply.
So I did what you said and removed the shared flag from the local storage. Migration works fine now (is it possible to get more detail on the progress?).

Now I have the problem I was trying to fix: I can't clone the template to create a new VM on another node. Is that normal?
I could migrate the VM manually after creation, but that feels like an avoidable step, right?

Regards
 
(is it possible to get more detail on the progress?)
Can you specify more details?
Now I have the problem I was trying to fix: I can't clone the template to create a new VM on another node. Is that normal?
Yes, because you have the data only on one node and not on the others.
You have to clone locally and then migrate, but this only works with full clones.
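For example, on the CLI the two steps would look roughly like this (the VMIDs, the name, and the target node are placeholders for your setup; this assumes a template with VMID 9000 on the source node):

qm clone 9000 101 --full --name new-vm
qm migrate 101 targetnode

The same can be done in the GUI with Clone (Mode: Full Clone) followed by Migrate.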
 
Is it possible to get more detail on the progress?

I was thinking of a progress bar, a percentage, or the MB transferred. When I migrate the template, I only get the "formatting" log line, and about 15 minutes later the rest of the log tells me it's done. Nothing indicates how long it will take or whether it has frozen. For example, when you run a backup you can follow the progress in the log.

Yes, because you have the data only on one node and not on the others.
You have to clone locally and then migrate, but this only works with full clones.

OK thanks for the info, I'll work with that.
 
