We're facing the same problem, but I already did some tests to get rid of the NFS share and use vzdump to stdout over SSH from the backup server. It works perfectly, but I need to script the whole procedure to log in to every PVE node and run vzdump for a VMID on the right node. Some raw steps so...
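As a rough sketch of what I mean (hostname, VMID and target path are just placeholders, and options may differ per PVE version):
ssh root@pve-node1 "vzdump 101 --mode snapshot --compress lzo --stdout" > /backup/vzdump-qemu-101.vma.lzo
The script would first have to find out on which node the VMID currently runs and then SSH to that node.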
You're possibly right, I'm not sure about that. I think RAM will only serve as read cache and will not corrupt the data; at most it returns objects from RAM (cache) that aren't up to date, but afaik Gluster checks the directory structure on both ends on every listing and will update its RAM...
IMO it makes sense. I would like to use it too, but for a slightly different situation. For a database server I use a system disk that includes all databases, and I want to use the PVE snapshot function for this disk. The second disk is for GlusterFS, shared with another server. Imagine vm01 and vm02, both...
You're overcomplicating things. Just because it's possible doesn't mean it's good. You haven't mentioned your requirements and available hardware (storage configuration), so it's very hard to suggest an alternative, but what you're doing might not be the best way to go. More layers in the stack...
Whether your setup will work fine depends on the use case of your Windows servers. SQL is more resource intensive than serving a remote desktop for one Word user. I think your storage will suffice, but try to calculate the IOPS your workload needs, then calculate the IOPS your storage can...
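To give a rough, made-up example of such a calculation: if 10 Windows VMs each need about 50 IOPS on average, the storage has to sustain roughly 10 x 50 = 500 IOPS, plus headroom for peaks and for the RAID write penalty.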
I just want to say the new menu works fine for us. A product evolves and changes are sometimes necessary; you just have to adjust to it. Doesn't this happen all the time in software, and in the rest of the world? Your new microwave probably doesn't use the same controls as your old one, but it really...
It all depends on your workload. Response time of SSD is really good, from my pveperf:
AVERAGE SEEK TIME: 0.29 ms
You need to determine the storage demand:
- performance
- disk space
- reliability
- costs
Reliability depends on the chosen RAID configuration; RAID0 is no RAID! RAID5 can be...
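As an illustration with rule-of-thumb numbers (not measured values): a single 7.2k SATA disk delivers roughly 80 IOPS, and a random write on RAID5 costs about 4 backend I/Os, so a 4-disk RAID5 gives around 4 x 80 = 320 read IOPS but only about 320 / 4 = 80 write IOPS. RAID10 has a write penalty of 2, so the same 4 disks give about 160 write IOPS.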
It seems you're missing the hotplug definition in your VM settings. In the GUI you should enable Hotplug support for at least Disk in the Options section of VM ID 164. Something like this should appear in 164.conf:
hotplug: disk,network,usb
Stopping and starting the VM is necessary to activate...
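If you prefer the CLI, something like this should do the same (a sketch; check the qm man page for your PVE version):
qm set 164 --hotplug disk,network,usb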
We recently updated our APC RPS devices to the newest 6.x firmware, mainly because of issues with SSL support in the old firmware. After updating the firmware, fence_apc didn't work anymore; some investigation showed it was because pre-5.x commands were sent to the APC for power off and on. The...
I had the same problem but got it solved with fence_drac5. You pointed me in the right direction :-)
Your command (I used the same):
fence_drac5 --ip=xxxxxxxxxx -l fencing_user -p xxxxxxxxx -c "admin1->" -x -v -v -v -o off
is according to the documentation at...
Hello,
Corosync is used to set up a cluster between the Proxmox nodes, but what are the timeout settings? After how many seconds is a node considered dead?
We use an IGMP querier on our switches, but when the master switch (which is the master IGMP querier) fails, it can take up to 60 seconds...
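For what it's worth, the setting I found is the totem token timeout; a sketch of what it could look like (the value is just an example, and depending on the PVE version it lives in corosync.conf or cluster.conf):
totem {
  token: 10000
}
If the token isn't seen within that many milliseconds, the other nodes consider the node dead and form a new membership.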
I have the same problem. Not sure when and why the problem started, because we also migrated our 1Gb network to 10Gb and therefore replaced switches and network cards. Around the same time we upgraded to PVE 3.4. In the past week our cluster has crashed twice; that means quorum was lost on all nodes...
Wow! This is great. Thank you!
By distributing evenly among all available nodes, do you mean that in a 3-node cluster where node1 with 10 VMs fails, 5 VMs will go to node2 and 5 VMs to node3? This is far better than the current method, but if you take into account that nodes in a cluster...
It depends. IMHO there shouldn't be relocations based on available resources during the day. These should only occur when a node crashes and the VMs that were running on that node are automatically migrated to other nodes. The current situation is unpredictable, or maybe it's like node1 migrates...
Hi Dietmar,
Thank you very much. You were of great help! I'm going to like Proxmox even more :-)
Your explanation is clear and I'm going to configure the redundant fencing devices like you suggested.
Hi Dietmar,
Thanks, I think you're right, but I don't fully understand. Can you please explain why fencing is still necessary if the failed node is already powered off? If the fencing device can't be reached it seems reasonable to assume the node is dead. Can't it be configured that way? So...
I don't think so. Fencing works: when I shut down the management network interface on a node, this node gets fenced by the other nodes. They connect to its iDRAC and issue a power reset or something, because the node is restarted.
So, basically, fencing works. But maybe you are right. When we...
Hi,
We have a 3-node Proxmox cluster connected to our Ceph storage cluster. We're still testing all options and stability, and our last test didn't succeed in retaining the high availability we expected. Let me start by saying we are very satisfied with Proxmox; all seems very good and stable. Good...
I'm pretty new to PVE, so I'm not sure, but I had the same problem in a three-node cluster with balance-rr bonding. The information I found about the problem pointed in the direction of network connectivity, and that seemed to be true in my case. For you I can't tell, but try to remove the lacp config...
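As a sketch of what I mean (interface names are just examples), a plain balance-rr bond without LACP in /etc/network/interfaces would look something like this:
auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode balance-rr
        bond-miimon 100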