Search results

  1. HA and backup via NFS

    We're facing the same problem. But I already did some tests to get rid of the NFS share and use vzdump to stdout over SSH from the backup server. It works perfectly, but I need to script the whole procedure to log in to every PVE node and run vzdump for a VMID on the right node. Some raw steps so...
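
    A minimal sketch of that procedure, assuming key-based root SSH access from the backup server to each PVE node; the node name, VMID, and target path are placeholders:

      # Stream a snapshot-mode backup of VMID 101 from node pve01 straight to the backup server
      ssh root@pve01 "vzdump 101 --stdout --mode snapshot" > /backup/vzdump-qemu-101.vma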
  2. [SOLVED] Advice for storing database

    You're possibly right, I'm not sure about that. I think RAM will only serve as a read cache and will not corrupt the data; at most it returns objects from RAM (cache) that aren't up to date, but AFAIK Gluster checks the directory structure on both ends on every listing and will update its RAM...
  3. [SOLVED] Advice for storing database

    IMO it makes sense. I would like to use it too, but for a slightly different situation. For a database server I use a system disk including all databases and want to use the PVE snapshot function for this disk. The second disk is for GlusterFS, shared with another server. Imagine vm01 and vm02, both...
  4. Proxmox ZFS + DRBD - zvol per VM?

    You're overcomplicating things. Just because it's possible doesn't mean it's good. You haven't mentioned your requirements and available hardware (storage configuration), so it's very hard to suggest an alternative, but what you're doing might not be the best way to go. More layers in the stack...
  5. Three Node Cluster Setup recommendations

    Whether your setup will work fine depends on the use case of your Windows servers. SQL is more resource intensive than serving a remote desktop for one Word user. I think your storage will suffice, but try to calculate the necessary IOPS for your workload, then calculate the IOPS your storage can...
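
    As a purely illustrative calculation (all numbers assumed): 15 Windows VMs averaging 40 random IOPS each demand roughly 15 × 40 = 600 IOPS, while a 4-disk RAID10 of 7.2k SATA drives delivers on the order of 300-400 random IOPS, so that workload would already be disk-bound.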
  6. Proxmox VE 4.3 released!

    I just want to say the new menu works fine for us. A product evolves and changes are necessary sometimes; you just have to adjust to it. Doesn't this happen all the time in software, and in the rest of the world? Your new microwave probably doesn't use the same controls as your old one, but it really...
  7. I need advices for my storage

    It all depends on your workload. The response time of SSDs is really good; from my pveperf: AVERAGE SEEK TIME: 0.29 ms. You need to determine the storage demands: performance, disk space, reliability, and costs. Reliability comes down to the chosen RAID configuration; RAID0 is no RAID! RAID5 can be...
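
    The quoted seek time comes from pveperf; you can run it against the storage you want to measure (the path below is just an example mount point):

      pveperf /var/lib/vz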
  8. KVM disk resize online error (CEPH)

    It seems you're missing the hotplug definition in your VM settings. In the GUI you should enable hotplug support for at least Disk in the Options section of VM ID 164. Something like this should appear in 164.conf: hotplug: disk,network,usb. Stopping and starting the VM is necessary to activate...
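
    A hedged example of the two equivalent ways to set this, assuming VMID 164:

      # line that should appear in /etc/pve/qemu-server/164.conf
      hotplug: disk,network,usb

      # or set it from the command line
      qm set 164 --hotplug disk,network,usb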
  9. Ceph Cluster using RBD down

    Please post your /etc/pve/storage.cfg
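
    For reference, an RBD entry in /etc/pve/storage.cfg looks roughly like this (storage ID, monitor addresses, and pool name below are placeholders, not taken from the thread):

      rbd: ceph-rbd
              monhost 10.0.0.1 10.0.0.2 10.0.0.3
              pool rbd
              content images
              username admin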
  10. Fence_apc update for compatibility with firmware 6.x

    We recently updated our APC RPS devices to the newest 6.x firmware, mainly because of issues with SSL support in the old firmware. After updating the firmware, fence_apc didn't work anymore; some investigation showed it was because pre-5.x commands were sent to the APC for power off and on. The...
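
    A hedged example of a typical fence_apc invocation (address, credentials, and outlet number are placeholders; -x selects SSH instead of telnet):

      fence_apc -x -a 10.0.0.50 -l apc_user -p secret -n 3 -o off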
  11. "Connection timed out" when fencing node, but it does actually shut down (iDRAC)

    I had the same problem but got it solved with fence_drac5. You pointed me in the right direction :-) Your command (I used the same): fence_drac5 --ip=xxxxxxxxxx -l fencing_user -p xxxxxxxxx -c "admin1->" -x -v -v -v -o off is according to the documentation at...
  12. Cluster fail time

    Hello, Corosync is used to set up a cluster between the Proxmox nodes, but what are the timeout settings? After how many seconds is a node considered dead? We use an IGMP querier on our switches, but when the master switch fails (which is the master IGMP querier) it can take up to 60 seconds...
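
    For context, the relevant knob lives in the totem section of corosync.conf; a fragment as a sketch, with an assumed 10-second token timeout (the value is purely illustrative, defaults differ per version):

      totem {
        token: 10000
      }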
  13. Problem with cluster after update to 3.4

    I have the same problem. Not sure when and why the problem started, because we also migrated our 1Gb network to 10Gb and therefore replaced switches and network cards. Around the same time we upgraded to PVE 3.4. In the last week our cluster crashed twice. That means quorum was lost on all nodes...
  14. HA Failover Logic

    Wow! This is great. Thank you! By distributing evenly among all available nodes, do you mean that in a 3-node cluster where node1 with 10 VMs fails, 5 VMs will go to node2 and 5 VMs to node3? This is far better than the current method, but if you take into account that nodes in a cluster...
  15. HA Failover Logic

    It depends. IMHO there shouldn't be relocations based on available resources during the day. This should only occur when a node crashes and the VMs that were running on that node are automatically migrated to other nodes. The current situation is unpredictable, or maybe it's like node1 migrates...
  16. Cluster - HA enabled VMs not migrating when node fails

    Hi Dietmar, Thank you very much. You were of great help! I'm going to like Proxmox even more :-) Your explanation is clear and I'm going to configure the redundant fencing devices like you suggested.
  17. Cluster - HA enabled VMs not migrating when node fails

    Hi Dietmar, Thanks, I think you're right, but I don't fully understand. Can you please explain why fencing is still necessary if the failed node is already powered off? If the fencing device can't be reached it seems reasonable to assume the node is dead. Can't it be configured that way? So...
  18. Cluster - HA enabled VMs not migrating when node fails

    I don't think so. Fencing works: when I shut down the management network interface on a node, this node will be fenced by the other nodes. They connect to its iDRAC and issue a power reset or something, because the node is restarted. So, basically, fencing works. But maybe you are right. When we...
  19. Cluster - HA enabled VMs not migrating when node fails

    Hi, We have a 3-node Proxmox cluster connected to our Ceph storage cluster. We're still testing all options and stability; our last test didn't succeed in retaining the high availability we expected. Let me start by saying we are very satisfied with Proxmox, all seems very good and stable. Good...
  20. all nodes red - but quorum - can not find any error

    I'm pretty new to PVE, so I'm not sure, but I had the same problem in a three-node cluster with balance-rr bonding. The information I found about the problem pointed in the direction of network connectivity, and that seemed to be true in my case. For you, I can't tell, but try to remove the LACP config...
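
    If you want to test without LACP, a hedged /etc/network/interfaces sketch that switches the bond to active-backup (interface names are placeholders):

      auto bond0
      iface bond0 inet manual
          bond-slaves eth0 eth1
          bond-mode active-backup
          bond-miimon 100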
