I found this article and its scripts very interesting - http://servernetworktech.com/2013/06/upsalert-my-solution-to-graceful-server-shutdowns-on-power-loss/
Thanks a lot for this patch. It would be a great benefit if the hostname could appear in the backup filename in core Proxmox. Could it be added there? ESXi backups store the config files as well; I think this could help in case of a huge disaster. Would it be possible to...
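In the meantime, a possible workaround could be a vzdump hook script that renames the finished archive to include the hostname. This is just an untested sketch on my side, wired up via the script option in /etc/vzdump.conf; HOSTNAME and TARFILE are the variables vzdump exports to hooks:

    #!/bin/bash
    # rename the archive after a successful backup so the guest's
    # hostname appears in the filename
    # note: renaming may confuse maxfiles pruning and the restore dialog
    if [ "$1" = "backup-end" ]; then
        dir=$(dirname "$TARFILE")
        mv "$TARFILE" "$dir/${HOSTNAME}-$(basename "$TARFILE")"
    fi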
I am posting some news about this problem here, since we discussed it on Bugzilla.
This is an old bug of the web interface only; cluster communication remains unaffected, at least in my case. It is still possible to move machines between nodes and so on, but only via the CLI.
It could be temporarily fixed by...
In this case you can just use GlusterFS, because Ceph is currently stable only as block storage.
Here is a nice tutorial for two nodes: http://www.jamescoyle.net/how-to/435-setup-glusterfs-with-a-replicated-volume-over-2-nodes Also check the other articles there about connecting it to Proxmox...
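For the Proxmox side, connecting an existing Gluster volume should be just one storage.cfg entry; a sketch with a made-up storage name, server address and volume name:

    glusterfs: gluster_store
        server 192.168.80.201
        volume pve_vol
        content images,backup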
@dietmar Buying a subscription was on my todo list, but now I am waiting for this issue to be solved. I can just imagine myself, after some big blackout, being unable to start the cluster because of some missing, unimportant backup server...
I tried to install glusterfs on the cluster nodes...
I have just installed the latest updates. Unfortunately, this error is still not solved, and there is no answer on Bugzilla either.
Although I am very happy about all the new features, fixing old major bugs should come first.
I have been using Proxmox for a few years, but this seems to me like a...
When I tried to use Gluster instead of NFS, I noticed that a container is first synced to /tmp, compressed, and only then transferred to the remote storage. It was much faster, since transferring one big archive beats copying many small files over the network; I suppose this could be done with NFS as well, but I did not try it.
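If you want to force this staging behaviour regardless of the target storage, vzdump has a tmpdir option. A minimal /etc/vzdump.conf sketch (the path is just an example, ideally a fast local disk):

    # /etc/vzdump.conf
    # stage the suspend/rsync copy locally; only the finished,
    # compressed archive is then written to the remote storage
    tmpdir: /tmp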
cesarpk:
I just tried to start a backup of a container (suspend mode only, as I do not have enough space for snapshot mode in this testing virtual cluster) and turned off my NFS server after a few seconds. The process hung until that NFS storage became available again.
When I stopped the NFS server and then the backup...
In my comparison the difference was between 5-10%, not a big deal for me. But I was comparing iSCSI+LVM vs. NFS+image files, so maybe on local storage there would be a bigger performance boost.
It would be better to use separate disks for the data and for Proxmox itself. Anyway, you can resize your data partition...
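For growing the default layout, a minimal sketch (assuming the stock pve/data logical volume with ext3/ext4 on it and free space left in the volume group; shrinking is more involved and requires the filesystem to be unmounted first):

    # grow the data LV by 50G, then grow the filesystem to match
    lvextend -L +50G /dev/pve/data
    resize2fs /dev/pve/data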
Can one of the devs please acknowledge this bug? I would like to help somehow or try any patches; unfortunately, I am not a programmer. This error is really disturbing and keeps me from migrating completely away from ESXi.
Yes, you can install Gluster on a node without any problem, as described in my previous post. But Gluster does file-level replication, so it is more similar to NFS than to Ceph, which in my opinion is closer to DRBD or cLVM.
In comparison to DRBD it offers scalability and HA by design, and it is much...
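The installation itself is straightforward; a sketch for a two-node replicated volume (the hostnames and brick paths are examples):

    # on both nodes
    apt-get install glusterfs-server
    # on node1, once both daemons are running
    gluster peer probe node2
    gluster volume create pve_vol replica 2 node1:/data/brick node2:/data/brick
    gluster volume start pve_vol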
Hi Tino,
I was trying to build exactly the same configuration, but in the end I gave up. Installing Pacemaker was so complicated, and I prefer a more recommended and tested solution.
Anyway, maybe you could try Gluster. It is no problem to install and use it on a PVE node. There are howtos...
Hi mcmyst,
thanks for your hints, but unfortunately they do not solve the problem. In fact, I had to manually unmount the device with umount -l /mnt/pve/cupid_data.
Here are my logs after turning off my NFS server:
Apr 24 11:27:02 cl1 kernel: ct0 nfs: server 192.168.80.200 not responding, timed...
Please, can someone else confirm this bug?
I installed another two new Proxmox servers, joined them to the cluster, then connected the NFS server and selected it for backups only. I tested it and everything was working. Afterwards I stopped that NFS server, and in /var/log/syslog I can see
Apr 23 12:08:41...
Thanks for your answer, cesarpk.
I was using ZFS on OpenIndiana, virtualized on an ESXi host, for a few years, and there were absolutely no problems serving data from that virtual machine to other ESXi hosts. I replicated this configuration on Proxmox, but unfortunately it crashed every time on...
I experience this bug with two up-to-date PVE 3.2 servers and an external NFS server used only for backups. I even created two virtual Proxmox servers, and after turning off NFS the cluster splits. The behaviour is the same when using Gluster or Ceph as well. When a storage is not available, the cluster...
I tried to build a virtual test environment of two servers joined in a cluster, then added an NFS storage and stopped it afterwards. The servers stop communicating, and it is not possible to restore them without rebooting one of them.
Am I really the only one who has this problem with...
Hi,
maxfiles means the maximum number of backups kept per guest on that storage.
In some threads I have found that these settings in storage.cfg could help: options vers=3,tcp,nolock,noatime (see the example entry below).
I haven't seen any significant improvement in my setup when changing rsize and wsize.
When mounting a backup storage based on ZFS I use...
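For reference, a complete storage.cfg entry of the kind I mean (the storage name and export path are examples; the server IP matches my setup above):

    nfs: backup_store
        path /mnt/pve/backup_store
        server 192.168.80.200
        export /tank/backup
        options vers=3,tcp,nolock,noatime
        content backup
        maxfiles 3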