I am trying to find a way to optimize migration times and reduce downtime of openvz containers hosted in proxmox on shared storage.
I know about the iscsi howto for openvz, which suggests a disk-over-iscsi configuration that is completely transparent to openvz, using 1 LUN per cluster node...
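For what it's worth, the "1 LUN per node" idea from that howto boils down to something like this with open-iscsi (the target name, portal address and device names below are only placeholders):

    # discover and log into this node's dedicated LUN
    iscsiadm -m discovery -t sendtargets -p 192.168.0.20
    iscsiadm -m node -T iqn.2010-01.com.example:vz-node1 -p 192.168.0.20 --login
    # the LUN shows up as an ordinary local block device (e.g. /dev/sdb);
    # format it and mount it where openvz keeps the container private areas
    mkfs.ext3 /dev/sdb
    mount /dev/sdb /var/lib/vz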
This is great news. My nas exports both nfs3 and nfs4 (and isn't running linux), which is why I have to tell linux clients explicitly to use nfs4.
@AkeemMcLennon: What you're describing is normal with nfs4. It's the way nfs4 exports shares. My patch is obviously a "hack" and is not supposed to...
dietmar, it seems you and I replied at the same time.
Actually, if your nas exports both nfs3 and nfs4 trees, the nfs client in linux defaults to nfs3. You can confirm this with cat /proc/mounts on the client. Also, when you have an nfs4 server with nested export directories (i.e. 1 export per VM)...
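For reference, checking which protocol version the client actually negotiated is just this (the server and mount point shown are only examples):

    grep nfs /proc/mounts
    # the third column is the filesystem type: "nfs" means v3, "nfs4" means v4, e.g.
    # 192.168.0.10:/export/vz /mnt/pve/mystore nfs4 rw,... 0 0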
I am wondering if anybody is interested in nfs4 vs nfs(3) support in proxmox. Using nfs has its advantages over iscsi for shared storage, and nfs4 has its own advantages over nfs3 (security).
Oops... did I share a secret?
jinjer
Hi,
I recently needed to mount all my shares using nfsv4 instead of the classic nfs protocol.
Fortunately this is a simple patch, so I would like to contribute it in the hope that it will get incorporated at some point.
This patch works ONLY if ALL your shares are nfs4, as there's no provision...
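Conceptually the patch just forces the v4 protocol when the share gets mounted; roughly speaking (placeholder paths, not the actual proxmox code, and keep in mind that with nfs4 the export path is relative to the server's pseudo-root):

    # without the patch: the default nfs type is used (v3 on most setups)
    mount -t nfs 192.168.0.10:/export/vz /mnt/pve/mystore
    # with the patch: the v4 protocol is requested explicitly
    mount -t nfs4 192.168.0.10:/vz /mnt/pve/mystore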
I would like to start a bounty for a much-needed feature for proxmox: a limited access panel for VM owners.
This (IMHO) is a feature that will help adoption of proxmox in many small shops.
The panel need not be fancy. Simple Start/Stop/Reset/Reboot operations will be enough...
I have a test cluster with proxmox installed. The slave servers do not mount a share added on the master (i.e. the master mounts the share immediately, but the slave only gets the config and does not mount it).
To replicate:
1. Add an nfs storage on the master
2. Look at the slave: not mounted (see the check below).
Is this normal...
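A quick way to see the mismatch on the slave (the storage name and the usual /mnt/pve mount path are just examples):

    # the storage definition has been replicated to the slave...
    grep -A3 mystore /etc/pve/storage.cfg
    # ...but nothing is actually mounted under the expected path:
    mount | grep /mnt/pve/mystore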
Well, in my case I would like to give each employee their own workstation, so they can mess with it, but if that workstation were virtualized it would perhaps solve some backup and availability problems.
I found a solution from NEC which claims to virtualize a huge number of workstations based on their "FlexPower" (i.e. an intel modular server clone) and their thin clients.
I was not able to get quotes from them (i.e. my source for IMS is much cheaper than the source I found for the FlexPower) and...
Unfortunately I don't have an answer for you. I'm also looking into VDI but am not really convinced it is worth virtualizing everything. It's OK for servers that have light usage, but I would be worried if I had to virtualize a fileserver or an exchange server running anything but a...
Thanks for sharing your experience. I too think the problem is with drbd+ocfs2.
I am under the impression that ocfs2 requires a certain level of performance from the underlying disk system. I had performance issues with the disks, which kept the disk "pressure" very sustained. I can rule out...
Not exactly... as I have already said, that's all there is to it. I can try to explain with an example, but please don't ask me for a copy-paste walk-through.
Say you need an imap/pop3 server on shared storage in an active-active setup. Bill of materials:
1. 2 x servers for providing service (i.e. 2 VM on...
I've mixed two possible scenarios for usage of ocfs2/gfs2... take what suits you :)
Commercial support is just that: you need a contract with someone if you need commercial support for anything (including ext3). Ocfs2 is in the kernel, so it's safe to use.
Ocfs2 or any other clustered filesystem...
Well, ocfs2 is ideally suited for holding large files (i.e. raw disks for kvm machines). Then you don't need to sync anything, and HA is as easy as starting the KVM on another node (or even migrating it while running).
The reason I use ocfs2 is that I need an active-active setup for zero downtime...
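As a rough sketch of that setup (device name, label and paths are placeholders, and it assumes the o2cb cluster stack is already configured on both nodes):

    mkfs.ocfs2 -L vmstore /dev/drbd0      # once, from one node only
    mkdir -p /vmstore
    mount -t ocfs2 /dev/drbd0 /vmstore    # on every node
    # the raw kvm disk images live on the shared mount, so any node can
    # start the VM (or take it over) without copying any data around:
    ls /vmstore/vm-101-disk.raw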
Thanks for the explanation. I see the potential problem: as soon as you use the lvm tools by hand, the cluster can get out of sync. But knowing how you handle things, it's easy to avoid problems (manually rescan on all nodes, e.g. as sketched below).
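For the record, the manual rescan I mean is nothing more than this on each of the other nodes (the VG name is just an example):

    # run after creating or resizing an LV by hand on another node
    vgscan
    lvscan
    # (re)activate any LVs that are new or changed in the shared VG:
    vgchange -ay shared_vg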
care to comment on the ocfs2 way of doing things?
jinjer
@dietmar: I've not read the source for LVM, however:
The point is that the kernel (almost certainly) caches some information about the VG/LV even when the underlying storage is shared. This information can be made stale by operations on another node and hence must be refreshed on each...
Sorry for the late answer... I'm not very active on the proxmox forum as I'm testing a whole range of other virtualization systems (vsphere, oracle vm, cloud.com, to name a few)... add to this some hardware tests, as I'm looking for an el-cheapo but safe-to-use san solution, and also trying to find a...
A general rule of thumb: try adding an ide drive to the VM and install the drivers that way. Also, uninstall any previous drivers (xen or similar).
If that doesn't work, perhaps a repair-install on top of XP will update your drivers (just keep a copy of your XP installation in case you mess it up).
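As a rough illustration, attaching a temporary ide disk can be done from the command line (the VM id, storage name and size are placeholders, and the exact qm syntax differs between proxmox versions):

    # add a small 4 GB disk on the ide1 slot of VM 101
    qm set 101 --ide1 local:4
    # boot the VM, let XP detect the ide disk and install the driver,
    # then remove the temporary disk again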
Would you like to elaborate a little more?
The PV is on shared storage (drbd/iscsi etc) but the VG and LV need synchronization between the nodes... well, unless you only create the LVs from a single node (the master). But even in this case the updated metadata needs to be propagated to the other nodes...