Search results

  1. OpenVZ guests on shared storage (not iscsi)

    I am trying to find a way to optimize migration times and reduce downtime of openvz containers hosted in proxmox on shared storage. I know about the iscsi howto for openvz, which suggests a disk-over-iscsi configuration that is completely transparent to openvz, using 1 LUN per cluster node...
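
    A minimal sketch of the migration step being optimized, using stock OpenVZ tooling; container ID 101 and destination node "node2" are hypothetical:

        # Live-migrate container 101 to node2. With the container's
        # private area on shared storage, the bulk copy stage has
        # little or nothing to transfer, so downtime shrinks to the
        # suspend/resume window.
        vzmigrate --online node2 101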
  2. nfs4 patch for proxmox

    This is great news. My nas exports both nfs3 and nfs4 (and isn't running linux), which is why linux clients have to be told to use nfs4. @AkeemMcLennon: What you're describing is normal with nfs4. It's the way nfs4 exports shares. My patch is obviously a "hack" and is not supposed to...
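
    For context, "the way nfs4 exports shares" refers to the NFSv4 pseudo-filesystem: everything hangs off a single root marked fsid=0, and clients mount paths relative to that root. A hedged illustration for a Linux server (the nas above is not Linux, so its syntax will differ); paths and network are hypothetical:

        # /etc/exports on a Linux NFSv4 server (illustration only)
        /srv/nfs4          192.168.0.0/24(rw,sync,fsid=0,no_subtree_check)
        /srv/nfs4/proxmox  192.168.0.0/24(rw,sync,no_subtree_check)

        # Clients mount relative to the fsid=0 root, not the full path:
        #   mount -t nfs4 nas:/proxmox /mnt/pve/nas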
  3. nfs4 patch for proxmox

    dietmar, it seems you and I replied at the same time. Actually, if your nas exports both nfs3 and nfs4 trees, the nfs client in linux defaults to nfs3. You can confirm this with cat /proc/mounts on the client. Also, when you have an nfs4 server with nested export directories (i.e. 1 export per VM)...
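
    Checking the negotiated version as described, plus forcing v4 by hand (server name and export path hypothetical):

        # An NFSv3 mount shows filesystem type "nfs";
        # an NFSv4 mount shows "nfs4".
        grep nfs /proc/mounts

        # Force NFSv4 explicitly when testing from the client:
        mount -t nfs4 nas:/proxmox /mnt/test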
  4. nfs4 patch for proxmox

    I am wondering if anybody is interested in nfs4 vs nfs(3) support in proxmox. Using nfs has its advantages over iscsi for shared storage, and nfs4 has its own advantages over nfs3 (security). Oops... did I share a secret? jinjer
  5. nfs4 patch for proxmox

    Hi, I recently needed to mount all my shares using nfsv4 instead of the classic nfs protocol. Fortunately this is a simple patch, so I would like to contribute it in the hope it will get incorporated some time. This patch works ONLY if ALL your shares are nfs4: as there's no provision...
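
    The patch itself is not quoted in this snippet; as a rough illustration of the change it targets, here is the difference between a classic and an NFSv4 mount line (hypothetical server and paths):

        # /etc/fstab, classic NFS (v3) vs. NFSv4. The patch switches
        # every proxmox storage mount from the first form to the
        # second, which is why it only works if ALL shares are nfs4.
        nas:/export/vm  /mnt/pve/nas  nfs   defaults  0 0
        nas:/vm         /mnt/pve/nas  nfs4  defaults  0 0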
  6. Bounty: VM owner panel

    I would like to start a bounty for a much needed feature for proxmox: a VM owner limited-access panel. This is a much needed feature (IMHO) which will help adoption of proxmox in many small shops. The panel need not be fancy. Simple Start/Stop/Reset/Reboot operations will be enough...
  7. NFS + Cluster mount issue on slaves

    thank you. I never got as far as creating a guest, since nfs was not mounted. Kind of puzzling. jinjer
  8. NFS + Cluster mount issue on slaves

    I have a test cluster with proxmox installed. The slave servers do not mount the share added on the master (i.e. the master mounts a share immediately, but a slave only receives the config and does not mount it). To replicate: 1. Add nfs storage on the master. 2. Look at a slave: not mounted. Is this normal...
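
    A quick way to confirm the symptom on a slave, assuming the PVE 1.x storage config location (/etc/pve/storage.cfg):

        # The storage definition is synced to the slave...
        grep -A 3 nfs /etc/pve/storage.cfg

        # ...but no corresponding nfs mount appears:
        grep nfs /proc/mounts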
  9. VDI (Virtual Desktop Infrastructure) + DR (Disaster Recovery) : case study

    Well, in my case I would like to give each employee their own workstation, so they can mess with it, but if that workstation is virtualized it would perhaps solve some backup and availability problems.
  10. VDI (Virtual Desktop Infrastructure) + DR (Disaster Recovery) : case study

    I found a solution from NEC which claims to virtualize a huge number of workstations based on their "FlexPower" (i.e. intel modular server clone) and their thin clients. I was not able to get quotes from them (i.e. my source for IMS is much cheaper than the source I found for the FlexPower) and...
  11. VDI (Virtual Desktop Infrastructure) + DR (Disaster Recovery) : case study

    Unfortunately I don't have an answer for you. I'm also looking into VDI but am not really convinced about the opportunity to virtualize everything. It's OK for servers that have light usage, but I would be worried if I had to virtualize a fileserver or an exchange server running anything but a...
  12. New Proxmox VE 1.6 kernels (2.6.32 and 2.6.35)

    Thanks for sharing your experience. I too think the problem is with drbd+ocfs2. I am under the impression that ocfs2 requires some performance from the underlying disk system. I had performance issues with the disks, which caused the disk "pressure" to be very sustained. I can rule out...
  13. iSCSI Configuration

    Not exactly... as I have already said, that's all there is to it. I can try to explain with an example, but please don't ask me for a copy-paste walk-through. Say you need an imap/pop3 server on shared storage in an active-active setup. Bill of materials: 1. 2 x servers for providing service (i.e. 2 VM on...
  14. iSCSI Configuration

    I've mixed two possible scenarios for usage of ocfs2/gfs2... take what suits you :) Commercial support is just that: you need a contract with someone if you need commercial support for anything (including ext3). Ocfs2 is in the kernel, so it's safe to use. Ocfs2 or any other clustered filesystem...
  15. iSCSI Configuration

    Well, ocfs2 is ideally suited to holding large files (i.e. raw disks for kvm machines). Then you don't need to sync anything, and HA is as easy as starting the KVM on another node (or even migrating it while running). The way I use ocfs is because I need an active-active setup for 0 downtime...
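
    A sketch of that HA story with the qm tool; VMID 101 and node name "node2" are hypothetical, and the migrate syntax follows standard qm usage:

        # The raw disk lives on the shared ocfs2 mount, so any node
        # can run the guest.
        qm start 101                   # cold start on a surviving node
        qm migrate 101 node2 --online  # or move it while it keeps running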
  16. shared storage: LVM2 or CLVM ?

    Thanks for the explanation. I see the potential problem... as soon as you use lvm tools by hand, the cluster can get out of sync. But knowing how you handle things, it's easy to avoid problems (manually rescan on all nodes). Care to comment on the ocfs2 way of doing things? jinjer
  17. shared storage: LVM2 or CLVM ?

    @dietmar: I've not read the source for LVM; however, the point is that the kernel (almost certainly) caches some information about the VG/LV even if the source is stored on shared storage. This information can be made stale by operations on another node and hence must be refreshed on each...
  18. iSCSI Configuration

    Sorry for the late answer... I'm not very active on the proxmox forum, as I'm testing a whole range of other virtualization systems (vsphere, oracle vm, cloud.com to name a few)... add to this some hardware tests, as I'm looking for an el-cheapo but safe-to-use san solution, and also trying to find a...
  19. Windows XP 0x7B

    A general rule of thumb: try adding an ide drive to the VM and install the drivers that way. Also, uninstall any previous drivers (xen or similar). If that doesn't work, perhaps a repair-install on top of XP will update your drivers (just keep a copy of your XP in case you mess it up).
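
    Stop code 0x7B means XP cannot find a driver for its boot controller, which is why the ide trick works: XP ships IDE drivers out of the box. A sketch with Proxmox's qm tool (hypothetical VMID and volume name):

        # Attach the system disk on plain IDE first, boot, install the
        # storage drivers inside XP, then move the disk back to the
        # faster bus.
        qm set 101 -ide0 local:101/vm-101-disk-1.raw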
  20. shared storage: LVM2 or CLVM ?

    Would you like to elaborate a little more? The PV is on shared storage (drbd/iscsi etc.) but the VG and LV need synchronization between the nodes... well, unless you only create the LV from a single node (the master). But even in this case the updated metadata needs to be propagated to the other nodes...
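
    A rough sketch of the manual rescan being discussed, run on each of the other nodes after the master changes metadata (VG/LV names hypothetical):

        pvscan    # rescan physical volumes on the shared storage
        vgscan    # re-read volume group metadata from the shared PV
        lvchange --refresh vg0/vm-101-disk  # reload a stale LV's metadata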
