Search results

  1.

    pvecm add faild when cluster use unicast [ transport="udpu" ]

    I'm having the same issue, did you find any solution? EDIT: it seems this is a corosync limitation; you need to restart corosync on all nodes when adding a new node to a cluster that uses udpu
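
    A hedged sketch of that workaround, assuming SSH access to each node (node names are hypothetical, and the exact service command may differ; on older PVE releases corosync is managed via the cman init script):

    ```sh
    # Restart corosync on every existing cluster node after adding a node over udpu.
    # Node names are hypothetical; adjust the service command to your release.
    for node in node1 node2 node3; do
        ssh root@"$node" 'service corosync restart'
    done
    ```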
  2.

    When is CLVM needed?

    Great, thanks. I was stuck with 1.9 for a long time, but now we could finally upgrade to 3.0 and I'm really impressed by the progress you've made with it. If anyone can answer what those cases are when CLVM is needed, that would be great.
  3.

    When is CLVM needed?

    The Proxmox 2.0 beta3 changelog states: "do not activate clvmd by default (clvm is now only needed in very special cases)". What are those special cases? I couldn't find any description on the wiki, in bugzilla, or in this forum, only comments like "not needed for most users". I'm using FC based...
  4.

    Wrong base for memory graph?

    I just installed Proxmox 3.0 and it seems that the memory graph uses 1000 instead of 1024 as its base. I have max memory set to 4G, but the line on the graph is above the 4G mark, and the same goes for current used memory. Is it just me? PS: the storage graphs also look like they use 1000 instead of 1024.
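
    For reference, the arithmetic behind that observation: a 4 GiB value plotted on base-1000 axes lands above the "4G" gridline.

    ```sh
    # 4 GiB expressed in base-1000 (SI) units; bc does the fixed-point math:
    echo "scale=3; 4 * 1024^3 / 1000^3" | bc
    # -> 4.294, so a 4 GiB line drawn against base-1000 ticks sits above the 4G mark
    ```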
  5.

    KVM 0.15 - pvetest repository

    Works fine here; lucid guests also work with vhost-net now (or it was fixed in Ubuntu).
  6.

    General High Availability Question

    http://pve.proxmox.com/wiki/Roadmap#Roadmap_for_2.x
  7.

    General High Availability Question

    If you use shared storage of any kind, then you can restart virtual machines from a failed host on another server; all you need to do is back up the /etc/qemu-server directory, which contains the virtual machine configs. There is no HA in Proxmox that will do this automatically.
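
    A minimal sketch of that manual failover, assuming the VM disks live on shared storage (the backup host, node names, and VMID 100 are hypothetical):

    ```sh
    # On each node, keep a copy of the VM configs somewhere safe:
    rsync -a /etc/qemu-server/ backuphost:/backup/qemu-server-node1/

    # After node1 fails, restore the config on a surviving node and start the VM:
    scp backuphost:/backup/qemu-server-node1/100.conf /etc/qemu-server/
    qm start 100
    ```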
  8.

    PVE 1.8 - disk performance becomes worse over time

    Desktop drives don't support some features that are needed when a disk is used in hardware RAID; see: http://www.spinics.net/lists/xfs/msg03730.html http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
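
    One quick way to check for the relevant feature (TLER/ERC), sketched with a hypothetical device name:

    ```sh
    # Query SCT Error Recovery Control; many desktop drives report it unsupported.
    smartctl -l scterc /dev/sda
    # On drives that do support it, the recovery time can be capped (7.0 s here):
    smartctl -l scterc,70,70 /dev/sda
    ```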
  9.

    Ceph and file storage backend capabilities

    This is a small cluster, currently 22, with light to moderate load and not much I/O.
  10.

    Ceph and file storage backend capabilities

    Proxmox is using an 8-node cluster of HP DL180G5 servers, each with 1TB 7200RPM SATA drives (no RAID volumes; each disk is a single volume). MFS splits files (VM images in this case) into 64MB chunks that are spread across the MFS cluster; each chunk is replicated according to the goal that was set for the given file...
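
    A sketch of the goal mechanism mentioned above, using the standard MooseFS client tools (the mount point is hypothetical):

    ```sh
    # Keep two replicas of every 64MB chunk of the VM images (recursively):
    mfssetgoal -r 2 /mnt/mfs/vm-images
    mfsgetgoal /mnt/mfs/vm-images   # verify the configured goal
    ```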
  11.

    Ceph and file storage backend capabilities

    Metadata is not the only thing that affects performance, and I had a bad experience with glusterfs (some time ago, around ~v2.0; it may be different now). No dedicated central server may seem like a good thing, but it also creates problems: - split-brain may occur and it may lead to data loss, it...
  12.

    Ceph and file storage backend capabilities

    Why do you think that "no use of metadata" gives you the best performance? I use http://www.moosefs.org/ and I can recommend it; it's easy to set up and quite fast (I'm getting 50-60MB/s for reads on a VM).
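
    For context, a common way to measure a read figure like that inside a VM (the test file path and size are assumptions):

    ```sh
    # Drop caches first so the number reflects storage, not the page cache (needs root):
    sync; echo 3 > /proc/sys/vm/drop_caches
    # Sequential read of a 1 GiB test file; dd reports throughput at the end:
    dd if=/mnt/test/bigfile of=/dev/null bs=1M count=1024
    ```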
  13.

    Strange behavior with the CPU units !

    You got something for free from the internet, it broke, and instead of fixing it you are complaining that it broke in the first place. Don't have the skills needed to fix it? Pay the Proxmox guys for support; that's even easier than "going back to Citrix".
  14.

    KSM Problems with Proxmox 1.8

    The ksmtuned daemon will enable KSM only if you use >50% of memory.
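
    A sketch, assuming a stock ksmtuned setup, of where the activation threshold lives and how to check whether KSM is running:

    ```sh
    cat /sys/kernel/mm/ksm/run            # 1 means KSM is currently active
    cat /sys/kernel/mm/ksm/pages_sharing  # non-zero once pages are actually merged
    grep THRES /etc/ksmtuned.conf         # KSM_THRES_* settings control the activation threshold
    ```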
  15.

    Comparison between KVM and VMWARE

    By default, KVM exposes only a subset of CPU features, those supported by most CPU generations that support virtualization, so that after you live-migrate a VM from one host to another it still works on the new host's CPU. If you have the same CPU on every host, you can enable all the features or only those...
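
    An illustration of that trade-off using plain QEMU/KVM flags (the minimal invocations are hypothetical and attach no disk):

    ```sh
    # Lowest-common-denominator CPU model: safe to live-migrate across mixed hosts.
    qemu-system-x86_64 -enable-kvm -m 512 -cpu kvm64 -nographic
    # Expose every host CPU feature: fastest, but unsafe to migrate to a different CPU.
    qemu-system-x86_64 -enable-kvm -m 512 -cpu host -nographic
    ```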
  16.

    Ubuntu 10.04.2 guest problem

    Maybe this is related to http://forum.proxmox.com/threads/5961-Ubuntu-10.04.2-KVM-fresh-install-inoperable-w-high-events-1-usage-due-to-virtio-nic ??
  17.

    Strange lvm issue on proxmox cluster

    I've resized one of the logical volumes using lvextend; you can't do that in the case of LVM on shared storage connected to Proxmox. See http://forum.proxmox.com/threads/2744-ProxMox-on-a-direct-attached-shared-storage-system?p=15240#post15240
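
    A sketch of the failure mode (VG and LV names are hypothetical):

    ```sh
    # Resizing on one node of a shared-storage cluster:
    lvextend -L +10G /dev/sharedvg/vm-100-disk-1
    # Without cluster-aware locking, the other nodes keep stale LVM metadata
    # until the LV is refreshed there, e.g.:
    lvchange --refresh sharedvg/vm-100-disk-1
    ```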
  18.

    Strange lvm issue on proxmox cluster

    I did everything using Proxmox tools and I still had problems. Before that I had one cluster with LVM on top of FC, but I broke it with lvextend, so I was very careful not to do any manual "tweaking" and left everything to Proxmox.
  19.

    Strange lvm issue on proxmox cluster

    I've had an identical problem on two clusters in two separate data centers: one was using FC storage and multipath, the other iSCSI, at first without multipath; then I added multipath to protect myself from iSCSI reconnects (one reconnect changed the iSCSI disk name from sda to sdb and my LVM stopped...
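
    A sketch of how multipath pins a stable name to the LUN (the WWID and alias are made up), so sda/sdb renames on reconnect stop mattering:

    ```sh
    # /etc/multipath.conf fragment; LVM then sits on /dev/mapper/pve-lun0
    # instead of a raw sdX name that can change across iSCSI reconnects.
    cat >> /etc/multipath.conf <<'EOF'
    multipaths {
        multipath {
            wwid  3600a0b80000f1e6d00000c3f4a5b6c7d
            alias pve-lun0
        }
    }
    EOF
    multipath -r   # reload the multipath maps
    ```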
