Search results

  1. Poor hard drive performance...

    compression, encryption, and what you probably do not want in a production environment - write cache; raw can be restored to iscsi or lvm, qcow2 only on a filesystem like nfs; also, qcow2 is slower than raw. if you want an encrypted or compressed image you need to use qcow2 - raw gives you more data...
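
    (for illustration, a minimal sketch of the qemu-img commands this refers to; image names and sizes are made up, and the encryption shown is qcow2's legacy built-in aes option:)

        # create an empty qcow2 image
        qemu-img create -f qcow2 vm-disk.qcow2 32G

        # convert a raw image to a compressed qcow2 image
        qemu-img convert -c -O qcow2 vm-disk.raw vm-disk.qcow2

        # create a qcow2 image with the built-in (legacy) encryption
        qemu-img create -f qcow2 -o encryption=on vm-disk-crypt.qcow2 32G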
  2. Poor hard drive performance...

    VIRTIO, LVM and RAW for more data security or QCOW2 because of the features....
  3. problem with kernel 2.6.24-11-pve+bonding+vlan

    there is no real difference, except your interface settings are a little bit inconsistent with each other.... vmbr0 has stp on, the others off; both vmbr0 and vmbr1 have a gateway configured, where the gateway on vmbr1 is the same as on vmbr0. anyway, whatever i try i can't get rid of these...
  4. problem with kernel 2.6.24-11-pve+bonding+vlan

    hmmm..... can you please post an example of how you set up your interfaces? this is what i have configured:

        auto bond0
        iface bond0 inet manual
            slaves eth0 eth1

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.100.55
            netmask 255.255.255.0
            gateway 192.168.100.1
            network...
  5. Hard Drive question

    in your hard disk setup you have no fault tolerance, so if you have 4 identical drives and a hardware raid-controller, the proxmox recommendation is a raid10; anyway, back to your question - you can mount the lvol directly into /var/lib/vz/template or to another location via fstab and define it as...
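
    (a minimal sketch of the fstab approach mentioned above; the volume group and logical volume names are hypothetical:)

        # /etc/fstab - mount a logical volume on the template directory
        /dev/vg0/templates  /var/lib/vz/template  ext3  defaults  0  2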
  6. problem with kernel 2.6.24-11-pve+bonding+vlan

    seems that is not used very often.... so i need to temporarily install the kernel from the backports repository until openvz is in the latest pve-kernel.... screenie
  7. Cluster with 2 node in different datacenter

    the only possible and useful scenario for clustering across different datacenters is when you have a layer2 backend connection between the datacenters, and your pve-hosts and vm's are using a pi-space (provider-independent addresses) which is announced by different providers in each datacenter; screenie
  8. Inter node network

    well, then i would use openvpn in routing mode
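
    (a minimal sketch of an openvpn server config in routing mode, i.e. a routed tun device instead of a bridged tap; addresses, port and file names are placeholders:)

        # /etc/openvpn/server.conf (routed mode)
        dev tun
        proto udp
        port 1194
        server 10.8.0.0 255.255.255.0             # subnet handed out to vpn clients
        push "route 192.168.100.0 255.255.255.0"  # route to the lan behind the server
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        keepalive 10 120
        persist-key
        persist-tun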
  9. How to restore a container to a specific snapshot/backup

    ah cool - would be nice to have this in the pve-gui per vm or backup job, because you probably do not want this for every test-vm, or may want a different number of backups kept
  10. Inter node network

    it is not clear to me why you set up a pve-cluster when both pve-hosts are not sharing the same networks - in this setup you cannot migrate vm's from one pve-host to the other; are the two pve-hosts directly connected to the internet, or is there a firewall/router/l3-switch in between...
  11. Inter node network

    simply route these two networks - either locally on your pve-host or on your default gateway(s)....
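
    (for example, assuming hypothetical subnets 192.168.1.0/24 behind pve-host A and 192.168.2.0/24 behind pve-host B, connected over a shared link network 10.0.0.0/30:)

        # enable forwarding on both hosts
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # on pve-host A: reach the network behind B (10.0.0.2 on the shared link)
        ip route add 192.168.2.0/24 via 10.0.0.2

        # on pve-host B: reach the network behind A
        ip route add 192.168.1.0/24 via 10.0.0.1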
  12. How to restore a container to a specific snapshot/backup

    maybe it would be a good feature to select how many backups should be kept, so that the oldest one is deleted when a new one is created?
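
    (vzdump's maxfiles option appears to cover roughly this; a sketch, assuming the option exists in your version - vmid 101 and the paths are made up:)

        # keep at most 3 backups of vm 101; the oldest is removed when a new one is written
        vzdump --compress --dumpdir /backup --maxfiles 3 101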
  13. How do I change "root" user name?

    there is currently no user management, so i assume at the moment you can't change it; and renaming the system root account should never be done....
  14. pve cluster live migration setup

    ah ok - didn't know that...thx for clarifying...
  15. pve cluster live migration setup

    ok, drbd without the whole rhcs makes it much easier :-) am i right that you are using a separate lvol for each vm? one lvol for all my vm's is enough, because when the backup job runs, all vm's can be processed with the same snapshot - or is there something i missed? so, i will try the drbd...
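
    (a minimal sketch of the drbd.conf fragment for an active/active resource; hostnames, devices and addresses are placeholders:)

        resource r0 {
            protocol C;                 # synchronous replication
            net {
                allow-two-primaries;    # required for active/active
            }
            startup {
                become-primary-on both;
            }
            on pve1 {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.1:7788;
                meta-disk internal;
            }
            on pve2 {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   10.0.0.2:7788;
                meta-disk internal;
            }
        }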
  16. Network problem with guest

    do you have more details on your entire network configuration/scenario? are your vmware hosts/guests on the same vlan? is trunking configured on the old switch but not on the new one? what about the interface configuration on the pve host/guest, the stp configuration, ...
  17. problem with kernel 2.6.24-11-pve+bonding+vlan

    looks like there is a problem supporting vlans on bonding devices with kernel version 2.6.24-11-pve, which is working with kernel 2.6.32-1-pve. boot error message:

        vlan_check_real_dev: VLANs not supported on bond0

    is there a patch available for 2.6.24 to get this working? because 2.6.32 has...
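
    (on a kernel that does support it, a vlan on top of a bond is typically declared like this in /etc/network/interfaces; vlan id 100 is just an example:)

        auto bond0.100
        iface bond0.100 inet manual
            vlan-raw-device bond0

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.100.55
            netmask 255.255.255.0
            bridge_ports bond0.100
            bridge_stp off
            bridge_fd 0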
  18. pve cluster live migration setup

    hi, if i understand it right - for a pve cluster with live migration i need drbd active/active+gfs2 or a shared nfs mounted on both nodes where the vm's are stored, correct? or is there another way of doing this, without shared storage? and what is the recommendation for such a setup...
  19. Proxmox 1.5 upgrading

    search the forum for 'time sync', 'time drift', 'pmtimer' - maybe you will find the right answer; a known problem on amd cpu's
  20. Slow clock - time drift in windows guests

    the performance should be better with your hardware.....

        CPU BOGOMIPS:      23940.91
        REGEX/SECOND:      993583
        HD SIZE:           82.50 GB (/dev/sda3)
        BUFFERED READS:    175.34 MB/sec
        AVERAGE SEEK TIME: 8.61 ms
        FSYNCS/SECOND:     2825.50
        DNS EXT:           90.10 ms
        DNS INT:           20.85 ms
        ...
