Search results

  1. Proxmox VE 2.2 released!

    Re: Low performance of data transfer over cifs mounts Sorry, I reversed the figures for scp; the correct ones: with 2 Cores: 4MB/s, with 1 Core: 70MB/s. The problem disappears on a 32bit Ubuntu 12.10 guest (but the guest kernel is much more recent, of course) rob
  2. Proxmox VE 2.2 released!

    Re: Low performance of data transfer over cifs mounts No, I have verified this also via scp; the transfer rate drops from 70MB/s to 4MB/s. I have also opened a ticket for this issue. rob
  3. Proxmox VE 2.2 released!

    Re: Low performance of data transfer over cifs mounts Tried with scp: with 2 Cores: 70MB/s, with 1 Core: 4MB/s rob
  4. Proxmox VE 2.2 released!

    Low performance of data transfer over cifs mounts We have upgraded one of our cluster nodes to 2.2. We noted that the performance of data transfer over smb mounts is severely impacted on machines configured with more than one socket/core. With 2 cores we have max 1.5 MB/s, with 1...
  5. Red icons in storage view

    Done: https://bugzilla.proxmox.com/show_bug.cgi?id=260 rob
  6. Red icons in storage view

    Hello, I noted that all three nodes in our 2.1 cluster are shown in red when I switch to "Storage view". Most of the storages are configured as LVM on SAN disks. The volumes were created under pve 1.9; maybe some LV tags are missing? The cluster seems to work very well anyway; just curious bye
  7. Incremental backup of VM's

    I would stress that using a VM as BackupPC host could be a viable option, if you take NFS away. The easiest way is to provide a physical dedicated disk directly to the VM for the pool filesystem. rob
  8. Incremental backup of VM's

    Thanks! I think that a VM as BackupPC host should work, but it is a suboptimal choice; check the network load on PVE during backups using iftop or similar. Bear in mind that access to the pool filesystem should be as quick as possible. Finally, I don't know if hardlinking (the basis of BackupPC...
  9. Incremental backup of VM's

    Sorry I didn't realize you referred to e100 post, not mine. rob
  10. Incremental backup of VM's

    I have published the scripts, under GPL v3.0 License: http://pve.proxmox.com/wiki/File_System_level_backups_with_LVM_snapshots#Backuppc-snap_download_page Very interesting. bye, rob
  11. Incremental backup of VM's

    Here: http://pve.proxmox.com/wiki/File_System_level_backups_with_LVM_snapshots I am starting to put some notes; I do not use rsnapshot, but I think the concept is very similar and you can easily adapt the scripts. rob
  12. Incremental backup of VM's

    No, snapshots are taken from the host, acting inside the ssh/rsync redirection script; a sync on the target disk is done before snapshotting, and optionally a set of tasks (stop/start of services, dump or lock databases ...) is executed immediately before and after snapshot creation. Snapshot...
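    The snapshot workflow described above (sync, optional pre/post tasks, snapshot, back up, clean up) can be sketched roughly as follows. This is not the poster's actual script; the VG/LV names, snapshot size, and mount point are all assumptions for illustration.

    ```shell
    #!/bin/sh
    # Illustrative sketch of a host-side LVM snapshot backup (hypothetical names).
    set -e
    VG=pve              # assumption: volume group holding the VM disk
    LV=vm-101-disk-1    # assumption: logical volume to back up
    MNT=/mnt/backup-snap

    sync                # flush pending writes before snapshotting
    # optional pre-snapshot tasks go here (stop services, lock/dump databases)
    lvcreate -L 1G -s -n "${LV}-snap" "/dev/$VG/$LV"
    # optional post-snapshot tasks go here (restart services, unlock databases)

    mkdir -p "$MNT"
    mount -o ro "/dev/$VG/${LV}-snap" "$MNT"   # assumes the LV holds a plain filesystem
    # ... rsync/BackupPC reads from $MNT at this point ...
    umount "$MNT"
    lvremove -f "/dev/$VG/${LV}-snap"
    ```
    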
  13. Incremental backup of VM's

    At the moment we are using BackupPC for all but a few VMs in two clusters, fifty machines or so in total. We preferred to have two physical BackupPC hosts; all hosts are configured in the usual way on BackupPC. If a host is a VM, the rsync backup command is redirected to the VM using an ssh "forced...
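    An ssh "forced command" of the kind mentioned above is typically set up via the `command=` option in `authorized_keys`, which runs a fixed wrapper regardless of what the client asks for; the wrapper inspects `SSH_ORIGINAL_COMMAND`. A minimal sketch (the wrapper path and key are placeholders, not the poster's actual setup):

    ```shell
    # ~/.ssh/authorized_keys on the target (one line; key elided):
    # command="/usr/local/bin/rsync-wrapper.sh",no-port-forwarding,no-pty ssh-rsa AAAA... backuppc@server

    #!/bin/sh
    # /usr/local/bin/rsync-wrapper.sh -- only allow the rsync server command
    case "$SSH_ORIGINAL_COMMAND" in
        rsync\ --server*) exec $SSH_ORIGINAL_COMMAND ;;   # pass rsync through
        *) echo "Rejected: $SSH_ORIGINAL_COMMAND" >&2; exit 1 ;;
    esac
    ```

    A wrapper like this is also the natural place to hook in the snapshot creation and cleanup around the rsync run, which matches the redirection scheme described in these posts.
    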
  14. Incremental backup of VM's

    We are using a similar solution, but contacting the hosts directly and redirecting rsync to the snapshots on PVE. We also save NTFS metadata for Windows VMs in case of bare metal restore. We are thinking of publishing the scripts soon; if you are interested I can notify here when is...
  15. bonding and be2net

    Update: it was a firmware issue: http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c02473928&prodTypeId=329290&prodSeriesId=4085948 In short: the interconnection module (passthrough) between the blade host and the switch was not passing the state of the link...
  16. bonding and be2net

    It seems that bonding with arp monitoring inside a bridge has known issues; see http://www.linux-kvm.org/page/HOWTO_BONDING (bottom of the page: Problem with bridge + bonding) And, moreover: "[PATCH] bonding: fix arp_validate on bonds inside a bridge"[1] Maybe this patch is not yet in pve...
  17. bonding and be2net

    Yes, we are at this level, no difference. Anyway, I have tried removing the arp_validate 3 parameter from the bonding configuration, and now the bond interface stays up even inside the bridge. Failover works. If I set that parameter to 1 or 2, the slave interfaces start oscillating between up and down...
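    The working configuration described here (bond inside a bridge, no arp_validate, link-state failover instead) would look roughly like this in /etc/network/interfaces. This is an assumed sketch using MII monitoring rather than ARP monitoring; interface names and addresses are placeholders, not taken from the thread.

    ```
    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1
        bond_mode active-backup
        bond_miimon 100        # link-state monitoring; no arp_interval/arp_validate

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
    ```

    MII monitoring only tracks the local link, which is consistent with the firmware issue in the follow-up post: if a passthrough module hides the far-side link state, neither miimon nor arp monitoring can see the failure until the firmware is fixed.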
  18. bonding and be2net

    Hello, We are starting to migrate our 1.9 cluster to 2.1, but there are problems in setting up bonding on two Emulex NC355i 10Gb Ethernet interfaces (be2net driver), onboard on an HP Blade BL460c G7. The two interfaces are connected to two different switches, with a trunk connecting them. First...
  19. strange kernel error and vm soft lockup after upgrade to 1.9

    More or less the same problem here; it seems solved after upgrading the guest Lenny kernel to 2.6.32-bpo.5-amd64 from lenny-backports. bye, rob
  20. Proxmox crash

    More or less the same here: ==== ... Sep 8 18:42:17 lxdmz4 kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000128 Sep 8 18:42:17 lxdmz4 kernel: IP: [<ffffffffa02974d4>] kvm_set_irq+0x65/0x109 [kvm] Sep 8 18:42:17 lxdmz4 kernel: PGD 602b8d067 PUD 602150067 PMD 0...