Search results

  1. Bug when moving a VM from snapshot-able storage (ZFS) to non-snapshot-able storage

    Hi! Case: 1) VM on node 1 on ZFS storage: create a snapshot. 2) On node 1, migrate the VM's storage to an LVM (iSCSI) datastore. 3) The old snapshot is still visible (???) but inconsistent. 4) The VM had a CD-ROM ISO set before the snapshot; now it is set to "none". 5) Migrate the VM to node 2: migration is impossible because an ISO is set...
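    A hedged sketch of clearing a leftover CD-ROM ISO so the migration can proceed; the VMID and the ide2 slot are examples, not taken from the thread:

      # Detach whatever ISO the VM config still references
      qm set 100 -ide2 none,media=cdrom
      # Confirm the CD-ROM drive is empty before migrating
      qm config 100 | grep ide2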
  2. Expand VM disk on LVM (on iSCSI SAN multipath): error *FROM GUI*, OK from CLI

    Hi all! On a test cluster, when I expand a VM disk on LVM (on an iSCSI SAN with multipath) I see: "error resizing volume '/dev/data-mp/vm-103-disk-1': Run `lvextend --help' for more information. (500)". From the shell, lvextend works correctly. I use PX 3.3, latest updates (I've tried to upgrade all...
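    A minimal sketch of the CLI path that works here; the disk name (virtio0) and size are examples:

      # Grow both the logical volume and the VM's disk definition in one step
      qm resize 103 virtio0 +10G
      # Or grow only the underlying LV, as in the shell test above
      lvextend -L +10G /dev/data-mp/vm-103-disk-1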
  3. Best Proxmox HW-RAID-Controller with Performance-Data

    I've had good results with HP Smart Array controllers. Smart Array P420 with 2 GB DDR3 cache, 8x SAS 10k HDD (RAID-6), [single Xeon processor]: CPU BOGOMIPS: 50278.80 REGEX/SECOND: 1113764 HD SIZE: 12.06 GB (/dev/mapper/pve-root) BUFFERED READS: 1132.28 MB/sec AVERAGE SEEK TIME: 5.10...
  4. Distributed filesystem for HA cluster and SAS storage

    In my experience, in the past I've tried gfs2 and ocfs2. The conclusion was... they are NOT extremely robust. I use LVM on SAN (IBM Storwize); this is the best option for me. If I need a snapshot I take an LVM snapshot (from the console)... only a single level can be used, but you can roll back or merge.
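    A minimal sketch of that console LVM snapshot workflow, assuming example VG/LV names and sizes:

      # Single-level snapshot of the VM's logical volume
      lvcreate -s -L 10G -n vm-100-snap /dev/san_vg/vm-100-disk-1
      # Roll back by merging the snapshot into the origin volume
      lvconvert --merge /dev/san_vg/vm-100-snap
      # Or drop the snapshot and keep the current state
      lvremove /dev/san_vg/vm-100-snap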
  5. Distributed filesystem for HA cluster and SAS storage

    I use several Storwize units on clusters. Use only LVM storage; it is more resilient and faster than a file system (fewer logical layers). But you cannot use QEMU snapshots.
  6. Distributed filesystem for HA cluster and SAS storage

    It is not possible on LVM (that is not an LVM snapshot); qm snapshot requires qcow2 on a file system, or a more complex structure like ZFS. Luca
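    For context, this refers to the qm snapshot command, which only succeeds when the disk backend supports it (qcow2 on a file-based store, or ZFS); an illustrative call with an example VMID and snapshot name:

      # Works with qcow2 or ZFS-backed disks; fails on plain LVM volumes
      qm snapshot 100 before-upgrade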
  7. PfSense (or generic BSD 8.x) on PX 3.3, optimizations?

    Thanks Mir!!!! For now, on pfSense 2.1.5 I will keep using e1000 NICs; CPU load peaks at 70% (on 4 cores) with 35 VLANs and a 100 Mbps WAN link. These two firewalls are really critical and I cannot experiment. With the new pfSense version (the 2.2 series) I will try virtio-net. For now, if I increase...
  8. PfSense (or generic BSD 8.x) on PX 3.3, optimizations?

    Thanks Mir! I use an IDE disk; on a pure pfSense firewall (router and firewall functions only) the HDD is practically never used. I think the problem is purely "number crunching". Should I increase CPU units? Would I see any benefit?
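    For what it's worth, CPU units are a relative scheduling weight that can be raised from the CLI; the VMID and value below are only examples:

      # Give this VM a larger share of CPU time when the host is under contention
      qm set 100 -cpuunits 2048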
  9. PfSense (or generic BSD 8.x) on PX 3.3, optimizations?

    ... for example, could disabling "Use tablet for pointer" (absolutely useless in pfSense) be helpful?
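    A small sketch of turning that device off from the CLI (example VMID):

      # Remove the emulated USB tablet; the guest falls back to a relative-mode mouse
      qm set 100 -tablet 0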
  10. PfSense (or generic BSD 8.x) on PX 3.3, optimizations?

    Hi! I run two pfSense firewalls on a PX cluster; all OK, they work very well. The firewall configuration is complex and spans 35 VLANs (4 virtual E1000 NICs). CPU utilization (4 cores on a single VM, type qemu64) is medium/high (at most 70% under full load, typically 20-30%). The physical hardware is...
  11. PfSense (2.1.4 AMD64) another hang

    Hi all! On the very latest Proxmox (fully updated, enterprise repository) I've tried to install pfSense 2.1.4 AMD64: it hangs immediately on boot. I know there are many posts about this, but I haven't found a solution. pfSense 2.1.4 AMD64 hangs with *any* processor type (qemu64, host, SandyBridge, etc.) and with...
  12. Server hardware compatibility

    Hardware support is provided by the kernel (and firmware). Proxmox uses the RHEL6 kernel, so you shouldn't use Debian's hardware lists as a reference. Luca
  13. Strange report of PVE team of drivers version

    > 3- What is the driver for the Broadcom 10 Gb/s in PVE 3.2? (i.e., tg3 or what?) That is "bnx2x" (10 Gbps Broadcom NICs); "bnx2" is for newer 1 Gbps NICs, and Tigon3 (tg3) is for old 1 Gbps Broadcom NICs, typically present in HP ProLiant servers. For example, the ML350 G3 uses a tg3 NIC; the G4, G4p, G5... use...
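    To check which driver a given Broadcom NIC is actually using, something along these lines (the interface name is an example):

      # Driver and firmware version bound to one interface
      ethtool -i eth0
      # Or list Ethernet controllers together with the kernel driver in use
      lspci -nnk | grep -A 3 -i ethernet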
  14. iSCSI

    NO!!! On shared storage you must NOT use a plain file system (like XFS, ext, etc.). Proxmox has a locking mechanism that protects shared LVM, or you can use a cluster file system, but NOT a standard file system (with multiple mounts, destruction is assured). Luca
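    A hedged sketch of registering a SAN LUN as shared LVM instead of formatting it with a plain file system; the multipath device, VG name and storage ID are examples:

      # Put LVM on the multipath device exposed by the SAN
      pvcreate /dev/mapper/mpath0
      vgcreate san_vg /dev/mapper/mpath0
      # Register the volume group as shared LVM storage in Proxmox
      pvesm add lvm san-lvm --vgname san_vg --shared 1 --content images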
  15. Proxmox storage migration

    Hi! Upgrade to the latest 3.1 version, configure multipath against the Storwize (used as LVM on SAN) and migrate the storage (hot storage migration). Works fine. Luca
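    Roughly, the hot storage migration step could look like this from the CLI; the VMID, disk name and target storage ID are examples:

      # Copy the disk to the LVM-on-SAN storage while the VM keeps running,
      # deleting the source volume once the copy has finished
      qm move_disk 100 virtio0 san-lvm --delete 1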
  16. Supported Motherboards / Hardware requirements

    Just a note... Proxmox uses the RH 2.6.32 kernel. RH ships a very, very heavily patched kernel, including drivers and support not present in the vanilla 2.6.32 kernel. Very important: "2.6.32 vanilla" != "2.6.32 RH". Luca
  17. multipath: failed to set dev_loss_tmo

    SOLVED! Re: multipath: failed to set dev_loss_tmo. Another SAN BIOS configuration problem!!! Set the SAN host type to "Linux cluster" and it works fine! On another IBM RDAC model "Linux cluster" doesn't work; it works with "Linux AVT" :p
  18. IBM blade HS22V, Qlogic, IBM V7000

    On IBM blades I usually install Proxmox without the SAN datastore attached and join it after installation; otherwise the installer fails... Try it with only the blade's RAID-1 and no other storage attached.. Luca
  19. multipath: failed to set dev_loss_tmo

    Hi Tom! These are my settings:
    **************
    devices {
        device {
            vendor "IBM"
            product "^1814"
            path_grouping_policy group_by_prio
            getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
            path_selector "round-robin 0"...
  20. multipath: failed to set dev_loss_tmo

    Hi!! I use several FC/SAS SANs... but on the IBM 1814 I get this error. The device under /dev/mapper works, but path 1 and path 2 switch continuously from ready to ghost, and this is not normal. In /var/log/syslog I can see a multipath error: failed to set /class/fc_remote_ports/rport-0:0-0/dev_loss_tmo...
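    One way to watch the path states and the rport timeout named in that syslog line (the rport path is taken from the message and may differ on other hosts):

      # Path states per priority group (the ready/ghost flapping shows up here)
      multipath -ll
      # Current dev_loss_tmo for the remote port mentioned in syslog
      cat /sys/class/fc_remote_ports/rport-0:0-0/dev_loss_tmo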