Search results

  1. M

    very bad HDD write performance

    Try EXT4 for the underlying filesystem with RAW VM disk images. Your numbers will improve.
    # dd if=/dev/zero of=/mnt/pve/local-datastore/test2.img bs=512 count=1000 oflag=dsync
    1000+0 records in
    1000+0 records out
    512000 bytes (512 kB, 500 KiB) copied, 0.735923 s, 696 kB/s
    # dd if=/dev/zero...
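    A rough sketch of two complementary tests on the same datastore (the path comes from the post above; block sizes and counts are arbitrary examples). The conv=fdatasync run measures sequential throughput with a single flush at the end, while the oflag=dsync run flushes after every block and shows worst-case synchronous write speed:
    # dd if=/dev/zero of=/mnt/pve/local-datastore/seq-test.img bs=1M count=1024 conv=fdatasync
    # dd if=/dev/zero of=/mnt/pve/local-datastore/sync-test.img bs=4k count=1000 oflag=dsync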
  2. M

    Differential backups

    Maybe *this* differential backup solution is not suitable, but certainly *a* differential backup of some sort is useful. I do not want to have to introduce ZFS, with all the endless knob tweaking that comes with it, into a ZFS-less environment just to get differential backups.
  3. M

    Differential backups

    Your customers are asking for it. Shouldn't you listen to them?
  4. M

    Snapshots in 'delete' status

    I personally found that the more qcow2 snapshots I took and deleted, the slower the VM performance got. I read that qcow2 can suffer from internal fragmentation; perhaps that was the case for me. Switching to raw alleviated the slowdown for me. YMMV.
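    A quick way to check whether a qcow2 image is still carrying internal snapshot metadata, and to verify its consistency, is qemu-img (the image path here is hypothetical):
    # qemu-img snapshot -l /var/lib/vz/images/100/vm-100-disk-1.qcow2
    # qemu-img check /var/lib/vz/images/100/vm-100-disk-1.qcow2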
  5. M

    Snapshots in 'delete' status

    I did not find a solution. I ditched qcow2 in favor of raw because the speed penalty was too much for me. No more snapshots for me.
  6. M

    Differential backups

    Thanks. Still, +1
  7. M

    [SOLVED] Poor performance 5.1

    I could, but simpler is better. I've got everything mounted over 40G via NFS. No additional RAM requirements, no zpool knobs to turn, direct access to the disk files, etc.
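    For reference, a minimal NFS entry in /etc/pve/storage.cfg looks roughly like this (storage name, server address, and export path are made up):
    nfs: nfs40g
            server 10.10.10.5
            export /srv/vmstore
            path /mnt/pve/nfs40g
            content images
            options vers=3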
  8. M

    [SOLVED] Poor performance 5.1

    I saw similar results in my Linux VMs. Slow qcow2 performance. Switched everything to raw a while back and have been happy since. I miss the flexibility of qcow2, but I'll take the performance improvement over the flexibility.
  9. M

    Providing DNS Service to the Proxmox Hypervisor

    In my experience it's best to have one physical DNS server and one VM DNS server. The same goes for DHCP, and for Active Directory if you run it.
  10. M

    Cisco ACI Integration

    We are in the process of deploying Cisco ACI at my company. ACI supports virtual network integration with VMware, Hyper-V, KVM, and OpenStack via API calls. Since PVE provides an API for cluster management, it sure would be nice to also have PVE support in ACI. I realize that this request may...
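    For anyone evaluating such an integration: the PVE API is plain HTTPS with JSON responses. A minimal sketch of authenticating and listing the cluster nodes (host name and credentials are placeholders):
    # curl -k -d 'username=root@pam' --data-urlencode 'password=SECRET' https://pve-host:8006/api2/json/access/ticket
    # curl -k -b 'PVEAuthCookie=<ticket from the previous call>' https://pve-host:8006/api2/json/nodes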
  11. M

    > 3000 mSec Ping and packet drops with VirtIO under load

    My understanding is that it is one controller per disk.
  12. M

    > 3000 mSec Ping and packet drops with VirtIO under load

    This thread is very interesting to me. I have a moderately used 5-node cluster (PVE version 5.0-31 on all nodes) using OVS with 10Gig Ethernet to a couple of NFS shared storage hosts. All my VMs (~25) are configured with virtio disks using the 'virtio scsi single' controller type and all the...
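    For context, a sketch of what that controller setup looks like in a VM config under /etc/pve/qemu-server/ (VM ID, storage name, and sizes are placeholders); with virtio-scsi-single each SCSI disk gets its own controller:
    scsihw: virtio-scsi-single
    scsi0: nfs40g:125/vm-125-disk-0.raw,iothread=1,size=100G
    scsi1: nfs40g:125/vm-125-disk-1.raw,iothread=1,size=500G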
  13. M

    VXLAN

    It would be nice to have GUI support for creating VXLANs when using OVS. VMware does this with their NSX product. PVE has the capability to do this when used with OVS but currently provides no GUI support for it. Is it planned to incorporate this feature? It has the potential to be pretty...
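    Until there is GUI support, a VXLAN port can be added to an OVS bridge by hand; a minimal sketch (bridge name, VNI, and tunnel endpoint are made up):
    # ovs-vsctl add-port vmbr1 vxlan100 -- set interface vxlan100 type=vxlan options:remote_ip=192.0.2.20 options:key=100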
  14. M

    Qcow2 vs. Raw speed

    The underlying storage is hardware RAID6 with BBU and an EXT4 filesystem served by NFS over 40Gb Infiniband. The largest VM has 4 TB and 2 TB disks on the virtio single controller, and the other VMs, with disks of various sizes, are all configured with virtio single disks as well.
  15. M

    Qcow2 vs. Raw speed

    Perhaps it is a 5% performance difference at creation time. I have no doubt of this. However, at least for me, over time with daily snapshotting and heavy use, qcow2 slowed down considerably. It used to take me more than a day to back up a 5 TB VM on qcow2. Now it takes just a few hours. Nothing...
  16. M

    Qcow2 vs. Raw speed

    After recently having some slowness issues with my VMs, I decided to convert all their disk formats from qcow2 to raw. With raw disks I now have no more slowness issues with any of my VMs. Even though snapshots are not supported for raw disks on a non-CoW filesystem, the speed difference I...
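    For anyone doing the same conversion, a minimal sketch with qemu-img (file names are placeholders; the VM should be shut down first and its config updated to point at the new file afterwards; on newer PVE versions, qm move_disk can also change the format while moving a disk between storages):
    # qemu-img convert -p -f qcow2 -O raw vm-100-disk-0.qcow2 vm-100-disk-0.raw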
  17. M

    Network drops on new VMs, not old

    Sounds like a network issue I used to encounter when port security on the connected switch port was set to only allow a limited number of MAC addresses on the port. After the limit was reached, traffic for the additional MAC addresses above the limit was silently dropped.
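    On a Cisco IOS switch, the configuration that produces exactly that silent-drop behavior looks roughly like this (interface and limit are examples; 'violation protect' discards frames from excess MAC addresses without logging or shutting the port down):
    interface GigabitEthernet1/0/10
     switchport mode access
     switchport port-security
     switchport port-security maximum 2
     switchport port-security violation protect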
  18. M

    WANTED - Proxmox / Ceph consultancy

    Just for reference, here is 10Gbps Ethernet latency during iperf testing:
    ~$ ping 10.12.23.27
    PING 10.12.23.27 (10.12.23.27) 56(84) bytes of data.
    64 bytes from 10.12.23.27: icmp_seq=1 ttl=64 time=0.154 ms
    64 bytes from 10.12.23.27: icmp_seq=2 ttl=64 time=0.146 ms
    ...
    --- 10.12.23.27 ping...
  19. M

    WANTED - Proxmox / Ceph consultancy

    Seems to work okay for me:
    # iperf -c 192.168.168.100 -P4 -w8M
    ------------------------------------------------------------
    Client connecting to 192.168.168.100, TCP port 5001
    TCP window size: 8.00 MByte
    ------------------------------------------------------------
    [ 6] local 192.168.168.98 port...
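    For completeness, the matching server side for that test would be something like this (window size chosen to match the client):
    # iperf -s -w8M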