Search results

  1. Capacity testing with Proxmox 4.4

    1. Creating 120 clones takes on average 2.7 s per clone, with no pause between them and no problems. In the script I'm using: qm clone $TEMPLATE $VM_ID and qm set $VM_ID -net0 virtio=$MAC,bridge=vmbr0,tag=$VLAN 2. Never had a problem with the network using virtio. 3. I'm not using SPICE. For me it...
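
    A minimal sketch of that loop, assuming a hypothetical template VMID of 9000, VLAN 10, and locally generated MACs (none of these values are from the post):

        #!/bin/bash
        # clone 120 VMs from a template and attach each to a tagged bridge
        TEMPLATE=9000   # hypothetical template VMID
        VLAN=10         # hypothetical VLAN tag
        for i in $(seq 1 120); do
            VM_ID=$((1000 + i))
            # derive a unique locally administered MAC per clone
            MAC=$(printf '52:54:00:00:%02x:%02x' $((i / 256)) $((i % 256)))
            qm clone $TEMPLATE $VM_ID
            qm set $VM_ID -net0 virtio=$MAC,bridge=vmbr0,tag=$VLAN
        done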
  2. SSD low speed in lxc

    I initially did two tests, on the Proxmox root partition and from the container, and there is some performance loss; then I did a test with a mounted volume and the speed loss is consistent. On Proxmox itself, I mean /dev/dm-0 ext4 9.8G 6.8G 2.5G 74% / /dev/mapper/pve-root on /...
  3. SSD low speed in lxc

    ZFS or LVM? I have LVM. There is a difference in speed, but not as large as yours. From the LXC container: sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run Operations performed: 1461480 Read, 974320 Write, 3117770 Other = 5553570 Total...
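
    For reference, a sketch of the full legacy-sysbench sequence around the run command quoted above; prepare creates the 2G of test files and cleanup removes them:

        sysbench --test=fileio --file-total-size=2G prepare
        sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw \
                 --init-rng=on --max-time=300 --max-requests=0 run
        sysbench --test=fileio --file-total-size=2G cleanup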
  4. Updating templates

    Just pass all the arguments to the script rather than hard-coding them inside it.
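
    A minimal sketch of that approach, with hypothetical parameter names (the original script is not shown in this result):

        #!/bin/bash
        # usage: ./update-template.sh <template-vmid> [storage]
        TEMPLATE_ID=$1
        STORAGE=${2:-local-lvm}   # default storage when not given
        echo "updating template $TEMPLATE_ID on $STORAGE"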
  5. i/o disk limit

    1. First read: dd if=test.long of=test.long2 reports 9682944 bytes (9.7 MB) copied, 246.865 s, 39.2 kB/s. 2. Second read (the file is cached): dd if=test.long of=test.long2 reports 9682944 bytes (9.7 MB) copied, 0.0677105 s, 143 MB/s. 3. Drop the cache and read the file again: echo 1 > /proc/sys/vm/drop_caches dd...
  6. i/o disk limit

    No, it's working OK. What you see are reads/writes from/to the cache; blkio.throttle.read/write_bps_device limits physical access to the device, not the cache, so if your data is in the cache you get full speed. Flush the cache and then read the file again and you will see that the limit is in place. The...
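
    A minimal sketch of that verification, assuming a hypothetical test file; the throttle is only visible on reads that actually hit the device:

        dd if=test.long of=/dev/null bs=1M   # physical read: throttled
        dd if=test.long of=/dev/null bs=1M   # served from page cache: full speed
        echo 1 > /proc/sys/vm/drop_caches    # flush the page cache
        dd if=test.long of=/dev/null bs=1M   # physical read again: throttled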
  7. i/o disk limit

    This is normal for cgroups: they are created dynamically and parameters are passed at container start (mem, cpuset, etc.). You have to script this yourself, or wait and see if Proxmox implements it.
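
    A minimal sketch of such a script, using container ID 107 and device 251:0 as placeholders; it starts the container and then writes the limit into the freshly created cgroup:

        #!/bin/bash
        CTID=107          # hypothetical container ID
        DEV="251:0"       # hypothetical MAJ:MIN of the backing device
        LIMIT=50000000    # bytes per second
        pct start $CTID
        echo "$DEV $LIMIT" > /sys/fs/cgroup/blkio/lxc/$CTID/blkio.throttle.read_bps_device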
  8. i/o disk limit

    Just checked on two containers, and after more than an hour the limit is still there and working. Tue Nov 15 15:35:51 CET 2016 root@dreadnought:/# cat /sys/fs/cgroup/blkio/lxc/107/blkio.throttle.read_bps_device 251:19 50000 root@dreadnought:/# stat /sys/fs/cgroup/blkio/lxc/107/blkio.throttle.read_bps_device...
  9. LXC Container show Load and hardware from Host

    Is it NUMA-aware when pinning/repinning CPUs, or does it just calculate CPU load and repin to a free CPU? Can we have the old "cpu limit" without pinning as a choice in the config for a specific container? I don't mind the container showing all CPUs. edit: I just found that cpu limit is still available in options -...
  10. Changing colors of the graphs

    Hi, where can I find the definition of the colors used in the graphs? I made a dark theme and only the graphs don't fit with the default colors (just a simple reverse looks OK).
  11. ProxMox 4.x is killing my SSDs

    2x RAID 10 - 4x Samsung 850 Pro, LVM thin, connected to an H700 so no TRIM support. SSD 0 write: 5.67 TB power_on: 391.6 days wear_index: 099 Full_write: 27 bad_sector: 0 serial: S250NXAGB02245X SSD 1 write: 6.06 TB power_on: 391.6 days wear_index: 099 Full_write: 25 bad_sector: 0...
  12. Proxmox VE 4.3 released!

    It is hard to fill the page with graphs now: if I show more of the VM list, the graphs end up in one column and the right side of the page is left empty. I also like the classic theme more; with the old horizontal buttons the page was even nicer.
  13. Proxmox 4 Reset Statistics and calculate the disk space new

    Stats are in /var/lib/rrdcached/db/pve2-vm; just delete the file for the container/VM ID, then restart rrdcached.
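
    As a concrete sketch, for a hypothetical guest ID of 107 that would be:

        rm /var/lib/rrdcached/db/pve2-vm/107   # drop the guest's RRD stats file
        systemctl restart rrdcached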
  14. LXC NFS PVE 4.2-60 (running kernel: 4.4.15-1-pve)

    Log from a container without a specified AppArmor profile (no lxc.aa_profile: in the container conf). I upgraded Proxmox at 18:00 today. Aug 9 06:26:29 dreadnought kernel: [1548710.957186] audit: type=1400 audit(1470716789.218:485): apparmor="DENIED" operation="mount" info="failed flags match"...
  15. LXC NFS PVE 4.2-60 (running kernel: 4.4.15-1-pve)

    Only [mount fstype=nfs] is needed. This looks like an NFS server-side error - maybe look there.
  16. LXC NFS PVE 4.2-60 (running kernel: 4.4.15-1-pve)

    I'm using [mount fstype=cifs] and [mount fstype=nfs] in a custom AppArmor profile and mounting NFS from fstab - everything is working OK on 4.4.15-1.
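
    A minimal sketch of such a profile, under a hypothetical name; the two mount rules are the ones quoted above, the rest is the stock LXC container base (the reload path is an assumption about the lxc packaging):

        cat > /etc/apparmor.d/lxc/lxc-custom-with-nfs <<'EOF'
        profile lxc-custom-with-nfs flags=(attach_disconnected,mediate_deleted) {
          #include <abstractions/lxc/container-base>
          mount fstype=nfs,
          mount fstype=cifs,
        }
        EOF
        apparmor_parser -r /etc/apparmor.d/lxc-containers   # reload the LXC profile set

    The container is then pointed at it with lxc.aa_profile: lxc-custom-with-nfs in its config, the same key the log post above refers to.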
  17. LXC migration - ssh compression on/off

    I also changed the SSH cipher between cluster members to arcfour and am getting 1 Gbit/s instead of 0.5 Gbit/s, even with an old X7560 (the max is 1.2 Gbit/s). New CPUs achieve over 5 Gbit/s with a single SSH thread.
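
    A sketch of that change, assuming a hypothetical cluster subnet; a host block in /etc/ssh/ssh_config on each node selects the cheap cipher and disables compression (arcfour is cryptographically weak and only sensible on a trusted migration network):

        cat >> /etc/ssh/ssh_config <<'EOF'
        Host 10.10.10.*
            Ciphers arcfour
            Compression no
        EOF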
  18. i/o disk limit

    For LXC, use lsblk to get the device: └─sda3 8:3 0 3.7T 0 part ├─pve-root 251:0 0 10G 0 lvm / ├─pve-swap 251:1 0 2G 0 lvm [SWAP] ├─pve-data_tmeta 251:2 0 116M 0 lvm │ └─pve-data-tpool...
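
    Rather than reading the MAJ:MIN pair off the tree, lsblk can print it directly; a sketch, assuming a hypothetical container volume name:

        lsblk -d -n -o MAJ:MIN /dev/pve/vm-107-disk-1

    The resulting pair is what goes into blkio.throttle.read_bps_device, as in the posts above.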
  19. LXC migration - ssh compression on/off

    Hi, usually migration does not exceed 12 MB/s because of the -C option in ssh. I understand that compression is good when copying free space. But in normal use, when containers don't have a huge amount of free space, speeds without compression are far greater, and the benefits from compression...