Search results

  1. Migration issue - storage 'zfs1-vps1' is not available on node

    Why can't I do this from the UI? Is it intentional, or was the frontend code just never developed?
  2. Restoring failed

    After trying to restore a zst backup I got: TASK ERROR: unable to restore CT 606 - command 'set -o pipefail && cstream -t 41943040 | lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - --zstd --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs...
  3. Migration issue - storage 'zfs1-vps1' is not available on node

    In the cluster I've got two hosts: vps1 and vps2. Each host has its own local storage: VPS1 has a zfspool named zfs1-vps1, VPS2 has a zfspool named zfs1-vps2. On host VPS1 I click "Migrate" on a powered-off VM/CT and click OK. There is always an error: Task viewer: CT 31337 - Migrate 2020-10-24...
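The usual workaround for this situation is a CLI migration that remaps volumes onto the target node's pool. This is a hedged sketch only: the `--target-storage` option for `pct migrate` exists only in newer PVE releases (check `pct help migrate` on your version), so the command is assembled and printed for review rather than executed.

```shell
# Assumed workaround (not from the original post): migrate CT 31337 from
# vps1 to vps2, mapping its volumes onto the target's local pool.
# --target-storage is an assumption; verify it with `pct help migrate`.
CTID=31337
TARGET_NODE=vps2
TARGET_STORAGE=zfs1-vps2

CMD="pct migrate $CTID $TARGET_NODE --target-storage $TARGET_STORAGE"
echo "$CMD"   # print for review; run manually on the source node
```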
  4. Unprivileged LXC Problem (debian/ubuntu template)

    I need to: mkdir /sys/fs/cgroup/memory/lxc/200/ns/ touch /sys/fs/cgroup/memory/lxc/200/ns/memory.stat touch /sys/fs/cgroup/blkio/lxc/200/ns/blkio.throttle.io_service_bytes mkdir /sys/fs/cgroup/blkio/lxc/200/ns/ touch /sys/fs/cgroup/blkio/lxc/200/ns/blkio.throttle.io_service_bytes mkdir...
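The mkdir/touch sequence quoted above can be written out as a small script. CTID and CGROUP_ROOT are illustrative parameters added here; on the real host you would set CGROUP_ROOT=/sys/fs/cgroup and run as root, while the default points at a temp directory so the sketch can be exercised safely.

```shell
# Sketch of the manual workaround from the post: recreate the missing
# per-container cgroup namespace entries. CGROUP_ROOT defaults to a temp
# dir for safe dry runs; use CGROUP_ROOT=/sys/fs/cgroup on the host.
CTID="${CTID:-200}"
CGROUP_ROOT="${CGROUP_ROOT:-$(mktemp -d)}"

mkdir -p "$CGROUP_ROOT/memory/lxc/$CTID/ns"
touch "$CGROUP_ROOT/memory/lxc/$CTID/ns/memory.stat"
mkdir -p "$CGROUP_ROOT/blkio/lxc/$CTID/ns"
touch "$CGROUP_ROOT/blkio/lxc/$CTID/ns/blkio.throttle.io_service_bytes"
```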
  5. Unprivileged LXC Problem (debian/ubuntu template)

    I tried to create an LXC container using the debian 10.0.1 template and ubuntu 19.04. The container was created, but after clicking it I see: I went to the SSH console and ran pct list: root@vps1:~# pct list can't open '/sys/fs/cgroup/blkio/lxc/200/ns/blkio.throttle.io_service_bytes' - No such file or directory...
  6. [SOLVED] Ping problem from apache2 in LXC container

    It is an Apache problem. I need to do more research.
  7. [SOLVED] Ping problem from apache2 in LXC container

    root@cacti:~# ls -la /bin/ping -rwsr-xr-x 1 www-data root 73496 Jan 31 00:11 /bin/ping root@cacti:~# sudo -u www-data /bin/ping www.google.com /bin/ping: socket: Operation not permitted
  8. [SOLVED] Ping problem from apache2 in LXC container

    I can ping as root: # ping -c3 www.google.com PING www.google.com (172.217.20.164) 56(84) bytes of data. 64 bytes from waw02s07-in-f164.1e100.net (172.217.20.164): icmp_seq=1 ttl=58 time=4.34 ms 64 bytes from waw02s07-in-f164.1e100.net (172.217.20.164): icmp_seq=2 ttl=58 time=4.35 ms ^C I...
  9. [SOLVED] Ping problem from apache2 in LXC container

    # pveversion -v proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve) pve-manager: 6.1-8 (running version: 6.1-8/806edfe1) pve-kernel-helper: 6.1-8 pve-kernel-5.3: 6.1-6 pve-kernel-5.3.18-3-pve: 5.3.18-3 pve-kernel-5.3.18-2-pve: 5.3.18-2 ceph-fuse: 12.2.11+dfsg1-2.1+b1 corosync: 3.0.3-pve1 criu...
  10. [SOLVED] Ping problem from apache2 in LXC container

    The default AppArmor profile doesn't allow pinging hosts from apache2. If you create ping.php with the contents: <?php ///$result = shell_exec('su -pc "ping -c3 172.20.2.42" 2>&1'); $result = shell_exec('sudo -u www-data ping -c3 172.20.2.4 2>&1'); print "<pre>$result</pre>"; ?> ping: socket...
  11. cannot start LXC

    I also did something dirty after that: I created the non-existent directories and files manually using mkdir and touch. There are several bugs in Proxmox or LXC related to this, but I cannot isolate them.
  12. cannot start LXC

    The container works, but pct list gives an error: can't open '/sys/fs/cgroup/memory/lxc/444/ns/memory.stat' - No such file or directory. What should I do?
  13. cannot start LXC

    This happens after restoring a backup from Proxmox 5.x with unprivileged=1. I did: zfs send | ssh zfs receive, copied the config /etc/pve/lxc/444.conf to the new server, then pct start 444. The workaround works.
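The migration steps listed above can be sketched as follows. The dataset and host names are assumptions (the post does not spell them out), and zfs send additionally needs a snapshot to exist first; the commands are only assembled and printed here so they can be reviewed before running on a real host.

```shell
# Hedged sketch of the manual CT migration from the post. Dataset name
# and destination host are assumptions, not from the original thread.
CTID=444
DATASET="rpool/data/subvol-$CTID-disk-0"   # assumed dataset name
DEST="root@newserver"                      # assumed target host

SNAP_CMD="zfs snapshot ${DATASET}@migrate"
SEND_CMD="zfs send ${DATASET}@migrate | ssh $DEST zfs receive $DATASET"
COPY_CMD="scp /etc/pve/lxc/$CTID.conf $DEST:/etc/pve/lxc/$CTID.conf"
START_CMD="ssh $DEST pct start $CTID"

# Print for review; run the steps manually, in this order, on the host.
printf '%s\n' "$SNAP_CMD" "$SEND_CMD" "$COPY_CMD" "$START_CMD"
```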
  14. cannot start LXC

    On the newest Proxmox 6.x I tried to start the LXC container from the GUI: Job for pve-container@444.service failed because the control process exited with error code. See "systemctl status pve-container@444.service" and "journalctl -xe" for details. TASK ERROR: command 'systemctl start pve-container@444' failed: exit...
  15. [TUTORIAL] Proxmox ZFS raid1 performance

    From my point of view: - flashing the generic LSI P20.0.7.0 firmware onto the H220 LSI gains about 30%! - turning off NCQ (for i in a b c d e f; do echo 1 > /sys/block/sd$i/device/queue_depth; done) gains about 3-6% - changing the scheduler from mq-deadline to none (for i in a b c d e f; do echo none >...
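The two tuning loops quoted in this post can be written out with write-permission guards, so the script is a harmless no-op anywhere the /sys/block attributes are absent or read-only (i.e. anywhere but the real host, run as root). The disk letters a-f are taken from the post.

```shell
# NCQ and scheduler tuning from the post, with guards added so the
# loop can be read and dry-run safely on machines without sda..sdf.
for i in a b c d e f; do
  qd="/sys/block/sd$i/device/queue_depth"
  sched="/sys/block/sd$i/queue/scheduler"
  # queue_depth=1 effectively disables NCQ for the device
  if [ -w "$qd" ]; then echo 1 > "$qd"; fi
  # switch the I/O scheduler from mq-deadline to none
  if [ -w "$sched" ]; then echo none > "$sched"; fi
done
```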
  16. [TUTORIAL] Proxmox ZFS raid1 performance

    @sa10 could you give me information about your test environment? For a really good comparison I need the ZFS dataset parameters: recordsize, ashift, compression, atime.
  17. [TUTORIAL] Proxmox ZFS raid1 performance

    I did some more tests with different recordsizes. recordsize 32K, pool1 write bs=1mb iodepth 4 write: IOPS=499, BW=500MiB/s (524MB/s)(10.0GiB/20485msec); 0 zone resets clat (usec): min=8, max=12788, avg=6004.62, stdev=2669.45 read bs=1mb iodepth 4 read: IOPS=2323, BW=2324MiB/s...
  18. [TUTORIAL] Proxmox ZFS raid1 performance

    The next test I did used a different ashift: ASHIFT = 0 write bs=1mb iodepth 4 write: IOPS=509, BW=510MiB/s (535MB/s)(10.0GiB/20087msec); 0 zone resets clat (usec): min=8, max=107857, avg=5888.93, stdev=2487.83 read bs=1mb iodepth 4 read: IOPS=2244, BW=2244MiB/s (2353MB/s)(10.0GiB/4563msec)...
  19. [TUTORIAL] Proxmox ZFS raid1 performance

    Testing variables: - firmware P20.0.7.0 (crossflash from H220 to generic LSI 9207-8e), - compression = off, - atime = off, - recordsize = 128k, - NCQ disabled (cat /sys/block/sda/device/queue_depth = 1), - ashift = 12. After upgrading the H220 to P20 (crossflashed to LSI 9207-8e firmware) FIO TEST...