Search results

  1.

    Proxmox default installation

    Yes, I used ashift when creating the zpool. smartctl --all /dev/sdb | grep "Sector Sizes" reports: Sector Sizes: 512 bytes logical, 4096 bytes physical
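As a side note, ashift is the base-2 logarithm of the sector size ZFS will use (9 for 512-byte, 12 for 4096-byte sectors). A minimal sketch of deriving it from the kernel-reported physical block size; the 4096 value is hard-coded here to match the output above, while on a live system it would come from sysfs:

```shell
# Physical block size; hard-coded to match the smartctl output above.
# On a real system: pbs=$(cat /sys/block/sdb/queue/physical_block_size)
pbs=4096

# ashift = log2(sector size): halve until we reach 1, counting the steps
ashift=0
n=$pbs
while [ "$n" -gt 1 ]; do
    n=$((n / 2))
    ashift=$((ashift + 1))
done

echo "ashift=$ashift"
```

For a 4096-byte physical sector this yields ashift=12, which is what `zpool create -o ashift=12` expects on 512e/4Kn drives.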
  2.

    Proxmox default installation

    Yes, you can change it when creating the partitions; see the fdisk manual. But is it really worth doing? And why does the Proxmox installer create 512-byte partitions instead of 4Kn?
  3.

    SMART error (CurrentPendingSector)

    You can do it online: zpool offline rpool baddisk, then zpool replace rpool baddisk gooddisk.
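A dry-run sketch of that replacement sequence using the zpool subcommands (the pool name follows the post; the device paths are placeholders, not the poster's real disks):

```shell
pool=rpool       # pool name from the post
bad=/dev/sdb     # failing disk (placeholder)
good=/dev/sdc    # replacement disk (placeholder)

# Dry run: build the command strings and print them -- nothing is executed
offline_cmd="zpool offline $pool $bad"
replace_cmd="zpool replace $pool $bad $good"
echo "$offline_cmd"
echo "$replace_cmd"
# After the resilver finishes, `zpool status $pool` should report ONLINE again
```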
  4.

    Proxmox default installation

    After a default installation, and after zpool create zfs1 raidz2 /dev/disk/sd{b,c,d,e,f,h}, I have: root@vps1:/sys/block/sda/queue# cat /sys/block/sd{b,c,d,e,f,h}/queue/physical_block_size 4096 4096 4096 4096 4096 4096 and: root@vps1:/sys/block/sda/queue# cat...
  5.

    [SOLVED] Migrating Proxmox LXC containers with low downtime

    I did: zfs set canmount=off zfs-vps1/subvol-105-disk-1 and, after step V: zfs set canmount=on zfs-vps1/subvol-105-disk-1. That did the trick for me. Thanks.
  6.

    [SOLVED] Migrating Proxmox LXC containers with low downtime

    Hello guys, I have a big container (>500 GB) which I cannot shut down for long. I use ZFS and my migration script is: I. zfs snapshot zfs-vps2/subvol-105-disk-1@snap1 II. zfs send zfs-vps2/subvol-105-disk-1@snap1 | ssh HOST2 zfs receive zfs-vps1/subvol-105-disk-1 III. pct shutdown...
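The truncated script appears to follow the usual low-downtime pattern: snapshot, full send while the container runs, shutdown, then a small incremental send. A dry-run sketch of that flow; dataset names and the host are taken from the post, but steps IV and V are my assumption about the elided part:

```shell
ds=zfs-vps2/subvol-105-disk-1      # source dataset (from the post)
dest=zfs-vps1/subvol-105-disk-1    # target dataset (from the post)
host=HOST2                         # target host (from the post)

# Dry run: build each step as a string and print it instead of executing
s1="zfs snapshot $ds@snap1"
s2="zfs send $ds@snap1 | ssh $host zfs receive $dest"
s3="pct shutdown 105"
s4="zfs snapshot $ds@snap2"                                        # assumed step IV
s5="zfs send -i $ds@snap1 $ds@snap2 | ssh $host zfs receive $dest" # assumed step V
for s in "$s1" "$s2" "$s3" "$s4" "$s5"; do echo "$s"; done
```

The point of the incremental `zfs send -i` is that the container is only down for the (small) delta between the two snapshots, not for the full 500 GB transfer.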
  7.

    Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    My question is not how to speed that up, but whether it will be implemented this way (or another) in the production ISO?
  8.

    Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    Are you planning to enhance backup in 5.x with lz4 compression, or with a compressor that is not single-core? It's a real bottleneck when you have to compress more than 1 TB (...)
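Until multi-threaded compression is built into vzdump itself, one common workaround is to write an uncompressed dump to stdout and pipe it through pigz (a parallel gzip). This is a sketch, not the poster's setup: the VM id, thread count, and target path are placeholders.

```shell
vmid=101          # placeholder VM id
threads=8         # placeholder pigz thread count
target=/mnt/z1/vzdump-qemu-$vmid.vma.gz   # placeholder output path

# Dry run: build the pipeline as a string and print it, nothing is executed
pipeline="vzdump $vmid --compress 0 --stdout | pigz -p $threads > $target"
echo "$pipeline"
```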
  9.

    Uncompressed Backup Problem

    I did a strace of the whole backup process: http://pastebin.com/7ZEQGavb http://pastebin.com/3h7ebE9e Is anybody able to diagnose that?
  10.

    Uncompressed Backup Problem

    If local storage is selected as the backup storage, the problem does not exist. If Samba storage is selected as the backup storage, the problem exists. [9631705.054960] CIFS VFS: Error connecting to socket. Aborting operation. [9631705.055143] CIFS VFS: cifs_mount failed w/return code = -113 All the time...
  11.

    Uncompressed Backup Problem

    I've got a problem with backing up a VM to a remote NAS which is mounted via Samba at /mnt/z1. This is the log: INFO: starting new backup job: vzdump 101 --compress 0 --node far --mode snapshot --remove 0 --storage z1 INFO: Starting Backup of VM 101 (qemu) INFO: status = running INFO: update VM...
  12.

    How to export all VM IDs, name and notes?

    Dude: qm list lists VMs, pct list lists LXC containers, and vzlist lists OpenVZ containers (Proxmox < 3.x).
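The tabular output of qm list can then be reduced to id/name pairs with awk. A sketch against a captured sample; the sample rows below are made up, and on a real node the input would be piped straight from qm list. Notes are not in this listing; exporting them would still take a per-VM qm config lookup.

```shell
# Made-up sample of `qm list` output (header row + two VM rows)
sample='      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       101 web1                 running    2048       32.00        1234
       102 db1                  stopped    4096       64.00        0'

# Skip the header row (NR > 1), print only VMID and NAME
out=$(echo "$sample" | awk 'NR > 1 { print $1, $2 }')
echo "$out"
```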
  13.

    How to export all VM IDs, name and notes?

    $ pct list, then copy & paste into a text editor :)
  14.

    [SOLVED] LXC and device passthrough

    I thought that the proxmox-ve scripts do it automatically. :-)
  15.

    [SOLVED] LXC and device passthrough

    I added this to my rc.local: mknod /dev/ttyS0 c 4 64; chown root:dialout /dev/ttyS0; chmod 0644 /dev/ttyS0. And it works now.
  16.

    [SOLVED] LXC and device passthrough

    Do I need to do something more? After rebooting the container ... 204# ls -la /dev/ | grep ttyS0 doesn't show anything.
  17.

    [SOLVED] LXC and device passthrough

    Hi all, I tried to pass /dev/ttyS0 and /dev/ttyUSB0 through to a container with: lxc-device add -n 204 /dev/ttyS0 and lxc-device add -n 204 /dev/ttyUSB0. It works well. To make it available after a container restart I added lines to /var/lib/lxc/204/config: lxc.cgroup.devices.allow = c...
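The truncated config lines presumably expand to the standard LXC device-allow plus mount-entry pair. A hedged sketch for /dev/ttyS0 only; the 4:64 major/minor numbers match the mknod command earlier in the thread, but the exact lines of the original post are elided, so this is the usual pattern rather than the poster's verbatim config:

```
# Allow the container to use character device 4:64 (/dev/ttyS0)
lxc.cgroup.devices.allow = c 4:64 rwm
# Bind-mount the host device node into the container
lxc.mount.entry = /dev/ttyS0 dev/ttyS0 none bind,optional,create=file
```

The mount entry creates the node inside the container at start, which is why the rc.local mknod workaround mentioned above also does the job.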