Search results

  1.

    [SOLVED] How to move linked clone with base to another node?

    @fabian Thank you so much! Solved it on my Proxmox cluster:
    node-from# zfs send -R rpool/base-555-disk-1@__base__ | ssh "node-to" zfs recv rpool/base-555-disk-1
    node-from# zfs snapshot rpool/vm-800-disk-1@__send__
    node-from# zfs send -Rv -i rpool/base-555-disk-1@__base__...
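
    The last command in this snippet is cut off; a sketch of how the full sequence typically looks (the tail of step 3 is my assumption, reusing the dataset and snapshot names from this thread):

    # 1. Replicate the base template together with its __base__ snapshot to the target node
    node-from# zfs send -R rpool/base-555-disk-1@__base__ | ssh node-to zfs recv rpool/base-555-disk-1
    # 2. Snapshot the linked clone so there is a point-in-time to send
    node-from# zfs snapshot rpool/vm-800-disk-1@__send__
    # 3. Send the clone incrementally against the shared base snapshot; since the target
    #    already holds @__base__, only the clone's own blocks are transferred
    node-from# zfs send -Rv -i rpool/base-555-disk-1@__base__ rpool/vm-800-disk-1@__send__ | ssh node-to zfs recv rpool/vm-800-disk-1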
  2.

    [SOLVED] How to move linked clone with base to another node?

    zfs send -i rpool/vm-800-disk-1@test rpool/vm-800-disk-1 | ssh dc02 zfs recv rpool/vm-800-disk-1
    cannot receive incremental stream: destination 'rpool/vm-800-disk-1' does not exist
    dc02# zfs create -s -V 32G rpool/vm-800-disk-1
    cannot receive incremental stream: most recent snapshot of...
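
    A hedged reading of these two errors (standard zfs send/recv behaviour): an incremental stream can only be received into a dataset that already contains the increment's starting snapshot, so a hand-created empty zvol on dc02 does not help. The order that avoids both errors, sketched with the names from this thread (@test2 is just an example follow-up snapshot):

    # Full send first, so rpool/vm-800-disk-1 exists on dc02 and carries @test
    zfs send rpool/vm-800-disk-1@test | ssh dc02 zfs recv rpool/vm-800-disk-1
    # Later increments then have a matching starting snapshot on the destination
    zfs snapshot rpool/vm-800-disk-1@test2
    zfs send -i @test rpool/vm-800-disk-1@test2 | ssh dc02 zfs recv rpool/vm-800-disk-1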
  3.

    [SOLVED] How to move linked clone with base to another node?

    OK, I sent the base template. Now I try to send the linked clone:
    zfs send -i rpool/vm-800-disk-1 rpool/vm-800-disk-1 | ssh dc02 zfs recv rpool/vm-800-disk-1
    cannot receive: failed to read from stream
    What is the correct command to send?
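
    A sketch of why this fails, assuming current zfs send syntax: both the -i argument and the thing being sent must be snapshots, not bare dataset names, so zfs send aborts and recv sees an empty stream. Using the snapshot names that appear later in this thread:

    zfs snapshot rpool/vm-800-disk-1@__send__
    # -i takes the clone's origin snapshot; the stream itself is a snapshot of the clone
    zfs send -i rpool/base-555-disk-1@__base__ rpool/vm-800-disk-1@__send__ | ssh dc02 zfs recv rpool/vm-800-disk-1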
  4.

    [SOLVED] How to move linked clone with base to another node?

    ZFS send / receive does the same thing as vzdump: it copies all the data, linked clone + base. Last night I tested it with a test linked clone: I made a new VM whose disk (a ZFS volume) was 8 KB in size. When I sent and received it to the other node, ZFS created a disk of 10 GB, like the base VM.
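
    A hedged way to check, after the transfer, whether the received volume is still a clone and how much space it really occupies (standard ZFS properties; dataset names as used in this thread):

    # 'origin' is '-' if the volume is a full, independent copy rather than a clone
    zfs get origin rpool/vm-800-disk-1
    # compare allocated vs. referenced space for the base and the (former) clone
    zfs list -o name,used,refer,origin rpool/base-555-disk-1 rpool/vm-800-disk-1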
  5.

    zpool failed to import on reboot

    Have you tried clearing the zpool cachefile for the pool named nvme?
    zpool set cachefile=none nvme
    zpool set cachefile=/etc/zfs/zpool.cache nvme
  6.

    zpool failed to import on reboot

    Disk /dev/nvme0n1: 477 GiB, 512110190592 bytes, 1000215216 sectors
    /dev/nvme0n1p1          34       2047       2014  1007K  BIOS boot
    /dev/nvme0n1p2        2048 1000198797 1000196750   477G  Solaris /usr & Apple ZFS
    /dev/nvme0n1p9  1000198798 1000215182      16385     8M  Solaris reserved 1
    Disk...
  7.

    zpool failed to import on reboot

    You can't use the whole devices nvme0n1 or nvme1n1. You can only use a partition for ZFS, like nvme0n1p2 / nvme1n1p2.
  8.

    zpool failed to import on reboot

    zpool set cachefile=none rpool
    zpool set cachefile=/etc/zfs/zpool.cache rpool
    update-initramfs -u
    reboot
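
    The same sequence, annotated with my reading of what each step does (assuming the stock Debian/Proxmox setup, where the ZFS initramfs hook copies zpool.cache into the initramfs):

    zpool set cachefile=none rpool                    # drop the stale cachefile entry for the pool
    zpool set cachefile=/etc/zfs/zpool.cache rpool    # rewrite /etc/zfs/zpool.cache with the current device paths
    update-initramfs -u                               # rebuild the initramfs so it ships the fresh zpool.cache
    reboot                                            # the pool should now import cleanly at boot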
  9.

    [SOLVED] How to move linked clone with base to another node?

    I use ZFS disks and linked clones. When I back up a linked VM (disk size in zfs list: 1 GB), vzdump backs it up together with the base disk (disk size in zfs list: 10 GB). When I then restore the linked VM on another node, my VM is restored without the base disk and its size is 11 GB. On the other node I have the same base template, but .... Can...
  10.

    Proxmox VE on Debian Jessie with zfs - Hetzner

    We are talking about "How to install Proxmox on native ZFS" (from the ISO). Hetzner has an image setup with Proxmox, but only with mdadm and LVM.
  11.

    Proxmox VE on Debian Jessie with zfs - Hetzner

    Hi guys! That night I began, following your instructions, to install Proxmox with ZFS from the installation ISO. It took me 6 hours; there was a problem with ZFS. Here is how I solved it, and my instructions. My server is a PX61-NVMe.
    1. Boot Rescue Linux 64
    1.2 Check the eth name: udevadm...
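
    The udevadm call is cut off here; a common way to check what name udev assigns the NIC in the rescue system (my guess at the intent, with eth0 purely as an example interface) is:

    udevadm info -q property -p /sys/class/net/eth0 | grep ID_NET_NAME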
  12.

    Proxmox VE 5.0 beta1 released!

    https://bugzilla.proxmox.com/show_bug.cgi?id=1351
  13.

    Proxmox VE 5.0 beta1 released!

    The VM does not start if a network rate limit is set (like 12.5):
    RTNETLINK answers: Invalid argument
    We have an error talking to the kernel
    command '/sbin/tc filter add dev tap301i0 parent ffff: prio 50 basic police rate 13107200bps burst 1048576b mtu 64kb drop flowid :1' failed: exit code 2
    PVE 5.0.6, 4.10.5-1
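
    For reference, the rate in the failing tc command is simply the configured limit converted to bytes per second: 12.5 MB/s × 1,048,576 bytes/MB = 13,107,200, which is the 13107200bps shown above.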
  14.

    VIRTIO SCSI very few iops on write for Windows VMs

    Virtio Storage
    CrystalDiskMark 5.2.1 x64 (C) 2007-2017 hiyohiyo
    Crystal Dew World: http://crystalmark.info/
    * MB/s =...
  15.

    VIRTIO SCSI very few iops on write for Windows VMs

    I found the problem. If you need IOPS, install the old virtio storage drivers, 0.1-8 (etc.). But virtio-scsi did not work as it should. IOPS test in the image: virtio stor with the 0.1-8 ISO.
  16.

    VIRTIO SCSI very few iops on write for Windows VMs

    My server configuration: E3-1275 3.6 GHz / 2x NVMe disks in a ZFS mirror / Proxmox VE 4.4. A Linux VM shows me more than 20000 IOPS on write, but the Windows VMs only 200-250 on write with the latest version of the virtio-scsi drivers. If I change virtio to IDE, my IOPS on write are more than 8000-13000. I use HD Tune Pro...
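
    The post does not say how the Linux-side number was measured; purely as an example of the kind of 4k random-write test that produces such IOPS figures (file path and parameters are arbitrary), something like fio inside the Linux VM could be used:

    fio --name=randwrite --filename=/root/fio.test --size=1G --rw=randwrite \
        --bs=4k --direct=1 --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting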
  17.

    Proxmox VE 5.0 beta1 released!

    Server 2016 in nested virtualization mode does not start after installing Hyper-V: "Hypervisor Error" on a blue screen. But Server 2008 R2 works fine; 2012 not tested. Auto start & bulk actions are likewise not working, with this error:
    Use of uninitialized value $type in string eq at...
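
    For context, the report assumes nested virtualization is already enabled on the host; the usual way to do that on an Intel Proxmox host (a sketch, not taken from this post; VMID 100 is only an example) is:

    echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
    modprobe -r kvm_intel && modprobe kvm_intel       # reload with nesting enabled (no VMs may be running)
    cat /sys/module/kvm_intel/parameters/nested       # should report Y (or 1)
    qm set 100 --cpu host                             # expose the host CPU, including VMX, to the guest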
