Search results

  1.

    How can I avoid disk renaming when adding a new HD

    Hello, how can I avoid disk renaming when adding disks? Usually I have a server with 12 bays, with the OS on 2 SSDs (sda/sdb). The remaining disks are pairs of disks in a stripe. I start with 2 disks and sometimes add another pair. Every time I add a disk and reboot I find different names on the devices, and zpool status gives... (see the command sketches after these results)
  2.

    Volume disappeared after crash and reboot

    I have found the cause: we inserted 7 new hard disks, hot swap. After the crash the server rebooted and re-mapped every HD, so every entry like sdc/sdd/sde has changed! zpool status -v pool: rpool state: ONLINE scan: scrub repaired 0B in 0h1m with 0 errors on Sun Apr 8 00:25:18 2018...
  3.

    Volume disappeared after crash and reboot

    Hello, after a crash and reboot I can't see my "storage" ZFS volume under /dev/zvol, and at the boot prompt I see: a start job is running for udev wait for complete device initialization ... ls -la /dev/zvol total 0 drwxr-xr-x 3 root root 60 Apr 20 08:54 . drwxr-xr-x 21 root root 5280 Apr 20 08:56...
  4.

    Infinite Resilvering on Raid1 Zfs

    Is it possible to force-stop the resilvering?
  5.

    Infinite Resilvering on Raid1 Zfs

    The problem is that the resilvering is infinite ...
  6.

    Infinite Resilvering on Raid1 Zfs

    Hello, I have a RAID 1 that has been in a resilvering state for 2 months .. it's a staging server.. not critical. I thought about stopping the resilvering to try a scrub, but I can't stop it: ---------------------------------------------------------------------------------------------------------- zpool status...
  7.

    I/O problems on Windows 2016 VM

    Hello, I have tested every option to get decent performance on a Windows 2016 VM, but with big writes (over 30/40 GB) the VM hangs and I can see in dmesg: -------------------------------------------------------------------------------------------------------------------------------------- [...
  8.

    Boot problem Changing from Virtio to SCSI Virtio on Windows srv 2016

    Hello, I'm testing a Win2016 server VM using a Virtio disk (not SCSI VirtIO) and I have very slow I/O performance (Write Back or No cache). Is it better to switch to SCSI VirtIO? I have tried, but the VM can't boot .. do I need a new setup from scratch, or can I switch in some way from Virtio to SCSI VirtIO... (see the sketch after these results)
  9.

    Remove Zfs Snapshot - Best Practice

    Hello, what is the best practice for removing old ZFS snapshots? Can I only remove them starting from the newest, or can I also start from the oldest? Can I remove a random snapshot without corrupting the others? Can I remove all of a VM's snapshots using one command line? Thanks! (see the sketch after these results)
  10.

    Zfs Snapshot fails - out of space

    No, it is not flagged, I only noticed it now! .. I think I have to flag it, right?
  11.

    Zfs Snapshot fails - out of space

    Many thanks, I'll make a full backup and then try!
  12.

    Zfs Snapshot fails - out of space

    root@nodo5:~# zfs get all rpool/KVM/vm-105-disk-1 NAME PROPERTY VALUE SOURCE rpool/KVM/vm-105-disk-1 type volume - rpool/KVM/vm-105-disk-1 creation Thu Aug 3 15:16 2017 - rpool/KVM/vm-105-disk-1...
  13.

    Zfs Snapshot fails - out of space

    root@nodo5:~# zpool list
    NAME   SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    rpool  2.09T  480G   1.62T  -         34%   22%  1.00x  ONLINE  -
    FREE: 1.62T
  14.

    Zfs Snapshot fails - out of space

    I see 398G free, where do you see that it is full? Thanks
  15.

    Zfs Snapshot fails - out of space

    Hello, on my server I can't take a new snapshot (a diagnostic sketch follows after these results): ----------------------------------------------------------------------- root@nodo5:~# zfs snapshot rpool/KVM/vm-105-disk-1@snap1 cannot create snapshot 'rpool/KVM/vm-105-disk-1@snap1': out of space root@nodo5:~# root@nodo5:~# zfs list -t all NAME...
  16.

    Kernel 4.15.10-1-pve Bug on Pfsense 2.4.3

    The problem appears only on NAT traffic, do you use NAT?
  17.

    Kernel 4.15.10-1-pve Bug on Pfsense 2.4.3

    Hello, Virtio interfaces don't work on pfSense KVM virtual machines using kernel 4.15.10-1-pve (workaround: switch to Intel E1000; see the sketch after these results): proxmox-ve: 5.1-42 (running kernel: 4.15.10-1-pve) pve-manager: 5.1-46 (running version: 5.1-46/ae8241d4) pve-kernel-4.13: 5.1-43 pve-kernel-4.15.10-1-pve...
  18.

    Kernel Bug / LXC kworker 100%

    The gnulinux-4.13.4-1-pve-advanced-3018d25a67c6ca88 and gnulinux-4.13.13-6-pve-advanced-3018d25a67c6ca88 kernels have a serious bug that, in an LXC environment, causes 100% kworker usage. The only way out is to boot with the old gnulinux-4.10.15-1. Tested on 7 different servers.
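
Command sketches

For the disk-renaming threads (results 1 and 2): sdX names are assigned in detection order and can change whenever disks are added or the server reboots, so the usual approach is to reference pool members by their persistent /dev/disk/by-id names. A minimal sketch, assuming a non-root data pool named tank and placeholder disk ids (the real id symlinks depend on the hardware):

    # show the persistent identifiers that survive reboots and re-cabling
    ls -l /dev/disk/by-id/

    # re-import an existing pool so zpool status shows by-id names instead of sdX
    zpool export tank
    zpool import -d /dev/disk/by-id tank

    # when adding another pair of disks, reference them by id as well
    # (ata-DISK_A / ata-DISK_B are placeholders; pick a mirror or stripe layout
    # matching the existing pool)
    zpool add tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B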
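
For the Virtio to SCSI VirtIO boot problem (result 8): Windows cannot boot from a SCSI VirtIO disk until the vioscsi driver is installed, which is why switching the bus alone leaves the VM unbootable. A commonly used workaround is to attach a small temporary SCSI disk while still booting from the Virtio disk, install the driver from the virtio-win ISO, and only then move the boot disk to the SCSI bus. A hedged sketch with the Proxmox qm CLI; the VM id 105, the storage name local-zfs and the volume name vm-105-disk-1 are example values, not taken from this VM's real config:

    # use the VirtIO SCSI controller for the VM
    qm set 105 --scsihw virtio-scsi-pci

    # temporary 1 GB SCSI disk so Windows detects the controller and installs vioscsi
    qm set 105 --scsi1 local-zfs:1

    # after the driver is installed and the VM is shut down:
    # detach the old Virtio boot disk (the volume itself is not destroyed)
    # and re-attach the same volume on the SCSI bus, then boot from it
    qm set 105 --delete virtio0
    qm set 105 --scsi0 local-zfs:vm-105-disk-1
    qm set 105 --bootdisk scsi0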
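
For the snapshot best-practice questions (result 9): ZFS snapshots are independent point-in-time references, so they can be destroyed in any order (newest first, oldest first, or a random one in the middle) without corrupting the remaining snapshots, as long as a snapshot is not the origin of a clone and is not pinned by a zfs hold. A minimal sketch against the dataset name that appears in these results; snapA and snapZ are placeholder snapshot names:

    # list the snapshots of one VM disk and the space each one pins
    zfs list -t snapshot -o name,used -r rpool/KVM/vm-105-disk-1

    # destroy a single snapshot
    zfs destroy rpool/KVM/vm-105-disk-1@snap1

    # destroy a contiguous range of snapshots in one command (oldest%newest);
    # -n is a dry run and -v shows what would be freed, drop -n to really delete
    zfs destroy -nv rpool/KVM/vm-105-disk-1@snapA%snapZ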
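
For the "out of space" snapshot failures (results 10 to 15): the free space shown by zpool list is pool-wide, while snapshot creation is checked against the space available to the dataset, and a zvol with a refreservation commonly needs roughly that reserved amount free again before a snapshot is allowed. A hedged diagnostic sketch that checks the properties which usually explain this error:

    # how much the zvol reserves and how much space is actually available to it
    zfs get volsize,refreservation,usedbyrefreservation,used,available rpool/KVM/vm-105-disk-1

    # space accounting for the whole tree, including reservations and snapshots
    zfs list -o space -r rpool

If refreservation turns out to be the blocker, one option is to lower or remove it (for example zfs set refreservation=none rpool/KVM/vm-105-disk-1), accepting that the zvol can then hit out-of-space errors at write time instead.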
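
For the pfSense VirtIO bug (results 16 and 17): the workaround stated in the post is to switch the guest NICs from VirtIO to Intel E1000. A minimal sketch of that change with the qm CLI; the VM id 120, the bridge vmbr0 and the MAC address are placeholders. Keeping the original MAC avoids DHCP and ARP surprises, and pfSense will still see a new device name (em0 instead of vtnet0), so interfaces may need to be reassigned in its console:

    # note the current net0 line (model, MAC address, bridge)
    qm config 120 | grep ^net0

    # re-create net0 with the e1000 model on the same bridge,
    # reusing the MAC address printed above
    qm set 120 --net0 e1000,bridge=vmbr0,macaddr=AA:BB:CC:DD:EE:FF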
