Search results

  1. K

    command IOTOP = ksoftirqd/0 99%

    You mean PVE 4.4? I don't believe it will be supported for much longer, if at all... Anyway, it looks like a kernel bug surfacing in your setup.
  2. K

    command IOTOP = ksoftirqd/0 99%

    My idea was an old kernel or a misbehaving ACPI interrupt. Neither of them seems to be relevant, sorry. I suggest flashing the newest BIOS or downgrading the kernel to see if it helps.
  3. K

    command IOTOP = ksoftirqd/0 99%

    Can you post outputs?
    pveversion -v
    grep '.*' /sys/firmware/acpi/interrupts/*
  4. K

    Increased space - now what?

    The simplest way is to add a new partition filling the free space, make it a PV, add the new PV to the VG, extend the root LV, then resize the FS on it. This should always work online.
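
    A minimal sketch of those steps, assuming the new partition ends up as /dev/sda3, the VG is the PVE default "pve", and root is ext4 (all placeholders, adjust to your layout):

    pvcreate /dev/sda3                   # turn the new partition into a PV
    vgextend pve /dev/sda3               # add the PV to the existing VG
    lvextend -l +100%FREE /dev/pve/root  # grow the root LV into the free space
    resize2fs /dev/pve/root              # grow the ext4 filesystem online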
  5. K

    [SOLVED] ZED Notifications not working

    What does "systemctl status zfs-zed.service" say? Don't mind the errors running zedlets standalone. They're supposed to run from the daemon. Also, check your postfix logs.
  6. K

    Proxmox on SD card

    With that kind of write load I'd suggest moving the mentioned dirs (or the whole /var) to other disks, especially in production. Then I think there should be no problems with the cards.
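
    A rough sketch of moving the whole /var to another disk, with a hypothetical pre-formatted partition /dev/sdb1 (device, filesystem type, and mount options are assumptions):

    # do this from a maintenance/rescue boot, or with services that write to /var stopped
    mkdir /mnt/newvar
    mount /dev/sdb1 /mnt/newvar                              # hypothetical target partition
    cp -a /var/. /mnt/newvar/                                # copy contents, preserving ownership/permissions
    umount /mnt/newvar
    echo '/dev/sdb1 /var ext4 defaults 0 2' >> /etc/fstab    # assumed fs type and options
    mount /var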
  7. K

    Proxmox on SD card

    To me this fear of SD cards is not realistic. What wears out these cards is excessive writes. If you keep most of the writes on separate partitions on spinning disks or SSDs, I see no actual downside to them as boot devices. For practical purposes, you can consider a booted PVE system like a VMWare appliance.
  8. K

    Proxmox on SD card

    These Dell servers have a dual SD drive, so you can use a mirror. Then again, none of the Raspberry Pis I've been using has had a failed SD card, even though I've been running mysql on some of them - hobby stuff, nothing serious, but still. For a boot device their speed is more than adequate...
  9. K

    Proxmox on SD card

    I wonder about that too. If you put the log dirs (e.g. /var/log) on a different disk, the write load and thus the wear on the card should be almost non-existent.
  10. K

    Zabbix: Too many processes on PVE node

    Raise the trigger threshold; for PVE it's way too low. When using OVZ or LXC and dozens of containers, it's normal to have 500-600 processes or more, nothing out of the ordinary. But checking for other issues never hurts.
  11. K

    LCX Container Wont Start

    Try starting with lxc-start --name=CTID --foreground and observe the output. Additionally, make use of the --logfile and --logpriority options of that command.
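
    Put together, a debug run could look roughly like this (CTID 101 and the log path are placeholders):

    lxc-start --name=101 --foreground \
        --logfile=/tmp/lxc-101.log --logpriority=DEBUG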
  12. K

    Large-scale migration from VMWare to Proxmox (Ceph)

    Yes, I fixed my reply in the meantime, realising that. But pv needs the size when fed through stdin.
  13. K

    Large-scale migration from VMWare to Proxmox (Ceph)

    Wouldn't it rather be: dd if=/path/to/device | pv -s <size of device> | ssh root@remote-server 'rbd --image-format 2 import - pool/webserver'? The device is supposedly the product of using qemu-nbd, right? EDIT: fixed my suggestion
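
    Since pv needs the size up front when reading from stdin, one way to obtain and feed it (paths, pool, and image name taken from the example above, otherwise placeholders):

    SIZE=$(blockdev --getsize64 /path/to/device)    # device size in bytes for pv's progress bar
    dd if=/path/to/device bs=4M | pv -s "$SIZE" | \
        ssh root@remote-server 'rbd --image-format 2 import - pool/webserver'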
  14. K

    Large-scale migration from VMWare to Proxmox (Ceph)

    The way I usually do it is creating the target VM with appropriate config and disks, then using qemu-nbd to present the vmdk as a block device and just dd it over to the new VM's disk. It can easily be automated with scripting. I haven't used ceph yet but I suppose it provides a logical block device...
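
    A sketch of that qemu-nbd approach; the source path and the target disk device are assumptions for illustration:

    modprobe nbd max_part=8                          # load the NBD kernel module
    qemu-nbd -r -c /dev/nbd0 /path/to/source.vmdk    # present the vmdk read-only as a block device
    dd if=/dev/nbd0 of=/dev/target-vm-disk bs=4M status=progress   # copy it onto the new VM's disk
    qemu-nbd -d /dev/nbd0                            # disconnect when done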
  15. K

    zfs raid mirror 1 to raid 10

    If you absolutely need that space, just create a new mirrored pool. It's not possible to add these 2 disks to the existing pool.
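
    Creating a separate mirrored pool from the two new disks could look like this (pool name, ashift, and device paths are placeholders; prefer /dev/disk/by-id paths in practice):

    zpool create -o ashift=12 tank2 mirror /dev/sdc /dev/sdd
    zpool status tank2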
  16. K

    zfs raid mirror 1 to raid 10

    PVE export/import will do.
  17. K

    zfs raid mirror 1 to raid 10

    Your paste shows the correct sizes for a 2T pool, where is the problem?
  18. K

    zfs raid mirror 1 to raid 10

    They share the same high-level topology, but a ZFS pool is much more than "regular" RAID10. You do get a similar "performance boost" in ZFS, too.
  19. K

    zfs raid mirror 1 to raid 10

    No. Do not do that. It makes no sense. Instead do it like this: https://forum.proxmox.com/threads/zfs-raid-disks.34099/#post-167015 On another note, a balanced pool is naturally "better"; a normal, untouched pool keeps its layout over its lifetime and stays balanced. IIRC ZoL already...
  20. K

    zfs raid mirror 1 to raid 10

    Technically, AFAIK, yes. The installer uses just 2 disks for the boot partitions; the rest of the disks are used only for data in the pool. Practically, you start with no data if you install with a "RAID10", so when adding data and VMs you don't end up with an unbalanced pool. When you add the 2nd...
