Search results

  1.

    Tuning ZFS 4+2 RAIDZ2 parameters to avoid size multiplication

    That's wrong. The minimum volblocksize for ashift=12 is 4K. But you are right that the ideal volblocksize is 16K, both from a storage-efficiency perspective (not taking compression into account) and from the perspective of VM workloads. If in doubt, benchmark your specific workload.
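    A note on applying the advice above: volblocksize is fixed at zvol creation time, so an existing zvol has to be recreated (or migrated) to change it. A minimal sketch, with hypothetical pool and zvol names:

    ```shell
    # Create a 32 GiB zvol with a 16K volblocksize (names are assumptions).
    zfs create -V 32G -o volblocksize=16K tank/vm-100-disk-0

    # Verify; volblocksize is read-only after creation.
    zfs get volblocksize tank/vm-100-disk-0
    ```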
  2.

    Tuning ZFS 4+2 RAIDZ2 parameters to avoid size multiplication

    @guerby Go to this spreadsheet: https://docs.google.com/spreadsheets/d/1tf4qx1aMJp8Lo_R6gpT689wTjHv6CGVElrPqTA0w_ZY/edit?pli=1#gid=1576424058 Select the tab "Raidz2 total parity cost in % of total storage" and scroll to the bottom, to the power-of-2 block sizes that you can actually use. Then...
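    The parity-cost figures in such a spreadsheet can be reproduced with back-of-envelope arithmetic. A sketch for a 4+2 RAIDZ2 at ashift=12 (4K sectors), assuming the standard RAIDZ allocation rules: each row holds up to 4 data sectors plus 2 parity sectors, and the total allocation is padded up to a multiple of parity + 1 = 3:

    ```shell
    # Sketch: sectors allocated for one 16K block on a 4+2 RAIDZ2, ashift=12.
    volblocksize=16384; sector=4096; data_disks=4; parity=2
    sectors=$(( volblocksize / sector ))                 # 4 data sectors
    rows=$(( (sectors + data_disks - 1) / data_disks ))  # 1 stripe row
    alloc=$(( sectors + rows * parity ))                 # 4 data + 2 parity = 6
    pad=$(( (parity + 1 - alloc % (parity + 1)) % (parity + 1) ))
    total=$(( alloc + pad ))
    pct=$(( 100 * (total - sectors) / total ))
    echo "16K volblocksize: $total sectors, ${pct}% parity+padding"
    ```

    For 16K the allocation comes out even (no padding), which is part of why it is a sweet spot for this layout; smaller power-of-2 blocks waste proportionally more on parity and padding.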
  3.

    My watchdog crashes the server during reboot. Any idea how to fix that?

    Hi @t.lamprecht! 1. We have now updated to the latest PVE 6. 2. "Datacenter -> Options -> HA Settings" is still set to "default". 3. The logs show "IPMI Watchdog: Unexpected close, not stopping watchdog" at the end of the reboot process. The line only occurs on VGA output...
  4.

    Backup ceph-fs?

    Otherwise should I snapshot the CephFS filesystem and then mount the snapshot and backup the snapshot? Or can I backup without snapshotting first? That sounds like quite the involved script and cronjob. Yes, indeed this needs to be in the UI!
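    A sketch of the snapshot-then-backup approach, assuming CephFS is mounted at /mnt/cephfs (paths are assumptions); CephFS exposes snapshots through the hidden .snap directory on any mounted path, so no separate mount is needed:

    ```shell
    # Create a named CephFS snapshot for today.
    mkdir /mnt/cephfs/.snap/backup-$(date +%F)

    # Back up from the snapshot so the archive sees a consistent view.
    tar czf /backup/cephfs-$(date +%F).tar.gz \
        -C /mnt/cephfs/.snap/backup-$(date +%F) .

    # Remove the snapshot once the backup is done.
    rmdir /mnt/cephfs/.snap/backup-$(date +%F)
    ```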
  5.

    My watchdog crashes the server during reboot. Any idea how to fix that?

    Hi, thanks for the reply! Yes, we do use HA. No, this is not a regression, but I only noticed it now, since we rarely reboot our Proxmox servers. We are still on Proxmox 6.3 with kernel 5.4. Will collect that and get back to you next week.
  6.

    My watchdog crashes the server during reboot. Any idea how to fix that?

    I have a problem that every time I want to cleanly reboot my proxmox server with either the reboot button in the PVE GUI, or via command line with `reboot`, the watchdog power cycles my server! I am afraid that crashing my server during reboot will eventually corrupt something. We are using Dell...
  7.

    Testing the watchdog

    Another problem that I noticed is that I get "The watchdog timer expired." messages in iDRAC if I reboot any of the servers using the reboot button in the PVE GUI, sudo reboot or sudo init 6. So I don't think the servers are restarting cleanly. They get power cycled by the watchdog during reboot...
  8.

    Testing the watchdog

    I am revisiting this now 5 months later. One of the servers apparently in the last months by itself switched its Watchdog action to No action (0x00) root@pve3:~# ipmitool mc watchdog get Watchdog Timer Use: Reserved (0x40) Watchdog Timer Is: Started/Running Watchdog Timer Actions: No...
  9.

    [SOLVED] Create Ceph block device from RAW image

    Hi, I apologize, as this is probably a really trivial question for most here. I am trying to convert a physical server to a VM. So I have booted off a live CD and created a RAW image of the physical server's SSD (with dd). That image file is now on an ext4-formatted USB drive. How do I import...
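    For reference, the usual route on PVE is qm importdisk, which copies an image into any configured storage, including a Ceph RBD pool. A sketch, assuming VM 100 already exists, the image sits at /mnt/usb/server.raw, and the RBD storage is named ceph-vm (all names are assumptions):

    ```shell
    # Import the RAW image as an unused disk of VM 100 onto the Ceph storage.
    qm importdisk 100 /mnt/usb/server.raw ceph-vm

    # The disk then shows up as "unused0" in the VM's hardware tab and can
    # be attached as scsi0/virtio0 from the GUI or with qm set.
    ```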
  10.

    ashift, volblocksize, clustersize, blocksize

    I had a similar problem. I wrote about this here: https://www.reddit.com/r/zfs/comments/opu43n/zvol_used_size_far_greater_than_volsize/ I wrote this from the perspective of exporting a ZVOL with an NTFS filesystem via iSCSI from Ubuntu server. But the same thoughts apply when you have an ext4...
  11.

    Testing the watchdog

    I found a way to test what happens when the watchdog stops being kicked: root@pve4:~# lsof /dev/watchdog COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME watchdog- 861 root 3w CHR 10,130 0t0 431 /dev/watchdog root@pve4:~# kill -9 861 The terminal on the VGA output showed a warning message...
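    The test described in this snippet, as a sketch; on PVE the process holding /dev/watchdog is watchdog-mux. Warning: on a node where the watchdog is armed, this will hard-reset the machine once the timer expires:

    ```shell
    # Find the process that holds /dev/watchdog open.
    lsof /dev/watchdog

    # Kill it so the timer stops being kicked (assumes watchdog-mux).
    kill -9 "$(pidof watchdog-mux)"

    # Shortly afterwards the hardware watchdog should fence the node.
    ```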
  12.

    Testing the watchdog

    I have Dell based servers with iDRAC. In UEFI BIOS I have enabled the setting "Integrated Devices" - "os watchdog timer: enabled" I have successfully enabled the hardware watchdog using this guide...
  13.

    [SOLVED] Error : 401 401 authentication failure when join a cluster

    @Moayad @proxmox staff Since this keeps tripping up admins, can we please mark the foreign root account's password field as autocomplete=new-password so it doesn't get autofilled? Or maybe just renaming the field id is already enough. Original suggestion...
  14.

    [SOLVED] Error : 401 401 authentication failure when join a cluster

    Problem solved: Lastpass filled in the local machine's root password, and I assumed it was decoded from the join string; that's why I didn't notice that the remote machine's root password was wrong. Source: https://forum.proxmox.com/threads/cant-join-cluster-through-gui.68201/#post-321486
  15.

    Can't join cluster through GUI

    Thanks! That hint helped a lot. I also assumed that the password field was decoded from the join string, but actually Lastpass filled in the local machine's root password instead.
  16.

    [SOLVED] Error : 401 401 authentication failure when join a cluster

    I have the same problem. Freshly installed Proxmox 6.2-6 machines. Subscription + fully updated. Joining second machine to first machine's cluster via GUI gives error message: Establishing API connection with host '192.168.194.11' TASK ERROR: 401 401 authentication failure I double checked...
  17.

    Shutdown of the Hyper-Converged Cluster (CEPH)

    I have done some research, but I am still confused about how to shut down a Proxmox HA cluster with Ceph safely and without race conditions, from a script triggered on low UPS battery. There has to be a better answer than "never shut down the whole cluster". Is it as simple as setting VM...
  18.

    Planning a cluster with HA vm with USB modem.

    I have successfully evaluated Proxmox in an HA cluster, and now I just ordered the new Dell servers that this will run on commercially. One thing I am not yet sure is possible: one of our current machines uses a USB modem to query an embedded legacy machine every night...
  19.

    Ceph OSD on LVM logical volume.

    So here's my little guide for everyone who wants to do this: 1. During install set maxvz to 0 to not create local storage and keep free space for Ceph on the OS drive. [GUIDE, 2.3.1 Advanced LVM Configuration Options ] 2. Setup Proxmox like usual and create a cluster 3. Install Ceph packages...
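    The later steps of such a guide typically boil down to carving a logical volume out of the space freed in step 1 and handing it to Ceph. A sketch with assumed VG/LV names; on PVE, pveceph wraps ceph-volume, but ceph-volume can consume an LV directly:

    ```shell
    # Create an LV for the OSD inside the installer's volume group
    # (volume group "pve" and LV name are assumptions).
    lvcreate -L 100G -n osd-data pve

    # Create a BlueStore OSD on that logical volume.
    ceph-volume lvm create --data pve/osd-data
    ```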
  20.

    Homelab: Ceph requirements

    I would like to equip my servers with Dual 10G NICs: 1 NIC for ceph replication and 1 NIC for client communication and cluster sync I understand having a separate network for Ceph replication and redundancy but 3 separate networks just to keep latency low is not really modern "converged". My...