Search results

  1. [PVE-9 BETA] PVE Nested Upgrade from 8 to 9 breaks boot

    Sorry for the late reply, yes, UEFI + Secure Boot. I installed the server on 10 Oct 2023 at 16:09:27, with kernel 6.2.16-3 and Debian 12.2.0-14 (that's info from the oldest boot log), so it was the Proxmox 8.0 ISO. There may be broken text in the term.log -> that was the ncurses window, which asks...
  2. Proxmox Virtual Environment 9.0 released!

    @t.lamprecht ./find-old-packages2.sh cpufrequtils libaio1:amd64 libapt-pkg6.0:amd64 libassuan0:amd64 libcbor0.8:amd64 libcpufreq0 libflac12:amd64 libfmt9:amd64 libfuse3-3:amd64 libglib2.0-0:amd64 libglusterd0:amd64 libicu72:amd64 libldap-2.5-0:amd64 libmagic1:amd64 libpcre3:amd64...
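
    (The script itself isn't shown in the snippet; as a hypothetical equivalent, apt's pattern matching can list installed packages that no longer exist in any configured repository after a major upgrade:)

      # list installed packages with no candidate in the current repos
      apt list '?obsolete'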
  3. [PVE-9 BETA] PVE Nested Upgrade from 8 to 9 breaks boot

    Hi Thomas, sadly it's still not fixed for btrfs root systems. I'm just upgrading all servers to PVE 9, and only those on btrfs fail. Cheers
  4. [PVE-9 BETA] PVE Nested Upgrade from 8 to 9 breaks boot

    Okay, I have a solution; it was actually pretty easy: 1. Download the Proxmox VE 9 ISO (or 8) and mount it either with your KVM/IPMI or make a USB drive. 2. Boot from it and select "Install Proxmox VE (Graphical, Debug Mode)" -> press CTRL+D when it asks... 3. Check your partitions: lsblk -o...
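
    (A minimal sketch of the rescue flow those steps describe, assuming a UEFI system with GRUB and a btrfs root; all device names are placeholders:)

      # identify the root filesystem and the EFI system partition
      lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT
      # mount the installed system and chroot into it
      mount /dev/nvme0n1p3 /mnt
      mount /dev/nvme0n1p2 /mnt/boot/efi
      for d in dev proc sys; do mount --bind /$d /mnt/$d; done
      chroot /mnt
      # one plausible repair path: reinstall GRUB for UEFI and regenerate its config
      grub-install --target=x86_64-efi --efi-directory=/boot/efi
      update-grub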
  5. [PVE-9 BETA] PVE Nested Upgrade from 8 to 9 breaks boot

    Same with btrfs as the root filesystem. I blacklist the zfs modules by default, since I don't need ZFS on my home servers... I'm on the Minisforum MS-02, which sadly doesn't have legacy boot / CSM. Now I have to find out how I can fix the bootloader :-)
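
    (Module blacklisting of that kind is usually a modprobe.d drop-in; a minimal sketch, with the file name being arbitrary:)

      # stop the ZFS module from auto-loading
      echo "blacklist zfs" > /etc/modprobe.d/blacklist-zfs.conf
      # rebuild the initramfs so the blacklist also applies at early boot
      update-initramfs -u -k all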
  6. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    That's sad; then I have to dig further into the issue :-)
  7. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    @t.lamprecht Can you maybe include this fix in the 6.14 kernel: https://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git/commit/?id=bc0b828ef6e5 I stumbled into this on one server, with an I219-LM that I pass through to OPNsense (it's a Hetzner server). It's only an issue on kernel 6.14...
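
    (For context, the passthrough mentioned here is the usual PCI hostpci mapping; a sketch in which the VM ID and the NIC's PCI address are placeholders:)

      # hand the onboard NIC to the firewall VM as a PCIe device (needs a q35 machine type)
      qm set 100 --hostpci0 0000:00:1f.6,pcie=1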
  8. [SOLVED] Can't mount Hetzner Storage Box CIFS

    This thread is a bit old, but I found out that the issue with the Hetzner Storage Box is not Proxmox or any Samba version... it's Hetzner's firewall! Luckily I have 2 internet providers at home, cable (Vodafone) and DSL (Telekom). Over Telekom, I can access the Hetzner box straight from the...
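
    (An easy way to test such a mount from each uplink is a plain CIFS mount; a sketch where the username, share, and mount point are placeholders, and mount.cifs prompts for the password:)

      # try mounting the storage box over SMB3; failure on one uplink but not
      # the other points at filtering on the path, not at the client
      mount -t cifs //u123456.your-storagebox.de/backup /mnt/storagebox \
          -o username=u123456,vers=3.0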
  9. AMD : BSOD unsupported processor since Windows build 26100.4202+ ( update kb5060842 + its preview kb5058499 )

    Maybe this helps; I tried it on all my AMD servers. Since others confirmed only AMD is affected, I didn't test Intel servers, but I can (I have a lot of them). Tested on W11 26100.4351 and W11 26200.5651: Ryzen 7 5800X -> crash, Ryzen 9 9955HX -> crash, EPYC 9374F -> crash, Ryzen 7 PRO 8700GE -> crash...
  10. [SOLVED] MS-A2 9955hx -> Host Type CPU doesnt work for Windows VM's

    Thanks, this was indeed my issue :-( args: -cpu 'host,arch_capabilities=off' fixed it!
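
    (That args line lives in the VM configuration; a minimal sketch of applying it, with VM ID 100 as a placeholder:)

      # add the custom CPU flags to the VM (equivalent to an "args:" line
      # in /etc/pve/qemu-server/100.conf)
      qm set 100 --args "-cpu 'host,arch_capabilities=off'"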
  11. [SOLVED] MS-A2 9955hx -> Host Type CPU doesnt work for Windows VM's

    EDIT: Duplicate thread! Correct thread: https://forum.proxmox.com/threads/amd-bsod-unsupported-processor-since-windows-build-26100-4202-update-kb5060842-its-preview-kb5058499.166828/page-3#post-779003 ------ I have a new server (14 servers in total now). It's the Minisforum MS-A2 with the...
  12. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    1. AMD EPYC 9374F -> 6.14.5-1-bpo12-pve -> no issues
    2. AMD EPYC 9374F -> 6.14.5-1-bpo12-pve -> no issues
    3. AMD Ryzen 7 5800X -> 6.14.4-1-pve -> no issues (but IOMMU group numbers changed, had to adjust)
    4. Intel(R) Core(TM) i3-1315U -> 6.14.4-1-pve -> no issues
    5. Intel(R) Core(TM) i3-1115G4...
  13. PROXMOX ON A BAREMETAL SERVER WITH 18 x 3.84TB NVMe (ZFS?)

    You can't place dual controllers in 2 separate buildings. Blades at least have a separate mainboard, separate memory, and separate CPUs, compared to 2 controllers in the same node. I got hung up on the mechanical layout, because that's how HA is handled in companies. TrueNAS Enterprise is probably...
  14. PROXMOX ON A BAREMETAL SERVER WITH 18 x 3.84TB NVMe (ZFS?)

    We bought JovianDSS before 2022 and compared it against TrueNAS. At that time there was definitely no true HA on TrueNAS. If things had been different then, we would have gone TrueNAS 100%, especially because it's surely cheaper. And dual controllers are not HA for me. HA for me is at least...
  15. QEMU 9.2 available on pvetest and pve-no-subscription as of now

    Maybe I missed something, but what is 9.2+pve1?
  16. PROXMOX ON A BAREMETAL SERVER WITH 18 x 3.84TB NVMe (ZFS?)

    Just read it; they talk about resilient replication, so I think it's still the same periodic and simple zfs send/receive. Even if it's done every 5 seconds, it's not the same. On JovianDSS (ZFS) and Synology (btrfs), every block change gets instantly synced over a dedicated interface to the other...
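
    (A minimal sketch of the periodic model being contrasted here, assuming a dataset tank/vmdata, a common snapshot @prev on both sides, and a reachable host named standby:)

      # asynchronous replication: snapshot, then ship only the delta
      zfs snapshot tank/vmdata@now
      zfs send -i tank/vmdata@prev tank/vmdata@now | ssh standby zfs recv -F tank/vmdata
      # anything written after @now is lost if the primary dies before the next cycle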
  17. PROXMOX ON A BAREMETAL SERVER WITH 18 x 3.84TB NVMe (ZFS?)

    I know; we went through that decision some years ago. We wanted a ZFS-over-iSCSI solution that was able to run in HA. At that time we used Synology HA with iSCSI for the ESXi servers and needed to replace it, because Synology iSCSI had some issues. In the end we took JovianDSS, because it was the...
  18. PROXMOX ON A BAREMETAL SERVER WITH 18 x 3.84TB NVMe (ZFS?)

    And Synology is not garbage at all; you're probably only aware of the home stuff. There are RS servers, which run in proper high availability and have Xeons with ECC memory. In addition, there is no other solution I'm aware of that features as fine-grained Samba user & group...
  19. PROXMOX ON A BAREMETAL SERVER WITH 18 x 3.84TB NVMe (ZFS?)

    1. 12-15 Proxmox servers alone.
    2. There is no word about metadata off; I tested with metadata all and metadata only.
    3. Block size on the disks is 4k native; on the 7450/7500/CM7-R you can also reformat the disks to different block sizes (see the sketch below).
    4. Others didn't point out anything; they were just too lazy to read...
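
    (Point 3 refers to NVMe LBA reformatting; a sketch with nvme-cli, where the namespace and the LBA format index are placeholders, and the format step destroys all data on the namespace:)

      # show the LBA formats the drive supports (look for a 4096-byte entry)
      nvme id-ns /dev/nvme0n1
      # switch the namespace to LBA format index 1 -- WIPES THE NAMESPACE
      nvme format /dev/nvme0n1 --lbaf=1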