Update: after a "low-level" format everything works again as expected, even after re-partitioning and ZFS formatting:
nvme format -s1 /dev/nvme0
sgdisk -n 0:0:+1650GiB -t 0:bf01 /dev/nvme0n1
sgdisk -n 0:0:+10GiB -t 0:bf01 -c 0:SLOG1 /dev/nvme0n1
zpool create -o cachefile=none -o ashift=12 nvmep1...
I was able to grab a cheap M.2 PCIe adapter and I tested the drive in another PC.
hdparm is slow there too, even if faster (216 MB/s vs 124 MB/s... even if in a "degraded" bus state, OS Kubuntu 23.10 "beta"), and I've opened a ticket with Kingston support.
I'm really puzzled about this problem (hope an...
Hi, I have a U.2 Kingston DC1000M 1.92TB as secondary storage (ZFS-formatted) that I use "just in case I need speed".
When installed it was very fast, as expected (e.g. read performance 1454.99 MB/sec).
root@proxmm01:~# hdparm -tT /dev/nvme0n1
Timing cached reads: 20680 MB in...
I have (almost) the same problem with Kubuntu 23.04.
At boot I have:
root@kub2210:/home/marco# systemctl status spice-vdagent
* spice-vdagentd.service - Agent daemon for Spice guests
Loaded: loaded (/lib/systemd/system/spice-vdagentd.service; indirect; preset: enabled)
So your software is just using the Proxmox API, correct?
So it's like a "web browser" using the HTTP protocol to talk to a web server. If the web server is AGPL, the web browser can have whatever license it likes.
BUT if your web browser requires that you "enhance" or patch or extend the web server...
I don't think so. If you modify Proxmox (i.e. add some library / API / whatever) to interact with your PaaS services, THEN you have to provide the source code of THOSE parts of the software (the "Proxmox-side" ones) as well.
I think you have to consider AGPL as a "GPL not circumvented by Internet...
Well, it's not just that the "Samsung 970 Evo is not made for this", it's that the Samsung 970 Evo really should NOT be used for this: it has really poor sync performance, so if you use it as SLOG you get really bad performance.
And of course a SLOG does not improve non-sync (async) performance, nor bandwidth.
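To see why sync performance is the number that matters for a SLOG, a rough probe can be done with plain dd and oflag=sync, which forces every block to stable storage before the next one is written. This is only a sketch: the file path is an arbitrary choice of mine, and fio is the proper tool for rigorous numbers.

```shell
# Rough sync-write probe (sketch): with oflag=sync each 4K block must
# reach stable storage before the next write, roughly the pattern a
# SLOG device sees. /tmp/syncprobe.bin is an arbitrary path; point it
# at a file on the drive under test.
dd if=/dev/zero of=/tmp/syncprobe.bin bs=4k count=256 oflag=sync
rm -f /tmp/syncprobe.bin
```

Compare the MB/s dd reports with and without oflag=sync: on consumer drives the sync figure usually collapses, and that collapsed figure is what the pool inherits when the drive is used as SLOG.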
PBS is a good solution, but costly, and for a fast restore you also need a fast connection. With a local backup you simply move the backup storage to the new server and do a fast restore, so a local backup is not useless if you can move it :)
Forgive my ignorance in "high-level hardware": I only have experience with my self-built Proxmox server at home, and a friend who runs a small IT business has asked me about a Proxmox server for his customers.
My idea is that I will boot from two small DC SATA SSDs in ZFS RAID1, have a VM...
I was affected on 2021/10/11, found no info online, and had no time to investigate further; I was away from the server, and with the help of a colleague I had to reinstall Proxmox and lost all the VMs (wondering why Proxmox doesn't have something like a "reinstall the OS only" option, and Proxmox rescue mode did...
Have you tried the "no subscription" new kernel pve-kernel-5.13.19-2-pve and the KVM package pve-qemu-kvm 6.1.0-3? I think they address a lot of these issues, judging from other related posts in this forum.
I have a lot of "dormant" VMs that are not set to autorun: copies of the production ones, and a lot of other stuff that is NOT set to autorun. So a script to mass-deactivate/activate all VMs at startup is not the point, and that's why I need a "global" switch for when I just have to experiment with...
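For the record, a per-VM mass on/off can be approximated with a small loop over qm. A sketch only: qm list and qm set --onboot are the real Proxmox CLI calls, while the dry-run branch is my own addition so the loop can be tried on a non-Proxmox box. Note it blindly sets the flag everywhere and does not remember which VMs originally had onboot=1, so it is not the "global switch" asked for above.

```shell
# toggle_onboot 0|1 - flip the autostart flag on every VM (sketch).
toggle_onboot() {
    onboot="$1"   # 0 = disable autostart, 1 = re-enable
    if command -v qm >/dev/null 2>&1; then
        # Skip the header line of 'qm list'; first column is the VMID.
        qm list | awk 'NR>1 {print $1}' | while read -r vmid; do
            qm set "$vmid" --onboot "$onboot"
        done
    else
        # Dry run on machines without the Proxmox CLI.
        echo "would run: qm set <vmid> --onboot $onboot for each VM"
    fi
}

toggle_onboot 0   # before experimenting; toggle_onboot 1 to restore
```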
Reading this thread, but not being experienced in clusters, I'm really worried about a couple of points:
a) fencing should be different, i.e. a Proxmox node finds itself isolated, understands that it has to "suicide", then stops/kills all KVM processes (or LXC or whatever), logs the fact, syncs the...
Sigh, I did not notice the "menu" / "submenu" structure, my fault!
For the record, with my setup, the right line was:
Now it works as expected, thanks!
BTW, since proxmox-boot-tool is able to...
No luck, I've tested and failed with
grep menuentry /boot/grub/grub.cfg | grep 5.4.128
menuentry 'Proxmox VE GNU/Linux, with Linux 5.4.128-1-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5
Grub blue menu; as you can read in my post, I do have a "legacy boot" setup (no UEFI), and the proxmox-boot-tool status output also confirms this.
That's why I'm asking whether the proxmox-boot-tool commands for setting the boot kernel are UEFI-only, whether I'm doing something wrong, or whether there is a...
Is it me doing something wrong, or does my test machine (a VM with Proxmox installed on ZFS, but in BIOS mode) need different commands, and if so, which ones?
I.e. in this example I tried to boot 5.4.128-1-pve, but after the reboot the same 5.11.22-5-pve was used.
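In case it helps on legacy BIOS: grub (not proxmox-boot-tool) picks the kernel there, and it wants the full "submenu>menuentry" path via grub-set-default, with GRUB_DEFAULT=saved in /etc/default/grub. The sketch below builds that path from the quoted titles; the grub.cfg snippet is a made-up sample of mine, so on a real host read /boot/grub/grub.cfg instead.

```shell
# Made-up sample of the relevant /boot/grub/grub.cfg lines (legacy BIOS).
cat > /tmp/sample-grub.cfg <<'EOF'
submenu 'Advanced options for Proxmox VE GNU/Linux' $menuentry_id_option 'gnulinux-advanced-xyz' {
menuentry 'Proxmox VE GNU/Linux, with Linux 5.11.22-5-pve' --class proxmox {
}
menuentry 'Proxmox VE GNU/Linux, with Linux 5.4.128-1-pve' --class proxmox {
}
}
EOF

# Pull the quoted titles out and join them with '>' as grub expects.
sub=$(sed -n "s/^submenu '\([^']*\)'.*/\1/p" /tmp/sample-grub.cfg)
entry=$(sed -n "s/^menuentry '\([^']*5\.4\.128[^']*\)'.*/\1/p" /tmp/sample-grub.cfg)
echo "grub-set-default '$sub>$entry'"
# On the real machine: set GRUB_DEFAULT=saved in /etc/default/grub,
# run update-grub, then run the printed grub-set-default line and reboot.
rm -f /tmp/sample-grub.cfg
```

The echo prints the grub-set-default command instead of running it, so nothing on the box is touched until you run the printed line yourself.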