I have said Gigabyte board running under Proxmox. Nothing in terms of GPU, but it has 2 x 10GBit. I'm actually quite happy with it, although the Proxmox installation on the built-in eMMC constantly caused (causes?) problems. Debian first, then Proxmox on top: that worked.
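In case anyone wants to go the same Debian-then-Proxmox route: the official "Install Proxmox VE on Debian" wiki article boils down to roughly the following. This is just a sketch for PVE 7 on Bullseye; adjust the repository and release names for your version.

# add the Proxmox VE no-subscription repository on a plain Debian Bullseye
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
apt update && apt full-upgrade
# pulls in the PVE kernel and all services
apt install proxmox-ve postfix open-iscsi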
Regarding service I can only...
My heartfelt KNX condolences.
"Heutzutage" kannst Du von der Leistungsfähigkeit eine Feld-Wald-Wiesen Hardware nehmen, das reicht. Wie mein Vorschreiber bereits sagte: Es kommt darauf an, was Du willst. Da es ja um Hausautomation geht sollte ja Langlebigkeit auf dem Plan stehen, meistens halten...
Welcome to the club.
Unfortunately the Proxmox wiki has two PCI passthrough guides; I would consider this one the "newer" of the two:
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
On the host I would blacklist the i915 kernel module; supposedly it's uncool if the host is still on a...
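The blacklisting itself is a one-liner; a minimal sketch, assuming you don't need the iGPU on the host at all:

# keep the host kernel from grabbing the Intel iGPU
echo "blacklist i915" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u
# takes effect after the next reboot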
My 2 cents would be that the opposite is the case. Server hardware design principles target endurance. You replace servers because they no longer fulfill your performance or security (CPU bugs) requirements, not because they break down. Sure, a hard disk or PSU may break down, and you change these -...
Under Options I have, for one VM:
So I would expect it to boot from the CD again if the hard disk doesn't come up. Nothing. Nada.
It hangs on the console with "Booting from Hard Disk..." and waits for Godot.
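For anyone debugging the same thing from the shell: on current PVE the boot order can be inspected and set with qm. VMID 100 and the device names are examples, not my actual config:

# show what the VM is currently set to boot from
qm config 100 | grep '^boot'
# disk first, CD drive as fallback (note the quotes around the semicolon)
qm set 100 --boot 'order=scsi0;ide2'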
This is simply not true. https://en.wikipedia.org/wiki/Haswell_(microarchitecture)
For starters, Haswell is a true successor (tock) to Ivy Bridge, which is a successor (tick) to Sandy Bridge. Second, while there is no single "the Haswell" (because it's merely an Intel code name), this...
The Xeon v3 is the representative of the Haswell architecture. What I can imagine happening here is that some features might be turned off by the BIOS (just a hypothesis), so it looks like "Haswell minus X" to QEMU. I can look into it once I fix my other, more pressing config problems.
It seems "my" problem is discussed in detail here:
https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/issues/100 "KVM Support on proxmox"
and while I was successful in setting the PCIe bits for atomics, I'm stuck for now on hacking the amdgpu kernel module.
It seems the problem is...
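For reference, whether the PCIe AtomicOps bits are actually set can be checked with lspci; the device address below is an example:

# look for AtomicOpsCap / AtomicOpsCtl in the capability dump
lspci -vvv -s 03:00.0 | grep -i atomic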
So, trying ROCm with GPU passthrough and 2 x AMD WX4100 (Baffin, Polaris11).
I should mention that GPU passthrough works perfectly with an Nvidia 1050 Ti.
ROCm ofc wants more HW features, so OpenCL is no dice; dmesg contains:
[ 9.822248] kfd kfd: amdgpu: skipped device 1002:67e3, PCI...
I have to necromance this for Proxmox 7.0.
A VM with its CPU type set to Haswell will not boot:
kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.hle [bit 4]
kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.rtm [bit 11]
The system is ofc a Haswell:
model name ...
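For what it's worth: hle and rtm are the TSX bits, which Intel disabled on Haswell via later microcode updates, so the host genuinely no longer exposes them. QEMU ships a Haswell-noTSX CPU model for exactly this case; setting it should look something like this (the VMID is an example):

qm set 100 --cpu Haswell-noTSX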
You are right, giving it different local storage worked.
OK, sort of: it transformed a qcow2 into a raw, but that's definitely a different issue from the topic of this thread, so feel free to carve that out of here.
The file seems OK on both the source and the target machine:
# zstd -d --check -o /dev/null vzdump-qemu-113-2021_08_16-18_45_55.vma.zst
vzdump-qemu-113-2021_08_16-18_45_55.vma.zst: 9667888640 bytes
Show Info works on PVE 6.4; on 7.0 it errors - see topic.
Restore fails completely:
restore vma archive: zstd -q...
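If anyone wants to narrow down whether the archive or the tooling is at fault: the vma utility shipped with PVE can verify the archive and dump the embedded config after decompressing it manually. A sketch; the paths are examples:

# decompress the backup, then check the VMA container itself
zstd -d vzdump-qemu-113-2021_08_16-18_45_55.vma.zst -o /tmp/113.vma
vma verify -v /tmp/113.vma
# print the embedded VM config - presumably what "Show Configuration" reads
vma config /tmp/113.vma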
I'll take the liberty of re-using this thread. Same problem (error message when trying to show the configuration); the zstd check says all is OK:
# zstd --check vzdump-qemu-113-2021_08_16-18_45_55.vma.zst
vzdump-qemu-113-2021_08_16-18_45_55.vma.zst : 99.55% (5273936590 => 5250305290 bytes...
Also solved: /etc/hostname had remained "pve". After changing that, many things start to work automagically.
It's a little unfortunate - especially since the correct name was given during the installation.
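For anyone with the same symptom, the fix is basically the following (the hostname is an example; /etc/hosts has to match, see the post further down):

hostnamectl set-hostname rigel
# or the low-tech way, plus a reboot:
echo rigel > /etc/hostname

Note that on a node that already carries guests this is more involved, since PVE keeps the guest configs under /etc/pve/nodes/<nodename>.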
I've solved it now.
wipefs works, but Proxmox 7 only sees the correct status after a reboot.
By "wipefs works" I mean both the wipefs issued via the GUI and the one from the command line. Just the GUI display is broken.
Same problem here. Installed from your official ISO on my machine at home.
During installation I gave it the name rigel.home.org, yet it was still pve.localdomain in /etc/hosts.
So I changed /etc/hosts to:
root@pve:/etc# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.2.20 rigel.home.org...
Yeah. It's simple: the Proxmox GUI is buggy - it does not show the correct status.
Moreover, Proxmox stores that wrong (not updated) status somewhere and therefore does not treat the disks correctly (i.e. as free for other use).
/dev/sda was already wiped; wiping /dev/sdb was no problem:
DEVICE OFFSET...
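For completeness, the command-line side of it; the device name is an example, and --all removes every signature, so it is destructive:

# read-only: list the signatures wipefs would remove
wipefs /dev/sdb
# actually remove them, including stale RAID metadata (destructive!)
wipefs --all /dev/sdb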
Nope. Nope. Nope.
PVE7, fresh install. I put some disks from an old VMware installation into it and want to create a ZFS pool on them.
ZFS does not see any free disks, because all of them are marked as ddf_raid_member.
"Wipe disk" in the GUI does nothing. And please, no "we decided yadda yadda"... I can click "wipe...
ZFS creation is a blind shot in the dark:
Ok - shell, here I come:
# /sbin/zpool create -o 'ashift=12' space2 raidz /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA21BN0X /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA242EM2 /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA242D0Q...
Hmm...
I know what's puzzling me: hosts have a "Note" entry in the tree but no "notes" widget in the summary.
VMs are the other way around. Not trying to be nitpicky, but that might deserve some consistency love.