hmm
nc -vz 192.168.0.7 8006
Connection to 192.168.0.7 port 8006 [tcp/*] succeeded!
Seems the port is working. I'm on the same subnet on the same switch, and I can connect to the guests running on the host, but not to this IP and port... via a...
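A successful nc handshake only proves something accepted the TCP connection; it doesn't prove pveproxy is serving the GUI. A possible next step (the IP is the one from the post; adjust to your host):

```shell
# Does 8006 actually answer TLS? (-k: skip cert check, -v: show handshake)
curl -vk --max-time 5 https://192.168.0.7:8006/ -o /dev/null

# On the Proxmox host itself: is pveproxy up and listening on 8006?
ss -tlnp | grep 8006
systemctl status pveproxy --no-pager
```

If curl times out while nc succeeds, something in between (firewall, conntrack, a proxy) is accepting the SYN but dropping the payload.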
It really depends on whether you want the ZFS features or not: https://forum.proxmox.com/threads/fabu-this-is-just-a-small-setup-with-limited-resources-and-only-a-few-disks-should-i-use-zfs-at-all.160037/
The write-up does refer to...
Looking to create a cluster with two existing standalone nodes (with VMs on them) and a new third node. The second node has a TrueNAS VM using 4 PCI-passed-through disks. I am not entirely sure how to handle this when adding the node to the...
Hello everyone,
I have a small issue with RDP to Windows VMs (Server 2016/2019 and Windows 11).
The image freezes, but clicking and typing still work; if I close the RDP window and open the RDP connection again, the image is restored.
I tried to search...
Hi,
Does it make sense to use it if you only have one physical Proxmox server and one physical PBS server?
Are there any benefits in that scenario, and if so, which?
Is installation done via the .iso file?
Does it need its own dedicated physical...
You say RustFS is "still a bit beta", and that just isn't good enough for me.
I need mature software that I can deploy now and keep running for years.
On that front I fully trust Ceph to be stable and supported.
As a counterexample, the Garage...
Absolutely.
We don't use a bond, but we do use a bridge on the NIC for the storage net on the PVE node, with MTU 9000 (slightly anonymized):
# pve-node
auto ens27f1np1
iface ens27f1np1 inet manual
mtu 9000
auto vmbr1
iface vmbr1 inet static...
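The interfaces snippet above is cut off in the post. Purely as an illustrative sketch (the address and bridge options below are placeholders/assumptions, not taken from the post), a jumbo-frame bridge stanza in /etc/network/interfaces typically looks like:

```
auto ens27f1np1
iface ens27f1np1 inet manual
    mtu 9000

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.11/24    # placeholder storage-net address
    bridge-ports ens27f1np1
    bridge-stp off
    bridge-fd 0
    mtu 9000
```

Note that the MTU has to be set on both the physical port and the bridge, and every device on the storage net must agree on it, or you get silent fragmentation/blackholing.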
At first glance, Proxmox appears to offer substantial improvements over the old setup, with a few important observations:
1) According to Samsung’s official specifications this model is rated for 6800 MB/s sequential read and 2700 MB/s...
What exactly are you asking? Read speeds would be affected by the ZFS read cache (ARC), which lives in RAM; writes would be affected by the fact that you are mirroring, so a write can only complete as fast as the slowest drive responds, and that is based on an IOPS...
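The mirror-write point can be illustrated with a toy calculation (the latencies below are made-up numbers, not measurements from either server):

```shell
# Toy model: a ZFS mirror acknowledges a write only after every member
# has it, so per-op latency is bounded by the slowest drive.
lat_a_us=120   # hypothetical write latency of drive A, in microseconds
lat_b_us=350   # hypothetical write latency of drive B (the slower one)
slowest=$(( lat_a_us > lat_b_us ? lat_a_us : lat_b_us ))
iops=$(( 1000000 / slowest ))   # upper bound on synchronous write IOPS
echo "mirror write ceiling: ${iops} IOPS (set by the ${slowest} us drive)"
```

Reads are the opposite case: ZFS can serve them from either mirror side (or straight from ARC), which is why read benchmarks on a mirror can look much better than writes.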
No. Both servers are configured identically: 2x1.92 TB drives (mirrored, "RAID 1") for the operating system, and another 2x1.92 TB drives (mirrored, "RAID 1") for the VMs.
The new Linux kernel version was part of the 4.1 release; the issue went unnoticed during the extended public testing phase as well as in the Proxmox VE 9.1 release a week earlier, which shipped with the same kernel. Further, the...
Apparently there has been an issue with pvescheduler since yesterday:
root@nab91:~# journalctl -k | grep pvescheduler | more
Dec 04 14:15:00 nab91 kernel: pvescheduler[1055747]: segfault at 231 ip 0000000000000231 sp 00007ffebaa4f5c8 error 14...
- Why are identical tests on identical hardware producing significantly different results?
- Why do the Hyper-V benchmarks seem to align more closely with the manufacturer’s published performance? (It might simply be coincidence.)
- Why Hyper-V...