I'll give you some options:
OPNsense
pfSense
VyOS
Proxmox!
A Debian box is quite capable of being a very decent router, and you already have one: Proxmox. However, that is one for the likes of me to set up. What you probably need is something...
I thought I installed my 2nd server the same way I did my 1st one, but I might have chosen ZFS instead of LVM, even though the concept of thin provisioning sounded interesting to me.
This is the output of the two commands:
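For anyone else unsure which layout their installer picked, a quick sketch of commands that reveal whether the root storage ended up on ZFS or LVM(-thin); the storage names shown by `pvesm status` (e.g. local-zfs, local-lvm) are the installer defaults and may differ on your system:

```
# ZFS install: a root pool (typically "rpool") shows up here
zfs list

# LVM(-thin) install: volume groups and thin pools show up here
vgs
lvs

# Proxmox's own view of configured storages and their types
pvesm status
```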
root@pve-universe-server:~#...
Hi,
It would be nice if there were an option to prevent IPv6 link-local addresses from appearing on the bridges that get created. In my setup I've created a VNet for each VLAN I need, but I noticed each bridge has an IPv6 link-local address, which means the host is...
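As a possible workaround in the meantime, link-local assignment can be suppressed per interface via sysctl; a sketch, assuming a bridge named vmbr0 (adjust the interface name and file name to your VNets):

```
# /etc/sysctl.d/90-no-ipv6-ll.conf (hypothetical filename)

# addr_gen_mode = 1: do not auto-generate a link-local address on this interface
net.ipv6.conf.vmbr0.addr_gen_mode = 1

# or, more drastically, disable IPv6 on the interface entirely
# net.ipv6.conf.vmbr0.disable_ipv6 = 1
```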
Thanks. It seems the problem is described in this "tip".
I was trying to keep the Ceph private network isolated so that it is not routed. It looks like both public networks need to be routed and visible to each other.
In that case, I will choose...
New to virtualization here haha, so managing the nested nature of this project has been hard.
Reason for doing it with a VM instead of an LXC: I need access to stable proprietary AMF drivers for AMD VCN hardware encoding in another service (not...
I use Proxmox. I have four virtual machines on SATA SSD drives (the machine's motherboard doesn't support RAID). They were running in a ZFS pool. Using a ZFS data recovery program, I recovered the VM's disk image file, but without its extension. Is it...
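The format of a recovered image can be identified from its first bytes even without an extension; qcow2 files start with the four-byte magic "QFI" followed by 0xfb. A small self-contained demonstration (the filename is a hypothetical stand-in for the recovered file; on a real host, `qemu-img info <file>` reports the format directly):

```shell
# write a stand-in file that begins with the qcow2 magic bytes
# (\373 is octal for 0xfb)
printf 'QFI\373' > /tmp/recovered-disk

# inspect the first four bytes; a qcow2 image prints: 51 46 49 fb
head -c 4 /tmp/recovered-disk | od -An -tx1
```

If the magic matches, renaming the file with a .qcow2 extension and pointing the VM config at it is usually enough.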
Hi All
Just wanted to update my findings on what worked for me.
Below are the steps taken to recover a RHEL 9 virtual machine running on Hyper-V 2019 into Proxmox VE 9:
1. Copied the .vhdx file to /var/lib/vz/images
2. Converted the image to qcow2:
qemu-img...
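For reference, the conversion and import steps above can be sketched like this; the VM ID, filenames, and storage name are hypothetical placeholders:

```
# convert the copied .vhdx to qcow2
qemu-img convert -f vhdx -O qcow2 /var/lib/vz/images/disk.vhdx /var/lib/vz/images/disk.qcow2

# attach the converted image to the VM; it then shows up as an unused disk
qm importdisk 100 /var/lib/vz/images/disk.qcow2 local-lvm
```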
There aren't any specific ones for the Intel cards I use. I don't have remote access to my computer right now, but I can get all the information on how mine is set up later today.
When you had the Code 43 error in Windows, what did the Windows Event Viewer say...
Ceph allows multiple public networks. Just make sure your monitors exist on whatever public network(s) you define.
see https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/
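A sketch of what that looks like in ceph.conf (the subnets are hypothetical): public_network accepts a comma-separated list of networks.

```
[global]
    public_network  = 192.168.10.0/24, 192.168.20.0/24
    cluster_network = 10.10.10.0/24
```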
Sorry for the late reply, but I fixed the problem by reinstalling the whole system from scratch. Hopefully you have proper backups so the reinstallation can run smoothly.
Sounds good, thanks for reporting back!
I'd generally recommend a dedicated primary network for corosync, to prevent other traffic from driving up corosync latencies, see [1]. corosync can handle multiple redundant networks itself [2], so...
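For reference, redundant links are declared per node in /etc/pve/corosync.conf; a sketch for one node with two hypothetical networks, where ring0 is the dedicated corosync network and ring1 serves as fallback:

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.1      # dedicated corosync network
    ring1_addr: 192.168.1.1   # redundant fallback link
  }
}
```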
Just some additional info: it seems the problem is related to the kernel version only. I have very similar issues, but:
The machine is new (2 months old)
Had no problems after installing with PVE 8 (kernel 6.8.12-17-pve)
After 2 weeks, upgraded to 9.1...
Well, live migration should generally always work. With non-shared storage it will also transfer the guests' disks, which can take a long time.
So if you followed the multipath guide and still have some issues, the question would be...
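For completeness, a live migration that includes local disks can be sketched as follows (the VM ID and target node name are hypothetical):

```
# online migration, transferring the VM's local disks to the target node
qm migrate 100 pve2 --online --with-local-disks
```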