Hi,
I'm running a three-node hyperconverged cluster. I'd been running Proxmox 7 for some time and decided to take the leap to Proxmox 8. Since then, I've noticed some disk-related migration issues. Below is an example of one of the errors I got when trying to migrate. The migration actually...
I'm doing this as a proof of concept. I've got a single Proxmox server with a single NIC, connected to a router. As an example, the router's gateway is 190.150.165.1, and I have access to IPs 190.150.165.35 and 190.150.165.41-45 (none of these are real IPs).
Is it possible to configure a vmbr0...
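Something like the sketch below is what I have in mind, assuming a plain bridged setup where the host takes the .35 address and the guests use .41-45 directly on the bridge (the interface name eno1 and the /24 mask are assumptions on my part, not my real config):

# /etc/network/interfaces - sketch only, interface name and mask are placeholders
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 190.150.165.35/24
        gateway 190.150.165.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0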
I'd like to do a live restore (as a new VM alongside the existing) but I don't want the VM to instantly power on.
Before booting I want to change the VLAN it's sat on so that it doesn't conflict with the existing machine.
Is this possible to achieve?
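For comparison, a plain (non-live) restore already lets me do roughly this - restore to a new VMID, retag the NIC, then boot - and I'd like the same flow but with a live restore (the VMID, archive path, bridge and VLAN tag below are only examples):

# restore the backup as a new VM; a plain restore does not start it
qmrestore /mnt/pve/backups/vzdump-qemu-100-example.vma.zst 9100 --unique
# move the NIC onto a different VLAN before first boot
# (re-specifying net0 generates a new MAC unless one is given explicitly)
qm set 9100 --net0 virtio,bridge=vmbr0,tag=30
qm start 9100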
Thanks,
Chris.
Just to update for anyone who comes across this thread: despite our fibre passing light tests, and running at full speed when not aggregated, it turns out it was indeed a faulty optic and was fixed by replacing it. We're now back to full 20Gbit/s bonded throughput.
Yep, that's fair enough. It breaks autocomplete if it doesn't have access to what you are calling. For me it's just personal preference; it makes me think twice about the command I'm entering. For example, if I accidentally typed rm -rf / as my own user, worse things have happened. Whereas if I...
Hi, the new node maintenance mode is a great addition to Proxmox, thank you!
Running sudo ha-manager crm-command node-maintenance enable pve01 worked perfectly.
However, it'd be really nice to be able to manage maintenance state directly from Datacenter > HA. Is this something that's in the...
Hi,
I've suddenly started getting crashes on the active Ceph manager node. The cause looks to be a NoOrchestrator exception. We upgraded to Quincy at the weekend, but we've never had any orchestrator modules enabled.
root@pve01:# ceph crash info...
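For anyone hitting the same thing, I've just been using the standard crash and mgr tooling to dig into it (the crash ID is a placeholder):

ceph crash ls              # list recent crash reports
ceph crash info <crash-id> # full backtrace for one report
ceph mgr module ls         # check which mgr modules are actually enabled
ceph crash archive-all     # clear the health warning once done investigating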
Thanks @spirit
I'd be inclined to say the same regarding direct=0 / direct=1, but this is what I'm getting below...
root@testing:# cat fio-rand-read.fio
; fio-rand-read.job for fiotest
[global]
name=fio-rand-read
filename=fio-rand-read
rw=randread
bs=4K
direct=1
numjobs=1
time_based
runtime=900...
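For completeness, the job is launched with nothing more than:

fio fio-rand-read.fio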
I've been testing the various disk cache modes with Ceph RBD. With the writethrough cache mode I'm getting very impressive figures. On a Linux machine, I'm running fio tests with the config below to test random read speeds -
[global]
name=fio-rand-read
filename=fio-rand-read
rw=randread
bs=4K...
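For reference, the cache mode itself is just set on the VM's disk; as a sketch, with a placeholder VMID, storage and volume name:

# keep any other drive options you already have in the property string
qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=writethrough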
Thanks for your reply Aaron.
The issue is still present, so we have taken the port out of the LAG, which resolves everything. We can't work out exactly what the cause is. Out of the LAG, the interface in question can transfer 9GBit/s (on a 10GBit link) with no problem. As soon as...
Hi,
We have a three node setup, each of which is running Proxmox & Ceph.
Each node has four 10GBit ports: two for Ceph and two for public/private networking.
There are also two 1GBit ports used purely for Corosync.
Link aggregation for Ceph works great, Ceph is nice and...
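For reference, this is roughly what a bonded Ceph link looks like in /etc/network/interfaces (802.3ad here, with placeholder interface names and subnet, as an example rather than our exact config):

auto bond0
iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp65s0f0 enp65s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4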
Hi,
I raised this topic as I was having issues backing up a large 3.6TB guest - https://forum.proxmox.com/threads/cannot-backup-3-6tb-ceph-guest-iowait-through-the-roof.109232/#post-473772
Since switching the drives over to run via KRBD, I've been able to run unthrottled backups without hitting...
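For anyone wanting to try the same, flipping an existing RBD storage over to KRBD is a one-liner (the storage ID below is a placeholder); as far as I can tell the change only takes effect for a disk once its VM has been fully stopped and started again:

pvesm set ceph-vm --krbd 1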