Absolutely, what log file would be helpful? Here's resources.cfg
:~# cat /etc/pve/ha/resources.cfg
ct: 100
    group HA
    state started

ct: 102
    group HA
    state started

ct: 104
    group HA
    state started

vm: 103
    group HA
    state started

vm: 106
    group HA
    state...
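For reference, the HA state itself can be checked with the stock tooling, e.g.:

:~# ha-manager status
:~# ha-manager config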
Yeah, that's the thing: they were already configured for HA, but when that node went offline they disappeared from the ha-manager status list.
I got the hardware fixed yesterday on the crashed node, booted it up, and the VMs came back online. They then appeared back in ha-manager status without...
Interestingly, from another node I can see the conf files for the machines that did not move in the directory of the offline node. These machines did not have any local resources and use shared Ceph storage for their disks.
In the ha-manager status above, they don't appear to be listed...
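For context, guest configs live under /etc/pve/nodes/<nodename>/ in pmxcfs, which is why they're still visible from the other nodes; e.g. (placeholder node name and IDs, not my actual ones):

:~# ls /etc/pve/nodes/<offline-node>/qemu-server/
<vmid>.conf
:~# ls /etc/pve/nodes/<offline-node>/lxc/
<ctid>.conf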
I've got a 5-node setup with Ceph running on SSDs. VMs and containers have been set up for HA.
We just had a node crash due to hardware, but the machines that were on that node are not failing over. In the GUI they have a grey question mark next to them, and clicking on them shows "no route to host...
No SMART errors, and these numbers seem to be all over the place... 1-2% wearout on each drive, nearly brand new.
osd.0: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 2.2047925269999999,
"bytes_per_sec": 487003566.48115581,
"iops": 116.1106983378305
}
osd.1: {...
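For reference, those figures come from the built-in per-OSD benchmark, i.e. something like the following, which by default writes 1 GiB in 4 MiB blocks (matching the bytes_written/blocksize above):

:~# ceph tell osd.0 bench
:~# ceph tell osd.1 bench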
Fixed up the cluster and public network, separating them. Both are on 40GbE Mellanox. Reran tests with fio - these numbers are even worse.
fio: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.16
Starting 1 process...
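That header is a plain 4k sequential write at queue depth 1; roughly this invocation (the size and target path here are placeholders, not what I actually used):

:~# fio --name=test --ioengine=libaio --rw=write --bs=4k --iodepth=1 --numjobs=1 --size=1G --filename=/mnt/ceph-test/fio.dat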
I've recently set up a cluster using the latest Proxmox 7 ISO. One of the OS disks, which use ZFS, went bad.
:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
WARN: /dev/disk/by-uuid/0E18-2679 does not exist -...
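For anyone who hits the same thing: the usual cleanup after replacing the failed disk (sketch only, /dev/sdX2 stands in for the new disk's ESP partition) is to re-initialise the boot partition and drop the stale UUID:

:~# proxmox-boot-tool format /dev/sdX2
:~# proxmox-boot-tool init /dev/sdX2
:~# proxmox-boot-tool clean
:~# proxmox-boot-tool status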
The best route is using the API. If you are using something like WHMCS, there are modules to do all of this automatically. For example, ModulesGarden has a few that do exactly what you describe.
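If you end up rolling your own integration instead, the PVE API is easy to script against; a hedged sketch using an API token (host, token name and secret below are placeholders):

# list VMs on a node using an API token (create one under Datacenter -> Permissions -> API Tokens)
:~# curl -k -H "Authorization: PVEAPIToken=provisioner@pve!whmcs=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" https://pve.example.com:8006/api2/json/nodes/<node>/qemu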
Here's the current network interface config on one of the nodes. I'd actually like the heavy lifting for Ceph to go over the 10.3.32.0/24 network and utilize 10.3.34.0/24 for our front-end mgmt/client network and heartbeat.
That would mean keeping the cluster network as 10.3.32. and moving public to...
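So roughly, the target in /etc/pve/ceph.conf would look like this (just the network lines sketched out; the monitors would still need to be reachable on the new public network):

[global]
    cluster_network = 10.3.32.0/24
    public_network = 10.3.34.0/24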
Having issues trying to figure out where the issue lies with performance here.
I currently have 5 nodes, each containing 5 Samsung EVO SSDs (I know consumer drives are not the best, but I still wouldn't expect performance to be this low).
The Ceph public and cluster networks are using...
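For pool-level numbers, independent of any single VM, the stock rados bench is one option (pool name is a placeholder):

:~# rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
:~# rados bench -p testpool 60 seq
:~# rados -p testpool cleanup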
Having an issue figuring out how to authenticate to the PMG API via PHP.
I was able to generate a new ticket just fine, but passing in the cookie and header seems to be a problem for follow-up requests. The API tells me:
HTTP/1.1 401 No ticket
$pmgToken = Http::withOptions([
'verify' => false...
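For reference, the raw flow I'm trying to reproduce from PHP is just this (as I understand the PMG API; host and credentials are placeholders, and I believe the cookie name is PMGAuthCookie):

# 1) get a ticket (response includes the ticket and a CSRFPreventionToken)
:~# curl -k -d "username=root@pam" -d "password=yourpass" https://pmg.example.com:8006/api2/json/access/ticket

# 2) reuse it on follow-up requests; the CSRF header is only required for POST/PUT/DELETE
:~# curl -k -b "PMGAuthCookie=<ticket>" -H "CSRFPreventionToken: <csrf-token>" https://pmg.example.com:8006/api2/json/<path>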
I noticed after upgrading that I cannot get SMART values to load on a few servers I have, including my home server. smartctl is working and has data, but the web GUI never loads the values. It's just stuck on "Loading..."
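As a workaround, the same data the GUI shows can be pulled via the CLI/API (node name and disk are placeholders):

:~# smartctl -a /dev/sda
:~# pvesh get /nodes/<nodename>/disks/smart --disk /dev/sda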
Yeah, I run into this a lot as well. Backups with Proxmox's built-in utilities have been disaster-recovery only, due to the potentially large sizes in the environment. We utilize different backup software within each VM in order to grab individual files when needed. Not a problem with containers...
Finally got it..
root@pve:~# ls /sys/bus/pci/devices/0000\:00\:02.0/mdev_supported_types/
i915-GVTg_V5_4 i915-GVTg_V5_8
I also installed i965-va-driver via apt, but I'm not sure if that did anything.
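From there, as I understand it, one of those types gets handed to a VM as a mediated device, which in /etc/pve/qemu-server/<vmid>.conf ends up looking something like this (VM ID and chosen type are just examples):

hostpci0: 00:02.0,mdev=i915-GVTg_V5_4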
Hello,
I've been attempting to get GVT-g working on this board with an Intel E-2146G. I'm able to get IOMMU groups listed, but I'm not seeing the Intel iGPU anywhere. The Proxmox GUI is also not showing mediated devices. I know it should work somehow because I see others using this setup with Unraid...
I have an ASRock C246 with Coffee Lake as well, but I'm unable to even see any IOMMU groups in Proxmox. Can you share what you have in GRUB and /etc/modules to get this working? I've tried seemingly every combination in the documentation.
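For reference, the combination usually described for GVT-g on Coffee Lake is roughly the following (sketch only; exact options can differ per board and kernel):

/etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"

/etc/modules:
vfio
vfio_iommu_type1
vfio_pci
kvmgt

followed by update-grub, update-initramfs -u -k all, and a reboot.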
Is there any sort of replication option in Proxmox that would mimic the functionality in Hyper-V? Would love to be able to enable replication for a VM from one Proxmox cluster to another across geographical locations and simply flip a switch to do a failover.