Hi,
You need to dump the vBIOS of the 1060 in hostpci0: 03:00,x-vga=1
Check the guide again; you will probably need a primary VGA card in order to reach the 1060's vBIOS..
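For reference, a rough sketch of the sysfs dump; the PCI address 0000:03:00.0 matches the hostpci0 line above, but verify yours with lspci first, and the output filename is just an example:

```shell
# Dump the ROM of the GPU at 0000:03:00.0 via sysfs (address taken from
# the hostpci0 line above; confirm yours with: lspci | grep -i nvidia)
DEV=/sys/bus/pci/devices/0000:03:00.0
if [ -e "$DEV/rom" ]; then
    echo 1 > "$DEV/rom"                    # make the ROM readable
    cat "$DEV/rom" > /root/gtx1060.rom     # dump it
    echo 0 > "$DEV/rom"                    # lock it again
    echo "dumped to /root/gtx1060.rom"
else
    echo "no such device; fix the PCI address first"
fi
```

You can then point the VM at the dump by copying it to /usr/share/kvm/ and adding romfile=gtx1060.rom to the hostpci0 line.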
No, HA on Proxmox only does "VM ping > VM dead > move to the next available node based on HA priority", not anything based on load... it'd be very neat though!
(you could actually do that with crontab and the pve shell, but it's a bit.. hackish)
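A minimal sketch of that hack, where the VM id 100, the target node pve2, and the load threshold are all made-up examples:

```shell
#!/bin/sh
# Naive "poor man's load balancer": if this node's 5-minute load average
# exceeds a threshold, live-migrate one VM away. All names/ids are examples.
THRESHOLD=8
LOAD=$(awk '{ printf "%d", $2 }' /proc/loadavg)   # 5-min average, truncated
if [ "$LOAD" -gt "$THRESHOLD" ] && command -v qm >/dev/null 2>&1; then
    qm migrate 100 pve2 --online    # push VM 100 to node pve2
fi
echo "load=$LOAD threshold=$THRESHOLD"
```

Run it from root's crontab every few minutes, e.g. `*/5 * * * * /root/loadbalance.sh`.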
About latency: there are quite a few reports on the forums of nodes auto-rebooting when corosync throws a hissy fit, so it would be in everyone's best interest if we could at least have an option to disable the auto-reboot.
Individual HDD activity in the RRD graphs would be my most wanted feature!
And an HA where I can define resource constraints; if a VM host is bogged down, for example, I'd want VMs to be moved..
Don't know if this is applicable, but I was running into "ceph pool too full" warnings for a long time even though I had plenty of space...
So what I ended up doing was increasing the number of PGs (pg_num) for that pool, and I went from 80% usage to about 35% in one night. I had turned on the...
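For anyone hitting the same thing, this is the rough shape of what I mean; the pool name and PG count below are placeholders, so size pg_num to your own OSD count:

```shell
POOL=mypool          # placeholder pool name
NEW_PGS=256          # placeholder; size this to your OSD count
if command -v ceph >/dev/null 2>&1; then
    ceph osd pool set "$POOL" pg_num  "$NEW_PGS"
    ceph osd pool set "$POOL" pgp_num "$NEW_PGS"   # older releases need this too
    ceph df                                        # watch %USED fall as data rebalances
else
    echo "run this on a ceph node"
fi
```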
I had crackling sound in Steam and bad FPS, so I tried the parsecgaming client instead, with the host audio card passed through, which made it a whole lot easier!
Hi,
Has anyone gotten this to work? I'm trying to convert my Ubuntu+samba+cephfs VM to an LXC container, but I could not install the cephfs client in it.
Thanks,
Alex
You could always install & configure a Samba server directly on Proxmox? That's the biggest advantage of running Proxmox: it's Debian with a custom kernel plus the Proxmox packages!
Good luck :)
PS. This does not apply, but I run cephfs on 3 nodes with an Ubuntu+samba+cephfs VM pointing at the shared...
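If you go the Samba-on-the-host route, it's plain Debian admin: apt install samba, then add a share section. Everything below (share name, path, user) is a hypothetical example, and the snippet writes to /tmp so you can review it before appending it to the real /etc/samba/smb.conf:

```shell
# Hypothetical share over a CephFS mountpoint; review before appending
# to /etc/samba/smb.conf on the Proxmox host.
cat > /tmp/smb-share.conf <<'EOF'
[cephshare]
    path = /mnt/pve/cephfs
    read only = no
    valid users = alex
EOF
# cat /tmp/smb-share.conf >> /etc/samba/smb.conf
# smbpasswd -a alex && systemctl restart smbd
echo "wrote /tmp/smb-share.conf"
```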
I have this issue from time to time as well. I will uninstall the qemu agent next time, since all it provides is a "nice shutdown", which already works on my other Linux VMs without the agent installed..
One more thing: try adding another graphics card to the server and make sure the BIOS uses that as the primary display device.
Also check this thread for the exact configuration I use:
https://forum.proxmox.com/threads/gpu-passthrough.46231/
One technique would be to stop OSDs 20 and 23 for a while and see whether any issues remain after that. Then re-enable them one at a time until you know which OSD is the culprit.
I would also head over to the Ceph mailing list armed with logs (they often need OSD logging set to 20) in order to get expert help!
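The stop/re-enable dance might look roughly like this, using OSD ids 20 and 23 from the thread; noout keeps Ceph from rebalancing while they're down:

```shell
if command -v ceph >/dev/null 2>&1; then
    ceph osd set noout            # don't rebalance while we poke around
    systemctl stop ceph-osd@20    # take osd.20 out, watch for the issue
    # ...later, bring it back and try the other one:
    systemctl start ceph-osd@20
    systemctl stop ceph-osd@23
    systemctl start ceph-osd@23
    ceph osd unset noout
    # for the mailing list, crank that OSD's log level up to 20:
    ceph tell osd.20 injectargs '--debug-osd 20'
    MSG="osd test cycle done"
else
    MSG="run this on a ceph node"
fi
echo "$MSG"
```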
Myself am...
I fully agree with this; however, the more nodes he has, the faster Ceph will be. Also, make sure to factor in one NVMe for the Ceph journal, and maybe another NVMe for a cache tier?
I have a GTX 750 Ti that has worked for a year now (so on both 5.2 and 5.3), but my 1050 Ti has ONLY worked on 5.2. As soon as 5.3 dropped, all I got was Code 43 on the 1050 Ti, while my 750 Ti kept working just fine.
And yes, I am using a UEFI BIOS in the VM, and I dumped my own ROM (not from techrepublics...
I have a 3-node Proxmox/Ceph cluster for the always-on feature, and so I don't have to rely on whatever customised OS they put on NAS devices; it works really, really well!
So in your case I'd put the Proxmox OS on SSDs (don't put it on an SD card, as that's not supported at the moment and will kill the SD card!) in...