I am running a VM with a GTX 750 Ti passed through via OVMF, running the Parsec gaming application (plus the console unlock), and it's working well.
I have a second card, a GTX 1050 Ti, which was working fine until one day I got the dreaded Nvidia driver error 43, and since then I have been unable to make use...
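For context, error 43 is usually the Nvidia driver refusing to start once it detects a hypervisor, and the common workaround is to hide KVM from the guest. A rough sketch of the relevant VM config lines, assuming Proxmox with OVMF/q35; the VM ID, PCI address and hv_vendor_id string are placeholders:

    # /etc/pve/qemu-server/<vmid>.conf (relevant lines only)
    bios: ovmf
    machine: q35
    # hide the hypervisor so the Nvidia driver does not bail out with error 43
    cpu: host,hidden=1,flags=+pcid
    args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'
    # the passed-through card (PCI address is a placeholder)
    hostpci0: 01:00,pcie=1,x-vga=on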
Ceph uses udev or something similar, making the point moot as the drives change designation.
Having said that, I have had this problem when the network stopped working, as I have a setup with both a migration and a public network.
I have a cluster of 3 workstations I got from work. And the one thing I always bump into is that even when maxed out, RAM IS NEVER enough :)
So get more RAM, and make sure it's DDR3 so it's cheap!
Would this work for Windows too, do you think?
It would save me a bit of trouble (mainly the Samba network server) if I could just pass the host's /mnt/ceph on to the Windows guest...
I just tested this after the latest upgrade and it works as advertised! Here is a cookie, and we will give Thomas a cake! :D
Thanks to all involved!!
(PS. I have complained about this very same situation since day 1 on Proxmox but could never put my finger on what the fix needed to be, so am a...
I live in Sweden and have 2x 1 Gbit at home, while they live out in the sticks on 100 Mbit each.
But I get what you are saying. I have been eyeing the 1 Gbit links over at OMVH as well, but since I fear for my privacy I decided it's not worth it to outsource my NAS.
So I will make do with what...
This is my aim as well, but for a different purpose, namely an HA NAS with all the bells and whistles, like Plex, for my family. To that end the plan is to install nodes on pretty crappy internet connections in my siblings' houses and to connect it all via tinc VPN, which is able to connect to...
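Purely as an illustration (the net name and node names are made up), the tinc side of such a setup is just a small config per node plus exchanged host files:

    # /etc/tinc/nasnet/tinc.conf on a sibling's node
    Name = sibling1
    ConnectTo = home          # the node with the good connection
    Mode = switch             # layer-2 mode, so all nodes sit on one flat network

    # /etc/tinc/nasnet/hosts/home (copied to every node)
    Address = home.example.org
    Port = 655
    # ...followed by the public key of the "home" node...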
Thank you. I will check the Ceph logs further to see if the 172.16 net is dropping or carrying heavy traffic.
However, corosync should speak on both nets since they are both in the totem, I am thinking.
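For reference, a two-net totem on corosync 2.x looks roughly like this; the subnets and node addresses are placeholders, with 172.16 standing in for the second net mentioned above:

    # /etc/pve/corosync.conf (sketch, corosync 2.x syntax)
    totem {
        version: 2
        cluster_name: mycluster
        rrp_mode: passive               # redundant ring protocol
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.1.0    # public / management net (placeholder)
        }
        interface {
            ringnumber: 1
            bindnetaddr: 172.16.0.0     # second net (placeholder)
        }
    }
    nodelist {
        node {
            name: node1
            ring0_addr: 192.168.1.11
            ring1_addr: 172.16.0.11
        }
        # ...one block per node...
    }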
If it's defined in ceph.conf there is nothing more you need to do. The monitors will tell the client where to find the OSDs!
Welcome to the ceph world - courtesy of the fine folks of Proxmox! Happy camping!
PS. Check out CephFS, it's like Samba but on all hosts! DS.
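To illustrate the "monitors tell the client" part (mon IPs are placeholders): the client-side ceph.conf only needs the mon addresses, and even a CephFS mount only points at a mon; the OSD map comes back from there:

    # /etc/ceph/ceph.conf on the client
    [global]
        mon_host = 10.0.0.1,10.0.0.2,10.0.0.3

    # kernel CephFS mount: point it at a mon, no OSD addresses needed
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret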
There might also be another possibility, which is unicast: https://pve.proxmox.com/wiki/Multicast_notes
Note that I still haven't gotten around to trying tinc VPN with a distant cluster yet, but I am planning to, soon! :)
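As per the Multicast notes page, the unicast fallback is essentially one extra line in the totem section (sketch for corosync 2.x; remember to bump config_version when editing):

    # /etc/pve/corosync.conf
    totem {
        # ...existing settings, including config_version...
        transport: udpu    # unicast UDP instead of multicast
    }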
I use Proxmox + Ceph with CephFS installed on 3 nodes as a NAS, with an HA Ubuntu 18.04 VM that mounts CephFS and serves SMB. I am very happy with it, since Ceph does such a wonderful job of scrubbing and keeping my data in mint condition even if I kill one of the nodes!! :)
I highly recommend it!
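For anyone curious, the moving parts inside such a VM are small; a sketch with placeholder mon IPs, Ceph user and share name:

    # /etc/fstab in the Ubuntu VM: mount CephFS straight from the mons
    10.0.0.1,10.0.0.2,10.0.0.3:/  /srv/nas  ceph  name=samba,secretfile=/etc/ceph/samba.secret,_netdev,noatime  0 0

    # /etc/samba/smb.conf: export the CephFS mount over SMB
    [nas]
        path = /srv/nas
        read only = no
        browseable = yes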
First off, it doesn't seem like you have joined the NUC to the Proxmox cluster? Once joined, install Ceph plus a Ceph mon and mgr on that NUC node according to the wiki (rough commands sketched below), and it will automagically sync with the other nodes, adding a mon entry in ceph.conf.
Second off, remove host=host2; they are...
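Roughly, the commands for the first point would be the following (the IP of an existing cluster node is a placeholder; newer Proxmox spells the last two as pveceph mon create / pveceph mgr create):

    # on the NUC: join the existing Proxmox cluster
    pvecm add 192.168.1.10

    # still on the NUC: install Ceph and add a monitor + manager
    pveceph install
    pveceph createmon
    pveceph createmgr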
- If .10 is the 1 Gb network accessible by all, then that is your public network.
- The 10 Gb network .7 that is daisy-chained is then your OSD/storage network (see the ceph.conf sketch below).
They know how to find each other because, *drumroll*, of the mons. Mons act like a torrent tracker!
If that doesn't work you could of course...
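In ceph.conf terms that split looks something like this (subnets are placeholders standing in for the .10 and .7 nets):

    # /etc/ceph/ceph.conf
    [global]
        public network  = 192.168.10.0/24   # 1 Gb net reachable by clients and mons
        cluster network = 192.168.7.0/24    # 10 Gb daisy-chained net for OSD traffic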