I can tell you that using the script by Drallas from earlier in this thread got me a working setup, at least as far as the virtiofs part is concerned. There is a "my guide" link above; follow that. It installs a hookscript, and you will probably have to reboot the VM twice, but then it should work.
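For anyone following along, the hookscript from that guide gets attached to the VM with `qm set`; a minimal sketch, assuming the script was saved under the `local` storage's snippets directory (VM ID and script name are assumptions):

```shell
# Attach the virtiofs hookscript to VM 100 (ID and file name are assumptions)
qm set 100 --hookscript local:snippets/virtiofs_hook.pl
# Then stop and start the VM: a reboot from inside the guest is not enough,
# because the hookscript runs on the host when the VM is started
qm stop 100 && qm start 100
```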
Thank you for thinking about the problem. Unfortunately 9p is no solution either, because it has the same problem with export via NFS as virtiofs, which was supposed to replace it. In the long run I will probably end up reformatting the HDs and using ZFS instead. But at this point I simply have...
Hm, unfortunately USB passthrough was not my first choice for this setup; it is only the one that currently works. In fact I would have liked to pass the HDs through via virtiofs. But since that cannot easily be exported via NFS, and nobody could tell me so far how to make a...
No, I have not so far. But that would not be very useful anyway, as only HDs are connected, and they lack the necessary bandwidth for such a test. I do think, though, that passed-through USB ports should appear in the VM exactly as they are on the host; otherwise it is no real passthrough.
Hello all,
I am trying to pass through some USB ports from the host to a Linux VM and found that they all show up at 5000M speed and not 10G, although they show 10G when used on the host.
Is there a way to change this behaviour? Has anybody seen a 10G USB device inside a VM?
Thank you for any comments.
Hello,
thank you for posting the content from Red Hat, only I doubt the given cause. NFS v2/v3 clearly use persistent file handles, but v4 should be able to provide volatile file handles. The problem I have is that I cannot find any Linux documentation on how to force NFSv4 to always give volatile...
This seems not to be true. At least that is what
https://access.redhat.com/solutions/7000411
seems to say, as it is marked "solution verified".
I cannot tell you how, though, because the Red Hat content is closed...
After some reboots and fiddling I managed to mount the virtiofs filesystems and can use them on this VM. But now I have tried to export them via NFS, and that causes trouble again. The NFS clients seem to see the basic filesystem tree (one can ls and cd into folders), but as soon as I try to open an...
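One thing worth checking when NFS-exporting a FUSE-backed filesystem like virtiofs: the kernel NFS server cannot derive a stable filesystem ID from it, so the export usually needs an explicit `fsid` option, and missing it can produce exactly this "listing works, opening files fails" symptom. A sketch of what I mean, assuming `/mnt/share` is one of the virtiofs mounts (path, network, and fsid value are assumptions):

```
# /etc/exports -- fsid must be set explicitly and be unique per export,
# because virtiofs (FUSE) has no stable device number NFS could derive one from
/mnt/share  192.168.1.0/24(rw,sync,no_subtree_check,fsid=101)
```

After editing, `exportfs -ra` re-reads the export table without restarting the server.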
Hello,
I am trying to use virtiofs together with the proposed hook script for the first time. I try to export 3 folders from host to VM; I see 6 virtiofsd processes running and everything looks OK. Only the VM cannot find the tags and can therefore perform no mount. dmesg shows that the tags are...
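For reference, once a tag is visible in the guest, the mount itself is a one-liner; a sketch, assuming the tag is `mnt_share1` (tag name and mount point are assumptions, since the hookscript derives the tag from the share path):

```shell
# See which virtiofs tags the guest actually received, then mount one of them
dmesg | grep -i virtiofs
mkdir -p /mnt/share1
mount -t virtiofs mnt_share1 /mnt/share1
# or persistently via /etc/fstab:
# mnt_share1  /mnt/share1  virtiofs  defaults  0  0
```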
Hi Tom,
Thanks for answering. Maybe I should have explained the underlying problem a bit more. It is not just about the single function itself, but rather about how you can group/pool/tag some VMs together in order to perform some function on them afterwards.
I did the tagging, and it really has some...
I am not the kind of guy to file a feature request. I rather prefer writing things myself, which I have been doing since the 1980s.
The things I see missing in this issue are really marginal in terms of additional code. I believe they are not the coders' problem but rather a deficiency on the part of the people defining the...
Thank you for pointing me to tags. After watching some videos I tried that, and it does what you say. It does not look really nice though; I personally would prefer a tree view where you can see the node, then a group, and below it the VMs. So the tag view is a kind of second best.
Nevertheless I found...
I will not mess around with HA any more; I did that with a cluster of 5 nodes, and it always ended up the wrong way round.
This time we have a cluster of two nodes, and I doubt that will make HA any better (in fact I doubt it will work at all, as I seem to remember you need at least three nodes for HA).
OK, just to make that clear: my major concern is _not_ to bulk-migrate many VMs in parallel. I simply do not want to click (or type) a hundred VM IDs to migrate them from one node to another. _AND_, another point worth mentioning, I want to have them all together after this initial migration to be...
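In the meantime this can be scripted on the CLI; a sketch under some assumptions: the VMs to move all carry a tag `groupA`, the target node is `pve2` (both names are made up here), and `jq` is installed:

```shell
#!/bin/sh
# Migrate every VM tagged "groupA" to node "pve2", one after the other.
# Tags in the cluster resource list are a semicolon-separated string.
TARGET=pve2
pvesh get /cluster/resources --type vm --output-format json |
  jq -r '.[] | select(.tags // "" | split(";") | index("groupA")) | .vmid' |
  while read -r vmid; do
    qm migrate "$vmid" "$TARGET" --online
  done
```

This migrates sequentially, not in parallel, which matches the concern above: one command instead of a hundred clicks.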
Hello all,
I have tried to find a way in Proxmox to "bundle" VMs together into a group which can then be addressed by functions just like a single VM, e.g. migrating the whole group from one node to another in a cluster. I would like to see it more as a group of VMs rather than a "pool" of...
It just came to my mind that the whole issue would be quite simple to solve: all that is needed is somebody keeping daily snapshots of the Proxmox repositories; then one could easily update to any given date (version).
Is there someone out there having such an archive?
... when I have a cluster with 4 nodes, one of which is down for some reason. Then I add a new node, which runs perfectly. Then I restart the node that was down while the new node was being added. Does this work, or is it a problem in some way?
The same, btw, might happen if you have to restore some node from a...