Search results

  1. virtiofs with nfs?

    This seems not to be true. At least that is what https://access.redhat.com/solutions/7000411 seems to say, as it is marked "solution verified". I cannot tell you how, though, because the Red Hat content is behind a paywall...
  2. [TUTORIAL] virtiofsd in PVE 8.0.x

    After some reboots and fiddling I managed to mount the virtiofs filesystems and can use them on this VM. But now I have tried to export them via NFS, and that causes trouble again. The NFS clients seem to see the basic filesystem tree (one can ls and cd into folders), but as soon as I try to open an...
  3. [TUTORIAL] virtiofsd in PVE 8.0.x

    Hello, I am trying virtiofs together with the proposed hook script for the first time. I try to export 3 folders from the host to a VM; I see 6 virtiofsd processes running and everything looks OK. Only the VM cannot find the tags and can therefore perform no mount. dmesg shows that the tags are...
  4. promox pools

    For me, obviously, since you even refuse to read them in detail...
  5. promox pools

    Sorry, bugfixes are solely your job, not mine. And I can see quite a few necessary ones in your GUI.
  6. promox pools

    Hi Tom, thanks for answering. Maybe I should have explained the basic problem a bit more. It is not just about the pure function itself, but rather about how you can group/pool/tag some VMs together in order to perform some function on them afterwards. I did the tagging and it really has some...
  7. promox pools

    I am not one for a feature request of this kind. I prefer writing things myself, which I have been doing since the 1980s. The things I see missing in this issue are really marginal in terms of additional code. I believe they are not the coders' problem but rather a deficiency of the people defining the...
  8. promox pools

    Thank you for pointing me to tags. After watching some videos I tried that, and it does what you say. It is not really nice looking; I would personally prefer a tree view where you can see the node, then a group, and below it the VMs. So the tag view is a kind of second best. Nevertheless I found...
  9. promox pools

    I will not mess around with HA any more; I did that with a cluster of 5 nodes and it always ended up the wrong way round. This time we have a cluster of two nodes, and I doubt this will make HA any better (I doubt it will work at all, as I seem to remember you need at least three nodes for HA).
  10. promox pools

    OK, just to make that clear: my major concern is _not_ to bulk-migrate many VMs in parallel. I just do not want to click (or type) a hundred VM IDs when migrating from one node to another. _AND_, another point worth mentioning, I want to have them all together after this initial migration to be...
  11. promox pools

    Hello all, I have tried to find a way in Proxmox to "bundle" VMs together into a group which can then be addressed with functions like a single VM. For example, I want to be able to migrate a pool from one node to another in a cluster. I would see it more as a group of VMs rather than a "pool" of...
  12. What happens ...

    It just came to my mind that the whole issue is quite simple to solve: all that is needed is somebody with daily snapshots of the Proxmox repositories, and one could then easily update to any given date (version). Is there someone out there who has such an archive?
  13. What happens ...

    Is there nothing else to update besides corosync.conf?
  14. What happens ...

    ... when I have a cluster with 4 nodes, where one is down for some reason. Then I add a new node, which runs perfectly. Then I restart the node that was down while the new node was being added. Does this work, or is it a problem in some way? The same, by the way, might happen if you have to restore a node from a...
  15. ifupdown2 installation without proxmox update

    Hello all, I recently ran into the problem that an updated Proxmox node cannot join a cluster running an older version. So I tried to install the node with the same old Proxmox version, which works; I can join the cluster. Only I had to find out that installing ifupdown2 the usual way updates...
  16. Problems joining new node with 6.4 to cluster with 6.0

    Thanks for your help. I changed the ICODE, thanks for the hint. Last question: how do I make updates in small steps for old versions? Let's say I start from 6.0: how do I update to 6.1? My understanding so far was that this is only possible in one step to the current last minor...
  17. Problems joining new node with 6.4 to cluster with 6.0

    Is it possible that the problem has to do with passwords? When I joined the node to the cluster, this node had a different password entered during the setup phase. Is there a way to change this password for PVE? Or is "passwd root" sufficient?
  18. Problems joining new node with 6.4 to cluster with 6.0

    You're right. The major reason against this idea is the downtime. But there are others as well. We need to use the old servers as a failover resource for some time, until all the new hardware is delivered (which is beyond our influence). PS: What is the right tag for including lists like the one above? Obviously...
  19. Problems joining new node with 6.4 to cluster with 6.0

    Hello, thank you for coming back to this issue. Here is the requested output (hopefully correctly inlined):
    # journalctl -u pve-cluster
    Dec 15 13:31:25 pm-249 systemd[1]: Starting The Proxmox VE cluster filesystem...
    Dec 15 13:31:25 pm-249 pmxcfs[1631]: [quorum] crit: quorum_initialize failed...
  20. Problems joining new node with 6.4 to cluster with 6.0

    Let me mention that we read the doc you pointed to before, and it says: "Preconditions: Upgraded to the latest version of Proxmox VE 6.4 (check correct package repository configuration)". So we are in fact at the beginning of your requested upgrade, and are finding out that even the one-by-one update to 6.4...
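Several of the "promox pools" posts above ask for a way to migrate all VMs of a pool from one node to another without typing each VM ID. A minimal sketch of how that could be scripted on a PVE node is shown below; `pvesh get /pools/<poolid>` and `qm migrate` are standard Proxmox VE CLI commands, but the pool name `mypool` and target node `pve2` are made-up placeholders, and this is an untested illustration, not an official procedure:

```shell
#!/bin/sh
# Sketch: migrate every QEMU VM that belongs to a Proxmox pool to another
# node, one at a time. Pool and node names below are placeholder examples.
POOL=mypool
TARGET=pve2

# Query the pool members via the API CLI and pick out the QEMU VMIDs.
pvesh get "/pools/$POOL" --output-format json \
  | python3 -c 'import json,sys
members = json.load(sys.stdin)["members"]
for m in members:
    if m.get("type") == "qemu":
        print(m["vmid"])' \
  | while read -r vmid; do
        # Online-migrate each VM sequentially (not in parallel).
        qm migrate "$vmid" "$TARGET" --online
    done
```

This keeps the migrations sequential, which matches the concern in snippet 10: the point is avoiding a hundred manual clicks, not migrating many VMs in parallel.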