Recent content by grin

  1. grin

    Problem with pseudoterminal in container

    It doesn't seem to be resolved and I observed the same on v7.1. ptmx is mounted 0000 root:root and fstab is empty.
      devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=000)
      devpts on /dev/ptmx type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=000)
      devpts on...
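
    A minimal check-and-remount sketch, assuming the ptmxmode=000 mount shown above is the culprit; the option values are the usual devpts defaults, not taken from the thread, and the remount may be refused inside an unprivileged container:
      # inspect the current permissions of the pty multiplexer
      ls -l /dev/ptmx /dev/pts/ptmx
      # remount devpts with an accessible ptmx node
      mount -o remount,gid=5,mode=620,ptmxmode=666 devpts /dev/pts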
  2. grin

    bugs related to ceph and disk move

    The easiest way to test is to create two separate ceph clusters and create the same pool on both. Indeed, it's possible that the target may contain the source and rsync happily does nothing. I believe rbd does the wrong thing as well, since it can't map into the /dev/rbd/<pool>/<image> schema the...
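
    A minimal sketch of that reproduction setup, assuming two clusters whose configs live in /etc/ceph as ceph.conf and backup.conf; the cluster and pool names are placeholders:
      # create an identically named pool on both clusters
      ceph --cluster ceph osd pool create samepool 32
      ceph --cluster backup osd pool create samepool 32
      rbd --cluster ceph pool init samepool
      rbd --cluster backup pool init samepool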
  3. grin

    bugs related to ceph and disk move

    Summary: ceph krbd cannot handle the same pool name on multiple clusters, mainly because /dev/rbd/<poolname>/<imagename> does not consider the cluster-id. This means that most PVE functions relying on a "working rbd call" fail silently, mostly due to rbd map failing to map samepool/sameimage from a...
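
    A short sketch of how the collision described above can be observed, reusing the placeholder clusters and pool from the earlier post; the udev-created /dev/rbd/<pool>/<image> symlink carries no cluster name:
      rbd --cluster ceph map samepool/disk-0
      rbd --cluster backup map samepool/disk-0
      # both maps want the same symlink path, so lookups by pool/image cannot tell the clusters apart
      ls -l /dev/rbd/samepool/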
  4. grin

    bugs related to ceph and disk move

    I had time to test, and my guess was wrong: it is not about permissions. The move-volume is buggy: it seems it cannot move images between rbd storages. My guess, without having checked the code, is that it forgets to map the target image. The result is fatal anyway: the copy will fail and the...
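
    A hedged reproduction sketch; the VMID 2003 appears elsewhere in the thread, while the mount point and storage name here are placeholders:
      # try to move a container volume from one rbd-backed storage to another
      pct move-volume 2003 mp0 rbd-target-storage
      # while the copy runs, check whether the target image ever gets mapped
      rbd showmapped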
  5. grin

    bugs related to ceph and disk move

    For the record, the permissions on the source are 'client.admin' (everything: allow *), and the destination is:
      caps: [mon] allow rwx
      caps: [osd] profile rbd pool=C64
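
    A hedged sketch of how a key with exactly these destination caps could be inspected or created; the key name client.c64 is a placeholder:
      # show the caps of an existing key
      ceph auth get client.c64
      # or create one with the caps quoted above
      ceph auth get-or-create client.c64 mon 'allow rwx' osd 'profile rbd pool=C64'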
  6. grin

    How to update Proxmox safely?

    And don't try to skip reboots. Multiple proxmox components (lxcfs comes to mind) cannot survive the upgrade, especially around the jump from 5 to 6. Also, old proxmox neglected to freeze the ha manager and had a nasty habit of rebooting in the middle of the upgrade, and I am not sure this is mentioned in...
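
    A hedged sketch of one way to keep the HA stack quiet during the upgrade, assuming you stop it by hand before upgrading and start it again after the reboot; the service names are the standard PVE ones, not taken from the thread:
      # stop the local and cluster HA daemons before the upgrade
      systemctl stop pve-ha-lrm pve-ha-crm
      # ... upgrade and reboot, then bring them back
      systemctl start pve-ha-lrm pve-ha-crm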
  7. grin

    Which events can trigger automatic pve nodes reboot?

    The nodes reboot when there is no quorum, BUT it may happen for various reasons, including high local load, network packet or connectivity loss (often due to high network load), or any kind of iowait preventing the ha manager (or corosync) from updating its state. I am not sure whether you can...
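
    A few hedged diagnostic commands for this situation, using the standard PVE and corosync tooling rather than anything specific to the thread:
      # check cluster quorum and membership
      pvecm status
      corosync-quorumtool -s
      # see whether the HA watchdog is armed; with HA resources active, losing quorum fences (reboots) the node
      systemctl status watchdog-mux pve-ha-lrm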
  8. grin

    Create VM larger than 2TB

    You can safely convert DOS to GPT, even with fdisk. At least I have done it multiple times, years ago (it was possibly gdisk, but as far as I remember the same functionality is now in fdisk as well), but backing up the MBR (which you should do anyway!) is also really simple.
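
    A minimal sketch of the backup-then-convert steps, assuming /dev/sdX is the disk in question; the device and file names are placeholders:
      # back up the MBR (partition table plus boot code, first 512 bytes)
      dd if=/dev/sdX of=/root/sdX-mbr.bak bs=512 count=1
      # convert the DOS/MBR label to GPT in place
      sgdisk --mbrtogpt /dev/sdX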
  9. grin

    bugs related to ceph and disk move

    I'll be brief. 1) when moving between different ceph pools which have different permissions (I don't yet know which one it chokes on), and you see this in the move log:
      mount: /var/lib/lxc/2003/.copy-volume-1: WARNING: source write-protected, mounted read-only.
    then expect the destination...
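
    A hedged way to confirm that symptom while the move is running; the path is the one quoted above and findmnt is just a convenient standard tool:
      # check whether the temporary copy mount really came up read-only
      findmnt /var/lib/lxc/2003/.copy-volume-1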
  10. grin

    lxcfs: utils.c: 331: read_file_fuse: Write to cache was truncated

    Have you rebooted the node after the last update? The lxcfs daemon seems to be notoriously sensitive to restarts, and restarting it may require all containers to be restarted too.
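
    A hedged sketch of the recovery path hinted at above; the VMID is a placeholder:
      # check when lxcfs was last (re)started compared to the running containers
      systemctl status lxcfs
      # after an lxcfs restart, affected containers usually need a restart as well
      pct reboot 101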
  11. grin

    [SOLVED] Non starting redis in restored container

    Except there is no solution listed. It does the same here, and works without bloody damned systemd.
  12. grin

    Scheduled downtime of large CEPH node

    Generally you need more than 3 and an odd number of MONs to provide a reliable majority quorum at all times; the other daemons are not picky, and usually one spare will do fine. In all cases I advise you to read the ceph documentation about the redundancy suggestions for the specific daemons, but...
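
    A couple of standard ceph commands to verify the MON quorum before and during the downtime; nothing here is specific to the thread:
      # show which monitors are currently in quorum
      ceph quorum_status -f json-pretty
      ceph mon stat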
  13. grin

    Are group acls broken in v6.4?

    Ok, that was what I was wondering. Tomas vs. regex 0:1 :) I have patched mine, so I'm okay, thank you. Well, I see only the web interface and it gave no hint nor history; I try not to touch git if possible since I dislike it quite a bit, but no offense, just mentioning that it seems the code...
  14. grin

    Scheduled downtime of large CEPH node

    Indeed, there is no really good answer, apart from the generic "copy your whole storage to another node by your preferred means, connect it to the cluster and migrate your nodes over". As for the ceph part: adding new OSDs to a ceph cluster should be relatively painless. You can create new OSDs...
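
    A minimal sketch of adding an OSD on a PVE-managed ceph node, assuming /dev/sdX is an empty disk; the device name is a placeholder:
      # create a new OSD through the PVE tooling
      pveceph osd create /dev/sdX
      # watch the data rebalance onto it
      ceph osd df tree
      ceph -s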
  15. grin

    Are group acls broken in v6.4?

    I was fighting to create an already tested state of "a group [member] who can only manage users within the group foo" and kept failing, and I was thinking it's me:
      # pveum acl modify /access/realm/pve -groups vmadmin -roles PVEUserAdmin
      400 Parameter verification failed.
      path: invalid ACL path...
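
    For comparison, a hedged sketch of an ACL path the parser does accept; the group name comes from the post above, while using the /access/groups/<group> path instead of /access/realm/pve is an assumption about a possible workaround, not the fix discussed in the thread:
      pveum acl modify /access/groups/vmadmin -groups vmadmin -roles PVEUserAdmin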
