Search results

  1. grin

    Reinstalled node doesn't start osd's after reboot

    For old systems (filestore) this may still be relevant: `ceph-volume` is not a dependency, only a recommended package, so it's possible that it doesn't get installed (or gets removed) during the upgrade. Without it, filestores do not get mounted, thus OSDs will not be present and cannot start.
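
    A rough sketch of how one might check for and restore `ceph-volume` on such a node (the package name and the activation subcommands depend on the Ceph release and on how the OSDs were originally created, so treat this as an assumption to verify, not a recipe):

      # check whether ceph-volume survived the upgrade
      dpkg -l | grep ceph-volume

      # reinstall it if it got removed
      apt install ceph-volume

      # for legacy filestore OSDs created with ceph-disk, rescan and activate them
      ceph-volume simple scan
      ceph-volume simple activate --all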
  2. grin

    [SOLVED] Network issue when using only IP on VLANs

    Sidenote: the error can be prevented by reordering the interfaces in the config file: physical interfaces first, then bridges, then VLAN interfaces.
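
    A minimal /etc/network/interfaces sketch of that ordering (interface names, VLAN tag and address are made up for illustration):

      # 1) physical interfaces first
      auto eno1
      iface eno1 inet manual

      # 2) then the bridges
      auto vmbr0
      iface vmbr0 inet manual
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0

      # 3) VLAN interfaces (carrying the IPs) last
      auto vmbr0.10
      iface vmbr0.10 inet static
              address 192.0.2.10/24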
  3. grin

    New version breaks old?

    Is it possible that a 2.3.x server breaks a 2.0.x client? The best indicator is that `proxmox-backup-client list` (2.0.14) waits forever and then times out trying to reach a 2.3.3 server, while a 2.3.1 client works well. No, it is not a question of whether it would be optimal if everything were up to...
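
    For anyone trying to reproduce the comparison, a sketch (the repository string is a placeholder in the usual user@realm@host:datastore form):

      # note the client version on each machine first
      proxmox-backup-client version

      # then try listing snapshots on the 2.3.3 server from both clients
      proxmox-backup-client list --repository root@pam@pbs.example.com:store1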
  4. grin

    Minimally required packages for external ceph cluster

    Without browsing through the code, here's one example from Ceph.pm (line 645, or the very end): code => sub { my ($param) = @_; PVE::Ceph::Tools::check_ceph_inited(); my $rados = PVE::RADOS->new(); my $rules = $rados->mon_command({ prefix => 'osd crush rule...
  5. grin

    Minimally required packages for external ceph cluster

    I am not sure you're correct. Proxmox UI correctly has all the features required to use an external Ceph cluster, including handling multiple mons and authorization. I would also guess that Proxmox uses standard Ceph API calls to get information about the clusters, so when ceph status works on...
  6. grin

    Minimally required packages for external ceph cluster

    Please do not hijack threads. Open one with your question (and ping me there), or use direct messages and I'll answer there. Here I'd prefer to get answers to my question, thank you.
  7. grin

    Backup progress percent or bar or else?

    @t.lamprecht any chance on this?
  8. grin

    Minimally required packages for external ceph cluster

    Hello there! I have tried to search the forum but found no definite answer, and the documentation seems to only mention "hyper-converged" clusters (where the Ceph daemons run on the Proxmox nodes, which, based on experience, is not a good idea in the long run). Proxmox sees the cluster storage...
  9. grin

    Problem with pseudoterminal in container

    It doesn't seem to be resolved and I observed the same on v7.1. ptmx is mounted 0000 root:root and fstab is empty. devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=000) devpts on /dev/ptmx type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=000) devpts on...
  10. grin

    bugs related to ceph and disk move

    The easiest way to test is to create two separate Ceph clusters and create the same pool on both. Indeed, it's possible that the target may contain the source and rsync happily does nothing. I believe rbd does the wrong thing as well, since it can't map into the /dev/rbd/<pool>/<image> schema the...
  11. grin

    bugs related to ceph and disk move

    Summary: Ceph krbd cannot handle the same pool name on multiple clusters, mainly because /dev/rbd/<poolname>/<imagename> does not take the cluster ID into account. This means that most of the PVE functions relying on a "working rbd call" fail silently, mostly due to rbd map failing to map samepool/sameimage from a...
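
    A rough illustration of the collision (cluster names, pool and image are placeholders; each --cluster name needs a matching /etc/ceph/<name>.conf):

      # map the same pool/image name from two different clusters
      rbd map --cluster clusterA samepool/sameimage
      rbd map --cluster clusterB samepool/sameimage

      # the udev symlink carries no cluster identifier, so both maps
      # compete for the very same path
      ls -l /dev/rbd/samepool/sameimage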
  12. grin

    bugs related to ceph and disk move

    I had time to test, and my guess was wrong: it is not about permissions. The move-volume is buggy: it seems it cannot move images between rbd storages. My guess, without checking the code, is that it forgets to map the target image. The result is fatal anyway: the copy will fail and the...
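
    The failing operation, roughly (the volume key and target storage name are invented for the example; recent PVE spells the subcommand move-volume, older releases move_volume):

      # move a container disk from one rbd-backed storage to another,
      # dropping the source copy on success
      pct move-volume 2003 rootfs rbd-target --delete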
  13. grin

    bugs related to ceph and disk move

    For the record, the permissions on the source are 'client.admin' (allow * on everything), and on the destination: caps: [mon] allow rwx, caps: [osd] profile rbd pool=C64
  14. grin

    How to update Proxmox safely?

    And don't try to skip reboots. Multiple Proxmox components (lxcfs comes to mind) cannot survive the upgrade, especially going from 5 to 6. Also, old Proxmox neglected to freeze the HA manager and had a nasty habit of rebooting in the middle of the upgrade, and I am not sure this is mentioned in...
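
    One way to reduce that risk before a major upgrade is to stop the HA services on the node first, so a stalled corosync cannot trigger self-fencing mid-upgrade (a sketch; check the official upgrade guide for the currently recommended order):

      # stop the HA resource and cluster managers on this node
      systemctl stop pve-ha-lrm
      systemctl stop pve-ha-crm

      # ...run the dist-upgrade, then reboot the node...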
  15. grin

    Which events can trigger automatic pve nodes reboot?

    The nodes reboot when there is no quorum, BUT that may happen for various reasons, including high local load, network packet loss or connectivity loss (often due to high network load), or any kind of iowait stopping the HA manager (or corosync) from updating its state. I am not sure whether you can...
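
    To check whether quorum loss was involved, something like this on a surviving node is a reasonable starting point (a sketch, not a full diagnosis):

      # cluster membership and quorum as PVE sees it
      pvecm status

      # corosync's own view of quorum
      corosync-quorumtool -s

      # look for watchdog / fencing traces around the reboot
      journalctl -u pve-ha-lrm -u corosync --since "2 days ago"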
  16. grin

    Create VM larger than 2TB

    You can safely convert DOS (MBR) to GPT, even with fdisk. At least I have done it multiple times years ago (it was possibly gdisk, but as far as I remember the same functionality is now in fdisk as well), and backing up the MBR (which you should do anyway!) is also really simple.
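
    A sketch of the safe sequence (the device name is a placeholder; sgdisk comes from the gdisk package):

      # back up the existing MBR (partition table + boot code) first
      dd if=/dev/sdX of=/root/sdX-mbr.bak bs=512 count=1

      # convert the MBR partition table in place to GPT
      sgdisk --mbrtogpt /dev/sdX

      # verify the result
      sgdisk --print /dev/sdX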
  17. grin

    bugs related to ceph and disk move

    I'll be brief. 1) When moving between different Ceph pools that have different permissions (I don't yet know which one it chokes on), and you see this in the move log: mount: /var/lib/lxc/2003/.copy-volume-1: WARNING: source write-protected, mounted read-only. then expect the destination...
  18. grin

    lxcfs: utils.c: 331: read_file_fuse: Write to cache was truncated

    Have you rebooted the node after the last update? The lxcfs daemon seems to be notoriously sensitive to restarts, and restarting it may require all containers to be restarted too.
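
    A sketch of that recovery path (it restarts every running container on the node, so verify before running it):

      # restart the lxcfs daemon itself
      systemctl restart lxcfs

      # then restart all running containers so they see a working lxcfs again
      for ctid in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
          pct reboot "$ctid"
      done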
  19. grin

    [SOLVED] Non starting redis in restored container

    Except there is no solution listed. It does the same here, and it works without bloody damned systemd.