Search results

  1. grin

    lxcfs: utils.c: 331: read_file_fuse: Write to cache was truncated

    Okay, let's differentiate. One case is when you upgrade / restart lxcfs and this causes the problem you mentioned. This is what I was talking about: in some cases you cannot safely stop and start lxcfs (I think it's related to various changes in fuse and cgroups, but I never checked) and it needs a...
  2. grin

    lxcfs: utils.c: 331: read_file_fuse: Write to cache was truncated

    Yes. You have to reboot the machine. It seems to mess up so many things that nobody ever took the effort to try to untangle it. Basically, major updates of lxcfs are the main reboot magnets in Proxmox. That, and kernel updates.
  3. grin

    Reinstalled node doesn't start osd's after reboot

    For old systems (filestore) this may still be relevant: `ceph-volume` is not a dependency, only a recommendation, so it's possible that it doesn't get installed (or gets removed) during the upgrade. Without it, filestores do not get mounted, so the OSDs will not be present and can't start (see the recovery sketch after this list).
  4. grin

    [SOLVED] Network issue when using only IP on VLANs

    Sidenote: the error can be prevented by reordering the interfaces in the config file: physical interfaces first, then bridges, then VLAN interfaces (see the sketch after this list).
  5. grin

    New version breaks old?

    Is it possible that the 2.3.x server breaks the 2.0.x client? The best indicator is that `proxmox-backup-client list` (2.0.14) waits forever and then times out trying to reach a 2.3.3 server, while a 2.3.1 client works well. No, it is not a question whether it would be optimal if everything were up to...
  6. grin

    Minimally required packages for external ceph cluster

    Without browsing through the code, here's one example from Ceph.pm (line 645, or the very end):

        code => sub {
            my ($param) = @_;
            PVE::Ceph::Tools::check_ceph_inited();
            my $rados = PVE::RADOS->new();
            my $rules = $rados->mon_command({ prefix => 'osd crush rule...
  7. grin

    Minimally required packages for external ceph cluster

    I am not sure you're correct. The Proxmox UI does have all the features required to use an external Ceph cluster, including handling multiple mons and authorization. I would also guess that Proxmox uses standard Ceph API calls to get information about the clusters, so when ceph status works on...
  8. grin

    Minimally required packages for external ceph cluster

    Please do not hijack threads. Open one with your question (and ping me there), or use direct messages and I'll answer there. Here I'd prefer to get answers to my question, thank you.
  9. grin

    Backup progress percent or bar or else?

    @t.lamprecht any chance on this?
  10. grin

    Minimally required packages for external ceph cluster

    Hello there! I have tried to search the forum but found no definite answer, and the documentation seems to only mention "hyper-converged" clusters (where the Ceph daemons are on the Proxmox nodes, which, based on experience, is not a good idea in the long run; a storage.cfg sketch for the external case follows this list). Proxmox sees the cluster storage...
  11. grin

    Problem with pseudoterminal in container

    It doesn't seem to be resolved and I observed the same on v7.1. ptmx is mounted 0000 root:root and fstab is empty.

        devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=000)
        devpts on /dev/ptmx type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=000)
        devpts on...
  12. grin

    bugs related to ceph and disk move

    The easiest way to test is to create two separate Ceph clusters and create the same pool on both. Indeed, it's possible that the target may contain the source and rsync happily does nothing. I believe rbd does the wrong thing as well, since it can't map into the /dev/rbd/<pool>/<image> schema the...
  13. grin

    bugs related to ceph and disk move

    Summary: ceph krbd cannot handle the same pool name on multiple clusters, mainly because /dev/rbd/<poolname>/<imagename> does not consider the cluster-id (see the sketch after this list). This means that most of the PVE functions relying on a "working rbd call" fail silently, mostly due to rbd map failing to map samepool/sameimage from a...
  14. grin

    bugs related to ceph and disk move

    I had time to test, and my guess was wrong: it is not about permissions. The move-volume is buggy: it seems it cannot move images between rbd storages. My guess, without checking the code, is that it forgets to map the target image. The result is fatal anyway: the copy will fail and the...
  15. grin

    bugs related to ceph and disk move

    For the record, the permissions on the source are 'client.admin' (everything allow *), and the destination is:

        caps: [mon] allow rwx
        caps: [osd] profile rbd pool=C64
  16. grin

    How to update Proxmox safely?

    And don't try to skip reboots. Multiple Proxmox components (lxcfs comes to mind) cannot survive the upgrade, especially the one from 5 to 6. Also, old Proxmox neglected to freeze the HA manager and had a nasty habit of rebooting in the middle of the upgrade, and I am not sure this is mentioned in...
  17. grin

    Which events can trigger automatic pve nodes reboot?

    The nodes reboot when there is no quorum, BUT that may happen for various reasons, including high local load, network packet or connectivity loss (often due to high network load), or any kind of iowait preventing the HA manager (or corosync) from updating its state. I am not sure whether you can...
  18. grin

    Create VM larger than 2TB

    You can safely convert DOS to GPT, even with fdisk. At least I have done it multiple times years ago (it was possibly gdisk, but as far as I remember the same functionality is now in fdisk as well), and backing up the MBR (which you should do anyway!) is also really simple (see the sketch after this list).
  19. grin

    bugs related to ceph and disk move

    I'll be brief. 1) When moving between different ceph pools that have different permissions (I don't yet know which one it chokes on), and you see this in the move log:

        mount: /var/lib/lxc/2003/.copy-volume-1: WARNING: source write-protected, mounted read-only.

    then expect the destination...
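
Sketches referenced above

Regarding result 3 (missing `ceph-volume` on a filestore node): a minimal recovery sketch, assuming the standard Proxmox/Ceph Debian packaging; the `simple` subcommands apply to legacy ceph-disk/filestore OSD layouts, and exact behaviour depends on the Ceph release.

```
# ceph-volume is only Recommended, not Depended on, so an upgrade can
# leave it uninstalled; reinstalling is harmless if it is already there.
apt install ceph-volume

# For legacy ceph-disk / filestore OSDs: scan the existing OSD data and
# re-activate it so the data partitions get mounted and the OSDs can start.
ceph-volume simple scan
ceph-volume simple activate --all
```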
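
Regarding result 4 (ordering in the network config): a sketch of the order meant in /etc/network/interfaces, i.e. physical interfaces, then bridges, then VLAN interfaces. Interface names, the VLAN tag and the addresses below are made up for illustration.

```
# 1) physical interfaces first
auto eno1
iface eno1 inet manual

# 2) then the bridges built on top of them
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes

# 3) VLAN interfaces last, referring to the bridge defined above
auto vmbr0.20
iface vmbr0.20 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
```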
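
Regarding results 6, 7 and 10 (external Ceph cluster): a sketch of how an external RBD pool is typically declared in /etc/pve/storage.cfg on the PVE side; the storage id, pool name, monitor addresses and user are placeholders.

```
rbd: ext-ceph
        content images,rootdir
        krbd 0
        monhost 10.0.0.11 10.0.0.12 10.0.0.13
        pool vm-pool
        username ext-user
```

PVE then expects the matching client keyring at /etc/pve/priv/ceph/<storage-id>.keyring (ext-ceph.keyring in this sketch) and talks to the monitors directly with that identity.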
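
Regarding results 12-15 (the same pool name on two clusters): a sketch of where the krbd mapping collides. The cluster name and image name are made up (the pool name C64 is taken from result 15); the point is only that the udev-created path carries no cluster identity.

```
# Map an image from the first cluster (default /etc/ceph/ceph.conf):
rbd map C64/vm-2003-disk-0
# udev creates the symlink /dev/rbd/C64/vm-2003-disk-0

# Map the identically named image from a second cluster:
rbd --cluster backup map C64/vm-2003-disk-0
# ...which wants to land on the very same /dev/rbd/C64/vm-2003-disk-0,
# so anything that looks the device up as /dev/rbd/<pool>/<image> gets
# the wrong (or an already existing) mapping instead of a clean error.
```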
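
Regarding result 18 (DOS to GPT conversion): a minimal sketch of the back-up-then-convert flow; /dev/sdX is a placeholder, and sgdisk (from the gdisk package) is shown because its conversion switch is explicit, while recent fdisk can rewrite the label type as well.

```
# Back up the first sector (MBR: partition table + boot code) before touching it.
dd if=/dev/sdX of=/root/sdX-mbr.bak bs=512 count=1

# Convert the existing DOS/MBR label to GPT in place.
sgdisk --mbrtogpt /dev/sdX
```

The dd image only covers the first 512 bytes, so treat it as a safety net for the old partition table rather than a full undo of the GPT conversion.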
