Search results

  1. container storage content is empty

     I think that this will be fixed in the next releases, because Alwin cannot reproduce the issue on his side, but he is using the testing repositories.
  2. container storage content is empty

     Thank you anyway; I will try to look deeper into this with the next updates. Regards from Italy.
  3. container storage content is empty

     Thank you for your patience: I tried to connect to all the other nodes, even the fourth one that is only for utility stuff so it doesn't have Ceph installed, and the issue is still present from each node's view. Of course this is the reason why I'm using this kind of environment, but...
  4. container storage content is empty

     Oh, really sorry, I didn't notice the 's':

     root@nodo1:~# ceph versions
     {
         "mon": {
             "ceph version 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable)": 3
         },
         "mgr": {
             "ceph version 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable)": 3...
  5. container storage content is empty

     Yes, of course:

     root@nodo1:~# ceph version
     ceph version 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable)
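
     Note for readers: 'ceph version' reports only the local binary, while 'ceph versions' queries the whole cluster and returns a per-daemon count, which is why the trailing 's' matters above. A minimal check, assuming a working admin keyring on the node:

     ceph version      # version of the local ceph binary only
     ceph versions     # per-daemon versions (mon/mgr/osd/mds) across the cluster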
  6. container storage content is empty

     Dear, I have inspected the logs but there is no error or warning. This is really strange. As I told you, the issue appeared after the upgrade from pve5 to pve6, so something went wrong. I'm a little scared to reboot each node of the cluster for this reason, because maybe at reboot the cluster will not...
  7. container storage content is empty

     Yes, of course:

     root@nodo1:~# pveversion -v
     proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
     pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
     pve-kernel-5.0: 6.0-6
     pve-kernel-helper: 6.0-6
     pve-kernel-4.15: 5.4-7
     pve-kernel-4.13: 5.2-2
     pve-kernel-5.0.18-1-pve: 5.0.18-1...
  8. container storage content is empty

     Dear, as you can see in the attached screenshot, I just created a test machine in the ceph_ct storage and the disk is still not shown under the ceph_ct content. I know about the KRBD client, but this is a cluster that has been running for years; it was initially a Proxmox 3 and I have always updated using the...
  9. container storage content is empty

     Dear, here is the output; as you can see it is blank, while before the update it was showing correctly:

     root@nodo1:~# pvesm list ceph_ct
     root@nodo1:~# pvesm list ceph_vm
     ceph_vm:vm-100-disk-1   raw   161061273600   100
     ceph_vm:vm-101-disk-1   raw   161061273600   101
     ceph_vm:vm-102-disk-1   raw   161061273600   102...
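
     Note for readers: to tell whether the container images are missing or merely not listed, one can compare what Ceph itself sees with what Proxmox reports. A sketch, assuming the storages point at pools with the same names (the truncated output above does not show the actual pool names):

     rbd ls -p ceph_ct      # images Ceph has in the (assumed) container pool
     rbd ls -p ceph_vm      # images Ceph has in the (assumed) VM pool
     pvesm list ceph_ct     # what Proxmox lists for the same storages
     pvesm list ceph_vm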
  10. container storage content is empty

     Dear, here is my config:

     dir: local
             path /var/lib/vz
             content backup,iso,vztmpl
             maxfiles 5
             shared 0

     lvmthin: local-lvm
             thinpool data
             vgname pve
             content images,rootdir
             nodes nodo2,nodo1,nodo3

     rbd: ceph_vm
             content images...
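
     Note for readers: the snippet is cut off before the ceph_ct entry. For orientation, a typical pair of RBD storage definitions in /etc/pve/storage.cfg splits the content types roughly as sketched below; the pool names and krbd values here are placeholders, not taken from this thread:

     rbd: ceph_vm
             pool rbd_vm            # placeholder pool name
             content images         # VM disks
             krbd 0

     rbd: ceph_ct
             pool rbd_ct            # placeholder pool name
             content rootdir        # container volumes
             krbd 1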
  11. container storage content is empty

     Hi to all, after updating from Proxmox 5 to 6 and Ceph Luminous to Nautilus in a 4-node HA cluster environment, the container storage (ceph_ct) is empty and all the container disks are instead shown under the VM storage (ceph_vm). I'm attaching some pics to understand better; any solution to this?
  12. cannot start container after backup failure

     Hi, and sorry for wasting your time: do you think that the just-released kernel proxmox-ve: 5.4-2 (running kernel: 4.15.18-19-pve) fixes the issue, or do I have to update to Proxmox 6? I have a hard time understanding the changelog and it is not clear to me whether a fixed kernel is under...
  13. cannot start container after backup failure

     Thank you, I will send this as soon as possible. Regards from Italy.
  14. cannot start container after backup failure

     Dear, thanks again for your reply. I always run the latest Proxmox version, but passing from 5 to 6 is a little difficult at this moment because I'm on a production environment in the middle of the busiest season. I will upgrade to 6 in the next 2 weeks and check whether it fixes the issue...
  15. cannot start container after backup failure

     Thank you @oguz for your reply. I have a massive backup, around 10TB of VMs, from the Proxmox cluster to an external huge FreeNAS device through a 10Gb network; the entire backup takes around 4h during the night. This particular container is the one with the highest ID, so it is the last one backed up. It...
  16. cannot start container after backup failure

     I have already tried this; in the end I restarted the node. Thanks anyway, but the problem still remains: every time I try to backup/snapshot this container, the tar process gets stuck.
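
     Note for readers: a tar that hangs during vzdump usually sits in uninterruptible sleep (state D), which is why it cannot be killed and only a reboot clears it. A quick way to confirm, using standard procps tools:

     ps -eo pid,stat,wchan:32,cmd | grep -E 'tar|vzdump' | grep -v grep
     # a STAT column containing 'D' means the process is blocked in the kernel on I/O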
  17. cannot start container after backup failure

     Hi to all, I'm in a production environment on a 3-node Ceph cluster, so I know that I can migrate my VMs to the other nodes, but at this particular moment I prefer not to migrate anything because I don't want to restart the affected node. I have a container, ID 118, that after a backup failure...
  18. change postmaster email in an lxc container

     OK, sorry. So how can I change the postmaster email? I followed @heutger's suggestion and rewrote the header in Postfix; this works, but maybe there is a cleaner way.
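
     Note for readers: the header rewrite mentioned here is typically done with Postfix header checks. A minimal sketch, assuming the pcre map type is available and using a placeholder address:

     # /etc/postfix/main.cf
     smtp_header_checks = pcre:/etc/postfix/smtp_header_checks

     # /etc/postfix/smtp_header_checks
     /^From:\s.*postmaster@/ REPLACE From: admin@example.com

     # then apply the change with: postfix reload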
  19. change postmaster email in an lxc container

     I have already tried this, but it has no effect. This is my mail server log; as you can see, postmaster remains. Is LXC maybe a problem for this setting?
  20. change postmaster email in an lxc container

     Does this have to be done on the mail server or in Proxmox MG?