Search results

  1. Error upgrading pve 6 to 7

    Fabian, by the time you answered me I had already run the apt upgrade command (without dist-) and it seemed to work, because a message came up saying it was going to update to version 7. Then, when it was done, I ran what you told me, since I don't use Ceph, and then apt dist-upgrade, and finished the...
  2. Error upgrading pve 6 to 7

    pve6to7: pve6to7 --full = CHECKING VERSION INFORMATION FOR PVE PACKAGES = Checking for package updates.. WARN: updates for the following packages are available: proxmox-widget-toolkit, corosync, libnozzle1, libqb100, perl-base, libpolkit-gobject-1-0, python-six, libcrypt-ssleay-perl...
  3. Error upgrading pve 6 to 7

    Hey, I have 3 nodes in a cluster. I already upgraded 2 of them, but when I try to upgrade the one I have left, I always get this error: After this operation, 8,862 kB of additional disk space will be used. Do you want to continue? [Y/n] y W: (pve-apt-hook) !! WARNING !! W: (pve-apt-hook) You...
  4. Convert storage from local to local-lvm

    I want to say the following: previously I used ESXi, and when I migrated to Proxmox, the virtual machines ended up on local disks. Now I want to use local-lvm; I think it is more efficient and faster.
  5. Convert to CT

    The virtual machine is in production; it has a Postfix server installed.
  6. Convert storage from local to local-lvm

    Is it possible, is there any way, to convert local storage to local-lvm?
  7. Convert to CT

    Is it possible, is there any way, to convert an Ubuntu 20.04 virtual machine to a container with the same OS?
  8. Problem with mds

    root@pve:~# ceph -s cluster: id: 8bfacf0e-e4e2-4c1e-a4b4-a3978cbc0bc5 health: HEALTH_WARN 1 MDSs report slow metadata IOs Reduced data availability: 160 pgs inactive OSD count 0 < osd_pool_default_size 3 services: mon: 3 daemons, quorum...
  9. Problem with mds

    Thanks a lot, t.lamprecht. I created a CephFS, but none of the MDS daemons become active, look: And now one of them is stuck in the creating state...
  10. Problem with mds

    Hi all, I have a cluster with 3 nodes (pve, pve1, pve2) Here the version information: root@pve:~# pveversion -v proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve) pve-manager: 6.4-13 (running version: 6.4-13/9f411e79) pve-kernel-5.4: 6.4-11 pve-kernel-helper: 6.4-11 pve-kernel-5.3: 6.1-6...
  11. Ceph Error

    Hey Fabian_E. I created the MDS yesterday on one of my nodes... and today it still has the status "creating"! Is that normal?
  12. Ceph Error

    Fabian_E: Now all the monitor nodes are up, and the mgr too... but look, it's all errors. Can you help me?
  13. Ceph Error

    Fabian_E, good morning again. You're right, the keyrings in /var/lib/ceph/mon/ceph-<nodename>/keyring are different: pve and pve2 have the same one, but pve1 does not. The @RokaKen suggestion doesn't show; it said: "This member limits who may view their full profile." :oops: I have not created any OSD...
  14. Ceph Error

    Fabian_E, thanks a lot for your support. Everything seems to be better; I only have this:
  15. Ceph Error

    Hey Fabian_E, running systemctl status ceph-mon@<nodename>.service on all nodes gives this: node PVE: root@pve:~# systemctl status ceph-mon@pve.service ● ceph-mon@pve.service - Ceph cluster monitor daemon Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled; vendor...
  16. Ceph Error

    Hey Fabian_E, now I see the result of rbd -p pve ls; it is this: root@pve:~# rbd -p pve ls 2021-09-20T08:04:27.223-0400 7f2b4bd283c0 0 monclient(hunting): authenticate timed out after 300 2021-09-20T08:09:27.223-0400 7f2b4bd283c0 0 monclient(hunting): authenticate timed out after 300...
  17. Ceph Error

    Hey Fabian_E, Thanks a lot for your support. The content of /etc/pve/storage.cfg root@pve:~# cat /etc/pve/storage.cfg dir: local path /var/lib/vz content images,vztmpl,iso maxfiles 10 shared 0 lvmthin: local-lvm thinpool data vgname pve...
  18. Ceph Error

    This is the contents: root@pve:/etc/ceph# cat ceph.conf [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx cluster_network = 10.12.17.0/24 fsid = 8bfacf0e-e4e2-4c1e-a4b4-a3978cbc0bc5...
  19. Ceph Error

    Hello @Fabian_E, thanks for answering. I checked what you told me on the 3 nodes; below are the images from each of them. Their names are pve, pve1 and pve2. PVE: PVE1: PVE2:
  20. Ceph Error

    Good morning everyone, I have a cluster of 3 Proxmox servers on version 6.4-13. Last Friday I updated Ceph from Nautilus to Octopus, since that is one of the requirements for upgrading Proxmox to version 7. At first everything worked wonderfully, but today when I checked I found that it is giving me the...
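The upgrade threads above (results 1-3) follow the documented PVE 6-to-7 procedure. A minimal sketch of the commands involved, run as root on each node after the apt sources have been switched to bullseye; check the official upgrade guide before relying on it:

```shell
# Readiness check shipped with PVE 6.4: resolve every FAIL and
# review each WARN (like the pending-updates warning in result 2)
pve6to7 --full

# Refresh package lists, then run the full distribution upgrade.
# Plain 'apt upgrade' is NOT sufficient (see result 1): only
# 'dist-upgrade' resolves the package additions and removals
# the major-version jump requires.
apt update
apt dist-upgrade
```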
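The Ceph threads above (results 8-20) revolve around monitor authentication failures. A short sketch of the diagnostic commands they mention; the checksum comparison at the end is our own suggestion for spotting the keyring mismatch described in result 13, not something quoted in the threads:

```shell
# Cluster-wide health overview; HEALTH_WARN / HEALTH_ERR details show up here
ceph -s

# Per-node monitor daemon state; substitute each node's name (pve, pve1, pve2)
systemctl status ceph-mon@pve.service

# The monitor keyrings must be identical on every node; differing checksums
# would explain "authenticate timed out" errors like the rbd output above
md5sum /var/lib/ceph/mon/ceph-*/keyring
```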
