Fabian, by the time you answered me I had already run the apt upgrade command without dist-, and it seemed to work, because a message came up saying I was going to update to version 7.
Then, when it was done, I ran what you told me, since I don't use Ceph, and then apt dist-upgrade, and finished the...
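For anyone following along: a minimal sketch of the full 6-to-7 sequence, assuming the sources.list entries have already been switched to bullseye:

root@pve:~# apt update
root@pve:~# apt dist-upgrade    # plain 'apt upgrade' is not enough for a major release jump
root@pve:~# pve6to7 --full      # re-run the checker until it reports no failures or warnings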
pve6to7:
pve6to7 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =
Checking for package updates..
WARN: updates for the following packages are available:
proxmox-widget-toolkit, corosync, libnozzle1, libqb100, perl-base, libpolkit-gobject-1-0, python-six, libcrypt-ssleay-perl...
Hey,
I have 3 nodes in a cluster. I already upgraded 2 of them, but when I try to upgrade the one I have left, I always get this error:
After this operation, 8,862 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You...
I want to say the following:
Previously I used ESXi; when I migrated to Proxmox, the virtual machines ended up on local disks.
Now I want to use local-lvm, which I think is more efficient and faster.
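Moving a disk from a directory storage to local-lvm can be done live from the CLI; a sketch, assuming a VM with ID 100 whose disk is attached as scsi0 (adjust both to your setup):

root@pve:~# qm move_disk 100 scsi0 local-lvm --delete 1

The --delete flag removes the old image on the source storage once the move succeeds; note the image also becomes a raw LVM-thin volume, since local-lvm does not support qcow2.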
Hi all,
I have a cluster with 3 nodes (pve, pve1, pve2)
Here the version information:
root@pve:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-11
pve-kernel-helper: 6.4-11
pve-kernel-5.3: 6.1-6...
Fabian_E, Good morning again,
You're right, the keys in /var/lib/ceph/mon/ceph-<nodename>/keyring are different. pve and pve2 have the same key, but pve1 does not.
The @RokaKen suggestion doesn't show; it just says: This member limits who may view their full profile. :oops:
I have not created any OSD...
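A quick way to verify which mon keyrings actually differ is to hash them on each node (paths follow the node names, as above):

root@pve:~# md5sum /var/lib/ceph/mon/ceph-pve/keyring
root@pve1:~# md5sum /var/lib/ceph/mon/ceph-pve1/keyring
root@pve2:~# md5sum /var/lib/ceph/mon/ceph-pve2/keyring

Matching hashes mean identical keys; in a healthy cluster all monitors share the same mon. key.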
Hey Fabian_E,
Now I see the result of rbd -p pve ls; here it is:
root@pve:~# rbd -p pve ls
2021-09-20T08:04:27.223-0400 7f2b4bd283c0 0 monclient(hunting): authenticate timed out after 300
2021-09-20T08:09:27.223-0400 7f2b4bd283c0 0 monclient(hunting): authenticate timed out after 300...
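That authenticate timeout usually means the client never reaches a monitor it can authenticate against. A few diagnostics that avoid the 300-second hang (monitor ID pve assumed, matching the node name):

root@pve:~# systemctl status ceph-mon@pve       # is the local monitor even running?
root@pve:~# ceph -s --connect-timeout 10        # fail fast instead of hanging for 300s
root@pve:~# journalctl -u ceph-mon@pve -n 50    # recent monitor log entries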
Hey Fabian_E,
Thanks a lot for your support.
The content of /etc/pve/storage.cfg
root@pve:~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content images,vztmpl,iso
maxfiles 10
shared 0
lvmthin: local-lvm
thinpool data
vgname pve...
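For comparison, a Ceph RBD storage entry in storage.cfg on a hyper-converged setup would typically look like this (the pool name pve is only an assumption, taken from the rbd command used earlier in the thread):

rbd: pve
	content images,rootdir
	pool pve
	krbd 0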
Hello @Fabian_E
Thanks for answering.
I checked what you told me on the 3 nodes; below are the images from each of them. Their names are pve, pve1 and pve2.
PVE: [screenshot]
PVE1: [screenshot]
PVE2: [screenshot]
Good morning everyone,
I have a cluster of 3 Proxmox servers under version 6.4-13.
Last Friday I upgraded Ceph from Nautilus to Octopus, since that is one of the prerequisites for upgrading Proxmox to version 7.
At first everything worked perfectly.
But today when I checked, I found that it is giving me the...
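Before moving on to the PVE 7 upgrade it is worth confirming the Octopus upgrade actually completed on every daemon, e.g.:

root@pve:~# ceph versions    # every mon/mgr/osd should report 15.2.x (octopus)
root@pve:~# ceph -s          # cluster should be HEALTH_OK before touching the next node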