A CentOS/Alma 8 based kernel, or CentOS/Alma 8 with the CloudLinux kernel?
If you are using the CloudLinux kernel, then it's the same on CentOS 7, CentOS 8, or CentOS 1 if that existed,
but it could point to whether this is really something from cPanel or CloudLinux.
In my case it also happened with cPanel + CloudLinux,
but some cPanel + CloudLinux setups work and others don't; some crash within a day and some fail on random days. It doesn't matter whether I'm using PBS, NFS, or a local disk; some crash internally. That's the disadvantage of maintaining a very old kernel and trying to patch it...
Are you using the NVMe disks as cache devices, or only for the system?
Something useful for that:
https://pbs.proxmox.com/docs/system-requirements.html
If HDDs are used: Using a metadata cache is highly recommended, for example, add a ZFS special device mirror.
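For reference, adding a mirrored special device to an existing pool looks roughly like this. The pool name and device paths below are placeholders, not from this thread, so adjust them to your setup (and note that a special vdev generally can't be casually removed later):

```shell
# Sketch only: pool name "tank" and the NVMe device paths are assumptions.
# Add a mirrored special vdev so ZFS stores metadata on the fast devices:
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally let small blocks land on the special vdev as well:
zfs set special_small_blocks=4K tank

# Verify the new layout:
zpool status tank
```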
hello,
I have up to 16 nodes in a Proxmox cluster, and corosync is constantly showing retransmits in the logs:
Jan 05 13:38:15 node2 corosync[18227]: [TOTEM ] Retransmit List: 66a9
Jan 05 13:38:23 node2 corosync[18227]: [TOTEM ] Retransmit List: 6708
Jan 05 13:38:30 node2 corosync[18227]...
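Constant retransmits usually point at the corosync network itself (latency, packet loss, or a saturated link). A first-pass check on one node could look like this; the peer IP is a placeholder:

```shell
# Show link status and ring addresses for the local node:
corosync-cfgtool -s

# Show quorum and membership as Proxmox sees it:
pvecm status

# Sample latency on the corosync network (replace with a real peer address):
ping -c 100 -i 0.2 192.168.1.2
```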
hello,
now when I install a new Proxmox, it doesn't come with ksmsharing installed by default.
I installed ksmsharing with this:
and configured everything, but when I try to run it, it doesn't work.
I'm looking at the ksmsharing source code, and:
the official repo calculates the committed RAM with this:
this doesn't...
hello,
sorry for the delay,
Yes! Solved with your commands; I executed this:
mkdir /backup
chown backup:backup /backup
# all the other mentioned dirs just don't exist
mv /vm /backup/
nano /etc/proxmox-backup/datastore.cfg
# and changed the path config to /backup
Thanks!
Hello,
I'm using apt-get dist-upgrade or apt full-upgrade with Proxmox, but not with PBS; I'm documenting this to run future upgrades.
My datastore is a ZFS pool mounted directly at /
# df -h
Filesystem Size Used Avail Use% Mounted on
udev XXG 0 XXG 0% /dev
tmpfs...
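For the record, the sequence I mean is the standard APT one; as far as I know it is the same pattern on both Proxmox VE and PBS:

```shell
# Refresh package lists, then upgrade, allowing packages to be added/removed:
apt update
apt full-upgrade   # equivalent to apt-get dist-upgrade for this purpose
```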
hello,
when I start the garbage collector in Proxmox Backup Server version 0.8-X, this occurs:
2020-11-13T19:09:36+02:00: starting garbage collection on store ssd2
2020-11-13T19:09:36+02:00: Start GC phase1 (mark used chunks)
2020-11-13T19:09:47+02:00: TASK ERROR: cannot continue...
hello,
This improves the speed of backups; it's a temporary fix until they solve it, but restores are still slow :'c I'm trying to solve it, but for now restores are still slow.
Regards,
hello,
If anyone has this problem, removing the disk from fstab and adding this to crontab works for me:
@reboot mount /dev/pve/data /directory_of_disk (I feel dirty for using this)
I think this is caused because Linux can't mount the disk at boot (obviously);
for some reason some LVM thing is not loaded...
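If the root cause really is LVM not activating the volume group in time, a slightly less hacky variant of the same workaround (a sketch, not something confirmed in this thread) is to activate the volume groups explicitly before mounting:

```shell
# Hypothetical replacement for the bare @reboot mount line:
# first activate all LVM volume groups, then mount the logical volume.
vgchange -ay
mount /dev/pve/data /directory_of_disk
```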
hello,
I formatted all the disks and recreated them without LVM :D, no LVM, no problem.
I can't fear every update; in my experience, without LVM I don't have such serious problems, and this isn't the first problem I've had with LVM.
Thanks for the review,
Regards,