Hi everyone,
I have a simple 3-node cluster that has worked reliably for many years and survived every upgrade since Proxmox 4. After updating to Proxmox 7 and Ceph Pacific, the system is affected by this issue:
every time I reboot a node for any reason (i.e. updating to...
lately I've noticed that if I run these commands AFTER SHUTTING DOWN ALL THE VMS, so there is enough free physical RAM to make things work:
swapoff -a
wait some seconds
swapon -a
and then reboot from the web interface, the node will not hang during the reboot process... at least this way I can reboot...
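In script form, with a safety check first, this is roughly what I run (a sketch, assuming all VMs are already stopped; the awk line just prints the two values so you can compare them before draining swap):

# check first that used swap fits into available RAM (values in kB)
awk '/MemAvailable/ {avail=$2} /SwapTotal/ {st=$2} /SwapFree/ {sf=$2} END {print "available:", avail, "used swap:", st-sf}' /proc/meminfo
swapoff -a    # drains swap back into RAM, can take a while
sleep 10      # wait some seconds for the kernel to settle
swapon -a     # re-enable swap so the fstab configuration stays active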
for me there is a web interface that manages the RAID configuration, but I don't want to keep this setting disabled, so this doesn't solve the issue for me
this is an issue since Proxmox > 6.1 and there is a bug report here with no developments:
https://bugzilla.proxmox.com/show_bug.cgi?id=2333
I think this must be fixed!
I'm sorry, do you mean that all those files are not enough to fill my inodes?
Can I delete all the files with something like rm /var/lib/samba/private/*, or is there a specific samba command?
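If deleting is fine, my idea would be to archive the directory first so the files can be restored if they turn out to be needed (just a sketch of what I have in mind, not a confirmed procedure):

tar czf /root/samba-private-$(date +%F).tar.gz /var/lib/samba/private    # backup first
rm /var/lib/samba/private/*    # then remove the files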
many thanks
thank you for the reply!
this is the result in /, showing the most relevant lines:
root@nodo1:/var/lib/samba/private# du --inodes -xS | sort
--
--
--
203 ./lib/firmware
204 ./usr/share/lintian/overrides
218 ./sbin
225 ./usr/share/man/man7
234 ./usr/share/i18n/charmaps
236...
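For anyone reproducing this: sorting numerically and keeping only the tail surfaces the directories holding the most inodes (GNU du assumed, run from /):

du --inodes -xS / 2>/dev/null | sort -n | tail -20    # the 20 directories with the most inodes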
Hi to all,
I just updated my 4-node Ceph cluster to the latest Proxmox 6.2, but after that I was receiving some errors in my PVE dashboard related to the available space of Ceph's MONs. Searching with df -h I found that my root partition was around 75% full on a 136GB SAS 15k disk. At this point I was...
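For anyone else seeing the MON space warning, it is worth checking both block and inode usage of the root filesystem, since either one can fill up:

df -h /    # block usage of the root filesystem
df -i /    # inode usage, which can be exhausted even with free space left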
Thank you @Chris for the reply,
attached you can find the syslog around the timestamp of the reboot process,
if you check at
this is exactly when it gets stuck, but unfortunately I cannot see any errors.
For checking errors with systemctl, I will have to wait until the weekend because this is a production...
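In the meantime, this is how I extracted the log window around the reboot, in case a different range is useful (the timestamps below are just examples):

journalctl --since "2021-09-01 10:00" --until "2021-09-01 10:30" > reboot-window.log
journalctl -b -1 -p err    # only the errors from the previous boot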
Hi to all,
in a 3-node Ceph cluster built on 3 identical HP Gen8 DL360p servers, I always receive the attached error every time I reboot a node. Before rebooting the node I always move all the VMs to another node, so when I press reboot there are no running VMs. To fix this I have to force...
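For context, this is roughly how I drain the node before pressing reboot (a sketch; the target node name is an example from my cluster):

for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
    qm migrate "$vmid" nodo2 --online    # live-migrate each running VM to another node
done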
thank you for your reply, with Windows KVM servers I don't have any problems, this is happening only with Linux KVM. Ballooning is enabled, and I tried with fixed memory and different minimum memory too. If I'm correct, with Linux KVM I don't have to install anything for ballooning to work. Am I right?
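To double-check inside a Linux guest, the virtio balloon driver is usually built into the distro kernel; this shows whether it is actually loaded:

lsmod | grep virtio_balloon    # should list the module if ballooning is active
modprobe virtio_balloon        # load it manually if it is missing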
Hi to all,
in my Linux KVM servers I have memory usage similar to this
so free reports total memory around 8GB and available around 6GB
In my Proxmox GUI I have the following usage
as you can see, I think this is showing not the available memory but the free one; is this the correct behaviour...
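For reference, the two values come from different kernel counters: MemFree excludes page cache, while MemAvailable estimates what could actually be allocated:

free -h    # compare the "free" and "available" columns
grep -E 'MemFree|MemAvailable' /proc/meminfo    # the raw counters behind them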
I think that the only thing I can do is to create a VirtualBox environment similar to the production one, and then try to change the NICs and see what happens.
I'm sorry, but I'm writing from Italy... we usually use this term to identify a meshed network (internal to the nodes); in this particular case it is the ring network built on the three nodes. So I have 2 "internodal networks": one is for Ceph and uses both ports of the existing 10Gbs NICs (for a 3-node ring...
I need to replace a 10Gb SFP+ 2-port NIC with a similar NIC that provides 4 ports instead of 2. This particular NIC serves the internodal Ceph network in a meshed configuration, so there are no switches inside the ring. I'm in a production 3-node cluster with Ceph and the latest Proxmox. Replica...
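In case it helps, a full mesh like mine can be expressed in /etc/network/interfaces with a broadcast-mode bond over the two direct links (one variant from the Proxmox full-mesh guide; interface names and addresses below are examples):

auto bond0
iface bond0 inet static
    address 10.15.15.1/24            # unique per node: .1, .2, .3
    bond-slaves ens1f0 ens1f1        # the two ports cabled directly to the other nodes
    bond-mode broadcast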