Dear, yes, this just happens on reboot, but then everything works flawlessly and the pool is correctly imported. Anyway, I tried your code, first with 5 seconds and it didn't work, then with 15 seconds and it's the same. So no fix at the moment.
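For context for anyone landing here, the kind of delay workaround being discussed is usually a systemd drop-in along these lines (the unit name, drop-in path and sleep value are assumptions on my side, not necessarily the exact snippet from this thread):

# /etc/systemd/system/zfs-import-cache.service.d/delay.conf
[Service]
# give the controller time to present the disks before the pool import runs
ExecStartPre=/bin/sleep 15

followed by systemctl daemon-reload and a reboot to test it.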
Just some feedback from my experience: if you are on an HPE Gen8 DL360p with an HP H220 HBA card in IT/JBOD mode, which is usually installed to give ZFS direct access to the drives, you need this package version of hp-ams to make it work with iLO, or you will get false positive errors about...
Dear all,
I have a couple of HP Gen8 DL360 servers running the latest Proxmox 8.1.3 with the same issue: when they start I can clearly see a critical red error on screen
cannot import 'tank-zfs': no such pool available
but then both start fine without any issue. Both servers (node4 and node5) are using an...
So the correct approach to retain the latest 7 snapshots on the second PBS (considering that the first PBS has a lot more) is to transfer the latest 7 through the sync job and after that run a prune job that again retains the latest 7, because without the prune job the sync job will add 7 more every day...
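As a rough sketch of those two pieces on the second PBS (the job IDs, datastore name and schedule are placeholders, and the exact flags can differ between PBS versions, so check proxmox-backup-manager prune-job --help first):

proxmox-backup-manager sync-job update <sync-job-id> --transfer-last 7
proxmox-backup-manager prune-job create <prune-job-id> --store <datastore> --schedule daily --keep-last 7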
Dear all, I have 2 PBS instances in the same LAN, one syncing backups from the other. So I'm using a remote sync job and I have set the "transfer last 7" option, but every day I see the number of backups increasing instead of staying at seven, and it is not transferring the same number as the...
Yes, you are right, I was talking about LXCs; I edited the post. Anyway, it would be very useful for KVM too, but there it is not monitored even with the guest tools installed.
In the Proxmox GUI, if I click on the VM name -> Summary I can see the live Bootdisk size, which is very useful, but is there a way to live monitor the other hard disks added to the same LXC?
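From the CLI, something like this shows the current usage of all mount points of a container (101 is just a placeholder ID), although it's not the live graph I'm asking about for the GUI:

pct df 101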
OK, I fixed it without rebooting. So, just for anyone facing the same problem after a full root local disk in a Ceph cluster, if you want to get things healthy again without rebooting the servers, my procedure was:
restart all mons on the affected servers, i.e.
systemctl restart ceph-mon@node1.service
systemctl...
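After each mon restart I would also verify quorum and overall health before moving to the next node (standard Ceph commands, nothing specific to this case):

ceph quorum_status --format json-pretty
ceph -s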
Sorry, this ahslog is something related to the HPE services and probably it wasn't working even before, so all services are OK and Ceph health is OK, but if I systemctl restart pveproxy the console gets stuck again
I tried to restart logrotate and I managed to restart all the other red services except ahslog, which is still red, so I tried
root@node1:/tmp# systemctl status ahslog
× ahslog.service - Active Health Service Logger
Loaded: loaded (/lib/systemd/system/ahslog.service; enabled; preset: enabled)...
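To see why that unit keeps failing, the generic systemd check (nothing HPE-specific) is to look at its log for the current boot:

journalctl -u ahslog.service -b --no-pager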
Yes, they have, and I tried systemctl restart chronyd on all nodes and nothing changed, so I tried on the affected nodes
systemctl restart ceph-mon@node1.service
systemctl restart ceph-mon@node2.service
and now I can see a healthy Ceph cluster on the unaffected node, but the other nodes are still...
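For completeness, the clock-skew side can be cross-checked with these standard commands:

chronyc tracking          # local offset as seen by chrony
ceph time-sync-status     # skew as measured between the monitors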
and this is what I can see in the GUI accessing from one of the working nodes
but as I said, I can access all VMs and LXCs; I'm a little scared about what will happen to Ceph if I reboot the 2 nodes.
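If I do end up rebooting them, my plan would be the usual one (as far as I know this is standard practice, not something specific to this thread): set noout before the reboot and clear it once the node is back:

ceph osd set noout
ceph osd unset noout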
Sure, it seems related to Ceph, but all VMs and LXCs are working
root@node1:~# journalctl -xe
Oct 02 11:32:34 node1 ceph-osd[4449]: 2023-10-02T11:32:34.271+0200 7faacae716c0 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before...
It just gets stuck, with no output at all, so I tried
root@node1:~# systemctl status pveproxy
● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; preset: enabled)
Active: deactivating (final-sigterm) (Result: timeout) since Mon 2023-10-02...
I made a mistake in my 5-node Ceph cluster: for my new backup schedule I selected the local root storage on some nodes and it went full. Today everything works, but I have no access to the GUI of the affected nodes (I get connection refused). All VMs and LXCs are working fine. I deleted...
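For anyone hitting the same thing, a quick way to see what filled the root filesystem (the paths are just examples; with the default "local" storage the dumps usually end up under /var/lib/vz/dump) is:

df -h /
du -xh -d1 /var | sort -h | tail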
Dear all,
I have a privileged Debian 11-based container that is a LAMP web server with a single web app, developed by myself, that has worked for years without any issues. This app needs to access some Windows shared folders on the PC of the operator who uses the app; to make this as reliable as possible...
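Just to illustrate the kind of access involved, a CIFS mount of one of those shares from inside the privileged container would look roughly like this (host name, share, mount point and credentials are placeholders, not my real setup):

apt install cifs-utils
mount -t cifs //operator-pc/shared /mnt/operator -o username=operator,password=secret,vers=3.0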
Thank you, dear, after your useful information I checked my MLAG environment again to understand if something was wrong on the switch side. It turns out that with MikroTik, if you use more than one bridge, the second one will do switching in the CPU, so no hardware offloading, and this was...
I'm building a new Proxmox cluster and I want to use MLAG + separate VLANs for Ceph, LAN and corosync. Everything is working, linked and pingable, but I'm facing random errors only on my corosync network, similar to
[KNET ] host: host: 3 has no active links 802.3ad bond
[TOTEM ] Retransmit...
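When those messages appear, a quick way to check the knet link state as corosync sees it on each node is the standard tool:

corosync-cfgtool -s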