Search results

  1. Import ZFS pools by device scanning was skipped because of an unmet condition check

    Dear, yes, this just happens on reboot, but then everything works flawlessly and the pool is correctly imported. Anyway, I tried your code the first time with 5 seconds and it didn't work, so I tried with 15 seconds and it's the same. So no fix at the moment.
  2. HP Agentless Management Service

    Just some feedback from my experience: if you are on an HPE Gen8 DL360p with an HP H220 HBA card in IT/JBOD mode, which is usually installed to get direct access to the drives with ZFS, you need this package version of hp-ams to make it work with iLO, or you will get false positive errors about...
  3. Import ZFS pools by device scanning was skipped because of an unmet condition check

    Dear all, I have a couple of HP Gen8 DL360s running the latest Proxmox 8.1.3 with the same issue: when they start I can clearly see a critical red error on screen, cannot import 'tank-zfs': no such pool available, but then both start up fine without any issue. Both servers (node4 and node5) are using an...
  4. Synch job, transfer last not working as expected

    So the correct approach to retain the latest 7 snapshots on the second PBS (considering that the first PBS has a lot more) is to transfer the latest 7 through the sync job and after that run a purge job that again retains the latest 7, because without the purge job, the sync job will add 7 every day...
  5. Synch job, transfer last not working as expected

    Dear all, I have 2 PBS instances on the same LAN; one is for syncing backups from the other. So I'm using the remote sync job and I have set the option "transfer last" to 7, but every day I see the number of backups increasing instead of staying at seven, and it is not transferring the same number of the...
  6. monitor space of second hard drive attached to a guest LXC or KVM

    Yes, you are right, I was talking about LXCs; I edited the post. Anyway, it would be very useful in KVM too, but this is not monitored even with the guest tools installed.
  7. monitor space of second hard drive attached to a guest LXC or KVM

    In the Proxmox GUI, if I click on VM name -> Summary I can see the live bootdisk size, which is very useful, but is there a way to live-monitor the other hard disks added to the same LXC?
  8. [SOLVED] What service to restart after root disk full

    OK, I fixed it without rebooting. So, for anyone facing the same problem after a full root local disk in a Ceph cluster, if you want to get things back in order without rebooting the servers, my procedure was: restart all mons on the affected servers, i.e. systemctl restart ceph-mon@node1.service systemctl...
  9. [SOLVED] What service to restart after root disk full

    Sorry, this ahslog is something related to HPE services; it probably wasn't working even before. So all services are OK and Ceph health is OK, but if I systemctl restart pveproxy the console will get stuck again.
  10. [SOLVED] What service to restart after root disk full

    I tried to restart logrotate and I managed to restart all the other red services except ahslog, which is still red, so I tried root@node1:/tmp# systemctl status ahslog × ahslog.service - Active Health Service Logger Loaded: loaded (/lib/systemd/system/ahslog.service; enabled; preset: enabled)...
  11. [SOLVED] What service to restart after root disk full

    Yes, they have, and I tried systemctl restart chronyd on all nodes and nothing changed. So on the affected nodes I tried systemctl restart ceph-mon@node1.service and systemctl restart ceph-mon@node2.service, and now I can see a healthy Ceph cluster on the unaffected node, but the other nodes are still...
  12. [SOLVED] What service to restart after root disk full

    And this is what I can see in the GUI accessing from one of the working nodes, but as I said, I can access all VMs and LXCs. I'm a little scared about what will happen to Ceph if I reboot the 2 nodes.
  13. [SOLVED] What service to restart after root disk full

    Sure, it seems related to Ceph, but all VMs and LXCs are working: root@node1:~# journalctl -xe Oct 02 11:32:34 node1 ceph-osd[4449]: 2023-10-02T11:32:34.271+0200 7faacae716c0 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before...
  14. [SOLVED] What service to restart after root disk full

    It just gets stuck with no output, so I tried root@node1:~# systemctl status pveproxy ● pveproxy.service - PVE API Proxy Server Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; preset: enabled) Active: deactivating (final-sigterm) (Result: timeout) since Mon 2023-10-02...
  15. [SOLVED] What service to restart after root disk full

    I made a mistake in my 5-node Ceph cluster: for my new backup schedule I selected the root local storage on some nodes and it went full. Today everything works, but I have no access to the GUI of the affected nodes (I receive connection refused). All VMs and LXCs are working fine. I deleted...
  16. Container random crash with error autofs resource busy

    Dear all, I have a privileged Debian 11-based container that is a LAMP web server with a single web app, developed by myself, that worked for years without any issues. This app needs to access some Windows shared folders on the PC of the operator who uses the app; to make this as reliable as possible...
  17. Is 802.3ad bonding still not supported for corosync?

    Thank you, dear. After your useful information I checked my MLAG environment again to see if something was wrong on the switch side. It turns out that with MikroTik, if you use more than one bridge, the second one will use the CPU for switching, so no hardware offloading features, and this was...
  18. Is 802.3ad bonding still not supported for corosync?

    I'm building a new Proxmox cluster and I want to use MLAG + separate VLANs for Ceph, LAN, and Corosync. Everything is working, linked, and pingable, but I'm facing random errors only in my Corosync network, similar to [KNET ] host: host: 3 has no active links 802.3ad bond [TOTEM ] Retransmit...
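The ZFS import results (items 1 and 3) discuss delaying the scan-based pool import so disks behind a controller have time to appear before the import runs. A minimal sketch of that kind of workaround is a systemd drop-in for the import unit; the file path and the 15-second value here are assumptions mirroring the snippet, and note the poster reports that neither 5 nor 15 seconds fixed the boot message for them:

```ini
# /etc/systemd/system/zfs-import-scan.service.d/delay.conf
# Hypothetical drop-in: pause before scanning for pools so that disks
# behind an HBA (e.g. the HP H220 from item 2) have time to show up.
[Service]
ExecStartPre=/bin/sleep 15
```

Run `systemctl daemon-reload` afterwards so the drop-in is picked up on the next boot.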
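The recovery sequence quoted across the "[SOLVED] What service to restart after root disk full" snippets (free space on the root filesystem, restart the Ceph monitors, then the Proxmox services) can be sketched roughly as below. The node names (node1, node2) and the exact service list are assumptions taken from the snippets; adapt them to your cluster before running anything:

```shell
#!/bin/sh
# Sketch of the no-reboot recovery described in the thread above.

# Current root filesystem usage as a bare number, e.g. "42" for 42%.
root_usage_pct() {
    df --output=pcent / | tail -n 1 | tr -dc '0-9'
}

echo "Root usage: $(root_usage_pct)%"

# 1. Free space first (remove the stray backups), then force a log rotation:
#    logrotate -f /etc/logrotate.conf
#
# 2. Restart the Ceph monitors on each affected node:
#    systemctl restart ceph-mon@node1.service
#    systemctl restart ceph-mon@node2.service
#
# 3. Restart the Proxmox API/GUI services to clear the "connection refused":
#    systemctl restart pveproxy pvedaemon
```

Checking the usage percentage first is just a sanity guard: restarting pveproxy while the root disk is still full reportedly leaves the console stuck again, so the restarts only make sense once space has actually been freed.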
