Search results

  1. [SOLVED] Help with wrong partitioning after replacing ZFS drive

    So this time ChatGPT helped me a lot. The steps for fixing this were: first we destroy the partition table of the disk with sgdisk --zap-all /dev/sdb; then we need to reboot to get the new partition table, after which zpool status will report the disk as 10387678568864950468. After the reboot we copy the partition...
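
    A minimal sketch of that repair sequence, assuming the healthy boot disk is /dev/sda, the replacement is /dev/sdb and the pool is rpool (device and pool names are assumptions; the numeric ID is whatever zpool status reports for the missing member):

      # wipe the wrong partition table on the replacement disk
      sgdisk --zap-all /dev/sdb
      # (reboot here; zpool status now lists the member by its numeric GUID)
      # copy the partition layout from the healthy disk, then
      # randomize the GUIDs so the two disks do not collide
      sgdisk /dev/sda -R /dev/sdb
      sgdisk -G /dev/sdb
      # put the replacement's ZFS partition (partition 3 on a
      # typical Proxmox boot mirror) back into the pool
      zpool replace rpool 10387678568864950468 /dev/sdb3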
  2. [SOLVED] Help with wrong partitioning after replacing ZFS drive

    Dear all, can someone help me fix my partition table? It looks like the attached one now; you will notice the difference in /dev/sdb, which is the replaced disk, missing the BIOS and EFI partitions. I didn't follow the right instructions provided here...
  3. Import ZFS pools by device scanning was skipped because of an unmet condition check

    Yes, this just happens on reboot, but then everything works flawlessly and the pool is correctly imported. Anyway, I tried your code, first with 5 seconds and it didn't work, then with 15 seconds and it's the same. So no fix at the moment.
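
    For reference, the delay being tested could look like this sketch: a systemd drop-in that sleeps before the import unit starts. The 15-second value is the one from the post; the unit name zfs-import-scan.service (the unit whose skip message gives this thread its title) is an assumption.

      # create a drop-in that delays pool import so devices have
      # time to appear (targeted unit name is an assumption)
      mkdir -p /etc/systemd/system/zfs-import-scan.service.d
      printf '[Service]\nExecStartPre=/bin/sleep 15\n' \
          > /etc/systemd/system/zfs-import-scan.service.d/delay.conf
      systemctl daemon-reload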
  4. HP Agentless Management Service

    Just some feedback from my experience: if you are on an HPE Gen8 DL360p with an HP H220 HBA card in IT/JBOD mode, which is usually installed to give ZFS direct access to the drives, you need this package version of hp-ams to make it work with iLO, or you will get false-positive errors about...
  5. Import ZFS pools by device scanning was skipped because of an unmet condition check

    Dear all, I have a couple of HP Gen8 DL360s running the latest Proxmox 8.1.3 with the same issue: when they start I can clearly see a critical red error on screen, cannot import 'tank-zfs': no such pool available, but then both start up fine without any issue. Both servers (node4 and node5) are using an...
  6. Sync job, transfer last not working as expected

    So the correct approach to retain the latest 7 snapshots on the second PBS (considering that the first PBS has a lot more) is to transfer the latest 7 through the sync job and after that run a prune job that again retains the latest 7, because without the prune job the sync job will add 7 every day...
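
    The pruning half of that can also be done from the command line; a sketch with proxmox-backup-client, where the repository name backup-sync and the backup group vm/100 are assumptions (a scheduled prune job with keep-last=7 in the PBS GUI is equivalent):

      # dry-run first to see which snapshots would be removed
      proxmox-backup-client prune vm/100 --keep-last 7 --dry-run \
          --repository root@pam@localhost:backup-sync
      # then prune for real
      proxmox-backup-client prune vm/100 --keep-last 7 \
          --repository root@pam@localhost:backup-sync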
  7. Sync job, transfer last not working as expected

    Dear all, I have 2 PBS servers in the same LAN, one for syncing backups from the other. So I'm using the remote sync job and I have set the option transfer last to 7, but every day I see the number of backups increasing instead of staying at seven, and it is not transferring the same number of the...
  8. Monitor space of a second hard drive attached to a guest LXC or KVM

    Yes, you are right, I was talking about LXCs; I edited the post. Anyway, it would be very useful for KVM too, but that is not monitored even with the guest tools installed.
  9. Monitor space of a second hard drive attached to a guest LXC or KVM

    In the Proxmox GUI, if I click on the VM name -> Summary I can see the live Bootdisk size, which is very useful, but is there a way to live-monitor the other hard disks added to the same LXC?
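
    One workaround, until the GUI shows it, is to ask the container itself from the host; a sketch, where the container ID 101 and the mount point /mnt/data are assumptions:

      # report live usage of an extra mount point inside container 101
      pct exec 101 -- df -h /mnt/data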
  10. [SOLVED] What service to restart after root disk full

    OK, I fixed it without rebooting. So, for anyone facing the same problem after a full root local disk in a Ceph cluster, if you want to get things healthy again without rebooting the servers, my procedure was: restart all mons on the affected servers, i.e. systemctl restart ceph-mon@node1.service systemctl...
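
    Spelled out, the recovery sequence from this thread looks roughly like this sketch (node1 stands for each affected node; restarting the manager and the PVE services is how the rest of the thread reads, not a verbatim quote):

      # restart the monitor on each affected node
      systemctl restart ceph-mon@node1.service
      # restart the manager daemon as well
      systemctl restart ceph-mgr@node1.service
      # bring the Proxmox web GUI back
      systemctl restart pveproxy pvedaemon pvestatd
      # verify cluster health
      ceph -s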
  11. [SOLVED] What service to restart after root disk full

    Sorry, this ahslog is something related to HPE services; it probably wasn't working even before. So all services are OK and Ceph health is OK, but if I systemctl restart pveproxy the console gets stuck again.
  12. [SOLVED] What service to restart after root disk full

    I tried to restart logrotate and I managed to restart all the other red services except ahslog, which is still red, so I tried root@node1:/tmp# systemctl status ahslog × ahslog.service - Active Health Service Logger Loaded: loaded (/lib/systemd/system/ahslog.service; enabled; preset: enabled)...
  13. [SOLVED] What service to restart after root disk full

    Yes, they have, and I tried systemctl restart chronyd on all nodes and nothing changed, so on the affected nodes I tried systemctl restart ceph-mon@node1.service and systemctl restart ceph-mon@node2.service, and now I can see a healthy Ceph cluster from the unaffected node, but the other nodes are still...
  14. [SOLVED] What service to restart after root disk full

    And this is what I can see in the GUI when accessing from one of the working nodes, but as I said, I can access all VMs and LXCs. I'm a little scared of what will happen to Ceph if I reboot the 2 nodes.
  15. [SOLVED] What service to restart after root disk full

    Sure, it seems related to Ceph, but all VMs and LXCs are working: root@node1:~# journalctl -xe Oct 02 11:32:34 node1 ceph-osd[4449]: 2023-10-02T11:32:34.271+0200 7faacae716c0 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before...
  16. [SOLVED] What service to restart after root disk full

    It just gets stuck with no output, so I tried root@node1:~# systemctl status pveproxy ● pveproxy.service - PVE API Proxy Server Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; preset: enabled) Active: deactivating (final-sigterm) (Result: timeout) since Mon 2023-10-02...
  17. [SOLVED] What service to restart after root disk full

    I made a mistake in my 5-node Ceph cluster: for my new backup schedule I selected the root local storage on some nodes and it went full. Today everything works, but I have no access to the GUI of the affected nodes (I get connection refused). All VMs and LXCs are working fine. I deleted...
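
    For anyone hitting the same wall, a sketch of the space-reclaiming side (the paths are common Proxmox defaults and should be treated as assumptions):

      # find what is eating the root filesystem
      df -h /
      du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20
      # misplaced backups usually land under the default
      # local storage dump directory
      ls -lh /var/lib/vz/dump/
      # the journal also bloats on a struggling node; cap it
      journalctl --vacuum-size=100M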
  18. Container random crash with error autofs resource busy

    Dear all, I have a privileged Debian 11-based container that is a LAMP web server with a single web app, developed by myself, that has worked for years without any issues. This app needs to access some Windows shared folders on the PC of the operator who uses the app; to make this as reliable as possible...
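
    The autofs setup implied here would look something like this sketch; the mount root, map file name, share host and credentials path are all assumptions:

      # /etc/auto.master — hand the CIFS map to autofs with a short timeout
      /mnt/winshares  /etc/auto.cifs  --timeout=60 --ghost

      # /etc/auto.cifs — one entry per operator PC share
      operatorpc  -fstype=cifs,rw,credentials=/etc/win.creds  ://192.168.1.50/scans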