Search results

  1. [SOLVED] Migrate and SSH broken after upgrade 7 to 8

    Hi guys. I have a cluster with nodes proxmox18s.mydomain.net to proxmox24s.mydomain.net. They all have a public IP, plus a second LAN (192.168.150.18 to 192.168.150.24) for cluster communication. I migrated every VM out of proxmox24s (192.168.150.24) to other PVE nodes, and I updated this PVE...
  2. [SOLVED] Need help replacing disk in ZFS

    Hi Tmanok. Thank you for your reply. The resilvering finished, and ZFS removed the faulty drive by itself! The only thing is that the device name isn't pretty, but I don't really care:
  3. [SOLVED] Need help replacing disk in ZFS

    Hi guys. I run PBS with ZFS on a pool named RPOOL that contains four 4 TB drives. /dev/sdb was failing and gave tons of errors. I ran "ls -a /dev/disk/by-id/", and here is the output for this serial number, K4KJ220L: ata-HGST_HUS726040ALA610_K4KJ220L...
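    The snippet cuts off before the replacement command, but a typical by-id replacement flow can be sketched as below. Everything is echoed as a dry run rather than executed, and the new drive's id (ata-NEW_DRIVE_SERIAL) is a hypothetical placeholder, not taken from the thread:

    ```shell
    #!/bin/sh
    # Dry-run sketch: replace a failed ZFS drive by its /dev/disk/by-id name.
    # ata-NEW_DRIVE_SERIAL is a placeholder for the new drive's by-id name.
    pool=RPOOL
    old_id=ata-HGST_HUS726040ALA610_K4KJ220L
    new_id=ata-NEW_DRIVE_SERIAL

    # Echo the commands instead of running them, so they can be reviewed first.
    echo "zpool offline $pool $old_id"
    echo "zpool replace $pool $old_id /dev/disk/by-id/$new_id"
    echo "zpool status -v $pool"
    ```

    Once "zpool replace" is issued, resilvering starts automatically and "zpool status" shows its progress, which matches the outcome reported in result 2 above.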
  4. Replacing PBS and moving old backups in it

    Why on earth didn't I think of that! Thanks. I did the setup, and my new server is filling up as I'm writing these lines. Will get back with my results. No FAILED so far. Thanks!!
  5. Replacing PBS and moving old backups in it

    Hi all. I have a PBS with one datastore that contains about 4 TB of backups for a few VMs and containers. I want to upgrade to a new server with more storage and MOVE the content of the datastore to it. Any procedure/tutorial to help me do it? Thanks.
  6. Backup error from a container "failed: exit code 23"

    I'm still struggling with this one. I tried to create a new schedule to a repo where I had never done a backup before, and I received the same error: () INFO: starting new backup job: vzdump 1307 --mailto info@xcxcxcxcx.com --notes-template '{{guestname}}' --storage pbs100b-repo2 --mode snapshot --quiet...
  7. Backup error from a container "failed: exit code 23"

    Hi guys. Since I moved container 1307 (PMG12, a Proxmox Mail Gateway container) to its new PVE, I receive this error every night when it tries to back itself up to my two Proxmox Backup Servers: VMID NAME STATUS TIME SIZE FILENAME 1009 xxxxxxxx.legardeur.net OK...
  8. [SOLVED] /lib/modules filling up hard drive

    I have another server to replace, so I'll try your script again before formatting it. Thanks for the update!
  9. [SOLVED] /lib/modules filling up hard drive

    Thanks Apoc, and don't worry about the damage; as long as it can be prevented for someone else, that's what I was hoping for. I tried to copy all the files in both folders with rsync from another working PVE, but I don't know how to recover what apt deleted. And I didn't try to...
  10. [SOLVED] /lib/modules filling up hard drive

    Hi. Yes, I commented out some lines before pasting it in this forum for readability; see below for the complete script. I opened your script in Notepad, commented out some lines, and copy-pasted it over SSH into my machine. Do you think the Windows CRLF bug could be responsible? #!/bin/bash...
  11. [SOLVED] /lib/modules filling up hard drive

    Hi Apoc. Thanks for the code. First, I don't accuse you of anything; I know it was at my own risk. I just want to know what I did wrong and prevent anybody else from making the same mistake I did. I tried running a cleaned version of your file just to take care of the kernel in /usr/lib/modules...
  12. Playing with ZFS

    Thanks a lot, it's noted. What is the reason for the 80% maximum usage? I'm curious.
  13. Playing with ZFS

    Hi guys. I'm trying to play a bit with ZFS following a recommendation (thank you Dunuin). I have an OVH server with no hardware RAID card and two 1 TB NVMe disks (nvme0n1 and nvme1n1). I uploaded the ISO, started a fresh install from the latest PVE ISO, selected ZFS mirror for my disks, and was able to...
  14. [SOLVED] Unable to remove node from cluster

    Found it: I had to remove it by name, not by ID: pvecm delnode proxmox13s. Have a good day.
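    The fix above can be sketched as a short dry run (commands echoed, not executed); the node name is the one from the thread:

    ```shell
    #!/bin/sh
    # Dry-run sketch: pvecm delnode expects the node NAME, not its numeric ID.
    node_name=proxmox13s

    echo "pvecm status"              # confirm the dead node still appears
    echo "pvecm delnode $node_name"  # remove by name; deleting by ID (4) did not work
    echo "pvecm status"              # verify the node is gone
    ```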
  15. [SOLVED] Unable to remove node from cluster

    Hi guys. I have a dead PVE in my cluster, and I can't delete the node since it's not shown in pvecm nodes, but it shows in the GUI and also in pvecm status. Its name was proxmox13s and its ID was 4: root@proxmox10s:~# pvecm status Cluster information ------------------- Name...
  16. Raid partially down

    Thanks to all, my first problem is solved. To rebuild my RAID, I simply had to run "mdadm --add /dev/md2 /dev/nvme1n1p2". As for my other question: I also saw somewhere on this forum something about the Linux boot partition that was missing on the second disk, so in case of a failure with the...
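    The re-add step can be sketched as a dry run (commands echoed, not executed), using the array and partition names from the thread:

    ```shell
    #!/bin/sh
    # Dry-run sketch: re-add a partition to a degraded md RAID array.
    array=/dev/md2
    part=/dev/nvme1n1p2

    echo "cat /proc/mdstat"          # check which arrays are degraded
    echo "mdadm --add $array $part"  # re-add the partition; the rebuild starts
    echo "cat /proc/mdstat"          # re-check to follow the resync progress
    ```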
  17. Raid partially down

    Thanks Dunuin. I know that it isn't supported and that the people who help do their best; it's more than appreciated, and I try to help other users when I know the answer.
  18. Raid partially down

    Hi bbgeek. At the time of the crash, the mdstat report I received was also showing md4 degraded, but by the time I SSHed into the machine, it was showing in sync. This is an automatically generated mail message from mdadm running on proxmox13s A Fail event had been detected on md...
  19. Raid partially down

    Hi guys. I know it may not be a PVE-specific question, but I'm taking a chance here just in case. One of my PVE servers crashed with a RAID error, and I had to reboot it so the GUI would come back online and I could start migrating my VMs to a secondary server. Once it's empty, I'll be able to play...
  20. [SOLVED] Failed to verify TOTP challenge

    SOLVED! It was a conflict with AppArmor, solved with this:
    ln -s /etc/apparmor.d/usr.sbin.ntpd /etc/apparmor.d/disable/
    apparmor_parser -R /etc/apparmor.d/usr.sbin.ntpd