Search results

  1. 3 nodes cluster messed up

    Just for ref, if anyone wants to find some help reading this... I ended up firing up a new virtual machine (Debian 11) to which I attached a copy of the faulty (virtual) drive. The drive had no partitions, but I succeeded in repairing the (raw) drive with testdisk. Once I had access to the data I made a...
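
    For anyone in the same spot, a minimal sketch of that recovery path, assuming the faulty volume belongs to VM 101 on directory storage "local" and the rescue VM has ID 999 (the IDs, storage name, and paths are assumptions, not values from the thread):

        # work on a copy so the original image stays untouched
        qemu-img convert -O raw /var/lib/vz/images/101/vm-101-disk-0.qcow2 /var/lib/vz/images/999/vm-999-disk-1.raw
        # attach the copy to the rescue VM as an extra SCSI disk
        qm set 999 --scsi1 local:999/vm-999-disk-1.raw
        # inside the rescue VM, let testdisk scan the raw device and rewrite the partition table
        apt install testdisk
        testdisk /dev/sdb
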
  2. 3 nodes cluster messed up

    root@pvemini1:~# ceph-volume lvm activate --all
    --> Activating OSD ID 0 FSID 51ec4819-b28c-4119-b8ed-c42f1dfb449c
    --> ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf
    Note that, as stated by Aaron, I did # ceph mon remove 0 and edited /etc/pve/ceph.conf to remove the...
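
    On Proxmox VE, /etc/ceph/ceph.conf is normally just a symlink into the clustered /etc/pve filesystem, so a hedged first check for that ConfigurationError might be (a sketch, assuming pmxcfs is mounted and /etc/pve/ceph.conf still exists):

        # /etc/ceph/ceph.conf is usually a symlink to the cluster-wide config
        ls -l /etc/ceph/ceph.conf
        # recreate the link if it is missing, then retry the activation
        ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf
        ceph-volume lvm activate --all
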
  3. 3 nodes cluster messed up

    Hi Chris, and sorry for the delay; I had to wait for my friend who is on site to try things and tell you what we get (as I'm doing all this over SSH from far away). None of these worked, no ping, no SSH, so I gave up and began the node replacement procedure. I shut down node1 (the dead one) and...
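
    For reference, the usual shape of that node replacement, sketched here on the assumption that node1 is named pvemini1 and is permanently gone (verify the names before deleting anything; the thread itself used the monitor ID 0 rather than a name):

        # run on a surviving, quorate node, never on the node being removed
        pvecm status                 # confirm quorum and the exact node name
        pvecm delnode pvemini1       # remove the dead node from the PVE cluster
        ceph mon remove pvemini1     # drop its monitor from the monmap (name may differ, see ceph mon dump)
        ceph osd tree                # check which OSDs still need cleaning up
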
  4. 3 nodes cluster messed up

    GRUB. Then it stays there even after waiting more than 5 minutes. That's why I made a kind of LivePVE on a USB stick and booted from it. (And if that's useful, I can mount the original/internal PVE root partition and edit files on it.)
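
    If it helps, one hedged way to use that LivePVE stick: activate the internal install's LVM, mount its root, and reinstall GRUB from a chroot (a sketch only; the volume group "pve", the boot disk /dev/sda, and a legacy BIOS layout are all assumptions, UEFI installs differ):

        vgchange -ay pve                      # activate the internal install's volume group
        mount /dev/pve/root /mnt              # mount its root filesystem
        for d in dev proc sys; do mount --bind /$d /mnt/$d; done
        chroot /mnt
        update-grub                           # regenerate the GRUB config
        grub-install /dev/sda                 # reinstall the bootloader on the internal disk
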
  5. 3 nodes cluster messed up

    About Node1 startup: it hangs at disk cleanup...
  6. 3 nodes cluster messed up

    Thank you very much, here's what I get:
    root@pvemini3:~# rados lspools
    .mgr
    ceph-replicate
    root@pvemini3:~# rbd ls ceph-replicate
    base-103-disk-0
    vm-101-disk-0
    vm-102-disk-0
    vm-104-disk-0
    vm-105-disk-0
    vm-106-disk-0
    vm-107-disk-0
    vm-109-disk-0
    root@pvemini3:~# lsblk
    NAME...
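
    Since vm-101-disk-0 still shows up in that rbd ls output, a hedged next step is to pull a copy of the image out of the pool before touching anything else (a sketch; the target path is an assumption and needs enough free space):

        # dump the RBD image to a plain raw file on local storage
        rbd export ceph-replicate/vm-101-disk-0 /root/vm-101-disk-0.raw
        # or map it read-only and inspect it in place
        rbd map ceph-replicate/vm-101-disk-0 --read-only
        lsblk                                  # the mapped device appears as /dev/rbdX
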
  7. 3 nodes cluster messed up

    root@pvemini2:~# ceph -s
      cluster:
        id:     13bc2c1c-fcac-45e2-8b4a-344e7f9c1705
        health: HEALTH_WARN
                mon pvemini2 is low on available space
                1/3 mons down, quorum pvemini2,pvemini3
                Degraded data redundancy: 28416/85248 objects degraded (33.333%), 33 pgs...
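
    A few hedged follow-up checks that usually narrow such a HEALTH_WARN down (standard Ceph commands, nothing specific to this cluster):

        ceph health detail        # spells out which PGs and OSDs are degraded
        ceph mon dump             # shows which of the three monitors is down
        ceph osd tree             # confirms which OSDs are up/in per node
        df -h /var/lib/ceph       # the "low on available space" warning usually refers to the mon's filesystem
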
  8. 3 nodes cluster messed up

    If ever someone is kind enough to take a look at my desperate case: I've managed to get some of the Node1 log files...
  9. 3 nodes cluster messed up

    Thank you very much for looking into my case. The disk I'm looking for is vm-101-disk-0, which originally lived on ceph-replicate, a Ceph volume replicated across 3 disks (one per node). Node1 came down and, as HA was set up, VM 101 magically migrated to Node2 (mini2). It was working until my friend rebooted...
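
    To confirm the image itself survived the node loss, a couple of hedged read-only checks (nothing here modifies the pool):

        rbd info ceph-replicate/vm-101-disk-0      # size, features, and whether the image still exists
        rbd status ceph-replicate/vm-101-disk-0    # shows any clients (watchers) still holding it open
        ha-manager status                          # where HA currently thinks VM 101 should run
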
  10. 3 nodes cluster messed up

    # qm config 101
    agent: 1
    boot: order=ide2;scsi0
    cores: 4
    description: unused0%3A local%3A101/vm-101-disk-0.qcow2%0Aunused0%3A local-lvm%3Avm-101-disk-0.qcow2%0Aunused1%3A local-lvm%3Avm-101-disk-0%0Aunused2%3A local%3A101/vm-101-disk-0.qcow2
    ide2: none,media=cdrom
    memory: 8192
    meta...
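
    Given that config, the surviving Ceph image could in principle be wired back in as the boot disk with something like the following (a sketch, not advice for this exact cluster; double-check the storage name first):

        # attach the surviving RBD image as scsi0 and keep the existing boot order
        qm set 101 --scsi0 ceph-replicate:vm-101-disk-0
        qm set 101 --boot 'order=ide2;scsi0'
        qm config 101            # verify before starting
        qm start 101
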
  11. 3 nodes cluster messed up

    Hi, Here is the story: a three-node cluster of three Mac Minis, each with an internal disk and an external disk. Cluster configured, Ceph configured with replication, HA configured. After a few hours of running I discovered the local-lvm disks on Node2 and Node3 were offline. A short...
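
    When local-lvm shows as offline like that, a hedged first look is usually at the storage and LVM layers themselves:

        pvesm status                # which storages PVE currently sees as active or inactive
        vgs && lvs                  # whether the pve volume group and its thin pool are still there
        journalctl -b -u pvestatd   # the status daemon that marks storages offline may log the reason here
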
  12. [SOLVED] Problem with GPU Passthrough

    This works for me with an NVIDIA 1060 on Intel, PVE 7.3.3:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off initcall_blacklist=sysfb_init"
    Big thanks
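
    For completeness, after editing GRUB_CMDLINE_LINUX_DEFAULT the change still has to be applied and verified; a short hedged sketch:

        update-grub               # write the new kernel command line
        reboot
        # after the reboot, confirm the options took effect and IOMMU is active
        cat /proc/cmdline
        dmesg | grep -e DMAR -e IOMMU
        find /sys/kernel/iommu_groups/ -type l   # the GPU should sit in its own group
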
  13. Proxmox and Netdata

    Thanks. Did you do it?
  14. Proxmox and Netdata

    Hi, I was wondering if anybody happens to have installed Netdata monitoring on Proxmox's Debian, not in a VM but on the host. If so, it seems the default Netdata settings are not the right ones for Proxmox, as mine seems to run globally fine but Netdata keeps telling me there are problems. Here are some of...
  15. Backup kill windows process

    Hi, I've got a Win10 guest, and during backup some processes of the guest are killed. One is the Blue Iris CCTV software (which runs as a service), the other is DeepStack, which runs as a server. As these are supposed to answer HTTP requests, I can send requests and see when they stop answering...
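
    One hedged way to timestamp exactly when they stop answering during the backup window, run from any Linux box that can reach the guest (the IP and ports below are placeholders, not values from the thread):

        # poll both services and log every failure with a timestamp
        while true; do
          for url in http://192.0.2.10:81/ http://192.0.2.10:5000/; do
            curl -s -o /dev/null --max-time 5 "$url" || echo "$(date -Is) no answer from $url"
          done
          sleep 10
        done
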
  16. ZFS pool lost after power outage

    I knew about what you're telling me. The only thing I wasn't aware of was SMR vs CMR drives. Actually, this drive was the only one I had on hand at the time I built the pool, and I didn't expect much of it, only to be as reliable as a standard ext4 drive, I mean being warned of imminent failure thanks...
  17. ZFS pool lost after power outage

    PVE only knows about the drive as a PBS server drive, not the other way around. If it doesn't exist in PBS, it can't exist in PVE.
  18. ZFS pool lost after power outage

    I succeeded in getting a SMART diagnostic from the PVE server:
    root@pve:~# smartctl -a -d sat /dev/sdb
    smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.4.128-1-pve] (local build)
    Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Model Family...
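
    If the attributes in that output look unremarkable, a hedged next step is to let the drive test itself (same -d sat passthrough as above):

        smartctl -t short -d sat /dev/sdb     # start a short self-test
        smartctl -l selftest -d sat /dev/sdb  # read the result once it has finished
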
  19. ZFS pool lost after power outage

    Hello, I've installed a PBS server as a virtual machine on PVE and attached an external hard drive as a ZFS pool. It was working fine, but we suffered a power outage and on reboot the ZFS pool was not there. Here is what I tried:
    root@pbs:~# dmesg | grep sdb
    [    1.372620] sd 2:0:0:2: [sdb]...
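
    A hedged sketch of the usual recovery path when a pool vanishes after an outage (the pool name "backup" below is an assumption; the first command only lists whatever importable pools it finds):

        zpool import                              # scan for pools that exist but are not imported
        zpool import -d /dev/disk/by-id backup    # import by stable device path if the sdX letters shifted
        zpool status                              # check pool health, then consider "zpool scrub backup"
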
