Search results

  1. Netdata on Proxmox nodes?

    Hello, I'm starting to install and customize Netdata on several devices (physical and virtual) @home and I'm wondering if there are any "contraindications" to deploying it on Proxmox/CEPH nodes? Thanks in advance :)
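    A minimal, low-impact install sketch, assuming the stock Debian netdata package and its default config path:

      apt update && apt install netdata   # distro package; versions differ per release
      # in /etc/netdata/netdata.conf, under [web], keep the dashboard off public interfaces:
      #   [web]
      #       bind to = 127.0.0.1
      systemctl restart netdata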
  2. How to "suspend" HA?

    Hello, I have a 3-node PM 5.2 cluster running about 20 HA VMs (CEPH storage). Tomorrow, I have a planned power outage of about 4 hours and my UPS won't be able to last that long. Two nodes are running apcupsd (one server and one client) and one node nut (upsmon). I'd like to "suspend" the HA...
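    A hedged sketch of the usual approach; the service ID vm:100 is a placeholder and the first command must be repeated per HA resource:

      ha-manager set vm:100 --state disabled   # take the resource out of HA control
      # or stop the HA stack on every node so the watchdog cannot fence during the outage:
      systemctl stop pve-ha-lrm pve-ha-crm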
  3. Strange behaviour when migrating container on shared storage.

    Hello, I'm using PM on a 3-node cluster @home, so far only with VMs, but I wanted to test containers. I've created a simple deb9 container on shared (NFS) storage, but when I try to migrate it, it tries to migrate the disk to local storage first ... Container definition...
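    A quick way to reproduce and inspect, assuming CT 101 and target node pve2 (both placeholders):

      pct config 101        # confirm the rootfs really points at the shared NFS storage
      pct migrate 101 pve2  # offline migration; the task log shows any disk copy step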
  4. CEPH: Pool (min_)size update via GUI

    Hello, Is there a reason updating an existing pool's size or min_size value is not possible through the GUI? The GUI instantaneously reflects changes if "ceph osd pool set" commands are run from a CEPH node CLI, and nothing "critical" is reported in the log. So why is there no "Edit"...
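    The CLI route the post refers to, with the pool name rbd as a placeholder:

      ceph osd pool set rbd size 3
      ceph osd pool set rbd min_size 2
      ceph osd pool get rbd size      # verify; the PVE GUI reflects the change immediately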
  5. [SOLVED] "Phantom" destroyed OSD

    Hello, I've replaced a failing OSD (osd.1) on my cluster. GUI: Stop -> Out -> Destroy, then Create OSD with the new disk (same sdX as the removed failed one). On the GUI's OSD tab, my 3 OSDs were present and Up/In, but I did not notice at first that osd.3 had replaced osd.1. Now, on the Ceph tab...
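    A hedged cleanup sequence for a leftover OSD entry, assuming osd.1 is the phantom ID:

      ceph osd crush remove osd.1   # drop it from the CRUSH map
      ceph auth del osd.1           # remove its cephx key
      ceph osd rm 1                 # delete the OSD id itself
      ceph osd tree                 # confirm it is gone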
  6. Add snapshot=0 option to Hard Disk?

    Hello, Could it be possible to implement a flag like the existing backup=0/1 and replication=0/1, but for snapshots? I have an OpenMediaVault VM with its boot disk on CEPH and 4 disks passed through. I'd like to snapshot the boot disk before OS maintenance/upgrades/tests/whatever and...
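    For context, the existing per-disk flags look like this in a VM config (all values hypothetical); the request is a snapshot=0 analogue:

      # /etc/pve/qemu-server/100.conf (excerpt, hypothetical)
      scsi0: ceph-pool:vm-100-disk-1,size=16G                  # boot disk, snapshot wanted
      scsi1: /dev/disk/by-id/ata-EXAMPLE,backup=0,replicate=0  # passthrough disk to exclude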
  7. CEPH: Use LVM volume as block.db device?

    Hello all, Before being flamed ;) please note that my question relates to a @home infrastructure and nothing here is supposed to be deployed in any kind of production! My Proxmox infrastructure is based on 3 nodes, each with an SSD used for the OS and a single dedicated disk used as a CEPH OSD (bluestore). As...
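    A hedged sketch with ceph-volume (Luminous and later); the VG name vgssd and the device paths are assumptions:

      lvcreate -L 30G -n osd0-db vgssd   # carve a DB LV out of the OS SSD
      ceph-volume lvm create --data /dev/sdb --block.db vgssd/osd0-db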
  8. [SOLVED] How does CEPH identify OSDs?

    Hello all, I have a simple question. I have a host with 2 controllers, with boot/backup disks on the 1st controller and OSD/data devices on the 2nd one. So far, sda & sdb are present (1st controller), and sdc (OSD) & sdd (passed through to a NAS VM using disk/by-id) are on the second. I have to add a disk on...
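    The gist of the accepted answer: ceph-disk-era OSDs are keyed to GPT partition UUIDs, not /dev/sdX names. A quick way to see the stable identifiers:

      ls -l /dev/disk/by-partuuid   # the GPT partition UUIDs udev activates OSDs by
      ceph-disk list                # maps current sdX names to OSD ids (Jewel/Luminous tooling)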
  9. [SOLVED] CEPH: FileStore to BlueStore

    Hello, I'd like to move my CEPH environment from FileStore to BlueStore. Could you confirm that it is not feasible "online", and that I have to destroy and then re-create my OSDs? In that case, does this look correct? 1.- Move disk(s) from RBD to NFS (for instance). 2.- Destroy CEPH pool(s) 3.-...
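    A common alternative converts one OSD at a time instead of destroying pools; hedged sketch, assuming osd.0 lives on /dev/sdb and PVE 5 era tooling:

      ceph osd out 0                             # drain it; wait for rebalance and HEALTH_OK
      systemctl stop ceph-osd@0
      pveceph destroyosd 0                       # remove the FileStore OSD
      pveceph createosd /dev/sdb --bluestore 1   # re-create as BlueStore (flag varies by version)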
  10. [ceph] New /dev/sdX for active OSD

    Hello, I have a 3-node PVE (4.4-18) with CEPH storage (10.2) configured. I have to add a SATA controller and some disks/SSDs to one server. Is it a problem if active OSDs get a new /dev/sdX during the process? Or will an OSD located on /dev/sdf be auto-discovered if it is now /dev/sdh...
  11. CEPH: inconsistent PG did not heal at first, then heals ...

    Hello all, A few days ago I was warned about "1 pgs inconsistent; 1 scrub errors" on my PM 4.4 cluster: root@pve2:~# ceph health detail HEALTH_ERR 1 pgs inconsistent; 1 scrub errors pg 8.2f is active+clean+inconsistent, acting [0,1,2] 1 scrub errors After investigation it appeared that PG...
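    The standard remediation, using the PG id from the snippet:

      rados list-inconsistent-obj 8.2f --format=json-pretty   # inspect the bad replica (Jewel+)
      ceph pg repair 8.2f                                     # ask the primary OSD to repair it
      ceph health detail                                      # confirm the error clears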
  12. ceph: using UUID for OSD instead of /dev/sdX?

    Hello, I have a 3-node PM 4.4 running CEPH with a dedicated physical disk as the OSD on each node (home/lab usage). root@pve2:~# pveversion pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.44-1-pve) root@pve2:~# ceph --version ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367) I've...
  13. Very poor I/O performance on Linux guest.

    Hello, I have @home a small 3-node cluster running PVE 4.4.12. It's based on 2x NUC with a Core i5 and 16GB each and, formerly, a virtual node based on a VirtualBox instance on a NAS used just for quorum purposes. As I've planned to replace one of my NAS (Syno) with an HP µserver Gen8, I've installed PVE...
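    A hedged way to quantify guest I/O with fio (file path, size, and runtime are placeholders):

      fio --name=randwrite --filename=/tmp/fio.test --size=1G \
          --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
          --direct=1 --runtime=60 --time_based --group_reporting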
  14. CEPH Jewel and iSCSI OSD supported?

    Hello and happy new year, I've gotten rid of an iSCSI DRBD9 configuration in order to put CEPH in place. I've removed the old LUNs, provisioned fresh ones, and installed Jewel following the new wiki, and I'm now at the point where I have to create OSDs. The problem is that pveceph createosd keeps failing with...
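    If pveceph balks at the device, a hedged Jewel-era fallback is driving ceph-disk directly (device path is a placeholder; whether an iSCSI-backed device activates cleanly is not guaranteed):

      ceph-disk prepare /dev/sdc     # partition and mark the device for ceph
      ceph-disk activate /dev/sdc1   # mount the data partition and start the OSD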
  15. How to fully remove/clean up DRBD9?

    Hello all! I've configured and used DRBD9 on my lab PVE 4.3 cluster following this article: https://pve.proxmox.com/wiki/DRBD9 Now I want to move to CEPH and get rid of DRBD9 on my PVE nodes. I've moved all VMs formerly on RAW/DRBD to Qcow2/NFS and there's nothing left on DRBD: root@pve2:~#...
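    A hedged teardown sketch; the storage name drbd1, the VG drbdpool, and the package name are assumptions based on the DRBD9 wiki defaults:

      pvesm remove drbd1    # drop the storage definition from /etc/pve/storage.cfg
      drbdmanage uninit     # tear down the drbdmanage control volume (destructive)
      apt purge drbdmanage  # remove the management package
      vgremove drbdpool     # reclaim the backing volume group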
  16. vmbr0: received packet on eth0 with own address as source address

    Hello all, and sorry for the title of the post, but I failed to find a better one ... so I simply used the error message itself. For some days now (let's say 2 weeks), one of my PVE nodes has started to log the following messages over and over in kernel.log, messages, and syslog...
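    One hedged way to see where the looped frame comes from; substitute vmbr0's own MAC address:

      ip link show vmbr0                                  # note the bridge MAC
      tcpdump -e -n -i eth0 ether src aa:bb:cc:dd:ee:ff   # watch for our own frames echoing back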
  17. DRBD-based storage and snapshots?

    Hello, I've migrated my VMs to DRBD for storage redundancy purposes, so now my disks are raw instead of qcow2. The "take snapshot" button is grayed out because of that. How can I take snapshots now? Maybe directly at the LVM2 level? Create snapshot: pve1# lvcreate -L1G -s -n vm105snap...
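    Completing the LVM-level idea as a hedged sketch; the VG and LV names are placeholders, and the VM should be stopped or quiesced first:

      lvcreate -L 1G -s -n vm105snap /dev/drbdvg/vm-105-disk-1   # CoW snapshot of the backing LV
      lvremove /dev/drbdvg/vm105snap                             # drop it when done, or ...
      lvconvert --merge /dev/drbdvg/vm105snap                    # ... roll the origin back instead
      # note: this snapshots only the local replica under DRBD, not the peer's copy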
  18. DRBD on Proxmox cluster nodes?

    Hello all, I've been using Proxmox @home for months now and I really like the product. Well designed, stable, etc. ... My environment is based on 2 NUCs (pvetest repo) and 2 Synos. Right now, each Syno provides NFS storage to the cluster, and one of them also provides a VM (phpvirtualbox) used as...
  19. PM server upgrade: considerations?

    Hi all, I have a 3-node HA cluster running the latest pvetest. I've planned to upgrade (replace) one or more servers. The hardware will be quite close: faster CPUs (still i5), the same SATA controller, and a different NIC, though still handled by e1000e. My "migration plan" is: 1.- Migrate VMs to other nodes...
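    The evacuation step in CLI form, hedged; the VMID and node name are placeholders:

      qm migrate 100 pve2 --online   # live-migrate each VM off the node being replaced
      pvecm status                   # confirm the cluster stays quorate before powering down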
  20. Nodes keep shutting down at boot time.

    Hello, I have a 4.2.5 PVE cluster that was running fine for a few weeks, but now I have an issue I can't figure out. This is a basic cluster based on 2 physical nodes and one VM running on a NAS. I have a single VLAN, and storage is based on 2 NAS (nfs/iscsi). I've created an HA group with the 2...
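    Self-shutdowns at boot often point to HA fencing after lost quorum; a hedged first check:

      pvecm status              # is the node quorate when it comes up?
      journalctl -u pve-ha-lrm  # look for watchdog / fence events around the shutdown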
