Search results

  1. experiences with HP proliant and HA

    Hello. What do you mean by "HA had bad results"?
  2. [SOLVED] Ceph high disk usage

    3/2 means 3 replicas, and a minimum of 2 replicas have to be online for the pool to stay available.
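
    For example, assuming a pool named "rbd" (the pool name is a placeholder), the 3/2 pair maps to the pool's size and min_size settings; a minimal sketch:

    root@pve2:~# ceph osd pool set rbd size 3
    root@pve2:~# ceph osd pool set rbd min_size 2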
  3. ceph: using uuid for OSD instead of /dev/sdx ?

    Hi Spirit, Thanks for taking the time to answer. To give you an idea of my home lab, here is a quick description: I have 3 physical PVE nodes (2 NUCs and one HP µserver G8), plus a Synology NAS. Each physical node has a boot SSD (Proxmox) and an attached USB3 disk used for CEPH (3/1). The...
  4. ceph: using uuid for OSD instead of /dev/sdx ?

    Hello, I have a 3-node PVE 4.4 cluster running CEPH with a dedicated physical disk as OSD on each node (home/lab usage).
    root@pve2:~# pveversion
    pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.44-1-pve)
    root@pve2:~# ceph --version
    ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
    I've...
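
    On the thread's actual question: a stable alternative to /dev/sdX is the /dev/disk/by-id/ symlinks. Whether pveceph follows such a symlink is an assumption here, and the device id below is made up; a minimal sketch:

    root@pve2:~# ls -l /dev/disk/by-id/ | grep sdb
    ... ata-EXAMPLE_SSD_123456 -> ../../sdb
    root@pve2:~# pveceph createosd /dev/disk/by-id/ata-EXAMPLE_SSD_123456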
  5. RB PI for third node

    Hello, No, with corosync configured you just have a running cluster member used for quorum. But for PVE the node is not online, as no PVE processes are actually started. If you want to have a 'running' lightweight node, you need to go down the virtual PVE path and install a full PVE instance...
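
    To verify that such a quorum-only member is counted, pvecm status on any full node shows the vote count (a sketch; output abbreviated):

    root@pve1:~# pvecm status
    ...
    Nodes:            3
    Quorate:          Yes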
  6. Missing CPU and RAM usage stats on WebUI

    I have the same issue with VMs (pvetest): randomly, stats are not displayed. Refreshing/reloading the web page does the trick for some time. Cheers
  7. Very poor I/O performance on Linux guest.

    Hello, I have at home a small 3-node cluster running PVE 4.4.12. It's based on 2x NUC with Core i5 and 16GB and, formerly, a virtual node based on a VirtualBox instance on a NAS, used just for quorum purposes. As I've planned to replace one of my NAS (Syno) with an HP µserver Gen8, I've installed PVE...
  8. CEPH Jewel and iSCSI OSD supported ?

    Hello Dominik, Thanks for your answer! I'll check the ceph docs. Do I just need to create/initialize the OSDs outside of PVE and then return to the wiki, starting from the "Ceph Pools" section? Now of course your question makes sense. But my PVE HA environment is very humble, based on 2 Synology NAS...
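
    Creating the OSDs outside of PVE with Jewel's ceph-disk tool would look roughly like this (a sketch; /dev/sdc is a made-up device name for the iSCSI LUN, so verify it first):

    root@pve2:~# ceph-disk prepare /dev/sdc
    root@pve2:~# ceph-disk activate /dev/sdc1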
  9. CEPH Jewel and iSCSI OSD supported ?

    BTW, I'm up to date:
    proxmox-ve: 4.4-77 (running kernel: 4.4.35-1-pve)
    pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
    pve-kernel-4.4.35-1-pve: 4.4.35-77
    pve-kernel-4.2.6-1-pve: 4.2.6-26
    pve-kernel-4.4.8-1-pve: 4.4.8-52
    pve-kernel-4.4.15-1-pve: 4.4.15-60
    pve-kernel-4.4.16-1-pve...
  10. CEPH Jewel and iSCSI OSD supported ?

    Hello and happy new year, I've gotten rid of an iSCSI DRBD9 configuration in order to put CEPH in place. I've removed the old LUNs, provisioned fresh ones, and installed Jewel following the new wiki, and I'm now at the point where I have to create the OSDs. The problem is that pveceph createosd keeps failing with...
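
    When createosd fails on LUNs that carried an earlier setup, leftover partition data is a common culprit, so wiping the disk first is worth a try (a sketch; /dev/sdc is a placeholder, double-check the device name before zapping):

    root@pve2:~# ceph-disk zap /dev/sdc
    root@pve2:~# pveceph createosd /dev/sdc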
  11. How to fully remove/cleanup DRBD9 ?

    What do you mean by "setup the Ceph environment from scratch"? I don't want to reconfigure, just cleanly remove everything regarding DRBD in order to set up CEPH properly. I have about 10 VMs running (on NFS now) on this LAB, and re-installing the PVE nodes from scratch is not an option. Thanks :)
  12. How to fully remove/cleanup DRBD9 ?

    Hello all! I've configured and used DRBD9 on my LAB PVE 4.3 cluster following this article: https://pve.proxmox.com/wiki/DRBD9 Now I want to move to CEPH and get rid of DRBD9 on my PVE nodes. I've moved all VMs formerly on RAW/DRBD to qcow2/NFS and there's nothing left on DRBD: root@pve2:~#...
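
    A rough removal sequence, assuming the storage entry is named "drbd1" and the package names match the PVE 4.x DRBD9 setup (a sketch, not an official procedure; check /etc/pve/storage.cfg and dpkg -l for the actual names):

    root@pve2:~# pvesm remove drbd1
    root@pve2:~# drbdadm down all
    root@pve2:~# apt-get purge drbdmanage drbd-utils
    root@pve2:~# rm -rf /etc/drbd.d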
  13. Error in local storage info

    Lobic, it may have nothing to do with your issue, but you should reboot your host in order to load the new kernel you've installed:
    root@pvetemp:~# pveversion -v
    proxmox-ve: 4.3-66 (running kernel: 4.4.16-1-pve)
    pve-manager: 4.3-3 (running version: 4.3-3/557191d3)
    pve-kernel-4.4.6-1-pve...
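
    A quick way to compare the running kernel against the installed kernel packages (a sketch):

    root@pvetemp:~# uname -r
    4.4.16-1-pve
    root@pvetemp:~# dpkg -l 'pve-kernel-*' | grep ^ii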
  14. Move VM to shared storage

    Yes of course, my examples include a format change because I started with vmdk, then changed to qcow2, and then I had to move with conversion once more because only raw is supported with DRBD. But moving 'live' from local to another location, shared or not, can be done keeping the original...
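
    On the CLI the same operation is exposed as qm move_disk; a minimal sketch (the VM id, disk key, and target storage name are placeholders):

    root@pve1:~# qm move_disk 112 ide0 nfs1 --format qcow2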
  15. Locations

    Ok, I've just tested the second solution I've given you. vmtest (112) is my healthy VM:
    root@pve1:/etc/pve/nodes/pve1/qemu-server# cat 112.conf
    #Debian 8
    #
    #Test VM
    ...
    bootdisk: ide0
    cores: 1
    cpuunits: 512
    ide0: drbd1:vm-112-disk-1,cache=writethrough,size=32G
    ide2: none,media=cdrom
    memory...
  16. vmbr0: received packet on eth0 with own address as source address

    BTW, I've restarted the node without eth1, with vmbr0 on eth0 configured as static. I don't have the messages any more (hopefully), but the network connection is no better (timeouts, etc.).
  17. vmbr0: received packet on eth0 with own address as source address

    Hello! Here you are:
    root@pve1:~# brctl show
    bridge name     bridge id           STP enabled     interfaces
    vmbr0           8000.001999e42c7b   yes             eth0
                                                        tap110i0
                                                        tap112i0
    root@pve1:~# brctl showmacs vmbr0
    port no   mac addr            is local?   ageing timer
      1       00:04:4b:48:6e:4e   no          31.84
      1       00:11:32:25:f2:df   no...
  18. Move VM to shared storage

    Yes, that's what "Move disk" is designed for :) So far I've used it to move local/vmdk to nfs/qcow2 and then from nfs/qcow2 to drbd/raw ... and I've never stopped the VMs! It works like a charm. One of the VMs is my access point to my network (VPN), and moving did not disconnect me ...
  19. Locations

    Or you can attach the broken VM's disk as a secondary disk to a healthy VM to access it from that VM. Edit1: But it looks like it's not directly possible through the GUI (why?), so you need to create a new .qcow2 disk on the healthy VM and then overwrite it with the "broken" .qcow2 ... Edit2: Or...
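
    A sketch of that workaround on the CLI (every id, storage name, and path below is made up for illustration, assuming an NFS storage "nfs1" mounted under /mnt/pve/nfs1, with the healthy VM powered off during the copy):

    root@pve1:~# qm set 112 --ide1 nfs1:32,format=qcow2
    root@pve1:~# cp /mnt/pve/nfs1/images/111/vm-111-disk-1.qcow2 /mnt/pve/nfs1/images/112/vm-112-disk-1.qcow2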
