Hi Spirit,
Thanks for taking some time to answer.
In order to give you an idea of my home lab here is a quick description:
I have 3 physical PVE nodes (2 NUCs and one HP µServer Gen8), plus a Synology NAS.
Each physical node has a boot SSD (Proxmox) and an attached USB3 disk used for Ceph (pool size/min_size 3/1). The...
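For reference, the 3/1 here is the pool's size/min_size; done by hand it would be something like this (pool name and PG count are just examples):
root@pve1:~# ceph osd pool create cephpool 128
root@pve1:~# ceph osd pool set cephpool size 3
root@pve1:~# ceph osd pool set cephpool min_size 1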
Hello,
I have a 3-node PVE 4.4 cluster running Ceph with a dedicated physical disk as OSD on each node (home/lab usage).
root@pve2:~# pveversion
pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.44-1-pve)
root@pve2:~# ceph --version
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
I've...
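For context, the state I check first looks like this (commands only, output will differ per cluster):
root@pve2:~# ceph -s
root@pve2:~# ceph osd tree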
Hello,
No, with corosync configured you just have a running cluster member used for quorum. But for PVE the node is not online, as no PVE processes are actually started.
If you want to have a 'running' lightweight node, you need to go down the virtual PVE path and install a full PVE instance...
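To illustrate, a quorum-only member is just one more node entry in /etc/pve/corosync.conf's nodelist; a minimal sketch (hostname and nodeid are made up):
nodelist {
  node {
    ring0_addr: quorumnode
    nodeid: 3
    quorum_votes: 1
  }
}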
I have the same issue with VMs (pvetest): stats are randomly not displayed. Refreshing/reloading the web page does the trick for a while for me.
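When it happens, restarting the status daemon is also worth a try (just a guess on my part; pvestatd is the daemon that feeds those graphs):
root@pve1:~# systemctl restart pvestatd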
Cheers
Hello,
I have at home a small 3-node cluster running PVE 4.4.12. It's based on 2 NUCs with Core i5 and 16GB and, formerly, a virtual node based on a VirtualBox instance on a NAS, used just for quorum purposes.
As I've planned to replace one of my NAS units (Syno) with an HP µServer Gen8, I've installed PVE...
Hello Dominik,
Thanks for your answer !
I'll check the Ceph doc. Do I need to just create/initialize the OSDs outside of PVE and then return to the wiki, starting from the "Ceph Pools" section?
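By that I mean something like the plain Jewel ceph-disk route, if I read the Ceph docs right (device name is a placeholder):
root@pve2:~# ceph-disk prepare /dev/sdb
root@pve2:~# ceph-disk activate /dev/sdb1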
Now of course your question makes sense. But my PVE HA environment is very humble. Based on 2 Synology NAS...
Hello and happy new year,
I've gotten rid of an iSCSI DRBD9 configuration in order to put Ceph in place.
I've removed the old LUNs, provisioned fresh ones, installed Jewel following the new wiki, and I'm now at the point where I have to create the OSDs.
The problem is that pveceph createosd keeps failing with...
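For reference, the commands in question look like this (sdb is a placeholder for the dedicated disk; the zap is the usual first step to clear leftover signatures):
root@pve2:~# ceph-disk zap /dev/sdb
root@pve2:~# pveceph createosd /dev/sdb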
What do you mean by "setup the Ceph environment from scratch" ?
I don't want to reconfigure, just cleanly remove everything regarding DRBD in order to set up Ceph properly.
I have about 10 VMs running (on NFS now) on this lab, and re-installing the PVE nodes from scratch is not an option.
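To be concrete, the cleanup I have in mind is roughly this (untested sketch; 'drbd1' and 'drbdpool' are the names from the wiki setup, adjust to your config):
root@pve2:~# pvesm remove drbd1
root@pve2:~# drbdmanage uninit
root@pve2:~# vgremove drbdpool
root@pve2:~# apt-get purge drbdmanage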
Thanks :)
Hello all !
I've configured and used DRBD9 on my lab PVE 4.3 cluster following this article:
https://pve.proxmox.com/wiki/DRBD9
Now I want to move to Ceph and get rid of DRBD9 on my PVE nodes.
I've moved all VMs formerly on RAW/DRBD to Qcow2/NFS and there's nothing left on DRBD:
root@pve2:~#...
Lobic,
It may have nothing to do with your issue, but you should reboot your host in order to load the new kernel you've installed:
root@pvetemp:~# pveversion -v
proxmox-ve: 4.3-66 (running kernel: 4.4.16-1-pve)
pve-manager: 4.3-3 (running version: 4.3-3/557191d3)
pve-kernel-4.4.6-1-pve...
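A quick check is to compare the running kernel with the newest pve-kernel package installed (running kernel taken from the output above):
root@pvetemp:~# uname -r
4.4.16-1-pve
If that doesn't match what was just installed, a reboot picks up the new one.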
Yes, of course my examples include a format change, because I started with vmdk, then changed to qcow2, and then I had to move with conversion once more because only raw is supported with DRBD.
But moving 'live' from local to another location, shared or not, can be done keeping the original...
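On the CLI, the same operation is qm move_disk; for example (VM id, target storage and format here are illustrative):
root@pve1:~# qm move_disk 112 ide0 nfsstore --format qcow2
The source disk is kept by default unless you pass --delete.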
Ok
I've just tested the second solution I've given you.
vmtest (112) is my healthy VM:
root@pve1:/etc/pve/nodes/pve1/qemu-server# cat 112.conf
#Debian 8
#
#Test VM ...
bootdisk: ide0
cores: 1
cpuunits: 512
ide0: drbd1:vm-112-disk-1,cache=writethrough,size=32G
ide2: none,media=cdrom
memory...
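Attaching the broken VM's disk to vmtest then comes down to something like this (vm-113-disk-1 is a stand-in for the broken VM's volume, adjust to yours):
root@pve1:~# qm set 112 -ide1 drbd1:vm-113-disk-1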
BTW, I've restarted the node without eth1 and with vmbr0 configured on eth0 as static. I don't have the messages any more (hopefully), but the network connection is no better (timeouts, etc.).
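For reference, the relevant part of /etc/network/interfaces now looks like this (addresses are placeholders):
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0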
Hello !
Here you are:
root@pve1:~# brctl show
bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.001999e42c7b       yes             eth0
                                                        tap110i0
                                                        tap112i0
root@pve1:~# brctl showmacs vmbr0
port no mac addr                is local?       ageing timer
  1     00:04:4b:48:6e:4e       no              31.84
  1     00:11:32:25:f2:df       no...
Yes, that's what "Move disk" is designed for :)
So far I've used it to move local/vmdk to nfs/qcow2 and then from nfs/qcow2 to drbd/raw ... and I've never stopped the VMs! It works like a charm. One of the VMs is my access point to my network (VPN), and moving it did not disconnect me ...
Or you can attach the broken VM's disk as a secondary disk to a healthy VM to access it from that VM.
Edit1: But it looks like it's not directly possible through the GUI (why?), so you need to create a new .qcow2 disk on the healthy VM and then overwrite it with the "broken" .qcow2 ...
Edit2: Or...
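To spell out the Edit1 workaround on the shell (storage name, size and paths are made up; the new disk must be at least as large as the broken one):
root@pve1:~# qm set 112 -ide1 nfsstore:32,format=qcow2
root@pve1:~# cp /mnt/pve/nfsstore/images/113/vm-113-disk-1.qcow2 /mnt/pve/nfsstore/images/112/vm-112-disk-2.qcow2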