Had an issue with this in the log:
Aug 03 18:35:02 pve1-weha pveproxy[1728]: proxy detected vanished client connection
Aug 03 18:35:02 pve1-weha pveproxy[1729]: '/etc/pve/nodes/pve2-weha/pve-ssl.pem' does not exist!
Aug 03 18:35:32 pve1-weha pveproxy[1729]: proxy detected vanished client...
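A sketch of what I would try first, assuming the certificate for pve2-weha is simply missing and the node is still a valid cluster member:
# run on the node whose certificate is missing (pve2-weha here)
pvecm updatecerts --force
# restart the proxy so it picks up the regenerated certificate
systemctl restart pveproxy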
I have a simple setup with PVE installed on regular HDDs (RAID) and I am planning a ZFS pool for a VM. I have 2 enterprise mixed-use Samsung SSDs with 3 DWPD, 800 GB each, per server. There is going to be only one virtual machine running on this pool, about 150-200 GB in size.
Is the default GUI setup...
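For reference, a sketch of what I have in mind, roughly the CLI equivalent of the GUI wizard (device paths and the pool name are assumptions):
# mirror the two 800 GB SSDs into one pool
zpool create -o ashift=12 ssdpool mirror /dev/disk/by-id/SSD1 /dev/disk/by-id/SSD2
# register it as VM storage in PVE
pvesm add zfspool ssdpool --pool ssdpool --content images,rootdir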
Good point about the 3rd node for Ceph; I guess I have no choice but to go with replication. I need a third physical node anyway, it will have local storage, and in case of a total disaster it can serve as a PVE server with a restored backup, with an expected -1.
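So something along these lines for the replication job; a sketch only, the VM ID, target node and schedule are assumptions:
# replicate VM 100 from this node to pve2-weha every 15 minutes
pvesr create-local-job 100-0 pve2-weha --schedule "*/15"
# check the state of the jobs
pvesr status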
Is it possible to connect bonded interfaces...
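Something like this is what I have in mind, a sketch of /etc/network/interfaces with an LACP bond under the bridge (interface names and addresses are assumptions):
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0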
I have 3 servers: one for keeping quorum and two production servers. I am planning to put in some SSDs for guest VMs, but I was wondering if I should go with storage replication or Ceph on the two production nodes. It is a relatively simple setup and very few VMs will be running there, 2 or 3, 4...
Nice graph, I assume this is Zabbix. Did you have to install the agent on the Proxmox nodes to get that info from SMART?
BTW my SSDs on Ceph installed on Proxmox say N/A under Wearout. Not sure if this is a bug, or if they say N/A because there is no wearout so far. I thought with no wearout it...
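In the meantime I am checking wear from the shell; a sketch, the device path and the exact attribute name depend on the drive model:
# SATA SSD: look for Wear_Leveling_Count or Media_Wearout_Indicator
smartctl -A /dev/sda | grep -i -e wear -e percent
# NVMe SSD: "Percentage Used" in the health log
smartctl -a /dev/nvme0 | grep -i "percentage used"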
You can use lvdisplay, vgdisplay and pvdisplay to list everything that is related to LVM, then remove accordingly with the matching remove commands (lvremove, vgremove, pvremove etc.). After that you might have to run fdisk or blkdiscard.
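Roughly in this order, as a sketch (volume and device names are placeholders, double-check before removing anything):
# list what LVM knows about
lvdisplay
vgdisplay
pvdisplay
# then tear it down top to bottom
lvremove /dev/myvg/mylv
vgremove myvg
pvremove /dev/sdx1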
Thank you for that Wolfgang, no problems so far. It is just strange. We updated both our clusters (PVE and Ceph on PVE) from 5.x, and before I saw the opposite: the I/O wait was half the CPU usage, now it is the CPU usage that is half of the I/O wait.
Is anybody else experiencing this?
Thank you
Did you do vgremove as well? After that you might also need to run fdisk /dev/sdx to remove the partition. I was actually doing it a few times recently; it was annoying but easy enough to reset the drive so it can be used for something else. If you have an SSD you might want to use blkdiscard...
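For the wipe step, something like this (device name is a placeholder; instead of deleting partitions interactively in fdisk, wipefs does it in one go, and blkdiscard erases the whole SSD, so triple-check the device):
# drop the old partition table / signatures
wipefs -a /dev/sdx
# on an SSD, discard all blocks so it starts clean
blkdiscard /dev/sdx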
I am running a dedicated 4-node Ceph cluster with 10 Gbps networks for the Ceph cluster and Ceph public networks over bonded interfaces:
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1...
I used to do this from the CLI on Proxmox 4.x, but after reinstalling to the new 6.1 version I used the web interface and added local storage of type directory to the system. I used lvm-thin. Is there a performance difference between lvm-thin and lvm volumes when mounted as directories?
Thank you
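If nobody knows offhand, I will probably just benchmark both; a quick fio sketch I would run against each storage type (path, size and runtime are assumptions):
# random 4k writes against a test file on the directory storage
fio --name=dirtest --filename=/mnt/pve/mydir/test.img --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based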
Tried a local network NTP source with two local NTP servers, but got clock skew after 3 days of running. At this point I will be disabling the systemd time services and going with regular ntpd as I used to do.
thx
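For reference, the steps I am going to use (PVE 6 / Debian Buster):
# stop and disable the systemd time sync service
systemctl disable --now systemd-timesyncd
# install and enable classic ntpd
apt install ntp
systemctl enable --now ntp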
I had to pull two drives in a RAID1 array. They were not used, and I could not reboot/stop the server to do this as I have tons of VMs on it. I removed the LVM (LV and VG) and the storage from the node before I pulled them out. Now I see tons of this in the log:
kernel: blk_partition_remap: fail for...
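A sketch of how I plan to tell the kernel the disks are gone (sdx and sdy assumed to be the pulled drives):
# remove the vanished disks from the kernel so it stops referencing them
echo 1 > /sys/block/sdx/device/delete
echo 1 > /sys/block/sdy/device/delete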
Sadly, reporting clock skew with the default time settings. Our Ceph cluster is still in testing, so limited production. We got clock skew on 2 out of 4 nodes on the 14th, so 4 days after we started the cluster. It lasted only 29 sec until the health check cleared, but it did happen. Will have...
I have two clusters, one that runs VMs and one with Ceph storage. When I am moving a hard drive from my local storage on the Proxmox cluster to RBD on the dedicated Ceph cluster I get:
create full clone of drive virtio0 (local-lvm-thin:vm-100-disk-0)
2020-01-20 00:11:54.296691 7f640c7270c0 -1 did not...
Must be a new feature; I see it on 6.1-5, but my VM-running cluster is still on 5.3-11 (upgrading soon). I see the option for a migration subnet on the nodes running 6.1-5 - cool.
Now what is the difference between moving a disk and a full VM migration?
I usually just move the storage of the VM...
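In CLI terms, as I understand it (VM ID, disk, storage and node names are placeholders):
# move only one virtual disk to another storage, the VM stays on this node
qm move_disk 100 virtio0 ceph-rbd --delete
# migrate the whole VM (config + disks) to another node
qm migrate 100 pve2 --online --with-local-disks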