Search results

  1. The server certificate /etc/pve/local/pve-ssl.pem is not yet active

    Had an issue with this in the log: Aug 03 18:35:02 pve1-weha pveproxy[1728]: proxy detected vanished client connection Aug 03 18:35:02 pve1-weha pveproxy[1729]: '/etc/pve/nodes/pve2-weha/pve-ssl.pem' does not exist! Aug 03 18:35:32 pve1-weha pveproxy[1729]: proxy detected vanished client... (see the certificate sketch after these results)
  2. ZFS on SSDs with replication

    I have a simple setup with PVE installed on regular HDDs (RAID) and am planning a ZFS pool for a VM. I have 2 enterprise mixed-use Samsung SSDs with 3 DWPD, 800GB, per server. There is going to be only one virtual machine running on this pool, about 150-200GB in size. Is the default GUI setup... (there is a CLI sketch after these results)
  3. 3 server cluster without external storage.

    Good point about the 3rd node for Ceph; I guess I have no choice but to go with replication. I need a third physical node; it will have local storage, and in case of a total disaster it can serve as a PVE server with a restored backup with expected -1. Is it possible to connect bonded interfaces...
  4. 3 server cluster without external storage.

    I have 3 servers, one for keeping quorum and two production servers. I am planning to put in some SSDs for guest VMs, but I was wondering if I should go with storage replication or Ceph on the two production nodes. It is a relatively simple setup and very few VMs will be running there, 2 or 3, 4...
  5. CEPH : SSD wearout

    Thx, will give it a try.
  6. CEPH : SSD wearout

    Nice graph, I assume this is Zabbix. Did you have to install the agent on the Proxmox nodes to get that info from SMART? BTW, my SSDs on Ceph installed on Proxmox say N/A under Wearout. Not sure if this is a bug or they say N/A because there is no wearout so far. I thought with no wearout it... (a smartctl sketch follows the results list)
  7. Removing a deleted LVM/LG from proxmox webgui

    Did you remove it from the cluster storage? If not, please try that, and also fdisk /dev/sdx
  8. Removing a deleted LVM/LG from proxmox webgui

    You can use lvdisplay, vgdisplay and pvdisplay to list everything that is related to LVM, then remove accordingly with the matching remove commands (lvremove, vgremove, pvremove etc.); after that you might have to do fdisk or blkdiscard (see the command sketch after these results).
  9. Dedicated ceph servers/cluster i/o delay

    Thank you for that Wolfgang, no problems so far. It is just strange. We updated both our clusters (PVE, and Ceph on PVE) from 5.x, and before I saw the opposite: the I/O wait was half the CPU usage, now it is the CPU usage that is half of the I/O wait. Is anybody else experiencing this? Thank you
  10. Local directory storage with LVM vs. LVM-thin

    Thank you, will try to test when I have time.
  11. Removing a deleted LVM/LG from proxmox webgui

    Did you do vgremove as well? After that you might also need to do fdisk /dev/sdx to remove the partition. I was actually doing this a few times recently; it was annoying but easy enough to reset the drive so it could be used for something else. If you have an SSD you might want to use blkdiscard...
  12. Dedicated ceph servers/cluster i/o delay

    I am running a dedicated 4-node Ceph cluster with 10Gbps networks for the Ceph cluster and Ceph public networks over bonded interfaces: proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve) pve-manager: 6.1-5 (running version: 6.1-5/9bf06119) pve-kernel-5.3: 6.1-1 pve-kernel-helper: 6.1-1...
  13. Local directory storage with LVM vs. LVM-thin

    I used to do this from the CLI on Proxmox 4.x, but after reinstalling to the new 6.1 version I used the web interface and added local storage of type directory to the system. I used lvm-thin. Is there a performance difference between lvm-thin and lvm volumes when mounted as directories? Thank you
  14. no such cluster node 'nodename' (500) [SOLVED]

    Got that issue too on 6.1.1; restarting corosync on the affected node fixed the issue (see the sketch after these results).
  15. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP ?

    Tried a local network NTP source with two local NTP servers, but got clock skew after 3 days of running. At this point I will be disabling the systemd time service and going with regular ntpd, as I used to do (see the ntpd sketch after these results). Thx
  16. help with log

    Thanks, forgot to remove the mapper and fstab entry. All good now.
  17. help with log

    I had to pull two drives in a RAID1 array. They were not used, and I could not reboot/stop the server to do this as I have tons of VMs on it. I removed the LVM LV and VG and the storage from the node before I pulled them out. Now I see tons of this in the log: kernel: blk_partition_remap: fail for... (see the device-mapper cleanup sketch after these results)
  18. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP ?

    Sadly, reporting clock skew with the default time settings. Our Ceph cluster is still in testing, so limited production. We got clock skew on 2 out of 4 nodes on the 14th, so 4 days after we started the cluster. It lasted only 29 seconds until the health check cleared, but it did happen. Will have...
  19. Did not load config file - message when moving hard drive to RBD storage

    I have two clusters, one that runs VMs and one with Ceph storage. When I am moving a hard drive from my local storage on the Proxmox cluster to RBD on the dedicated Ceph cluster I get: create full clone of drive virtio0 (local-lvm-thin:vm-100-disk-0) 2020-01-20 00:11:54.296691 7f640c7270c0 -1 did not...
  20. Moving VM from local storage to ceph

    Must be a new feature; I see it on 6.1-5, but my VM-running cluster is still on 5.3-11 (upgrading soon). I see the option for a migration subnet on the nodes running 6.1-5 - cool (see the qm sketch after these results). Now what is the difference between moving a disk and a full VM migration? I usually just move the storage of the VM...
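
Command sketches

The pve-ssl.pem errors in result 1 are usually tied to missing or outdated node certificates. A minimal sketch, assuming the affected node is up and the cluster is quorate; this is a commonly suggested remedy and is not confirmed as the fix in the quoted thread:

    # Regenerate /etc/pve/local/pve-ssl.pem and the per-node certificate links
    pvecm updatecerts --force
    # Restart the web proxy so it picks up the regenerated certificate
    systemctl restart pveproxy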
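
For the two-SSD setup in result 2, the GUI pool creation corresponds roughly to the CLI sketch below; the pool name, storage ID and device paths are placeholders (use the /dev/disk/by-id/ names of your own SSDs):

    # Create a mirrored pool across the two enterprise SSDs
    zpool create -o ashift=12 ssdpool mirror /dev/disk/by-id/ata-SSD_ONE /dev/disk/by-id/ata-SSD_TWO
    zfs set compression=lz4 ssdpool

    # Register the pool as a VM storage so replication jobs can target it
    pvesm add zfspool ssd-vmstore --pool ssdpool --content images,rootdir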
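
For the N/A wearout reading in result 6, the raw SMART attributes can be checked directly on the node; the device path is a placeholder and the exact attribute name varies by SSD model:

    # Show all SMART data for the SSD behind the OSD
    smartctl -a /dev/sdX
    # Wear-related attributes usually carry names such as Wear_Leveling_Count,
    # Percent_Lifetime_Remain or Media_Wearout_Indicator
    smartctl -A /dev/sdX | grep -iE 'wear|percent|life'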
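
A minimal sketch of the LVM cleanup sequence described in results 7, 8 and 11; the storage ID, VG/LV names and /dev/sdx are placeholders, and every step below destroys data on that device:

    # Remove the storage definition from the cluster first (ID is a placeholder)
    pvesm remove old-lvm-storage

    # List what LVM still knows about
    lvdisplay
    vgdisplay
    pvdisplay

    # Remove in order: logical volume, volume group, physical volume
    lvremove /dev/oldvg/oldlv
    vgremove oldvg
    pvremove /dev/sdx1

    # Drop the leftover partition with fdisk (delete the partition, then write)
    fdisk /dev/sdx

    # On an SSD, blkdiscard resets the whole device
    blkdiscard /dev/sdx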
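
For the 'no such cluster node' error in result 14, the fix reported in the thread is restarting corosync on the affected node; restarting pve-cluster and re-checking quorum are additional, commonly used follow-up steps:

    # On the node that is reported as missing
    systemctl restart corosync
    systemctl restart pve-cluster
    # Verify that the node list and quorum look right again
    pvecm status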
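
The switch away from the systemd time service mentioned in result 15 would look roughly like this on a PVE 6 / Debian Buster node; the package and service names are the Buster defaults:

    # Stop and disable the systemd time sync service
    systemctl disable --now systemd-timesyncd

    # Install classic ntpd and make sure it is running
    apt install ntp
    systemctl enable --now ntp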
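
A sketch of the device-mapper and fstab cleanup implied by results 16 and 17; the mapping name is a placeholder taken from the dmsetup ls output, and the drives must really be unused before anything is removed:

    # List device-mapper targets that may still reference the pulled drives
    dmsetup ls
    dmsetup info -c

    # Remove the stale mapping (name is a placeholder)
    dmsetup remove oldvg-oldlv

    # Delete or comment out the matching /etc/fstab line, then reload the unit files
    systemctl daemon-reload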
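
The difference asked about in result 20 maps to two separate qm commands; the VM ID, disk name, storage ID and target node below are placeholders:

    # Move a single disk of the VM to another storage; the VM itself stays where it is
    qm move_disk 100 virtio0 ceph-rbd --delete 1

    # Migrate the whole VM (configuration and, if requested, local disks) to another node
    qm migrate 100 pve2 --online --with-local-disks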
