Search results

  1. Dedicated ceph servers/cluster i/o delay

    I am running a dedicated 4-node Ceph cluster with 10Gbps networks for the Ceph cluster and Ceph public networks over bonded interfaces: proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve) pve-manager: 6.1-5 (running version: 6.1-5/9bf06119) pve-kernel-5.3: 6.1-1 pve-kernel-helper: 6.1-1...
  2. Local directory storage with LVM vs. LVM-thin

    I used to do this from the CLI on Proxmox 4.x, but after reinstalling to the new 6.1 version I used the web interface and added local storage of type directory to the system. I used LVM-thin. Is there a performance difference between LVM-thin and LVM volumes when mounted as directories? Thank you
  3. no such cluster node 'nodename' (500) [SOLVED]

    Got that issue too on 6.1.1; restarting corosync on the affected node fixed the issue (see the command sketch after this list).
  4. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP ?

    Tried a local network NTP source, with two local NTP servers, but got clock skew after 3 days of running. At this point I will be disabling the systemd time service and going with regular ntpd as I used to do (sketched after this list). thx
  5. help with log

    Thanks, forgot to remove the mapper and fstab entry. All good now.
  6. help with log

    I had to pull two drives in a RAID1 array. They were not used and I could not reboot/stop the server to do this as I have tons of VMs on it. I removed the LVM LV and VG and the storage from the node before I pulled them out (see the cleanup sketch after this list). Now I see tons of this in the log: kernel: blk_partition_remap: fail for...
  7. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP ?

    Sadly reporting clock skew with the default time settings. Our Ceph cluster is still in testing, so limited production. We got clock skew on 2 out of 4 nodes on the 14th, so 4 days after we started the cluster. It lasted only 29 seconds until the health check cleared, but it did happen. Will have...
  8. Did not load config file - message when moving hard drive to RBD storage

    I have two clusters, one that runs VMs and one with Ceph storage. When I move a hard drive from local storage on the Proxmox cluster to RBD on the dedicated Ceph cluster (see the move_disk sketch after this list) I get: create full clone of drive virtio0 (local-lvm-thin:vm-100-disk-0) 2020-01-20 00:11:54.296691 7f640c7270c0 -1 did not...
  9. Moving VM from local storage to ceph

    Must be a new feature; I see it on 6.1-5 but my VM-running cluster is still on 5.3-11 (upgrading soon). I see the option for a migration subnet on the nodes running 6.1-5 - cool. Now what is the difference between moving a disk and a full VM migration? I usually just move the storage of the VM...
  10. Moving VM from local storage to ceph

    We have a 4-node PVE cluster and a separate 4-node Ceph cluster, with separate networks and interfaces for the PVE cluster, Ceph private, and Ceph public. When moving a VM from local PVE storage it seems like it is using the PVE cluster subnet - is there a way to change this behavior? Thank you
  11. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP ?

    Yes, I meant local servers which cache the NTP.org pool or the Debian pool. I happen to have 2 NTP servers across our subnets that can serve that purpose. I used them before to provide time for regular NTP while disabling systemd's time sync service. I just added more VMs and still...
  12. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP ?

    Thank you for the explanation. Is it still best practice for Ceph to use a local NTP source and not the x.debian.pool.ntp.org servers that come with timesyncd.service? Also I remember that NTP was doing peering between nodes following this post...
  13. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP ?

    Just installed Nautilus from scratch and it's been operational for a day with several VMs (4 nodes), 2 pools. Because I am just testing, I left the default timesyncd, and for the last 24 hours I did not get any clock skew messages in the log. The time settings are the defaults that come with PVE...
  14. Need advice on ssd setup for ceph

    The use is just to accommodate more systems that need more intense disk operations. The majority of our systems (Linux) are almost idle, but we have some heavy users; I kept them on the Proxmox nodes' local drives but want to move them to Ceph. We are now utilizing only 25% of the link for Ceph...
  15. Need advice on ssd setup for ceph

    I am planning to get 8 x PX05SMB160 SSD drives and spread them across 4 Ceph servers, two per server. The drives are decent 1.6TB SAS drives: 1900 MiB/s read, 850 MiB/s write, 270000 IOPS read and 100000 IOPS write, DWPD 10. I am currently using 13K SAS spinners with 6 OSDs per server (3...
  16. 2 clusters vs. 1, ceph and VM clusters

    I don't want to merge the two clusters. I want to add a node with slightly different hardware to the Ceph cluster that is ONLY running Ceph storage and NOT VMs.
  17. Ceph hardware different server models

    I asked this question before but cannot find my own post :-( ... I have some aging servers used only for Ceph storage installed on top of Proxmox. Do you think I can mix hardware by adding an additional node with slightly different hardware, the same network speed and drives, and a comparable CPU and...
  18. Reinstall CEPH on Proxmox 6

    Can confirm: after upgrading to PVE 6 from 5.4 (which was successful) I tried to upgrade Ceph, which was not successful. I purged the Ceph config and tried to reinstall with Nautilus; I made sure it is installed. It is failing with the same message. I even put all the nodes in the host table but...
  19. Performance PVE 5.3.1 vs. 5.4.1

    Yes, I think they were fixing something in the kernel which had a performance penalty, but I cannot find the original post and was wondering if anybody can confirm based on the CPU usage on 5.4. Thx
  20. Ceph 12.2.12 RAM usage

    I am running a 3-node PVE cluster with Ceph that is providing another PVE cluster with Ceph storage. I first noticed the memory leak on 12.2.2, and a post on this forum said it is a bug fixed in a newer version, so I updated to 12.2.12. Now I see on 12.2.12 that all nodes use much more...
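
Result 3 above mentions restarting corosync as the fix for the "no such cluster node 'nodename' (500)" error. A minimal sketch of that step, assuming it is run on the affected node (the commands are standard PVE/corosync tools):

    # on the node that reports "no such cluster node 'nodename' (500)"
    systemctl restart corosync
    # verify that quorum and the node list look sane again
    pvecm status
    pvecm nodes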
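
Several of the NTP results (4, 7, 11) describe switching from the default systemd-timesyncd to classic ntpd pointed at local time servers. A rough sketch of that switch on a PVE 6 (Debian Buster) node, assuming two in-house NTP servers; the server names are placeholders:

    # stop and disable the default time sync service
    systemctl disable --now systemd-timesyncd
    # install classic ntpd and point it at the local servers
    apt install -y ntp
    cat >> /etc/ntp.conf <<'EOF'
    server ntp1.example.lan iburst
    server ntp2.example.lan iburst
    EOF
    systemctl enable --now ntp
    # check from the Ceph side whether the monitors still report skew
    ceph time-sync-status
    ceph -s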
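
Results 5 and 6 describe removing the LVM layer and the fstab/device-mapper entries before physically pulling drives. A sketch of that cleanup, assuming a hypothetical volume group old_vg with one LV mounted at /mnt/old-raid on disks /dev/sdx and /dev/sdy, and a hypothetical PVE storage ID old-dir-storage:

    umount /mnt/old-raid                      # stop using the filesystem
    sed -i '\|/mnt/old-raid|d' /etc/fstab     # drop the fstab entry
    pvesm remove old-dir-storage              # drop the storage definition from PVE
    lvremove /dev/old_vg/old_lv               # remove the logical volume
    vgremove old_vg                           # remove the volume group
    pvremove /dev/sdx /dev/sdy                # clear LVM metadata from the disks
    dmsetup ls                                # confirm no stale device-mapper entries remain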
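
Results 8-10 concern moving a single disk from local storage to RBD versus migrating the whole VM. A minimal sketch of both operations with qm, assuming VM ID 100, disk virtio0, a target RBD storage named ceph-rbd, and a target node named node2 (all hypothetical):

    # copy one disk to the RBD storage and delete the local source afterwards
    qm move_disk 100 virtio0 ceph-rbd --delete 1
    # by contrast, a full migration moves the whole VM (including local disks) to another node
    qm migrate 100 node2 --online --with-local-disks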