Search results

  1. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP?

    Tried a local network NTP source with two local NTP servers, but got clock skew after 3 days of running. At this point I will be disabling the systemd time service and going with regular ntpd as I used to do. Thanks.
  2. help with log

    Thanks, forgot to remove the mapper and fstab entry. All good now.
  3. help with log

    I had to pull two drives in a RAID1 array. They were not used, and I could not reboot/stop the server to do this as I have tons of VMs on it. I removed the LVM (LV and VG) and the storage from the node before I pulled them out. Now I see in the log tons of: kernel: blk_partition_remap: fail for...
  4. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP?

    Sadly reporting clock skew with the default time settings. Our Ceph cluster is still in testing, so limited production. We got clock skew on 2 out of 4 nodes on the 14th, so 4 days after we started the cluster. It lasted only 29 seconds until the health check cleared, but it did happen. Will have...
  5. Did not load config file - message when moving hard drive to RBD storage

    I have two clusters, one that runs VMs and one with Ceph storage. When I am moving a hard drive from my local storage on the Proxmox cluster to RBD on the dedicated Ceph cluster I get: create full clone of drive virtio0 (local-lvm-thin:vm-100-disk-0) 2020-01-20 00:11:54.296691 7f640c7270c0 -1 did not...
  6. Moving VM from local storage to ceph

    Must be a new feature; I see it on 6.1-5 but my VM-running cluster is still on 5.3-11 (upgrading soon). I see the option for a migration subnet on the nodes running 6.1-5 - cool. Now what is the difference between moving a disk and a full VM migration? I usually just move the storage of the VM...
  7. Moving VM from local storage to ceph

    We have a 4-node PVE cluster and a separate 4-node Ceph cluster, with separate networks and interfaces for the PVE cluster, Ceph private and Ceph public. When moving a VM from local PVE storage it seems like it is using the PVE cluster subnet - is there a way to change this behavior? Thank you.
  8. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP?

    Yes, I meant a local server/servers which cache the NTP.org pool or the Debian pool. I happen to have two NTP servers across our subnets that can serve that purpose. I used them before to provide time for regular NTP while disabling systemd's time sync service. I just added more VMs and still...
  9. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP?

    Thank you for the explanation. Is it still best practice for Ceph to use a local NTP source and not the x.debian.pool.ntp.org servers that come with timesyncd.service? Also I remember that NTP was doing peering between nodes following this post...
  10. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP?

    Just installed Nautilus from scratch and it's been operational for a day with several VMs (4 nodes), 2 pools. Because I am just testing I left the default timesyncd, and for the last 24 hours I did not get any clock skew messages in the log. The time settings are the defaults that come with PVE...
  11. Need advice on ssd setup for ceph

    The use is just to accommodate more systems that need more intense disk operations. The majority of our systems (Linux) are almost idle, but we have some heavy users; I kept them on the Proxmox nodes' local drives but want to move them to Ceph. We are now utilizing for Ceph only 25% of the link...
  12. Need advice on ssd setup for ceph

    I am planning to get 8 x PX05SMB160 SSD drives and spread them across 4 Ceph servers, two per server. The drives are decent 1.6 TB SAS drives: 1900 MiB/s read, 850 MiB/s write, 270000 IOPS read and 100000 IOPS write, DWPD 10. I am currently using 13K SAS spinners with 6 OSDs per server (3...
  13. 2 clusters vs. 1, ceph and VM clusters

    I don't want to merge two clusters. I want to add a node with slightly different hardware to the ceph cluster that is ONLY running ceph storage and NOT VMs.
  14. Ceph hardware different server models

    I asked this question before but cannot find my own post :-( ... I have some aging servers used only for Ceph storage installed on top of Proxmox. Do you think I can mix hardware by adding an additional node with slightly different hardware, the same network speed and drives, and comparable CPU and...
  15. Reinstall CEPH on Proxmox 6

    Can confirm: after upgrading to PVE 6 from 5.4 (which was successful) I tried to upgrade Ceph, which was not successful. I purged the Ceph config, tried to reinstall Nautilus, and made sure it is installed. It is failing with the same message. I even put all the nodes in the hosts file but...
  16. Performance PVE 5.3.1 vs. 5.4.1

    Yes, I think they were fixing something in the kernel which had a performance penalty, but I cannot find the original post and was wondering if anybody can confirm, based on the CPU usage on 5.4. Thanks.
  17. Ceph 12.2.12 RAM usage

    I am running a 3-node PVE cluster with Ceph that is providing another PVE cluster with Ceph storage. I first noticed the memory leak on 12.2.2, along with a post on this forum saying it is a bug fixed in a newer version, so I updated to 12.2.12. Now I see on 12.2.12 that all nodes use much more...
  18. changing the gateway to a non-default network

    We have been using the default gateway on Proxmox the way it was set up during installation, meaning on the cluster network. We want to move it to a 10 Gbps interface so that PVE VM backups run faster and do not possibly interfere with the cluster network. I saw a post here that...
  19. Performance PVE 5.3.1 vs. 5.4.1

    I saw a post shortly after 5.4.1 was released (which I cannot find now) saying there was a significant performance degradation for VMs after updating to 5.4.1. I am planning to update from 5.3.1, and our cluster does not suffer any performance issues now. Is there anything in particular in 5.4.x...
  20. vzdump backup network

    What is the network for vzdump - or maybe it would be better to ask how to define that network on Proxmox. I have 3 networks on my Proxmox, running: 1. for the PVE cluster (1 Gbps link aggregation), 2. for hosting VMs (1 Gbps link aggregation), 3. for Ceph public (10 Gbps link...
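
For the NTP threads above (results 1, 8 and 9): a minimal sketch, assuming a Debian Buster / PVE 6 node that currently uses systemd-timesyncd, of switching it to plain ntpd pointed at two local NTP servers. The 10.0.0.11/10.0.0.12 addresses are placeholders, not values from the posts.

```
# Disable systemd's time sync client so it does not compete with ntpd
systemctl disable --now systemd-timesyncd

# Install classic ntpd
apt update && apt install -y ntp

# Point ntpd at the two local servers (placeholder addresses);
# comment out the default Debian pool lines in /etc/ntp.conf by hand.
cat >> /etc/ntp.conf <<'EOF'
server 10.0.0.11 iburst
server 10.0.0.12 iburst
EOF

systemctl restart ntp

# Verify peers and offsets
ntpq -p
```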
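
For results 2 and 3: the blk_partition_remap errors are consistent with stale references to the pulled drives. A sketch, with placeholder device, VG/LV and mount-point names, of the cleanup the posts describe (LVM removal plus the mapper and fstab entries mentioned in the follow-up):

```
umount /mnt/olddata                    # if the LV is still mounted
sed -i '\#/mnt/olddata#d' /etc/fstab   # drop the matching fstab line

lvremove /dev/oldvg/oldlv              # LV, then VG, then PV
vgremove oldvg
pvremove /dev/sdx1

dmsetup ls                             # any leftover mapper entries?
dmsetup remove oldvg-oldlv             # remove a stale mapping if present

# Tell the kernel the disk is gone so it stops logging errors for it
echo 1 > /sys/block/sdx/device/delete
```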
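
For results 4 and 10: a couple of commands that may help when chasing transient clock skew on Nautilus. Raising the warning threshold only hides the symptom; fixing the time source is the real cure, and the 0.1 s value below is just an illustration.

```
# Show each monitor's measured clock offset
ceph time-sync-status

# Monitors warn once clocks drift more than mon_clock_drift_allowed
# (0.05 s by default). As a stopgap only, it can be relaxed, e.g.:
ceph config set mon mon_clock_drift_allowed 0.1
```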
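
For result 5: one thing worth checking on the PVE side is how the external Ceph cluster is defined. Below is a sketch of a /etc/pve/storage.cfg entry with placeholder storage name, pool and monitor addresses; PVE expects the external cluster's keyring at /etc/pve/priv/ceph/<storage-id>.keyring, and the "did not load config file" line often just reflects a missing local ceph.conf for that cluster rather than a fatal error.

```
# /etc/pve/storage.cfg (sketch) -- RBD storage on an external Ceph cluster
rbd: ceph-ext
    content images
    krbd 0
    monhost 10.10.10.1 10.10.10.2 10.10.10.3
    pool vm-pool
    username admin

# keyring copied from the external cluster to:
#   /etc/pve/priv/ceph/ceph-ext.keyring
```

The move itself can also be driven from the CLI with something like `qm move_disk 100 virtio0 ceph-ext --delete 1` (the VM id and disk name here match the quoted output; the storage name is the placeholder above).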
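
For results 6 and 7: in PVE 6 the migration network can be pinned in /etc/pve/datacenter.cfg, which is presumably the option seen on the 6.1-5 nodes. A sketch with a placeholder CIDR for the 10 Gbps subnet; note this steers node-to-node migration traffic, while a plain "move disk" to Ceph storage flows over whichever network reaches the Ceph public addresses.

```
# /etc/pve/datacenter.cfg (sketch)
# route migration traffic over a dedicated subnet instead of the
# cluster network; 10.10.20.0/24 is a placeholder
migration: secure,network=10.10.20.0/24
```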
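
For result 18: a sketch of what moving the default gateway in /etc/network/interfaces could look like, with placeholder interface names and addresses. Only one default gateway is defined per node, so it is removed from the old bridge and added to the 10 Gbps interface; apply with a reboot (or ifreload -a where ifupdown2 is installed).

```
# old setup: the gateway lived on the cluster bridge
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0
    # (gateway 192.168.1.1 removed from here)

# new: default gateway on the 10 Gbps interface
auto ens2f0
iface ens2f0 inet static
    address 10.10.20.10
    netmask 255.255.255.0
    gateway 10.10.20.1
```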
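
For result 20: as far as I know vzdump has no network option of its own in this PVE generation; backup traffic simply follows the route to the backup storage. The usual approach is therefore to make the backup target reachable through an address on the fast subnet, e.g. via a storage.cfg entry like this sketch (server address, export path and storage name are placeholders):

```
# /etc/pve/storage.cfg (sketch)
nfs: backup-10g
    server 10.10.20.50
    export /srv/pve-backups
    path /mnt/pve/backup-10g
    content backup
    maxfiles 3
```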