Search results

  1. Moving VM from local storage to ceph

    We have a 4-node PVE cluster and a separate 4-node Ceph cluster, with separate networks and interfaces for the PVE cluster, Ceph private, and Ceph public. When moving a VM from local PVE storage, it seems to be using the PVE cluster subnet - is there a way to change this behavior? Thank you
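
    For context, a move-disk onto RBD is Ceph client traffic, which follows the monitor addresses and public_network in ceph.conf rather than any PVE cluster setting, so that is the first place to check. A minimal ceph.conf sketch with hypothetical subnets:

        [global]
            # hypothetical subnets; substitute your own
            public_network  = 10.10.20.0/24   # client <-> MON/OSD traffic, incl. disk moves
            cluster_network = 10.10.30.0/24   # OSD <-> OSD replication only
            mon_host = 10.10.20.11 10.10.20.12 10.10.20.13

    If the monitors resolve to addresses on the PVE cluster subnet, clients will use that subnet regardless of how the OSD networks are set.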
  2. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP?

    Yes, I meant a local server or servers which cache the NTP.org pool or the Debian pool. I happen to have 2 NTP servers across our subnets that can serve that purpose. I used them before to provide time over regular NTP while disabling systemd's time sync service. I just added more VMs and still...
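
    Pointing systemd-timesyncd at such local servers is a one-file change; a minimal sketch, assuming two hypothetical caching servers on your LAN:

        # /etc/systemd/timesyncd.conf
        [Time]
        NTP=ntp1.example.lan ntp2.example.lan       # hypothetical local caching servers
        FallbackNTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org

        # apply with: systemctl restart systemd-timesyncd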
  3. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP?

    Thank you for the explanation. Is it still best practice for Ceph to use a local NTP source and not x.debian.pool.ntp.org, which comes with systemd-timesyncd.service? Also, I remember that NTP was doing peering between nodes following this post...
  4. Proxmox 6.1-2 with Ceph 14.2.5 - does it still need NTP?

    Just installed Nautilus from scratch and it's been operational for a day with several VMs (4 nodes) and 2 pools. Because I am just testing, I left the default timesyncd, and for the last 24 hours I did not get any clock skew messages in the log. The time settings are the defaults that come with PVE...
  5. Need advice on SSD setup for Ceph

    The use is just to accommodate more systems that need more intense disk operations. The majority of our systems (Linux) are almost idle, but we have some heavy users; I kept them on drives local to the Proxmox hosts but want to move them to Ceph. We are now utilizing for Ceph only 25% of the link...
  6. Need advice on SSD setup for Ceph

    I am planning to get 8 x PX05SMB160 SSD drives and spread them across 4 Ceph servers, two per server. They are decent 1.6TB SAS drives: 1900 MiB/s read, 850 MiB/s write, 270000 IOPS read and 100000 IOPS write, DWPD 10. I am currently using 13K SAS spinners with 6 OSDs per server (3...
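
    For reference, turning such drives into OSDs on PVE 6 is a couple of commands per drive; a sketch assuming a hypothetical device name /dev/sdb (on PVE 5.x the command was pveceph createosd):

        # wipe leftover metadata if the drive was used before (destructive)
        ceph-volume lvm zap /dev/sdb --destroy
        pveceph osd create /dev/sdb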
  7. 2 clusters vs. 1, Ceph and VM clusters

    I don't want to merge the two clusters. I want to add a node with slightly different hardware to the Ceph cluster that is ONLY running Ceph storage and NOT VMs.
  8. Ceph hardware, different server models

    I asked this question before but cannot find my own post :-( ... I have some aging servers used only for Ceph storage installed on top of Proxmox. Do you think I can mix hardware by adding an additional node with slightly different hardware, the same network speed and drives, and a comparable CPU and...
  9. Reinstall CEPH on Proxmox 6

    Can confirm: after upgrading to PVE 6 from 5.4 (which was successful), I tried to upgrade Ceph, which was not successful. I purged the Ceph config and tried to reinstall with Nautilus; I made sure it is installed. It is failing with the same message. I even put all the nodes in the hosts table but...
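
    For anyone hitting the same wall, the purge-and-retry sequence on PVE 6 is roughly the following sketch; it is not a guaranteed fix for the error above, and pveceph purge is destructive:

        pveceph purge                          # remove the node's Ceph configuration
        pveceph install                        # PVE 6 pulls in Nautilus packages
        pveceph init --network 10.10.20.0/24   # hypothetical Ceph public network
        pveceph mon create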
  10. Performance PVE 5.3.1 vs. 5.4.1

    Yes, I think they were fixing something in the kernel which had a performance penalty, but I cannot find the original post and was wondering if anybody can confirm based on the CPU usage on 5.4. Thx
  11. Ceph 12.2.12 RAM usage

    I am running a 3-node PVE cluster with Ceph that is providing another PVE cluster with Ceph storage. I first noticed the memory leak on 12.2.2, along with a post on this forum saying it is a bug fixed in a newer version, so I updated to 12.2.12. Now I see on 12.2.12 that all nodes use much more...
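
    One thing to rule out before calling it a leak: later Luminous builds size the BlueStore cache against osd_memory_target (default 4 GiB per OSD), so memory use can legitimately grow after the update. A hedged ceph.conf sketch to cap it lower, if your build has the option:

        [osd]
            # bytes; caps each OSD near 2 GiB, tune to your RAM budget
            osd_memory_target = 2147483648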
  12. Changing the gateway to a non-default network

    We have been using the default gateway on Proxmox the way it was set up during the installation, meaning on the cluster network. We want to move it to a 10 Gbps interface so that PVE VM backups run faster and do not possibly interfere with the cluster network. I saw a post here that...
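
    The change itself lives in /etc/network/interfaces: drop the gateway line from the cluster bridge and add one on the 10 Gbps interface (ifupdown allows only one default gateway per host). A sketch with hypothetical names and addresses:

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.11/24
            bridge-ports bond0          # gateway line removed from this stanza
            bridge-stp off
            bridge-fd 0

        auto vmbr1
        iface vmbr1 inet static
            address 10.10.40.11/24
            gateway 10.10.40.1          # default gateway now on the 10 Gbps link
            bridge-ports bond1
            bridge-stp off
            bridge-fd 0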
  13. Performance PVE 5.3.1 vs. 5.4.1

    I saw a post shortly after 5.4.1 was released (which I cannot find now) that there was a significant performance degradation for VMs after updating to 5.4.1. I am planning to update from 5.3.1, and our cluster does not suffer any performance issues now. Is there anything in particular in 5.4.x...
  14. vzdump backup network

    What is the network for vzdump, or maybe it would be better to ask how to define that network on Proxmox. I have 3 networks on my Proxmox: 1. for the PVE cluster (1 Gbps link aggregation), 2. for hosting VMs (1 Gbps link aggregation), 3. for Ceph public (10 Gbps link...
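
    As far as I know, vzdump has no network setting of its own; backup traffic simply goes to whatever address the target storage was defined with, so the practical knob is the storage definition. A sketch of /etc/pve/storage.cfg with the NFS server addressed on the 10 Gbps subnet (names and addresses hypothetical):

        nfs: backup-nfs
            server 10.10.40.50          # NFS server's address on the 10 Gbps network
            export /srv/backups
            path /mnt/pve/backup-nfs
            content backup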
  15. 9000 MTU size on 1GB interface

    Let me just clarify: I am using Proxmox backup to back up VMs to an external NFS storage. I guess I am using vmbr0 for backup as the "default" network, which is also my cluster/corosync network; I was not aware that the backup (the Proxmox backup) network assignment can be changed. I don't see any type...
  16. Changing a network card

    Thx - will give it a try...
  17. 9000 MTU size on 1GB interface

    We have 3 networks, each on its own dual-port bonded interface: 1. PVE cluster, 1 Gbps; 2. client-facing network for VMs, 1 Gbps; 3. Ceph public, 10 Gbps. I do have 9000 MTU on the 10 Gbps connection for Ceph, but my question is really whether it makes sense to put 9000 MTU on the 1 Gbps...
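
    Mechanically, jumbo frames are one mtu line per stanza in /etc/network/interfaces, and every device and switch port in the path must agree. A hypothetical sketch for a bonded 10 Gbps Ceph link, plus a quick end-to-end check:

        auto bond1
        iface bond1 inet static
            address 10.10.20.11/24      # hypothetical Ceph public address
            bond-slaves ens1f0 ens1f1
            bond-mode 802.3ad
            mtu 9000

        # verify: 8972 = 9000 minus 28 bytes of IP/ICMP headers
        ping -M do -s 8972 10.10.20.12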
  18. Changing a network card

    Huh... we broke a port on the card by pulling a cable we thought was already unplugged and just stuck; it was stuck because it was not unplugged. It is working, and we made sure the cable is not moving with some tape :p... We want to replace it anyway just in case somebody pulls on it...
  19. Moving cluster to new location

    Hi udo, we are using Ceph, but we are moving everything onto local hard drives so we can move the Ceph servers as well ahead of time. I already did the "expected 1" step, and the separated node is back up and operational. All seems to be working now in both locations. From the original location we will be moving 3 nodes at...
  20. Moving cluster to new location

    Again, I am moving our 4-node cluster to a new location. I thought I could just move one node (node4) to the new location and slowly move the VMs to that node over a period of two weeks, then move the remaining 3 nodes and they would resync. I am worried about the resyncing; I see from the...
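
    For the quorum side of such a staged move, the usual lever is corosync's expected votes; a sketch, run on the side that must stay quorate (use with care, and never let both halves run writable at once):

        pvecm status       # shows current quorum state and expected votes
        pvecm expected 1   # lets a single node (or minority) keep quorum during the move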
