We have a 4-node PVE cluster and a separate 4-node Ceph cluster, with separate networks and interfaces for the PVE cluster, Ceph private and Ceph public.
When moving a VM from local PVE storage it seems to be using the PVE cluster subnet - is there a way to change this behavior?
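From what I can tell from the docs, migration traffic can be pinned to a specific subnet in /etc/pve/datacenter.cfg, so I am guessing something like the following would do it (the CIDR below is just an example, not our real subnet) - is that the right knob for this?

# /etc/pve/datacenter.cfg - route migration traffic over a dedicated subnet (example CIDR)
migration: secure,network=10.10.10.0/24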
Thank you
Yes, I meant local servers which cache the NTP.org pool or the Debian pool. I happen to have 2 NTP servers across our subnets that can serve that purpose. I used them before to provide time to regular ntpd while disabling systemd's timesyncd service.
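For reference, this is roughly what I have in mind for the nodes, with placeholder hostnames standing in for our two internal servers:

# /etc/systemd/timesyncd.conf (ntp1/ntp2 below are placeholders for our internal NTP servers)
[Time]
NTP=ntp1.example.lan ntp2.example.lan
FallbackNTP=0.debian.pool.ntp.org

or the equivalent server lines in ntp.conf if we stay on classic ntpd.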
I just added more VMs and still...
Thank you for the explanation.
Is it still best practice for Ceph to use a local NTP source rather than the x.debian.pool.ntp.org servers that come with systemd-timesyncd.service?
Also, I remember that NTP was doing peering between the nodes, following this post...
Just installed Nautilus from scratch and it has been operational for a day with several VMs (4 nodes), 2 pools. Because I am just testing, I left the default timesyncd, and for the last 24 hours I did not get any clock skew messages in the log. The time settings are the defaults that come with PVE...
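In case it is useful to anyone watching the same thing, I have just been checking skew with the usual commands, nothing fancy:

# what I run on the nodes to watch for clock skew
ceph -s                  # a skew would show up as a HEALTH_WARN here
ceph time-sync-status    # per-monitor skew as seen by the mons
timedatectl              # confirms timesyncd is active and the clock is synchronized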
The use case is just to accommodate more systems that need more intense disk operations. The majority of our systems (Linux) are almost idle, but we have some heavy users; I kept them on the Proxmox hosts' local drives but want to move them to Ceph. We are currently utilizing only 25% of the link for Ceph...
I am planning to get 8 x PX05SMB160 SSD drives and spread them across 4 Ceph servers, two per server. They are decent 1.6TB SAS drives: 1900 MiB/s read, 850 MiB/s write, 270000 IOPS read and 100000 IOPS write, DWPD 10.
I am currently using 13K SAS spinners with 6 OSDs per server (3...
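Once the new SSDs arrive I assume adding them is just the usual one command per device on each server; something like this on Nautilus (device names below are made up), or pveceph createosd with the older Luminous syntax:

# per new SSD on each Ceph server (example device names)
pveceph osd create /dev/sdg
pveceph osd create /dev/sdh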
I don't want to merge two clusters. I want to add a node with slightly different hardware to the Ceph cluster that is ONLY running Ceph storage and NOT VMs.
I asked this question before but cannot find my own post :-( ...
I have some aging servers used only for Ceph storage installed on top of Proxmox. Do you think I can mix hardware by adding an additional node with slightly different hardware, same network speed, drives and comparable CPU and...
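Just to make the question concrete, what I am picturing on the new box is roughly this, with a placeholder IP for an existing cluster member (and simply never scheduling VMs on it):

# on the freshly installed node
pvecm add 10.10.10.11          # join the existing PVE cluster
pveceph install                # install the same Ceph release as the other nodes
pveceph osd create /dev/sdb    # then one OSD per local drive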
Can confirm: after upgrading from PVE 5.4 to 6 (which was successful), I tried to upgrade Ceph, which was not successful. I purged the Ceph config and tried to reinstall with Nautilus; I made sure it was installed. It is failing with the same message. I even put all the nodes in the hosts file but...
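For completeness, the entries I added look roughly like this (the names and addresses below are placeholders, not our real ones):

# /etc/hosts on every node
10.10.10.11  pve1.mydomain.local pve1
10.10.10.12  pve2.mydomain.local pve2
10.10.10.13  pve3.mydomain.local pve3
10.10.10.14  pve4.mydomain.local pve4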
Yes, I think they were fixing something in the kernel which had a performance penalty, but I cannot find the original post and was wondering if anybody can confirm based on the CPU usage on 5.4.
Thx
I am running a 3-node PVE cluster with Ceph that is providing another PVE cluster with Ceph storage.
I first noticed the memory leak on 12.2.2, and a post on this forum said it is a bug that is fixed in a newer version, so I updated to 12.2.12.
Now I see on 12.2.12 that all nodes use much more...
We have been using the default gateway on Proxmox the way it was set up during installation, meaning it sits on the cluster network. We want to move it to a 10 Gbps interface so the PVE VM backups run faster and do not interfere with the cluster network. I saw a post here that...
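From what I have read, I think the change is just moving the gateway line onto the 10 Gbps bridge in /etc/network/interfaces (and removing it from the old vmbr0 stanza, since there can only be one default gateway). Something like this, with placeholder names and addresses:

# /etc/network/interfaces - 10 Gbps bridge taking over the default gateway (placeholders)
auto vmbr1
iface vmbr1 inet static
    address 10.10.20.11
    netmask 255.255.255.0
    gateway 10.10.20.1
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0

Is that all there is to it, or does anything else on the PVE side care about where the gateway lives?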
I saw a post shortly after 5.4.1 was released (which I cannot find now) saying there was a significant performance degradation for VMs after updating to 5.4.1.
I am planning to update from 5.3.1, and our cluster does not suffer any performance issues now. Is there anything in particular in 5.4.x...
What network does vzdump use? Or maybe it would be better to ask how to define that network on Proxmox (my rough guess at an approach is sketched after the list below).
I have 3 networks on my Proxmox nodes, running:
1. for the PVE cluster (1 Gbps, link aggregation)
2. for hosting VMs (1 Gbps, link aggregation)
3. for Ceph public (10 Gbps link...
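My rough guess at an approach (placeholder addresses below): as far as I can tell vzdump has no network setting of its own, and the backup traffic simply follows the route to the backup storage, so it would come down to either giving the NFS server an address on the 10 Gbps subnet or adding a host route for it on each node:

# pin the route to the backup NFS server through the 10 Gbps network (example addresses)
ip route add 192.168.50.10/32 via 10.10.30.1

Is that the right way to think about it, or is there an actual setting I am missing?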
Let me just clarify: I am using the built-in Proxmox backup (vzdump) to back up VMs to an external NFS storage.
I guess I am using vmbr0 for backups as the "default" network, which is also my cluster/corosync network. I was not aware that the backup (the Proxmox backup) network assignment can be changed. I don't see any type...
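The storage itself is defined the usual way, roughly like this in /etc/pve/storage.cfg (server address and export below are placeholders), which is why I assumed the network side is decided somewhere else:

# /etc/pve/storage.cfg - the NFS backup target (placeholders)
nfs: backup-nfs
    server 192.168.50.10
    export /export/pve-backups
    path /mnt/pve/backup-nfs
    content backup
    maxfiles 3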
We have 3 networks, each on its own bonded dual-port interface:
1. PVE cluster - 1 Gbps
2. Client-facing network for VMs - 1 Gbps
3. Ceph public - 10 Gbps
I do have MTU 9000 on the 10 Gbps connection for Ceph, but my question is really whether it makes sense to put MTU 9000 on the 1 Gbps...
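For reference, the jumbo frames on the Ceph side are just set on the bond in /etc/network/interfaces (names below are placeholders), with the switch ports configured to match:

# /etc/network/interfaces - MTU 9000 only on the Ceph bond (placeholder names)
iface bond2 inet manual
    bond-slaves ens2f0 ens2f1
    bond-mode 802.3ad
    bond-miimon 100
    mtu 9000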
Huh... we broke a port on the card by pulling on the cable, thinking it was already unplugged and just stuck - it was stuck because it was not unplugged. It is working, and we just made sure this cable is not moving, with some tape :p...
We want to replace it anyway just in case somebody pulls on it...
hi udo,
We are using Ceph, but we are also moving everything from the local hard drives to the Ceph servers ahead of time. I already set the expected votes to 1 and the separated node is back up and operational.
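For reference, this is all it took on the relocated node to get it operational on its own:

# run on the moved node to let it reach quorum by itself
pvecm expected 1
pvecm status    # to double-check quorum afterwards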
All seems to be working now in both locations. From the original location we will be moving 3 nodes at...
Again, I am moving our 4-node cluster to a new location. I thought I could just move one node (node4) to the new location and slowly move the VMs to that node over a period of two weeks, then move the remaining 3 nodes and let them resync.
I am worried about the resyncing; I see from the...