I wonder if it is possible with Proxmox to do HA that way, with Storage Replication across nodes in two different data centers. Has anyone tried that? Any other ideas?
We can get a fast and reliable link between data centers from the same data center provider.
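For what it is worth, my understanding is that the built-in Storage Replication is configured per guest with pvesr, works only on ZFS-backed disks and only between nodes of the same cluster, so the two data centers would have to join one cluster. A rough sketch of what I have in mind (the VM id 100, the target node name pve02 and the 15-minute schedule are just placeholders):
pvesr create-local-job 100-0 pve02 --schedule "*/15"
pvesr status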
thank you
I have an RBD storage. I can run rados bench from the Ceph cluster, but how would I go about testing from the client Proxmox cluster? When I try to run rados bench on the client with the RBD storage I get "no monitors specified to connect to".
Any help with this command appreciated.
Thx
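My understanding is that on the client cluster there is no local ceph.conf for the external cluster, so rados has to be pointed explicitly at the monitors and at the keyring PVE stores for the RBD storage. Something along these lines (the monitor IP, pool name and storage ID are placeholders):
rados bench -p <pool> 60 write -m 10.10.10.1 -n client.admin --keyring /etc/pve/priv/ceph/<storageid>.keyring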
I have a new 3-node dedicated Ceph cluster on PVE 5.1. I am connecting to a 4-node PVE 5.1 cluster with VMs over the Ceph public 10Gb network, with 2 bonded interfaces and a 9000 MTU enabled on both the private and public Ceph networks.
I have 18 x identical 1TB drives, 10K spinners, which gives me 18 OSDs.
My 4...
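As a sanity check for the 9000 MTU, I verify the path end to end with a non-fragmenting ping (the target IP is a placeholder); 8972 is 9000 minus the 28 bytes of IP/ICMP headers:
ping -M do -s 8972 10.10.10.1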
OK, I was just able to edit /etc/pve/storage.cfg and restart the PVE daemon - that did the trick. I now see the local ZFS pool on each node.
Still wondering if there is a bug in the GUI, since I was not able to see the second node's pool there and add it.
I understand it is local storage. I am just adding the storage to the cluster as local storage restricted to that particular node, just like I would add LVM or a directory but of type ZFS, and I don't see the second pool for the second node. What am I doing wrong?
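For reference, what I am trying to end up with in /etc/pve/storage.cfg is roughly this (node names are just examples):
zfspool: zfspoolpve01
        pool zfspoolpve01
        content images,rootdir
        nodes pve01
zfspool: zfspoolpve02
        pool zfspoolpve02
        content images,rootdir
        nodes pve02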
I am referring to additional storage. I have PVE installed on a RAID controller drive and have LVM there. I added additional hard drives to this machine and created two pools via the CLI:
on PVE01:
zpool create -f -o ashift=12 zfspoolpve01 mirror sda sdb
...and on PVE02:
zpool create -f -o ashift=12...
Thank you
I am not sure if I understand. I thought that NUMA had 2 main objectives on a multi-socket hypervisor:
1. allocate the logical cores for a VM within the same CPU socket, and
2. allocate memory for a logical core within the same CPU socket
...so there is no penalty when processing. So...
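For context, by enabling NUMA I mean the per-VM flag, e.g. (the VM id and topology are just an example):
qm set 100 --numa 1 --sockets 2 --cores 2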
I have a two-node test cluster with expected votes set to 1. I created two ZFS pools on the two nodes, pve01 and pve02.
I see them being available/online:
root@pve01-nyc:~# zpool status
pool: zfspoolpve01
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM...
Do I need to enable NUMA on a VM with a 2 logical core setup with Hyper-Threading?
I understand I need it with more than 2 cores for obvious reasons, but I thought I read somewhere that if there are only 2 cores and it is a CPU with Hyper-Threading (in my case the Xeon v4 family) I don't have to.
Let...
lucaferr wrote: I tried to synchronize 3 different nodes with a single NTP source a few hundreds kilometers away from the nodes...
You need to use local NTP servers for synchronization; I would not recommend using any outside/public servers.
See this post: https://forum.proxmox.com/threads/proxmoxve-ceph-clock-issue.20684/#post-105441 and go to the 6th post from the top, from stevendemetrius. These are the instructions I followed, and I had a Ceph cluster running for 2 years with no clock skew issues.
I have two local NTP servers that I...
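In short, on my nodes this amounts to running a full ntpd against the local servers instead of systemd-timesyncd, roughly like this (the server IPs are placeholders):
systemctl stop systemd-timesyncd
systemctl disable systemd-timesyncd
apt-get install ntp
# in /etc/ntp.conf replace the default pool entries with the local servers:
server 192.168.1.10 iburst
server 192.168.1.11 iburst
systemctl restart ntp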
I have 7 nodes total: 4 dedicated to VMs and 3 for Ceph only (under Proxmox). For about 2 years they were running as two separate clusters with no issues, but now we are reinstalling everything and I am wondering if there would be an advantage to creating a single 7-node cluster. Any advice, any...
I read somewhere quite some time ago that having 1 physical CPU with a high clock speed is better than having 2 CPUs for Ceph - is this still the case?
Also, how would you know it is time to upgrade your network from 10Gb to something faster? We are maxing out at about 1.5 Gbps when looking at...
OK, so I found an explanation of what exactly the apply/commit latency is (before, I was just comparing it to what other people were posting), which makes sense for non-SSD drives with the journal on the OSDs (no separate journal drive).
The question now is: what can I do to improve these numbers? Any...
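For reference, the numbers I am talking about are the per-OSD commit/apply latencies, which can also be pulled on the CLI with:
ceph osd perf
...which lists fs_commit_latency and fs_apply_latency in milliseconds for each OSD.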
We are running Hammer on a dedicated 3-node cluster on top of Proxmox, using 10Gb for the cluster and public networks (separate cards), with 16 x identical 10K SAS drives spread evenly among the 3 nodes.
We are not having any issues, but I see the latency is almost never in single digits, almost...
I am not sure what you mean, could you elaborate? Are you saying that when using timesyncd and a local NTP server you had no issues with clock skew on Ceph - is that right?
We had two local NTP servers (in case one server dies; one of them was standalone and not a VM) and with systemd...