I see... I should have focused more on the HA part when I bought the hardware... would 4x1G be enough?
I wonder if using the 2 NVMe disks inside a SATA enclosure would work well, but that would at least allow me to reuse them...
This is a follow-up to another post, but I am simplifying the problem by leaving iSCSI out of the balance for now. What I am trying to understand is whether this setup could work for Ceph, and how to achieve network redundancy...
I have 3 nodes with 2 NICs each (2x10GbE). Each port is connected to a distinct...
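To make it concrete, the kind of redundancy I have in mind is an active-backup bond over the two ports, roughly like the snippet below in /etc/network/interfaces; the interface names and the address are placeholders, not my actual config:

auto bond0
iface bond0 inet static
    address 10.10.10.10/24
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode active-backup
    bond-miimon 100

Not sure yet if that is the right approach given that each port goes to a different switch, hence the question.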
I would like to resize a ZFS mirror to add a block DB for Ceph. Could it be done without reinstalling? One way I am thinking of is enabling autoexpand, taking one disk offline, resizing the partition, then doing the same for the other disk. Can it be done that way? Any suggestion is welcome :)
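To make the idea concrete, this is roughly the per-disk cycle I have in mind (the pool name rpool and the partition device are placeholders, and I have not verified that ZFS will accept the repartitioned disk back):

zpool set autoexpand=on rpool
zpool offline rpool /dev/sda3
# repartition the disk here (parted/sgdisk), freeing space for the Ceph block DB
zpool online rpool /dev/sda3
zpool status rpool    # wait for the resilver before repeating on the second disk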
A couple more questions.
For now, each node (as I said) has 2x256GB NVMe M.2 disks and 2x480GB SSDs used for Ceph. The M.2 card is using the only possible PCIe 3.0 x8 extension slot. I am wondering if one better way to handle what I need would be to replace this M.2 card with a network card to extend the...
I am looking for some guidance to finalize the setup of a 3-node Proxmox cluster with Ceph and shared iSCSI storage. While it's working, I am not really happy with the Ceph cluster's resilience.
Each node has 2x10GbE ports and 2x480GB SSDs dedicated to Ceph...
I have 3 nodes that use their own subnet for Ceph:
Node1 : 10.10.10.10
Node2 : 10.10.10.11
Node3 : 10.10.10.12
I would now like to put them in their own VLAN. What would be the best way to do it with minimum downtime and noise between the nodes? Should I first stop the Ceph node from being announced?
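For reference, my understanding is that if the addresses stay the same and only the traffic gets tagged, the change on each node is roughly the snippet below (VLAN ID 100 and the interface name are examples only), plus tagging the switch ports; ceph.conf should not need changes since the 10.10.10.0/24 network stays the same:

# /etc/network/interfaces – move the Ceph IP onto a VLAN-tagged interface
auto enp2s0.100
iface enp2s0.100 inet static
    address 10.10.10.10/24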
Hmm, OK. The 2 switches have a non-blocking throughput of 120 Gbps, a switching capacity of 240 Gbps and a forwarding rate of 178 Mpps, so it's probably enough, but indeed I will test. I guess using a separate VLAN for iSCSI may also be needed in such a case, though I am not sure since they are on separate...
We have a cluster of 3 machines with 2x480GB SSDs each, and we plan to add 2x960GB to each. At first we were thinking of just replacing the 480GB disks, but now I am wondering if we can mix disks of different sizes.
How will replication work in such a case? What's the best pattern...
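From what I read, Ceph weights each OSD by its capacity in the CRUSH map, so with mixed sizes the 960GB disks should simply receive about twice as much data as the 480GB ones. I was planning to verify the weights and per-OSD usage with the command below once the new disks are in:

ceph osd df tree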
Yeah, that's probably the reason. I am wondering now what 1% WEAROUT means, though. Should I contact my hardware supplier for an exchange? They were never used until the last 3 weeks... They are supposed to be endurance SSDs (Samsung SSD PM883, SATA3, bulk, enterprise medium endurance).
My current setup is the following: I have 3 nodes, each with 2x10GbE NICs. On each I set up Ceph and an iSCSI storage. iSCSI is handled on the main cluster NIC (shared with the Proxmox sync network), while the Ceph data network is handled on the other NIC. Each NIC is connected to a different switch. The NAS...
And following this issue, it seems to have attempted to write/read a lot of stuff on the SSDs... I now have the 2 Ceph disks' wearout at 1%. What does it mean?
root@pve2:~# smartctl -A /dev/sda
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.73-1-pve] (local build)
Copyright (C) 2002-19, Bruce...
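(The full output is long; the line I was actually looking at can be isolated as below. My assumption is that on these Samsung drives it is the Wear_Leveling_Count attribute that Proxmox reports as wearout.)

smartctl -A /dev/sda | grep -i wear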
So, to give more details: once the node restarted, the folder /etc/pve was empty, and syslog was returning the following error:
pveproxy[15720]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1737.
I deleted...
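For anyone hitting the same thing: my assumption is that an empty /etc/pve means the cluster filesystem (pmxcfs) is not mounted, which can be checked and, if needed, restarted with something like:

systemctl status pve-cluster
mount | grep /etc/pve
systemctl restart pve-cluster   # should repopulate /etc/pve if pmxcfs simply was not running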
It seems the versions have been correctly installed on that node. I had a quick glance and it looks similar to the other nodes:
root@pve3:~# pveversion -v
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset)...