Search results

  1. B

    proxmox cluster and ceph network redundancy with 3 nodes

    I see... I should have focused more on the HA part when I bought the hardware... would 4x1G be enough? I wonder whether the 2 NVMe disks would work well inside a SATA enclosure, since that would allow me to reuse them...
  2. B

    proxmox cluster and ceph network redundancy with 3 nodes

    This is a follow-up to another post, but I have simplified the problem by removing iSCSI from the equation for now. What I am trying to understand is whether this setup could work for Ceph, and how to achieve network redundancy... I have 3 nodes with 2 NICs each (2x10GbE). Each port is connected to a distinct...
  3. B

    resize zfs mirror?

    Hrm, OK... so I will need to reinstall, I guess... Is there any rule about sizing the block DB? (see the block DB sizing note after this list)
  4. B

    resize zfs mirror?

    I would like to resize a ZFS mirror to add a block DB for Ceph. Could it be done without reinstalling? One way I am thinking of is enabling autoexpand, taking one disk offline, resizing the partition, then doing the same for the other disk. Can it be done that way? Any suggestion is welcome :)
  5. B

    [SOLVED] Ceph & iSCSI HA - How to configure the network?

    You mean bonding across the 2 switches, with a bond over both interfaces? Should I balance the traffic or use an active-backup strategy? (see the bond sketch after this list)
  6. B

    [SOLVED] Ceph & iSCSI HA - How to configure the network?

    A couple more questions. For now, as I said, each node has 2x256GB NVMe M.2 disks and 2x480GB SSDs used for Ceph. The M.2 card is using the only available PCIe 3.0 x8 slot. I am wondering if a better way to handle what I need would be to replace this M.2 card with a network card to extend the...
  7. B

    [SOLVED] Ceph & iSCSI HA - How to configure the network?

    I am looking for some guidance to finalize the setup of a 3-node Proxmox cluster with Ceph and shared iSCSI storage. While it's working, I am not really happy with the Ceph cluster's resilience. Each node has 2x10GbE ports and 2x480GB SSDs dedicated to Ceph...
  8. B

    [SOLVED] how to change the network configuration of ceph nodes?

    I have 3 nodes that use their own subnet for Ceph: Node1: 10.10.10.10, Node2: 10.10.10.11, Node3: 10.10.10.12. I would now like to put them in their own VLAN. What would be the best way to do it with minimum downtime and noise between the nodes? Should I first stop the Ceph node from being announced? (see the VLAN sketch after this list)
  9. B

    mixing iscsi & ceph storage

    Hmm, OK. The 2 switches have a non-blocking throughput of 120 Gbps, a switching capacity of 240 Gbps and a forwarding rate of 178 Mpps, so it's probably enough, but indeed I will test. I guess using a separate VLAN for iSCSI may also be needed in such a case, though I'm not sure since they are on separate...
  10. B

    managing different disk sizes with ceph

    So I just need to add the 2 disks on each machine and it will be good? I.e. having 2x480GB + 2x960GB on each? That's pretty cool :)
  11. B

    managing different disk sizes with ceph

    We have a cluster of 3 machines with 2x480GB SSDs each, and we plan to add 2x960GB disks to each. At first we were thinking of just replacing the 480GB disks, but now I am wondering if we can mix disks of different sizes. How will replication work in such a case? What's the best pattern... (see the mixed-size OSD sketch after this list)
  12. B

    Using CIFS/NFS as datastore

    Out of curiosity, did you set it up on its own machine?
  13. B

    pve node status unavailable

    Yeah, that's probably the reason. I am wondering what 1% wearout means now, though. Should I contact my hardware supplier for an exchange? They were never used until the last 3 weeks... They are supposed to be endurance-class SSDs (Samsung SSD PM883, SATA3, bulk, enterprise medium endurance)
  14. B

    mixing iscsi & ceph storage

    My current setup is the following: I have 3 nodes, each with 2x10GbE NICs. On each I set up Ceph and an iSCSI storage. iSCSI is handled on the main cluster NIC (shared with the Proxmox sync network) while the Ceph data network is handled on the other NIC. Each NIC is connected to a different switch. The NAS...
  15. B

    pve node status unavailable

    And following this issue, it seems to have attempted to read/write a lot of stuff on the SSDs... I now have the 2 Ceph disks at 1% wearout. What does it mean? (see the wearout check after this list) root@pve2:~# smartctl -A /dev/sda smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.73-1-pve] (local build) Copyright (C) 2002-19, Bruce...
  16. B

    pve node status unavailable

    Another piece of info: it seems that Ceph is now completely broken; the versions are different and some are even undefined :/
  17. B

    pve node status unavailable

    Well, I just did an apt upgrade. Now the second node is down. Same result: /etc/pve is empty.
  18. B

    pve node status unavailable

    To give more details: once the node restarted, the folder /etc/pve was empty, and syslog was returning the following error: pveproxy[15720]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1737. (see the recovery sketch after this list) I deleted...
  19. B

    pve node status unavailable

    It seems the versions have been correctly installed on that node. I had a quick glance and it looks similar to the other nodes: root@pve3:~# pveversion -v perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset)...
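
Notes on the results above

For the network-redundancy and bonding questions in results 1-2 and 5-7: a minimal sketch of an active-backup bond in /etc/network/interfaces, assuming hypothetical NIC names enp1s0f0/enp1s0f1 and the 10.10.10.0/24 Ceph subnet mentioned in result 8. Active-backup needs no LACP on the switches, so the two ports can stay on two independent (non-stacked) switches; a balancing mode such as LACP (802.3ad) would need both ports on the same switch or an MLAG/stacked pair.

    auto bond0
    iface bond0 inet static
            # per-node Ceph address; use .11 and .12 on the other nodes
            address 10.10.10.10/24
            bond-slaves enp1s0f0 enp1s0f1
            bond-mode active-backup
            bond-primary enp1s0f0
            bond-miimon 100
            # only one slave carries traffic at a time; the other takes over on link failure

With only two NICs per node, the cluster and VM traffic would have to ride the same bond as tagged VLANs, which is a trade-off rather than a full design.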
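
On the block DB sizing question in results 3-4: the Ceph BlueStore documentation's rule of thumb is that block.db should not be smaller than roughly 4% of the data device. Applied to the 480GB SSDs mentioned in these threads (an assumption about which OSDs are meant), that works out to:

    0.04 * 480 GB ≈ 19.2 GB   ->   a 20-30 GB block.db partition per OSD leaves some headroom

Note that autoexpand only helps when growing a pool; shrinking an existing ZFS vdev in place is not supported, which is presumably why reinstalling came up in result 3.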
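
For the VLAN move in result 8: a minimal sketch, assuming the Ceph NIC is a hypothetical enp1s0f1, the new VLAN tag is 30, the switch ports already carry that VLAN tagged, and the existing 10.10.10.0/24 subnet is kept so the monitor addresses in ceph.conf do not change. Doing one node at a time, with noout set so OSDs are not rebalanced while a node is briefly unreachable, keeps the noise down.

    ceph osd set noout                      # optional: no rebalancing during the change

    # /etc/network/interfaces on node1 (repeat per node with its own address);
    # remove the address from the old untagged interface at the same time
    auto enp1s0f1.30
    iface enp1s0f1.30 inet static
            address 10.10.10.10/24

    ifreload -a                             # apply with ifupdown2
    ceph -s                                 # wait for HEALTH_OK before the next node
    ceph osd unset noout                    # once all nodes are done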
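
For the mixed disk sizes in results 10-11: Ceph gives each OSD a CRUSH weight proportional to its size, so 480GB and 960GB OSDs can live in the same pool and the larger ones simply receive about twice as many placement groups; with 3 nodes and a size=3 pool, each replica still lands on a different host. A minimal sketch, assuming the new disks show up as hypothetical /dev/sdc and /dev/sdd:

    # on each node, once the new disks are installed
    pveceph osd create /dev/sdc
    pveceph osd create /dev/sdd

    ceph osd df tree        # weights should roughly match the disk sizes
    ceph -s                 # watch the rebalance finish

The usual caveat is that the bigger OSDs also take proportionally more of the I/O, so they tend to be the busier devices.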
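
On the 1% wearout in results 13 and 15: the Proxmox GUI derives wearout from the drive's SMART wear indicator, so 1% means roughly 1% of the rated endurance has been consumed, not that the drive is failing. A quick check on the same /dev/sda as in the excerpt:

    smartctl -A /dev/sda | grep -Ei 'wear|lbas_written'
    # On a Samsung PM883, attribute 177 Wear_Leveling_Count starts at a normalized
    # value of 100 and counts down; 99 corresponds to the ~1% shown in the GUI.
    # Attribute 241 Total_LBAs_Written (typically x 512 bytes) gives the data written
    # so far, which can be compared against the drive's rated TBW.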
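
For the empty /etc/pve in results 17-18: /etc/pve is not a normal directory but the pmxcfs cluster filesystem mounted by the pve-cluster service, so an empty /etc/pve usually means pve-cluster is not running or the node has lost quorum (often a corosync issue); the pve-ssl.key error is then just a symptom. A minimal sketch of the usual checks, under that assumption:

    systemctl status pve-cluster corosync       # is pmxcfs actually running?
    journalctl -b -u pve-cluster -u corosync | tail -n 50
    pvecm status                                # do we still have quorum?

    systemctl restart corosync pve-cluster      # after fixing the underlying issue
    pvecm updatecerts --force                   # regenerate node certs once /etc/pve is mounted again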
