Search results

  1. Ceph - Multiples OSD and POOLS

    You should switch it then, as I mentioned above; it's simple... Change ceph.conf and do: systemctl stop ceph\*.service ceph\*.target systemctl start ceph.target. The TP-Link T1700G-28TQ is what I have; I'm eyeing the cheap MikroTik 5-port SFP+ 10gbit switches as the next buy to tie the hobby room to my house.
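    Laid out on separate lines, the restart sequence quoted above (assuming the standard ceph.target/ceph\*.service units that Proxmox's Ceph packages install) is:

      # After editing /etc/ceph/ceph.conf, restart every Ceph daemon on the node
      systemctl stop ceph\*.service ceph\*.target
      systemctl start ceph.target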
  2. Ceph - Multiples OSD and POOLS

    - Mon, mgr and the Proxmox HTTPS GUI should all reside on network .10, i.e. the PUBLIC network of 1gb. - CLUSTER is your SAN where the OSDs speak to each other, as well as ring1 in corosync for HA migrations, on .7, i.e. the CLUSTER network of 10gb. But getting a switch WILL make life much easier for you...
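    A minimal ceph.conf sketch of that split (the subnets below are illustrative stand-ins for the post's .10 and .7 networks, not values from the thread):

      [global]
      public_network  = 192.168.10.0/24   # 1gb: mons, mgrs, clients, GUI
      cluster_network = 172.16.7.0/24     # 10gb: OSD replication/heartbeat traffic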
  3. [TUTORIAL] Fix for "perl: warning: Setting locale failed."

    https://github.com/fulgerul/ceph_proxmox_scripts/blob/master/new_node_install.sh
    TL;DR:
    sed -i 's/.*AcceptEnv LANG LC_\*.*/AcceptEnv LANG LC_PVE_* # Fix for perl: warning: Setting locale failed./' /etc/ssh/sshd_config
    service ssh reload
    # exit; Reconnect
  4. Ceph - Multiples OSD and POOLS

    The NUC (mon + mgr) is fine with one RJ45 jack; the 10gb ports are only needed if you put an OSD on the NUC.
  5. Ceph - Multiples OSD and POOLS

    You need 3 mons and 3 managers minimum for everything to work transparently. If you have 2 of each, like you have now, there will be no majority quorum when it's time to vote for a new mon/mgr leader. They all need to be on the same network; this can be achieved with VPN if the third node is...
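    The arithmetic: a majority of 3 is 2, so one monitor can fail and elections still succeed; a majority of 2 is 2, so a single failure stalls them. Quorum can be checked with the stock Ceph CLI from any mon node:

      ceph mon stat        # which mons exist and which are in quorum
      ceph quorum_status   # JSON detail, including the elected leader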
  6. Cluster node member - sudden reboot

    Hi, The corosync network is on the 10gb NIC with a 172.16 IP. The public network is the 1gb one on the 192.168 net. I had no idea that it was both normal and expected for a node to magically reboot itself? Because the way I see it, the network worked just fine; it was just corosync that messed up...
  7. Ceph - Multiples OSD and POOLS

    I hate to be that guy, but I told you so :D You never mentioned the NUC, so I am guessing this is the third node, yes? If yes, then a monitor will be needed on this one, which basically means that when one of the nodes (voters) goes down, the other 2 monitors can vote for a new monitor to be "leader"...
  8. Proof of concept at my workplace

    I understand. Since I am still being a bit cheap until my proxmox/ceph NAS is done and ready, I don't have a subscription (yet), so I won't be able to answer your question.
  9. Proof of concept at my workplace

    I've never done an offline install of .debs, but it sounds like you have a good grasp of that. Check the below. You need to swap out the Enterprise repo for the free one, like so: # Swap to free distroupgrade echo -e "deb http://ftp.se.debian.org/debian stretch main contrib\n\n# security updates\ndeb...
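    The echo above (truncated) writes the plain Debian stretch sources. For the Proxmox repo itself, a sketch of the swap on PVE 5.x / stretch (file names per the stock install; verify against your release):

      # Disable the enterprise repo
      sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
      # Add the free no-subscription repo
      echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
          > /etc/apt/sources.list.d/pve-no-subscription.list
      apt update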
  10. Cluster node member - sudden reboot

    I have split the cluster and public networks (1x 10gbit NIC and 1x 1gbit NIC). I don't necessarily see a forced reboot as something normal, though?
  11. Kernel v4.2.1 just became v5.0

    I really look forward to this!! "- A lot of x86_64 KVM work including STIBP support, Processor Tracing virtualization, new Intel Icelake CPU instruction set extensions support, and other work." "Intel VT-d Scalable Mode"
  12. unable to install: "no cdrom found"

    Hi, install Debian and then put the Proxmox packages on top, like so: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch
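    That wiki page boils down to roughly these steps (stretch-era repo and key names; check the page itself for the current ones):

      echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
          > /etc/apt/sources.list.d/pve-install-repo.list
      wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
          -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
      apt update && apt full-upgrade
      apt install proxmox-ve postfix open-iscsi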
  13. Cluster node member - sudden reboot

    That is strange, as the traffic to and from that node has worked according to the RRD graphs, and the other nodes on the same switch worked just fine. So where do I go from here? Should I replace my NIC?
  14. Ceph - Multiples OSD and POOLS

    It's simple, don't put the third node in the HA group! :)
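    A minimal sketch with the stock ha-manager CLI (group name, node names and VM ID are made-up placeholders):

      # Restrict HA resources to the two full nodes; the third node is simply not listed
      ha-manager groupadd ha_nodes --nodes node1,node2
      ha-manager add vm:100 --group ha_nodes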
  15. Ceph - Multiples OSD and POOLS

    Keep the same IP range, don't go wild on separate cluster and public networks, and you should be fine.
  16. Ceph - Multiples OSD and POOLS

    It is correct, it will work, no guarantees when it comes to the ceph replica x2 though! You have been warned! Don't forget to put in small OS SSDs for the 2 main hosts as well! Yes, this gives you about 50% of usable disk space, since replica x2 stores every object twice.
  17. Cluster node member - sudden reboot

    Hi, I have a node that all of a sudden began acting up; the memory pressure is high on all 3 nodes (90%+) since the newest Luminous, but I have 2 other identical nodes that don't have this issue and everything is working perfectly, which I attribute to Ceph reserving memory for future use(?). So I...
  18. Ceph - Multiples OSD and POOLS

    It is possible, yes; not recommended at all due to the above... but possible. Just remember to stick an SSD in that third node though, else the swappiness monster will eat that spinning rust :D
  19. Ceph - Multiples OSD and POOLS

    Answers inline. PS: HA transfer speeds were the best when I switched to SSDs for the Proxmox OS; the 10gb network upgrade did not even come close! DS.
  20. Create a ProxMox VM as an everyday personal computer?

    Read up on PCI passthrough to a VM on your cluster, then check out Parsec for streaming/gaming to said VM workstation!
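    The rough shape of that setup on Intel hardware (the PCI address and VM ID below are placeholders; the PVE PCI passthrough docs have the full checklist, including IOMMU group checks):

      # /etc/default/grub -- enable the IOMMU, then run update-grub and reboot
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
      # Hand the GPU at 01:00.0 to VM 100 (q35 machine type is needed for pcie=1)
      qm set 100 -machine q35 -hostpci0 01:00.0,pcie=1,x-vga=1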
