One of our Proxmox nodes has been running for 333 days without a reboot! It is getting to the point where a reboot is becoming increasingly necessary, but we are trying our best to hit the 365-day mark! :)
Just wanted to share with the community.
If I am understanding the scenario correctly, what you are trying to achieve is certainly possible. With a virtualized pfSense cluster and the use of Open vSwitch, this sort of setup is not a problem at all. I would implement VLANs all around. We have deployed a similar configuration to virtualize as much...
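For reference, a minimal Open vSwitch bridge with a tagged VLAN interface in /etc/network/interfaces looks roughly like this (bridge name, physical port, addresses, and the VLAN tag are placeholders for illustration, not taken from the post):

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan50

allow-vmbr0 vlan50
iface vlan50 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=50
    address 10.50.10.2
    netmask 255.255.255.0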
The ceph command Udo suggested is the best way to inject new configuration into a running Ceph cluster. It is also a great way to test many configurations and check performance. Once you are happy, though, you have to change the configuration in ceph.conf, because config injected using the command does not persist...
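The exact command isn't quoted above, but the common runtime-injection pattern is injectargs, which takes effect on all running OSDs immediately and is lost when the daemons restart:

# ceph tell osd.* injectargs '--osd_max_backfills 1'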
As Storm pointed out above, reducing max backfills and max active to 1 will eliminate that issue. It's not actually an issue as such, but too many backfill and recovery threads running at the same time cause massive read/write load on the Ceph cluster. If the network infrastructure, drives, CPU, and memory are not...
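To make those limits survive daemon restarts, the matching ceph.conf entries (using the conservative values discussed here) go in the [osd] section:

[osd]
    osd max backfills = 1
    osd recovery max active = 1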
Are you using variable RAM for the Windows VM? Memory ballooning in Windows causes higher memory usage than usual. Try setting a fixed memory size and see if it makes a difference.
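On the Proxmox side that means disabling the balloon device and pinning the memory, something like this (VMID 100 and 8192 MB are just example values):

# qm set 100 --balloon 0 --memory 8192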
Since you have not mentioned anything about RAM, I am assuming you have plenty, as ZFS is a memory-heavy system.
Your optimum really comes down to how much redundancy or data protection you want out of ZFS. RAID-Z1 tolerates one hard drive failure, whereas Z2 and Z3 tolerate two drives and three drives...
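For illustration, the layouts are created like this (pool name and disk names are placeholders):

# zpool create tank raidz1 sdb sdc sdd
# zpool create tank raidz2 sdb sdc sdd sde

The first pool survives one failed disk, the second survives two; raidz3 works the same way with one more parity disk.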
From your reply it seems like you have 2 clusters on 2 different machines. After you created the new cluster on proxmox1, you ran the following command:
proxmox1# pvecm add 172.28.64.16
But you should run this command from proxmox2, as follows:
proxmox2# pvecm add 172.28.64.16
Proxmox2 needs...
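Once the add succeeds, membership can be verified from either node:

# pvecm status
# pvecm nodes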
I agree with Fireon. I did notice HA performing better in the latest 4.4 than in previous versions. Although we are still testing in a lab environment, we are now more willing to put it into a production environment than before.
It's not a whole lot, but I am super excited to see the Ceph GUI inside the Proxmox GUI. It completely replaces the need for a third-party dashboard such as Ceph-Dash, which I have been using for a while now to visually monitor several Ceph clusters. Really great work, Proxmox!
I tried a similar setup once but it did not work for me. That was long ago, back in the days of Ceph Bobtail. CRUSH has come a very long way since then. Have you tried to simulate it in a virtual environment?
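You can also dry-run a CRUSH map offline before touching even a virtual cluster; roughly like this (the map file name, rule number, and replica count are just examples):

# ceph osd getcrushmap -o map.bin
# crushtool -i map.bin --test --show-mappings --rule 0 --num-rep 3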
Proxmox won't care much as long as it can write the virtual disk image to the Ceph storage. If you are doing it for learning purposes, though, a single-node Ceph cluster will not teach you what Ceph is all about. Yes, you can practice all the commands and such, but not the mechanics of Ceph. Also, on a single node...
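One practical note not quoted from the post: with the default CRUSH rule replicating across hosts, a single-node cluster will never report a healthy state unless you relax the failure domain to the OSD level before creating the cluster, e.g. in ceph.conf:

[global]
    osd crush chooseleaf type = 0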
I think he meant LXC containers instead of OpenVZ. I can confirm that a container does show the entire core count of the host and not the allocated core count of the container. For example, my host has 12 cores and I have assigned 1 core to a test LXC container. When I ran the following command from...
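The snippet is cut off, so the poster's exact command isn't shown, but the typical checks inside a container would be:

# nproc
# grep -c ^processor /proc/cpuinfo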
My Proxmox nodes use both IPv4 and IPv6, and both work fine. I am not sure when this issue started, but I am noticing that the apt-get update command produces an error saying it cannot reach the Debian repo over IPv6. But if I manually run the command as follows, which ignores IPv6, then there is no error:
# apt-get -o...
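(The option is cut off above; for reference, the usual way to force IPv4 in apt is:)

# apt-get -o Acquire::ForceIPv4=true update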
This is a usual scenario when you are recreating an OSD with the same ID. Did you have osd.1 in the cluster before?
The zap command prepares the disk itself, but it does not remove the old Ceph OSD folder. When you are removing an OSD, there are some steps that need to be followed, especially if you...
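For reference, the usual full removal sequence for osd.1 (the OSD number and data path follow the example above; adjust to your cluster) is:

# ceph osd out osd.1
# systemctl stop ceph-osd@1
# ceph osd crush remove osd.1
# ceph auth del osd.1
# ceph osd rm 1
# rm -rf /var/lib/ceph/osd/ceph-1

Skipping the crush/auth/rm steps or leaving the old folder behind is what causes the conflicts when the same ID is reused.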