You could use 'host' (which would also give you more performance) or, in the Proxmox 8.x series, the new default of 'x86-64-v2-AES', which gives you better compatibility with Proxmox features (like live migration) when you have different servers with different CPU generations...
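For reference, the CPU type is a per-VM setting; a hedged sketch of applying it (VMID 100 is a placeholder), either with `qm` from the node's shell or directly in the guest config:

```
# Hypothetical VMID 100 -- set the CPU type from the node's shell:
qm set 100 --cpu x86-64-v2-AES

# Equivalently, the resulting line in /etc/pve/qemu-server/100.conf:
cpu: x86-64-v2-AES
```

The change takes effect on the next full VM start, not on a reboot from inside the guest.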
Hey Folks,
I'm needing a bit of help here and if someone can point me in the right direction, it would be greatly appreciated.
What I'm trying to do is create a bash script which takes Proxmox credentials and a VMID and, from those, prints a URL whereby, when I click it, it will open up a...
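One hedged sketch of what such a script might print, assuming the stock Proxmox web UI: it opens a noVNC console for a KVM guest via a URL of roughly the shape below. The host, node name, and VMID here are all placeholders, and authentication would still happen in the browser session rather than via the URL:

```shell
#!/usr/bin/env bash
# All values are placeholders -- substitute your own Proxmox host, node name, and VMID.
PVE_HOST="pve.example.com"
PVE_NODE="pve1"
VMID="100"

# Shape of the URL the Proxmox web UI uses to open a noVNC console for a KVM guest.
console_url="https://${PVE_HOST}:8006/?console=kvm&novnc=1&vmid=${VMID}&node=${PVE_NODE}"
echo "${console_url}"
```

Embedding credentials in the URL itself is not something the web UI supports; the script would print the link and rely on an existing logged-in browser session (or an API ticket obtained separately).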
Hey folks,
When browsing this forum, I've seen a couple of different recommendations on which of these settings to choose (especially when it came to NFS Storage options).
I was wondering if anybody could educate us on which of these is best for the various use cases.
(and for personally...
1. Re: Samsung PM883 --> Thank you. I will look into this.
What about Samsung Pros?
2. "Set VM cache to none
We do this already
The question here is why are you using this? If you want to ensure that no data is lost in the event of a power outage, then both none and writeback...
It would be a pretty randomly used VM:
- Linux Web Server (httpd/nginx) application (Zabbix, nagios, nextcloud)
- DB (MySQL/PostGres/Mongo)
- Load Balancer (haproxy)
Hi @aaron
Thanks for your input. I had 3 questions for you though:
1. What do you mean "(.mgr can be ignored)"
2. Shouldn't the "target_ratio" be "1.0" given that it's exactly all the same hardware?
3. Because it's all the same hardware, it looks like I don't need to adjust the 'Autoscaler', but I...
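On the target_ratio point: if every pool should claim the same share of identical hardware, the autoscaler ratio can also be set from any node's shell; a hedged sketch (the pool name 'vm-pool' is a placeholder):

```
# Placeholder pool name; run from any node in the Ceph cluster.
ceph osd pool set vm-pool target_size_ratio 1.0

# Then check what the autoscaler makes of it:
ceph osd pool autoscale-status
```

Note the ratio is relative: a single pool with ratio 1.0 behaves the same as with ratio 0.5, since it is weighted against the other pools' ratios, not against an absolute total.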
@alexskysilk
Thanks for the resource. I will check it out.
I've got a Dell R620 with 8 x 1.2TB HDDs configured in RaidZ2.
This is the benchmark that I've done:
Command:
sync; dd if=/dev/zero of=tempfile bs=1M count=10240; sync
Results:
Proxmox Host: ZFS / = 1.5 GB/s
Proxmox VM on...
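A caveat when reading those numbers: `dd if=/dev/zero` writes all-zero data, and if ZFS compression is enabled (lz4 is a common default on Proxmox ZFS installs), runs of zeros compress to almost nothing, which alone can explain >1 GB/s from 8 HDDs in RAIDZ2. A quick small-scale sanity check of the same dd pattern (16 MiB here instead of 10 GiB, with `conv=fsync` so the data is actually flushed):

```shell
# Same dd sequence as the benchmark, scaled down; conv=fsync forces a flush
# before dd exits, so the timing includes the actual write.
sync
dd if=/dev/zero of=/tmp/dd_tempfile bs=1M count=16 conv=fsync 2>/dev/null
sync

# Confirm how many bytes actually landed in the file (16 MiB = 16777216 bytes).
bytes_written=$(wc -c < /tmp/dd_tempfile | tr -d ' ')
echo "${bytes_written}"
rm -f /tmp/dd_tempfile
```

For a more honest picture than sequential zeros, a tool like fio with `--direct=1` and a random (incompressible) data pattern is generally recommended.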
I just noticed this feature in Proxmox 8.1
Proxmox Node --> Ceph --> Pools
Select a Pool and click --> Edit
I'm guessing I could just do it from here on each node.
1 - Makes sense
2 - Would you say ZIL is better for VMs, as opposed to L2ARC?
3 - What about an Intel Optane as a ZFS ZIL cache when compared to hardware RAID?
Hey @jdancer
Thanks for this info!
A company I work for has a 3-Node CEPH Cluster already set up.
The configuration for each of those Servers:
2 x Intel CPU Gold
512GB RAM
IT Mode RAID Controller
2 x 256GB SSD Samsung 870 EVO (Proxmox
6 x 1TB SSD Samsung 870 EVO
2 x 10GB NICs (CEPH Public...
I was actually referring to what @UdoB was saying regarding RAID-10 vs Raid5/6 but with Zil + L2ARC... not with hardware raid.
To clear up my question:
If you are using an IT-mode flashed controller and not a battery-backed hardware RAID controller, can we still not get good performance with RAID 5 or...
Hey Folks,
I'm trying to get clear on something here with a possible setup...
Can we create a Proxmox Hyper-converged GlusterFS system on a system with hardware raid?
I'm thinking of the following scenario:
3 x Dell R720
PERC HARDWARE RAID
2x256 GB SSD -- Proxmox OS (Raid1)
6X1TB SSD --...
Hey Folks,
I'm trying to get clear on something here with a possible setup...
It sounds like, from what you described here, you can do a Proxmox Hyper-converged GlusterFS system on a system with hardware RAID. Is that correct?
I'm thinking of the following scenario:
3 x Dell R720
PERC...
Hi folks,
Thanks for everybody's contributions on CEPH and Proxmox so far.
I'm looking for some instructions:
I'm Running Proxmox 8.1 with Ceph 17.2.7 (Quincy) on a 3 Node Cluster.
Each node has 6 x 1TB Samsung 870 EVOs
Each Proxmox Node has a 1TB Samsung 980 Pro 1TB NVME with 6 x 40GB...