1. Yes
2. ZFS would still be my choice, but with the limited RAM you have, you would definitely need to cap your ARC size, because by default it can take up to 50% of your RAM (see the sketch below this list). Otherwise MD-RAID is one way to go.
3. Both are possible. I'm a fan of having my block devices only in the Proxmox host itself...
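As mentioned in point 2, a rough sketch of capping the ARC (the 2 GiB value below is only an example, pick what fits your RAM):
# /etc/modprobe.d/zfs.conf -- limit the ARC to 2 GiB (2 * 1024^3 bytes)
options zfs zfs_arc_max=2147483648
# apply at runtime without a reboot:
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
If your root is on ZFS, also run update-initramfs -u so the new value ends up in the initramfs.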
I do not think this is a security issue as long as access to the Proxmox nodes is restricted.
Anyway, I don't know which provider you have, but the switch on the WAN side should only send packets through one port anyway.
Anyone else: if I'm missing something here, feel free to correct me.
I have the same problem, with both of my MicroServer Gen10 boxes, with Proxmox Backup Server as well as with Proxmox VE.
I therefore assume it is a kernel problem and have opened an English thread about it...
Hello everyone,
I'm kind of lost here.
I'm having the same issue as described here in the German thread: https://forum.proxmox.com/threads/reboot-problem-hp-microserver-gen-10.100508/
Just as a bit of side information:
I've had PBS running on an HPe MicroServer for a while now. Last week on Friday I...
Can't you just set up a VM on your Proxmox host with Proxmox Backup Server and use the CIFS share as a datastore? That would solve the issue. I'm doing that with a Synology HA setup at a client's place too. Works fine.
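Inside the PBS VM that would roughly look like this (share path, mount point and datastore name are just placeholders):
# mount the CIFS share so the PBS 'backup' user can write to it
mount -t cifs //synology/pbs-share /mnt/pbs-datastore -o credentials=/root/.smbcredentials,uid=backup,gid=backup
# register the mounted directory as a datastore
proxmox-backup-manager datastore create synology-store /mnt/pbs-datastore
Put the mount into /etc/fstab (or a systemd mount unit) so it survives a reboot.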
I don't think it should cause any issues, but I would offline the disk first.
Just don't do anything to the healthy drive.
And I hope we're talking about RAID 1 here.
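How you offline it depends on what the mirror is built with; pool and device names below are only examples:
# ZFS mirror:
zpool offline POOLNAME /dev/disk/by-id/OLDDISK
# mdadm RAID 1:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1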
Yes, that is correct. It limits I/O bandwidth.
Do you use a separate network (VLAN?) for backups, or do you have the possibility to set one up?
If so, you could create a VLAN or another network, add the VLAN or network interface to Proxmox, and limit the bandwidth with tc or wondershaper for...
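For example, something like this (interface name and rate are made up, use your backup VLAN interface and whatever limit fits your line):
# hard-cap egress on the backup VLAN interface to 200 Mbit/s with tc
tc qdisc add dev vmbr0.20 root tbf rate 200mbit burst 32kbit latency 400ms
# remove the limit again
tc qdisc del dev vmbr0.20 root
# or roughly the same with (classic) wondershaper, values in kbit/s
wondershaper vmbr0.20 204800 204800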
Thanks @H4R0 for pointing that out, I didn't catch it.
Seems like they're 2.4 TB Seagate Exos SAS drives.
I'm taking a guess here by saying that those drives are usually pretty reliable, but even so a double disk failure, which would destroy the pool, can occur. A triple disk failure...
Just do:
ls -l /dev/disk/by-uuid/ | grep sdd
and you will see the UUID of the disk.
then run
zpool replace POOLNAME OLDDISK /dev/disk/by-uuid/NEWDISKUUID
like I said.
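Afterwards you can watch the resilver progress with:
zpool status -v POOLNAME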
yes
I'm using SSDs, 3 OSDs per server.
Exactly. I don't see a lot of traffic on my public network.
This is probably that: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
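In the end the split is just two subnets in /etc/pve/ceph.conf, something along these lines (the subnets are example values):
[global]
    public_network = 10.10.10.0/24
    cluster_network = 10.15.15.0/24
Replication and recovery traffic between the OSDs then uses the cluster network, while client and monitor traffic stays on the public one.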