Obviously, I meant "not in a sensible way" - at least for what the OP asked for, namely to increase his storage from 2x the size of one of his physical disks: using 5 disks either leaves one of them unused, thus not increasing the net size...
Hello @Falk R.,
to add to that, here is a quick "jump start" for @Martin.B. so that the test does not fail at the CLI hurdle, especially since iSCSI is already a given in the production environment.
For the Dell Compellent (SC Series) under Debian/Proxmox, it is usually enough to...
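The post is truncated here; as a rough sketch of the usual open-iscsi steps on Debian/Proxmox, where the portal IP and target IQN are placeholders, not values from the post:

# install the initiator and discover the targets offered by the Compellent portal (placeholder IP)
apt install open-iscsi
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
# log in to the discovered target (placeholder IQN) and make the session persistent across reboots
iscsiadm -m node -T iqn.2002-03.com.compellent:example-target -p 192.0.2.10 --login
iscsiadm -m node -T iqn.2002-03.com.compellent:example-target -p 192.0.2.10 --op update -n node.startup -v automatic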
A RAID mirror usually has the read and write performance of a single disk for a single thread; with more threads it can perform better, if the data is distributed over more than one disk per mirror side. A raidz1 can perform much better on...
Of course I know that you (and most users here in the forum) know the facts, but this sentence:
calls for "Mr. Obvious", stating "yes, you can!" ;-)
root@pnz:~# zpool create multimirror mirror sdc sdd sde sdf sdg
root@pnz:~# zpool status...
Hello @Falk R.,
you are absolutely right about that. Unfortunately, the Windows NFS client is notoriously inefficient (among other things because of its implementation of sync writes and locking) and is not suitable at all as a reference for the performance that you will later...
From all of my own experience: yes!
How large the effect is depends on your actual use case, of course. Writing (and reading) the actual data may be (nearly) as slow as before - but the metadata can be read and written much, much faster than...
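If this thread is about a dedicated metadata ("special") vdev, a minimal sketch of how one is typically added to an existing pool; the pool name and the device paths are assumptions, not taken from the thread:

# add a mirrored special vdev so metadata lands on fast NVMe devices
zpool add tank special mirror /dev/disk/by-id/nvme-EXAMPLE-1 /dev/disk/by-id/nvme-EXAMPLE-2
# optionally let small data blocks use the special vdev as well
zfs set special_small_blocks=32K tank

Note that data written before the special vdev was added stays where it is; only new writes benefit.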
Hi, from a quick look, the "Retransmit" messages may be a symptom of network stability issues (e.g. lost packets, increased latency, etc.) that are more likely to occur if corosync shares a physical network with other traffic types -- I'd expect...
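To narrow that down, the usual first checks are the following standard commands (nothing specific to this cluster assumed):

# show the state of each corosync/knet link per node
corosync-cfgtool -s
# show quorum and membership information for the cluster
pvecm status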
Hello everyone,
I'd like to expand my existing zpool, which currently consists of a single mirror vdev (mirror-0), by adding two more NVMe drives as a second mirror (mirror-1).
What's the best way to do this? Here's my zpool status:
pool: rpool
state: ONLINE
status: Some supported and requested features are not...
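The status output is cut off above; as a general sketch, a second mirror vdev is usually added like this (the device paths are placeholders, and on a boot pool like rpool it is worth doing a dry run first):

# dry run: -n only prints the resulting pool layout, nothing is changed
zpool add -n rpool mirror /dev/disk/by-id/nvme-EXAMPLE-3 /dev/disk/by-id/nvme-EXAMPLE-4
# if the preview shows the new mirror-1 vdev as intended, run it again without -n
zpool add rpool mirror /dev/disk/by-id/nvme-EXAMPLE-3 /dev/disk/by-id/nvme-EXAMPLE-4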
All working now!! Even with 10.4.1. It turned out there was more wrong with our physical network than we thought. During our own troubleshooting, we noticed that seemingly random subnets weren't being routed locally between each other, while others were.
I ended up tossing the idea of...
You can just add it to other NICs; it is recommended to use several: https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_redundancy ("Adding Redundant Links To An Existing Cluster")
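For reference, a sketch of what a redundant link ends up looking like in /etc/pve/corosync.conf; the node names and addresses are placeholders, and the full procedure (including bumping config_version) is in the wiki page above:

nodelist {
  node {
    name: nodeA
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.20.20.1   # added redundant link (link 1)
  }
  node {
    name: nodeB
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.20.20.2   # added redundant link (link 1)
  }
}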
For the VNET subnet you can use any /24 subnet in the range of 192.168.x.x as long as it does not overlap with the 192.168.178.0/24 subnet of your main network.
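For example, 192.168.200.0/24 would be fine. For illustration only, such a subnet ends up as an entry like the following in /etc/pve/sdn/subnets.cfg; the zone and VNet names are assumptions, and the file is normally managed via the GUI/API rather than edited by hand:

subnet: myzone-192.168.200.0-24
        vnet myvnet
        gateway 192.168.200.1
        snat 1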
Hi @SkyZoThreaD, you absolutely are not the only person experiencing this!
I created an account on this forum just to respond to this issue.
@Lukas Wagner I can verify exactly what was reported here; there absolutely is a problem with the Proxmox...
My initial idea for a workaround was to just use a veth pair to connect the parent bridge with the VNet bridge, instead of creating a VLAN subinterface like vmbr1.1234, which causes the problem in the first place. But there are issues with VLAN-aware...
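For illustration, a minimal sketch of that veth idea; the bridge names (vmbr1 as the VLAN-aware parent, myvnet as the VNet bridge) and VLAN tag 1234 are assumptions:

# create a veth pair, one end for each bridge
ip link add vethvnet0 type veth peer name vethvnet1
# attach one end to the parent bridge as an untagged port for VLAN 1234
ip link set vethvnet0 master vmbr1
bridge vlan add dev vethvnet0 vid 1234 pvid untagged
bridge vlan del dev vethvnet0 vid 1   # optionally drop the default VLAN 1 membership
# attach the other end to the VNet bridge and bring both ends up
ip link set vethvnet1 master myvnet
ip link set vethvnet0 up
ip link set vethvnet1 up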