Having iSCSI links separated is IDEAL, and works in most instances. iSCSI over LACP can, AT BEST, match the performance of separated links, but in practice even matching it is quite challenging.
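If it helps to picture the separated-link (MPIO) approach, the usual shape is below. This is only a sketch; the iface records, NIC names, and portal addresses are made up for illustration.

iscsiadm -m iface -I iscsi0 --op=new
iscsiadm -m iface -I iscsi0 --op=update -n iface.net_ifacename -v nic0
iscsiadm -m iface -I iscsi1 --op=new
iscsiadm -m iface -I iscsi1 --op=update -n iface.net_ifacename -v nic1
iscsiadm -m discovery -t sendtargets -p 10.0.10.10 -I iscsi0
iscsiadm -m discovery -t sendtargets -p 10.0.20.10 -I iscsi1
iscsiadm -m node --login
multipath -ll   # each LUN should show one path per NIC

Each NIC carries its own session, so multipath can balance and fail over on its own, instead of relying on the switch to hash flows sensibly, which is exactly where LACP struggles with iSCSI.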
Storage vendors rarely have any control over the end user...
Try not to filter unnecessarily; what you want to see is whether there are PARTITIONS on those drives.
The lsblk output will tell you. If you have partitions, it's just a race condition, which can be solved. If there aren't any, you're doing stuff beyond...
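For reference, this is the sort of check I mean (the device names are placeholders, substitute your own):

lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT /dev/sdb /dev/sdc
# children of TYPE "part" under a disk mean partitions exist;
# if the disks show only TYPE "disk", there is nothing for the ordering race to trip on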
Sure. The problem isn't with the networking per se; it's that you're shoving 3 VLANs into two NICs, 2 of which really want their own interface. This would be solved very simply if you just added another interface (or 2) for non-iSCSI traffic.
That...
It will. I've used this kind of config in the lab before. Like I said, it's not ideal for a production environment, but it can be done.
Edit: I'll explain the logic. nic1 has mac1, nic2 has mac2.
In the above config, either mac1 or mac2 is present in V0...
You can. It works. It's a bad idea, but it can be done. (As an aside, it's a bad idea on VMware too; iSCSI interfaces don't like to share the link.)
The interfaces would look like this:
iface nic0 inet manual
    mtu 9000
iface nic1 inet...
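To give a rough idea of the overall shape being described (3 VLANs over two NICs), a bonded layout could look like the following. This is purely illustrative and my own guess; the bond mode, VLAN IDs, and addresses are not from the original config.

auto bond0
iface bond0 inet manual
    bond-slaves nic0 nic1
    bond-mode 802.3ad
    mtu 9000

auto bond0.10
iface bond0.10 inet static
    # iSCSI VLAN (ID and address invented for the example)
    address 10.0.10.11/24
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    # bridge for management/VM traffic on another VLAN
    address 192.168.1.11/24
    bridge-ports bond0.20
    bridge-stp off
    bridge-fd 0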
well, you have quite the loadout.
Before we begin, make sure you have multipath-tools-boot installed; it is critical for the proper ordering of devices during boot.
Next, I assume the disks in question are the 600GB drives. Assuming you can...
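As a quick starting point, something like the below (Debian package names; the commands are generic and nothing here is specific to your exact drives):

apt install multipath-tools multipath-tools-boot
multipath -ll                 # the 600GB LUNs should show up with all of their paths
update-initramfs -u -k all    # rebuild the initramfs so multipath assembles during boot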
We're looking at it from opposite perspectives. The word "prefer" tells the story; to facilitate high load, you will need to do this.
The key remains that backup systems are essentially passive: all the work has to occur on the sender end. If...
This is completely backwards. PBS doesn't want anything. Load is generated by the sender and is limited by the host's storage and network connection. Without establishing what your load is, you're not "wasting" anything.
More to the point, the...
Why? Having PBS and its payload storage integrated into one device seems like a perfectly reasonable solution... Agreed that option 1 is a non-starter.
I stand corrected. Looks like you can either stick with the current deployment with no updates or support, or pay for AIStor. I know that's not helpful in your use case, but considering the non-commercial nature of what you provide, I'd be looking at...
By whom and for what purpose? It's actively developed, if that's what you mean, but Proxmox has no tooling or integrated support. I've run RGW on Proxmox in the past and the experience has been... mixed. Tooling is a problem and there are...
You hit the nail on the head of why LXC is itself not a dependable method of containerization. As an operator, you have a choice: lightweight but far more in need of maintenance (and not appropriate for high-security/multi-tenant applications), or...
It offers one big advantage though: it's supported by the developers and expected not to break after updates. While Docker inside LXCs is known for breakage and is thus not recommended in the documentation...
I find the inclusion of OCI containers a lazy first attempt. More to the point, it offers very little beyond running Docker in a container (which is fairly simple to do), but unlike that approach you don't have any granular control. Also, the need...