I'm successfully using Docker inside an unprivileged LXC on top of Ceph. Nothing too complex, and I've never had an issue. As I wrote in this thread some time ago, +1 for not running Docker directly on the Proxmox host.
I'm in the same situation, but my disks are nearly 90% full. I've got a 3-node cluster with 6x1TB disks on each node, and I have to replace one disk on each node with a 4TB disk.
Has anyone experienced removing a disk on a cluster that is 90% full, with the 'backfill_toofull' flag active for 4 PGs?
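For reference, the usual OSD-replacement sequence can be sketched as below. This is only a sketch, assuming the disk to replace hosts `osd.12` (a made-up ID); on a cluster this full I'd check `ceph osd df` first and wait for each step to settle before the next, since draining one OSD can push its neighbours over the backfillfull threshold.

```
# Sketch only -- osd.12 is a placeholder ID, adjust to your setup.
ceph osd df tree                  # check per-OSD fill before touching anything
ceph osd out osd.12               # start draining PGs off the disk
ceph -w                           # wait until the cluster is back to HEALTH_OK
systemctl stop ceph-osd@12        # then stop the daemon on the owning node
ceph osd purge osd.12 --yes-i-really-mean-it   # remove it from CRUSH, auth and the OSD map
```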
Hi, I had the same issue on a Dell R720.
I'm actually running Proxmox 7.
After some searching I found this:
https://community.mellanox.com/s/question/0D51T00006RVuXj/mlx4core-missing-uar-aborting
I can confirm it solved the issue.
For the details: I also had to disable ACPI, so here is my /etc/default/grub...
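For anyone landing here, the relevant grub line for disabling ACPI might look like the sketch below (combine it with whatever kernel parameter the linked Mellanox thread suggests for the mlx4_core UAR error; only `acpi=off` is what the post above actually describes):

```
# /etc/default/grub (excerpt) -- "acpi=off" is the ACPI-disable mentioned above
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi=off"
```

After editing, run `update-grub` and reboot for the change to take effect.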
It may sound stupid, but in my opinion, if you don't want to put your hands in the cron files, just set up multiple backup jobs in the GUI, each one at the hour you need ;)
Oh yes, I'll blacklist everything except what I want to be part of the multipath.
In the Proxmox GUI the sd* devices are visible under Disks. Is there a way to prevent the Proxmox GUI from seeing the disks exposed by the HBA, and make it show only the mapper device?
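The "blacklist everything except what I want" approach could be sketched like this in multipath.conf. The WWID is a placeholder; you can read the real one with `/lib/udev/scsi_id -g -u -d /dev/sdX`:

```
# /etc/multipath.conf (sketch) -- blacklist all devices, then whitelist by WWID
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "3600507680c80839d0000000000000001"   # placeholder: your LUN's WWID
}
```

Note this only controls which devices multipathd claims; whether the GUI still lists the underlying sd* paths is a separate question.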
This is the answer I was searching for. Have you tested a configuration with thin-LVM on top of a SCSI FC disk with multipath, mounted on a single node only?
Why are you saying there's no cluster? I think I can put two nodes with no shared storage into a cluster... as if they only had local disks.
If...
My answer was incomplete, sorry, I made it a bit confusing. I'll try to explain it better.
I know that mounting a regular file system on two systems will lead to corruption.
What I'm evaluating as an alternative to LVM is mounting one LUN on one node and another LUN on the other node. Using the LUN...
:) Not yet, I'm working on it right now.
I'm trying to figure out how to implement that setup.
I have a twin-blade setup in a BladeCenter, backed by a Storwize 7000, FC-attached through an HBA adapter.
Multipath is already set up and seems to be working like a charm. What are the next steps? Which is my...
Hi, I think the only way to achieve your goals is to mount these storages, use them as directory storage, and tick the "Shared" checkbox.
This way you can migrate and have HA between the two nodes.
As for snapshots, you can get them by storing the VM disks as qcow2 files.
I'm not really sure about...
Do thin provisioning and snapshots work properly on top of an iSCSI or SCSI storage?
I remember reading something about it not being possible to activate snapshots on a thin-LVM built on top of a shared storage...
Is there any news about this? These days I'm implementing a two-node Proxmox...
Another solution is to partition, format, and mount the LUN, then use it like local storage for qcow2 or raw files; in that case flag it as shared. I think the Proxmox cluster will manage concurrent access to the LUNs.
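A shared directory storage as described above could be declared like this (a sketch; `san-dir` and the mount point are made-up names, and the LUN must already be mounted at the same path on both nodes):

```
# /etc/pve/storage.cfg (excerpt) -- directory storage marked as shared
dir: san-dir
    path /mnt/san-lun1
    content images
    shared 1
```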
Same for me: it works if I remove the path_selector "round-robin 0", and it also works with find_multipaths yes. It seems the only way to get round-robin working is to set find_multipaths to no.
Is it normal that, even in round-robin, the "multipath -ll" command still indicates for one device...
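For clarity, the combination being discussed would look roughly like this in multipath.conf (a sketch of the settings named above, not a recommended config; validate against your array's documentation):

```
# /etc/multipath.conf (excerpt) -- round-robin appears to require
# find_multipaths disabled, per the observation above
defaults {
    find_multipaths no
    path_selector   "round-robin 0"
}
```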
Hi, just to report on the procedure.
After mounting the remote storage through sshfs, you need to pass qm importdisk the descriptor of the disk, not the flat file itself.
As an example, importing from a VMware host:
mounting remote...
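The full sequence could be sketched as follows. The host, datastore path, VMID 100, and the `local-lvm` storage name are all placeholders for illustration:

```
# Mount the ESXi datastore over sshfs (paths are placeholders)
mkdir -p /mnt/esxi
sshfs root@esxi-host:/vmfs/volumes/datastore1 /mnt/esxi

# Pass the descriptor (.vmdk), NOT the -flat.vmdk data file:
qm importdisk 100 /mnt/esxi/myvm/myvm.vmdk local-lvm --format qcow2
```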
Personally, I would NEVER let someone put their hands "programmatically" (the Docker approach) on my Proxmox host just to manage their Docker services!
The right solution is instead to contain Docker itself, or a Docker orchestration product, inside a fence. Security needs resources.
In my opinion...