The problem seems to be solved, but I didn't have much to do with it.
The last update that Synology released for the UC3200 broke the non-transparent bridge linking the two controllers, which caused the UC3200 to reboot itself regularly. Synology had to have one of their techs...
Those are my thoughts exactly. This config ain't rocket science. Even if I was doing something crazy and causing an issue, that should be my problem, not a problem with the storage device. Synology's been quite responsive overall, but getting into a real technical discussion seems...
Just wondering if anybody's got any ideas here. We've got a 3 node Proxmox cluster, with a Synology UC3200 as shared storage. The UC3200 has dual controllers in an active-active SAN configuration, so if one controller craps out, the workload won't be interrupted. The nodes and the UC3200 are all...
I appreciate your help. Thank you.
There's just so many moving parts to this project, and for every thing that I know, there's like 15 things that I don't know, so there's a lot of learning as I go.
Thanks for the clarification. Am I to assume that my little experiment to try and create shared storage didn't actually damage anything then?
And yes, the LVM is thick-provisioned, so no issues there.
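For anyone reading along, a quick way to confirm whether a VG is thick or thin provisioned (the VG name SynSSD is just taken from later in this thread; substitute your own):

# Thin pools show 't' and thin volumes 'V' in the first lv_attr column,
# and the Pool column is empty for plain thick/linear LVs
lvs -o lv_name,vg_name,lv_attr,pool_lv SynSSD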
If I had to guess, it has something to do with my iSCSI/MPIO config, and the PV, VG, and LV created on top of that.
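For context, a stack like that is usually built on top of the multipath maps along these lines. This is only a sketch, using the device and VG names that show up elsewhere in this thread (SSDmpath0, SynSSD); the storage ID SSD-LVM is made up:

# LVM physical volume directly on the multipath device
pvcreate /dev/mapper/SSDmpath0

# Volume group on top of it
vgcreate SynSSD /dev/mapper/SSDmpath0

# Register the VG with Proxmox as shared LVM storage; every node must
# see the same multipath device for this to be safe
pvesm add lvm SSD-LVM --vgname SynSSD --shared 1 --content images,rootdir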
/dev/mapper/HDDmpath0 or /dev/mapper/SSDmpath0 seem to be the block devices you're thinking of...I think those are the physical volumes. mpath0-VGSSD or...
I didn't create any LVM slice or anything. I just navigated to /dev/mpath0-VGSSD, saw that's where my VM files were, created a /sharedstorage folder there, then in the GUI, I went to Datacenter > Storage, and added a Directory for disk images and container templates, pointing to the...
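For reference, the CLI equivalent of that GUI step is roughly the following. The path and storage ID here are hypothetical, since the actual mount point is cut off above, and a plain Directory storage sitting on a shared LVM volume is only visible to the node that has it mounted, which matches the "it wasn't sharing" behaviour described elsewhere in this thread:

# Hypothetical storage ID and path, just to show the shape of the command
pvesm add dir sharedstorage --path /mnt/sharedstorage --content iso,vztmpl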
Yes, the mpath storage is external storage, connecting to each node with multipath iSCSI connections.
I didn't do a whole lot when trying to access this storage in the way I described...pretty much just tried it, saw it wasn't sharing in the way that I wanted, and reversed everything I did. The...
Setting up NFS locally on one of the nodes makes no sense if I'm trying to make this storage independent of any particular node, and setting it up externally makes no sense either, since I'm trying to migrate stuff onto our cluster, not off of it.
Just to make sure I've got this straight...I already have storage that's...
As a final update, I never ended up figuring out either the cause of this or a solution to it.
I just nuked all my network storage and started over. Not really an ideal solution in a production environment, but what can ya do..
I'm trying to figure out how to simply create a shared storage location so ISO or container templates can be accessed from any node in the cluster.
I've got SSD (/dev/mpath0-VGSSD) and HDD (/dev/mpath0-VGHDD) storage added to the cluster...what do I need to do to actually use some of that...
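In case it helps, the quickest way to see what content types each configured storage will actually accept (and whether it's active on the node) is:

# One line per configured storage, with type, status, and usage
pvesm status

# The cluster-wide definitions, including the 'content' line that controls
# whether ISOs and container templates are allowed on a given storage
cat /etc/pve/storage.cfg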
No. I restarted the multipath daemon and SSDmpath0 reappeared, but everything that relied on it is still broken.
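For anyone hitting something similar, the sequence I'd expect after the map comes back is roughly this (map and VG names taken from this thread):

# Confirm the multipath map is back with both paths active
systemctl restart multipathd
multipath -ll SSDmpath0

# Once the device exists again, LVM usually needs a rescan and
# reactivation before anything layered on top of it reappears
pvscan --cache
vgchange -ay SynSSD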
If I do a vgcfgrestore --list SynSSD, it'll show a handful of restore points.
If I then do vgcfgrestore SynSSD --test, it'll tell me this:
TEST MODE: Metadata will NOT be updated and...
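For the record, the usual vgcfgrestore workflow looks something like this. The archive file name below is made up; use one of the restore points shown by --list:

# List available metadata archives for the VG
vgcfgrestore --list SynSSD

# Dry run against a specific archive file first
vgcfgrestore --test -f /etc/lvm/archive/SynSSD_00042-1234567890.vg SynSSD

# If the test output looks sane, run it for real and reactivate the VG
vgcfgrestore -f /etc/lvm/archive/SynSSD_00042-1234567890.vg SynSSD
vgchange -ay SynSSD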
lsscsi
[0:2:0:0] disk DELL PERC H730P Mini 4.30 /dev/sda
[11:0:0:1] disk SYNOLOGY Storage 4.0 /dev/sdb
[12:0:0:1] disk SYNOLOGY Storage 4.0 /dev/sdc
[13:0:0:1] disk SYNOLOGY Storage 4.0 /dev/sdd
[14:0:0:1] disk SYNOLOGY Storage...
I set up the iSCSI connections with the Proxmox GUI, and then followed https://pve.proxmox.com/wiki/ISCSI_Multipath to set up multipath
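For completeness, the multipath.conf that the wiki walks you through ends up looking roughly like this. Treat it as a sketch: the WWIDs are placeholders (yours come from something like /lib/udev/scsi_id -g -u -d /dev/sdb), and the aliases just match the map names mentioned in this thread:

# /etc/multipath.conf -- sketch only, placeholder WWIDs
defaults {
    polling_interval        2
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    uid_attribute           ID_SERIAL
    failback                immediate
    no_path_retry           queue
    user_friendly_names     yes
}

blacklist {
    wwid ".*"
}

blacklist_exceptions {
    wwid "36001405aaaaaaaaaaaaaaaaaaaaaaaaa"
    wwid "36001405bbbbbbbbbbbbbbbbbbbbbbbbb"
}

multipaths {
    multipath {
        wwid  "36001405aaaaaaaaaaaaaaaaaaaaaaaaa"
        alias SSDmpath0
    }
    multipath {
        wwid  "36001405bbbbbbbbbbbbbbbbbbbbbbbbb"
        alias HDDmpath0
    }
}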
Yup
yup, it responds to pings just fine
Yup. I've got 4 sessions in total: 2 for my SSD multipath connection, and 2 for my HDD multipath connection. I hadn't...
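In case it's useful, checking those sessions from the node side is just:

# One line per session; with two portals per target there should be
# two entries for the SSD target and two for the HDD target
iscsiadm -m session

# More detail, including which /dev/sdX device each session produced
iscsiadm -m session -P 3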
I have a couple clustered Proxmox nodes that are utilizing shared storage. The shared storage uses LVM on an iSCSI LUN, connecting with a couple of multipath iSCSI connections, and it's all been working fine for the last couple months...
I noticed that the LVM was showing a question mark on the...
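For anyone else who ends up with a storage showing a question mark, this is the rough checklist I'd run through first (nothing official, just the obvious layers from top to bottom):

# Can Proxmox activate the storage at all?
pvesm status

# Are the multipath maps and their underlying paths healthy?
multipath -ll

# Does LVM still see the PV and VG on top of them?
pvs
vgs

# Any iSCSI, multipath, or I/O errors logged around the time it vanished?
journalctl -b | grep -iE 'iscsi|multipath|i/o error'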
oof, yeah, that solves it, but it creates another problem. I was originally using the 10G interface for VMs to access the network...changing the subnet drops VM connectivity. I didn't think it would be an issue to just have migration traffic use that interface as well...
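For reference, the setting being discussed is the cluster-wide migration network. In /etc/pve/datacenter.cfg it's a single line along these lines (the subnet here is a placeholder), and it only steers migration traffic, not the bridge the VMs themselves use:

migration: secure,network=10.10.10.0/24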