[SOLVED] Replication with different target storage name

I need this as well.
Is there maybe a way to file a more official feature request?
I think it's fair to assume by now that this thread isn't read by staff members any more.

Cheers!
 
I am downsizing some machines from systems with a dedicated, independent ZFS datastore on separate disks to small systems where only "rpool" is available. Some form of node-specific "aliasing" that maps the cluster-wide storage "ds0" to something like "rpool/ds0" on those nodes would really be helpful, roughly as sketched below.
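
To make the idea concrete, here is a rough sketch. The zfspool/pool/content/sparse/nodes keys are existing /etc/pve/storage.cfg syntax; the node names "bignode" and "smallnode" are just examples:

    zfspool: ds0
            pool ds0
            content images,rootdir
            sparse 1
            nodes bignode,smallnode

The requested feature would then amount to some kind of per-node override along the lines of

            pool-override smallnode=rpool/ds0

where "pool-override" is a made-up option name that does not exist in any current Proxmox VE release. Today the pool property is defined once in the cluster-wide storage.cfg and applies to every node the storage is enabled on, which is exactly why replication currently requires the target node to have a pool/dataset with the same path as the source.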

So +1 from me...
 
Count me in too. It has been nearly 6 years now since the first request for this feature. Storage in clusters does not always have the same layout and/or names. You may (I do) have small, expensive NVMe disks for special needs plus big, cheap HDD storage for bulk data in one node of your cluster, and only HDDs in another node where you do not need as much disk speed. Or, as mentioned before, a big, cheap node used only for DR purposes.

Not having this feature is a big headache.

Thanks for all your work.

Best regards
 
Yes, this is a planned feature, but currently not available.
Any way this could be bumped up the feature list? It seems to me to be a very common setup to have a primary PVE server with failover to lesser hardware, different disks, or any other configuration with different pool names. I certainly have failover hardware that is underutilized because of this.
 
Oh man, I just realised how long this has been an issue... I would love to see this feature happen.
 