It's POSSIBLE to do what you ask, but I wouldn't, because Ceph is pretty sensitive, and trying to keep part of the cluster on one central configuration AND the rest synchronized is quite failure prone.
This wouldn't work anyway; you need the same...
I see.
You don't really need to reflash your R700; just use it as hardware RAID and put btrfs on top. That way you can use inline compression, which is perfectly adequate for non-production/backup storage.
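A minimal sketch of what that looks like, assuming the RAID volume shows up as /dev/sdb and you want it mounted at /mnt/backup (both are placeholders, adjust for your setup):

mkfs.btrfs /dev/sdb
mkdir -p /mnt/backup
mount -o compress=zstd /dev/sdb /mnt/backup

compress=zstd is a sane default; put the same option in /etc/fstab so it survives reboots.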
What's autopve?
How often do you install PVE?
Can you describe the problem/issue you are trying to solve without discussing toolset? I don't think I can answer, as your feature description is too vague.
PVE is open source. It's your lane if you want...
It can. It's just a generic LSI RAID controller, but you lose all integrated firmware control (which is probably fine for ZFS use, just sayin'.)
OP, what I didn't see in your post is WHY. What is the pain point you are trying to solve, or is this just...
You will lose PG coherency on some PGs, and lose entire PGs outright for others.
If you no longer have at LEAST one OSD that contained a shard for that PG, you will need to destroy and recreate that PG manually.
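Very roughly, and only if you've accepted that the data in that PG is gone, it looks something like this (1.2f is just a placeholder PG id; pull the real ones from ceph health detail, and note the exact commands/flags vary a bit by Ceph release):

ceph pg 1.2f mark_unfound_lost delete
ceph osd force-create-pg 1.2f --yes-i-really-mean-it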
Proxmox uses generic Ceph; there is no "other" version.
"copy redundancy" # availability. there is a limit to how much time I want to spend on this subject. I'd suggest you read and understand what ceph is, how it works, and why the limitations...
The simplest way to accomplish what you're after is to install Samba on PVE, like so:
apt install samba
You can then define /mnt/pve/recovery (or whatever your mountpoint is) as an SMB share and access it from your Windows machine using...
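A minimal share definition in /etc/samba/smb.conf would look roughly like this (the share name and user are just examples, adjust to taste):

[recovery]
   path = /mnt/pve/recovery
   browseable = yes
   read only = no
   valid users = root

Then set a Samba password and restart the service: smbpasswd -a root followed by systemctl restart smbd. From Windows you'd reach it as \\<pve-ip>\recovery.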
Storage isn't just about keeping your files; it's about availability. A 3-host Ceph cluster has no resilience to speak of. What happens when your cluster shuts off write access in the middle of the day and you don't know how to fix it? Is there...
Yes. This isn't news or a mystery.
ZFS has many OTHER advantages over LVM on block. Performance is only one factor; if the ZFS subsystem is providing SUFFICIENT performance for your application, it is by far preferable. You might want to...
Yes. Your USB enclosure doesn't do what you think it does.
Just because the marketing says it's "USB 3.2 10Gbit blah blah blah" doesn't mean that your host port can do it, that the cable can, that the bridge chip can, or that the SATA multiplexer...
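If you want to see what the enclosure actually negotiated, run this on the PVE host and look at the speed on the device's line (10000M means it really linked at 10Gbit; 5000M or 480M means it didn't):

lsusb -t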
If the payload is on another disk (or RAID volume) then you don't need to do anything special. Copy all your vmid.conf files and reinstall PVE. Once it's installed, import your existing data stores and put the vmid.conf files back.
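A rough sketch of the copy/restore, assuming VM configs only and a USB stick mounted at /mnt/usb (container configs live under /etc/pve/lxc/ instead; storage names and node name need to match the old install for the configs to just work):

# before reinstall: save the guest configs
cp /etc/pve/qemu-server/*.conf /mnt/usb/
# after reinstall and re-adding your storage:
cp /mnt/usb/*.conf /etc/pve/qemu-server/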
OK, so you have two options.
First, you don't need a working Python environment to edit /etc/network/interfaces and insert a stanza for vmbr0 (a minimal example is below).
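Something like this, with your real NIC name and addressing substituted in (eno1 and the 192.168.1.x addresses are just placeholders):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

Apply it with ifreload -a or a reboot.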
But in your case, let's think through what you're actually able to accomplish. The most important...