This is a guide on online.net's forums.
https://forum.online.net/index.php?/topic/5380-configuring-ipv6-in-proxmox-on-dedibox-from-onlinenet/
Seems to be what you're looking for.
PS: never used it, never tried IPv6 with Proxmox (for anything serious), will not claim responsibility for it to be a...
The question still remaining open:
What is the best option for assigning storage space to a virtual machine running BTRFS and used 100% as backup space?
I'd like to know which is faster (read/write performance) with Proxmox AND does not have a 99.5% chance of leading to data loss (like the HBA...
This made me wonder if I just kept feeding a specific species of internet user.
True, for Cache that is.
But a Ceph cache tier is not just a cache; it's also tiered storage that automatically gets utilised based on the rules you have set up, without the need to ever do manual assignment of...
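For reference, the broad strokes of wiring one up look roughly like this (pool names and thresholds are placeholders, not a recommendation):

ceph osd tier add cold-pool hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay cold-pool hot-cache
ceph osd pool set hot-cache hit_set_type bloom
ceph osd pool set hot-cache target_max_bytes 500000000000
ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
ceph osd pool set hot-cache cache_target_full_ratio 0.8

The pool settings at the end are the rules that drive the automatic promotion/flushing, instead of any manual assignment.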
You can create your own Custom Group names with custom group settings.
As for Ceph: storage (defined via Datacenter > Storage) works; the Ceph tab afaik only works for the initial root user.
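On the CLI that is roughly the following (group name, path and role are only an example; the GUI under Datacenter > Permissions does the same thing):

pveum groupadd storage-admins -comment "may manage storage"
pveum aclmod /storage -group storage-admins -role PVEDatastoreAdmin
pveum usermod someuser@pve -group storage-admins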
I have done this once on a personal project with Proxmox 3.4; I'm not really 100% sure anymore how I did it...
If you do not have any backups and no desire to ever use the VMs again, do the following:
nano /etc/pve/qemu-server/<vmID>.conf
or
nano /etc/pve/nodes/<NodeName>/qemu-server/<vmID>.conf
and remove the line for the offending vDisk,
then Ctrl+X, Y to save.
Now you can remove the VM from your node(s)
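For illustration (VM ID, storage name and size are made up), the line you delete looks something like:

scsi0: local-lvm:vm-100-disk-1,size=32G

Removing the VM itself afterwards can be done from the GUI or with qm destroy <vmID>.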
The main advantages of Raid setups are:
- No rebalance during a single disk failure (you can get away with slower link speeds)
- You can reduce the replication count to achieve the same number of copies (again, lower link-speed requirements; see the example below)
- Less overhead on the Mons due to fewer OSDs to...
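For example, with RAID under every OSD you might run 2 copies instead of 3 (pool name is a placeholder, and whether 2 copies is enough is your own risk call):

ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 1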
Correct, the Ceph mons you want to have on SSD drives.
The last time I used Proxmox without an SSD drive was back in 2013, right around the time 3.0 came out. I used it on an OVH server with 5 HDDs (one for the OS, 4 in Raid-10 for VMs). I had issues with IO wait and switched to a new...
Okay, so you are trying to back up 700 GB of data per node via a 125 MB/s connection. That should take, best case, around 95 minutes (700 GB ÷ 125 MB/s ≈ 5,600-5,700 seconds, not counting overhead).
How long does it ACTUALLY take?
Do you back up all VMs with a single backup rule?
Or do you back up every VM with a different backup rule at...
Yes and No.
Yes, VM storage and a general-purpose NAS are completely different, and they typically do not mix.
No, because once you use a cache tier you can have both speed and large capacity on the same hardware.
True, with an erasure-coded pool I'm...
Ceph just loves to do stuff in parallel. That's where it excels.
If performance is not even a secondary concern (which it does not sound like it is), then a single 1G link per node will be challenging but doable. 4x1G (dedicated to Ceph, openvswitch balance-tcp) I have been told (never tried that...
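For reference, an openvswitch balance-tcp bond in /etc/network/interfaces looks roughly like this (interface names, the bridge and the addressing are placeholders; as said, I have not benchmarked this myself, and balance-tcp needs LACP on the switch):

auto bond0
iface bond0 inet manual
    ovs_bonds eth0 eth1 eth2 eth3
    ovs_type OVSBond
    ovs_bridge vmbr2
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr2
iface vmbr2 inet static
    address 10.10.10.11
    netmask 255.255.255.0
    ovs_type OVSBridge
    ovs_ports bond0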
Your CPU, RAM and disks are sufficient for the NAS.
How much backup data do you generate during your backup cycle?
How many nodes are in your Proxmox cluster? So I can roughly grasp the total backup amount during your backup window.
Do you do backups from all nodes at the same time, or do you...
I have 64 GB ECC installed at the moment. That said, should I use the SSD as a log device then, or not use it at all for the ZFS pool? I guess I can scrap the log stripe then too and use it for Rockstor.
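If I do add it as a log device, that seems to be a one-liner and removable again later, so not a permanent commitment (pool and device names below are placeholders):

zpool add tank log /dev/disk/by-id/ata-Some_SSD
zpool remove tank /dev/disk/by-id/ata-Some_SSD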
There are 2 "problems".
The basic question was the ZFS setup, seeing what drives I have...
We are operating around 105 Proxmox/Ceph nodes at work (5040x HDD + 840x NVMe Samsung 950 Pro), so it's not just my view, it's my business's view :p
If you use SSD pools, 4x1G is stretching it (a single link can only handle 125 MB/s, less than your SSD can produce). 1x 10G might be doable. (5x2x 500...
You "mount" ceph pools via (k)rdb.
Ceph is not a file-System, Its a Block Device / Object storage. You DO NOT want to poke inside it (unless its last effort rescue attempt, cause someone royally screwed up (inwhich case you use "rados")
There is a File-system available for CEPH, called CephFS...
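A quick sketch of what the krbd route looks like (pool/image names and the size are placeholders):

rbd create backup-pool/backup-img --size 102400
rbd map backup-pool/backup-img   # shows up as e.g. /dev/rbd0
mkfs.btrfs /dev/rbd0
mount /dev/rbd0 /mnt/backup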
Some Background:
This is going to be my 6th home-lab Proxmox node.
I have a 3-node Proxmox+Ceph cluster that houses every single critical service I operate.
I have a single-node 24-disk Proxmox+Ceph "cluster" I use for media storage + backups + surveillance, basically a giant 60 TB node...
The latest discussions can be found:
Here https://forum.proxmox.com/threads/moving-to-lxc-is-a-mistake.25603/
And here https://forum.proxmox.com/threads/when-will-3-x-become-eol.25852/
So when you do backups it goes like this?
VM-Storage-Server(s) <-> NFS <-> Proxmox-Node <-> Vzdump <-> NFS <-> Opendedup <-> Backup-Server?
You say you use 4x1G - what config are you using? If bonded, which type?
Have you checked to see what your VM storage servers' storage subsystem is doing...
How "Fast" is considered "fast" by you ? can you give us an idea of how many MB/s you are currently writing Backups at to your FreeNas ?
Which servers ? Proxmox or Freenas ?
Or do you have FreeNas on Proxmox as a VM ?
If this is the case, can you tell us how your Storage-Subsystem works ...
Can you post /etc/hosts of one of the nodes?
Check if this fixes it for you:
https://forum.proxmox.com/threads/how-can-i-set-migrate-interface.25340/#post-127789
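If I remember right, the gist of that thread is that migration connects to the target node by hostname, so in /etc/hosts you point the other nodes' names at their IPs on the network you want migration to use. Names and subnet below are just an example:

10.10.10.12  pve2.lan pve2
10.10.10.13  pve3.lan pve3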