Search results

  1. ipv6 to KVM

    This is a guide on online.net's forums: https://forum.online.net/index.php?/topic/5380-configuring-ipv6-in-proxmox-on-dedibox-from-onlinenet/ Seems to be what you're looking for. PS: never used it, never tried IPv6 with Proxmox (for anything serious), will not claim responsibility for it to be a...
  2. Proxmox and jumboframes

    We use jumbo frames with Proxmox Open vSwitch-based bridges: https://pve.proxmox.com/wiki/Open_vSwitch Works great.
  3. Need advice on ZFS / BTRFS Disk allocation within proxmox

    Question still remaining open: what is the best option to assign storage space to a virtual machine running BTRFS and used 100% as backup space? I'd like to know which is faster (read/write performance) with Proxmox AND does not have a 99.5% chance of leading to data loss (like the HBA...
  4. Need advice on ZFS / BTRFS Disk allocation within proxmox

    This made me wonder if I just kept feeding a specific species of internet user. True, for cache that is. But a Ceph cache tier is not just cache, it's also tiered storage that automatically gets utilisation based on the rules you have set up, without the need to ever do manual assignment of...
  5. User management

    You can create your own custom group names with custom group settings. As for Ceph, storage (defined via Datacenter > Storage) works; the Ceph tab AFAIK only works via the "initial root" user. I have done this once on a personal project with Proxmox 3.4, not really 100% sure anymore how I did it...
  6. How to remove failed disk and associated VM's?

    If you do not have any backups and no desire to use the VMs ever again, do the following: nano /etc/pve/qemu-server/<vmID>.conf or nano /etc/pve/nodes/<NodeName>/qemu-server/<vmID>.conf and remove the offending vDisk, then Ctrl+X, Y. Now you can remove the VM from your node(s).
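A sketch of the edit described above, done non-interactively on a scratch copy of a VM config; the VM ID 100, the disk key "scsi1", and the config contents are illustrative assumptions, not the poster's actual setup:

```shell
# Work on a scratch copy so nothing real is touched; in practice the file
# would be /etc/pve/qemu-server/<vmID>.conf on the node.
CONF=$(mktemp)
printf '%s\n' \
  'bootdisk: scsi0' \
  'scsi0: local-lvm:vm-100-disk-0,size=32G' \
  'scsi1: local-lvm:vm-100-disk-1,size=100G' > "$CONF"

# Same effect as deleting the offending vDisk line in nano:
sed -i '/^scsi1:/d' "$CONF"

grep -c '^scsi' "$CONF"   # -> 1 (only the healthy disk remains)
```

Once the dead disk reference is out of the config, the VM can be removed from the node as usual.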
  7. Understanding Ceph

    The main advantages of RAID setups are: - no rebalance during a single disk failure (you can get away with slower link speeds) - you can reduce the replication count to achieve the same number of copies (again, lower link-speed requirements) - less overhead on the mons due to fewer OSDs to...
  8. Evaluation Regarding Proxmox, Storage Models, and Deployment

    Correct, you want the Ceph mons on SSD drives. The last time I used Proxmox without an SSD drive was back in 2013, right around the time 3.0 came out. I used it on an OVH server with 5 HDDs (one for the OS, 4 in RAID-10 for VMs). I had issues with IO-wait and switched to a new...
  9. glusterfs, ceph or freenas

    Okay, so you are trying to back up 700 GB of data per node via a 125 MB/s connection. That should take, best case, 95 minutes (not counting overhead). How long does it ACTUALLY take? Do you back up all VMs with a single backup rule? Or do you back up every VM with a different backup rule at...
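The 95-minute figure can be sanity-checked with shell arithmetic, assuming the 700 GB is read as GiB and 125 MB/s is the practical ceiling of a single 1 GbE link:

```shell
# Best-case time to push 700 GiB through one ~125 MB/s (1 GbE) link,
# ignoring protocol overhead and source-disk speed.
GIB=700
LINK_MBS=125
MINUTES=$(( GIB * 1024 / LINK_MBS / 60 ))
echo "${MINUTES} minutes best case"   # -> 95 minutes best case
```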
  10. Need advice on ZFS / BTRFS Disk allocation within proxmox

    Yes and no. Yes, VM storage and a general-purpose NAS are completely different, and they typically do not mix. No, as once you use a cache tier you can have both speed and large capacity on the same hardware. True, with an erasure-coded pool I'm...
  11. Understanding Ceph

    Ceph just loves to do stuff in parallel; that's where it excels. If performance is not even a secondary concern (which it does not sound like it is), then a single 1G link per node will be challenging but doable. 4x1G (dedicated to Ceph - openvswitch balance-tcp) I have been told (never tried that...
  12. glusterfs, ceph or freenas

    Your CPU, RAM and disks are sufficient for the NAS. How much backup data do you generate during your backup cycle? How many nodes are in your Proxmox cluster? So I can roughly grasp the total backup amount during your backup window. Do you do backups from all nodes at the same time, or do you...
  13. Need advice on ZFS / BTRFS Disk allocation within proxmox

    I have 64 GB ECC installed at the moment. That said, should I use the SSD as a log device then, or not use it at all for the ZFS pool? I guess I can scrap the log stripe then too and use it for Rockstor. There are 2 "problems". The basic question was the ZFS setup, seeing what drives I have...
  14. Understanding Ceph

    We are operating around 105 Proxmox/Ceph nodes at work (5040x HDD + 840x NVMe Samsung 950 Pro), not just my view, my business's view :p If you use SSD pools, 4x1G is stretching it (a single link can only handle 125 MB/s - less than your SSD can produce). 1x 10G might be doable. (5x2x 500...
  15. Understanding Ceph

    You "mount" Ceph pools via (k)rbd. Ceph is not a file system; it's a block device / object store. You DO NOT want to poke inside it (unless it's a last-effort rescue attempt because someone royally screwed up, in which case you use "rados"). There is a file system available for Ceph, called CephFS...
  16. Need advice on ZFS / BTRFS Disk allocation within proxmox

    Some background: this is going to be my 6th home-lab Proxmox node. I have a 3-node Proxmox+Ceph cluster that houses every single critical service I operate. I have a single-node 24-disk Proxmox+Ceph "cluster" I use for media storage + backups + surveillance, basically a giant 60 TB node...
  17. Understanding Proxmox 3.4 EOL and 4.0

    Latest discussions can be found here: https://forum.proxmox.com/threads/moving-to-lxc-is-a-mistake.25603/ and here: https://forum.proxmox.com/threads/when-will-3-x-become-eol.25852/
  18. Backups break virtual machines

    So when you do backups it goes like this? VM storage server(s) <-> NFS <-> Proxmox node <-> vzdump <-> NFS <-> Opendedup <-> backup server? You say you use 4x1G - what config are you using? If bonded, which type? Have you checked to see what your VM storage servers' storage subsystem is doing...
  19. glusterfs, ceph or freenas

    How "fast" is considered "fast" by you? Can you give us an idea of how many MB/s you are currently writing backups at to your FreeNAS? Which servers? Proxmox or FreeNAS? Or do you have FreeNAS on Proxmox as a VM? If this is the case, can you tell us how your storage subsystem works...
  20. Proxmox cluster don't want to use LAN IPs

    Can you post /etc/hosts of one of the nodes? Check if this fixes it for you: https://forum.proxmox.com/threads/how-can-i-set-migrate-interface.25340/#post-127789
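For context, a minimal sketch of what the poster is asking to see; the node names and LAN addresses are assumptions for illustration. The gist of the linked thread is that each node's hostname should resolve to the LAN IP you want cluster traffic to use, not to a loopback entry like 127.0.1.1:

```shell
# Hypothetical /etc/hosts for a two-node cluster on a 192.168.1.0/24 LAN,
# built in a scratch file so the example is harmless to run.
HOSTS=$(mktemp)
printf '%s\n' \
  '127.0.0.1    localhost' \
  '192.168.1.10 pve1.example.local pve1' \
  '192.168.1.11 pve2.example.local pve2' > "$HOSTS"

# pve1 must resolve to its LAN address, not a loopback entry:
grep -E '^192\.168\.1\.10[[:space:]]+pve1' "$HOSTS"
```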