Search results

  1. Free space

    Using the root pool is a bad practice, and you can set it to not mount (-O canmount=off). Having it mounted at /rpool could lead to problems. For example, if you want to send rpool somewhere, you can't do it safely, because receiving a full filesystem stream destroys the one already in...
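
    A minimal sketch of that suggestion, assuming the pool is named rpool as in this thread (the property can also be changed after pool creation):

      zfs set canmount=off rpool     # stop mounting the pool's root dataset
      zfs get canmount rpool         # verify; canmount is not inherited by child datasets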
  2. Free space

    Thank you. As far as I know, using the "root pool" is a very bad practice, and I see that Proxmox is mounting it: # zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 480G 353G 140K /rpool Is it safe to unmount? Why are you mounting that?
  3. Free space

    I don't use LVM but ZFS.
  4. Free space

    I have the following: # df -h Filesystem Size Used Avail Use% Mounted on udev 63G 0 63G 0% /dev tmpfs 13G 1.1G 12G 9% /run rpool/ROOT/pve-1 357G 3.1G 354G 1% / tmpfs 63G 37M 63G 1% /dev/shm tmpfs 5.0M 0...
  5. Roadmap

    Is it possible to have the upcoming roadmap updated? https://pve.proxmox.com/wiki/Roadmap#Roadmap
  6. Installing from USB Drive issues

    It's a shame that PVE still requires a dedicated USB drive and can't be placed on the same USB drive with tons of other ISO images. It's a known bug, 18 months old. I've also seen a bugzilla entry about this.
  7. Migrate VM from Xen to Proxmox

    If your VMs are PV, yes, you have to install grub (if not already present), because PV VMs don't have a bootloader (the boot process is handled directly by PyGrub on the Xen host). Usually, all Debian VMs still have a proper bootloader installed and I don't have to install anything (except virtio...
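
    A minimal sketch of what that grub installation might look like inside a Debian PV guest before migration; the disk name /dev/xvda is an assumption, adjust it to the guest's actual root disk:

      apt-get install grub-pc    # pull in the GRUB bootloader package
      grub-install /dev/xvda     # install GRUB to the guest's boot disk (device name is an assumption)
      update-grub                # generate /boot/grub/grub.cfg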
  8. Migrate VM from Xen to Proxmox

    The faster way is to use my conversion script: https://github.com/guestisp/xen-to-pve
  9. Meltdown and Spectre Linux Kernel fixes

    One question: should I use the "intel-microcode" package coming from Debian, or does Proxmox provide its own?
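
    For reference, a minimal sketch of installing the Debian-provided package on the Debian Stretch base of PVE 5.x, assuming the contrib and non-free components are not yet enabled:

      echo "deb http://deb.debian.org/debian stretch contrib non-free" >> /etc/apt/sources.list
      apt-get update
      apt-get install intel-microcode    # microcode updates are applied via the early initramfs on the next boot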
  10. HW raid-6 to ZFS Raid1

    AFAIK, any mirror (RAID1, RAID10, ...) is much faster than any parity RAID. So a 3-way mirror should be better than a RAID-Z2 (and also cheaper, as I need one disk less).
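
    A minimal sketch of the two layouts being compared, with a hypothetical pool name and disk names (the two commands are alternatives, not a sequence):

      zpool create tank mirror sda sdb sdc        # 3-way mirror: 3 disks, usable capacity of 1 disk
      zpool create tank raidz2 sda sdb sdc sdd    # RAID-Z2: 4 disks, usable capacity of 2 disks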
  11. HW raid-6 to ZFS Raid1

    Using 4x2TB disks in a RAID10 is not an issue, but I really hate any 2-way mirror (and a RAID10 has two 2-way mirrors inside). I had multiple full data losses when using mirrors; now all my servers are on RAID-6 or at least 3-way mirrors. That's why I've talked about a 3-way mirror.
  12. HW raid-6 to ZFS Raid1

    My use case is web hosting VMs. There is almost zero sequential write, as in any web hosting VM. More or less, I have to migrate 4-5 VMs with about 400-500 sites each; that's why an L2ARC could be useful, where ZFS will store the most-read files from the VMs (if ZFS is able to cache VM blocks when...
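
    A minimal sketch of attaching an L2ARC device to an existing pool, with hypothetical pool and device names:

      zpool add tank cache nvme0n1    # add a fast device as L2ARC (read cache)
      zpool iostat -v tank            # the device now shows up under a separate "cache" section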
  13. HW raid-6 to ZFS Raid1

    I don't have fio installed on XenServer and I prefer not to install additional software on this junk environment. I have "sysstat", thus I can provide you with "iostat" output.
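
    A typical sysstat invocation for this kind of check, as a sketch:

      iostat -xm 5    # extended per-device statistics in MB/s, refreshed every 5 seconds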
  14. HW raid-6 to ZFS Raid1

    I'm asking because I don't have a new server to benchmark.
  15. Big proxmox installations

    Based on Gluster's release schedule, you'd better use the vendor repository rather than waiting for Proxmox. Gluster is updated almost every month or two.
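
    A rough sketch of pointing apt at the upstream Gluster repository instead; the URL path is an assumption and depends on the Gluster version and Debian release you target:

      # the path below is illustrative only, check download.gluster.org for the real one
      echo "deb https://download.gluster.org/pub/gluster/glusterfs/<version>/LATEST/Debian/<release>/amd64/apt <release> main" > /etc/apt/sources.list.d/gluster.list
      apt-get update && apt-get install glusterfs-server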
  16. Big proxmox installations

    No one tried LizardFS?
  17. Big proxmox installations

    So you are putting OSDs and MONs on the same server. Interesting. If I understood properly: 3 Ceph servers for both OSDs and MONs, 2x10Gb for redundancy with public and private networks on the same link, and Proxmox is connected to these 3 servers via a 10Gb link (also used for the LAN). How much RAM on the Ceph...
  18. Big proxmox installations

    Could you describe your Ceph environments? How many servers, how many switches, and so on. 10GbE? That's not exactly right. Gluster works, but from what I can see on the dev mailing list, there isn't a real roadmap to follow; every release adds tons of features, most of the time with tons of bugs...
  19. Proxmox VE 5.2 released!

    Yes, that works. What is not working is setting xterm.js as the default, so that it is also opened when accessing the "Console" tab.
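
    If I recall correctly, the datacenter-wide default console viewer can be set in /etc/pve/datacenter.cfg; a minimal sketch, assuming that option is available in this release:

      echo "console: xtermjs" >> /etc/pve/datacenter.cfg    # make xterm.js the default console viewer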