Search results

  1. Proxmox VE Ceph Benchmark 2018/02

    No redundancy on the install disk. What I'm looking for is: two small disks in a ZFS mirror for Proxmox, and the other disks for ZFS/Ceph.
  2. Proxmox VE Ceph Benchmark 2018/02

    Thank you very much! I'll try this method.
  3. Proxmox VE Ceph Benchmark 2018/02

    I've been looking for this solution for about a year but never understood how to do it, so I've made a RAID 0 of every single disk. Can you please explain how you did it? Thanks
  4. Proxmox VE Ceph Benchmark 2018/02

    If you put your P420 in HBA mode, what are you booting from? Where did you install Proxmox? (The P420 in HBA mode cannot boot from it.)
  5. ZFS SSD Pool with NVMe Cache (ZIL & L2ARC)

    Is this a problem of write amplification being handled better, or just because enterprise SSDs have more write endurance? If it's the second case, that is not a solution; it's like putting a bigger fuel tank in a car because the engine is leaking fuel onto the floor.
  6. VM gone after joining cluster

    https://pve.proxmox.com/wiki/Cluster_Manager is clear that a node should be empty before adding it to a cluster. Quoting the wiki page: A new node cannot hold any VMs, because you would get conflicts about identical VM IDs. Also, all existing configuration in /etc/pve is overwritten when...
  7. ZFS SSD Pool with NVMe Cache (ZIL & L2ARC)

    No official communication from the Proxmox team as far as I know. But based on the posts mentioned in my previous message and the tests I made myself, the problem exists.
  8. ZFS SSD Pool with NVMe Cache (ZIL & L2ARC)

    The problem is real. https://forum.proxmox.com/threads/proxmox-4-x-is-killing-my-ssds.29732/ https://forum.proxmox.com/threads/high-ssd-wear-after-a-few-days.24840/ https://forum.proxmox.com/threads/zfs-with-ssds-am-i-asking-for-a-headache-in-the-near-future.25967/ The answer is to disable the...
  9. ZFS SSD Pool with NVMe Cache (ZIL & L2ARC)

    Be careful installing on SSDs. The Proxmox cluster service writes every 4 seconds. I made a test server with 4x500GB SSDs in RAIDZ1, no VMs running, just Proxmox, and over one week it wrote 50GB/day to every disk. So until this problem is fixed, I usually configure 2 SATA disks for Proxmox and...
  10. Before buying some servers, thoughts ?

    Did you follow some guide? Because I've googled but I can't find how to make the USB drive.
  11. Before buying some servers, thoughts ?

    No no, I mean 50 reboots across every test I've made trying to start from USB. So, if possible, I'm interested in the procedure to make the USB drive. As of now I've created a RAID 0 for every single disk, and it is not the best solution.
  12. Before buying some servers, thoughts ?

    I've tried this solution for 3 days and something like 50 reboots, so please tell us how to do this ;)
  13. Before buying some servers, thoughts ?

    I've played with some DL360 Gen8s with the P420i; you can force it into HBA mode from a CLI. Starting with SPP 2017.04, hold CTRL+ALT and type the sequence x+d+b: a CLI appears and you can force it to HBA. The problem is that it is not able to boot from any of the ports if you set it...
  14. "pmxcfs" writing to disk all the time...

    So, in 5.2 this problem is still there. Is there any solution other than disabling pve-ha-lrm/pve-ha-crm?
  15. Install on non bootable disk (USB /SD boot)

    Does nobody have any suggestions? For the moment I'm testing setting every disk up as its own single-disk RAID 0.
  16. Install on non bootable disk (USB /SD boot)

    Hi, I have a server with these specs: HP DL360 Gen8 with an HP SmartArray P420i and 4x300GB disks. I've figured out how to set the controller into HBA mode; the problem is that after install it cannot boot, as the P420i cannot boot when set to HBA. But I can boot from USB and go through the whole install process and...
  17. Proxmox and non-enterprise SSDs

    I can confirm. Installing on an SSD will kill it if you do not stop and disable the two pve-ha services. The solution is installing onto a mirror of small mechanical disks. It's a pity, because this way you waste two disk bays. Writing the cluster status to RAM on every node could be a good approach.
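    For reference, a minimal sketch of stopping and disabling those two HA services (assuming systemd on Proxmox 5.x, and that you are not actually using the Proxmox HA stack):

    ```shell
    # Stop and disable the two HA services whose periodic state updates
    # drive the constant pmxcfs writes. Only do this if you do not use
    # the Proxmox HA features.
    systemctl stop pve-ha-lrm pve-ha-crm
    systemctl disable pve-ha-lrm pve-ha-crm

    # Verify they are no longer running
    systemctl status pve-ha-lrm pve-ha-crm --no-pager
    ```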
  18. ZFS backup solution with email report

    Hi, I need some advice on what you are using for backing up ZFS storage. I'm using znapzend for snapshotting local storage and sending it to an external USB drive, and it's working fine, but it gives no report if the snapshot on USB fails. I've made a simple script that checks the last snapshot date on USB, but I...
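    The script itself wasn't posted, but a minimal sketch of that kind of check could look like this. The pool name, recipient address, and age threshold are placeholders, and it assumes a working `mail` command:

    ```shell
    #!/bin/sh
    # Alert if the newest snapshot on the USB backup pool is missing or
    # too old. Pool name, recipient, and threshold are placeholders.
    POOL="usbbackup"
    ALERT="admin@example.com"
    MAX_AGE=86400  # one day, in seconds

    # is_stale NEWEST NOW MAX_AGE
    # Succeeds (exit 0) when NEWEST is empty or older than MAX_AGE seconds.
    is_stale() {
        [ -z "$1" ] || [ $(( $2 - $1 )) -gt "$3" ]
    }

    # Guarded so the script is harmless on machines without ZFS installed.
    if command -v zfs >/dev/null 2>&1; then
        # Newest snapshot creation time on the pool, as epoch seconds:
        # -p = parseable numbers, -S creation = sort newest first.
        newest=$(zfs list -H -p -t snapshot -o creation -S creation -r "$POOL" | head -n 1)
        if is_stale "$newest" "$(date +%s)" "$MAX_AGE"; then
            echo "Newest snapshot on $POOL is missing or older than 24h" \
                | mail -s "znapzend backup alert on $(hostname)" "$ALERT"
        fi
    fi
    ```

    Run from cron once a day; it only sends mail when something is wrong, so silence means the last snapshot is recent.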
  19. znapzend backup generator script

    How are you monitoring your backups? I've looked around but found nothing about it. An email alert if the job fails would be very useful.
  20. HP Gen9 ZFS Boot Smartarray 440ar

    I've figured out how to install GRUB on an EFI partition on a USB drive. The system boots to the USB drive containing the EFI partition, and now the system starts. Thanks for your help.
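    For anyone finding this later, a hedged sketch of that approach. The device name is a placeholder; double-check it before running, since `mkfs` is destructive, and this assumes the USB stick's first partition will serve as the EFI System Partition:

    ```shell
    # /dev/sdX is a PLACEHOLDER for the USB stick - verify it first!
    mkfs.vfat -F32 /dev/sdX1           # format the ESP as FAT32
    mkdir -p /boot/efi
    mount /dev/sdX1 /boot/efi

    # Install GRUB into the ESP; --removable writes the fallback path
    # EFI/BOOT/BOOTX64.EFI so firmware can boot the stick without
    # needing an NVRAM boot entry.
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable

    update-grub                        # regenerate grub.cfg
    ```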