Search results

  1.

    Some Newbie questions....!

    With RAID10 ( mirror + mirror ), first you lose 50% of the space with 4 disks. RAID10 also uses two disks at the same time for each WRITE request, but read performance grows. With RAIDZ ( one disk for parity, 3 disks for data, same as RAID5 ) you get 3 disks of space, but random write...
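    The space math above can be sketched quickly (assuming 4 disks of 2 TB each; the sizes are made up for illustration):

```shell
disks=4       # number of disks in the pool
size_tb=2     # capacity of each disk, in TB

# RAID10: half of the raw capacity goes to mirroring
raid10=$(( disks * size_tb / 2 ))

# RAIDZ1 (RAID5-like): one disk's worth of capacity goes to parity
raidz1=$(( (disks - 1) * size_tb ))

echo "RAID10 usable: ${raid10} TB"
echo "RAIDZ1 usable: ${raidz1} TB"
```

    So RAIDZ1 gives more usable space, while RAID10 trades capacity for write performance.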
  2.

    Multi Ceph Cluster or Pool from Specific OSD

    Hi All, I have many different types of disks on my servers. I used GlusterFS in the past and will continue with it anyway... But I want to separate some of my disks for a Ceph test, and I want to create two pools from some of my SSD ( NVMe ) and SAS disks... For Ceph, multiple clusters are possible, but I think we cannot manage them on...
  3.

    Proxmox Backup Solution from Bacula Systems

    So you are searching for an application-aware backup system...
  4.

    [SOLVED] high swap usage and a way to add swap.

    cat /etc/ksmtuned.conf
    # Configuration file for ksmtuned.
    # How long ksmtuned should sleep between tuning adjustments
    # KSM_MONITOR_INTERVAL=60
    # Millisecond sleep between ksm scans for 16Gb server.
    # Smaller servers sleep more, bigger sleep less.
    # KSM_SLEEP_MSEC=100
    # KSM_NPAGES_BOOST=1000...
  5.

    Proxmox Backup Solution from Bacula Systems

    Proxmox already has a powerful backup system; why are you searching for a new one? What is different?
  6.

    [SOLVED] high swap usage and a way to add swap.

    You can check the host with "journalctl -xe" and with "dmesg -wH", if the kernel does not tell you anything. I experienced this problem; memory usage will be normal. I want to ask you one question: do you use KSM? Proxmox already comes with a memory deduplication feature, but you can also grow...
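    The KSM state the post refers to can be inspected through sysfs (a sketch; it assumes a kernel built with CONFIG_KSM, as Proxmox kernels are):

```shell
# KSM exposes its state and counters under /sys/kernel/mm/ksm
ksm_dir=/sys/kernel/mm/ksm
if [ -d "$ksm_dir" ]; then
    # run: 0 = stopped, 1 = running
    # pages_shared / pages_sharing show how much deduplication is happening
    for f in run pages_shared pages_sharing; do
        [ -r "$ksm_dir/$f" ] && echo "$f: $(cat "$ksm_dir/$f")"
    done
else
    echo "KSM not available on this kernel"
fi
```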
  7.

    [SOLVED] Drive/RAID Configuration for Proxmox - Advice/Guidance

    wipefs removes all filesystem signatures and the GPT partition table from the disk.. If that device had another disk signature, maybe that is why Proxmox could not add it...
  8.

    [SOLVED] Drive/RAID Configuration for Proxmox - Advice/Guidance

    Could you try on the shell: "mkfs.ext4 /dev/sdc", "wipefs -a /dev/sdc"? That seems like a read-only RAID pool...
  9.

    Write speeds fall off/VM lag during file copies ZFS

    A block-based system is always best. I know, because the operating system or application can manage the block size and the disk filesystem, which is a very useful feature... Also, I have used and will keep using block-based storage in any professional project with a SAN device, because on that project block switch...
  10.

    Write speeds fall off/VM lag during file copies ZFS

    I never said anything like that; a 20GB ARC is big enough, and do not use L2ARC.. About the LAG issue and IO delay: can you check your server with "atop 1" ( when you see the atop screen, press Shift+C to watch CPU activity ) and "iotop -P -d 1"? Then you will see which process is creating IO on your CPU. Also I suggest...
  11.

    [SOLVED] Drive/RAID Configuration for Proxmox - Advice/Guidance

    Brother, in computer systems "free" is never free; every system has a cost. For example, with the old EXT4 and XFS systems people think the RAM cost is zero, but it is not: they use RAM as cache and buffer, and use even more RAM than ZFS and BTRFS. Also, ZFS and BTRFS have an inline ( on-the-fly ) compression system...
  12.

    [SOLVED] Drive/RAID Configuration for Proxmox - Advice/Guidance

    Build your RAID with the RAID card, format the RAID pool with BTRFS, and activate compression on that pool. With that setup the RAM cost is zero, but BTRFS has a CPU IO cost. So in computer systems everything has a different kind of cost: one wants dedicated RAM, another creates CPU IO...
  13.

    Write speeds fall off/VM lag during file copies ZFS

    In that test, 512MB/s of incompressible data will cycle through the cache; if your disk cannot read and write 256MB/s of data per second, the cache will be full in a short time, and the test will continue for 120 seconds. Also, I gave that test to exercise the LOG disk, because that test will fill the ARC very...
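    A test along the lines the post describes could look like this fio invocation (a sketch, not the poster's exact command; the filename, size, and flag choices are assumptions):

```shell
# 120 seconds of incompressible sequential writes, so compression
# cannot absorb the data and the ARC / LOG device fills up.
fio --name=arcfill --filename=/tank/fio.test --size=8G \
    --rw=write --bs=1M --ioengine=libaio \
    --refill_buffers --buffer_compress_percentage=0 \
    --time_based --runtime=120
```

    Remember to remove /tank/fio.test afterwards.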
  14.

    HA storage for web cluster on PVE/Ceph

    GlusterFS + NFS-Ganesha + VRRP on your network :)
  15.

    Using 2 storages and 6 nodes.

    That is your choice :) I love GlusterFS because it is file based, so repair or file recovery is very easy :).
  16.

    HA storage for web cluster on PVE/Ceph

    Why do you not continue with NFS? Because of IO? If yes, grow your "RPCNFSDCOUNT"
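    On Debian-based hosts (including Proxmox), the kernel NFS server thread count is a config fragment along these lines (a sketch; the default of 8 threads and the file path can vary by distro and NFS implementation):

```shell
# /etc/default/nfs-kernel-server
# Raise the number of kernel NFS server threads, e.g. from the default 8 to 16:
RPCNFSDCOUNT=16
# Apply with: systemctl restart nfs-kernel-server
```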
  17.

    HA storage for web cluster on PVE/Ceph

    I think I misunderstood you. First question: will all your web servers work active+active, am I right? If yes, you need a cluster-aware system on your web servers. Ceph, GlusterFS, OCFS2, NFS etc... all of them are cluster-aware systems.. The Proxmox Ceph solution is for virtualization, meaning your QCOW...
  18.

    ZFS backups

    The Proxmox backup system can work with snapshots, so a backup does not interrupt your guest...
  19.

    Using 2 storages and 6 nodes.

    GlusterFS supports many different setups; 1. If you build your GlusterFS system in 2-replica mode, that means two servers will work like RAID1, so whatever you have on RAID1, GlusterFS will support. For any kind of split-brain problem GlusterFS has an arbiter system, and the arbiter cost is...
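    The 2-replica + arbiter layout the post describes can be sketched as a GlusterFS volume like this (hostnames and brick paths are made up; an arbiter brick stores only metadata, so it breaks split-brain ties cheaply):

```shell
# Two full data replicas plus one arbiter brick (metadata only)
gluster volume create webvol replica 3 arbiter 1 \
    srv1:/bricks/webvol srv2:/bricks/webvol arb1:/bricks/webvol
gluster volume start webvol
```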