Search results

  1.

    Proxmox cluster + ceph recommendations

    Hi a4t3rburn3r DRBD and LINSTOR are a good alternative to Ceph; we are going to be testing this solution soon. There are a few other posters on here testing DRBD as well. There are modules available for DRBD that hook into Proxmox and allow for HA failover too, so it's a good alternative...
  2.

    Shared ceph storage on a 5 node setup

    Hi rodrigobaldasso Just to clarify: if you are only looking to deploy a single server then just use local storage in a RAID config; otherwise it's a messy setup and more to troubleshoot. Ceph is replicated object storage designed to be used across multiple nodes for both file and block storage...
  3.

    Backups

    Hi RCD when adding a new disk to a VM via PVE it won't be seen within the VM OS until you log into the VM OS via SSH or console > partition the new drive > format it with the appropriate file system > mount it so it is available for use. Some basic Linux skills will be...
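    A minimal sketch of those in-VM steps, assuming the new disk appears as /dev/sdb and an ext4 filesystem is wanted (device name and mount point are placeholders):

    ```bash
    lsblk                                  # identify the new, unpartitioned disk (assumed /dev/sdb)
    parted /dev/sdb --script mklabel gpt mkpart primary ext4 0% 100%
    mkfs.ext4 /dev/sdb1                    # format the new partition
    mkdir -p /mnt/data                     # placeholder mount point
    mount /dev/sdb1 /mnt/data              # mount it so it is available for use
    # Add an /etc/fstab entry to make the mount persist across reboots.
    ```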
  4.

    3par Multipath iscsi best settings

    Hi Dazza76 You may wish to disable “delayed ACK”. Make sure you are using jumbo frames (MTU 9000) on the NIC ports (host and SAN) and the switch. How’s the setup going? We are investigating attaching a 3Par, so it would be great to hear any feedback. “”Cheers G
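    A sketch of the jumbo-frame part, assuming the iSCSI-facing interface is eth0 (a placeholder; the SAN ports and switch must be set to match):

    ```bash
    ip link set dev eth0 mtu 9000   # enable jumbo frames on the host NIC
    # Verify end to end with a do-not-fragment ping sized for MTU 9000:
    # 8972-byte payload = 9000 - 20 (IP header) - 8 (ICMP header).
    ping -M do -s 8972 <SAN-IP>
    ```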
  5.

    Hi Area TIC How’s the cluster going? We are considering adding a 3Par. May I ask what 3Par model you are using? Have you configured multipath on the iSCSI initiator? Have you done any initiator tuning with delayed ACK etc.? How are you finding performance? Are you just using LVM and ext4 on...
  6.

    3-node Proxmox/Ceph cluster - how to automatically distribute VMs among nodes?

    Hi tokala Just had a look at your mini projects; they look pretty interesting. Are you officially contributing to Proxmox or are these just independent projects? ""Cheers G
  7.

    VM CPU settings

    Hi TheJM no offence taken :) hope it all comes together for you. ""Cheers G
  8.

    Scaling beyond single server. Suggestion wanted.

    Hi dsh we've been running SSDs in an enterprise production environment under above-average loads and the drives have really been great so far. We don't use SATA, only SAS eMLC and MLC drives, in a hardware equivalent of RAID 6. With the DRBD replication between hosts this is another mirror of the...
  9.

    Scaling beyond single server. Suggestion wanted.

    Forgot to mention that ZFS will slow things down and chew up more memory for little benefit. Personally, the fewer layers to deal with the better. How’s the testing going? “”Cheers G
  10.

    VM CPU settings

    It’s up to you: if you wish to assign more than the number of real cores and sockets, go for it. Just remember it will not perform optimally and you’ll see a lot of high CPU and what look like stuck processes. One more time: hyperthreading is multiple paths to the same CPU core, 2 data paths to 1 core, so in...
  11.

    [SOLVED] VM Crashing

    That’s awesome news! Wonder if this can be marked as resolved? “”Cheers G
  12.

    VM CPU settings

    Hi TheJM I'm from a VMware background so I can help with their best practice for cores and sockets. A thread is not a core; it's just another data path to a core. VMware best practice is an equal number of sockets and real physical cores (not threads). If you only have 1 socket then the config...
  13.

    [SOLVED] VM Crashing

    Hi Alemiz also change the current 1 socket / 8 cores to 1 CPU socket with 6 cores total, or 2 sockets with 6 total (3 x 2); the VM will perform better once it's up and running. Let us know how you go with the network and VIRTIO storage changes. ""Cheers G
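    A sketch of that change from the PVE host shell, assuming VM ID 100 (a placeholder):

    ```bash
    # 2 sockets x 3 cores = 6 vCPUs total; 100 is a placeholder VM ID.
    qm set 100 --sockets 2 --cores 3
    # The new topology takes effect after the VM is fully stopped and started again.
    ```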
  14.

    [SOLVED] VM Crashing

    Best practice is to have cores spread equally across sockets, i.e. 12 total physical cores / 2 sockets = max 6 cores per socket; 24 total physical cores / 2 sockets = 12 cores per socket, etc. This does not include hyperthreading, as those cores don't actually exist; they are just additional...
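    The same arithmetic as a quick sketch:

    ```bash
    # Max cores per virtual socket = total physical cores / physical sockets
    # (hyperthreads excluded, as noted above).
    echo $(( 12 / 2 ))   # 12-core host, 2 sockets -> 6 cores per socket
    echo $(( 24 / 2 ))   # 24-core host, 2 sockets -> 12 cores per socket
    ```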
  15.

    Store VM application data locally vs in a NAS

    Hi nununo NFS in my opinion is a better protocol than SMB 1 or 2, which was the default for a long time, but one drawback of NFS is that if the PVE initiator drops its connection to the NAS it will queue up IO and eventually crash PVE trying to reconnect if there is a network issue. CIFS/...
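    One common mitigation for that hang, as a sketch (server name and export path are placeholders; note that soft mounts trade indefinite hangs for possible IO errors during long outages):

    ```bash
    # Soft-mount the NAS export so IO times out instead of queueing forever.
    mount -t nfs -o soft,timeo=100,retrans=3 nas.example.com:/export/data /mnt/nas
    ```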
  16.

    Backups

    Hi RCD See below: OK, it sounds like you know what you would like to do. This could be achieved with a USB drive and USB passthrough to the VM. The only other way to have a scratch disk attached to a VM is either as a virtual drive or over a file share from within another VM or an external file...
  17.

    Scaling beyond single server. Suggestion wanted.

    Hi dsh My 2 cents: Ceph = object storage; DRBD = 1:1 or 1:N replication. DRBD is simple vs Ceph, which is complicated. If Ceph breaks it could mean no data; if DRBD breaks it just stops replicating and you can still use your data on 1 host while troubleshooting the issue. Ceph support from Proxmox directly...
  18.

    Store VM application data locally vs in a NAS

    Hi all Keep things simple. If the local PVE storage is SSD, use it to run the VM and the application inside the VM. Connect the NAS with a CIFS share to PVE as a backup destination to back up the VMs: Backup VM > NAS share on a schedule. Set backup compression to save space. Data lives in the VM and the VM is backed up -...
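    A one-off version of that backup as a sketch from the host shell, assuming a PVE storage entry named nas for the CIFS share and VM ID 100 (both placeholders; zstd compression needs a recent PVE release):

    ```bash
    vzdump 100 --storage nas --compress zstd --mode snapshot
    # Recurring jobs are normally scheduled under Datacenter > Backup in the PVE GUI.
    ```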
  19.

    [SOLVED] VM Crashing

    Agree with LnxBil. I would also set VirtIO as the SCSI storage adaptor and set the storage cache to passthrough. The VM kernel is panicking, which can be a few things. How many physical sockets does the server have? How many cores? How many cores and sockets does the VM have? More info and maybe a...
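    A sketch of those settings via the host shell, assuming VM ID 100 and a disk volume on local-lvm (placeholders; the cache mode shown is also an assumption, since qm offers modes such as writethrough, writeback and none rather than a literal "pass through"):

    ```bash
    qm set 100 --scsihw virtio-scsi-pci                           # use the VirtIO SCSI controller
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writethrough # example cache mode on an existing disk
    ```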
  20.

    Additional IPs through Proxmox

    Hi JRshaw Have you checked the FW on the VM in PVE? Disable iptables in the CentOS VM too; make sure it's disabled for testing as a troubleshooting step. Have you set DNS in /etc/resolv.conf? If not, set a name server on the adaptor eth0 or in the above-mentioned file. For testing add...
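    A sketch of those troubleshooting steps inside the CentOS VM (the nameserver IP is a placeholder; restore the firewall rules after testing):

    ```bash
    iptables -F                                    # flush firewall rules, for testing only
    echo "nameserver 8.8.8.8" >> /etc/resolv.conf  # placeholder nameserver
    ping -c 3 google.com                           # verify DNS resolution and connectivity
    ```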