Perhaps the below has been answered before, but my quick search through the forums and documentation has yielded no results.
I run a couple of VMs using a central storage server and multipath iSCSI. The VMs function properly.
The complete Proxmox cluster currently consists of 4 nodes...
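(For anyone comparing setups: a quick way to confirm both paths are alive on a node is multipath -ll; the WWID and device names below are placeholders, not from my actual boxes.)

    multipath -ll
    # typical shape of healthy two-path output:
    # 36589cfc000000xxxxxxxx dm-2 FreeNAS,iSCSI Disk
    # size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
    # |-+- policy='round-robin 0' prio=1 status=active
    # | `- 5:0:0:0 sdb 8:16 active ready running
    # `-+- policy='round-robin 0' prio=1 status=enabled
    #   `- 6:0:0:0 sdc 8:32 active ready running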
I successfully added an OVA VM to Proxmox yesterday. I unpacked the OVA file, which contained the .vmdk disk.
I then created a VM in Proxmox with the appropriate configuration.
Created two .vmdk disks in the local directory, which puts the files in /var/lib/vz/images
- Copied my .vmdk file...
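(Roughly, the commands involved look like this; the VMID 100, the "local" storage name, and the file names are just example values:)

    # an OVA is just a tar archive
    tar -xvf appliance.ova
    # option A: copy the .vmdk under the VM's image directory
    cp appliance-disk1.vmdk /var/lib/vz/images/100/
    # option B: let Proxmox convert and import it for you
    qm importdisk 100 appliance-disk1.vmdk local --format qcow2

After the import, the disk shows up as "unused" on the VM's Hardware tab and just needs to be attached.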
I would like this feature in the GUI as well ..... I find importing VMs not intuitive
- I have a couple of VMs that I would like to import, but because it is not easy ...... I have been waiting until I have time to sit down and figure it out
I wonder if the issue has to do with running LibreNMS as a container and trying to listen to the host.
I have LibreNMS installed as a full VM, and it monitors the various Proxmox nodes through SNMP (I am actually in the middle of moving LibreNMS to a container....)
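(The node side of that is just the stock Debian snmpd; a minimal read-only sketch, with the community string and subnet as placeholder values:)

    apt install snmpd
    # /etc/snmp/snmpd.conf
    agentAddress udp:161
    rocommunity public 10.0.0.0/24
    # apply with: systemctl restart snmpd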
You may want to try to install...
HAHA that is funny .....
I would like to keep a central backend storage server for SAN/NAS functions and nothing else.
Would really like to know how to fix this, as I am pretty sure it is not a FreeNAS issue....... I will confirm by testing with Xen and ESXi
I would suggest possibly looking at used or refurbished enterprise SSDs.
I have three Proxmox nodes which use SSDs:
R620 - 1GB PERC H710P - 6x Intel DC S3610 400GB SSDs in RAID 5 (purchased 10 total refurbished from Newegg with a 3yr warranty, $89 each)
R620 - 1GB PERC H710P - 8x Samsung 850...
Just speaking from my experience ..... I have Netdata installed on each node of my 4-node cluster, and so far I have not noticed any issues with Proxmox. (Running for about 1 year.)
You have to set the MTU to 9000 through the command line in /etc/network/interfaces.
It is not possible to change the MTU through the GUI.
I usually change the MTU within the VM to 9000......
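(On the host side the stanza looks roughly like this; the interface name and address are placeholders:)

    auto eth1
    iface eth1 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        mtu 9000

Then an ifdown eth1 && ifup eth1 (or a reboot) to apply it.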
I have the same issue ..... 4 Proxmox nodes and two FreeNAS 11.1 systems for back-end storage connected over iSCSI multipath. I can't see anything else within my FreeNAS console because all I see are these timeout/exit notifications.
It happens from every node, on both iSCSI paths.
I completely forgot about Webmin and using that solution for storage management.
Yeah, in my case I already have all the things you mentioned running: NAS/SAN on bare metal and a 4-node Proxmox HA cluster. I already virtualize Zentyal, Windows Server, and Windows 10 virtual clients with an NVMe drive...
ZFS's checksum-based self-healing is definitely a feature you lose putting ZFS over hardware RAID: it can still detect corruption, but with only one device visible it has no redundant copy to repair from.
On my FreeNAS box I run the disks through an HBA, but on Proxmox I was thinking of moving to ZFS over my PERC RAID controller. The advanced checksumming is nice, but what I am really interested in is...
Vassilis - What do you use for just standard network file sharing? User setup and management? File permission control?
- I am not sure what the use case is for the individual starting this thread
I can say that in my environment, all my client PCs (not servers) are Windows and need to access...
You may want to look at passing through the entire HBA (I assume with that many disks you are using an HBA, or maybe an HBA + expander).
Your data disks attached to the HBA would be passed through directly to OMV. The OMV OS would reside on a virtual disk within Proxmox. This would give you some...
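(A sketch of the passthrough side, assuming IOMMU is already enabled (e.g. intel_iommu=on on the kernel command line) and using a made-up PCI address and VMID:)

    # find the HBA's PCI address
    lspci | grep -i -e sas -e raid
    # hand the whole controller to the OMV VM
    qm set 100 -hostpci0 01:00.0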
I see this FreeNAS quote all the time that to run ZFS you need to give it direct access to the disks, but is this really a requirement of ZFS, or a statement from FreeNAS/iXsystems?
I have run FreeNAS virtualized on top of an enterprise RAID controller with cache and BBU. This presents one single disk to...
bobmc, would you please elaborate on why there are risks running FreeNAS as a VM? Data loss or corruption?
Would this really be an issue if you pass the disks directly to the VM? Giving the FreeNAS RAID manager direct access to the disks....
What if you pass an entire HBA (if possible) to the...
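(If whole-HBA passthrough isn't an option, per-disk passthrough looks roughly like this; the VMID and disk IDs are placeholders:)

    qm set 101 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-SERIAL1
    qm set 101 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX-SERIAL2

The guest still sees a virtual SCSI device this way (no SMART access), unlike passing the whole HBA.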
I don't have an answer to your question, but why not virtualize OMV? (At least the OS portion of OMV?)
I have a 4-node Proxmox cluster with both local storage (spinning drives and SSDs) and central storage via two FreeNAS machines, which provide VM storage over iSCSI multipath + CIFS for...
Thank you for the info.
The next question would be: what about the file system?
Should I use LVM-thin?
Create a ZFS volume?
I like being able to do snapshots, but now that I have two nodes, the ZFS option for replication would be nice.
Performance is still key.
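(For reference, the ZFS route would look roughly like this; the pool name and device IDs are placeholders:)

    # mirrored pool for VM disks
    zpool create -o ashift=12 vmdata mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    # register it with Proxmox as VM storage (enables snapshots and replication)
    pvesm add zfspool vmdata --pool vmdata --content images,rootdir --sparse 1

Replication between the two nodes can then be set up per-VM (pvesr under the hood), which LVM-thin cannot do.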
Is there any advantage to having the Proxmox OS on a solid-state drive?
Performance gains?
How much does the OS disk performance affect the overall performance of Proxmox?
- Is VM storage the only real key to the performance of the system?
Good Day!!
I currently have one Proxmox node running on standard desktop hardware: an AMD 6-core CPU with 16GB of RAM.
I have an LSI 8-port SAS RAID card with a pair of SSDs in RAID 1, on which the Proxmox system is installed.
For my VM storage I have a pair of 1TB 7200rpm disks in RAID 1 - setup...