When I replace a motherboard in a Proxmox server with an identical model, it won't come back online after power-up. It seems that Proxmox detects the changed Ethernet MAC addresses and maps them to unconfigured ethX devices, e.g. if I have eth0 and eth1 working properly, then change the motherboard...
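For reference, on Debian-based PVE the MAC-to-ethX mapping is pinned in udev's persistent net rules; one way out after a board swap (file path assumes the Debian default) is:

```shell
# The old MACs stay pinned to eth0/eth1 in this rules file after a board
# swap, so the new NICs get pushed out to eth2/eth3. Either edit the MAC
# addresses in place, or delete the file and let udev regenerate it on
# the next boot.
rm /etc/udev/rules.d/70-persistent-net.rules
reboot
```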
A bit of renewed interest in this because Apple discontinued its Xserve and Mac mini Server range of products, so they have no server hardware anymore.
Here is a blog on getting Yosemite onto Proxmox.
http://blog.will3942.com/virtualizing-osx-yosemite-proxmox
I am looking at 4.1 and 4.3 side by side, and 4.1 uses smaller fonts and smaller skeuomorphic icons, freeing up some screen real estate.
4.1 is definitely quicker for me to locate what I want. Perhaps time spent getting used to the new flattened, larger interface is all that is needed...
I just had one of my Proxmox 3.4 servers reboot in the middle of the night. I couldn't find anything in the syslog to indicate a problem, and it had been up for over 200 days. Where is the best place to look for clues?
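A few common places to check, assuming the default Debian log locations on PVE 3.x:

```shell
# Was the last reboot a clean shutdown or a hard crash?
last -x reboot shutdown | head

# Hardware errors (MCE), kernel panics or oopses before the gap
grep -iE 'mce|panic|oops' /var/log/kern.log

# The last messages written before the reboot
tail -n 50 /var/log/syslog.1
```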
I just had a similar problem at 4am during a backup, except the stuck process was httpd.
Found some information on it here:
https://access.redhat.com/solutions/39542
Why run a write cache when you have SSDs as your main storage in the first place? For that matter, why run L2ARC when you have SSDs? It sounds like a waste of time.
I put vm.swappiness = 0 in /etc/sysctl.conf
It's not a complete disable; it just won't swap unless totally out of RAM, which shouldn't occur unless something unplanned happens.
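A minimal sketch of the same setting applied without a reboot (run as root):

```shell
# Persist the setting across reboots
echo "vm.swappiness = 0" >> /etc/sysctl.conf
# Apply it immediately
sysctl -p
# Check the live value
cat /proc/sys/vm/swappiness
```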
If Proxmox 4.1 runs on the PowerEdge R220, then ZFS is already loaded and should be able to run. Test it at the command line: make a couple of files into a small zpool.
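A quick file-backed test, assuming the ZFS userland tools are present (pool and file names here are arbitrary; destroy the pool when done):

```shell
# Two 1 GiB sparse files as fake disks
truncate -s 1G /tmp/zdisk1 /tmp/zdisk2
# Build a mirrored test pool out of them (files must be absolute paths)
zpool create testpool mirror /tmp/zdisk1 /tmp/zdisk2
zpool status testpool
# Tear it down afterwards
zpool destroy testpool
rm /tmp/zdisk1 /tmp/zdisk2
```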
What you would need is a small SSD drive as the read cache; it would have to go in a PCIe slot. The SSD is just a performance...
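Attaching such an SSD as L2ARC is a single command; the pool name and device path below are only examples:

```shell
# Attach the PCIe SSD as a read cache (L2ARC) to an existing pool
zpool add rpool cache /dev/nvme0n1
zpool status rpool
```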
It should work fine, though it's always better to have a RAID1 or better. Make sure you tune the amount of RAM ZFS uses down to a sensible value; it defaults to 50% I think.
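One way to cap the ARC on Proxmox, a sketch assuming a 4 GiB limit (the value is in bytes; pick one that suits your RAM):

```shell
# Cap the ZFS ARC at 4 GiB (4 * 1024^3 bytes)
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
# Rebuild the initramfs so the limit is applied at boot
update-initramfs -u
# Optionally apply it right away without rebooting
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```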
Live migration requires the same storage, so what you are doing is not live migration, and it can be done a number of ways: take the VM offline and copy the disk image from storage1 to storage2, or back up the VM and restore it, specifying the changed storage.
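The backup-and-restore route can be sketched with the stock PVE tools; the VMID, dump filename, and storage names below are examples:

```shell
# Back up VM 100 (stopped for consistency) to the local dump storage
vzdump 100 --storage local --mode stop
# Restore it, placing the disks on the target storage
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2016_01_01-00_00_00.vma.lzo 100 \
    --storage storage2
```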
ZFS storage uses block devices called zvols, not disk files. You will have to convert them to files, just like you would have to convert an ext3 (etc.) partition to a file if your hypervisor required it.
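For instance, qemu-img can read the zvol's block device directly; the dataset path and target path here are assumptions:

```shell
# Convert the zvol behind vm-100-disk-1 into a qcow2 file
qemu-img convert -O qcow2 \
    /dev/zvol/rpool/data/vm-100-disk-1 \
    /mnt/storage2/images/100/vm-100-disk-1.qcow2
```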
Probably better off disabling the hardware RAID and using ZFS RAID in the Proxmox installer. You will need to read up and plan that.
Some advantages are outlined in this Wiki page https://pve.proxmox.com/wiki/Storage:_ZFS
There seem to be countless people, including myself, who have problems running multicast. The instructions for not using multicast need elaboration, step by step, in https://pve.proxmox.com/wiki/Multicast_notes#Use_unicast_instead_of_multicast_.28if_all_else_fails.29
Before multicast is...
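For what it's worth, the core of the wiki's unicast workaround is a one-line change in the totem section of corosync.conf (PVE 4.x layout assumed; the cluster name is an example, and config_version must be bumped as the wiki describes):

```
# /etc/pve/corosync.conf (excerpt)
totem {
  version: 2
  secauth: on
  cluster_name: mycluster   # example name
  config_version: 3         # must be incremented on every edit
  transport: udpu           # unicast instead of multicast
}
```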
Not really; people have been using swapfiles for decades, typically when they realize they didn't allocate sufficient space when the drive was partitioned. Mac OS X uses swapfiles in /var/vm.
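Typical swapfile creation on Linux, for reference (path and size are arbitrary):

```shell
# Create a 2 GiB file, lock down permissions, format and enable it
fallocate -l 2G /swapfile   # or dd if fallocate isn't supported here
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Persist across reboots
echo "/swapfile none swap sw 0 0" >> /etc/fstab
```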
I block all ports from the Internet to my Proxmox server's IP, with exception rules for the workstations I do administration from. I only remove the block for software updates, then put it back immediately after. NTP is done against an internal server.
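The same policy sketched with iptables; the source address is a placeholder for your admin workstation:

```shell
# Keep loopback and already-established traffic working
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow the admin workstation in
iptables -A INPUT -s 198.51.100.10 -j ACCEPT
# Drop everything else aimed at the host
iptables -A INPUT -j DROP
```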