Hi All
I am working on a proof of concept for a micro hosting requirement. The client requires hosting for many small VMs, roughly 30-50 of them, each with 2GB of RAM and 30GB of storage. The CPU and disk load generated by these VMs is very low.
They are looking to retire their two old Dell R540...
Hi All
I am recovering data from a lab server that got borked due to power loss and now kernel panics on boot.
I did a new 7.2 install on a separate SSD. The old version was 7.1.
I connected the old boot SSD to the system and ran an
to get the disk imported
I then noticed that the disk is...
I have also been experiencing the same since my upgrade to Proxmox 5.1.
I restarted a container, which resulted in a kworker thread using 100% CPU in iowait. However, none of the disk devices showed high utilization percentages; they were normal. VMs and containers already running also experienced no...
Hi All
I am consulting with a client who is using Proxmox on an Intel Modular Server, which is getting pretty old now.
We are busy looking into replacement hardware for them. Ideally, they would like to retain the ability to live migrate their VPS clients from one node to another, for...
See this about Ceph on ZFS
https://www.spinics.net/lists/ceph-users/msg33646.html
When selecting distributed storage, most seem to select either Gluster on top of ZFS, or Ceph on XFS.
1. The SATADOM MLC device should be OK, as long as you don't use it for VM storage.
2. That seems like overkill in terms of capacity; remember that L2ARC has RAM overhead. Maybe partition the drives as follows (see the rough zpool sketch after this list):
- 2x 10GB, mirrored - log (SLOG)
- 2x 100GB, striped - L2ARC
- 2x remaining capacity, mirrored - pure SSD pool
3. 2...
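To make that layout concrete, here is a minimal sketch, assuming the two SSDs show up as /dev/sda and /dev/sdb, the partitions match the sizes above, and the existing data pool is called tank (all names are placeholders; prefer /dev/disk/by-id paths on a real system):

# hypothetical device and pool names; adjust to your setup
zpool create ssdpool mirror /dev/sda3 /dev/sdb3   # pure SSD pool from the large partitions
zpool add tank log mirror /dev/sda1 /dev/sdb1     # 2x 10GB partitions as a mirrored SLOG
zpool add tank cache /dev/sda2 /dev/sdb2          # 2x 100GB partitions as L2ARC (cache devices are always striped)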
I am also experiencing the same in Proxmox 4.4. When this container used to be a KVM VM, the backup took 2 hours or so; now it takes 16 hours, backing up to NFS.
Hi all
Using Proxmox 4.4-12 - community subscription
After creating a new container using the debian8.0-standard_8.6-1.tar.gz template, I am unable to log in via SSH or via the console.
I have also tried an older Debian 8.0 template that I have used in the past; same problem.
I have also tried usernames...
manu, thank you for the input.
Yes, I was referring to multiqueue.
There are 3 unused NICs in the server. I will look into the passthrough option, but it does not leave a lot of flexibility, so I will consider it a last resort.
Would using OVS be more or less efficient?
Hi All
I have 2 Windows Server 2012 R2 VMs on the same host, connected to an internal bridge network. They access the internet via an Untangle firewall with one interface bridged to an external adapter and another on the internal bridge. The internal bridge has no physical adapter assigned...
I have a similar problem with a 2008 R2 guest. I managed to get better stability when switching disk caching to none. The performance is more consistent then, but slow.
Do you have the virtio stable or virtio latest drivers installed?
Are you using qcow2 as the disk format?
It's Ubuntu-kernel based, so I can't see a reason why it can't run in a VM. The issue is that, as far as I know, Clonezilla can't back up a live OS yet. We actually keep an SSD with a "base install" of Proxmox ready to go. We have all the config files backed up from each host via rsync. Once we...
We are careful about which SSD brand we use. We usually stick with Samsung, Intel, or Kingston, and never buy the entry-level product. We have never had one fail without advance notice via SMART. Of the ±50 SSDs we have, only an Intel 530 unit has failed, and it gave us ample warning via SMART...
I currently have a DB that runs 100% from RAM, so I can free up an additional 6GB that way.
What if I split the SSD partitions three ways to make the RAM requirements more conservative? Something like the layout below (a rough partitioning sketch follows the list):
1. 10GB - ZIL partition, mirrored - 10GB
2. 40GB - L2ARC - 40GB (80GB combined)
3. ~400GB - mirrored pool...
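For what it's worth, a minimal sketch of that three-way split with sgdisk, assuming the first SSD shows up as /dev/sda (a placeholder; by-id paths are safer) and using the BF01 Solaris/ZFS type code that ZFS on Linux typically applies:

# hypothetical device name; repeat the same commands on the second SSD
sgdisk -n1:0:+10G -t1:BF01 /dev/sda   # partition 1: 10GB for the mirrored ZIL/SLOG
sgdisk -n2:0:+40G -t2:BF01 /dev/sda   # partition 2: 40GB for L2ARC
sgdisk -n3:0:0    -t3:BF01 /dev/sda   # partition 3: remaining ~400GB for the mirrored pool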