Glad to get your answer, thank you! But would you mind answering the question in the title: which mirror should I prefer for the OS install, SSD or HDD?
I have a server equipped with 2 x 2 TB HDDs and 2 x 512 GB SSDs. I'd like to install PVE 7.0.2 on it and plan to create mirrors out of these disks.
So I'm really unsure whether I should install PVE (the OS) itself on the SSD ZFS mirror or on the HDD ZFS mirror. Both approaches appear to have their...
All of my previous experience with PVE setups has been with stand-alone hosts, and PVE is perfect for that.
Now we want to connect shared SAN storage to several PVE hosts to get HA. The SAN (we rent it) appears to be FC-connected.
At the same time we want to have the same VMs copied...
Hey, LVM is not an option (
We need to replicate guests to another storage (as an HA approach), and LVM over FC won't let us do that since it has no snapshots, right? Only "local" ZFS is capable of that, as the docs say.
Surely new is new, and that's even better when it comes to dealing with new approaches.
But when I think of a server with many disks, I start to think it is less prepared for handling hardware issues and failures of its parts. A 3PAR has 2 controllers, so when one fails the second one will...
You're quite right: the storage I listed above was not built with PVE in mind, so no special optimization was made (poor HPE :) ). So the only way for us to play with that SAN is to go with thick LVM (thick itself won't bother us); at least we'll end up with shared storage and we'll be able to...
I used to use standalone PVE hosts with locally attached disks. Nice and tiny setups :)
Now I have a chance to set up a PVE cluster with shared storage. It involves something like 2 groups, each consisting of 6 hosts and 1 SAN. All the hardware is rented, so we can make some adjustments at this point...
But https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0 and https://proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-0 both make no claim of beta status. It seems this is the release anyway, isn't it?
I have a handy server I can play with (but not kill). Its /etc/apt/sources.list looks like this:
deb http://ftp.debian.org/debian buster main non-free contrib
deb http://ftp.debian.org/debian buster-updates main non-free contrib
deb http://security.debian.org buster/updates main non-free...
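For reference, a hedged sketch of what the list might look like after an upgrade to PVE 7 on Debian 11 (bullseye); note that the security suite name changes in bullseye, and the last line is the Proxmox no-subscription repository from the PVE docs:

```
deb http://ftp.debian.org/debian bullseye main contrib
deb http://ftp.debian.org/debian bullseye-updates main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
```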
A friend of mine has set up a host for webdev purposes and, on my advice, used fresh PVE instead of the initially chosen Hyper-V.
The host is in a datacenter, and they provided only one IPv4 address; renting more is a bit tricky there.
So my friend installed nginx on the Proxmox host itself...
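A minimal sketch of such a setup, assuming the guests sit behind an internal bridge; the hostname and upstream address are placeholders, not the friend's actual configuration:

```nginx
# Reverse-proxy the single public IPv4 to a VM on an internal bridge.
server {
    listen 80;
    server_name app.example.com;          # placeholder hostname

    location / {
        proxy_pass http://192.168.100.10:8080;   # assumed VM address behind vmbr1
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```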
Dealing with several PVE hosts, all of them standalone, I want to find the disks of each VM in some algorithmically easy way. The problem is that some VM disks are raw or qcow2 files, while others are stored in ZFS volumes.
I suspect I can only scan /etc/pve/qemu-server/*.conf files for virtio/scsi/ide/sata...
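A rough sketch of that scan, assuming the standard PVE config location; the storage names in the output are whatever each VM actually uses:

```shell
#!/bin/sh
# List the disk entries of every VM on a standalone PVE host by
# grepping the per-VM config files for disk-style keys.
for conf in /etc/pve/qemu-server/*.conf; do
  vmid=$(basename "$conf" .conf)
  grep -E '^(virtio|scsi|ide|sata|efidisk)[0-9]+:' "$conf" \
    | sed "s/^/VM $vmid: /"
done
```

This catches the common controller types, but note it does not distinguish a qcow2 file from a ZFS zvol by itself; the storage ID before the colon in each matched value would still need to be looked up.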
I did that, and there is no clear picture. The VMs have very low I/O intensity, and the only thing I see is ZFS-related processes. But I suspect 20+% wearout in 2-3 months is a bit high for mostly ZFS scrubs.
I have a single PVE server (6.4, community repo) and I noticed it has constant iodelay > 0 even when the host is almost idle. The host is very lightly loaded, so I didn't expect to see that.
There is a single M.2 NVMe disk on board (Samsung 970 EVO Plus), installed mostly as a test disk, and a couple of Intel 545...
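Two quick checks that might narrow this down; device and pool names below are assumptions for illustration, and the smartctl part needs smartmontools installed:

```shell
#!/bin/sh
# Watch per-device I/O inside the ZFS pool every 5 seconds to see
# which disk the background writes land on:
# zpool iostat -v rpool 5

# Read the NVMe drive's own wear counter ("Percentage Used" is the
# standard NVMe health-log field reported by smartctl):
smartctl -a /dev/nvme0 \
  | awk -F: '/Percentage Used/ {gsub(/[ %]/, "", $2); print "wearout: " $2 "%"}'
```

Comparing the smartctl number with what the PVE GUI reports would show whether the wearout figure itself is trustworthy.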
You see, this is software produced by some small company, and the author is a rather unpleasant person who thinks everyone is trying to steal his software (not the case here, but we can't prove anything to him). This time (for the 4 -> 16 MB cache change) we were almost ready to pay him to reissue the license for...
Thank you, but I just can't find how to change the reported cache size for the virtual CPU, and your links don't seem to help. I see I can set the CPU model and flags, but not the cache?
Could you please advise the magic for setting the CPU cache size that is seen by the guest OS?
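The closest thing I could find so far (untested, and I'm not sure it gives control over the exact size): QEMU's x86 CPU model has a `host-cache-info=on` property that passes the host's real cache topology through to the guest when using `-cpu host`, and PVE can forward raw QEMU arguments:

```shell
# Pass the host CPU's cache topology through to guest 100 (sketch only,
# VMID is a placeholder; requires root and -cpu host).
qm set 100 --args '-cpu host,host-cache-info=on'
```

Whether this is enough to satisfy the license check would still need testing on the actual guest.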