I can't help with ZFS, but your ISO upload problems sound like you are hitting the USB drive, hence the grey-outs and timeouts.
To re-enable boot from CD/ISO you can look at Options > Boot Order and the CD/DVD drive image choice under Hardware in the Proxmox GUI. Or, if you are quick, you can choose the boot in...
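If you prefer the CLI, something along these lines should match those GUI steps (VM ID 100, the ISO path and the disk names are placeholders, and the order= syntax is from recent qm versions):

# re-attach an ISO to the CD drive and put it first in the boot order
qm set 100 --cdrom local:iso/debian-11.iso
qm set 100 --boot 'order=ide2;scsi0'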
I guess you are referring to the exchange that happened here:
https://forum.proxmox.com/threads/iommu-broke-solved.53188/#post-245797
I'd leave it alone; don't pick at that thread.
Hi,
You have provided lots of info, which is good, but, as you admit, there is a lot to process there...
At the start I think you are saying that you are installing Proxmox onto a 32GB USB stick. This is not a good idea; install Proxmox on a SATA HDD or on an SSD. If you accept the default...
"In my case I mostly noticed VMs that were idle all weekend performing poorly on Monday because all their RAM was swapped to disk during weekend backups."
THIS! And if you get this, all you can do is turn swap off completely or keep the VM artificially busy.
It is my opinion that Linux will use filesystem cache and buffers in preference to rarely used memory, i.e. if you do a lot of filesystem work then some other things will get swapped out. It could be that your backup is pushing things into swap.
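If turning swap off entirely feels too drastic, lowering swappiness is a softer knob to try first (the value 10 is just an illustration):

# check the current value
cat /proc/sys/vm/swappiness
# prefer dropping filesystem cache over swapping out idle guest memory
sysctl -w vm.swappiness=10
# make it survive a reboot
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf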
As for "because it kinda hangs on last ~10MB of...
task started by HA resource agent
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/QemuServer.pm line 2026.
TASK OK
It seemed to work properly: after I did "shutdown -h now" in the VM, the VM was started up again by HA, and then this message was seen in Task Viewer...
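If the goal is to have the guest stay off, one way (VM ID 100 is a placeholder) is to tell HA the requested state before shutting the guest down:

# mark the VM as intentionally stopped so HA does not restart it
ha-manager set vm:100 --state stopped
# check what HA currently thinks
ha-manager status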
A recent kernel change broke the 400. You need to upgrade the RAID firmware. This needs to be done from 32-bit Ubuntu (that worked for me, anyhow). Then the disks will go from cciss to sda type.
Hope that helps.
I think I've seen this before (when the root VG did not match the VG on disk, in my case) when the kernel is looking for the root filesystem but not finding it.
I'd try taking your HDD out to see if you get the same error or something different.
You could also try doing the install from USB or DVD.
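From the initramfs or a rescue shell you can also check whether the kernel actually sees the volume group it is looking for (a rough sketch, names will differ):

# list the volume groups the kernel can find
lvm vgscan
lvm vgs
# activate them and see if the root LV shows up
lvm vgchange -ay
ls /dev/mapper/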
Is there any possibility for a light Proxmox node that joins a cluster just to provide extra Ceph disk space?
I have a Proxmox cluster that I am very pleased with, but maybe I could expand its Ceph storage with some storage nodes that are managed by the cluster.
(I can see that others might...
Hi,
I am a fan of using ...not-so-good hardware with Proxmox, however I can't see any use for Proxmox here.
Your nodes are not running VMs, so why Proxmox? Also, with 6-8 nodes you'll be OK, but you will not have a spare Ethernet port to run the clustering on. If you try to expand (on success) then...
ceph osd pool set ceph size 3
would in theory lose 3 of your replicas of each data block, but since you don't have them already (due to the lack of nodes) you don't lose any data.
If you really want 6 copies (?!) you could chooseleaf by OSD, but it is a bit late for that now unless you create another...
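For the record, replicating per OSD instead of per host means a CRUSH rule whose failure domain is osd; something along these lines (the rule name is arbitrary, the pool is the "ceph" pool from above):

# create a replicated rule that picks leaves at the OSD level
ceph osd crush rule create-replicated replicated_osd default osd
# point the pool at the new rule
ceph osd pool set ceph crush_rule replicated_osd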
Hi,
On a switch I think the correct technology is called "hairpinning"; on a NAT it is called "NAT reflection".
Overriding DNS can work (big companies do it too), but DNS over HTTPS might be trouble in the future.
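If you go the DNS-override route, on a local resolver like dnsmasq it is a one-liner in dnsmasq.conf (hostname and address are placeholders):

# answer queries for the public name with the internal address
address=/myserver.example.com/192.168.22.10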
Andrew
You can turn the network card off in the Hardware list of the VM in Proxmox. It is a checkbox called "Disconnect".
If you still want your VMs to talk to each other, then just make sure the bridge vmbr1 that they are connected to has no Ethernet cards attached.
If you have a private network like 192.168.22.x and you want to talk to the outside world then you'll need a NAT.
A bridge is the equivalent of a network switch and doesn't do that.
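As a sketch (assuming vmbr0 is the bridge with your uplink and 192.168.22.0/24 is the private network), a NAT-ed vmbr1 in /etc/network/interfaces looks roughly like this:

auto vmbr1
iface vmbr1 inet static
        address 192.168.22.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        # masquerade the private network out through vmbr0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 192.168.22.0/24 -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 192.168.22.0/24 -o vmbr0 -j MASQUERADE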
You crazy cat.
(sorry it felt like a poem)
There is a saying ... virtualise everything except time ...
Proxmox uses systemd's timesyncd (which is best for hosts that need to sync quickly after a reboot), and it is configured in /etc/systemd/timesyncd.conf.
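If you want it pointed at your own NTP servers, the config is tiny (the pool servers below are just an example):

[Time]
NTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org
FallbackNTP=2.debian.pool.ntp.org

Then restart it with systemctl restart systemd-timesyncd and check the result with timedatectl.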
Here is a PVE ceph cluster I have:
3 servers of:
CPU(s): 8
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
Model name: Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
ceph status
  cluster:
    id: cf6aec56-7ab1-41b1-90c1-972d52cb9d90...
Hi,
What would happen to cause three out of five nodes to die? I think if you had a five-node cluster, the most you would expect is one failure at a time. There is also the point that if you did get it working the way you want, then all 40-50 VMs could end up running on one node.
Your idea of having two...
There are two disks in each of the servers. Since each of the servers will only have gigabit Ethernet anyhow, SSDs would be a waste of time.
The servers probably don't even have TRIM support.
If it really is a "stack", I'd go for a Proxmox Ceph hyperconverged cluster. Allocate one HDD from each as your Proxmox install disk and one from each as an OSD for Ceph, assuming you have a spare gigabit switch that you can dedicate to Ceph.
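Per node that works out to something like this (the device name and the Ceph network are placeholders, and the exact subcommand names vary a little between PVE versions):

# install Ceph and, once per cluster, initialise it on the dedicated network
pveceph install
pveceph init --network 10.10.10.0/24
# create a monitor and an OSD on the spare disk
pveceph createmon
pveceph osd create /dev/sdb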
If you need a high-performance VM, put it on LVM on the local HDD; if you...