Hi,
what do you mean by old?! What kind of server is it?
I have installed pve on many old systems without any trouble (with the installer).
If only the NIC makes trouble, it's perhaps easier to install an Intel NIC?
But if the server is very old, you are perhaps not satisfied with the...
Hi Lucas,
the trick is "can be something like"... meaning you have source lists which use oldstable stretch, which must be replaced by buster.
To find your files, use
grep -r stretch /etc/apt/
sed is a stream editor and the command simply replaces stretch with buster in the file (like vi...
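A minimal sketch (the file names below are just examples - adjust them to whatever the grep above finds):
sed -i 's/stretch/buster/g' /etc/apt/sources.list
sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/pve-enterprise.list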
Hi,
with pve 6 a new corosync version is used.
Are there any changes to the number of cluster nodes in one cluster?
If I remember right, the limit is currently 32 nodes, but fewer are recommended (how many?).
Udo
Hi,
do you have any BIOS settings that affect the clock source?
What are the performance/powersave settings?
Any difference with full performance mode?
Udo
Hi,
I've some R620s too, but only two disks as ZFS below a PERC (raid-0 construct).
None of them have any boot trouble... but I see a similar effect on an R410 with SATA enabled (BIOS). After disabling SATA in the BIOS, the boot went well without 10 minutes of dead time.
Udo
Hi,
looks like trouble with a SATA disk...
This can happen with a damaged disk, or if you use a SAS disk on a SATA port! (Saw this some days ago.)
Udo
Hi,
you didn't write which PERC you have in the R610.
With the BBU version you can't use pass-through - for ZFS you must work with single raid-0 volumes which are then raided with ZFS (ugly)!
ZFS (ZoL) on an SSD raid-1 is much slower than a raid-1 with the LSI controller (I don't mean the H330).
Esp. with heavy...
Hi,
is your downloaded ISO broken? Check the checksum.
I have installed pve many times on Dell generation 11 servers, but always from a USB stick (copied the ISO to the stick with dd).
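A minimal sketch (the ISO name and /dev/sdX are placeholders - use your actual download and stick device, and double-check the device, dd overwrites it!):
sha256sum proxmox-ve_6.0-1.iso    # compare against the checksum on the download page
dd if=proxmox-ve_6.0-1.iso of=/dev/sdX bs=4M status=progress
sync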
Udo
Hi,
Non-RAID is the Dell naming for pass-through - so this is the right choice for ZFS + Ceph disks.
About AHCI - take a look here: https://www.diffen.com/difference/AHCI_vs_IDE
Udo
Hi Wolfgang,
in my case both servers use a separate disk for swap...
swapon
NAME TYPE SIZE USED PRIO
/dev/sdf1 partition 16G 147M -2
swapon
NAME TYPE SIZE USED PRIO
/dev/sdf1 partition 16G 0B -2
Udo
Hi,
I have the same issue with the replication between two nodes.
One VM is replicated from B to A every 15 minutes and fails approx. four times a day, and one VM with the same replication schedule from A to B doesn't fail, or fails only once a day.
I would guess it's load (IO) and ZoL related in...
Hi,
which versions are you running?
Which process needs the most RAM?
AFAIK there is/was a memory issue with Ceph... do you get the memory back if you restart the OSDs (one by one)?
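A minimal sketch, assuming systemd-managed OSDs (osd.0 is just an example ID - repeat per OSD and wait for the cluster to recover in between):
systemctl restart ceph-osd@0.service
ceph -s    # wait for HEALTH_OK before restarting the next one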
Udo
Hi,
which profile do you have selected in the BIOS? If it isn't Performance, you can see very weird IO performance (had this with normal raid volumes on an R620).
Udo
Hi,
unfortunately the performance of ZFS is not as good as with LVM and a HW raid controller.
We had some trouble with MySQL write latencies on ZFS (SSDs), which were gone after switching to LVM raid storage (same SSDs).
The pro is features like replication for disaster recovery...
Udo
Hi,
Proxmox VE is a rolling release. It's important to use "apt dist-upgrade" (or full-upgrade)!
"apt upgrade" (or apt-get) isn't enough!
Perhaps this solves the issue?!
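For example, on the node (apt-get works the same way):
apt update
apt dist-upgrade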
Udo
Hi,
yes, this is possible. I have the pve config in puppet and can distribute all settings to a newly installed server, and after that switch the new disks to the old server, and all works like before (I must set the registration code again, but it's the same).
The last thing on the newly installed...
Hi,
just saw that you filled your root partition (No space left on device) because you wrote to a new file (the logical volume wasn't open).
To correct this, stop VM 333, deactivate the LV and remove the big file:
qm stop 333
lvchange -a n /dev/BANK_SSD/vm-333-disk-0
ls -lsa...