ah, ok... my debian 4 had a 2.6 kernel. (i don't know why... maybe because of an update from debian 3 or older. don't know... it's too long ago)
virtio doesn't work with this kernel.
but as i wrote: i used the better (more time-intensive) solution and replaced the system with a current operating...
Hello,
I am not sure if I am correct in my assumptions or if something has broken.
these are the latest updates i have installed:
Start-Date: 2023-02-28 05:19:42
Commandline: apt-get -y dist-upgrade
Install: libslirp0:amd64 (4.4.0-1+deb11u2, automatic)
Upgrade: libcurl4:amd64...
hi fabian,
thank you for this information
aaron said:
"If you do not have a dedicated physical network for corosync, having multiple links configured might save you from fencing, but it is no guarantee as the other networks might also be in a state that makes them unusable for corosync."...
ouuu... today i installed the regular updates on the "old", not yet updated pve6 cluster and started rebooting the hosts in order. then, on the second of seven, i had "cluster-wide" fencing and all hosts rebooted. :-(
don't know which bugfixes for corosync are included in pve7.... but maybe they...
today i checked the pve7.1 release notes. is it possible that this fixes my problem described above?
Updated corosync to include bug-fixes for issues occurring during network recovery.
This could have otherwise led to loss of quorum on all cluster nodes, which in turn would cause a...
i checked all my logs, but found nothing relevant at all. only very rare entries at night during backup tasks, like this:
Oct 06 02:03:23 p1 corosync[4323]: [KNET ] link: host: 3 link: 0 is down
Oct 06 02:03:23 p1 corosync[4323]: [KNET ] host: host: 3 (passive) best link: 0 (pri: 1)
Oct 06 02:03:23 p1...
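for completeness, this is roughly how i searched the logs (standard corosync unit name on a pve install; the date range is just an example):

journalctl -u corosync --since "2021-10-05" --until "2021-10-07" | grep -Ei 'knet|link|quorum'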
No. It happened during the update of the first node. I think apt showed about 97 percent and then everything was offline and all nodes rebooted.
Yes. File backups from several VMs are made through this physical interface (different vlan). But definitely not at the time this happened.
As the...
Hey guys,
nearly exactly the same happened here. During the upgrade from pve 6 to 7 of the first node in a three-node cluster, all nodes rebooted at the same time. I thought this was an upgrade issue. But two days ago, all three nodes rebooted again at exactly the same time without a warning and...
thanks! that did the trick :-)
after turning relatime on, there is only a short disk access and the permanent IO is gone.
why isn't this the default behavior? or is it like this in my case because i created the zpool and datastore manually and not via the UI?
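for anyone finding this later, this is roughly what i ran (the pool/dataset name is a placeholder, adjust it to your datastore):

zfs set relatime=on rpool/datastore
zfs get atime,relatime rpool/datastore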
no scheduled tasks are running during daytime. (only backups)
and yes... the UI was open and showed the datastore summary page.
when i switch to the dashboard, the IO is gone
take a look at the screenshot. during the time in the red frame the summary page of the datastore was shown. the peak at the...
hey there,
my current (test) PBS has 20 ssds in one pool with 2 raidz2 vdevs.
when the system is idle and no backup is running, there is constant, heavy IO on the pool.
is this normal behavior for zfs?
~800 write iops when idle?
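this is how i measured it (the pool name is a placeholder; prints per-vdev stats every 5 seconds):

zpool iostat -v tank 5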
regards
stefan
hi folks,
- on my PVE cluster and on the PBS, the same version of smartmontools is installed (7.2-pve2)
- both have the same file in /var/lib/smartmontools/drivedb/drivedb.h
- for one ssd model (crucial mx500), PVE shows the wearout, but PBS shows N/A.
- other disks' wearout is shown as...
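what i compared on both hosts, in case it helps (the device name is a placeholder; on the mx500 the relevant attribute should be 202, Percent_Lifetime_Remain, if i read the drivedb correctly):

smartctl -A /dev/sda | grep -i -e wear -e lifetime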
yes, I know that. There's a reason those SSDs are lying around and are no longer used in the Ceph cluster. (Not because of the wearout, but because of the cache behavior and the performance under sustained load.) I just don't want to throw them away and also don't want to sell them. For "testing" it...
hey,
i have 20+ leftover (unfortunately consumer) 2TB ssds and a server with 24x 2.5-inch slots, and i want to use that for a PBS.
now i am wondering what the most sensible layout for a zfs pool would be. one big vdev with raidz2/3? multiple vdevs? special devices? what do you think...
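to make the question concrete, one of the variants i'm considering would look roughly like this (disk names are placeholders; in reality i'd use /dev/disk/by-id paths):

zpool create -o ashift=12 backup \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
  raidz2 sdk sdl sdm sdn sdo sdp sdq sdr sds sdt

that would be two 10-disk raidz2 vdevs, with four slots left over for spares or a special device.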
am i seeing this correctly? once the message "clients are using insecure global_id reclaim" has disappeared, is it safe to run this command: "ceph config set mon auth_allow_insecure_global_id_reclaim false"? (I have restarted all pve/ceph cluster nodes, and all vms were migrated during this procedure.)
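for the record, my plan is to verify with the stock ceph commands before and after (the health code to look for should be AUTH_INSECURE_GLOBAL_ID_RECLAIM):

ceph health detail
ceph config set mon auth_allow_insecure_global_id_reclaim false
ceph health detail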