FYI, those packages are already out and publicly available through the pvetest package repository.
Hi, I see that most of the Git repositories got a version bump on 7/12, so could we get a beta2 or RC by this weekend? Thanks.
How hard is it expected to be to upgrade from the PVE 6 beta to the stable release once it comes out?
What is the recommended sequence? Migrate all machines to one node, remove the other from the cluster, reinstall it fresh with 6.0/ZFS, and migrate the VMs across? Or upgrade both nodes in place to 6.0 first before reinstalling? I guess the key question is whether a 5.4 node and a 6.0 node can exist in the same cluster and migrate VMs between themselves.
Is there a set of tests I should run on an installed beta to provide feedback? It's not a problem if things break, it's a test machine.
Thanks. Could you please advise when the next ISO for 6.0 is planned to be released?
Is this something in the current version, or in my machine?
I'd guess it's a combination of both, but that probably does not help you. It'd be interesting to know why udev now needs a while to settle things... A look at the "dmesg" output during that time, and maybe checking whether systemd-udev-settle.service just needs to wait on something else:
Code:
systemd-analyze blame
Maybe also try to revert the HPET change, just temporarily, to see if that would solve this issue.
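Not from the original reply, but as a quick sketch of how one could check which clocksource is active and temporarily force HPET again (assuming the standard sysfs paths):
Code:
# show the clocksource currently in use and the ones the kernel offers
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# switch to hpet for the running system only; this does not persist across reboots
echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource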
# systemd-analyze blame
1min 8.596s systemd-udev-settle.service
1min 8.567s ifupdown-pre.service
2.636s lvm2-pvscan@259:0.service
2.127s pvebanner.service
1.865s pvedaemon.service
1.761s pve-cluster.service
1.644s dev-md0.device
1.616s systemd-modules-load.service
1.589s lvm2-monitor.service
1.277s pvestatd.service
1.268s pve-firewall.service
1.170s zfs-import-cache.service
1.139s smartd.service
1.047s rrdcached.service
# dmesg | egrep 'ifupdown|udev|helper|wait'
[ 0.256842] process: using mwait in idle threads
@Gaia
I think this has nothing to do with the Proxmox version,
have a look here, Post 21:
https://forum.proxmox.com/threads/wrong-time-wrong-time-synchronisation.55208/page-2
regards,
maxprox
Jul 13 15:51:48 node1 systemd[1]: ifupdown-pre.service: Main process exited, code=exited, status=1/FAILURE
Jul 13 15:51:48 node1 systemd[1]: ifupdown-pre.service: Failed with result 'exit-code'.
Jul 13 15:51:48 node1 systemd[1]: Failed to start Helper to synchronize boot up for ifupdown.
Jul 13 15:51:48 node1 systemd[1]: Dependency failed for Raise network interfaces.
Jul 13 15:51:48 node1 systemd[1]: networking.service: Job networking.service/start failed with result 'dependency'.
Jul 13 15:51:48 node1 systemd[1]: systemd-udev-settle.service: Main process exited, code=exited, status=1/FAILURE
Jul 13 15:51:48 node1 systemd[1]: systemd-udev-settle.service: Failed with result 'exit-code'.
Jul 13 15:51:48 node1 systemd[1]: Failed to start udev Wait for Complete Device Initialization.
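A possible way to dig further into why systemd-udev-settle.service fails here, sketched with commands I'm assuming rather than taking from the thread:
Code:
# full journal for the two failing units from the current boot
journalctl -b -u systemd-udev-settle.service -u ifupdown-pre.service
# wait for the udev event queue manually and report whether it drained in time
udevadm settle --timeout=120; echo $?
# watch which events udev is still processing, with their properties
udevadm monitor --udev --property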
auto lo
iface lo inet loopback

iface eno2 inet manual

allow-vmbr0 eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan148

allow-vmbr0 vlan148
iface vlan148 inet static
    address 10.250.100.21
    netmask 255.255.255.0
    gateway 10.250.100.253
    ovs_type OVSIntPort
    ovs_bridge vmbr0
ifup vmbr0
I am trying to limit ZFS memory usage on the PVE 6 beta using this manual: https://pve.proxmox.com/wiki/ZFS_on_Linux
I added "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then ran "update-initramfs -u" and rebooted.
After the reboot I ran "arcstat" and see:
Code:
time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
12:40:56     0     0      0     0    0     0    0     0    0   678M  3.7G
My setup is the latest PVE 6 beta with root on ZFS (RAID10) and UEFI boot.
Need some help!
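For what it's worth, here are the steps described above as a sketch; the check and the on-the-fly change via /sys are my additions, not part of the wiki page cited:
Code:
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
update-initramfs -u
reboot
# after the reboot the limit should be visible here (in bytes, 0 means "auto")
cat /sys/module/zfs/parameters/zfs_arc_max
# the value can also be changed at runtime without a reboot
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max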
[ 136.820241] openvswitch: Open vSwitch switching datapath
[ 137.419453] bpfilter: Loaded bpfilter_umh pid 999
[ 141.378328] sctp: Hash tables configured (bind 512/512)
[ 148.469395] L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and deleted link
[ 243.111684] INFO: task kworker/9:1:140 blocked for more than 120 seconds.
[ 243.111722] Tainted: P IO 5.0.15-1-pve #1
[ 243.111744] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 243.111773] kworker/9:1 D 0 140 2 0x80000000
[ 243.111784] Workqueue: events redo_bmc_reg [ipmi_msghandler]
[ 243.111786] Call Trace:
[ 243.111796] __schedule+0x2d4/0x870
[ 243.111798] schedule+0x2c/0x70
[ 243.111800] schedule_preempt_disabled+0xe/0x10
[ 243.111803] __mutex_lock.isra.10+0x2e4/0x4c0
[ 243.111806] ? __switch_to_asm+0x34/0x70
[ 243.111808] ? __switch_to_asm+0x40/0x70
[ 243.111810] __mutex_lock_slowpath+0x13/0x20
[ 243.111811] mutex_lock+0x2c/0x30
[ 243.111814] __bmc_get_device_id+0x65/0xaf0 [ipmi_msghandler]
[ 243.111815] ? __switch_to_asm+0x40/0x70
[ 243.111817] ? __switch_to_asm+0x34/0x70
[ 243.111818] ? __switch_to_asm+0x40/0x70
[ 243.111819] ? __switch_to_asm+0x34/0x70
[ 243.111824] ? __switch_to+0x96/0x4e0
[ 243.111825] ? __switch_to_asm+0x40/0x70
[ 243.111826] ? __switch_to_asm+0x34/0x70
[ 243.111827] ? __switch_to_asm+0x40/0x70
[ 243.111830] redo_bmc_reg+0x54/0x60 [ipmi_msghandler]
[ 243.111835] process_one_work+0x20f/0x410
[ 243.111836] worker_thread+0x34/0x400
[ 243.111840] kthread+0x120/0x140
[ 243.111842] ? process_one_work+0x410/0x410
[ 243.111843] ? __kthread_parkme+0x70/0x70
[ 243.111845] ret_from_fork+0x35/0x40
[ 243.111870] INFO: task systemd-udevd:685 blocked for more than 120 seconds.
[ 243.111896] Tainted: P IO 5.0.15-1-pve #1
[ 243.111918] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 243.111946] systemd-udevd D 0 685 587 0x80000104
# rear udev
Cannot include keyboard mappings (no keymaps default directory '')