Stuck at initramfs-tools upgrade

Molch

Hi,

my server is stuck here:

dpkg --configure -a
Setting up initramfs-tools (0.130) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.130) ...
update-initramfs: Generating /boot/initrd.img-4.10.15-1-pve

It has been like this for two hours, no change, and my server load is getting high, around 60 on average.

Is there a way to fix it without rebooting?

Thanks!
 
Hi,

I tried the `sync` command; nothing happened.
I also unmounted all NFS shares.
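
(For reference, a forced lazy unmount like the following usually detaches a dead NFS mount even when a plain `umount` hangs; the path here is only an example.)

umount -f -l /mnt/nfs-share   # -f: force, -l: lazy detach, clean up once no longer busy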
 
Any relevant output in `dmesg`?
`ps auxwf` should show you a process tree, where you can see which process actually hangs, and in which state.
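
For example, something like this lists only the processes stuck in uninterruptible sleep (D state), which is the classic sign of hanging I/O; a minimal sketch, not Proxmox-specific:

ps -eo pid,stat,wchan:32,cmd | awk 'NR==1 || $2 ~ /^D/'   # keep the header plus D-state tasks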
 
Hi,

I did the following:

edit "/usr/sbin/update-initramfs": comment out the "sync" command in the "generate_initramfs" function , after this i did dpkg --reconfigure -a.
Now it runs without problem.
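
Roughly, the edit amounts to this (a sketch only; the exact pattern depends on how the sync call looks in your version of the script):

sed -i.bak 's/^\(\s*\)sync$/\1# sync/' /usr/sbin/update-initramfs   # keep a .bak copy, comment out bare "sync" lines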

But I still have a high load. Is there a way to find the cause, or to reduce it?

Here is my output for `ps auxwf | grep sync`:


root@vmm:/var/log# ps auxwf | grep sync
root 65446 0.0 0.0 205044 1604 ? Ds 10:26 0:00 (imesyncd)
root 67435 0.0 0.0 205012 1832 ? Ds 10:31 0:00 (imesyncd)
root 140270 0.0 0.0 12784 1020 pts/11 S+ 17:48 0:00 | \_ grep sync
root 79152 0.0 0.0 5840 72 ? D 10:31 0:00 \_ sync
root 80691 0.0 0.0 205120 2492 ? Ds 10:35 0:00 (imesyncd)
root 82192 0.0 0.0 205120 2492 ? Ds 10:40 0:00 (imesyncd)
root 83638 0.0 0.0 205120 2492 ? Ds 10:44 0:00 (imesyncd)
root 85192 0.0 0.0 205120 2492 ? Ds 10:49 0:00 (imesyncd)
root 86860 0.0 0.0 205120 2492 ? Ds 10:53 0:00 (imesyncd)
root 88471 0.0 0.0 205120 2492 ? Ds 10:58 0:00 (imesyncd)
root 89857 0.0 0.0 205120 2492 ? Ds 11:02 0:00 (imesyncd)
root 91335 0.0 0.0 205120 2492 ? Ds 11:07 0:00 (imesyncd)
root 92772 0.0 0.0 205120 2492 ? Ds 11:11 0:00 (imesyncd)
root 94250 0.0 0.0 205120 2492 ? Ds 11:16 0:00 (imesyncd)
root 95697 0.0 0.0 205120 2492 ? Ds 11:20 0:00 (imesyncd)
root 97135 0.0 0.0 205120 2492 ? Ds 11:25 0:00 (imesyncd)
root 98605 0.0 0.0 205120 2492 ? Ds 11:29 0:00 (imesyncd)
root 100137 0.0 0.0 205120 2492 ? Ds 11:34 0:00 (imesyncd)
root 101564 0.0 0.0 205120 2492 ? Ds 11:38 0:00 (imesyncd)
root 103095 0.0 0.0 205120 2492 ? Ds 11:43 0:00 (imesyncd)
root 104505 0.0 0.0 205120 2492 ? Ds 11:47 0:00 (imesyncd)
root 105936 0.0 0.0 205120 2492 ? Ds 11:52 0:00 (imesyncd)
root 107337 0.0 0.0 205120 2492 ? Ds 11:56 0:00 (imesyncd)
root 108823 0.0 0.0 205120 2492 ? Ds 12:01 0:00 (imesyncd)
root 110241 0.0 0.0 205120 2492 ? Ds 12:05 0:00 (imesyncd)
root 111671 0.0 0.0 205120 2492 ? Ds 12:10 0:00 (imesyncd)
root 113064 0.0 0.0 205120 2492 ? Ds 12:15 0:00 (imesyncd)
root 114582 0.0 0.0 205120 2492 ? Ds 12:19 0:00 (imesyncd)
root 116032 0.0 0.0 205120 2492 ? Ds 12:24 0:00 (imesyncd)
root 117539 0.0 0.0 205120 2492 ? Ds 12:28 0:00 (imesyncd)
root 119001 0.0 0.0 205120 2492 ? Ds 12:33 0:00 (imesyncd)
root 120521 0.0 0.0 205120 2492 ? Ds 12:37 0:00 (imesyncd)
root 121997 0.0 0.0 205120 2492 ? Ds 12:42 0:00 (imesyncd)
root 123425 0.0 0.0 205120 2492 ? Ds 12:46 0:00 (imesyncd)
root 124923 0.0 0.0 205120 2492 ? Ds 12:51 0:00 (imesyncd)
root 126327 0.0 0.0 205120 2492 ? Ds 12:55 0:00 (imesyncd)
root 127806 0.0 0.0 205120 2492 ? Ds 13:00 0:00 (imesyncd)
root 129232 0.0 0.0 205120 2492 ? Ds 13:04 0:00 (imesyncd)
root 130758 0.0 0.0 205120 2492 ? Ds 13:09 0:00 (imesyncd)
root 137746 0.0 0.0 5840 72 ? D 13:11 0:00 \_ sync
root 138440 0.0 0.0 205120 2492 ? Ds 13:13 0:00 (imesyncd)
root 139909 0.0 0.0 205120 2492 ? Ds 13:18 0:00 (imesyncd)
root 141373 0.0 0.0 205120 2492 ? Ds 13:22 0:00 (imesyncd)
root 142836 0.0 0.0 205120 2492 ? Ds 13:27 0:00 (imesyncd)
root 144343 0.0 0.0 205120 2556 ? Ds 13:31 0:00 (imesyncd)
root 145813 0.0 0.0 205120 2492 ? Ds 13:36 0:00 (imesyncd)
root 147253 0.0 0.0 205120 2492 ? Ds 13:40 0:00 (imesyncd)
root 1581 0.0 0.0 205120 2492 ? Ds 13:45 0:00 (imesyncd)
root 3114 0.0 0.0 205120 2492 ? Ds 13:49 0:00 (imesyncd)
root 9020 0.0 0.0 5840 68 ? D 13:51 0:00 \_ sync
root 10632 0.0 0.0 205120 2492 ? Ds 13:54 0:00 (imesyncd)
root 12166 0.0 0.0 205120 2620 ? Ds 13:58 0:00 (imesyncd)
root 13690 0.0 0.0 205120 2576 ? Ds 14:03 0:00 (imesyncd)
root 15196 0.0 0.0 205120 2576 ? Ds 14:07 0:00 (imesyncd)
root 16928 0.0 0.0 205120 2584 ? Ds 14:12 0:00 (imesyncd)
root 22279 0.0 0.0 5840 68 ? D 14:12 0:00 \_ sync
root 24065 0.0 0.0 205120 2588 ? Ds 14:16 0:00 (imesyncd)
root 26943 0.0 0.0 205120 2588 ? Ds 14:21 0:00 (imesyncd)
root 37508 0.0 0.0 5840 72 ? D 14:30 0:00 \_ sync
root 50291 0.0 0.0 5840 68 ? D 14:51 0:00 sync
root 73639 0.0 0.0 5840 664 ? D 15:42 0:00 \_ sync
root 88330 0.0 0.0 5840 676 ? D 15:49 0:00 sync
root 104785 0.0 0.0 5840 664 pts/4 D 16:21 0:00 \_ sync
root 126415 0.0 0.0 5840 620 ? D 17:27 0:00 sync


In syslog/dmesg, only:


[30992657.569530] INFO: task pvesr:87935 blocked for more than 120 seconds.
[30992657.569640] Tainted: G O 4.10.15-1-pve #1
[30992657.569737] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[30992657.569870] pvesr D 0 87935 1 0x00000000
[30992657.569874] Call Trace:
[30992657.569887] __schedule+0x233/0x6f0
[30992657.569890] schedule+0x36/0x80
[30992657.569892] schedule_timeout+0x22a/0x3f0
[30992657.569899] ? dequeue_task_fair+0x5ab/0xaa0
[30992657.569904] ? __switch_to+0x3d7/0x520
[30992657.569907] ? ktime_get+0x41/0xb0
[30992657.569910] io_schedule_timeout+0xa4/0x110
[30992657.569913] __lock_page+0x10d/0x150
[30992657.569915] ? unlock_page+0x30/0x30
[30992657.569917] pagecache_get_page+0x19f/0x2a0
[30992657.569919] shmem_unused_huge_shrink+0x214/0x3b0
[30992657.569921] shmem_unused_huge_scan+0x20/0x30
[30992657.569924] super_cache_scan+0x190/0x1a0
[30992657.569928] shrink_slab.part.40+0x1f5/0x420
[30992657.569930] shrink_slab+0x29/0x30
[30992657.569932] shrink_node+0x108/0x320
[30992657.569935] do_try_to_free_pages+0xf5/0x330
[30992657.569937] try_to_free_pages+0xe9/0x190
[30992657.569939] __alloc_pages_slowpath+0x40f/0xba0
[30992657.569942] ? radix_tree_lookup_slot+0x22/0x50
[30992657.569944] __alloc_pages_nodemask+0x209/0x260
[30992657.569949] alloc_pages_current+0x95/0x140
[30992657.569952] pte_alloc_one+0x17/0x40
[30992657.569959] __pte_alloc+0x1e/0x110
[30992657.569961] alloc_set_pte+0x592/0x600
[30992657.569963] finish_fault+0x2c/0x50
[30992657.569965] handle_mm_fault+0xb49/0x1330
[30992657.569968] ? common_mmap+0x48/0x50
[30992657.569969] ? apparmor_mmap_file+0x18/0x20
[30992657.569973] __do_page_fault+0x23e/0x4e0
[30992657.569976] do_page_fault+0x22/0x30
[30992657.569979] page_fault+0x28/0x30
[30992657.569982] RIP: 0033:0x7f7058eeea57
[30992657.569983] RSP: 002b:00007ffe253c57f0 EFLAGS: 00010246
[30992657.569985] RAX: 0000000000000000 RBX: 000055e87feece10 RCX: 0000000000000000
[30992657.569986] RDX: 0000000000000000 RSI: 00007f70509b3000 RDI: 00007f705f34d000
[30992657.569987] RBP: 0000000000000002 R08: 00007ffe253c5990 R09: 00000000ffffffff
[30992657.569988] R10: 0000000000000172 R11: 00007f7058ef2570 R12: 000000000000001c
[30992657.569989] R13: 0000000000000010 R14: 0000000000000000 R15: 0000000000000010
[30993322.714455] vmbr0: port 44(tap106i0) entered disabled state
[31018085.971143] device tap120i0 entered promiscuous mode
[31018085.986975] vmbr0: port 44(tap120i0) entered blocking state
[31018085.986978] vmbr0: port 44(tap120i0) entered disabled state
[31018085.987355] vmbr0: port 44(tap120i0) entered blocking state
[31018085.987357] vmbr0: port 44(tap120i0) entered forwarding state
[31023533.066239] vmbr0: port 32(tap126i0) entered disabled state
[31024055.798074] vmbr0: port 44(tap120i0) entered disabled state
[31024101.994801] device tap120i0 entered promiscuous mode
[31024102.005482] vmbr0: port 32(tap120i0) entered blocking state
[31024102.005484] vmbr0: port 32(tap120i0) entered disabled state
[31024102.005756] vmbr0: port 32(tap120i0) entered blocking state
[31024102.005758] vmbr0: port 32(tap120i0) entered forwarding state
[31030928.506492] vmbr0: port 32(tap120i0) entered disabled state
[31030948.257257] device tap126i0 entered promiscuous mode
[31030948.266406] vmbr0: port 32(tap126i0) entered blocking state
[31030948.266409] vmbr0: port 32(tap126i0) entered disabled state
[31030948.266696] vmbr0: port 32(tap126i0) entered blocking state
[31030948.266698] vmbr0: port 32(tap126i0) entered forwarding state
[31079204.109615] device tap106i0 entered promiscuous mode
[31079204.126938] vmbr0: port 44(tap106i0) entered blocking state
[31079204.126941] vmbr0: port 44(tap106i0) entered disabled state
[31079204.127251] vmbr0: port 44(tap106i0) entered blocking state
[31079204.127253] vmbr0: port 44(tap106i0) entered forwarding state
[31079872.004188] vmbr0: port 44(tap106i0) entered disabled state
[31094182.636737] vmbr0: port 32(tap126i0) entered disabled state
[31094190.100684] device tap120i0 entered promiscuous mode
[31094190.112405] vmbr0: port 32(tap120i0) entered blocking state
[31094190.112409] vmbr0: port 32(tap120i0) entered disabled state
[31094190.112800] vmbr0: port 32(tap120i0) entered blocking state
[31094190.112803] vmbr0: port 32(tap120i0) entered forwarding state
[31107550.158171] vmbr0: port 36(tap105i1) entered disabled state
[31108156.516520] systemd-journald[951]: Received SIGTERM from PID 1 (systemd).


Sep 5 17:51:46 vmm systemd[1]: systemd-logind.service: State 'stop-sigterm' timed out. Killing.
Sep 5 17:51:46 vmm systemd[1]: systemd-logind.service: Killing process 1330 (systemd-logind) with signal SIGKILL.
Sep 5 17:51:48 vmm rrdcached[1703]: flushing old values
Sep 5 17:51:48 vmm rrdcached[1703]: rotating journals
Sep 5 17:51:48 vmm rrdcached[1703]: started new journal /var/lib/rrdcached/journal/rrd.journal.1536162708.053216
Sep 5 17:51:48 vmm rrdcached[1703]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1536155508.038662
Sep 5 17:53:16 vmm systemd[1]: systemd-logind.service: Processes still around after SIGKILL. Ignoring.
Sep 5 17:54:47 vmm systemd[1]: systemd-logind.service: State 'stop-final-sigterm' timed out. Killing.
Sep 5 17:54:47 vmm systemd[1]: systemd-logind.service: Killing process 1330 (systemd-logind) with signal SIGKILL.
 
It seems like some storage/disk is not available, and hence most things hang. You could check whether a `df -h` runs through; if it hangs, see which mount point does not show up in the output: that is the one that is hanging.
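
If even `df` gets stuck, you can probe every mount point in the background, so that one dead filesystem does not block the other checks. A rough sketch (assumes mount points without spaces in their names):

while read -r _dev mnt _rest; do
    ( stat -f -- "$mnt" >/dev/null 2>&1 && echo "ok: $mnt" ) &
done < /proc/mounts
sleep 5   # mounts that never print "ok" are the suspects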

Taking a look at the complete output of `ps auxwf` might also give you a hint.

pve-kernel 4.10 is also quite dated (currently we're at 4.15.18-2 on pve-enterprise, 4.15.18-4 on the pve-no-subscription repository).
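
The upgrade itself is the usual apt route; a minimal sketch, assuming the repository you use is already configured:

apt update
apt dist-upgrade   # pulls in the current pve-kernel
# reboot afterwards so the new kernel is actually booted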
 
Hi,

storage looks good; my controller gives me the following status:


-- Array information --
-- ID | Type | Size | Strpsz | Flags | DskCache | Status | OS Path | CacheCade |InProgress
c0u0 | RAID-1 | 136G | 256 KB | RA,WB | Default | Optimal | /dev/sda | None |None
c0u1 | RAID-10 | 2726G | 512 KB | RA,WB | Default | Optimal | /dev/sdb | None |None

-- Disk information --
-- ID | Type | Drive Model | Size | Status | Speed | Temp | Slot ID | LSI Device ID
c0u0p0 | HDD | SEAGATE ST9146803SS 00066SD4HAAK | 136.2 Gb | Online, Spun Up | 6.0Gb/s | 25C | [18:0] | 19
c0u0p1 | HDD | SEAGATE ST9146803SS 00066SD4H9D7 | 136.2 Gb | Online, Spun Up | 6.0Gb/s | 24C | [18:1] | 13
c0u1s0p0 | HDD | SEAGATE ST9600205SS 00046XR3GZDD | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 26C | [18:2] | 14
c0u1s0p1 | HDD | SEAGATE ST9600205SS 00046XR3K35A | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 27C | [18:3] | 15
c0u1s1p0 | HDD | SEAGATE ST9600205SS 00046XR3ER4H | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 27C | [18:4] | 8
c0u1s1p1 | HDD | SEAGATE ST9600205SS 00046XR3BABZ | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 27C | [18:5] | 9
c0u1s2p0 | HDD | SEAGATE ST9600205SS 00046XR3DGBL | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 28C | [18:6] | 10
c0u1s2p1 | HDD | SEAGATE ST9600205SS 00046XR3K364 | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 29C | [18:7] | 11
c0u1s3p0 | HDD | SEAGATE ST9600205SS 00046XR3K3C5 | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 30C | [18:8] | 16
c0u1s3p1 | HDD | SEAGATE ST9600205SS 00046XR3FBT5 | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 28C | [18:9] | 17
c0u1s4p0 | HDD | SEAGATE ST9600205SS 00046XR3K3AR | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 28C | [18:10] | 21
c0u1s4p1 | HDD | SEAGATE ST9600205SS 00046XR3CD9A | 558.4 Gb | Online, Spun Up | 6.0Gb/s | 27C | [18:11] | 20


I checked `df` etc.; no command hangs. Do I need to reboot the server because of the problem above?

Thx!
 
