call traces in syslog of proxmox 4.1

dirtbag

New Member
Feb 29, 2016
hey folks, I have 4.1 up and running on a ProLiant DL380 G5 with 32 GB of memory, and I'm just testing various things out. I'd like to migrate my ESXi server over to this platform, but I'm seeing some things I'd like more info about.
First off, I'm running:

root@rock:~# pveversion --verbose
proxmox-ve: 4.1-26 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-1 (running version: 4.1-1/2f9650d4)
pve-kernel-4.2.6-1-pve: 4.2.6-26
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 0.17.2-1
pve-cluster: 4.0-29
qemu-server: 4.0-41
pve-firmware: 1.1-7
libpve-common-perl: 4.0-41
libpve-access-control: 4.0-10
libpve-storage-perl: 4.0-38
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-17
pve-container: 1.0-32
pve-firewall: 2.0-14
pve-ha-manager: 1.0-14
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-5
lxcfs: 0.13-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie


1. I see there are updates if I do "apt-get update; apt-get upgrade".
*Should* I run the upgrade every time there are updates?
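For what it's worth, the usual advice on Proxmox (worth double-checking against the docs for your release) is to use dist-upgrade rather than plain upgrade, so that new dependencies such as a new pve-kernel package get installed instead of being held back. A sketch:

```shell
# Refresh package lists, then upgrade. On Proxmox, dist-upgrade is
# generally preferred over plain upgrade so that packages with new
# dependencies (e.g. a new pve-kernel-* package) are installed
# rather than held back.
apt-get update
apt-get dist-upgrade

# Reboot if a new kernel was installed, then verify which one is running:
uname -r
```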


2. Also, in the logs on the host, I see this a lot:
Feb 28 19:56:34 rock kernel: [28080.208579] lzop D ffff88082fad6a00 0 9360 9351 0x00000000
Feb 28 19:56:34 rock kernel: [28080.208585] ffff880697e0f998 0000000000000082 ffff88080c323300 ffff8807ff658cc0
Feb 28 19:56:34 rock kernel: [28080.208589] 0000000000016a00 ffff880697e10000 ffff88082fad6a00 7fffffffffffffff
Feb 28 19:56:34 rock kernel: [28080.208593] ffffffff81804790 ffff880697e0fb30 ffff880697e0f9b8 ffffffff81803eb7
Feb 28 19:56:34 rock kernel: [28080.208596] Call Trace:
Feb 28 19:56:34 rock kernel: [28080.208607] [<ffffffff81804790>] ? bit_wait_timeout+0x90/0x90
Feb 28 19:56:34 rock kernel: [28080.208611] [<ffffffff81803eb7>] schedule+0x37/0x80
Feb 28 19:56:34 rock kernel: [28080.208614] [<ffffffff818070d1>] schedule_timeout+0x201/0x2a0
Feb 28 19:56:34 rock kernel: [28080.208619] [<ffffffff8101e7a8>] ? native_sched_clock+0x28/0x90
Feb 28 19:56:34 rock kernel: [28080.208622] [<ffffffff8101e879>] ? sched_clock+0x9/0x10
Feb 28 19:56:34 rock kernel: [28080.208626] [<ffffffff810ab24b>] ? sched_clock_local+0x1b/0x90
Feb 28 19:56:34 rock kernel: [28080.208629] [<ffffffff8101e249>] ? read_tsc+0x9/0x10
Feb 28 19:56:34 rock kernel: [28080.208632] [<ffffffff81804790>] ? bit_wait_timeout+0x90/0x90
Feb 28 19:56:34 rock kernel: [28080.208635] [<ffffffff818034ab>] io_schedule_timeout+0xbb/0x140
Feb 28 19:56:34 rock kernel: [28080.208639] [<ffffffff810bd417>] ? prepare_to_wait+0x57/0x80
Feb 28 19:56:34 rock kernel: [28080.208642] [<ffffffff818047ce>] bit_wait_io+0x3e/0x60
Feb 28 19:56:34 rock kernel: [28080.208644] [<ffffffff8180428f>] __wait_on_bit+0x5f/0x90
Feb 28 19:56:34 rock kernel: [28080.208647] [<ffffffff81804790>] ? bit_wait_timeout+0x90/0x90
Feb 28 19:56:34 rock kernel: [28080.208650] [<ffffffff81804341>] out_of_line_wait_on_bit+0x81/0xb0
Feb 28 19:56:34 rock kernel: [28080.208653] [<ffffffff810bd750>] ? autoremove_wake_function+0x40/0x40
Feb 28 19:56:34 rock kernel: [28080.208674] [<ffffffffc0a329a8>] nfs_wait_on_request+0x38/0x40 [nfs]
Feb 28 19:56:34 rock kernel: [28080.208685] [<ffffffffc0a37712>] nfs_updatepage+0x162/0x930 [nfs]
Feb 28 19:56:34 rock kernel: [28080.208693] [<ffffffffc0a27bd8>] nfs_write_end+0x158/0x500 [nfs]
Feb 28 19:56:34 rock kernel: [28080.208699] [<ffffffff813db323>] ? iov_iter_copy_from_user_atomic+0x93/0x230
Feb 28 19:56:34 rock kernel: [28080.208703] [<ffffffff81181a97>] generic_perform_write+0x117/0x1d0
Feb 28 19:56:34 rock kernel: [28080.208707] [<ffffffff8121875e>] ? dentry_needs_remove_privs.part.12+0x1e/0x30
Feb 28 19:56:34 rock kernel: [28080.208711] [<ffffffff81183c16>] __generic_file_write_iter+0x1a6/0x1f0
Feb 28 19:56:34 rock kernel: [28080.208713] [<ffffffff81183d48>] generic_file_write_iter+0xe8/0x1e0
Feb 28 19:56:34 rock kernel: [28080.208722] [<ffffffffc0a26f12>] nfs_file_write+0xa2/0x180 [nfs]
Feb 28 19:56:34 rock kernel: [28080.208725] [<ffffffff811fcacb>] new_sync_write+0x9b/0xe0
Feb 28 19:56:34 rock kernel: [28080.208728] [<ffffffff811fcb36>] __vfs_write+0x26/0x40
Feb 28 19:56:34 rock kernel: [28080.208730] [<ffffffff811fd1b9>] vfs_write+0xa9/0x190
Feb 28 19:56:34 rock kernel: [28080.208733] [<ffffffff81805f06>] ? mutex_lock+0x16/0x40
Feb 28 19:56:34 rock kernel: [28080.208735] [<ffffffff811fdff5>] SyS_write+0x55/0xc0
Feb 28 19:56:34 rock kernel: [28080.208739] [<ffffffff81807ff2>] entry_SYSCALL_64_fastpath+0x16/0x75
Feb 28 20:02:58 rock pvedaemon[18652]: <root@pam> successful auth for user 'root@pam'
Feb 28 20:14:34 rock kernel: [29160.208586] lzop D 0000000000000000 0 9360 9351 0x00000000
Feb 28 20:14:34 rock kernel: [29160.208596] ffff88082fffbe00 ffff880697e10000 ffff88082fa16a00 7fffffffffffffff

I assume that's not good? ;)
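Reading the trace: it's a hung-task warning, not a crash. lzop (likely the compressor used by a vzdump backup) sat in uninterruptible sleep waiting on an NFS write (nfs_updatepage and nfs_file_write are in the stack), which usually points at slow or stalled NFS storage rather than a kernel bug. A rough way to check, assuming the backup target is an NFS mount:

```shell
# Hung-task warnings fire when a process stays in state D
# (uninterruptible sleep, usually blocked on I/O) longer than this:
cat /proc/sys/kernel/hung_task_timeout_secs

# List processes currently stuck in D state:
ps -eo pid,stat,comm | awk '$2 ~ /D/'

# Show NFS mounts with their options, and client-side NFS call stats
# (watch for high retransmit counts, which indicate a struggling server):
mount -t nfs
nfsstat -c
```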

3. Also, I'm seeing high CPU on the host while copying data over to my new VM.
For the VM's storage, I'm using a mirrored ZFS volume; the underlying volumes are
just individual RAID0 disks on the DL380.
Is that expected? Is there any tuning I should do for ZFS volumes?
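One common knob (assuming memory pressure is the culprit here, which is worth measuring first) is capping the ZFS ARC, since by default it can grow to roughly half of RAM and compete with VM memory. A sketch, sized for illustration only:

```shell
# Cap the ZFS ARC at 4 GiB (value in bytes; pick a size that leaves
# enough RAM for your VMs -- 4 GiB is just an example).
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# Apply to the running system immediately (this path exists once the
# zfs module is loaded):
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# Rebuild the initramfs so the limit also applies at boot:
update-initramfs -u
```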

Jason
 

dirtbag

I think they went away after I figured out how to add the no-subscription repos to my servers and updated them.
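In case it helps anyone else finding this thread: on PVE 4.x (Debian Jessie) adding the no-subscription repo looks roughly like this (a sketch; double-check the wiki for the exact line for your release):

```shell
# Add the pve-no-subscription repository for Proxmox VE 4.x (Jessie):
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

# Optionally comment out the enterprise repo if you have no subscription,
# so apt-get update stops erroring on it:
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

apt-get update
apt-get dist-upgrade
```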

-db
 
