Proxmox VE 4.0 beta2 released!

Hi,

zfsutils package does not work on a fresh install.
I can't post the full error (You are not allowed to post any kinds of links, images or videos until you post a few times.)
Error message:
Setting up zfsutils (0.6.5-pve1~jessie) ...
insserv: Service zfs-mount has to be enabled to start service zfs-zed
insserv: exiting now!
update-rc.d: error: insserv rejected the script header
dpkg: error processing package zfsutils (--configure):



Seems to be fixed in 0.6.5.1-4, please update :-)
I can't post links; search Google for the keywords zfsutils insserv to find the bug report on GitHub.
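To check whether an installed package predates the fix, the two version strings can be compared with a plain version sort. A minimal sketch (note that `sort -V` only approximates dpkg's own version comparison, which treats suffixes like `~jessie` specially):

```shell
# Compare the broken package version against the fixed one;
# in a version sort the older version comes out first.
broken="0.6.5-pve1"
fixed="0.6.5.1-4"
oldest=$(printf '%s\n%s\n' "$broken" "$fixed" | sort -V | head -n1)
if [ "$oldest" = "$broken" ]; then
    echo "installed version predates the fix -- upgrade zfsutils"
fi
```

On a Debian system, `dpkg --compare-versions "$broken" lt "$fixed"` gives the authoritative answer.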
 
Works here: fresh beta2 install from the ISO, upgraded via apt to the latest packages.

It seems you are doing something different?
 
Sorry, I forgot to mention: plain Jessie install via netinstall, then a manual installation following the howto in the wiki.
 
I'm running 4.0 beta2 with two VMs. Will I be able to migrate to the stable 4.x release with a subscription?

Thanks!

yes.
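For reference, the usual approach once stable is out (a sketch, assuming the repository naming documented for PVE 4.x on Jessie) is to replace the beta/test repository entry with the enterprise one and run a dist-upgrade:

```
# /etc/apt/sources.list.d/pve-enterprise.list
# (requires a valid subscription key; then: apt-get update && apt-get dist-upgrade)
deb https://enterprise.proxmox.com/debian jessie pve-enterprise
```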
 
Kernel 4.2.1 has a problem with the sky2 network card of the Mac mini 1,1:

Code:
[191304.388971] eth0: hw csum failure
[191304.389550] CPU: 0 PID: 0 Comm: swapper/0 Tainted: P           O    4.2.1-1-pve #1
[191304.389553] Hardware name: Apple Inc. Macmini2,1/Mac-F4208EC8, BIOS MM21.88Z.009A.B00.0706281359 06/28/07
[191304.389557]  ffff880001247901 ffff8800bc4039b8 ffffffff817c9063 0000000000040400
[191304.389568]  ffff88003626e000 ffff8800bc4039d8 ffffffff816b35c2 ffffffff816a1fa0
[191304.389572]  ffff880001247900 ffff8800bc403a18 ffffffff816aa128 ffff8800bc403a58
[191304.389576] Call Trace:
[191304.389579]  <IRQ>  [<ffffffff817c9063>] dump_stack+0x45/0x57
[191304.389595]  [<ffffffff816b35c2>] netdev_rx_csum_fault+0x42/0x50
[191304.389599]  [<ffffffff816a1fa0>] ? skb_push+0x40/0x40
[191304.389603]  [<ffffffff816aa128>] __skb_checksum_complete+0xc8/0xd0
[191304.389609]  [<ffffffff81796798>] ipv6_mc_validate_checksum+0x98/0x150
[191304.389616]  [<ffffffff816a8a7e>] skb_checksum_trimmed+0x9e/0x190
[191304.389623]  [<ffffffff81796961>] ipv6_mc_check_mld+0x111/0x330
[191304.389629]  [<ffffffff817a9bd2>] br_multicast_rcv+0x82/0xbe0
[191304.389634]  [<ffffffff810b0003>] ? update_group_capacity+0x113/0x200
[191304.389640]  [<ffffffff813bf9b5>] ? find_next_bit+0x15/0x20
[191304.389644]  [<ffffffff813a0000>] ? cfq_quantum_store+0x30/0x50
[191304.389647]  [<ffffffff810b022f>] ? update_sd_lb_stats+0x13f/0x540
[191304.389651]  [<ffffffff817a0f68>] br_handle_frame_finish+0x298/0x600
[191304.389657]  [<ffffffff816eaf01>] ? nf_iterate+0x31/0x80
[191304.389663]  [<ffffffff816eafb9>] ? nf_hook_slow+0x69/0xc0
[191304.389669]  [<ffffffff817a141f>] br_handle_frame+0x14f/0x270
[191304.389674]  [<ffffffff817a0cd0>] ? br_handle_local_finish+0x80/0x80
[191304.389678]  [<ffffffff816b6453>] __netif_receive_skb_core+0x253/0x9f0
[191304.389681]  [<ffffffff816b6c0a>] __netif_receive_skb+0x1a/0x70
[191304.389684]  [<ffffffff816b6c83>] netif_receive_skb_internal+0x23/0x80
[191304.389688]  [<ffffffff816b7825>] napi_gro_receive+0xb5/0xf0
[191304.389695]  [<ffffffffc000020b>] ? sky2_rx_submit+0x2b/0x80 [sky2]
[191304.389703]  [<ffffffffc000621b>] sky2_poll+0x5eb/0xd80 [sky2]
[191304.389711]  [<ffffffff810eef68>] ? tick_program_event+0x48/0x80
[191304.389716]  [<ffffffff816b712e>] net_rx_action+0x1fe/0x310
[191304.389720]  [<ffffffff8107b8f5>] __do_softirq+0x105/0x260
[191304.389723]  [<ffffffff8107bbae>] irq_exit+0x8e/0x90
[191304.389727]  [<ffffffff817d26f8>] do_IRQ+0x58/0xe0
[191304.389731]  [<ffffffff817d066b>] common_interrupt+0x6b/0x6b
[191304.389735]  <EOI>  [<ffffffff8165ba31>] ? cpuidle_enter_state+0xf1/0x220
[191304.389748]  [<ffffffff8165ba10>] ? cpuidle_enter_state+0xd0/0x220
[191304.389751]  [<ffffffff8165bb97>] cpuidle_enter+0x17/0x20
[191304.389755]  [<ffffffff810b7b0b>] call_cpuidle+0x3b/0x70
[191304.389759]  [<ffffffff8165bb73>] ? cpuidle_select+0x13/0x20
[191304.389765]  [<ffffffff810b7dc8>] cpu_startup_entry+0x288/0x350
[191304.389772]  [<ffffffff817bdb0c>] rest_init+0x7c/0x80
[191304.389777]  [<ffffffff81d65fd4>] start_kernel+0x48b/0x498
[191304.389781]  [<ffffffff81d65120>] ? early_idt_handler_array+0x120/0x120
[191304.389787]  [<ffffffff81d654d7>] x86_64_start_reservations+0x2a/0x2c
[191304.389792]  [<ffffffff81d65614>] x86_64_start_kernel+0x13b/0x14a
 
I have a problem with a Dell PowerEdge 2950 server with a PERC 5/i RAID controller. At first I installed Proxmox 4 beta1 with the 3.19 kernel and everything was fine. After upgrading to kernel 4.2.0(1), the server crashed and rebooted during an rsync command in one KVM machine. I tested the controller with the Dell diagnostics and it is OK. I tried again with 4.2.x and the server crashed and rebooted again; the RAID was degraded with a foreign configuration, but with no faulty disk. Now I am rebuilding the RAID and running kernel 3.19, and everything works fine as in the beginning.

The errors are:

Oct 4 23:07:24 perf-ve kernel: [609444.828057] megasas: [ 0]waiting for 52 commands to complete
Oct 4 23:07:24 perf-ve kernel: [609449.848030] megasas: [ 5]waiting for 51 commands to complete
Oct 4 23:07:24 perf-ve kernel: [609454.868016] megasas: [10]waiting for 51 commands to complete
Oct 4 23:07:24 perf-ve kernel: [609459.888026] megasas: [15]waiting for 51 commands to complete
Oct 4 23:07:24 perf-ve kernel: [609464.908022] megasas: [20]waiting for 50 commands to complete
Oct 4 23:07:24 perf-ve kernel: [609469.928024] megasas: [25]waiting for 50 commands to complete
Oct 4 23:07:24 perf-ve kernel: [609474.948146] megasas: [30]waiting for 50 commands to complete
Oct 4 23:07:24 perf-ve kernel: [609479.968027] megasas: [35]waiting for 49 commands to complete

pveversion -v
proxmox-ve: not correctly installed (running kernel: 3.19.8-1-pve)
pve-manager: not correctly installed (running version: 4.0-43/60482016)
pve-kernel-3.19.8-1-pve: 3.19.8-3
pve-kernel-4.2.0-1-pve: 4.2.0-13
pve-kernel-4.2.1-1-pve: 4.2.1-14
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: not correctly installed
qemu-server: not correctly installed
pve-firmware: 1.1-7
libpve-common-perl: 4.0-25
libpve-access-control: 4.0-8
libpve-storage-perl: 4.0-24
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-9
pve-container: not correctly installed
pve-firewall: not correctly installed
pve-ha-manager: not correctly installed
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve3~jessie
openvswitch-switch: 2.3.2-1
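Until the 4.2 kernel issue is resolved, the working 3.19 kernel can be pinned as the default boot entry in GRUB. A sketch (the exact menu entry title below is an assumption; check /boot/grub/grub.cfg for the real one):

```
# /etc/default/grub -- boot the 3.19 kernel by default; run update-grub afterwards
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 3.19.8-1-pve"
```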
 
I see that LVM on Proxmox now supports LXC, but when I do a backup in snapshot mode it does not create an LVM snapshot and instead falls back to suspend mode, which is very slow... is this normal?

I also tested snapshot mode with ZFS on my other server that has ECC... 53 seconds to back up a 5 GB CT. Perfect :)
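The backup mode can be forced on the command line, which makes any fallback from snapshot to suspend visible in the task log. A sketch (the CT ID 100 and the storage name are placeholders):

```shell
# Force snapshot mode for container 100; vzdump reports in its log
# if it cannot take a snapshot on the underlying storage.
vzdump 100 --mode snapshot --storage local
```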
 
