Proxmox VE 4.0 beta1 released!

No, because I consider a two-node setup a dangerous option.

Hi Dietmar

I know about the risks of "two_node" with hardware fence devices, so instead of configuring a hardware fence device I enable manual fencing (fence_ack_manual). This way I can first see what is going on and then make a wiser decision about my next actions.
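
For example, once I have checked by other means that the failed node is really powered off, the acknowledgement itself is a single command run on a surviving node (a sketch; 'node2' is a hypothetical node name, and the exact options depend on the cluster suite version, see fence_ack_manual(8)):

# run only after you are sure the failed node is powered off:
fence_ack_manual node2    # asks for explicit confirmation before overriding fencing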

An example of this scenario exists in my mini-lab, and I also know of small companies that have only two PVE nodes (just look at how many questions people in this forum have asked about running only two PVE nodes configured for HA).

So I would like to ask that PVE support two nodes in HA mode with fence devices, or in the worst case with manual fencing only (I am very sure that many people would be grateful, myself included).

Best regards
Cesar

Re-edit: what abounds does no harm.
 
You can simply run that inside a VM.
But this would add all the virtualization overhead I just wanted to avoid.

I think I'll give an LXC container with Kubernetes a spin in the 4.0 beta :cool:

Since my other questions went unanswered, I conclude that there are no plans to implement any kind of awareness for Docker containers in Proxmox VE for now. Is this true?
 
I get an error starting an LXC container after a full server restart.
lxc-start: bdev.c: find_free_loopdev: 1907 No loop device found
lxc-start: conf.c: mount_rootfs: 883 No such file or directory - failed to get real path for 'loop:/var/lib/vz/images/110/vm-110-rootfs.raw'
lxc-start: conf.c: setup_rootfs: 1290 failed to mount rootfs
lxc-start: conf.c: do_rootfs_setup: 3693 failed to setup rootfs for '110'
lxc-start: conf.c: lxc_setup: 3775 Error setting up rootfs mount after spawn
lxc-start: start.c: do_start: 702 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1178 failed to spawn '110'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

I use Debian 7.8/8.1 i686 and Ubuntu 14.10 i686 containers that were created from templates of already existing OpenVZ containers from Proxmox 3.

The workaround for me is:
mount /var/lib/vz/images/110/vm-110-rootfs.raw /mnt/tmp; umount /mnt/tmp

Container start/stop works until the next full server restart.
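
A possible way to make this survive reboots, assuming the root cause is simply that no /dev/loopN nodes exist until the loop driver is first used (an assumption on my side, not verified):

echo loop >> /etc/modules                                      # load the loop module at boot (Debian convention)
echo 'options loop max_loop=64' > /etc/modprobe.d/loop.conf    # pre-create 64 loop devices instead of allocating on demand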

Also, when the container starts successfully, I see on the screen:
EXT4-fs (loop1): couldn't mount as ext2 due to feature incompatibilities
EXT4-fs (loop1): couldn't mount as ext3 due to feature incompatibilities
 
instead of configuring a hardware fence device I enable manual fencing (fence_ack_manual). This way I can first see what is going on and then make a wiser decision about my next actions.

That makes HA simply useless - instead, you can start the VM manually.
 
Since my other questions went unanswered, I conclude that there are no plans to implement any kind of awareness for Docker containers in Proxmox VE for now. Is this true?

AFAIK the Docker people are working on that (see the lxc-devel list for details).
 
I have tried LXC with a couple of different templates and receive the same error: "TASK ERROR: Insecure dependency in unlink while running with -T switch at /usr/share/perl5/PVE/Tools.pm line 187." Is anybody experiencing the same error?

Same error while restoring an LXC backup or a template from an OpenVZ backup.

Formatting '/var/lib/vz/images/110/vm-110-rootfs.raw', fmt=raw size=4294967296
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: 4096/1048576 done
Creating filesystem with 1048576 4k blocks and 262144 inodes
Filesystem UUID: 11db835d-9fbe-478b-aa78-d26e2de4866b
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: 0/32 done
Writing inode tables: 0/32 done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: 0/32 done

allocated image: /var/lib/vz/images/110/vm-110-rootfs.raw
extracting archive '/mnt/pve/datastore/dump/vzdump-lxc-110-2015_06_28-23_38_41.tar.gz'
Total bytes read: 759900160 (725MiB, 64MiB/s)
TASK ERROR: Insecure dependency in unlink while running with -T switch at /usr/share/perl5/PVE/Tools.pm line 187.
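
For anyone wondering what the Perl message means: under taint mode (-T), data that comes from outside the program (environment, arguments, file contents) may not be passed to unlink until it has been untainted, usually via a regex capture. A minimal command-line illustration (not the actual PVE code):

F=/tmp/testfile perl -T -e 'unlink $ENV{F}'
# dies: Insecure dependency in unlink while running with -T switch at -e line 1.

F=/tmp/testfile perl -T -e '($f) = $ENV{F} =~ m{^([\w./-]+)\z} or die; unlink $f'
# the regex capture untaints the path, so the unlink is allowed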
 
That makes HA simply useless - instead, you can start the VM manually.

That's right, but when only a single command line needs to be executed, the task is easier to carry out.

Moreover, since no one can tell when a PVE host will die or break down, a small group of people in the company should be able to perform such a task and be ready to execute it at any time. And in a small business, these people are not always part of the IT department.

So if I have to explain to a group of people outside the IT department which commands must be executed (always checking first that the failed PVE host is powered off), and expect them to remember all these steps at the moment they need to apply them, I believe that is too much to ask of them.

So I would be very thankful if, in PVE 4.x with only two nodes, we had some way of executing an HA failover manually (obviously with only a single command line); it would simplify the lives of many people, mine included.

Moreover, many small businesses cannot afford to buy a third server to get the three quorum votes required by PVE 4.x.
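
Today, as far as I understand, the manual recovery on the surviving node looks something like this (a sketch, assuming the surviving node is named nodeA, the failed node is nodeB, and the VM ID is 100; please correct me if I am wrong):

pvecm expected 1                                 # accept a single vote so /etc/pve becomes writable again
mv /etc/pve/nodes/nodeB/qemu-server/100.conf /etc/pve/nodes/nodeA/qemu-server/   # move the VM config to the surviving node
qm start 100                                     # start the VM there

That is already three commands, each with room for mistakes, which is why a single safe command would help so much.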

Moreover, if you accept adding this feature to PVE 4.x, I pledge to be a beta tester.

Best regards
Cesar
 
I did a deployment of the beta this weekend on top of Jessie 8, because (a) the Proxmox installer CD doesn't seem to be able to do a native UEFI install and (b) it lacks the flexibility to partition my disks. I did try the Proxmox installer first and noticed maybe a small glitch in the configuration: I chose Belgian-French as the keyboard layout, but after finishing the install the setting was QWERTY (didn't check which keymap).

Points to look after:

LXC container -> did a restore from an OpenVZ backup as mentioned in the wiki, but it fails on the Debian version. The system was an Ubuntu 12.04 installed from an OpenVZ template, and apparently they ship a debian_version file containing wheezy/sid instead of a number. Changing that to just '7' made it run further (see the sketch below). The machine starts and works fine, but it doesn't run the init scripts and doesn't know what runlevel it is in. 'init 2' fixes some problems, but I still need to manually run /etc/init.d/networking to get my interface up.
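
What I did, roughly (a sketch, assuming CT ID 100 with a raw rootfs image under /var/lib/vz; adjust the ID and path to your storage):

mount /var/lib/vz/images/100/vm-100-rootfs.raw /mnt/tmp   # loop-mount the container's rootfs
echo 7 > /mnt/tmp/etc/debian_version                      # replace 'wheezy/sid' with a plain number
umount /mnt/tmp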

If someone could help me get around this problem, I can start to deploy more CTs.

Apart from that, I have one VM (Sophos UTM) that wouldn't start. On 3.4 I had changed all my NICs from virtio to E1000 because the virtio NICs froze up after a while. Now on beta 1 of V4 the machine wouldn't even start with E1000, so they're running with virtio again, and so far that is without any problem.
 
Nothing except records of named/bind9, and an older log from when this machine was running just fine on Proxmox 3.4 has the same content.
 
I'm closing this announcement thread, as it has become confusing and off topic. If you have questions or problems with the beta1, please open a new thread with a suitable topic.
 