"Canonical itself has used LXC for numerous Ubuntu.com and Canonical.com production services for many years, and in fact LXC has been used in every production OpenStack cloud that Canonical has deployed to date." via...
Yes. This was the trick. In order to increase the $fence_delay value, you must modify the file /usr/share/perl5/PVE/HA/NodeStatus.pm on all cluster nodes, because any node could take the master role in the cluster at a certain moment and trigger the fencing process.
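For anyone else doing the same hack, here is a minimal sketch (assuming the file declares the delay as my $fence_delay = <number>; and 120 is just my example value; keep a backup, and note that a package update will overwrite the change):

# run on every cluster node; bumps $fence_delay to 120 seconds
sed -i.bak -E 's/^(my \$fence_delay = )[0-9]+;/\1120;/' /usr/share/perl5/PVE/HA/NodeStatus.pm
# restart the HA services so the new value is picked up
systemctl restart pve-ha-crm pve-ha-lrm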
Now, the working test is:
1)...
For the sake of simplicity, I have described only the 3 nodes in the cluster that are participating in the test.
In fact, it is a 4-node cluster, and node4 is the master at this moment! I guess I should hack/increase $fence_delay in /usr/share/perl5/PVE/HA/NodeStatus.pm on node4 as well, because...
My pve-ha-manager version is 1.0-40, on Proxmox 4.4.
Thank you for the tips on graceful reboot, but that is not my use case. The default behaviour on graceful reboot is to freeze the VMs, and it works as expected, even in the case of long reboots.
In my situation, systemctl stop pve-ha-lrm is not usable, because...
A) The first problem is the triggering moment.
A1) Monitoring via cron when a node changes status from online to fenced.
Doable with GET /cluster/ha/status/manager_status, either from an outside website or from a script inside the nodes; a rough sketch follows. (The script should live on all nodes, but should only be triggered if the...
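Something like this is what I have in mind for A1, as a rough sketch only (the grep for "fence" is my assumption about the JSON the API returns, and the mail alert needs a working MTA; adapt both to what you actually see):

#!/bin/sh
# cron: * * * * * root /usr/local/bin/check-fenced.sh
# poll the HA manager status and alert when some node is in the fence state
STATUS=$(pvesh get /cluster/ha/status/manager_status)
echo "$STATUS" | grep -q '"fence"' && \
    echo "$STATUS" | mail -s "HA: a node entered the fence state" admin@example.com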
Why expressly link btrfs to zfs, and not another filesystem? Let me explain.
PERFORMANCE VERSUS HA
There is always a trade-off between performance and high availability.
With Ceph distributed storage you get HA, but there is a performance penalty, induced by network latency.
With...
Same issue (pools gone), but PVE services operating normally.
root@nvme:/var/log# dpkg -l | grep zfs
ii libzfs2linux 0.6.5.8-pve14~bpo80 amd64 OpenZFS filesystem library for Linux
ii zfs-zed 0.6.5.8-pve14~bpo80 amd64...
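For anyone hitting the same symptom, the first thing I would check (just my assumption, not a confirmed fix) is whether the pools are simply not imported at boot:

zpool status     # shows currently imported pools ("no pools available" when they are gone)
zpool import     # lists pools visible on disk but not imported
zpool import -a  # imports every pool it can find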
From my tests, the error
tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
shows up when trying to restore an unprivileged container from a backup of a privileged container. Is the cause that the privileged container is missing udev (checked via mount | grep udev)?
This is very strange, because the...
Thank you for the reply. The only thing that works, from your suggestions, is fstrim inside the container.
I am thinking of running something like https://forum.proxmox.com/threads/proxmox-execute-command.26030/ from outside, on cron, for my clients (see the sketch after the list below).
However, this is the most performant solution, also according to...
Tested on the latest Proxmox 4.4.
Similar issues on docker:
- Delete data in a container devicemapper can not free used space https://github.com/docker/docker/issues/18867#issuecomment-223206599
- Device-mapper does not release free space from removed images...
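The rough sketch I have in mind, driving fstrim from the host via pct exec (parsing pct list this way and the weekly schedule are my assumptions; test on a single container first):

#!/bin/sh
# /etc/cron.weekly/ct-fstrim -- trim the rootfs of every running container
for CTID in $(pct list | awk 'NR>1 && $2=="running" {print $1}'); do
    pct exec "$CTID" -- fstrim -v /
done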
Now it works: "Support for current 3.19 and 4.4 64-bit kernels is now available with this release. These kernels are found on Ubuntu 14.04 and 16.04 machines."
via http://wiki.r1soft.com/display/ServerBackupManager/Server+Backup+6.2.1+Release+Notes#ServerBackup6.2.1ReleaseNotes-KernelModules
OFFICIAL WARNINGS:
http://docs.ceph.com/docs/master/start/quick-rbd/
You may use a virtual machine for your ceph-client node, but do not execute the following procedures on the same physical node as your Ceph Storage Cluster nodes (unless you use a VM)...
Sorry for the noise, but for me it seems that it works as designed!
Despite the bug report and the information that this is not being implemented, I did a paranoid test:
root@vera:/etc/pve/nodes/vera/lxc# cat 103.conf
arch: amd64
cores: 1
hostname: vera3
memory: 1024
mp1...
For my use case, the default Proxmox firewall rules set up the public address. Forcing the private network works with https://pve.proxmox.com/wiki/Firewall#pve_firewall_ip_aliases:
# /etc/pve/firewall/cluster.fw
[ALIASES]
local_network 192.168.0.1/24
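And for completeness, the alias can then be referenced by name in a rule; a small illustration only (the SSH macro and direction are just examples, per the wiki page above):

# /etc/pve/firewall/cluster.fw (continued)
[RULES]
IN SSH(ACCEPT) -source local_network  # allow SSH only from the aliased private network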
I read that "recursive sync" is a feature that is not so easy to implement. On the other hand, how complicated is this bug/feature? Do you have an ETA for fixing it, given the other Proxmox development priorities?
Thank you!
Practically, you have done: "Is it a reasonable workaround to set the container to privileged, install httpd, and then set it back again? You may then need to change the file ownerships afterward, but sure." via https://github.com/lxc/lxd/issues/1245#issuecomment-253804636
But editing conf and...
Quote from stgraber "Until the kernel supports unprivileged fs capabilities you need security.privileged set to true which should then fix your problem, at the cost of much degraded container security." https://github.com/lxc/lxd/issues/1245#issuecomment-233199884
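For reference, this is roughly what that workaround looks like on Proxmox, as a sketch only (103 is just my example VMID, I assume the conf contains an "unprivileged: 1" line, and flipping the flag does not remap existing uids/gids, which is exactly why the ownership fix-up is mentioned in the quote):

pct stop 103
sed -i 's/^unprivileged: 1/unprivileged: 0/' /etc/pve/lxc/103.conf  # temporarily privileged
pct start 103
pct exec 103 -- apt-get install -y apache2   # install httpd while privileged
pct stop 103
sed -i 's/^unprivileged: 0/unprivileged: 1/' /etc/pve/lxc/103.conf  # back to unprivileged
pct start 103
# file ownerships inside the container may then need fixing, as the quote says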