Currently I am using hardware RAID with LVM thin volumes and LXC containers, which works great, but I am trying to switch to ZFS.
Is it possible to hide the mount points when using ZFS? Here is what I get on my host system:
zfs list
NAME USED AVAIL REFER...
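In case it helps, hiding datasets from the mount table can usually be done with ZFS properties. A minimal sketch, assuming a pool called `rpool` with a dataset `rpool/data` (substitute your own names from `zfs list`):

```shell
# Keep the dataset but never mount it automatically
zfs set canmount=off rpool/data

# Or remove its mount point entirely
zfs set mountpoint=none rpool/data

# Verify the properties took effect
zfs get canmount,mountpoint rpool/data
```

Note that `canmount=off` on a parent dataset does not stop its children from mounting; each child inherits `mountpoint` but has its own `canmount`.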
My PBS server gets a new certificate every 3 months, so I have to manually get the new fingerprint from the server and then import it on the PVE servers. My idea is to automate this process with Ansible. The first step is to get the fingerprint from the CLI. This is my idea:
proxmox-backup-manager cert...
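A sketch of extracting just the fingerprint for use in a playbook. The exact label in the output may differ by PBS version, so the `awk` pattern is an assumption; the sample line below only illustrates the format it expects:

```shell
# Sample of what the output is assumed to look like:
sample='Fingerprint (sha256): aa:bb:cc:dd'
fingerprint=$(printf '%s\n' "$sample" | awk -F': ' '/^Fingerprint/ {print $2}')
echo "$fingerprint"

# On the real server you would pipe the actual command instead:
# proxmox-backup-manager cert info | awk -F': ' '/^Fingerprint/ {print $2}'
```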
I mounted the CT's disk on the host like this
mount /dev/mapper/pve-vm--103--disk--0 /mnt/103/
and modified some files.
After that I unmounted it and started the container. Now I cannot change the files that were modified outside the container.
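If this is an unprivileged container, files written as root on the host end up owned by host uid 0, which the container's mapped root cannot modify. A sketch of checking and fixing this, assuming the default Proxmox uid offset of 100000; the file path is just an example:

```shell
# On the host, with the volume mounted again, check who owns a file you edited
stat -c '%u:%g %n' /mnt/103/etc/someconfig

# If it shows 0:0, shift it back into the container's mapped range
chown 100000:100000 /mnt/103/etc/someconfig
```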
I am getting this error when initializing a disk
Disk sdc - Initialize Disk with GPT
2020-11-23T02:47:03+01:00: initialize disk sdc
2020-11-23T02:47:03+01:00: TASK ERROR: command "sgdisk" "/dev/sdc" "-U" "R" failed - status code: 2 - Caution: invalid main GPT header, but valid backup...
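That error usually means stale GPT structures left over from a previous use of the disk. A sketch of clearing them so the initialization can succeed. WARNING: this destroys all partition data on the disk, so double-check the device name first:

```shell
# Remove both the primary and backup GPT (and any MBR remnants)
sgdisk --zap-all /dev/sdc

# Alternatively, wipe all filesystem/partition-table signatures:
# wipefs --all /dev/sdc

# Then retry "Initialize Disk with GPT" in the GUI
```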
I have used greylisting before and I know it can be set up to automatically whitelist a sender if the sender passes the check, that is, if the sender is frequent. Is this possible with Proxmox Mail Gateway? Maybe @tom ?
For testing purposes I created an LXC CT with one network card:
IP: 10.10.10.224/24
GW: 10.10.10.1
Route inside the CT:
# ip route show
default via 10.10.10.1 dev eth0
10.10.10.0/24 dev eth0 proto kernel scope link src 10.10.10.224
Then from the Proxmox GUI I changed the GW from 10.10.10.1 to nodes...
I just tried to install the new kernel and I got this error:
No space left on device
df -h /boot/
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 488M 471M 0 100% /boot
If I list the contents of the boot partition I get this:
ls /boot
config-4.13.13-1-pve config-4.4.40-1-pve...
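The usual fix is removing unused old kernels so the new one fits. A sketch, where the package name below is an example taken from the listing; keep the running kernel (`uname -r`) and at least one known-good spare:

```shell
uname -r                            # currently running kernel -- do NOT remove this one
dpkg -l 'pve-kernel-*' | grep ^ii   # list installed kernel packages

apt purge pve-kernel-4.4.40-1-pve   # example: an old, unused kernel from the listing
apt autoremove                      # clean up remaining unused dependencies
```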
When I log in to a container and list mounts, I see the container's ext4 thin partition is mounted with data=ordered, and I would like to change it to data=writeback. How can I do this, since lxc.rootfs.options is deprecated?
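One possible workaround (not Proxmox-specific) is baking the journal mode into the ext4 superblock so it applies at every mount, instead of passing it as a mount option. A sketch, to be run with the CT stopped and the volume unmounted; the mapper path is an example:

```shell
# Set journal_data_writeback as a default mount option in the superblock
tune2fs -o journal_data_writeback /dev/mapper/pve-vm--103--disk--0

# Verify it was recorded
tune2fs -l /dev/mapper/pve-vm--103--disk--0 | grep 'Default mount options'
```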
I have a small LXC container with 1GB of RAM but a large disk partition, 800GB of images, which are synced with rsync. While rsync is syncing data over the network, the container's RAM usage is at most 10%. The problem is that the host's memory usage is increasing over time and finally gets to...
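Growth like this is often just the kernel's page cache filling up with the rsync'd data on the host side, which is reclaimable memory rather than a leak. A sketch of checking that:

```shell
free -h    # compare 'buff/cache' vs 'available': cache counts as used but is reclaimable

# Diagnostic only (hurts performance until the cache refills):
sync
echo 3 > /proc/sys/vm/drop_caches
free -h    # if usage drops back, the growth was page cache, not a leak
```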
Does pct restore support any kind of bandwidth limiting? Restoring from a backup always makes the dedicated server unusable during the restore process....
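For what it's worth, more recent PVE versions accept a bandwidth limit on restore; check `pct help restore` on your node to confirm it is available there. A sketch, where the vmid and archive path are examples:

```shell
# Limit restore I/O to roughly 50 MiB/s (the value is in KiB/s)
pct restore 103 /path/to/vzdump-lxc-103.tar.zst --bwlimit 51200
```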
I just created my Debian 8 CT as an unprivileged container and got an error 'failed to run autodev hooks for container' .... Is there a way to fix this?
I noticed that when logged in inside an LXC container, if I do cat /proc/partitions I have access to the host's partitions.
Is there a way to hide the host's partitions?
pveversion
pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.35-1-pve)
I used vzdump backup with stop mode on an LXC CT running mysqld. After I did a restore, mysqld did not start. This makes sense, as I used stop mode for the backup. I did some testing, and even when I shut down the LXC CT, do a backup, and then restore the CT from backup, the mysql service...