I upgraded my hosts and removed all swap from the LXC guests, and this is the result.
Now I wonder whether the problem was the code or the swap. I'm betting LXC doesn't like ZFS with swap.
I was trying to clone a new standalone Proxmox installation on KVM by backing it up to a PBS server and then restoring that backup as a new virtual machine.
Worked quite well.
Then I updated
/etc/hostname
/etc/hosts
/etc/network/interfaces
and regenerated ssh keys with
ssh-keygen -A
After reboot I...
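For reference, the rename steps above can be sketched roughly like this. The old/new hostnames and the domain are placeholders, and the demo edits scratch copies rather than the live /etc files:

```shell
# Demo of the rename on scratch copies; on a real clone, edit /etc/hostname,
# /etc/hosts and /etc/network/interfaces in place.
OLD=pve1; NEW=pve2                      # placeholder hostnames
ETC=$(mktemp -d)
echo "$OLD" > "$ETC/hostname"
printf '127.0.1.1 %s.example.com %s\n' "$OLD" "$OLD" > "$ETC/hosts"
sed -i "s/${OLD}/${NEW}/g" "$ETC/hostname" "$ETC/hosts"
cat "$ETC/hostname"                     # -> pve2
# Finish on the real host with fresh SSH host keys, as in the post above:
# rm -f /etc/ssh/ssh_host_* && ssh-keygen -A && reboot
```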
After the upgrade, this message started to flash occasionally on LXC terminals.
channel 3: open failed: administratively prohibited: open failed
Then after a few seconds it disappears and shows something about forwarding, but too quickly to read.
Remote port forwarding failed to listen on port XXX
Oddly...
I can't seem to get remote logging to work on Debian 12 LXC containers.
The FQDN works on QEMU servers, but the same rsyslog.conf does not give the full name from an LXC container.
Any idea what's going on here..?
If Debian 12 was installed from the minimal network-boot CD/ISO/USB, it probably does not include bridge-utils, which is a mandatory component for vmbr bridges to work.
Make sure you install it with apt install bridge-utils first, then fix these files: /etc/hosts and /etc/network/interfaces...
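For reference, a minimal bridge stanza for /etc/network/interfaces might look like this. The NIC name eno1 and the addresses are placeholders; check yours with `ip link` (the underscore option names are the classic bridge-utils/ifupdown syntax):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
```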
Hmm. I don't understand why root@pam always asks for a password when I do this:
/usr/bin/proxmox-backup-client backup etc.pxar:/etc var.pxar:/var --repository backup.ic4.eu:store1
The backup client is the Proxmox host itself and should have the proper keys for the backup server.
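A hedged sketch of how I'd avoid the prompt: the client authenticates with PBS user credentials or an API token, not SSH keys, and it can take them from environment variables per the proxmox-backup-client docs. The repository string is the one from the command above; the secret is a placeholder:

```shell
# PBS_REPOSITORY / PBS_PASSWORD are read by proxmox-backup-client, so a cron
# job never prompts. Keep the real secret in a root-only file, not the script.
export PBS_REPOSITORY='backup.ic4.eu:store1'
export PBS_PASSWORD='placeholder-token-secret'
CMD="proxmox-backup-client backup etc.pxar:/etc var.pxar:/var --repository $PBS_REPOSITORY"
echo "$CMD"   # printed here instead of executed; run it on the Proxmox host
```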
I have been testing PMG for a while now and I have to say I'm impressed.
I like how simple it is to use.
There is one thing it's missing, though.
You can get finer-grained control (per address rather than per domain) by putting me@example.com in transport_maps together with virtual_alias_maps.
# transport...
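A sketch of the per-address idea, with placeholder hosts (both files would be compiled with `postmap` afterwards):

```
# /etc/postfix/transport — route one address differently from its domain
me@example.com      smtp:[internal-mail.example.com]
example.com         smtp:[mail.example.com]

# /etc/postfix/virtual — virtual_alias_maps entry keeping the address resolvable
me@example.com      me@example.com
```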
I currently have this command running in cron:
echo "ConcurrentDatabaseReload no" >> /etc/clamav/clamd.conf && systemctl restart clam*
I would like to disable all ClamAV-related services in our PMG.
Any suggestions on how to do that?
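Two hedged suggestions: make the append idempotent (the plain `>>` above adds a duplicate line on every cron run), and disable the units instead of restarting them. The demo below edits a scratch file; point CONF at /etc/clamav/clamd.conf on the real box:

```shell
CONF=$(mktemp)                                  # demo file; real: /etc/clamav/clamd.conf
add_once() { grep -qx "$1" "$CONF" || echo "$1" >> "$CONF"; }
add_once 'ConcurrentDatabaseReload no'
add_once 'ConcurrentDatabaseReload no'          # second run is a no-op
grep -c 'ConcurrentDatabaseReload' "$CONF"      # -> 1
# To disable ClamAV entirely (unit names are assumptions; check with
# `systemctl list-unit-files 'clam*'` first):
# systemctl disable --now clamav-daemon clamav-freshclam
```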
How do I back up/copy the "Unused Disk 0" sdb-lvm:vm2404-disk-1 from the Proxmox host using rsync?
I can "see it" in the Proxmox GUI, but since it's not mounted I'm not sure how to access it remotely.
(sdb is LVM.)
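A hedged sketch: an unused LVM volume is a block device, which rsync cannot read as a file, so streaming it with dd is one option. The device path is guessed from the GUI label (volume group `sdb-lvm`) and `backuphost` is a placeholder; the command is printed rather than executed here:

```shell
LV=/dev/sdb-lvm/vm2404-disk-1     # assumption: VG "sdb-lvm", LV name from the GUI
CMD="dd if=$LV bs=4M status=progress | ssh root@backuphost 'cat > /backup/vm2404-disk-1.raw'"
echo "$CMD"                       # verify the LV path with `lvs` before running
```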
How does one go about fixing ZFS problems inside a virtual drive?
Should it be done inside the VM? (Might be difficult if you can't start it.)
or
Should it be done on the host?
What commands do you use?
It seems that a "new" sequential scrub algorithm for ZFS is causing headaches on some of our nested systems.
The "new" metadata scan reads through the structure of the pool and gathers an in-memory queue of I/Os, sorted by size and offset on disk. The issuing phase will then issue the scrub...
I have been testing my script to copy fail2ban log files to Proxmox firewall and have managed to make it work... one time :)
cat /root/bin/banned2proxmox.sh
#!/bin/bash
#
# Sync fail2ban log files from client servers
rsync -a root@vm1.ic4.eu:/var/log/fail2ban.log /root/bin/fail2ban-vm1.log...
I think there is a problem with pvecm, since it can't seem to join a cluster that uses the secondary NIC and IPv6 only.
All the nodes are listed in the /etc/hosts file, and all the nodes can ping each other and echo over IPv6 (using both hostname and IP).
But still every time I try to add a node to...
It seems that lots of people are getting tired of the constant struggle with Amavis and are jumping on the Rspamd bandwagon.
I'm noticing a clear drop in resource use on every mail server that has switched away from Amavis.
Any thoughts?
ERROR: migration aborted (duration 00:00:06): storage migration for 'vdd:subvol-601-disk-1' to storage '' failed - no storage ID specified
TASK ERROR: migration aborted
I tried to migrate an LXC container from one (nested) QEMU node to another and got this error.
I remember when I...
I would like to import a list of IPs into a Proxmox IPset rule.
Any idea how that might be done?
I have been collecting a Fail2Ban recidive list for a while now and would like to import it into a cluster-wide rule set for every LXC container.
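One hedged way to script the import is with `pvesh` against the cluster firewall API. The ipset name, list file, and sample IPs below are placeholders, and the loop prints the commands so you can review them before dropping the leading `echo`:

```shell
IPSET=recidive                                    # placeholder ipset name
LIST=$(mktemp)
printf '203.0.113.5\n198.51.100.7\n' > "$LIST"    # demo data; use your fail2ban export
# pvesh create /cluster/firewall/ipset --name "$IPSET"   # create the ipset once
while read -r ip; do
  echo pvesh create "/cluster/firewall/ipset/$IPSET" --cidr "$ip"
done < "$LIST"
```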
When I created a ZFS RAID, Proxmox allocated 13T out of 28T to data (i.e. local-zfs).
How do I increase the size allocated to data?
zfs list
NAME    USED   AVAIL  REFER  MOUNTPOINT
rpool   19.7T  805G   151K   /rpool
rpool/ROOT...
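If the 13T figure comes from a quota or reservation on the data dataset rather than from pool geometry, the dataset properties will show it. A hedged command sketch, assuming the usual Proxmox dataset name `rpool/data`:

```
# zfs get quota,refquota,reservation,refreservation rpool/data
# zfs set quota=none rpool/data    # only if a quota is actually set
```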