Thanks for your input.
1. I guess I already have the DNAT rule in place (line 34 in http://ix.io/3pUe).
2. telnet to port 25 of the mail server (LXC guest) from the Proxmox host succeeds:
root@server2:~# telnet 192.168.25.110 25
Trying 192.168.25.110...
Connected to 192.168.25.110.
Escape...
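Since telnet from the Proxmox host itself succeeds, the open question is whether the DNAT rule actually matches traffic arriving from outside. A small sketch for checking that (the rule details are assumed to match the paste linked above; adjust the port/grep as needed):

```shell
# Show the NAT table's PREROUTING chain with packet counters, so you can
# confirm the DNAT rule for port 25 exists and that its counter increases
# when an external client connects.
iptables -t nat -L PREROUTING -n -v --line-numbers | grep 'dpt:25'

# Watch the counters live while retrying the connection from outside:
watch -n1 "iptables -t nat -L PREROUTING -n -v | grep 'dpt:25'"
```

If the counter never moves during an external connection attempt, the packet is not reaching (or not matching) the DNAT rule at all.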
@Stoiko Ivanov Thanks for your input.
Although it has been a while, the problem reappears every time I upgrade the system (Ubuntu 18.04). This time, I can send emails, but not receive them!
Legends:
Proxmox host = server2.domain.tld
LXC mailserver = mail.domain.tld
From the proxmox host, it works...
Hi,
I am running a mailserver (ISPConfig) in a Proxmox LXC container which had been working fine but recently stopped working all of a sudden.
All the necessary ports are open on the Proxmox host:
# iptables -L | grep smtp
ACCEPT tcp -- anywhere 192.168.25.110 tcp dpt:smtp...
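An ACCEPT rule in the filter table is not enough if traffic never reaches the host or the MTA is not listening; two quick checks (hostname taken from this thread, `nc` availability is an assumption):

```shell
# From a machine OUTSIDE the Proxmox host, test whether port 25 is
# reachable at all:
nc -zv mail.domain.tld 25

# Inside the container, confirm the MTA is actually listening on 25:
ss -tlnp | grep ':25'
```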
@fabian , Thanks for the nifty pointer.
But there appears to be a size mismatch; I created it with 12G:
# pct config 108
arch: amd64
hostname: hdi.loc
memory: 1024
net0: name=eth0,bridge=vmbr0,gw=192.168.63.1,hwaddr=42:E4:13:94:32:93,ip=192.168.63.108/24,type=veth
onboot: 0
ostype: ubuntu
rootfs...
In my OP (https://forum.proxmox.com/threads/unable-to-parse-directory-volume-name-with-pct-create.55373/), I believe I specified exactly the same thing and got the error:
--rootfs local:vm-108-disk-1.raw,size=12G
Also tried with:
--storage local --rootfs local:volume=vm-108-disk-1,size=12G
but got...
@sb-jw: Thanks for the pointer.
Appending the part with:
--storage local --rootfs volume=vm-108-disk-1.raw,size=12G
as well as with:
--storage local --rootfs volume=vm-108-disk-1,size=12G
fails to create the container with the following error:
to comply with what the pct manpage states...
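For what it's worth, on directory-type storage `pct create` expects the rootfs as `<storage>:<size-in-GB>` so that a fresh volume is allocated, rather than the name of an existing file. A sketch under that assumption (the template path below is a placeholder for illustration, not from this thread):

```shell
# Let pct allocate a fresh 12 GB rootfs on the 'local' directory storage;
# 'local:12' means "create a new 12 GB volume on storage 'local'".
pct create 108 local:vztmpl/ubuntu-18.04-standard_18.04-1_amd64.tar.gz \
    --hostname hdi.loc \
    --memory 1024 \
    --rootfs local:12 \
    --net0 name=eth0,bridge=vmbr0,ip=192.168.63.108/24,gw=192.168.63.1
```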
Hi,
I have two machines (in different locations) with exactly the same hardware and the same Proxmox version (4). The first machine was deployed two years before the second one, and the first server handles more network transactions than the second.
It is strange that smartctl states...
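When comparing SMART data between two otherwise identical servers, it helps to diff the wear-related attributes directly. A hedged sketch (the device name `/dev/sda` is an assumption; run on each machine and compare):

```shell
# Dump the SMART attribute table and pick out the wear/age indicators
# most likely to differ between an older and a newer machine.
smartctl -A /dev/sda | grep -Ei 'Power_On_Hours|Reallocated|Wear|Pending'
```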
@Nemesiz, Thanks for sharing. Just wanted to know whether you did this in PM4 or PM5? And can you shed some light on point 2 above? What is 'e'?
It would be nice if you could share a small note on the exact commands you executed to accomplish this setup.
Cheers,
/z
@dietmar Thanks for the update. I was just reading your reply at https://forum.proxmox.com/threads/dev-pve-data-not-mounted-after-fresh-4-2-install.27220/#post-136966.
1. CLI issue: However, it makes it very inconvenient for command-line users to figure out data usage, which is the standard way...
Hi,
I cannot say for sure whether this is a feature or a bug.
Installed Proxmox 5.1 on a KVM machine with 100 GB of space allocated, but 'df -h' does not show pve-data as mounted.
The only error message I see:
<QUOTE>
# dmesg | grep vd
[ 0.921700] vda: vda1 vda2 vda3
# dmesg | grep sd
[...
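Assuming a default lvm-thin installation (the layout the linked thread discusses), pve/data is an LVM thin pool rather than a mounted filesystem, so 'df -h' will never list it. Its usage can be inspected like this:

```shell
# pve/data is a thin pool, not a filesystem, on default PVE installs;
# the Data% column of lvs shows how full the pool is.
lvs pve

# Per-storage usage as Proxmox itself accounts it:
pvesm status
```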
Hi,
The pct start failed after an upgrade as follows (only the pve-no-subscription repo is enabled):
# pct start 250
Job for lxc@250.service failed. See 'systemctl status lxc@250.service' and 'journalctl -xn' for details.
command 'systemctl start lxc@250' failed: exit code 1
root@lhotse:~#...
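The systemd message above does not say why the container failed; the usual next step is to bypass systemd and start the container in the foreground with debug logging (a standard LXC debugging approach; the log path is just a suggestion):

```shell
# Start CT 250 in the foreground with full debug output written to a
# log file, then inspect the log for the actual failure reason.
lxc-start -n 250 -F -l DEBUG -o /tmp/lxc-250.log
less /tmp/lxc-250.log
```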
A related thread WITHOUT hwRAID is posted at https://forum.proxmox.com/threads/crashed-proxmox4-reboots-only-to-the-grub-boot-prompt.37568/ for those encountering a similar issue; it remains unsolved at the time of writing. ;)
grub > ls (hd0,gpt2)/ROOT/pve-1@/boot/grub
unicode.pf2 i386-pc/ locales/ fonts/ grubenv grub.cfg
When I checked grub.cfg, the entries related to PVE had disappeared, as seen at http://picpaste.com/P4fv0kEi.jpg
What is the right way to restore the entries with PVE running in zroot?
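One common way to regenerate the PVE boot entries with root on ZFS is from a rescue/live system: import the pool under an alternate root, bind-mount the pseudo-filesystems, and rerun update-grub in a chroot. A hedged sketch (pool name `rpool` is the PVE default; the boot disk `/dev/sda` is an assumption):

```shell
# Import the root pool under /mnt and chroot into the installed system
# to rebuild grub.cfg and reinstall GRUB to the boot disk.
zpool import -f -R /mnt rpool
mount -t proc proc /mnt/proc
mount --rbind /dev /mnt/dev
mount --rbind /sys /mnt/sys
chroot /mnt update-grub
chroot /mnt grub-install /dev/sda   # boot disk is an assumption
```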
@fabian, thanks for taking the time. The output of those two commands is shown in the attached picture, also available at http://picpaste.com/n70EiQNm.jpg
As @guletz stated at https://forum.proxmox.com/threads/crashes-with-zfs-root-and-stuck-on-grub-rescue-prompt.34172/page-2#post-168936, I am also getting only the grub boot prompt on a machine WITHOUT hw RAID, with zfsroot, running Proxmox 4.4, which crashed for no apparent reason.
grub > insmod zfs...
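For the record, a manual boot attempt from the grub prompt with root on ZFS looks roughly like this (the disk/partition and the kernel version placeholders are assumptions; substitute whatever `ls` shows in /ROOT/pve-1@/boot):

```shell
# Typed at the grub> prompt, not in a shell; paths follow the PVE
# default layout rpool/ROOT/pve-1.
grub> insmod zfs
grub> set root=(hd0,gpt2)
grub> linux /ROOT/pve-1@/boot/vmlinuz-<version>-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
grub> initrd /ROOT/pve-1@/boot/initrd.img-<version>-pve
grub> boot
```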
Thanks, Denny for sharing your experiences.
@Denny Fuchs, would you mind sharing the 'new' monitoring solution with Icinga, Grafana and InfluxDB? A pointer to your repo would be great! ;-)
@Denny Fuchs, thanks for sharing real-life outcomes with different monitoring systems and...