Thanks GD
Really don't know how I missed that IP clash. I appreciate the fresh pair of eyes on it.
Oh, the two vmbrs are on different subnets... one is 192.168.29.20/23, the other is 10.10.29.20/24.
Cheers!
Hi @GuillaumeDelaney
I'm not sure exactly what you mean by (ip a + ip r), but here's the basic network info.
PVE Host:
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno3 inet manual
iface eno4 inet manual
iface eno2 inet manual
auto vmbr0
iface vmbr0 inet static...
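If it's literally the raw output of those two commands you are after, let me know and I'll paste it in full. For anyone following along, they are just the iproute2 shortcuts:
ip a   # list every interface with its addresses (long form: ip address show)
ip r   # show the kernel routing table (long form: ip route show)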
I have a single-host PVE (version 8.1.4) with a pretty basic out-of-the-box configuration.
I have:
2 Debian 12 LXC containers
1 Debian 12 VM
1 Windows Server 2019 VM
The PVE host itself has no problems accessing the outside world and shows consistent ping latency with no packet loss. However, the...
I've been working with the OP on this and it turns out that the problem was with a rogue apostrophe character in the HTML.
This text in the disclaimer:
A list of members'
actually appeared like this in the source:
A list of members’
Replaced that character and all now good.
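For anyone chasing a similar gremlin, a quick way to find stray non-ASCII characters in a template is a PCRE grep (the filename here is just an example, point it at whatever file holds your disclaimer HTML):
grep -nP '[^\x00-\x7F]' disclaimer.html   # prints the line number of any non-ASCII byte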
Thanks Oguz
Yes the hostname fixed this. All seems to be working!
The package must have automatically added the enterprise repo, as I only set up the no-subscription repo initially.
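For anyone else who ends up with it, the auto-added enterprise list only needs its deb line commenting out (path from memory for PMG; it's pve-enterprise.list on PVE, so double-check on your box):
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pmg-enterprise.list
apt-get update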
Cheers
Created a clean Debian 10 droplet
Ran apt-get update and dist-upgrade
Added the no-subscription repo to the apt sources list
Added the GPG key
apt-get install proxmox-mailgateway-container
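Spelled out, the repo and key steps were roughly the following (typed from memory for PMG 6.x on Debian 10 "buster", so please check them against the official install notes before copying):
echo "deb http://download.proxmox.com/debian/pmg buster pmg-no-subscription" > /etc/apt/sources.list.d/pmg-no-subscription.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt-get update && apt-get dist-upgrade
apt-get install proxmox-mailgateway-container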
This is as much output as I can grab from the console:
Hi
Has anyone successfully managed to install PMG on a DigitalOcean Droplet?
I have been trying on a Debian 10 droplet with the standard package and also the container package, but with no luck. I keep getting this:
dpkg: error processing package pmg-api (--configure):
installed pmg-api package...
OK, rather embarrassingly, I (well, you) have discovered the problem.
On this host it appears I did not configure iSCSI timeouts or install the Multipath tools!
Since installing them and copying over my multipath.conf settings, the node is behaving itself and I can browse the storage instantly.
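For completeness, the missing pieces were along these lines (illustrative values only, not my exact config):
apt-get install multipath-tools
# /etc/multipath.conf - minimal defaults section
defaults {
    user_friendly_names yes
    polling_interval    2
}
# /etc/iscsi/iscsid.conf - lower the replacement timeout so stalled paths give up sooner (default is 120s)
node.session.timeo.replacement_timeout = 15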
I have however...
Here is one of the FreeNAS boxes:
Nov 15 14:50:20 ms-200-fn01 ctld[75708]: 10.4.132.110: read: connection lost
Nov 15 14:50:20 ms-200-fn01 ctld[2313]: child process 75708 terminated with exit status 1
Nov 15 14:50:20 ms-200-fn01 ctld[75709]: 10.4.132.110: read: connection lost
Nov 15 14:50:20...
I have 3 SANs on my storage networks.
1 x Dell MD3000i (1 x SAS LUN and 1 x SATA LUN), 1Gb network
1 x FreeNAS (24 x SSD array), 10GbE network
1 x FreeNAS (SATA array), 1Gb network
Whenever I attach any of those to my 3rd host via iSCSI I get the constant stream of I/O errors as in my above...
I'm afraid a full system shutdown didn't help with this one node (host, iSCSI SAN and switches all rebooted).
My logs are filling up with this:
Nov 14 18:54:55 ms-200-prox05 kernel: sd 5:0:0:2: [sdj] tag#0 <<vendor>>ASC=0x94 ASCQ=0x1
Nov 14 18:54:55 ms-200-prox05 kernel: sd 5:0:0:2: [sdj] tag#0...
Thanks for your suggestions, Manu. They are indeed all iSCSI LUNs.
It appears that all iSCSI sessions are logged in:
iscsiadm -m session -P 1
Target: iqn.2005-10.org.freenas.ctl:proxiscsi (non-flash)
Current Portal: 10.4.132.60:3260,257
Persistent Portal: 10.4.132.60:3260,257...
Hi
I am just in the middle of upgrading the first node in my cluster from version 4.4 to 5.
Towards the end of the upgrade I started getting the following message, which kept scrolling around and around:
File descriptor 3 (pipe:[1941289]) leaked on vgs invocation. Parent PID 31851: grub-install...
Hi
I had a broken Nextcloud VM with two disks attached - one for the OS and one for the data.
I built a new VM with a new OS disk and added the previous data disk from the broken VM so both VMs share one data disk.
I would like to remove the broken VM but I do not want it to delete the data...
I can confirm that this has been an issue on my Microsoft Surface Pro 4 (touch screen) since I upgraded to 4.2 several weeks ago.
Not sure if it is related, but since getting the Surface Pro, mouse control in a guest console has never worked.
Thanks for your prompt reply Dietmar.
Running the following whilst the container is running has no effect:
pct resize 315 rootfs 320G
Running when the container is shutdown produces:
# pct resize 315 rootfs 320G
qemu-img: Could not open '/dev/MD3000iA-SAS/vm-315-disk-1': Could not open...
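If pct resize keeps refusing, I guess the manual route for an LV-backed rootfs would be something like this (container stopped, and assuming the filesystem inside is ext4; the LV path is the one from the error above):
lvextend -L 320G /dev/MD3000iA-SAS/vm-315-disk-1
e2fsck -f /dev/MD3000iA-SAS/vm-315-disk-1
resize2fs /dev/MD3000iA-SAS/vm-315-disk-1
and then adjust the size= hint on the rootfs line of the container config so the GUI matches.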
Hi
I have been trying to resize an Ubuntu 14.04 LXC for several hours now and it's getting a little frustrating.
The original container was set up as 64GB and I have extended it to 320GB on my 4.0-48 host.
The backend storage is LVM on a Dell iSCSI SAN.
In the Proxmox web GUI, if I browse my...