Thanks for the hint on the bug, mate.
We do run jumbo frames inside our networks and have separate Arista switches for "storage network" traffic with MLAG, so I'll poke at that and check the link with the bug.
Tell me... did you put qemu 5.2 on hold in apt, meaning that when running a system package upgrade...
Also tested with
pve-manager/7.1-6/4e61e21c (running kernel: 5.11.22-4-pve)
and at 28% into the backup:
Nov 28 03:12:44 ********** kernel: [ 72.880477] device tap444i0 entered promiscuous mode
Nov 28 03:13:05 ********** kernel: [ 93.712445] connection1:0: detected conn error (1020)
Nov...
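Since holding qemu came up: one way to keep a package from moving during a full upgrade is an apt pin. A minimal sketch (the pve-qemu-kvm package name and the 5.2 version are assumptions here; adjust them to what your node actually runs):

```
# /etc/apt/preferences.d/hold-qemu (sketch; package name and version are assumptions)
Package: pve-qemu-kvm
Pin: version 5.2*
Pin-Priority: 1001
```

`apt-mark hold pve-qemu-kvm` achieves much the same with less ceremony, if a plain hold is all you need.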
Hello,
I am getting back with some more info on this.
Indeed it seems to be an issue with the new Proxmox release: using hardware previously installed with version 7 and fully upgraded (both OS and Proxmox packages from the pve-no-subscription repository), I swapped the OS hard drives and installed a fresh 6...
Hi,
I am getting similar issues with LVM and an iSCSI connection on a 3-node cluster with the latest Proxmox 7 installed today (pve-manager/7.1-6/4e61e21c, running kernel 5.13.19-1-pve) and a TrueNAS 12 storage box (Dell R510), via a clustered 10 GbE fiber connection into some Arista switches.
The same...
Just writing a quick post about the partition-resize issue; maybe it will come in useful to someone someday:
So,
1st - install the cloud-guest-utils package on deb9 to get the growpart binary
2nd - in the cloud.cfg file add the following (note that growpart takes the disk and the partition number as two separate arguments):
bootcmd:
- [ /usr/bin/growpart, /dev/vda, "1" ]
Just tested it...
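For context, a fuller sketch of what that fragment of /etc/cloud/cloud.cfg could look like (the /dev/vda device name is an assumption; quoting "/dev/vda 1" as a single list element would hand growpart one argument and fail):

```yaml
# /etc/cloud/cloud.cfg fragment (sketch; /dev/vda is an assumption)
bootcmd:
  # grow partition 1 of the root disk on every boot
  - [ /usr/bin/growpart, /dev/vda, "1" ]
# cloud-init can then grow the filesystem itself
resize_rootfs: true
```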
Hi. It doesn't work.
Bumping into the same issue right now while trying to build the deb9 template with cloud-init 20.1.
So... one idea would be to use a custom script at start-up, added via bootcmd in cloud.cfg, which might work.
Thank you for replying back on my post.
I checked further on the system for any other failed services in systemctl and found that lxcfs.service was in a failed state because /var/lib/lxcfs/ was not empty when the node started.
With all CTs stopped I did a rm -rf...
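The cleanup step can be scripted defensively; a minimal sketch (the helper name and the guard are mine, and it assumes lxcfs.service is stopped and all CTs are down before the directory is cleared):

```shell
#!/bin/sh
# Sketch: clear leftover entries from the lxcfs state directory so the
# service can start cleanly. Run only with lxcfs stopped and all CTs down.
clear_lxcfs_dir() {
    dir="$1"
    # only act if something was actually left behind
    if [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
        echo "clearing stale entries in $dir"
        rm -rf "$dir"/*
    fi
}

# usage: clear_lxcfs_dir /var/lib/lxcfs && systemctl start lxcfs
```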
Hello,
I know this thread is old and hasn't been updated since 2017, but I need to report that I am still experiencing the same strange behaviour even in 2020, so it's worth giving the forum a shout about this to see if anybody else has the issue.
I am using the following proxmox version...
Hello,
I am returning with an update.
I have reinstalled another Dell 2950 server which was sitting in the closet; it has pretty much the same configuration as the current one, and I installed the Proxmox 5.1-32 version from an old CD-ROM I had.
I have not run any kind of package upgrade on...
New update on the progress:
Added max_loop=255 to the kernel's GRUB boot args and restarted the bare-metal system, in an attempt to increase the maximum number of loop devices and overcome this limitation.
Afterwards, container start/stop operations take considerably longer, but what...
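For reference, the grub change described above amounts to a one-line edit; a sketch of /etc/default/grub (followed by update-grub and a reboot):

```
# /etc/default/grub fragment
GRUB_CMDLINE_LINUX_DEFAULT="quiet max_loop=255"
```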
Thanks for the feedback.
I'm pretty sure that I'm not the only one trying to run a high number of CTs on a Proxmox box.
Regarding LVM, I haven't tried that, but for plain and quick manipulation of image files I'd rather stay on the file storage backend, because on the current storage there is also...
Hello guys,
I have a Dell 2950 server with one 120 GB SSD drive, an 8-drive 1 TB RAID 50 array for local storage, 64 GB of RAM and 2x X5460 CPUs. I have just installed the latest version of PVE in an attempt to run a test environment for client API call emulation on a software development project...
Hey guys,
I am encountering the same error with a much more up-to-date version of PVE.
Environment details:
7 prox nodes running the same version
each Proxmox node connects via dual 10 Gbit NICs to 2 switches, forming an MLAG port-channel
each traffic type (including the cluster/corosync VLAN) is separated/designated...
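For the record, a sketch of the node-side bonding such a dual-NIC MLAG uplink typically uses in /etc/network/interfaces (interface names and addresses are assumptions):

```
# /etc/network/interfaces fragment (names and addresses are assumptions)
auto bond0
iface bond0 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 10.10.10.11/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000
```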
Hello Alwin,
Please pay attention to my post: I use an external Ceph cluster which is up to date, the same Ceph cluster I used (on a different pool) with the PVE 4.x branch before the 5.x upgrade.
The Ceph version on the cluster is 10.2.10-0ubuntu0.16.04.1.
Regarding PVE, the system...
Hi Guys,
I just reinstalled a server after some disk changes, installed Proxmox, updated it to the latest version, rejoined the cluster, and tried to migrate some VM disks from an external Ceph cluster to the new server, which is also configured to share a RAID volume over NFS to all the other servers.
I have...
1. 1 socket and 10 cores - that is the correct way
2. enable NUMA
3. I am talking about the hypervisor settings, not the VMs.
4. make sure you have the latest qemu-kvm updates from the PVE repository
Hi,
I always run virtio drivers, because that is the kind of driver that delivers the maximum IOPS in the virtual system, at least if that is what you're hunting for.
My guess is to go for the 4.2.6-1 kernel branch; I think I just recently upgraded to the latest 33 release of the 6 minor version and...