Sorry, I don't know exactly when this happened, but I have a realm "jdaps" that was working in Proxmox VE 7 and doesn't work anymore today (8.1.4).
An SSL problem occurs: update auth server failed: SSL connect attempt failed error:0A00014D:SSL routines::legacy sigalg disallowed or unsupported (500)...
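For what it's worth: Proxmox VE 8 ships OpenSSL 3, which at its default security level refuses SHA-1 signatures, and that is the usual meaning of "legacy sigalg disallowed or unsupported". A quick way to check whether the LDAPS server certificate is the culprit (ldap.example.com:636 is a placeholder, use your realm's server and port):

echo | openssl s_client -connect ldap.example.com:636 2>/dev/null | openssl x509 -noout -text | grep 'Signature Algorithm'

If that prints sha1WithRSAEncryption, re-issuing the server certificate with a SHA-256 signature should let the realm sync again.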
I've been facing the same issue (clients using insecure global_id) after upgrading to Octopus 15.2.11 & pve-manager 6.4-8, but my config is not really standard, so I don't know if this will help (see the check below my cluster layout).
My Proxmox cluster:
4 storage nodes, with Ceph installed;
3 compute nodes without the Ceph part deployed...
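For anyone hitting the same warning: the upstream advice is to upgrade every client first, then disable the insecure reclaim. A minimal check/fix on a monitor node, assuming a standard Octopus 15.2.11 setup:

ceph health detail
# lists the AUTH_INSECURE_GLOBAL_ID_RECLAIM* warnings and the offending clients
ceph config set mon auth_allow_insecure_global_id_reclaim false
# only set this once no old clients remain, or they will be locked out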
Workaround found ...
I've opened a case with Dell to see if the problem was known ... but nothing in their databases. Debian / Proxmox is not really supported, but they suggested a really clever idea ... add the slaves later.
So in interfaces, bond0 has the 2 interfaces on one PCI card, and no...
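A sketch of what "add the slaves later" can look like in /etc/network/interfaces (classic ifupdown + the ifenslave package; the bond mode, MTU and port names are assumptions from my setup, adapt them): the bond is declared with no slaves, and the NICs from both PCI cards are enslaved by a post-up hook.

auto bond0
iface bond0 inet manual
    bond-slaves none
    bond-mode 802.3ad
    bond-miimon 100
    mtu 9000
    # enslave one port from each PCI card once bond0 exists
    post-up ifenslave bond0 eno1np0 ens6f0np0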
Yes, and MTU is not needed in the vmbr confs .... but it works
Yes ... that's the last test I wanted to do: reinstalling ifupdown.
Seems ok
root@ceph1:~# systemctl status networking
● networking.service - Network initialization
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor...
ens6xxxx: 2 x 10G on one PCI card
eno1np0, eno2np1, eno3, eno4: the other card with 2 x 10G + 2 x 1000Base-T (only the 10G ports tested)
My interfaces file:
auto lo
iface lo inet loopback
auto eno1np0
iface eno1np0 inet manual
iface eno3 inet manual
iface eno4 inet manual
auto ens6f0np0
iface...
Same issue for me with 2 PCI cards with Broadcom Limited BCM57412 NetXtreme-E 10Gb in a Dell R740xd. No bond that mixes interfaces from both PCI cards comes up at boot. The problem seems to be bond0.vlan not coming up ....
The strangest part is that I can configure and bring up the same network config on a running...
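For anyone debugging the same thing, the standard places to check whether it is the bond or the VLAN that stays down (bond0.100 stands in for whatever VLAN tag you use):

cat /proc/net/bonding/bond0      # which slaves actually joined the bond
ip -d link show bond0            # bond state and details
ip -d link show bond0.100        # is the VLAN device up at all?
journalctl -b -u networking      # boot-time errors from ifupdown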
I've made some modifications to the code to achieve my goal: changing the sector sizes of some of the virtio disks of a guest. My tests seem OK, but I don't know the code well enough to be sure.
My version: pve-manager/6.2-10/a20769ed (running kernel: 5.4.44-2-pve)
1) I introduce a new boolean...
Many tests later ... I think this is not possible.
The block_size parameters (logical & physical) would have to be added to the virtioX: definition .... but they are not implemented in Proxmox. Maybe in a next release .....
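One possible low-level workaround (untested here, and it bypasses the Proxmox schema entirely) is to pass the QEMU device properties through the VM's args option; virtio0 and the 4096 values are just examples for the first virtio disk:

qm set <vmid> --args '-set device.virtio0.logical_block_size=4096 -set device.virtio0.physical_block_size=4096'

QEMU's -set applies the properties to the device Proxmox creates with id virtio0, which is the same knob libvirt's <blockio> element drives.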
In libvirt/QEMU XML you have:
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/eql/seafile1'/>
<blockio logical_block_size='4096' physical_block_size='4096'/>
<target dev='vdb' bus='virtio'/>...
Sorry but last week has been a bit hard and I was a bit confused ...
The iSCSI mount from the EqualLogic is OK, and the raw device (or LUN) comes up as it was formatted (physical & logical sectors of 4096).
The problem is when you give this raw device to a guest VM ... within the VM the sector size is 512 .... and I...
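A quick way to see the mismatch from inside the guest (assuming the disk shows up as /dev/vda):

lsblk -o NAME,LOG-SEC,PHY-SEC /dev/vda
# or read the raw values:
cat /sys/block/vda/queue/logical_block_size
cat /sys/block/vda/queue/physical_block_size
# both report 512 in the guest even though the LUN is 4096 on the host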
Good evening,
I'm testing Proxmox/Ceph for the migration of a libvirt/KVM configuration.
The problem I have is that I use some 'data' disks on guests on my libvirt setup that I can't mount on my Proxmox platform ...
These disks are iSCSI LUNs from EqualLogic arrays, formatted with 4k sectors and used...
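For completeness, the sector sizes of such a LUN can be verified on the Proxmox host once the iSCSI session is up (reusing the /dev/eql/seafile1 path from the libvirt XML above as an example):

blockdev --getss /dev/eql/seafile1     # logical sector size
blockdev --getpbsz /dev/eql/seafile1   # physical block size
# both report 4096 when the LUN is formatted with 4k sectors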