Hi,
in a PVE 7.4 cluster setup (4 nodes), one node sadly died.
Before the final "delnode" we removed all entries from /etc/pve/replication.cfg that had this node as a target and re-grouped the replication to the remaining nodes.
Then we regrouped all replication groups, and only then, according to...
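For anyone in the same spot, a minimal sketch of that cleanup step (the job ID is a placeholder, take yours from the list output):
# list all replication jobs and spot those targeting the dead node
pvesr list
# remove a job pointing at the dead node; --force drops the config entry without trying to clean up the unreachable target
pvesr delete 100-0 --force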
The PVE kernel 5.15.104-1 seems a bit unstable in real life - it reboots seemingly out of nowhere, and we only got this in the kernel.log:
Apr 8 11:41:44 k14 kernel: [746103.306950] vmbr2: port 30(tap130024i0) entered disabled state
Apr 8 11:51:04 k14 kernel: [ 0.000000] Linux version...
We have two KVM guests, one Ubuntu 20.04 LTS and one Debian 10.
One has a 100 GB disk using roughly 54 GB, the other a 240 GB disk using roughly 146 GB.
Both are replicated using pvesr, and both have essentially static content in their filesystems (lots of images that never change).
zfs list (and -t...
We're still testing ZFS replication using pvesr.
One issue we ran into: a vzdump snapshot was created during backup, then pvesr did its thing, and we were stuck with this vzdump snapshot; trying to remove it only responded: snapshot 'vzdump' needed by replication job - run...
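In case it helps anyone, a sketch of what that message seems to ask for (job and VM IDs are placeholders): let the replication job run once more so it no longer references the vzdump snapshot, then delete the snapshot.
# trigger the replication job for the guest right away
pvesr schedule-now 100-0
# once it has finished, the leftover snapshot can be removed
qm delsnapshot 100 vzdump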
Hi,
we are testing pvesr to replicate several VEs for easy migration between nodes.
Basically this works nicely, but if node A replicates a big chunk (100+ GB) to node B while B is trying to replicate another VE to A, this results in hangs within that VE while ZFS replication tries and...
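As a mitigation sketch (not a fix), we are experimenting with staggering the schedules and rate-limiting the jobs so both directions don't run at full speed at the same time; the job IDs and values below are just examples:
# node A -> B every 15 minutes, node B -> A every 30 minutes, both capped at 50 MB/s
pvesr update 100-0 --schedule '*/15' --rate 50
pvesr update 200-0 --schedule '*/30' --rate 50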
We're currently relying on pre-start and post-stop hooks to configure the networking and routing for LXCs and KVMs.
During PVE 7.3 tests for HA and migration automation we experienced something quite unexpected:
the "losing" PVE node, which gets the LXC/KVM removed, does execute the...
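For context, the hooks are plain PVE hookscripts; a minimal sketch of what ours roughly looks like (the routes are placeholders for the real per-guest networking):
#!/bin/bash
# /var/lib/vz/snippets/guest-net-hook.sh
# attached with: qm set 100 --hookscript local:snippets/guest-net-hook.sh (or pct set for LXC)
vmid="$1"; phase="$2"
case "$phase" in
  pre-start) ip route add 192.0.2.10/32 dev vmbr2 ;;   # placeholder: bring up the guest's routing
  post-stop) ip route del 192.0.2.10/32 dev vmbr2 ;;   # placeholder: tear it down again
esac
exit 0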
Hi,
is there a way to get automated warnings attached to the wearout indication presented in the Host -> Disks view in PVE?
(Yes, I know about smartd; having this in the GUI to notify admin roles etc., with specific wearout levels for notifications, would just be a plus.)
thx
hk
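Until something like that exists in the GUI, a rough sketch of what we'd cron around smartctl ourselves (device, attribute names, threshold and mail address are assumptions; NVMe and SATA SSDs report wearout differently):
#!/bin/bash
# warn when the normalized wearout value of an SSD drops below a threshold
DEV=/dev/sda
THRESHOLD=10
WEAR=$(smartctl -A "$DEV" | awk '/Media_Wearout_Indicator|Wear_Leveling_Count/ {print $4; exit}')
if [ -n "$WEAR" ] && [ "$((10#$WEAR))" -lt "$THRESHOLD" ]; then
  echo "wearout on $DEV down to $WEAR" | mail -s "SSD wearout warning on $(hostname)" admin@example.com
fi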
A PVE 6.4-15 node was hard rebooted, and now these problems pile up when starting the containers.
Attempts to check the VE filesystems end here:
pct fsck 67107
fsck from util-linux 2.33.1
MMP interval is 300 seconds and total wait time is 1202 seconds. Please wait...
and after the fsck the...
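In case someone else runs into the same MMP wait: this is roughly how the MMP state of the filesystem can be inspected before deciding how to proceed (the device path is an example; a stale MMP block can reportedly be cleared with tune2fs' clear_mmp extended option, but only if you are absolutely certain no other node still has the volume mounted):
# show the MMP block and update interval of the ext4 filesystem
dumpe2fs -h /dev/pve/vm-67107-disk-0 | grep -i mmp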
Hi,
in order to be able to pin a TLS connection on the receiving end of PMG, it would be great to be able to set the ACME cert renewal to keep the private key during renewals.
thank you in advance
hk
The latest and greatest PVE 7.2 delivers frr 8.2.2 as a package.
_But_ this release has a possible issue with ospf6d - for reference: https://github.com/FRRouting/frr/issues/10823
In order to fix this we installed frr 8.1 the following way:
a) implement the apt-list config following the guide...
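For reference, the pinning part looked roughly like this (a sketch; the package set and version string are from memory and may need adjusting to what the FRR repo currently carries). In /etc/apt/preferences.d/frr:
Package: frr frr-pythontools
Pin: version 8.1*
Pin-Priority: 1001
A priority above 1000 allows the downgrade, then:
apt update && apt install frr frr-pythontools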
Well, trying to get SPICE to work - it always failed to connect to the server....
We finally ended up tcpdumping on the host for port 3128...
and there we got it: IPv6 calls are not answered, while after changing to IPv4 everything is fine.
Probably because of this:
tcp 0 0...
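The quick way to reproduce the check, in case anyone wants to verify on their own host (3128 is the spiceproxy port):
# what the spiceproxy is actually listening on
ss -tlnp | grep 3128
# and whether the client's connection attempts arrive over IPv6 or IPv4
tcpdump -ni any 'tcp port 3128'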
Here is what we tried:
we moved an OpenVZ container backup to a new Proxmox host; it restores fine, and networking also comes up.
We then updated Debian 7 inside the container to Debian 8.
Now, after every reboot of this container, eth0 inside the container is gone.
After a manual "/etc/init.d/networking restart", eth0...
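One thing we still want to rule out (an assumption on our part, not confirmed): that eth0 simply isn't marked for automatic bring-up inside the container after the v7 -> v8 upgrade, i.e. that /etc/network/interfaces in the container is missing something like (addresses are examples):
auto eth0
iface eth0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1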
The fun thing here is: the web GUI reports the start as "ok", yet in syslog we get something like this:
pvestatd[1924]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)...
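Running the exact same command by hand at least shows the real LVM error behind exit code 5:
/sbin/vgscan --ignorelockingfailure --mknodes
echo $?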
So, here we have a test system that we also use for some regular mail (how else should we test it? :)).
It was created as PMG 5.x, then in-place upgraded to 6.0, and afterwards to 6.1.
(following https://pmg.proxmox.com/wiki/index.php/Upgrade_from_5.x_to_6.0#In-place_Upgrade of course)...
Hi
we are trying to solve the following issue:
we created a Linux bridge vmbr2 and assigned a dummy IP to it: 10.10.10.10/32.
We also created an LXC using this vmbr2 as the bridge for the container's veth (eth0),
assigned 192.168.1.1/32 on its eth0, and set the default gateway to 10.10.10.10.
the...
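For completeness, the host side of that setup currently looks roughly like this in /etc/network/interfaces (addresses as above; the post-up route is our own assumption of what's needed so the host knows where the container's /32 lives):
auto vmbr2
iface vmbr2 inet static
    address 10.10.10.10/32
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up ip route add 192.168.1.1/32 dev vmbr2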