Hello all,
On my PBS I have a sync job running that does its job; however, it also creates an underlying namespace which I cannot seem to get rid of.
So the scenario as I originally set it up might have caused this, but now (from my point of view) I can no longer get rid of it.
for your...
So, having met PBS and being wildly excited about it, I have a question as to the "best setup".
Currently I have one PBS box, but two separate PVE installations, one remote and one local.
I have separated the backups with namespaces, so no collision is possible regarding...
Conditions:
- PVE 8.0.4 / latest
- ISOs/templates are mounted as a separate LVM volume under /data/iso (seen as ISO storage in the PVE UI)
- logged in as a federated OIDC user (with Enterprise admin privileges)
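For reference, a directory storage of the kind described in the conditions above would be defined in /etc/pve/storage.cfg roughly like this (a sketch; the storage ID "iso" and the content types are assumptions based on the description):

```
dir: iso
	path /data/iso
	content iso,vztmpl
```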
Behavior experienced:
- Uploaded a new ISO to /data/iso
- The ISO was correctly added
- temp...
While installing a new Proxmox box I ran into the following issue when trying to create a Linux bond:
Now, in previous installs I created the bond via the command line by editing /etc/network/interfaces:
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100...
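For comparison, a complete stanza of the kind used above would look roughly like this (a sketch; the slave names eno1/eno2 come from the snippet above, while the bond mode, the bridge section and the addresses are placeholder assumptions based on a typical Proxmox setup):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    # placeholder mode; pick what your switch supports
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    # placeholder address/gateway
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```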
This just happened:
During a backup on one node of my cluster (Bookworm / 8.0.4 / latest Proxmox release) I got weird behaviour which essentially blocked my whole cluster:
Message from syslogd@node01 at Aug 14 01:11:59 ...
kernel:[1233000.842855] watchdog: BUG: soft lockup - CPU#33 stuck for 1863s...
Hi all,
Seeking an explanation for why LXC containers pop up in Check_MK monitoring after some time regarding shared memory.
The only way to get rid of the notification in Check_MK monitoring for now is to just bluntly reboot the container.
What I am after is an understanding as to what...
Situation:
OIDC is correctly set up.
The OIDC user is part of the Administrators group, with the correct rights.
Change LXC options (i.e. tick nesting/FUSE if unticked)
Result:
Is this by design? I am in need of an explanation here ...
... I mean, I explicitly went for a federated method so I have control...
In my setup I have a 4-node cluster with shared SAS storage configured with multipath.
When I look at the Datacenter -> Summary report, a lot more storage is counted than is actually available.
As the usage of 'local' storage (in any way) within a cluster when having shared storage...
I have the strangest thing with just -one- of my VMs: when it is being backed up, it times out:
INFO: Starting Backup of VM 125 (qemu)
INFO: Backup started at 2021-11-26 03:12:21
INFO: status = running
INFO: VM Name: vm-lx-01
INFO: include disk 'scsi0' 'vms01:125/vm-125-disk-0.qcow2' 64G...
Just a general question, as the topic says.
I myself have both a cluster and a standalone machine for testing purposes.
This machine is the result of a Debian install moved to Proxmox based on the Wiki and the Buster distro, running the latest 6.x version.
When running a Lynis security audit...
Proxmox 7 has introduced OpenID Connect authentication, which enables us to go down the path of federated login.
However, for all the different identity provider solutions the options will have to get a lot more flexible.
The options at the moment are so limited they are constricting to a single IdP...
Hi,
Since the upgrade of my setup to v7.x I am unable to gather the CPU utilisation parameter of LXC containers with my monitoring server (based on Check_MK / latest version 2.0.0p6).
I am assuming this has to do with the move to cgroup v2.
Since the upgrade, the message 'item not found in...
Hi all,
As I am in need of an option for OpenID Connect / OAuth or SAML(2) authentication, and the implementation from Proxmox itself as supplier has been postponed since 2017 (first request seen in that year),
I have decided to take on this project and start development of such.
Now don't...
Dear all,
I am experiencing kernel panics with kernel releases above version 5.4.44-pve2.
Behaviour:
- update all packages (including kernel)
- restart node
- node comes up fine / communicates with the cluster (4-node)
- node kernel-panics after it starts either an LXC or a VM guest
- issue...
Hi all, I have a (currently) standalone Proxmox node where, ahead of getting it into production to test, I copied some VMs from the current prod server.
However, for some unknown reason the backup fails at 100%, and there's no real error code I can investigate/solve.
Both VMs are stopped, so...
In my setup I am running some appliances which refuse to shut down when I issue a host reboot.
As these are 'closed' appliances, I am not able to install helpers like the QEMU guest agent to get around this.
So the only solution is to open a console and shut them down from there (else Proxmox...
Hi all,
I've been working on setting up my MSA2040 (SAS) to be available as more than just a shared raw LVM, due to the fact that offering raw LVM only supports disk images and containers.
The idea was to also have (shared) directory storage presented to PVE for backups/snippets/templates (...
Situation:
- 3 standalone nodes with VMs on XFS storage
- No shared storage.
- NFS shares on all 3 nodes to each other, to restore backup files from one node to another (added as NFS storage, with only backup files selected in PVE)
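For context, one of those NFS restore shares would look roughly like this in /etc/pve/storage.cfg (a sketch; the storage ID, server address and export path are placeholders, not taken from the setup above):

```
nfs: node2-backups
	server 192.168.1.12
	export /srv/backup
	path /mnt/pve/node2-backups
	content backup
```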
Task:
- Introduce a new node and set up the first cluster node...
I am having issues with a backup job, or have hit a bug.
Conditions/env:
- pve-manager: 5.1-42 (running version: 5.1-42/724a6cb3)
- A guest VM on Proxmox (single host) was backed up, and restored on a different (single) box.
- On the old host the VM was deleted after the transfer.
- VM was part of a...
Since I upgraded to v5.x (pve-manager/5.1-41/0b958203, running kernel 4.13.13-4-pve) it seems I can no longer FTP into a guest.
details:
- The firewall for the guest system allows FTP (macro)
- Host has module nf_conntrack_ftp loaded -> nf_conntrack_ftp 20480 0 - Live 0xffffffffc0395000...
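One note on that module line: since kernel 4.7, conntrack helpers are no longer assigned to connections automatically, so just having nf_conntrack_ftp loaded may not be enough. A sketch of how this could be made persistent (the file names are my own choice, and re-enabling automatic helper assignment has security implications you should weigh):

```
# /etc/modules-load.d/ftp-helper.conf -- load the FTP conntrack helper at boot
nf_conntrack_ftp

# /etc/sysctl.d/90-conntrack-ftp.conf -- re-enable automatic helper
# assignment, which is off by default since kernel 4.7
net.netfilter.nf_conntrack_helper = 1
```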