WordPress container, TurnKey Linux. There was no output except 16 hours of "cloning" status in the logs at the bottom, no errors. I will try again later this week and report back.
root@intranet02:~# hostnamectl
Static hostname: intranet02
Icon name: computer
Machine ID...
steps to reproduce:
1. shut down container
2. right-click, clone
3. keep origin console open
the clone functioned normally after leaving the console and starting it again
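For reference, the same reproduction can be sketched from the Proxmox CLI; the container IDs and hostname below are placeholders, not values from the original report:

```shell
# Hypothetical CT IDs 101/201 -- adjust to your environment.
pct shutdown 101                                  # 1. shut down the container
pct clone 101 201 --hostname intranet02-clone     # 2. clone it (CLI equivalent of the right-click)
# 3. keep the original console session open while the clone runs
pct start 201                                     # start the clone afterwards
```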
Kernel : Linux 5.13.19-6-pve #1 SMP PVE 5.13.19-15 (Tue, 29 Mar 2022 15:59:50 +0200)
PVE-manager : pve-manager/7.1-12/b3c09de3
Node: HP DL360 G7 (also found similar reports via Google for the DL360 G9 and other HP models)
After moving CTs to a node that also manages VMs, HUNDREDS of these messages appeared...
The above fix no longer worked for me, but this did: the Yooda LXC solution. First download and install it, then I copied/cloned the template "Template OS Linux by Zabbix agent" to "Template OS Linux LXC by Zabbix agent" and changed the RAM and CPU items in the LXC template by Zabbix agent to...
situation:
VM webhost with IP X, VM NEWwebhost ALSO with IP X, same node
turn the original webhost off and the new one on: not reachable by web or SSH
migrate it to another node: everything works
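One plausible explanation for this symptom is a stale ARP entry on the gateway or switch that still maps IP X to the old VM's MAC; migrating to another node forces the network to relearn it. A sketch, assuming a Linux guest, where the interface name and IP are placeholders:

```shell
# Send gratuitous ARP from the NEW VM so neighbours update their caches.
# eth0 and 192.0.2.10 are placeholders for your interface and IP X.
arping -U -I eth0 -c 3 192.0.2.10

# Or, on a Linux gateway, flush the stale neighbour entry:
ip neigh flush to 192.0.2.10
```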
Thanks for your reply, Moayad. Yes, I do the dist-upgrade in the Ansible template; no difference. I also tried apt -f install. This is the output you requested. Regards, Rick
proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)...
apt --fix-missing and dpkg-reconfigure do not solve the issue
"
root@AVideo ~# apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2 not fully...
Thanks a lot for your answer.
Yes, I am sure about that. The only thing different about this server is that it was node 001 in the cluster setup. I am familiar with capacitor problems, so also no, I don't see any bulging capacitors.
Proxmox VE cluster, latest, updated, upgraded. Swapped servers, swapped power supplies, always the same result. HP DL380 G7 cluster, 2 × E5649 processors. Calibrated to minimum power usage (BIOS setting) at POST. Balanced use of power supplies. Other, much more heavily loaded nodes in the cluster (same...
Reply:
on TrueNAS:
create user, group backup
add a dataset in storage>pools
set storage>pool>options>quota for your dataset
set storage>pool>permissions to backup user and backup group for your dataset
add a windows share (open /mnt>name_of_your_nas_with_the_dataset and select the dataset)...
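On the Proxmox side, the resulting share can then be attached as backup storage. A sketch only; the storage ID, NAS address, share name, and credentials below are placeholders, not values from the original post:

```shell
# Attach the TrueNAS SMB share as Proxmox backup storage (all values are examples).
pvesm add cifs truenas-backup \
    --server 192.168.1.50 \
    --share backups \
    --username backup \
    --password 'secret' \
    --content backup
```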
I looked for a button like that; there is none in the Windows shares. I looked quickly in Linux but couldn't find one either. Am I missing something your eyes did see? Got it, it's in storage>pools. Will look into this tomorrow, thanks! Yesterday I had the feeling there was no quota system in it.
Proxmox Backup Server 1.1-5 on a ProLiant DL360 G6 with 32 GB memory, updated, upgraded
I get the same error when trying to mount a TrueNAS folder (running ZFS) while creating a datastore; creating a store on the DL360's local disks works
proxmox-backup-manager datastore create store1...
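For completeness, a sketch of creating a datastore on a mounted TrueNAS export; the mount point and NFS path are placeholders, assumed for illustration:

```shell
# Mount the TrueNAS NFS export first (placeholder address and paths).
mkdir -p /mnt/truenas-store
mount -t nfs 192.168.1.50:/mnt/pool1/pbs /mnt/truenas-store

# Then create the PBS datastore on top of the mount.
proxmox-backup-manager datastore create store1 /mnt/truenas-store
```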