OK, it was very simple, I just edited the config file
before:
boot: order=scsi0;ide2;net0
cores: 1
ide2: local:iso/TrueNAS-13.0-U2.iso,media=cdrom,size=1015130K
memory: 4096
meta: creation-qemu=7.0.0,ctime=1665080338
name: sharedTrueNAS
net0: virtio=<HIDDEN_MAC>,bridge=vmbr0,firewall=1
numa: 0...
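For anyone landing here with the same "non-unique serial numbers" error: the edit boils down to giving every virtual disk its own serial. A rough sketch of what that can look like, with made-up VM ID, storage and serial values:

# append ,serial=... to each disk line in /etc/pve/qemu-server/<vmid>.conf,
# or do the same via the CLI:
qm set 100 --scsi0 local-lvm:vm-100-disk-0,serial=OSDISK0001
qm set 100 --scsi1 local-lvm:vm-100-disk-1,serial=DATADISK01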
I installed TrueNAS on my Proxmox 7.2 node with a 32GB virtual disk for OS and a 1.5T disk for data.
When I want to create a pool in TrueNAS, I get the following error
There are 1 disks available that have non-unique serial numbers. Non-unique serial numbers can be caused by a cabling issue...
Freeing up enough storage fixed the problem. The container disk referenced in the backed-up config was indeed targeted at the Ceph storage that had filled up.
I feel like I would have never found the cause of the problem myself.
Wouldn't it be possible for PVE to detect when the target...
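Until something like that exists, manually checking the target storage before kicking off a restore is probably the safest bet. For example (the storage name here is just a placeholder):

pvesm status --storage ceph-pool    # total / used / available for the target storage
ceph df                             # overall Ceph pool usage, if the target is Ceph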
Indeed there is a ceph problem in the cluster:
Full OSDs blocking recovery: 1 pg recovery_toofull
I will try to resolve that and then restart the restore
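For reference, a sketch of how to narrow down which OSD is blocking things (the ratio in the last line is only an example, and raising it is a stop-gap, not a fix):

ceph health detail               # lists the full OSDs and the affected PGs
ceph osd df tree                 # per-OSD usage grouped by host
ceph osd set-full-ratio 0.97     # temporarily raise the full ratio so recovery can proceed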
PS:
I cancelled the restore on the PVE node now. The logs of the restore are now
recovering backed-up configuration from '<SERVER>:backup/ct/100/2022-09-21T23:00:01Z'
Using encryption key from file descriptor..
Fingerprint: 34:1d:00:b2:c6:bd:c4:ed
/dev/rbd0
Creating filesystem with 3932160...
4 minutes later, still no new chunk has been read.
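A quick way to check on the PBS side whether the reader task is actually doing anything is the task log there (the UPID is whatever the task list prints for the reader task):

proxmox-backup-manager task list           # running and recent tasks on the PBS host
proxmox-backup-manager task log <UPID>     # follow the log of the reader task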
The logs of the restore in Proxmox VE so far are:
Task viewer: CT 100 - Restore
recovering backed-up configuration from '<SERVER>:backup/ct/100/2022-09-21T23:00:01Z'
Using encryption key from file descriptor..
Fingerprint...
Just restarted the restore process once more 5 minutes ago.
A task "Datastore data Read Objects ct/100/2022-09-21T23:00:01Z" appeared.
These are the logs so far
2022-10-03T23:41:50+02:00: starting new backup reader datastore 'data': "/rpool/data"
2022-10-03T23:41:50+02:00: protocol upgrade...
Due to a network outage, my cluster tried to start an HA-managed container on another node, which seems to have failed.
The CT was in an errored state, so I deleted the HA entry for that container.
The errored state went away, but the CT was still stopped.
I tried to start the CT. The CT could not start...
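In case someone hits the same thing: a foreground/debug start usually reveals the actual reason a CT refuses to start. A sketch with a made-up CT ID:

pct start 100 --debug
# or the lower-level equivalent, which writes a verbose log:
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log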
I have a 5-node cluster, and most of the time I use:
* Debian Netinstall
* Ubuntu Server
* Other distros
It is quite time-consuming to always have to upload the ISO to a node again when I want to mount it on another node. Most of the time, I remove the downloaded ISOs from my laptop's Download...
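A workaround that avoids re-uploading is a shared storage with content type "iso" that all nodes can see. A sketch with an NFS share (server, export path and storage name are made up):

pvesm add nfs iso-share --server 192.168.1.10 --export /srv/iso --content iso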
I have a PVE cluster with 4 nodes.
I want to do hardware maintenance on a node.
To speed up migrating the node's resources (VMs, CTs, etc.) to other nodes, I would like a subcommand of pvecm that guides me through all the usually necessary steps before taking down a node.
* Migrate VMs
*...
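Until a guided subcommand exists, a rough way to drain a node from the shell could look like this (target node name is made up; assumes shared or replicated storage, run on the node being emptied):

# online-migrate all VMs on this node
for vmid in $(qm list | awk 'NR>1 {print $1}'); do qm migrate "$vmid" nodeB --online; done
# restart-migrate all containers on this node
for ctid in $(pct list | awk 'NR>1 {print $1}'); do pct migrate "$ctid" nodeB --restart; done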
Here is my /etc/pve/user.cfg
user:root@pam:1:0:::mail@example.com:::
user:web@pve:1:0::::COMMENT.::
group:Remote-Maintenance:web@pve::
role:Allow-BackupNodePowerMgmt:Sys.PowerMgmt...
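For context, the same entries expressed as pveum commands; the ACL path in the last line is my assumption about how to limit power management to a single node:

pveum role add Allow-BackupNodePowerMgmt --privs Sys.PowerMgmt
pveum group add Remote-Maintenance
pveum user modify web@pve --groups Remote-Maintenance
pveum acl modify /nodes/nodeA --groups Remote-Maintenance --roles Allow-BackupNodePowerMgmt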
Thanks for the quick advice on the hooks.
The documentation says
"You can find an example in the documentation directory (vzdump-hook-script.pl)."
Could you please state in the docs where the documentation directory is located? A quick Google search did not give me any clue about that.
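In the meantime, a way to locate the example without knowing the path up front, plus how a hook is wired into vzdump (the hook path in the comment is made up):

dpkg -L pve-manager | grep vzdump-hook     # shows where the package installed the example script
# then point vzdump at a copy of it in /etc/vzdump.conf:
# script: /usr/local/bin/my-vzdump-hook.pl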
What I want to achieve is a user that can only view data in the web UI and power only certain nodes on and off, to save power and to avoid accidental poweroff of nodes that I can't power back on remotely with etherwake.
I have a cluster with nodes A, B, C and D.
I have a PVE Authentication Server...
Let's say I have a 3-node PVE cluster.
Node 1 contains
* VM100
* CT101
* CT102
Node 2
* VM104
* CT103
Node 3
* VM105
I also have a single PBS host added as storage to the cluster.
I want to create a daily backup job that backs up all VMs
To reduce system load on the backup server which has...
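For reference, a per-node sketch of what such a job boils down to (storage name and bandwidth cap are made up; the schedule itself would normally live in the Datacenter > Backup job rather than cron):

vzdump --all --mode snapshot --storage pbs-store --bwlimit 51200
# a global cap can also be set in /etc/vzdump.conf:
# bwlimit: 51200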