I'm very worried because I cannot restore several LXCs from a backup made on PBS.
Here is the error I get:
recovering backed-up configuration from 'pbs:backup/ct/109/2021-04-18T05:00:02Z'
Formatting '/mnt/p03ssd/images/109/vm-109-disk-0.raw', fmt=raw size=26843545600
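For reference, a CLI restore of that archive would look roughly like this; the storage names are taken from the log above, so treat them as assumptions, not a verified command for this setup:

```shell
# Restore CT 109 from the PBS archive shown in the log.
# 'pbs' is the Backup Server storage ID, 'p03ssd' the target storage (both assumed from the log).
pct restore 109 pbs:backup/ct/109/2021-04-18T05:00:02Z --storage p03ssd
```

Running it by hand sometimes surfaces a fuller error message than the GUI task log shows.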
I set up a TrueNAS Core VM (version: TrueNAS-12.0-U2.1, FreeBSD 12.2-RELEASE-p3) with a 2 TB SSD passed through from the Proxmox host (version: 6.3-6). Inside this VM I created a CIFS share to back up other VMs to. To test it, I mounted the share on Linux and could...
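A manual mount test from a Linux client can be done along these lines; server address, share name and user are placeholders, not the poster's actual values:

```shell
# Test-mount the TrueNAS CIFS share from a Linux host (adjust server/share/user).
mkdir -p /mnt/test
mount -t cifs //truenas.example/backups /mnt/test \
  -o username=backupuser,vers=3.0
```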
Hey guys, I'm in a bit of a pickle.
I'll start by going over my setup:
- Storage server running TrueNAS with 10 Gb networking
- ZFS pools
- One pool with 12x 2 TB drives in raidz2 (storage B)
- Another pool with 3x 400 GB SSDs in a stripe (storage A)
- Single hypervisor
- Boot disk is a 400 GB NVMe...
Hello dear Proxmox nation :)
For my final project I was given the task of evaluating a virtualization cluster.
The cluster consists of 4 nodes, and both the VMs and the ISO files live on a TrueNAS server accessed via NFS.
My problem: I can... a VM...
I have been trying to find this info, but all the examples point to adding a PVE user, and when I try to add the PAM user to the PVE group it says, to no one's surprise, that the user doesn't exist.
And if I add the user in the GUI as a PAM user, that user does not appear when listing users in the...
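For context, a PAM user has to exist as a Linux account on the node first, and is then registered with Proxmox separately. A minimal sketch (the username and role are placeholders):

```shell
# 1) Create the Linux account on the node itself.
adduser alice
# 2) Register it with Proxmox as a PAM-realm user.
pveum user add alice@pam
# 3) Grant it a role, e.g. read-only on the whole datacenter.
pveum aclmod / -user alice@pam -role PVEAuditor
```

The GUI "add user" in the PAM realm only creates the Proxmox side; it never creates the underlying Linux account.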
This weekend I finished configuring most of my personal server (which is for development and fun, so I don't care much about security). It was previously working as a NAS server with FreeNAS and had Rancher as a Docker host VM.
Although I wanted to have a full hypervisor with PCI-E...
Yesterday we upgraded our entire Proxmox cluster from PVE 5.4-5 to PVE 5.4-15 and Corosync to 3.x. Since then, NFS mounts on some nodes are occasionally unavailable. Anyone else having similar problems? We are now deleting/re-adding the NFS mounts and hope that will resolve the issue.
I see a lot of organizations using NFS instead of iSCSI for VM and data storage. Apparently it eases operations: backups can be done on the VM files directly and do not require a separate server. Is this true? How much performance do you really lose when using NFS instead of iSCSI?
I acquired a Tiny/Mini/Micro PC to be my starter homelab, nothing fancy: a 4-core i5 from 2012 and 16 GB of non-ECC DDR3 with a 1 TB SATA SSD (the SSD cost almost as much as the PC itself).
The goal is for it to serve various purposes, including Pi-hole, backups of other devices, log collection and...
After seeing Backup Server go GA, I was immediately interested in putting it to use in my home/lab environment.
Currently my storage server backs up to warm storage over SFTP with Duplicacy, and it has some amazing compression + deduplication. It's also cross-machine deduplicating, so any machines...
I recently discovered that a lot of people around me are using NFS instead of iSCSI on a PVE platform. The main reason seems to be that an NFS-based solution is easier to back up than an iSCSI one, since on the remote disk everything is a file. Is this a general trend?
What would bring...
One of my nodes occasionally generates a lot of the following messages:
[Wed Jan 15 21:05:03 2021] nfs4_free_slot: slotid 1 highest_used_slotid 0
[Wed Jan 15 21:05:03 2021] nfs41_sequence_process: Error 0 free the slot
[Wed Jan 15 21:05:03 2021] nfs4_free_slot: slotid 0...
We have different setups: a Proxmox 5.2 cluster and a Proxmox 6.1 cluster.
For the last year we have been using the Proxmox 5.2 cluster with NFS storage. Recently we created the Proxmox 6.1 cluster and tried to add the same NFS storage, but we are unable to add it. We are using a NetApp FAS8200...
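One thing that changed between PVE 5 and 6 is the default NFS version negotiation, so pinning the protocol version when adding the storage is worth trying. A sketch, with storage ID, server address and export path as placeholders:

```shell
# Add the NetApp export while forcing NFSv3 (IDs/paths are examples, not the real ones).
pvesm add nfs netapp-nfs --server 10.0.0.20 --export /vol/pve \
  --content images,backup --options vers=3
# Verify what the server actually exports:
pvesm scan nfs 10.0.0.20
```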
I'll be honest, I have no idea what I am doing. I'm throwing commands at this thing left and right. You ask what I have tried? Lots of things! I mess up terminology all the time. Please help; I assume there is some simple thing I missed.
This is my error - create storage failed...
I would like to share an NFS folder.
The LXC is Ubuntu 18.04.
I tried, but every time I receive an error:
root@nfs-intenral:~# journalctl -xe
-- The result is RESULT.
Jan 12 11:21:13 nfs-intenral systemd: rpc-svcgssd.service: Job rpc-svcgssd.service/start failed with result 'dependency'.
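A guess based on the log above: rpc-svcgssd is only needed for Kerberos-secured NFS, so if the exports use plain sec=sys it can simply be masked so it stops failing the dependency chain:

```shell
# Inside the container: disable the GSS daemons (only needed for Kerberos NFS).
systemctl mask rpc-svcgssd.service rpc-gssd.service
systemctl restart nfs-server
```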
What is the risk of using the soft option? Am I risking one backup or all of them?
Will the NFS storage reconnect automatically, or will I need to remount it manually (with the soft option)?
What is your advice for a backup storage?
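For reference, this is where the option ends up: in the storage definition in /etc/pve/storage.cfg. A sketch with example names and paths:

```
# /etc/pve/storage.cfg -- example NFS storage using the soft mount option
nfs: backup-nfs
        server 192.168.1.50
        export /mnt/tank/backup
        path /mnt/pve/backup-nfs
        content backup
        options soft
```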
Fresh install of Proxmox 6.3-3 on Debian 10 (4.15.18-1-pve).
Trying to make an NFS shared folder for containers:
Allow NFS in the AppArmor profile /etc/apparmor.d/lxc/lxc-default-cgns:
mount options=(rw, bind),
Set up /etc/exports with...
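A minimal /etc/exports entry for such a share might look like this; the path and network are placeholders, not the poster's actual values:

```
# /etc/exports -- export /srv/share to the LAN, root mapped to nobody
/srv/share 192.168.1.0/24(rw,sync,no_subtree_check,root_squash)
# apply with: exportfs -ra
```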
I noticed that on each node the NFS shares are still mounted after removing an NFS share under Datacenter > Storage.
There is still an open bug report (https://bugzilla.proxmox.com/show_bug.cgi?id=1016).
My solution was to unmount the storage on each node, after I had done...
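The per-node cleanup can be done with a lazy unmount of the mount point Proxmox created; the path below is an example:

```shell
# On each node: lazily unmount the leftover share under /mnt/pve.
umount -l /mnt/pve/old-nfs-share
# Confirm nothing is still mounted there:
findmnt /mnt/pve/old-nfs-share || echo "gone"
```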
I'm trying to mount an NFS share in a privileged container. I can mount the share and see all the folders and files, but if I try to copy a file to the container I get a "stale file handle" error. If I mount the share in a VM, I can use it without problems.
SOLVED: I made some...
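A common alternative worth noting (not necessarily the poster's actual fix): mount the share on the host and bind-mount it into the container instead of mounting NFS inside it. VMID and paths below are examples:

```shell
# On the host: mount the NFS share, then bind it into CT 101 as a mount point.
mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/media
pct set 101 -mp0 /mnt/media,mp=/mnt/media
```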