Can anyone see an issue with turning off the PBS when not in use?
I would set the power-on state in the BIOS so the machine turns on when power is restored.
The server would run a cron job to shut itself down, and then the PDU would remove power. Power would be restored 1 hour before it is needed...
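A minimal sketch of the shutdown side, assuming a Debian-based PBS host; the time and path are placeholders, not a recommendation:

```shell
# Hypothetical root crontab entry (edit with: crontab -e)
# Shut the PBS host down cleanly at 02:00, before the PDU cuts power.
0 2 * * * /sbin/shutdown -h now
```

The PDU's own schedule would then cut and restore power around that window.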
I have heard of something similar caused by a run state on some CPUs not changing as it should during a reboot. I cannot find it right now, but does the BIOS have a setting for TDP or power levels? If it does, set it to max and try it.
Does PBS support a physical Ubuntu 24.04 client backing up to it?
Many thanks, the docs state Debian, but this is an older version.
I did read several old posts, but found no joy.
I do not know if this will help, and you will likely have to play with it. I had a similar throughput issue and did the following, with some help from this forum.
https://forum.proxmox.com/threads/vm-as-firewall-through-put-limits.139673/#post-686293
A follow-up after the OPNsense 24.1 release. Let's just say it blew things up.
Now the following settings need to be off, under System -> Settings -> Tunables:
net.inet.tcp.tso = 0
net.inet.udp.checksum = 0
Before, I had them on (set to 1).
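For reference, the same tunables can be checked and set from the OPNsense (FreeBSD) shell; a sketch, assuming shell access to the firewall VM (the GUI tunables are what make the change persistent):

```shell
# Check current values
sysctl net.inet.tcp.tso
sysctl net.inet.udp.checksum
# Disable them on the running system
sysctl net.inet.tcp.tso=0
sysctl net.inet.udp.checksum=0
```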
Thank you _gabriel! It did not dawn on me to look under other firewalls in the docs.
The note at the bottom of the pfSense section states to disable checksum offloading. Instructions for OPNsense can be found here. https://teklager.se/en/knowledge-base/opnsense-performance-optimization/
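On the host side, checksum offloading on an interface can be inspected and disabled with ethtool; a sketch, assuming a Linux host and a placeholder interface name:

```shell
# Show current offload settings (interface name is an assumption)
ethtool -k vmbr0 | grep -i checksum
# Disable TX/RX checksum offloading on that interface
ethtool -K vmbr0 tx off rx off
```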
This...
I tested an OPNsense firewall virtualized in Proxmox. The box has 1 Gb NICs, but I am unable to get speeds above 100 Mbps through it.
Is there a maximum bandwidth limit that a VM/CT is allowed to use?
First off, this is not about running Backup Server as a Proxmox VM/CT.
I am evaluating this as a solution for a 1 PB storage cluster. The cluster will include 4 HP Gen10 servers, each with 24 12 TB drives.
I need to know if the backup server can be installed across all 4 servers, to act...
Thank you Chris! I appreciate the help.
I was able to get it to back up to the backup server, but while testing a restore I am unable to even list the snapshots.
Test box output below. I have tried the full FQDN and different users, as well as including the port.
Ideas, anyone?
[User1@Arch-test ~]$...
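For comparison, a minimal snapshot-listing invocation with proxmox-backup-client; the user, host, port, and datastore here are placeholders, not your actual values:

```shell
# Repository format is user@realm@host:port:datastore (all values are assumptions)
export PBS_REPOSITORY='user1@pbs@backup.example.com:8007:store1'
# List the snapshots the client can see in that datastore
proxmox-backup-client snapshot list
```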
Until this morning I had not had a need to do this, but now we do. It looks simple for Debian versions, but I did not find anything when searching for Arch.
I would also like to know how it handles multiple partitions and drives.
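On the multiple-partitions question: as I understand it, the client treats separate filesystems as separate archives within one backup run; a sketch with placeholder repository, archive names, and mount points:

```shell
# Back up the root filesystem and a second mounted drive as two pxar archives
# (repository, archive names and mount points are all assumptions)
proxmox-backup-client backup \
    root.pxar:/ \
    data.pxar:/mnt/data \
    --repository user1@pbs@backup.example.com:store1
```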
Thank you,
I was finally able to clone the CT, and the clone can be backed up. I then deleted the problem CT.
I guess I'm good until the next time. Thanks for your help.
These logs are pretty big. In ceph.log there are thousands of the active+clean lines, but I do not see anything labeled as an error. I started a backup, had it fail, and then went through the log. The log itself is concerning in that it just cycles over and over. What am I actually looking for...
The storage is a Ceph pool across 18 drives on 3 nodes, and it is reporting healthy.
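To cut through the noise, the repetitive status lines can be filtered out before searching for problems; a sketch against a made-up excerpt (the log lines below are illustrative, not copied from a real ceph.log):

```shell
# Create a small illustrative log excerpt (contents are made up)
cat > /tmp/ceph-sample.log <<'EOF'
2023-05-29 00:10:01 mgr pgmap: 512 pgs: 512 active+clean
2023-05-29 00:12:44 cluster [ERR] example error line
2023-05-29 00:15:02 mgr pgmap: 512 pgs: 512 active+clean
EOF
# Drop the repetitive active+clean status lines, then look for trouble
grep -v 'active+clean' /tmp/ceph-sample.log | grep -iE 'err|warn|fail'
```

The same two-stage grep can be pointed at the real ceph.log to surface anything that is not routine status output.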
arch: amd64
cores: 4
features: nesting=1
hostname: node006
memory: 4096
net0...
I am getting the following error on a single CT. I just switched to Backup Server. Everything else seems OK so far.
INFO: Starting Backup of VM 706 (lxc)
INFO: Backup started at 2023-05-29 00:16:11
INFO: status = running
INFO: CT Name: node006
INFO: including mount point rootfs ('/') in backup...