Hello to all,
After today's upgrade from 7.2-11 to 7.2.14, LXCs won't boot:
run_buffer: 321 Script exited with status 2
lxc_init: 847 Failed to run lxc.hook.pre-start for container "109"
__lxc_start: 2008 Failed to initialize container "109"
TASK ERROR: startup for container '109' failed...
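In case it helps others hitting the same thing, one way to see what the pre-start hook is actually failing on (a debugging sketch, using container ID 109 from the log above) is to start the container in the foreground with debug logging:
lxc-start -n 109 -F --logfile /tmp/lxc-109.log --logpriority DEBUG
# /tmp/lxc-109.log then shows the hook's output and the exact reason for "exited with status 2"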
That solution solved my issue too, so thank you!
I'd like to add the following:
My home-lab setup consists of PVE & PBS on the same Proxmox host. PBS has 2 datastores: "pbs-local" as the local one (a local ZFS dataset) and "pbs-nfs" as an NFS share on a Synology NAS. So VMs from the local zfs-vm dataset are...
Hi to all,
today I did a 3-node cluster upgrade from 7.1.x to 7.2.3 (with community subscriptions) and the process broke on every node. After apt update / dist-upgrade (which obviously finished with errors) I rebooted the host, but then the Open vSwitch network went down and I had to switch back to a Linux bridge...
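For reference, the fallback I mean is just a plain Linux-bridge stanza in /etc/network/interfaces; this is only a sketch, and the address, gateway and NIC name (eno1) are placeholders rather than my real values:
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0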
One more thing... backup speed was not an issue at all: PBS was receiving data at 950 Mbps while backing up 1 VM, but restoring the same one ran at only 350 Mbps :(
Regarding everything mentioned above, we can say that PBS is "faster at writing than at reading"... which is not that common...
There is one more thing to point out (after further "combination" testing):
My cluster consists of two powerful hosts (pve1 & pve2), and the "little" 3rd quorum node (pve3) is a Supermicro A2 with a C3558 Atom and 4 x 4 TB SATA 7200 rpm WD Red Pro drives in ZFS RAID10.
This quorum hardware concurrently runs...
Now I tested with a HW-RAID volume (LSI 3108, 1 volume -> ZFS RAID0) and the result is the same as with 4 x 1 TB server SSDs in ZFS RAID10.
But we are always talking about one target: all these 4 VMs are being concurrently restored from this PBS to the same target... So the target is capable of...
After/during the latest upgrade (with and without subscription) I had a problem with Open vSwitch interfaces:
The node was cut off from the cluster and I had to use IPMI to recover the network interfaces... ifup vmbr0 did the job...
So be careful ....
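If you hit the same thing, two commands worth knowing in this situation (assuming ifupdown2, which is the default on PVE 7) are:
ovs-vsctl show   # check whether the OVS bridges/ports are still defined
ifreload -a      # re-apply /etc/network/interfaces, essentially what ifup vmbr0 did for me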
Hello to all
One of the VMs backed up by PBS is a Linux mail server that stores each mail as one file (Kerio Connect by GFI).
In this case, one of the users has circa 160,000 mail files in the #msgs directory, and while browsing that directory this kind of time-out message pops up.
Is there any chance to...
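One possible workaround (just a sketch, and the repository, snapshot name and in-archive path below are made-up examples, not my real ones) would be to list such a directory from the command line with proxmox-file-restore instead of the web UI, which may avoid the GUI time-out:
proxmox-file-restore list "vm/105/2022-08-01T02:00:00Z" "/drive-scsi0.img.fidx" --repository backup@pbs@192.168.1.50:pbs-local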
Hello to all, I just want to share my recent good experience with PBS...
I upgraded my PVE - PBS lab setup to 7.x / 2.x
One of my VMs is TrueNAS with virtual disk images in a ZFS RAID (0-stripe).
File-restore works for Windows volumes and Linux volumes, but the possibility to read ZFS RAID volumes was...
My PBS setup consists of:
4 x 4 TB SATA (5400 rpm) -> ZFS RAID10, PBS + datastore dataset /rpool/datastore1
2 x SATA-DOM SSD 64 GB -> raidz1 special device
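For reference, a special device like that can be attached to an existing pool roughly like this (a sketch only: the disk paths are placeholders, and a mirror layout is shown because that is the commonly recommended one):
zpool add rpool special mirror /dev/disk/by-id/ata-SATADOM-1 /dev/disk/by-id/ata-SATADOM-2
# metadata (and small blocks, if special_small_blocks is set) will then land on the SSDs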
Backup speed gets up to 85-95% of 1G wire speed, which is pretty satisfactory, but restore speed is a little questionable.
When restoring just...
JonathonFS, thank you for your hints... You're right, I'll post the new (same) question to the PBS Install & Config forum.
BTW... my setup is 'default' and does not have any bandwidth limits... I've just tried a vzdump backup and restore to (and from) the same PBS server (made an NFS export in rpool/nfs...
No, there is no aggregation or bond... just one NIC towards PBS. So the main question would be: is there any way to fill, say, 80% of this 1G wire bandwidth when restoring just one VM at a time? When restoring 2-3 VMs concurrently to the same PVE node, the wire link gets fully saturated at 1 Gbps...
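One thing that helps separate network limits from datastore limits here (the repository string is just an example) is the built-in benchmark:
proxmox-backup-client benchmark --repository backup@pbs@192.168.1.50:pbs-local
# reports TLS upload speed to the datastore plus local SHA256/compression/encryption speeds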
Hello to all,
I've just finished a PBS setup that I'm pretty satisfied with.
The server is a Supermicro 721 case, 4 x Intel(R) Atom(TM) CPU C3558 @ 2.20GHz (1 socket), and a motherboard with 6 SATA ports available.
PBS is installed on ZFS RAID10 (4 x 4 TB SATA / slow 5400 rpm), and the datastore is a local dataset on rpool ->...
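For anyone recreating a similar layout, the dataset-backed datastore part can be set up roughly like this (a sketch; the names match the ones mentioned above, everything else is generic):
zfs create rpool/datastore1
proxmox-backup-manager datastore create datastore1 /rpool/datastore1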
Updated to 7.1.6... but pve-qemu-kvm is still 6.1.0-2, and with this version I still get an error like this (Ubuntu fresh install, VirtIO).
Downgrading to 6.0.0-4 solved this...
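In case it helps, a downgrade like that can be done by installing the specific version and holding it (version string as above; apt policy pve-qemu-kvm shows which versions are available):
apt install pve-qemu-kvm=6.0.0-4
apt-mark hold pve-qemu-kvm   # keep it from being upgraded again until the bug is fixed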
Now I'm afraid of upgrading from 6 to 7... I did that only once, at one client, and had some weird problem with...
OK, it seems we've got stuck here:
Now (with version 6.0.0-4) error messages such as the ones above do not appear anymore...
Waiting for some official response about this issue ...
Thanks in advance
BR
Tonci