We are running into what looks like a storage bug. I know we will have to supply quite a bit more information than fits in this first post.
Yesterday our most important order-entry system had these symptoms:
1- could ssh in
2- could not access data files. We use a C-ISAM database and the...
Hello, while doing an apt dist-upgrade to Bullseye we have seen upgrades get stuck at this point:
Installing new version of config file /etc/systemd/resolved.conf ...
Installing new version of config file /etc/systemd/system.conf ...
Installing new version of config file /etc/systemd/user.conf ...
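In case it helps anyone debugging the same hang: from a second SSH session you can at least see what the stuck process is waiting on. A generic diagnostic sketch, nothing Bullseye-specific:

ps -eo pid,stat,wchan:20,args | grep -E '[a]pt|[d]pkg'
strace -p <PID>

where <PID> is a placeholder for the hung process id from the ps output.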
I am seeing slower migrations with PVE 7 than with PVE 6.
We do have a network issue that I have been trying to track down over the last week, which is probably the cause.
However, I wanted to see whether others have noticed slower migrations.
Thank you for reading this.
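For anyone comparing setups: the migration knobs live in /etc/pve/datacenter.cfg, and these are the entries we are double-checking on our side. The values below are placeholders, not a recommendation:

migration: type=secure,network=10.10.10.0/24
bwlimit: migration=100000

A bwlimit (KiB/s) set once for a slow link and then forgotten would look exactly like our symptom, so that is the first thing we are ruling out.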
I am trying to get --exclude to work. We do not want to put a .pxarexclude file on each system.
proxmox-backup-client backup etc.pxar:/etc home.pxar:/home --exclude home/*/.cache:home/*/.thunderbird/*/ImapMail
I've tried a few different ways over the last hour. Each time I see:
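If I am reading the client docs right, --exclude is a repeatable option, so my next attempt is one flag per pattern instead of the colon-separated list. Untested sketch; note the quoting so the shell does not expand the globs:

proxmox-backup-client backup etc.pxar:/etc home.pxar:/home \
    --exclude 'home/*/.cache' \
    --exclude 'home/*/.thunderbird/*/ImapMail'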
We are moving PBS to new hardware.
The zpools/ZFS datasets have been set up with the same names.
An rsync of the datastore is in progress and will take some hours.
For configuration, besides /etc/proxmox-backup/, is there anything else that needs to be copied over?
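For completeness, this is the copy plan I have so far; the paths are placeholders for our actual pool/dataset names, and I would expect /etc/proxmox-backup/ to carry the datastore, remote, sync, and user/ACL configuration plus the auth keys:

rsync -aHAX --numeric-ids /oldpool/datastore/ newhost:/newpool/datastore/
rsync -aHAX /etc/proxmox-backup/ newhost:/etc/proxmox-backup/

Anything outside of that (network config, /etc/fstab, certificates) we planned to redo by hand, which is exactly why I am asking whether something is missing.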
I saw a similar post in the German forum.
Attempting an upgrade from 6.4 to 7:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Error!
Some packages could not be installed. This may mean that you have...
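Before retrying, I am going to run the upgrade checklist script again and re-verify the repository lines; roughly:

pve6to7 --full
apt update
apt dist-upgrade

The repo entries have to point at bullseye (and the matching pve/ceph repos for your subscription type) before the dist-upgrade, which is where I suspect our error comes from.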
We are considering moving PBS to a RAID-10 zpool using six 4 TB NVMe disks.
I was searching threads on the best file system type to use and saw concerns regarding ZFS. However, I cannot see a more reliable approach than ZFS RAID-10.
Does anyone have another idea to consider?
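To be clear about the layout: by RAID-10 I mean a stripe of three mirror vdevs. A sketch with placeholder device names (in practice we would use /dev/disk/by-id paths):

zpool create backup \
    mirror /dev/nvme0n1 /dev/nvme1n1 \
    mirror /dev/nvme2n1 /dev/nvme3n1 \
    mirror /dev/nvme4n1 /dev/nvme5n1

That gives roughly 12 TB usable from the six 4 TB disks and survives one disk failure per mirror pair.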
I know there is a CLI way, which we used 4-5 years ago, to remove leftover OSDs, mons, etc. that show as out after a node dies abruptly.
Is there a newer way, or is dump/edit/restore of the Ceph config still the way to do it?
We use PVE 6.4 and Ceph Octopus.
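What we did back then was something like the following (IDs and names are placeholders, reconstructed from memory, which is exactly why I am asking whether this is still current on Octopus):

ceph osd purge <osd-id> --yes-i-really-mean-it
ceph mon remove <mon-name>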
I searched, and I think I saw this issue reported before.
We are replacing server hardware but moving the storage over to the new systems.
The first system had a single-disk ext4 PVE install, and all went well.
The next one has ZFS RAID-1 and will not boot; instead a UEFI shell appears.
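In case it matters for the answer: the recovery route I am looking at is proxmox-boot-tool from a rescue shell, along the lines of the following (the device is a placeholder for the ESP partition on the moved disks):

proxmox-boot-tool status
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2

i.e. re-initializing the EFI system partition so UEFI has something to boot from, but I would like confirmation before writing to the disks.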
Our remotes show much more disk usage (approx. 2x) than the source PBS system.
So I am running garbage collection for the first time.
AFAIK the syncs have always been set to 'Remove Vanished'. However, we have had some wrong configurations in the past, so I assume our issue with higher...
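I am starting it from the CLI so the task log is easy to follow (the datastore name is a placeholder):

proxmox-backup-manager garbage-collection start <datastore>
proxmox-backup-manager garbage-collection status <datastore>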
In another Ceph thread:
We want to be as cautious as possible with Ceph cluster upgrades.
This is how we currently do Ceph upgrades, and I am not sure it is cautious enough (a fuller sketch of the sequence follows the list). Please advise:
1- do the mon systems first
2- restart services by
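Spelled out, the whole sequence we follow looks roughly like this; it is a sketch of our current habit, which is exactly what I would like reviewed:

ceph -s                    # confirm HEALTH_OK before starting
ceph osd set noout         # avoid rebalancing during restarts
# per node, mons first, waiting for HEALTH_OK in between:
systemctl restart ceph-mon.target
systemctl restart ceph-osd.target
ceph osd unset noout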
I have two remote PBS systems that syncs are sent to.
One is taking 10 times longer to complete. Internet connection speeds are 50 MB/s+ for other downloads, so connection speed is not the cause of the slowness.
At the end of a transfer I noticed an error about client.log.blob missing. Could...
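To compare the two, I am triggering one-off pulls on each remote by hand and watching the task output (the names are placeholders for our remote/datastore entries):

proxmox-backup-manager sync-job list
proxmox-backup-manager pull <remote> <remote-datastore> <local-datastore>

If the manual pull on the slow remote shows the same 10x difference, at least the sync-job configuration is ruled out.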
I have a test pfSense VM running under KVM.
The issue is that the pfSense KVM is reachable on the LAN from only one VLAN, so from workstations on a different VLAN, pfSense cannot be reached using HTTP or SSH.
All other Linux VMs are reachable.
It runs on this...
We want to prioritize network traffic at our switch. It runs Cumulus Linux and can prioritize traffic based on the 802.1q priority bits.
That way, from the KVM phone system, just the VoIP/voice traffic would get a different value than rsync backups.
Coming from PVE, backups would get a lower value than other types of...
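On the VM/host side, one way I have seen to get the 802.1p bits set is the VLAN egress QoS map in iproute2; a generic sketch with placeholder interface names, not our actual config:

# map skb priority 0 (the default) to 802.1p priority 5 on this VLAN
ip link add link eth0 name eth0.30 type vlan id 30 egress-qos-map 0:5

With something like that on the voice VLAN, the switch side only has to trust the incoming priority values.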
I am looking at this: https://pfsense-docs.readthedocs.io/en/latest/virtualization/virtualizing-pfsense-with-proxmox.html
We have a 7-node cluster.
The nodes connect to a pair of Cumulus Linux switches using LACP.
There is a bridge on the switches with bonds and switch ports assigned...
We have a VLAN with connection issues a couple of times per week. VLAN 3 has most of our server VMs, like LDAP, DHCP, Nextcloud, and 20 others.
We can run traceroute to addresses on all VLANs except VLAN 3. Examples follow:
# vlan 3
# traceroute mail
traceroute to mail (10.1.3.14), 30 hops...
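When it happens, this is what I am collecting from an affected node to rule out a local bridge/VLAN problem (vmbr0 is a placeholder for the bridge name):

bridge vlan show                 # is VLAN 3 still present on the bond/bridge ports?
ip -d link show vmbr0            # bridge and VLAN filtering details
ip neigh show | grep '10.1.3.'   # failed/stale ARP entries toward VLAN 3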
Hello, we have a pair of Mellanox SN2700 switches with Cumulus Linux on order. We'll be replacing a stack of Netgear M5300s. In advance, I am researching how to configure PVE and the switches.
We have a 7-node cluster. Each node is:
- running 4 Ceph NVMe OSDs, 5-6 VMs, and...
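On the PVE side, what I have sketched so far for /etc/network/interfaces is a standard LACP bond under a VLAN-aware bridge; interface names and addresses are placeholders:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 10.1.1.11/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

The matching bond config on the SN2700 side is the part I am still researching.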