I noticed that our two cluster nodes are greyed out now after an update and restart of the server.
I see this in logs:
Aug 4 18:10:57 pve-2 systemd: pvestatd.service: Found left-over process 21897 (vgs) in control group while starting unit. Ignoring.
Aug 4 18:10:57 pve-2 systemd: This...
This happens when I do a resize on LVM partitions, or when anything runs against LVM.
If I don't, then it's fine, but as soon as I try to make a change to an LVM partition it dies. Tried it on a few servers; same issue on pve-kernel-5.4.44-2-pve.
I see these stuck:
root 5938 0.0 0.0 15848 8724 ? D 08:40 0:00 /sbin/vgs...
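So far the only thing I can tell is that pvestatd seems blocked behind that leftover vgs; this is roughly how I have been checking it (nothing fancy):

systemctl status pvestatd             # is the stats daemon still reporting?
ps -o pid,stat,cmd -C vgs,lvs         # D in the STAT column = hung on storage I/O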
I updated Proxmox a few days ago and ever since have been getting weird errors and a high load of 300+.
I noticed this today:
root 26559 0.0 0.0 15832 8720 ? D 08:48 0:00 lvs --noheadings --separator : -o lv_attr,lv_name,data_percent,metadata_percent
root 26635 0.0 0.0...
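From what I understand the load average also counts processes in uninterruptible sleep (D state), so a pile of hung lvs/vgs scans alone would explain a load above 300 with idle CPUs; a rough way I have been counting them:

# count hung LVM scans contributing to the load (D = uninterruptible sleep)
ps axo stat,cmd | awk '$1 ~ /^D/ && /lvs|vgs/' | wc -l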
I have been noticing errors like this in containers' /var/log/messages that should not be appearing there.
For example, in container 230 this error shows up in /var/log/messages:
Apr 5 14:17:10 wit kernel: [2434869.384450] audit: type=1400 audit(1586089030.548:16540)...
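These look like host kernel audit lines that the container's own syslog is picking up. The workaround I am considering, assuming the container runs rsyslog and pulls these in through its kernel-log module, is to disable that module inside the container (please correct me if this hides anything useful):

# inside CT 230: stop rsyslog from reading the kernel log
sed -i 's/^\$ModLoad imklog/#$ModLoad imklog/' /etc/rsyslog.conf
service rsyslog restart    # or: systemctl restart rsyslog, depending on the distro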
We have successfully managed to create cPanel templates with everything required for easy production setup, and they work very well.
Unfortunately, we have to keep editing the /etc/pve/lxc/CT.conf files constantly to add:
is there a...
When backing up a server, we found that the server freezes just when this is invoked, and the load goes from a 0.8 average to around 700+:
Nov 30 12:18:01 cp-6 qemu-ga: info: guest-ping called
Nov 30 12:18:01 cp-6 qemu-ga: info: guest-fsfreeze called
Nov 30 12:18:01 cp-6 qemu-ga: info: executing fsfreeze hook...
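To narrow down whether it is the guest agent or our fsfreeze hook that hangs, I have been driving the freeze/thaw by hand outside a backup window (101 is a placeholder VMID):

qm guest cmd 101 fsfreeze-status    # ask the agent for the current freeze state
qm guest cmd 101 fsfreeze-thaw      # thaw manually if a backup left the guest frozen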
root@pve-1:~# qm migrate 101 pve-6 --online --with-local-disks --migration_type insecure --migration_network 10.0.0.0/24
/dev/sdc: open failed: No medium found
/dev/sdd: open failed: No medium found
2019-11-25 08:55:28 use dedicated network address for sending migration traffic (10.0.0.136)...
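The /dev/sdc and /dev/sdd lines look like LVM scanning empty removable slots (card reader), so probably just noise; if so, I assume they could be silenced with a global_filter in /etc/lvm/lvm.conf, merged into whatever filter Proxmox already ships there (assuming sdc/sdd really hold no LVM data):

# /etc/lvm/lvm.conf, devices { } section
global_filter = [ "r|/dev/sdc|", "r|/dev/sdd|", "a|.*|" ]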
We want to set up a cluster again, but there is one question we are puzzled about, per the documentation here:
As said above, it is critical to power off the node before removal, and make sure that it will never power on again (in the existing cluster network) as it is. If you power on the...
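For reference, the removal itself is done from one of the remaining nodes once the old node is powered off for good; pve-2 below is just a placeholder name:

pvecm nodes            # confirm current membership first
pvecm delnode pve-2    # remove the powered-off node from the cluster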
I have successfully converted a HW RAID 6 partition to RAID 5 for more space:
a0 PERC H700 Integrated encl:1 ldrv:1 batt:good
a0d0 2791GiB RAID 5 1x6 optimal
Trying to increase the lvm-thin volume pve/data with the extra space.
Does anyone know the exact steps? Trying not to break...
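This is the rough sequence I have pieced together, assuming the PV sits on /dev/sda3 (device names would need checking with pvs on the actual box):

# after rescanning so the kernel sees the larger RAID device, grow the
# partition holding the PV (e.g. with parted or growpart), then:
pvresize /dev/sda3                 # assumed PV device, verify with pvs
lvextend -l +100%FREE pve/data     # hand all new extents to the thin pool
lvs -a pve                         # check data/metadata sizes afterwards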
We have two old servers that have been running consumer-grade 1TB SSDs for the last year with no issues whatsoever. However, we would now like to put in Intel DC enterprise SSDs.
Since these are RAID 10 servers, I'd like to know: if we remove one SSD and insert in its place a different SSD of the same size but...
I have a server with the following:
H700 Controller and BBU in writeback (512MB) version
8 x SAS HGST Enterprise 10k drives in RAID 10
Will it help performance if I follow this guide and add 2 SSDs into the mix for LVM caching?
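From the guide, the rough shape would be something like the below, assuming the two SSDs show up as /dev/sdb and /dev/sdc and pve/data is the LV being cached; since the cache pool itself has no redundancy here, I would leave it in the default writethrough mode:

pvcreate /dev/sdb /dev/sdc                              # SSD names are assumptions
vgextend pve /dev/sdb /dev/sdc
lvcreate --type cache-pool -L 800G -n cachepool pve /dev/sdb /dev/sdc
lvconvert --type cache --cachepool pve/cachepool pve/data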
I would like to lock down SSH and the Proxmox interface (port 8006) using the PVE firewall.
Does anyone have exact steps to follow? I don't want to lock myself out, as this particular server is not in the office but in a DC, and I'm too lazy to drive over if anything goes awry :)
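The shape I am considering, with 10.0.0.0/24 standing in for our real admin network, is to whitelist the sources for 22 and 8006 at the datacenter level and keep an existing SSH session open while testing:

# /etc/pve/firewall/cluster.fw  (10.0.0.0/24 is a placeholder admin network)
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 10.0.0.0/24
IN ACCEPT -source 10.0.0.0/24 -p tcp -dport 8006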
We have a Dell R610 server with 6 x 2.5-inch bays.
We are considering one of these options to host a few cPanel VMs on a Proxmox host.
Which should provide the best performance?
2 x 1TB SSDs in RAID 1 and 4 x 2TB SSDs in RAID 10.
Then for the VMs we will put the OS and MySQL on the SSD disks and /home...
What are the risks of making the following a global setting? We host VPS servers ranging from Ubuntu to Debian, but mainly cPanel and CentOS servers.
cPanel requires the following:
lxc.aa_profile = unconfined
in each conf file. We don't want to do this manually each time, so we are considering...
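Short of a real global default, the stopgap I have in mind is just scripting the per-container edit, using the colon form the Proxmox config files expect, something like:

# append the cPanel AppArmor override to every CT config that lacks it
for f in /etc/pve/lxc/*.conf; do
    grep -q '^lxc.aa_profile' "$f" || echo 'lxc.aa_profile: unconfined' >> "$f"
done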
I have a few VPS servers where I see something strange occurring.
Traffic is being counted like this:
Although they're not really using that traffic if we check the switch.
Also, if I check inside the container, it looks like it may be some sort of "internal" traffic being counted as outbound...
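To figure out whether the counters reflect real wire traffic, I have been comparing the container's veth on the host with what the guest itself reports (the veth name and CT ID below are placeholders):

ip -s link show veth230i0               # on the host: counters for the CT's veth
pct exec 230 -- ip -s link show eth0    # inside the CT: what the guest thinks it sent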
We converted CentOS 6 OpenVZ containers to LXC via Proxmox a few months back.
In doing so we got everything to work.
Now cPanel recommends the following be added so crons, etc. work properly:
We did this on all the containers that were converted, but when we try to SSH...
How do we limit disk I/O with LXC containers?
Let's say we have one LXC container constantly hammering disk I/O, so we would like to cap it at a value that leaves enough for all the other containers.
Is there any way to do this?
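The only mechanism I know of is the cgroup blkio throttle keys per container; a sketch assuming cgroup v1, with 253:0 as the backing device (check with lsblk) and 50 MB/s as an arbitrary ceiling:

lsblk -o NAME,MAJ:MIN,MOUNTPOINT       # find the major:minor of the backing device

# /etc/pve/lxc/CTID.conf  (253:0 and 52428800 bytes/s are placeholders)
lxc.cgroup.blkio.throttle.read_bps_device: 253:0 52428800
lxc.cgroup.blkio.throttle.write_bps_device: 253:0 52428800

On hosts already running pure cgroup v2 I believe the equivalent key would be lxc.cgroup2.io.max instead.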
We migrated some OpenVZ containers to LXC on Proxmox 5.2.2 and now we don't seem to be able to SSH in:
Server refused to allocate pty
stdin: is not a tty
We already tried removing or commenting out a line in /etc/rc.sysinit.
That does not seem to work.
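"Server refused to allocate pty" usually points at /dev/pts not being mounted inside the converted container; the first thing I would check from the host (101 is a placeholder CT ID):

pct exec 101 -- mount | grep devpts                 # is devpts mounted inside the CT?
pct exec 101 -- mount -t devpts devpts /dev/pts     # if not, mount it and retry SSH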