I have a Dell PowerEdge R710 I installed with Proxmox VE 5.3. I gave the server to a friend and we were able to get Proxmox to connect properly to the Internet from his Home Router, along with the VMs installed on it.
I have an Nginx VM, a GitLab VM, and a few others. The Nginx VM reverse-proxies the other VMs (so they can be reached from the Internet). The GitLab VM (the main point of the server) hosts our project.
Recently someone pushed too much to the server via Git, and Nginx kept crashing (its disk kept maxing out). I had saved a snapshot of the Nginx VM from before the disk filled up, and rolled back to it. That usually resolves the Nginx crashes ... but not this time.
So I tried something more drastic and increased the disk space on the Nginx VM by 1 GB:
1. In Proxmox:
qm resize 100 scsi0 +1G
(100 = Nginx VM ID)

Proxmox then reports the following:
For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
Size of logical volume pve/vm-100-disk-0 changed from 10.00 GiB (2560 extents) to 11.00 GiB (2816 extents).
Logical volume pve/vm-100-disk-0 successfully resized.
WARNING: Sum of all thin volume sizes (6.09 TiB) exceeds the size of thin pool pve/data and the size of whole volume group (836.12 GiB)!
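For context, my understanding is that this over-provisioning warning by itself is usually harmless, but if the thin pool actually fills up it can take down the whole host, not just one VM. A minimal sketch of how the pool's real usage could be checked from the Proxmox shell (assuming the default `pve` volume group; adjust the name if yours differs):

```shell
# Show how full the thin pool and its thin volumes actually are.
# Data% / Meta% near 100 on pve/data means the pool itself is exhausted.
lvs -a -o name,vg_name,size,data_percent,metadata_percent pve

# Compare total provisioned thin volume sizes against the real space
# available in the volume group (the mismatch the warning refers to).
vgs -o vg_name,vg_size,vg_free pve
```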
2. In Nginx VM:
sudo /sbin/lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
3. In Nginx VM:
sudo /sbin/resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
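For reference, here is the full flow I believe applies, written out as one sketch. One assumption worth flagging: on a stock Ubuntu install the LVM physical volume usually sits on a partition, so the partition and PV have to be grown before `lvresize` sees any free extents. The device name `/dev/sda2` below is an assumption on my part and should be confirmed with `lsblk` first (`growpart` comes from the `cloud-guest-utils` package):

```shell
# 1. On the Proxmox host: grow the virtual disk by 1 GiB (100 = VM ID).
qm resize 100 scsi0 +1G

# 2. Inside the VM: grow the partition holding the PV, then the PV itself,
#    so LVM can see the new space. /dev/sda2 is assumed - check with lsblk.
sudo growpart /dev/sda 2
sudo pvresize /dev/sda2

# 3. Extend the LV into all free space, then grow the ext4 filesystem.
sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

# 4. Confirm the filesystem actually grew.
df -h /
```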
This caused far more problems, and Nginx kept breaking. I have no idea why, as I'm fairly sure this is the proper way to increase the disk space. Unsure what else could be the issue, I restarted Proxmox ... and now I can't connect to Proxmox remotely at all: not via the Proxmox web GUI, and not via PuTTY over SSH (using the external IP address).
My friend's network is fine, the Dell PowerEdge R710 is powered on, and according to my friend Proxmox itself isn't out of disk space. Remember, all the issues were with the Nginx VM. What could be the problem? I've never had trouble connecting to Proxmox since the server moved to my friend's home and we got it networking with his router properly.
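In case it helps with diagnosis, this is roughly what I plan to have my friend check from the physical console (or iDRAC) next. The commands are standard, but I'm assuming a default Proxmox install:

```shell
# Run from the physical console / iDRAC virtual console on the R710.

# Is the host actually out of space? A full root filesystem or thin pool
# can prevent pveproxy (the web GUI) and other services from starting.
df -h
lvs -a -o name,data_percent,metadata_percent

# Did the network come up with the expected IP address and routes?
ip -br addr
ip route

# Are the web GUI and SSH services running, and what errors did boot log?
systemctl status pveproxy ssh
journalctl -b -p err --no-pager | tail -n 50
```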