Search results

  1. Network disconnect almost daily

    Hi, I have an ongoing issue on one host. The network gets disconnected almost daily; on the node all VMs get marked with a question mark. After restarting the networking service I can see them again, but no guest is able to communicate over the internet and all must be rebooted one by one. I...
  2. Server crash

    hi! It happened once again. The link was not down at all; on the router it was operational at 10 Gb. Here, after I rebooted the machine, the link was still resulting down, until I had rebooted 3 times.
  3. Server crash

    Plenty of this (these are the last just before the reboot):
    Jul 05 06:49:34 pve2 kernel: ll header: 00000000: ff ff ff ff ff ff d6 05 e1 9b fd 08 08 06
    Jul 05 06:49:35 pve2 kernel: IPv4: martian source 172.16.12.90 from 172.16.12.60, on dev eth0
    Jul 05 06:49:35 pve2 kernel: ll header: 00000000: ff...
  4. Server crash

    No, this is not the case. If memory were the issue, I would get a message in the Dell iDRAC.
  5. Server crash

    here:
    df -h
    Filesystem              Size  Used Avail Use% Mounted on
    NVME/subvol-126-disk-0   28G   18G   11G  63% /
    none                    492K  4.0K  488K   1% /dev
    udev                    504G     0  504G   0% /dev/tty
    tmpfs                   504G     0  504G   0% /dev/shm
    tmpfs...
  6. Server crash

    Here it is:
    root@lxc2~# mount | grep tmpfs
    none on /dev type tmpfs (rw,relatime,size=492k,mode=755,uid=100000,gid=100000,inode64)
    udev on /dev/full type devtmpfs (rw,nosuid,relatime,size=528326268k,nr_inodes=132081567,mode=755,inode64)
    udev on /dev/null type devtmpfs...
  7. Server crash

    Hi, there are 4 containers running: 1 x 4 GB, 2 x 12 GB, 1 x 10 GB. The host memory usage is 11.79% (118.81 GiB of 1007.76 GiB).
  8. Server crash

    Hi, I have a server within a cluster that crashes from time to time and becomes totally unresponsive. Attaching the last screen recorded before the crash. Within the cluster it is the only one using LXC containers (a very small number). I am sure I am not out of RAM because there are something like...
  9. Martian source

    The strangest thing is that none of the guests on that node is using the mentioned IPs. All IPs are allocated on other nodes.
  10. Martian source

    In my case I have tons of entries like:
    Jun 24 08:04:25 pve2 kernel: IPv4: martian source 185.XXX.XX.255 from 185.22.XX.XX3, on dev lan
    Jun 24 08:04:25 pve2 kernel: ll header: 00000000: ff ff ff ff ff ff 1e ea 3a 3e 0c f8 08 00
    Jun 24 08:09:26 pve2 kernel: IPv4: martian source 255.255.255.255...
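    Martian-source warnings like the ones in this snippet are controlled by the kernel's `log_martians` sysctl. A minimal sketch for inspecting and silencing them (the interface name `lan` is taken from the log above; the settings apply to any standard Linux kernel):

```shell
# Check whether martian-packet logging is enabled (1 = log, 0 = silent)
sysctl net.ipv4.conf.all.log_martians
sysctl net.ipv4.conf.lan.log_martians   # per-interface; "lan" as in the log above

# Temporarily silence the messages (does not fix the underlying routing issue)
sysctl -w net.ipv4.conf.all.log_martians=0

# Persist across reboots
echo 'net.ipv4.conf.all.log_martians = 0' > /etc/sysctl.d/90-martians.conf
sysctl --system
```

    Note this only silences the log spam; martian packets themselves usually point at an asymmetric-routing or overlapping-subnet problem that is worth chasing down.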
  11. Martian source

    Hi, I am having the same issue. How did you sort it out?
  12. Upgrade from 6.4 to 7.1 issue

    You're right. I did not check Ceph since I am not using it.
  13. Upgrade from 6.4 to 7.1 issue

    Hi, sure, here it is:
    proxmox-ve: 6.4-1 (running kernel: 5.4.162-1-pve)
    pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
    pve-kernel-5.4: 6.4-12
    pve-kernel-helper: 6.4-12
    pve-kernel-5.4.162-1-pve: 5.4.162-2
    pve-kernel-5.4.114-1-pve: 5.4.114-1
    pve-kernel-5.4.106-1-pve: 5.4.106-1...
  14. Upgrade from 6.4 to 7.1 issue

    In a cluster I updated all nodes but one (Enterprise repo). On this specific node, when finalizing the upgrade, I am getting the following message:
    root@pve2:~# apt dist-upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade...
  15. [SOLVED] Bug Problems On Startup Right After Install

    You should edit /etc/default/grub with your favorite editor, then run update-grub. The generated configuration file is /boot/grub/grub.cfg, but you shouldn't edit it directly.
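    The flow described in this reply can be sketched as follows (the kernel parameter shown is only a placeholder example, not something the thread prescribes):

```shell
# Edit the GRUB defaults with any editor
nano /etc/default/grub

# e.g. append a kernel parameter to the default command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet someparam=value"

# Regenerate /boot/grub/grub.cfg from /etc/default/grub
update-grub
```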
  16. slow migration speed

    Hi, I am migrating some test machines between nodes (10 Gb connection). The disk gets transferred pretty fast, but then the VM state transfer for some reason is extremely slow. Here is a migration example:
    2022-02-10 11:09:29 migration active, transferred 1.8 GiB of 2.0 GiB VM-state, 4.1 MiB/s...
  17. Renaming disks

    Hi! In order to be able to make migrations and replications, I would like to rename the disks of a node to have the same names used on the other nodes. Both are of LVM-Thin type. What is the best approach to do it?
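    For LVM-Thin volumes, one possible approach is to rename the logical volume and then point the VM config at the new name. This is only a sketch: the VG name `pve`, the VM ID `100`, and the disk names below are hypothetical examples, and the VM should be stopped first.

```shell
# Stop the VM that owns the disk (hypothetical VM ID 100)
qm stop 100

# Rename the logical volume inside the thin pool's volume group (names are examples)
lvrename pve vm-100-disk-1 vm-100-disk-0

# Update the disk reference in the VM config, e.g. in /etc/pve/qemu-server/100.conf:
#   scsi0: local-lvm:vm-100-disk-0,size=32G
```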
  18. [SOLVED] Bug Problems On Startup Right After Install

    As a workaround, you can set acpi=off in GRUB.
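    Concretely, that workaround would look something like this in /etc/default/grub (a sketch; the existing "quiet" parameter is assumed, and the config must be regenerated afterwards):

```shell
# /etc/default/grub -- add acpi=off to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi=off"

# then regenerate /boot/grub/grub.cfg and reboot:
update-grub
```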
  19. Newly created Windows VM fails to start right off the bat

    I have Windows VMs with 4 or more drives running without any sort of issue.
  20. Windows 2019 CPU issues

    Are you sure a reboot will fix it? Because right now on this machine RAM usage is 581.07 GiB out of 1007.78 GiB.
