Search results

  1. Backup VM failed. LXC works

    I don't think so, as the first thing I tried was creating such a file manually, and I experienced no problems at all.
  2. Backup VM failed. LXC works

    Good question why it happened. In my case I'm just helping a friend manage his system remotely ... I don't know the exact details of the hardware. My friend says he did nothing and backups stopped working. So neither Proxmox itself nor the backup host were updated (he is just not capable of doing it himself...
  3. Backup VM failed. LXC works

    Am in the same boat now suddenly. Any solutions here? UPDATE: I mounted the CIFS folder on the server manually and added it as a Directory storage. That way it works.
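The workaround described in this snippet can be sketched as follows. The server address, share name, credentials file, mount point, and storage ID are all placeholder assumptions, not values from the thread:

```shell
# Hypothetical CIFS server/share and credentials file; adjust to your setup.
mkdir -p /mnt/backup-cifs
mount -t cifs //192.0.2.10/backups /mnt/backup-cifs \
    -o credentials=/root/.smbcred,vers=3.0

# Register the mounted path as a Directory storage in Proxmox VE,
# restricted to backup content:
pvesm add dir cifs-backup --path /mnt/backup-cifs --content backup
```

To survive reboots, the mount would also need an /etc/fstab entry (or a systemd mount unit) so the directory is mounted before the storage is used.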
  4. Proxmox claiming MAC address

    I still run Proxmox 6. On the host in question, only one VM is running, with the following firewall configuration: SMALL UPDATE: For the sake of completeness... The host has two VMs that have the firewall disabled, but those VMs are being used as templates for other hosts and were never up...
  5. Proxmox claiming MAC address

    1. Never used REJECT at all. 2. I have this as the default rule at the datacenter level: 3. Done a long time ago. It seems to be random, different MACs every time; here are examples of the abuse messages:
  6. Proxmox claiming MAC address

    It doesn't. Well, at least it never worked for me. I applied this and messaged DC support - they said it's fine and they don't see any wrong-MAC traffic, but after a few days - same story. So the issue comes and goes. Today they even locked my server (!!!). I applied a firewall rule as suggested...
  7. Proxmox claiming MAC address

    Isn't it enough to add it at the datacenter level?
  8. Proxmox claiming MAC address

    Do you mean I have to add a firewall rule to drop any outgoing (?) packets on port 43 for PVE6?
  9. Proxmox claiming MAC address

    I also have the same problem. Proxmox VE 6.3-3. The firewall INPUT policy was on DROP, as per the default. I also altered the sysctl value (cat /proc/sys/net/ipv4/igmp_link_local_mcast_reports now returns 0). Just received another abuse message :(
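The sysctl change mentioned in this snippet can be sketched as below. The drop-in file name is an assumption for illustration; only the sysctl key itself comes from the post:

```shell
# Disable IGMP link-local multicast reports at runtime (as in the snippet):
sysctl -w net.ipv4.igmp_link_local_mcast_reports=0

# Make the setting persistent across reboots (file name is an example):
echo 'net.ipv4.igmp_link_local_mcast_reports = 0' \
    >> /etc/sysctl.d/99-igmp.conf

# Verify the current value:
cat /proc/sys/net/ipv4/igmp_link_local_mcast_reports
```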
  10. Strange Firewall/ipsec behaviour after upgrading to 6.2-11

    I have recently updated a cluster where a few nodes have pretty similar network setups. Each node is connected to a few external networks over IPsec. And just one node behaves strangely (this is really odd): I can't ping any of the networks that are tunneled through the IPsec. Tunnels are...
  11. [SOLVED] PVE 6.0/corosync over WAN (high latency) - looses sync

    Well, I guess you have to read the documentation, because the questions you ask do not make much sense to me right now... Especially if this is a service some customer will get...
  12. [SOLVED] PVE 6.0/corosync over WAN (high latency) - looses sync

    What do you mean by "boot your VMs"? Of course I am able to. VM transfer works just fine within the GUI.
  13. Huge IO performance degradation between proxmox ZFS host and WS19 VM

    agent: 1
    balloon: 8192
    bootdisk: virtio0
    cores: 2
    cpu: host,flags=+pcid
    ide2: local:iso/virtio-win-0.1.173-9.iso,media=cdrom,size=384670K
    memory: 16384
    name: test-ru
    net0: virtio=0A:63:FA:82:9F:91,bridge=vmbr0
    net1: virtio=3E:44:3D:88:E4:54,bridge=vmbr0
    numa: 1
    ostype: win10
    scsi0...
  14. transform virtio-blk to scsi-virtio

    The method still works :) Remember to change the Boot device in the options though:
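A minimal sketch of a virtio-blk to virtio-scsi conversion from the CLI, matching the boot-device reminder in this snippet. The VM ID (100) and the volume name are hypothetical examples; the real values come from your own `qm config` output:

```shell
# Hypothetical VM ID and volume name; check 'qm config 100' for yours first.
qm set 100 --scsihw virtio-scsi-pci          # use the VirtIO SCSI controller
qm set 100 --delete virtio0                  # detach the disk (it becomes unused0)
qm set 100 --scsi0 local-zfs:vm-100-disk-0   # re-attach it on the SCSI bus
qm set 100 --bootdisk scsi0                  # update the boot device (PVE 6 syntax)
```

The same steps can be done in the GUI by detaching the disk, re-adding it as SCSI, and then changing the boot device under Options, as the post notes.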
  15. Poor performance on ZFS compared to LVM (same hardware)

    Sorry to resurrect an old thread, but I am experiencing the very same behavior nowadays with the latest ZFS 0.8 and PVE 6.1. Described here. Does anybody have a clue?
  16. ERROR: VM 100 qmp command 'guest-fsfreeze-thaw' failed - got timeout

    Is it happening with PVE 6.1? For me it was gone with that... BTW, you don't need to restart VMs when they are stuck on backups: qm unlock 100
  17. Huge IO performance degradation between proxmox ZFS host and WS19 VM

    For more than a week I have been trying to determine the reason for the following IO performance degradation between the Proxmox host and a Windows Server 2019 VM(s). I have to ask for your help, guys, because I've run out of ideas. Environment data: single Proxmox host, no cluster, PVE 6.1-8 with ZFS. A...
  18. Windows VMs bluescreening with Proxmox 6.1

    So, just a short update: I have managed to load an old kernel and am running 5.0.21-5-pve. It has no problems. The current kernel, 5.3.18-2-pve, produces continuous crashes and blue screens for me, with all possible types of messages, under WS2019 on two different nodes. Different CPUs, different systems...
  19. Windows VMs bluescreening with Proxmox 6.1

    Experiencing exactly the same on all WS2019 machines on two different nodes since the update yesterday :( Does anybody have a clue how I can load the previous kernel on a headless machine?
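On a headless GRUB-based PVE host, booting the previous kernel (as the poster in the snippet above later managed to do) can be sketched as below. The exact menu entry strings and the 5.0.21-5-pve version are examples taken from this thread's context; list your own entries first:

```shell
# List the available GRUB menu entries (legacy BIOS path shown):
grep -E "menuentry '" /boot/grub/grub.cfg | cut -d"'" -f2

# Pin the default to an older kernel via the submenu>entry notation
# in /etc/default/grub, e.g.:
# GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.0.21-5-pve"

# Regenerate the GRUB config and reboot:
update-grub
reboot
```

On UEFI/ZFS installs the boot setup differs, so the grub.cfg path and entry names above are assumptions to verify on the actual host.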
