Search results

  1. Linux VLAN interface without restart (reboot)

    Dear community, we have a bare-metal server with 2 NICs. One is the uplink, eno1, and the second, eno2, is on another network. In Proxmox we have vmbr0 on eno1 and vmbr1 on eno2. Now we also need an interface eno2.11 so we can create a vmbr2 on top of it...
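A minimal sketch of what the requested setup could look like, assuming ifupdown2 and the interface names from the post (the bridge address is hypothetical):

```
# /etc/network/interfaces (fragment)
auto eno2.11
iface eno2.11 inet manual

auto vmbr2
iface vmbr2 inet static
        address 10.0.11.1/24
        bridge-ports eno2.11
        bridge-stp off
        bridge-fd 0
```

With ifupdown2 installed, `ifreload -a` should apply this without a reboot; for a quick test, `ip link add link eno2 name eno2.11 type vlan id 11` creates the VLAN device on the fly.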
  2. proxmox-backup-client transport encryption

    Is a backup created on a Debian client with proxmox-backup-client encrypted in transit? I do not mean the encryption of the backup file, but the encryption of the transfer (like with scp, for example). I am asking because I want to back up a client and the PBS is only reachable via...
  3. Container PVE-zsync wrong disk size on target server

    We use pve-zsync to migrate our servers (https://pve.proxmox.com/wiki/PVE-zsync): pve-zsync sync --source 126 --dest 192.168.1.92:zfs --name nvme --maxsnap 2 --limit 50000 --method ssh --source-user root --dest-user root --verbose In Proxmox the disk is configured as 100GB...
  4. [SOLVED] pve-zsync cron job only once a day

    https://pve.proxmox.com/wiki/PVE-zsync How can I configure when the cron job runs (e.g. at 1 a.m.)? Simply edit /etc/cron.d/pve-zsync? Or will that file be overwritten?
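A sketch of such a cron entry, reusing the sync command from result 3 (whether pve-zsync rewrites this file when jobs are changed through the tool is worth verifying before relying on a manual edit):

```
# /etc/cron.d/pve-zsync -- run the job once a day at 01:00
0 1 * * * root pve-zsync sync --source 126 --dest 192.168.1.92:zfs --name nvme --maxsnap 2 --method ssh
```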
  5. Firewall option at datacenter level

    We enabled the firewall option at the datacenter level. Now we can no longer reach the GUI or get in via SSH. How can we turn the firewall off again?
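Assuming console access (e.g. IPMI/KVM) is still possible, one way out is to disable the datacenter firewall directly in its config file; a sketch of the relevant fragment:

```
# /etc/pve/firewall/cluster.fw (fragment)
[OPTIONS]
enable: 0
```

Setting `enable: 0` in the `[OPTIONS]` section switches the datacenter-level firewall off again; `pve-firewall stop` on the node itself is another emergency lever.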
  6. LXC Container / zfs busy and very high load with D (uninterruptible sleep) processes

    We run standalone Proxmox with 20 LXC containers on it. For 7 days one of the containers has been running "amok": it generates very high I/O load and has 2000 uninterruptible sleep processes. The container could not be stopped, so we stopped it with force. We unmounted its disk with umount -l /rpool/subvol-135-disk-0. We want...
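To see where such D-state processes are actually stuck (e.g. in ZFS or network code paths), listing them together with their kernel wait channel can help; a small sketch:

```shell
# List processes in uninterruptible sleep (state D) plus the kernel
# function (wchan) they are blocked in; the header row is kept.
ps -eo pid,stat,wchan:32,cmd | awk 'NR==1 || $2 ~ /^D/'
```

Processes in D state ignore SIGKILL until the blocking I/O completes, which is why the container refused to stop normally.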
  7. Kernel security vulnerability CVE-2020-14386

    Is this vulnerability problematic for a) LXC containers, b) KVM VMs? We would not want a container to be able to break out with it.
  8. cluster21 kernel: nfs: server ... not responding, timed out

    We deleted an external NFS storage. It is no longer listed in Proxmox under Datacenter (cluster) -> Storage either. Nevertheless, 4 of 7 nodes log "May 28 09:45:55 cluster21 kernel: nfs: server .... not responding, timed out" in the syslog. Every minute. On one of these nodes...
  9. Ceph add monitor any valid prefix is expected rather than "192.168.0.0/24, 192.168.1.0/24".

    pveceph mon create Error: any valid prefix is expected rather than "192.168.0.0/24, 192.168.1.0/24". command '/sbin/ip address show to '192.168.0.0/24, 192.168.1.0/24' up' failed: exit code 1 What is the reason? proxmox 6.1-8, ceph 14.2.8 ceph.conf: [global] <------> auth_client_required =...
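The error message suggests the whole comma-separated list is handed to a single `ip address show to ...` call, which accepts only one prefix. Assuming the two networks are meant as public and cluster network respectively (an assumption, since the post truncates the ceph.conf), splitting them may sidestep the problem:

```
# /etc/pve/ceph.conf (fragment) -- one prefix per directive,
# not a comma-separated list
[global]
        public_network = 192.168.0.0/24
        cluster_network = 192.168.1.0/24
```

Ceph itself accepts comma-separated network lists, but the pveceph wrapper on this version apparently does not.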
  10. 8 Node Cluster / Host key verification failed.

    We can migrate between all of our nodes, but from cluster22 to cluster23 it is not working due to "Host key verification failed." Migration from cluster22 to cluster21 works well, and from cluster21 to cluster23 works well too. Check: /usr/bin/ssh -v -e none -o 'BatchMode=yes' -o...
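A typical fix for a single stale host key is to remove it on the source node and let Proxmox redistribute the cluster SSH material; a sketch, to be run on cluster22 (hostname taken from the post):

```
# Drop the stale entry for cluster23 from the shared known_hosts,
# then regenerate/redistribute the cluster keys and certificates.
ssh-keygen -R cluster23 -f /etc/ssh/ssh_known_hosts
pvecm updatecerts
```

On PVE, /etc/ssh/ssh_known_hosts is linked to the cluster-wide /etc/pve/priv/known_hosts, so the stale key is removed for all nodes at once.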
  11. Snapshot hangs if qemu-guest-agent is running / Cloudlinux

    We use KVM. We use CentOS 7 with cPanel (I think it does not matter) and installed qemu-guest-agent. Snapshots were working fine. Yesterday we installed CloudLinux (based on CentOS 7) on this machine. Today we wanted to create a new snapshot; it took over 1 h and we aborted it. Some log of another try...
  12. Limit IO (I/O) in LXC Container / ZFS

    Is there a way in Proxmox to limit the I/O of an LXC container? It is on ZFS. Something like lxc config device set ci root limits.read 30MB / lxc config device set ci root limits.write 10MB does not work; there is no lxc command. We did not find anything in this forum. Thanks so much.
  13. [SOLVED] Limit IO (I/O) in LXC Container

    Is there a way in Proxmox to limit the I/O of an LXC container? It is on ZFS. Something like lxc config device set ci root limits.read 30MB / lxc config device set ci root limits.write 10MB does not work; there is no lxc command. We did not find anything in this forum. Thanks so much.
  14. Proxmox with Corosync+Ceph / how to get multiple "routes" to the Ceph storage

    We have 3 nodes with 2 private NICs: eth0 - 192.168.0.0/24 - corosync link 1, Ceph storage, Proxmox GUI and cluster communication (e.g. migration); eth1 - 192.168.1.0/24 - corosync link 0. We have configured the corosync nodes with 2 rings (ring0_addr: 192.168.1.2 / ring1_addr: 192.168.0.2). This works well...
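For reference, a two-ring node entry in /etc/pve/corosync.conf could look like this sketch (node name modeled on the post, addresses taken from it):

```
# /etc/pve/corosync.conf (fragment) -- one ring per NIC
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.2
    ring1_addr: 192.168.0.2
  }
}
```

Corosync/knet fails over between the rings on its own; Ceph, by contrast, only uses the networks listed in ceph.conf, which is presumably what the truncated question is about.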
  15. [SOLVED] many [KNET ] pmtud: possible MTU misconfiguration detected

    We get many of these errors: Aug 02 07:41:54 storage1 corosync[3283]: [KNET ] pmtud: possible MTU misconfiguration detected. kernel is reporting MTU: 1500 bytes for host 4 link 0 but the other node is not acknowledging packets of this size. Aug 02 07:41:54 storage1 corosync[3283]: [KNET ]...
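A first check for this warning is to compare interface MTUs across all nodes; a quick sketch:

```shell
# Print each interface with its MTU; the values must match across all
# cluster nodes, and any switch in between must forward frames that big.
ip -o link show | awk '{print $2, $4, $5}'
```

`ping -M do -s 1472 <peer>` (1472 bytes of payload + 28 bytes of headers = a full 1500-byte frame) then verifies that the path really carries full-size packets; the peer is whichever host corosync complains about.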
  16. Corosync redundancy over second nic with public ip

    We have 3 nodes. Each has eth0 with a public IP and eth1 with a private one. No more NICs are possible. Private net: 192.168.0.0 - eth0; public: different public IPs, but all in the same datacenter - eth1. We are not able to change this; 2 private networks are not possible due to datacenter restrictions. We set the...
  17. [SOLVED] Proxmox Cluster Ceph OSD Tree in GUI is empty from some nodes

    We use Proxmox with Ceph in a 3-node setup. All is fine, but now suddenly only node1 can show Ceph -> OSDs in the web interface. From node2 and node3 the page is empty. Reload button: no error, but the page is still empty. All other pages are fine, e.g. Ceph -> Configuration or Ceph -> Pools. From node1...
