Search results

  1. VMs not starting

    root@node06:~# qm config 100 agent: 1 boot: cdn bootdisk: scsi0 cores: 2 cpu: EPYC ide2: none,media=cdrom memory: 2048 name: vps.xxxxx.com.br net0: virtio=02:00:00:a3:41:29,bridge=vmbr2,rate=5 numa: 0 ostype: l26 scsi0: stor01:100/vm-100-disk-0.qcow2,cache=writethrough,discard=on,size=40G...
  2. VM not starting after Proxmox upgrade ("bug?")

    Sorry about that; during the night I will restart the servers to test. After about an hour I ran the command below and it resolved the issue: echo Y > /sys/module/kvm/parameters/ignore_msrs
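The `echo Y` fix above is applied live and is lost on reboot. A minimal sketch of making it persistent via the standard modprobe.d mechanism; the helper name and the file name `kvm-ignore-msrs.conf` are my own choices, not from the thread:

```shell
# Persist the KVM ignore_msrs workaround across reboots by writing a
# modprobe option file. Pass an alternate path to try it out safely.
persist_ignore_msrs() {
  conf="${1:-/etc/modprobe.d/kvm-ignore-msrs.conf}"
  printf 'options kvm ignore_msrs=Y\n' > "$conf"
}

# On the affected node (as root):
#   persist_ignore_msrs
#   echo Y > /sys/module/kvm/parameters/ignore_msrs   # apply now, no reboot
```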
  3. VMs not starting

    root@node01:/mnt/pve/stor04/images/324# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 1 NUMA node(s)...
  4. VMs not starting

    root@node01:/mnt/pve/stor04/images/324# pveversion -v proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) pve-manager: 5.4-8 (running version: 5.4-8/51d494ca) pve-kernel-4.15: 5.4-5 pve-kernel-4.15.18-17-pve: 4.15.18-43 pve-kernel-4.15.18-12-pve: 4.15.18-36 corosync: 2.4.4-pve1 criu...
  5. VMs not starting

    I have the same problem after running the update on my nodes.
  6. VM not starting after Proxmox upgrade ("bug?")

    package version: proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve) pve-manager: 5.4-8 (running version: 5.4-8/51d494ca) pve-kernel-4.15: 5.4-5 pve-kernel-4.15.18-17-pve: 4.15.18-43 pve-kernel-4.15.18-12-pve: 4.15.18-36 corosync: 2.4.4-pve1 criu: 2.11.1-1~bpo90 glusterfs-client: 3.8.8-1...
  7. Thin-provisioned qcow2 image

    Good morning, everyone. I have a question; maybe someone can help me. My qcow2 images are not thin-provisioned: the images on the NAS (NFS) are counted at the full disk size rather than the actual space used by the VM, so I always need more space than the clients' VMs actually do...
  8. Drive virtIO to iSCSI

    Will this work for me?

    for a in /sys/class/scsi_generic/*/device/timeout; do echo -n "$a "; cat "$a"; done
    for i in /sys/class/scsi_generic/*/device/timeout; do echo 360 > "$i"; done
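The one-liner above first prints each current SCSI timeout, then writes 360 everywhere. A cleaned-up sketch of the same idea, wrapped in a function (the name is mine) and parameterized so the loop can be exercised against a directory tree other than /sys:

```shell
# Print, then set, the timeout for every SCSI generic device.
# sysroot defaults to /sys; override it to dry-run against a fake tree.
set_scsi_timeouts() {
  secs="${1:-360}"
  sysroot="${2:-/sys}"
  for t in "$sysroot"/class/scsi_generic/*/device/timeout; do
    [ -e "$t" ] || continue                  # glob matched nothing: skip
    printf '%s was %ss\n' "$t" "$(cat "$t")"
    echo "$secs" > "$t"
  done
}

# On the node (as root):  set_scsi_timeouts 360
```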
  9. Drive virtIO to iSCSI

    Good morning, everyone. I need to change my virtual machines' drives from virtio to iSCSI and would like to know if there is any way to do this. I have had a lot of trouble with read-only mode when my storage fails over from master to slave. With virtio I cannot set a timeout for the disks; with iSCSI I can set...
  10. SSL error after joining node to cluster

    I have the same problem. My Proxmox shows this error in the web browser: "Permission denied, invalid PVE ticket". Help us, staff ^^
  11. [SOLVED] NFS Backup storage

    Thanks for helping me, bro. God bless you!
  12. [SOLVED] NFS Backup storage

    Solved, team. Open the ports in iptables on the Proxmox server:

    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 2049 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT

    And SELinux is disabled on the NFS server. I...
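The four ACCEPT rules quoted above (NFS on 2049, the portmapper on 111, both TCP and UDP) are applied live and do not survive a reboot. A sketch that generates the same rules so they can be reviewed or fed into a persistent ruleset; the function name and the iptables-persistent path are assumptions, not from the thread:

```shell
# Emit the ACCEPT rules for NFS (2049) and the RPC portmapper (111),
# for both TCP and UDP, without applying them.
nfs_iptables_rules() {
  for proto in tcp udp; do
    for port in 2049 111; do
      printf 'iptables -A INPUT -p %s --dport %s -j ACCEPT\n' "$proto" "$port"
    done
  done
}

# Apply and persist (as root, assuming the iptables-persistent package):
#   nfs_iptables_rules | sh
#   iptables-save > /etc/iptables/rules.v4
```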
  13. [SOLVED] NFS Backup storage

    Thanks for the reply, bro, but I'm confused. NFS is mounted and I can generate a backup and send it via ssh, but not through the Proxmox panel; it gives this error. Do I need to open some ports in the Proxmox firewall?
  14. [SOLVED] NFS Backup storage

    Hello team, has anyone solved this? "storage is not online (500)"

    root@instancia-my01:~# pvesm nfsscan xx.xx.x.xx
    clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
    command '/sbin/showmount --no-headers --exports xx.xx.x.xx' failed: exit code 1
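Errno 113 (No route to host) from `clnt_create` means the RPC portmapper on the NFS server could not be reached at all, usually a firewall or routing issue, which matches the port-111/2049 iptables fix quoted in result 12. A diagnostic sketch stepping through the layers `pvesm` relies on; the function name is mine, and it assumes `rpcinfo` and `showmount` are installed:

```shell
# Probe an NFS server the way pvesm does, one layer at a time.
nfs_diag() {
  host="$1"
  ping -c1 -W2 "$host" || return 1           # basic IP reachability
  rpcinfo -p "$host" || return 1             # portmapper answering on 111?
  showmount --no-headers --exports "$host"   # the exact call pvesm makes
}

# Usage: nfs_diag <nfs-server-ip>
```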
  15. [SOLVED] Problems after Upgrade

    For me it was worse: after updating, my whole cluster has problems. Load balancing no longer works, and I have to shut down one network interface for the cluster to work again. If I leave both network interfaces connected, the nodes restart endlessly and quorum is never reached. Someone from...
  16. VM's network stopped working

    I have a similar problem, but it affects the nodes. Before the last update everything was fine; I restarted the servers and the problems started. I had bond-tlb set up and working fine; now I have to use only one eth, because if I connect the cable to the other eth port the entire cluster goes down...
  17. pve zfs and drbd

    Failed ^^ it does not work, and I'm using FreeNAS.
  18. [SOLVED] New node

    I solved this by running these commands:

    pvecm updatecerts --force
    reboot

    Thanks, team.
  19. [SOLVED] New node

    I have 7 nodes, but only 6 show in the web GUI.

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
