Search results

  1. potetpro

    [SOLVED] Update Unauthorized, after latest update.

    All is active. OK, so it's working now after I clicked "Check". I did this yesterday without any luck, but now it's working :) (See the sketch after these results.)
  2. potetpro

    [SOLVED] Update Unauthorized, after latest update.

    So after upgrading 4 of our Proxmox servers from 6.0 to 6.1, I get this error:
    starting apt-get update
    Hit:1 http://ftp.no.debian.org/debian buster InRelease
    Hit:2 http://ftp.no.debian.org/debian buster-updates InRelease
    Hit:3 http://security.debian.org buster/updates InRelease
    Hit:4...
  3. potetpro

    Shadow protection SPF files, migrate to VM

    Hello. We have a physical machine running Windows Server 2012. This server runs Shadowcopy backup. Is it possible to migrate/import a full shadow copy image directly into Proxmox? Thanks. (See the sketch after these results.)
  4. potetpro

    Ceph SSD, do I need "discard" or "ssd emulation" in VM settings?

    As the title says, does this do anything? Any good, or any bad? My guess is that Ceph handles this by itself. Thanks :) (See the sketch after these results.)
  5. potetpro

    Proxmox high memory usage on one server.

    It was a 2012 R2 server; we disabled ballooning, which seems to work for now (see the sketch after these results). To free up the space we just migrated the VM to another host, then back again, and the memory was back to normal :)
  6. potetpro

    Proxmox high memory usage on one server.

    Hello. We have one server using more memory than it's supposed to. I have configured ZFS to only use 512 MB of RAM max, and I have configured the Ceph OSDs to use 1 GB each. But still I am missing root@proxmox1:~# arcstat time read miss miss% dmis dm% pmis pm% mmis mm% arcsz...
  7. potetpro

    Ceph reports wrong pool size after upgrade.

    Yes, they are all updated and restarted.
  8. potetpro

    Our server crashed in production while live migrating.

    Oh no no, I was manually migrating the VMs when it crashed. I was trying to recreate the scenario to check if we did anything wrong :) I thought initially that the crash was caused by migrating a VM outside its HA group, but that was not the case.
  9. potetpro

    Ceph reports wrong pool size after upgrade.

    root@proxmox3:~# pvesm status
    Name      Type  Status  Total       Used        Available   %
    ceph-hdd  rbd   active  2824362176  943718976   1880643200  33.41%
    ceph-ssd  rbd   active  1695300161  1342886849  352413312...
  10. potetpro

    Ceph reports wrong pool size after upgrade.

    Is the graph determined by the GUI running on the server I am connected to? What's the service running the web interface? Maybe I can try restarting it. (See the sketch after these results.)
  11. potetpro

    Our server crashed in production while live migrating.

    Thought maybe it was because the new server was not in an HA group, but when trying to recreate the crash in a virtual environment it did not crash. Are the HA groups necessary? How does Proxmox determine which node is best? (See the sketch after these results.)
  12. potetpro

    zfs_arc_max only working on 1 of 4 servers

    Found this post: https://forum.proxmox.com/threads/pve-6-0-cannot-set-zfs-arc_min-and-arc_max.55940/#post-257738 saying he got it working using: pve-efiboot-tool init /dev/device
    I am running ZFS RAID1 on my boot volume; do I run the command against both disks? (See the sketch after these results.)
    Device Start End...
  13. potetpro

    zfs_arc_max only working on 1 of 4 servers

    Yes. All of them, one by one.
    update-initramfs: Generating /boot/initrd.img-5.0.21-1-pve
    Running hook script 'zz-pve-efiboot'..
    Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
    No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
  14. potetpro

    Ceph reports wrong pool size after upgrade.

    Seems to be fine after a second reboot of all of the nodes :)
  15. potetpro

    Ceph reports wrong pool size after upgrade.

    Here is another view of how the patch fixed the issue, and then after a reboot of one of the nodes/servers it went back. Is there a Proxmox server that acts as the master for how the volumes are read?
  16. potetpro

    Ceph reports wrong pool size after upgrade.

    Hi. Just installed the patch; it worked for a little while, then it went back :( The start of this hourly graph showed the correct value, then we rebooted one of our servers, and it went back to the wrong size.
  17. potetpro

    zfs_arc_max only working on 1 of 4 servers

    Hi. My /etc/modprobe.d/zfs.conf on all servers looks like:
    options zfs zfs_arc_max=573741824
    I have tried running update-initramfs -u on all servers, and the result currently is:
    root@proxmox1:~# cat /proc/spl/kstat/zfs/arcstats | grep c_max
    c_max 4 67530100736...
  18. potetpro

    Our server crashed in production while live migrating.

    The new server was updated to pve-cluster 6.0.7 while the rest of the cluster was running pve-cluster 6.0.5. The new server was also running pve-kernel-5.0.21-2, while the others were running 5.0.21-1. Dunno if that has something to do with it.
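
For the "Update Unauthorized" thread (results 1 and 2): assuming the "Check" in result 1 refers to the subscription check button in the GUI, the same re-validation can be done from a node's shell before retrying the repository update. A minimal sketch:

    pvesubscription get       # show the key and its current status
    pvesubscription update    # re-check the key against the subscription server
    apt-get update            # retry the repository refresh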
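
For the ShadowProtect/SPF thread (result 3): Proxmox cannot read .spf images directly, so one plausible path, assuming the image can first be exported or restored to a VHD(X) or raw file with StorageCraft's own tooling, is qm importdisk. The VM ID, file name, and storage name below are assumptions:

    # import the exported image as a new disk on VM 101, stored on the ceph-hdd pool
    qm importdisk 101 server2012.vhdx ceph-hdd
    # the disk then appears as an unused disk on VM 101; attach it from the Hardware tab and set the boot order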
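
For the discard/SSD-emulation question (result 4): RBD images are thin provisioned, so space freed inside the guest only returns to the Ceph pool when the guest issues discards, and those only reach the image if the virtual disk has discard enabled; ssd=1 merely presents the disk as non-rotational to the guest. A hedged example, with the VM ID, bus, and volume name as assumptions:

    qm set 100 --scsi0 ceph-ssd:vm-100-disk-0,discard=on,ssd=1
    # then trim from inside the guest, e.g. fstrim -av on Linux or "Optimize Drives" on Windows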
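
For the high-memory thread (results 5 and 6): a minimal sketch of the three limits mentioned there, assuming the OSD limit is set via osd_memory_target and VM 100 stands in for the affected guest:

    # /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 512 MiB (then run update-initramfs -u and reboot)
    options zfs zfs_arc_max=536870912

    # /etc/ceph/ceph.conf, [osd] section -- roughly 1 GiB per OSD daemon
    osd_memory_target = 1073741824

    # disable ballooning on the guest that misreports memory
    qm set 100 --balloon 0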
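
For the pool-size question in result 10: the web interface is served by pveproxy, and the usage numbers and graphs are collected on each node by pvestatd, so restarting both on the node you are connected to is a low-risk first step:

    systemctl restart pveproxy.service
    systemctl restart pvestatd.service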
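
For the live-migration crash thread (results 8, 11 and 18): HA groups are optional; without one the HA stack may place a resource on any online node, while a group restricts the candidates and ranks them by per-node priority (higher wins). A sketch with node names, priorities, and VM ID as assumptions:

    ha-manager groupadd prod --nodes "proxmox1:2,proxmox2:2,proxmox3:1"
    ha-manager add vm:100 --group prod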
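
For the zfs_arc_max thread (results 12, 13 and 17): the "skipping ESP sync" message in result 13 suggests the boot ESPs were never registered, so the rebuilt initramfs (with the new ARC limit) may never reach the partitions those systems actually boot from. On a ZFS RAID1 boot setup each mirrored disk has its own ESP; the device and partition names below are assumptions, so check them with lsblk first:

    pve-efiboot-tool init /dev/sda2    # ESP on the first mirrored boot disk
    pve-efiboot-tool init /dev/sdb2    # ESP on the second mirrored boot disk
    update-initramfs -u                # the zz-pve-efiboot hook should now sync both ESPs
    # after a reboot, verify the limit took effect:
    grep c_max /proc/spl/kstat/zfs/arcstats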
