Recent content by Jannoke

  1. storage is not online (cifs)

    I have the same problem, except I'm trying to add storage and get an "access denied" error. I can connect from the command line with smbclient just fine. Running Proxmox 8.0.4.
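    A quick way to narrow this down is to run the same share through smbclient and through pvesm and compare; the server, share, and user names below are placeholders, so this is only a sketch:

        # list shares as the storage user (works already, per the post)
        smbclient -L //fileserver -U backupuser

        # let Proxmox itself probe the server; credential or SMB protocol
        # mismatches tend to show up here rather than in smbclient
        pvesm scan cifs fileserver --username backupuser --password 'secret'

        # add the storage; --smbversion can be pinned if the server rejects the default
        pvesm add cifs backup-share --server fileserver --share backup \
            --username backupuser --password 'secret' --smbversion 3.0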
  2. [SOLVED] Remove node from cluster

    I believe the permission denied error came from you not having quorum - you should have at least 3 nodes so that quorum can be established. If you have two, then as soon as one breaks the whole cluster goes read-only, because the remaining node has no confirmation of whether it is "in" or "out". A third node can...
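    For reference, quorum state can be checked on any node with pvecm, and a two-node cluster that has lost a member can temporarily be made writable again by lowering the expected vote count; a minimal sketch:

        # show membership and whether the cluster is quorate
        pvecm status

        # on the surviving node of a broken two-node cluster, allow writes again
        # (temporary workaround; the proper fix is a third node or a QDevice)
        pvecm expected 1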
  3. ZFS on Proxmox and VM

    Putting this out there as experience - I saw around a 2x worse compression ratio if the guest system used XFS and it was put on ZFS storage in Proxmox. Just making an LVM storage on Proxmox and using ZFS in the guest yielded around 2x better compression. This difference probably comes from...
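    The two layouts can be compared with the compressratio property; the dataset names below are examples, not the poster's actual pools:

        # ratio for a zvol backing a guest that formats it with XFS
        zfs get compressratio,volblocksize rpool/data/vm-100-disk-0

        # ratio reported by ZFS running inside the guest
        zfs get compressratio,recordsize tank/data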
  4. Running CEPH? on cheap NVME

    I'm running 10GbE Ethernet. All NVMe drives are in either dual or quad carriers that go into PCIe x8 or x16 slots (using PCIe bifurcation). They either have separate forced cooling or a full aluminium double-sided block heatsink on the whole assembly. Temps are also monitored to ensure that there is no...
  5. Running CEPH? on cheap NVME

    With all respect, that was not the question. "Works" is a vague statement. I'm pretty sure it works on USB flash drives. I'm asking if someone has done some setup on el cheapo drives - 2 drives per node etc. - and what the performance on direct IO (not random IO) would look like.
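    A direct (non-buffered) sequential test of the kind asked about can be run with fio; the target path and sizes are placeholders:

        # sequential write with O_DIRECT and 1M blocks - roughly "direct IO, no random IO"
        fio --name=seqwrite --filename=/mnt/ceph-test/fio.bin --size=10G \
            --rw=write --bs=1M --ioengine=libaio --direct=1 --iodepth=16 --numjobs=1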
  6. Running CEPH? on cheap NVME

    So I'm slowly getting to the point where my small cluster is important enough to need redundancy. I'm already running local storage on ZFS and Samsung entry-level NVMe drives, and performance is great. But I'm looking at moving my mechanical backup to something more "solid". So as I know that Ceph...
  7. Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    Cool.. I was able to upgrade half of the cluster to the 5.19 kernel and saw that the migration issues between AMD and Intel disappeared. But now, a few days later, it seems the 5.19 kernel has been scrapped and replaced with the 6.1 kernel. Oh well. Nice timing on my side. Restart all the testing.
  8. [SOLVED] Remove node from cluster

    I just found that my old node's data was still available under the /etc/pve/nodes path. So just to test it out I created a dummy 1.conf file in the qemu-server subfolder, and it immediately added this node to the list of nodes in the web UI. Of course it shows with a question mark. After removing the...
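    What is described can be reproduced (and undone) from the shell on any cluster member; the node name is a placeholder and the directory should only be touched for nodes that really have been removed from the cluster:

        # leftover state of an already-removed node
        ls /etc/pve/nodes/oldnode/qemu-server/

        # creating any VM config there makes the node reappear (with a question mark) in the web UI
        touch /etc/pve/nodes/oldnode/qemu-server/1.conf

        # removing the whole directory makes the stale node disappear again
        rm -r /etc/pve/nodes/oldnode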
  9. [SOLVED] unknown option 'affinity'

    Just stumbled on that too. Yes - the nodes were on different versions, and it was fixed with an update and reboot of the node. @Talha please mark it as solved as well.
  10. hibernate a VM via qm command

    I can't help much, as I don't understand what information you are seeking on this matter: https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
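    For the thread topic itself, hibernation (suspend-to-disk) from the CLI is done with qm suspend and the --todisk flag; vmid 100 is just an example:

        # write guest RAM to a state volume on the VM's storage and stop the VM
        qm suspend 100 --todisk 1

        # starting the VM again restores it from the saved state
        qm start 100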
  11. [SOLVED] Remove node from cluster

    You probably won't kill the cluster, but you can back it up, remove it, and test whether something goes wrong. You can always put it back. Just don't do it during working hours. Anyway, thanks for the tip on removing the /etc/pve/nodes/<node> folder. I got to the same point where it deleted the node but...
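    The usual sequence for removing a dead node, including the folder cleanup mentioned above, looks roughly like this (node name is a placeholder; run it from a remaining, quorate node):

        # remove the node from the cluster configuration
        pvecm delnode oldnode

        # clean up the leftover node directory so it no longer shows in the web UI
        rm -r /etc/pve/nodes/oldnode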
  12. ZFS issues with disk images after reboot

    Just updated to the latest Proxmox (7.2-11) and started having the same issue. The command described above fixes the issue for the current boot only.
  13. Live migration network performance

    I believe the end speed depends on the single-core speed of the CPU and on the contents of memory. I could achieve around 25-30 Gbit/s over 100GbE SFP links, no RDMA (directly connected 3-node cluster), on an AMD EPYC 7401P (Ceph as storage, so memory copy only). Using jumbo frames might drop some...
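    The dedicated migration link and its mode are set in /etc/pve/datacenter.cfg; the subnet and interface below are examples, and type=insecure skips the SSH tunnel, which is what lets a fast link approach numbers like these:

        # /etc/pve/datacenter.cfg
        migration: type=insecure,network=10.10.10.0/24

        # jumbo frames on the migration NIC (interface name is an example)
        ip link set dev ens1f0 mtu 9000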
  14. VM ZFS dataset consumes all available space

    Had the same problem on a 5x8TB-disk raidz2. 25% of the space was lost, which would have made it the same as having RAID10 instead (the cost of one added drive not considered). That was with the default 8k volblocksize and 128k recordsize. With a 64k volblocksize I got away with 10% of wasted space, and bumping...
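    The waste comes from raidz allocation padding with small volblocksize values; volblocksize is fixed at zvol creation time, so in practice it is set on the storage and only affects newly created disks. Pool, storage, and disk names below are placeholders:

        # check what an existing VM disk was created with
        zfs get volblocksize tank/data/vm-101-disk-0

        # make future disks on this ZFS storage use 64k blocks (existing zvols keep theirs)
        pvesm set local-zfs --blocksize 64k

        # creating a zvol by hand with the larger block size
        zfs create -V 32G -o volblocksize=64k tank/data/vm-101-disk-1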
  15. add node that contains vm's to cluster

    I think you are totally overthinking this, or not thinking at all. I did not say to rename scsi0. I said to rename the volume name on the scsi0 line - it contains your VM id. In my case I have a machine with id 1001 and would like it to be 5001. So: 1) rename 1001.conf => 5001.conf 2) edit the config: scsi0...
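    Spelled out as commands, renumbering 1001 to 5001 looks roughly like this; the node name and the ZFS dataset paths are assumptions, and the VM should be stopped first:

        # 1) rename the config file
        mv /etc/pve/nodes/mynode/qemu-server/1001.conf /etc/pve/nodes/mynode/qemu-server/5001.conf

        # 2) rename the underlying disk so its name matches the new id
        zfs rename rpool/data/vm-1001-disk-0 rpool/data/vm-5001-disk-0

        # 3) point scsi0 at the renamed volume, e.g. scsi0: local-zfs:vm-5001-disk-0,size=32G
        sed -i 's/vm-1001-disk/vm-5001-disk/g' /etc/pve/nodes/mynode/qemu-server/5001.conf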
