Search results

  1. J

    Network drops on new VMs, not old

    For me it was an RTL8111/8168/8411 driver issue. I saw quite a few people with the same problem, and it seems to boil down to power management of the network card. If this was turned off, it would keep working normally (at the expense of slightly higher power consumption at idle). I moved on from this...
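
    A minimal sketch of that workaround, assuming an r8169-driven NIC; the PCI address and interface name below are examples, not from the post:

        # keep the NIC's PCI device out of runtime power management (address is an example)
        echo on > /sys/bus/pci/devices/0000:01:00.0/power/control
        # optionally also turn off Energy-Efficient Ethernet on the interface
        ethtool --set-eee enp1s0 eee off
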
  2. J

    storage is not online (cifs)

    I have the same problem, except I'm trying to add storage and get the access denied error. I can connect from the command line with smbclient just fine. Running Proxmox 8.0.4.
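
    For reference, a hedged sketch of the two steps being contrasted here; server, share, and credentials are placeholders:

        # the share is reachable from the shell...
        smbclient //fileserver/backup -U backupuser
        # ...but adding the same share as Proxmox storage fails with access denied
        pvesm add cifs nas-backup --server fileserver --share backup \
            --username backupuser --password 'secret' --content backup
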
  3. J

    [SOLVED] Remove node from cluster

    I believe the permission denied error came from you not having quorum - you should have at least 3 nodes so that quorum works. If you have two, then as soon as one breaks the whole cluster goes read-only, because the remaining node has no confirmation whether it is "in" or "out". A third node can...
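
    A quick way to see that state with the standard Proxmox tooling; the second command is an emergency measure only, to make a lone surviving node writable again:

        # check cluster membership and whether the node is quorate
        pvecm status
        # emergency only: accept a single vote as quorate on the surviving node
        pvecm expected 1
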
  4. J

    ZFS on Proxmox and VM

    Putting this as experience - I did see an around 2x worse compression ratio if the guest system used xfs and was put on ZFS storage in Proxmox. Just making an LVM storage in Proxmox and using ZFS in the guest yielded around 2x better compression. This difference probably comes because of...
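
    One way to compare the two layouts, a sketch assuming the VM disk is a zvol named rpool/data/vm-100-disk-0 (the name is an example):

        # compression ratio of one VM disk on the Proxmox host
        zfs get compressratio rpool/data/vm-100-disk-0
        # or recursively for the whole dataset tree
        zfs get -r compressratio rpool/data
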
  5. J

    Running CEPH? on cheap NVME

    I'm running 10GbE ethernet. All NVMe's are in either dual or quad carriers that go into PCIe x8 or x16 slots (using PCIe bifurcation). They either have separate forced cooling or a full aluminium double-sided block heatsink on the whole assembly. Temps are also monitored to ensure that there is no...
  6. J

    Running CEPH? on cheap NVME

    With all respect, that was not the question. "Works" is a vague statement. I'm pretty sure it works on USB flash drives. I'm asking if someone has run some setup on el cheapo drives.. 2 drives per node etc., and what the performance on direct IO (no random IO) would look like.
  7. J

    Running CEPH? on cheap NVME

    So I'm slowly getting to the point where my small cluster is getting important enough to have redundancy. I'm already running local storage on ZFS and Samsung entry-level NVMe's, and performance is great. But I'm looking at moving my mechanical backup to something more "solid". So as I know that Ceph...
  8. J

    Opt-in Linux 5.19 Kernel for Proxmox VE 7.x available

    Cool.. I was able to upgrade half of the cluster to the 5.19 kernel and saw that the migration issues between AMD and Intel disappeared. But now, a few days later, it seems the 5.19 kernel has been scrapped, as it was replaced with the 6.1 kernel. Oh well. Nice timing on my side. Restart all the testing.
  9. J

    [SOLVED] Remove node from cluster

    I just found that my old node's data was still available under the /etc/pve/nodes path. So just to test it out I created a dummy 1.conf file in the qemu-server subfolder, and it immediately added this node to the list of nodes in the web UI. Of course it's with a question mark. After removing the...
  10. J

    [SOLVED] unknown option 'affinity'

    Just stumbled on that too. Yes - the nodes were on different versions, and it was fixed with an update and reboot of the node. @Talha please mark it as solved also.
  11. J

    hibernate a VM via qm command

    I can't help much, as I don't understand what information you are seeking on this matter: https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/
  12. J

    [SOLVED] Remove node from cluster

    You probably won't kill the cluster, but you can back it up, remove it, and test whether something goes wrong. You can always put it back. Just don't do it during working hours. Anyway, thanks for the tip on removing the /etc/pve/nodes/<node> folder. I got to the same point where it deleted the node but...
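
    A hedged sketch of that cleanup; "oldnode" is a placeholder for the stale node name:

        # keep a copy before touching anything under /etc/pve
        cp -r /etc/pve/nodes/oldnode /root/oldnode-backup
        # remove the stale node's folder; it disappears from the web UI
        rm -r /etc/pve/nodes/oldnode
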
  13. J

    ZFS issues with disk images after reboot

    Just updated to the latest Proxmox (7.2-11) and started having the same issue. The command described above will fix the issue for this boot.
  14. J

    Live migration network performance

    I believe the end speed depends on the single-core speed of the CPU and the contents of memory. I could achieve around 25-30 Gbit/s over 100GbE SFP links, no RDMA (directly connected 3-node cluster), on an AMD Epyc 7401P (Ceph as storage, so memory copy only). Using jumbo frames might drop some...
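
    For reference, a sketch of enabling jumbo frames on a dedicated migration link in /etc/network/interfaces; the interface name and address are examples:

        auto enp65s0f0
        iface enp65s0f0 inet static
                address 10.10.10.1/24
                mtu 9000
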
  15. J

    VM ZFS dataset consumes all available space

    Had the same problem on a 5x8TB disk raidz2. 25% of the space was lost - it would have been the same as having RAID10 instead (cost of one added drive not considered). That was with the default 8k volblocksize and 128k recordsize. With a 64k volblocksize I got away with 10% of wasted space, and bumping...
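
    On Proxmox the volblocksize of newly created zvols comes from the ZFS storage's blocksize setting; a minimal sketch, assuming the storage is named local-zfs (existing disks keep their old volblocksize and have to be recreated or moved):

        # new VM disks on this storage will be created with 64k volblocksize
        pvesm set local-zfs --blocksize 64k
        # verify on an existing zvol (name is an example)
        zfs get volblocksize rpool/data/vm-100-disk-0
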
  16. J

    add node that conatins vm's to cluster

    I think you are totally overthinking it, or not thinking at all. I did not say to rename scsi0. I said to rename the scsi0 storage volume name - it has your VM id in it. So, like in my case, I have a machine with id 1001 and would like it to be 5001. So: 1) rename 1001.conf => 5001.conf 2) edit the config: scsi0...
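
    A sketch of step 2, assuming the disk lives on a ZFS storage called local-zfs (storage name and paths are examples):

        # in /etc/pve/qemu-server/5001.conf, change the volume name on the disk line:
        #   scsi0: local-zfs:vm-1001-disk-0,size=32G  ->  scsi0: local-zfs:vm-5001-disk-0,size=32G
        # then rename the underlying zvol to match
        zfs rename rpool/data/vm-1001-disk-0 rpool/data/vm-5001-disk-0
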
  17. J

    [SOLVED] ZFS partitions get rebuilt/activated by mdadm

    I am not interested in disabling auto assembly. I was interested in disabling auto assembly for zd*. Since I can't think of a way to make a negative glob (can it even be done?), I just ended up matching scsi and nvme devices. I have never been good at glob/regex. # by default (built-in), scan all...
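
    A sketch of the resulting /etc/mdadm/mdadm.conf line, assuming only SATA/SCSI and NVMe devices should be scanned so that ZFS zvols (/dev/zd*) are never auto-assembled:

        # restrict mdadm scanning to real disks; zvols (/dev/zd*) are ignored
        DEVICE /dev/sd* /dev/nvme*n*
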
  18. J

    [SOLVED] ZFS partitions get rebuilt/activated by mdadm

    It's said even in the Proxmox documentation itself - you can test out a Proxmox cluster inside Proxmox :). It's not uncommon to have these setups if there are some dev machines or migration clones from real-world servers. I sometimes do a test upgrade/reinstallation on Proxmox and then dump the...
  19. J

    add node that conatins vm's to cluster

    Under /etc/pve/qemu-server there are the configs for the VMs. If you want to change the id you can rename the files. root@phoenix:/etc/pve/qemu-server# ls -ll total 25 -rw-r----- 1 root www-data 392 Feb 22 01:39 1001.conf -rw-r----- 1 root www-data 445 Feb 22 01:39 1002.conf -rw-r----- 1 root www-data...
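
    The rename itself is just a file move, a sketch using the IDs shown in the listing (stop the VM first):

        cd /etc/pve/qemu-server
        # the config filename is the VM id, so renaming it changes the id
        mv 1001.conf 5001.conf
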
  20. J

    add node that conatins vm's to cluster

    I believe you just described the Proxmox-recommended way of doing it. Yes - you can back up and then restore to a different ID. You can also clone the current VM; this way you get a new clone with a new id and can delete the old one (if you have the storage space). As a last resort you can edit the .conf files to...
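
    A sketch of the backup-and-restore route with vzdump/qmrestore, reusing the example IDs from above; storage names and the dump path are placeholders:

        # back up the existing VM 1001
        vzdump 1001 --storage local --mode stop
        # restore the dump under the new ID 5001
        qmrestore /var/lib/vz/dump/vzdump-qemu-1001-*.vma.zst 5001 --storage local-zfs
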