Search results

  1. pulse fills syslog with flood messages

    Even though it looks like an intrusion attempt, it is far from that. The service isn't exposed to the internet, the IP comes from the server hosting Pulse, and it begins immediately after I deploy the agent via the script provided by the program itself, once I fill in the Proxmox IP...
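
    A quick way to confirm the connections really do originate from the Pulse host is to watch the live SSH sessions while the messages appear; a minimal sketch, assuming the Pulse server's IP is the 192.168.20.15 shown in the log excerpt below:

        # list established TCP connections on the local SSH port
        ss -tn state established '( sport = :22 )'
        # the peer address column should match the Pulse host
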
  2. pulse fills syslog with flood messages

    Hi. After installing the Pulse monitor for Proxmox (in an LXC), I am flooded with messages like the one below: DemoProx sshd-session[32039]: Connection closed by authenticating user root 192.168.20.15 port 50520 [preauth] These messages are non-stop and the port changes in each line. The preauth tag means it...
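
    To gauge the volume of the flood and who is producing it, the journal can be filtered on the preauth tag; a sketch, assuming a Debian-based host with systemd-journald and the exact message format quoted above:

        # count preauth disconnects in the last hour
        journalctl -u ssh --since "-1h" | grep -c 'preauth'
        # tally the source addresses (the IP is the 4th field from the end)
        journalctl -u ssh --since "-1h" | grep 'preauth' | awk '{print $(NF-3)}' | sort | uniq -c
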
  3. Proxmox with ZFS + SSDs: Built-in TRIM cron job vs zfs autotrim?

    PVE ver. 8.3.5, and I haven't yet figured out how the trim/discard options work in conjunction with each other. I am aware that autotrim is a synchronous trim issued after a block has been deleted on your pool. Since it might have an impact on the performance of the VMs, let's leave this...
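
    Both mechanisms are per-pool and easy to inspect; a minimal sketch, assuming a pool named rpool:

        # check whether autotrim is enabled
        zpool get autotrim rpool
        # enable trimming of blocks as they are freed
        zpool set autotrim=on rpool
        # or run a one-off full trim (what a cron job would schedule)
        zpool trim rpool
        # watch trim progress
        zpool status -t rpool
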
  4. CT migration from one node to another

    Hi. Noticed today that after a backup of an LXC, when I tried restoring it to another node, I got a message: tar: ./etc/vzdump/pct.conf: time stamp 2024-11-03 11:50:15.351796943 is 10308.553838456 s in the future tar: ./etc/vzdump/pct.fw: time stamp 2024-11-03 11:50:15.371796943 is...
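
    Timestamps "in the future" during a restore usually mean the clocks of the two nodes disagree; a quick check to run on each node, assuming the default chrony time service:

        # verify clock, timezone and NTP sync state
        timedatectl
        # show chrony's current offset from its time source
        chronyc tracking
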
  5. Solved - Win VM migration between nodes and domain trust

    Changing it back to its initial value did the trick; the VM started and the user was able to log in to the domain. Bottom line: the issue was the automatic change of the vmgenid value after restore. New edit: tried that with a VM that additionally has an EFI and a TPM storage, and it worked as well. Only...
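
    For reference, the value can be put back from the CLI; a sketch with a hypothetical VM ID of 106 and a placeholder UUID:

        # restore the generation ID recorded before the move
        qm set 106 --vmgenid 11111111-2222-3333-4444-555555555555
        # (a value of 1 would instead auto-generate a fresh one)
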
  6. Solved - Win VM migration between nodes and domain trust

    If you want to see it before you change it, wouldn't that be something like qm get ID --vmgenid 106, assuming the VM has an ID of 106? Tried a couple of combinations to no effect.
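
    As far as I know qm has no get subcommand; reading the current value would go through the config dump instead; a sketch, again assuming VM ID 106:

        # print the stored generation ID from the VM configuration
        qm config 106 | grep vmgenid
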
  7. Solved - Win VM migration between nodes and domain trust

    Is vmgenid visible somewhere in the GUI, or only inside the .conf file of the VM?
  8. Solved - Win VM migration between nodes and domain trust

    Unfortunately it not only helps but also raises a lot more questions that shouldn't be necessary (one of them being where the DcCloneConfig.xml is located, since it is mentioned a lot: in the VM, in the hypervisor itself, which path... etc.). The article speaks of moving the DC itself rather than a...
  9. Solved - Win VM migration between nodes and domain trust

    Hi. Yesterday I started the process of transferring all VMs from the old node to the new one. The VMs include a Domain Controller, a File Server, an SQL Server, two Win 10 & 11 machines for some services, and an RDS server. All servers are based on the Win2019 OS. The hardware is different; the servers...
  10. Blocksize / Recordsize / Thin provision options

    This is the "after" step. What you think about the results is the point here.
  11. Blocksize / Recordsize / Thin provision options

    For me there is a winner; I just need others to replicate that assumption as well. I'm not a storage architect; that field is a job by itself. ashift 12 and a block size of 8K instead of the default 16K: in almost all situations it gives at least slightly better IOPS and, more importantly, lower latency...
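
    In Proxmox the zvol block size for new disks comes from the storage definition, so the experiment is easy to reproduce; a sketch, assuming a ZFS storage entry named local-zfs (the setting only affects newly created disks, since volblocksize is fixed at creation):

        # default volblocksize for new zvols on this storage
        pvesm set local-zfs --blocksize 8k
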
  12. Blocksize / Recordsize / Thin provision options

    Well, with examples this time, I believe that my initial urge for an 8K block size and ashift 12, rather than the default 16K, was correct according to my plain and simple calculations..... I guess. So I ran Iometer (I couldn't see the benefit of just benching the underlying raw storage) inside a WinServ2019...
  13. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    That's what I thought, not what you meant. Yes... because only then will the KVM service restart too and pick up the changes. Why were you so cautious about a restart? It's not that bad. I do it once every few weeks for Proxmox updates and such.
  14. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    The part where cache=none would give just max performance. So with cache=none the guest OS's cache communicates not with the host cache layer but with the ZFS file system directly, and that gives reliability? That is the part where I thought it was the opposite. I was <<wrongly>> aware that that is why you choose...
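
    For context, the cache mode is a per-disk option and can be switched for comparison; a sketch with a hypothetical VM 106 whose first disk is scsi0 on local-zfs (re-specifying the same volume keeps the disk and only changes the option):

        # switch the existing disk to cache=none
        qm set 106 --scsi0 local-zfs:vm-106-disk-0,cache=none
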
  15. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    Why? What is the difference between the 2 shutdown modes? Still, I've been using this option (I mean write back) as good practice. Putting <<none>> next to <<This gives max reliability, saves Ram by not double storing data in Ram and does not lie to the guest os.>> just doesn't glue together for me. Care to...
  16. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    Mine as well. I was under the impression that write back was the safe option for data integrity and that it was a specific option for Win VMs. For Linux it was set to off. I observed this in many configuration videos. Can you set it to off afterwards? I guess yes, but no harm in double-checking...
  17. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    For all of you running Win VMs (especially servers), which option do you use for cache: write back, write through, or none?
  18. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    Any specific reason you had that disabled in the first place? I noticed that if you choose SCSI Controller Single for the SCSI Controller, it is auto-enabled by default. I've also read that single has better performance, though without any mention of why.
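
    The auto-enabled option is presumably IO Thread; the CLI equivalent would look something like this sketch, again with a hypothetical VM 106:

        # one dedicated controller per disk instead of a shared one
        qm set 106 --scsihw virtio-scsi-single
        # give the disk its own I/O thread
        qm set 106 --scsi0 local-zfs:vm-106-disk-0,iothread=1
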
  19. Blocksize / Recordsize / Thin provision options

    I know about fio in general. Since it is installed in the OS, how come it benches the raw storage? It seems to bench the OS layer where the data gets accessed. You mentioned it in your link as well: <<you should always benchmark the final layer on which you access your data>>. So in my case...
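
    As a concrete example of "the final layer": fio run inside the guest against a file on the guest filesystem exercises the whole stack (guest FS, virtual disk, host cache mode, ZFS); a sketch with made-up sizes, assuming a Linux guest (on Windows the ioengine would differ):

        # 4K random writes through the full guest storage stack
        fio --name=guesttest --filename=testfile --size=1G \
            --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
            --direct=1 --runtime=60 --time_based --group_reporting
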
  20. Blocksize / Recordsize / Thin provision options

    OK, but with SSDs not advertising their true internal page size most of the time (all of the time, now that I come to think about it), the ashift value would be wrong nevertheless. I've already read posts saying that with SSDs you should always go with ashift=13 -> 8K block size for disks, but how would you know...
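
    What the drive does advertise can at least be read out, and ashift is simply log2 of the sector size (ashift=12 -> 2^12 = 4096 bytes, ashift=13 -> 8192); a sketch for checking, with /dev/sdX as a placeholder:

        # logical and physical sector sizes the SSD reports
        lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC
        # ashift is set at pool creation and cannot be changed later
        zpool create -o ashift=13 tank /dev/sdX
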