Search results

  1.

    Storage for small clusters, any good solutions?

    Let's assume that you have an application that runs on a Linux VM. That application is mission critical to you and your business. Now you could set up the hypervisor, network, storage and so on to be HA, which in turn makes your VM HA. If a node goes down, another node will take over the...
  2.

    Storage for small clusters, any good solutions?

    That is the money quote. But IMHO even more important: in many, many cases, making the server HA was the wrong approach to begin with. Making the application itself HA would be way cheaper, offer way better performance, be less complex, and be easier to maintain.
  3.

    SSD ZFS Pool keeps increasing in usage space

    No worries. IMHO the IT world is complicated enough, so I like to keep stuff K.I.S.S. In your situation I would either A: change RAIDZ to a 3-way mirror, get another TrueNAS system with HDDs as a ZFS storage system and put files on NFS/SMB shares from there. B: Add a few HDDs to your Proxmox...
  4.

    SSD ZFS Pool keeps increasing in usage space

    Let me try a different way. Imagine how you write stuff. You have a 16k volblocksize. So Proxmox offers 16k blocks to the VM. Now your VM fills one such block. We assume that the data is totally incompressible, so we really have to write 16k. How would that look from a storage perspective...
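The storage-perspective math the snippet is heading toward can be sketched in a few lines. This is a simplified model of RAIDZ allocation, not the real on-disk code: it assumes ashift=12 (4k sectors) and the usual rule that each allocation is padded up to a multiple of (parity + 1) sectors; the function name and pool geometry are made up for illustration.

```python
import math

def raidz_alloc_sectors(data_bytes, width, parity, ashift=12):
    """Rough RAIDZ allocation for one block: data sectors, plus parity
    sectors per stripe row, rounded up to a multiple of (parity + 1)
    so freed space stays allocatable. A simplification of the real logic."""
    sector = 1 << ashift                       # 4k sectors with ashift=12
    data = math.ceil(data_bytes / sector)      # a 16k block -> 4 sectors
    rows = math.ceil(data / (width - parity))  # stripe rows needed
    total = data + rows * parity               # add parity sectors
    pad = parity + 1                           # round up to a multiple
    total = math.ceil(total / pad) * pad
    return total * sector

# A single 16k zvol block on a 6-wide RAIDZ2 (hypothetical geometry):
used = raidz_alloc_sectors(16 * 1024, width=6, parity=2)
print(used / (16 * 1024))  # 1.5 -> each 16k of data occupies 24k raw
```

With small 16k blocks the parity-and-padding overhead never amortizes, which is why a nominally 1TB VM disk can end up consuming noticeably more than 1TB of pool space, in the same ballpark as the ~1.4TB figure quoted in these threads.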
  5.

    Storage for small clusters, any good solutions?

    Take this with a huge grain of salt. I don't know you or your customers :) IMHO you probably don't need HA. A redundant PSU and local storage are more than enough. And your next part is a good explanation why. That is not automatically real HA. That is what I often see in SMBs and call "Sure this...
  6.

    SSD ZFS Pool keeps increasing in usage space

    I gave you one possible answer, but you chose to ignore it. As a funny coincidence you answered one of my questions by posting this picture: your volblocksize is 16k. So my theory was right. So again, every 1TB VM disk will not only use 1TB but roughly 1.4TB. To prevent this you have...
  7.

    Offline USB backup with PBS – removable datastore too slow, which alternatives do you use?

    That is quite an edge case, and it cannot be implemented with PBS. PBS cannot do traditional incremental backups. So if you want to replicate it 1:1 like with Veeam, you will probably fail. If you are open to rethinking the system, it is solvable. That is the cleanest solution...
  8.

    SSD ZFS Pool keeps increasing in usage space

    You would have to provide a lot more information than what you posted here. Otherwise we have to make educated guesses ;) So just a cluster, no HA? So you use ZFS and the VMs use local RAW disks on ZFS? There are a few problems with that. Short: A: Don't use RAIDZ for VMs! Use a mirror instead. B...
  9.

    PVE to PBS

    https://pbs.proxmox.com/docs-2/user-management.html
  10.

    PVE to PBS

    It depends. Windows combines user and share permissions with the more restrictive AND logic, but for users and groups it uses the less restrictive OR logic. And Synology does this differently again, using the more restrictive AND logic: "your user might...
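The two combination rules mentioned above can be modeled with plain set operations. This is a toy model for illustration only, not a real Windows or Synology API; the function names and permission sets are made up.

```python
def and_logic(*perm_sets):
    """Restrictive rule: you only get rights granted by EVERY layer
    (e.g. NTFS permissions AND share permissions on Windows)."""
    return set.intersection(*map(set, perm_sets))

def or_logic(*perm_sets):
    """Permissive rule: you get rights granted by ANY layer
    (e.g. your user OR one of your groups on Windows)."""
    return set.union(*map(set, perm_sets))

ntfs = {"read", "write"}
share = {"read"}
print(and_logic(ntfs, share))  # {'read'} -- the share caps you

user = {"read"}
group = {"read", "write"}
print(or_logic(user, group))   # {'read', 'write'} -- group adds rights
```

The practical consequence of the AND rule: a generous permission on one layer is silently capped by the most restrictive layer, which is a common source of "but I gave the user full access" confusion.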
  11.

    PVE to PBS

    Of course, why should it not? No need to censor RFC 1918 :) Looks like a permission error. Take a look at this: https://pbs.proxmox.com/docs/ Now what tripped me up the first time setting it up was this paragraph here: Permissions for API tokens are always limited to those of the user. So...
  12.

    PVE to PBS

    You don't have to, but I would highly recommend it. Also make sure it is offsite in case of a fire.
  13.

    Performance Problems

    Sorry, lz4 of course. I think so too. That then leads to a "proper" 1M record plus a 1M record that is only 10% data and 90% zeros. The result is a 2M write without compression. With lz4 it would have been only 1.1M even for incompressible data, because the second...
  14.

    ZFS - shifting ARC to L2ARC

    I would want to emphasize a few points: This is especially relevant for Proxmox, since by default you will only get the small default 16k volblocksize. Instead of having a huge RAW VM disk, which is a pain to back up and has a 16k volblocksize, you could also create a dataset with a 1M record...
  15.

    ZFS - shifting ARC to L2ARC

    Don't worry, that rule is old, bad, and useless. Proxmox will by default just use a max of 10% of your RAM. So if you have 32 GB, that is 3.2 GB. Depends on the use case. L2ARC will just store things that were evicted from ARC. So if your ARC has a hit ratio of 90%+, I would say no L2ARC at all. Good thing about...
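The sizing rule quoted in the snippet is simple arithmetic; here it is spelled out, with the caveat that the exact default fraction and cap are a Proxmox packaging decision and may differ between releases (the 10% figure and the function name are taken from the post, not from the source code).

```python
# Sketch of the ARC sizing rule: recent Proxmox defaults cap
# zfs_arc_max at roughly 10% of installed RAM, replacing the old
# "1 GB of RAM per TB of storage" rule the post calls outdated.
GIB = 1024**3

def default_arc_max(ram_bytes, fraction=0.10):
    """Default ARC cap as a fraction of RAM (illustrative helper)."""
    return int(ram_bytes * fraction)

ram = 32 * GIB
print(round(default_arc_max(ram) / GIB, 1))  # 3.2 (GiB), as in the post
```

The L2ARC advice follows from the same logic: L2ARC only holds blocks evicted from ARC, so if ARC already hits 90%+ of reads, an L2ARC device mostly adds RAM overhead (for its headers) without improving hit rates.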
  16.

    Guest agent not running on Windows 11

    Today I had the issue again with 0.1.271. The strange thing is that it has a clean exit code.

        C:\Windows\System32>sc qc QEMU-GA
        [SC] QueryServiceConfig ERFOLG

        SERVICE_NAME: QEMU-GA
                TYPE       : 10  WIN32_OWN_PROCESS
                START_TYPE : 2   AUTO_START...
  17.

    Performance Problems

    Agreed. That, however, I don't believe. AFAIK the record size is in general, as you correctly mentioned, a max variable, BUT the record size cannot vary within a single file. Storing a file as a 1MB record plus a 16k record is therefore AFAIK not possible. There have to be two 1MB...
  18.

    Performance Problems

    That is IMHO wrong. Assume your recordsize is 1M. PBS can compress a chunk from 4MB down to 1.1M. Now two 1M records have to be written, i.e. 2M of storage with compression off. With LZ4 it would have been only 1.1M. There is practically never a reason not to use LZ4 with ZFS...
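The record math from this thread can be made concrete with a toy model. This is purely illustrative: it assumes full-record allocation without compression and near-perfect compression of the zero-padded tail record with lz4, which matches the argument in the posts but glosses over metadata, ashift rounding, and checksums.

```python
import math

M = 1024 * 1024

def stored_size(file_bytes, recordsize, lz4=False):
    """Toy model: without compression, ZFS allocates whole records, so
    a 1.1M file with recordsize=1M occupies two full 1M records (2M).
    With lz4 the mostly-zero tail record compresses away, so the stored
    size stays close to the actual data size."""
    if not lz4:
        records = math.ceil(file_bytes / recordsize)
        return records * recordsize
    return file_bytes  # lz4 squeezes the zero padding out of the tail

chunk = int(1.1 * M)  # a PBS chunk compressed from 4M down to ~1.1M
print(stored_size(chunk, M) // M)              # 2 -> 2M without compression
print(round(stored_size(chunk, M, lz4=True) / M, 1))  # 1.1 with lz4
```

This is why compression=lz4 pays off even for data that is itself incompressible: it is the padding inside partially filled records that gets compressed away, not the payload.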
  19.

    Guest agent not running on Windows 11

    Today I had the issue again with 0.1.271. It happens a lot less often than with 0.1.285 (this has been running fine since Nov 24), but the underlying issue still seems to be there. Even rebooting did not help. I had to go to Services and start the guest agent. And then I needed to do another reboot so...
  20.

    NFS mount on PVE 8 or PVE 9 keeps needing different ports to be accessible.

    Sorry, I don't really know. This is just a guess: according to mountd:

        -p num or -P num or --port num
            Specifies the port number used for RPC listener sockets. If this
            option is not specified, rpc.mountd will try to consult /etc/services, if...
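If that guess is right, the fix is to pin rpc.mountd to a fixed port so firewall rules do not have to chase a random RPC port after every restart. On current nfs-utils systems this can usually be set in /etc/nfs.conf; a hedged sketch (the port number is arbitrary, check your distribution's rpc.mountd man page for the exact section and key names):

```ini
# /etc/nfs.conf -- pin rpc.mountd to a fixed port
[mountd]
port = 20048
```

After changing it, restart the NFS server services and open that port (alongside 2049 for nfsd and 111 for rpcbind) in the firewall.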