Recent content by freebee

  1. zfs problems on simple rsync

     You are using NVMe, so you can try mq-deadline. The none scheduler is the default for NVMe disks, but it is uncommon to have these problems on NVMe.
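
     A quick way to check and change the scheduler as suggested above, assuming a hypothetical device name nvme0n1 (adjust it to your disk); the active scheduler is the one shown in brackets:

         # Show the current and available schedulers for the device
         cat /sys/block/nvme0n1/queue/scheduler
         # Switch to mq-deadline (not persistent across reboots)
         echo mq-deadline > /sys/block/nvme0n1/queue/scheduler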
  2. zfs problems on simple rsync

    UPDATE: NOOP: A simple scheduler that operates on a FIFO queue without any additional reordering, ideal for devices that already have their own scheduler, such as SSDs. Deadline: Aims to minimize the response time for any I/O operation, giving each operation a deadline before which it must be...
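
     To make a scheduler choice like this survive reboots, one option is a udev rule; a minimal sketch assuming NVMe devices and mq-deadline (the rule file name and device match are examples, adjust them to your hardware):

         # Set the scheduler whenever a matching NVMe device appears
         echo 'ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="mq-deadline"' > /etc/udev/rules.d/60-ioscheduler.rules
         # Reload the rules and re-trigger device events so it takes effect now
         udevadm control --reload-rules
         udevadm trigger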
  3. zfs problems on simple rsync

     Yes. What worked for me: SSDs: some SSDs are not really 4K. When you format an SSD with ZFS the default is ashift 12; I changed it to 9. RAID controllers: some RAID controllers just don't give ZFS the write performance it needs, so when ZFS is faster at writing than the disk, a timeout or lock...
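
     A minimal sketch of creating a pool with ashift 9 and verifying it afterwards; the pool name tank and the device /dev/sdb are hypothetical, and ashift 9 only makes sense if the SSD really uses 512-byte sectors:

         # Create the pool with 512-byte sectors instead of the 4K default
         zpool create -o ashift=9 tank /dev/sdb
         # Confirm the ashift actually used by the vdev
         zdb -C tank | grep ashift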
  4. HA in storage / HA in PROXMOX.

     Good morning everybody. I planned the following scenario and would like to know if anyone here has set up something similar or if something in the services has any limitations. I will have two OmniOS servers, with ZFS and using COMSTAR for iSCSI. The two storage servers have several Intel...
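
     On the Proxmox side, attaching a COMSTAR target usually comes down to adding an iSCSI storage entry; a minimal sketch with made-up names (the storage ID, portal address, and IQN are placeholders):

         # Register the OmniOS/COMSTAR target as an iSCSI storage in Proxmox VE
         pvesm add iscsi omnios-iscsi --portal 192.0.2.10 --target iqn.2010-09.org.example:target0
         # List storages to confirm it shows up
         pvesm status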
  5. Two servers on cluster: Making second server a master.

     Thanks. I just removed the old node using pvecm delnode (this updates corosync) and removed the two_node and wait_for_all parameters from corosync. Then I ran pvecm add on each new node and everything went well. Best regards.
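
     Roughly, the sequence described here looks like this; the node name and IP are placeholders, and /etc/pve/corosync.conf should be edited carefully since changes replicate to the whole cluster:

         # On the surviving node: drop the dead node from the cluster (updates corosync)
         pvecm delnode oldmaster
         # Edit /etc/pve/corosync.conf, remove the "two_node: 1" and "wait_for_all: 1"
         # lines from the quorum section, and increment config_version.
         # Then, on each node being added, join it via the surviving node:
         pvecm add 192.0.2.20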
  6. Two servers on cluster: Making second server a master.

     So, this is the problem. I have more servers to add but I can't. The master server had a problem and I only had the secondary one, so I can't add new servers. Simplifying the question: can I delete the master so that the secondary becomes the main one? Then I can add more servers.
  7. Two servers on cluster: Making second server a master.

     Hello everybody. I have the following problem. There were two clustered servers (two_node: 1). The first server had a problem and was shut down. However, this first server was the master, where the cluster was created. Is there any way to make the second server the master? When I click on cluster, a...
  8. zfs problems on simple rsync

     Hi. My setup is simple: a Proxmox VE host (ZFS on all disks) with a virtualized Proxmox Backup Server. The Proxmox Backup Server is only temporarily inside it. So, inside the virtualized Proxmox Backup Server, I need to rsync one datastore to another (the first disk is on one SSD and the second disk is on another, and both are...
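
     A minimal sketch of such a datastore copy, with placeholder mount points for the two datastores; the trailing slashes matter so that the contents, not the directory itself, are copied:

         # Copy one PBS datastore tree to the other, preserving attributes,
         # hard links, and sparse files
         rsync -aH --sparse /mnt/datastore/old/ /mnt/datastore/new/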
  9. Kernel 5.15 on PVE 8

     Hi. How is 6.5 doing on AMD processors? I have two servers under test, Proxmox 7 and Proxmox 8. The 5.15 kernel is working stably on AMD, but 6.2.16-12-pve (on Proxmox 8) stops sometimes. I tried to make it more stable with some configurations like the ones here (https://wiki.archlinux.org/title/Ryzen), but no...
  10. Move io_uring from default (important)

     Don't be emotional. The forum is technical. Kind regards.
  11. Move io_uring from default (important)

     Hi. Don't take this personally. I just mentioned Meltdown as an example of a problem that most people didn't know about. The fact that your scenario works doesn't mean there are no bugs, so again that is not an adequate response, and it does not change the io_uring problems other people have. If you...
  12. Move io_uring from default (important)

     Meltdown and Spectre drew no complaints from most users either and were still a big problem. But the discussion is about the problems pointed out here. The fact that OpenZFS has a tool to test io_uring does not rule out bugs, and not all conditions were tested (for example, the VM -> backup -> OpenZFS path using...
  13. Move io_uring from default (important)

     Hello. Thank you for the answer. I am just asking not to make it the default for all configurations: it is not designed for use with OpenZFS, it has already been linked to lost data, and it has security problems. Google is even more radical, removing it from the kernel. If we know it could have problems, it is not rational to make it the default option. I'm...
  14. Move io_uring from default (important)

     Hi. It is not .raw or .qcow2. It is ZFS.
  15. Move io_uring from default (important)

     Hi. Thank you for the answer. io_uring is not designed for OpenZFS (the default used by Proxmox) and brings no performance benefit there. https://github.com/openzfs/zfs/issues/8716 There are other problems pointed out in this email: io_uring and POSIX read-write concurrency guarantees...
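
     For anyone who wants to move an individual VM disk off the io_uring default, the per-disk aio option can be set explicitly; a minimal sketch with a hypothetical VM ID and volume name (re-specify any other drive options already set on that line, and note that aio=native expects cache=none or directsync):

         # Switch one ZFS-backed virtual disk from io_uring to native AIO
         qm set 100 --scsi0 local-zfs:vm-100-disk-0,aio=native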
