Search results

  1. Method to Copy PBS Backups Elsewhere?

    Second day of tests.
    mount -t cifs "//192.168.xxx.xx/g" /mnt/smb/gdrive_backup -o credentials=/etc/smbcredentials/backup.cred,domain=WORKGROUP,iocharset=utf8,vers=3.11,noserverino
    fio --name=randrw --ioengine=libaio --direct=1 --sync=1 --rw=randrw --bs=3M --numjobs=1 --iodepth=1 --size=20G...
  2. Method to Copy PBS Backups Elsewhere?

    fio --name=randrw --ioengine=libaio --direct=1 --sync=1 --rw=randrw --bs=3M --numjobs=1 --iodepth=1 --size=20G --runtime=120 --time_based --rwmixread=75
    randrw: (g=0): rw=randrw, bs=(R) 3072KiB-3072KiB, (W) 3072KiB-3072KiB, (T) 3072KiB-3072KiB, ioengine=libaio, iodepth=1
    fio-3.33
    Starting 1...
  3. Method to Copy PBS Backups Elsewhere?

    Hi. I will test another approach: install a Windows machine and use the official Google Drive client there. Then share the Google Drive folder over the SMB protocol and access it from Proxmox Backup Server (mounting it as an SMB share and adding it as a datastore). I think the performance and the verification of backups will be better...
  4. Proxmox Datacenter Manager Offline Replication

    Hi. Is the offline replication between two data centers working? If yes, can someone give feedback? Best regards.
  5. Hard Disk double size when replication is active.

    I read about the thin provisioning. Solved.
  6. Hard Disk double size when replication is active.

    Hi. I have a situation. A virtual machine has a 1450GB disk. When I activate replication, the size grows very much:
    NAME                 USED  AVAIL  REFER  MOUNTPOINT
    SAS0/vm-701-disk-0  2.44T  2.13T  1.01T  -
    # zfs list -t snapshot
    NAME...
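The gap between USED and REFER above is typically the zvol's space reservation plus the snapshots that replication keeps around. A sketch of the thin-provisioning fix mentioned in this thread, assuming a zfspool storage named after the `SAS0` pool shown in the output:

```shell
# Mark the ZFS storage as thin-provisioned in /etc/pve/storage.cfg,
# so newly created zvols get no reservation:
#   zfspool: SAS0
#       pool SAS0
#       sparse 1

# For an already-created zvol, drop its reservation so USED reflects
# actual data plus snapshots instead of the full provisioned size:
zfs set refreservation=none SAS0/vm-701-disk-0
```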
  7. OS Type Windows and hv-tlbflush

    Hi. Is there a file inside Proxmox that I can edit so that, when the OS type is Windows, it automatically adds hv-tlbflush to the processor?
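As far as I know there is no supported hook file that adds CPU flags automatically per OS type; the flag is set per VM, either in its config file or via `qm set`. A sketch, with VMID 100 and CPU type `host` as placeholders:

```shell
# Add the Hyper-V TLB-flush enlightenment to one VM's CPU flags
# (keep whatever CPU type the VM already uses):
qm set 100 --cpu host,flags=+hv-tlbflush

# Equivalent line in /etc/pve/qemu-server/100.conf:
#   cpu: host,flags=+hv-tlbflush
```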
  8. zfs problems on simple rsync

    You are using NVMe, so you can try mq-deadline. `none` is the default scheduler for NVMe disks, but it is uncommon to have these problems on NVMe.
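A sketch of switching the scheduler at runtime and making it persistent, assuming the device is nvme0n1:

```shell
# Show available schedulers; the active one is in brackets:
cat /sys/block/nvme0n1/queue/scheduler    # e.g. [none] mq-deadline

# Switch at runtime (lost on reboot):
echo mq-deadline > /sys/block/nvme0n1/queue/scheduler

# Persist across reboots with a udev rule:
cat > /etc/udev/rules.d/60-iosched.rules <<'EOF'
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="mq-deadline"
EOF
```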
  9. zfs problems on simple rsync

    UPDATE:
    NOOP: A simple scheduler that operates on a FIFO queue without any additional reordering, ideal for devices that already have their own scheduler, such as SSDs.
    Deadline: Aims to minimize the response time for any I/O operation, giving each operation a deadline before which it must be...
  10. zfs problems on simple rsync

    Yes. What worked for me: SSDs. Some SSDs are not really 4k. When you format an SSD with ZFS, the default is ashift=12; I changed it to 9. RAID controllers: some RAID controllers just don't give ZFS the write performance it needs, so when ZFS writes faster than the disk can absorb, a timeout or lock...
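ashift is fixed per vdev at pool creation, so changing it means recreating the pool. A sketch with `tank` and `/dev/sdX` as placeholders; check what the drive actually reports first:

```shell
# Check the sector sizes the SSD reports (many report 512/512):
cat /sys/block/sdX/queue/logical_block_size
cat /sys/block/sdX/queue/physical_block_size

# Create the pool with 512-byte alignment (ashift=9) instead of the 4K default:
zpool create -o ashift=9 tank /dev/sdX

# Verify:
zpool get ashift tank
```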
  11. HA in storage / HA in PROXMOX.

    Good morning, everybody. I planned the following scenario and would like to know if anyone here has set up something similar, or if any of the services has limitations. I will have two OmniOS servers, with ZFS and using COMSTAR for iSCSI. The two storage servers have several Intel...
  12. Two servers on cluster: Making second server a master.

    Thanks. I just removed the old node using `pvecm delnode` (this updates corosync) and removed the two_node and wait_for_all parameters from the corosync config. Then `pvecm add` on each new node, and everything went well. Best regards.
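The recovery described above can be sketched as follows (node name and IP are placeholders):

```shell
# On the surviving node: remove the dead node from the cluster
# (this also updates /etc/pve/corosync.conf):
pvecm delnode oldnode

# Edit /etc/pve/corosync.conf and delete these lines from the quorum
# section (remember to bump config_version):
#   two_node: 1
#   wait_for_all: 1

# On each new node, join by pointing at the surviving node:
pvecm add 192.168.1.10

# Check the result:
pvecm status
```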
  13. Two servers on cluster: Making second server a master.

    So, this is the problem: I have more servers to add but I can't. The master server had a problem and I only had the secondary one, so I can't add new servers. Simplifying the question: can I delete the master so that the secondary becomes the main node? Then I can add more servers.
  14. Two servers on cluster: Making second server a master.

    Hello everybody. I have the following problem. There were 2 clustered servers (two_node: 1). The first server had a problem and was shut down. However, this first server was the master, where the cluster was created. Is there any way to make the second server the master? When I click on cluster, a...
  15. zfs problems on simple rsync

    Hi. My setup is simple: a Proxmox VE (ZFS on all disks) with a virtualized Proxmox Backup Server. The Proxmox Backup Server is only temporarily inside it. So, in the virtualized Proxmox Backup Server, I need to rsync one datastore to another (the first disk is on one SSD and the second disk on another, and both are...
  16. Kernel 5.15 on PVE 8

    Hi. How is kernel 6.5 for AMD processors? I have two servers under test, Proxmox 7 and Proxmox 8. The 5.15 kernel is stable on AMD, but 6.2.16-12-pve (on Proxmox 8) stops sometimes. I tried to make it more stable with some configurations like the ones here (https://wiki.archlinux.org/title/Ryzen), but no...
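One way to experiment safely on Proxmox 8 is to install the opt-in kernel and pin whichever version proves stable. A sketch; the version string below is only an example of what `kernel list` might show:

```shell
# List installed kernels:
proxmox-boot-tool kernel list

# Try the opt-in 6.5 kernel on Proxmox 8:
apt install proxmox-kernel-6.5

# If one version proves stable, pin it so upgrades don't boot a newer one:
proxmox-boot-tool kernel pin 6.5.11-4-pve
```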
  17. Move io_uring from default (important)

    Don't be emotional. The forum is technical. Kind regards.
  18. Move io_uring from default (important)

    Hi. Don't take this personally. I just mentioned Meltdown as an example of a problem that most people didn't know about. The fact that your scenario works doesn't mean there are no bugs. Again, that is not an adequate response, and it does not change the io_uring problems other people have. If you...
  19. Move io_uring from default (important)

    Meltdown and Spectre drew no complaints from most users either, and they were a big problem. But the discussion is about the problems pointed out here. The fact that OpenZFS has a tool to test io_uring does not rule out bugs, and not all conditions were tested (for example, VM -> backup -> OpenZFS using...
  20. Move io_uring from default (important)

    Hello. Thank you for the answer. I only ask that it not be the default in all configurations: it was not designed for use with OpenZFS; it has been linked to lost data; it has security problems; Google is more radical, removing it from their kernels. If we know it could have problems, it is not rational to make it the default option. I'm...
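For anyone who wants to opt out per VM while io_uring remains the default, the async I/O mode can be set explicitly on each virtual disk. A sketch with a placeholder VMID and volume; keep the disk's existing options, and note that `aio=native` is normally combined with `cache=none` or `cache=directsync`:

```shell
# Switch one virtual disk from the io_uring default to native AIO:
qm set 100 --scsi0 local-zfs:vm-100-disk-0,aio=native,cache=none

# Or use a thread pool instead, if native AIO is not suitable:
#   qm set 100 --scsi0 local-zfs:vm-100-disk-0,aio=threads
```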