Search results

  1.

    Proxmox VE 7.0 (beta) released!

    As suggested in post n.18 by t.lamprecht, I would like to try the upgrade from version 6.4 to version 7.0 on a new test cluster. After that, I would like to know whether, for native 7.0 installations, you recommend continuing to use ZFS as the root file system or switching to the new BTRFS...
  2.

    Proxmox VE 7.0 (beta) released!

    Hi to all, we are planning to create a new Proxmox cluster (with Ceph, possibly 16) within 3 weeks, and I would like to know if you plan to release Proxmox 7 stable by that date; if not, would you advise using Proxmox 7 Beta instead of Proxmox 6.4? In other words, is Proxmox 7 ready for...
  3.

    Unable to mount backup

    Hi, it works! Thank you
  4.

    Unable to mount backup

    Hi, I have the same problem with Linux VMs. With Windows VMs, however, I can mount the image and it works.
  5.

    Proxmox and native (original) Ceph Octopus

    The question is: can I use the native Ceph repository? Will it work? And if I use the native Ceph repository, could I run into problems when I update Proxmox?
  6.

    Proxmox and native (original) Ceph Octopus

    Hi, we are planning to create a Proxmox cluster (8 nodes) for computing and to use a native (upstream) Ceph cluster (Octopus, always updated to the latest version) installed on Debian Buster. Now I have some questions: for the Ceph "client", which repository should I use? The original Proxmox repository...
  7.

    Ceph Single Node vs ZFS

    Hi, I made a test with RAID10 + one spare disk, but I'm a little confused by the results... I ran the test with: fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=16384M --numjobs=4 --runtime=240 --group_reporting and the result of fio is: Jobs: 4...
  8.

    Ceph Single Node vs ZFS

    Hi, I opted to use ZFS. Now I should decide whether to use RAIDZ2 or RAID10 with the fifth disk as a spare. So far I have tested RAIDZ2 with: fio --name=randwrite --output /NVME-1-VM --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=16384M --numjobs=4 --runtime=240...
  9.

    Ceph Single Node vs ZFS

    Hi, yes, I know that Ceph needs at least 3 nodes, and for a standalone server ZFS is the "natural" choice. My question arose because in the near future the customer will add two more servers, and with Ceph the storage would already have been ready. It would have been enough to add nodes to Ceph ...
  10.

    Ceph Single Node vs ZFS

    Hi, a client of mine currently has only one server with the following characteristics: Server: SUPERMICRO Server AS -1113S-WN10RT CPU: 1 x AMD EPYC 7502 32C RAM: 512 GB ECC REG NIC 1: 4 x 10Gb SFP+ Intel XL710-AM1 (AOC-STG-I4S) NIC 2: 4 x 10Gb SFP+ Intel XL710-AM1 (AOC-STG-I4S) SSD for OS ...
  11.

    Proxmox QinQ

    Hi, so if I understand correctly, I just tag the bond with the outer VLAN (in the example, VLAN 10) and then tag the VM NIC with the inner VLANs (in the example, 34 and 35)? Thank you. PS: I will wait impatiently for the stable version of the new SDN feature....
  12.

    Proxmox QinQ

    Hi, I would like to start using QinQ.... This is a typical scenario: bond0: eno1 + eno2 bond0.10: vlan 10 bond0.20: vlan 20 vmbr10: bridge assigned to the VMs of customer A vmbr11: bridge assigned to the VMs of customer B Customer A has 10 VMs; 4 of them have to "ping" each other but not with...
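
    The scenario above can be sketched in /etc/network/interfaces (a minimal, hypothetical sketch: interface names and VLAN IDs follow the thread's example, with VLAN 10 as the outer S-tag and the inner C-tags set per VM NIC):

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad

    # Outer (S-tag) VLAN carried on the bond
    auto bond0.10
    iface bond0.10 inet manual

    # VLAN-aware bridge for customer A; each VM NIC attaches here with its
    # inner (C-tag) VLAN, e.g. 34 or 35, set on the VM's network device
    auto vmbr10
    iface vmbr10 inet manual
        bridge-ports bond0.10
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
    ```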
  13.

    Ceph Public and Cluster networks with different configuration

    Hi, within a few weeks I will have to configure a 5-node Proxmox cluster using Ceph as storage. I have 2 Cisco Nexus 3064-X switches (48 ports, 10Gb SFP+) and I would like to configure the Ceph networks, for each node, in the following way: Ceph Public: 2 x 10Gb Linux bond in Active/Standby...
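
    A public/cluster split like this is usually expressed in ceph.conf; a minimal sketch, with placeholder subnets (the real ranges depend on the site's addressing plan):

    ```
    [global]
        # Client/monitor traffic, carried over the Active/Standby bond
        public_network  = 10.10.10.0/24
        # OSD replication and heartbeat traffic
        cluster_network = 10.10.20.0/24
    ```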
  14.

    [SOLVED] Set primary slave for bond

    Hi Sebastian, in my test lab I have Proxmox version 6.1-8, and under the Linux bond options I don't have this feature... :oops:
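
    On setups using ifupdown2, the primary slave can also be set directly in /etc/network/interfaces; a minimal sketch, with example interface names:

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-primary eno1    # eno1 is preferred whenever it is up
    ```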
  15.

    kernel-4.15.18-8 ZFS freeze

    Where can I find the "ZFS discussion"? Thank you
  16.

    kernel-4.15.18-8 ZFS freeze

    Do you think updating to Proxmox 6.1 would solve the problem?
  17.

    kernel-4.15.18-8 ZFS freeze

    kern.log: Mar 17 12:45:29 dt-prox1 kernel: [215168.671085] ------------[ cut here ]------------ Mar 17 12:45:29 dt-prox1 kernel: [215168.671087] kernel BUG at mm/slub.c:296! Mar 17 12:45:29 dt-prox1 kernel: [215168.671115] invalid opcode: 0000 [#1] SMP PTI Mar 17 12:45:29 dt-prox1 kernel...
  18.

    kernel-4.15.18-8 ZFS freeze

    Hi, I've just read this post and, as I wrote in https://forum.proxmox.com/threads/proxmox-5-4-stops-to-work-zfs-issue.63849/#post-298631, I have the same identical problem (last time two days ago) and I don't know what to do anymore. I have updated the BIOS and Proxmox (with pve-subscription)...
  19.

    Proxmox 5.4 stops to work (zfs issue?)

    Hi, I have 8 banks of 32 GB of RAM, for a total of 256 GB DDR4 ECC REG.
  20.

    Proxmox 5.4 stops to work (zfs issue?)

    Hi jsengupta, this is a screenshot of my situation: My only doubt is about /dev/nvme0n1... Thank you