Search results

  1. tips for shared storage that 'has it all' :-)

    Thanks for all valuable input provided. Much appreciated.
  2. tips for shared storage that 'has it all' :-)

    Thanks for the replies, mir and bbgeek17, appreciated. The intention (especially for this upcoming PoC) is to use what we already have in place, so we're not going to buy anything, and need no support. If we go PRD, especially the support part will of course change. I will check out regular LVM on...
  3. tips for shared storage that 'has it all' :-)

    Hi all, As many, we are also contemplating a move from broadcom/vmware to proxmox, and are starting with a PoC now. I ran proxmox in the past with a ceph cluster, so I know how great that combination is, but ceph is (now) not going to happen where I work, so: no ceph. At the institute we have a...
  4. pve generated interfaces.d/sdn uses wrong --to-source IP address

    Hi, We have a 5-node cluster, 1.2.3.192 - 1.2.3.196, using a 10G direct fibre connection between the five (called dev hsl) and the following /etc/network/interfaces on host pve3: root@pve3:/etc/network# cat interfaces # network interface settings; autogenerated # Please do NOT modify this...
  5. Feedback on Using a Single /24 for All Traffic in a Proxmox meshed Cluster, with Ceph

    Hi Gilou, Thanks for your response. Appreciated. It's "full mesh" for the first three of the five nodes; the remaining two are not participating in the full-mesh ceph replication and have no OSDs, but are in the same /24 IP range and in the same pve cluster. They would have access to the...
  6. Feedback on Using a Single /24 for All Traffic in a Proxmox meshed Cluster, with Ceph

    Hi, We’ve reviewed the relevant wiki articles, and we’re looking for feedback on a networking strategy for our Proxmox and Ceph cluster setup. Specifically, we aim to avoid using multiple arbitrary IPs and would prefer to use a single /24 network for all traffic, including Ceph cluster traffic...
  7. ZFS pool import fails on boot, but appears to be imported after

    We are seeing the same with our PoC PBS installation. PBS installed on a simple supermicro DOM, with 8 storage disks in a zfs raidz2 config. During boot, the import fails: root@pbs:~# systemctl status zfs-import@storage\\x2dbackup.service ● zfs-import@storage\x2dbackup.service - Import ZFS...
  8. [SOLVED] backups from cron | api

    Ah, I found a post on changing ownership: You can manually change the owner (it's in the file called 'owner' in the backup group dir). All is well now! :-)
  9. [SOLVED] backups from cron | api

    Right! So the environment variable is always required. That helps, thanks Hannes. Agreed on the documentation; specifically, some more examples would help with understanding. :-) Anyway, it works now, except this: "Error: backup owner check failed (root@pam!server != root@pam)". I guess...
  10. [SOLVED] backups from cron | api

    Hi Hannes! I escaped like \!, because I was unsure where to start quoting. This gets me further, new error: root@server:~# proxmox-backup-client backup root.pxar:/ --repository root@pam\!cab1a7-9f-4xxxxxab9-b7-26bf4625@1.2.3.4:backup-repo --include-dev /var Error: error building client for...
  11. [SOLVED] backups from cron | api

    Hi, Trying to setup a cron job, for doing regular backups to PBS. Trying to use an API token for authentication. I selected my backup-user, created an API token. And now want to use that token with my backup command: root@server:~# proxmox-backup-client backup root.pxar:/ --repository...
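
    The quoting and authentication issues in this thread come down to two things: keeping the literal '!' in the token's repository string, and supplying the token secret via the PBS_PASSWORD environment variable. A minimal sketch, where the token ID, host, and datastore names are placeholders and not values from the posts:

    ```shell
    # Hypothetical values -- substitute your own token ID, host, and datastore.
    REPO='root@pam!mytoken@pbs.example.com:backup-repo'

    # Single quotes keep the '!' literal, so no backslash escaping is needed
    # (an interactive bash would otherwise attempt history expansion on '!').
    echo "$REPO"

    # The token secret goes into the PBS_PASSWORD environment variable, which
    # proxmox-backup-client reads for authentication:
    #   export PBS_PASSWORD='<token-secret>'
    #   proxmox-backup-client backup root.pxar:/ --repository "$REPO"
    ```

    In a cron job the same pattern applies: set PBS_PASSWORD in the crontab entry or a wrapper script, and quote the repository string.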
  12. synchronize onsite pbs backups to an offsite pbs installation

    Hi, Is it possible to configure synchronization of a local PBS datastore to a remote PBS install? Perhaps specify which backups to keep offsite, and for how long, etc? We are currently rsyncing VMs to onsite freenas, and then use zfs snapshots/replication to achieve offsite copies of the...
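
    PBS does support pulling a datastore from a remote instance via sync jobs. A rough configuration sketch, run on the offsite PBS; the remote name, host, auth-id, and datastore names are hypothetical, and the exact flags should be checked against your PBS version's documentation:

    ```shell
    # Register the onsite PBS as a remote:
    proxmox-backup-manager remote create onsite-pbs \
        --host 192.0.2.10 --auth-id sync@pbs --password '<secret>'

    # Create a sync job that pulls the remote datastore into a local one:
    proxmox-backup-manager sync-job create pull-onsite \
        --remote onsite-pbs --remote-store backup-repo \
        --store offsite-repo --schedule daily

    # "Which backups to keep offsite, and for how long" is handled separately,
    # by prune/retention settings on the local (offsite) datastore.
    ```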
  13. Meltdown and Spectre Linux Kernel fixes

    yes, some more attempts at apt dist-upgrade got this resolved. Not sure why this happens. Second host finished just fine... Thanks Fabian, and sorry for the noise.
  14. windows 2008 guest, uefi, install iso doesn't boot

    Ah super!! Interesting info, your reply here is _very_ appreciated! :-) Any idea when this patch would become included in proxmox? Or is there a way to disable those "hyper_v enlightenments" for this specific machine in its machine-id.conf config file on proxmox?
  15. wheezy, fstrim resulting in blocked requests on ceph

    Hi, ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432), wheezy kernel 3.2.0-4-amd64 ceph.conf: [global] auth client required = cephx auth cluster required = cephx auth service required = cephx cluster network = 10.10.89.0/24 filestore xattr use omap = true...
  16. wheezy, fstrim resulting in blocked requests on ceph

    Hi, Today we changed storage for a (debian wheezy) VM to SCSI with virtio SCSI and rebooted. Came up fine. Storage is on ceph. (three node cluster, 10G network, with a total of 12 OSDs) Then we issued "fstrim -v /" on the VM and some trouble appeared: in the wheezy guest we received...
  17. unexplained regular drops in ceph performance

    Using atop I can also see that on average 3 disks (osd/journal disks) generate 90-100% usage during ceph bench. This drops to only one disk during the 0 MB/sec moments.
  18. unexplained regular drops in ceph performance

    Hi all, We see the following output of ceph bench: > root@ceph1:~# rados bench -p scbench 600 write --no-cleanup > Maintaining 16 concurrent writes of 4194304 bytes for up to 600 seconds or 0 objects > Object prefix: benchmark_data_pm1_36584 > sec Cur ops started finished avg MB/s cur...