Search results

  1. Ceph problem

    There are (were?) two disks on the filesystem. My guess is that vm-80624-disk-0 is the EFI disk. I would attach vm-80624-disk-1 to the VM and see what happens; it may just boot correctly, in which case you would replace scsi0 with that image. SOMEONE was clearly editing that VM's...
  2. Backup cannot be restored, allegedly not enough memory

    I never tried. It looks like vzdump doesn't allow resize on restore, at least not anywhere I can see. You would have to unpack the config from the tarball, edit it, and put it back; not worth the effort. There's gotta be some other way. @t.lamprecht, would this be an issue with the destination...
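The unpack/edit/repack step can be sketched as follows. This is a minimal sketch, not a supported procedure: it assumes an uncompressed LXC backup tar with the container config at ./etc/vzdump/pct.conf, and it builds a mock archive first so the steps are reproducible; a real vzdump archive is usually .tar.zst or .tar.gz and would need decompressing to plain .tar first.

```shell
#!/bin/sh
set -e
# Build a mock backup so the steps below are reproducible. A real archive
# comes from vzdump; decompress .tar.zst/.tar.gz to plain .tar first.
mkdir -p demo/etc/vzdump
echo 'rootfs: local:104/vm-104-disk-0.raw,size=8G' > demo/etc/vzdump/pct.conf
tar -C demo -cf vzdump-lxc-104.tar ./etc/vzdump/pct.conf

# 1) Unpack just the config from the tarball.
mkdir -p edit
tar -C edit -xf vzdump-lxc-104.tar ./etc/vzdump/pct.conf
# 2) Edit it (here: bump the rootfs size from 8G to 16G).
sed -i 's/size=8G/size=16G/' edit/etc/vzdump/pct.conf
# 3) Put it back. tar --delete only works on uncompressed archives.
tar --delete -f vzdump-lxc-104.tar ./etc/vzdump/pct.conf
tar -C edit -rf vzdump-lxc-104.tar ./etc/vzdump/pct.conf
```

As the post says, probably not worth the effort compared to fixing the problem on the destination side, but it shows why the config is reachable at all: it is just another member of the tar.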
  3. Backup cannot be restored, allegedly not enough memory

    Well, I don't see anything wrong with your filesystem; it's probably an issue within your guest, as @t.lamprecht suggests. Try increasing the size of your rootfs in the backup configuration?
  4. Ceph problem

    Well, your disks are present. What makes you say that? Relevant logs would be helpful. --edit You have your EFI disk and boot on the same disk. This is probably why it won't boot.
  5. Backup cannot be restored, allegedly not enough memory

    No it doesn't ;) Run: grep sdb /etc/mtab and df -i /dev/sdb
  6. Backup cannot be restored, allegedly not enough memory

    That is odd. I'm guessing you're out of inodes. What's the format and configuration of /mnt/sdb?
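The inode check hinted at here can be scripted like this. A sketch only: "/" stands in for the thread's /mnt/sdb so it runs on any Linux box, and df --output=ipcent is a GNU coreutils option.

```shell
#!/bin/sh
# Check a mount point for inode exhaustion. The thread's path would be
# /mnt/sdb; "/" is used here only so the sketch runs anywhere.
MNT=/
df -i "$MNT"   # human-readable view: IUse% at 100% means no inodes left
# Scriptable check: pull just the inode-usage percentage.
used=$(df -i --output=ipcent "$MNT" | tail -n1 | tr -dc '0-9')
used=${used:-0}   # some filesystems (e.g. btrfs) report no fixed inode count
echo "inode usage on $MNT: ${used}%"
if [ "$used" -ge 100 ]; then
    echo "out of inodes: writes fail with 'No space left on device' despite free blocks"
fi
```

This is why a restore can fail with a space error while df -h still shows plenty of free blocks: every extracted file costs an inode.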
  7. Proxmox stuck on "Loading initial ramdisk" when booting

    Can't speak for anyone other than the OP @jonsj, but his case is a failing disk.
  8. Backup cannot be restored, allegedly not enough memory

    Please post the log of your failed backup job. You can find it in /var/log/vzdump.
  9. Out of memory

    This is usually a "ramdisk too big" error. Where did you get the image you're booting? As an aside, I don't ever bother with the GUI installer and thank our dev overlords for finally putting in a TUI installer. It's faster and works much more often ;)
  10. Ceph problem

    Please post the output of rbd ls -p "poolname for ssd-pool". You can grep for 80624, but if you believe it's not there, may as well see what is.
  11. CEPH and Disks Attached to Raid Controllers.

    You would need a cluster replication system, e.g. DRBD. There is no officially supported method for this, and at least in the past the only options were susceptible to split-brain problems and were consequently removed.
  12. [Solved] Why do i need 3 Nodes for an HA Cluster?

    AHHH, here it is. Yes, that is exactly what it means. You can bellyache all you like, but at the end of the day the existing provider has no responsibility to care about your wants/needs. THEY provide this support, and consequently only their priorities matter. You have two choices, both...
  13. [Solved] Why do i need 3 Nodes for an HA Cluster?

    Why not? Fork it. I skimmed through that, but the short version is: what is the concern? If the devs change the license upstream, that doesn't invalidate your existing fork. You can and do state whatever you want. Not sure what the connection is. Just because you have opinions doesn't obligate...
  14. [Solved] Why do i need 3 Nodes for an HA Cluster?

    If I had to guess, they had to offer a soft watchdog to support the homelab crowd, and since it worked "well enough" there was no point in supporting that AND multiple other options.
  15. [Solved] Why do i need 3 Nodes for an HA Cluster?

    That used to be an option a bunch of versions ago. For some reason the devs removed it.
  16. [Solved] Why do i need 3 Nodes for an HA Cluster?

    STONITH (the principle) is a requirement for any cluster of resources. It's not a product, it's a method to ensure survival of a known-good provider. That's like saying you've never heard of electricity and therefore it's a niche product ;) --edit: STONITH is effectively the only way to handle a...
  17. CEPH and Disks Attached to Raid Controllers.

    Yes, and no. The problem with single-disk RAID0 LUNs is that they are still controller-bridged devices, which means the LBAs presented to the host are not the actual disk LBAs, and they are subject to the controller's caching logic even if you turn it off/set writethrough. It will work, but...
  18. [Solved] Why do i need 3 Nodes for an HA Cluster?

    Not possible. No one can make the tradeoff decisions in designing storage for you. Start here: https://pve.proxmox.com/wiki/Storage#_storage_types These are your options. Then google what you're not familiar with or don't understand for the different shared storage types. When you start having some...
