Recent content by alexskysilk

  1. PVE from RAMFS & LXC

    I see; that makes more sense, then. The answer lies in the documentation for ramfs (https://github.com/torvalds/linux/blob/v4.17/Documentation/filesystems/ramfs-rootfs-initramfs.txt). In short, you should be able to use the ramdisk for virtual machines, but for containers it would require a...
  2. PVE from RAMFS & LXC

    It would be useful to look at your storage.cfg as well, but it doesn't look like your storage is actually writable.
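
    For context, a stock /etc/pve/storage.cfg has entries along these lines (the "local" directory entry below is the Proxmox default; yours will differ):

        dir: local
            path /var/lib/vz
            content iso,vztmpl,backup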
  3. Setup a small Cloud

    Your problem can be solved in a number of ways, but the most impactful would be to adapt your website to Docker/Kubernetes. What that will allow you to do is scale your compute resources with load, and you can run it anywhere, including on Proxmox. This isn't easy and would require a substantial...
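
    As a rough sketch of the scale-with-load part: once the site runs as a Kubernetes deployment (the name "website" below is hypothetical), a CPU-based autoscaler is a one-liner:

        # scale the hypothetical "website" deployment between 2 and 10 replicas at ~70% CPU
        kubectl autoscale deployment website --min=2 --max=10 --cpu-percent=70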
  4. Ceph with iSCSI Storage

    This is not a use case for Ceph. Ceph is designed for multiple silos of disk drives. Since you have two silos with RAID, you will likely want to look at Gluster if you want replication, or just iSCSI+LVM if you don't.
  5. is it possible to auto trim for lxc disks?

    Well, that would run the script once and would not affect any writes that follow. My thought was some kind of auto-discard, but it doesn't appear that this is possible. Thanks for your thoughts :) I will continue to tweak the cron schedule.
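
    For reference, the kind of cron entry I keep tweaking looks roughly like this (container ID 101 is just an example):

        # /etc/cron.d/pct-fstrim -- trim container 101 every night at 03:00
        0 3 * * * root /usr/sbin/pct fstrim 101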
  6. is it possible to auto trim for lxc disks?

    Interesting suggestion. How would you suggest structuring it? It doesn't seem to fall into any of the predefined stages...
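
    For anyone following along, the predefined stages are the phases a guest hookscript is called with; a minimal skeleton looks like this, and none of the four phases maps naturally to a periodic trim:

        #!/bin/bash
        # Proxmox guest hookscript: invoked as <script> <vmid> <phase>
        # registered via: pct set <vmid> --hookscript local:snippets/hook.sh
        vmid="$1"; phase="$2"
        case "$phase" in
            pre-start)  : ;;  # before the guest starts
            post-start) : ;;  # after the guest starts
            pre-stop)   : ;;  # before the guest stops
            post-stop)  : ;;  # after the guest stops
        esac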
  7. is it possible to auto trim for lxc disks?

    That is exactly how I'm dealing with it at the moment. I was wondering if it was possible to do it automatically at mount.
  8. is it possible to auto trim for lxc disks?

    I have a cluster with relatively heavy IO, and consequently free space on the Ceph storage is constantly constrained. I'm finding myself performing fstrim at more and more frequent intervals. Is there a way to auto-trim a disk for an LXC container?
  9. ZFS Central Storage

    Neat idea? Sure. The design as suggested? Relatively simple. Making it work in real life without connectivity loss on failover or data loss, with the engineering capability for the MANY edge cases, and the ability to actually support a production environment? Practically impossible. As a DIY project, this will cause you pain.
  10. [SOLVED] How to issue CTRL-C to a process in a container?

    You don't actually need to be INSIDE the container to kill a process, since it's exposed to the host, but the better question is why in heaven's name you are running a process that needs SIGINT to stop... I can think of no use case where it's "the proper way." Do yourself a favor and daemonize it. Look up...
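
    A minimal sketch, assuming the process is named myproc (a hypothetical name) and you are root on the host:

        # container processes are visible in the host's process table
        pgrep -a myproc     # find the PID from the host
        kill -INT 12345     # send SIGINT (the CTRL-C signal) to that PID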
  11. [SOLVED] ceph nautilus move luminous created osd to another node.

    This is normal when using volumes. If you look inside the ceph-70 directory, you will find a symlink named block pointing to the actual device.
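
    Something like this shows it (the path assumes the standard OSD mount location):

        ls -l /var/lib/ceph/osd/ceph-70/
        # the "block" entry is a symlink to the underlying LVM logical volume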
  12. How to use SAS (SAN) disks ?

    Please provide the output of:

        lsblk
        vgs
        lvs
  13. [SOLVED] Ceph Luminous to Nautilus upgrade issues

    Just out of curiosity, why are you replacing your 10G network with 100G? Considering your OSDs are HDDs, not only are you not going to get much benefit, but at that cost (NICs plus switches) you would get MUCH more bang for your buck by replacing your HDDs with SSDs...
  14. preconfigured firewall rules and overrun conntrack table

    WRT question 1: adding the interfaces to the raw table does not fix the problem, as the filter table still processes the rules regardless, and there is no NOTRACK target for the filter table AFAICT. Other suggestions are most welcome.
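
    For the record, the raw-table rules I mean look roughly like this (eth0 is a placeholder interface):

        # exempt traffic on eth0 from connection tracking (the raw table runs before conntrack)
        iptables -t raw -A PREROUTING -i eth0 -j NOTRACK
        iptables -t raw -A OUTPUT -o eth0 -j NOTRACK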
  15. ENOUGH WITH THE pve-root sizing crap!!

    I've run Proxmox on Supermicro 16GB SATA DOM devices before without issue, BUT this was on relatively low-utilization boxes. If you intend to run anything production-quality, you'd want to move swap and /var to more robust devices.
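
    A minimal sketch of the /var move, assuming a spare SSD at /dev/sdb1 (hypothetical device) already formatted and populated with a copy of /var:

        # /etc/fstab entry mounting /var from the dedicated device
        /dev/sdb1  /var  ext4  defaults  0  2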
