alexskysilk's latest activity

  • Looks like you're mounting the same filesystem twice. Post the output of ceph fs ls and the content of /etc/pve/storage.cfg.
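    For reference, the two items being asked for can be gathered like this (a minimal sketch; the paths are the PVE defaults):

        # list the CephFS filesystems the cluster knows about
        ceph fs ls
        # show the PVE storage definitions
        cat /etc/pve/storage.cfg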
  • The source is not the relevant part, insofar as it makes software trustworthy or not. There's plenty of open-source malware. The admonition here is not to run software you found on the internet without knowing what it does, especially if it has...
  • As others pointed out, this isn't actually an option (at least not a valid one licensing-wise). It's also a whole lot less efficient than a handful of terminal services hosts; I would pause here to discuss WHAT your clients use on their remote...
  • alexskysilk replied to the thread HA Migration.
    Isn't is the opposite of is.
  • Completely up to you. You can have multiple pools with multiple CRUSH setups using the same disks, but generally speaking, unless you have different OSD classes, a single pool is likely what you want. SQL wants smaller object sizes (like 16k), but...
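    If you do end up splitting by OSD class, the idea looks roughly like this (a sketch only; rule, pool, and image names, PG counts, and the ssd class are illustrative):

        # replicated CRUSH rule restricted to the ssd device class
        ceph osd crush rule create-replicated rep-ssd default host ssd
        # pool that uses that rule
        ceph osd pool create sql-pool 64 64 replicated rep-ssd
        # an RBD image with a smaller object size for a database workload
        rbd create sql-pool/dbdisk --size 100G --object-size 16K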
  • alexskysilk replied to the thread iscsi error?.
    Probably not, but the presence of that message suggests there are optimizations available for you to take advantage of. My original suggestion stands ;)
  • alexskysilk replied to the thread HA Migration.
    You must have missed this part.
  • alexskysilk replied to the thread raidz1-0 added disk.
    As explained, that's not what you did. It's also not possible until ZFS 2.3 gets rolled out. You're gonna need to destroy the pool and recreate it.
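    For the destroy-and-recreate route, a minimal sketch (pool and disk names are illustrative; data has to be backed up and restored around it, since the raidz expansion mentioned above isn't available before OpenZFS 2.3):

        # destroys everything in the pool -- back up first
        zpool destroy tank
        # recreate as raidz1 across the full set of disks, including the new one
        zpool create tank raidz1 sda sdb sdc sdd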
  • alexskysilk replied to the thread iscsi error?.
    ALUA is a feature on the target. The host will attempt to distribute traffic as allowed. Read the documentation of your storage for proper configuration of multipathd.
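    For reference, the ALUA-related knobs usually end up in /etc/multipath.conf along these lines (a sketch only; the vendor/product strings and exact values come from the array documentation):

        devices {
            device {
                vendor                "EXAMPLE"
                product               "ARRAY"
                path_grouping_policy  "group_by_prio"
                prio                  "alua"
                path_checker          "tur"
                failback              "immediate"
                no_path_retry         "queue"
            }
        }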
  • alexskysilk replied to the thread HA Migration.
    Ensure that:
    1. store iscsi_data is set up as shared, and that the destination node is included in the allowed node list.
    2. store iscsi_data ISN'T SET TO LVM-THIN.
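    As a sketch, a shared LVM-over-iSCSI definition in /etc/pve/storage.cfg looks roughly like this (store, volume group, and node names are illustrative); note the type is lvm, not lvmthin, since thin pools can't be shared:

        lvm: iscsi_data
            vgname vg_iscsi
            content images
            shared 1
            nodes pve1,pve2,pve3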
  • ... I won't correct you, but you really should go back and read how networking works in a Linux hypervisor environment. Suffice it to say it doesn't work like you think it does.
  • I gather from this that you are using a single interface for all traffic, including corosync. This is bad practice and can lead to the exact behavior you are seeing. If you want to eliminate the possibility of corosync interruption, do not...
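    What a dedicated corosync link (plus a fallback) looks like in /etc/pve/corosync.conf, as a sketch with illustrative names and addresses:

        nodelist {
          node {
            name: pve1
            nodeid: 1
            quorum_votes: 1
            # dedicated, corosync-only network
            ring0_addr: 10.10.10.1
            # existing LAN as a fallback link
            ring1_addr: 192.168.1.11
          }
        }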
  • I was responding to your quoted question. I don't have any idea how you would do it, as it wouldn't occur to me to try, and if it were possible I'd be reporting it as a bug.
  • As long as you have enough resources in the "small" section, just move the workload over, then shut down all nodes in the "large" section. Perform your required maintenance and boot them back up. Edit: there is another possibility; keep both the...
  • It effectively undoes the whole point of a container. You may as well do whatever it is you're doing on the host, since you don't have meaningful separation doing it the way you are.
  • That's also in the logs. If I had to guess, it's a networking issue. Post the content of /etc/network/interfaces and /etc/pve/corosync.conf for validation and recommendations.
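    For orientation, a layout that keeps corosync off the main bridge looks roughly like this in /etc/network/interfaces (interface names and addresses are illustrative):

        auto lo
        iface lo inet loopback

        iface eno1 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.11/24
            gateway 192.168.1.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0

        # dedicated corosync NIC, no bridge
        auto eno2
        iface eno2 inet static
            address 10.10.10.1/24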
  • All the answers are in your logs. Start there.
  • Yes. Wrong audience. Hyper-V can and does work well, but this is a Proxmox forum ;) No. Whatever you do, make sure to architect the STORAGE first, as so much of the rest of the deployment (HA, performance) is going to be tied to it. You're gonna...
  • So, allow me to summarize your argument: you have customers who don't want three nodes. Those customers balk at having a quorum device that looks like a PVE node, or a quorum witness appliance. For reasons. Is that about the gist? Let me ask you...
  • OK, but that's a whole lot different from sweeping statements about what the industry is doing. PVE lacks proper tooling to use shared block storage (be it iSCSI, FC, NVMe-oF, SBP, etc.) to its full potential. Those are still the preferred...