alexskysilk's latest activity

  • alexskysilk reacted to Dark26's post in the thread Proxmox VE 9.0 BETA released! with Like.
    Very excited about this feature: snapshots for thick-provisioned LVM shared storage, e.g., for setups connected over iSCSI or Fibre Channel to a SAN.
  • post the config for 115.conf
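    A quick way to grab that (a minimal sketch; both commands read the same config):
      qm config 115
      cat /etc/pve/qemu-server/115.conf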
  • yes. just not with 3 different VMs. hell, you don't need a VM for this.
  • If you're getting SSH issues, the fault is not with the PVE version. Did you name the new node something you've used in the past? Don't do that. Reinstall PVE on the new node FROM SCRATCH and give it a new name.
  • Looks like there was an issue with the disk conversion. Flatten the parent VMDK and copy it directly instead of converting it.
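    A minimal sketch of the "copy directly" route, assuming the flatten leaves you a monolithic disk-flat.vmdk (the VM ID 115 and storage local-lvm are hypothetical). A flat extent is plain raw data, so it can be imported without another conversion pass:
      # import the flattened extent as a raw disk image
      qm importdisk 115 disk-flat.vmdk local-lvm --format raw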
  • alexskysilk replied to the thread Proxmox PoC.
    Looks like you have internet connectivity issues and/or IPv6 taking priority. You can force apt to use IPv4 by adding -o Acquire::ForceIPv4=true to your apt command, e.g. apt -o Acquire::ForceIPv4=true update && apt -o Acquire::ForceIPv4=true...
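    To make that permanent (a minimal sketch; the file name 99force-ipv4 is just a convention, anything under apt.conf.d works):
      # persist the setting so every apt run prefers IPv4
      echo 'Acquire::ForceIPv4 "true";' > /etc/apt/apt.conf.d/99force-ipv4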
  • You don't have your imported disk mapped to any bus; if you look in the GUI you will see it sitting there at the bottom, unassigned. Once you DO map it, you will need to change your boot order to reflect it.
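    A minimal sketch from the CLI, assuming a hypothetical VM 100 whose unused disk shows up as local-lvm:vm-100-disk-1:
      # attach the unused disk to the SCSI bus, then boot from it
      qm set 100 --scsi0 local-lvm:vm-100-disk-1
      qm set 100 --boot order=scsi0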
  • Any solution is use-case dependent, which is why this is left for you (the operator) to define, and why you can find multiple documents making what seem to be antagonistic recommendations. More PGs per OSD mean more granularity, meaning better seek...
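    For reference, the old rule of thumb (a sketch only; on current Ceph the pg_autoscaler will do this math for you):
      # target PG count ≈ (OSD count × 100) / replica size, rounded to a power of 2
      # e.g. 12 OSDs at size 3: (12 × 100) / 3 = 400 → 512 PGs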
  • You're concerned with optimal PG count when your cluster is lopsided. You have two nodes with HDDs, two nodes with a lot of SSD, and two nodes with too little. Any HDD device-class rule would not be able to satisfy a replication:3 rule, and an SSD...
  • Attach your boot disk to IDE when installing. You can change it afterwards.
  • Without seeing the original config file, the only advice I can offer is to remove the existing xe-guest-utilities and install qemu-guest-agent before the conversion, and to make sure any UEFI/Secure Boot settings are matched at the destination.
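    A minimal sketch of that prep, assuming a Debian/Ubuntu guest (package names differ on other distros):
      # inside the guest, before the conversion
      apt purge xe-guest-utilities
      apt install qemu-guest-agent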
  • It's worth revisiting what Ceph is and how it works. Ceph is software-defined storage, which is to say there is an algorithm and rules. In a normal virtualization workload the pool rules look like this: replicated, size 3 shards (members) in a pg...
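    A minimal sketch of such a pool (the name vmpool and the PG count are hypothetical):
      # create a replicated pool: 3 copies, 2 required to keep serving I/O
      ceph osd pool create vmpool 128 replicated
      ceph osd pool set vmpool size 3
      ceph osd pool set vmpool min_size 2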
  • alexskysilk replied to the thread SPDK support.
    If you have a storage product that integrates with Linux in a way that cannot be addressed by PVE, I'd suggest you contact the devs through the normal channels :) But in general, if you can use LVM and ZFS with it, it IS a normal block device...
  • Terminal Services just requires a TS license. vGPU requires supported NVIDIA hardware and a license; it's worth noting that the latter option is quite a bit more expensive. edit: the latter option also requires sufficient enterprise or edu Windows...
  • Not with only three nodes; there is nowhere for redistribution to be deployed to. This would be a problem with all three nodes as well. Each OSD is monitored against a high-water mark, and OSDs that reach it become read-only. By the...
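    For reference, a minimal sketch of checking those thresholds (Ceph's usual defaults are nearfull 0.85 and full 0.95):
      # show the nearfull/backfillfull/full ratios currently in effect
      ceph osd dump | grep ratio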
  • alexskysilk replied to the thread SPDK support.
    While I can't speak to PVE planning, I can tell you SPDK is not meant to replace anything PVE provides. PVE depends on Linux-provided APIs for storage and networking; if SPDK was deployed to provide said APIs, PVE would be able to use them without...
  • Makes sense, and is a good solution for your workload. ZFS edges out RAID due to features like integrated shadow copies, and the performance differences should be negligible. Likely not. Your use case is large assets (photographs of size.) ARC works...
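    If ARC isn't earning its keep on big sequential reads, a minimal sketch for capping it (the 8 GiB value is just an example):
      # /etc/modprobe.d/zfs.conf: cap ARC at 8 GiB
      options zfs zfs_arc_max=8589934592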
  • JBODs would not have a patrol-read option. But more to the point: what is the use case for this storage? raidz2 on HDDs would have TERRIBLE performance for virtualization workloads. edit: SLOGs don't do what you think they do...
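    For context (pool name tank and device paths are hypothetical): a SLOG only absorbs synchronous writes (the ZIL); it is not a general write cache and does nothing for async writes or reads:
      # add a mirrored SLOG; only sync writes land on it
      zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1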
  • There's no such thing as an h720p. 13G Dells used the h730 (SAS3), and yes, these are excellent; the only thing I ever needed to replace on them was the BBU. BUT bear in mind I never ran these past around 6 years old. They can probably last 12...
  • I hope you understand that the h710p is well over a decade-old technology. I don't know what an h720p is (if you mean the h730, that's better, but still quite old.) If you are relying on an h710 in 2025, your idea of "production" is seriously in need of...