Hi @theb2b ,
What is the output of:
pvesm status
Does the output contain a storage pool that corresponds to your location?
If it does, what is the output of: pvesm list [storage_pool_name]
If it does not, where exactly is the file located? What is the name of the file?
What VM do you want to...
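The check described above can be scripted. A minimal sketch, assuming a hypothetical pool name and a stand-in sample of `pvesm status` output (the sample is illustrative, not from a real system — on a real node capture live output with `STATUS="$(pvesm status)"`):

```shell
# pool_known STATUS_TEXT POOL_NAME -> exit 0 if the pool appears in the list
pool_known() {
    # skip the header row, take the first column (storage ID), match exactly
    printf '%s\n' "$1" | awk 'NR > 1 { print $1 }' | grep -qx "$2"
}

# Illustrative sample output; on a real node use: STATUS="$(pvesm status)"
STATUS="Name       Type   Status     Total      Used  Available     %
local       dir  active  98559220   4515376   88994288  4.58%
nfs-iso     nfs  active 103077888   1048576  102029312  1.02%"

if pool_known "$STATUS" "nfs-iso"; then
    echo "found"      # then run: pvesm list nfs-iso
else
    echo "missing"    # check /etc/pve/storage.cfg for the definition
fi
```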
Since your storage device is capable of providing both file (NFS) and block (iSCSI) storage, you should use the appropriate protocol for each purpose:
ISOs are files - NFS
VM disks are block devices - iSCSI
The line is somewhat gray for VM disks, as you can use NFS there as well. However, for ISOs you should stick to...
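A minimal `/etc/pve/storage.cfg` fragment illustrating that split — all storage IDs, the server IP, export path, and target IQN below are placeholders, not from the original thread:

```
# NFS share for ISO images
nfs: nas-iso
        server 192.168.1.50
        export /volume1/iso
        content iso

# iSCSI LUN for VM disks
iscsi: nas-lun
        portal 192.168.1.50
        target iqn.2000-01.com.synology:nas.Target-1.abcdef
        content images
```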
That UUID is supplied by Synology; it's part of the SCSI device properties. I'd put this question on the Synology forum, as it is not a PVE-specific query. Perhaps someone has already solved it there.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
PVE does not currently provide single-pane-of-glass management for multiple clusters, therefore there is no API endpoint that returns information about multiple clusters. That mashed-together view is produced by Cosinvest, likely by sending individual API requests to each cluster that you have defined...
No, PVE will do it automatically if you make PVE aware of the pool.
"local-lvm" is part of default PVE installation and PVE is aware of it in your case.
This workflow is not part of normal PVE management, but there is nothing wrong with it per se. Many people take a slice of the larger...
Hi @e-ferrari , welcome to the forum.
Mounted where?
This is a raw storage pool; volumes will be carved out of it as necessary.
It is correct for raw files. Examine your file with "qemu-img info [filename]" to learn more about its structure.
Don't directly edit the config file until you really know...
Yes, that is correct behaviour.
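The `qemu-img info` check can also be scripted, e.g. to pull out just the format field. A hedged sketch — the sample output below is illustrative; on the host you would run `qemu-img info` against the actual image file:

```shell
# Illustrative sample; on a real host use:
#   INFO="$(qemu-img info /var/lib/vz/images/100/vm-100-disk-0.raw)"
INFO="image: vm-100-disk-0.raw
file format: raw
virtual size: 32 GiB (34359738368 bytes)
disk size: 1.2 GiB"

# Extract the on-disk format, e.g. to decide whether a conversion
# (qemu-img convert) is needed:
FORMAT="$(printf '%s\n' "$INFO" | awk -F': ' '/^file format:/ { print $2 }')"
echo "$FORMAT"    # raw
```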
If you plan to have this storage connected to more than one node: https://www.youtube.com/watch?v=eZXXa7ujoks
If it's just one node: https://pve.proxmox.com/wiki/Storage:_LVM_Thin
https://www.youtube.com/watch?v=zIoDXWKsorg
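For the single-node LVM-thin case, a hedged sketch of the usual steps from the wiki page above — the device, VG, and pool names are placeholders, and the `run` wrapper only echoes each command so nothing is executed here. Do not run these against a disk that holds data:

```shell
DISK=/dev/sdb     # placeholder: an empty disk
VG=vmdata         # placeholder volume group name
POOL=vmstore      # placeholder thin pool name

# echo instead of execute, so the sequence is visible; on a real node
# drop the `echo`:
run() { echo "$@"; }

run pvcreate "$DISK"                      # initialize the physical volume
run vgcreate "$VG" "$DISK"                # create the volume group
run lvcreate -l 100%FREE -T "$VG/$POOL"   # create the thin pool
# make PVE aware of the pool:
run pvesm add lvmthin "$POOL" --vgname "$VG" --thinpool "$POOL" --content images,rootdir
```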
Hi @Normicro , welcome to the forum.
Connecting a big-vendor array via iSCSI to a Linux host (which PVE is, at a basic level) is described in many places:
https://www.hpe.com/psnow/doc/a00062068en_us
https://community.hpe.com/t5/hpe-eva-storage/msa2012i-and-iscsi-initiator-setup/td-p/4329757...
Hi @hander , welcome to the forum.
In this case PVE is the NFS client and your remote NFS storage is the NFS server.
The NFS server controls access permissions. The fact that your client (PVE) receives an R/O error from the server indicates a misconfiguration of your NAS. You should review your...
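On a plain Linux NFS server the relevant setting is the export flags; an illustrative `/etc/exports` line (path and client subnet are placeholders — a NAS like Synology configures the same thing through its UI):

```
# "ro" here would produce exactly the read-only error described;
# "rw" (plus matching permission/squash settings on the NAS side)
# is what a PVE storage needs:
/volume1/proxmox  192.168.1.0/24(rw,no_subtree_check,no_root_squash)
```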
Hi @viperbmw69 , welcome to the forum.
This is not a Proxmox issue, but a generic Linux/LVM management query. The terms to search for: lvm volume active
i.e...
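As a hedged sketch of that search result: the fifth character of the lv_attr field reported by `lvs` is "a" when a volume is active, and inactive volumes can be activated with `lvchange -ay`. The sample output below is illustrative:

```shell
# On the host you would capture live output with:
#   LVS_OUT="$(lvs --noheadings -o lv_name,lv_attr)"
LVS_OUT="  data     twi-aotz--
  vm-100-disk-0  Vwi---tz--"

# Report volumes that are NOT active (candidates for `lvchange -ay`):
printf '%s\n' "$LVS_OUT" | awk '{ if (substr($2, 5, 1) != "a") print $1 }'
```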
The recommended _production_ approach is to use identical hardware. Differences in CPU models will cause issues, and you will need to use HA groups, drop down to the lowest common denominator, or use a virtualized CPU type.
PVE does not have a built-in equivalent of CSV/VMFS, so you will need to put in more...
Hi @Ragul , welcome to the forum.
Unfortunately your question is lacking details.
An example of some of the information needed can be found here: https://forum.proxmox.com/threads/where-did-my-drive-go.114790/#post-496334
Cheers,
You should review #15 in the other thread and follow the tasks listed there.
Good luck!
Here's the data reported in our low-latency KB article. The hardware and software are now previous generation, but the results are still valid. Performance might be slightly faster on more modern hardware.
All measurements are taken at the block device in the guest or on the host using FIO (not...
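For reference, a minimal fio job of the kind used for such latency measurements — illustrative parameters only, not the article's exact job file, and `/dev/sdb` is a placeholder device that must not hold data:

```
[global]
ioengine=libaio
direct=1
filename=/dev/sdb
runtime=60
time_based=1

# queue-depth-1 random read: the classic latency test
[randread-qd1]
rw=randread
bs=4k
iodepth=1
numjobs=1
```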
Hi @dptinc , welcome to the forum.
Try: ip a
https://access.redhat.com/sites/default/files/attachments/rh_ip_command_cheatsheet_1214_jcs_print.pdf
If your LUNs are used as shared storage, then the metadata that describes the PV, VG, and LVM pool is already recognized by all nodes, when properly set up. If it were not, you'd have data corruption.
So whether you create a concatenated group or a striped group, it's a one-time step that will be...
No, .4 is your client IP. The log indicates which client sent the request.
"500" is an Internal Server Error. If it were a wrong password, it would be 401; for a successful authentication it would be 200.
What are the outputs (in code </> tags) of:
systemctl |grep pve
pveversion
pveversion -v...
We've created a similar-purpose script for our customers. Its main advantages are:
- can be run against any node in the cluster, or floating IP
- does error checking
- works on VMs with specified tag
- removes snapshots older than specified limit...
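One piece of such a script — the retention decision — can be sketched as follows. This is a hedged illustration, not our actual script: the helper name and the sample values are hypothetical; `qm listsnapshot` and `qm delsnapshot` are the standard PVE commands you would iterate with:

```shell
# is_older SNAPTIME_EPOCH LIMIT_DAYS NOW_EPOCH
# -> exit 0 if the snapshot is older than the retention limit
is_older() {
    [ $(( $3 - $1 )) -gt $(( $2 * 86400 )) ]
}

# On a real node, per VM ID, you would iterate something like:
#   qm listsnapshot "$VMID"            # enumerate snapshots
#   qm delsnapshot  "$VMID" "$SNAP"    # remove one past the limit

NOW=$(date +%s)
# a snapshot taken 8 days ago, against a 7-day limit:
if is_older $(( NOW - 8 * 86400 )) 7 "$NOW"; then
    echo "delete"
fi
```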
Did you enable 2FA or some other password-enhancing feature and forget about it?
Is there anything in "journalctl -f" when you are trying to login?
What about tail -f /var/log/pveproxy/access.log
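To narrow that log down to failed logins, you can filter on the HTTP status field. A hedged sketch — the log line below is illustrative; the real file is /var/log/pveproxy/access.log:

```shell
LOG='192.168.1.4 - root@pam [20/05/2025:10:00:01 +0000] "POST /api2/json/access/ticket HTTP/1.1" 401 -'

# Keep only 401 (bad credentials) and 500 (internal error) responses;
# the status code is the second-to-last field:
printf '%s\n' "$LOG" | awk '$(NF-1) == 401 || $(NF-1) == 500 { print }'
```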
Hi @NatO ,
Unfortunately, I don't have an answer to the question. Once the folks at Veeam diagnose and reproduce the issue described above, we'll have a clearer picture.
If corruption occurs in the backup flow, then the source data for restoration has questionable integrity. They might restore...