Hi all,
My base question:
Is it possible to create an iSCSI pool on Proxmox itself, or can I only attach an existing one (from TrueNAS, for example) and hand it to my hosts?
If I can create one, how? Does it need a full, empty drive, or can I also do it with free disk space on my LVM drive?
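For what it's worth, Proxmox only acts as an iSCSI *initiator*: it can consume a target exported by something like TrueNAS, but it cannot serve one itself. A minimal sketch of attaching an existing target from the CLI (the storage IDs, portal address and IQN below are placeholders, not values from this thread):

```shell
# Placeholders throughout: storage IDs, portal IP and IQN are illustrative.
# 1) Attach the existing TrueNAS target (Proxmox is the initiator here):
pvesm add iscsi truenas-iscsi \
  --portal 192.168.1.50 \
  --target iqn.2005-10.org.freenas.ctl:proxmox \
  --content none

# 2) iSCSI storage alone can't hold disk images, so the usual pattern is to
#    layer LVM on the LUN (the exact LUN volume name comes from
#    `pvesm list truenas-iscsi`):
pvesm add lvm truenas-lvm \
  --vgname truenas-vg \
  --base truenas-iscsi:0.0.0.scsi-...
```

With the LVM layer in place, VM disks are then allocated as LVs on the LUN like on any other LVM storage.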
The...
Well, I've found a solution (sort of...)
As I wrote before, I could create a new VM, but only with an IDE interface. Then I found a solution on the forum to change the interface from IDE to virtio by detaching and re-editing the created 32G IDE disk. Then I found a way to increase the size with...
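The detach-and-reattach dance can also be done from the CLI with `qm`; a sketch assuming VM 105, the BigData storage and the disk name seen elsewhere in this thread:

```shell
# Assumes VM 105 and the "BigData" storage from this thread.
qm set 105 --delete ide0                      # detach; the volume shows up as unused0
qm set 105 --virtio0 BigData:vm-105-disk-0    # reattach the same volume as virtio
qm set 105 --boot order=virtio0               # make the VM boot from the new bus
qm resize 105 virtio0 +224G                   # grow the 32G disk to 256G
```

Note the guest may still need its partition table and filesystem grown afterwards; `qm resize` only enlarges the underlying volume.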
root@pve:~# pvesm alloc BigData 105 '' 256G
WARNING: gpt signature detected on /dev/bigdata/vm-105-disk-2 at offset 512. Wipe it? [y/n]: [n]
Aborted wiping of gpt.
1 existing signature left on the device.
Failed to wipe signatures on logical volume bigdata/vm-105-disk-2.
lvcreate...
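The abort happens because LVM does not zero freed extents: the new LV lands on blocks that still carry the GPT signature of a previously deleted disk, and the non-interactive wipe prompt defaults to [n]. One possible workaround (a sketch, not an official fix) is to create the LV manually and tell lvcreate to wipe signatures:

```shell
# Same VG, LV name and size as the failing task above.
lvcreate --name vm-105-disk-2 --size 256G bigdata \
         --wipesignatures y --yes
# Make Proxmox pick up the manually created volume:
qm rescan --vmid 105
```

The rescanned volume then appears as an unused disk on VM 105 and can be attached from there.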
I could create a new VM with SeaBIOS, i440fx and a 32GB IDE HDD, then change the settings to OVMF and q35, and add a UEFI disk to it. But when I want to create a new hard disk with virtio (or SATA), this message appears:
lvcreate 'bigdata/vm-105-disk-2' error: Aborting. Failed to...
It all comes from this:
WARNING: gpt signature detected on /dev/bigdata/vm-105-disk-0 at offset 512. Wipe it? [y/n]: [n]
Aborted wiping of gpt.
1 existing signature left on the device.
Failed to wipe signatures on logical volume bigdata/vm-105-disk-0.
TASK ERROR: unable to create VM 105 -...
I did this from the Proxmox shell, not from a live CD. Is that a problem?
I deactivated the VG because otherwise, when I ran e2fsck -b 32768 /dev/sdb1 or fsck -fy /dev/sdb1, it said /dev/sdb1 is in use.
Should I try it from a live CD?
I also don't know which filesystem LVM uses. fdisk says ID 83...
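Two things worth noting here. fdisk's ID 83 is only the MBR partition *type* ("Linux"), not a filesystem; and /dev/sdb1 itself is an LVM physical volume (wipefs reports LVM2_member on it), so e2fsck/fsck can never succeed on it directly, live CD or not. A quick way to see what is actually on the disk:

```shell
blkid /dev/sdb1        # reports TYPE="LVM2_member", i.e. a PV, not ext2/3/4
lsblk -f /dev/sdb      # shows the LVs stacked on top and any filesystems inside them
pvs /dev/sdb1          # confirms the PV and which VG it belongs to
```

Any filesystems live inside the individual logical volumes, not on the partition itself.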
before I delete it:
root@pve:~# vgchange -an bigdata
0 logical volume(s) in volume group "bigdata" now active
root@pve:~# fsck.ext2 /dev/bigdata
e2fsck 1.46.2 (28-Feb-2021)
fsck.ext2: Is a directory while trying to open /dev/bigdata
The superblock could not be read or does not describe a...
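That error is expected: /dev/bigdata is just a directory of symlinks to the VG's logical volumes, not a block device, so fsck cannot open it. fsck has to run against an individual, active LV, and even then only if that LV holds a bare ext filesystem rather than a partition table (a sketch, using an LV name from this thread):

```shell
vgchange -ay bigdata                      # LVs must be active to be opened
lsblk -f /dev/bigdata/vm-100-disk-0       # check what the LV actually contains first
fsck -f /dev/bigdata/vm-100-disk-0        # only if it really holds ext2/3/4
```

VM disks usually contain their own partition table, in which case fsck would have to target the partitions inside the LV (e.g. via kpartx or losetup) rather than the LV itself.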
Before I destroy my disk:
root@pve:~# wipefs /dev/sdb
DEVICE OFFSET TYPE        UUID                                   LABEL
sdb    0x1fe  dos
root@pve:~# wipefs /dev/sdb1
DEVICE OFFSET TYPE        UUID                                   LABEL
sdb1   0x218  LVM2_member dgHM6z-RquE-9xxx-ZEKn-ZxRZ-T04J-QpnT7a
Would this help?
wipefs...
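Only if the goal is to scrap the disk and start over: `wipefs --all` would erase exactly those two signatures, which destroys the partition table on sdb and the LVM PV header on sdb1, taking every LV in the bigdata VG with it. A sketch, keeping on-the-fly backups of the erased signatures:

```shell
# DESTRUCTIVE: removing the LVM PV header kills the whole bigdata VG.
vgchange -an bigdata
wipefs --all --backup /dev/sdb1   # backup lands in ~/wipefs-sdb1-*.bak
wipefs --all --backup /dev/sdb
```

If the data on the VG should survive, do not run this; the plain `wipefs` output above is read-only and harmless.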
Hmm, the bigdata LVM volumes aren't in a pool; should they be?
root@pve:~# lvs
LV            VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
vm-100-disk-0 bigdata -wi-a----- 120.00g
vm-100-disk-1 bigdata -wi-a-----...
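No, they don't have to be. The first character of lv_attr encodes the LV type: these disks show `-wi-a-----`, i.e. plain linear LVs, which is exactly what "lvm" (non-thin) storage creates. The Pool column is only filled for "lvmthin" storage, where the pool LV shows `twi-` and the thin disks `Vwi-`. A tiny sketch of decoding that first character:

```shell
# Decode the first lv_attr character (value taken from the lvs output above).
attr="-wi-a-----"
first=$(printf '%.1s' "$attr")
case "$first" in
  t) kind="thin pool"       ;;
  V) kind="thin volume"     ;;
  -) kind="plain linear LV" ;;
  *) kind="other"           ;;
esac
echo "$kind"    # prints "plain linear LV"
```

So an empty Pool column here is normal and unrelated to the lvcreate failure.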
There was a VM 101 and a VM 105, but I deleted them the proper way: detached and removed the disks, then deleted the VM with purge.
And no, it doesn't work with other IDs :( Tried 106, 200, 123...