Intel Celeron... not much data on it. Which one exactly? 2 or 4 threads?
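If you already have the box (or can borrow one), the quickest way to settle the 2-vs-4-thread question is to ask the CPU itself; a trivial check, nothing Proxmox-specific:

# report model, core and thread counts as the kernel sees them
lscpu | grep -E 'Model name|^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket'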
For performance, check general benchmarks. I doubt we can predict exact performance for your specific needs. The VM OS is one thing; what it will actually run is another...
I'd go with the 4-thread Celeron, I suppose. BTW: double check...
You didn't just FORMAT your drive. A plain format (typically) doesn't destroy much of the data on a disk. What you did was write new data on TOP of the old data, which literally shreds your previous data layout. On top of that, you changed the filesystem on it.
Sure, you might get lucky and recover some...
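If you do want to try, the usual approach (no guarantees - it depends entirely on how much was overwritten) is to image the disk first and carve files out of the image rather than the live device. A rough sketch, assuming the disk is /dev/sdX and there's enough space elsewhere for the image:

# install the tools (Debian/PVE)
apt install gddrescue testdisk
# take a read-only image of the whole disk first
ddrescue -d /dev/sdX /mnt/other/disk.img /mnt/other/disk.map
# then let photorec carve whatever file fragments it can find from the image
photorec /mnt/other/disk.img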
Hi there.
Any ideas on installing megacli/megasasraid/megasasctl on PVE 7?
The only working solution for me was installing PVE 6 plus the hwraid packages, then upgrading to 7. What about a clean 7 install? Are there any ready-made packages yet?
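For the record, what worked for me on plain Debian boxes (untested on a clean PVE 7 / Bullseye install, and there was no bullseye suite in that repo last time I checked) was pulling megacli from the hwraid.le-vert.net repository using the buster packages:

# add the hwraid repo (buster suite, used on bullseye at your own risk)
echo "deb http://hwraid.le-vert.net/debian buster main" > /etc/apt/sources.list.d/hwraid.list
wget -O - https://hwraid.le-vert.net/debian/hwraid.le-vert.net.gpg.key | apt-key add -
apt update
apt install megacli megaclisas-status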
Hi there.
I installed ifupdown2 remotely on my PVE 7 host over an SSH console (PuTTY). The ifupdown2 install process seems to remove ifupdown. I did NOT update the ifupdown packages prior to installing ifupdown2 (not a recommended move, as I read afterwards). During the installation I got dc...
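For anyone else doing this over SSH: a belt-and-braces approach (just a sketch, nothing Proxmox-specific) is to arm a delayed reboot before touching the network stack, so a botched install doesn't leave the box unreachable:

# dead-man switch: reboot in 15 minutes unless cancelled
shutdown -r +15 "network change fallback"
apt install ifupdown2
ifreload -a            # apply /etc/network/interfaces via ifupdown2
# still connected? disarm the fallback reboot
shutdown -c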
All righty then...
So for now, I've decided to abandon ZFS for the "big" storage and use hardware RAID instead. I also managed to increase the total disk pool to 8x1TB SATA. That, combined with a BBU-backed RAID controller in RAID-6 mode, seems more fault tolerant than my 4-bay QNAP (with 2x RAID1...
Uhm. That seems quite inefficient for usable storage. I mean, 2.6TB usable out of 6TB with single-drive fault tolerance... Or am I missing something crucial when creating the ZFS pool initially? Both online ZFS calculators and the PM web GUI show I should have over 4TB usable.
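Back-of-the-envelope for 6x1TB (1 TB is roughly 0.91 TiB, ignoring ZFS overhead), which is more or less what the calculators assume:

raidz1          : (6 - 1) x 1 TB ≈ 5 TB ≈ 4.5 TiB usable
raidz2          : (6 - 2) x 1 TB ≈ 4 TB ≈ 3.6 TiB usable
3x mirror vdevs :  6 TB / 2     = 3 TB ≈ 2.7 TiB usable

So 2.6T usable looks suspiciously like the pool ended up as striped mirrors rather than raidz1; zpool status on the pool would show which layout it actually got.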
In that case I might consider...
Hi there...
I've installed PM on my test lab:
- 4-core xeon
- 32GB DDR3 ECC
- 2x600GB SAS (hardware PCI-E RAID1) - boot drive with PM itself + VM boot VD's...
- 6x1TB 2.5" mixed SATA drives (IT mode via motherboard-integrated SATA ports)
My goal was to launch a VM NAS of some sort...
I've...
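In case it helps anyone with a similar box: for a NAS-type VM the common trick is to hand the IT-mode SATA disks to the guest as whole-disk passthrough. A minimal sketch, assuming the NAS VM has ID 100 and the scsi1+ slots are free (the by-id names are placeholders):

# find stable device names for the SATA drives
ls -l /dev/disk/by-id/ | grep -v part
# attach each physical disk to the VM by its by-id path
qm set 100 --scsi1 /dev/disk/by-id/ata-<disk1-serial>
qm set 100 --scsi2 /dev/disk/by-id/ata-<disk2-serial>
# ...and so on for the remaining drives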
I've reconfigured the bond to balance-tlb, then balance-alb - same thing - no network in the VM. I've also installed an Ubuntu CT - same thing. Ping 1.1.1.1 = network is unreachable.
BTW: in my test environment I'm using an 8-port Netgear gigabit desktop switch plus a TL-R470+ router...
Hi there.
I'm testing out NIC bonding since my test server has 3x1Gb NICs. I want to achieve fault tolerance plus increased performance. The LAN environment has no advanced/managed switches, so 802.3ad, for example, is not an option.
I've tried various bonding types; for now I have balance-rr, which...
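For reference, this is the shape of bonded-bridge config I'm experimenting with in /etc/network/interfaces (NIC names and addresses are placeholders for my lab). I've shown active-backup here because, as far as I understand it, balance-tlb/alb rewrite source MACs, which tends to break bridged VM traffic, and balance-rr really wants switch-side port grouping - so on an unmanaged switch active-backup is the safe baseline, at the cost of no extra throughput:

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3
        bond-mode active-backup
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0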
Hi there.
My first hardware RAID-1 volume is set up as the main Proxmox boot volume, with the rest used as LVM for the VMs.
Now I have a new pair of disks (a second RAID1). I'm a bit confused about configuring this secondary RAID1 storage for VM virtual disks plus backups.
The easiest way is to create a full-capacity ext4 partition and mount it as...
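For the VM virtual disks specifically, an LVM-thin pool on the second array is arguably nicer than plain ext4 (snapshots, thin provisioning), while backups still need a file-level directory storage - so the ext4 idea stays valid for that part. A rough sketch, assuming the new RAID1 shows up as /dev/sdb and the names are just examples:

# turn the whole array into a PV/VG
pvcreate /dev/sdb
vgcreate raid1_2 /dev/sdb
# thin pool, leaving a little headroom for pool metadata
lvcreate -l 95%FREE --thinpool data raid1_2
# register it as a Proxmox storage for VM disks
pvesm add lvmthin raid1_2-thin --vgname raid1_2 --thinpool data --content images,rootdir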
For VMA, the only solution for now is to create multiple backup tasks with single VDs and then (if needed) restore the backups as multiple VMs and move the virtual disks between them - unless there's an easier option.
Is it a relatively easy tweak for upcoming updates, or is adding this feature likely a long-term process?
I'd love to see an easy "include/exclude" switch as one of the restore steps.
Unpacking backups is possible in a way, but it might take literally HOURS along with tons of temporary free...
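To be clear, the "unpacking" route I mean is the vma tool that ships with PVE: it dumps every disk in the archive to a directory, which is where the hours and the temporary space go. A sketch, assuming an lzo-compressed archive (file and volume names are placeholders):

# extract all disk images from the backup archive into a scratch directory
lzop -d -c vzdump-qemu-100-....vma.lzo | vma extract - /path/to/scratch
# then import only the disk you actually want into the target VM
qm importdisk 100 /path/to/scratch/disk-drive-scsi0.raw local-lvm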
These snapshots were made during or shortly after the initial VM setup after install. So yes, they might differ quite significantly from the 'current' VM state.
Fortunately I managed to install an additional (and larger) RAID-1 volume and use all of it for VM virtual disks. I partitioned it, then created a VG...
Quick question: is there an easy way to perform a 'custom' recovery, meaning restore only specified virtual disks (not all)?
I have a VM with the OS and data stored on separate virtual disks. I want to restore only the OS one...
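The only workaround I know of so far is on the backup side rather than the restore side: mark the data disk as excluded from backup, so the archive only ever contains the OS disk. In the GUI that's the "Backup" checkbox on the disk; from the CLI it's roughly this (VM ID and volume name are placeholders, and the full current volume spec has to be repeated):

# keep scsi1 (the data disk) out of future backups
qm set 100 --scsi1 local-lvm:vm-100-disk-1,backup=0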
One more thing... Will this be of any use?
****@ip:~# mc
(Midnight Commander screenshot: left panel browsing /etc/lvm/archive, right panel at ~)
*** @ip:~# vgchange -a y raid1_2x1200GB
Check of pool raid1_2x1200GB/daneVM failed (status:1). Manual repair required!
0 logical volume(s) in volume group "raid1_2x1200GB" now active
I have no idea what happened. The last task I remember was creating a 250GB VD on it and copying some stuff from...
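In case someone lands here with the same message: as far as I can tell it means the thin-pool metadata is damaged, and the standard first attempt (after copying the LVM metadata backups out of /etc/lvm/archive, and with some free space left in the VG for a new metadata volume) is lvconvert's repair mode, run while the pool is inactive:

# attempt an automatic thin-pool metadata repair (pool must be inactive)
lvconvert --repair raid1_2x1200GB/daneVM
# then try activating the VG again and check what came back
vgchange -ay raid1_2x1200GB
lvs -a raid1_2x1200GB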
Hi there.
I recently added a new pair of server disks (hardware RAID1). I also created a new LVM-thin on them and moved the boot VD of my VM over from the old LVM.
It worked like a charm for a few days. Just now I saw my VM stuck in a boot loop - "no bootable device".
In the Proxmox GUI everything seemed to look just...
Forgot to mention that this LVM-thin also contained two very old snapshots from the initial install of the VM. The snaps took an additional ~20GB, yet even with both snapshots plus the virtual disks there was over 30GB of free space left.
The 'only' solution to recover some free space was to delete one of the snapshots. Yet after a week or so...
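For anyone hitting the same thing: it's worth watching the thin pool's actual data and metadata fill levels, since a thin pool that quietly runs out of either is a classic way to end up with unreadable thin volumes and a "no bootable device" guest:

# show how full the thin pool and its volumes really are
lvs -a -o lv_name,lv_size,data_percent,metadata_percent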