We have an HPE DL380 G9 with sixteen 2.5" storage bays. This is going to be used as a Windows image deployment server, imaging up to around 30 computers at a time. We also have a plethora of 1.92TB HPE-branded SSDs. If I can, I'd like to aggregate two or potentially four 10Gig NIC ports and run...
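What I have in mind for the aggregation side is an LACP bond underneath the Proxmox bridge. A rough sketch of /etc/network/interfaces, assuming the switch supports 802.3ad; the interface names and address below are placeholders, not our actual setup:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    # eno1/eno2 are hypothetical names; list the two or four 10Gig ports here

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    # the address is a placeholder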
@bbgeek17 & @leesteken It worked! :)
30 16 * * * /usr/sbin/shutdown -h now
I'd like to see it work more than once before I let myself think it's the solution, but it's good to see it do what I wanted.
That's a whole other confusing thing that I opted to cheat on instead of confront. It seems that without Internet access, if I set the BIOS clock to the current time, PROXMOX treats it as UTC, which for me (EST) is supposed to be 4 or 5 hours ahead? So if the BIOS says 10AM, PROXMOX says 6AM EDT...
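If I ever confront it properly instead of cheating, my understanding is that the host assumes the BIOS/RTC is kept in UTC; checking and changing that assumption looks roughly like:

timedatectl                    # shows system time, RTC time, and whether the RTC is treated as local time
timedatectl set-local-rtc 1    # tell the host the BIOS clock is kept in local time instead of UTC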
I'd have to...
Oh wait...
0 0 = Midnight
0 1 = 1AM
0 2 = 2AM
etc...
0 23 would be 11PM
I'm an idiot: I do want 30 16 for 4:30PM, not 30 15. Well, in either case, what I wanted crontab to do wasn't a compatible request...
Will follow up Tuesday evening when I have access to the server again. It doesn't get used...
The example PROXMOX provides inside the crontab editor:
...
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
...
0 5 would suggest 6AM, not 5AM, unless it was 1-24...
And I did test both (30 16, 30 15)...with...
Ah, right, it's basically the absolute file path to the command. It still needs its arguments. I guess the last question I have for now, then, would be: is crontab's hour field 0-23 or 1-24? Because looking up crontab online suggests 0-23, but the commented section of crontab -e on proxmox suggests 1-24...
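For my own future reference, this is the field layout as man 5 crontab describes it (so the hours really are 0-23):

# minute (0-59)  hour (0-23)  day of month (1-31)  month (1-12)  day of week (0-7; Sunday is 0 or 7)
# so "0 5 * * 1" really is 5AM every Monday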
There's an actual file called shutdown? Interesting. So I should try:
30 15 * * * /usr/sbin/shutdown
I won't have a chance to test it until Tuesday so I'll get back to you on that.
The simplest solution I could think of was crontab. So, as root, I created a cron job entry:
30 15 * * * /root/shutdown.sh
Then created /root/shutdown.sh with simply:
shutdown now
When 4:30PM rolls around, nothing happens... I've checked the system time with the date command and verified the time...
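For what it's worth, my current guess (and it is only a guess) is that cron runs with a minimal PATH, so the bare shutdown inside the script may simply not be found. A version of the script with a shebang and the absolute path would look like:

#!/bin/sh
# /root/shutdown.sh -- cron's PATH usually doesn't include /usr/sbin, so spell the path out
/usr/sbin/shutdown -h now

(made executable with chmod +x /root/shutdown.sh)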
@Dunuin It might be out of nowhere for you, but a year later your post just made my late night and the rest of the weekend a heck of a lot less stressful.
One of my mirrored boot drives failed, luckily not the one with the bootloader (NVMe, so no hot swap). I learned tonight that ZFS really...
This just looks to have worked for one of the two VMs. The other is having a hissy fit about... something...
progress 32% (read 43980685312 bytes, duration 93 sec)
_22-16_40_10.vma.zst : Decoding error (36) : Corrupted block detected
vma: restore failed - short vma extent (3167888 < 3801600)...
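If it helps narrow things down, the compressed archive itself can be checked with zstd's test mode before attempting the restore, run against whichever backup file is failing, e.g.:

zstd -t /root/vzdump-qemu-117-2022_10_22-16_26_51.vma.zst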
root@pve:~# pvesm status
Name Type Status Total Used Available %
local dir active 1885862016 9028864 1876833152 0.48%
local-zfs zfspool active 1876833272 96 1876833176 0.00%...
Maybe you know the answer to my new problem. PROXMOX will not let me import the VMs to the new server:
root@pve:~# qmrestore vzdump-qemu-117-2022_10_22-16_26_51.vma.zst 100 --storage rpool
restore vma archive: zstd -q -d -c /root/vzdump-qemu-117-2022_10_22-16_26_51.vma.zst | vma extract -v -r...
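One thing I'm double-checking on my end (an assumption on my part, not a diagnosis): the --storage argument has to be one of the storage IDs that pvesm status reports, so with the storages shown above that would look like:

qmrestore vzdump-qemu-117-2022_10_22-16_26_51.vma.zst 100 --storage local-zfs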
Quick background information: I want to set up some UNIX-based virtualized tools at my work. I asked for permission and got it under the condition that it not be connected to the company network/internet. This is fine.
What I'm looking to do is install PROXMOX on a small spare PC in the office...
Hello, as a fun project I decided to attempt to get NVIDIA’s vGPU on a Kepler GRID K2 working.
I’ve been inching my way to success but I’ve now hit a brick wall. I need to install vgpu-kvm drivers. I sourced recent drivers (460/470 series) but after attempting to install them the installer...
It looks like that did it. For both this server and the previous one with the same issue.
I'll run some experiments to really make sure it's working but otherwise it looks like the problem is solved.
I can't explain why but it still feels strange that suddenly there was a switch from...
Alright. I removed intel_iommu=on from GRUB.
I added intel_iommu=on to cmdline:
root=ZFS=rpool/ROOT/pve-1 boot=zfs
intel_iommu=on
Ran proxmox-boot-tool refresh
Rebooted the server.
Same error:
Re-checking cat /proc/cmdline:
initrd=\EFI\proxmox\5.13.19-2-pve\initrd.img-5.13.19-2-pve...
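For reference, my understanding is that proxmox-boot-tool expects /etc/kernel/cmdline to be a single line, so the whole entry would read something like:

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on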
Looks like the answer is not (output of cat /proc/cmdline):
initrd=\EFI\proxmox\5.13.19-2-pve\initrd.img-5.13.19-2-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs
I am running ZFS on the boot media. I have a ZFS mirror configured on a pair of SSDs that PROXMOX boots from.
Also if I try to run...
Hello, I'm running PROXMOX 7.1-7 on an Intel Xeon platform and I'm attempting to get hardware pass-through working.
I've enabled VT-d in the BIOS.
I've added intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT
I've run proxmox-boot-tool refresh
The system persists in telling me that IOMMU isn't...
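For completeness, the usual way to check whether the flag actually took effect after a reboot (standard checks from the PCI passthrough docs, as far as I can tell):

cat /proc/cmdline               # confirm intel_iommu=on made it onto the booted kernel's command line
dmesg | grep -e DMAR -e IOMMU   # should show a "DMAR: IOMMU enabled" line when it's active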
I've spent the latter part of the last 24 hours trying to set up some VMs from backup onto PROXMOX 7.1-4. I was upgrading from an older version.
Relevant hardware:
Motherboard: Supermicro X10DRi-T (BIOS: 3.4a)
CPU(s): Intel Xeon E5-2698v3
Hardware pass-through worked great until I performed a...
It did. I later learned that the vfio-pci driver is designed to be used with basically any hardware device that the host can do without. It's not strictly for GPUs.
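To illustrate (the PCI ID below is a placeholder; the real vendor:device pair comes from lspci -nn), binding an arbitrary device to vfio-pci can be done with a modprobe option and an initramfs rebuild:

# /etc/modprobe.d/vfio.conf
# 8086:1533 is a hypothetical vendor:device ID -- substitute the pair reported by "lspci -nn" for your device
options vfio-pci ids=8086:1533
# then rebuild the initramfs and reboot: update-initramfs -u -k all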