I simply installed sshfs and got this warning.
It refers to pve-cluster and ceph-fuse, so I'm asking whether this is something to consider.
I suppose the Proxmox team had its good reasons for staying with the "old" fuse instead of fuse3.
# apt install sshfs
Reading package lists... Done
Building...
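A quick way to see which side depends on which fuse variant (apt queries only, nothing gets modified; the -s run just simulates the install to preview what would be removed):
# apt-cache depends pve-cluster | grep -i fuse
# apt-cache rdepends fuse fuse3
# apt-get install -s sshfs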
Playing with a test cluster.
After disconnecting one node to test HA migration (VM replication active), some VMs ended up corrupted (they hang on boot as if the disks were corrupted).
From systemctl:
● zfs-import@zp01.service loaded failed failed...
I can't understand why replication stops if I disconnect the cable of the "local" network, while it continues if I disconnect the "internet" network.
I assumed it uses the "local" link for replication and falls back to the alternate link if the "local" one fails.
I have this test cluster (3 nodes, 2 nic)...
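If I read the docs correctly, replication traffic follows the migration network defined in /etc/pve/datacenter.cfg, so a sketch of the setting I would expect to matter here (the subnet is an assumption):
# cat /etc/pve/datacenter.cfg
migration: secure,network=192.168.1.0/24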
Hi,
Is there a fast way/trick to move VMs from an old PVE 7.0, where they are stored as normal files, to a new PVE 8.0 using ZFS (block device)?
I suppose the simplest answer may be backup/restore, but I am looking for something like directly copying the image folders, because I have about 3...
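To frame the question: the route I know of, short of backup/restore, is importing each image into the ZFS storage (the VMID, path and storage name below are examples):
# qm importdisk 100 /var/lib/vz/images/100/vm-100-disk-0.qcow2 local-zfs
but that still converts and copies all the data, so I am wondering if there is anything faster.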
I have a 3-node Ceph cluster; each node has 4× 3.84 TB disks to use as OSDs.
VM images are about 3 TB.
Is it better to create one pool with all 4× 3.84 TB on each node (15 TB), or two pools of 2× 3.84 TB on each node (7.5 TB each)?
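For context, assuming the default size=3 replication, the capacity math comes out the same either way:
3 nodes × 4 OSDs × 3.84 TB = 46.08 TB raw → ~15.36 TB usable (one pool)
2 × (3 nodes × 2 OSDs × 3.84 TB ÷ 3) = 2 × ~7.68 TB usable (two pools, same total)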
Thanks, P.
I have a PVE 7.x cluster with 3 nodes (2 PVE with VM/CT + 1 PVE/PBS).
No Ceph.
I want to add 3 new PVE 8.x nodes, install Ceph on them, and once it is ready move the VMs off the old PVE 7.x nodes, which will then be decommissioned (the PBS node upgraded).
Can I mix 7.x and 8.x?
Thanks, P.
I have a cluster with 2× SCALE-4 (at RBX), each with 2 SSDs for the PVE system and 4/6 NVMe drives (Samsung MZQLB), plus a backup/storage server.
They are using 2 NVMe disks as storage, so I have 2 spare NVMe disks available.
CPUs are 2× Intel Xeon Gold 6226R (16c/32t)
RAM 384 GB
Public bandwidth 1 Gbps, Private...
We are considering a new installation, so hints are welcome.
We are planning to use a couple of Dell 750s servers
Initial config will be:
128 GB RAM
2 x 4310 CPU
2 x 480 GB SSD
2 x 1.9 TB NVME (U.2) disks
standard 2 x 1 Gbit NIC
Then we add:
2 x SSD Intel D7 P5520 U.2 7680 GB
And more when...
Fresh PVE 7.x.
2 LXC containers.
I could run standard backups until yesterday; now it fails every time I try.
It fails on local storage, on a mounted dir, and on PBS.
A couple of failed logs (PVE 7.1-10):
INFO...
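Running the same job from the CLI may surface the full error outside the GUI (the CT ID and storage name are examples):
# vzdump 101 --storage local --mode snapshot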
I need to use some single failover IPs (IP FO) on a new dedicated OVH server.
Unlike the old servers, OVH says that for this one I need to use NAT, because the old MAC-address routing for IP FO is no longer supported (on this server).
And I can't put a single IP FO in the vRack.
Is there a clean way to NAT an...
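To make it concrete, the kind of rules I have in mind (203.0.113.10 stands in for the failover IP, 192.168.0.10 for the guest; both are placeholders):
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 192.168.0.10
# iptables -t nat -A POSTROUTING -s 192.168.0.10 -j SNAT --to-source 203.0.113.10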
Proxmox itself provides a variety of basic templates for the most common Linux distributions, OK.
But I was wondering about the difference between using these templates versus an updated one.
Today I was planning to install a new CentOS 8 container (I know CentOS 8 has been declared dead).
I...
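For reference, the stock templates I mean are the ones handled by pveam (the exact file name comes from the list, so it is a placeholder here):
# pveam update
# pveam available --section system
# pveam download local <template-name-from-the-list>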
Just out of curiosity: I have a server with an old disk that I know isn't in "perfect shape" (many recent errors reported by smartctl -a).
The GUI Disks table reports:
GUI SMART Values reports:
But a complete smartctl -a reports:
=== START OF INFORMATION SECTION ===
Model Family...
I have a test PVE box that I use as a backup/spare.
Having read about the new PBS some months ago, I decided to try an install following the
instructions for "Proxmox Backup Server on Proxmox VE" (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-server-on-proxmox-ve).
Update PVE...
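For reference, the core of those instructions boils down to adding the PBS repository and installing the package (the pbs-no-subscription repo and the "buster" suite are assumptions here; match them to the actual Debian release):
# echo "deb http://download.proxmox.com/debian/pbs buster pbs-no-subscription" > /etc/apt/sources.list.d/pbs.list
# apt update
# apt install proxmox-backup-server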
I have this small box that I use as a firewall for my office/lab tests.
There are only two small VMs running (both pfSense boxes with 1 GB RAM, minimal network traffic):
# qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
21971 pfSenseF1 running 1024...
Until Proxmox 4.x I used to add lines like:
ifconfig vmbr0:74 99.99.99.99 broadcast 99.99.99.99 netmask 255.255.255.255
in /etc/rc.local
In Proxmox 5.x (Debian 9.x) there is no /etc/rc.local anymore.
I suppose I can create a service to get it back (a sketch below), or I may play with some @reboot action in...
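A sketch of the service variant (the unit name is made up; the IP is the one from the example above). In /etc/systemd/system/extra-ip.service:
[Unit]
Description=Add extra IP alias on vmbr0
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ip addr add 99.99.99.99/32 dev vmbr0 label vmbr0:74

[Install]
WantedBy=multi-user.target

then enabled once with:
# systemctl enable extra-ip.service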
Trying to boot, I get:
Volume group "pve" not found
Cannot process volume group pve
Unable to find LVM volume pve/root
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the...
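The usual first thing to try, assuming the VG simply isn't up yet when the initramfs looks for it, is activating it by hand from the initramfs shell:
(initramfs) lvm vgchange -ay
(initramfs) exit
and, if that boots, adding rootdelay=10 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, followed by:
# update-grub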
I don't know if it can be considered a bug, but I noticed that installing Proxmox VE (both 4.x and 5.x) onto disks that were already used for an installation (test or ...) causes many kinds of problems.
It can simply fail to complete the installation, or complete it but leave you with a system that is a...
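A workaround consistent with this, assuming leftover partition tables/LVM/ZFS labels are the cause, is wiping the disks before reinstalling (destructive, and /dev/sdX is a placeholder):
# wipefs -a /dev/sdX
# sgdisk --zap-all /dev/sdX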
More a Debian question that a specific Proxmox question ...
I usually configure my personal box with 12 ttys (tty1-tty12 --> ALT-F1 to ALT-F12).
Until Proxmox 3.x I defined them in /etc/inittab and /etc/securetty (if I remember correctly).
Proxmox 4 uses Debian 8 and things are different.
Tried to...
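My understanding of the Debian 8 equivalent: systemd spawns gettys on demand from the getty@.service template, so raising NAutoVTs=12 in /etc/systemd/logind.conf should cover the on-demand ones, plus, for a fixed one:
# systemctl enable getty@tty12.service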
I am migrating my old Proxmox 3.4 servers to new 4.4 servers.
Same hardware, same VMs, but the node's CPU load is noticeably higher on 4.4 than on 3.4 (1.5-3.x on 4.4 vs 0.3-1.1 on 3.4).
And it was worse before restoring the VMs as chroot (--rootfs local:0).
If I try to monitor the cpu usage into an LXC...