NAS drives not accessible

cdsJerry

Renowned Member
Sep 12, 2011
I'm building a new machine to run a new installation of Proxmox 8 and will then transfer all my VMs over.

Following the instructions on the wiki, I copied /etc/pve, as well as the /etc/passwd, /etc/network/interfaces, /etc/resolv.conf and /etc/pve/storage.cfg files, from the old server to the new one. I then rebooted the new server and it does show the external storage drives, but with a question mark on each of them. If I click on them I get a mount error: mount.nfs: No such device (500). The device is still there on the old server, so I'm sure it's a configuration problem on the new server, but how do I find it? When I look at the storage.cfg file it looks OK to me, with entries such as:
Code:
nfs: ProxNas3
        export /mnt/HD/HD_a2/ProxNAS3
        path /mnt/pve/ProxNas3
        server 192.168.x.xxx
        content backup,images,iso,vztmpl
        prune-backups keep-last=1

On a separate issue, the new machine shows two entries in the Datacenter. It shows pve, but also shows one called proxmox2, which I think was the name of the Proxmox install before I did the new installation. I thought the installation would wipe out everything on the HDD, so maybe this came over with the restored files from /etc/pve? How do I delete it?
 
I'm sure it's a configuration problem on the new server, but how do I find it
Check the output of "journalctl -n 500" and/or "journalctl -f".
Check the output of "pvesm status".
Check the output of "systemctl" (any services failed?). A quick sketch of these checks is below.
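
Concretely, something along these lines (the grep pattern and the NAS IP are only placeholders, adjust to your setup):
Code:
# recent log entries, filtered for mount/NFS noise
journalctl -n 500 | grep -Ei 'nfs|mount|error'

# storage status as PVE sees it
pvesm status

# any failed services?
systemctl --failed

# can this host see the NAS exports at all? (needs nfs-common installed)
showmount -e 192.168.x.xxx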

I am not sure about the erroneous name in the GUI, as I don't fully understand what you copied. It sounds like the entire /etc/pve (?), but that doesn't overwrite existing config that may already be there. You should examine the contents of /etc/pve for more details.

Additionally: if you know the old server still has access to the NFS storage, that implies the old server is still up. But in your procedure description you only say that you copied /etc/network/interfaces; you don't say whether you changed it. That probably leads to a duplicate IP (a quick check is sketched below).
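
If iputils-arping is installed and vmbr0 is your bridge, a duplicate IP is quick to rule out (the address below is just your placeholder):
Code:
# does anything else on the LAN already answer for this IP?
arping -D -I vmbr0 -c 3 192.168.x.xxx

# what does this host itself have configured?
ip addr show vmbr0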

I'd recommend being more precise and careful in your steps.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

I'm running the two systems on different IP addresses (both LAN and WAN), so I have both units up and running, but I only have VMs running on one unit. I can trash the second unit and start over, but I need to know what I did wrong first so I don't do it again on the rebuild. I was following the instructions in the wiki at https://pve.proxmox.com/wiki/Upgrade_from_7_to_8 under "New installation", since I'm going to be installing this on a new computer (which will arrive next week; I'm practicing on a different computer to make sure I know what I'm doing, which obviously I don't. LOL). I only copied the files listed in the GUI, but I did also restore the files listed in step 2. It said it was required to back them up, so it seemed reasonable that I must also restore them. But maybe that's where I went wrong? Maybe I never use them again, but then why is it so important to back them up?

Here's the output from pvesm status
Code:
root@pve:~# pvesm status
mount error: mount.nfs: No such device
mount error: mount.nfs: No such device
mount error: mount.nfs: No such device
Name                  Type     Status           Total            Used       Available        %
Backups-IOMEGA         nfs   inactive               0               0               0    0.00%
ProxNas3               nfs   inactive               0               0               0    0.00%
WDnas2                 nfs   inactive               0               0               0    0.00%
local                  dir     active        98497780         2829580        90618652    2.87%
local-lvm          lvmthin     active      1793077248               0      1793077248    0.00%

I don't know what I'm looking at but the journalctl -n 500 command gave several pages that look like this:
Code:
Jul 28 11:44:14 pve pvedaemon[966]: mount error: mount.nfs: No such device
Jul 28 11:44:15 pve pvestatd[963]: mount error: mount.nfs: No such device
Jul 28 11:44:15 pve pvestatd[963]: mount error: mount.nfs: No such device
Jul 28 11:44:15 pve pve-firewall[957]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Jul 28 11:44:15 pve pvestatd[963]: mount error: mount.nfs: No such device
Jul 28 11:44:16 pve pvedaemon[965]: mount error: mount.nfs: No such device
Jul 28 11:44:16 pve pvedaemon[965]: mount error: mount.nfs: No such device
Jul 28 11:44:16 pve pvedaemon[965]: mount error: mount.nfs: No such device
Jul 28 11:44:18 pve pvedaemon[965]: mount error: mount.nfs: No such device
Jul 28 11:44:18 pve pvedaemon[965]: mount error: mount.nfs: No such device
Jul 28 11:44:18 pve pvedaemon[965]: mount error: mount.nfs: No such device

I'm shutting down for the weekend but I'll follow up on Monday. Thanks in advance.
 
You don't have a proper full installation on your test server; either packages and/or the kernel are missing/wrong.

Since you are not really doing a new _replacement_ installation, i.e. your server is not a true one-for-one restore (new IP, name, etc.), you are on your own with untangling the partial configuration restores.
Overall this will be a good exercise for you to get familiar with the nuts and bolts of PVE.
Good luck.
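
For what it's worth, "mount.nfs: No such device" usually means the NFS client module cannot be loaded into the running kernel (for example when the running kernel does not match the installed kernel packages). A rough way to check, assuming an otherwise standard PVE install:
Code:
# running kernel vs. installed kernel packages
uname -r
dpkg -l | grep -E 'proxmox-kernel|pve-kernel'

# can the NFS client module be loaded at all?
modprobe nfs && echo "nfs module loaded"
lsmod | grep nfs

# NFS client userspace present?
dpkg -l nfs-common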


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Interesting. I did a full installation from the ISO DVD, which it said would wipe everything on the drive in the process. How would that have been less than a full install? Once installation was finished I copied the files from the instructions, but edited them to set the new IPs to avoid conflicts. Maybe I'll wipe the new machine and try again; maybe I copied something I didn't realize I was grabbing. Adding the NFS drives is my weakest area, where I struggle. There's no way to add them in the GUI that I've found.

I notice that in the system network settings the network devices are named enp3s0 and enp4s0, while on the old server they're named eno1 and eno2, but the bridges have the same names on both machines. I wouldn't think that would be what breaks NFS, since it connects to the bridge. I'm grasping at straws. I'll start over next week; maybe something will dawn on me over the weekend. Thank you again.
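
One thing I plan to double-check is the bridge-ports line in /etc/network/interfaces: if it still named the old NIC (eno1) instead of the one on this machine (enp3s0), the bridge would have no uplink. This is only a sketch, not my exact file, and the address is a placeholder:
Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.x.xxx/24
        gateway 192.168.x.1
        bridge-ports enp3s0   # must be the NIC name on this machine, not eno1
        bridge-stp off
        bridge-fd 0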
 
It's a new day and I just now see what you were saying. But actually I AM doing a true restore for that machine. It's new hardware, but it's an exact replacement for my backup server. None of the settings are different from my backup server. They are different from my primary server (different IPs), but they're the same as the backup server it's replacing. I misspoke when I said I was changing the IPs: in my mind they're changed from the primary server, but they actually aren't changed from the backup server I'm replacing. My excuse is that it was late on a Friday.

So following the instructions from the wiki it _should_ have worked. The network drives all show up, but they have question marks on them and show as empty.
 
I started over again, following the instructions in the wiki. The server comes up with the correct IPs; however, the network storage still shows a question mark on all of them. Running systemctl shows repeating lines that read:
Code:
Jul 31 05:18:45 pve pvestatd[969]: mount error: mount.nfs: No such device
Jul 31 05:18:45 pve pvestatd[969]: mount error: mount.nfs: No such device
Jul 31 05:18:45 pve pvestatd[969]: mount error: mount.nfs: No such device
Jul 31 05:18:46 pve pve-firewall[963]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Jul 31 05:18:49 pve pve-ha-lrm[1054]: unable to write lrm status file - unable to open file '/etc/pve/nodes/pve/lrm_status.tmp.1054' - No such file or directory
Jul 31 05:18:54 pve pve-ha-lrm[1054]: unable to write lrm status file - unable to open file '/etc/pve/nodes/pve/lrm_status.tmp.1054' - No such file or directory

I went in and followed the instructions to switch the unit over to the non-enterprise (no-subscription) repository, since this is, after all, an attempt to get this machine up and running pre-production. I then ran apt-get update and apt-get dist-upgrade and got the following error:
Code:
Reading changelogs... Done
Extracting templates from packages: 100%
Preconfiguring packages ...
dpkg: unrecoverable fatal error, aborting:
 unknown system user '_chrony' in statoverride file; the system user got removed
before the override, which is most probably a packaging bug, to recover you
can remove the override manually with dpkg-statoverride
E: Sub-process /usr/bin/dpkg returned an error code (2)
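
The message says I can remove the override manually with dpkg-statoverride. From what I can find, that would look something like the following, which I have not actually tried; the exact path has to come from the --list output, the one below is only an example:
Code:
# find the stale _chrony override(s)
dpkg-statoverride --list | grep chrony

# remove the override for whatever path the list shows, e.g.:
dpkg-statoverride --remove /var/lib/chrony

# then retry the upgrade
apt-get dist-upgrade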

The drives still show ? on them and can't be accessed. I ran the following command:
Code:
root@pve:~# systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
     Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; preset: enable>
     Active: active (running) since Mon 2023-07-31 06:20:00 HDT; 42min ago
    Process: 16688 ExecStart=/usr/bin/pvestatd start (code=exited, status=0/SUCCES>
   Main PID: 16692 (pvestatd)
      Tasks: 1 (limit: 28743)
     Memory: 82.0M
        CPU: 28.399s
     CGroup: /system.slice/pvestatd.service
             └─16692 pvestatd


Jul 31 07:01:30 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:01:40 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:01:40 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:01:40 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:01:50 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:01:50 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:01:50 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:02:00 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:02:00 pve pvestatd[16692]: mount error: mount.nfs: No such device
Jul 31 07:02:01 pve pvestatd[16692]: mount error: mount.nfs: No such device

Keep in mind my Linux is really weak. I'm just trying different things I've found in the forum, but so far none of my darts have hit the target. My best guess is that there's something in version 8 that doesn't handle the NFS drives the same way as prior versions. I've read several threads from others having this problem after the update to 8. My version 7.3-4 production machine still accesses them fine.
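
One thing suggested in some of those threads (which I haven't tried yet) is pinning the NFS protocol version in storage.cfg, since some older NAS boxes only speak NFSv3. This is just a sketch of what that entry would look like for one of my stores, not something I've confirmed works here:
Code:
nfs: ProxNas3
        export /mnt/HD/HD_a2/ProxNAS3
        path /mnt/pve/ProxNas3
        server 192.168.x.xxx
        content backup,images,iso,vztmpl
        options vers=3
        prune-backups keep-last=1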

Suggestions please?
 

I reached out to a person who works with Linux daily. We attempted a clean installation several times but always got the same result as above. He could force a mount with manual commands, but Proxmox would never mount the NFS share. Changing storage.cfg didn't help either.
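
For reference, forcing a mount by hand generally looks something like this (the IP and export path are just the placeholders from my storage.cfg entry above, and /mnt/test is an arbitrary scratch directory):
Code:
# try the mount manually in a scratch directory
mkdir -p /mnt/test
mount -t nfs 192.168.x.xxx:/mnt/HD/HD_a2/ProxNAS3 /mnt/test
ls /mnt/test
umount /mnt/test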

Eventually we decided to give up and install on a different machine. Keep in mind the machine we'd been working on has been running Proxmox for several years. When we switched to the new machine everything worked just fine, without any tweaks, except for one NFS drive whose share path in the config was only partly correct. The share on the NAS is /nfs/ProxmoxStore, but Proxmox kept mounting the share as /nfs/ and would then show no contents in the folder. We changed the storage.cfg file manually and rebooted, and the NAS showed up correctly; everything is working.
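
In other words, the only change was making the export line point at the full share path rather than its parent. Roughly (the storage name and the other lines here are placeholders; the export line is the point):
Code:
nfs: ProxmoxStore
        export /nfs/ProxmoxStore
        path /mnt/pve/ProxmoxStore
        server 192.168.x.xxx
        content backup,images,iso,vztmpl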

I have no idea why we can no longer get Proxmox to work on the hardware it's been using for so long, but I have given up. The time spent cost more than the new hardware.
 
