I have tried to re-deploy it several times with different types of PVCs, but it yields the same error.
(combined from similar events): MountVolume.SetUp failed for volume "openshift-1gb-09" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/14e95326-c963-11e9-b2f2-405cfda57fc1/volumes/kubernetes.io~nfs/openshift-1gb-09 --scope -- mount -t nfs storinator01.fedorainfracloud.org:/srv/nfs/openshift-1gb-09 /var/lib/kubelet/pods/14e95326-c963-11e9-b2f2-405cfda57fc1/volumes/kubernetes.io~nfs/openshift-1gb-09
Output: Running scope as unit: run-r2dcc56a9e83d4618b725208daa9aeff8.scope
mount.nfs: access denied by server while mounting storinator01.fedorainfracloud.org:/srv/nfs/openshift-1gb-09
Oops. There was an export problem. I have fixed it now... it should be working.
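For anyone hitting the same "access denied by server" error: it usually means the client is not covered by an entry in the NFS server's /etc/exports. A minimal sketch of what a working entry might look like, using the path from the error above; the client network and export options are assumptions, not the actual server config:

```shell
# /etc/exports on the NFS server (storinator01.fedorainfracloud.org)
# The client network (10.0.0.0/24 here) and options are illustrative assumptions.
/srv/nfs/openshift-1gb-09  10.0.0.0/24(rw,sync,no_root_squash)

# After editing /etc/exports, re-export without restarting the NFS server:
#   exportfs -ra
# Verify what is actually exported:
#   exportfs -v
```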
Metadata Update from @kevin: - Issue close_status updated to: Fixed - Issue status updated to: Closed (was: Open)
I am testing it right now and it doesn't seem to work. The error is the same.
Metadata Update from @rpitonak: - Issue status updated to: Open (was: Closed)
Yeah, I made 10 RWO 1G volumes... but they are already all used up. ;(
We have things set to reclaim, in case someone needs something off an old volume, but I guess I hadn't counted on how fast they would get used. ;)
I have reclaimed all the volumes in the reclaim state. We will need to figure out a better way of handling things moving forward. Perhaps just make a bunch more, or perhaps some scripting... will ponder on it.
any updates here?
The volume you were trying to use was provisioned incorrectly. I went and adjusted it so it would work... but if you could kill that pod, release that PVC, and request another one, that would be great.
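The release-and-request steps above might look something like this; the pod and PVC names are hypothetical, so substitute your own:

```shell
# Delete the pod so it stops holding the volume (name is hypothetical)
oc delete pod my-app-pod

# Release the faulty claim (name is hypothetical)
oc delete pvc my-app-data

# Request a fresh 1Gi RWO volume
oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```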