These were removed as part of the removal of kube. They were originally added here as a result of this BZ.
We need to determine whether tools like openshift-ansible depend on these being on the host for gluster storage to work, or whether it all works inside containers for them.
A longer term approach would be to get package layering support in an ansible module so that we could worry about stuff like this less.
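For reference, the manual layering flow that such an ansible module would presumably wrap looks roughly like this today (a sketch; no such module exists yet):

```
# layer the package into a new deployment on the host
rpm-ostree install glusterfs-fuse
# layered packages take effect on the next boot
systemctl reboot
# after reboot, the package shows up under LayeredPackages
rpm-ostree status
```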
Metadata Update from @dustymabe: - Issue tagged with: F27, host
@mmicene @jarrpa - please comment
The mount.glusterfs command (provided by the glusterfs-fuse RPM) is required to make use of GlusterFS volumes in kube/openshift. Without it, the nodes won't be able to provide those volumes to their containers.
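For context, this is roughly what ends up running on a node when a pod uses a GlusterFS volume (the server and volume names below are placeholders):

```
# mount(8) dispatches -t glusterfs to /usr/sbin/mount.glusterfs, which
# is shipped in the glusterfs-fuse RPM; without that helper the mount
# fails and the volume can't be provided to the pod
mount -t glusterfs server1:/myvol /mnt/gluster
```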
@jarrpa - I'll ask the naive question before someone else in this thread does: is there no way to do this by running the mount.glusterfs command from inside a container?
The answer can be "yes, but with pain". We just want to gauge the level of pain :)
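For the sake of discussion, a minimal sketch of what that might look like, assuming a hypothetical image (called gluster-fuse-helper here) that carries glusterfs-fuse; the pain points are privileges and making the mount propagate back to the host:

```
# the image name and mount target are illustrative, not real
docker run --rm --privileged \
  -v /mnt:/mnt:rshared \
  gluster-fuse-helper \
  mount -t glusterfs server1:/myvol /mnt/gluster
# without rshared propagation the mount stays trapped inside the
# container's mount namespace and the host never sees it
```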
@dustymabe Honestly, I wouldn't know. :) Never tried! I don't know enough about Atomic to truly understand how that would work, unfortunately. I'm also a bit unfamiliar with the glusterfs-fuse packaging, so I don't know if the RPM has other packaging or configuration dependencies.
@jarrpa - could we bring in someone who knows better? Maybe we can chat on IRC tomorrow and help figure some things out.
@dustymabe I'm out in CA for a conference through tomorrow. I'm still in CA on Friday but would be more open to an IRC or audio/video meeting.
@jarrpa ok - let's get something together and chat with some people who know more about glusterfs-fuse and openshift-ansible.
I just looked at this on f27 w/ layering. Here's what's added:
```
ceph-common-1:12.2.0-1.fc27.x86_64
glusterfs-3.12.1-1.fc27.x86_64
glusterfs-client-xlators-3.12.1-1.fc27.x86_64
glusterfs-fuse-3.12.1-1.fc27.x86_64
glusterfs-libs-3.12.1-1.fc27.x86_64
gperftools-libs-2.6.1-3.fc27.x86_64
leveldb-1.18-1.fc26.x86_64
libbabeltrace-1.5.3-1.fc27.x86_64
libcephfs2-1:12.2.0-1.fc27.x86_64
librados2-1:12.2.0-1.fc27.x86_64
libradosstriper1-1:12.2.0-1.fc27.x86_64
librbd1-1:12.2.0-1.fc27.x86_64
librgw2-1:12.2.0-1.fc27.x86_64
lttng-ust-2.10.0-2.fc27.x86_64
python-cephfs-1:12.2.0-1.fc27.x86_64
python-prettytable-0.7.2-11.fc27.noarch
python-rados-1:12.2.0-1.fc27.x86_64
python-rbd-1:12.2.0-1.fc27.x86_64
python-rgw-1:12.2.0-1.fc27.x86_64
snappy-1.1.4-5.fc27.x86_64
userspace-rcu-0.10.0-3.fc27.x86_64
```
note: to get this information I package layered dnf, then ran `ostree admin unlock --hotfix`, then `dnf install blah`.
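In case anyone wants to reproduce the list, here is one way to capture the delta (a sketch of the procedure described above):

```
ostree admin unlock --hotfix             # make /usr writable in place
rpm -qa | sort > /tmp/before.txt         # snapshot the package set
dnf install -y glusterfs-fuse ceph-common
rpm -qa | sort > /tmp/after.txt
comm -13 /tmp/before.txt /tmp/after.txt  # print only the newly added packages
```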
Here's some more detailed information. To add glusterfs-fuse:
```
[root@vanilla-f27atomic ~]# dnf install glusterfs-fuse | tee
Fedora 27 - x86_64                               9.9 MB/s |  66 MB     00:06
Last metadata expiration check: 0:00:06 ago on Mon 18 Sep 2017 06:24:59 PM UTC.
Dependencies resolved.
================================================================================
 Package                    Arch     Version          Repository          Size
================================================================================
Installing:
 glusterfs-fuse             x86_64   3.12.1-1.fc27    updates-testing    149 k
Installing dependencies:
 glusterfs                  x86_64   3.12.1-1.fc27    updates-testing    589 k
 glusterfs-client-xlators   x86_64   3.12.1-1.fc27    updates-testing    875 k
 glusterfs-libs             x86_64   3.12.1-1.fc27    updates-testing    406 k

Transaction Summary
================================================================================
Install  4 Packages

Total download size: 2.0 M
Installed size: 7.5 M
```
For ceph it's a little more:
```
[root@vanilla-f27atomic ~]# dnf install ceph-common | tee
Last metadata expiration check: 0:00:42 ago on Mon 18 Sep 2017 06:24:59 PM UTC.
Dependencies resolved.
================================================================================
 Package              Arch     Version            Repository          Size
================================================================================
Installing:
 ceph-common          x86_64   1:12.2.0-1.fc27    updates-testing     15 M
Installing dependencies:
 gperftools-libs      x86_64   2.6.1-3.fc27       fedora             287 k
 leveldb              x86_64   1.18-1.fc26        fedora             148 k
 libbabeltrace        x86_64   1.5.3-1.fc27       fedora             195 k
 libcephfs2           x86_64   1:12.2.0-1.fc27    updates-testing    451 k
 librados2            x86_64   1:12.2.0-1.fc27    updates-testing    2.9 M
 libradosstriper1     x86_64   1:12.2.0-1.fc27    updates-testing    346 k
 librbd1              x86_64   1:12.2.0-1.fc27    updates-testing    1.1 M
 librgw2              x86_64   1:12.2.0-1.fc27    updates-testing    1.8 M
 lttng-ust            x86_64   2.10.0-2.fc27      fedora             260 k
 python-cephfs        x86_64   1:12.2.0-1.fc27    updates-testing    125 k
 python-prettytable   noarch   0.7.2-11.fc27      fedora              41 k
 python-rados         x86_64   1:12.2.0-1.fc27    updates-testing    309 k
 python-rbd           x86_64   1:12.2.0-1.fc27    updates-testing    174 k
 python-rgw           x86_64   1:12.2.0-1.fc27    updates-testing    119 k
 snappy               x86_64   1.1.4-5.fc27       fedora              61 k
 userspace-rcu        x86_64   0.10.0-3.fc27      fedora              98 k

Transaction Summary
================================================================================
Install  17 Packages

Total download size: 24 M
Installed size: 77 M
Is this ok [y/N]: N
```
So gluster is ~7.5M installed and ceph is ~77M.
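Those numbers come straight from the transaction summaries above; if anyone wants to double-check on an installed system, rpm's SIZE tag (in bytes) can be summed, e.g. for the gluster set:

```
rpm -q --qf '%{SIZE}\n' glusterfs glusterfs-client-xlators \
    glusterfs-fuse glusterfs-libs \
  | awk '{s += $1} END {printf "%.1f MiB\n", s / 1024 / 1024}'
```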
We talked about this in the Atomic Working Group meeting today. The current proposal is that we keep gluster in, since it is rather small, and leave ceph out.
Please voice objections/concerns here, otherwise we'll take action soonish.
It looks like, while support may have landed in docker, kubernetes still doesn't support the sidecar-style container that would allow us to pull these from AH. There is a proposal from the OpenShift team to add that support to k8s here:
https://trello.com/c/7r78wcDN/42-8-design-containerized-mount-for-volume-filesystem-plugins-client-container
The card is old, and the various links are somewhat historical, but it looks like the team is still working on it.
My thought is that these things still need to be shipped in the host until that's fixed.
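To make that concrete: until k8s grows containerized-mount support, the kubelet execs mount helpers like mount.glusterfs directly on the host, so any workaround today would be a shim along these lines (purely hypothetical; the image name is made up):

```
#!/bin/sh
# hypothetical /usr/sbin/mount.glusterfs replacement that delegates to a
# container image carrying glusterfs-fuse; this only works if the mount
# target lives under the bind-mounted, rshared-propagated path
exec docker run --rm --privileged \
  -v /var/lib/kubelet:/var/lib/kubelet:rshared \
  gluster-fuse-helper /usr/sbin/mount.glusterfs "$@"
```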
In that case we should add them both back. Any objections?
+1 on adding them both back until they can be side-loaded.
What about package layering?
I'm fine w/ leaving them in, though.
I'd prefer to have ansible support for package layering, and for livefs support to no longer be experimental, before relying on layering :(
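For reference, the experimental livefs flow being referred to (behind the `ex` subcommand as of f27, so subject to change):

```
rpm-ostree install glusterfs-fuse   # layer the package
rpm-ostree ex livefs                # apply it to the running system, no reboot
```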
PRs:
The gluster/ceph rpms are now back in f27 and rawhide. Closing this issue.
Metadata Update from @dustymabe: - Issue close_status updated to: Fixed - Issue status updated to: Closed (was: Open)