I see ipa-replica-install fail with this in the middle of the normal output:
Unexpected error - see /var/log/ipareplica-install.log for details: CalledProcessError: Command '/bin/systemctl restart ipa.service' returned non-zero exit status 1
ipactl start:
Failed to start pki-tomcatd Service
Shutting down
ipa: INFO: File "/usr/lib/python2.7/site-packages/ipaserver/install/installutils.py", line 616, in run_script
    return_value = main_function()
  File "/usr/sbin/ipactl", line 478, in main
    ipa_start(options)
  File "/usr/sbin/ipactl", line 251, in ipa_start
    raise IpactlError("Aborting ipactl")
ipa: INFO: The ipactl command failed, exception: IpactlError: Aborting ipactl
Aborting ipactl
Version-Release number of selected component (if applicable): ipa-server-3.2.1-1 pki-base-10.0.3-2 tomcat-7.0.40-2 java-1.7.0-openjdk-1.7.0.25-2.3.10.7
Steps to Reproduce:
1. Install IPA server
2. ipa-replica-prepare
3. sftp the gpg file to the replica
4. ipa-replica-install
Actual results: the replica appears to install but shows failures. Closer inspection shows IPA is not started.
Expected results: clean install and IPA running.
Additional info:
From tail of /var/log/ipareplica-install.log:
2013-07-10T00:15:05Z DEBUG args=/bin/systemctl restart ipa.service
2013-07-10T00:17:12Z DEBUG Process finished, return code=1
2013-07-10T00:17:12Z DEBUG stdout=
2013-07-10T00:17:12Z DEBUG stderr=Job for ipa.service failed. See 'systemctl status ipa.service' and 'journalctl -xn' for details.
2013-07-10T00:17:12Z INFO File "/usr/lib/python2.7/site-packages/ipaserver/install/installutils.py", line 616, in run_script
    return_value = main_function()
  File "/usr/sbin/ipa-replica-install", line 732, in main
    ipaservices.knownservices.ipa.enable()
  File "/usr/lib/python2.7/site-packages/ipapython/platform/fedora16/service.py", line 116, in enable
    self.restart(instance_name)
  File "/usr/lib/python2.7/site-packages/ipapython/platform/base/systemd.py", line 116, in restart
    ipautil.run(["/bin/systemctl", "restart", self.service_instance(instance_name)], capture_output=capture_output)
  File "/usr/lib/python2.7/site-packages/ipapython/ipautil.py", line 322, in run
    raise CalledProcessError(p.returncode, arg_string)
2013-07-10T00:17:12Z INFO The ipa-replica-install command failed, exception: CalledProcessError: Command '/bin/systemctl restart ipa.service' returned non-zero exit status 1
And this is what I see in /var/log/messages:
Jul 10 10:31:07 qe-blade-09 systemd[1]: Stopping PKI Tomcat Server pki-tomcat...
Jul 10 10:31:07 qe-blade-09 pkidaemon[10395]: An exit status of '143' refers to the 'systemd' method of using 'SIGTERM' to shutdown a Java process and can safely be ignored.
Jul 10 10:31:08 qe-blade-09 systemd[1]: pki-tomcatd@pki-tomcat.service: main process exited, code=exited, status=143/n/a
Jul 10 10:31:08 qe-blade-09 systemd[1]: Stopped PKI Tomcat Server pki-tomcat.
Jul 10 10:31:08 qe-blade-09 systemd[1]: Unit pki-tomcatd@pki-tomcat.service entered failed state.
Jul 10 10:31:08 qe-blade-09 systemd[1]: Stopping PKI Tomcat Server.
Investigation showed that the CA clone was configured successfully, but that at some point after being restarted about three times in succession, CS.cfg appears to have been truncated. It was in fact 8096 bytes, which is probably the size of a file buffer.
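A truncation at roughly one buffer's size is the classic signature of a process exiting before its buffered writer is flushed. The following sketch (hypothetical Python, not Dogtag's Java code) reproduces the shape of the failure: a child process writes more than one buffer's worth of data and then hard-exits, mimicking a Java server killed by SIGTERM before its writer is flushed; everything past the last full buffer is lost.

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Hypothetical reproduction: the child writes 12000 bytes through a buffered
# stream one byte at a time, then exits without flushing or closing.
child = textwrap.dedent("""
    import os, sys
    f = open(sys.argv[1], "wb")      # buffered; default buffer is ~8 KB
    for _ in range(12000):
        f.write(b"x")                # small writes accumulate in the buffer
    os._exit(0)                      # hard exit: the pending buffer is lost
""")

path = tempfile.NamedTemporaryFile(delete=False).name
subprocess.run([sys.executable, "-c", child, path], check=True)
print(os.path.getsize(path))         # less than the 12000 bytes written
```

Only the buffer-sized chunks that were already flushed survive; the tail is gone, just as the tail of CS.cfg was.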
My current theory is that CS.cfg got corrupted because not all buffers were flushed when the server was shut down and restarted. We can probably reproduce this by starting and stopping a server repeatedly. What we do right now is store a status variable for getStatus() when the server comes up, which means rewriting the whole CS.cfg to disk.
So we need to:
1. Fix the buffer problem: use a print writer or similar so that buffers are written to disk immediately.
2. Move the status parameter to a separate (much smaller) file.
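Both fixes can be sketched together. This is a hedged Python illustration of the technique, not Dogtag's actual Java code, and the status-file path used below is hypothetical: write the config atomically (flush, fsync, then rename into place) so an interrupted write can never leave a truncated CS.cfg, and keep the status flag in its own tiny file so routine status updates never touch CS.cfg at all.

```python
import os

def write_atomically(path, data):
    """Write data to a temp file, force it to disk, then rename into place.

    A crash or SIGTERM mid-write leaves the old file intact rather than a
    truncated one.
    """
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()              # drain the user-space buffers to the OS
        os.fsync(f.fileno())   # drain the OS page cache to disk
    os.replace(tmp, path)      # atomic rename on POSIX filesystems

def write_status(status, status_path):
    # The status flag is a few bytes; in its own file, updating it no
    # longer requires rewriting the whole configuration.
    write_atomically(status_path, status + "\n")
```

The rename-into-place step is what makes this robust: readers always see either the old complete file or the new complete file, never a partial one.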
Well, that theory didn't hold up. I restarted my server 50 times in succession with no effect on CS.cfg. So the problem might instead be that IPA also writes to CS.cfg directly. I think we need to make sure the CS server is down before writing to CS.cfg.
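A guard along these lines would enforce that. This is a sketch, not existing IPA code; the systemd unit name matches the logs above, but the CS.cfg path is typical rather than confirmed, and the injectable `is_running` check is an illustration device.

```python
import subprocess

UNIT = "pki-tomcatd@pki-tomcat.service"

def server_is_running(unit=UNIT):
    # `systemctl is-active --quiet` exits 0 only while the unit is active.
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", unit]
    ).returncode == 0

def safe_edit_cs_cfg(edit_fn,
                     path="/var/lib/pki/pki-tomcat/ca/conf/CS.cfg",
                     is_running=server_is_running):
    """Apply edit_fn(path) only while pki-tomcat is stopped, so the server
    cannot rewrite CS.cfg underneath us or flush a stale copy over our
    changes at shutdown."""
    if is_running():
        raise RuntimeError("stop %s before editing %s" % (UNIT, path))
    edit_fn(path)
```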
Closing this ticket until this problem resurfaces.
Metadata Update from @nkinder: - Issue set to the milestone: 10.0.4
Dogtag PKI is moving from Pagure issues to GitHub issues. This means that existing or new issues will be reported and tracked through Dogtag PKI's GitHub Issue tracker.
This issue has been cloned to GitHub and is available here: https://github.com/dogtagpki/pki/issues/1254
If you want to receive further updates on the issue, please navigate to the GitHub issue and click on the Subscribe button.
Thank you for understanding, and we apologize for any inconvenience.