rgmanager: Fix VM restart issue
Problem description:
* node A starts vm:foo. Before starting vm:foo, it asks
the rest of the cluster if they have seen vm:foo
* node B receives a status inquiry request from node A.
It then executes a status check on that VM to see if it
is running. It's not, so status returns 1. At this
point, node B sets a NEEDSTOP flag.
* Suppose you disable the VM on node A and start it on
node B now. At this point, the NEEDSTOP flag is still
persisted on node B, but is ignored by the start/status
checks.
* If you then do a configuration update, the NEEDSTOP flag
  is -still- there. After a configuration update (or during
  a special "recover" operation), rgmanager uses the NEEDSTOP
  flag to decide which resources need to be stopped. Presence
  of this flag does NOT alter service state.
* Rgmanager does its reconfiguration, sees the NEEDSTOP flag,
and stops the virtual machine. Because the state has not
actually changed according to rgmanager (NEEDSTOP is
succeeded by NEEDSTART if a resource's parameters have changed,
for example), the next status check causes a recovery of
the VM and then the VM is restarted.
Solution:
* Don't set NEEDSTOP during STATUS_INQUIRY
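The gist of the fix can be sketched as follows. This is a simplified,
hypothetical model for illustration only: the function, flag, and enum
names here mirror the wording of this message, not rgmanager's actual
source.

```c
#include <assert.h>

/* Hypothetical flag: marks a resource as needing a stop on the
 * next reconfiguration or "recover" pass. */
#define RG_NEEDSTOP 0x1

/* Why the status check ran: a remote node asking whether we have
 * seen the VM, versus a check on a resource we actually manage. */
enum status_reason { STATUS_INQUIRY, STATUS_CHECK };

/* Before the fix, any failed status check set NEEDSTOP, including
 * one triggered by another node's STATUS_INQUIRY. The fix: only a
 * check on a locally managed resource may set the flag. */
int run_status(enum status_reason reason, int vm_running, int *flags)
{
    int ret = vm_running ? 0 : 1;

    if (ret != 0 && reason != STATUS_INQUIRY)
        *flags |= RG_NEEDSTOP;  /* fix: skipped during STATUS_INQUIRY */

    return ret;
}
```

With this change, node B's answer to node A's inquiry still returns 1
(VM not running), but no stale NEEDSTOP flag is left behind to trigger
a spurious stop and restart after a later configuration update.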
Signed-off-by: Lon Hohberger <lhh@redhat.com>