#49945 Ticket 49926 - Add replication functionality to dsconf
Closed 3 years ago by spichugi. Opened 5 years ago by mreynolds.
mreynolds/389-ds-base ticket49926  into  master

@@ -3029,8 +3029,9 @@ 

                   * temporarily mark it as "unavailable".

                   */

                  slapi_ch_free_string(&agmt->maxcsn);

-                 agmt->maxcsn = slapi_ch_smprintf("%s;%s;%s;%" PRId64 ";unavailable", slapi_sdn_get_dn(agmt->replarea),

-                                                  slapi_rdn_get_value_by_ref(slapi_rdn_get_rdn(agmt->rdn)), agmt->hostname, agmt->port);

+                 agmt->maxcsn = slapi_ch_smprintf("%s;%s;%s;%" PRId64 ";unavailable;%s", slapi_sdn_get_dn(agmt->replarea),

+                                                  slapi_rdn_get_value_by_ref(slapi_rdn_get_rdn(agmt->rdn)), agmt->hostname,

+                                                  agmt->port, maxcsn);

              } else if (rid == oprid) {

                  slapi_ch_free_string(&agmt->maxcsn);

                  agmt->maxcsn = slapi_ch_smprintf("%s;%s;%s;%" PRId64 ";%" PRIu16 ";%s", slapi_sdn_get_dn(agmt->replarea),

file modified
+2
@@ -26,6 +26,7 @@ 

  from lib389.cli_conf import saslmappings as cli_sasl

  from lib389.cli_conf import pwpolicy as cli_pwpolicy

  from lib389.cli_conf import backup as cli_backup

+ from lib389.cli_conf import replication as cli_replication

  from lib389.cli_conf.plugins import memberof as cli_memberof

  from lib389.cli_conf.plugins import usn as cli_usn

  from lib389.cli_conf.plugins import rootdn_ac as cli_rootdn_ac
@@ -79,6 +80,7 @@ 

  cli_sasl.create_parser(subparsers)

  cli_pwpolicy.create_parser(subparsers)

  cli_backup.create_parser(subparsers)

+ cli_replication.create_parser(subparsers)

  

  argcomplete.autocomplete(parser)

  

file modified
+27 -16
@@ -80,6 +80,7 @@ 

      formatInfData,

      ensure_bytes,

      ensure_str,

+     ensure_list_str,

      format_cmd_list)

  from lib389.paths import Paths

  from lib389.nss_ssl import NssSsl
@@ -3286,21 +3287,28 @@ 

                                     ldif_file, e.errno, e.strerror)

                  raise e

  

-     def getConsumerMaxCSN(self, replica_entry):

+     def getConsumerMaxCSN(self, replica_entry, binddn=None, bindpw=None):

First of all, I think we really should get away from these legacy methods/objects and use the existing structures for Replicas, RUV, and Agreements.
This method can be put into Agreement(DSLdapObject) and we can benefit from it. The object already has the binddn, bindpw, consumer instance information, etc.

In the later comments, I'll point out what can be changed with what.

Okay, but we need to provide an external bind DN and password for all of the status-related functions. We can not rely on the credentials in the instance object because, if it's LDAPI (like in the UI), we can not contact a remote consumer.

Okay, good point!

But it may not be a bind either. It could be GSSAPI, TLS, or other. We shouldn't have "helper" wrappers like this because they limit us to narrow methods of operation, and are not composable.

We should be taking a DirSrv object, and calling what is needed from that. So I think we should change this function.

          """

This is for connecting to remote replicas. We can not use what is in the existing DirSrv object; it must be provided externally. There are not a lot of good options here.

          Attempt to get the consumer's maxcsn from its database

Actually I'm not using this function anymore. I wrote a new one in the agreement class. This one is only needed for legacy replication. I'd actually like to remove it.

          """

-         host = replica_entry.getValue(AGMT_HOST)

-         port = replica_entry.getValue(AGMT_PORT)

-         suffix = replica_entry.getValue(REPL_ROOT)

+         host = replica_entry.get_attr_val_utf8(AGMT_HOST)

+         port = replica_entry.get_attr_val_utf8(AGMT_PORT)

+         suffix = replica_entry.get_attr_val_utf8(REPL_ROOT)

          error_msg = "Unavailable"

  

+         # If we are using LDAPI we need to provide the credentials, otherwise

+         # use the existing credentials

+         if binddn is None:

+             binddn = self.binddn

+         if bindpw is None:

+             bindpw = self.bindpw

+ 

          # Open a connection to the consumer

          consumer = DirSrv(verbose=self.verbose)

          args_instance[SER_HOST] = host

          args_instance[SER_PORT] = int(port)

-         args_instance[SER_ROOT_DN] = self.binddn

-         args_instance[SER_ROOT_PW] = self.bindpw

+         args_instance[SER_ROOT_DN] = binddn

+         args_instance[SER_ROOT_PW] = bindpw

          args_standalone = args_instance.copy()

          consumer.allocate(args_standalone)

          try:
@@ -3317,7 +3325,7 @@ 

                  # Error

                  consumer.close()

                  return None

-             rid = replica_entries[0].getValue(REPL_ID)

+             rid = ensure_str(replica_entries[0].getValue(REPL_ID))

          except:

              # Error

              consumer.close()
@@ -3330,8 +3338,9 @@ 

              consumer.close()

              if not entry:

                  # Error out?

+                 self.log.error("Failed to retrieve database RUV entry from consumer")

                  return error_msg

-             elements = entry[0].getValues('nsds50ruv')

+             elements = ensure_list_str(entry[0].getValues('nsds50ruv'))

              for ruv in elements:

                  if ('replica %s ' % rid) in ruv:

                      ruv_parts = ruv.split()
@@ -3345,16 +3354,17 @@ 

              consumer.close()

              return error_msg

  

-     def getReplAgmtStatus(self, agmt_entry):

+     def getReplAgmtStatus(self, agmt_entry, binddn=None, bindpw=None):

          '''

          Return the status message, if consumer is not in synch raise an

          exception

          '''

          agmt_maxcsn = None

-         suffix = agmt_entry.getValue(REPL_ROOT)

-         agmt_name = agmt_entry.getValue('cn')

+         suffix = agmt_entry.get_attr_val_utf8(REPL_ROOT)

+         agmt_name = agmt_entry.get_attr_val_utf8('cn')

          status = "Unknown"

          rc = -1

+ 

          try:

              entry = self.search_s(suffix, ldap.SCOPE_SUBTREE,

                                    REPLICA_RUV_FILTER, [AGMT_MAXCSN])
@@ -3373,7 +3383,8 @@ 

              dc=example,dc=com;test_agmt;localhost;389;unavailable

  

          '''

-         maxcsns = entry[0].getValues(AGMT_MAXCSN)

+ 

+         maxcsns = ensure_list_str(entry[0].getValues(AGMT_MAXCSN))

          for csn in maxcsns:

              comps = csn.split(';')

              if agmt_name == comps[1]:
@@ -3384,19 +3395,19 @@ 

                      agmt_maxcsn = comps[5]

  

          if agmt_maxcsn:

-             con_maxcsn = self.getConsumerMaxCSN(agmt_entry)

+             con_maxcsn = self.getConsumerMaxCSN(agmt_entry, binddn=binddn, bindpw=bindpw)

              if con_maxcsn:

                  if agmt_maxcsn == con_maxcsn:

                      status = "In Synchronization"

                      rc = 0

                  else:

-                     # Not in sync - attmpt to discover the cause

+                     # Not in sync - attempt to discover the cause

                      repl_msg = "Unknown"

-                     if agmt_entry.getValue(AGMT_UPDATE_IN_PROGRESS) == 'TRUE':

+                     if agmt_entry.get_attr_val_utf8(AGMT_UPDATE_IN_PROGRESS) == 'TRUE':

                          # Replication is on going - this is normal

                          repl_msg = "Replication still in progress"

                      elif "Can't Contact LDAP" in \

-                          agmt_entry.getValue(AGMT_UPDATE_STATUS):

+                          agmt_entry.get_attr_val_utf8(AGMT_UPDATE_STATUS):

                          # Consumer is down

                          repl_msg = "Consumer can not be contacted"

  

@@ -169,7 +169,7 @@ 

              str_attrs[ensure_str(k)] = ensure_list_str(attrs[k])

  

          # ensure all the keys are lowercase

-         str_attrs = dict((k.lower(), v) for k, v in str_attrs.items())

+         str_attrs = dict((k.lower(), v) for k, v in list(str_attrs.items()))

  

          response = json.dumps({"type": "entry", "dn": ensure_str(self._dn), "attrs": str_attrs})

  
@@ -969,7 +969,7 @@ 

          # This may not work in all cases, especially when we consider plugins.

          #

          co = self._entry_to_instance(dn=None, entry=None)

-         # Make the rdn naming attr avaliable

+         # Make the rdn naming attr available

          self._rdn_attribute = co._rdn_attribute

          (rdn, properties) = self._validate(rdn, properties)

          # Now actually commit the creation req

@@ -83,6 +83,14 @@ 

                  retstr = "equal"

          return retstr

  

+     def get_time_lag(self, oth):

+         diff = oth.ts - self.ts

+         if diff < 0:

+             lag = datetime.timedelta(seconds=-diff)

+         else:

+             lag = datetime.timedelta(seconds=diff)

+         return "{:0>8}".format(str(lag))

+ 

      def __repr__(self):

          return ("%s seq: %s rid: %s" % (time.strftime("%x %X", time.localtime(self.ts)),

                                          str(self.seq), str(self.rid)))

file modified
+327 -18
@@ -10,13 +10,13 @@ 

  import re

  import time

  import six

- 

+ import json

+ import datetime

  from lib389._constants import *

  from lib389.properties import *

  from lib389._entry import FormatDict

- from lib389.utils import normalizeDN, ensure_bytes, ensure_str, ensure_dict_str

+ from lib389.utils import normalizeDN, ensure_bytes, ensure_str, ensure_dict_str, ensure_list_str

  from lib389 import Entry, DirSrv, NoSuchEntryError, InvalidArgumentError

- 

  from lib389._mapped_object import DSLdapObject, DSLdapObjects

  

  
@@ -33,16 +33,25 @@ 

      :type dn: str

      """

  

-     def __init__(self, instance, dn=None):

+     csnpat = r'(.{8})(.{4})(.{4})(.{4})'

+     csnre = re.compile(csnpat)

+ 

+     def __init__(self, instance, dn=None, winsync=False):

          super(Agreement, self).__init__(instance, dn)

          self._rdn_attribute = 'cn'

          self._must_attributes = [

              'cn',

          ]

-         self._create_objectclasses = [

-             'top',

-             'nsds5replicationagreement',

-         ]

+         if winsync:

+             self._create_objectclasses = [

+                 'top',

+                 'nsDSWindowsReplicationAgreement',

+             ]

+         else:

+             self._create_objectclasses = [

+                 'top',

+                 'nsds5replicationagreement',

+             ]

          self._protected = False

  

      def begin_reinit(self):
@@ -59,6 +68,7 @@ 

          """

          done = False

          error = False

+         inprogress = False

          status = self.get_attr_val_utf8('nsds5ReplicaLastInitStatus')

          self._log.debug('agreement tot_init status: %s' % status)

          if not status:
@@ -67,33 +77,300 @@ 

              error = True

          elif 'Total update succeeded' in status:

              done = True

+             inprogress = False

          elif 'Replication error' in status:

              error = True

+         elif 'Total update in progress' in status:

+             inprogress = True

+         elif 'LDAP error' in status:

+             error = True

  

-         return (done, error)

+         return (done, inprogress, error)

  

      def wait_reinit(self, timeout=300):

          """Wait for a reinit to complete. Returns done and error. A correct

          reinit will return (True, False).

- 

+         :param timeout: timeout value for how long to wait for the reinit

+         :type timeout: int

          :returns: tuple(done, error), where done, error are bool.

          """

          done = False

          error = False

          count = 0

          while done is False and error is False:

-             (done, error) = self.check_reinit()

+             (done, inprogress, error) = self.check_reinit()

              if count > timeout and not done:

                  error = True

              count = count + 2

              time.sleep(2)

          return (done, error)

  

+     def get_agmt_maxcsn(self):

+         """Get the agreement maxcsn from the database RUV entry

+         :returns: CSN string if found, otherwise None is returned

+         """

+         from lib389.replica import Replicas

+         suffix = self.get_attr_val_utf8(REPL_ROOT)

+         agmt_name = self.get_attr_val_utf8('cn')

+         replicas = Replicas(self._instance)

+         replica = replicas.get(suffix)

+         maxcsns = replica.get_ruv_agmt_maxcsns()

+ 

+         if maxcsns is None or len(maxcsns) == 0:

+             self._log.debug('get_agmt_maxcsn - Failed to get agmt maxcsn from RUV')

+             return None

+ 

+         for csn in maxcsns:

+             comps = csn.split(';')

+             if agmt_name == comps[1]:

+                 # same replica, get maxcsn

+                 if len(comps) < 6:

+                     return None

+                 else:

+                     return comps[5]

+ 

+         self._log.debug('get_agmt_maxcsn - did not find matching agmt maxcsn from RUV')

+         return None

+ 

+     def get_consumer_maxcsn(self, binddn=None, bindpw=None):

+         """Attempt to get the consumer's maxcsn from its database RUV entry

+         :param binddn: Specifies a specific bind DN to use when contacting the remote consumer

+         :type binddn: str

+         :param bindpw: Password for the bind DN

+         :type bindpw: str

+         :returns: CSN string if found, otherwise "Unavailable" is returned

+         """

+         host = self.get_attr_val_utf8(AGMT_HOST)

+         port = self.get_attr_val_utf8(AGMT_PORT)

+         suffix = self.get_attr_val_utf8(REPL_ROOT)

+         protocol = self.get_attr_val_utf8('nsds5replicatransportinfo').lower()

+ 

+         result_msg = "Unavailable"

+ 

+         # If we are using LDAPI we need to provide the credentials, otherwise

+         # use the existing credentials

+         if binddn is None:

+             binddn = self._instance.binddn

+         if bindpw is None:

+             bindpw = self._instance.bindpw

+ 

+         # Get the replica id from supplier to compare to the consumer's rid

+         from lib389.replica import Replicas

+         replicas = Replicas(self._instance)

+         replica = replicas.get(suffix)

+         rid = replica.get_attr_val_utf8(REPL_ID)

+ 

+         # Open a connection to the consumer

+         consumer = DirSrv(verbose=self._instance.verbose)

+         args_instance[SER_HOST] = host

+         if protocol == "ssl" or protocol == "ldaps":

+             args_instance[SER_SECURE_PORT] = int(port)

+         else:

+             args_instance[SER_PORT] = int(port)

+         args_instance[SER_ROOT_DN] = binddn

+         args_instance[SER_ROOT_PW] = bindpw

+         args_standalone = args_instance.copy()

+         consumer.allocate(args_standalone)

+         try:

+             consumer.open()

+         except ldap.LDAPError as e:

+             self._instance.log.debug('Connection to consumer ({}:{}) failed, error: {}'.format(host, port, e))

+             return result_msg

+ 

+         # Search for the tombstone RUV entry

+         try:

+             entry = consumer.search_s(suffix, ldap.SCOPE_SUBTREE,

+                                       REPLICA_RUV_FILTER, ['nsds50ruv'])

+             if not entry:

+                 self.log.error("Failed to retrieve database RUV entry from consumer")

+             else:

+                 elements = ensure_list_str(entry[0].getValues('nsds50ruv'))

+                 for ruv in elements:

+                     if ('replica %s ' % rid) in ruv:

+                         ruv_parts = ruv.split()

+                         if len(ruv_parts) == 5:

+                             result_msg = ruv_parts[4]

+                         break

+         except ldap.LDAPError as e:

+             self._instance.log.debug('Failed to search for the suffix ' +

+                                      '({}) consumer ({}:{}) failed, error: {}'.format(

+                                          suffix, host, port, e))

+         consumer.close()

+         return result_msg

+ 

+     def get_agmt_status(self, binddn=None, bindpw=None):

+         """Return the status message

+         :param binddn: Specifies a specific bind DN to use when contacting the remote consumer

+         :type binddn: str

+         :param bindpw: Password for the bind DN

+         :type bindpw: str

+         :returns: A status message about the replication agreement

+         """

+         status = "Unknown"

+ 

+         agmt_maxcsn = self.get_agmt_maxcsn()

+         if agmt_maxcsn is not None:

+             con_maxcsn = self.get_consumer_maxcsn(binddn=binddn, bindpw=bindpw)

+             if con_maxcsn:

+                 if agmt_maxcsn == con_maxcsn:

+                     status = "In Synchronization"

+                 else:

+                     # Not in sync - attempt to discover the cause

+                     repl_msg = "Unknown"

+                     if self.get_attr_val_utf8(AGMT_UPDATE_IN_PROGRESS) == 'TRUE':

+                         # Replication is on going - this is normal

+                         repl_msg = "Replication still in progress"

+                     elif "Can't Contact LDAP" in \

+                          self.get_attr_val_utf8(AGMT_UPDATE_STATUS):

+                         # Consumer is down

+                         repl_msg = "Consumer can not be contacted"

+ 

+                     status = ("Not in Synchronization: supplier " +

+                               "(%s) consumer (%s)  Reason(%s)" %

+                               (agmt_maxcsn, con_maxcsn, repl_msg))

+         return status

+ 

+     def get_lag_time(self, suffix, agmt_name, binddn=None, bindpw=None):

+         """Get the lag time between the supplier and the consumer

+         :param suffix: The replication suffix

+         :type suffix: str

+         :param agmt_name: The name of the agreement

+         :type agmt_name: str

+         :param binddn: Specifies a specific bind DN to use when contacting the remote consumer

+         :type binddn: str

+         :param bindpw: Password for the bind DN

+         :type bindpw: str

+         :returns: A time-formatted string of the replication lag (HH:MM:SS).

+         :raises: ValueError - if unable to get consumer's maxcsn

+         """

+         agmt_maxcsn = self.get_agmt_maxcsn()

+         con_maxcsn = self.get_consumer_maxcsn(binddn=binddn, bindpw=bindpw)

+         if con_maxcsn is None:

+             raise ValueError("Unable to get consumer's max csn")

+         if con_maxcsn == "Unavailable":

+             return con_maxcsn

+ 

+         # Extract the csn timestamps and compare them

+         match = Agreement.csnre.match(agmt_maxcsn)

+         if match:

+             agmt_time = int(match.group(1), 16)

+         match = Agreement.csnre.match(con_maxcsn)

+         if match:

+             con_time = int(match.group(1), 16)

+         diff = con_time - agmt_time

+         if diff < 0:

+             lag = datetime.timedelta(seconds=-diff)

+         else:

+             lag = datetime.timedelta(seconds=diff)

+ 

+         # Return a nicely formatted timestamp

+         return "{:0>8}".format(str(lag))

+ 

+     def status(self, winsync=False, just_status=False, use_json=False, binddn=None, bindpw=None):

+         """Get the status of a replication agreement

+         :param winsync: Specifies if the agreement is a winsync replication agreement

+         :type winsync: boolean

+         :param just_status: Just return the status string and not all of the status attributes

+         :type just_status: boolean

+         :param use_json: Return the status in a JSON object

+         :type use_json: boolean

+         :param binddn: Specifies a specific bind DN to use when contacting the remote consumer

+         :type binddn: str

+         :param bindpw: Password for the bind DN

+         :type bindpw: str

+         :returns: A status message

+         :raises: ValueError - if failing to get agmt status

+         """

+         status_attrs_dict = self.get_all_attrs()

+         status_attrs_dict = dict((k.lower(), v) for k, v in list(status_attrs_dict.items()))

+ 

+         # We need a bind DN and passwd so we can query the consumer.  If this is an LDAPI

+         # connection, and the consumer does not allow anonymous access to the tombstone

+         # RUV entry under the suffix, then we can't get the status.  So in this case we

+         # need to provide a DN and password.

+         if not winsync:

+             try:

+                 status = self.get_agmt_status(binddn=binddn, bindpw=bindpw)

+             except ValueError as e:

+                 status = str(e)

+             if just_status:

+                 if use_json:

+                     return (json.dumps(status))

+                 else:

+                     return status

+ 

+             # Get the lag time

+             suffix = ensure_str(status_attrs_dict['nsds5replicaroot'][0])

+             agmt_name = ensure_str(status_attrs_dict['cn'][0])

+             lag_time = self.get_lag_time(suffix, agmt_name, binddn=binddn, bindpw=bindpw)

+         else:

+             status = "Not available for Winsync agreements"

+ 

+         # handle the attributes that are not always set in the agreement

+         if 'nsds5replicaenabled' not in status_attrs_dict:

+             status_attrs_dict['nsds5replicaenabled'] = ['on']

+         if 'nsds5agmtmaxcsn' not in status_attrs_dict:

+             status_attrs_dict['nsds5agmtmaxcsn'] = ["unavailable"]

+         if 'nsds5replicachangesskippedsince' not in status_attrs_dict:

+             status_attrs_dict['nsds5replicachangesskippedsince'] = ["unavailable"]

+         if 'nsds5beginreplicarefresh' not in status_attrs_dict:

+             status_attrs_dict['nsds5beginreplicarefresh'] = [""]

+         if 'nsds5replicalastinitstatus' not in status_attrs_dict:

+             status_attrs_dict['nsds5replicalastinitstatus'] = ["unavailable"]

+         if 'nsds5replicachangessentsincestartup' not in status_attrs_dict:

+             status_attrs_dict['nsds5replicachangessentsincestartup'] = ['0']

+         if ensure_str(status_attrs_dict['nsds5replicachangessentsincestartup'][0]) == '':

+             status_attrs_dict['nsds5replicachangessentsincestartup'] = ['0']

+ 

+         # Case sensitive?

+         if use_json:

+             result = {'replica-enabled': ensure_str(status_attrs_dict['nsds5replicaenabled'][0]),

+                       'update-in-progress': ensure_str(status_attrs_dict['nsds5replicaupdateinprogress'][0]),

+                       'last-update-start': ensure_str(status_attrs_dict['nsds5replicalastupdatestart'][0]),

+                       'last-update-end': ensure_str(status_attrs_dict['nsds5replicalastupdateend'][0]),

+                       'number-changes-sent': ensure_str(status_attrs_dict['nsds5replicachangessentsincestartup'][0]),

+                       'number-changes-skipped': ensure_str(status_attrs_dict['nsds5replicachangesskippedsince'][0]),

+                       'last-update-status': ensure_str(status_attrs_dict['nsds5replicalastupdatestatus'][0]),

+                       'init-in-progress': ensure_str(status_attrs_dict['nsds5beginreplicarefresh'][0]),

+                       'last-init-start': ensure_str(status_attrs_dict['nsds5replicalastinitstart'][0]),

+                       'last-init-end': ensure_str(status_attrs_dict['nsds5replicalastinitend'][0]),

+                       'last-init-status': ensure_str(status_attrs_dict['nsds5replicalastinitstatus'][0]),

+                       'reap-active': ensure_str(status_attrs_dict['nsds5replicareapactive'][0]),

+                       'replication-status': status,

+                       'replication-lag-time': lag_time

+                 }

+             return (json.dumps(result))

+         else:

+             retstr = (

+                 "Status for %(cn)s agmt %(nsDS5ReplicaHost)s:"

+                 "%(nsDS5ReplicaPort)s" "\n"

+                 "Replica Enabled: %(nsds5ReplicaEnabled)s" "\n"

+                 "Update In Progress: %(nsds5replicaUpdateInProgress)s" "\n"

+                 "Last Update Start: %(nsds5replicaLastUpdateStart)s" "\n"

+                 "Last Update End: %(nsds5replicaLastUpdateEnd)s" "\n"

+                 "Number Of Changes Sent: %(nsds5replicaChangesSentSinceStartup)s"

+                 "\n"

+                 "Number Of Changes Skipped: %(nsds5replicaChangesSkippedSince"

+                 "Startup)s" "\n"

+                 "Last Update Status: %(nsds5replicaLastUpdateStatus)s" "\n"

+                 "Init In Progress: %(nsds5BeginReplicaRefresh)s" "\n"

+                 "Last Init Start: %(nsds5ReplicaLastInitStart)s" "\n"

+                 "Last Init End: %(nsds5ReplicaLastInitEnd)s" "\n"

+                 "Last Init Status: %(nsds5ReplicaLastInitStatus)s" "\n"

+                 "Reap Active: %(nsds5ReplicaReapActive)s" "\n"

+             )

+             # FormatDict manages missing fields in string formatting

+             entry_data = ensure_dict_str(status_attrs_dict)

+             result = retstr % FormatDict(entry_data)

+             result += "Replication Status: {}\n".format(status)

+             result += "Replication Lag Time: {}\n".format(lag_time)

+             return result

+ 

      def pause(self):

          """Pause outgoing changes from this server to consumer. Note

          that this does not pause the consumer, only that changes will

          not be sent from this master to consumer: the consumer may still

-         recieve changes from other replication paths!

+         receive changes from other replication paths!

          """

          self.set('nsds5ReplicaEnabled', 'off')

  
@@ -122,6 +399,34 @@ 

          """

          return self.get_attr_val_utf8('nsDS5ReplicaWaitForAsyncResults')

  

+ 

+ class WinsyncAgreement(Agreement):

+     """A winsync replication agreement from this server instance to

+     a Windows Active Directory instance.

+ 

+     - must attributes: [ 'cn' ]

+     - RDN attribute: 'cn'

+ 

+     :param instance: An instance

+     :type instance: lib389.DirSrv

+     :param dn: Entry DN

+     :type dn: str

+     """

+ 

+     def __init__(self, instance, dn=None):

+         super(Agreement, self).__init__(instance, dn)

+         self._rdn_attribute = 'cn'

+         self._must_attributes = [

+             'cn',

+         ]

+         self._create_objectclasses = [

+                 'top',

+                 'nsDSWindowsReplicationAgreement',

+             ]

+ 

+         self._protected = False

+ 

+ 

  class Agreements(DSLdapObjects):

      """Represents the set of agreements configured on this instance.

      There are two possible ways to use this interface.
@@ -149,11 +454,15 @@ 

      :type rdn: str

      """

  

-     def __init__(self, instance, basedn=DN_MAPPING_TREE, rdn=None):

+     def __init__(self, instance, basedn=DN_MAPPING_TREE, rdn=None, winsync=False):

          super(Agreements, self).__init__(instance)

-         self._childobject = Agreement

-         self._objectclasses = [ 'nsds5replicationagreement' ]

-         self._filterattrs = [ 'cn', 'nsDS5ReplicaRoot' ]

+         if winsync:

+             self._childobject = WinsyncAgreement

+             self._objectclasses = ['nsDSWindowsReplicationAgreement']

+         else:

+             self._childobject = Agreement

+             self._objectclasses = ['nsds5replicationagreement']

+         self._filterattrs = ['cn', 'nsDS5ReplicaRoot']

          if rdn is None:

              self._basedn = basedn

          else:
@@ -167,6 +476,7 @@ 

              raise ldap.UNWILLING_TO_PERFORM("Refusing to create agreement in %s" % DN_MAPPING_TREE)

          return super(Agreements, self)._validate(rdn, properties)

  

+ 

  class AgreementLegacy(object):

      """An object that helps to work with agreement entry

  
@@ -194,7 +504,6 @@ 

          :type agreement_dn: str

          :param just_status: If True, returns just status

          :type just_status: bool

- 

          :returns: str -- See below

          :raises: NoSuchEntryError - if agreement_dn is an unknown entry

  
@@ -208,7 +517,7 @@ 

                  Last Update End: 0

                  Num. Changes Sent: 1:10/0

                  Num. changes Skipped: None

-                 Last update Status: 0 Replica acquired successfully:

+                 Last update Status: Error (0) Replica acquired successfully:

                      Incremental update started

                  Init in progress: None

                  Last Init Start: 0

@@ -16,6 +16,7 @@ 

  from lib389._mapped_object import DSLdapObject

  from lib389.utils import ds_is_older

  

+ 

  class Changelog5(DSLdapObject):

      """Represents the Directory Server changelog. This is used for

      replication. Only one changelog is needed for every server.
@@ -25,9 +26,9 @@ 

      """

  

      def __init__(self, instance, dn='cn=changelog5,cn=config'):

-         super(Changelog5,self).__init__(instance, dn)

+         super(Changelog5, self).__init__(instance, dn)

          self._rdn_attribute = 'cn'

-         self._must_attributes = [ 'cn', 'nsslapd-changelogdir' ]

+         self._must_attributes = ['cn', 'nsslapd-changelogdir']

          self._create_objectclasses = [

              'top',

              'nsChangelogConfig',
@@ -37,7 +38,7 @@ 

                  'top',

                  'extensibleobject',

              ]

-         self._protected = True

+         self._protected = False

  

      def set_max_entries(self, value):

          """Configure the max entries the changelog can hold.

@@ -163,12 +163,12 @@ 

                                 help="Specifies the filename of the input LDIF files."

                                      "When multiple files are imported, they are imported in the order"

                                      "they are specified on the command line.")

-     import_parser.add_argument('-c', '--chunks_size', type=int,

+     import_parser.add_argument('-c', '--chunks-size', type=int,

                                 help="The number of chunks to have during the import operation.")

      import_parser.add_argument('-E', '--encrypted', action='store_true',

                                 help="Decrypts encrypted data during export. This option is used only"

                                      "if database encryption is enabled.")

-     import_parser.add_argument('-g', '--gen_uniq_id',

+     import_parser.add_argument('-g', '--gen-uniq-id',

                                 help="Generate a unique id. Type none for no unique ID to be generated"

                                      "and deterministic for the generated unique ID to be name-based."

                                      "By default, a time-based unique ID is generated."
@@ -176,11 +176,11 @@ 

                                      "it is also possible to specify the namespace for the server to use."

                                      "namespaceId is a string of characters"

                                      "in the format 00-xxxxxxxx-xxxxxxxx-xxxxxxxx-xxxxxxxx.")

-     import_parser.add_argument('-O', '--only_core', action='store_true',

+     import_parser.add_argument('-O', '--only-core', action='store_true',

                                 help="Requests that only the core database is created without attribute indexes.")

-     import_parser.add_argument('-s', '--include_suffixes', nargs='+',

+     import_parser.add_argument('-s', '--include-suffixes', nargs='+',

                                 help="Specifies the suffixes or the subtrees to be included.")

-     import_parser.add_argument('-x', '--exclude_suffixes', nargs='+',

+     import_parser.add_argument('-x', '--exclude-suffixes', nargs='+',

                                 help="Specifies the suffixes to be excluded.")

  

      export_parser = subcommands.add_parser('export', help='do an online export of the suffix')
@@ -190,21 +190,21 @@ 

      export_parser.add_argument('-l', '--ldif',

                                 help="Gives the filename of the output LDIF file."

                                      "If more than one are specified, use a space as a separator")

-     export_parser.add_argument('-C', '--use_id2entry', action='store_true', help="Uses only the main database file.")

+     export_parser.add_argument('-C', '--use-id2entry', action='store_true', help="Uses only the main database file.")

      export_parser.add_argument('-E', '--encrypted', action='store_true',

                                 help="""Decrypts encrypted data during export. This option is used only

                                         if database encryption is enabled.""")

-     export_parser.add_argument('-m', '--min_base64', action='store_true',

+     export_parser.add_argument('-m', '--min-base64', action='store_true',

                                 help="Sets minimal base-64 encoding.")

-     export_parser.add_argument('-N', '--no_seq_num', action='store_true',

+     export_parser.add_argument('-N', '--no-seq-num', action='store_true',

                                 help="Enables you to suppress printing the sequence number.")

      export_parser.add_argument('-r', '--replication', action='store_true',

                                 help="Exports the information required to initialize a replica when the LDIF is imported")

-     export_parser.add_argument('-u', '--no_dump_uniq_id', action='store_true',

+     export_parser.add_argument('-u', '--no-dump-uniq-id', action='store_true',

                                 help="Requests that the unique ID is not exported.")

-     export_parser.add_argument('-U', '--not_folded', action='store_true',

+     export_parser.add_argument('-U', '--not-folded', action='store_true',

                                 help="Requests that the output LDIF is not folded.")

-     export_parser.add_argument('-s', '--include_suffixes', nargs='+',

+     export_parser.add_argument('-s', '--include-suffixes', nargs='+',

                                 help="Specifies the suffixes or the subtrees to be included.")

-     export_parser.add_argument('-x', '--exclude_suffixes', nargs='+',

+     export_parser.add_argument('-x', '--exclude-suffixes', nargs='+',

                                 help="Specifies the suffixes to be excluded.")

@@ -147,7 +147,7 @@ 

                  result += "%s (%s)\n" % (entrydn, policy_type.lower())

  

      if args.json:

-         return print(json.dumps(result))

+         print(json.dumps(result))

      else:

          log.info(result)

  

The added file is too large to be shown here, see it at: src/lib389/lib389/cli_conf/replication.py
@@ -7,14 +7,11 @@ 

  # --- END COPYRIGHT BLOCK ---

  

  import ldap

- import json

- from ldap import modlist

  from lib389._mapped_object import DSLdapObject, DSLdapObjects

  from lib389.config import Config

- from lib389.idm.account import Account, Accounts

+ from lib389.idm.account import Account

  from lib389.idm.nscontainer import nsContainers, nsContainer

  from lib389.cos import CosPointerDefinitions, CosPointerDefinition, CosTemplates

- from lib389.utils import ensure_str, ensure_list_str, ensure_bytes

  

  USER_POLICY = 1

  SUBTREE_POLICY = 2
@@ -146,6 +143,9 @@ 

          # Add policy to the entry

          user_entry.replace('pwdpolicysubentry', pwp_entry.dn)

  

+         # make sure that local policies are enabled

+         self.set_global_policy({'nsslapd-pwpolicy-local': 'on'})

How is this related to the ticket?

Just killing two birds with one stone...

+ 

          return pwp_entry

  

      def create_subtree_policy(self, dn, properties):
@@ -187,6 +187,9 @@ 

                                              'cosTemplateDn': cos_template.dn,

                                              'cn': 'nsPwPolicy_CoS'})

  

+         # make sure that local policies are enabled

+         self.set_global_policy({'nsslapd-pwpolicy-local': 'on'})

+ 

          return pwp_entry

  

      def get_pwpolicy_entry(self, dn):

file modified
+51 -16
@@ -7,7 +7,6 @@ 

  # --- END COPYRIGHT BLOCK ---

  

  import ldap

- import os

  import decimal

  import time

  import logging
@@ -17,8 +16,6 @@ 

  from lib389._constants import *

  from lib389.properties import *

  from lib389.utils import normalizeDN, escapeDNValue, ensure_bytes, ensure_str, ensure_list_str, ds_is_older

- from lib389._replication import RUV

- from lib389.repltools import ReplTools

  from lib389 import DirSrv, Entry, NoSuchEntryError, InvalidArgumentError

  from lib389._mapped_object import DSLdapObjects, DSLdapObject

  from lib389.passwd import password_generate
@@ -27,13 +24,10 @@ 

  from lib389.changelog import Changelog5

  

  from lib389.idm.domain import Domain

- 

  from lib389.idm.group import Groups

  from lib389.idm.services import ServiceAccounts

  from lib389.idm.organizationalunit import OrganizationalUnits

  

- from lib389.agreement import Agreements

- 

  

  class ReplicaLegacy(object):

      proxied_methods = 'search_s getEntry'.split()
@@ -883,6 +877,7 @@ 

                  return False

          return True

  

+ 

  class Replica(DSLdapObject):

      """Replica DSLdapObject with:

      - must attributes = ['cn', 'nsDS5ReplicaType', 'nsDS5ReplicaRoot',
@@ -987,29 +982,38 @@ 

      def _delete_agreements(self):

          """Delete all the agreements for the suffix

  

-         :raises: LDAPError - If failing to delete or search for agreeme        :type binddn: strnts

+         :raises: LDAPError - If failing to delete or search for agreements

          """

          # Get the suffix

          self._populate_suffix()

+ 

+         # Delete standard agmts

          agmts = self.get_agreements()

          for agmt in agmts.list():

              agmt.delete()

  

-     def promote(self, newrole, binddn=None, rid=None):

+         # Delete winsync agmts

+         agmts = self.get_agreements(winsync=True)

+         for agmt in agmts.list():

+             agmt.delete()

+ 

+     def promote(self, newrole, binddn=None, binddn_group=None, rid=None):

          """Promote the replica to hub or master

  

          :param newrole: The new replication role for the replica: MASTER and HUB

          :type newrole: ReplicaRole

          :param binddn: The replication bind dn - only applied to master

          :type binddn: str

+         :param binddn_group: The replication bind dn group - only applied to master

+         :type binddn_group: str

          :param rid: The replication ID, applies only to promotions to "master"

          :type rid: int

- 

          :returns: None

          :raises: ValueError - If replica is not promoted

          """

  

-         if not binddn:

+ 

+         if binddn is None and binddn_group is None:

              binddn = defaultProperties[REPLICATION_BIND_DN]

  

          # Check the role type
@@ -1025,8 +1029,14 @@ 

              rid = CONSUMER_REPLICAID

  

          # Create the changelog

+         cl = Changelog5(self._instance)

          try:

-             self._instance.changelog.create()

+             cl.create(properties={

+                 'cn': 'changelog5',

+                 'nsslapd-changelogdir': self._instance.get_changelog_dir()

+             })

+         except ldap.ALREADY_EXISTS:

+             pass

          except ldap.LDAPError as e:

              raise ValueError('Failed to create changelog: %s' % str(e))

  
@@ -1044,7 +1054,10 @@ 

  

          # Set bind dn

          try:

-             self.set(REPL_BINDDN, binddn)

+             if binddn:

+                 self.set(REPL_BINDDN, binddn)

+             else:

+                 self.set(REPL_BIND_GROUP, binddn_group)

          except ldap.LDAPError as e:

              raise ValueError('Failed to update replica: ' + str(e))

  
@@ -1169,12 +1182,13 @@ 

  

          return True

  

-     def get_agreements(self):

+     def get_agreements(self, winsync=False):

          """Return the set of agreements related to this suffix replica

- 

+         :param winsync: If True then return winsync replication agreements,

+                          otherwise return the standard replication agreements.

          :returns: Agreements object

          """

-         return Agreements(self._instance, self.dn)

+         return Agreements(self._instance, self.dn, winsync=winsync)

  

      def get_rid(self):

          """Return the current replicas RID for this suffix
@@ -1187,6 +1201,7 @@ 

          """Return the in memory ruv of this replica suffix.

  

          :returns: RUV object

+         :raises: LDAPError

          """

          self._populate_suffix()

  
@@ -1201,11 +1216,29 @@ 

  

          return RUV(data)

  

+     def get_ruv_agmt_maxcsns(self):

+         """Return the agreement maxcsns from the database RUV entry of this replica suffix.

+

+         :returns: list of agreement maxcsn strings

+         :raises: LDAPError

+         """

+         self._populate_suffix()

+ 

+         ent = self._instance.search_ext_s(

+             base=self._suffix,

+             scope=ldap.SCOPE_SUBTREE,

+             filterstr='(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))',

+             attrlist=['nsds5agmtmaxcsn'],

+             serverctrls=self._server_controls, clientctrls=self._client_controls)[0]

+ 

+         return ensure_list_str(ent.getValues('nsds5agmtmaxcsn'))

+ 

      def begin_task_cl2ldif(self):

          """Begin the changelog to ldif task

          """

          self.replace('nsds5task', 'cl2ldif')

  

+ 

  class Replicas(DSLdapObjects):

      """Replica DSLdapObjects for all replicas

  
@@ -1239,6 +1272,7 @@ 

              replica._populate_suffix()

          return replica

  

+ 

  class BootstrapReplicationManager(DSLdapObject):

      """A Replication Manager credential for bootstrapping the repl process.

      This is used by the replication manager object to coordinate the initial
@@ -1255,7 +1289,8 @@ 

          self._must_attributes = ['cn', 'userPassword']

          self._create_objectclasses = [

              'top',

-             'netscapeServer'

+             'netscapeServer',

+             'nsAccount'

This breaks all replication tests on 1.3.x:

[14/Sep/2018:13:47:48.679411028 -0400] - ERR - slapi_entry_schema_check_ext - Entry "cn=replication manager,cn=config" has unknown object class "nsAccount"

              ]

          self._protected = False

          self.common_name = 'replication manager'

Description:

Add replication functionality to dsconf. This includes
repl config, agmts, winsync agmts, and cleanallruv/abort cleanallruv.

Adjusted the backend options to use hyphens for consistency

https://pagure.io/389-ds-base/issue/49926

Reviewed by: ?

rebased onto 0a07d40606f5dc445c7d4460975c75b03a6ffa7c

5 years ago

rebased onto bfcc2e03dddd3e98dbaf3f29f59241d9cf3fc1bb

5 years ago

First of all, I think we really should get away from these legacy methods/objects and use the existing structures for Replicas, RUV, and Agreements.
This method can be put into Agreement(DSLdapObject) and we can benefit from it. The object already has the binddn, bindpw, consumer instance information, etc.

In the later comments, I'll point out what can be changed with what.

If you need to get the RUV and the nsds5AgmtMaxCSN, you can use the Replica(DSLdapObject) methods (which use RUV(DSLdapObject)).

replicas = Replicas(to_instance)
replica = replicas.get(suffix)
ruv = replica.get_ruv()
ruv.get_attr_val_utf8('nsds5AgmtMaxCSN')

Instead of using this old CSN object, I think we can use RUV(DSLdapObject). It already has the method 'RUV().is_synced(other_ruv)', so we can add the method 'get_time_lag' there.
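
For reference, the lag computation added in this patch boils down to comparing the timestamp fields of the two CSNs - the first 8 hex characters of a CSN are the change time in seconds since the epoch. A minimal standalone sketch (csn_lag is a hypothetical helper name, not part of the patch):

import datetime

def csn_lag(agmt_maxcsn, con_maxcsn):
    # The first 8 hex chars of a CSN encode the change timestamp (seconds
    # since the epoch); the remaining fields are seq, rid and subseq.
    agmt_time = int(agmt_maxcsn[:8], 16)
    con_time = int(con_maxcsn[:8], 16)
    lag = datetime.timedelta(seconds=abs(con_time - agmt_time))
    # Pad to HH:MM:SS, e.g. "0:00:07" -> "00:00:07"
    return "{:0>8}".format(str(lag))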

I think we should put the method here, in Agreement(DSLdapObject) and name it something like 'get_status'.

The rest looks good.

Just to sum up my idea:
Current lib389 code is spread over many different modules, and sometimes it repeats itself in different forms. That makes any change harder. I think we should put the code in DSLdapObject modules only, so it is always easy to find a tool when you need to implement something (or write a test).

For example, _replication.py looks redundant to me, and I propose we do not use it and instead put the code in the existing structures (we probably don't even need a new class for CSN; we can use RUV(DSLdapObject) from Replica and put the methods we need there).

DirSrv().getConsumerMaxCSN and DirSrv().getReplAgmtStatus are also pretty ugly (you fixed some parts, but the rest is still badly written). The functionality already fits the DSLdapObject design. I wrote above how you can get MaxCSN, and getReplAgmtStatus can be moved to Agreement(DSLdapObject), modified of course so it uses DSLdapObjects.

Also, please write docstrings for the non-private methods you add to the lib389 API. And maybe some basic CLI tests, if you have something short in mind.

Okay, but we need to provide an external bind DN and password for all of the status-related functions. We can not rely on the credentials in the instance object because, if it's LDAPI (like in the UI), we can not contact a remote consumer.
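
To illustrate with the API added in this patch (the instance variable and credentials below are placeholders): a caller bound over LDAPI would pass the consumer's credentials explicitly when asking for agreement status.

from lib389.replica import Replicas
from lib389._constants import DEFAULT_SUFFIX

# 'inst' is an already-connected DirSrv (e.g. over LDAPI); the bind DN and
# password are placeholders for credentials the remote consumer will accept
replicas = Replicas(inst)
replica = replicas.get(DEFAULT_SUFFIX)
for agmt in replica.get_agreements().list():
    print(agmt.status(binddn='cn=Directory Manager', bindpw='password'))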

rebased onto 22827a722e9dacf63aaa4da1a210ab6326a2779a

5 years ago

rebased onto fadad548ba34d0d50c127662d6cb795a0590ad3a

5 years ago

But it may not be a bind either. It could be GSSAPI, TLS, or other. We shouldn't have "helper" wrappers like this because they limit us to narrow methods of operation, and are not composable.

We should be taking a DirSrv object, and calling what is needed from that. So I think we should change this function.
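
A rough sketch of the shape being suggested here (this is not what the patch implements, and the function name is made up): the caller opens and binds a DirSrv against the consumer however it likes - simple bind, GSSAPI, TLS client auth - and the helper only consumes it.

import ldap

RUV_FILTER = '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))'

def get_consumer_maxcsn_from(consumer, suffix, rid):
    # 'consumer' is an already-bound DirSrv; how it authenticated is the
    # caller's business, which keeps this composable
    entries = consumer.search_s(suffix, ldap.SCOPE_SUBTREE, RUV_FILTER, ['nsds50ruv'])
    if not entries:
        return None
    for ruv in entries[0].getValues('nsds50ruv'):
        if isinstance(ruv, bytes):
            ruv = ruv.decode('utf-8')
        if ('replica %s ' % rid) in ruv:
            parts = ruv.split()
            if len(parts) == 5:
                return parts[4]
    return None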

As above - these should all be on a DSLdapObject, and we take a DirSrv object that is bound "however" it wants (LDAP, GSSAPI, other).

How is this related to the ticket?

Isn't there a RUV object somewhere? We should avoid raw searches if possible ....

This is for connecting to remote replicas. We can not use what is in the existing DirSrv object; it must be provided externally. There are not a lot of good options here.

I'm afraid winsync is very alive and still being used by a lot of customers. There are no plans to deprecate it. It definitely complicated the CLI :-(

Again, this is for connecting to remote replicas where we can not reuse the existing credentials - not a lot of good options here.

Just killing two birds with one stone...

The RUV class does raw searches - also, the RUV object can not be a DSLdapObject. Why? Because of some "magic" that renames the DN after you search for it. :-/ It's a corner case.

Actually I'm not using this function anymore. I wrote a new one in the agreement class. This one is only needed for legacy replication. I'd actually like to remove it.

rebased onto ace42b94f03b05e1e5439d67a2b26370f82aa3cc

5 years ago

rebased onto 06507e57eb2d60bf67c321c68088dfbf042a3550

5 years ago

Improved the consumer DirSrv object (for getting the repl agmt status) to use the secure port if the agmt is using LDAPS.

Also added options to create the replication manager entry when enabling replication for a suffix

And added the option to initialize an agreement after creating it.

rebased onto c3b0553634176f7f338149dc6cde221d89544001

5 years ago

Add docstrings to the new functions in the lib389 classes

I think this won't work correctly. It should be :param binddn:, not :param: binddn:. The same for :type binddn:. :returns: is right.

In replication.py, if we specify --bind-dn as a name like 'repl_manager', it will try to put it into the replica entry as nsDS5ReplicaBindDN: repl_manager, and it will fail.

So, I think, we should either validate the parameter or expand it to the 'cn=repl_manager,cn=config' DN.
The same goes for agreements (the commands there also have a --bind-dn parameter).

Small nitpick, but I think it's worth fixing.
When we enable replication, we set --rid. And when we set some new values, we specify --replica-id. It is a bit inconsistent.

You don't have a 'disable' command because of the typo:

# Disable
agmt_disable_parser = agmt_subcommands.add_parser('enable', help='Disable replication agreement')
agmt_disable_parser.set_defaults(func=disable_agmt)

The rest seems to work fine, thank you!
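
Presumably the intended registration is the same two lines with 'disable' as the subcommand name (identifiers taken from the snippet above):

# Disable
agmt_disable_parser = agmt_subcommands.add_parser('disable', help='Disable replication agreement')
agmt_disable_parser.set_defaults(func=disable_agmt)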

In replication.py, if we specify --bind-dn as a name like 'repl_manager', it will try to put it into the replica entry as nsDS5ReplicaBindDN: repl_manager, and it will fail.
So, I think, we should either validate the parameter or expand it to the 'cn=repl_manager,cn=config' DN.

But the parameter says "bind-dn", which means you need to use a DN, not a name. This is also clearly stated in the usage. But adding a DN validator is easy.

But the parameter says "bind-dn", which means you need to use a DN, not a name. This is also clearly stated in the usage. But adding a DN validator is easy.

Yeah, I too think that is the best way. I was just trying things because 'create-manager' accepts the name (cn=NAME,cn=config), so it looked a bit inconsistent to me as 'a user'.
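
A minimal sketch of such a validator for the argparse-based parsers in cli_conf (the function and parser names are illustrative; ldap.dn.is_dn comes from python-ldap):

import argparse
import ldap.dn

def dn_arg(value):
    # argparse 'type' callable that rejects values which do not parse as a DN
    if not ldap.dn.is_dn(value):
        raise argparse.ArgumentTypeError('"%s" is not a valid DN' % value)
    return value

# e.g.: agmt_add_parser.add_argument('--bind-dn', type=dn_arg, ...)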

rebased onto 6295c5eaa68bee734e728819791aac4be6503665

5 years ago

Changes made. So I removed the option to create the repl manager when enabling replication. It is confusing, and I think it's fine to have it as an extra step (especially since it's optional).

I think the general question is "do you want dsconf replication to be a recipe process" or "do you want it to be a nuts-and-bolts bucket of parts"; perhaps that's where my issues with this change are (ones I have had at the back of my mind).

IMO we need to do more "recipe", not more "nuts and bolts". No one likes setting up repl by hand - even I dread it, and I'm a developer of the project. How does an admin feel?

Perhaps in the future we'll add a second repl-wizard command that does things the "recipe" way?

Sure, looks good to me! You have my ack!

I think the general question is "do you want dsconf replication to be a recipe process" or "do you want it to be a nuts-and-bolts bucket of parts"; perhaps that's where my issues with this change are (ones I have had at the back of my mind).
IMO we need to do more "recipe", not more "nuts and bolts". No one likes setting up repl by hand - even I dread it, and I'm a developer of the project. How does an admin feel?
Perhaps in the future we'll add a second repl-wizard command that does things the "recipe" way?

I'm assuming by "recipe" you mean the server just uses default values for almost everything but host/port/suffix/protocol/etc. That's easy, and I think that's what you are recommending. I can just add that to this PR, but we still need the existing fine-grained control over all settings/objects. Anyway I'd rather do it all now than later :-p

No, there is actually a class in lib389 that can do all the agreements and auth config for you with per-server binds and stuff. It's here:

https://pagure.io/389-ds-base/blob/master/f/src/lib389/lib389/replica.py#_1264

Rather than you saying 'link that machine, and do this etc', you literally just go "here are two servers, make them replicate" and it does. It automates cert auth, binds, replica ID creation, makes sure changelogs exist, and it can also be used in existing replication topologies. I made the topology_replica use it by default a while back, but perhaps that was undone?

So I think we need both: one CLI UI for the "raw" replication bits for people who want it, but then this is just like a "hey, this server is an RO replica now, kgo" kind of thing.

Ohhh, one more thing: it also randomly generates the passwords, so you never need to disclose them to the other server, nor do you need to know the replication account pw. Each server has a replication bind account in the suffix being replicated, so it adds security, as you can revoke an individual host from the topology.

Think the IPA topology code, but like .. better.
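
For reference, a sketch of that "recipe" style, assuming the ReplicationManager class at the link above with the create_first_master/join_master/ensure_agreement/test_replication helpers used by the lib389 topology fixtures (supplier1 and supplier2 are assumed to be already-connected DirSrv instances):

from lib389.replica import ReplicationManager
from lib389._constants import DEFAULT_SUFFIX

repl = ReplicationManager(DEFAULT_SUFFIX)
repl.create_first_master(supplier1)          # enable replication and the changelog on the first server
repl.join_master(supplier1, supplier2)       # create credentials, an agreement, and initialize the second
repl.ensure_agreement(supplier2, supplier1)  # make the topology bidirectional
repl.test_replication(supplier1, supplier2)  # verify that changes actually flow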

No, there is actually a class in lib389 that can do all the agreements and auth config for you with per-server binds and stuff. It's here:
https://pagure.io/389-ds-base/blob/master/f/src/lib389/lib389/replica.py#_1264
Rather than you saying 'link that machine, and do this etc', you literally just go "here are two servers, make them replicate" and it does. It automates cert auth, binds, replica ID creation, makes sure changelogs exist, and it can also be used in existing replication topologies. I made the topology_replica use it by default a while back, but perhaps that was undone?
So I think we need both: one CLI UI for the "raw" replication bits for people who want it, but then this is just like a "hey, this server is an RO replica now, kgo" kind of thing.

I was hoping you weren't going to say that haha. Yeah, that only works if you have the correct credentials for each server. So LDAPI won't work in that case - so it's not a feature set the UI can use (not easily). What that really means is that it's an RFE for 1.4.1. Once the UI is wrapped up in 1.4.0, then we can definitely add this functionality!!

rebased onto b4b3128f15faf69f0bb93b2ad806f2a0f95e267b

5 years ago

rebased onto 4881826

5 years ago

Pull-Request has been merged by mreynolds

5 years ago

This breaks all replication tests on 1.3.x:

[14/Sep/2018:13:47:48.679411028 -0400] - ERR - slapi_entry_schema_check_ext - Entry "cn=replication manager,cn=config" has unknown object class "nsAccount"

@vashirov - correct, it uses the new schema that's only in 1.4.0

Ok, I opened PR#49953 to address this.

I was hoping you weren't going to say that haha. Yeah, that only works if you have the correct credentials for each server. So LDAPI won't work in that case - so it's not a feature set the UI can use (not easily). What that really means is that it's an RFE for 1.4.1. Once the UI is wrapped up in 1.4.0, then we can definitely add this functionality!!

Yes, you only need credentials for each server to create the agreements, but otherwise it "just works". We'll look at this for the CLI in the future.

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This pull request has been cloned to Github as issue and is available here:
- https://github.com/389ds/389-ds-base/issues/3004

If you want to continue to work on the PR, please navigate to the github issue,
download the patch from the attachments and file a new pull request.

Thank you for understanding. We apologize for all inconvenience.

Pull-Request has been closed by spichugi

3 years ago