There are quite a few ways that migration can be done, both for testing and production uses. Any scenario not specifically mentioned here should be considered unsupported.
These scenarios are not enforced by code. At best we can prompt the user
for confirmation if we believe a migration is non-conformant, but the
trade-off here favors allowing unsupported migrations over trying to
force square pegs into round holes.
There are a few points to consider.
**Replicas**
References to replicas in the existing deployment will not be migrated, and there is no mechanism to replace each server one-by-one with a new one using migration. A single server will be migrated; new replicas will then need to be added to it manually using ipa-replica-install.
**Kerberos**
The Kerberos master key will not be migrated. Kerberos principals are retained but the keys are not.
**IDs**
Ideally all uid, gid, SID, etc. will be maintained in migration so that it is seamless. No mass changing ownership and group of files should be required.
**Certificates**
The existing CA will be abandoned in favor of a CA on the new installation.
If the realm, domain, and CA subject base (default is O=REALM) are identical between the two installations, then nothing other than the CA private key distinguishes the original CA from the new one. Therefore no certificates will be maintained in the migration: all certificates other than those already present in the new IPA server as part of installation will need to be re-issued.
If one of these doesn't match, the certificates will be retained, but the backing PKI will be lost, so renewals and revocation will not be possible (no OCSP or CRL).
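The retention rule above can be sketched as a small decision function. This is purely illustrative: the field names (`realm`, `domain`, `ca_subject_base`) are assumptions for the sketch, not the actual ipa-migrate data model.

```python
# Illustrative sketch of the certificate-retention decision. Field names
# are assumptions, not the real ipa-migrate data model.

def certs_retained(old, new):
    """Return True if migrated certificates can be kept (the backing PKI
    is still lost), False if they must be dropped and re-issued."""
    # If realm, domain and CA subject base all match, the old and new CA
    # are indistinguishable, so certificates are dropped entirely.
    same_ca = (old["realm"] == new["realm"]
               and old["domain"] == new["domain"]
               and old["ca_subject_base"] == new["ca_subject_base"])
    return not same_ca

old = {"realm": "EXAMPLE.TEST", "domain": "example.test",
       "ca_subject_base": "O=EXAMPLE.TEST"}
new = dict(old, realm="STAGING.EXAMPLE.TEST", domain="staging.example.test")
print(certs_retained(old, old))  # False: identical deployments, certs dropped
print(certs_retained(old, new))  # True: realm differs, certs retained
```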
**DNS**
The DNS entries will be migrated. Whether this will be maintained is a decision for the end-user. From an IPA perspective these are just data.
If DNS is not enabled in the new installation (--setup-dns) then the service will not be configured/available but the data should still be available at least via direct LDAP calls (IPA in some places checks to see if DNS is configured and will skip lookups if it is not).
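Direct LDAP access works because the DNS records live under a fixed container in the tree. A minimal sketch of deriving that container DN from the domain, assuming the conventional IPA layout of `cn=dns` under the domain suffix:

```python
# Sketch only: assumes the conventional IPA layout where DNS data lives
# under cn=dns at the domain suffix.

def dns_container_dn(domain):
    """Build the DN under which IPA stores DNS data for a domain."""
    suffix = ",".join("dc=" + part for part in domain.split("."))
    return "cn=dns," + suffix

print(dns_container_dn("example.test"))  # cn=dns,dc=example,dc=test
```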
##### Production to new production
Preconditions:
* There is an existing production IPA server (or servers)
* There is a new IPA server installation with the same realm and domain
Optional:
* If desired the realm and domain may be changed. The migrated data will accommodate these changes but this will require reconfiguration of all clients, etc. beyond just re-enrollment.
Result:
* All valid IPA entries will be migrated
* All ids (uid, gid, SID, etc) will be maintained
* All certificates issued from the previous CA will be dropped unless the CA subject base DN, or the realm, is changed in the new deployment.
* All clients must re-enroll to the new deployment
* Users will have to migrate their passwords to generate Kerberos and other keys
##### Production to new staging
Preconditions:
* There is an existing production IPA server (or servers)
* There is a new IPA server installation with a different realm and domain (e.g. staging.example.test)
Result:
* All valid IPA entries will be migrated
* All ids (uid, gid, SID, etc) will be re-generated
* All certificates from the previous CA will be preserved
* Given this is a new staging deployment there will be no enrolled clients. The host entries from the production deployment will exist but all keys are dropped.
* Users will have to migrate their passwords to generate Kerberos and other keys
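What "re-generated" means for the IDs can be sketched as the new deployment allocating values from its own ID range rather than copying the old ones. The range start, range size, and allocator below are assumptions for illustration only:

```python
# Hedged sketch of ID re-generation in a staging migration: new values
# come from the new deployment's own ID range. Range values and the
# allocator shape are assumptions for illustration.
from itertools import count

def make_id_allocator(range_start=1896400000, range_size=200000):
    """Return a callable yielding fresh POSIX IDs from the given range."""
    counter = count(range_start)
    def next_id():
        uid = next(counter)
        if uid >= range_start + range_size:
            raise RuntimeError("ID range exhausted")
        return uid
    return next_id

alloc = make_id_allocator()
old_users = [{"uid": "alice", "uidnumber": 1001},
             {"uid": "bob", "uidnumber": 1002}]
# Each migrated user gets a fresh ID; the old uidnumber is discarded.
migrated = [dict(u, uidnumber=alloc()) for u in old_users]
print([u["uidnumber"] for u in migrated])  # [1896400000, 1896400001]
```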
##### From IPA backup
The migration tool will have the capability to do offline migration using an LDIF file. An IPA backup is a tarball that contains the IPA data in EXAMPLE-TEST-userRoot.ldif.
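Offline migration then amounts to iterating over the entries in that LDIF file. A toy sketch of the idea (real LDIF has continuation lines and base64-encoded values, which this deliberately ignores; a real implementation would use a proper LDIF parser):

```python
# Toy sketch of reading entries from a backup LDIF for offline migration.
# Real LDIF has continuation lines and base64 values; this only splits on
# blank lines, enough to count and inspect simple entries.
import io

def iter_ldif_entries(fileobj):
    """Yield each LDIF entry as a dict of attribute -> list of values."""
    entry = {}
    for line in fileobj:
        line = line.rstrip("\n")
        if not line:
            if entry:
                yield entry
                entry = {}
            continue
        if line.startswith("#"):
            continue
        attr, _, value = line.partition(": ")
        entry.setdefault(attr.lower(), []).append(value)
    if entry:
        yield entry

sample = io.StringIO(
    "dn: uid=alice,cn=users,cn=accounts,dc=example,dc=test\n"
    "objectClass: person\n"
    "uid: alice\n"
    "\n"
    "dn: cn=admins,cn=groups,cn=accounts,dc=example,dc=test\n"
    "objectClass: groupofnames\n"
)
entries = list(iter_ldif_entries(sample))
print(len(entries))  # 2
```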
Preconditions:
* A backup from an existing IPA installation exists
* There is a new IPA server installation with the same realm and domain
Optional:
* If desired the realm and domain may be changed. The migrated data will accommodate these changes but this will require reconfiguration of all clients, etc. beyond just re-enrollment.
Result:
* All valid IPA entries will be migrated
* All ids (uid, gid, SID, etc) will be maintained
* All certificates issued from the previous CA will be dropped unless the CA subject base DN, or the realm, is changed in the new deployment.
* All clients must re-enroll to the new deployment
* Users will have to migrate their passwords to generate Kerberos and other keys
The current code generally works minus a few bugs and RFEs, some of which are resolved
by doing an IPA-to-IPA migration instead. It should be maintained for now and migrated
to the standalone client in the future.
These bugs should be considered for the existing plugin.
* https://pagure.io/freeipa/issue/3096 - error when migrating unknown schema
* https://pagure.io/freeipa/issue/3100 - Check for userPassword in migration
* https://pagure.io/freeipa/issue/4738 - [RFE] ipa migrate-ds should provide option for creating UPG from posixGroup objectClass
* https://pagure.io/freeipa/issue/5020 - migrate-ds: does not show migrated users if an error happened during group migration
* https://pagure.io/freeipa/issue/5693 - Passwords become "expired" when migrating from directory server to IPA
* https://pagure.io/freeipa/issue/6105 - migrate-ds is not completely ignoring attributes.
* https://pagure.io/freeipa/issue/6360 - ipa migrate-ds does not rename uniquemember/member attributes properly
* https://pagure.io/freeipa/issue/6380 - ipa migrate-ds should print warning for referrals
* https://pagure.io/freeipa/issue/7368 - ipa migrate-ds converts groupofuniquenames objects to groupofnames, but leaves groupofuniquenames objectclass present
* https://pagure.io/freeipa/issue/7749 - `ipa migrate-ds` fails to migrate user and group data from directory server to IDM.
## Logging
By default the log file will be /var/log/ipa-migrate.log and will be appended to
and not overwritten. This is so it can reflect multiple runs if they are required.
At least the DN of all entries written should be logged (pkey may be sufficient).
Logging by object type could be handy and should come naturally since this is how the
objects will be sorted.
DEBUG logging may want to show gory details, particularly when merging entries.
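The scheme above can be sketched with the standard library: an append-mode file handler (so repeated runs accumulate) and one child logger per object type, emitting one line per DN. The log path is redirected to a temp directory here so the sketch is runnable; the tool itself would use /var/log/ipa-migrate.log.

```python
# Sketch of the logging scheme: append mode so repeated runs accumulate,
# one child logger per object type, one line per migrated DN. A temp dir
# stands in for /var/log so the sketch runs anywhere.
import logging, os, tempfile

logfile = os.path.join(tempfile.mkdtemp(), "ipa-migrate.log")
handler = logging.FileHandler(logfile, mode="a")  # append, never truncate
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
root = logging.getLogger("ipa-migrate")
root.addHandler(handler)
root.setLevel(logging.INFO)

for objtype, dn in [
    ("users", "uid=alice,cn=users,cn=accounts,dc=example,dc=test"),
    ("groups", "cn=admins,cn=groups,cn=accounts,dc=example,dc=test"),
]:
    # Child loggers propagate to "ipa-migrate", so one handler serves all.
    logging.getLogger("ipa-migrate." + objtype).info("migrated %s", dn)

handler.flush()
with open(logfile) as f:
    print(f.read())
```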
## Implementation
The command will be named ipa-migrate. It must determine whether the remote server is
actually an IPA server or not. ipaclient/discovery.py::ipacheckldap may be re-usable.
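As a stand-in for what such a check might look at, the heuristic below inspects objectClasses fetched from the remote suffix for IPA-specific values. This is purely illustrative and is not the actual ipacheckldap logic; the class names are examples of IPA-specific schema, assumed here to have been retrieved by the caller.

```python
# Purely illustrative heuristic, NOT the real ipaclient/discovery.py
# logic: guess whether a remote LDAP suffix belongs to an IPA server by
# looking for IPA-specific objectClasses in fetched entries.

def looks_like_ipa(entry_objectclasses):
    """Return True if any IPA-specific objectClass is present."""
    ipa_classes = {"ipaguiconfig", "ipaconfigobject", "krbrealmcontainer"}
    found = {oc.lower() for oc in entry_objectclasses}
    return bool(found & ipa_classes)

print(looks_like_ipa(["top", "ipaGuiConfig"]))        # True
print(looks_like_ipa(["top", "organizationalUnit"]))  # False
```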
The standalone client should use a unique context, migration. This will
allow for a separate configuration file.
## Feature Management
### UI
No UI option will be provided. This is command-line client only.
### CLI
Overview of the CLI commands.
#### IPA to IPA
Advance knowledge of the DIT substantially reduces the number of options
necessary for migration.
| Option | Description |
| --- | --- |
| --dry-run | try the migration without writing data |
| --force | ignore errors and keep going |
| --version | version of the tool |
| --quiet | output only errors |
| --log-file | log to the given file |
| --help | this message |
The DM password will be prompted for interactively.
| Argument | Description |
| --- | --- |
| url | ldap url for remote IPA server |
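The option surface in the table above can be sketched with argparse; the tool itself may well use a different framework (e.g. ipapython's CLI helpers), and the version string is a placeholder.

```python
# argparse sketch of the option table above; the real tool may use a
# different CLI framework, and the version string is a placeholder.
import argparse

def make_parser():
    p = argparse.ArgumentParser(
        prog="ipa-migrate",
        description="Migrate data from a remote IPA server")
    p.add_argument("url", help="ldap url for remote IPA server")
    p.add_argument("--dry-run", action="store_true",
                   help="try the migration without writing data")
    p.add_argument("--force", action="store_true",
                   help="ignore errors and keep going")
    p.add_argument("--version", action="version", version="ipa-migrate 0.1",
                   help="version of the tool")
    p.add_argument("--quiet", action="store_true", help="output only errors")
    p.add_argument("--log-file", help="log to the given file")
    return p

args = make_parser().parse_args(["--dry-run", "ldap://ipa.example.test"])
print(args.dry_run, args.url)  # True ldap://ipa.example.test
```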
#### pure LDAP
Will remain unchanged unless one of the bug fixes requires it (perhaps for the UPG
ticket).
### Configuration
N/A
## Upgrade
N/A
## Test plan
There are currently no tests for migration.
Some simplistic approaches to start testing might include:
* Count the number of entries that will be migrated and ensure they were migrated, by type (hosts, groups, etc).
* Verify that the services enabled on the remote side are enabled after migration (NIS, ACME, etc).
* Double-check, perhaps spot-checking, memberOf
* Migrate a password to ensure it was imported properly
We have a data generation script in freeipa-tools that may be leveraged to generate the data but it currently generates a LOT of entries which is likely too much for automation.
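The first idea above, counting entries by type, can be sketched as follows. Classifying an entry by its DN container is an assumption for illustration; the real tests would classify by object type.

```python
# Sketch of the count-by-type test idea: bucket entry DNs by their
# container (users, groups, computers, ...) and compare before/after.
# Classification by DN container is an assumption for illustration.
from collections import Counter

def count_by_type(dns):
    """Bucket entry DNs by their immediate parent container."""
    counts = Counter()
    for dn in dns:
        parts = dn.lower().split(",")
        if len(parts) > 1 and parts[1].startswith("cn="):
            container = parts[1][len("cn="):]
        else:
            container = "other"
        counts[container] += 1
    return counts

before = ["uid=alice,cn=users,cn=accounts,dc=example,dc=test",
          "cn=admins,cn=groups,cn=accounts,dc=example,dc=test",
          "fqdn=host1,cn=computers,cn=accounts,dc=example,dc=test"]
after = list(before)  # a successful migration preserves every entry
assert count_by_type(before) == count_by_type(after)
print(count_by_type(before))
```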
## Troubleshooting and debugging
Include as much information as possible that would help troubleshooting:
- Does the feature rely on existing files (keytabs, config files, ...)?
- Does the feature produce logs? In a file or in the journal?
- Does the feature create or rely on LDAP entries?
- How to enable debug logs?
- When the feature doesn't work, is it possible to diagnose which step failed? Are there intermediate steps that produce logs?