NSS identifies RSA private keys by setting the PKCS #11 CKA_ID attribute to the SHA-1 digest of the public key (modulus).
JSS's private key unwrap implementation can supply this modulus value (a byte array) with a leading null byte. As a result, the digest (CKA_ID) does not match what the rest of NSS expects; for example, when the corresponding certificate is added, NSS fails to associate it with the private key.
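To illustrate the mismatch, here is a minimal sketch (the class name and the small "modulus" value are made up for illustration): digesting the modulus bytes with and without the leading sign byte yields two different CKA_IDs, so NSS cannot match the certificate to the key.

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class CkaIdMismatch {

    static byte[] sha1(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-1").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-1 is always available in the JDK
        }
    }

    public static void main(String[] args) {
        // Hypothetical small "modulus" whose top bit is set, so
        // toByteArray() prepends a 0x00 sign byte.
        BigInteger modulus = BigInteger.valueOf(0xC0FFEE);
        byte[] signed = modulus.toByteArray();                   // {0x00, 0xC0, 0xFF, 0xEE}
        byte[] magnitude = Arrays.copyOfRange(signed, 1, signed.length);

        // The two encodings digest to different CKA_ID values.
        System.out.println("digests match: "
                + Arrays.equals(sha1(signed), sha1(magnitude))); // digests match: false
    }
}
```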
This causes Dogtag lightweight CA key replication failures as of Fedora 28.
Apparently the problem did not occur with the old DB backend, only with the new SQL backend; alternatively, some other NSS change that landed in F28 may have prompted this issue.
Fraser Tweedale has proposed a fix for this issue in the following pull request:

* https://github.com/dogtagpki/jss/pull/1
Metadata Update from @mharmsen: - Custom field component adjusted to None - Custom field feature adjusted to None - Custom field origin adjusted to None - Custom field proposedmilestone adjusted to None - Custom field proposedpriority adjusted to None - Custom field reviewer adjusted to None - Custom field type adjusted to None - Custom field version adjusted to None
Metadata Update from @mharmsen: - Issue set to the milestone: 4.4.3
Metadata Update from @mharmsen: - Issue priority set to: blocker
Additional discussion from the PR
Is it always safe to remove a leading NULL byte? What happens if the CKA_ID is already correct but coincidentally starts with a NULL byte? That case can occur about one time in 256. Or does NSS always remove the first leading byte no matter what?
The `extractPublicValue()` function strips the leading NULL byte from the modulus input value (which I think is DER-encoded?), not from the resulting digest which is used as the CKA_ID. The modulus bytes come from `((RSAPublicKey)publicKey).getModulus().toByteArray()`; `getModulus` is declared on `RSAKey` and returns a `BigInteger`, whose `toByteArray()` yields its big-endian two's-complement encoding.
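A minimal sketch of the idea behind the fix (my own illustration, not the actual JSS patch): take the unsigned magnitude by stripping the single 0x00 sign byte that `toByteArray()` may prepend, and compute the digest over that.

```java
import java.math.BigInteger;
import java.util.Arrays;

public class ModulusBytes {

    // Return the unsigned big-endian magnitude of a non-negative BigInteger,
    // dropping the 0x00 sign byte that toByteArray() may prepend.
    static byte[] magnitude(BigInteger n) {
        byte[] b = n.toByteArray();
        if (b.length > 1 && b[0] == 0x00) {
            return Arrays.copyOfRange(b, 1, b.length);
        }
        return b;
    }

    public static void main(String[] args) {
        // 255 encodes as {0x00, 0xFF}; the magnitude is just {0xFF}.
        System.out.println(magnitude(BigInteger.valueOf(255)).length); // 1
        // 127 encodes as {0x7F}; nothing to strip.
        System.out.println(magnitude(BigInteger.valueOf(127)).length); // 1
    }
}
```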
The patch may chop off the least significant byte. That looks ... strange.
Ah yeah, that's right: big-endian int. I believe it is a quirk of `BigInteger.toByteArray()`. One question I still have (yet to verify): is this a change in Java `BigInteger` behaviour that occurred in F28, or (more likely) did the Java behaviour stay the same, while NSS with the old DB backend and/or pre-F28 automatically skipped/ignored a leading null byte when computing the digest of the modulus, and now (as of F28 / the SQL NSSDB) it does not?
I think I need to more thoroughly investigate this...
(Btw, it is the most significant byte that gets dropped, and only when it is zero, which seems to be all or most of the time.)
Aaah, I double-reversed the polarity of the big-endian representation. It's fine and safe to remove a leading NULL byte. Is it always just one, or could there be more NULL bytes?
This ticket might be relevant: https://pagure.io/dogtagpki/issue/2884
In that ticket, `BigInteger` was incorrectly used to encode an unsigned fixed-length byte array (i.e. a key ID) into a signed, variable-length (instead of fixed-length) hex number. If the original byte array started with 0x00 or 0xff, the resulting hex number could not be decoded back into the original array with just a simple `toByteArray()`.
The solution was to extend the most significant bit of the decoded array: https://github.com/dogtagpki/pki/blob/master/base/util/src/com/netscape/cmsutil/crypto/CryptoUtil.java#L2057-L2121
Here is the unit test: https://github.com/dogtagpki/pki/blob/master/base/util/test/com/netscape/cmsutil/crypto/KeyIDCodecTest.java
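The failure mode from that ticket can be reproduced with a small round trip (the key ID bytes here are illustrative, not from a real key): an ID beginning with 0x00 does not survive encoding through a signed `BigInteger`.

```java
import java.math.BigInteger;

public class KeyIdRoundTrip {

    // Naive (broken) encode/decode of a key ID through a signed BigInteger,
    // as described in the ticket above.
    static byte[] roundTrip(byte[] keyId) {
        String hex = new BigInteger(keyId).toString(16);
        return new BigInteger(hex, 16).toByteArray();
    }

    public static void main(String[] args) {
        byte[] keyId = {0x00, 0x2a};     // hypothetical ID with a leading zero
        byte[] back = roundTrip(keyId);  // {0x2a}: the leading zero is lost
        System.out.println("lengths: " + keyId.length + " vs " + back.length); // lengths: 2 vs 1
    }
}
```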
More info.
First, it is definitely a discrepancy between the DBM and SQL backend behaviour. In brief, the DBM backend recomputes the CKA_ID using the modulus of the key that was actually imported, and the SQL backend does not. More details in the BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1583027
Second, OpenJDK's `BigInteger.toByteArray()` behaviour: a single leading zero byte is included for non-negative integers whose most significant byte has its top bit set in the big-endian two's-complement representation. This is to distinguish e.g. `{0, 255}` = 255 from `{255}` = -1.
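This behaviour is easy to confirm (values chosen for illustration): at most one zero byte is prepended, and only when the top bit of the leading magnitude byte is set.

```java
import java.math.BigInteger;
import java.util.Arrays;

public class TwosComplementDemo {
    public static void main(String[] args) {
        // 255 needs a sign byte: without it, {0xFF} would mean -1.
        System.out.println(Arrays.toString(BigInteger.valueOf(255).toByteArray())); // [0, -1]
        System.out.println(Arrays.toString(BigInteger.valueOf(-1).toByteArray()));  // [-1]
        // 127's top bit is clear, so no sign byte is added.
        System.out.println(Arrays.toString(BigInteger.valueOf(127).toByteArray())); // [127]
    }
}
```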
Metadata Update from @ftweedal: - Issue priority set to: None (was: blocker) - Issue set to the milestone: None (was: 4.4.3)
Thanks for investigating!
Now your workaround for JSS makes sense. Are you planning to push the JSS workaround or do you want to wait for NSS to provide and deliver a fix?
@cheimes I think we do want to land this JSS change. It is up to NSS as to whether they will fix it there. I'm not sure if it will be considered a true bug, or just a "quirk" of DBM backend combined with a lack of clear documentation about how CKA_ID gets generated, and what inputs you need to generate the "correct" CKA_ID for a given key.
Fix pushed to jss master: 0a13321d8b59c8edbce98763cb335c4ccaed0603.
Metadata Update from @ftweedal: - Issue close_status updated to: fixed