#885 GPG verification
Opened 8 years ago by ankursinha. Modified a year ago

Quite a few of us GPG sign our commits - I just saw GitHub implement "GPG verification": they let the user upload their public key and mark each commit as verified when the signing key matches. I think this would be a really cool feature for pagure too!

Their blog post on this is here: https://github.com/blog/2144-gpg-signature-verification


I agree that it would be cool but I fear doing it might be tricky, something to investigate

+1

And there's also a glimpse of indicating trustworthiness based on the Web of Trust: if I am logged in, I am able to see that I have also verified the other committer's identity, based on my signature being present in the public key she uploaded to her account; depending on the complexity and the approach, this might also work transitively, as it should.

The info about a user's GPG key could be retrieved from FAS, no need to upload it here. Also the information about which keys are signed by a user can be retrieved from GPG keyservers like https://keys.fedoraproject.org

The info about a user's GPG key could be retrieved from FAS, no need to upload it here.

We should behave here as we do for ssh keys: if it's empty, retrieve from FAS, else ignore the info from FAS. Let's not forget that pagure is designed to be self-hostable, without FAS.

Note that if we only use keyservers, we require everyone to have their key in the public keyservers.
Not everyone might like that.
Also, having a single key doesn't work: what happens if I cycle my key? Would Pagure suddenly mark all my previous signatures invalid?

For the people wondering: my thought is that someone could upload their (projects') public key to their website and Pagure.
That would allow people to manually download the key from their website to verify it locally, and also Pagure to verify the signatures, without having the keys on the public keyservers for easy indexing and what not (privacy anyone?).

Also, having a single key doesn't work: what happens if I cycle my key? Would Pagure suddenly mark all my previous signatures invalid?

I don't think so - if you committed something using the right key, it was verified then - in the future this commit remains verified - so it isn't verified in the now, it's verified at the time of the commit. (or so I think)

I don't think so - if you committed something using the right key, it was verified then - in the future this commit remains verified - so it isn't verified in the now, it's verified at the time of the commit. (or so I think)

That would mean you verify at the time of push.
What would happen with all of the commits I had already pushed?

I think verify at time of display is likely better.

For the people wondering: my thought is that someone could upload their (projects') public key to their website and Pagure.
That would allow people to manually download the key from their website to verify it locally, and also Pagure to verify the signatures, without having the keys on the public keyservers for easy indexing and what not (privacy anyone?).

+1 This seems like a good model, and what GitHub seems to have settled on too. They do one key per user, but having a key per project would maybe be even better?

+1 This seems like a good model, and what GitHub seems to have settled on too. They do one key per user, but having a key per project would maybe be even better?

I think we might want to support both, but at least keys per user.

Also, GitHub allows any number of keys in your account, which is a good thing for the case I mentioned: "Also, having a single key doesn't work: what happens if I cycle my key? Would Pagure suddenly mark all my previous signatures invalid?"

ah, I hadn't realised they did multiple keys - that makes things a lot simpler then.

So, the backend part of the GPG verification is now merged into its own branch.
Let me write down entirely what process I had in my mind to finish this all, so that someone can work on that when they want (I might do some more parts myself at some point).

Adding a key:
1. The user would go to their profile, and be able to upload their key, and/or specify a keyid to pull down from the keyserver.
2. The key gets added to the database as "unverified", and will not yet be used.
3. The user now gets a challenge, which could be a json string with something like: {"type": "pagureverification", "host": "$website", "user": "$username"}. Concrete example: {"type": "pagureverification", "host": "pagure.io", "user": "puiterwijk"}.
4. We ask the user to run that statement through gpg --clearsign.
5. We tell people to copy-paste the signed statement into Pagure.

We can now be sure that the owner of the key is the owner of account $username on $website at the moment of the GPG signature timestamp. This would allow us to offer this signature for other people to verify that we have verified the ownership of the key, without allowing the signature to be reused on other websites. This would also mean that we can always verify whether we actually verified the key in the future, so that we can be sure that an attacker (or admin) didn't just inject their own key into the database.
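
For illustration, a minimal sketch of that flow (the function names are made up, not Pagure code; it assumes the gpg binary is available and verifies the pasted statement against a throwaway keyring containing only the key being added):

```python
import json
import os
import subprocess
import tempfile


def make_challenge(host, username):
    # The exact JSON layout proposed above.
    return json.dumps(
        {"type": "pagureverification", "host": host, "user": username}
    )


def verify_clearsigned(statement, armored_pubkey):
    """Check the pasted clearsigned challenge against the key being added."""
    with tempfile.TemporaryDirectory() as gnupghome:
        env = dict(os.environ, GNUPGHOME=gnupghome)
        # Import only the key the user wants to add into a throwaway keyring.
        subprocess.run(
            ["gpg", "--batch", "--import"],
            input=armored_pubkey, env=env, text=True, check=True,
        )
        # gpg reads the clearsigned message from stdin when no file is given.
        # A real implementation would also compare the signed text with the
        # challenge it issued, not just check that the signature is valid.
        result = subprocess.run(
            ["gpg", "--batch", "--verify"],
            input=statement, env=env, text=True,
        )
        return result.returncode == 0
```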

Keyring:
Next up would be to generate a keyring used to actually verify signatures on commits and/or tags.

One idea would be to try the verification with a new, temporary keyring. Verification will fail, but should give us the keyid of the key that was used to sign. We would then look up that key in the database, verify it, insert it into the temporary keyring, and redo the verification. This is likely the more secure way, but will be heavy on performance.

The other idea would be to do something like SSH keys: we generate a keyring where we add all keys as soon as they are verified, with a special button in the admin panel to regenerate the entire keyring, and just use this one keyring on every verification attempt. This would save us some verification attempts (no more double verifications etc), but would mean the effort to keep the keyring updated is bigger.
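
To make the first idea concrete, a rough sketch (lookup_key is a hypothetical callable returning the armored key for the key ID gpg reports, or None if unknown); git verify-commit --raw prints the raw gpg status output, which contains an ERRSIG line with the key ID when the key is not in the keyring:

```python
import os
import re
import subprocess
import tempfile


def verify_commit(repo_path, commit_sha, lookup_key):
    with tempfile.TemporaryDirectory() as gnupghome:
        env = dict(os.environ, GNUPGHOME=gnupghome)

        def run_verify():
            return subprocess.run(
                ["git", "-C", repo_path, "verify-commit", "--raw", commit_sha],
                env=env, capture_output=True, text=True,
            )

        # First pass against an empty keyring: it fails, but the ERRSIG
        # status line tells us which key made the signature.
        first = run_verify()
        match = re.search(r"ERRSIG (\S+)", first.stderr)
        if not match:
            return False  # unsigned commit
        armored = lookup_key(match.group(1))
        if armored is None:
            return False  # the signing key is not in our database
        subprocess.run(
            ["gpg", "--batch", "--import"],
            input=armored, env=env, text=True, check=True,
        )
        # Second pass with the key imported.
        return run_verify().returncode == 0
```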

Updating:
People should be able to update their keys (add more signing subkeys etc or add revocation certificates) at any moment in time.
We should also look at running a crontab which regularly checks the public keyservers to see whether the keys are on there (we would NOT add them because that might impact users' privacy), and if they are, update the key from there to get new subkeys and/or revocations.
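
Something like the following cron job could do that (the keyserver URL and the source of the fingerprints are placeholders); --recv-keys only downloads updates for keys we already track, it never uploads anything:

```python
import os
import subprocess

KEYSERVER = "hkps://keys.fedoraproject.org"  # example, would be configurable


def refresh_keys(fingerprints, gnupghome):
    """Pull new subkeys/revocations for every fingerprint we already store."""
    env = dict(os.environ, GNUPGHOME=gnupghome)
    for fpr in fingerprints:
        subprocess.run(
            ["gpg", "--batch", "--keyserver", KEYSERVER, "--recv-keys", fpr],
            env=env, check=False,
        )
```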

Displaying:
The code in the gpg_check branch will verify on the releases page and on the commit view page whether the object is signed, and pass that to the templates.
The templates should then display the current signature state to the user.
Note that for commits, we must make very sure to indicate that this signature is ONLY for this specific commit, meaning that this commit is the ONLY thing that can be trusted, not all of its parents or children: only tags sign the entire history into the signature.
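
As a sketch of what the display-time check could boil down to with a single pre-built keyring (the second idea above; the state strings are only illustrative), git's --raw output carries GnuPG status lines such as GOODSIG, BADSIG and ERRSIG that map onto what the template needs to show:

```python
import os
import subprocess


def signature_state(repo_path, sha, keyring_dir):
    """Map a commit's signature status to a string for the template."""
    result = subprocess.run(
        ["git", "-C", repo_path, "verify-commit", "--raw", sha],
        env=dict(os.environ, GNUPGHOME=keyring_dir),
        capture_output=True, text=True,
    )
    if "GOODSIG" in result.stderr:
        return "valid"
    if "BADSIG" in result.stderr or "ERRSIG" in result.stderr:
        return "invalid or unknown key"
    return "unsigned"  # real code would also handle expired/revoked keys
```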

@puiterwijk Just curious, will the verification be performed on-the-fly when needed, upon explicit user request, or will there be a caching layer (supposing that past commits/tags, once verified, cannot change status)?

Also integration with FAS as suggested by @till should be considered, at least in the form of a hook to be used later on, I think.

@jpokorny So, I would say that it's verified on-the-fly, when the page gets requested (that's what my code currently does). The reason for that is to immediately reflect the current state if a user removes their key without having to go through all cached entries upon key removal to see which we need to revoke.

For integration with FAS, we could say that if the user has no key yet in Pagure, but has a key ID in FAS, we:
1. Create an empty record in the database containing just the key ID
2. We try to request the key from the keyservers once
3. If it's on the keyservers, we add it to the database and then mark it as non-verified.
4. If it's not on the keyservers, we mark it as "not really there", so that we don't reattempt to download the key on every login attempt.
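
A hedged sketch of that one-time keyserver lookup (the keyserver URL, function name and returned state strings are placeholders, not Pagure code):

```python
import os
import subprocess

KEYSERVER = "hkps://keys.fedoraproject.org"  # same placeholder as above


def fetch_fas_key(fingerprint, gnupghome):
    """Try the keyservers once for a fingerprint taken from FAS.

    Returns (state, armored_key): ("unverified", key) if the key was found,
    ("missing", None) otherwise, so we don't retry on every login.
    """
    env = dict(os.environ, GNUPGHOME=gnupghome)
    recv = subprocess.run(
        ["gpg", "--batch", "--keyserver", KEYSERVER, "--recv-keys", fingerprint],
        env=env,
    )
    if recv.returncode != 0:
        return "missing", None
    export = subprocess.run(
        ["gpg", "--batch", "--armor", "--export", fingerprint],
        env=env, capture_output=True, text=True, check=True,
    )
    return "unverified", export.stdout
```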

Note that I'm not opposed to a general cache for performance reasons, but I'd suggest to make it just delete the entire cache as soon as any of the keys in the entire system change (aka, added, updated or removed), regardless of whether or not the key was used to sign any of the cached entries.
This would prevent a complicated cache purging algorithm.

  1. The user now gets a challenge, which could be a json string with something like: {"type": "pagureverification", "host": "$website", "user": "$username"}. Concrete example: {"type": "pagureverification", "host": "pagure.io", "user": "puiterwijk"}.
  2. We ask the user to run that statement through gpg --clearsign.
  3. We tell people to copy-paste the signed statement into Pagure.
    We can now be sure that the owner of the key is the owner of account $username on $website at the moment of the GPG signature timestamp. This would allow us to offer this signature for other people to verify that we have verified the ownership of the key, without allowing the signature to be reused on other websites. This would also mean that we can always verify whether we actually verified the key in the future, so that we can be sure that an attacker (or admin) didn't just inject their own key into the database.

AFAICS it does not prevent attackers or admins from injecting keys, since they can also inject the respective signature. Also I am not convinced why pagure would need to verify that a user owns a key they want to add. Btw, an easier method to verify ownership of keys (that contain encryption subkeys) would be to encrypt a secret to the key and let the user decrypt it. If the secret is sent via encrypted e-mail, it should be a lot more convenient than creating a clear signature of a JSON string. Also it is not obvious to users why they should sign arbitrary strings.

Keyring:
Next up would be to generate a keyring used to actually verify signatures on commits and/or tags.
One idea would be to try the verification with a new, temporary keyring. Verification will fail, but should give us the keyid of the key that was used to sign. We would then look up that key in the database, verify it, insert it into the temporary keyring, and redo the verification. This is likely the more secure way, but will be heavy on performance.

I do not understand why this is the most secure way. Since there is already a database with all the keys that are allowed to sign a commit, just put them into a temporary keyring and use it to verify the signature. If it verifies, everything is good, otherwise not. This also avoids using the key ID for anything; IMHO key IDs should not be used at all, only fingerprints.

For integration with FAS, we could say that if the user has no key yet in Pagure, but has a key ID in FAS, we:
1. Create an empty record in the database containing just the key ID
2. We try to request the key from the keyservers once
3. If it's on the keyservers, we add it to the database and then mark it as non-verified.
4. If it's not on the keyservers, we mark it as "not really there", so that we don't reattempt to download the key on every login attempt.

Please make it not store/require a key ID but a full fingerprint, to make sure it is exactly the key that is meant. The new FAS version will only store fingerprints IIRC.

Please also see https://evil32.com/ about GPG key IDs.

The user now gets a challenge, which could be a json string with something like: {"type": "pagureverification", "host": "$website", "user": "$username"}. Concrete example: {"type": "pagureverification", "host": "pagure.io", "user": "puiterwijk"}.
We ask the user to run that statement through gpg --clearsign.
We tell people to copy-paste the signed statement into Pagure.
We can now be sure that the owner of the key is the owner of account $username on $website at the moment of the GPG signature timestamp. This would allow us to offer this signature for other people to verify that we have verified the ownership of the key, without allowing the signature to be reused on other websites. This would also mean that we can always verify whether we actually verified the key in the future, so that we can be sure that an attacker (or admin) didn't just inject their own key into the database.

AFAICS it does not prevent attackers or admins from injecting keys, since they can also inject the respective signature.

How so? They would need the private key to do so. If you leak your private key, then sure, admins could add it, but then so could anyone, anywhere...

Also I am not convinced why pagure would need to verify that a user owns a key they want to add. Btw, an easier method to verify ownership of keys (that contain encryption subkeys) would be to encrypt a secret to the key and let the user decrypt it. If the secret is sent via encrypted e-mail, it should be a lot more convenient than creating a clear signature of a JSON string. Also it is not obvious to users why they should sign arbitrary strings.

The reason for the signing vs encrypting is mostly so that we can show the proof to others.
If we just encrypted a string for them to decrypt, WE could be sure that we verified the keys, but we couldn't prove that to others, meaning that others would just have to trust Pagure on that.

Keyring:
Next up would be to generate a keyring used to actually verify signatures on commits and/or tags.
One idea would be to try the verification with a new, temporary keyring. Verification will fail, but should give us the keyid of the key that was used to sign. We would then look up that key in the database, verify it, insert it into the temporary keyring, and redo the verification. This is likely the more secure way, but will be heavy on performance.

I do not understand why this is the most secure way.

Because this would prevent any cache invalidation errors.
As I said in my next message: "Note that I'm not opposed to a general cache for performance reasons, but I'd suggest to make it just delete the entire cache as soon as any of the keys in the entire system change (aka, added, updated or removed), regardless of whether or not the key was used to sign any of the cached entries. This would prevent a complicated cache purging algorithm."
My main concern here is that cache invalidation is a hard problem, and I don't want to display incorrect information because we happened to have an error in it somewhere.

Since there is already a database with all the keys that are allowed to sign a commit, just put them into a temporary keyring and use it to verify the signature. If it verifies, everything is good, otherwise not. This also avoids using the key ID for anything; IMHO key IDs should not be used at all, only fingerprints.

Right, in my messages I keep mentioning the key id, but in my mind fingerprint == key id, since the key ID is just the last couple of bytes of the fingerprint and everyone should have abandoned the 32bit key ids by now.
In all my messages here, please read s/key id/fingerprint/. I realize it's the wrong terminology, but that's what I mean if I say key ID.

The user now gets a challenge, which could be a json string with something like: {"type": "pagureverification", "host": "$website", "user": "$username"}. Concrete example: {"type": "pagureverification", "host": "pagure.io", "user": "puiterwijk"}.
We ask the user to run that statement through gpg --clearsign.
We tell people to copy-paste the signed statement into Pagure.
We can now be sure that the owner of the key is the owner of account $username on $website at the moment of the GPG signature timestamp. This would allow us to offer this signature for other people to verify that we have verified the ownership of the key, without allowing the signature to be reused on other websites. This would also mean that we can always verify whether we actually verified the key in the future, so that we can be sure that an attacker (or admin) didn't just inject their own key into the database.

AFAICS it does not prevent attackers or admins from injecting keys, since they can also inject the respective signature.

How so? They would need the private key to do so. If you leak your private key, then sure, admins could add it, but then so could anyone, anywhere...

In my understanding, an attacker would inject keys that they created, so they have the private key. This then allows them to sign the wrong git commits. I do not see why an attacker/admin would inject public keys for which they do not own the private key. They do not really gain anything from this.

Also I am not convinced why pagure would need to verify that a user owns a key they want to add. Btw, an easier method to verify ownership of keys (that contain encryption subkeys) would be to encrypt a secret to the key and let the user decrypt it. If the secret is sent via encrypted e-mail, it should be a lot more convenient than creating a clear signature of a JSON string. Also it is not obvious to users why they should sign arbitrary strings.

The reason for the signing vs encrypting is mostly so that we can show the proof to others.
If we just encrypted a string for them to decrypt, WE could be sure that we verified the keys, but we couldn't prove that to others, meaning that others would just have to trust Pagure on that.

I can also sign the above challenge that contains your username with my GPG key and then, as an attacker, put that key ID on pagure (given I have an attack that allows me to compromise pagure). Others do not really benefit from this AFAICS. Therefore others have to trust pagure anyhow.

Keyring:
Next up would be to generate a keyring used to actually verify signatures on commits and/or tags.
One idea would be to try the verification with a new, temporary keyring. Verification will fail, but should give us the keyid of the key that was used to sign. We would then look up that key in the database, verify it, insert it into the temporary keyring, and redo the verification. This is likely the more secure way, but will be heavy on performance.

I do not understand why this is the most secure way.

Because this would prevent any cache invalidation errors.

There is no cache involved in my description either:

Since there is already a database with all the keys that are allowed to sign a commit, just put them into a temporary keyring and use it to verify the signature. If it verifies, everything is good, otherwise not. This also avoids using the key ID for anything; IMHO key IDs should not be used at all, only fingerprints.

How so? They would need the private key to do so. If you leak your private key, then sure, admins could add it, but then so could anyone, anywhere...

Right, fair enough, I didn't realize that that was the aim you were going for, sorry.

Also I am not convinced why pagure would need to verify that a user owns a key they want to add. Btw, an easier method to verify ownership of keys (that contain encryption subkeys) would be to encrypt a secret to the key and let the user decrypt it. If the secret is sent via encrypted e-mail, it should be a lot more convenient than creating a clear signature of a JSON string. Also it is not obvious to users why they should sign arbitrary strings.

Fair enough. So yeah, let's go with the encrypt version then.
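
For illustration, a loose sketch of that encrypt-a-secret variant (the function name and the stored-token comparison are just how I'd picture it, not actual Pagure code): encrypt a random token to the uploaded key and only mark the key verified once the user sends the decrypted token back.

```python
import os
import secrets
import subprocess
import tempfile


def make_encryption_challenge(armored_pubkey, fingerprint):
    """Return (token, armored ciphertext); mark the key verified only when
    the user returns the exact token."""
    token = secrets.token_urlsafe(32)
    with tempfile.TemporaryDirectory() as gnupghome:
        env = dict(os.environ, GNUPGHOME=gnupghome)
        subprocess.run(
            ["gpg", "--batch", "--import"],
            input=armored_pubkey, env=env, text=True, check=True,
        )
        # --trust-model always avoids the "unknown trust" failure in batch mode.
        encrypted = subprocess.run(
            ["gpg", "--batch", "--trust-model", "always", "--armor",
             "--encrypt", "--recipient", fingerprint],
            input=token, env=env, text=True, capture_output=True, check=True,
        ).stdout
    return token, encrypted
```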

Because this would prevent any cache invalidation errors.

There is no cache involved in my description either:

The keyring is a sort of cache, as it is not the original, authoritative source of the data (that would be the database).
By cache invalidation, I was mostly referring to maintaining the keys in the keyring.
So as said, if we go with a keyring that we store, we should just blow away the entire keyring any time any of the keys change, and regenerate it entirely.
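
A sketch of that rebuild step (all_armored_keys stands in for whatever query returns the stored keys; not actual Pagure code):

```python
import os
import shutil
import subprocess


def regenerate_keyring(keyring_dir, all_armored_keys):
    """Throw the old keyring away and rebuild it from the database."""
    shutil.rmtree(keyring_dir, ignore_errors=True)
    os.makedirs(keyring_dir, mode=0o700)  # gpg wants a private homedir
    env = dict(os.environ, GNUPGHOME=keyring_dir)
    for armored in all_armored_keys:
        subprocess.run(
            ["gpg", "--batch", "--import"],
            input=armored, env=env, text=True, check=True,
        )
```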

Since there is already a database with all the keys that are allowed to sign a commit, just put them into a temporary keyring and use it to verify the signature. If it verifies, everything is good, otherwise not. This also avoids using the key ID for anything; IMHO key IDs should not be used at all, only fingerprints.

The last update was 6 years ago; there have been no further requests, updates or actionable tasks since then, but it's an interesting feature request / discussion. Let's move it to milestone 6.0 for now and decide later.

Metadata Update from @wombelix:
- Issue set to the milestone: 6.0

a year ago
