I have an implementation for an internal API, and the requirement is to implement some sort of basic authentication instead of OAuth (generating a token).
Do you think there’s any difference between using just an API key vs using a client id + secret?
From what I can see, it’d be just like saying “using a password” vs “using a username and a password”.
I’d recommend having a username/client id just because that way if/when you have multiple clients, they won’t all be sharing a common secret key. Also, you can have different sets of permissions for different clients.
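For example, a per-client credential table could look something like this (a made-up Python sketch, not a prescribed layout; the client names, keys, and permission labels are all illustrative). Each client carries its own secret and its own permission set, so one client can be revoked or restricted without touching the others:

```python
# Hypothetical per-client credential store; in practice this would live in a
# database or secrets manager, not a dict in the code.
CLIENTS = {
    "reporting-service": {"secret": "k3y-one", "permissions": {"read"}},
    "billing-service": {"secret": "k3y-two", "permissions": {"read", "write"}},
}


def is_allowed(client_id: str, action: str) -> bool:
    """Check whether an already-authenticated client may perform a given action."""
    client = CLIENTS.get(client_id)
    return client is not None and action in client["permissions"]
```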
Also, it’s important for APIs to take steps to mitigate replay attacks.
Basically, the idea is that you have a secret key that both the server and the client know. The client includes a “token” with each request: it appends the current timestamp to the secret key, hashes the result, and sends that hash along with the client id and the same timestamp (in the clear) as part of the request. The server checks that the timestamp is within a certain window of the current time and rejects the request if it isn’t. (Say, 30 seconds. This does require that the client’s and server’s clocks are synced, but that’s usually not an issue in today’s world.) The server then uses the client id to look up the secret key for that particular client, appends the provided timestamp, hashes it the same way the client did, and checks that the hashes match. If they don’t, the server rejects the request.
Example:
1. The client and the server both know the client’s secret key.
2. The client takes the current timestamp.
3. The client appends the timestamp to the secret key.
4. The client hashes the result to produce the token.
5. The client sends the request with its client id, the timestamp, and the token (the secret key itself is never sent).
6. The server reads the client id, timestamp, and token from the request.
7. The server checks that the timestamp is within the allowed window (say, 30 seconds) and rejects the request if it isn’t.
8. The server uses the client id to look up that client’s secret key.
9. The server appends the provided timestamp to the secret key and hashes it the same way the client did.
10. The server compares its hash with the token from the request and rejects the request if they don’t match.
The reason this mitigates replay attacks is the 7th step there. If one hash gets intercepted, a bad actor can wreak havoc for 30 seconds (or however long you tune the window to, balancing security against the risk of legitimate requests breaking because of clock drift or latency), but no more. With just a client id and secret key, the bad actor could wreak havoc until the dev team noticed something was up and changed the secret key.
The reason the timestamp has to be included in the hashed value is that if only the secret key were hashed, the hash would never change. Then if the hash were intercepted once, you’d be back to the situation of a bad actor wreaking havoc until the dev team manually changed the secret key.
Also, since only a hash derived from the secret key is sent, the secret key itself never appears in the request and can’t be recovered from intercepted requests.
(Of course, if a bad actor roots a box and is able to get the secret key value itself, they can then generate valid hashes/tokens to their heart’s content until the dev team changes the secret key, but that’s not the sort of thing this authentication scheme is meant to protect against.)
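Here’s a rough Python sketch of the whole flow, just to make it concrete. Assumptions on my part, not anything prescribed above: SHA-256 as the hash, Unix-timestamp seconds, an in-memory CLIENT_SECRETS dict standing in for wherever you’d actually keep per-client keys, and made-up function names. hmac.compare_digest is only there for a constant-time string comparison; it doesn’t change the scheme.

```python
import hashlib
import hmac  # used only for a constant-time string comparison
import time

MAX_AGE_SECONDS = 30  # the replay window; tune against clock drift/latency

# Stand-in for wherever per-client secrets would really live (DB, vault, etc.).
CLIENT_SECRETS = {"client-123": "s3cr3t-key"}


def make_token(secret: str, timestamp: int) -> str:
    """Client side: hash of the secret key with the timestamp appended."""
    return hashlib.sha256(f"{secret}{timestamp}".encode()).hexdigest()


def build_request(client_id: str, secret: str) -> dict:
    """Client side: send the client id, the timestamp, and the token -- never the secret."""
    ts = int(time.time())
    return {"client_id": client_id, "timestamp": ts, "token": make_token(secret, ts)}


def verify_request(request: dict) -> bool:
    """Server side: reject stale timestamps, then recompute the hash and compare."""
    ts = request["timestamp"]
    if abs(time.time() - ts) > MAX_AGE_SECONDS:
        return False  # outside the window -> treat as a possible replay
    secret = CLIENT_SECRETS.get(request["client_id"])
    if secret is None:
        return False  # unknown client id
    expected = make_token(secret, ts)
    return hmac.compare_digest(expected, request["token"])


if __name__ == "__main__":
    req = build_request("client-123", "s3cr3t-key")
    print(verify_request(req))  # True while the timestamp is still fresh
```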
Edit: One more thing to mention here. Keep in mind that it’s not going to be terribly easy to go changing your authentication method later. If you want to change how authentication works, you’ll have to go to all clients and get them to change how they make requests. This is one of those cases where futureproofing is warranted. Better to do it “right” and with features that should mostly work for your purposes for the foreseeable future. Even if you don’t know that you’ll have multiple clients right now, it’s good to plan as if you might some day.
Also, aside from the security implications of having client ids in the requests, the client id can be used to track things like resource usage or TPS on a per-client basis, which is really handy when your app is overwhelmed at 3:00am and you need to tell a client to ease off. You’ll only know which client to talk to if you’re tracking that kind of information per client.
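To give an idea of what that tracking could look like, here’s a small Python sketch of a sliding-window counter keyed by client id (the class name and the 60-second window are made up, not part of the auth scheme itself):

```python
import time
from collections import defaultdict, deque


class PerClientStats:
    """Tracks recent request timestamps per client id so you can report per-client TPS."""

    def __init__(self, window_seconds: int = 60):
        self.window = window_seconds
        self.requests = defaultdict(deque)  # client_id -> deque of timestamps

    def record(self, client_id: str) -> None:
        now = time.time()
        q = self.requests[client_id]
        q.append(now)
        # Drop entries older than the window so counts stay current.
        while q and now - q[0] > self.window:
            q.popleft()

    def tps(self, client_id: str) -> float:
        """Average transactions per second over the window for one client."""
        return len(self.requests[client_id]) / self.window


# Usage: call stats.record(client_id) for each authenticated request, then check
# stats.tps("client-123") when the service is under load to see who to call.
```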
Oh, yeah, I had forgotten about this kind of attack.
Do you think this is a concern if the calls are only meant to be done in an internal network?
The modern way of thinking about the security of internal networks is to assume there’s no such thing as a trusted internal network (the marketing term is Zero Trust).
I’d consider it a concern if I was in your shoes, yes. Mostly for the reasons jflorez mentioned.
Major security breaches in which an attacker gained access to an internal, private network happen not infrequently. Target (the retail chain) leaked a ton of their customers’ credit card data to attackers over the course of (IIRC) months. The attackers couldn’t have done it (at least not the way they did) without first breaching Target’s private corporate network.
Never underestimate the risk of an attack coming from the inside.
Also, once you have an implementation with a certain kind of authentication, other devs are likely to copy what you’ve successfully deployed, and then your security assumptions will make it into public-facing code without much consideration.