Yes, people have faced this a lot. The question you need to ask isn't "how do I do this" but "which of the existing well-understood and established methods best fits my needs".
Look into the solutions used to generate keys in sharded and distributed databases.
Options include:
- Allocate non-overlapping ranges of IDs to each machine. One common approach is to set your sequences up to increment in blocks of (say) 100 and give each machine a unique offset, so machine A generates 101, 201, 301, 401, etc. and machine B generates 102, 202, 302, 402, and so on. This works great until you need to add machine #101. You can use a `bigint` key and leave space for, say, 10,000 machines; by the time you reach that limitation you'll have redesigned the whole system three times anyway.
  (Before you say "I'll run out of keys": max `bigint` is 9223372036854775807, so with a 10,000-machine range you're looking at about 10^15, or 2^50, keys per table per machine. That's about a 21 petabyte table at 25 bytes per record including overhead, i.e. really, really tiny records.)

- Use a composite primary key like `(machine_id, locally_generated_sequence_id)`. Ideal unless you're dealing with braindead ORMs that insist on single-valued primary keys.

- Use huge random primary keys (UUIDs) with the `uuid` data type, giving a 128-bit space and an absolutely tiny probability of a collision. This works OK in most cases; despite the birthday paradox, collisions are pretty unusual. The paranoid can have plans in place to deal with collisions.

- Use a common service to generate big blocks of keys, call it infrequently, and store the available keys locally. This tends to hamper concurrent write performance on each node due to lock contention over the key table, so I don't recommend it.
... probably more I haven't remembered or have never heard of.
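To make the first option concrete, here's a minimal Python sketch of the offset-sequence arithmetic (the stride of 10,000 and the `id_stream` helper are illustrative assumptions, not any real API):

```python
# Offset-sequence scheme: every machine shares the same stride but owns a
# unique offset, so the ID streams are disjoint by construction.
STRIDE = 10_000  # leaves room for 10,000 machines

def id_stream(machine_offset, count):
    """Return the first `count` IDs generated by one machine."""
    return [n * STRIDE + machine_offset for n in range(1, count + 1)]

ids_a = id_stream(1, 4)  # machine #1: [10001, 20001, 30001, 40001]
ids_b = id_stream(2, 4)  # machine #2: [10002, 20002, 30002, 40002]
assert not set(ids_a) & set(ids_b)  # streams can never collide
```

In PostgreSQL you get the same effect by creating each machine's sequence with `CREATE SEQUENCE ... INCREMENT BY 10000 START WITH <offset>`.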
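The back-of-the-envelope numbers in the parenthetical above check out; a quick sanity check (variable names are mine):

```python
MAX_BIGINT = 2**63 - 1                 # 9223372036854775807
per_machine = MAX_BIGINT // 10_000     # keys left per machine after the split
assert 10**14 < per_machine < 10**15   # roughly 10**15, i.e. about 2**50
table_bytes = per_machine * 25         # 25 bytes per record incl. overhead
assert table_bytes // 2**50 == 20      # on the order of 20 pebibytes
```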
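The composite-key option needs no coordination at all, because each machine only ever increments its own counter; a toy sketch (the `next_key` helper and machine names are hypothetical):

```python
from itertools import count

# Each machine keeps a purely local counter; the pair (machine_id, seq)
# is globally unique without any cross-machine traffic.
counters = {"machine_a": count(1), "machine_b": count(1)}

def next_key(machine_id):
    return (machine_id, next(counters[machine_id]))

assert next_key("machine_a") == ("machine_a", 1)
assert next_key("machine_b") == ("machine_b", 1)  # same seq, different machine
assert next_key("machine_a") == ("machine_a", 2)
```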
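For the UUID option, Python's standard library generates the 128-bit random keys directly; a version-4 UUID carries 122 random bits, which is where the "absolutely tiny" collision probability comes from:

```python
import uuid

# uuid4() draws 122 random bits; by the birthday bound you'd need on the
# order of 2**61 keys before a collision becomes at all likely.
key = uuid.uuid4()
assert key.version == 4

# Even a large batch is collision-free in practice:
batch = {uuid.uuid4() for _ in range(100_000)}
assert len(batch) == 100_000
```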
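Finally, a sketch of the block-of-keys service (class names are my own invention): each node fetches a big block once and only calls home when it runs out, and the central allocator is exactly where the lock contention comes from in a real system:

```python
import itertools

class KeyService:
    """Central allocator handing out non-overlapping blocks of keys.
    In a real system this is a row in a shared table, and every
    allocate() call takes a lock on it -- hence the contention."""
    def __init__(self, block_size=1_000):
        self._blocks = itertools.count(0)
        self._size = block_size

    def allocate(self):
        lo = next(self._blocks) * self._size
        return list(range(lo + 1, lo + self._size + 1))

class Node:
    """Consumes its local block; calls the service only on exhaustion."""
    def __init__(self, service):
        self._service = service
        self._local = iter(())

    def next_key(self):
        try:
            return next(self._local)
        except StopIteration:
            self._local = iter(self._service.allocate())
            return next(self._local)

service = KeyService(block_size=3)
a, b = Node(service), Node(service)
keys = [a.next_key(), a.next_key(), b.next_key(), a.next_key(), a.next_key()]
assert len(keys) == len(set(keys))  # blocks never overlap
```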
The simplest option is generally to allocate non-overlapping IDs. It's easy, it's transparent and it just works.