Good questions. I am not an expert on database or caching technologies, so take everything I say on this subject with a grain of salt.
My main goal is to make it possible to horizontally scale connector clusters by making each instance interchangeable. Every connector instance should be able to handle any packet that comes through it. Directing specific packets to specific instances can be done as a performance optimization, but the design shouldn’t depend on specific instances always getting specific packets.
I think so. I would imagine using a fast system like Redis shared among a number of different connector instances. At some point the traffic would overwhelm Redis, but I think the current throughput is pretty far from that limit.
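To make the interchangeability idea concrete, here is a minimal sketch. A thread-safe in-memory class stands in for the shared cache (in practice you would use an atomic Redis command like INCRBY); the `Connector` class and `handle_packet` method are illustrative names, not from any real codebase:

```python
import threading

class SharedBalanceStore:
    """Stand-in for a shared cache like Redis.

    incrby mimics Redis's atomic INCRBY: increment and return the new value.
    """
    def __init__(self):
        self._balances = {}
        self._lock = threading.Lock()

    def incrby(self, account, amount):
        with self._lock:
            self._balances[account] = self._balances.get(account, 0) + amount
            return self._balances[account]

class Connector:
    """Any instance can handle any packet, because balance state
    lives in the shared store rather than in the instance itself."""
    def __init__(self, store):
        self.store = store

    def handle_packet(self, account, amount):
        return self.store.incrby(account, amount)

# Two interchangeable instances sharing one store: packets for the
# same account can be routed through either one.
store = SharedBalanceStore()
a, b = Connector(store), Connector(store)
a.handle_packet("alice", 100)
b.handle_packet("alice", -30)
```

Because the state is external, adding or removing connector instances doesn't change which packets any instance can serve.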
Absolutely. The way I was imagining it, you would update the balance in the fast caching system and dump the whole packet, or the relevant details from it, into a proper persistence layer. If the cache dies, you would bring it back up by re-deriving the balances from the transaction history in the durable database.
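A hedged sketch of that write-through-plus-rebuild pattern, with a dict standing in for the cache and a plain list standing in for the durable transaction log (function names like `apply_packet` and `rebuild_balances` are made up for illustration):

```python
# In-memory stand-ins: `cache` plays the role of Redis,
# `tx_log` the role of the durable persistence layer.
cache = {}
tx_log = []

def apply_packet(account, amount):
    """Update the fast cache AND record the packet details durably."""
    cache[account] = cache.get(account, 0) + amount
    tx_log.append({"account": account, "amount": amount})

def rebuild_balances(log):
    """If the cache dies, re-derive every balance from transaction history."""
    balances = {}
    for tx in log:
        balances[tx["account"]] = balances.get(tx["account"], 0) + tx["amount"]
    return balances

apply_packet("alice", 100)
apply_packet("bob", 50)
apply_packet("alice", -30)
```

The key property is that the cache is always reconstructible: replaying the log reproduces exactly the cached balances, so losing the cache costs recovery time but no data.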
For what it’s worth, I also think TiDB sounds like an interesting technology.
Anyone with more DB/cache scaling experience want to weigh in on this?