How many payments can a single Interledger connector handle at a time?

#1

Has anyone done load testing to find out how many payments a single connector can handle at once?
What are the bottlenecks performance-wise on Interledger, if any?

Would be open to gathering such data if anyone had suggestions for how to do it.

#2

Some community members made a JMeter add-on for load testing, but I cannot remember the results that were reported on the community call. I am sure it has come a long way since its initial testing.

Perhaps this is a question better answered by @d1no007 and @austin_king, but I think your results will vary greatly in the wild. Something as simple as encoding and decoding times can vary between implementations that use different threading models, but that’s kind of boring, so let’s look at other potential factors.
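(If you do want to measure the boring part, a quick single-threaded micro-benchmark of Prepare encode/decode is easy to throw together. This is only a sketch that assumes the JS ilp-packet module’s serialize/deserialize interface; the amount, destination, and condition below are made up.)

```ts
// Rough single-threaded micro-benchmark of ILP Prepare encode/decode using the
// JS `ilp-packet` module. All values are arbitrary placeholders.
import { serializeIlpPrepare, deserializeIlpPrepare } from 'ilp-packet'
import { createHash, randomBytes } from 'crypto'

const fulfillment = randomBytes(32)
const prepare = {
  amount: '1000',                                // base units, as a string
  executionCondition: createHash('sha256').update(fulfillment).digest(),
  expiresAt: new Date(Date.now() + 30_000),
  destination: 'g.example.receiver',             // made-up ILP address
  data: Buffer.alloc(0)
}

const N = 100_000
const start = process.hrtime.bigint()
for (let i = 0; i < N; i++) {
  deserializeIlpPrepare(serializeIlpPrepare(prepare))
}
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6
console.log(`${N} encode/decode round trips in ${elapsedMs.toFixed(1)} ms`)
console.log(`~${Math.round(N / (elapsedMs / 1000))} round trips/sec on one thread`)
```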

The ledger (and the associated payment channel implementation) mostly dictates settlement times, but settlement can be made more flexible by using credit to speed up fulfillment. However, you might run into liquidity exhaustion and re-balancing issues, which throttle connector efficiency. I think @kincaid (or perhaps somebody else from Kava) might have more information on the round-trip time for a ledger plugin like the Lightning plugin.
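To make the credit and liquidity point a bit more concrete, here is a hypothetical sketch of the per-peer bookkeeping involved. The names and thresholds are mine, not from any real connector, but the shape of the logic is the same: extend credit up to a limit, and settle asynchronously once a threshold is crossed, because settlement is far slower than forwarding packets.

```ts
// Hypothetical per-peer balance logic illustrating credit limits and settlement
// thresholds; not taken from any real connector implementation.
interface DownstreamPeer {
  owedToPeer: bigint       // unsettled amount we owe this peer, in base units
  creditLimit: bigint      // credit the peer extends to us before refusing packets
  settleThreshold: bigint  // point at which we settle on-ledger or via a payment channel
}

function forwardPrepare(peer: DownstreamPeer, amount: bigint): 'forward' | 'reject' {
  // If this packet would push our unsettled debt past the peer's credit limit,
  // the peer would refuse it anyway: this is the liquidity-exhaustion case.
  if (peer.owedToPeer + amount > peer.creditLimit) return 'reject'
  peer.owedToPeer += amount
  // Settlement (an on-ledger transfer or payment channel claim) is orders of
  // magnitude slower than forwarding an ILP packet, so it is triggered
  // asynchronously once the threshold is crossed rather than per packet.
  if (peer.owedToPeer >= peer.settleThreshold) void settle(peer)
  return 'forward'
}

async function settle(peer: DownstreamPeer): Promise<void> {
  // Placeholder: submit the ledger-specific settlement and, once it confirms,
  // reduce `peer.owedToPeer` by the settled amount.
}
```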

Under the assumption of unlimited liquidity when going from asset A to asset B, this is a non-issue, but practically speaking, it is important for connectors to choose reliable routing methods and prioritize fulfillment over preparing new packets.
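As a purely illustrative sketch of what “prioritize fulfillment over preparing new packets” could mean in practice (this is my own toy example, not how any particular connector is written):

```ts
// Toy illustration of giving in-flight packets priority over brand-new ones.
type Task = () => Promise<void>

const fulfillTasks: Task[] = []  // work that completes payments already in flight
const prepareTasks: Task[] = []  // work that admits new payments

async function processNext(): Promise<void> {
  // Draining fulfills first releases the credit and liquidity that pending
  // Prepares are holding, which matters most when liquidity is nearly exhausted.
  const task = fulfillTasks.shift() ?? prepareTasks.shift()
  if (task !== undefined) await task()
}

// Example driver: keep working as long as either queue has tasks.
async function run(): Promise<void> {
  while (fulfillTasks.length > 0 || prepareTasks.length > 0) await processNext()
}
```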

At the link layer, you can see how Interledger might scale better by using ILP over HTTP instead of BTP over WebSockets. If you are running a connector over BTP, you need to configure a separate WebSocket URL and ILP address for each instance.

For more information on why using BTP over WebSockets could lead to scaling issues, read Evan’s post: Thoughts on Scaling Interledger Connectors. There is some overhead associated with persistent HTTP connections, but it seems like a viable solution for the reasons @emschwartz detailed in the article.
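To see why the HTTP model spreads load so naturally, here is a minimal, hypothetical ILP-over-HTTP endpoint (my own sketch, not the actual connector code; it rejects everything since routing is out of scope). Because each packet is a self-contained request/response, a load balancer can fan packets out across any number of identical instances, whereas a long-lived BTP WebSocket pins a peer to one process.

```ts
// Minimal hypothetical ILP-over-HTTP endpoint: the ILP Prepare arrives as the
// POST body and the Fulfill/Reject goes back in the response body.
import { createServer } from 'http'
import { deserializeIlpPrepare, serializeIlpReject } from 'ilp-packet'

const server = createServer((req, res) => {
  const chunks: Buffer[] = []
  req.on('data', (chunk) => chunks.push(chunk))
  req.on('end', () => {
    const prepare = deserializeIlpPrepare(Buffer.concat(chunks))
    // A real connector would look up the next hop for `prepare.destination`
    // here; this sketch rejects everything with a generic "unreachable" error.
    res.setHeader('content-type', 'application/octet-stream')
    res.end(serializeIlpReject({
      code: 'F02',
      message: `no route for ${prepare.destination} (sketch connector)`,
      triggeredBy: 'g.example.connector',
      data: Buffer.alloc(0)
    }))
  })
})

server.listen(3000)
```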

Fortunately, I think most scaling problems could be solved fairly naively just by running more instances side by side. I think liquidity exhaustion will be the biggest scaling issue that smaller connectors run into, but, as @emschwartz could probably tell you, it’s just kind of foreign right now to think in terms of money-width instead of band-width.

I am excited to hear other people’s thoughts on the topic.

#3

Great breakdown of the various factors in play, @ekrenzke. For raw ILP packets with no settlement, I feel like you can get to thousands of packets/sec just by tuning the existing implementation. The JMeter numbers from @adrianhopebailie backed that up, if I recall correctly.

For the latest settlement-related numbers, check out the README for switch-api. The current ETH and XRP paychan implementations can settle roughly 200x the credit limit per second. Lightning is about an order of magnitude slower.

#4

We’ve found the scaling limits on the existing JS connector to always be CPU-bound. Since Node is single-threaded, it’s very effective to simply scale horizontally.
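To make the single-threaded point concrete, one low-effort way to scale out on a single box is Node’s built-in cluster module, forking one worker per core. This is just a sketch; `startConnector` is a placeholder rather than a real entry point, and each worker still needs its own peer/address configuration, as @ekrenzke noted above for BTP.

```ts
// Sketch: run one connector worker per CPU core with Node's built-in cluster
// module. `startConnector()` is a placeholder, not a real bootstrap API.
import cluster from 'cluster'
import { cpus } from 'os'

function startConnector(): void {
  // Hypothetical: load this worker's config and begin accepting packets.
}

if (cluster.isPrimary) {
  for (let i = 0; i < cpus().length; i++) cluster.fork()
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`)
    cluster.fork()
  })
} else {
  startConnector()
}
```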

We’ve achieved ~2k packets per second using 6 connectors, each at ~80% CPU. That works out to roughly 400 pps per connector given a full CPU (2,000 / (6 × 0.8) ≈ 415).

Seeing as this connector is not optimized for performance, and we have implementations in more systems-level languages in the works, I think the performance ceiling is much, much higher than this.
