Has anyone done load testing to find out how many payments a single connector can handle at once?
What are the bottlenecks performance-wise on Interledger, if any?
Would be open to gathering such data if anyone had suggestions for how to do it.
Some community members made a JMeter add-on for load testing, but I cannot remember the results that were reported on the community call. I am sure it has come a long way since its initial testing.
Perhaps this is a question better answered by @d1no007 and @austin_king, but I think your results will vary greatly in the wild. Something as simple as encoding and decoding times could vary between different implementations that utilize different threading models, but that’s kind of boring, so let’s look at other potential factors.
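To make the encoding/decoding point concrete, here is a minimal micro-benchmark sketch. It assumes the `ilp-packet` npm package (`serializeIlpPrepare` / `deserializeIlpPrepare`); the packet field values are placeholders, and your numbers will obviously differ by machine and runtime.

```ts
// Minimal sketch: micro-benchmark ILP Prepare encode/decode throughput.
// Assumes the `ilp-packet` npm package; field values are placeholders.
import { serializeIlpPrepare, deserializeIlpPrepare } from 'ilp-packet'
import { randomBytes, createHash } from 'crypto'

const fulfillment = randomBytes(32)
const prepare = {
  amount: '1000',
  executionCondition: createHash('sha256').update(fulfillment).digest(),
  expiresAt: new Date(Date.now() + 30_000),
  destination: 'g.example.receiver', // hypothetical ILP address
  data: randomBytes(32),
}

const ITERATIONS = 100_000
const start = process.hrtime.bigint()
for (let i = 0; i < ITERATIONS; i++) {
  deserializeIlpPrepare(serializeIlpPrepare(prepare))
}
const elapsedSec = Number(process.hrtime.bigint() - start) / 1e9
console.log(`${(ITERATIONS / elapsedSec).toFixed(0)} encode+decode round trips/sec`)
```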
The ledger (and its associated payment channel implementation) mostly dictates settlement times, but connectors can gain flexibility by extending credit so that fulfillment does not have to wait on settlement. However, you might run into liquidity exhaustion and rebalancing issues that throttle connector efficiency. I think @kincaid (or perhaps somebody else from Kava) might have more information on the round-trip time for a ledger plugin like the Lightning plugin.
Under the assumption of unlimited liquidity when going from asset A to asset B, this is a non-issue, but practically speaking, it is important for connectors to choose reliable routing methods and prioritize fulfillment over preparing new packets.
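As a purely illustrative sketch (not any particular connector's actual logic), here is roughly why fulfillment should take priority: new prepares consume unsettled credit, while fulfillments and settlements free it back up. The account shape and error string below are assumptions for the example.

```ts
// Illustrative only: a naive per-peer credit check. New prepares are refused once
// unsettled credit is exhausted; fulfillment/settlement releases credit again.
interface PeerAccount {
  creditLimit: bigint      // max unsettled amount extended to this peer
  unsettledBalance: bigint // amount currently prepared but not yet settled
}

function canPrepare(account: PeerAccount, amount: bigint): boolean {
  return account.unsettledBalance + amount <= account.creditLimit
}

function onPrepare(account: PeerAccount, amount: bigint): void {
  if (!canPrepare(account, amount)) {
    // Liquidity exhausted: reject (e.g. a T04 Insufficient Liquidity) or queue the packet
    throw new Error('credit exhausted')
  }
  account.unsettledBalance += amount
}

function onFulfillOrSettle(account: PeerAccount, amount: bigint): void {
  // Settling (or rolling back a rejected packet) frees up credit for new prepares
  account.unsettledBalance -= amount
}
```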
At the link layer, Interledger can scale better by using ILP over HTTP instead of BTP over WebSockets. If you are running a connector using BTP, you need to configure different WebSocket URLs and ILP addresses for each instance.
For more information on why BTP over WebSockets could lead to scaling issues, read Evan’s post: Thoughts on Scaling Interledger Connectors. There is some overhead associated with persistent HTTP connections, but ILP over HTTP seems like a viable solution for the reasons @emschwartz detailed in the article.
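To contrast the two options, here is a hypothetical configuration sketch. These are not the actual ilp-connector config schema; the addresses and URLs are made up, purely to show why stateful BTP peering ties you to per-instance endpoints while ILP over HTTP can sit behind one load-balanced URL.

```ts
// Hypothetical config shapes (not the real ilp-connector schema).

// BTP over WebSockets: each connector instance needs its own WS endpoint and
// ILP address, so peers must be pointed at every instance explicitly.
const btpInstances = [
  { ilpAddress: 'g.mycorp.connector1', btpUrl: 'btp+wss://conn1.example.com' },
  { ilpAddress: 'g.mycorp.connector2', btpUrl: 'btp+wss://conn2.example.com' },
]

// ILP over HTTP: stateless request/response, so many instances can share a
// single ILP address behind one load-balanced endpoint.
const httpPeer = {
  ilpAddress: 'g.mycorp.connector',
  endpoint: 'https://ilp.example.com/ilp', // load balancer fans out to N instances
}
```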
Fortunately, I think most scaling problems could be solved naively by scaling out horizontally with more instances. I think liquidity exhaustion will be the biggest scaling issue that smaller connectors run into, but as @emschwartz could probably tell you, it’s just kind of foreign to think in terms of money-width right now, instead of band-width.
I am excited to hear other people’s thoughts on the topic.
Great breakdown @ekrenzke of the various factors in play. For raw ILP packets with no settlement, I feel like you can get to thousands of packets/sec just by tuning the existing implementation. The JMeter numbers from @adrianhopebailie backed that up, if I recall.
For the latest settlement-related numbers, check out the README for switch-api. The current ETH and XRP paychan implementations can settle roughly 200x the credit limit per second. Lightning is about an order of magnitude slower.
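A quick worked example of what "roughly 200x the credit limit per second" means in practice; the credit limit below is a made-up placeholder, not a benchmark result.

```ts
// Rough illustration of the settlement figure quoted above (placeholder numbers).
const creditLimit = 0.05       // e.g. $0.05 of unsettled credit per peer
const settlementMultiple = 200 // reported ETH/XRP paychan rate
console.log(`~$${creditLimit * settlementMultiple}/sec settled`) // ~$10/sec
// Lightning at roughly an order of magnitude slower would be ~$1/sec at the same limit.
```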
We’ve found the scaling limits on the existing JS connector to always be CPU-bound. Since Node is single-threaded, it’s very effective to simply scale horizontally.
We’ve achieved ~2k packets per second using 6 connectors, each at ~80% CPU. That works out to roughly 400 pps per connector given a full CPU.
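For clarity, the per-connector estimate just normalizes the measured total by CPU actually used:

```ts
// Arithmetic behind the ~400 pps/connector estimate above.
const totalPps = 2000
const connectors = 6
const cpuUtilization = 0.8
console.log(totalPps / (connectors * cpuUtilization)) // ≈ 417 packets/sec per fully-used CPU
```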
Seeing as this connector is not optimized for performance, and we have more systems-level language implementations in the works, I think the performance ceiling is much, much higher than this.
@spearl Hi, sorry to bother you. I am a master’s student focusing on blockchain interoperability, and I am currently investigating Interledger. We are very interested in Interledger’s performance bottlenecks and noticed your reply. Would it be possible for you to share the benchmark tools/code you used to run the experiments and get those results? Or could you point out how users like us could test for performance bottlenecks ourselves?
Hi @Fy45, there are ways to scale Interledger connectors horizontally. We are doing this right now on the network and have seen days where we handled 3,000 TPS. I don’t have any tools to share with you at the moment, but hopefully this provides some context for the state of the network today.
Hi @Fy45,
There was no benchmarking involved in the numbers I quoted above. Those were numbers we were consistently seeing in our production infrastructure metrics.
If you’d like to performance test the JS connector, I would advise you to simply run a connector yourself with the inspector enabled and connected to an instance of Chrome DevTools. Then run a few processes sending packets through the connector and examine the flame graphs.
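If it helps, here is a minimal load-generation sketch for driving packets through a connector while you profile it. It assumes an ILP-over-HTTP endpoint at a placeholder URL and the `ilp-packet` package for serialization; the ILP address, URL, and amounts are hypothetical, and a real setup needs a receiver that knows the fulfillment for each condition (and likely auth on the endpoint).

```ts
// Minimal load-generation sketch (placeholder URL/address; not a real harness).
import { serializeIlpPrepare, deserializeIlpPacket } from 'ilp-packet'
import { randomBytes, createHash } from 'crypto'

const ENDPOINT = 'http://localhost:7768' // placeholder connector URL
const CONCURRENCY = 100
const TOTAL = 10_000

async function sendPrepare(): Promise<void> {
  const fulfillment = randomBytes(32)
  const packet = serializeIlpPrepare({
    amount: '10',
    executionCondition: createHash('sha256').update(fulfillment).digest(),
    expiresAt: new Date(Date.now() + 30_000),
    destination: 'test.receiver', // hypothetical ILP address
    data: Buffer.alloc(0),
  })
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'content-type': 'application/octet-stream' },
    body: packet,
  })
  deserializeIlpPacket(Buffer.from(await res.arrayBuffer())) // fulfill or reject
}

async function main() {
  const start = Date.now()
  let sent = 0
  await Promise.all(
    Array.from({ length: CONCURRENCY }, async () => {
      while (sent < TOTAL) {
        sent++
        await sendPrepare()
      }
    })
  )
  console.log(`${(TOTAL / ((Date.now() - start) / 1000)).toFixed(0)} packets/sec`)
}

main().catch(console.error)
```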
Like I mentioned above, the JS connector was not created for performance, and there are probably many trivial bottlenecks that could easily be optimized. However, JavaScript is a terrible language to use if you’re aiming for pure performance, which is why there hasn’t really been any concerted effort to fix bottlenecks. Most of the community is working on the several connector implementations that treat robustness and performance as primary features. Obviously, any work on improving the current connector is greatly appreciated and will be widely used, but the future lies in other implementations.