Rafiki - A new connector implementation

@matdehaast, @don, and I released a BETA of our new connector implementation Rafiki last week.

I wrote a blog post about it here.

If you have questions or comments please post them in this thread.


What currencies does it handle?

It’s a connector. It’s not currency specific.

Good stuff! I have a bunch of specific questions and reactions below but my main thoughts are:

  • I’m a big fan of sending deserialized ILP packets through the different components and putting settlement messages in ILP packets so there’s only one interface to each component
  • I don’t totally understand the importance of the distinction between rules and protocols. Given the point above, it seems like every component is just something that takes in a Prepare packet and returns a Fulfill or Reject, independent of whether it adds business or protocol logic.
  • I’m curious about the memory usage of the per-peer pipelines. It seems like this would pretty severely limit the number of peers a single instance can handle at once. I would think the memory usage would be lower if the logic of the pipelines were separated from the account/peer-specific configuration such that the rule/protocol function would take in the Prepare and the peer’s details and apply its logic accordingly.

Rafiki is the Swahili word for friend (exactly the kind of connector you want to peer with). It is also the name of the coolest character in The Lion King

Awesome.

  1. Isolate the router.

I’m not sure I totally understand this point. Is the idea that there should be one component that just looks at the destination address and routing table to determine the next hop, and a separate one should apply the exchange rate? Am I right in thinking those two are part of the same process but just that one comes after the other?

  1. Support dynamic configuration

Great.

In an ideal architecture connectors establish a very simple channel with their peers over which they exchange nothing but ILP packets.

When you first proposed this, I really didn’t like the idea but I’ve been totally sold on it since then. Whether or not it’s the “right” way to do it in terms of layering and whatnot, it simplifies the design a bunch so I’m very happy with it.

It also adds a special peer to the routing table at startup, representing the connector itself (self), and then adds the necessary protocols to the outgoing pipeline for self (e.g. the echo protocol).

I don’t totally understand this design choice. What’s the logic behind having this self pipeline?

  1. Separate business logic and protocol logic

Just to clarify: by protocol logic, you mean something like “is the packet expired” or “does the fulfillment match” and by business logic you mean “what is the max in flight limit for this peer”, right?

The major difference is that we create an instance of each rule for each peer.

Does “an instance” mean that the rule function applies some different logic based on the peer’s configuration or that there’s a separate closure in memory representing that logic + configuration?

rules are very lightweight and not bound to a pipeline, rather they are chained together when the peer is set up

Sorry if this is a silly question but what’s the difference between being bound to a pipeline and the chain of rules/protocols for a peer?

This allows peers to be set up ad hoc with different rule pipelines so, for example, an internal peer that doesn’t settle with its peers could choose not to use balance middleware as a way to optimise its processing pipeline.

It seems like this could also be handled by passing everything through the same chain of functions but using the account/peer-specific configuration to determine whether the function does something or just passes it through. I don’t think adding another Promise into the chain slows it down much, but it seems like having components duplicated for each peer will increase the memory usage (though I may be totally wrong about how JS handles those different objects or closures).
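Concretely, something like this sketch (all names here are hypothetical, not taken from either codebase): one shared function per rule, with the peer’s configuration passed in alongside the packet, and a cheap pass-through when the rule doesn’t apply.

```typescript
// Sketch only: every peer shares the same chain; the peer's configuration
// decides whether a step does real work or just forwards the packet.
interface IlpPrepare { destination: string; amount: string }
interface IlpReply { code?: string; message?: string }

interface PeerConfig { id: string; settles: boolean }

type Handler = (prepare: IlpPrepare, peer: PeerConfig) => Promise<IlpReply>

// Hypothetical balance store shared across all peers.
declare function adjustBalance(peerId: string, amount: bigint): void

const balance = (next: Handler): Handler =>
  async (prepare, peer) => {
    if (!peer.settles) return next(prepare, peer) // pass-through: one extra await
    adjustBalance(peer.id, BigInt(prepare.amount))
    return next(prepare, peer)
  }
```

The rule logic then exists once in memory, and each peer only pays for its configuration object.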

allow specific protocols to be disabled or configured differently per peer in future

Is there a different pipeline of rules and protocols per endpoint or per peer?

This could be via config read in by the process or via the API exposed by app.

Would this support open signup methods like mini accounts, or would it do away with that model?

There are not likely to be a huge variety of endpoint implementations; one for each possible transport (HTTP, gRPC, WebSockets, raw TCP).

Great point.

Inside the app packets are already deserialized so there is also no performance hit on the processing pipelines.

:+1:

Where peers wish to exchange settlement-related messages with one another we expect them to use ILP packets in the address space peer.settle.*.

:+1:


I am going to reply with my own views and @don and @adrianhopebailie can chime in where they agree/disagree. From the outset I would state the implementation was developed with the following in mind (order loosely based on priority):

  • Developer friendliness (extending, maintaining, and serving as an introduction to ILP)
  • Extensibility by operators (ie adding custom logic)
  • Ability to run in distributed architecture
  • Performance

The ordering above doesn’t mean we believe this implementation won’t be performant!

  • I’m a big fan of sending deserialised ILP packets through the different components and putting settlement messages in ILP packets so there’s only one interface to each component

Agreed, this creates a much cleaner way of dealing with ILP packets throughout the system.

  • I don’t totally understand the importance of the distinction between rules and protocols. Given the point above, it seems like every component is just something that takes in a Prepare packet and returns a Fulfill or Reject, independent of whether it adds business or protocol logic.

We also debated this quite a bit and it came down to something very simple. Protocols are things determined by the protocol spec, i.e. the functions any connector would have to handle at a minimum, such as CCP, Echo, etc. Rules are peer-level logic that isn’t necessarily required to operate a connector: things like balance, throughput, etc.

It may seem superfluous to create different naming schemes, but it draws a very clear distinction and line in the sand between what is protocol and what is ops-level rules for peering. Again, this speaks to the first points above about the thinking behind the new architecture.

I’m curious about the memory usage of the per-peer pipelines. It seems like this would pretty severely limit the number of peers a single instance can handle at once. I would think the memory usage would be lower if the logic of the pipelines were separated from the account/peer-specific configuration such that the rule/protocol function would take in the Prepare and the peer’s details and apply its logic accordingly.

That could be possible, but without any raw data I cannot agree or disagree with you… My intuition tells me, though, that this won’t be as big a concern as we think. My logic is that high-throughput peers will be operating on dedicated instances of sharded connectors within a cluster, and the many lower-throughput peers can be spread across shared instances. But again, I would argue that in the case where this becomes limiting, you are probably better off using a high-performance implementation, such as the Rust one you are working on.

I’m not sure I totally understand this point. Is the idea that there should be one component that just looks at the destination address and routing table to determine the next hop, and a separate one should apply the exchange rate? Am I right in thinking those two are part of the same process but just that one comes after the other?

The isolation of the router had two aspects: one, we are using a standalone routing component in our work with Mojaloop, and two, we saw it as an opportunity to change the current thinking. Should one component determine the next hop and point the packet in that direction? I would argue YES! That is routing. Exchange rates are another concern. I think we should open another discussion on this, as @sappenin and I have had quite a long conversation about it. There are quite a few merits to making routing just routing and the conversion another concern.

When you first proposed this, I really didn’t like the idea but I’ve been totally sold on it since then. Whether or not it’s the “right” way to do it in terms of layering and whatnot, it simplifies the design a bunch so I’m very happy with it.

I am glad you feel this is how it should be! The more we think about it, the more we believe that a connector should be able to accept any unsolicited connection via any endpoint type (TCP, gRPC, HTTP, etc.). The peers then exchange ILP packets in the peer.* space to authenticate, set up settlement, etc., until they finally get to the point where they can exchange non-peer.* ILP packets. This would make interoperability between implementations much easier as well!

I don’t totally understand this design choice. What’s the logic behind having this self pipeline?

It allows the connector itself to be addressed directly. I.e. if your address is g.harry.alice, this gives a clear pipeline in which to add logic for handling packets addressed directly to you. Further, with the pipeline implementations you can easily add custom logic to this pipeline to do some funky stuff :joy:

Just to clarify: by protocol logic, you mean something like “is the packet expired” or “does the fulfillment match” and by business logic you mean “what is the max in flight limit for this peer”, right?

  • CCP
  • ILDCP
  • Validate Fulfilment
  • Echo

Things that are explicitly defined in the protocol. Stuff such as throughput is not a protocol definition but an operational nice-to-have, and hence a peering ‘Rule’.

Does “an instance” mean that the rule function applies some different logic based on the peer’s configuration or that there’s a separate closure in memory representing that logic + configuration?

Each instance is an instantiation of a custom function for that peer’s pipeline.
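To illustrate with a hypothetical sketch (not our actual code): the factory closes over the peer’s configuration and any mutable state, so each peer gets its own object in memory.

```typescript
// Hypothetical sketch: a throughput rule instantiated once per peer.
interface IlpPrepare { destination: string; amount: string }
interface IlpReply { code?: string; message?: string }
type Next = (prepare: IlpPrepare) => Promise<IlpReply>

const createThroughputRule = (refillAmount: bigint, refillPeriodMs: number) => {
  let available = refillAmount // mutable state owned by this instance
  setInterval(() => { available = refillAmount }, refillPeriodMs)
  return async (prepare: IlpPrepare, next: Next): Promise<IlpReply> => {
    if (BigInt(prepare.amount) > available) {
      return { code: 'T04', message: 'exceeded throughput limit' } // T04: insufficient liquidity
    }
    available -= BigInt(prepare.amount)
    return next(prepare)
  }
}

// Each peer gets its own instance with its own config and state:
const aliceThroughput = createThroughputRule(10_000n, 1_000)
const bobThroughput = createThroughputRule(500n, 1_000)
```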

Sorry if this is a silly question but what’s the difference between being bound to a pipeline and the chain of rules/protocols for a peer?

Going to defer to @adrianhopebailie here.

It seems like this could also be handled by passing everything through the same chain of functions but using the account/peer-specific configuration to determine whether the function does something or just passes it through. I don’t think adding another Promise into the chain slows it down much, but it seems like having components duplicated for each peer will increase the memory usage (though I may be totally wrong about how JS handles those different objects or closures).

You could do that, but essentially let’s say you have lots of rules for one peer and none for the other. In your model the packet would still need to pass through every service. In ours it would just pass through the ones that apply to that peer.

Quoting Stefan here, “ILP will need to optimise for raw packet throughput above all else”.
So why not minimise work on the golden path where you can?
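As a rough sketch of what that means (illustrative names only, not our actual implementation): each peer’s rules are folded into a single handler, so a peer with no rules pays for nothing but the call into the endpoint.

```typescript
interface IlpPrepare { destination: string; amount: string }
interface IlpReply { code?: string; message?: string }
type Next = (prepare: IlpPrepare) => Promise<IlpReply>
type Rule = (prepare: IlpPrepare, next: Next) => Promise<IlpReply>

// Fold a rule array into one handler: reduceRight wires each rule's `next`
// to the rule after it, terminating at the outgoing endpoint.
const compose = (rules: Rule[], sendOutgoing: Next): Next =>
  rules.reduceRight<Next>((next, rule) => p => rule(p, next), sendOutgoing)

declare const balanceRule: Rule      // hypothetical rule instances
declare const throughputRule: Rule
declare const sendToEndpoint: Next

// A settling peer gets the full chain; an internal peer that never settles
// gets an empty one, so its golden path is a single function call.
const settlingPeer = compose([balanceRule, throughputRule], sendToEndpoint)
const internalPeer = compose([], sendToEndpoint)
```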

Is there a different pipeline of rules and protocols per endpoint or per peer?

Hmm, interesting. I think currently it is per endpoint, as we have a 1-1 mapping of endpoint to peer, so it is effectively per peer. We could create a hybrid endpoint that binds multiple incoming endpoints to a single peer endpoint.

Would this support open signup methods like mini accounts, or would it do away with that model?

That is the goal very soon, and something we are looking to the community to help develop while we work on Mojaloop stuff. I think the hope is that this will be done before the end of the ILP summit. Maybe we can do a mini hackathon there to finish it?! :laughing:

Thanks for the comments and feedback @emschwartz. Lots of food for thought for me and good to be challenged on our ideas and assumptions :+1:


Yes. In our model (credit to @justmoon for the idea) you would use business rules to convert the incoming amount and the outgoing amount, but the router simply does routing.

In fact you can build a non-ILP router that uses a vaguely similar addressing scheme (like IP) using the ilp-router module.

So the theory is:

  1. Pick a currency and scale for your connector.
  2. Convert all incoming amounts to that
  3. Apply an incoming fee
  4. Route the packet
  5. Apply an outgoing fee
  6. Convert the outgoing amount to the correct outgoing currency

The reality is, most connectors will pick the same currency as most of their peers, so either step 2 or step 6 is not needed (often both can be skipped).

The logic for 3 and 5 is much simpler because all fees are calculated and configured using the same currency and scale.

Step 4 can be highly optimized because it’s JUST routing. An exercise for the enthusiast is to re-write the routing table in C :slight_smile:
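To make the steps concrete, here is a rough TypeScript sketch of steps 2 to 6 (the helpers and names are hypothetical, not Rafiki’s API):

```typescript
interface IlpPrepare { destination: string; amount: string }
interface IlpReply { code?: string; message?: string }
interface Peer {
  assetCode: string
  assetScale: number
  send(prepare: IlpPrepare): Promise<IlpReply>
}

// Step 1: pick a currency and scale for the connector.
const CONNECTOR_ASSET = 'USD'
const CONNECTOR_SCALE = 9

// Hypothetical helpers: a rate source, fee schedules and a routing table.
declare function convert(amount: bigint, fromAsset: string, fromScale: number,
                         toAsset: string, toScale: number): bigint
declare function incomingFee(peer: Peer, amount: bigint): bigint
declare function outgoingFee(peer: Peer, amount: bigint): bigint
declare const routingTable: { nextHop(destination: string): Peer }

async function handle(prepare: IlpPrepare, from: Peer): Promise<IlpReply> {
  // 2. Normalize the incoming amount into the connector's currency and scale.
  let amount = convert(BigInt(prepare.amount), from.assetCode, from.assetScale,
                       CONNECTOR_ASSET, CONNECTOR_SCALE)
  // 3. Incoming fee, calculated in the connector's own currency.
  amount -= incomingFee(from, amount)
  // 4. JUST routing: destination address in, next hop out.
  const nextHop = routingTable.nextHop(prepare.destination)
  // 5. Outgoing fee, still in the connector's own currency.
  amount -= outgoingFee(nextHop, amount)
  // 6. Convert to the outgoing peer's currency and scale.
  const outgoingAmount = convert(amount, CONNECTOR_ASSET, CONNECTOR_SCALE,
                                 nextHop.assetCode, nextHop.assetScale)
  return nextHop.send({ ...prepare, amount: outgoingAmount.toString() })
}
```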

It makes the router significantly simpler. It has no concept of “self”, just routes and peers. The connector creates an empty routing table and inserts itself as a peer called “self”.

All routes that go to “self” are effectively the addresses of the connector.

Also, all of the same things that are available for peers are also available for the outgoing pipeline to “self” so you can configure rules and protocols.
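A toy sketch of the idea (the API here is illustrative, loosely in the spirit of the ilp-router module):

```typescript
// Toy router: longest-prefix match over ILP addresses; no concept of "self".
class Router {
  private routes = new Map<string, string>()
  addRoute(prefix: string, peerId: string) { this.routes.set(prefix, peerId) }
  nextHop(destination: string): string {
    let best = ''
    for (const prefix of this.routes.keys()) {
      if ((destination === prefix || destination.startsWith(prefix + '.')) &&
          prefix.length > best.length) best = prefix
    }
    const peer = this.routes.get(best)
    if (peer === undefined) throw new Error('no route for ' + destination)
    return peer
  }
}

const router = new Router()            // empty routing table
router.addRoute('g.harry', 'self')     // the connector's own addresses -> "self"
router.addRoute('g.harry.bob', 'bob')  // a peer's address space

console.log(router.nextHop('g.harry.echo'))     // -> 'self'
console.log(router.nextHop('g.harry.bob.carl')) // -> 'bob'
```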

There is no such thing as pipelines when the connector starts up, at least not in the sense that they exist in ilp-connector. E.g. you can’t execute the startup pipeline, shutdown pipeline, etc.

Rules are modelled on Node.js stream.Duplex (or specifically stream.Transform), but use a variation of the Duplex interface I’ve called a DuplexRequestStream, where a write returns a Promise which resolves when the request that was written has been read and replied to.

In the same way as you can pipe() one Duplex into another you can do the same with a DuplexRequestStream.

When you add a peer to an instance of Connector you provide instances of all the Rules that you want applied to packets to and from that peer, and the connector chains these together by piping them to each other in both directions. (A Rule is a bi-directional DuplexRequestStream, i.e. it has an incoming and an outgoing stream.)

So each chain of rules (pipeline) is unique. In fact, the order of rules could be different even if the same rules are used.

Finally, there is no reference to the “pipeline”. Once the rules are chained together all that the connector has is a reference to functions for injecting input into the chain of rules and handling output.

If a reference to an individual rule instance is required then it must be held by the app before passing the rule to the connector when adding the peer.
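In interface terms, the shape is roughly the following (a sketch, not the exact types in the code):

```typescript
// Like stream.Duplex, but write() returns a Promise that resolves once the
// written request has been read downstream and replied to.
interface DuplexRequestStream<Request, Reply> {
  write(request: Request): Promise<Reply>
  pipe(next: DuplexRequestStream<Request, Reply>): DuplexRequestStream<Request, Reply>
}

// A Rule is bi-directional: one request stream per packet direction.
interface Rule<Request, Reply> {
  incoming: DuplexRequestStream<Request, Reply>
  outgoing: DuplexRequestStream<Request, Reply>
}
```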

Per peer. Currently we attach an Endpoint to the end of the pipeline, and this abstracts away the concept of sending and receiving ILP packets, but in reality it could also contain multiple Endpoint instances that represent multiple physical connections.

I.e. From the perspective of the Connector, the Endpoint instance it is passed when a peer is added is the interface for sending and receiving packets to and from that peer. How this is actually done under the hood is abstracted away.

For some clarity, have a look at how we implemented the HTTP2 endpoint and server.
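The contract is roughly this (a sketch with illustrative names): from the connector’s perspective an Endpoint is just “send a request, get a reply”, plus a hook for requests arriving from the peer; whether that is a single HTTP/2 session or several physical connections is hidden inside.

```typescript
interface IlpPrepare { destination: string; amount: string }
interface IlpReply { code?: string; message?: string }

interface Endpoint {
  sendOutgoingRequest(request: IlpPrepare): Promise<IlpReply>
  setIncomingRequestHandler(handler: (request: IlpPrepare) => Promise<IlpReply>): void
}
```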

Alternatively:

  1. Route the packet
  2. Apply the exchange rate, which may include some fee

I don’t see why you need the connector to have a currency and scale in order to separate routing from exchange rates. Can’t they just be two different components chained together (in the same way as everything else is chained together, because they just take in Prepare packets, send them on, and return Fulfills or Rejects)?

I’d be careful with modeling things on that interface. Streams are generally a huge pain to work with. Having objects that you can call with a Prepare and get a Promise that resolves to a Fulfill or Reject sounds good though.


How do you know the incoming currency and scale on the outgoing pipeline when you’re applying an exchange rate and fee?

You are coupling the components together again which is exactly what we’re trying to move away from.

If you normalize the currency and scale early in the incoming pipeline it becomes much easier to reason about any rules that apply to the amount. Otherwise your rules need to be reconfigured to match the current rate of exchange. (We haven’t had this issue with ilp-connector yet because everyone uses XRP).

It also means that if you have a cluster of connectors that peer with each other you can assume they all use the same internal currency and scale.

I disagree. The Node.js interfaces are a bit messy because they support lots of legacy stuff, but I really like streams.


In the OutgoingService model I’m using, the OutgoingRequest type has both the from and the to accounts. When the account is first loaded from the database, all of the static properties like asset_code, asset_scale, max_packet_amount, etc. are attached to the Account object and passed through each of the subsequent services.

The only rules I can think of right now that are related to the currency are the max packet amount and the throughput one. Are there any others? The way these work in the Service model is that the max_packet_amount is a detail configured on each account, stored in the database, and loaded up with the other account details. The max packet amount Service just looks at that value on the account and rejects the packet if it’s too high.
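In rough TypeScript terms (the actual implementation is Rust, and these names are only for illustration), that Service looks something like this:

```typescript
interface IlpPrepare { destination: string; amount: string }
interface IlpReply { code?: string; message?: string }

// Static account details loaded once from the database.
interface Account {
  id: string
  asset_code: string
  asset_scale: number
  max_packet_amount?: bigint
}

interface OutgoingRequest { from: Account; to: Account; prepare: IlpPrepare }
type OutgoingService = (request: OutgoingRequest) => Promise<IlpReply>

// One shared service for all accounts: the limit travels with the request.
const maxPacketAmount = (next: OutgoingService): OutgoingService =>
  async (request) => {
    const limit = request.from.max_packet_amount
    if (limit !== undefined && BigInt(request.prepare.amount) > limit) {
      return { code: 'F08', message: 'packet size too large' } // F08: Amount Too Large
    }
    return next(request)
  }
```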

I assume that whatever each account/peer uses is stored in the database, which doesn’t seem so complicated to me.


@adrianhopebailie Hi Adrian, this sounds great! Could we please have a tutorial on Medium on how to get this up and running, with some config examples for connecting it to the reference JS ILSP connector, Moneyd, etc? Like the one you provided for the reference connector.

Until then, could you please provide a short explanation of how to set up peering with, for example, the reference JS connector? Hopefully we will be able to expand from there.

Does it support MoneydGUI like the reference connector? If not, is a GUI planned?

Thank you


Working on it!

We’re focused on getting a good settlement implementation done first and then we’ll post up some guidance on how to use Rafiki.


Not yet, but we do plan on making our admin API compatible with the ilp-connector admin API, so the GUI should work out of the box.