Plugin architecture and ledger integrations

Is the Interledger.js plugin architecture the best way to do blockchain/ledger integrations?

Background: What is a plugin?

A library that connects to a specific type of ledger and handles settlement operations, such as sending payment channel updates/claims or on-ledger transfers. Plugins can abstract away different types of bilateral communication methods and the differences between settlement ledgers.

Note: the term “plugin” has most often been used to refer to part of an Interledger SDK that a developer would bundle with an application, rather than a standalone component.
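
For reference, the core of the JS plugin interface boils down to sending data (ILP packets and other messages) and sending money (settlements). Below is a rough rendering as a Rust trait, for orientation only; the actual interface is JavaScript, and the types here are placeholders:

extern crate futures;
use futures::Future;

// The JS interface passes raw packet buffers.
pub type PacketBuffer = Vec<u8>;

pub trait LedgerPlugin {
    /// Connect to the underlying ledger or bilateral transport.
    fn connect(&mut self) -> Box<Future<Item = (), Error = ()> + Send>;

    /// Send an ILP packet (or sub-protocol message) to the peer and
    /// asynchronously receive the response packet.
    fn send_data(&mut self, data: PacketBuffer)
        -> Box<Future<Item = PacketBuffer, Error = ()> + Send>;

    /// Settle with the peer, e.g. by sending a payment channel
    /// update/claim or making an on-ledger transfer.
    fn send_money(&mut self, amount: u64)
        -> Box<Future<Item = (), Error = ()> + Send>;
}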

If you aren’t familiar with Interledger plugins, you might want to look at:

Is this the right architecture going forward?

Internal Plugins

Pros

Cons

  • Each ledger plugin theoretically needs to be re-implemented in every programming language we want to have SDKs for
  • The current design was built on the assumption that Interledger peers would settle every ILP packet using a fast settlement mechanism like a payment channel. This does not scale well to multi-instance connector deployments, where the overhead of signing, sending, and verifying payment channel updates/claims may quickly outweigh the benefit of minimizing bilateral credit risk
  • JS plugins currently handle fairly complex balance logic, which also needs to be re-implemented for each plugin

External Settlement Engine

Pros

  • Only need one implementation of a ledger integration in a single programming language
  • Possibly more scalable for multi-instance connector deployments (see the section entitled “Separating Clearing and Settlement” in Thoughts on Scaling Interledger Connectors)

Cons

  • Need to standardize the API other components will use to trigger or react to settlement events (one possible shape is sketched after this list)
    • Should all settlement engines be expected to expose a common HTTP (or other transport) API?
    • Should the settlement engine interact directly with the database (as mentioned in the Thoughts on Scaling Interledger Connectors post)? If so, which databases should we support, or are we going to create an abstraction for the database?
  • Disruption to what we’re already doing
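
To make the standardization question concrete, here is one possible shape for the connector-facing settlement engine interface, sketched as Rust traits. Every name and method here is hypothetical, for discussion only:

pub type AccountId = u64;

// What the connector asks of the settlement engine.
pub trait SettlementEngine {
    /// Settle `amount` with `account`, e.g. because the peer's balance
    /// crossed a configured settlement threshold. In an HTTP deployment
    /// this might be a POST from the connector to the engine.
    fn send_settlement(&mut self, account: AccountId, amount: u64) -> Result<(), ()>;
}

// What the settlement engine asks of the connector (or its database).
pub trait SettlementEventHandler {
    /// Credit `account` after the engine observes an incoming settlement
    /// on the underlying ledger.
    fn on_incoming_settlement(&mut self, account: AccountId, amount: u64);
}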

Both?

We could assume that internal plugins are used only to abstract away message transports like BTP and HTTP, and move settlement functionality into a separate service. If we go that route, though, we need to define both APIs (the transport abstraction and the settlement interface).

Another idea worth mentioning was Adrian’s suggestion of sending settlement-related messages inside ILP packets. This would simplify the internal “plugin” API down to a single call that sends out an ILP Prepare packet and asynchronously returns either an ILP Fulfill or Reject packet.
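
A hedged sketch of what that single call might look like (the trait name and packet types are placeholders, not a settled design):

extern crate futures;
use futures::Future;

// Placeholder ILP packet types (definitions elided).
pub struct Prepare;
pub struct Fulfill;
pub struct Reject;

pub trait Link {
    /// The entire "plugin" surface: send an ILP Prepare (which may carry
    /// a settlement-related message in its data) and asynchronously get
    /// back either an ILP Fulfill or an ILP Reject.
    fn send_prepare(&mut self, prepare: Prepare)
        -> Box<Future<Item = Fulfill, Error = Reject> + Send>;
}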

Thoughts?


Each ledger plugin theoretically needs to be re-implemented in every programming language we want to have SDKs for

At the risk of increasing complexity, could you write a library in a language that allows you to create bindings for others and then interoperate that way?

I hate the plugin model (but I think you know that already :slight_smile: )

I am a big fan of separating the ILP stuff (routing packets, validating fulfillments and expiries) from the business logic (checking balances, settling with peers).

The problem with the plugin model is that plugins serve two functions: they are the link between two connectors for exchanging ILP packets, and they also hold the logic for settlements and other stuff that doesn’t fit easily into middleware.

The result is very strong coupling between connectors and plugins, with the connector code using plugin instances as a proxy for anything related to an account. The fact that balances are tracked in the connector but settlements are done in the plugin is already a clue that the abstractions don’t work.

I don’t think we have had issues with this to date because we haven’t had enough variety in how connectors are deployed for the issues to manifest.

@matdehaast, @don and I spent some time brainstorming a connector design today that we’re going to prototype. We’re building on some of what we did for ilp-connector v23-beta.

At a high level our plan is to have a stand-alone routing component that is VERY simple and is included as a dependency of ilp-connector. (We need this for some work we are doing in the Mojaloop project anyway.) This component will only do routing and have no concept of accounts or route updates (just peers and a routing table).

The ilp-connector adds the business rules that wrap the router, using middleware pipelines to do things like balance checking and amount and expiry conversion for incoming and outgoing packets. It also hosts the “sub-protocol” controllers for things like ILDCP and CCP.

We’re borrowing an idea from Stefan: the router has a base currency and no backend, so when a packet comes in it simply decides on the outgoing link and routes it out (like a …router). The middleware on either side of the router converts the incoming amount to the internal currency before routing and to the outgoing currency after routing (same for the expiry).
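
As a toy illustration of the conversion step (the rate representation is an assumption; a real implementation would also deal with asset scale and rounding policy):

// `rate` is base-currency units per unit of the peer's currency.
fn incoming_to_base(incoming_amount: u64, rate: f64) -> u64 {
    (incoming_amount as f64 * rate).floor() as u64
}

fn base_to_outgoing(base_amount: u64, rate: f64) -> u64 {
    (base_amount as f64 / rate).floor() as u64
}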

By making the middleware pluggable you can implement different settlement models (a sketch of the third model follows this list), e.g.:

  • Middleware that does no balance checking
  • Middleware that uses a balance on a shared external system
  • Middleware that keeps an in-memory balance and accepts settlement messages via an API to update the balance after a settlement event.
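
A minimal sketch of the third model, where the names and balance semantics are assumptions ("balance" here is how much the peer owes us):

use std::collections::HashMap;

pub type AccountId = u64;

// Hypothetical in-memory balance middleware: the packet pipeline calls
// try_debit before forwarding, and a settlement API calls credit_settlement
// after a settlement event is observed.
pub struct InMemoryBalances {
    balances: HashMap<AccountId, i64>,
    limit: i64,
}

impl InMemoryBalances {
    /// Reject the packet (return Err) if forwarding it would push the
    /// peer's balance past the configured limit.
    pub fn try_debit(&mut self, account: AccountId, amount: i64) -> Result<(), ()> {
        let limit = self.limit;
        let balance = self.balances.entry(account).or_insert(0);
        if *balance + amount > limit {
            return Err(()); // caller responds with an ILP Reject
        }
        *balance += amount;
        Ok(())
    }

    /// Apply an observed settlement, reducing what the peer owes.
    pub fn credit_settlement(&mut self, account: AccountId, amount: i64) {
        *self.balances.entry(account).or_insert(0) -= amount;
    }
}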

Instead of a plugin we’re planning on having a very simple abstraction called an endpoint which is an interface for sending and receiving ILP packets. Implementations of an endpoint can then use whatever they want to exchange the packets (HTTP, BTP, UDP).
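
A rough sketch of that interface; the handler-registration style is an assumption, and the two-way exchange of ILP packets is the only point:

extern crate futures;
use futures::Future;

// Placeholder ILP packet types (definitions elided).
pub struct Prepare;
pub struct Fulfill;
pub struct Reject;

pub type IlpReply = Box<Future<Item = Fulfill, Error = Reject> + Send>;

pub trait Endpoint {
    /// Send an outgoing ILP Prepare; resolves to the peer's Fulfill or Reject.
    fn send_outgoing(&mut self, prepare: Prepare) -> IlpReply;

    /// Register a handler for incoming ILP Prepares from the peer. How the
    /// packets actually move (HTTP, BTP, UDP) is up to the implementation.
    fn set_incoming_handler(&mut self, handler: Box<FnMut(Prepare) -> IlpReply + Send>);
}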

We’re hoping to have something to demo soon, perhaps for the 20 Feb community call.


Hi, I’m happy I found this forum - I’d joined the W3C community group but it seems to be dead.

I contribute to the Komodo Platform, and one of the earlier plugin implementations was sent to me in October 2018 to check out: https://github.com/nuevax/ilp-plugin-komodo-paychan

Our lead dev (jl777) has used the “ex-ILP” Crypto Conditions data format as the basis for serializing objects to the op_return and using these in blockchain transactions. I wrote about a simple example of conditional payments today, referencing @adrianhopebailie’s older blog post (https://i.mylomylo.com/ceo-dies-cryptocurrency-will-conditional-payments/): now that the conditional payment “business logic” is handled in a smart contract, it can be used in an ILP-style application-layer way (through the browser). Of course, this functionality is exposed via RPC on localhost, etc.

I noted in another thread (t/eos-plugin-connector/34) there is chat about a C++ connector.

I think this would be useful because most bitcoin/zcash forks would benefit from such an integration at the backend/daemon level.

Unfortunately I’m quite a noob with ILP, and it’s been 15+ years since my last hands-on experience with C++.

Komodo allows for blockchain creation (runtime forks), and this type of ILP integration would benefit any new blockchain projects (from the daemon perspective). With KMD Crypto Conditions, these types of CCvouts could be quite useful in making intelligent web payments, with the “business logic” kept in the crypto conditions in the blockchain daemon.

I’ll read up on the original post and try to understand how/where I can provide useful discussion :grimacing:

Thanks for the up-to-date forum topics - I wasn’t sure if ILP was still a live project from keeping a lazy eye on it, despite Coil’s streaming payments stuff.

Cheers
Mylo

Instead of a plugin we’re planning on having a very simple abstraction called an endpoint which is an interface for sending and receiving ILP packets. Implementations of an endpoint can then use whatever they want to exchange the packets (HTTP, BTP, UDP).

Sounds a lot like a telephone/video call, where the signalling (ring ring) and the media (voice/video) are separated.

e.g. signalling (SIP) uses a standard port (and could be TCP or UDP), and on answer there is a description (SDP) of where the media will be available, with handshaking to establish that the parties are supposed to be communicating with each other; the media is then streamed as RTP packets on a given port, either p2p or via a proxy/switch.

@adrianhopebailie Can you clarify this setup? For example, will this involve self. accounts, or how will the connector know to forward to the router?

I’m thinking this should really be called a Link. In the Java abstraction, I’m considering having different types of Links, such as a DataLink (which has the sendData method and handlers for receiving data) and potentially something similar called a MoneyLink (as opposed to a single Plugin interface).

Your comment above has me wondering though - should the ILP Link Layer be modeled around endpoints (in which case call this thing an Endpoint as you have), or should it be modeled around the connection between two parties (and be called a Link)?

I’m in support of doing an external settlement engine. I think having one implementation that works and has had a security audit is probably the best way forward for adoption. However, it’s not super clear to me how this might look.

I think the public connector still needs to keep track of some settlement engine information (e.g. the overall capacity it can handle for various currencies), to avoid having to query the settlement engine for everything, for performance reasons.

Additionally, an external settlement engine decouples key management from the connector (which has a public endpoint) and moves it somewhere that can be private but still have live replicas, without needing to run a full connector for those replicas to work. Having these decoupled services could allow for greater flexibility down the line.

A cool thing you could potentially do is write a protocol to delegate your payment channel management to someone else’s external settlement engine - sort of like an explicit watchtower. If you leverage secure enclaves, you can do this in a secure way; however, there’s no guarantee of availability (i.e. if the watchtower goes down, you’re done for), and I’d have to do some more thinking for a proper protocol to be fleshed out.

Seems like gRPC or HTTP/2 is the way to go if you want to do load balancing with the common tools out there. I would expect most connectors to put this behind a private network in the cloud for security reasons. What would the tradeoffs be, in your opinion? gRPC seems to have some good defaults, and I think it’s fine to use protobufs as a standard.

This is partially language-dependent, since it depends on whether we’re going to leverage a database controller (which we probably should, to make maintenance easy). It could look very different depending on whether we use Rust, TypeScript, or something else.

Would love to hear thoughts on this.

I’m looking forward to seeing what you guys have come up with. In the meantime, I also want to share a sketch of the design we’ve been iterating on in the Rust implementation that may have some similarities.

High level points:

  • Every component is a Service that exposes the same API, so they can be chained together into a connector, a standalone STREAM receiver, etc. (credit to Carl Lerche’s work on the Rust Tower framework)
  • Services use deserialized ILP packets to minimize the number of times the packet is (de)serialized, and because most Services need to look at the data in the packets
  • Settlement messages do not have a separate sendMoney abstraction. They are either sent inside ILP packets or they are handled outside of the pipeline for ILP packets (for example, sent to a different HTTP URL path and handled directly by the settlement engine) (credit to @adrianhopebailie)
  • Each Service defines a trait (like an interface) to the data store with specific functions it needs (like get_routing_table, or atomically_update_balances). A data store could be in-memory, use a fast external system like Redis, or use a combination of database and pubsub technologies, and each store implementation would implement as many of the traits as possible (a sketch follows this list).
  • There is a common AccountId that all Services and data stores use (instead of the plugins separately managing account details)
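
As a hypothetical example of such a per-Service trait (get_routing_table is the function named above; the signature is an assumption):

use std::collections::HashMap;

pub type AccountId = u64;

// A Service that routes packets requires only this capability; an
// in-memory store, Redis, or a SQL database could each implement it.
pub trait RouteStore {
    /// The routing table: ILP address prefix -> next-hop account.
    fn get_routing_table(&self) -> HashMap<String, AccountId>;
}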

Service interface

The Service is an interface with two methods: poll_ready is used to determine whether the Service is ready to accept more requests, and call takes a Request and asynchronously returns either an ILP Fulfill or Reject packet. A Request is an ILP Prepare packet with a from and to attached.

“Servers”, such as a BTP or HTTP server, accept an instance of a Service and call it with the deserialized Request. “Clients”, such as an outgoing HTTP client, implement this trait. “Middleware”, such as a balance updater, are passed a Service instance and also implement the trait. Middleware pass on Requests by calling their inner Service, and may modify the Request or the response, or respond directly without calling the inner Service. (A sketch of such a middleware follows the trait definition below.)

extern crate futures;
use futures::{Async, Future, Poll};

// Prepare, Fulfill, and Reject are the deserialized ILP packet types
// (definitions elided in this excerpt).
pub type AccountId = u64;

pub struct Request {
    pub from: Option<AccountId>,
    pub to: Option<AccountId>,
    pub prepare: Prepare,
}

pub trait Service {
    type Future: Future<Item = Fulfill, Error = Reject> + Send + 'static;

    fn poll_ready(&mut self) -> Poll<(), ()> {
        Ok(Async::Ready(()))
    }

    fn call(&mut self, request: Request) -> Self::Future;
}
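
For example, a hypothetical middleware that wraps an inner Service (building on the trait above) might look like this:

// Logging middleware: passes each Request through to the inner Service
// unchanged. A balance updater or exchange rate converter would follow
// the same pattern, modifying the Request or response as needed.
pub struct LoggingMiddleware<S> {
    inner: S,
}

impl<S: Service> Service for LoggingMiddleware<S> {
    type Future = S::Future;

    fn call(&mut self, request: Request) -> Self::Future {
        println!("forwarding prepare from {:?} to {:?}", request.from, request.to);
        self.inner.call(request)
    }
}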

Note that @sentientwaffle and I are also looking into whether the Request's Option types can be replaced with traits so that the compiler can statically verify that Services are being chained together correctly (for example, a Service like an outgoing HTTP client should always be passed a to account).

Connector flow

Notes:

  • All of the Services are optional and they can be strung together in different configurations
  • The BTP Server and HTTP Server implement Service as a passthrough so they can be chained (credit to @sappenin)
  • The BTP Server and BTP Outgoing Services use the same pool of open sockets. Incoming connections to the Server are added to the pool, and the pool may be instantiated with URLs to connect to
  • I was previously working with a model that involved branching services such that the Router would decide which service to pass the Request to. This got complicated and @sappenin suggested thinking of the Connector as a single chain of Services, such that each Service would decide for itself whether to pass on the Request or handle it itself.
  • If settlement messages are sent in ILP packets, there could be Services that handle those and update the balances in the data store. Alternatively, all such packets could be forwarded to a separate settlement engine.

Update: Based on input from @adrianhopebailie and @sentientwaffle, I broke the Service trait into two: IncomingService and OutgoingService.

Benefits:

  • Services that require the to field no longer need to check whether it is present in the request
  • The compiler can check whether the services are being chained together appropriately (it won’t let an OutgoingService that requires a to field be passed to a service that won’t set the to because it expects an IncomingService); see the sketch after the trait definitions below
  • Services that can be used on both the incoming and outgoing sides, such as an implementation of BTP, can more explicitly separate the functionality for both sides instead of relying on the to field to determine whether it’s being called to handle an incoming request or send an outgoing one

extern crate futures;
use futures::Future;

// Prepare, Fulfill, and Reject are the deserialized ILP packet types
// (definitions elided in this excerpt).
pub type AccountId = u64;

pub struct IncomingRequest {
    pub from: AccountId,
    pub prepare: Prepare,
}

pub struct OutgoingRequest {
    pub from: AccountId,
    pub to: AccountId,
    pub prepare: Prepare,
}

pub trait IncomingService {
    type Future: Future<Item = Fulfill, Error = Reject> + Send + 'static;

    fn handle_request(&mut self, request: IncomingRequest) -> Self::Future;
}

pub trait OutgoingService {
    type Future: Future<Item = Fulfill, Error = Reject> + Send + 'static;

    fn send_request(&mut self, request: OutgoingRequest) -> Self::Future;
}

pub type BoxedIlpFuture = Box<Future<Item = Fulfill, Error = Reject> + Send + 'static>;
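
To illustrate the compiler-checked chaining, here is a hypothetical service bridging the two traits (the routing table lookup is elided):

// A router-like service: picks the `to` account for an incoming request
// and hands an OutgoingRequest to the next (outgoing) service. The type
// system guarantees the `to` field is set before any OutgoingService
// sees the request.
pub struct Router<O> {
    next: O,
}

impl<O: OutgoingService> IncomingService for Router<O> {
    type Future = O::Future;

    fn handle_request(&mut self, request: IncomingRequest) -> Self::Future {
        // A real implementation would consult the routing table here.
        let next_hop: AccountId = 0;
        self.next.send_request(OutgoingRequest {
            from: request.from,
            to: next_hop,
            prepare: request.prepare,
        })
    }
}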

While working on a Java implementation of an ILPv4 Connector, I’ve come to a packet-flow design that’s very similar to what @emschwartz posted above under Connector Flow, except that instead of a single “pipeline”, my implementation is split into three parts (a rough sketch follows the list):

  • Incoming Pipeline: Structured as a chain of incoming LinkFilters, this handles everything related to a packet arriving on a particular link. If a packet is not handled by a filter/handler in this chain, it is handed off to the Packet Switch.
  • Packet Switch: Maps the incoming ILP packet to an outgoing ILP packet by determining the new amount+expiry, as well as the “next hop” Link that a packet should be forwarded on.
  • Outgoing Pipeline: Structured as a chain of outbound LinkFilters, this handles everything related to a Packet being sent out on an outgoing Link.
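
A rough rendering of the filter idea, sketched in Rust to match the snippets above (the actual implementation is Java, and the names are approximations):

// Placeholder ILP packet types (definitions elided).
pub struct Prepare;
pub struct Fulfill;
pub struct Reject;

// Servlet-style filter: each filter either short-circuits with its own
// response (e.g. a balance or expiry rejection) or passes the packet on
// by invoking `next`, which runs the rest of the chain and, at the end
// of the incoming pipeline, the Packet Switch.
pub trait LinkFilter {
    fn do_filter(
        &mut self,
        prepare: Prepare,
        next: &mut FnMut(Prepare) -> Result<Fulfill, Reject>,
    ) -> Result<Fulfill, Reject>;
}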

Here’s the visual: [diagram: Incoming Pipeline → Packet Switch → Outgoing Pipeline]