Codius Security and Interledger

So I’ve had some thoughts and prior discussions about whether the computational model of Codius is secure. Without any kind of verifiable computation, it’s unclear to me how you can ensure that someone actually executes your code. It seems to be the client’s responsibility to figure out whether or not the Codius endpoint is doing something malicious and then cut off payments to it.

But if you don’t leverage a secure enclave (e.g., Keystone for RISC-V or Intel SGX), then why wouldn’t I just run a Codius node and trick people into paying me? It seems trivial given that there’s nothing preventing me from creating multiple identities (no Sybil resistance), etc.

Therefore the solutions are to:

  • Federate the network somehow (unclear how you can do this without any Sybil resistance or universal identity; maybe URLs are enough, and if someone gets scammed the damage is limited because the URL will get a bad reputation?)
  • Require secure enclaves to run contracts (actually a good idea; I think it leverages end-to-end arguments better than running a full-on consensus layer for smart contracts; see the sketch after this list)
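
To make the enclave option concrete, here’s a rough sketch of the check an uploader could run before paying: verify that the host’s attestation quote chains back to the hardware vendor and that the measured code matches what was uploaded. The quote format and field names here are made up for illustration, not any real SGX or Keystone API:

```typescript
import { createHash } from "crypto";

// Hypothetical shape of an attestation quote. Real SGX/Keystone quotes are
// signed binary structures; this sketch keeps only the parts we need.
interface AttestationQuote {
  codeMeasurement: string;    // hash of the code the enclave reports running
  vendorSignatureOk: boolean; // stand-in for verifying the vendor's signature chain
}

// Should we keep streaming payments to this host? Only if the quote both
// chains back to the hardware vendor and measures exactly the code we uploaded.
function quoteMatchesContract(quote: AttestationQuote, contractCode: string): boolean {
  // A real check would parse and cryptographically verify the quote; here
  // that step is reduced to a boolean for illustration.
  if (!quote.vendorSignatureOk) return false;

  // The measurement must equal the hash of our contract, so the host can't
  // silently swap in different code.
  const expected = createHash("sha256").update(contractCode).digest("hex");
  return quote.codeMeasurement === expected;
}
```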

Am I missing something here? I know that you can have secure-enclave providers on Codius, but if you don’t require them, it’s pretty easy to fool people into paying for computations you never perform. The alternatives are slow homomorphic encryption, which heavily limits what kinds of jobs you can run, or, if you can divide the computation into samples, sort of like a map-reduce job, maybe some kind of tit-for-tat protocol.
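
For the map-reduce-style case, a tit-for-tat loop could look roughly like this: pay per batch, and spot-check one random chunk per batch before releasing the next payment. Everything here (`runLocally`, `payHost`, the chunk type) is a hypothetical stand-in, just to sketch the shape of the protocol:

```typescript
// Sketch of the tit-for-tat idea: split the job into chunks, pay per batch,
// and recompute one random chunk per batch before releasing the next payment.
type Chunk = number[];

async function payAsYouVerify(
  chunks: Chunk[],
  hostResults: number[],            // the host's claimed result per chunk
  runLocally: (c: Chunk) => number, // trusted re-execution of a single chunk
  payHost: (amount: number) => Promise<void>,
  pricePerChunk: number,
  batchSize = 10,
): Promise<boolean> {
  for (let start = 0; start < chunks.length; start += batchSize) {
    const end = Math.min(start + batchSize, chunks.length);

    // Spot-check one randomly chosen chunk from this batch.
    const probe = start + Math.floor(Math.random() * (end - start));
    if (runLocally(chunks[probe]) !== hostResults[probe]) {
      return false; // caught cheating: stop paying, as described above
    }

    // The sample checked out, so pay for the whole batch and continue.
    await payHost((end - start) * pricePerChunk);
  }
  return true;
}
```

A host that cheats on a fraction f of the chunks in a batch gets caught by the probe with probability roughly f, so sustained cheating can’t stay profitable for long.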


An important thing to note is that the client selects their host deliberately. If you pick some random host by crawling the network, it might not really be running your code. If you pick a Codius host run by a well-respected hosting company, it’s less likely to be lying to you.

The network wouldn’t scale, though, if the only viable hosts were ones with brand recognition. I predict that lots of people will create reputation services on top of Codius that rate Codius hosts by automatically uploading contracts to them and probing them to make sure they’re really running the code they claim to. A reputation service could also take some manual steps, like verifying the business running the host. The great thing is that all Codius uploaders have Interledger access, so they can easily make a paid API call to the reputation service.
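
A minimal sketch of that automated probing, assuming a hypothetical `uploadCanary` helper and an HTTP-reachable contract (neither is Codius’s actual API): upload a contract whose output the service can predict, call it, and score the host on whether the answer matches.

```typescript
// Sketch of automated host probing: upload a canary contract whose output we
// already know, call it, and score the host on whether the answer matches.
interface HostScore {
  host: string;
  passed: number;
  failed: number;
}

async function probeHost(
  score: HostScore,
  uploadCanary: (host: string) => Promise<string>, // returns the contract's URL
  expectedOutput: string,
): Promise<void> {
  try {
    const contractUrl = await uploadCanary(score.host);
    const actual = await (await fetch(contractUrl)).text();
    if (actual === expectedOutput) {
      score.passed++;
    } else {
      score.failed++; // wrong answer: the host isn't running the code it claims
    }
  } catch {
    score.failed++; // unreachable or erroring hosts count against the score too
  }
}
```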

The idea of a reputation service isn’t set in stone and isn’t part of the protocol; it’s just a natural thing that people would build on top of it.

Even with a host you trust, you might want to strengthen your security further. Maybe you don’t trust Amazon to run your code, but you would trust that Amazon, Google, and Microsoft would not all conspire to run it falsely. For that kind of problem, you would use a consensus algorithm, a multisig scheme, or multi-party computation, whichever is appropriate.

Codius isn’t like Ethereum, though. It doesn’t run on a blockchain, and it doesn’t run your code on multiple machines by default. The Codius way to run your code in multiple places is to upload it to multiple places. You need to write the consensus code yourself, but that’s something friendly libraries can solve rather than making the underlying Codius network expensive and complicated.
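
The kind of friendly library I mean could be as small as this sketch: query every host you uploaded to and accept only an answer that a strict majority agree on. `queryHost` is a hypothetical stand-in for however you call your contract:

```typescript
// Sketch of client-side consensus across your own uploads: ask every host the
// same question and accept only an answer a strict majority agrees on.
async function majorityAnswer(
  hosts: string[],
  queryHost: (host: string) => Promise<string>,
): Promise<string | null> {
  const answers = await Promise.all(hosts.map(queryHost));

  // Tally identical answers.
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);

  // Accept an answer only if more than half of the hosts returned it.
  for (const [answer, count] of counts) {
    if (count > hosts.length / 2) return answer;
  }
  return null; // no majority: treat the result as untrusted
}
```

With the three hosts above, two of them would have to collude to forge a majority answer.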

That’s pretty much what we’re going off of: if you want people to trust you, then you should use the same domain and build up trust.

This isn’t mutually exclusive with the existing way that it works. If you can prove you’re running the code you say you’re running, then a reputation service or an individual user could take that into account when deciding whether to upload to you.

I don’t think that we should require secure hardware because it creates a single point of failure.


I’m not convinced of Truebit’s real-world practicality specifically, but some form of compute arbitration at the application layer could definitely provide a measure of confidence. If the arbitrator is central/trusted or small-scale multiparty (such as the Google/AWS/Microsoft example above), you obviously have the same confidence that arbitration is unbiased as in the current cloud-computing status quo. And if you need more certainty, using a trustless computer like Ethereum as the arbitrator works well. The annoying part is that an algorithm’s efficiency/worst-case runtime still has to factor in arbitration, which is network/latency-constrained and extremely slow (in practice if not mathematically) under the Truebit model, even with binary execution search.

That’s why in practice I think a k-peers model (where k nodes run the same job, with k acting as a paid-for confidence slider) may be more useful, despite being more exposed to collusion on unencrypted or Sybil jobs. It would basically allow per-job consensus computers at customized scale.
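
To put rough numbers on that slider: if each host were independently malicious with probability f (and independence is exactly what Sybil identities break), the chance that a majority of the k replicas can collude on the same wrong answer is just a binomial tail:

```typescript
// Pricing the confidence slider: assuming each host is independently malicious
// with probability f, the chance that a majority of k replicas agree on a
// wrong answer is a binomial tail that shrinks quickly as k grows.
function binomial(n: number, k: number): number {
  let result = 1;
  for (let i = 1; i <= k; i++) result = (result * (n - i + 1)) / i;
  return result;
}

function majorityCollusionProbability(k: number, f: number): number {
  const majority = Math.floor(k / 2) + 1;
  let p = 0;
  for (let m = majority; m <= k; m++) {
    p += binomial(k, m) * f ** m * (1 - f) ** (k - m);
  }
  return p;
}

// e.g. with f = 0.1: k = 3 gives ~2.8%, k = 5 gives ~0.9%, k = 7 gives ~0.27%
```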

My intuition is that this all essentially reduces to the Halting Problem, which is why I don’t think there will be a “clean” solution barring a major theoretical breakthrough.

Interestingly, though, the quicker/more restrictive forms of homomorphic encryption may be naturally verifiable, and I don’t think they should be so readily dismissed: faulty computation simply returns gibberish, no?