SLP241 Bunnie — Precursor: Open Source Hardware — (Feat. Nicolas Dorier as guest host)

Stephen Chow
44 min read · Jan 22, 2021

Link to the YouTube audio: https://youtu.be/b4RHeGUDThM

Stephan Livera: Bunnie and Nicolas, welcome to the show!

Bunnie: Hey! Great to be here. Thanks for having us!

Nicolas Dorier: Thanks a lot!

Stephan Livera: Great! My listeners are already very familiar with Nicolas, and Nicolas is gonna be a guest co-host this episode. And Bunnie, let’s hear a little bit about you? Where did you come from? What was your background?

Bunnie: Yeah I guess you could say I’m more of an electrical engineering background. I do a lot of open hardware work. I’ve designed things ranging from 802.11 Wi-Fi chips all the way to nanophotonic integrated circuits in the past. And lately I’ve been looking a lot at building secure open source systems, things that you can trust. Trying to figure out how deep the rabbit hole goes, because — since I have a lot of background in doing things like integrated circuits and building technology at that level — I have a sort of grasp of how little control we have over what happens at certain lower layers of technology. And so the latest project I have going on is Precursor, which is an attempt to try and wrestle back some of that control, trying to peel back some of those layers and get them back into the user’s hands.

Nicolas Dorier [04:45]: For a Bitcoiner I think a nice way as well to present Bunnie is as the Pieter Wuille of open hardware, basically! The only difference is that there isn’t a Pieter Wuille facts website about him yet, but maybe that will come!

Stephan Livera [04:57]: So I think one interesting question — and I know you’ve done a talk on this question — is around how we might initially at the first glance think, Oh! Okay, open source means it’s more secure and we can trust it more! And yet I think you were basically dispelling that notion saying, Well actually, there’s a lot of things we don’t necessarily — that doesn’t go all the way! Why is that?

Bunnie [05:18]: So open source is a necessary but not sufficient condition for trust. The problem at the end of the day: imagine you can get the source code for your browser and you can compile it. And even if you can compile it, do you really know what’s going on on the inside? You really don’t! Because there’s just like so many millions of lines of code on the inside, and all it takes is one bad line of code to create an .EXE that could steal your private keys, right? And so the other half of trust is it can’t be something so complicated that it’s intractable for you to audit or to understand at the end of the day. [05:51] So trustability almost becomes a property of a system that you have to design in from the top. When you talk about trustability from this absolute, self-sovereign standpoint, [contrast it with] trust from the standpoint of like, Oh this is an Apple iPhone — I trust it because of a brand! Right? That in and of itself is more of like a faith, it’s not a fact-based trust. It’s based upon social opinions and repercussions if they violate the trust. Oh, it’s against their corporate interest currently to violate the trust, whatever it is — because of the branding and whatnot. But that’s not a brass-tacks, physics-based trust. When I’m talking about trust from this standpoint — if you want to have an evidence-based trust, based on things that you can measure with your own two eyes and own two hands and convince yourself of from the bottom up — a system has to be designed to be simple enough to facilitate that within a humanly-reasonable amount of time! To say that you’re gonna spend 5 years auditing your browser’s source code is not feasible, because 5 years from now you’re gonna have patches that are gonna be absolutely essential for you to be secure anyways, and for you to audit those patches will take another 7 months or whatever it is. So you have this eternal cycle of patches you have to keep looking at, and you never get any work done, and also you never really catch up with the current status quo. And that’s one of the problems with having an open source system that is too complicated. It’s not physically possible to get to a point where you can trust everything inside of it!

Nicolas Dorier [07:14]: An interesting point you’re making — you mention that most people see security as trust in a label. On my side I got this kind of experience as well in the banking industry for custody, where it doesn’t matter at what level the thing is actually secure. Basically, at the end of the day, they trust the label. Like some piece of equipment that they have been using for 20 years and [that] has never failed them. It’s this kind of trust in the label that consumers have, but I think that you have big audiences that don’t trust the label — one of them of course is the Bitcoiners! But I think another type you actually mentioned is whistleblowers like Edward Snowden or the like, who really wish for this kind of device, right?

Bunnie [08:00]: That’s absolutely right! And so in a way, at a sociological level, trusting labels is a thing that we almost have to do, because we don’t have enough time in the day to look at absolutely everything in our life! This is a little bit tangential, but another thing I worry about is my food! How do I know there aren’t heavy metals on the inside, or some pesticide or something that can make me sick? You can worry yourself sick over these types of things and never eat, but at some point I just have to trust that this restaurant is gonna serve me good food! And maybe the argument is that the chef eats the food too! At the very least they’ll be as sick as I will be, and they have a self-interest in it, right? That’s more of a sociological-type trust. There’s a point where it’s appropriate and you have to fall back on it, but I feel like on the technology front, we’ve just surrendered so much to black boxes with no explanation of what’s inside! And the other key point is that the trust system can come unravelled so quickly with a single backdoor patch. It’s just castles built upon sand right now!

Stephan Livera [09:04]: Yeah I think that’s a really interesting way to put it. So in the Bitcoin world we have a saying, “Don’t Trust, Verify,” but as you say, there’s only so much that’s practical for the average person, the normal person out there. And maybe you’ll get someone who’s a little bit more enthusiastic and they’re willing to go through things. And in the software world, that’s things like compiling the software or doing GPG verify to check that the signature matches and doing some level of what we might call Web of Trust or social networks and saying, Oh okay! Nicolas Dorier is a well-known guy and he signed off on this thing and I know these other guys signed off there! But I think the interesting point is that it’s not so easy to do with hardware, right?

Bunnie [09:44]: Yeah there’s a couple of levels of issues in hardware. One is obviously, do you even trust the base design itself? Like, does this piece of hardware — the manufacturer that makes it — are they reliable? Does it have the features they say it does? Does it not have extra features they didn’t tell you about? These are sort of the issues you have. And then you have the specific instance of the hardware that you’re working with. And this is a key point of difference between software and hardware. In software, you have cryptographic tools to transfer trust. We can hash things and we can sign them and we can verify them very quickly at the point of use. So if Nicolas goes ahead and signs off on a package and says, I’ve looked at the source code and this is good and you can use it — you can reproduce his efforts by hashing it and checking the signature, and you can say, Well, at least I know that I have exactly that piece of code that he signed off on and I’m using that. When it comes to hardware, you can’t do that! There’s no hash function for hardware. There’s no easy way for you to snap a finger and say, this is exactly the Bitcoin wallet that the manufacturer intended me to have! You know, what if someone put a small chip on the inside to record your keystrokes? What if there’s a little exfiltration device on the inside to exfiltrate your keys or whatever it is? These types of things do happen — they sound a little crazy, but there were recently some presentations at this year’s CCC where they showed some, again, live implants in the wild, like just people finding them inside of their [secure?] phones and whatnot. They do happen! And that’s also a level of thing to worry about, particularly when you start talking about things like Bitcoin, which have inherent value in and of themselves.
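To make the hash-and-sign trust transfer Bunnie describes concrete, here is a minimal sketch in Python. The file name, the digest, and the reviewer key are hypothetical, and the third-party `cryptography` package stands in for what would typically be GPG-signed releases in practice:

```python
# Minimal sketch of software trust transfer: hash the exact artifact you
# received, then check a reviewer's signature over that hash.
# Hypothetical example; real release flows typically use GPG signatures.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_release(artifact_path: str, signature: bytes,
                   reviewer_pubkey: Ed25519PublicKey) -> bytes:
    data = open(artifact_path, "rb").read()
    digest = hashlib.sha256(data).digest()
    # Raises InvalidSignature unless the reviewer signed this exact digest,
    # i.e. unless we hold byte-for-byte the artifact they audited.
    reviewer_pubkey.verify(signature, digest)
    return data  # use only these verified bytes
```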

Nicolas Dorier [11:18]: I remember in one of your talks you were really — I think it was a talk you made at the beginning of this year where you were explaining this trust problem. It was very well said! And basically your goal seemed to [be to] move the verification of hardware at the place of use. So how far do you feel you are from this ideal with the Precursor and also with the Betrusted environment that you are working on right now?

Bunnie [11:47]: Right. So what Nicolas is referring to is that, in the security world we have this phrase called the Time of Check versus Time of Use problem. So if you go ahead and check that your browser was okay on the server, and then you went ahead and downloaded it and someone man-in-the-middled your download and you didn’t check it again — that’s a Time of Check versus Time of Use problem. It was okay on the server, but the Time of Use is actually after you download it — someone can go ahead and modify it before you use it. And so in software there’s a well-known principle: you always try to check it right before you use it, so the exact copy you’re gonna run is the thing that you’ve hashed and checked against the signature! Not some other copy in the cloud or on your disk. [12:21] In hardware there’s a similar concept: there’s a Place of Check versus a Place of Use. So one of the more laughable suggestions I’ve seen is that people talk about, Oh, these sovereign fabs! Let’s build a foundry on our country’s soil, and that will help solve this problem of trust in the supply chain! It’s laudable on a number of fronts, but I think it’s more of a political stunt to try and create local jobs. Which is fine, you can do that! But the reason why it doesn’t help the Place of Check, Place of Use problem is that it’s been established that a common vector for exploits is the couriers who deliver the package, or — if you’re buying off of a distribution site like Amazon — the customers who buy it, tamper with it, and then return it! So people can buy, like, a hard drive, remove the tamper-evidence seals, modify the firmware, implant something inside, restore the seals, and then send it back to Amazon. And then the distributor’s like, Okay! Well, they returned the product! All the seals are in place! We’ll just resell this to someone else! You can’t target who it goes to, but the fact of the matter is you’re starting to put these devices out which have these implants on the inside. [13:22] And these are vectors that are completely close to the end-user. They have nothing to do with where the chips are made or where the factories are located or whatever it is! So you spend billions of dollars on a sovereign fab, and at the end of the day the Place of Check is not the Place of Use. You still have this big disconnect in terms of a very trivial exploit path. And when I say trivial — part of the problem is, if you think about the incentives involved: you’re trying to ship, say, a Bitcoin wallet, and you want to secure $1 Million of Bitcoin on it, and the person delivering it to you — how much are they being paid? What’s their stake in guaranteeing the integrity of that package? That person — what’s their vulnerability to a bribe, or to being distracted? The stakeholders don’t have aligned interests in terms of getting you the level of integrity of the hardware that you’re expecting to secure your secrets with. And so this becomes the existential problem that Precursor is trying to deal with. I haven’t mentioned in detail what Precursor is yet on the podcast! For listeners who don’t know, Precursor is an open hardware development platform for secure applications. And its goal is to address these issues we’ve been talking about so far in this podcast. You can go to precursor.dev — that’s a website and it’ll bounce you to our current crowdfunding site, where we just closed a round of crowdfunding for it. And we plan on shipping the product in the late part of this year, 2021.
And the idea is to explore the primitives and the necessary tools that we need to close this Time of Check to Time of Use gap. One of the cool parts about Precursor is that we use what’s called a Field-Programmable Gate Array (FPGA) as the computing element, which means that instead of trusting a third party to create your CPU — which has your instructions, which has your encodings, which has all the debug backdoors, all this sort of stuff — normally we just trust ARM or Intel or someone like this to create this CPU and we just take it as a black box! Crazy that we take it as a black box, but we all do! We don’t know what hidden instructions are on the inside of it, right? Instead of that, you’re given the source code description of a RISC-V CPU, and you can compile it yourself into the gates on the FPGA and then load it onto the device yourself, if you want to get to that level of trust. It takes a little more effort than using the stuff right out of the box, but the point is that you can do it — you are empowered to do it. And then people say, Well, but of course, what if the FPGA itself has a piece of closed source silicon? You know, it’s turtles all the way down! This is true. It’s true that there are still attack surfaces on everything — you can’t seal all the doors. But we’ve really moved the goal posts forward! Because in the case of just getting a closed source CPU blob, someone could put a whole hidden instruction on the inside that just sort of socks away some bytes of memory into a small register somewhere inside the chip, and it’d be very hard to discover this! You know, you could put a lock on it so you can’t fuzz it out, so it can only be activated with certain instruction sequences — whatever it is. And it doesn’t take a lot of gates to do that sort of thing, right? With the source code, at least with our device, you can look at it and see [that] none of these things exist! [16:18] The instruction decoders are decoding only the instructions we say they are! No more, no less! And they’re doing it properly! And then once you compile it down to the gates on the FPGA, the compilation process itself is a little bit random. We can’t predict where the critical bits will land inside of this several-hundred-thousand-gate device. And because of that unpredictability, it makes it very difficult for someone to implant, a priori, something that will pull out that one critical bit. So any implant that tries to replicate what would be a trivial attack on a closed source piece of silicon now becomes a very large job — you’d almost have to implant every bit of your RAM, as opposed to just looking at a single word bank at a well-known location. Now we’re talking about orders of magnitude [greater] difficulty in terms of executing implants, and detectability, on fundamental parts such as the CPU and the structure and the integrity of it.
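The Time-of-Check versus Time-of-Use gap in software is easy to see in code. A toy Python sketch, with a hypothetical published digest standing in for a signed release hash:

```python
# Toy illustration of Time of Check vs Time of Use (TOCTOU).
# KNOWN_GOOD is a hypothetical digest published by the project.
import hashlib

KNOWN_GOOD = "0" * 64  # placeholder hex digest for illustration

def unsafe_use(path: str) -> bytes:
    # BAD: check the file, then open it again later. An attacker can
    # swap the file in the window between the check and the second read.
    assert hashlib.sha256(open(path, "rb").read()).hexdigest() == KNOWN_GOOD
    return open(path, "rb").read()  # this second read is unchecked

def safe_use(path: str) -> bytes:
    # GOOD: read once, hash the exact bytes you are about to use.
    data = open(path, "rb").read()
    if hashlib.sha256(data).hexdigest() != KNOWN_GOOD:
        raise ValueError("artifact does not match the checked version")
    return data
```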

Stephan Livera [17:16]: So if I’ve understood you correctly then, you’re saying that essentially, in the current world with ARM and Intel and so on, a lot of it is closed source and [it] would not be apparent from a visual inspection. And so what you’re saying is, this is taking us one step closer to that vision of: the end-user is able to more clearly detect if there’s something different about the device because — as I understand you, I might get this part wrong—but the memory allocation is more random and therefore that just necessitates a bigger change to the physical device?

Bunnie: [17:52]: Yeah! I’m covering a lot of ground in a short period of time here! Let’s tease apart two aspects of the design: (1) is the physical device itself and (2) is what’s inside the actual chips. So let’s start with the physical device itself — this is actually a little bit of a side jog from when we were talking about Intel and AMD. First you have to trust that the physical device itself is correct before we talk about anything else. Precursor is made to facilitate that type of inspection — it’s very simply constructed: a single-sided circuit board. We give you reference images of it, so you don’t have to be technical to understand what you’re looking at: you just have to visually do a diff between the two things to see if anything’s different. And it’s simply constructed, so it’s not like you need crazy amounts of microscopes or whatever to do that type of thing. But that only gets you as far as a very gross, physical, like, What is this extra little black blob here with wires on it? Is that supposed to be on there? No! Okay, so that must be an implant on the microphone or something like this. That’s the kind of thing you can tell at that level. Now, when you get to the CPU level, that happens all inside those little black blobs of epoxy! [19:02] And in order to inspect those, you would have to have a desktop-sized microscope that costs several million dollars. It’s not practical for end-users to do that. And so our answer to that is that we turn the CPU into basically a software description of a CPU. So all of those tricks we talked about in terms of signing and hashing and so forth — we could have a third-party auditor look at our CPU code, the description of our architecture, our equivalent of an Intel or ARM CPU description. We give it to you at a level [where] someone like Nicolas could look at it. He could sign off and say, This is good! You can take that code and confirm its integrity all the way to the point of your house. And then you can compile that code yourself into the gates of one of these black boxes. Now, we’re all admitting that we can’t know what’s inside the black box, but — and here is where the key trick happens — inside the black box, when we give you our gates, it’s a generic sea of gates! There are 30,000–40,000 LUTs on the inside, and we don’t know what does what! They’re all just generic — they do simple operations like Adds and Subtracts — a little bit simpler than that, but let’s just say it’s at that level. And then we have a program that maps our description of the CPU to those LUTs (look-up tables) on the inside. And that mapping is pseudo-random: very small changes in the source move everything around. We basically put a random seed on the inside — that’s the trick that we use. And it foils people’s attempts to take that generic sea of gates and bias it in a way that would allow someone to exploit the code that we put on the inside. That’s the basic argument. So we take the hard, gnarly hardware problem of knowing what’s inside of our CPU, we turn it into a software problem, and we put it into a generic sea of gates which has a mapping property that makes it difficult for someone, in advance, to predict where your exact implementation will go. And that’s where the security property comes from. [21:04] If you [run] the numbers, it becomes cryptographically secure at the complexity levels that we’re talking about.
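The seeded-mapping argument can be illustrated with a toy model. The numbers and the plain shuffle below are stand-ins, not the real FPGA place-and-route flow, but they show why an implant wired to fixed LUT locations cannot count on finding any particular bit of the design:

```python
# Toy model of seed-dependent placement: the same netlist lands on
# different LUT sites for every build seed. Illustrative numbers only.
import random

N_LUTS = 40_000                                     # generic sea of gates
netlist = [f"cpu_cell_{i}" for i in range(25_000)]  # hypothetical design cells

def place(cells, seed):
    sites = list(range(N_LUTS))
    random.Random(seed).shuffle(sites)   # seed-dependent mapping
    return dict(zip(cells, sites))       # cell -> LUT site

a = place(netlist, seed=1)
b = place(netlist, seed=2)
moved = sum(a[c] != b[c] for c in netlist)
print(f"{moved}/{len(netlist)} cells moved between the two builds")
# A pre-planted tap at a fixed site would have to cover essentially every
# site to reliably capture one "critical bit" such as a key register.
```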

Nicolas Dorier [21:13]: For [the listeners], an FPGA basically is like the 3D printer of electronics. You have this, and you just fit it out [with] any source code you want and make it do whatever you want — in this case, making a CPU out of it. [One] FPGA looks like anybody [else’s]. It’s really harder — at the component level — to pull off some kind of attack in the middle, because when the FPGA ships it can be used for anything!

Bunnie [21:42]: That’s a really good example: a 3D printer for hardware! I mean it’s not physically extruding things but it’s a very similar concept.

Nicolas Dorier: What I really saw as well with FPGA is that, for the first time, [I too] can learn how to make my own CPU, and I think it’s very powerful! It’s a really powerful concept! It’s something that cannot be stopped! With FPGAs there are a lot of [manufacturers] everywhere in the world, so from a political standpoint, say, the US can prevent China from making CPUs and try to impose tariffs on CPUs from some specific manufacturer in China, but when we are talking about FPGAs — since FPGAs are kind of a commodity hardware — they can be manufactured almost anywhere. That touches on another topic that I want to get to later. It will make the Precursor more easily sourceable, because basically the FPGAs [can be] produced anywhere, right?

Bunnie [22:39]: I think [at a] meta-level you’re correct! In practice, we have to design in a single chip, because we have to make a circuit board at the end of the day and validate it. There are other laws we have to deal with, like EMC compliance — we can’t just swap out chips and then use the same FCC designation. There are other complications!

Nicolas Dorier: But somebody that understands Precursor will be able to do this though!

Bunnie [23:03]: Absolutely! The key point is: even if my venture were to fail, or if I were for some reason to cease to exist tomorrow, someone else could pick it up. The source code is there; they can pick their favorite FPGA, re-lay-out the core design in a matter of days if not weeks, and you’d be up and running again on a different FPGA! I think your point — that there’s no single point of failure in the chain that is hard to work around — is very valid! Whereas right now, if you wanted an alternative to Intel and AMD, it’s really hard to get a functioning alternative!

Nicolas Dorier [23:38]: There was also one point I wanted to know: in current Bitcoin wallets, lots of hardware wallets are using what’s called secure elements, and apparently those secure elements are kind of a big blob of, “Trust me, don’t verify.” On your side, how do you protect those kinds of secrets without relying on such blobs that actually can be very easily targeted because for sure it will be doing something secret, so it’s a very [large] honeypot of [a] backdoor!

Bunnie [24:07]: That’s a really good question! So now there’s a third or fourth dimension of security, which is tamper-resistance. Up until this point we were talking about, Have we taken the red pill yet? Basically — are we in reality? If someone gives you a piece of hardware that’s been tampered with, you’re sort of in the Matrix! You can’t break out of it — there’s nothing you can do! The existential question that Precursor has been trying to answer is, Are we living in the real world, or has someone put us in a cage and we don’t even realize it? The second aspect is, once you know you’re in the real world, how do you know that you’re continuing to live in that world and that someone hasn’t tampered with it? And also, can you basically leave your wallet somewhere, walk away, and know that — even if someone stole it — your secrets are safe? That’s the tamper-resistance aspect. I can go into this for days! This is a huge subject of research in the silicon industry. But as a general principle: if you talk to anyone who has practiced extraction of secrets from silicon — who has pulled out keys from silicon, and I’ve done this, and some of my colleagues have done this — our general opinion is that even the super-ultra-secure tamper-resistant guys will hold up maybe a day or two against a well-equipped lab. There’s just no amount of — and if you get into the circles and you start talking [with] the guys about the unpublished work, your jaw would drop! The most powerful technique recently is called Backside Imaging. It used to be that you had to attack from the topside down and you had to [strip] the wires — it was actually a very difficult attack! Now, a lot of chips these days are thinned down before they’re put in the device, to make the devices thinner. The chips are so thin that you can actually see the parasitic photons that the transistors emit coming out of the backside of the chip! So the chip, as mounted — if you just look at it with a very sensitive camera, and this is cryogenically cooled stuff, so it’s not everyday equipment — you can actually see the patterns of bits firing in real time without doing anything! You just turn it on and you can see the side-channel coming out! And because of this, even the most secure chips with all of these — they talk about secure meshes on top and PUFs (physically unclonable functions) — you can just turn them on! They’re just emitting, out the backside, all their secrets! Right? You just have to have a good enough device, [decapped] well enough to see it. So this is where we’re at today! [These are the] tamper-resistance issues that we have today! On that front, I have an article that I wrote as part of my Precursor campaign that’s a deep dive into this. When you realize how bad the situation is — the problem is that we’ve had such a long-running streak of untruth in advertising here that people don’t even know how scared they should be! [26:36] So the goal I have with Precursor is to be brutally honest as to what we can and cannot do. So I took my own design — in another one of my lives I’ve done things like extract the security keys from the original Xbox; I’ve done a lot of security work in the past — I took my best crack at my own design, and of course I found a problem! This is not a surprise, and it turns out that the strength of Precursor from a tamper-resistance standpoint is as strong as the epoxy that you can put on it.
Basically, if you can break off the epoxy and the metal shield — and you’re a skilled attacker — you can extract the keys with enough time. A couple of hours. But at this point you’ll definitely know that someone has attacked it — you would see the evidence of the attack. It does take some time. But I would say that’s roughly equivalent to the security of a lot of devices on offer. And honestly, people say, Oh, my smartphone is so secure! For a lot of them you just plug in a USB cable, run the software — it’s jailbroken. That’s a zero-touch break. [27:33] There’s no evidence of any sort of tampering with that type of break. So at least [the Precursor] doesn’t have any issues that are that bad, but yeah, if someone physically took your device and they had a specially designed CNC mill to remove the epoxy from it, they could read out the keys. To react to that, I introduced a self-destruct mechanism inside the device. If you are very paranoid, you can program what are called Volatile Keys — they’re RAM-backed. Unfortunately, that really doesn’t fit the use-case of archival storage of high-value secrets like Bitcoin, because the problem is that if you forget to charge your battery, you lose everything in the wallet! It’s not the right solution for that. But, for example, if you have a device that’s for communicating with friends and you want to keep it very secure, and if it gets into the wrong hands you want to make sure all the comms are wiped — that’s actually perfect, because the moment the battery is not charged anymore, all the keys are gone and everything on the device is unreadable. So that gives you a bounded sort of risk on very volatile, high-value secrets. But the problem in Bitcoin of storing long-term, non-volatile secrets — like your wallet keys, that type of thing — that’s a very hard problem!

Stephan Livera [28:46]: Yeah so it’s almost like a dead man’s switch, if you will. So I guess we could talk a little bit about the potential applications of Precursor. I think you’ve mentioned that it could be like a password manager or a Bitcoin wallet or some kind of messaging platform. Are these basically the main uses that you can think of?

Bunnie: Right! So basically Precursor itself came from a project that was originally called Betrusted. And the genesis of it was that I actually did a research paper with Ed Snowden on the Introspection Engine, which was an attempt to convert an iPhone into a trustable device by monitoring its radios and saying, Okay, well, if the radio is not behaving in the way that I expect, then the iPhone’s been compromised and therefore you should burn it, break it, or throw it away. The outcome of that paper was that the iPhone was just too complex! There were too many false signals coming out; we couldn’t create a reliable, fool-proof trigger that wouldn’t eventually make you just ignore all [the] alarms. The vendor itself would — you know, the iPhone would turn on even in Airplane Mode to do scans of the environment! We don’t know why — apparently it’s vendor-sanctioned behavior! Apple doesn’t comment on it, but that’s just how it is, right? So when you have this situation where vendors are doing things that are already sketchy — how are you supposed to find malware in that situation, right? So I thought about that for a long time and I was like, Look, we really need to solve the problem! And this is where I started arriving at the principles of simplicity, openness — these types of things. That effort was the genesis of a lot of these principles. And so the direct target of that has always been supporting activists, journalists, people who are working in dangerous situations who need very high levels of security, who are up against state-level actors who have very little penalty for killing the person in question and a lot of motivation to do it! We’re talking about high-stakes poker here, right? [30:25] Which is why the Precursor design is so incredibly paranoid! The goal was to make a product called Betrusted, and we wanted it to be just out-of-the-box simple. Safe defaults — you wouldn’t have to be a tech wizard to use it. Basically, if you had one of these devices [and] you could do a basic amount of inspection on it, you knew you were okay! Precursor came about because we got the hardware of that finished, but the software for it is still a long way off! Software, it turns out, is very hard, it’s very complicated, and that’s where all the bugs start to come in. That’s where all the scary, unknowable complexities start to happen! And writing really good software that’s simple, auditable, and understandable takes a very long time! And so we made the decision that there are other people in other communities who may benefit from a piece of hardware that’s designed to this level of security and paranoia. For example, people who just want to manage passwords. People who want authenticator tokens they can trust — true second factors. People who want Bitcoin wallets. These types of applications could probably use the same hardware platform that we’ve created. And so we branded it Precursor to make it very clear that it is a predecessor — it’s a harbinger of other things. It’s something that you add your code to [in order] to become that application. And that’s why we bill it as an open source hardware development platform. It’s not a product in and of itself! And the idea is: put it out there and maybe other communities will find things — this is part of the reason why I’m on your podcast! To let your people know about this possibility, and maybe get interest, and see what things I didn’t expect could be useful to run on a platform like Precursor!

Stephan Livera [33:25]: I’m curious as well: so other people are interested in these kinds of things, they might also have looked at other projects like the Librem 5 or the PinePhone. How would Precursor distinguish from those kinds of projects?

Bunnie: So the Librem 5 and the PinePhone — those are wonderful open source projects, and they do have open source down to the schematic level of the hardware. But their problem is [that] they still have the CPU problem that I described to you, in that — I think they’re ARM-based CPUs from vendors that don’t share the source code of the CPU. So there’s a layer there that they can’t get into in terms of trust and reliability. And also they aren’t on the reduced-attack-surface train — they’re not about the simplified trust-inspectability. They’re more looking to replace your full-featured smartphone with a device that has a better trustability aspect to it. But you still have the complexity problem in the devices. And that’s a deliberate choice [that] users should be allowed to make! Like, I don’t think that you can live your life practically in this complicated technological world with a device that is so simple that you could inspect it! There’s just sort of this Faustian choice that we have to make as humans: we don’t have enough time in our lifetimes to learn all of technology, but we need to learn enough to be able to secure ourselves! So the dichotomy I’m approaching — at least personally, what I do is: there’s a certain set of things that I do on devices where I don’t know if they have something on the inside, but it also doesn’t matter! Like, why not? Browsers are convenient, Intel computers are cheap — let’s go ahead and use these things and do day-to-day things that don’t matter. But then the things that I really do care about — the things that are risky and high-value — I put into this [other] device that I trust. That’s the dichotomy that I’m trying to move towards. So you actually would imagine a Librem or a PinePhone working hand-in-hand with a Precursor device. You would actually carry both of them: one of them for your daily browser, and the other one for your security keys.

Nicolas Dorier [35:29]: One thing I want to come back to is, you were saying that a motivated attacker basically could probably find a way of reading out the secrets. But I saw that you plan as well to have a custom-made file system for the Precursor with plausible deniability. And actually that’s a very familiar concept for Bitcoiners, because one thing Bitcoiners are doing to protect themselves from extraction of secrets directly from the device is that they choose a password on top of it. So to get the private key, you basically need what is written on the device plus this password, and the password is never stored. But the nice thing as well is that in our protocol, if you put in a different password you get a different wallet — that’s how we get plausible deniability. If I understand you, you are doing it at the OS level, right?

Bunnie [36:24]: Yes! That’s right. Absolutely. So to clarify, we also fully intend that you would have password unlocks. We don’t rely fully on the device’s security for anything at the end of the day. So yes, the secondary passwords are important, and yes, we also intend to implement a tree of passwords, if you will, so that the secrets unravel over time. And I said the software’s really hard! — you know, plausible deniability is a very hard problem, and we are actually going all the way down to the file system level and re-implementing the file system so that even an attacker who has the ability to constantly read out your raw disk image could not tell that there were certain secrets on the inside. We make the free space of your disk look exactly like erased files. And so if you forget your password, you literally have deleted the file. It’s literally the same operation. When you request the operating system to delete a file, all it will do is re-encrypt it to a random key and then delete the key. That’s what it does, right? The other trick that we plan to do is that, instead of every file being like — normally, when you’re managing a file, it has a certain size, you open it, you scan through the file to a position, you read it — that’s the [POSIX] abstraction for a file. Our files are actually databases on the inside! So once you refer to them, you open them, they have keys, and you can sort of query them for the contents. What this allows us to do is that, as you type in more passwords to your system, we can actually merge in secure overlays. So your contact book — when you type in your initial password — will just have your everyday contacts: your Mom, your girlfriend, whatever it is, things that everyone knows that you have, right? And then, if you type in another password, it actually can — across all your files, all your applications — add an overlay of the contacts linked to that particular password. That’s the goal! And in addition to that, you don’t have to reload your applications or whatever it is — you just see a new set of chat history, and new things just appear. When you clear that password, they go away. And you can have multiple passwords set. And so the idea is that at some point in time, if someone says, I force you to show me these passwords. You have to unlock your device to enter this country, and show me everything! — you enter the passwords that you remember at the time, you hand it to them, and that’s it! They can’t tell whether there are more passwords or not. They could do a low-level disk scan — they won’t find any evidence that there are files that are not decrypted, or things that aren’t being accessed, because that’s what the free space looks like! All the free space looks like a file that you might have forgotten the password to. That’s kind of the way we’re trying to structure the operating system, but it’s really hard! Like, this is actually a very hard cryptographic problem, and it’s going to take a long time for us to write it.
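The delete-is-just-key-erasure idea is simple to sketch. Here is a toy Python illustration of the principle only; it assumes the third-party `cryptography` package, and the real filesystem described above is far more involved:

```python
# Toy store where "delete" never touches the data blocks: it only
# forgets the key, leaving ciphertext indistinguishable from random
# free space. Illustration only, not the real Precursor filesystem.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyStore:
    def __init__(self):
        self.blocks = {}  # block_id -> (nonce, ciphertext)
        self.keys = {}    # block_id -> key (would live in a keystore)

    def write(self, block_id: str, plaintext: bytes) -> None:
        key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
        self.blocks[block_id] = (nonce, AESGCM(key).encrypt(nonce, plaintext, None))
        self.keys[block_id] = key

    def read(self, block_id: str) -> bytes:
        nonce, ct = self.blocks[block_id]
        return AESGCM(self.keys[block_id]).decrypt(nonce, ct, None)

    def delete(self, block_id: str) -> None:
        # Forgetting the key is the whole deletion; the ciphertext left
        # behind looks exactly like every other patch of free space.
        del self.keys[block_id]
```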

Nicolas Dorier [38:57]: Yeah, I was surprised that you’re doing this, because even creating such a file system, I would expect, is a huge endeavor! I think the first one was done by Julian Assange actually, if I am correct. And I think there have been many other [attempts] since, but I don’t see any other that broke through. So I was surprised that you are basically developing that by yourself as well?

Bunnie: Our core team is on it! I spent the last 2 years developing the hardware. Hopefully the hardware stays relatively stable now, and the next 2 years I’m gonna spend hashing out a lot of the software issues. There’s another guy, Sean Cross, who’s been absolutely instrumental. He’s been writing all of the core OS stuff, and we have a couple other contributors who are working on the graphics and rendering side of things. We’re hoping to get more of the community to start contributing to the software. But yeah it takes time! I mean this is just a full admission that this is hard to do and it takes time, which is why Precursor is a thing and Betrusted is not!

Nicolas Dorier [40:00]: I think you really want developers to start using what you’re doing, basically. And as a developer right now — imagine I really want to make a Bitcoin wallet! Like, my only way of doing that right now would be to wait for the Precursor, right?

Bunnie: I mean, there’s a number of ways that you can do it! So for Precursor itself, we actually have a software emulator for it online — if you go to our repo and you look for https://github.com/betrusted-io/xous-core — the operating system itself is written to run in what’s called Hosted Mode, so you can either do it in a full hardware emulator, or you can do it with half of it based in a UI rendering running on your machine. It’s really primitive — all we have is message passing going on right now inside of it. But we’re trying to facilitate some development in that area early on. But you don’t even have to use Xous, our operating system. At its core it’s a RISC-V CPU, and you could port almost any other platform that you want to it! And, more practically speaking, I imagine the early Bitcoin people probably don’t want to wait for us to implement all of these fancy plausible-deniability features and all of our fancy multi-lingual input methods. Like, we’re putting a lot of effort in before we even get to the application level! In which case, I fully expect someone is just gonna port some RTOS to it, or Linux or whatever it is, and you’re off to the races! It’s like any other platform. The problem is that when you port one of those RTOSs to it, you’re exposed to all of the vulnerabilities in those real-time operating systems! The trust level — they’re pretty good, actually! I mean, honestly, they’re probably okay, but we’re writing everything for our system in Rust, so we get better guarantees on memory safety and whatnot. A lot of RTOSs are in C, so if you have buffer overruns — good luck! There are these types of security properties that you have to deal with. Honestly, if you want to port an RTOS, one I’m very familiar with is called ChibiOS — you could probably do it in a couple, three days if you’re an experienced OS porter. And you could have enough functionality to build a Bitcoin wallet pretty simply on top of it! So I kind of expect that’s what’s gotta happen before we get to our level of security on the software side.

Stephan Livera [42:10]: So in terms of the device, it doesn’t have a cellular modem, right? So primarily, out of the box, you would use it with Wi-Fi? Or how would that part of it work? Or is it mainly like you plug it in?

Bunnie: Yeah, it has a Wi-Fi chip on the inside — it’s hardware-firewalled. The intent is that if you want to use it on the go, you tether it to a phone or find a hotspot or something like this. There are a lot of reasons why we’re avoiding a cellular modem on this device. The system is architected so that later on, if you want to extend it to include a cellular modem or something like this — it can be! From the standpoint of holding really core to our principles of transparency, simplicity, supply chain management — all these types of things — cellular modems have a lot of challenges in those areas!

Stephan Livera [42:55]: If we’re talking about it from a Bitcoin perspective, the user might be looking for ways to pass data back and forth — that might be a Bitcoin transaction to be signed, that kind of thing. Currently in the community there’s interest in using air-gapped methods, things like QR codes or MicroSD cards. Are there any ideas on how the device could be made either — maybe a camera is attached? Or is it more like it would be plugged in to transfer across the unsigned transaction, that kind of thing?

Bunnie: Yeah, I think if you really want to do an air-gapped transfer, the easiest way to do it with the hardware that you have would be to modulate data over the audio path. So it would still be plugged in via a cable — like the headphone cable itself — you’d plug something in via the audio jack. I guess you could also plug in an actual microphone and just hold it up to the speaker if you wanted to. But we actually have some experience doing air-gapped acoustic modulation of data, which I think is fast enough for Bitcoin transactions! You’re not gonna send a movie with acoustic modulation, but a few hundred bits is not a problem!

Nicolas Dorier [43:53]: Bitcoin transactions are between one hundred bytes and a maximum of one kilobyte, so I guess that’s good enough?

Bunnie: Yeah, at the rates we get — easily, it would take like 5 or 10 seconds to transfer something like that.

Nicolas Dorier: Another interesting property of doing this is that transferring data by Wi-Fi relies, I guess, on some chip that you cannot easily inspect, but transmitting data acoustically is way more basic.

Bunnie: Yeah, that’s right. The good news is that the entire audio path is in the trusted domain. Whereas if we were to add a video camera, we would have to add a fairly complicated chip that we don’t know anything about, that would have to process the video and whatnot. That being said, I did design the system so that a hacker could extend it, maybe to put a camera on the inside. One of the thoughts I had is that it would be nice to get QR codes if you want. They’re convenient, they’re fast, they’re easy to use. Again, from the standpoint of purity of the hardware, I didn’t want to cross that boundary until we saw [the need]. But for the acoustic samples — it’s just an AD/DA converter. It’s just a dumb analog-to-digital converter codec chip that we have, and then all the DSP happens inside the CPU. The good news is that — from a different, unrelated project — our team actually has a lot of experience with acoustic data modulation. For the pandemic, we were contracted to come up with some proposals for acoustic beacons for contact tracing for privacy-conscious people. And so we created ultrasonic data transfer methods for getting bits of cryptographic data back and forth between two tokens at a range of one meter or so. You don’t hear it, but the data can be transferred over that distance. And so we have that code base! And that’s a thing that works over free space, where we have to deal with all the background noise and everything like this. If you plug it in with a cable to your computer — and I would recommend doing that anyways, because why would you want to broadcast your Bitcoin transaction over free space! Put it over a wire! Even though a wire’s a wire. But it should work quite well in terms of the data rates and the modulation.
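For readers who want to experiment, a bare-bones acoustic modulator is only a few lines of Python. The binary-FSK scheme and all parameters below are illustrative assumptions, not the project's actual modem; real systems add framing, error correction, and continuous phase. At the assumed 300 bits per second, a typical ~250-byte transaction takes roughly 7 seconds, in the same ballpark mentioned above:

```python
# Minimal sketch of acoustic data transfer: binary FSK, one tone per bit.
# Illustrative parameters only; writes a WAV file you can play over a cable.
import math, struct, wave

RATE, BAUD = 44_100, 300   # samples per second, bits per second (assumed)
F0, F1 = 1_200, 2_200      # tone frequencies in Hz for bit 0 / bit 1

def modulate(payload: bytes, path: str = "tx.wav") -> None:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    samples = []
    for bit in bits:
        freq = F1 if bit else F0
        for n in range(RATE // BAUD):  # one tone burst per bit
            samples.append(int(32_000 * math.sin(2 * math.pi * freq * n / RATE)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(struct.pack(f"<{len(samples)}h", *samples))

modulate(b"example unsigned transaction bytes")
```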

Nicolas Dorier [46:03]: Oh! That’s awesome! So there are tons of Bitcoin wallets on the market right now, but nobody has gone down this route of using acoustic data — it’s very interesting!

Bunnie: Yeah. The other cool thing about acoustic data modulation is that you can do it entirely browser-side. So you could have a Javascript program that can modulate it. And another sort of fun hack is that you could even conceivably make a record. Like, if you want to really hardcore store something — take the thing, burn it into the grooves of a record, and then you can play it back later into your device from the record! It’s a full analog reproduction of your digital data! You can just use a record needle plus analog amps. No computer will touch your data! It’s an archival format — so cool, right?

Stephan Livera [46:52]: It’s like those steganographic techniques where they encode something into an image, kind of thing, but even crazier. You mentioned also a trusted domain and an untrusted domain. Can you just touch on that part for a little bit?

Bunnie: Yeah. There’s what’s inside the box and there’s what’s outside the box. And obviously, for the device to interact with the rest of the world, we have to have something that touches the outside world, and that’s the untrusted domain. So the untrusted domain is basically a set of firewalls and circuitry — we worry about things like power-emission side-channels, attack surfaces through USB, attack surfaces through Wi-Fi — so when we talk about the untrusted domain, we have this set of chips that focuses the attack surface down to an enumerable set of signals that we can think about, reason about, and test. And those funnel into the trusted domain, which — as its name implies — is where all the action happens! Where all the private keys and everything are located. The trusted domain is what you actually want to audit and make sure everything is working correctly. And the untrusted domain helps make sure that the rest of the world can’t get an accidental side-channel or some insight into what’s happening inside the trusted domain. It’s just another layer of security that we added.

Stephan Livera [48:03]: Nicolas, any final questions before we start wrapping this one up, from your side?

Nicolas Dorier: So in the Bitcoin community, most developers are more on the software side rather than the hardware side, except a few individuals. But lots of them are happy to try to build physical things. For example, I’m thinking about @21isenough. He’s doing a Lightning Network ATM, and basically he’s doing this hardware construction and documenting everything. But the problem is — because we are mostly software developers — we are still immature in how we open source what we’re doing at the hardware level. So what kind of advice do you have, for example, for a hardware wallet company to properly open source their design entirely? What’s the best way of doing this?

Bunnie [49:06]: So there is a set of standards created by the Open Source Hardware Association (OSHWA) that gives pretty good guidelines about what you should publish to be in line with community norms for open source hardware. There’s actually a certification process. You can actually register your design with the OSHWA and get an actual certification number for your design as being compliant with those standards. But basically it involves releasing the schematics and the circuit board design in a format that’s both editable and readable. It’s a little bit tricky in the hardware world because there are still a lot of proprietary design tools in use. And that’s just the reality of it. There is just not enough investment in the hardware realm to have a full open source stack all the way down to the chips. It’s just not there yet. They’re working on it, and we’re getting there.

Nicolas Dorier [49:58]: But one way of doing that is, when you are creating your PCB you’re using some of those proprietary tools, but out of it you’re producing some artifacts — like, for example, the BOM, the Bill of Materials. You can generate the Bill of Materials, and the Bill of Materials can be just in text form, so other people can easily review it. Or even for the schematic, actually — the schematic could be exported in SVG. It becomes way easier to collaborate between people! But right now it’s not really [widespread], right?
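As an illustration of what a reviewable plain-text export can look like, a BOM can be as simple as a CSV. The reference designators and part numbers below are hypothetical:

```
ref,qty,mpn,description
U1,1,XC7S50-EXAMPLE,"FPGA (hypothetical part number)"
U2,1,WF200-EXAMPLE,"Wi-Fi module (hypothetical)"
J1,1,USB-C-EXAMPLE,"USB Type-C receptacle"
R1,4,RC0402-10K,"Resistor, 10 kOhm, 0402"
```

A diff of two such files in version control shows exactly which parts changed between revisions, which is what makes text exports reviewable.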

Bunnie: Yeah I can’t remember off the top of my head what the OSHWA‘s specific implementation is, but I always ship my schematics both in the native tool format and in PDF format so that anyone who just wants to read it can look at it without a special tool. But then someone who is a hardware designer and has the tools can also edit it without having to laboriously copy and reproduce all the schematics and make mistakes along the way!

Nicolas Dorier [51:01]: Yeah, but basically what you are doing is: you make your hardware, you have this fixed design, and then you upload it following this standard. But like you said, Precursor is kind of like a developer board, so what will happen, I guess, once you ship it, is that people will build their own customized Precursor by changing some part of the hardware or whatever [they’re] doing. And that is a more collaborative approach — it’s not like they develop everything on their side, ship it at the end, and then never touch it again. I think it will create more need for collaboration between hardware developers. Also, having a way to compare different designs — like, Oh, how is my Precursor that I modified with my own stuff different from the original one? I understand that you already have this kind of tool, so that when you are finished building you can present it in a proper way, but you also need this back-and-forth collaboration!

Bunnie [51:54]: Yeah. The collaborative aspect of hardware is still a work in progress. There are a number of services currently trying to create, like, the GitHub for hardware — that type of thing. There are a lot of challenges to really making hardware extremely collaborative, and a lot of it boils down to the exact same problem that we were talking about earlier: What does this individual unit in your hand mean relative to the reference design? A classic problem in hardware is that even my design files — the ones that I create — are not synchronized with what goes into the assembly machine on the factory floor! And the reason is that the manufacturers themselves will oftentimes modify it to improve the manufacturability of the design. A very simple example: if I’m trying to cut a piece of metal and I specify a square edge on it — the cutting bit is round! You can’t get a perfect square corner on that cut. So the manufacturer will make a modification — either they will add an extra drill point, or they’ll actually round out the corner, whatever it is — you don’t get exactly what was in your CAD file. And so what happens is I actually have to send it to the manufacturer, they have to go through their drill bench and then feed back to me what the standard radiuses are — this whole process — and then I take all that back. If someone wants to truly reproduce an exact copy of that design, it’s actually quite hard! There are these very small subtleties! And you don’t have this problem in the software world, because once you compile something, that’s it!

Stephan Livera [53:22]: So it’s deterministic.

Bunnie: Yeah, it’s deterministic — it’s done! It actually comes down to literally the details of the CNC machine that’s working on that metal case, which impart almost a sort of texture onto the case itself. And so these are some of the unique challenges in the hardware world in terms of collaboration. That’s one sort of extreme example. So you can imagine, at that level — one hardware guy just patches some blue wires onto something and makes a new version! How do you capture that on the schematic? You know, those flying wires — how do you annotate that? Literal patches need to be communicated. So there are a lot of challenges there! But in principle — and what I really do hope — is that someone takes Precursor, and someone in the Bitcoin community is like, Oh, we really need this camera and we’re totally okay with the attack surface of the camera — which is completely fine — you can add it in and make your own version with a camera! I actually designed it so that it’s relatively trivial. You could almost take the base unit — 99% of the design is the same — and you just have to modify a couple of components to mount a camera in there, write some software, and then you have a device with a very different function! So the hardware itself, Precursor itself, is modularly designed to facilitate this sort of community growth. I don’t know how it’s gonna evolve! I can’t predict what people will do, what they like, what they don’t like. But oftentimes what happens is it starts with a need. Someone has an itch that they have to scratch — my platform is the closest thing to scratching it — and they just want to hack whatever into it, and I try to give them easier places to hack, that hopefully will make it easier for the rest of the community to pull it back in. But you know, people surprise me all the time! They come up with elaborate ways that are much harder than it needs to be. And then it’s hard to share it back, and then we have to talk to each other and have a conversation and be engineers and humans and respect each other and figure out how to build a community, right? But that’s the essence of community building at the end of the day!

Stephan Livera [55:11]: Right. Building off that point you were just making there — is it also a question here of the size of this market right now? It may be that right now most people are complacent: they’re not really that interested in this ideal of verifying for yourself, of having at least the ability to do so. How does that change if, let’s say, Bitcoin gets a lot bigger and a lot more people are interested in this kind of thing? What kinds of changes would we see in terms of how these devices are manufactured and designed, and things like that? How much bigger do you think the market would have to get before this becomes a more viable thing for a typical user, as opposed to the more hardcore technical user?

Bunnie [55:52]: Yeah. I guess this conflates two aspects: one is the security aspect and one is the open source aspect. From a security standpoint, I will fully admit, Precursor takes one of the most extreme positions you can get in terms of trustability and security — we make a lot of usability sacrifices in the name of absolute trust. That’s because the threat model is so severe! We really took it to an extreme in terms of designing it. Which means that in practice, a more mass-adopted unit will have more features and more attack surfaces, but you have to find that balance! On the other hand, if we don’t move the goal posts all the way to the end of the field, we’ll never know where it ends! So that’s the idea. I think it’ll probably be at some point in the mass-adoption curve that we realize some things are really important — like we want to know about our CPU, we want to know about our key store — and other things probably aren’t as important, like the camera or the cellular radio interfaces. We can contain those, right? Those are not actual attack surfaces, just theoretical attack surfaces. And then in terms of the trustability and reliability of the hardware — I mean the open source aspect, the ability to look at it and say what’s there or not — that will be driven more by… it’s almost a cat-and-mouse game! In an ideal world, obviously, everyone keeps their word! And then there’s no demand for any of this, right? In the security world — and this is one of the dirty secrets of being in the security world — security experts drive both the supply and the demand for their trade. So if a security expert is not getting enough work defending computers, they go into the business of making exploits, which then creates demand for their services — they’re actually one of the few trades that can sort of legally control supply and demand for their efforts right now, right? In a way, the demand for this sort of open hardware stuff will depend a bit on what we find out in the wild! If it turns out that in practice no one ever finds Bitcoin valuable enough to go ahead and do a supply chain exploit to steal $10 Million worth of Bitcoin, then there will be no demand for this type of thing! But I think once you actually get a well-documented case out in the wild — this is what happened — you get the lesson, not as a theoretical thing but as a real thing that has actually happened, and people are like, Okay, well, you gotta avoid that! This is essential! You have to have this standard of inspectability, otherwise you’re vulnerable to this! Then I think people take it more seriously. But unfortunately, humans are really bad at doing things that are just good for them — going to the gym, eating well, getting exercise — very hard for people to do! Because it’s all this preventative-maintenance type of thing. Whereas once you have very immediate feedback that something is bad, people have a much easier time doing something about it. So the market for open hardware and inspectability — and are there gonna be exploits inside of my hardware, and how important is it to open it up, and this reproducibility — I think it’s gonna depend in part upon what the adversaries do!
The other potential outcome — and this would be the ideal: we’re all holding hands and there are rainbows in the world — is that we have such a fecund and diverse innovation ecosystem around this modularity, with everyone developing these open hardware designs, that they’re just desirable because we’re [all] so productive and so interesting, right? I wish we could get there, but I’m not holding my breath for that outcome. It’s usually driven more by fear than by inspiration and hope.

Stephan Livera [59:11]: So I guess the realistic answer then is: someone has to get pwned for a lot of money before people start taking it more seriously!

Bunnie: I mean, come on! I’ll tell you, a Forbes article about someone being pwned for a lot of money because of a supply chain attack would be — we’d talk about it for the next decade!

Stephan Livera: Alright, Bunnie, well look — I know we’ve spent about 60 minutes and we want to respect your time, but before we let you go, can you just give the listeners an update on where you’re at with Precursor fundraising? And for anybody who wants to support you or take part in it — how can they get involved and follow you?

Bunnie: For Precursor itself, you can go to precursor.dev. We just closed crowdfunding in December. You can still pre-order units — which means it’s at a slightly higher price right now than the crowdfunding — but you can get one. You can get in the queue for one! And we hope to ship later this year. At this very moment we’re actually in really great shape. We’ve got everything certification-ready, and all the prototypes put into the pipe. We’re working against the Chinese New Year deadline — we have a lot of suppliers all over the world, and a lot of them are in Asia, so we have to give them all these prototypes before Chinese New Year. When I say certification-ready, I’m talking about electromagnetic compliance — the FCC and CE sorts of stamps I have to have to import to the EU and the rest of the world. We’re in good shape to get those certifications, and so I’m very optimistic at the moment that, with the pandemic vaccine coming around and whatnot, maybe the supply chain will stabilize a bit and we won’t have as many surprises as last year. bunnie is my Twitter, or check out precursor.dev and that will land you at all of our stuff, including GitHub. All of our GitHub is unfortunately at a slightly counter-intuitive URL — it’s betrusted-io — because we originated with a project called Betrusted.

Stephan Livera: Thank you very much for joining us!

Bunnie: Thanks for having me!

Nicolas Dorier: Bye!
