Determine security requirements for automating kernel builds #31

Open
legoktm opened this issue Dec 23, 2022 · 1 comment

legoktm commented Dec 23, 2022

Splitting from freedomofpress/securedrop#6514

On a technical level, to build packages one needs a machine (currently a clean Qubes VM) that has Docker installed, and then (roughly the workflow sketched after this list):

  1. run `make securedrop-core-5.15`, wait 2+ hours
  2. run `make securedrop-workstation-5.15`, wait 2+ hours
  3. upload the build logs to the build-logs repo
  4. sign and upload the source tarballs to S3
  5. copy and upload the debs to apt-test, to kick off kernel testing
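
If this were scripted, it might look roughly like the following. This is only a sketch of the manual workflow above; the log directory, build-logs checkout, S3 bucket, and apt-test host are all placeholders, not the actual FPF infrastructure.

```bash
#!/bin/bash
# Hypothetical automation of the manual kernel-build workflow.
# All paths, bucket names, and hosts below are placeholders.
set -euo pipefail

VERSION=5.15
LOG_DIR="logs/$(date +%Y-%m-%d)"
mkdir -p "$LOG_DIR"

# 1-2. Build both kernel flavors; each takes 2+ hours.
make securedrop-core-"${VERSION}" 2>&1 | tee "$LOG_DIR/core.log"
make securedrop-workstation-"${VERSION}" 2>&1 | tee "$LOG_DIR/workstation.log"

# 3. Publish the build logs (assumes a sibling checkout of the build-logs repo).
cp "$LOG_DIR"/*.log ../build-logs/
git -C ../build-logs add .
git -C ../build-logs commit -m "Kernel build logs, ${VERSION}"
git -C ../build-logs push

# 4. Sign the source tarballs and upload them to S3 (key and bucket are placeholders).
for tarball in build/*.orig.tar.xz; do
    gpg --armor --detach-sign "$tarball"
done
aws s3 cp build/ s3://example-kernel-sources/ --recursive \
    --exclude '*' --include '*.orig.tar.xz*'

# 5. Copy the debs to apt-test to kick off kernel testing (host is a placeholder).
rsync -av build/*.deb apt-test.example.org:/incoming/
```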

Currently these steps are done manually, on maintainer laptops. This seems ripe for automation, especially because it's a slow process.

One important note is that these builds are currently not reproducible (see #3).

So if we were to automate this process, what are the requirements for the build host? Would we be OK if:

  1. it was entirely run on a CircleCI pipeline (or another cloud CI provider, e.g. CodeFresh)?
  2. it was entirely run on a DigitalOcean droplet that we/infra control?
  3. it was entirely run on a physical machine under FPF control (e.g. in NYO)?
  4. status quo: it remained entirely run on a maintainer laptop?

Pinging @L3th3 & @lsd-cat for security input

@thedeadliestcatch

@legoktm Butting in with a quick note:

  • I would suggest leveraging a Qubes system that runs on a motherboard compatible with coreboot, plus a modern CPU (13th gen should work, AFAIK). That will speed up the process.
  • If the build is containerized, the sources and other sensitive directories should be mounted read-only, with no external network access.
  • This can be done with Docker (see the sketch after this list). You could also nest it inside an LXC container that you can snapshot, etc.
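
A minimal sketch of that isolation with plain Docker, assuming a prebuilt builder image; the image name, paths, and make invocation are placeholders:

```bash
# Hypothetical sketch: run the kernel build with no network access, a
# read-only root filesystem, and the sources bind-mounted read-only.
# "kernel-builder:latest", the paths, and the make target are placeholders.
docker run --rm \
    --network none \
    --read-only \
    --tmpfs /tmp \
    -v "$PWD/linux-source":/src:ro \
    -v "$PWD/out":/out \
    kernel-builder:latest \
    make -C /src O=/out -j4
```

With `O=/out`, the kernel build writes all artifacts to the writable output mount, so the read-only source mount is never touched.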

Unfortunately there is no easy way to protect non-repro builds, especially if using external infrastructure. We abhor CI for our kernel builds for that reason.

But here is a suggestion that applies to repro builds:

  • Have a completely offline, physically protected system run scheduled compilations, and collect checksums/signatures of the resulting binaries and packages (see the sketch after this list).
  • On a schedule, manually verify that the signatures/checksums match the publicly available packages.
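
A minimal sketch of the record-and-verify step, assuming sha256sum and GnuPG are available on the offline builder; the output directory and file names are placeholders:

```bash
# Hypothetical: on the offline builder, record checksums of the built
# packages and sign the manifest. Paths are placeholders.
sha256sum out/*.deb > SHA256SUMS
gpg --armor --detach-sign SHA256SUMS   # writes SHA256SUMS.asc

# Later, with the publicly available packages placed at the same relative
# paths, verify the manifest signature and check the published debs against
# the recorded hashes.
gpg --verify SHA256SUMS.asc SHA256SUMS
sha256sum --check SHA256SUMS
```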

If the builds are reproducible, then basically anybody on your team can verify them.

You could also deploy a data diode (if one is available to you, or if you trust someone capable enough to build one) to automate the retrieval of the signatures/checksums.

Secure CI is very much an unsolved problem, for now.

As for non-repro builds: they would arguably be the most secure, since going reproducible means giving up the seed/structure randomization capabilities of the grsec patch. However, this is also the case for public/published kernel packages: you are already giving up the structures/ABI of the vmlinux and co. Any compile-time mitigations that affect system unpredictability are mutually exclusive with providing external access to the kernel binaries.
