Introduction
It's not uncommon for officers departing a unit to have assembled a list of items that they would love to communicate to their past selves at the point where they arrived on station. This often takes the form of a "continuity binder," or sometimes a "manifesto." Unfortunately, the latter term has been somewhat co-opted by the extremist community, and nobody wants an actual binder anymore, so this document takes the form of a digital "book" designed to be served up as a static web page. This lends structure and organization to the document; possibly most importantly, it allows for a quick and easy search function that works slightly better than, say, Microsoft Word's find function.
This document tries to capture the knowledge that, as far as I know, isn't stored anywhere else. That's not all useful information, but it's awfully hard to know a priori what information will be useful in the future.
I've tried to keep to some semblance of structure throughout, but please accept my sincere apologies to the reader for the lack of editing, planning, or any actual writing ability!
Iron Bank
Iron Bank is the Department of Defense (DoD)'s "hardened container repository." I put that term in quotes for a reason: it's not at all clear to me that hardened is the proper adjective to use here. Vetted is probably better, but that leads to a whole host of other questions:
- Who does the vetting?
- Perhaps we hand-wave this to say "well the CI/CD pipeline does, of course!"
- There's an onboarding process; perhaps that's where the vetting occurs?
- What is vetted?
- The authors of software, as the SCRM folks would like to see?
- The people contributing the container to Iron Bank?
- The software itself?
- The container image?
Still, the term is important, because it's in wide use. More than any other product or service at Platform One, Iron Bank has the attention of leaders within the office of the DoD CIO. References to Iron Bank, DCAR, etc. are found sprinkled throughout the DoD CIO Library. For anyone who wants to understand the broader context where Iron Bank sits, and where it comes from, I would recommend familiarizing yourself with all of the documents in the Software Modernization Modern Software Practices section of that page.
The Terrain
Before we cover the history, value, and thoughts on the future of Iron Bank, it behooves us to start with the basics: where would an interested party find the Iron Bank online? For a number of reasons, there's no one place to find Iron Bank online; it's a somewhat disparate collection of services.
Iron Bank Front End (IBFE)
The nominal starting point for Iron Bank is the Iron Bank Front End (IBFE), available at https://ironbank.dso.mil. This is a custom webapp that is developed on Party Bus, earning it a CTF, but it is deployed within the Iron Bank infrastructure and doesn't constitute a real Party Bus application. I would describe the primary function of the IBFE as discovery. The IBFE allows any user (TODO: write section on randos) to view the entire catalog of images within the Iron Bank. Users are presented with a paginated set of cards, one for each image in Iron Bank, and there is basic search functionality and some rudimentary filtering available in a sidebar that allows users to narrow down the cards they see. Clicking a card brings the user to a page that provides additional information about the image, along with a set of links to other resources.
Vulnerability Assessment Tracker (VAT)
The Vulnerability Assessment Tracker (VAT) is also a custom webapp developed on Party Bus but deployed on Iron Bank, so all caveats applied to IBFE about being a "real" Party Bus application apply equally to it. VAT is also available to any authenticated user, served up at https://vat.dso.mil. Users are presented with a somewhat more intricate set of filtering options for images and/or tags, and then can filter the findings as well (after selecting an image). The primary function of the VAT is to adjudicate findings in container images. The Iron Bank Docs page has additional information, but in a nutshell:
1. Nightly pipeline runs use commercial tools to identify vulnerabilities, which are pushed into VAT by the pipeline
2. VAT performs some de-duplication of findings (Anchore and PCC might both find a CVE, for example)
3. The image maintainers provide justifications for each finding
4. The Iron Bank team reviews those justifications
Steps 3 and 4 in the process above take place in VAT. Any user can then go review the findings in VAT. Importantly, findings and justifications can be inherited from base image layers in some cases, so in theory if you are submitting a Java application to Iron Bank and you use an Iron Bank-maintained Java base image, you will only need to justify findings in your application and not in the core Java runtime.
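To make the de-duplication step concrete, here's a minimal sketch in Python. The field names (`cve`, `package`, `scanner`) are my own illustration, not VAT's actual data model, which is considerably more involved:

```python
# Illustrative de-duplication of scanner findings, keyed by (CVE, package).
# Field names are hypothetical; VAT's real data model is more involved.
def dedupe_findings(findings):
    merged = {}
    for f in findings:
        key = (f["cve"], f["package"])
        entry = merged.setdefault(
            key, {"cve": f["cve"], "package": f["package"], "scanners": set()}
        )
        entry["scanners"].add(f["scanner"])
    return list(merged.values())

findings = [
    {"cve": "CVE-2024-0001", "package": "openssl", "scanner": "anchore"},
    {"cve": "CVE-2024-0001", "package": "openssl", "scanner": "pcc"},
    {"cve": "CVE-2024-0002", "package": "zlib", "scanner": "anchore"},
]
deduped = dedupe_findings(findings)  # two unique findings, one seen by both scanners
```

The point is only that two scanners reporting the same CVE against the same package collapse into one finding for the maintainer to justify.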
Iron Bank Docs
The Iron Bank Docs page is available to any user, without authentication. It is a static website built using Hugo on the Party Bus Padawan service, and is hosted in the Party Bus environment. It is accessible at https://docs-ironbank.dso.mil/. Although it is built in a different technology than this document (hugo + mkdocs vs mdbook), the intent is the same: easily searchable documentation. If you are reading this document, I recommend familiarizing yourself with the docs webpage. Note that while the actual static content is served up by Party Bus, the content itself is wholly owned by the Iron Bank team, so don't be shy about requesting or suggesting updates!
Repo1
The core conceit of Iron Bank is that, unlike Docker Hub or Quay.io, users do not push container images straight into the registry (TODO: a section on VP). Container images are built by the pipeline each night from a Dockerfile that image maintainers provide. If the reader is familiar with the docker build command, they have likely used COPY commands within the Dockerfile to copy artifacts into the image. Whereas docker build will allow you to copy any file from the working directory when you build the image, Iron Bank requires that you declare your required artifacts in a hardening manifest. This manifest indicates the file name, where to find it, and a cryptographic hash value for the artifact.
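To make that concrete, here's an abbreviated, illustrative hardening manifest. The schema evolves over time, so treat the field names as approximate and check the Iron Bank Docs for the current format; the image name, URL, and hash below are placeholders:

```yaml
# Illustrative hardening_manifest.yaml (abbreviated; all values are placeholders)
apiVersion: v1
name: opensource/example/example-app
tags:
- "1.0.0"
- "latest"
resources:
- url: https://example.com/downloads/example-app-1.0.0.tar.gz
  filename: example-app-1.0.0.tar.gz
  validation:
    type: sha256
    value: "0000000000000000000000000000000000000000000000000000000000000000"
```

Each entry under `resources` is an artifact the pipeline downloads on your behalf and verifies against the declared hash before it can be COPY'd into the image.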
Since at the very least two files (the Dockerfile and the hardening manifest) need to be configuration-controlled, Iron Bank runs a Gitlab Ultimate instance at https://repo1.dso.mil. Each image has its own repository under the https://repo1.dso.mil/dsop/ group. As a helpful hint: if you navigate directly to the Repo1 homepage, you will be required to log in, but if you go to a more specific url that is viewable by unauthenticated users, you can view it without logging in. Because the Iron Bank group is public like that, and the url is incredibly easy to remember (dsop is an acronym for DevSecOps Platform), memorizing the path above will allow you to look at the files associated with Iron Bank images without authenticating.
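The artifact check described above amounts to hashing the downloaded file and comparing against the declared manifest value. A minimal sketch of that check follows; this is my own illustration, not Iron Bank's actual pipeline code:

```python
import hashlib

# Hash a downloaded artifact and compare against the value declared in the
# hardening manifest. Illustration only, not Iron Bank's pipeline code.
def sha256_matches(path, expected_hex):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()

# Example with a throwaway file standing in for a downloaded artifact:
with open("artifact.tar.gz", "wb") as f:
    f.write(b"example artifact contents")
expected = hashlib.sha256(b"example artifact contents").hexdigest()
ok = sha256_matches("artifact.tar.gz", expected)  # True
```

A mismatch fails the build, which is exactly the behavior you want when an upstream download silently changes out from under you.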
Registry1
Container images aren't particularly useful if they can't be pulled and run, so Iron Bank of course operates a container registry (Harbor at the time of this writing). The web interface for this registry can be found at https://registry1.dso.mil. You need to log in (with SSO) in order to view any of the images in the web UI. Once you log in, you can generate a pull token (for use with the docker CLI, K8s, or similar) using the menu in the top right. Your username is case-sensitive when you docker login! You can use the registry1 UI to find images without using the IBFE, if you want, but frankly the Harbor UI isn't great.
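In practice this is the standard docker workflow, using the pull token generated from the Harbor UI as your password. The username and image path below are placeholders, and note again that the username is case-sensitive:

```shell
# Log in with your registry1 username (case-sensitive!) and the pull token
# generated from the menu in the top right of the Harbor UI.
docker login registry1.dso.mil -u First.Last

# Pull an image; this repository path is a placeholder, not a real image.
docker pull registry1.dso.mil/ironbank/opensource/example/example-app:1.0.0
```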
Summary
Iron Bank is sort of a "Docker Hub for the DoD". Today, it isn't a single cohesive thing as much as a combination of useful services. The user-facing services are summarized in the table below.
| Common Name | URL |
|---|---|
| Iron Bank Front End (IBFE) | https://ironbank.dso.mil |
| Vulnerability Assessment Tracker (VAT) | https://vat.dso.mil |
| Iron Bank Docs | https://docs-ironbank.dso.mil |
| Repo1 (Gitlab) | https://repo1.dso.mil |
| Registry1 (Harbor) | https://registry1.dso.mil |
Value of Trusted Sources
I think when it comes to the "build vs buy" decision for enterprise capabilities, the first questions that need to be asked are:
- Does this actually need to be done?
- Does the government need to do it, or can we simply purchase it as a commodity?
I'll use e-mail as an example: when I came on active duty in 2008, every base had its own local email infrastructure and domain. Around that same time, the Air Force rolled out a program called "e-mail for life" that gave us a us.af.mil email address, and we could log in to a portal and update our profile so that email address would always forward to our current base-specific email address. Fast forward a decade, and the Air Force had moved completely to Office 365 with a single Air Force tenant: email accounts were centrally managed, there were no more base-specific email addresses, and the local comm squadrons would never again run an Exchange server. E-mail became a commodity the Air Force just buys annually. We decided we still needed email (unfortunate!) but not that the government needed to operate the servers involved.
I'll use Docker Hub as the punching bag in an example that I think illustrates that (1) the government does need to have a centralized repository of container images and (2) the government does actually need to play a direct role in delivering it.
The Vexing Search For Angr
Within the symbolic execution/binary analysis space, there's an open-source tool called Angr. Circa 2019, I was getting started using it for the first time. At that time, the installation process was fairly complex (there are a number of C dependencies that need to be built in a certain way to work properly with the overall Python engine), and so the general recommendation was to get started by using the angr/angr image on Dockerhub. Dockerhub and Github use a similar strategy for namespacing, so the Github project for Angr is also angr/angr. The angr/angr project on Dockerhub provides a readme that is identical to the readme in the angr/angr project on Github, giving a user quite a bit of confidence that the two projects are related.
Vexingly[^2], the Angr project on Github doesn't contain a Dockerfile in the top level of the source tree (I believe at one point it did, which was actually more confusing than the current state). So then, where does the Docker image for Angr come from? Well you see, good reader, that comes from an entirely different repository under the angr group on Github: https://github.com/angr/angr-dev. But you just need to know that, or trust that I know that. Nowhere on the Dockerhub page for Angr will you find that information. This was my first real foray into a topic that I would learn is frequently called dependency confusion, but in this case it isn't so much a dependency as the top-level program I was trying to run, and there was no malicious intent on anyone's part. Still, it highlights a critical issue with any of the major container repositories (and to a large extent, package repositories as well): there's just no good way to identify the provenance of a particular image. Just because the Angr Dockerhub image is built out of the angr-dev repository today, who's to say that tomorrow it won't be built out of some other repository? Reproducible builds are still not a solved problem in 2025, so "just rebuild and check the checksum" isn't even a technically viable option, even if we ignored the fact that such an approach would eliminate one of the largest benefits of using a container registry in the first place: having pre-built containers.
Iron Bank's Value Proposition
In my opinion, the core value proposition of the Iron Bank is that you understand the provenance of the container image[^3]. The container images are re-built every night according to their Dockerfile. That Dockerfile is configuration-controlled in a fixed location. Any artifacts that are inserted into the container image via a COPY command in that Dockerfile come from a source with an associated hash in the hardening manifest. They might just be binary blobs, but they are consistent binary blobs. There's not one release out of every 100 that contains a different version of a blob. If that sounds far-fetched, it isn't: purveyors of malware have become incredibly sophisticated. Some malicious JavaScript files are only served something like one out of every 10,000 or even 100,000 times in order to make the malware less detectable. It isn't at all difficult to conceive of an adversary that would serve up a perfectly intact dependency 99% of the time, and then serve up a malicious version of a file at a particular time when they detect the request comes from Iron Bank pipelines or from DoD IP space overall.
Historically (and today), the DoD runs its own copies of update servers for things like Windows Update and Red Hat Enterprise Linux updates. Why? Because we need to know what we're installing on our machines. It's not that those servers can't possibly be made to serve up malicious or insecure content; it's that we need auditability, traceability, and visibility into what was shipped, under what version identifier, and when. The DoD doesn't need to own the data rights to every piece of software we run, or build it all from source, or have custom versions of every piece of software we use, but we do need to control the distribution into and within our environments.
[^2]: This is a joke on a joke: the intermediate representation (IR) used by Angr is called Vex (it was originally developed for the valgrind tool suite). The name Angr is in part a play on words because it uses Vex IR.

[^3]: This is often confused with having provenance for the items inside the container image. I am not extending my claim, at this point, to the actual contents of the image. I am simply claiming that when you find an image in Registry1, you are guaranteed to be able to find the corresponding Dockerfile in Repo1. This is a significantly weaker claim than "Iron Bank secures your software supply chain", but the entire point of this sub-chapter is to convince you that even the "weak provenance" guarantee is important and valuable.