IT Sufficiency

45 min read

How to reach self-sufficiency in a digital world? Is it worth it, and how can we lower the technical barriers?

Digital divide

I had the opportunity to work for 15 years as an IT consultant in developing countries, where daily internet and power outages, combined with the difficulty of making simple international wire transfers, forced me to think on-premises while the rest of the world, driven by fierce competition, ever-lower prices, and SLAs with multiple decimal places, was moving to the cloud.

I never felt that I was missing the bandwagon, nor that I would be left behind forever, unable to catch up the day I returned to Europe. While I have a theoretical IT background, I have always loved the technical side of IT, and was fascinated, from the beginning, by the opportunity open-source software gives you to understand how things work in detail.

I simply took that context as an opportunity to grasp the complexity of IT management as a whole, and tried to solve everyday problems by leveraging the solutions within our reach at the time. That’s what gave us an edge.

On premises

I first started a small consultancy business focused on providing a more stable IT environment for daily operations.

Starting with network services (internet load balancing, automatic fail-over, DNS filtering, proxying, caching, remote access, VPN, …) allowed us to extract the most value from narrow and unreliable internet connections, differentiate ourselves from the competition, and build trusting customer relationships. Without it ever being part of a plan, after a few years we ended up managing the whole IT infrastructure (desktop computers, printers, servers, and the services themselves) for a number of customers.

Infrastructure was what motivated me from the start, and I was confronted early on with the limitations of manually managing Linux boxes. We started to use virtualization (Proxmox) and DevOps tools like SaltStack for declarative state management, but while this improved the repeatability of our deployments, it didn’t yield a clear productivity gain: we still had to spend time gluing everything together with scripts, and the whole process was error-prone.

Containers for everything

When Docker and CoreOS came out, I immediately saw the huge potential for our business and started using them in production on bare-metal servers, well before Docker 1.0.

With Docker Distribution (called Registry at the time), we were able to quickly distribute the same working solutions (Nextcloud, Mattermost, Odoo, UniFi, UniFi Video, …) to every site, more reliably and with fewer constraints than virtual machines.

What followed was just a series of small steps that ultimately led us to Kubernetes. From Trac-based repositories we switched to GitLab, started building, inside CI/CD pipelines, Alpine packages and Docker images that simply consumed them, and then decided to go for Kubernetes the hard way.
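As a rough sketch of that pipeline chain (project names, package layout, and versions here are all hypothetical), a GitLab CI configuration that builds an Alpine package and then triggers the downstream image build could look like:

```yaml
# .gitlab-ci.yml — hypothetical Alpine package repository
stages: [package, image]

build-apk:
  stage: package
  image: alpine:3.19
  script:
    - apk add --no-cache alpine-sdk
    - abuild -r                      # build the APKBUILD in this repository
  artifacts:
    paths:
      - packages/

trigger-image:
  stage: image
  trigger:
    project: infra/nextcloud-image   # downstream Dockerfile just apk-adds the package
```

The downstream image project stays trivial: its Dockerfile only installs the freshly built package, keeping all build logic in the packaging repository.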

We threw away the remaining SaltStack states we still used to drive Docker Compose, converted everything to Kubernetes and kustomize manifests, and eventually even ditched CoreOS for Flatcar Linux, and Docker for CRI-O and crun.
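For flavor, a minimal kustomize layout (file names and the image reference are hypothetical) that replaces that kind of glue scripting might look like:

```yaml
# base/kustomization.yaml — manifests shared by every site
resources:
  - deployment.yaml
  - service.yaml

---
# overlays/site-a/kustomization.yaml — per-site customization,
# applied with: kubectl apply -k overlays/site-a
resources:
  - ../../base
images:
  - name: registry.example.com/nextcloud
    newTag: "28.0.2"
```

Each site becomes an overlay over a common base, so a change to the base propagates everywhere while per-site differences stay small and explicit.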

Is it relevant?

Back in Europe, I started to wonder whether the on-premises approach would still be relevant here. I started with personal reasons:

  1. To stay on the bleeding edge. If I intended to continue in this area, I’d better have something tangible to apply small updates to, and try to keep up with the pace of innovation with the least effort.

  2. To communicate. I was way too busy to communicate about my work, and mostly stayed silent during all these years.

    It would be hard to believe, without anything live to demonstrate, that what I did in those remote countries (which so many consider as s… h… countries) was bleeding edge. I also didn’t want to throw everything out the window.

  3. Out of utopianism. On one hand, we have the hyper-centralization created by the IT giants; on the other, technology is deflationary in price and exponential in computing power over time (cf. The Price of Tomorrow by Jeff Booth).

    I think that even big corporations will struggle to keep growing at the same pace to counterbalance that trend. The day they fail will mark the start of decentralization, driven by small, cheap, power-efficient appliances, free software, and universal IPv6 access.

    Could a digital mailbox go back where it belongs? In people’s houses, like its physical counterpart?

  4. For the will to share. It is crucial to keep thinking of ways not to give everything away for free to cloud service providers, and to take back control of our data, metadata, and connections. We have created behemoths by not valuing our data. If some controversial AI comes around, we can at least be sure we didn’t unwillingly take part in its training.

    I have the technical knowledge to stay out of cloud providers without cutting myself off from the digital world. I can start from there and see whether people are interested in working on lowering the technical barriers.

So I used the work I had done over the last 10 years and set up my home cloud cluster: IT Sufficient. I also started to feel that the scope of the project was probably wider than I initially thought.


What makes this infrastructure special, and by extension what defines IT Sufficiency, is that:

  • It is self-contained: everything is built from source as Alpine packages, which are then consumed inside other CI/CD pipelines. Once the cluster is bootstrapped with GitLab and its repositories, it can rebuild itself.

  • It is self-sufficient: DNS, email, chat, VoIP, drive, contacts, bookmarks, websites, comment management, web analytics, search engine, repositories, source control, CI/CD, project management, authentication, authorization, databases, … It only relies on an internet connection with a fixed IP, the official Alpine repositories, a few GitHub and GitLab repositories, and Let’s Encrypt.

  • It offers universal IP access: all services are dual-stack and published as such in the DNS. They are directly exposed on IPv6 (because this is what IPv6 is for, and it simplifies routing and lowers latency), and through a proxy mesh on IPv4.

  • It is fully observable: Prometheus collects metrics from the Kubernetes control plane, network appliances (OPNsense, UniFi), printers, PowerDNS, databases, applications, … Grafana dashboards are used to explore those data on demand. Alertmanager escalates alerts detected by Prometheus to a Matrix server.

  • It scales easily, thanks to Kubernetes and network volumes under the hood.

  • It is secure: all secrets are managed by Vault. All database users and passwords are randomly generated and rotated every hour. All services are encrypted and use the cryptographic features of the underlying protocols (DNSSEC, SPF, DKIM, DMARC, TLS/SSL). rconfd automatically restarts or reloads services when secrets change, without downtime.

  • It is easy to update: I generally just have to push a git tag to build a new version of an external dependency, which automatically triggers a multi-project build: Alpine package → container → manifest. The process could be driven by RSS feeds to prebuild images upon publication of a new version (applying the k8s manifests would still be subject to manual control, though).

  • It is independent of the internet provider: as all core services are handled internally (DNS, mesh proxy, load balancer, …), switching internet providers is just a matter of redefining the IPv6 address pool and the external IPv4 address at the Kubernetes and router levels. The DNS, with its reverse IPv6 zone, is provisioned automatically (external-dns).
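To illustrate the dual-stack exposure and automatic DNS provisioning described above, here is a sketch of a Kubernetes Service (the hostname, app label, and port are hypothetical) that external-dns would publish with both AAAA and A records:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nextcloud
  annotations:
    # external-dns creates the DNS records for this hostname
    external-dns.alpha.kubernetes.io/hostname: cloud.example.com
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack   # allocate both an IPv6 and an IPv4 address
  ipFamilies: [IPv6, IPv4]           # IPv6 first: direct exposure, IPv4 behind the proxy mesh
  selector:
    app: nextcloud
  ports:
    - port: 443
      targetPort: 443
```

Because the hostname lives in an annotation rather than in the router, switching providers only means the address pools change; the records follow automatically.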

Technology stack

As a reference, this is the technology stack behind IT Sufficient:

What now?

The one thing to remember about my journey, from Linux packages to virtualization, containerization, and orchestration, is that technology, over time, allows you to do more with less, faster and cheaper.

You can argue that complexity is also increasing. Yes, but asymptotically, I think. While cloud service providers abstract away this complexity and sell simpler products (and, above all, byproducts derived from your data), I’m sure we can find ways to achieve the same simplification without leaving people’s premises, without trading privacy for simplicity, and by empowering people instead of impoverishing them.

As extreme as it may be, IT Sufficient is an experiment showing that this is possible, and this blog is a call to people sharing the same concerns to start thinking about how to lower the technical barriers, which are still pretty high.


Related posts



Managing roles for PostgreSQL with Vault on Kubernetes

Vault has a database secrets engine with a PostgreSQL driver that helps create short-lived roles with random passwords for your database applications, but putting everything into production is not as simple as it seems.

40 min read



Installing Kubernetes with cri-o inside flatcar Container Linux

How to run containers without dockershim / containerd by installing cri-o with crun under flatcar Container Linux.

47 min read



A better way to build container images

How to leverage distributions' packaging tools and CI/CD to build better container images, using Alpine Linux and GitLab as examples.

79 min read