Some years ago I started a project that consisted of a Java library with a collection of scrapers. Back then I was noticing a not so cool trend: an increasingly closed internet. Despite how easy it has become to share knowledge and information, both are valuable, and these days everything is tradeable. So, to do my part and help towards a more open and shared world, I decided to start the project, which, as I said, was a simple Java lib. However, libs are inert; they don't do much by themselves. In an effort to bring a little bit of life into the lib I also built an HTTP application which was basically a proxy to the scraping functionality.
At that time I also had a Nexus server running on an old broken laptop, which I used to manage my Maven dependencies. The scraper project depended on some other minor libs that were compiled and uploaded into Nexus. Then, in my Maven based projects, I just needed to add this configuration and the Maven compiler would happily fetch all the binaries from my home based repo...
```xml
<repositories>
    <repository>
        <id>balhau-releases</id>
        <name>balhau nexus</name>
        <url>http://nexus.balhau.net/nexus/content/repositories/releases</url>
        <releases/>
    </repository>
    <repository>
        <id>balhau-snapshots</id>
        <url>http://nexus.balhau.net/nexus/content/repositories/snapshots</url>
        <releases/>
    </repository>
</repositories>
```
...and I was happy this way for some years.
...something happened. I bought a bunch of Raspberry Pis and decided to build a Kubernetes cluster to replace my old broken laptop. And the experience was awesome. I was able to set up the cluster, build and deploy little demo apps, and slowly I started migrating my personal infrastructure into Kubernetes. The work consisted of:
- Migrate the mail server
- Migrate a PostgreSQL server
- Migrate a DNS server
- Migrate a Tor exit node server
- Migrate a Ghost based blog (actually this one)
With a greater or lesser degree of success all these boxes were rearranged and deployed in the new cluster. The migration process was also the reason why this project was born. However, during all this, the Nexus server was forgotten, or better yet, avoided. You see, this Nexus repository is a fat, bloated Java application that, if deployed in my Kubernetes cluster, would consume a fair amount of physical resources. These resources, I believe, are better invested in other mini projects and associated proofs of concept. So for some time I avoided the problem entirely. What's more, for professional/personal reasons (honestly, lack of time and being fed up with computers) I kind of abandoned the scraper project. But life goes on and I recently stumbled upon the project again. And what do you do when this happens? Yes, you guessed it, you run this
```
mvn clean install
```
...it's broken. Yeah, those little pesky personal binaries, once upon a time living on a Nexus server, were inaccessible. And I cursed. I believe something like fuck this slipped out of my mouth. I was kind of frustrated; no way would I install a Nexus server on my little raspberry bees. I started to wonder if there was any workaround I could exploit to circumvent this problem. I was thinking about this while waiting for a Maven compilation to finish, and watching these lines go by
```
Downloaded from central: https://repo.maven.apache.org/maven2/org/jboss/weld/weld-api-parent/1.0/weld-api-parent-1.0.pom (0 B at 0 B/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/jboss/weld/weld-api-bom/1.0/weld-api-bom-1.0.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/jboss/weld/weld-api-bom/1.0/weld-api-bom-1.0.pom (0 B at 0 B/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/jboss/weld/weld-parent/6/weld-parent-6.pom
Downloaded from central: https://repo.maven.apache.org/maven2/org/jboss/weld/weld-parent/6/weld-parent-6.pom (0 B at 0 B/s)
Downloading from central: https://repo.maven.apache.org/maven2/javax/annotation/jsr250-api/1.0/jsr250-api-1.0.pom
```
Suddenly I figured it out. I just needed a simple HTTP server, not a full-fledged Nexus repository server. I was kind of embarrassed; I should have thought of this sooner. Well, better late than never. In Kubernetes this was a very straightforward task. You just need a folder to put the binaries in and a way to expose it, and I already had both. I had previously exposed a disk on the network via NFS. And it turns out that to expose a directory over HTTP you just need to run this
```
python -m SimpleHTTPServer 8000
```
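Note that `SimpleHTTPServer` is the Python 2 module; on Python 3 the equivalent is `python3 -m http.server 8000`. A minimal sketch of what that one-liner does, serving the files in a directory over plain HTTP (the directory and file here are made up for the example, not the actual NFS mount):

```python
import os
import tempfile
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve a throwaway directory instead of the NFS mount (assumption for this sketch)
root = tempfile.mkdtemp()
with open(os.path.join(root, "artifact.pom"), "w") as f:
    f.write("<project/>")

class Handler(SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # Serve files relative to `root` instead of the current working directory
        super().__init__(*args, directory=root, **kwargs)

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the file back over HTTP, exactly as Maven would
port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/artifact.pom").read()
print(body.decode())  # <project/>
server.shutdown()
```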
So first I needed to create the container to run the HTTP server
```dockerfile
FROM arm32v6/alpine

LABEL maintainer "Balhau <email@example.com>" \
      architecture="ARM32v6/armhf" \
      version="0.0.2"

RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh python python-dev py2-pip build-base

ADD scripts/start-python-server.sh /opt/start-server.sh
RUN chmod +x /opt/start-server.sh

ENTRYPOINT /opt/start-server.sh
```
where the start-server.sh script contained the following
```sh
#!/bin/sh
cd <nfs_mounted_folder>; python -m SimpleHTTPServer 8000
```
Since I'm trying to avoid being a complete animal, I tried to abstract the build/push/run lifecycle of the container into a classy Makefile
```makefile
HOST = 10.108.252.69:5000
NS = balhau
APP = pyshare
ARCH = armv6
VERSION ?= 0.0.2
ARTIFACT = $(HOST)/$(NS)/$(APP):$(ARCH)-$(VERSION)

.PHONY: build push shell run start stop rm release

build:
	docker build -t $(ARTIFACT) .

push:
	docker push $(ARTIFACT)

run:
	docker run -it --rm $(ARTIFACT) ash

default: build
```
Then I went on to create the PersistentVolume and PersistentVolumeClaim, the Service, the Ingress rule and the Deployment specifications, which is the usual drill in a Kubernetes infrastructure.
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume-for-shared-folder
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <server_ip>
    path: <nfs_mountpoint>
```
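A matching PersistentVolumeClaim, which the deployment then mounts, follows the usual pattern. Here is a sketch (the claim name is an assumption for illustration, not the exact manifest):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-folder-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```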
But then it struck me. Now how the heck do I upload the binaries? How? For fuck's sake. With Nexus all I had to do was configure my pom.xml file with the following entry
```xml
<distributionManagement>
    <repository>
        <id>releases</id>
        <url>http://nexus.balhau.net/nexus/content/repositories/releases</url>
    </repository>
    <snapshotRepository>
        <id>snapshots</id>
        <url>http://nexus.balhau.net/nexus/content/repositories/snapshots</url>
    </snapshotRepository>
</distributionManagement>
```
And with this new approach I would end up writing pesky automation scripts to mimic the Maven/Nexus interaction, just to reproduce the Maven binary upload process and distribute the binaries. Life sucks. Isn't there a simpler solution? Surely I'm not the first to hammer my head against this. Well, it turns out that there is.

I could do just this
```xml
<distributionManagement>
    <repository>
        <id>releases</id>
        <url>file://<nfs_mount_point>/repo/releases</url>
    </repository>
    <snapshotRepository>
        <id>snapshots</id>
        <url>file://<nfs_mount_point>/repo/snapshots</url>
    </snapshotRepository>
</distributionManagement>
```
The only missing piece is that every machine from which I run the Maven deployment task needs access to the NFS mount. The neat trick is that this distributionManagement configuration is only needed in the deployment phase, that is, when you run

```
mvn deploy
```

This means that if someone just wants to compile and run the binaries/applications they can always run

```
mvn clean install
```

and Maven will fetch the binaries from the HTTP service. For that we just need to replace the original repository configuration with this
```xml
<repositories>
    <repository>
        <id>releases</id>
        <name>balhau-releases</name>
        <url>http://shared.balhau.net/repo/releases</url>
    </repository>
    <repository>
        <id>snapshots</id>
        <name>balhau-snapshots</name>
        <url>http://shared.balhau.net/repo/snapshots/</url>
    </repository>
</repositories>
```
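The reason a dumb file server is enough here is that a Maven repository is just a static directory tree with a well-known layout: group segments become nested directories, followed by the artifact id, the version, and the file itself. A small sketch of the path Maven requests for a given artifact (the group and artifact names below are made up for illustration):

```python
def artifact_path(group_id: str, artifact_id: str, version: str,
                  packaging: str = "jar") -> str:
    """Build the relative path for an artifact in the standard Maven 2 layout."""
    segments = group_id.split(".") + [
        artifact_id,
        version,
        f"{artifact_id}-{version}.{packaging}",
    ]
    return "/".join(segments)

# A hypothetical library served from the repo above:
print(artifact_path("net.balhau", "scraper-lib", "1.0.0"))
# net/balhau/scraper-lib/1.0.0/scraper-lib-1.0.0.jar
```

Since `mvn deploy` with a `file://` URL writes exactly this tree onto the NFS mount, the HTTP server can hand the same paths straight back to any Maven client.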
But what about my continuous deployment? It's a hacky one-liner that would make the most conservative of automation engineers go blind.
```
make; make push; kubectl apply -f webpt-svc.yml; kubectl apply -f webpt-dp.yml
```
Yes, I know, this is far from being a production ready solution. Yes, this is only possible because this is a one man enterprise. This solution would not scale and is as far from good practices as the vampire is from the sun. But this is not a multi-million dollar company; it's a small apartment with a cheap internet connection and a cheap electricity bill. So, all things considered, this actually is a fair compromise and a very practical approach. It was also kind of fun to do.
And, by the way, you can see the end result of the running application here