Upload Docker image to custom repository

docker build -t docker.gadgetwiz.com/myproject/hello-world-python .
docker push docker.gadgetwiz.com/myproject/hello-world-python

Copied from hello-world readme

Successfully built 7d692d619894
Successfully tagged datawire/hello-world:latest

### Run it in Docker

Build the image first, then launch it using `docker run`.

$ docker run --rm -it -p 8000:8000 datawire/hello-world
 * Serving Flask app "server" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 121-217-524
 - - [03/Apr/2019 17:04:59] "GET / HTTP/1.1" 200 -

### Run it in Kubernetes

Build and push the image first, then launch it using `kubectl run`.

$ kubectl run hello-world --image=ark3/hello-world --port 8000 --expose
service/hello-world created
deployment.apps/hello-world created

$ kubectl get svc hello-world
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
hello-world   ClusterIP                <none>        8000/TCP   1m

$ kubectl get deploy,po -l run=hello-world
NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/hello-world   1         1         1            1           2m

NAME                               READY   STATUS    RESTARTS   AGE
pod/hello-world-776fc969b9-8m457   1/1     Running   0          2m

$ kubectl run curl-from-cluster -it --rm --image=fedora --restart=Never -- curl hello-world:8000
Hello, world!
pod "curl-from-cluster" deleted

$ kubectl logs hello-world-776fc969b9-8m457
 * Serving Flask app "server" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 305-616-111
 - - [03/Apr/2019 17:15:29] "GET / HTTP/1.1" 200 -

## License

Licensed under Apache 2.0. Please see [License](LICENSE) for details.

Merge GIT Fork with Master

Add an upstream source and give it a name (here, `realmaster`)

Before doing anything, you need to tell Git where to find the upstream fork.

git remote add realmaster https://github.com/whoiforkedfrom/upstream_repo.git

Then you can use the following commands to update your fork whenever you feel the need.

git fetch realmaster
git checkout master
git merge realmaster/master
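The whole fork-sync flow can be exercised locally with throwaway repositories. This is only a sketch: the paths, commit messages, and identity below are made up for the demo, with a local directory standing in for the real upstream on GitHub.

```shell
set -eu
work=$(mktemp -d)
cd "$work"
# Throwaway repo standing in for the real upstream on GitHub
git init -q -b master upstream
git -C upstream -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial upstream commit"
# Your fork is a clone of it
git clone -q "$work/upstream" fork
cd fork
git remote add realmaster "$work/upstream"
# Upstream moves ahead after you forked...
git -C "$work/upstream" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "new upstream work"
# ...so fetch the upstream remote and merge it into your master
git fetch -q realmaster
git checkout -q master
git merge -q realmaster/master
merged=$(git rev-list --count HEAD)
echo "$merged"
```

Since your fork has no commits of its own in this sketch, the merge is a fast-forward and your master ends up with both upstream commits.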

Kubernetes on Digital Ocean

After starting Kubernetes (this configures kubectl automatically):

doctl kubernetes cluster kubeconfig save my-cluster-name

Fix: "invalid object doesn't have additional properties"

After installing kubectl with brew you should run:

  1. rm /usr/local/bin/kubectl
  2. brew link --overwrite kubernetes-cli

minikube dashboard

MacBook:~$ minikube dashboard
🔌 Enabling dashboard …
🤔 Verifying dashboard health …
🚀 Launching proxy …
🤔 Verifying proxy health …
🎉 Opening in your default browser…

Describe POD

kubectl describe pod nodehelloworld.example.com

minikube service

MacBook:~/kubernetes-course$ cat first-app/helloworld-service.yml
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 80
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: LoadBalancer
MacBook:~/kubernetes-course$ cat first-app/helloworld-nodeport-service.yml
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
MacBook:~/kubernetes-course$ kubectl create -f first-app/helloworld-nodeport-service.yml
service/helloworld-service created
MacBook:~/kubernetes-course$ minikube service helloworld-service --url
http://
MacBook:~/kubernetes-course$ curl
Hello World!
MacBook:~/kubernetes-course$ kubectl describe svc helloworld-service
Name:                     helloworld-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=helloworld
Type:                     NodePort
IP:
Port:                     <unset>  31001/TCP
TargetPort:               nodejs-port/TCP
NodePort:                 <unset>  31001/TCP
Endpoints:
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
MacBook:~/kubernetes-course$ kubectl get svc helloworld-service
NAME                 TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
helloworld-service   NodePort                <none>        31001:31001/TCP   3m14s


apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.kubernetes.intelligence.ws
  labels:
    app: helloworld
spec:
  containers:
  - name: docker-demo
    image: joelgriffiths/docker-demo
    ports:
    - containerPort: 3000
kubectl create -f pod-hellowworld.yml

Useful commands

kubectl get pod

kubectl describe pod

kubectl expose pod <pod> --port=444 --name=frontend

kubectl port-forward <pod-name> 8080

kubectl attach <pod-name> -i

kubectl exec <pod-name> -- command    # First container

kubectl exec <pod-name> -c <container> -- command

kubectl label pods <pod-name> mylabel=awesome


kubectl run -i --tty busybox --image=busybox --restart=Never -- sh

POD Commands in hello world example

kubectl get pod

kubectl describe pod nodehelloworld.example.com

kubectl port-forward nodehelloworld.example.com 8081:3000

Getting to the POD directly

kubectl expose pod nodehelloworld.example.com --type=NodePort --name nodehelloworld-service
minikube service nodehelloworld-service --url


MacBook:~/kubernetes-course$ kubectl get service
NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes               ClusterIP                <none>        443/TCP          17h
nodehelloworld-service   NodePort                 <none>        3000:32129/TCP   4m

(See logs)

kubectl attach nodehelloworld.example.com

(Execute command)

kubectl exec nodehelloworld.example.com -- ls /app

(Describe service)

kubectl get service

kubectl describe service nodehelloworld-service


(Start Busybox POD)

kubectl run -i --tty busybox --image=busybox --restart=Never -- sh

/ # telnet

Warning FailedScheduling default-scheduler pod has unbound immediate PersistentVolumeClaims

This means that your volumes aren't ready for some reason.

MacBook:~/prod/digitalocean/zalaxy$ kubectl get pvc
NAME              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
mysql-storage     Bound     pvc-b878edbd-8d5a-11e9-a0d0-965c6120853a   5Gi        RWO            do-block-storage   41h
website-storage   Pending                                                                        do-block-storage   9m22s
website-www       Pending                                                                        do-block-storage   9m26s
MacBook:~/prod/digitalocean/zalaxy$ kubectl describe pvc website-storage
Name:          website-storage
Namespace:     zalaxy
StorageClass:  do-block-storage
Status:        Pending
Annotations:   volume.beta.kubernetes.io/storage-provisioner: dobs.csi.digitalocean.com
Finalizers:    [kubernetes.io/pvc-protection]
Access Modes:
VolumeMode:    Filesystem
Mounted By:    php-6d9fbfcbf4-t97sf
Events:
  Type    Reason                Age                     From                         Message
  ----    ------                ---                     ----                         -------
  Normal  ExternalProvisioning  3m46s (x26 over 9m33s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator
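For reference, a claim like the pending one above would be defined roughly as follows. This is a hypothetical reconstruction (the access mode and size are guesses based on the bound mysql-storage claim); the key point is that `storageClassName: do-block-storage` is what hands provisioning off to the DigitalOcean CSI driver.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: website-storage
  namespace: zalaxy
spec:
  accessModes:
    - ReadWriteOnce          # RWO, as on the bound claim above
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 5Gi           # hypothetical size
```

The claim stays Pending until the external provisioner (dobs.csi.digitalocean.com) actually creates the backing volume.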

Spinning up a Kubernetes cluster (notes)


VirtualBox Instance via vagrant

mkdir ubuntu

vagrant init ubuntu/xenial64

vagrant up

vagrant ssh


vagrant get-config????


wget https://github.com/kubernetes/kops/releases/download/1.12.1/kops-linux-amd64

chmod +x kops-linux-amd64

sudo mv kops-linux-amd64 /usr/local/bin/kops


sudo apt-get update

sudo apt-get install python-pip

sudo pip install awscli

aws configure


curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Spin up kubernetes cluster

kops create cluster --name=kubernetes.intelligence.ws --state=s3://kops-state-joelgriffiths --zones=us-west-2a --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=kubernetes.intelligence.ws

Edit if necessary

kops edit cluster kubernetes.intelligence.ws --state=s3://kops-state-joelgriffiths

Bring up the cluster

kops update cluster --name kubernetes.intelligence.ws --yes --state=s3://kops-state-joelgriffiths

Spinning up containers

kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080

kubectl expose deployment hello-minikube --type=NodePort

kubectl get services


Cleaning up and deleting k8s cluster

kops delete cluster kubernetes.intelligence.ws --state=s3://kops-state-joelgriffiths

kops delete cluster kubernetes.intelligence.ws --state=s3://kops-state-joelgriffiths --yes

Use ed instead of sed

Search and replace with ed

ed can be used in place of sed to modify a file directly. The format is straightforward if you have an example. Here is one that changes the mail quota size for Maildir++.

# cat maildirsize
# ed maildirsize << EOF
> ,s/s[0-9][0-9]* /s552428800 /g
> wq
> EOF
# cat maildirsize
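The same substitution can be done non-interactively with sed for comparison. The sample line and quota value below are just for illustration; `-E` enables extended regexes, so `+` means "one or more" here.

```shell
# Hypothetical maildirsize-style line; replace the size field after the
# leading "s" with the new quota, exactly as the ed script does above
line='s1000000 575C'
new=$(printf '%s\n' "$line" | sed -E 's/s[0-9]+ /s552428800 /g')
echo "$new"
```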

Inject search result into replacement string

What if you want to put the result of the last search back into the
replacement string? You can use the '&' symbol for that.

# cat test
this is test
# ed test << EOF
> ,s/is/& a/g
> wq
> EOF
# cat test
this a is a test
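The '&' trick works the same way in sed. A quick sketch:

```shell
# & in the replacement expands to whatever the pattern matched; note that
# with /g the "is" inside "this" gets rewritten too
out=$(printf 'this is test\n' | sed 's/is/& a/g')
echo "$out"
```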

Apache Notes

Set up the rstatd service for remote monitoring

1) Download rstatd.
2) Build and install rstatd:

$ tar xvzf rstatd.tar.gz
$ cd rpc.rstatd
$ ./configure --prefix=/usr
$ make
$ sudo su
# make install

3) Add a line to /etc/hosts.allow to allow certain hosts to make rstatd requests:
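The exact entry depends on which hosts you want to allow. A hypothetical line granting a monitoring subnet access might look like:

```
rpc.rstatd : 192.168.1.0/255.255.255.0
```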


4) Add rstatd entry in /etc/xinetd.d/rstatd:

# default: off
# description: An xinetd entry for the rpc.rstatd kernel statistics server.

service rstatd
{
    type            = RPC
    rpc_version     = 2-4
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    only_from       =
    log_on_success  += USERID
    log_on_failure  += USERID
    server          = /usr/sbin/rpc.rstatd
    disable         = no
}

5) Restart xinetd:

# /etc/rc.d/init.d/xinetd restart

Force Playbook to fail on any task failure

I needed a playbook to fail completely when any task failed. You would think `any_errors_fatal: True` would do this, but it doesn't. It marks the hosts as failed and ends the play, but the recap doesn't say anything about it. Instead, it prints the normal output even though the playbook never completes on any node. It's extremely deceptive.

As a result, I opened up a ticket with the Ansible devs (https://github.com/ansible/ansible/issues/57138), but they claimed this was a documentation bug.

I complained about that in the user group (https://groups.google.com/forum/#!topic/ansible-devel/MT-bZgISWrs), but I was ignored.

Finally, I decided I needed a workaround. Here it is. I lifted portions of this code from various places, but I can't recall them all. I apologize if I lifted something from you.

The idea is to catch any failure and set a global variable (using add_host). Then all the nodes can test that global failure and fail as necessary. Here is the code:

- hosts: all
  gather_facts: no
  become: False

  tasks:
    - name: Use block to catch errors
      block:
        - name: Generate a random failure
          shell: 'echo $(( $RANDOM % 10 ))'
          register: random
          failed_when: random.stdout|int == 2

        - debug:
            msg: 'This node PASSED'

      rescue:
        - debug:
            msg: 'This node FAILED'

        - name: "Create a dummy host to hold a failure flag"
          add_host:
            name: "BAD_HOST"

    - name: Force EVERYTHING to fail
      fail: msg="One host failed"
      when: hostvars['BAD_HOST'] is defined

EDIT: It looks like the Ansible devs are going to implement my suggestion in a future release of Ansible.

Welcome to my World

I’ve left this website untouched for almost a decade. You can find the old site here. Most of the content there is EXTREMELY old, but it’s interesting to peruse. Most of it is basic System Administration stuff with a few code snippets thrown in for fun.

On the new portion of the site, I'm going to be focusing on DevOps, Containers, Kubernetes, and other CI/CD things. I may also talk about some of my personal projects and websites as I bring them live.

Current Projects

Most of these sites are non-operational right now.

BIRDFART.COM – Stickers and Things (Nothing Says Pussy Like a Prius)

ZALAXY.COM – Rent Your Things