Sunday, March 12, 2017

Using OpenShift Origin locally

Below are my thoughts/suggestions based on a few months of using OpenShift Origin locally (i.e. the all-in-one bundle on a local PC)

1. Starting.
The best way to start the all-in-one bundle is the 'oc cluster up' command.
But by default OpenShift keeps etcd and other data under /var, so custom directories are more practical (and also allow running different versions/configs of OpenShift side by side) - something like cluster_up.sh:
#!/bin/sh
oc cluster up --host-data-dir=$(pwd)/my-cluster/etcd \
        --host-volumes-dir=$(pwd)/my-cluster/volumes \
        --host-config-dir=$(pwd)/my-cluster/openshift.local.config \
        --use-existing-config

This keeps things persistent as well as more or less manageable.
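
A typical first run, as a sketch ('oc cluster down' is the matching command to stop the cluster; the data directories under ./my-cluster are left intact):

chmod +x cluster_up.sh
./cluster_up.sh    # data lands under ./my-cluster and survives restarts

# stop the cluster when done; the directories above are kept
oc cluster down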

2. Memory
OpenShift Origin is designed to run on serious hardware with many nodes (up to 10k), so by default it pre-allocates a lot of resources (~2GB of RAM) and is very unhappy about swapping.

To reduce memory use, additional editing of master-config.yaml is needed (version 1.4.1 and above; with the script above the file lives at my-cluster/openshift.local.config/master/master-config.yaml). Adding the following values will bring RAM use down to 400-500MB:


kubernetesMasterConfig:
  apiServerArguments:
    deserialization-cache-size:
    - "500"
    event-ttl:
    - "48h"
    target-ram-mb:
    - "300"



Please note, event-ttl is here just for convenience (to keep events for a longer period instead of the default 1 hour).

The settings above were tested for a few weeks with a dozen docker containers running, without any issues.
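
A quick way to verify the effect is to check the resident memory of the all-in-one process (assuming the process is named 'openshift', as it is for the origin all-in-one binary):

# resident set size (in KB) of the openshift process
ps -C openshift -o rss=,cmd=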



3. Metrics
Unfortunately, OpenShift Origin replaced the original Kubernetes metrics (cAdvisor) with its own integration based on Hawkular and Cassandra. Even with all the tuning it still uses over 1GB of RAM, which makes it rather impractical for local use (though it is disabled by default).
As an alternative it is possible to use cAdvisor (resource monitoring for docker) directly, by running:


docker run \
    --volume=/:/rootfs:ro \
    --volume=/var/run:/var/run:rw \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:ro \
    --publish=8080:8080 \
    --detach=false \
    --name=cadvisor \
    google/cadvisor:latest


It provides the same data, though it targets docker containers and doesn't provide an easy way to sort them (for example, to find the one using the most CPU or RAM). A workaround via its REST API is sketched below.
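
As a workaround sketch (assuming jq is installed, and that your cAdvisor version exposes the v1.3 REST API with 'stats' and 'aliases' fields), containers can be sorted by their latest memory sample:

# top memory consumers according to the latest cAdvisor sample
curl -s http://localhost:8080/api/v1.3/docker/ \
    | jq -r 'to_entries[] | [.value.stats[-1].memory.usage, .value.aliases[0]] | @tsv' \
    | sort -rn | head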

4. Network
OpenShift has a built-in DNS server (SkyDNS) which can be used locally as well: just add it to /etc/resolv.conf as the first server (nameserver MASTER_IP_OF_OPENSHIFT). Then you can use names like mysql.myproject.svc.cluster.local instead of raw IPs.
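
To check that resolution works before touching resolv.conf, you can query SkyDNS directly (replace MASTER_IP with your master's IP and the name with one of your own services):

dig +short @MASTER_IP mysql.myproject.svc.cluster.local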

Please note, the '.local' domain might not work well with DNS interceptors like avahi-daemon (installed by default on desktop Linux distributions); you should probably unlink .local from it or use another domain, i.e. edit /etc/avahi/avahi-daemon.conf and add
[server]
domain-name=.alocal



Additionally, you should ensure the sysctl net.ipv4.ip_forward is set to 1; without forwarding OpenShift will fail with weird errors (it uses iptables to communicate with docker containers).
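
For reference, checking and enabling it (the sysctl.d file name below is just an example):

# check the current value
sysctl net.ipv4.ip_forward

# enable until next reboot
sudo sysctl -w net.ipv4.ip_forward=1

# persist across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf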


5. File system
For the best performance you should probably enable overlay2 in docker, and use hostmount-friendly templates (i.e. ones able to use the local fs instead of network-based storage - glusterfs/ceph etc.), like the mysql/percona one.
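
Enabling overlay2 is a one-line daemon setting; on most distributions it goes into /etc/docker/daemon.json (note that switching storage drivers hides previously created images/containers):

{
    "storage-driver": "overlay2"
}

followed by a restart of the docker daemon, e.g. 'sudo systemctl restart docker'.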

Also, OpenShift internally uses repeated mounts to provide fresh configs/secrets to containers, i.e. something like 'volumes/pods/0d657a7b-eb63-11e6-8cb6-74d02b8fa488/volumes/kubernetes.io~secret/router-token-xpf7m' will be re-mounted every few minutes.
While this isn't a problem on non-GUI installations of Linux, it drives automounters and other GUI tools that monitor mount points (like gvfs-udisks2-volume-monitor) crazy.
OpenShift uses this technique for a reason and there is no way to disable it, but a simple cron job will clean up the mount points (though it may result in a restart of some containers):
10 * * * * umount $(mount | grep my-cluster | grep secret | cut -d' ' -f3)



6. HAProxy issues.
By default the OpenShift Origin haproxy (used as the load balancer/router) binds to all interfaces/IPs on ports 80/443, which gets in the way if you have a web server running locally.
While a proper fix is still in progress, you may create a custom haproxy.template and replace the 'bind :port' rules with 'bind ip:port' ones.
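
For illustration only (the exact contents of haproxy-config.template vary by release, so check the one shipped with your router image), the change looks like this:

# before: the router listens on all interfaces
frontend public
  bind :80

# after: it listens on a single IP only (192.168.1.10 is just an example)
frontend public
  bind 192.168.1.10:80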

Sunday, January 8, 2017

SOLVED: Add persistency to integrated OpenShift Origin container registry

While the integrated container registry is very useful in some scenarios, i.e. you can push docker images there from your local docker/compose builds, all images are gone when the cluster restarts.

Of course you can set up a separate registry, but for small/local setups that's not very practical. A simpler solution (though not a scalable one) is just to allow the registry to keep its images on the filesystem across restarts.

Setting this up is relatively simple.

  1. Allow registry to use host/node file system 

    oc login -u system:admin
    oc adm policy add-scc-to-user hostaccess -z registry -n default

     
  2. Replace the 'emptyDir' volume with a 'hostPath' one

    oc edit dc/docker-registry -n default

    And change

          volumes:
          - emptyDir: {}
            name: registry-storage

    to

          volumes:
          - hostPath:
              path: /some/path/on/node
            name: registry-storage

     
  3. Ensure that the path is writeable by the group (root) - mode 775
  4. Re-deploy registry

    oc deploy docker-registry --latest --follow=true -n default
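
As an alternative to hand-editing the YAML in step 2, the same volume swap can likely be done in one command with 'oc volume' (a sketch, assuming the default volume name registry-storage; the config change itself should trigger a redeploy):

    oc volume dc/docker-registry --add --overwrite --name=registry-storage \
        -t hostPath --path=/some/path/on/node -n default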





SOLVED: OpenShift events lifetime

OpenShift events are available in the web console on the "Monitoring" page, and via 'oc get events' in the console.

Their default lifetime is very short (1 hour), because on big clusters thousands or even millions of events create a huge risk for stability.

But in the case of local/small setups it's more useful to increase the events' lifetime.

In 'master-config.yaml', under kubernetesMasterConfig, place something like

  apiServerArguments:
    event-ttl:
    - "48h"

to have a 48-hour lifetime for events.
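
Note that master-config.yaml is only read at startup, so the cluster has to be restarted for the change to apply:

oc cluster down
./cluster_up.sh    # or any 'oc cluster up --use-existing-config' wrapper, see the newer post above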









Wednesday, January 4, 2017

OpenShift Origin template for GitLab Runner


This is a template for easy deployment of GitLab Runner CI into an OpenShift cluster

Prerequisites

Installation

  1. Create new project/namespace
    oc login -u developer
    oc new-project prj-gitlab-runner
  2. Import template
    oc create -f https://gitlab.com/oprudkyi/openshift-templates/raw/master/gitlab-runner/gitlab-runner.yaml -n prj-gitlab-runner
  3. Set up Security Context Constraints (SCC) for the service accounts used for running containers (anyuid means commands inside containers can run as root)
    oc login -u system:admin
    oc adm policy add-scc-to-user anyuid -z sa-gitlab-runner -n prj-gitlab-runner
    oc adm policy add-scc-to-user anyuid -z sa-minio -n prj-gitlab-runner
  4. Go to the web console https://MASTER-IP:8443/console/project/prj-gitlab-runner/overview (where MASTER-IP is the IP the cluster is bound to), press "Add to Project" and select the "gitlab-runner" template
  5. Fill required fields
    • GitLab Runner Token : one from /etc/gitlab-runner/config.toml
    • GitLab Runners Namespace : prj-gitlab-runner
  6. There are also some additional options you may configure - docker hub tags for GitLab Runner and Minio, login/password for Minio etc. - though the defaults will work as well
  7. After submitting the form the deployment will start; it may take a few minutes to download the required images and preconfigure them
  8. In your GitLab project check the "Runners" page to see the runner activated
  9. Run some CI job; there will be something like
    Waiting for pod prj-gitlab-runner/runner-86251ae3-project-1142978-concurrent-0uzqax to be running, status is Pending
    
    in the log output of the CI (the watch command below shows the same from the cluster side)
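
To watch from the command line how runner pods appear and disappear as jobs run:

    oc get pods -n prj-gitlab-runner --watch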

Persistent cache in directory of your host (optional)

The Minio server is not attached to any permanent storage and uses ephemeral storage - emptyDir. When the Minio Service/Pod is stopped or restarted, all data is deleted. Though, while Minio is running, the cache is available locally via some path like '/var/lib/origin/openshift.local.volumes/pods/de1d0ff7-d2bb-11e6-8d5b-74d02b8fa488/volumes/kubernetes.io~empty-dir/vol-minio-data-store'.
So, you may want to point the vol-minio-data-store volume to persistent storage, or back the data up periodically.
While you can use any storage - NFS/Ceph RBD/GlusterFS and more - for a simple cluster setup (with a small number of nodes) a host path is the simplest. Though if you have more than one node, you have to maintain cleanup/sync between the nodes yourself.
The next steps allow using the local directory /cache/gitlab-runner as storage for Minio:
  1. Set up Security Context Constraints (SCC) for the Minio container to access the Node's filesystem
    oc login -u system:admin
    oc adm policy add-scc-to-user hostmount-anyuid -z sa-minio -n prj-gitlab-runner
  2. Edit the dc-minio-service deployment config via the OpenShift web console at https://MASTER-IP:8443/console/project/prj-gitlab-runner/edit/yaml?kind=DeploymentConfig&name=dc-minio-service or from the console
    oc project prj-gitlab-runner
    oc edit dc/dc-minio-service
    Replace
        volumes:
        - emptyDir: {}
          name: vol-minio-data-store
    
    with
        volumes:
        - hostPath: 
            path: /cache/gitlab-runner
          name: vol-minio-data-store
    
    After saving, the Minio server will be automatically restarted and you can access the cache via the Minio web console at http://minio-service.prj-gitlab-runner.svc.cluster.local/minio/bkt-gitlab-runner/. You can try to upload a file and check that it shows up under /cache/gitlab-runner; you can also force a new deploy (restart) of Minio and see if it keeps the files, as sketched below.
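    A quick check (names taken from the steps above):

    oc project prj-gitlab-runner
    oc deploy dc-minio-service --latest --follow=true
    ls /cache/gitlab-runner    # previously uploaded files should still be listed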

Management

  • You can additionally configure the gitlab runner via the web console at https://MASTER-IP:8443/console/project/prj-gitlab-runner/browse/config-maps/cm-gitlab-runner - for example the count of concurrent jobs etc.; see all possible options in the GitLab Runner docs.
    Alternatively you can use console for editing:
    oc project prj-gitlab-runner
    oc edit configmap/cm-gitlab-runner
    After editing you will need to manually "Deploy" the gitlab-runner deployment - https://MASTER-IP:8443/console/project/prj-gitlab-runner/browse/dc/dc-gitlab-runner-service - or via the console
    oc project prj-gitlab-runner
    oc deploy dc-gitlab-runner-service --latest --follow=true
  • The Minio web console is available at http://minio-service.prj-gitlab-runner.svc.cluster.local/ - or just grab the IP under https://MASTER-IP:8443/console/project/prj-gitlab-runner/browse/services/minio-service - and the access/secret keys are under https://MASTER-IP:8443/console/project/prj-gitlab-runner/browse/dc/dc-minio-service?tab=environment
Source : https://gitlab.com/oprudkyi/openshift-templates/tree/master/gitlab-runner
Mirror: https://github.com/oprudkyi/openshift-templates/tree/master/gitlab-runner