Sunday, March 12, 2017

Using OpenShift Origin locally

Below are my thoughts/suggestions based on a few months of using OpenShift Origin locally (i.e. the all-in-one bundle on a local PC).

1. Starting.
The best way to start the all-in-one bundle is the 'oc cluster up' command.
But by default OpenShift keeps etcd and other data under /var, so using custom directories is more practical (and also allows running different versions/configs of OpenShift side by side) - something like cluster_up.sh:
#!/bin/sh
oc cluster up --host-data-dir=$(pwd)/my-cluster/etcd \
        --host-volumes-dir=$(pwd)/my-cluster/volumes \
        --host-config-dir=$(pwd)/my-cluster/openshift.local.config \
        --use-existing-config

This keeps things persistent as well as more or less manageable.
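
Typical usage of the wrapper (the my-cluster directory is created on first run; 'oc cluster down' is the standard counterpart command):

chmod +x cluster_up.sh
./cluster_up.sh
# ... work with the cluster ...
oc cluster down    # stops the cluster; data stays in ./my-cluster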

2. Memory
OpenShift Origin is designed to run on server-grade hardware in clusters with thousands of nodes; it pre-allocates a lot of resources by default (~2 GB of RAM) and is very unhappy with swapping.

To reduce memory use, additional editing of master-config.yaml is needed (version 1.4.1 and above); adding the following values will bring RAM use down to 400-500 MB:


kubernetesMasterConfig:
  apiServerArguments:
    deserialization-cache-size:
    - "500"
    event-ttl:
    - "48h"
    target-ram-mb:
    - "300"



Please note: event-ttl is here just for convenience (to keep events for a longer period instead of the default 1 hour).

The settings above were tested for a few weeks with a dozen Docker containers running, without any issues.
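
You can watch the effect with docker stats; with a default 'oc cluster up' setup the all-in-one master runs in a container named 'origin' (adjust the name if yours differs):

docker stats origin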



3. Metrics
Unfortunately OpenShift Origin replaced the original Kubernetes metrics (cAdvisor) with its own integration based on Hawkular and Cassandra. Even with all tuning it still uses over 1 GB of RAM, which makes it rather impractical for local use (though it is disabled by default).
As an alternative it is possible to use cAdvisor (resource monitoring for Docker), running it via:


docker run \
        --volume=/:/rootfs:ro \
        --volume=/var/run:/var/run:rw \
        --volume=/sys:/sys:ro \
        --volume=/var/lib/docker/:/var/lib/docker:ro \
        --publish=8080:8080 \
        --detach=false \
        --name=cadvisor \
        google/cadvisor:latest


It provides the same data, though it targets Docker containers and doesn't provide an easy way to sort them (for example, to find the one using the most CPU or RAM).
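
As a partial workaround you can script cAdvisor's REST API. A rough sketch (assuming the v1.3 endpoint and field names, which may differ between cAdvisor versions) that lists containers sorted by current memory usage:

curl -s http://127.0.0.1:8080/api/v1.3/docker/ | python -c "
import json, sys
data = json.load(sys.stdin)
# take the latest stats sample of each container and sort by memory usage
rows = sorted(data.values(), key=lambda c: c['stats'][-1]['memory']['usage'], reverse=True)
for c in rows:
    name = (c.get('aliases') or [c['name']])[0]
    print('%10d KiB  %s' % (c['stats'][-1]['memory']['usage'] // 1024, name))
"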

4. Network
OpenShift has a built-in DNS server (SkyDNS) which can be used locally as well: just add it to /etc/resolv.conf as the first server (nameserver MASTER_IP_OF_OPENSHIFT). Then you can use names like mysql.myproject.svc.cluster.local instead of raw IPs.
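
For example, if the master is bound to 192.168.42.1 (an illustrative address - use your own MASTER_IP):

# /etc/resolv.conf
nameserver 192.168.42.1   # OpenShift master (SkyDNS)
nameserver 8.8.8.8        # regular upstream DNS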

Please note that the '.local' domain might not work well with DNS interceptors like avahi-daemon (installed by default on desktop Linux distributions); you should either unlink .local from it or use a different domain, i.e. edit /etc/avahi/avahi-daemon.conf and add:
[server]
domain-name=.alocal



Additionally, you should ensure the sysctl net.ipv4.ip_forward is set to 1; without forwarding OpenShift will fail with weird errors (it uses iptables to communicate with Docker containers).
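
A quick check/fix (the file name under /etc/sysctl.d is arbitrary):

sysctl net.ipv4.ip_forward                   # check current value
sudo sysctl -w net.ipv4.ip_forward=1         # enable until reboot
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf   # persist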


5. File system
To get the best performance you should enable the overlay2 storage driver in Docker, and use hostmount-friendly templates (i.e. ones able to use the local filesystem instead of network-based storage like GlusterFS/Ceph), such as the mysql/percona one.
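
A minimal sketch of enabling overlay2 (assuming a recent kernel and Docker 1.12+; note that switching storage drivers hides previously pulled images, so you may need to re-pull them):

# /etc/docker/daemon.json
{
    "storage-driver": "overlay2"
}

sudo systemctl restart docker
docker info | grep 'Storage Driver'   # should report overlay2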

Also, OpenShift internally uses repeated mounts to provide fresh configs to containers, i.e. a path like 'volumes/pods/0d657a7b-eb63-11e6-8cb6-74d02b8fa488/volumes/kubernetes.io~secret/router-token-xpf7m' will be remounted every few minutes.
While this isn't a problem on non-GUI installations of Linux, it drives automounters and other GUI tools that monitor mount points (like gvfs-udisks2-volume-monitor) crazy.
OpenShift uses this technique for a reason and there is no way to disable it, but a simple cron job will remove the mount points (though it may result in a restart of some containers):
10 * * * * umount $(mount | grep my-cluster | grep secret | cut -d' ' -f3)
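
The one-liner above errors out when no matching mounts exist; a slightly safer wrapper (a hypothetical script with the same logic) can be called from cron instead:

#!/bin/sh
# cleanup-secret-mounts.sh - unmount stale secret mounts under my-cluster,
# but only call umount when something actually matched
MOUNTS=$(mount | grep my-cluster | grep secret | cut -d' ' -f3)
[ -n "$MOUNTS" ] && umount $MOUNTS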



6. HAProxy issues
By default the OpenShift Origin HAProxy (used as a load balancer) binds to all interfaces/IPs on ports 80/443, which is inconvenient in many cases if you already have a web server running locally.
While a proper fix is still in progress, you may create a custom haproxy.template and replace the 'bind :port' rules with 'bind ip:port' ones, as sketched below.
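
A rough outline, based on the router customization procedure from the OpenShift 3.x docs (pod names and paths are illustrative and may differ per version):

# extract the current template from the running router pod
oc rsh <router-pod> cat haproxy-config.template > haproxy-config.template
# edit it: change 'bind :80' / 'bind :443' style lines to e.g. 'bind 127.0.0.1:80'
# then feed the modified template back to the router
oc create configmap customrouter --from-file=haproxy-config.template -n default
oc set volume dc/router --add --name=config-volume \
        --mount-path=/var/lib/haproxy/conf/custom \
        --source='{"configMap": {"name": "customrouter"}}' -n default
oc set env dc/router \
        TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template -n default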

Sunday, January 8, 2017

SOLVED: Add persistency to integrated OpenShift Origin container registry

The integrated container registry is very useful in some scenarios - e.g. you can push Docker images to it from your local docker/docker-compose builds - but when the cluster restarts, all images are gone.

Of course you can set up a separate registry, but for small/local setups that's not very practical. A simpler solution (though not a scalable one) is to just let the registry keep its images on the filesystem across restarts.

Achieving this is relatively simple.

  1. Allow the registry to use the host/node file system

    oc login -u system:admin
    oc adm policy add-scc-to-user hostaccess -z registry -n default

     
  2. Replace the 'emptyDir' volume with 'hostPath'

    oc edit dc/docker-registry -n default

    And change

          volumes:
          - emptyDir: {}
            name: registry-storage

    to

          volumes:
          - hostPath:
              path: /some/path/on/node
            name: registry-storage

     
  3. Ensure that the path is writable by the group (root), i.e. mode 775
  4. Re-deploy registry

    oc deploy docker-registry --latest --follow=true -n default
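
To check that persistence actually works (assuming the standard Docker Registry v2 on-disk layout under the chosen path): push any image to the integrated registry, then:

    ls /some/path/on/node/docker/registry/v2/repositories
    # restart the cluster and list again - the repositories should still be there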





SOLVED: OpenShift events lifetime

OpenShift events are available in the web console on the "Monitoring" page, and via 'oc get events' in the console.

Their default lifetime is very short (1 hour), because on big clusters thousands or even millions of events would create a serious risk to stability.

But for local/small setups it's more useful to increase the event lifetime.

In 'master-config.yaml', under kubernetesMasterConfig, place something like:

  apiServerArguments:
    event-ttl:
    - "48h"

to keep events for 48 hours.
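
The flag only takes effect after a master restart; with the local all-in-one setup this is something like (using the cluster_up.sh wrapper with --use-existing-config described in the March post above):

oc cluster down
./cluster_up.sh
oc get events --all-namespaces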









Wednesday, January 4, 2017

OpenShift Origin template for GitLab Runner


This is a template for easy deployment of GitLab Runner CI into an OpenShift cluster.

Prerequisites

Installation

  1. Create new project/namespace
    oc login -u developer
    oc new-project prj-gitlab-runner
  2. Import template
    oc create -f https://gitlab.com/oprudkyi/openshift-templates/raw/master/gitlab-runner/gitlab-runner.yaml -n prj-gitlab-runner
  3. Set up Security Context Constraints (SCC) for the service accounts used to run containers (anyuid means commands inside containers can run as root)
    oc login -u system:admin
    oc adm policy add-scc-to-user anyuid -z sa-gitlab-runner -n prj-gitlab-runner
    oc adm policy add-scc-to-user anyuid -z sa-minio -n prj-gitlab-runner
  4. Go to the web console at https://MASTER-IP:8443/console/project/prj-gitlab-runner/overview (where MASTER-IP is the IP the cluster is bound to), press "Add to Project" and select the "gitlab-runner" template
  5. Fill in the required fields
    • GitLab Runner Token : one from /etc/gitlab-runner/config.toml
    • GitLab Runners Namespace : prj-gitlab-runner
  6. There are also some additional options you may configure - Docker Hub tags for GitLab Runner and Minio, login/password for Minio, etc. - though the defaults will work as well
  7. After pressing update, the deployment will start; it may take a few minutes to download the required images and preconfigure them
  8. In your GitLab project, check the "Runners" page to confirm the runner is activated
  9. Run some CI job; there will be something like
    Waiting for pod prj-gitlab-runner/runner-86251ae3-project-1142978-concurrent-0uzqax to be running, status is Pending
    
    in the log output of the CI job
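
If the runner doesn't show up in GitLab, a quick sanity check from the console (illustrative commands):

    oc project prj-gitlab-runner
    oc get pods                           # the runner pod should be Running
    oc logs dc/dc-gitlab-runner-service   # registration errors show up here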

Persistent cache in directory of your host (optional)

The Minio server is not attached to any permanent storage and uses ephemeral storage (emptyDir): when the Minio service/pod is stopped or restarted, all data is deleted. While Minio is running, though, the cache is available locally via a path like '/var/lib/origin/openshift.local.volumes/pods/de1d0ff7-d2bb-11e6-8d5b-74d02b8fa488/volumes/kubernetes.io~empty-dir/vol-minio-data-store'.
So you may want to point the vol-minio-data-store volume to persistent storage, or periodically back up the data.
While you can use any storage - NFS/Ceph RBD/GlusterFS and more - a host path is the simplest for a small cluster setup (with a small number of nodes). Though if you have more than one node, you have to maintain cleanup/sync between the nodes yourself.
The next steps use the local directory /cache/gitlab-runner as storage for Minio:
  1. Set up Security Context Constraints (SCC) so the Minio container can access the node's filesystem
    oc login -u system:admin
    oc adm policy add-scc-to-user hostmount-anyuid -z sa-minio -n prj-gitlab-runner
  2. Edit the dc-minio-service deployment config via the OpenShift web console at https://MASTER-IP:8443/console/project/prj-gitlab-runner/edit/yaml?kind=DeploymentConfig&name=dc-minio-service or from the console:
    oc project prj-gitlab-runner
    oc edit dc/dc-minio-service
    Replace
        volumes:
        - emptyDir: {}
          name: vol-minio-data-store
    
    with
        volumes:
        - hostPath: 
            path: /cache/gitlab-runner
          name: vol-minio-data-store
    
    After saving, the Minio server will be restarted automatically and you can access the cache via the Minio web console at http://minio-service.prj-gitlab-runner.svc.cluster.local/minio/bkt-gitlab-runner/. You can try to upload a file and check that it exists at /cache/gitlab-runner; you can also force a new deploy (restart) of Minio and see whether it keeps the files, as in the example below.
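
For example (a hypothetical session to verify persistence):

    oc project prj-gitlab-runner
    oc deploy dc-minio-service --latest --follow=true
    ls /cache/gitlab-runner    # previously uploaded files should still be there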

Management

  • You can additionally configure the GitLab runner via the web console at https://MASTER-IP:8443/console/project/prj-gitlab-runner/browse/config-maps/cm-gitlab-runner - for example the number of concurrent jobs; see all possible options in the GitLab Runner docs.
    Alternatively you can use console for editing:
    oc project prj-gitlab-runner
    oc edit configmap/cm-gitlab-runner
    After editing you will need to manually "Deploy" the gitlab-runner deployment at https://MASTER-IP:8443/console/project/prj-gitlab-runner/browse/dc/dc-gitlab-runner-service or via the console:
    oc project prj-gitlab-runner
    oc deploy dc-gitlab-runner-service --latest --follow=true
  • The Minio web console is available at http://minio-service.prj-gitlab-runner.svc.cluster.local/ - or just grab the IP under https://MASTER-IP:8443/console/project/prj-gitlab-runner/browse/services/minio-service and the access/secret keys under https://MASTER-IP:8443/console/project/prj-gitlab-runner/browse/dc/dc-minio-service?tab=environment
Source : https://gitlab.com/oprudkyi/openshift-templates/tree/master/gitlab-runner
Mirror: https://github.com/oprudkyi/openshift-templates/tree/master/gitlab-runner 

Tuesday, December 27, 2016

Install GitLab Runner on openSUSE 42.1

It seems openSUSE 42.1 isn't officially supported by GitLab Runner (or I didn't find a way).

But it is possible to use packages for another Linux distribution (RHEL 7 seems to work).

So the following commands will install GitLab Runner:

>sudo zypper ar -t YUM https://packages.gitlab.com/runner/gitlab-ci-multi-runner/el/7/x86_64 runner_gitlab-ci-multi-runner
>sudo zypper --gpg-auto-import-keys refresh runner_gitlab-ci-multi-runner
>sudo zypper install gitlab-ci-multi-runner


Then you can run:

>sudo gitlab-ci-multi-runner register


and proceed with configuration according to the docs: https://docs.gitlab.com/runner/install/linux-repository.html





laravel-bootstrap-adminlte-starter-kit

Template for websites with basic functionality. It is based on the following ideas:
  • have common features already integrated and configured (tests, gulp, bower, etc.)
  • simplify updates (via git merge from this project)
  • extensive use of .env config (slightly more than the original Laravel)
  • 'make'-based macro-tool for frequently used commands
Home : https://gitlab.com/oprudkyi/laravel-bootstrap-adminlte-starter-kit/
Mirror: https://github.com/oprudkyi/laravel-bootstrap-adminlte-starter-kit 

Introduction

Just base functionality for other projects.
The project includes already preconfigured resources.
Creating a new site based on the starter kit
#clone original repository
git clone git@gitlab.com:oprudkyi/laravel-bootstrap-adminlte-starter-kit.git my_project

#cd to project
cd my_project

#delete origin
git remote rm origin

#use your new repository as main source 
git remote add origin git@gitlab.com:oprudkyi/web-site.git

#keep original source for updates
git remote add starter-kit git@gitlab.com:oprudkyi/laravel-bootstrap-adminlte-starter-kit.git

#push to your repository
git push -u origin master

 

Keeping your site in sync with starter kit

git fetch starter-kit
git merge starter-kit/master
git push
or
make merge-starterkit
git push

 

Installation

If you are building for the first time out of the source repository, you will need to generate the configure scripts. From the top directory, do:

./bootstrap.sh

Once the configure scripts are generated, the 'make' system can be configured. From the top directory, do:

./configure

Run ./configure --help to see other configuration options.
Install the configured dependencies - tools like composer/bower, and components as defined in composer.json, bower.json, package.json:

make download-dependencies

 

Testing

There are a large number of tests that can all be run from the top-level directory.
      make -k check
      make -k check-functional
      make -k check-acceptance
This will build all of the libraries (as necessary) and run through the unit tests defined in each of them. If a single suite fails, make check will continue on and provide a synopsis at the end.

 

Bootstrapping the development environment

cp .env.example .env
vim .env

#for sqlite db
touch storage/database/play.sqlite
./artisan migrate

#generate key
./artisan key:generate -v

 

Update dependencies

make update-downloaded-dependencies

or

make update-downloaded-dependencies-production
 
It will search for updated composer/npm/bower packages, install them, and update the lock/json files.

 

Mailcatcher integration

Mailcatcher is configured in .env.example for port 1025; tests use a different port (11031), so they can run while a development instance of mailcatcher is running.
Run:

make run-mailcatcher
 
Stop:

make stop-mailcatcher

The Mailcatcher web GUI is available via http://127.0.0.1:1080/

 

Javascript/Css assets

Integration is based on Laravel Elixir, with Bower as the package system (packages are installed into vendor/bower_components).
To call bower use something like:
node_modules/.bin/bower install "eonasdan-bootstrap-datetimepicker#latest" --save
or update the current dependencies:
node_modules/.bin/bower install
The build script is gulpfile.js. The system is configured to generate a single js file from all the packages provided (as well as a single css file).
If you add/remove packages, you may also need to edit resources/assets/sass/app.scss to add/remove the corresponding css/scss/js.
Use 'make gulp' or 'make gulp-watch' to compile resources.
Custom css rules can be added to resources/assets/sass/starterkit/starter-kit-customize.scss; it is applied after all other css, so it's possible to change any css behavior there.

 

Manual Deployment Initialization

This isn't advised, just for info:
  • First, clone your repository to production/staging server
  • configure production environment
    ./bootstrap.sh
    ./configure --enable-tests=no
    make download-dependencies-production
    chmod 777 -R storage
  • create db (in case of mysql)
sudo mysql 
CREATE DATABASE IF NOT EXISTS starterkit_db CHARSET=utf8;
CREATE USER 'starterkit_user'@'%' IDENTIFIED BY 'starterkit_password';
GRANT ALL PRIVILEGES ON starterkit_db.* TO 'starterkit_user'@'%';
  • create env file
cp .env.example.production .env
vim .env
  • create key
php artisan key:generate
  • migrate db
php artisan migrate
  • optimize app
make production-optimize
it will run the following commands:
#clear cache
php artisan view:clear
php artisan config:clear
php artisan route:clear
php artisan clear-compiled

#optimize
composer dump-autoload --optimize


#caching 
php artisan cache:clear
php artisan config:cache
php artisan route:cache
php artisan optimize

#recompile js/css
gulp

 

Manual Deployment

git pull
make download-dependencies-production
php artisan migrate
make production-optimize

Monday, December 19, 2016

Simple library to show JavaScript / jQuery errors as browser alerts

  • forgot to check the browser's console log for errors?
  • got complaints from users like 'I click but nothing happens'?
  • ever tried teaching end-users how to look at the browser's console?
There is a small and simple library to avoid, or at least reduce, such problems, especially when the code isn't covered by tests.

It catches normally invisible JavaScript errors and exceptions and provides simple way to show them to developer/tester/user.

Plain errors are caught via the window.onerror handler.
Errors inside jQuery callbacks (ajax, onclick, etc.) are hidden from window.onerror(), so jQuery is patched to call the error handler as well.

Installation:

 
bower install js-error-alert --save
npm install js-error-alert

Load base script (it sets window.onerror):

<script src="window_error_handler.js"></script>

Add jQuery hook just after jQuery core code:

<script src="jquery-3.11.1.min.js"></script>
<script src="jquery_error_handler.js"></script>

Use custom handler to show errors:


You can create your own JSEH_showUncaughtException(message) function to replace the default one; place it at the top of the page (before window_error_handler.js):

var JSEH_showUncaughtException = function(message) {
    "use strict";

    if(typeof message === "undefined") {
        return false;
    }
    alert(message);
};

Enable/disable error handler:

In case you need to dynamically enable/disable the handler (for example if merging/minifying tools are used), you can set the JSEH_enabled variable to true/false.

var JSEH_enabled = false;  
//in Laravel/layout.blade.php 
var JSEH_enabled = @if(env('JS_ERROR_ALERTER_ENABLED', false)) true @else false @endif;

Simple test

<script>
test_undefined_variable
</script>

<script>
$(function() {
    test_jquery_undefined_variable
});
</script>


Source : https://github.com/oprudkyi/js-error-alert

jQuery hooks are based on code found at :