Thursday, May 18, 2017

Statically typed vs dynamically typed languages

Statically typed languages 'type check' at compile time, and a variable's type can NOT change. (Don't get cute with type-casting comments: a cast creates a new variable/reference rather than changing the original's type.)
Dynamically typed languages type-check at run time, and the type of a variable CAN be changed at run time.
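
A quick way to see the difference from the command line (a sketch; assumes python3 and go are installed):

# dynamic: Python happily rebinds x to a different type at run time
$ python3 -c 'x = 5; x = "hello"; print(type(x))'
<class 'str'>

# static: the equivalent Go program is rejected before it ever runs
$ cat > /tmp/static.go <<'EOF'
package main

func main() {
    var x int = 5
    x = "hello" // compile-time error: a string cannot be assigned to an int
    _ = x
}
EOF
$ go run /tmp/static.go   # fails to compile with a type mismatch error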


Python, bash - dynamically typed languages
             - interpreted

C, C++, Go   - statically typed languages
             - compiled

source: http://stackoverflow.com/questions/1517582/what-is-the-difference-between-statically-typed-and-dynamically-typed-languages

Tuesday, May 16, 2017

S2I in OpenShift (Kubernetes) for building Docker container image


This post is about S2I (Source-to-Image), the process of building application container images for OpenShift.

s2i is available here:

https://github.com/openshift/source-to-image


About S2I:


Source-to-Image (S2I) is a framework that makes it "easy to write images" that take application source code as an input and produce a new image that runs the assembled application as output.

so, the input is application source code,
    and the output is a container image that runs the assembled application.


Two basic concepts:

1. the build process

2. S2I scripts.


Build process:


During the build process, S2I must place sources and scripts inside the builder image.

So, what is a builder image here?
 - It is the image that builds the application source, so it must contain the bits necessary to build the application.

   For example, a builder image for Python applications needs all the necessary Python libraries.


S2I creates a tar file that contains the sources and scripts, then "streams" that file into the builder image.

source + scripts ==(packed into)==> tar file ==(streamed into)==> builder image ==(produces)==> container image


The tar file is untarred into the default directory /tmp (this can be changed with the --destination flag).
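
For example, to drop the sources somewhere other than /tmp (the builder and app names below are placeholders):

$ s2i build test/test-app my-builder my-app --destination /opt/s2i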

tar + sh must be available inside the builder image to carry out the above operation.
If tar + sh are NOT available, an additional container build is required to put both the sources and the scripts inside the image, after which the usual s2i build procedure runs.

After untarring, the assemble script is executed.


S2I scripts:


assemble
   - builds the application artifacts from the source and places them into the appropriate directories inside the image
     (a minimal sketch of assemble and run follows this list).

run
  - executes your application.

save-artifacts (optional)
    - gathers all dependencies that can speed up the builds that follow.
      // for Ruby, the installed gems; for Java, the .m2 contents.

usage (optional)
 - informs how to properly use your image.

test/run (optional)
     - creates a simple process to check whether the image is running properly.
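
A minimal sketch of assemble and run, assuming a hypothetical Python builder image (by default the streamed sources land in /tmp/src):

assemble:

#!/bin/bash -e
# copy the streamed sources from the default drop location into the working directory
cp -Rf /tmp/src/. ./
# install the application's dependencies into the image
pip install --user -r requirements.txt

run:

#!/bin/bash -e
# exec so the app replaces the shell and receives signals directly
exec python app.py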


Creating an S2I builder image:


s2i tool -> creating builder images.


The builder image contains the specific intelligence required to produce the executable image (i.e., to produce the build artifacts).


Simple workflow:
 1. download the s2i scripts (or use the ones from inside the builder image)
 2. download the application source
 3. s2i streams the scripts and the application source into the builder image container
 4. it runs the assemble script, which is defined in the builder image
 5. save the final image


Builder image -> responsible for actually building the application (so it has to contain the necessary libraries and tools needed to build and run the application).


It needs script logic to actually perform the build and run operations:

 - assemble, for building the application
 - run, for running the application


// for bootstrapping a new s2i-enabled image repo:
// generates a skeleton .s2i directory and populates it with sample s2i scripts (which you can start hacking on).

s2i create

Example:
// here lighttpd-centos7 is the *future builder image name*
// s2i-lighttpd is the directory that gets created
s2i create lighttpd-centos7 s2i-lighttpd
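
The generated skeleton looks roughly like this (the exact layout may differ between s2i versions):

s2i-lighttpd/
    Dockerfile        # defines the builder image
    Makefile
    .s2i/bin/         # the S2I scripts described above
        assemble
        run
        save-artifacts
        usage
    test/
        run           # test script
        test-app/     # sample application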


// build test/test-app using lighttpd-centos7 as the builder image; the output image is lighttpd-centos7-app
s2i build test/test-app lighttpd-centos7 lighttpd-centos7-app


Building an application image using a builder image:


// build an application image using a builder image

$ s2i build https://github.com/openshift/django-ex centos/python-35-centos7 hello-python

Here:
source - https://github.com/openshift/django-ex
builder image - centos/python-35-centos7 // this should be present either locally or on Docker Hub
output tagged image - hello-python


// You can run the built image as below:
$ docker run -p 8080:8080 hello-python
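
If the builder image ships a save-artifacts script (the Python builder above is assumed to), a later build can reuse the previously gathered dependencies:

$ s2i build https://github.com/openshift/django-ex centos/python-35-centos7 hello-python --incremental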

Next, we will see how to save the built images into a persistent store and how to retrieve artifacts while building an application.

Wednesday, May 10, 2017

oc types - Kubernetes / OpenShift concepts


All of the info below is available at your command line.
All you need to do is try the oc types command :)


Concepts and Types

Kubernetes and OpenShift help developers and operators build, test, and deploy applications in a containerized cloud environment. Applications may be composed of all of the components below, although most developers will be concerned with Services, Deployments, and Builds for delivering changes.

Concepts:

* Containers:
    A definition of how to run one or more processes inside of a portable Linux
    environment. Containers are started from an Image and are usually isolated
    from other containers on the same machine.
 
* Image:
    A layered Linux filesystem that contains application code, dependencies,
    and any supporting operating system libraries. An image is identified by
    a name that can be local to the current cluster or point to a remote Docker
    registry (a storage server for images).
 
* Pods [pod]:
    A set of one or more containers that are deployed onto a Node together and
    share a unique IP and Volumes (persistent storage). Pods also define the
    security and runtime policy for each container.
 
* Labels:
    Labels are key value pairs that can be assigned to any resource in the
    system for grouping and selection. Many resources use labels to identify
    sets of other resources.
 
* Volumes:
    Containers are not persistent by default - on restart their contents are
    cleared. Volumes are mounted filesystems available to Pods and their
    containers which may be backed by a number of host-local or network
    attached storage endpoints. The simplest volume type is EmptyDir, which
    is a temporary directory on a single machine. Administrators may also
    allow you to request a Persistent Volume that is automatically attached
    to your pods.
 
* Nodes [node]:
    Machines set up in the cluster to run containers. Usually managed
    by administrators and not by end users.
 
* Services [svc]:
    A name representing a set of pods (or external servers) that are
    accessed by other pods. The service gets an IP and a DNS name, and can be
    exposed externally to the cluster via a port or a Route. It's also easy
    to consume services from pods because an environment variable with the
    name <SERVICE>_HOST is automatically injected into other pods.
 
* Routes [route]:
    A route is an external DNS entry (either a top level domain or a
    dynamically allocated name) that is created to point to a service so that
    it can be accessed outside the cluster. The administrator may configure
    one or more Routers to handle those routes, typically through an Apache
    or HAProxy load balancer / proxy.
 
* Replication Controllers [rc]:
    A replication controller maintains a specific number of pods based on a
    template that match a set of labels. If pods are deleted (because the
    node they run on is taken out of service) the controller creates a new
    copy of that pod. A replication controller is most commonly used to
    represent a single deployment of part of an application based on a
    built image.
 
* Deployment Configuration [dc]:
    Defines the template for a pod and manages deploying new images or
    configuration changes whenever those change. A single deployment
    configuration is usually analogous to a single micro-service. Can support
    many different deployment patterns, including full restart, customizable
    rolling updates, and fully custom behaviors, as well as pre- and post-
    hooks. Each deployment is represented as a replication controller.
 
* Build Configuration [bc]:
    Contains a description of how to build source code and a base image into a
    new image - the primary method for delivering changes to your application.
    Builds can be source based and use builder images for common languages like
    Java, PHP, Ruby, or Python, or be Docker based and create builds from a
    Dockerfile. Each build configuration has web-hooks and can be triggered
    automatically by changes to their base images.
 
* Builds [build]:
    Builds create a new image from source code, other images, Dockerfiles, or
    binary input. A build is run inside of a container and has the same
    restrictions normal pods have. A build usually results in an image pushed
    to a Docker registry, but you can also choose to run a post-build test that
    does not push an image.
 
* Image Streams and Image Stream Tags [is,istag]:
    An image stream groups sets of related images under tags - analogous to a
    branch in a source code repository. Each image stream may have one or
    more tags (the default tag is called "latest") and those tags may point
    at external Docker registries, at other tags in the same stream, or be
    controlled to directly point at known images. In addition, images can be
    pushed to an image stream tag directly via the integrated Docker
    registry.
 
* Secrets [secret]:
    The secret resource can hold text or binary secrets for delivery into
    your pods. By default, every container is given a single secret which
    contains a token for accessing the API (with limited privileges) at
    /var/run/secrets/kubernetes.io/serviceaccount. You can create new
    secrets and mount them in your own pods, as well as reference secrets
    from builds (for connecting to remote servers) or use them to import
    remote images into an image stream.
 
* Projects [project]:
    All of the above resources (except Nodes) exist inside of a project.
    Projects have a list of members and their roles, like viewer, editor,
    or admin, as well as a set of security controls on the running pods, and
    limits on how many resources the project can use. The names of each
    resource are unique within a project. Developers may request projects
    be created, but administrators control the resources allocated to
    projects.
 
For more, see https://docs.openshift.com

Usage:
  oc types [options]

Examples:
  # View all projects you have access to
  oc get projects

  # See a list of all services in the current project
  oc get svc

  # Describe a deployment configuration in detail
  oc describe dc mydeploymentconfig

  # Show the images tagged into an image stream
  oc describe is ruby-centos7

Use "oc options" for a list of global command-line options (applies to all commands).

screen capture for demo recordmydesktop


Screen capture for demo purposes in the Linux world:

gtk-recordMyDesktop

You can launch the GUI and use it.


If you are not interested in sound, use (in CLI mode):
# recordmydesktop --no-sound

When you want to stop recording, press Ctrl + C.

By default, it records in the OGV format. You can upload this video directly to YouTube.

I faced some issues, and it created a log file in the home directory, namely gtk-recordMyDesktop-crash.log.
Check it out for troubleshooting.



If you wish to cut a portion of the video, you can make use of ffmpeg.

ffmpeg -ss 01:05:00 -i input.ogv -t 00:05:00 -c copy output.ogv
(input.ogv and output.ogv are placeholder file names)
This cuts a clip of duration 5 minutes, starting at 1 hour 5 minutes into the video.

Time format: hh:mm:ss

Wednesday, May 3, 2017

Jenkins and related terminology


Although I have used Jenkins as a consumer, I didn't have much of an idea about the terminology used there (pipeline, artifact, build, etc.).

I started looking into Jenkins in more depth using this document: (https://jenkins.io/user-handbook.pdf)

Installation - follow the steps from the Jenkins documentation.

To start the service:
systemctl start jenkins

Now you can access Jenkins from the browser at http://localhost:8080 (the default port).

For a sample pipeline, follow the steps from the Jenkins documentation.

So, that's it.

There seems to be so much info in Jenkins, but I restricted myself to understanding the main terminology used.

Please refer to the documentation for more 😎

This got a few things clarified for me:

  • Artifact:
An immutable file created during a pipeline/build.

  • Build:
The result of a single execution of a project.

  • Pipeline:
A user-defined model of a continuous delivery pipeline;
also, a suite of plugins which support implementing and integrating continuous delivery pipelines into Jenkins.
     Pipeline as code -> Jenkinsfile -> lives with the project source code.
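
To make "pipeline as code" concrete, here is a minimal declarative Jenkinsfile sketch (the stage names and shell steps are placeholders, not from any real project):

pipeline {
    agent any                 // run on any available Jenkins agent
    stages {
        stage('Build') {      // one execution of this pipeline == one Build
            steps {
                sh 'make'     // placeholder build command
            }
        }
        stage('Archive') {
            steps {
                // the archived file is an Artifact in the sense defined above
                archiveArtifacts artifacts: 'build/output.tar.gz'
            }
        }
    }
}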