Thursday, June 29, 2017

markdown preview in google-chrome


You are updating a markdown file and wish to preview how your changes will look.

You can install the add-on below:

https://chrome.google.com/webstore/detail/markdown-preview-plus/febilkbfcbhebfnokafefeacimjdckgl?utm_source=chrome-app-launcher-info-dialog


You can edit the markdown file in your favourite editor and preview your changes live in the Chrome browser.

Wednesday, June 21, 2017

Execute ansible-playbook faster




Execute ansible faster:

Set the value below in /etc/ansible/ansible.cfg (or wherever your configuration file is):

------------------------------
[ssh_connection]
pipelining = True
------------------------------




How does this help?

===========================

pipelining

Enabling pipelining reduces the number of SSH operations required to execute a module on the remote server, by executing many ansible modules without actual file transfer. This can result in a very significant performance improvement when enabled, however when using “sudo:” operations you must first disable ‘requiretty’ in /etc/sudoers on all managed hosts.
By default, this option is disabled to preserve compatibility with sudoers configurations that have requiretty (the default on many distros), but is highly recommended if you can enable it, eliminating the need for Accelerated Mode:

pipelining = False
===========================
Source:  http://docs.ansible.com/ansible/intro_configuration.html#pipelining
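
If 'requiretty' is in effect on your managed hosts, a minimal sketch of the sudoers change mentioned above (edit with visudo on each managed host):

------------------------------
# allow sudo without a tty, so pipelined modules can run
Defaults    !requiretty
------------------------------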

Bring up a network interface






========================

If the interface (e.g. eth0) is not brought UP automatically on system boot, you can temporarily bring it up like this:

# dhclient eth0

========================

To make it permanent, edit /etc/sysconfig/network-scripts/ifcfg-eth0
and set ONBOOT to yes:
---------------------------
...
ONBOOT=yes
---------------------------

========================
Here, eth0 is the interface configured.
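
For reference, a minimal DHCP-based ifcfg-eth0 might look like this (values are illustrative):

---------------------------
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
---------------------------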


Tuesday, June 20, 2017

Progress of copy operation


See progress while carrying out a copy operation:

rsync --info=progress2 <source> <destination>

While using the cp command, there is currently no direct way to check progress. You can make use of rsync with the --info flag as above to see the progress of the copy operation.
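
For example, to copy a directory recursively while watching overall progress (paths here are illustrative):

rsync -a --info=progress2 /data/src/ /backup/dst/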



Tuesday, June 13, 2017

port open on your machine



Check whether a specific port is open on your machine:

You can make use of "netstat -tuplen"

# netstat -tuplen



For example, the httpd (apache) server listens on port 80:

# netstat -tuplen | grep httpd
tcp6       0      0 :::80                   :::*                    LISTEN      0          269550     15610/httpd        
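
On newer systems where net-tools is absent, ss (from iproute2) gives an equivalent view:

# ss -tulpen | grep 80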





Tuesday, June 6, 2017

sudo su to execute bash (and avoid sh)




Problem:

"sudo su" does not read /etc/bashrc and executes "sh" instead of bash



Solution:

You need to add the following lines to your /root/.bashrc:

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi


that's it. 
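
Alternatively, a login shell picks up the profile chain without editing /root/.bashrc:

sudo su -
# or
sudo -i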

Monday, June 5, 2017

save username/password while using github



You need to set this to store credentials on disk:


git config credential.helper store


The first time, git will ask for your credentials and store them on disk (in plain text, in ~/.git-credentials); afterwards the stored credentials are used.
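
To apply this across all your repositories rather than just the current one, add the --global flag:

git config --global credential.helper store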

Setting up fresh github repo


Setting up fresh github repo:


1. First, visit github and create a user.

2. Visit a url like https://github.com/<test_user>

3. Go to the "repositories" tab and click on "new".

4. Give it a name, e.g. "my_new_repository".

5. Now you should be able to access this link:

https://github.com/<test_user>/<my_new_repository>



Now, open terminal and follow the *sample* workflow to freshly initialize your github repo:


echo "# simple_testing" >> README.md
 

git init
 

git add README.md
 

git commit -m "first commit"

git remote add origin https://github.com/<user name>/simple_testing.git
 

git push -u origin master
 

Here, simple_testing is the "repo" name created.

Friday, June 2, 2017

github - keep your fork in sync



Keep your fork in sync with the original master:



# Now, you are in your local cloned copy of original GitHub repo:

git checkout master


You wish to update master to be in sync with original GitHub repo.



# First, add the original GitHub location as a "remote" (1) - here *upstream* is the name we chose:

git remote add upstream  https://github.com/original-repo-from-where-you-cloned.git



# fetch all branches - note we are using the remote name provided above :)  (2)

git fetch upstream


# apply all changes from the original github location to your branch, then replay your changes on top  (3)

git rebase upstream/master

Now your fork is in sync with the original repo :) - but only locally. You need to push the changes so your fork on GitHub is in sync too.


// Check status
$ git status
On branch master
Your branch is ahead of 'origin/master' by 15 commits.
  (use "git push" to publish your local commits)
nothing to commit, working directory clean

// Push all changes you pulled from origin/master
$ git push

// Now check status again
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean


You may check in the GitHub UI whether all the changes are in.


=========================================================
Notes:

(1)  git-remote  add

Adds a remote named <name> for the repository at <url>. The command git fetch <name> can then be used to create and update remote-tracking branches <name>/<branch>.

(2) git-fetch - Download objects and refs from another repository

(3)  git-rebase - Reapply commits on top of another base tip

===============================================================

arp program in linux


The arp program is used to read (and manipulate) the ARP cache, which maintains a table mapping IP addresses to their corresponding MAC addresses.

In order to get the arp program, you need to install net-tools.
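
A quick install-and-use check, assuming a Fedora/RHEL-style system:

# dnf install net-tools -y
# arp -n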

What other binaries are provided by net-tools?

You can make use of the "dnf repoquery --list net-tools" command:

# dnf repoquery --list net-tools

Output:

/usr/bin/netstat
/usr/lib/systemd/system/arp-ethers.service
/usr/sbin/arp
/usr/sbin/ether-wake
/usr/sbin/ifconfig
/usr/sbin/ipmaddr
/usr/sbin/iptunnel
/usr/sbin/mii-diag
/usr/sbin/mii-tool
/usr/sbin/nameif
/usr/sbin/plipconfig
/usr/sbin/route
/usr/sbin/slattach

/usr/share/licenses/net-tools
/usr/share/licenses/net-tools/COPYING
/usr/share/locale/cs/LC_MESSAGES/net-tools.mo
/usr/share/locale/de/LC_MESSAGES/net-tools.mo
/usr/share/locale/et_EE/LC_MESSAGES/net-tools.mo
/usr/share/locale/fr/LC_MESSAGES/net-tools.mo
/usr/share/locale/pt_BR/LC_MESSAGES/net-tools.mo
/usr/share/man/de/man5/ethers.5.gz
/usr/share/man/de/man8/arp.8.gz
/usr/share/man/de/man8/ifconfig.8.gz
/usr/share/man/de/man8/netstat.8.gz
/usr/share/man/de/man8/plipconfig.8.gz
/usr/share/man/de/man8/rarp.8.gz
/usr/share/man/de/man8/route.8.gz
/usr/share/man/de/man8/slattach.8.gz
/usr/share/man/fr/man5/ethers.5.gz
/usr/share/man/fr/man8/arp.8.gz
/usr/share/man/fr/man8/ifconfig.8.gz
/usr/share/man/fr/man8/netstat.8.gz
/usr/share/man/fr/man8/plipconfig.8.gz
/usr/share/man/fr/man8/rarp.8.gz
/usr/share/man/fr/man8/route.8.gz
/usr/share/man/fr/man8/slattach.8.gz
/usr/share/man/man5/ethers.5.gz
/usr/share/man/man8/arp.8.gz
/usr/share/man/man8/ether-wake.8.gz
/usr/share/man/man8/ifconfig.8.gz
/usr/share/man/man8/ipmaddr.8.gz
/usr/share/man/man8/iptunnel.8.gz
/usr/share/man/man8/mii-diag.8.gz
/usr/share/man/man8/mii-tool.8.gz
/usr/share/man/man8/nameif.8.gz
/usr/share/man/man8/netstat.8.gz
/usr/share/man/man8/plipconfig.8.gz
/usr/share/man/man8/rarp.8.gz
/usr/share/man/man8/route.8.gz
/usr/share/man/man8/slattach.8.gz
/usr/share/man/pt/man8/arp.8.gz
/usr/share/man/pt/man8/ifconfig.8.gz
/usr/share/man/pt/man8/netstat.8.gz
/usr/share/man/pt/man8/rarp.8.gz
/usr/share/man/pt/man8/route.8.gz

Sunday, May 28, 2017

gvim failed during git commit


I was using gvim as editor.

Despite using a proper commit message and quitting gvim properly (the top question on stackoverflow :)),
it failed with the message "Aborting commit due to empty commit message."

$ git commit
Aborting commit due to empty commit message.


All you need to do is :

git config core.editor "gvim -f"

Then try git commit again; it should work.



Why did it fail?

Without -f, gvim forks into the background and returns control immediately, so git reads the (still empty) commit message file before you have finished editing.

Looking at man gvim:

       -f          Foreground.  This option should be used when Vim is executed by a program that
                   will wait for the edit session to finish (e.g. mail).

Friday, May 26, 2017

tar/untar vs copy




tar is faster in most cases compared to cp:


// directory copy
# tar cf - directory_to_copy/  |  tar xfp -  -C  /myowntarget/

// few files copy
# tar cf - file1 file2 file3   |  tar xfp -  -C  /myowntarget/

// copy all
# tar cf  - *   |  tar xfp -  -C  /myowntarget/ 


In the first tar, "-" means stdout; it is fed through the pipe as input to the second tar, which extracts (x), preserving permissions (p), into the target directory specified by -C.


Some interesting discussion here:

https://superuser.com/questions/788502/why-is-tartar-so-much-faster-than-cp
https://stackoverflow.com/questions/316078/interesting-usage-of-tar-but-what-is-happening


Thursday, May 25, 2017

firewalld - query and open port

-------------------------
Open port 8443/tcp  in firewalld :

// first query
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --query-port=8443/tcp
no

// add port
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --add-port=8443/tcp  
success

// verify
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --query-port=8443/tcp
yes
-------------------------
Open port 53/udp   in firewalld :

// first query
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --query-port=53/udp
no

// add port
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --add-port=53/udp
success

//verify
[sarumuga@gant ]$ sudo  firewall-cmd --permanent   --query-port=53/udp
yes

-------------------------

So, to make a change effective immediately and persistent across reboots, you need to execute two commands:

// immediate - run time
firewall-cmd --add-port=443/tcp

// for future too
firewall-cmd --permanent --add-port=443/tcp
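
Alternatively, make the permanent change once and then reload firewalld to apply it to the runtime configuration:

firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload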

source: http://www.firewalld.org/documentation/man-pages/firewall-cmd.html

Wednesday, May 24, 2017

user with sudo access without password





--------------------------------------------

Oftentimes, I wish to carry out privileged operations while logged in as a regular user.

You can add this line to /etc/sudoers to avoid typing a password every time:


<username_here>   ALL=(ALL)    NOPASSWD: ALL

I usually add it below this line:

## Same thing without a password
# %wheel    ALL=(ALL)    NOPASSWD: ALL
<username_here>   ALL=(ALL)    NOPASSWD: ALL

--------------------------------------------
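
Edit the file with visudo rather than directly, so syntax errors are caught before they lock you out of sudo:

sudo visudo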

You can try commands like this:
# sudo vgs

You can become root simply by:
# sudo su


--------------------------------------------

Thursday, May 18, 2017

statically typed vs dynamically typed language

Statically typed languages 'type check' at compile time and the type can NOT change. (Don't get cute with type-casting comments, a new variable/reference is created).
Dynamically typed languages type-check at run-time and the type of a variable CAN be changed at run-time.


Python, bash - dynamically typed, interpreted

c, c++, go - statically typed, compiled
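
A tiny illustration of the dynamic case in bash, where what a variable holds can change kind at run-time without any type error:

x=42        # holds something numeric
x="hello"   # now a string - no complaint; any checking happens at run-time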

source: http://stackoverflow.com/questions/1517582/what-is-the-difference-between-statically-typed-and-dynamically-typed-languages

Tuesday, May 16, 2017

S2I in OpenShift (Kubernetes) for building Docker container image


This post is about S2I which is source to image process to build application container images for OpenShift.


About S2I :


Source-to-Image (S2I) is a framework that makes it "easy to write images" that take application source code as an input and produce a new image that runs the assembled application as output.

so, input  -> application source code
    output -> image


Two basic concepts:

1. the build process

2. S2I scripts.


Build process:


During the build process, S2I must place sources and scripts inside the builder image.

So, what is a builder image here?
 - It is the image that will build the application source, so it should contain the bits necessary to build the application.

   For example, for building a python based application, all the necessary python libs.


S2I creates a tar file that contains the sources and scripts, then "streams" that file into the builder image.

source + scripts --(compiled into)--> tar --(fed to)--> builder image --(produces)--> container image


The tar file is extracted into the default directory /tmp (this can be modified with the --destination flag).

tar + sh are necessary inside the builder image to carry out the above operation.
If tar + sh are NOT available, an additional container build is required to put both source and scripts inside the image, followed by the usual s2i build procedure.

After untarring, the assemble script is executed.


S2I scripts:


assemble
   - builds the application artifacts from a source and places them into appropriate directories inside the image.

run
  - executes your application

save-artifacts (optional)
    - gathers all dependencies that can speed up the builds that follow.
      // for ruby, installed gems; for java, m2 contents.

usage (optional)
 - informs how to properly use your image

test/run (optional)
     - creates a simple process to check if the image is running properly.


Creating S2I builder image:


The s2i tool is used for creating builder images.


A builder image contains the specific intelligence required to produce the executable image (aka build artifacts).


simple work flow:
 1. download the s2i scripts (or use the ones from inside the builder image)
 2. download the application source.
 3. s2i streams the scripts and application sources into the builder image container.
 4. it runs the assemble script, which is defined in the builder image.
 5. save the final image.


Builder image -> responsible for actually building the application (so it has to contain the necessary libraries and tools needed to build and run the application).


It needs scripts to actually perform the build and run operations:

 - assemble, for building the application
 - run, for running the application


// for bootstrapping a new s2i enabled image repo.
// generates skeleton .s2i directory and populate it with sample s2i scripts (which you can start hacking on).

s2i create <image name> <destination directory>

Example:
// Here lighttpd-centos7 is the *future builder image name*
// s2i-lighttpd is the directory created
s2i create lighttpd-centos7 s2i-lighttpd


// build test-app using lighttpd-centos7 as builder image , output image is lighttpd-centos7-app
s2i build test/test-app lighttpd-centos7 lighttpd-centos7-app


Building application image using builder image:


// build a application image using builder image

$ s2i build https://github.com/openshift/django-ex centos/python-35-centos7 hello-python

Here:
source - https://github.com/openshift/django-ex
builder image - centos/python-35-centos7 // this should be present either locally or on docker hub.
output tagged image - hello-python


// You can run the built image as below :
$ docker run -p 8080:8080 hello-python


You can verify the application by visiting http://localhost:8080 in a browser.

So, S2I helps to create your docker image just from your github link :)




Wednesday, May 10, 2017

oc types - Kubernetes / OpenShift concepts


All the info below is available at your command line.
All you need to do is try the oc types command :)


Concepts and Types

Kubernetes and OpenShift help developers and operators build, test, and deploy applications in a containerized cloud environment. Applications may be composed of all of the components below, although most developers will be concerned with Services, Deployments, and Builds for delivering changes.

Concepts:

* Containers:
    A definition of how to run one or more processes inside of a portable Linux
    environment. Containers are started from an Image and are usually isolated
    from other containers on the same machine.
 
* Image:
    A layered Linux filesystem that contains application code, dependencies,
    and any supporting operating system libraries. An image is identified by
    a name that can be local to the current cluster or point to a remote Docker
    registry (a storage server for images).
 
* Pods [pod]:
    A set of one or more containers that are deployed onto a Node together and
    share a unique IP and Volumes (persistent storage). Pods also define the
    security and runtime policy for each container.
 
* Labels:
    Labels are key value pairs that can be assigned to any resource in the
    system for grouping and selection. Many resources use labels to identify
    sets of other resources.
 
* Volumes:
    Containers are not persistent by default - on restart their contents are
    cleared. Volumes are mounted filesystems available to Pods and their
    containers which may be backed by a number of host-local or network
    attached storage endpoints. The simplest volume type is EmptyDir, which
    is a temporary directory on a single machine. Administrators may also
    allow you to request a Persistent Volume that is automatically attached
    to your pods.
 
* Nodes [node]:
    Machines set up in the cluster to run containers. Usually managed
    by administrators and not by end users.
 
* Services [svc]:
    A name representing a set of pods (or external servers) that are
    accessed by other pods. The service gets an IP and a DNS name, and can be
    exposed externally to the cluster via a port or a Route. It's also easy
    to consume services from pods because an environment variable with the
    name _HOST is automatically injected into other pods.
 
* Routes [route]:
    A route is an external DNS entry (either a top level domain or a
    dynamically allocated name) that is created to point to a service so that
    it can be accessed outside the cluster. The administrator may configure
    one or more Routers to handle those routes, typically through an Apache
    or HAProxy load balancer / proxy.
 
* Replication Controllers [rc]:
    A replication controller maintains a specific number of pods based on a
    template that match a set of labels. If pods are deleted (because the
    node they run on is taken out of service) the controller creates a new
    copy of that pod. A replication controller is most commonly used to
    represent a single deployment of part of an application based on a
    built image.
 
* Deployment Configuration [dc]:
    Defines the template for a pod and manages deploying new images or
    configuration changes whenever those change. A single deployment
    configuration is usually analogous to a single micro-service. Can support
    many different deployment patterns, including full restart, customizable
    rolling updates, and fully custom behaviors, as well as pre- and post-
    hooks. Each deployment is represented as a replication controller.
 
* Build Configuration [bc]:
    Contains a description of how to build source code and a base image into a
    new image - the primary method for delivering changes to your application.
    Builds can be source based and use builder images for common languages like
    Java, PHP, Ruby, or Python, or be Docker based and create builds from a
    Dockerfile. Each build configuration has web-hooks and can be triggered
    automatically by changes to their base images.
 
* Builds [build]:
    Builds create a new image from source code, other images, Dockerfiles, or
    binary input. A build is run inside of a container and has the same
    restrictions normal pods have. A build usually results in an image pushed
    to a Docker registry, but you can also choose to run a post-build test that
    does not push an image.
 
* Image Streams and Image Stream Tags [is,istag]:
    An image stream groups sets of related images under tags - analogous to a
    branch in a source code repository. Each image stream may have one or
    more tags (the default tag is called "latest") and those tags may point
    at external Docker registries, at other tags in the same stream, or be
    controlled to directly point at known images. In addition, images can be
    pushed to an image stream tag directly via the integrated Docker
    registry.
 
* Secrets [secret]:
    The secret resource can hold text or binary secrets for delivery into
    your pods. By default, every container is given a single secret which
    contains a token for accessing the API (with limited privileges) at
    /var/run/secrets/kubernetes.io/serviceaccount. You can create new
    secrets and mount them in your own pods, as well as reference secrets
    from builds (for connecting to remote servers) or use them to import
    remote images into an image stream.
 
* Projects [project]:
    All of the above resources (except Nodes) exist inside of a project.
    Projects have a list of members and their roles, like viewer, editor,
    or admin, as well as a set of security controls on the running pods, and
    limits on how many resources the project can use. The names of each
    resource are unique within a project. Developers may request projects
    be created, but administrators control the resources allocated to
    projects.
 
For more, see https://docs.openshift.com

Usage:
  oc types [options]

Examples:
  # View all projects you have access to
  oc get projects

  # See a list of all services in the current project
  oc get svc

  # Describe a deployment configuration in detail
  oc describe dc mydeploymentconfig

  # Show the images tagged into an image stream
  oc describe is ruby-centos7

Use "oc options" for a list of global command-line options (applies to all commands).

screen capture for demo recordmydesktop


Screen capture for demo purpose in Linux world:

gtk-recordMyDesktop

You can launch GUI and use it.


If you are not interested in sound, use (in cli mode):
# recordmydesktop --no-sound

When you want to stop recording, use Ctrl + C.

By default, it records in ogv format. You can directly upload this video to youtube.

I faced some issues, and it creates a log file in the home directory, namely gtk-recordMyDesktop-crash.log.
Check it out for troubleshooting.



If you wish to cut a portion of the video, you can make use of ffmpeg:

ffmpeg -ss 01:05:00 -i <input file> -t 00:05:00 -c copy <output file>

This cuts a clip of duration 5 minutes starting at 1 hour 5 minutes.

time format is hh:mm:ss

Wednesday, May 3, 2017

Jenkins and related terminology


Although I have used Jenkins as a consumer, I didn't have much idea about the terminology used there (pipeline, artifact, build, etc.).

I started looking into Jenkins more using this document. ( https://jenkins.io/user-handbook.pdf )

Installation - follow the steps from the handbook.

To start the service:
systemctl start jenkins

Now, you can access Jenkins from the browser.

For a sample pipeline, follow the steps from the handbook.

So, that's it.

There seems to be so much info in Jenkins, but I restricted myself to understanding the main terminology used.

Please refer the documentation for more  😎

This got a few things clarified for me:

  • Artifact:
Immutable file created during pipeline/Build.

  • Build:
Result of single execution of the project.

  • Pipeline:
User-defined model of a continuous delivery pipeline.
A suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.
     Pipeline as code -> a Jenkinsfile kept in the project's source code.

Friday, April 28, 2017

GlusterFS concepts and Architecture


I made a presentation at a GlusterFS meetup about GlusterFS - Concepts and Architecture.

You can access slides from here:
http://redhat.slides.com/sarumuga/glusterfs_concepts_arch?token=iTBb9tPy


For trying out GlusterFS in CentOS, you can use this:
https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart


You can join the meetup to gain more knowledge about GlusterFS:
https://www.meetup.com/glusterfs-India/







Sunday, April 16, 2017

OpenShift origin installation issues and fix


==================================

# Add an entry for the master node and slave nodes in /etc/hosts
192.168.122.152 m1
192.168.122.210    c2
192.168.122.144    c1

# setup OpenShift cluster
ansible-playbook byo/config.yml 


# faced this error
fatal: [m1]: FAILED! => {"changed": false, "cmd": ["oc", "create", "-n", "openshift", "-f", "/usr/share/openshift/examples/image-streams/image-streams-centos7.json"], "delta": "0:00:00.181546", "end": "2017-04-16 23:59:08.477861", "failed": true, "failed_when_result": true, "rc": 1, "start": "2017-04-16 23:59:08.296315", "stderr": "Unable to connect to the server: x509: certificate signed by unknown authority", "stdout": "", "stdout_lines": [], "warnings": []}

# check differences between these two files - there are a few differences, especially the master's IP vs hostname

vimdiff /etc/origin/master/admin.kubeconfig  /root/.kube/config

# remove kube config file
mv  /root/.kube/config  /tmp/

# setup OpenShift cluster again
ansible-playbook byo/config.yml 
----------------------------------------------------------------
PS:
1.
Another error faced is:

TASK [openshift_master : Start and enable master]
FAILED - RETRYING: TASK: openshift_master : Start and enable master (1 retries left).
fatal: [m1]: FAILED! => {"attempts": 1, "changed": false, "failed": true, "msg": "Unable to start service origin-master: Job for origin-master.service failed because the control process exited with error code. See \"systemctl status origin-master.service\" and \"journalctl -xe\" for details.\n"}


2. When verified using journalctl -xe, the following error appears:
 http: TLS handshake error from 192.168.122.152:48958: read tcp4 192.168.122.152:8443->192.168.122.152:48958: read: connection reset by peer



----------------------------------------------------------------
All these issues can be resolved by removing /root/.kube/config and rebuilding the cluster using ansible-playbook again.



==================================

References:
https://wiki.centos.org/SpecialInterestGroup/PaaS/OpenShift-Quickstart

https://www.clouda.ca/blog/general/openshift-on-centos-7-quick-installation/

Wednesday, April 5, 2017

reset your display size to normal


To set your screen resolution back to normal, use xrandr like this:

$ xrandr -s 0
The "-s" option allows you to specify the size, and the "0" parameter tells xrandr to reset the screen to its default size.
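
To list the supported modes (and see which one is current) before resetting:

$ xrandr -q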

Tuesday, March 28, 2017

Android storage access in PC - usb detection issues in Redmi Note



Redmi Note had some issues getting detected by a PC (while connected via USB).


This app helped:

MIUI USB Settings
https://play.google.com/store/apps/details?id=co.vincze.usbsettings



First, ensure USB debugging is enabled.

You need to go to additional settings -> Developer options[1] -> USB debugging.

Then open this app and enable MTP.

Now, Android storage should be available in PC.


[1]
To get Developer options, you need to tap on the MIUI version a few times. (It will display how many more taps are needed to enable Developer options.)

yaml format

YAML is a human-readable data format with many powerful features.

Rule 1:
YAML uses a fixed indentation scheme to represent relationships between data layers. 
Each level consists of exactly two spaces. DO NOT USE TABS.

Rule 2:
Colons

Key/value pairs are written using a colon:

my_key: my_value

The value can also go on the following line, indented:

my_key:
  my_value

Rule 3:
Dashes

A list of items is written using dashes:

- list_value_one
- list_value_two
- list_value_three


For existing files, you can convert tabs to 2 spaces with these commands in Vim: :set tabstop=2 expandtab and then :retab.

The suggested syntax for YAML files is to use 2 spaces for indentation.

source: https://docs.saltstack.com/en/latest/topics/yaml/index.html


vim indentation



-----------------------------------
To temporarily turn off automatic indenting, type
:set paste
in command mode.

To turn it back on:
:set nopaste
-----------------------------------

Indenting happens according to the file type:
:filetype indent on

Want to know the file type? Try :set filetype - you will get the answer.

-----------------------------------

not so sophisticated:
:set ai    " autoindent
:set si    " smartindent

-----------------------------------

source: http://www.serverwatch.com/tutorials/article.php/3845506/Automatic-Indenting-With-Vim.htm


sed command example


sed - stands for stream editor.

So, you wish to replace a string in a file.

Example:

sed -i -e 's/few/asd/g' hello.txt

Here,

-i  in place (edit the file directly)

-e  expression - here used for s///, which is the substitute command

g - global: replace every occurrence in each line, not just the first.
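
A quick way to test the expression without modifying any file (the echo text is illustrative):

echo "a few words" | sed -e 's/few/asd/g'
# prints: a asd words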
 

source:
http://unix.stackexchange.com/questions/159367/using-sed-to-find-and-replace

Tuesday, March 21, 2017

merge changes from one branch to another locally



How do you merge changes from one branch to another locally?

# git branch
* master

# git checkout -b new_feature

# git branch
* new_feature
  master

# // make changes in new_feature and get them committed.


Now,

# git checkout master

// This will pull changes FROM new_feature branch TO master branch
# git merge new_feature


Tuesday, March 14, 2017

s3curl dependency - Can't locate Digest/HMAC_SHA1.pm error and fix


s3curl dependency in Centos 7

--------------------------------
Error while running s3curl:

Can't locate Digest/HMAC_SHA1.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .)
--------------------------------


To solve:
 yum install perl-Digest-HMAC.noarch -y

--------------------------------

pv.x86_64 in centos 7 - available in epel release


Even after enabling the epel repo in Centos, you are not able to see packages
available in epel... say, for example, pv.x86_64?

Check the "enabled" flag in /etc/yum.repos.d/epel.repo.
It should be set to 1.

Set it to 1 and install your package.
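
For reference, the relevant stanza in /etc/yum.repos.d/epel.repo should end up looking like this (other lines omitted):

[epel]
...
enabled=1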

Monday, March 13, 2017

Create and attach the disk image to qemu-kvm VM

Create and attach the disk image

Execute these steps on the KVM hypervisor host.

cd to the folder where you store your disk images:
cd /var/lib/libvirt/images/

Create the new disk image:
qemu-img create -f raw example-vm-swap.img 1G

We use qemu-img to create a new raw disk image with a size of 1 GB.
Attach the disk to the example virtual machine using virsh:

virsh attach-disk example-vm --source /var/lib/libvirt/images/example-vm-swap.img --target vdb --persistent

We use virsh to attach the disk image /var/lib/libvirt/images/example-vm-swap.img as a virtio (/dev/vdb) disk to the domain (vm) example-vm.
The --persistent option updates the domain xml file with an element for the newly attached disk.

Note that if you already have a /dev/vdb disk you need to change vdb to a free device like vdc or vdd.
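
Since the image is intended as swap, a sketch of the follow-up inside the guest (assuming the new disk appeared as /dev/vdb):

mkswap /dev/vdb
swapon /dev/vdb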


Source:
https://raymii.org/s/tutorials/KVM_add_disk_image_or_swap_image_to_virtual_machine_with_virsh.html#Create_and_attach_the_disk_image




Tuesday, March 7, 2017

Access Gluster volume as object storage (via S3)

Building gluster-object in Docker container:



Background:


This document is about accessing a gluster volume using an object interface.

The object interface is provided by gluster-swift. (2)

Here, gluster-swift runs inside a docker container. (1)

This object interface (docker container) accesses a Gluster volume which is mounted on the host.

For the same Gluster volume, a bind mount is created inside the docker container, and hence the volume can be accessed using S3 GET/PUT requests.






Steps to build gluster-swift container:



Clone docker-gluster-swift, which contains the Dockerfile:

$ git clone https://github.com/prashanthpai/docker-gluster-swift.git

$ cd docker-gluster-swift


Start Docker service:
$ sudo systemctl start docker.service

Build a new image using the Dockerfile:
$ docker build --rm --tag prashanthpai/gluster-swift:dev .


Sending build context to Docker daemon 187.4 kB
Sending build context to Docker daemon
Step 0 : FROM centos:7
 ---> 97cad5e16cb6
Step 1 : MAINTAINER Prashanth Pai <ppai@redhat.com>
 ---> Using cache
 ---> ec6511e6ae93
Step 2 : RUN yum --setopt=tsflags=nodocs -y update &&     yum --setopt=tsflags=nodocs -y install         centos-release-openstack-kilo         epel-release &&     yum --setopt=tsflags=nodocs -y install         openstack-swift openstack-swift-{proxy,account,container,object,plugin-swift3}         supervisor         git memcached python-prettytable &&     yum -y clean all
 ---> Using cache
 ---> ea7faccc4ae9
Step 3 : RUN git clone git://review.gluster.org/gluster-swift /tmp/gluster-swift &&     cd /tmp/gluster-swift &&     python setup.py install &&     cd -
 ---> Using cache
 ---> 32f4d0e75b14
Step 4 : VOLUME /mnt/gluster-object
 ---> Using cache
 ---> a42bbdd3df9f
Step 5 : RUN mkdir -p /etc/supervisor /var/log/supervisor
 ---> Using cache
 ---> cf5c1c5ee364
Step 6 : COPY supervisord.conf /etc/supervisor/supervisord.conf
 ---> Using cache
 ---> 537fdf7d9c6f
Step 7 : COPY supervisor_suicide.py /usr/local/bin/supervisor_suicide.py
 ---> Using cache
 ---> b5a82aaf177c
Step 8 : RUN chmod +x /usr/local/bin/supervisor_suicide.py
 ---> Using cache
 ---> 5c9971b033e4
Step 9 : COPY swift-start.sh /usr/local/bin/swift-start.sh
 ---> Using cache
 ---> 014ed9a6ae03
Step 10 : RUN chmod +x /usr/local/bin/swift-start.sh
 ---> Using cache
 ---> 00d3ffb6ccb2
Step 11 : COPY etc/swift/* /etc/swift/
 ---> Using cache
 ---> ca3be2138fa0
Step 12 : EXPOSE 8080
 ---> Using cache
 ---> 677fe3fd2fb5
Step 13 : CMD /usr/local/bin/swift-start.sh
 ---> Using cache
 ---> 3014617977e0
Successfully built 3014617977e0
$
-------------------------------

Setup Gluster volume:

Glusterd service start, create and mount volumes

$  su
root@node1 docker-gluster-swift$ service glusterd start


Starting glusterd (via systemctl):                         [  OK  ]
root@node1 docker-gluster-swift$
root@node1 docker-gluster-swift$

Create gluster volume:

There are three nodes where Centos 7.0 is installed.

Ensure the glusterd service is started on all three nodes (node1, node2, node3) as below:
# systemctl start glusterd


root@node1 docker-gluster-swift$ sudo gluster volume create tv1  node1:/opt/volume_test/tv_1/b1 node2:/opt/volume_test/tv_1/b2  node3:/opt/volume_test/tv_1/b3 force


volume create: tv1: success: please start the volume to access data
Here:

- node1, node2, node3 are the hostnames

- /opt/volume_test/tv_1/b1, /opt/volume_test/tv_1/b2 and /opt/volume_test/tv_1/b3 are the bricks

- tv1 is the volume name

root@node1 docker-gluster-swift$

Start gluster volume:
root@node1 docker-gluster-swift$ gluster vol start tv1


volume start: tv1: success
root@node1 docker-gluster-swift$

root@node1 docker-gluster-swift$ gluster vol status

Status of volume: tv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/opt/volume_test/tv_1/b1         49152     0          Y       5951
Brick node2:/opt/volume_test/tv_1/b2         49153     0          Y       5980
Brick node3:/opt/volume_test/tv_1/b3         49153     0          Y       5980

Task Status of Volume tv1
------------------------------------------------------------------------------
There are no active volume tasks
root@node1 docker-gluster-swift$

Create a directory to mount the volume:
root@node1 docker-gluster-swift$ mkdir -p /mnt/gluster-object/tv1


The path /mnt/gluster-object/ will be used while running the Docker container.

mount the volume:

root@node1 docker-gluster-swift$ mount -t glusterfs node1:/tv1 /mnt/gluster-object/tv1

root@node1 docker-gluster-swift$

Verify mount:
sarumuga@node1 test$ mount | grep mnt

node1:/tv1 on /mnt/gluster-object/tv1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

============================

Run command in the new container with gluster mount path:

root@node1 test$ docker run -d -p 8080:8080 -v /mnt/gluster-object:/mnt/gluster-object -e GLUSTER_VOLUMES="tv1" prashanthpai/gluster-swift:dev


feb8867e1fd9c240bb3fc3aef592b4162d56895e0015a6c9cab7777e11c79e06

Here:

-p 8080:8080
    publishes the container port to the host.
    format: hostport:containerport

-v /mnt/gluster-object:/mnt/gluster-object
    format: (a):(b)
    (a) location on the host where all gluster volumes are mounted
    (b) location inside docker where the volumes are mapped

-e GLUSTER_VOLUMES="tv1"
    passes the tv1 volume name as an environment variable.


Verify container :
sarumuga@node1 test$ docker ps
CONTAINER ID        IMAGE                            COMMAND                CREATED             STATUS              PORTS                    NAMES
feb8867e1fd9        prashanthpai/gluster-swift:dev   "/bin/sh -c /usr/loc   29 seconds ago      Up 28 seconds       0.0.0.0:8080->8080/tcp   sick_heisenberg

Inspect the container and get its IP address:
sarumuga@node1 test$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'  feb8867e1fd9
172.17.0.1

============================

Verifying S3 access:

Now, verify S3 access requests to the Gluster volume.

We are going to make use of s3curl (3) for verifying object access.

Create bucket:
# ./s3curl.pl --debug --id 'tv1' --key 'test' --put /dev/null  -- -k -v  http://172.17.0.1:8080/bucket7

Put object
# ./s3curl.pl --debug --id 'tv1' --key 'test' --put  ./README -- -k -v -s http://172.17.0.1:8080/bucket7/a/b/c

Get object
# ./s3curl.pl --debug --id 'tv1' --key 'test'   -- -k -v -s http://172.17.0.1:8080/bucket7/a/b/c

List objects in a bucket
# ./s3curl.pl --debug --id 'tv1' --key 'test'   -- -k -v -s http://172.17.0.1:8080/bucket7/

List all buckets
# ./s3curl.pl --debug --id 'tv1' --key 'test'   -- -k -v -s http://172.17.0.1:8080/

Delete object
# ./s3curl.pl --debug --id 'tv1' --key 'test'   --del -- -k -v -s http://172.17.0.1:8080/bucket7/a/b/c

Delete Bucket
# ./s3curl.pl --debug --id 'tv1' --key 'test'   --del -- -k -v -s http://172.17.0.1:8080/bucket7

============================



Reference:
(1) GitHub - prashanthpai/docker-gluster-swift: Run gluster-swift inside a docker container.
(2) gluster-swift/quick_start_guide.md at master · gluster/gluster-swift · GitHub
(3) Amazon S3 Authentication Tool for Curl : Sample Code & Libraries : Amazon Web Services