Tuesday, March 28, 2017

Android storage access on PC - USB detection issues on Redmi Note

The Redmi Note had issues getting detected by a PC when connected via USB.

This app helped:

MIUI USB Settings

First, ensure USB debugging is enabled.

You need to go to Additional settings -> Developer options[1] -> USB debugging.

Then open this app and enable MTP.

Now, the Android storage should be visible on the PC.

[1] To enable Developer options, tap on MIUI version a few times. (It will display how many more taps are needed to enable Developer options.)

yaml format

YAML ("YAML Ain't Markup Language") is a data serialization language with many powerful features.

Rule 1:
YAML uses indentation to represent relationships between data layers.
The convention is two spaces per level. DO NOT USE TABS.

Rule 2:

A key/value pair is written using a colon:

my_key: my_value


Rule 3:

A list of items is written with leading hyphens:
- list_value_one
- list_value_two
- list_value_three

For existing files, you can convert tabs to 2 spaces with these commands in Vim: :set tabstop=2 expandtab and then :retab.

The suggested syntax for YAML files is to use 2 spaces for indentation.
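Putting the three rules together, a small hypothetical config might look like this (two-space indentation, key/value pairs, and a nested list; all names are made up for illustration):

```yaml
# Hypothetical example combining the rules above
server:
  name: my_server
  ports:
    - 8080
    - 8443
  tags:
    - web
    - production
```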

source: https://docs.saltstack.com/en/latest/topics/yaml/index.html

vim indentation

To temporarily turn off automatic indenting (e.g. while pasting), type
:set paste
in command mode.

To turn it back on:
:set nopaste

Indenting can also happen according to the file type:
:filetype indent on

Want to know the file type? Try :set filetype - you will get the answer.


A less sophisticated approach (autoindent and smartindent):
set ai
set si
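To make these settings permanent, a sketch of possible ~/.vimrc lines (the YAML-specific autocmd is an assumption, adjust to taste):

```vim
" Sketch of ~/.vimrc settings for indentation
filetype indent on
" Use 2-space indentation and expand tabs for YAML files only
autocmd FileType yaml setlocal tabstop=2 shiftwidth=2 expandtab
```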


source: http://www.serverwatch.com/tutorials/article.php/3845506/Automatic-Indenting-With-Vim.htm

sed command example

sed stands for "stream editor".

So, you wish to replace a string in a file.


sed -i -e 's/few/asd/g' hello.txt

-i  edit the file in place
-e  expression - here a substitution, s being the substitute command
g   global - replace every occurrence on each line, not just the first

(Note: keep -i and -e separate; GNU sed treats the "e" attached in -ie as a backup suffix and leaves a hello.txte file behind.)
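A quick end-to-end run with GNU sed (hypothetical file and strings):

```shell
# Create a sample file, then replace every occurrence of "few" with "asd"
printf 'a few words, a few more\n' > hello.txt
sed -i -e 's/few/asd/g' hello.txt
cat hello.txt   # -> a asd words, a asd more
```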


Tuesday, March 21, 2017

merge changes from one branch to another locally

How to merge changes from one branch to another locally?

# git branch

# git checkout -b new_feature

// add changes in new_feature and commit them


# git checkout master

// This will pull changes FROM the new_feature branch INTO the master branch
# git merge new_feature
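The whole flow can be sketched in a throwaway repo (branch, repo, and file names are hypothetical; the config lines are only needed if no git identity is set globally, and the branch rename guards against newer Git defaulting to main):

```shell
# Create a throwaway repo with an initial commit on master
git init demo
git -C demo config user.email you@example.com
git -C demo config user.name you
git -C demo commit --allow-empty -m "initial commit"
git -C demo branch -M master

# Create the feature branch and commit some work on it
git -C demo checkout -b new_feature
echo "feature work" > demo/feature.txt
git -C demo add feature.txt
git -C demo commit -m "add feature work"

# Switch back to master and pull the feature commits in
git -C demo checkout master
git -C demo merge new_feature
```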

Tuesday, March 14, 2017

s3curl dependency - Can't locate Digest/HMAC_SHA1.pm error and fix

s3curl dependency in Centos 7

Error while running s3curl:

Can't locate Digest/HMAC_SHA1.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .)

To solve:
# yum install -y perl-Digest-HMAC.noarch


pv.x86_64 in CentOS 7 - available in EPEL release

Even after enabling the EPEL repository in CentOS, you are not able to see packages
available in EPEL.. say, for example, pv.x86_64?

Check the "enabled" flag in /etc/yum.repos.d/epel.repo.
It should be set to 1.

Toggle it to 1 and install your package.
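A sketch of flipping the flag with sed, demonstrated on a local copy (run the same sed against /etc/yum.repos.d/epel.repo as root; the sample file contents are assumptions):

```shell
# Sample repo file with the flag disabled
cat > epel-sample.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7
enabled=0
gpgcheck=1
EOF

# Flip enabled=0 to enabled=1 within the [epel] section only
sed -i '/^\[epel\]/,/^\[.*\]$/ s/^enabled=0/enabled=1/' epel-sample.repo
grep '^enabled' epel-sample.repo   # -> enabled=1
```

Alternatively, yum can enable a repo for a single command without editing the file: yum --enablerepo=epel install pv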

Monday, March 13, 2017

Create and attach the disk image to qemu-kvm VM

Create and attach the disk image

Execute these steps on the KVM hypervisor host.

cd to the folder where you store your disk images:
cd /var/lib/libvirt/images/

Create the new disk image:
qemu-img create -f raw example-vm-swap.img 1G

We use qemu-img to create a new raw disk image with a size of 1 GB.
Attach the disk to the example virtual machine using virsh:

virsh attach-disk example-vm --source /var/lib/libvirt/images/example-vm-swap.img --target vdb --persistent

We use virsh to attach the disk image /var/lib/libvirt/images/example-vm-swap.img as a virtio (/dev/vdb) disk to the domain (VM) example-vm.
The --persistent option updates the domain XML file with an element for the newly attached disk.

Note that if you already have a /dev/vdb disk you need to change vdb to a free device like vdc or vdd.


Tuesday, March 7, 2017

Access a Gluster volume as an Object Store (via S3)

Building gluster-object in a Docker container:


This document describes accessing a Gluster volume using an object interface.

The object interface is provided by gluster-swift. (2)

Here, gluster-swift runs inside a Docker container. (1)

This object interface (the Docker container) accesses a Gluster volume mounted on the host.

A bind mount of the same Gluster volume is created inside the Docker container, so the volume can be accessed using S3 GET/PUT requests.

Steps to build the gluster-swift container:

Clone docker-gluster-swift, which contains the Dockerfile:

$ git clone https://github.com/prashanthpai/docker-gluster-swift.git

$ cd docker-gluster-swift

Start the Docker service:
$ sudo systemctl start docker.service

Build a new image using the Dockerfile:
$ docker build --rm --tag prashanthpai/gluster-swift:dev .

Sending build context to Docker daemon 187.4 kB
Sending build context to Docker daemon
Step 0 : FROM centos:7
 ---> 97cad5e16cb6
Step 1 : MAINTAINER Prashanth Pai <ppai@redhat.com>
 ---> Using cache
 ---> ec6511e6ae93
Step 2 : RUN yum --setopt=tsflags=nodocs -y update &&     yum --setopt=tsflags=nodocs -y install         centos-release-openstack-kilo         epel-release &&     yum --setopt=tsflags=nodocs -y install         openstack-swift openstack-swift-{proxy,account,container,object,plugin-swift3}         supervisor         git memcached python-prettytable &&     yum -y clean all
 ---> Using cache
 ---> ea7faccc4ae9
Step 3 : RUN git clone git://review.gluster.org/gluster-swift /tmp/gluster-swift &&     cd /tmp/gluster-swift &&     python setup.py install &&     cd -
 ---> Using cache
 ---> 32f4d0e75b14
Step 4 : VOLUME /mnt/gluster-object
 ---> Using cache
 ---> a42bbdd3df9f
Step 5 : RUN mkdir -p /etc/supervisor /var/log/supervisor
 ---> Using cache
 ---> cf5c1c5ee364
Step 6 : COPY supervisord.conf /etc/supervisor/supervisord.conf
 ---> Using cache
 ---> 537fdf7d9c6f
Step 7 : COPY supervisor_suicide.py /usr/local/bin/supervisor_suicide.py
 ---> Using cache
 ---> b5a82aaf177c
Step 8 : RUN chmod +x /usr/local/bin/supervisor_suicide.py
 ---> Using cache
 ---> 5c9971b033e4
Step 9 : COPY swift-start.sh /usr/local/bin/swift-start.sh
 ---> Using cache
 ---> 014ed9a6ae03
Step 10 : RUN chmod +x /usr/local/bin/swift-start.sh
 ---> Using cache
 ---> 00d3ffb6ccb2
Step 11 : COPY etc/swift/* /etc/swift/
 ---> Using cache
 ---> ca3be2138fa0
Step 12 : EXPOSE 8080
 ---> Using cache
 ---> 677fe3fd2fb5
Step 13 : CMD /usr/local/bin/swift-start.sh
 ---> Using cache
 ---> 3014617977e0
Successfully built 3014617977e0

Set up the Gluster volume:

Start the glusterd service, then create and mount the volume.

$  su
root@node1 docker-gluster-swift$ service glusterd start

Starting glusterd (via systemctl):                         [  OK  ]
root@node1 docker-gluster-swift$
root@node1 docker-gluster-swift$

Create gluster volume:

There are three nodes with CentOS 7.0 installed.

Ensure the glusterd service is started on all three nodes (node1, node2, node3):
# systemctl start glusterd

root@node1 docker-gluster-swift$ sudo gluster volume create tv1  node1:/opt/volume_test/tv_1/b1 node2:/opt/volume_test/tv_1/b2  node3:/opt/volume_test/tv_1/b3 force

volume create: tv1: success: please start the volume to access data

- node1, node2, node3 are the hostnames

- /opt/volume_test/tv_1/b1, /opt/volume_test/tv_1/b2 and /opt/volume_test/tv_1/b3 are the bricks

- tv1 is the volume name

root@node1 docker-gluster-swift$

Start gluster volume:
root@node1 docker-gluster-swift$ gluster vol start tv1

volume start: tv1: success

root@node1 docker-gluster-swift$ gluster vol status

Status of volume: tv1
Gluster process                             TCP Port  RDMA Port  Online  Pid
Brick node1:/opt/volume_test/tv_1/b1         49152     0          Y       5951
Brick node2:/opt/volume_test/tv_1/b2         49153     0          Y       5980
Brick node3:/opt/volume_test/tv_1/b3         49153     0          Y       5980

Task Status of Volume tv1
There are no active volume tasks
root@node1 docker-gluster-swift$

Create a directory to mount the volume:
root@node1 docker-gluster-swift$ mkdir -p /mnt/gluster-object/tv1

The path /mnt/gluster-object/ will be used when running the Docker container.

Mount the volume:

root@node1 docker-gluster-swift$ mount -t glusterfs node1:/tv1 /mnt/gluster-object/tv1

root@node1 docker-gluster-swift$

Verify mount:
sarumuga@node1 test$ mount | grep mnt

node1:/tv1 on /mnt/gluster-object/tv1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)


Run a new container with the gluster mount path:

root@node1 test$ docker run -d -p 8080:8080 -v /mnt/gluster-object:/mnt/gluster-object -e GLUSTER_VOLUMES="tv1" prashanthpai/gluster-swift:dev


-p 8080:8080

publishes the container port to the host.

Format: hostport:containerport

Note: -v /mnt/gluster-object:/mnt/gluster-object
                (a)                  (b)
(a) host location where all gluster volumes are mounted
(b) location inside the container where the volume is mapped

-e GLUSTER_VOLUMES="tv1" passes the tv1 volume name as an environment variable.

Verify the container:
sarumuga@node1 test$ docker ps
CONTAINER ID        IMAGE                            COMMAND                CREATED             STATUS              PORTS                    NAMES
feb8867e1fd9        prashanthpai/gluster-swift:dev   "/bin/sh -c /usr/loc   29 seconds ago      Up 28 seconds       0.0.0.0:8080->8080/tcp   sick_heisenberg

Inspect the container and get its IP address:
sarumuga@node1 test$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' feb8867e1fd9


Verifying S3 access :

Now, verify S3 access requests to the Gluster volume.

We are going to make use of s3curl (3) for verifying object access.
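s3curl reads its credentials from a ~/.s3curl file. A sketch matching the --id 'tv1' used below (the key value is an assumption taken from these examples; keep the file readable only by you):

```perl
# ~/.s3curl - s3curl credentials file (Perl syntax)
%awsSecretAccessKeys = (
    tv1 => {
        id  => 'tv1',
        key => 'test',
    },
);
```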

Create a bucket:
# ./s3curl.pl --debug --id 'tv1' --key 'test' --put /dev/null  -- -k -v

Put an object:
# ./s3curl.pl --debug --id 'tv1' --key 'test' --put  ./README -- -k -v -s

Get an object:
# ./s3curl.pl --debug --id 'tv1' --key 'test'   -- -k -v -s

List objects in a bucket:
# ./s3curl.pl --debug --id 'tv1' --key 'test'   -- -k -v -s

List all buckets:
# ./s3curl.pl --debug --id 'tv1' --key 'test'   -- -k -v -s

Delete an object:
# ./s3curl.pl --debug --id 'tv1' --key 'test'   --del -- -k -v -s

Delete a bucket:
# ./s3curl.pl --debug --id 'tv1' --key 'test'   --del -- -k -v -s


(1) GitHub - prashanthpai/docker-gluster-swift: Run gluster-swift inside a docker container.
(2) gluster-swift/quick_start_guide.md at master · gluster/gluster-swift · GitHub
(3) Amazon S3 Authentication Tool for Curl : Sample Code & Libraries : Amazon Web Services

Sunday, March 5, 2017

centos - get installed packages with date

How do you get the list of installed packages with dates?

rpm -qa --last


Wednesday, March 1, 2017

How do you get AWS_SECRET_ACCESS_KEY ? AWS access

How do you get AWS_SECRET_ACCESS_KEY?

You need to generate one and save it somewhere safe.

Only during generation will you be able to see the secret access key, so you need to store it somewhere safely. If you don't store it at generation time, you will need to create a fresh one.

You can have at most two access keys, so delete the older one (hopefully nobody is using it now, or else they will be screwed :) ), generate a new one, and use the AWS_SECRET_ACCESS_KEY from that.

How to access?


Log in using your credentials.

Click on Your Security Credentials -> Access Key ID and Secret Access Key.

Only two can be active at a time.

If you do not have the secret access key, you can delete the older one and generate a new one.
After generation, it will prompt you to save a file.
Note down the values.
This is the only time you will be prompted to save the file / note down the secret key.

If you miss this, you need to generate a new one again :)
