Bala's Blog

JOY OF PROGRAMMING

Express functions to handle GET, POST and both

The Express app object can be used to create endpoints.

To handle a GET request, the syntax is

app.get('/api/get', function);

To handle a POST request

app.post('/api/write', function);

To handle both GET and POST on the same endpoint, Express provides app.all

app.all('/api', function);
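For completeness, here is a minimal sketch of the three cases above. The paths, handler and port are only illustrative, and a recent Express version is assumed:

var express = require('express');
var app = express();

// a shared handler, just to show which method reached the endpoint
function handler(req, res) {
  res.send('Handled ' + req.method + ' on ' + req.path);
}

app.get('/api/get', handler);    // GET only
app.post('/api/write', handler); // POST only
app.all('/api', handler);        // GET, POST and every other HTTP method

app.listen(3000);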

Thanks

Balasundaram

Varnish config for caching

Varnish is a powerful and widely used caching server.

The config below is a basic backend definition for Varnish:

backend default {

.host = "127.0.0.1";
.port = "8071";
}

It should be added to /etc/varnish/default.vcl.

Then Varnish should be started with this configuration by using

sudo varnishd -f /etc/varnish/default.vcl

(or restarted with sudo service varnish restart if it is already running).

The main advantage of Varnish caching is that it is very effective even when serving dynamic content.
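Beyond the backend definition, a cache TTL can also be set in the VCL. The snippet below is only a sketch, assuming Varnish 3.x VCL syntax; the 120-second TTL is an arbitrary example:

sub vcl_fetch {
    # cache fetched objects for 120 seconds
    set beresp.ttl = 120s;
}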

More config to follow…

Nginx Config for load balancing (using the upstream module)

The upstream module in nginx provides a way to do load balancing.

Below is an nginx config for load balancing across two Redis slaves:

upstream app_cluster_1 {
    server 127.0.0.1:6379;
    server 127.0.0.1:6380;
}

server {
    listen 0.0.0.0:8001;
    server_name nodetest.local nodetest;
    access_log /var/log/nginx/nodetest.log;
    error_log /var/log/nginx/nodetesterror.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app_cluster_1/;
        proxy_redirect off;
    }
}
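By default nginx distributes requests across the upstream servers in round-robin fashion. If one backend should receive more traffic than the other, per-server weights can be added; a small sketch (the weight values are only illustrative):

upstream app_cluster_1 {
    # this server receives roughly twice as many requests as the other
    server 127.0.0.1:6379 weight=2;
    server 127.0.0.1:6380;
}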

HAProxy Config for load balancing servers

I tried using HAProxy for load balancing between the app servers as well as the database slaves.

The generic HAProxy config is:

global
    pidfile /tmp/haproxy-queue.pid

defaults
    mode tcp
    balance roundrobin
    option httpclose
    option forwardfor

listen redis 0.0.0.0:8011
    server server5 127.0.0.1:6379 maxconn 1 check
    server server6 127.0.0.1:6380 maxconn 1 check
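For the app servers mentioned above, a similar listen section can be used in HTTP mode. This is only a sketch; the port and server addresses are illustrative:

listen webapp 0.0.0.0:8010
    mode http
    balance roundrobin
    server app1 127.0.0.1:3000 check
    server app2 127.0.0.1:3001 check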

 

For getting real-time statistics

the config below is used; the statistics can then be viewed on the configured port:

listen stats :1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
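The statistics page can then be viewed in a browser at http://server-ip:1936/. Optionally, the following line can be added to the listen stats section to protect the page with a username and password (the credentials below are only placeholders):

    stats auth admin:changeme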

Nginx Configuration for redirecting to a Node Application (CentOS)

At some point we may need a reverse proxy server to redirect requests to the application; such a server may well be called a load balancer.

Nginx is one such reverse proxy server, and today I will explain how to use it to redirect to a Node application running on a different port.

Install nginx

Install nginx either using yum (RHEL/CentOS) or using apt-get install nginx on Ubuntu.

Configuration of Nginx

CentOS:

On CentOS, once nginx is installed we will have the /etc/nginx/conf.d directory.

i) Create a file virtual.conf in the conf.d directory

ii) Add the following to that file (a minimal Node application to test this setup is sketched after these steps):

upstream app_cluster_1 {
    server localhost:8080;
}

server {
    listen 0.0.0.0:80;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app_cluster_1/;
        proxy_redirect off;
        # proxy_cache requires a cache zone named "anonymous" defined via
        # proxy_cache_path in nginx.conf; remove this line if no such zone exists
        proxy_cache anonymous;
    }
}

iii) After this we need to change default.conf.

Change the default HTTP port from 80 to some other port, i.e. change

listen 80; to listen 8011;

iv) Restart the nginx server and the redirection will be working.
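For testing, any application listening on port 8080 (the port the upstream points to) will do. A minimal sketch of such a Node application, purely illustrative:

var http = require('http');

// respond to every request so we can see nginx forwarding traffic from port 80
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the Node app behind nginx\n');
}).listen(8080);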

The Ubuntu configuration will be posted in the next post…

Any queries, feel free to ask me…

Installation and Config of Multi Node Riak Cluster

Erlang Installation

The first step in installing Riak is to install Erlang R14B03.

Installing from the source tarball

i) Install some of the required libraries before installing Erlang:

$ sudo yum install gcc glibc-devel make ncurses-devel openssl-devel

ii) Download the source tarball by using the following command

$ wget http://www.erlang.org/download/otp_src_R14B03.tar.gz

iii) Then configure and install using the commands below

tar zxvf otp_src_R14B03.tar.gz
cd otp_src_R14B03
./configure && make && sudo make install

Riak Installation

1) Some of the libraries required before installing Riak are:

i)gcc

ii)gcc-c++

iii)glibc-devel

iv)make

These can be installed using the command below

$ sudo yum install gcc gcc-c++ glibc-devel make

2) Download the source tarball by using the following command

$ wget http://downloads.basho.com/riak/riak-1.0.2/riak-1.0.2.tar.gz

3) Then build by using the commands below

$ tar zxvf riak-1.0.2.tar.gz
$ cd riak-1.0.2
$ make rel
$ make devrel
$ cd dev/dev1

Configuration of the Riak node

Change the default IP address 127.0.0.1 to 0.0.0.0 under http{} in the riak_core section of riak-1.0.2/dev/dev1/etc/app.config:

{http, [
            {"0.0.0.0", 8091 }
          ]},

Next edit the riak-1.0.2/dev/dev1/etc/vm.args file and change the -name to your IP:

-name riak@127.0.0.1 ==> -name riak@IPaddress

The IP address is the address of the EC2 instance.

Starting Riak in dev1

bin/riak start

Once this is done, Riak is started and the default port is 8091; we can check this by using curl against "http://IPaddress:8091".
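The HTTP interface also exposes a ping resource, so a quick check could look like this (replace IPaddress with the instance's address):

$ curl -v http://IPaddress:8091/ping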

Add a Second Node to Your Cluster

Repeat the steps above for the other nodes on the network. Once a node has started, you will use the bin/riak-admin command to have it join the other node in the Riak cluster.

$ dev/dev1/bin/riak-admin join dev1@Ipaddress
Sent join request to dev1@Ipaddress
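Once the join request has been sent, the ring status can be checked with riak-admin; assuming the ringready command is available in this Riak version, it reports whether all nodes agree on the ring:

$ dev/dev1/bin/riak-admin ringready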

How to get an HTTP Response using Python – 1

Today I am starting a new series of posts on Python and the interesting things that can be done with it.

The following code can be used to hit a URL and print the response in Python:

# You can use the Python standard libraries urllib, urllib2 or httplib to make HTTP requests
import urllib2

# urlopen hits the specified URL and read() returns the response body
response = urllib2.urlopen('http://www.facebook.com').read()

print response

Thanks

J.K.B.S

Supervisord utility – 1

The supervisord utility is very useful for starting, restarting and stopping processes, even from remote machines, without giving login privileges to the machine.

Supervisord can be installed by following the steps in the Supervisord installation documentation (for example, with easy_install supervisor).

This can be done when we have Python installed on our machine.
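As a preview, a program section in the supervisord configuration file looks roughly like the sketch below; the program name, command and path are only examples:

; in /etc/supervisord.conf (or a file included from it)
[program:myapp]
; command to run and supervise
command=python /home/user1/myapp.py
; start when supervisord starts, and restart the process if it exits
autostart=true
autorestart=true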

 

 

More on this will follow in upcoming posts…

Setting Up a Hadoop Cluster (Multi-Node Setup)

The following are the changes that need to be made inside the Hadoop conf folder.

1) core-site.xml

MASTER:

Changes: the IP address of the master is given in place of localhost.

<property>
  <name>fs.default.name</name>
  <value>hdfs://10.229.152.18:10011</value>
</property>

SLAVE:

Changes: similar to the master, replacing localhost with the corresponding IP address.

3) hdfs-site.xml

MASTER:

<configuration>

<!-- The value of dfs.replication is the number of slaves + the master -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<!-- Name node directory -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/user1/asl-hadoop-0.20.2+228/filesystem/name</value>
</property>

<!-- Data node directory -->
<property>
  <name>dfs.data.dir</name>
  <value>/home/user1/asl-hadoop-0.20.2+228/filesystem/data</value>
</property>

<!-- Temporary directory -->
<property>
  <name>dfs.temp.dir</name>
  <value>/home/user1/asl-hadoop-0.20.2+228/filesystem/temp</value>
</property>

</configuration>

SLAVES:

No changes; default values.

4) mapred-site.xml

MASTER:

<configuration>

<property>
  <name>mapred.job.tracker</name>
  <value>10.229.152.18:10012</value>
</property>

<!-- Local path; the directory is created automatically -->
<property>
  <name>mapred.local.dir</name>
  <value>/home/user1/asl-hadoop-0.20.2+228/local</value>
</property>

<!-- Number of map tasks; (number of slaves * 10) is a rule of thumb -->
<property>
  <name>mapred.map.tasks</name>
  <value>30</value>
</property>

<!-- Number of reduce tasks; (number of slaves * 3) is a rule of thumb -->
<property>
  <name>mapred.reduce.tasks</name>
  <value>6</value>
</property>

</configuration>

SLAVES:

No changes; default values.

5) conf/masters and conf/slaves

MASTER:

conf/masters
master_ip

conf/slaves
master_ip
slave_ip

SLAVE:

conf/masters
localhost

conf/slaves
localhost

6) Start the cluster

${HADOOP_HOME}/bin/start-all.sh

After executing start-all.sh, the output of jps should look like the following.

On the master:

23763 TaskTracker

23186 NameNode

23603 JobTracker

23359 DataNode

On the slave:

3232 DataNode

6772 TaskTracker
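To further confirm that the DataNodes have registered with the NameNode, the HDFS report can be checked from the master as a quick verification step:

${HADOOP_HOME}/bin/hadoop dfsadmin -report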

SUCCESSFULLY COMPLETED HADOOP CLUSTER…

How to remove multiple deleted files from a git repository?

First use

git add -u (stages the removal of all files in deleted state) or git add -A

After that we need to commit that as usual

git commit -m "Deleted files manually" 

This command commits all the deleted files

git push
After we push, the deleted files will be removed from the remote repository.
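An equivalent alternative is to stage only the files that git reports as deleted (a sketch; it assumes no deleted filenames contain spaces):

# git ls-files --deleted lists tracked files that have been deleted locally
git rm $(git ls-files --deleted)
git commit -m "Deleted files manually"
git push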