Posts

Showing posts from 2013

How to implement GlusterFS in Amazon AMI (AWS)

The Gluster file system provides high availability via data replication across servers. The following are the steps to implement it on Amazon AMIs:
As Amazon AMIs don't have the GlusterFS repo enabled, we need to enable it first..

[root@ip-10-144-143-144 ec2-user]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
[root@ip-10-144-143-144 ec2-user]# sed -i 's/$releasever/6/g' /etc/yum.repos.d/glusterfs-epel.repo

Install the required dependencies..
[root@ip-10-144-143-144 ec2-user]# yum install libibverbs-devel fuse-devel -y

Install the Gluster server and FUSE packages on the master server..
[root@ip-10-144-143-144 ec2-user]# yum install -y glusterfs{-fuse,-server}

Start the Gluster service on the server..
[root@ip-10-144-143-144 ec2-user]# service glusterd start
Starting glusterd: [ OK ]

Follow the same procedure on the client server and install the GlusterFS packages and st…
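The excerpt cuts off before the volume is created; the remaining steps typically look like the sketch below, assuming two servers with the hypothetical hostnames server1 and server2 and a brick directory /export/brick1 (adjust names and paths to your setup):

```
# On the master, add the second server to the trusted pool (hypothetical hostname)
gluster peer probe server2
# Create and start a 2-way replicated volume using one brick per server
gluster volume create gv0 replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start gv0
# On the client, mount the volume over FUSE
mount -t glusterfs server1:/gv0 /mnt/gluster
```

With replica 2, any file written to /mnt/gluster is stored on both bricks, which is what gives the high availability mentioned above.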

Script to copy content from server or local storage to S3

The following script can be used to copy data from a Linux server to an S3 bucket:

#!/bin/bash
## Script to copy data from stage to S3 bucket
s3cmd sync /tmp/code-backup/ s3://s3-bucket-name/backup/ >> /var/log/daily-backup.log
s3cmd : The S3 command-line client.
sync : Syncs data from the server to the S3 bucket.
/var/log/daily-backup.log : Destination of the logs.

If you want to keep only a specific number of days of backups on the server and in S3, the following script can be used; here 90 days are kept:

#!/bin/bash
## Script to copy data from stage to S3 bucket
find /tmp/code-backup/ -type f -mtime +90 -exec rm -f {} \;
s3cmd sync /tmp/code-backup/ s3://s3-bucket-name/backup/ >> /var/log/daily-backup.log
If a daily backup of only new files needs to be taken, the following script can be used:

#!/bin/bash
## Script to copy data from stage to S3 bucket
s3cmd put `find /tmp/temp-backups/ -type f -mtime -1` s3://s3-bucket-name/backup/ >> /var/log/daily-backup.log
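As a quick local check of what the `-mtime -1` expression selects, the find can be rehearsed against a scratch directory (a sketch; the s3cmd upload itself is left commented out since it needs real bucket credentials):

```shell
# Scratch directory standing in for /tmp/temp-backups/
backup_dir=$(mktemp -d)
touch "$backup_dir/new-file.sql"                   # modified just now
touch -d "3 days ago" "$backup_dir/old-file.sql"   # older than one day
# Only files modified within the last 24 hours are listed:
find "$backup_dir" -type f -mtime -1
# s3cmd put `find "$backup_dir" -type f -mtime -1` s3://s3-bucket-name/backup/
```

Running this prints only new-file.sql, confirming that yesterday's files would not be re-uploaded.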

How to proxy pass in Nginx

The following configuration can be used in Nginx to proxy pass one URL to another:

location /abc {
    rewrite /abc(.*) /$1 break;
    proxy_pass http://redirect.com;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
/abc : The location you want to proxy pass.
/abc(.*) : Matches both /abc and /abc/.
$1 : Passes only the captured part of the URI, so /abc is not appended after redirect.com.
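An equivalent approach that skips the rewrite is to let proxy_pass strip the prefix itself; this is a sketch relying on Nginx's prefix-replacement behavior when both the location and the proxy_pass URL end in a slash:

```
location /abc/ {
    # With a URI part ("/") in proxy_pass, Nginx replaces the matched
    # prefix, so a request for /abc/foo is sent upstream as /foo.
    proxy_pass http://redirect.com/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Note the trailing slashes on both lines; without the one on proxy_pass, the full original URI (including /abc/) would be forwarded unchanged.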

How to install Apache, Java, Tomcat and Solr

Here is the procedure to install Apache, Tomcat, Java and Solr:

First of all we need Apache in our box:
Download the Apache source.

tar -zxvf httpd-2.2.24.tar.gz
cd httpd-2.2.24
./configure --enable-so --enable-expires --enable-file-cache --enable-cache --enable-disk-cache --enable-mem-cache --enable-headers --enable-ssl --enable-http --disable-userdir --enable-rewrite --enable-deflate --enable-proxy --enable-proxy-connect --enable-proxy-ftp --enable-proxy-http --enable-proxy-ajp --enable-proxy-balancer --enable-cgi --disable-dbd --enable-modules=most --with-mpm=worker --prefix=/usr/local/apache2
make; make install
/usr/local/apache2/bin/apachectl -t
./configure : Specifies the modules you want to build into Apache (running ./configure with no options builds the default module set).
--prefix= : Used to specify the path where Apache should be installed.
apachectl -t : Checks the syntax of the Apache configuration.

Download Java from oracle site as Tomcat and Solr will be needing Jav…

How to delete multiple users in Linux

If you want to delete multiple system users in Linux, the following command can be used..

for user in `cat del.user`; do userdel $user; done

user : The variable that takes its values from del.user.
del.user : The file containing the names of the users you want to delete.
userdel : The command used to delete a user.
userdel -r : Use this if you want to delete the user's home directory as well.

File having the user names which need to be removed:
# cat del.user
ravi
roma
ben
honey
chin
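Before running the real thing, the loop can be rehearsed as a dry run that echoes each userdel command instead of executing it (a sketch using a temporary file in place of del.user):

```shell
# Build a stand-in for del.user with three sample names
list=$(mktemp)
printf '%s\n' ravi roma ben > "$list"
# Dry run: print the commands that would be executed
for user in `cat "$list"`; do echo "userdel -r $user"; done
rm -f "$list"
```

Once the printed commands look right, drop the echo to actually delete the users (run as root).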

How to check logs hour-wise and find the maximum requests in respective hours..

If you want to check logs per hour from a day's log file, use the following command:

Here time.list is the file containing the hours for which logs are needed..

for time in `cat time.list`; do grep -i "30/Jul/2013:$time:" access.log > logs_$time:00-$time:59.txt; done
It will create separate text files containing the log entries within each time period, e.g. "logs_12:00-12:59.txt".


If you need to know the maximum requests within specific hours, use the following command:

for time in `cat time.list`; do grep -i "30/Jul/2013:$time:" access.log | awk '{ print $7 }' | sort | uniq -cd | sort -nr | head -15 > maxhit_$time:00-$time:59.txt; done
awk '{ print $7 }' : It will take the 7th field (the requested URL) from each log line.
sort : It will sort the requests so that duplicates are adjacent.
uniq -cd : It will count each unique entry and not display the ones having only a single occurrence.
sort -nr : It will sort the counts numerically in descending order.
head -15 : It will display the top 15 results.

Command will create the separate t…
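The counting part of the pipeline can be tried on a few made-up request paths to see how the ranking works (a sketch; the paths are invented for illustration):

```shell
# Sample request paths, standing in for awk's 7th log field
f=$(mktemp)
printf '%s\n' '/index.html' '/index.html' '/index.html' '/login' '/login' '/about' > "$f"
# Count duplicates and rank them, as in the maxhit command above
sort "$f" | uniq -cd | sort -nr | head -15
```

This prints /index.html with count 3 first and /login with count 2 next; /about appears only once, so `uniq -cd` drops it entirely.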

script to backup log files..

If there are multiple log files which need to be compressed and then nulled, the following script can be used:

# vi log_compress.sh

#!/bin/bash
echo $1_`date +%d%B%y`
cat $1 | gzip > $1_`date +%d%B%y`.gz
cp /dev/null $1
# chmod 755 log_compress.sh

For example, to compress error.log use the following command:

# sh log_compress.sh error.log

It will compress the error log in .gz format and null the original error.log.
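The script's effect can be verified end to end on a throwaway file; this sketch inlines the same three lines as log_compress.sh and runs them in a temporary directory:

```shell
workdir=$(mktemp -d)
cd "$workdir"
echo "sample error line" > error.log
# Same logic as log_compress.sh: gzip the log under a dated name...
cat error.log | gzip > error.log_`date +%d%B%y`.gz
# ...then truncate the original in place (keeps the file handle valid
# for any process still writing to it, unlike rm)
cp /dev/null error.log
ls error.log_*.gz
```

Decompressing the .gz afterwards gives back the original contents, while error.log itself is left at zero bytes.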



How to install or update New Relic to the latest version

Use the following steps to update New Relic to the latest version:

URL to track the latest release:
http://download.newrelic.com/php_agent/release/

Steps:
wget http://download.newrelic.com/php_agent/release/newrelic-php5-3.7.5.7-linux.tar.gz
tar -xvf newrelic-php5-3.7.5.7-linux.tar.gz
cp -r newrelic-php5-3.7.5.7-linux /usr/local/
cd /usr/local/newrelic-php5-3.7.5.7-linux/
ls
agent daemon LICENSE MD5SUMS newrelic-install README scripts
./newrelic-install
/usr/local/apache2/bin/apachectl stop
/usr/local/apache2/bin/apachectl start
php -i | grep Relic
New Relic RPM Monitoring => enabled
New Relic Version => 3.7.5.7 ("hadrosaurus")

How to change permissions of files and directories using the find command

Find is a very powerful tool in Linux which is used to search for files and then perform any required action on them.

To Find the files having specific permission:

# find /tmp/ -type f  -perm 666

> /tmp/ : The path of the directory in which you want to search.
> -type f : Here we are searching for files only.
> -perm 666 : Searching for files having permission 666.

To Find the files having specific name and permission:

# find /tmp/ -type f -name '*.php' -perm 666

> -name '*.php' : Used to search for files whose names end in '.php'.

To exclude any folder from find and change the permission of any specific file:

# find /tmp/ -name "extras" -prune -o -type f -iname '*.cgi' -print -exec chmod 774 {} \;

> -name "extras" -prune -o : It will exclude the "extras" folder from the search.
> -iname '*.cgi' : It will search for all files with the '.cgi' extension, case-insensitively.
> -print : It will print all t…
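The permission search and chmod action can be tried safely on scratch files first (a sketch using a temporary directory and .php names instead of real web content):

```shell
dir=$(mktemp -d)
touch "$dir/open.php" "$dir/locked.php"
chmod 666 "$dir/open.php"     # world-writable: should match
chmod 644 "$dir/locked.php"   # should not match
# Only the mode-666 file is printed, and chmod tightens it to 644
find "$dir" -type f -name '*.php' -perm 666 -print -exec chmod 644 {} \;
```

Running the same find again afterwards prints nothing, since no file is left with mode 666.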

How to create Amazon Web Services (AWS) EC2 instances..

Amazon Web Services is the most used and the best cloud computing platform out there in the market today. The following is the way to create an EC2 (Elastic Compute Cloud) instance..

Log in to your AWS account and go into EC2 under Compute and Networking..

In the EC2 dashboard you will find the following screen, which has info about all the currently running instances, Elastic IPs, load balancers, etc..
To create an instance, click Launch Instance..

After clicking Launch Instance you will get the following screen..
Classic Wizard: It gives you many customization options for the instance..
Quick Launch Wizard: It has predefined parameters and is used to get images up and running quickly..
AWS Marketplace: It's where you can buy images other than the defaults provided by Amazon, like CentOS, Debian, etc.

We will be creating the image via the Classic Wizard..


Here you see the images which are provided by AWS; the star indicates that the images are free to use without any addi…