Posts

Showing posts from 2016

How to analyze AWS RDS slow query logs

Amazon RDS has a feature that lets us log slow queries and then analyze them to find the queries that are making our database slow.
There are a few prerequisites:
1. Slow query logging should be enabled in RDS.
2. AWS RDS CLI should be present.
3. Percona Toolkit should be present.

Once you have all the prerequisites, the following script will do the rest.
[root@ip-10-0-1-220 ravi]# cat rds-slowlog.sh
#!/bin/bash
#Script to Analyze AWS RDS MySQL logs via Percona Toolkit
#By Ravi Gadgil

#To get list of all slow logs available.
/opt/aws/apitools/rds/bin/rds-describe-db-log-files --db-instance-identifier teamie-production --aws-credential-file /opt/aws/apitools/rds/credential | awk '{print $2 }' | grep slow > /home/ravi/slowlog.txt

logfile=$(echo -e "slowlog-`date +%F-%H-%M`")
resultfile=$(echo -e "resultlog-`date +%F-%H-%M`")

for i in `cat /home/ravi/slowlog.txt` ; do
#To download Slow Log files and add them to single file.
/opt/aws/ap…
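The `awk | grep slow` parsing step can be checked offline. This is a minimal sketch on a canned sample: the `rds-describe-db-log-files` output format shown here is an assumption for illustration, not captured CLI output.

```shell
# Canned stand-in for rds-describe-db-log-files output (format assumed):
# one row per log file, with the log name in column 2.
list_slow_logs() {
  printf 'DBLOGFILES  slowquery/mysql-slowquery.log.0  1462340000  12345\n'
  printf 'DBLOGFILES  error/mysql-error.log  1462340000  999\n'
}

# Same parse as in the script: keep column 2, filter to slow logs only.
list_slow_logs | awk '{print $2}' | grep slow
# prints: slowquery/mysql-slowquery.log.0
```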

How to install Percona MySQL tools

Percona has lots of tools to analyze MySQL data, and very useful information can be extracted with them.

Installing via Yum:
[root@ip-10-0-1-220 ravi]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
Retrieving http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
Preparing...                ########################################### [100%]
   1:percona-release        ########################################### [100%]
[root@ip-10-0-1-220 ravi]# yum install percona-toolkit
Loaded plugins: auto-update-debuginfo, priorities, update-motd, upgrade-helper
1123 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package percona-toolkit.noarch 0:2.2.18-1 will be installed
--> Processing Dependency: perl(DBD::mysql) >= 1.0 for package: percona-toolkit-2.2.18-1.noarch
--> Processing Dependency: perl(DBI) >= 1.13 for…

How to setup AWS RDS CLI

AWS provides its own CLI tools, but a few RDS functions don't work with them, so the dedicated AWS RDS CLI is very helpful.

Download the RDS CLI:
[root@server downloads]# wget http://s3.amazonaws.com/rds-downloads/RDSCli.zip
Unzip the downloaded file and copy it in desired location:
[root@server downloads]# unzip RDSCli.zip
[root@server downloads]# cp -r RDSCli-1.19.004 /opt/aws/apitools/rds
[root@server downloads]# cd /opt/aws/apitools/rds/bin
Check RDS CLI version:
[root@server bin]# ./rds-version
Relational Database Service CLI version 1.19.004 (API 2014-10-31)
These commands can be added to the system PATH so you can run them from anywhere:
[root@server bin]# export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/apitools/rds/bin:/usr/local/bin
A credential file also needs to be created so the commands can authenticate, and it can be added to the system environment so we don't need to pass it every time in co…
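For reference, the legacy RDS CLI credential file is a small plain-text file; a minimal sketch (the key values are placeholders):

```
AWSAccessKeyId=AKIAXXXXXXXXXXXXXXXX
AWSSecretKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```

From memory, the RDS CLI also honors an AWS_CREDENTIAL_FILE environment variable pointing at this file, which saves passing --aws-credential-file on every command; verify against your CLI version.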

Script to add logs to Logentries in case of a shared hosting web server.

Logentries is a very good tool for storing and analyzing logs in a central location, but when we are shipping logs from an Nginx/Apache shared hosting environment it gets complex, as we need to tag each log with the host it belongs to. I am using rsyslog to forward my logs to Logentries, as it gives me more flexibility: all rsyslog functionality works.

First we need to create a main configuration file which we will use to generate the corresponding rsyslog files. It contains the Logentries secret key (token) the logs should be forwarded to.

For access log:
[root@ip-10-0-1-220 ravi]# cat access-vanila
$Modload imfile
$InputFileName access-log-location
$InputFileTag access-tag
$InputFileStateFile filestate-tag
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor # Only entered once in case of following multiple files
$InputFilePollInterval 1
$template filestate-tag,"6fb8xxxxxxxxxxxxxxxxxxxxxxe8ed %HOSTNAME% %syslogtag% %msg%\n"
if $programname == 'access-tag' then @@…
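Per-site rsyslog snippets can be stamped out from such a template with plain shell. A hypothetical sketch: the site name, log path, and tag scheme below are made-up placeholders, not the author's actual naming.

```shell
# Generate the per-site lines of an rsyslog imfile snippet for one vhost.
# Paths and tag names are illustrative assumptions.
render_site() {
  site="$1"
  printf '$InputFileName /var/log/nginx/%s_access.log\n' "$site"
  printf '$InputFileTag %s-access\n' "$site"
  printf '$InputFileStateFile state-%s-access\n' "$site"
}

render_site example.com
```

Looping `render_site` over a list of vhosts and appending `$InputRunFileMonitor` once at the end gives one combined config for all sites.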

How to install python2.7 with pip2.7

Python is one of the most popular and powerful languages around, and the following steps can be used to install it or update it to version 2.7.


Installing Python 2.7:
[root@ip-10-0-1-55 ~]# yum install python27 python27-devel
[root@ip-10-0-1-55 ~]# python --version
Python 2.7.10
If it's still showing the older version, set the new version as the default:
[root@ip-10-0-1-55 ~]# alternatives --config python

There are 2 programs which provide 'python'.

 + 1  /usr/bin/python2.6
 * 2  /usr/bin/python2.7

Enter to keep the current selection[+], or type selection number: 2
[root@ip-10-0-1-55 ~]# python --version
Python 2.7.10
Installing pip2.7:
[root@ip-10-0-1-55 ~]# wget https://bootstrap.pypa.io/get-pip.py
--2016-05-26 11:10:33--  https://bootstrap.pypa.io/get-pip.py
Resolving bootstrap.pypa.io (bootstrap.pypa.io)... 103.245.222.175
Connecting to bootstrap.pypa.io (bootstrap.pypa.io)|103.245.222.175|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length:…

How to create and remove swap partition in Linux

If our server is having memory issues and we want more memory without adding physical RAM, then swap is a very good option. It's slower than physical RAM, but it does the job, and if created on faster disks the results are good.

Create the swap file where you want it. I am creating it at /swap with a size of 2 GB.
[root@ip-10-0-1-38 /]# dd if=/dev/zero of=/swap bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 54.8004 s, 39.2 MB/s
Update the permissions of /swap to 600:
[root@ip-10-0-1-38 /]# swapon /swap
swapon: /swap: insecure permissions 0644, 0600 suggested.
[root@ip-10-0-1-38 /]# chmod 600 /swap
Start the Swap on /swap partition.
[root@ip-10-0-1-38 /]# swapon /swap
To check whether swap has started:
[root@ip-10-0-1-38 /]# swapon -s
Filename   Type   Size     Used  Priority
/swap      file   2097148  0     -1
[root@ip-10-0-1…
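A swap file enabled this way is gone after a reboot. To make it persistent, the usual approach is an /etc/fstab entry for the same path used above:

```
/swap  swap  swap  defaults  0 0
```

With that line in place, the swap is activated automatically at boot (or immediately with `swapon -a`).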

How to install SyntaxNet in Linux

SyntaxNet is an open-source neural network framework for TensorFlow that provides a foundation for Natural Language Understanding (NLU) systems.
To install it on Linux servers, the following steps can be used.

It requires Python 2.7, so if you don't have it, install it first.
[root@ip-10-0-1-55 ~]# yum install python27 python27-devel
[root@ip-10-0-1-55 ~]# python --version
Python 2.7.10
If it's still showing the older version, set the new version as the default:
[root@ip-10-0-1-55 ~]# alternatives --config python

There are 2 programs which provide 'python'.

 + 1  /usr/bin/python2.6
 * 2  /usr/bin/python2.7

Enter to keep the current selection[+], or type selection number: 2
[root@ip-10-0-1-55 ~]# python --version
Python 2.7.10
Install Java 1.8:
[root@ip-10-0-1-55 ~]# yum install java-1.8.0-openjdk*
If an old Java version still shows up, update it via the following command:
[root@ip-10-0-1-55 ~]# alternatives --config java
Make sure that your java home is pointi…

Take a dump of all the databases from a MySQL server

To take a MySQL dump of all the databases, the following script can be used; it works for a normal MySQL server as well as Amazon RDS.
[root@ip-10-0-1-231 ravi]# cat dump.sh
#!/bin/bash
#Script to get dump of all the databases within the server.

USER="root"
PASSWORD="dbpassword"

databases=`mysql -h prod-XXXXXXXXXXXX.rds.amazonaws.com -u $USER -p$PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`

for db in $databases; do
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]] ; then
        echo "Dumping database: $db"
        mysqldump -h prod-XXXXXXXXXXXXXX.rds.amazonaws.com -u $USER -p$PASSWORD --databases $db > `date +%Y%m%d`.$db.sql
        # gzip $OUTPUT/`date +%Y%m%d`.$db.sql
    fi
done
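The exclusion logic in the dump loop can be tried offline against a canned database list. This sketch expresses the same rules with a POSIX `case` (the sample database names are made up):

```shell
# Mirror of the dump script's skip rules: drop the system schemas and any
# database whose name starts with an underscore.
filter_dbs() {
  for db in "$@"; do
    case "$db" in
      information_schema|performance_schema|mysql|_*) ;;  # skipped
      *) echo "$db" ;;                                    # would be dumped
    esac
  done
}

filter_dbs mysql information_schema _tmp app1 app2
# prints app1 and app2 only
```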

How to add numbers via bash

There are cases when we need to add up numbers or the output of our commands in bash; in those cases the following commands are helpful.

[ec2-user@ip-10-0-1-38 ~]$ cat list
45
78
56
67
34
56
To do the sum:
[ec2-user@ip-10-0-1-38 ~]$ cat list | awk '{ SUM += $1} END { print SUM }'
336
The same can be used to get the sum of any output, for example from grep:
[ec2-user@ip-10-0-1-38 ~]$ cat s3data.txt | grep Size | grep Mi | awk '{ print $3 }' | awk '{ SUM += $1} END { print SUM }'
3405
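For small fixed lists, plain shell arithmetic works too. A minimal sketch with the same numbers as above, alongside the awk form wrapped as a reusable helper:

```shell
# awk summing as a reusable filter (same expression as in the post).
sum_list() { awk '{ SUM += $1 } END { print SUM }'; }

printf '45\n78\n56\n67\n34\n56\n' | sum_list
# prints 336

# Pure-shell alternative using $(( )) arithmetic, no awk needed.
total=0
for n in 45 78 56 67 34 56; do
  total=$((total + n))
done
echo "$total"
# prints 336
```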

Find size of S3 Buckets

In order to find the size of S3 buckets, we can use the following methods:

First Method: via s3api cli
[root@ip-10-0-1-231 ravi]# aws s3api list-objects --bucket bucketname --output json --query "[sum(Contents[].Size), length(Contents[])]"
[
    30864102,
    608
]
30864102: the size in bytes.
608: the number of objects in the bucket.

Second Method : via s3 cli
[root@ip-10-0-1-231 ravi]# aws s3 ls s3://bucketname --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024" MB"}'
29.4343 MB
Here bash commands shape the output into the desired form.

Third Method : via s3 cli with parameters
[root@ip-10-0-1-231 ravi]# aws s3 ls s3://bucketname --recursive --human-readable --summarize
2016-05-04 11:32:00    7.6 KiB prompthooks0.py

Total Objects: 1
   Total Size: 7.6 KiB
--human-readable: this presents the sizes in KB, MB, GB, TB etc.
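The byte-to-MB math in the second method can be sanity-checked offline. A sketch on canned `aws s3 ls --recursive` style lines (the file listing is made up; sizes are in bytes in column 3):

```shell
# Same summing/conversion as method two, wrapped as a filter.
bucket_mb() { awk '{ total += $3 } END { printf "%.1f MB\n", total/1024/1024 }'; }

# Two fake objects: 1 MiB + 2 MiB = 3 MiB total.
printf '2016-05-04 11:32:00 1048576 a/file1\n2016-05-04 11:33:00 2097152 a/file2\n' | bucket_mb
# prints "3.0 MB"
```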

To get size of all the buckets in your S3 use the following…

How to run multiple commands on servers via Ansible

Ansible is a very powerful tool for central management of servers: we can run commands on many servers from one location. But there is a limitation that the command module runs only one command at a time and can't perform complex commands, so the following script helps to overcome that.

We are creating a script and placing it in /usr/bin/ so that it can be run from any location.
[root@ip-10-0-1-231 ravi]# which prod-command.sh
/usr/bin/prod-command.sh
The following is the script used to run multiple commands on servers:

[root@ip-10-0-1-231 ec2-user]# cat /usr/bin/prod-command.sh
#!/bin/bash
#To run commands on server in group tag_Prod_VPC
#By Ravi Gadgil

echo -e "Running command on Production Server... "

for i in "$@" ; do
    ansible tag_Prod_VPC -u ec2-user -s -m shell -a "$i"
done
$@ : takes the arguments given to the script, which can be any number.
ansible : runs the ansible command line.
tag_Prod_VPC : Server Host gro…
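The key detail is that `"$@"` keeps each quoted argument intact as one command. A dry-run sketch with `echo` standing in for the real ansible call (the commands shown are arbitrary examples):

```shell
# Stand-in for prod-command.sh's loop: echo instead of ansible, so it can
# run anywhere without an inventory.
run_all() {
  for i in "$@"; do
    echo "would run: $i"
  done
}

run_all "df -h" "service nginx status"
# prints:
# would run: df -h
# would run: service nginx status
```

Invoked the same way as the real script, e.g. `prod-command.sh "df -h" "uptime"`, each quoted string becomes one shell command on the target group.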

How to check most memory utilization service in Linux

Linux has lots of tools such as free, top, htop, vmstat etc. to show system utilization, but to find the exact services using the most memory, sorted in descending order, the following command is very helpful.
[root@ip-10-0-1-231 ravi]# ps aux --sort -rss
USER     PID %CPU %MEM     VSZ    RSS TTY   STAT START TIME COMMAND
rundeck 1308 28.6 61.2 2092252 371628 ?     Ssl  10:33 0:56 /usr/bin/java -Djava.security.auth.login.con
jenkins 1272 13.7 23.6 1301096 143400 ?     Ssl  10:33 0:27 /etc/alternatives/java -Dcom.sun.akuma.Daemo
root    1561  0.0  0.4  181944   2492 pts/0 S    10:35 0:00 sudo su
root    1229  0.0  0.3   91012   2256 ?     Ss   10:33 0:00 sendmail: accepting connections
root    1563  0.0  0.3  115432   2068 pts/0 S    10:35 0:00 bash
root    1549  0.0  0.2   73688   1796 ?     Ss   10:35 0:00 ssh: /root/.ansible/cp/ansible-ssh-52.74.164
root    1519  0.0  0.2  113428   1772 ?     Ss   10:35 0:00 sshd: ec2-user [priv]
smms…

How to add multiple users in Linux with or without password

To add multiple users in Linux, the following steps can be used.

Create a file containing the list of users to be added:
[root@localhost ravi]# cat add.user
user1
user2
user3
user4
user5
Run the following command to add users:
[root@localhost ravi]# for i in `cat add.user` ; do useradd $i ; done
i : the variable that holds each value from add.user.
add.user : the file containing the names of the users you want to add.
useradd : the command used to add a user.

To check whether the users were created:
[root@localhost ravi]# for i in `cat add.user` ; do id $i ; done
uid=501(user1) gid=501(user1) groups=501(user1) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=502(user2) gid=502(user2) groups=502(user2) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=503(user3) gid=503(user3) groups=503(user3) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=504(user4) gid=504(user4)…
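For the "with password" case, one common approach is to generate `user:password` pairs and pipe them to `chpasswd` as root. A minimal sketch; the starter password is a placeholder, and only the pair generation runs here (the `chpasswd` step needs root):

```shell
# Build user:password lines from a user list on stdin.
# "ChangeMe123" is a placeholder starter password.
make_pairs() {
  while read -r u; do
    echo "$u:ChangeMe123"
  done
}

printf 'user1\nuser2\n' | make_pairs
# prints:
# user1:ChangeMe123
# user2:ChangeMe123
```

As root, the output can be applied with: `make_pairs < add.user | chpasswd`.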

Run mysqltuner on AWS RDS

In order to tune an AWS MySQL RDS instance, the following script is very helpful: it can find quite a few flaws in the DB and provide good recommendations to make RDS better.

To download the script:
wget http://mysqltuner.pl/ -O mysqltuner.pl
Run script on RDS by providing the amount of memory allocated to DB server:
[root@ip-10-0-1-55 ravi]# ./mysqltuner.pl --host rds-staging.DB.com --user root --password dbpassword --forcemem 75000
 >>  MySQLTuner 1.6.4 - Major Hayden <major@mhtx.net>
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering
[--] Performing tests on rds-staging.DB.com:3306
Please enter your MySQL administrative password:
[--] Skipped version check for MySQLTuner script
[--] Assuming 75000 MB of physical memory
[!!] Assuming 0 MB of swap space (use --forceswap to specify)
[OK] Currently running supported MySQL version 5.6.19-log
-------- Storage Engine Statist…

How to do automated NFS failover in AWS via script

NFS is always considered a single point of failure, so if we are using it we can overcome that with clusters, GlusterFS, DRBD etc., but in case you want to do a manual failover the following script can be very helpful.

The scenario used for this script is:
1. The elastic IP is attached to the NFS server.
2. lsync is used to keep the main server and the secondary in sync.
3. The main server is pinged every minute, and if 5 consecutive pings fail, the failover is performed.
4. The netfs service is restarted on all the client servers to avoid NFS stale errors.

Script:
#!/bin/bash
#Script to make secondary server as primary NFS storage server.
#By Ravi Gadgil.

#To check 54.254.X.X is up or not for 5 consecutive times
count=$(ping -c 5 54.254.X.X | grep 'received' | awk -F',' '{ print $2 }' | awk '{ print $1 }')

if [ $count -eq 0 ]; then
    # 100% failed
    echo "Host : 54.254.X.X is down (ping failed) at $(date)"
    #To disassociate IP from primar…
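The ping-parsing step can be checked offline against a canned summary line. The ping output format here is the usual iputils shape, but treat it as an assumption (it varies between ping implementations):

```shell
# Same extraction as the failover script: grab the received-packet count
# from ping's summary line.
recv_count() {
  grep 'received' | awk -F',' '{ print $2 }' | awk '{ print $1 }'
}

# Canned 100%-loss summary line (iputils-style, assumed format).
printf '5 packets transmitted, 0 received, 100%% packet loss, time 4000ms\n' | recv_count
# prints 0, which is what triggers the failover branch
```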

Script to monitor sites via response code

To find out the availability of our sites, the following script can be very helpful, as it enables us to check the status codes of the sites. I have created a file listing the sites to monitor, and then pass that file to the script.

First create a file listing the sites that need to be monitored:
[root@ip-10-0-1-55 ravi]# cat sitelist.txt
https://www.theteamie.com
https://nyp-trial.theteamie.com
http://samsung.theteamie.com
Then use the following script to monitor the sites and check the return status.
Note: I only consider status codes 200 and 301 acceptable; everything else is treated as an error.
#!/bin/bash
#By Ravi Gadgil
#Script to monitor sites using their return status

for i in `cat /home/ravi/sitelist.txt`; do
    echo -e "-----------------------------------------------------------------"
    echo -e "$i is being checked"
    res=`curl -I -s $i | grep HTTP/1.1 | awk {'print $2'}`
    echo -e "$res…
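The 200/301 acceptance rule from the note can be captured as a small helper and tested without any network access:

```shell
# Classify an HTTP status code per the rule above: 200 and 301 are OK,
# everything else is an error.
classify() {
  case "$1" in
    200|301) echo "OK" ;;
    *)       echo "ERROR" ;;
  esac
}

classify 200   # prints OK
classify 301   # prints OK
classify 502   # prints ERROR
```

In the script, the value stored in `res` can be passed straight to `classify "$res"` to decide whether to alert.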

How to use crontab with examples

Crontab is one of the most useful services in Linux which helps us to automate the tasks. It enables us to run commands on a specific interval of time.

It takes the following 5 time fields into consideration:
# Minute  Hour    Day of Month  Month              Day of Week       Command
# (0-59)  (0-23)  (1-31)        (1-12 or Jan-Dec)  (0-6 or Sun-Sat)

.---------------- minute (0 - 59)
|  .------------- hour (0 - 23)
|  |  .---------- day of month (1 - 31)
|  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
|  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
|  |  |  |  |
*  *  *  *  *  <command to be executed>
To edit cron for current user:
# crontab -e
To list cron for current user:
# crontab -l
To edit cron for any specific user:
# crontab -u ravi -e
To list cron for any specific user:
# crontab -u ravi -l
There are a few predefined strings which can help run cron at specific time frames:
string     meaning
-----…
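For concreteness, a few sample crontab entries built from the five fields above (the script paths are hypothetical):

```
# m    h    dom  mon  dow   command
0      2    *    *    *     /usr/local/bin/backup.sh     # daily at 02:00
*/5    *    *    *    *     /usr/local/bin/check.sh      # every 5 minutes
0      9    *    *    1-5   /usr/local/bin/report.sh     # weekdays at 09:00
```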

How to find daily Web server hits count

If we are hosting web servers on shared or dedicated hosting and want to know the hits count sorted in descending order, then the following script can be very helpful. We analyze the access logs in /var/log/nginx to get the daily hits counts.

If you want the daily hits count, the best time to run this is just before your log rotation happens, as after rotation the access logs become empty and new logs start to accumulate.

Script:
#!/bin/bash
#By Ravi Gadgil.
#Script to find daily hits count via web server access logs.

echo -e "Hits \t Url"

for i in `find /var/log/nginx/ -name "*access.log"`; do
    echo $i | sed -e 's/_access.log/ /g' | sed -e 's/:/ /g' | cut -d'/' -f5 --output-delimiter=' ' | awk '{printf $0}'
    grep -v 'jpg\|png\|jpeg\|gif\|js' $i | grep -ircn `date | awk ' { print $3 } '`
done | awk ' { print $2,"\011",$1 } ' | sort -nr
Note: Access lo…

Script to delete Server with attached EBS volumes.

The following script deletes a server along with all EBS volumes attached to it. In AWS it can be a tough task to remove servers that have EBS volumes attached, as the volumes need to be removed manually, so this script can be a great help.

Note: In order to make it work you need to have your AWS CLI output set to table format; otherwise you need to make a few changes for your output type.

Script:
#!/bin/bash
#By Ravi Gadgil.
#Script to delete Server with attached EBS volumes.

#Take input of server to be deleted.
echo -e "$1" > /tmp/imageid.txt

#Find EBS associated with server.
aws ec2 describe-instances --instance-ids `cat /tmp/imageid.txt` | grep vol | awk ' { print $4 }' > /tmp/vol.txt

echo -e "Following are the volume associated with it : `cat /tmp/vol.txt`:\n "
echo -e "Starting the termination of Server... \n"

#Terminating server
aws ec2 terminate-instances --instance-ids `cat /tmp/imageid.txt`

echo -e "\nDeleting the ass…

Script to delete AMI with attached snapshots.

The following script deletes an AMI along with all snapshots attached to it. In AWS it can be a tough task to remove an AMI that has snapshots attached, as the snapshots need to be removed manually, so this script can be a great help.

Note: In order to make it work you need to have your AWS CLI output set to table format; otherwise you need to make a few changes for your output type.

Script :

#!/bin/bash
#By Ravi Gadgil.
#Script to delete ami with attached snapshots.

#Take input of AMI to be deleted.
echo -e "$1" > /tmp/imageid.txt

#Find snapshots associated with AMI.
aws ec2 describe-images --image-ids `cat /tmp/imageid.txt` | grep snap | awk ' { print $4 }' > /tmp/snap.txt

echo -e "Following are the snapshots associated with it : `cat /tmp/snap.txt`:\n "
echo -e "Starting the Deregister of AMI... \n"

#Deregistering the AMI
aws ec2 deregister-image --image-id `cat /tmp/imageid.txt`

echo -e "\nDeleting the associated snapshots.... \…

Change user IAM password with AWS CLI.

In order to change the password of an IAM user in AWS, the following commands can be used.

First we need to create a JSON file containing the old and new passwords of the user.
[root@ip-10-0-1-55 ravi]# cat change.json
{
    "OldPassword": "Ravi@123",
    "NewPassword": "Ravi@1234"
}
The following command resets the password of the user it is run as:
[root@ip-10-0-1-55 ravi]# aws iam change-password --cli-input-json file://change.json