Showing posts from 2015

Nginx vhost redirects with and without location.

Nginx is a very good web server which is fast and highly customizable. Following are a few vhost configurations and redirects used in daily work.

A simple vhost with HTTP and HTTPS configuration.
server {
    listen 80;
    server_name;
    root /data/html/;
    index index.html index.htm;
    access_log /var/log/nginx/testsite.domain.com_access.log;
    error_log /var/log/nginx/testsite.domain.com_error.log;
    include /etc/nginx/denyhost.conf;
}

server {
    listen 443 ssl;
    server_name;
    root /data/html/;
    index index.html index.htm;
    access_log /var/log/nginx/testsite.domain.com_access.log;
    error_log /var/log/nginx/testsite.domain.com_error.log;
    include /etc/nginx/denyhost.conf;
}

listen : The port on which you want the web server to listen; 80 for HTTP and 443 for HTTPS.
server_name : The domain name to be hosted.
root : The location of the site's document directory.
index : The pages which need to be rendered first.
access_log & error_log : Paths for the site's logs.
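As a minimal sketch of the redirects the title refers to, the vhost above could redirect HTTP to HTTPS server-wide, and a single path with a location block (the domain and paths here are assumed, since the original values are elided):

```nginx
server {
    listen 80;
    server_name testsite.domain.com;

    # Server-wide redirect: send all plain-HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name testsite.domain.com;
    root /data/html/;

    # Redirect only one path, using a location block
    location /old-page/ {
        return 301 /new-page/;
    }
}
```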

Sed command to replace wildcards and special characters.

Sed is a very powerful tool for replacing characters in Linux. It is very helpful in scripts when we need to replace existing entries with new ones or create entirely new entries.

To do a simple character replacement, here mp4 is replaced by abc:
[root@ip-10-0-1-24 ravi]# cat test.txt
new= -Days-Shipping-With-Swift-and-VIPER.mp4
new= -Ways-to-Enrich-the-Tech-Industry.mp4
new= All-the-IO-News-That-You-Should-Care-About.mp4
new= BottomUp-Programming-in-Swift.mp4
[root@ip-10-0-1-24 ravi]# sed -i 's/mp4/abc/g' test.txt
[root@ip-10-0-1-24 ravi]# cat test.txt
new= -Days-Shipping-With-Swift-and-VIPER.abc
new= -Ways-to-Enrich-the-Tech-Industry.abc
new= All-the-IO-News-That-You-Should-Care-About.abc
new= BottomUp-Programming-in-Swift.abc
General syntax : sed -i 's/word to be replaced/new word/g' filename

To replace characters with a wildcard, here the numbered entries are replaced by new:
[root@ip-10-0-1-24 ravi]# cat test3.txt
FILENAME1
FILENAME2
FILENAME3
FILENAME4
[root@ip-10-0-1-24 ravi]# sed -i 's/F…
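The wildcard command above is cut off; a self-contained sketch of what such a replacement could look like (the file contents mirror the example, but the exact original pattern is unknown):

```shell
# Recreate the example file (contents assumed from the listing above)
printf 'FILENAME1\nFILENAME2\nFILENAME3\nFILENAME4\n' > test3.txt
# [0-9] acts as the wildcard: it matches any single digit, so every
# numbered FILENAME entry becomes "new"
sed -i 's/FILENAME[0-9]/new/g' test3.txt
cat test3.txt
```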

Command to sort Domain Names with respect to their TTL.

If you want to sort Domain Names in order of their TTL, the following commands can be used.

First, create a file listing all the Domains which need to be checked.
[root@ip-10-0-1-24 ravi]# cat test.txt
The following command will print the domains in sorted order.
[root@ip-10-0-1-24 ravi]# for i in `cat test.txt` ; do dig +noauthority +noquestion +nostats $i @ | grep -A1 SECTION | tail -n 1 | awk '{print $2" "$1}' ; done | sort -n
9
293
299
Note that sort -n comes after done: inside the loop it would receive only one line per domain and would not sort across domains.
dig : Command to show DNS records.
+noauthority, +noquestion, +nostats : To get the TTL only.
@ : To query the given DNS server directly, avoiding a cached TTL.
grep -A1 : To get the line after the matched item.
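To see how the awk/sort pipeline orders domains by TTL without touching the network, here is a standalone sketch with invented dig-style ANSWER lines:

```shell
# Fake dig ANSWER lines (names and TTLs are made up for illustration):
# each line is "name TTL IN A value", so awk prints "TTL name" and
# sort -n orders them numerically, lowest TTL first
printf '%s\n' \
  'example.com. 299 IN A 93.184.216.34' \
  'example.org. 9 IN A 93.184.216.34' \
  'example.net. 293 IN A 93.184.216.34' |
awk '{print $2" "$1}' | sort -n
```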

How to create users from Ansible with public key and password.

Ansible is used for centrally managing tasks, and one of the major tasks is user management. To perform this we either create users with passwords or with their public keys, which is also one of the preferred ways.

To add a user with a password, first we need to create an encrypted password from the command line, which we can then pass to our Ansible playbook.

[root@localhost Desktop]# python -c 'import crypt; print crypt.crypt("userpassword", "user")'
usx7b002w0mBw
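The one-liner above uses Python 2's print syntax; if that interpreter is not available, an assumed alternative is openssl, which produces an MD5-crypt hash (stronger than the DES crypt shown) from the same example salt and password:

```shell
# Generate an MD5-crypt password hash; "user" is the salt and
# "userpassword" the plaintext, matching the Python example above.
# The output begins with $1$user$ followed by the hash.
openssl passwd -1 -salt user userpassword
```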
Use the hash "usx7b002w0mBw" in your Ansible playbook to set the user's password. The required Ansible playbook will look like this.

---
- hosts: stage
  remote_user: ec2-user
  sudo: yes
  tasks:
    - name: Add User
      user: name=user groups=wheel,dev-team password=usx7b002w0mBw
hosts : The servers on which you want to add the user.
remote_user : The user as which you want to run your commands.
sudo : Run commands as sudo.
name : Name of user to be c…
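The title also promises user creation by public key; a minimal sketch using Ansible's authorized_key module (the user name, group, and key path are assumptions):

```yaml
---
- hosts: stage
  remote_user: ec2-user
  sudo: yes
  tasks:
    - name: Add User
      user: name=user groups=wheel,dev-team
    - name: Install the user's public key
      authorized_key: user=user key="{{ lookup('file', '/root/keys/user.pub') }}"
```

The lookup('file', …) reads the public key from the control machine and installs it into the remote user's ~/.ssh/authorized_keys.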

Script to take daily backup of S3 buckets.

For a complete disaster recovery environment we should have backups of the S3 buckets as well, in case one gets deleted by accident. I am taking the backup of all S3 buckets into S3 itself using the AWS CLI tool.

The following script can be helpful in this case:
#!/bin/bash
# Created by Ravi Gadgil
# Script to daily sync data of S3 buckets in S3 bucket

echo -e "\n-----------------------------\n Starting the sync for `date`..... \n------------------------------\n"

aws s3 ls | awk '{print $3}' | grep -v 'S3-daily-sync\|production-crocodoc' > /tmp/s3buckets.txt

for i in `cat /tmp/s3buckets.txt` ; do aws s3 sync s3://$i s3://S3-daily-sync/$i/ ; done

aws s3 ls : Used to get the names of all the buckets in your S3.
grep -v : Used to exclude specific buckets that you don't want to sync. You should always exclude the bucket you are using to hold the backups of the rest of the S3 buckets, as in my case it's "S3-daily-sync", else it will keep syncing …
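To make the sync actually run daily, the script can be scheduled from cron; the script path, schedule, and log file below are assumptions:

```
# Run the S3 sync script every day at 02:00 and append its output to a log
0 2 * * * /root/scripts/s3-daily-sync.sh >> /var/log/s3-daily-sync.log 2>&1
```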

How to upgrade Ansible to latest version.

Ansible releases its updates in Git very frequently, and to get the most out of the updated modules and packages it's recommended to keep it updated.

You need to go to the directory where you have installed Ansible; in my case it's /root/ansible.
Following is the way to update Ansible:
[root@server downloads]# cd /root/ansible/
[root@server ansible]# git pull --rebase
[root@server ansible]# git submodule update --init --recursive
If you have used the EC2 module to access AWS resources, you should update the files in /etc/ansible as well. To learn how to use Ansible with AWS you can use the following link.

Following is the way to update:
[root@server ansible]# cp contrib/inventory/ /etc/ansible/hosts
[root@server ansible]# cp contrib/inventory/ec2.ini /etc/ansible/ec2.ini
[root@server ansible]# cp examples/ansible.cfg /etc/ansible/ansible.cfg
Note: After updating ansible.cfg, re-apply the changes which you had made in your previous ansible.cfg.

How to install and configure Ansible for AWS in EC2 Linux.

Ansible is a very good open source configuration management and automation tool which can run on any machine that has SSH and Python working on it. There is no need for a client-server architecture or any other language.
It has pre-built modules as well, and we can write our own in YAML. It can run any type of scripting language.

To install Ansible use following steps:
[root@server downloads]# cd ~
[root@server root]# git clone git:// --recursive
[root@server root]# cd ansible/
[root@server ansible]# source ./hacking/env-setup
env-setup is used to set the environment variables for Ansible, so to make it permanent it's recommended to add it to .bashrc.
[root@server root]# echo "source ~/ansible/hacking/env-setup" >> ~/.bashrc

There are a few dependencies for Ansible which need to be installed.
[root@server root]# easy_install pip
[root@server root]# pip install paramiko PyYAML Jinja2 httplib2 six
[root@server root]# ansible --version
an…

How to manage DNS record via AWS Route 53

AWS offers a great product in Route 53, which can be used to manage our DNS records. The great thing about Route 53 is that it can be managed by the AWS CLI, which helps in automating things via scripts.
The following commands can be used to do day-to-day work on Route 53 via the command line.

To check how many Hosted Zones are present.
[root@server newsite-setup]# aws route53 list-hosted-zones
----------------------------------------------------------------------
|                          ListHostedZones                           |
+--------------------------------------------------------------------+
||                           HostedZones                            ||
|+-------------------------+----------------------------------------+|
||  CallerReference        | 1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxd   ||
||  Id                     | /hostedzone/ZXXXXXXXXXXXX6             ||
||  Name                   |                                        ||
||  ResourceRecordSetCount | 212 …
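Records themselves are changed with aws route53 change-resource-record-sets, which reads a JSON change batch; a sketch of such a file (record name, TTL, and IP value are invented for illustration):

```json
{
  "Comment": "Example: upsert an A record (values are hypothetical)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "testsite.domain.com.",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [ { "Value": "10.0.1.24" } ]
      }
    }
  ]
}
```

Saved as change-batch.json, it would be applied with something like: aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXX6 --change-batch file://change-batch.json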

Manage AWS Load Balancer Certificates.

AWS EC2 Load Balancer certificates can be further managed via the AWS CLI.

To get list of Certificates available:

[root@server ec2-user]# aws iam list-server-certificates
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                   ListServerCertificates                                                                     |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------+
||                                                               ServerCertificateMetadataList                                                                ||
|+-----------------------------------------------------------------------+-------+------------------------+--------------------------+------------------------+|
|| …

Set rule in S3 to give all access to files added in specific folder.

In S3 we can't set permissions on a specific folder within an S3 bucket, and sometimes we need to set a global rule for the files under such a folder. We can use a Bucket Policy in this case to do this for us.

Following is the rule to give global access to all files uploaded to the /global-data folder so they can be downloaded.

To set this up, go to the root location of your bucket, then Permissions, and select Edit bucket policy.

{
  "Version": "2012-10-17",
  "Id": "Policy1438583545455",
  "Statement": [
    {
      "Sid": "Stmt1438583521051",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test-bucket/global-data/*"
    }
  ]
}
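Note that "Action": "s3:*" lets anyone write and delete objects as well; if the goal is only public downloads, a narrower variant of the same policy would grant just s3:GetObject (the Sid below is made up):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicDownloadOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-bucket/global-data/*"
    }
  ]
}
```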

How to create a super user in MySQL RDS.

RDS by default has a root user, but if we need to create a user with all access, or limited access depending on the need, the following commands are helpful.

mysql> CREATE USER 'superuser'@'%' IDENTIFIED BY 'Su93RU53R';
Query OK, 0 rows affected (0.04 sec)

mysql> GRANT ALL ON `%`.* TO superuser@`%`;
Query OK, 0 rows affected (0.01 sec)
To create a user with only read access.

mysql> CREATE USER 'dummy'@'%' IDENTIFIED BY 'Dummy123@';
Query OK, 0 rows affected (0.04 sec)

mysql> GRANT SELECT ON `%`.* TO dummy@`%`;
Query OK, 0 rows affected (0.01 sec)
GRANT SELECT will let the user view everything in the DB but not edit anything.

To check the access of any user, the following command can be used.

mysql> show grants for dummy ;
+------------------------------------------------------------------------------------------------------+
| Grants for dummy@%                                                                                   |
+--…