Thursday, June 28, 2012

amazon route 53


                                AMAZON ROUTE 53

Amazon Route 53 is a DNS web service provided by Amazon for high availability and scalability. It is used to route traffic to your site reliably and, with weighted records, to spread that traffic across servers.

  1. Log in to the AWS Management Console and activate the Amazon Route 53 service.
  2. In Amazon Route 53, enter the domain of your site as the name; name server (NS) records are then generated for it. If your domain already uses name servers at your registrar, replace the old name servers with the new ones.
  3. After submitting the new name servers, it can take 2 – 48 hours for the change to propagate across the Internet if you are migrating from old infrastructure.
  4. Then create an A record in Route 53: go to the Amazon Management Console, then to Instances, copy the elastic IP of the proxy EC2 instance, paste it into the Route 53 record set and save. Do the same for the other proxy EC2 instance, if available, then save the records.
  5. Create weighted round robin A records: create a record set, give it a name, keep TTL=300, copy the elastic IP of the first proxy EC2 instance, paste it into the value of the record set and give it weight w1=1. Do the same for the other EC2 instance with weight w2=1. With equal weights the incoming traffic alternates between the proxies: the 1st request goes to the 1st proxy, the 2nd request to the 2nd proxy, the 3rd request to the 1st proxy, the 4th request to the 2nd proxy, and so on. (A CLI sketch of this step follows the list.)
  6. Create the canonical name (CNAME) record. For this we require an Elastic Load Balancer: after creating the load balancer in the Amazon Management Console, copy its DNS name from the description tab, then in Route 53 create a record set, give it a name, set type=CNAME and TTL=300, paste the DNS name as the value and save the record set.
  7. Create the Alias record: copy the DNS name and hosted zone ID of the Elastic Load Balancer, then in Route 53 create a record set, give it a name, set type=A (IPv4 address), enable Alias, paste the DNS name as the alias target and the hosted zone ID as the alias hosted zone ID, then create the record set.
  8. Create the MX record for the domain in Route 53: set the type to MX and enter the MX records of the mail server as the values.
  9. Create an SPF record: set type=SPF and TTL=300, give the value for the record, and save; this helps prevent mail forgery.
  10. Create NS records for your subdomains: Route 53 generates new NS records for the subdomain; copy these records, create a record set in Route 53, place the NS records in the values, set TTL=300 and type=NS, and save.
  11. The Amazon Route 53 setup is now complete.
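As a rough illustration of the weighted record in step 5, here is a minimal sketch using the AWS command line interface, assuming the AWS CLI is installed and configured; the hosted zone ID, domain name and elastic IP below are placeholders:

# change batch describing one weighted A record (placeholder values)
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": "proxy-1",
        "Weight": 1,
        "TTL": 300,
        "ResourceRecords": [ { "Value": "203.0.113.10" } ]
      }
    }
  ]
}
EOF

# apply the change batch to the hosted zone (placeholder zone ID)
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE \
    --change-batch file://change-batch.json

Repeating this with SetIdentifier "proxy-2", weight 1 and the second elastic IP gives the alternating behaviour described in step 5.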

Tuesday, June 26, 2012

elastic load balancer

What is elastic load balancing?


What is meant by load balancing? When the incoming traffic to your site is more than one server can handle, you divert part of the load to another server; a load balancer is used to share the traffic between the servers. The load balancer spreads the load across your servers for better reliability.
amazon elastic load balancing


How to create an elastic load balancer in the Amazon Management Console

1) Go to the Amazon Management Console.
2) Go to Elastic Load Balancer.
3) Then create the elastic load balancer.
4) Define the elastic load balancer name and save it; for the listener give the load balancer port as 80 and the EC2 instance port as 80 (port 80, HTTP, is the default).
5) Then configure the health check options: give the ping protocol as HTTP, the ping port as 80 and the ping path as /.
6) In the advanced options give the response timeout as 5 seconds, the health check interval as 0.5 minutes, the unhealthy threshold as 2 and the healthy threshold as 8, then continue. (A CLI sketch of these settings follows the steps.)
7) Then add the EC2 instances to the load balancer.
8) Then review the status of the load balancer.
9) Then save the load balancer.
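If you prefer the command line, roughly the same setup can be sketched with the classic ELB commands of the AWS CLI; the load balancer name, availability zone and instance ID below are placeholders, and this is only an illustration, not the console procedure itself:

# create a classic load balancer listening on port 80
aws elb create-load-balancer \
    --load-balancer-name my-proxy-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-east-1a

# health check matching the settings above: HTTP GET on /, 5 s timeout,
# 30 s (0.5 min) interval, unhealthy after 2 failures, healthy after 8 successes
aws elb configure-health-check \
    --load-balancer-name my-proxy-elb \
    --health-check Target=HTTP:80/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=8

# register the EC2 instance(s) behind it (placeholder instance ID)
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-proxy-elb \
    --instances i-0123456789abcdef0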



Friday, June 22, 2012

vmplayer installation in ubuntu


How to install vmplayer in ubuntu:

To install VMware Player, download it from the VMware site: create an account and download the VMware Player package for Linux 32-bit. After the download has completed, give full permissions to the file like this:

#chmod 777 VMware-Player-3.1.4-385536.i386.bundle

then run the installer:

#./VMware-Player-3.1.4-385536.i386.bundle

After this the file is extracted and the installation finishes. Then go to Applications → System Tools → VMware Player. Once it is running, create a virtual machine from an ISO image of the operating system you would like to run in VMware Player, and the setup is complete.

youtube video downloads


How to download youtube videos in ubuntu directly:

You can download videos from YouTube directly with the software ClipGrab. ClipGrab gives you a search box where you can type the name of the video you would like to download; it then shows a download option. Click on "Grab this clip" and the download starts directly. The format can be changed while downloading, i.e. MPEG, WMV or MP3. You can download ClipGrab from its website.
Alternatively, if that does not work, you can also install it from the command line:
$ sudo add-apt-repository ppa:clipgrab-team/ppa
$ sudo apt-get update
$ sudo apt-get install clipgrab

After this ClipGrab will be installed in Ubuntu.

Tuesday, June 12, 2012

backup



Backup Doc



Full Backup
Full backup is the starting point for all other backups, and contains all the data in the folders and files that are selected to be backed up. Because full backup stores all files and folders, frequent full backups result in faster and simpler restore operations. Remember that when you choose other backup types, restore jobs may take longer.

Incremental Backup

Incremental backup means backing up everything that has changed since the last backup, whether that was a full backup or a previous incremental.


Differential Backup
Differential backup is often confused with incremental backup, but it is not the same: a differential backup copies everything that has changed since the last full backup, regardless of any incremental backups taken in between. It therefore offers a middle ground between full and incremental backups.
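As a concrete, hedged illustration of the difference, GNU tar can take full and incremental backups with a snapshot file; the paths below are placeholders:

# first run against a fresh snapshot file -> a full (level 0) backup of /home
tar --create --gzip \
    --listed-incremental=/backup/home.snar \
    --file=/backup/home-full.tar.gz /home

# later runs with the same snapshot file store only what changed since the
# previous run, i.e. incremental backups
tar --create --gzip \
    --listed-incremental=/backup/home.snar \
    --file=/backup/home-inc1.tar.gz /home

For a differential-style backup, keep a copy of the snapshot file taken right after the full backup and restore that copy before each run, so every backup is compared against the full backup instead of the previous incremental.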




What to backup?
If there is room on the backup media, and time limits permit running backups long enough, it is probably wisest to back up everything. You may skip /tmp or other places known to contain only temporary files that nobody wants to back up.
If space or time limits place restrictions, consider not backing up the following:
  • Files that come directly from a CD or other removable media. It may even be faster to copy them again from the CD than to restore them from backup media.
  • Files that can be regenerated easily, for example object files that can be made with make. Just make sure all the source files and compilers are backed up.
  • Files that can be downloaded again if the Internet connection is fast. Just keep a list of the files and where to download them from.


Backup devices and media

You need some media to store the backups. It is preferable to use removable media, to store the backups away from the computer and to get “unlimited” storage for backups.

If the backups are on-line, they can be wiped out by mistake. If the backups are on the same disk as the original data, they do not help at all if the disk fails and is not readable anymore. If the backup media is cheap, it is possible to take a backup every day and store them indefinitely.

Floppies, disks, tapes, CD-R and CD-RW are the media available for backup.


Planning a Backup

Before doing a backup, plan it carefully. Consider
Which files are irreplaceable without a backup. Irreplaceable files probably include those in users’ home directories (including /root), and configuration files, such as those in the /etc/ directory.
Which files are on removable drives, such as CDs or floppies. Since you probably do not need to back up removable drives, you might unmount them before doing a complete system backup.

Which files can be easily replaced by installing a package or doing a selective install or upgrade of the operating system. You can save time and storage space by not including these files in a backup.
Which files are unnecessary or dangerous to backup. For example, files in /tmp are probably unnecessary, while restoring some files that are in the /proc directory could crash the system.

Whether to compress files using gzip or bzip2. Compressing saves space, but adds another step to the backup. Also, while compression is generally reliable, it creates another stage at which the process can fail.
Whether users are responsible for backing up their own files. Since only the root user has full permissions for all files on the system, usually backups are best done by the root user. However, if users back up their own files, you might omit backing up the home directory, or at least not back it up regularly.

Backup Locations for your Linux, Application, Database, Config Files

1. Backups on the same server

This is probably the most straightforward approach: taking a backup of your critical information (applications, databases, configuration files, etc.) and storing it on a disk on the same server. If you’ve mounted a remote dedicated backup filesystem using NFS on the local server, I still consider that as storing the backup on the same server. The disadvantage of this method is that when the whole system crashes, or if by mistake you do a rm -rf /, and erased everything on the system, you’ve lost your backup.
Taking a backup and storing it on the same server is a good starting point. In addition to this, you should consider storing your backups in one of the following locations.
2. Backups on a different server

Once you’ve taken the backup on the local server, copy the backup to a remote server. If you have a qa-server, take a backup of your production, and restore it on the qa-server. You should probably assign a dedicated server with a lot of space to store backups. When you have a dedicated server for backup, you can even initiate the backup from the dedicated remote server, and don’t have to store a copy of the backup on the local server.
For database backups, I prefer to take the backup on the local server, and copy the backup to a remote server. This way, the database backup copy is located at two different locations. If you lose one backup, you still have the other one. Also, when the database crashes on the local server, it is quick and easy to restore it from the backup located on the same server, instead of copying the backup from the remote server to the local server and restoring it.
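A minimal sketch of copying a local backup to a dedicated backup server with rsync over SSH (the host name and paths are placeholders):

# push the local backup directory to the remote backup server
rsync -avz /backup/ backupuser@backup-server:/backups/$(hostname)/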



Note: Use mysqldump (or mysqlhotcopy) for MySQL database backups, pg_dump for PostgreSQL database backups, and RMAN for Oracle database backups.
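For example, minimal dump commands might look like this (database names and credentials are placeholders):

# dump a MySQL database to a dated file (prompts for the password)
mysqldump -u root -p exampledb > /backup/exampledb-$(date +%F).sql

# dump a PostgreSQL database (run as a user with access to the database)
pg_dump exampledb > /backup/exampledb-$(date +%F).pgsql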
3. Tape backup

If you don’t have a dedicated backup server to store a copy of all your backups, implement a tape backup solution and store all your backups on tape. Tape backups are slow. So, take a backup on the local server first and copy the backup to tape during off-peak hours or weekends. The advantage of tape backup is that the backups are easily portable: you can move them anywhere you want.
4. Backup at an off-site location

You can do all of the above and still get into trouble when disaster strikes. If the local server, backup server, and tape backups are all located at the same physical location, you might lose all the data in a disaster. So it is important that you store your backups at an off-site location.
You can either have a redundant datacenter, where all your critical applications in the primary datacenter are synced with the disaster-recovery datacenter, or at a bare minimum keep a copy of the backup tapes at an off-site location. Don’t just physically rotate the tapes and keep them at the same datacenter, which is useless in a disaster recovery scenario.


Choosing a Backup Tool


Linux has several tools for backing up and restoring files



dump / restore :
Old tools that work with filesystems, rather than files, and can back up unmounted devices. Although you can easily control what is backed up with dump by editing a single column in the /etc/fstab file, for some reason these utilities have fallen into disuse. Today, many distributions of Linux, including Debian, do not even include them by default. If you want to use dump and restore, you must install them yourself.



tar :
A standard backup tool, and by far the easiest to use. It is especially useful for backing up over multiple removable devices using the -M option.
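A minimal tar sketch (paths and devices are placeholders); the second command shows the -M multi-volume option mentioned above:

# gzip-compressed backup of /home and /etc
tar -czf /backup/system-$(date +%F).tar.gz /home /etc

# the same data spanned across multiple removable volumes
# (multi-volume archives cannot be compressed)
tar -cvMf /dev/fd0 /home /etc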



cpio :
A very flexible command, but one that is hard to use because of the unusual way in which the command must be entered.


dd :
The dd command is one of the original Unix utilities and should be in everyone’s tool box. It can strip headers, extract parts of binary files and write into the middle of floppy disks; it is used by the Linux kernel Makefiles to make boot images.
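For instance, a raw image of a partition can be taken with dd (the device and output path are placeholders; the filesystem should ideally be unmounted first):

# raw image of a partition, copied in 4 MB blocks
dd if=/dev/sda1 of=/backup/sda1.img bs=4M

# restore it later by swapping if= and of=
dd if=/backup/sda1.img of=/dev/sda1 bs=4M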



Mondo :
Mondo is reliable. It backs up your GNU/Linux server or workstation to tape, CD-R, CD-RW, DVD-R[W], DVD+R[W], NFS or hard disk partition. In the event of catastrophic data loss, you will be able to restore all of your data [or as much as you want], from bare metal if necessary. Mondo is in use by Lockheed-Martin, Nortel Networks, Siemens, HP, IBM, NASA’s JPL, the US Dept of Agriculture, dozens of smaller companies, and tens of thousands of users.



Dar:
dar is a shell command that backs up directory trees and files. It has been tested under Linux, Windows, Solaris, FreeBSD, NetBSD, MacOS X and several other systems.



Many commercial or free software back up tools are also available.








mysql server installation


INSTALLATION OF MYSQL:

To install mysql server
#apt-get install mysql-server
then it prompts for a new password for the MySQL root user
give the new password and repeat it when asked a second time

To log in to MySQL, type
#mysql -u root -p
it prompts for the password
after entering it you will get a prompt like this:
mysql>

here you can create a database and list the databases
mysql> create database sunny;
mysql> show databases;


NGINX server


How to install NGINX:

To install nginx type
#apt-get install nginx
To restart the nginx service
# /etc/init.d/nginx restart

Then remove the default site configuration that is generated while installing nginx:
#rm -rf /etc/nginx/sites-enabled/default
# rm -rf /etc/nginx/sites-available/default

To create a new configuration file
# vim /etc/nginx/sites-available/basic

Inside the file use this format
server {
  listen  127.0.0.1:80;
  server_name  basic;
  access_log  /var/log/nginx/basic.access.log;
  error_log  /var/log/nginx/basic.error.log;
  location  / {
    root  /var/www/basic;
    index  index.html index.htm;
  }
}


Then create a root directory and index.html file
# mkdir /var/www/basic
# cd /var/www/basic
# vim index.html
here type the content of your website's home page

To enable the site and restart nginx
# cd /etc/nginx/sites-enabled

# ln -s ../sites-available/basic 

# /etc/init.d/nginx restart
then go to the browser and type http://127.0.0.1/ and your site will be served by nginx
STATIC IP PROCESS:
for this you have to assign a static IP
# vim /etc/network/interfaces
there change the address, netmask and gateway for your interface (eth0 here), for example:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 192.168.1.249
netmask 255.255.255.0
gateway 192.168.1.1
then save
after that go to resolv.conf
# vim /etc/resolv.conf
and here comment out (#) the old nameserver line
if you are using the Internet, give the DNS server of your service provider as the nameserver
then save
then go to the nginx configuration for IP resolution

RESOLVING THE SITE WITH IP ADDRESS:
# vim /etc/nginx/sites-available/basic
server {
  listen  192.168.1.249:80;   # the new IP address which we have assigned
  server_name  basic;
  access_log  /var/log/nginx/basic.access.log;
  error_log  /var/log/nginx/basic.error.log;
  location  / {
    root  /var/www/basic;
    index  index.html index.htm;
  }
}
then save
then go to
# vim /etc/hosts
give the new IP address and the name of the server,
for example: 192.168.1.249 basic
then save
then after start the service
#/etc/init.d/nginx restart
then go to the browser and give the new address.

TO RESOLVE NGINX WITH HOSTNAME:
Go to nginx
# vim /etc/nginx/sites-available/basic
then
server {
  listen  192.168.1.249:80;
  server_name  sunny.example.com;   # the hostname we would like to give
  access_log  /var/log/nginx/basic.access.log;
  error_log  /var/log/nginx/basic.error.log;
  location  / {
    root  /var/www/basic;
    index  index.html index.htm;
  }
}
then save
then go to
# vim /etc/hosts
192.168.1.249 sunny.example.com
then save
after this start the service
#/etc/init.d/nginx restart
then go to the browser and type the server name, e.g. sunny.example.com, and the site will be resolved

Amazon S3



Amazon Simple Storage Service (s3)

Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. It is used for storing databases and dumps in the cloud; it is like an external hard disk in the cloud where you can upload data and download it whenever needed.
Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, secure, fast, inexpensive infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.
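As a hedged sketch of that "external hard disk" use case, uploading a dump to S3 and fetching it back with the AWS CLI could look like this (the bucket and file names are placeholders, and the CLI must be installed and configured):

# create a bucket, upload a database dump, and fetch it back later
aws s3 mb s3://example-backup-bucket
aws s3 cp /backup/exampledb.sql s3://example-backup-bucket/dumps/exampledb.sql
aws s3 cp s3://example-backup-bucket/dumps/exampledb.sql /restore/exampledb.sql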

Amazon S3 Functionality:

Amazon S3 is intentionally built with a minimal feature set:
  • Write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited.
  • Options for secure data upload/download and encryption of data at rest are provided for additional data protection.
  • Uses standards-based REST and SOAP interfaces designed to work with any Internet-development toolkit.

Data Security Details:

Amazon S3 supports several mechanisms that give you flexibility to control who can access your data as well as how, when, and where they can access it. Amazon S3 provides four different access control mechanisms: Identity and Access Management (IAM) policies, Access Control Lists (ACLs), bucket policies, and query string authentication. IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account. With IAM policies, you can grant IAM users fine-grained control to your Amazon S3 bucket or objects. You can use ACLs to selectively add (grant) certain permissions on individual objects. Amazon S3 Bucket Policies can be used to add or deny permissions across some or all of the objects within a single bucket. With Query string authentication, you have the ability to share Amazon S3 objects through URLs that are valid for a predefined expiration time.
Amazon S3’s standard storage is:
  • Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
  • Designed to sustain the concurrent loss of data in two facilities.





static ip ubuntu


for this you have to assign a static IP

# vim /etc/network/interfaces
there change the address, netmask and gateway for your interface (eth0 here), for example:
auto eth0
iface eth0 inet static
address 192.168.1.249
netmask 255.255.255.0
gateway 192.168.1.1
then save
after that go to resolv.conf
# vim /etc/resolv.conf
and here comment out (#) the old nameserver line
if you are using the Internet, give the DNS server of your service provider as the nameserver
then save
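After saving, apply the new address; a small sketch, assuming the classic Ubuntu init script and an eth0 interface:

# restart networking so the static address takes effect
sudo /etc/init.d/networking restart

# or, for a single interface
sudo ifdown eth0 && sudo ifup eth0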

lan chat window



FREE LAN CHAT FOR WINDOWS:

It is easy to download and install, and there is no requirement to give a username, email ID or password.
The free, open source LAN chat client (LAN Messenger) can be downloaded from its website.

Key features
  • Instant messaging
    Connect and chat with users on your network.
  • Secure messaging for privacy
    All messages are protected by AES encryption with RSA as the key exchange mechanism.
  • Broadcast messages
    Send notifications to all users or specified users.
  • File transfer
    Exchange files with others easily.
  • Organize contacts
    Arrange your contacts into groups for easier management.
  • Message logging
    Past conversations are logged and can be retrieved at any time.
  • Serverless architecture
    A server does not need to be set up on the network for LAN Messenger to work.
  • No internet connection required
    As the name suggests, LAN Messenger works inside the local network and does not require internet access. This helps to minimize external threats.
  • Multi-language user interface
    You can select the language for the user interface.
  • Cross-platform support
    All the features of this application are supported on Windows, Mac and Linux. The interface is fully integrated with the native environment of each platform.


lan chat ubuntu



Free lan chat for ubuntu:

  1. Go to Applications; at the bottom you can see the Ubuntu Software Center.
  2. In the search box, type the name iptux.
  3. You can then see iptux in the results.
  4. Install iptux on Ubuntu.
  5. After this you can chat over the LAN with other users who are running iptux on Ubuntu.
In iptux you can find other systems on the LAN by their IP address. (A terminal install alternative is sketched below.)
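If you prefer the terminal, the same package can be installed with apt-get (assuming iptux is available in the Ubuntu repositories, as it was at the time):

sudo apt-get update
sudo apt-get install iptux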

ssh login


SSL (SECURE SOCKETS LAYER):

SSL (Secure Sockets Layer) is the standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browsers remains private and integral. SSL is an industry standard and is used by millions of websites in the protection of their online transactions with their customers.
To be able to create an SSL connection a web server requires an SSL Certificate. When you choose to activate SSL on your web server you will be prompted to complete a number of questions about the identity of your website and your company. Your web server then creates two cryptographic keys - a Private Key and a Public Key.
The Public Key does not need to be secret and is placed into a Certificate Signing Request (CSR) - a data file also containing your details. You should then submit the CSR. During the SSL Certificate application process, the Certification Authority will validate your details and issue an SSL Certificate containing your details and allowing you to use SSL. Your web server will match your issued SSL Certificate to your Private Key. Your web server will then be able to establish an encrypted link between the website and your customer's web browser.
The complexities of the SSL protocol remain invisible to your customers. Instead their browsers provide them with a key indicator to let them know they are currently protected by an SSL encrypted session: the lock icon in the lower right-hand corner. Clicking on the lock icon displays your SSL Certificate and the details about it. All SSL Certificates are issued to either companies or legally accountable individuals.
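As a hedged illustration of the private key and CSR step described above, generating them with OpenSSL looks roughly like this (the file names and domain are placeholders):

# generate a 2048-bit RSA private key and a CSR to submit to the Certification Authority
openssl req -new -newkey rsa:2048 -nodes \
    -keyout www.example.com.key -out www.example.com.csr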

Package: ssh
Port no: 22 (default)
Service: sshd

#ssh 192.168.1.10
this logs in to another system through ssh
#ssh -p 2222 192.168.1.14
use -p when sshd listens on a non-default port (changing the port makes unwanted login attempts harder)

we can generate keys with dsa or rsa (see the sketch after the commands below)
the key pair is used so that no password is required for every login

#scp -P 2222 /root/.ssh/id_dsa.pub root@192.168.1.14:/root/.ssh/authorized_keys
#ssh -p 2222 192.168.1.14
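A minimal sketch of generating the key pair mentioned above, before copying the public key with scp (the file path is the default location; choose an empty passphrase if you want password-less logins):

# generate an RSA key pair (use -t dsa for a DSA key matching id_dsa.pub above)
ssh-keygen -t rsa -f /root/.ssh/id_rsa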

mysql replication


Mysql server replication



#apt-get install mysql-server mysql-client
here it will prompt for a new root password

# vim /etc/mysql/my.cnf

IN MASTER:
bind-address = 192.168.1.26
general_log_file = /var/log/mysql/mysql.log
general_log = 1

log_error = /var/log/mysql/error.log

server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = exleaz

sudo apt-get install mysql-server

Start another instance and repeat the above steps; this will be our slave server.
Now that the MySQL server is running, let’s configure it to make this the master server.
Edit
#/etc/mysql/my.cnf
MySQL should listen to all IP Addresses, so we comment out the following lines:

#skip-networking
#bind-address = 127.0.0.1
We need to specify the database that needs to be replicated, the path to the binary log (slaves read this log to know what changed in the master and update themselves accordingly) and set a server ID to mark this as the master server.
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db=testdb
server-id=1
Restart MySQL by issuing the command
#/etc/init.d/mysql restart
Log in to the MySQL shell
#mysql -u root -p

Inside the shell, run the following commands to grant replication privileges to the slave user, lock the tables, and read the master status:
GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '<a_real_password>';
use testdb;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

After running the above commands, you should see the binary log file name and position.

Write down the file name and position; they will be needed later. Then leave the shell:
quit;
The existing data on the master has to be manually moved to the slave. Hence, we take a database dump of testdb using mysqldump:
mysqldump -u root -p<password> --opt testdb > testdbdump.sql
Transfer the dump file to the slave server.
Next, we need to unlock the tables in testdb
mysql -u root -p
UNLOCK TABLES;
quit;
Our master server has been configured; let’s ready the slave now.
Create a database with the same name, testdb in our case
mysql -u root -p
CREATE DATABASE testdb;
quit;
It’s time to load the SQL dump file created earlier:
mysql -u root -p<password> testdb < /path/to/testdbdump.sql
To configure the slave, edit its my.cnf:
server-id = 2
master-host=
master-user=slave_user
master-password=password
master-connect-retry=60
replicate-do-db=testdb
Restart MySQL:
#/etc/init.d/mysql restart

Final steps
mysql -u root -p
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST=' ', MASTER_USER='slave_user', MASTER_PASSWORD=' ',
MASTER_LOG_FILE='mysql-bin.001', MASTER_LOG_POS=315;
MASTER_HOST is the private IP of the master, you can copy this from the instance details pane.
MASTER_USER is the user we granted replication privileges on the master
MASTER_PASSWORD is the password of MASTER_USER on the master
MASTER_LOG_FILE is the name of the binary log file on the master
MASTER_LOG_POS is the position of the binary log



Finally, start the slave
START SLAVE;
quit;
And now, each write to the master gets instantly replicated on the slave as well. You can create and configure multiple slaves and all of them will have the same data as on the master.
IN CLIENT (SLAVE):

server-id = 2
master-host = 192.168.1.26
master-user = venky
master-password = exleaz123
master-connect-retry = 60
replicate-do-db = exleaz
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M


Adding multiple databases to replication:

=============================
MASTER: add lines to my.cnf
=============================
binlog-do-db=database_name_1
binlog-do-db=database_name_2
binlog-do-db=database_name_3 

=============================
MASTER: SQL SYNTAX
=============================
GRANT REPLICATION SLAVE ON *.* TO 'user'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
FLUSH TABLES WITH READ LOCK;
UNLOCK TABLES;
SHOW MASTER STATUS;
output> File             | Position | Binlog_Do_DB
        mysql-bin.000963 | 1570     | database_name_1,database_name_2,database_name_3
=============================
SLAVE: add lines to my.cnf
=============================
replicate-do-db=database_name_1
replicate-do-db=database_name_2
replicate-do-db=database_name_3
=============================
SLAVE: SQL SYNTAX
=============================
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='192.168.0.2', MASTER_USER='user', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000963', MASTER_LOG_POS=98;
START SLAVE;
SHOW SLAVE STATUS; 


NOTE:

MASTER_LOG_FILE='mysql-bin.000963' and MASTER_LOG_POS=98 are the values displayed when you run SHOW MASTER STATUS; on the master.

ALSO:

When you run SHOW SLAVE STATUS; on the slave,
make sure you see: Slave_IO_Running | Slave_SQL_Running
                   Yes              | Yes

samba server




SAMBA SERVER

The Samba server is used for file sharing and transfer between different platforms, e.g. from Ubuntu to Windows, with authentication.

port no: 137, 138, 139
package: samba
service: smb
daemon: smbd

first install the package:

#apt-get install samba

now open the configuration file:

#vim /etc/samba/smb.conf

[photos]
path = /home/sunny/photos
valid users = sunny
browseable = yes
writeable = yes
create mask = 0770
directory mask = 0770

then save and quit
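Optionally, you can check the syntax of the configuration with Samba's testparm tool before restarting the service:

# print the parsed configuration and report any syntax errors
testparm /etc/samba/smb.conf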

after that
create a samba user
#useradd -m -s /bin/bash sunny

Then create a samba password for the user like this
#smbpasswd -a sunny
****
****
Then after this restart the service
#/etc/init.d/samba restart

then after this, try this from another system
#smbclient //192.168.1.15/photos -U sunny

or mount it like this:

#mount.cifs //192.168.1.15/photos /mnt -o username=sunny

IN WINDOWS:
Right-click My Computer; there you can see an option like "Map network drive".
Pick any free drive letter (A-Z), then in the next box give the IP address of the Samba server and browse.
After this you can see the Samba server's IP address on the network;
then select that IP.
After this you can see the share as a folder with the name photos;
then select it.
It will prompt for the Samba user password you created earlier.
Give the password, and now you can see the shared folder in Windows.