Tuesday 5 February 2013

Use of Linux commands


#=======================================================================================================
# 1. PERL ONE-LINERS
#=======================================================================================================
#-----------------------------------------------------------------------
# Display Table of WORD in FILE*
#-----------------------------------------------------------------------
# Displays all occurrences of WORD in the FILE*(s), formatting as a
# table with filename, line number and line text.
#-----------------------------------------------------------------------
# Simple example for short filenames.  Seems to cause trouble in Windows command line
grep -in WORD FILE* | perl -ne 'if (/(.*):(\d*):\s*(.*)\s*/) { print $1, " " x (20 - length $1), " ", "0" x (5 - length $2), $2, "   $3\n"}'

# More complex example using egrep and assuming longer filenames (note the '50' in the spaces computation)
# Also seems to cause trouble in Windows command line.
egrep -in '(Alabama|"AL")' *.asp | perl -ne 'if (/(.*):(\d*):\s*(.*)\s*/) { print $1, " " x (50 - length $1), " ", "0" x (5 - length $2), $2, "   $3\n"}'
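
# A possibly simpler variant of the same table using printf instead of manual padding
# (a sketch; untested at the Windows command line, where the same quoting problems may apply)
grep -in WORD FILE* | perl -ne 'printf "%-20s %05d   %s\n", $1, $2, $3 if /(.*?):(\d+):\s*(.*?)\s*$/'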

#-----------------------------------------------------------------------
# Text Trimming and Processing
#-----------------------------------------------------------------------
# Trim excess blanks from the beginning and end of each line. A Cygwin creature.
# cat myfile | perl -ne 'if (/^\s*(.*?)\s*$/s) { print "${1}\n" }' # seems less efficient in practice
perl -ne 'if (/^\s*(.*?)\s*$/s) { print "${1}\n" }' < temp.txt

# Don't forget the tr command! Not that we have a use for it right now...
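# Still, for the record: tr can squeeze runs of blanks down to a single space, or strip
# the carriage returns from Windows-style line endings (small sketches)
tr -s ' ' < temp.txt
tr -d '\r' < dosfile.txt > unixfile.txt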


#=======================================================================================================
# 2. Unix FIND one liners
#=======================================================================================================
find .                    # Find all files under .
find . -type d                # Find all subdirectories.
find . -iregex ".*\(bas\|cls\|frm\)"    # Find all Visual Basic code files in a directory
                    # -iregex matches a case-insensitive regular expression
                    # backslashes necessary to protect special characters at shell

# Find all VB files containing a given string
# Note you must escape the {} and the ; because we are at the shell
find . -iregex ".*\(bas\|cls\|frm\)" -exec grep NIBRS \{\} \;                   

# Find all VB files containing comment lines
# Note you must escape the {} and the ; because we are at the shell
find . -iregex ".*\(bas\|cls\|frm\)" -exec egrep "^[[:space:]]*'" \{\} \;

# Find all VB files containing NON comment lines
# Note you must escape the {} and the ; because we are at the shell
find . -iregex ".*\(bas\|cls\|frm\)" -exec egrep -v "^[[:space:]]*'" \{\} \;

# Find all VB files containing NON comment, NON blank lines in a directory
find . -iregex ".*\(bas\|cls\|frm\)" -exec egrep -v "^[[:space:]]*'|^[[:space:]]*$" \{\} \;

# Count the code in a directory hierarchy
find . -iregex ".*\(bas\|cls\|frm\)" -exec egrep -v "^[[:space:]]*'|^[[:space:]]*$" \{\} \; | wc
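
# Roughly equivalent count using xargs, which starts far fewer egrep processes;
# -h suppresses the filename prefix grep adds when given multiple files (a sketch)
find . -iregex ".*\(bas\|cls\|frm\)" -print0 | xargs -0 egrep -hv "^[[:space:]]*'|^[[:space:]]*$" | wc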


#=======================================================================================================
# 3. Grep one liners
#=======================================================================================================
grep -R "Stuff" .            # Find all files in a subdirectory containing "Stuff"
grep -R --include "*.asp" "Stuff" .    # Find all .asp files in a directory tree containing "Stuff"

grep '"b"' * | cut -d":" -f1         # List of all filenames matching a pattern

# Long list of all the files matching a pattern
for file in $(grep '"b"' * | cut -d":" -f1 ); do ls -l $file; done
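
# Probably simpler: let grep -l report each matching file once, then long-list them
grep -l '"b"' * | xargs ls -l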

# Compare each file with a given extension against its vi backup (~) copy, redirecting the diff error if the ~ file is not found
for file in *.cmd; do echo; echo $file; diff $file ${file}~ 2>&1; done > diffs.txt

#=======================================================================================================
# 4. File/Directory one liners
#=======================================================================================================
ls -R                    # Recursively list contents of subdirectories


# Two different formats for listing the subdirectories of just this directory
ls | xargs perl -e '@ARGV = grep( -d $_ , @ARGV); print "@ARGV\n"'   
ls | xargs perl -e '@ARGV = grep( -d $_ , @ARGV); while($dir = shift @ARGV) { print "$dir\n" }'
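
# Likely simpler alternatives for the same listing (sketches)
ls -d */                        # Just the subdirectories of the current directory
find . -maxdepth 1 -type d      # Same idea with find (note this includes "." itself)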

# Subdirectory sizes
du                    # Show sizes of subdirectories
du -h                    # Make the sizes more humanly readable
du -h --max-depth=1            # Display the sizes of just one level of subdirectories
du -h --summarize *             # Summarize the elements of this directory, humanly

# File Sizes
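# Two quick ways to look at individual file sizes (sketches)
ls -lhS                         # Long listing sorted by size, human-readable
du -h myfile.txt                # Disk usage of a single file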

# Reversing a file
tac temp.txt > pmet.txt            # Reverse the order of the lines in a file using 'tac' (cat reverse) command
rev temp.txt > pmet.txt            # Reverse the text in each line of a file using 'rev' (reverse line) command

# cat all the files in a directory
for file in myDir/* ; do echo; echo "---$file--" ; cat $file ; done

# Sum the line counts of all the code files in a directory
find . -iregex ".*\(java\|html\|txt\)" -exec wc \{\} \; | gawk '{ print $1 "\t" $4; sum += $1 } END { print "--------"; print sum "\tTOTAL" }'
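
# A rough equivalent: xargs lets wc print its own total line (lines only; may print
# more than one total if the file list is very long)
find . -iregex ".*\(java\|html\|txt\)" -print0 | xargs -0 wc -l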


#=======================================================================================================
# 5. Path Parsing One Liners
#=======================================================================================================
# Can't use expr here because its match only anchors at the beginning of the string.
if echo $PATH | grep -q "Program" ; then echo "present"; else echo "absent"; fi
if echo $PATH | grep -q "\." ; then echo ". present"; else export PATH=".:$PATH"; echo "added . to path";  fi
perl -e "print \"$PATH\" =~ /Program/"    # not good for testing

# See if a variable exists
if [ "$fram" ]; then echo "found a fram"; else echo "no fram here"; fi

# Use Python to split the variable and print one path entry per line - note this can futz up some pathnames
python -c "for path in '%CLASSPATH%'.split(';'): print path"

# Use Perl to do the same thing
perl -e "print join qq(\n), split /;/, '%CLASSPATH%' "
perl -e "foreach (split ';', '%CLASSPATH%') { print; print qq(\n) }"

#=======================================================================================================
# 6. AWK File Parsing One Liners
#=======================================================================================================
# Get a sorted list of values from the first column of a tab-separated file
gawk '{ print $1 }' tabfile.txt | sort | uniq > firstcolvals.txt

# Get a sorted list of column values from a file with fields split by dashes
gawk 'BEGIN { FS="--" } ; { print $1 }' data.txt | sort | uniq > firstcolvals.txt

# Reverse fields in a tab-separated file and sort it by the numeric field
gawk 'BEGIN { FS="\t" } ; { print $2 "\t" $1 }' data.txt  | sort -nr > sorted.txt


# Count the occurrences of each whole line and output line, count pairs sorted on the count column
gawk '{ freq[$0]++ } ; END { sort = "sort --key=2" ; for (word in freq) printf "%s\t%d\n", word, freq[word] | sort ; close(sort) }' allprops.txt

# Extract the first field, count its occurrences, and output word, count pairs sorted by word
gawk 'BEGIN { FS="--" } ; { freq[$1]++ } ; END { sort = "sort" ; for (word in freq) printf "%s\t%d\n", word, freq[word] | sort ; close(sort) }' data.txt

# Extract the first field, count its occurrences, and output word, count pairs sorted by count, descending
gawk 'BEGIN { FS="--" } ; { freq[$1]++ } ; END { sort = "sort --key=2 -nr" ; for (word in freq) printf "%s\t%d\n", word, freq[word] | sort ; close(sort) }' data.txt
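
# The same frequency table can be built with sort and uniq instead of an awk array
# (note uniq -c puts the count first)
gawk 'BEGIN { FS="--" } ; { print $1 }' data.txt | sort | uniq -c | sort -nr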


#=======================================================================================================
# 7. DOS Fu One Liners
#=======================================================================================================
#-----------------------------------------------------------------------
# 7a. Windows Equivalents of Common UNIX Commands
#-----------------------------------------------------------------------
# Iterate through all the lines in a file, doing something
for /F %i IN (lines.txt) DO echo %i

#-----------------------------------------------------------------------
# 7b. OTHER CODE SNIPPETS
#-----------------------------------------------------------------------
# Windows Prompt
prompt $P $C$D$T$F: $_$+$G$S


#=======================================================================================================
# Z. ONE-LINERS WRITTEN BY OTHER AUTHORS
#=======================================================================================================
#-----------------------------------------------------------------------
# FROM "Perl One-Liners" by Jeff Bay jlb0170@yahoo.com
#-----------------------------------------------------------------------
# Find the sum of the sizes of the non-directory files in a directory
ls -lAF | perl -e 'while (<>) { next if /^[dt]/; $sum += (split)[4] } print "$sum\n"'
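
# A rough GNU find equivalent of the same sum, avoiding the ls parsing (a sketch)
find . -maxdepth 1 -type f -printf '%s\n' | gawk '{ sum += $1 } END { print sum }'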

#-----------------------------------------------------------------------
# Other One-Liners
#-----------------------------------------------------------------------
# Find all Perl modules - From Active Perl documentation
find `perl -e 'print "@INC"'` -name '*.pm' -print


Free up memory (drop the page cache)
echo 1 > /proc/sys/vm/drop_caches
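
Probably safer to flush dirty pages to disk first (needs root either way):
sync; echo 1 > /proc/sys/vm/drop_caches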

Kill processes by name (here, anything matching "http")
kill -9 `ps -ef |  grep http | grep -v grep  | awk '{print $2}'`
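
Roughly the same thing with pkill, if procps is available (pkill -f matches the full command line):
pkill -9 -f http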

list only files
ls -l | awk 'NR!=1 && !/^d/ {print $NF}'

list only dirs
ls -l | awk 'NR!=1 && /^d/ {print $NF}'

Print log lines between two timestamps
awk '$0>=from&&$0<=to' from="2009-08-10 00:00" to="2009-08-10 23:49" log4j.output.4

To see the processes of a particular user
ps -U mailman  -o pid,%cpu,%mem,stime,time,vsz,args

To delete "Connection timed out" mails from the Postfix mail queue
cat /var/log/maillog | grep "Connection timed out" | awk '{print $6}' | cut -d: -f1 | /var/postfix/sbin/postsuper -d -

Replace an IP address in a network config file, in place
perl -pi -e "s/172.30.1.10/172.30.254.5/;" /etc/sysconfig/network-scripts/ifcfg-bond0

To show all of a file except the first 2 lines
sed -n '3,$p'
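
tail does the same job ("somefile" standing in for the input file; -n +3 means "start printing at line 3"):
tail -n +3 somefile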

Delete many files, one rm per file (avoids "argument list too long")
find . -type f -exec rm -v {} \;

Remote incremental backup (GNU tar's option is --listed-incremental)
ssh -l root main.server.com "tar -c -f - --listed-incremental=/var/tar-snapshot-file.txt /my/dir/to/backup" > my_local_backup.tar

To find which files contain the text "examplestring", searching all folders
find . -type f -print0 | xargs -0 grep "examplestring"



To remove blank lines from a file using sed, use the following:

sed -e '/^$/d' filetoread >filetowrite

The ^ matches the beginning of a line and the $ matches the end.
The two of them together match a line that begins and ends with
nothing in between (a blank line).
The d just says delete the lines for which we have a match.
Since the standard operation of sed is to print every line,
all lines except blank lines will be sent to filetowrite.
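
A grep equivalent, and a sed variant that also drops whitespace-only lines (sketches):

grep -v '^$' filetoread >filetowrite
sed -e '/^[[:space:]]*$/d' filetoread >filetowrite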
===============================================================
ZERO THOSE LOG FILES


Some programs write to multiple log files in a directory, and these sometimes
need to be zeroed out to save disk space. The following ksh shell script will
zero out all files with the ".log" extension in a directory.
--- cut here ---
for object in *.log
do
> $object
print "$object has been zeroed!"
done
--- cut here ---

Just a little time saver when you have 100 other things to be doing.
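
Roughly the same thing as a one-liner; ":" is a no-op whose redirection truncates each file:

for object in *.log; do : > "$object"; echo "$object has been zeroed!"; done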
================================================================
WHAT FILE IS THAT IN?

Did you ever just want to know what files search patterns were in
but didn't want to see the search results?

Turn this:
$ grep quick *

story1.txt: The quick brown fox
story1.txt: The turtle was not quick.
recipe.txt: How to make soup quick.

Into this:
$ grep -l quick *

story1.txt
recipe.txt
=================================================================
EDIT RUN EDIT

Here is a simple script that will edit a file, run it and edit it again.
This is useful when you're doing a lot of testing against a script.

Replace "file" with the name of the script.

------------- cut here ---------------
while [ 1 ]
do
  vi file
  clear; echo
  chmod 755 file
  ./file
  echo; echo  -n "[pause] "
  read pause
done


------------- cut here ---------------

==================================================================

How to delete blank lines within vi?

Just type this baby in:

    <esc>:g/^$/d


NOTE:  This means that all the lines that contain nothing at all
       (no spaces) will be removed.




Ok, so I have some of those lines too.  How can I remove all of them as well?

    <esc>:g/^ *$/d


NOTE: There is a space after the '^' and before the '*'.
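
In vim, \s also matches tabs, so this variant removes whitespace-only lines in one go
(vim-specific; classic vi may not understand \s):

    <esc>:g/^\s*$/d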
=================================================================
CHANGING PERMS RECURSIVELY


To change permissions recursively for all files in a directory:

find dirname -exec chmod xxx {} \; -print

where dirname is the directory whose permissions you want to change.
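
A common refinement is to give directories and plain files different modes, so
directories stay searchable (755/644 here are just example modes):

find dirname -type d -exec chmod 755 {} \; -print
find dirname -type f -exec chmod 644 {} \; -print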
==================================================================
BASH HOTKEYS

Bash provides many hotkeys to ease use, for example:
ctrl-l  -- clears the screen.
ctrl-r  -- searches back through previously entered commands so you don't
           have to retype a long command.
ctrl-u  -- deletes everything from the cursor back to the start of the line.
ctrl-a  -- takes you to the beginning of the command you are currently typing.
ctrl-e  -- takes you to the end of the command you are currently typing.
esc-b   -- takes you back one word while typing a command.
ctrl-c  -- kills the current command or process.
ctrl-d  -- sends end-of-file; on an empty line it exits the shell.
ctrl-h  -- deletes one character at a time from the command you are typing.
ctrl-z  -- suspends the currently running process; it can be brought back to
           the foreground with the fg command (or resumed in the background
           with bg).
esc-p   -- like ctrl-r, searches through previously entered commands.
esc-.   -- inserts the last argument of the previous command.
======================================================================
KILLING MORE USERS

To kill all processes of a particular user, as root at the unix prompt type:

# kill -9 `ps -fu username |awk '{ print $2 }'|grep -v PID`

We could also take the username as a command-line argument if this
command is put into a script.
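
If procps is available, pkill can do the same thing in one step:

# pkill -9 -u username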

====================================================
STRING REMOVAL

What the following does:

rm `ls -al | grep str | awk '{if ($9 !~ /^str/) {print $9}}'`

Removes all files whose names contain the string "str" except
those that begin with it. Changing the !~ to ~ does the
opposite.
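
A possibly safer version of the same idea, using find instead of parsing ls output:

find . -maxdepth 1 -type f -name '*str*' ! -name 'str*' -exec rm {} \;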

======================================================================
TRACKING OF LOGINS AND LOGOUTS


In the .login file add the commands:
------------------------------------

echo login time `date` >> .daylogs/masterlog

grep -i "sun" .daylogs/masterlog > .daylogs/sunday.log
grep -i "mon" .daylogs/masterlog > .daylogs/monday.log
grep -i "tue" .daylogs/masterlog > .daylogs/tuesday.log
grep -i "wen" .daylogs/masterlog > .daylogs/wensday.log
grep -i "thu" .daylogs/masterlog > .daylogs/thursday.log
grep -i "fri" .daylogs/masterlog > .daylogs/friday.log
grep -i "sat" .daylogs/masterlog > .daylogs/saturday.log


In the .logout file add this line
-----------------------------------
==================================================================
Delete all tmp* files and directories under the current directory
find . -name 'tmp*' -print0 | xargs -0 rm -rf


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
To change the command prompt

export PS1='\u@\h - \w >> '
----------------------------------------------------------------
To enable ip_conntrack (connection tracking)

modprobe ip_conntrack hashsize=131072
modprobe iptable_nat

----------------------------------------------------------------
To use a proxy at the Linux command line
export http_proxy=http://'system:syst3m'@mails.poweryourtrade.com:3389

----------------------------------------------------------------

uniq(1) takes a stream of lines and collapses adjacent duplicate lines into one copy of the lines. So if you had a file called foo that looked like:
--------------------------------------------------------------------------------

davel
davel
davel
jeffy
jones
jeffy
mark
mark
mark
chuck
bonnie
chuck


--------------------------------------------------------------------------------
You could run uniq on it like this:
--------------------------------------------------------------------------------

% uniq foo
davel
jeffy
jones
jeffy
mark
chuck
bonnie
chuck


--------------------------------------------------------------------------------
Notice that there are still two jeffy lines and two chuck lines. This is because the duplicates were not adjacent. To get a true unique list you have to make sure the stream is sorted:
--------------------------------------------------------------------------------

% sort foo | uniq
bonnie
chuck
davel
jeffy
jones
mark


--------------------------------------------------------------------------------
That gives you a truly unique list. However, it's also a useless use of uniq since sort(1) has an argument, -u to do this very common operation:
--------------------------------------------------------------------------------

% sort -u foo
bonnie
chuck
davel
jeffy
jones
mark


--------------------------------------------------------------------------------
That does exactly the same thing as "sort | uniq", but only takes one process instead of two.
uniq has other arguments that let it do more interesting mutilations on its input:

-d tells uniq to eliminate all lines with only a single occurrence (delete unique lines), and print just one copy of repeated lines:
--------------------------------------------------------------------------------

% sort foo | uniq -d
chuck
davel
jeffy
mark


--------------------------------------------------------------------------------

-u tells uniq to eliminate all duplicated lines and show only those which appear once (only the unique lines):
--------------------------------------------------------------------------------

% sort foo | uniq -u
bonnie
jones


--------------------------------------------------------------------------------

-c tells uniq to count the occurrences of each line:
--------------------------------------------------------------------------------

% sort foo | uniq -c
   1 bonnie
   2 chuck
   3 davel
   2 jeffy
   1 jones
   3 mark


--------------------------------------------------------------------------------
I often pipe the output of "uniq -c" to "sort -n" (sort in numeric order) to get the list in order of frequency:
--------------------------------------------------------------------------------

% sort foo | uniq -c | sort -n
   1 bonnie
   1 jones
   2 chuck
   2 jeffy
   3 davel
   3 mark


--------------------------------------------------------------------------------

Finally, there are arguments to make uniq ignore leading characters and fields. See the man page for details.
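
One handy combination (just a common idiom, not from the text above): reverse the
numeric sort and take the top few entries to see the most frequent lines first:

% sort foo | uniq -c | sort -rn | head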


For loop to print the eth0 IP address of each host listed in 1.txt

for i in `cat 1.txt `;do echo -n $i "  " ; ssh $i /sbin/ifconfig eth0 | sed -n 's/.*addr\:\([0-9\.]*\).*/\1/p' ;done
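
Roughly the same loop written with "while read", which copes better with long host
lists; the -n flag stops ssh from swallowing the rest of the input:

while read -r host; do echo -n "$host  "; ssh -n "$host" /sbin/ifconfig eth0 | sed -n 's/.*addr:\([0-9.]*\).*/\1/p'; done < 1.txt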
