# ls /jjjj / > std 2>&1 -- Merge standard error into standard output and redirect both to the file std
# ls /jjjj / &> std -- Equivalent to the previous command, but more concise
# ls /jjjj / &>> std -- Append both standard output and standard error to the file std
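A related point, added here for illustration: the order of redirections matters. Writing 2>&1 before the output redirection does not merge the two streams into the file:
# ls /jjjj / 2>&1 > std -- standard error still appears on the terminal; only standard output goes into std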
The redirection described above connects standard input and output to files. We can also connect the standard output of one program to the standard input of another, passing data directly between commands. This technique is vividly called a pipeline: the output of a program flows like water through a pipe, from the leftmost program to the rightmost one. Pipes are a very common technique in the Linux shell; by combining several small tools through pipes, very complex and powerful tasks can be accomplished.
# cat /etc/passwd | head -n 3 --The standard output of the command on the left is used as the standard input of the command on the right
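As an added sketch of combining small tools, several commands can be chained in a single pipeline:
# cut -d: -f1 /etc/passwd | sort | head -n 3 -- extract all user names, sort them, and keep only the first three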
Standard input, standard output, and standard error are three file descriptors that Linux creates for a program by default. Although most programs use these three file descriptors for their input and output, this is not mandatory: a program may ignore them and open new file descriptors of its own. When a program does not use the standard input and output, the redirections described here have no effect on it. Also keep in mind that the shell sets up redirections before the command itself runs, which can lead to surprising results. For example:
# cat -n file > file --expecting cat to add a line number in front of each line and save the result back into file; in fact the result is an empty file, because the shell truncates file before cat ever reads it
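A safe alternative (an added suggestion, not from the original) is to write to a temporary file first and then replace the original:
# cat -n file > file.tmp && mv file.tmp file -- number the lines into file.tmp, then move it back over file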
$ sudo ls /root > /root/ls.log --In this example the user never gets a chance to enter the password: the redirection is performed first, and since the current user is an ordinary user, the file /root/ls.log cannot be created. The shell reports an error and exits: bash: /root/ls.log: Permission denied. At that point the sudo command has not been executed yet, so there is no chance to enter the password
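A common workaround, added here as an illustration, is to let a privileged process perform the write:
$ sudo ls /root | sudo tee /root/ls.log > /dev/null -- tee runs under sudo, so it can create the file under /root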
Process substitution <(): the standard output of the process is exposed as a temporary file-like path (for example /dev/fd/63), and <() is replaced by that path. When you need to use a program's output but do not want to create an intermediate file yourself, consider using process substitution.
# paste <(seq $(wc -l /etc/passwd | cut -d" " -f1)) <(awk -F: '{print $1}' /etc/passwd)
# paste <(seq 26) <(awk -F: '{ print $1}' /etc/passwd) --Compare with the previous command; here the line count (26) is written by hand
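Another classic use of process substitution, shown here as an added example, is comparing the output of two commands without any temporary files:
# diff <(ls /etc) <(ls /usr/share) -- compare two directory listings directly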
# echo 'hello world' > test -- If test does not exist, it will be created. If it exists, it will overwrite the content inside.
2. < --Input redirection: the command's input is taken from a file
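A minimal added example of input redirection:
# wc -l < /etc/passwd -- wc reads the file through standard input, so the output shows only the line count, without the file name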
3. >> --Append redirection: redirect output to the file, creating it if it does not exist and appending to the end if it does
# echo 'hello george' >> george -- If george does not exist it is created; if it exists, the content is appended to the end of the file
4.1. Here Document
Here Document is a special redirection method in the Linux shell. Its basic form is as follows:
cmd << delimiter
Here Document Content
delimiter
: << delimiter --shell batch comment
Here Document Content
delimiter
Function: the content between the two delimiters (the Here Document body) is passed to cmd as its standard input
4.2. Terminal
# cat << EOF
> one
> two
> three
> EOF
EOF --It is just a marker and can be replaced with any legal word
> -- This symbol is the prompt printed by the terminal to ask for more input (the secondary prompt)
delimiter -- the closing delimiter must be written at the very beginning of the line, with no characters before or after it, not even spaces
4.3. Shell script
# vim here.sh -- create the following script
Note: You can also use variables in it
#!/bin/bash
cat << EOF > output.sh
echo "hello"
echo "world"
echo $1
EOF
# chmod a+x here.sh
# ./here.sh george
# cat output.sh --View the contents; here $1 was expanded into the script's first argument (george)
Note: If you do not want the variable to be expanded, quote the first EOF (for example, write << 'EOF' or << "EOF").
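For illustration, a hedged variant of the script above: if its cat line is written with a quoted delimiter, the variable is not expanded.
cat << 'EOF' > output.sh
echo $1
EOF
# ./here.sh george; cat output.sh -- output.sh now contains the literal text: echo $1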
4.4. <<-
Another form of the Here Document replaces '<<' with '<<-'. The only difference when using <<- is that leading tab characters at the start of each line of the Here Document body (and of the closing delimiter) are removed. This makes it convenient to indent the body when a Here Document is written inside indented code, which keeps the code readable.
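A minimal sketch of the <<- form (assuming the indented lines below begin with real tab characters, not spaces):
cat <<- EOF
	hello
	world
	EOF
The leading tabs are stripped, so the output is simply hello and world on two lines, and the indented EOF still terminates the document.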
5. <<< --Here String: redirect a string to the command's standard input
# wc -l <<< "$( ls -l /home)"
# while read x; do echo "hello";done <<< "$(seq 5)"
# bc <<< "2 ^ 10"
# vim string.sh
#!/bin/bash
while read line
do
    if [ "${line#ftp:}" != "$line" ]; then
        awk -F: '{print $6}' <<< "$line"
        break
    fi
done < /etc/passwd
Comment: Loop over each line of the /etc/passwd file. If the line belongs to the ftp user, print its home directory and exit the loop
${line#ftp:}: removes a leading "ftp:" from the line if it matches; if the result differs from $line, the line starts with ftp:
# chmod a+x string.sh
# ./string.sh
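For comparison, a hedged one-line alternative to string.sh using awk alone:
# awk -F: '$1 == "ftp" {print $6; exit}' /etc/passwd -- print the home directory of the ftp user and stop after the first match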
4. Text processing_1: cat; head; tail; cut; wc; sort; uniq; tr; tac; rev
Text processing is a task that every system administrator deals with frequently. Its core is the use of the related tools, and the key is to combine multiple tools flexibly to complete a task.
1. cat --concatenate: join the contents of one or more files in order and write them to standard output
# cat -n /etc/passwd --Display the file content with line numbers added
# cat -A /etc/passwd --Print out some invisible characters and position marks
# cat 1.txt 2.txt > test.txt -- Merge files
2. head --Read the beginning of a file
# head -n 3 /etc/passwd -- Read the first three lines of the file /etc/passwd
# head -n -1 file --discard the last line of the file
# head -c 3 /etc/passwd --read the first three bytes of the file /etc/passwd
# head -c -3 file --discard the last three bytes of file
# head -c 10M /dev/urandom > big --Create a 10M file
3. tail --Read the end of a file
# tail -n 3 /etc/passwd -- Read the last three lines of the file /etc/passwd
# tail -n +28 /etc/passwd --read from line 28 until the end of the file; discard the first 27 lines
# tail -c 3 /etc/passwd --Read the last three bytes of the file /etc/passwd
# tail -c +28 /etc/passwd --Read from the 28th byte until the end of the file; discard the first 27 bytes
# tail -f /etc/passwd --Follow changes appended to the end of the file; often used to watch log files, very practical
4. cut --Similar in function to awk, but not as powerful or complex. When extracting columns from data, awk is used far more often than cut.
Commonly used options:
-d -- Define the delimiter
-b --Output the byte at the specified position
-c --Output the characters at the specified positions
# echo "a;b;c d;e" | cut -d ";" -f1,3,4 -- -d defines the separator (default is TAB); -f defines the output corresponding fields
# cat -n /etc/passwd | cut -d $'\n' -f1,3-5,7 -- Use newline character as separator
# echo 我是中国人 | cut -b1-3 -- -b outputs the bytes at the specified positions; a UTF-8 Chinese character occupies 3 bytes, so this prints 我
# echo 我是中国人 | cut -c2-4 -- -c outputs the characters at the specified positions; the difference from -b is the handling of non-English characters
# echo 做个勇敢的中国人 | cut -b1-2,9 -- outputs a character that is not in the original string: the selected bytes combine into 假 ("fake")
# echo -n 做个勇敢的中国人 | xxd -- shows that bytes 1, 2, and 9 are e5, 81, and 87
# echo -n 假 | xxd -- and 假 is indeed encoded as e5 81 87
5. wc -- Count the number of bytes, characters, words, and lines in the data
Common options:
-c --Calculate the number of bytes
-m --Calculate the number of characters
-w --Calculate the number of words
-l -- Calculate the number of lines
# echo -n 我是中国人 | wc -c -- -c counts bytes; 5 UTF-8 Chinese characters occupy 15 bytes
# echo -n 我是中国人 | wc -m -- -m counts characters; the difference from -c appears with non-English characters, similar to the cut command
# echo -n 我是中国人 | wc -w -- -w counts words; with no separator between them, the 5 Chinese characters count as a single word, which is different from what a "word" means in Chinese
# echo -n Uppercase CHINESE | wc -w --Two words
# echo -n Uppercase CHINESE | wc -c --17 bytes for English text; 18 without -n (because of the newline character)
# echo -n Uppercase CHINESE | wc -m --17 characters for English text; 18 without -n (because of the newline character)
# wc -l /etc/passwd -- -l Count the number of lines
6. sort -- Sort a file by lines
Common options:
-t --Specify the delimiter
-k --Specify the sorting field
-u --Remove duplicate lines
-n, -h --Sort by numeric value
-r --Sort in reverse
# cut -d ":" -f7 /etc/passwd | sort -u -- -u remove duplicate lines
# echo -e "1\n2\n10" | sort
# echo -e "1\n2\n10" | sort -n -- -n sorts by numerical value, cannot handle unit characters such as K, M, G etc.
# ls -lh | tail -n +2 | sort -k5,5n -- -k specifies the sorted field
# ls -lh | tail -n +2 | sort -k5,5h -- -h sorts by numeric value and can handle unit suffixes such as K, M, G
# head -4 /etc/passwd | sort -t: -k7,7 -- -t uses the colon as the field separator; sort by the 7th field
# head -4 /etc/passwd | sort -t: -k7,7 -k3,3n --First sort by the 7th field; where the 7th field is the same, sort by the 3rd field numerically
# echo -e "1\n2\n3" | sort -nr -- -r reverse the sort
# head -4 /etc/passwd | sort -t: -k7,7 -k3,3nr --Reverse the sort on the 3rd field only; both fields could also be reversed at the same time
7. uniq --Remove consecutive duplicate lines
Commonly used options:
-c --Prefix each line with its number of occurrences
# echo -e "1 \n1\n2\n1" | uniq -- There are still two 1's in the result because the two 1's are discontinuous
# echo -e "1\n1\n2\n1" | sort - u -- sort removes duplicates without consecutive
# cut -d: -f7 /etc/passwd | sort | uniq -c -- Based on the sort command example, count the number of occurrences of different login shells
8. tr --Translate, delete, or squeeze repeated characters
Common options:
-d --Delete all matching characters
-s --Squeeze: reduce runs of the same character to one
Format: tr SET1 SET2
Note: tr converts the characters in set 1 into the characters at the corresponding positions in set 2, so in principle the two sets should contain the same number of characters. If the numbers differ, the program does not report an error, but pay attention to the result in that case. Key points:
a. tr does not care what characters are in the two sets; it simply replaces characters position by position.
b. tr replaces single characters, not strings (see the example below).
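A small added illustration of point b: tr maps each character independently rather than replacing the string "an" with "xy":
# echo banana | tr an xy -- every a becomes x and every n becomes y, so the output is bxyxyx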
# echo abc | tr a-z A-Z --Convert 26 lowercase letters into corresponding uppercase letters
# echo abc | tr ab BA -- Convert a and b to B and A respectively
# echo 好人好事 | tr 好 坏 -- convert "好" (good) into "坏" (bad)
# echo abcdefg | tr a-z AB --Set 2 is shorter; the program automatically extends set 2 by repeating its last character
# echo abcdefg | tr a-b A-Z --Set 2 is longer; the extra characters in set 2 are ignored (set 2 is cut short)
# echo Abc | tr a-zA-Z A-Za-z --Swap the case of English letters
# echo hello world | tr -d ow -- -d deletes all matching characters (every o and w)
# echo 0123456789 | tr -d 13579
# tr -d '\012' < /etc/passwd -- delete the newline characters from the file /etc/passwd; tr can specify a character by its octal value
# echo aabbaacc | tr -s a -- -s squeezes each run of consecutive a's into a single a
# echo aabbaacc | tr -s a A -- translate and squeeze together: each run of a's becomes a single A (output: AbbAcc)
9. tac -- concatenate the contents of one or more files in order and write them to standard output; within each file, the lines are printed in reverse order
# echo -e "111111111\n2222222" > f1
# echo -e "333333333\n4444444" > f2
# tac f2 f1
10. rev --reverse each line of a file character by character
# echo -e "1234567\nabcdefg "> test
# rev test
5. Extension
1. cat, md5sum
# echo file1 > file1
# echo file2 > file2 --Create two files
# md5sum file1 file2 --Compare whether their md5 values are the same
# head -c 10M /dev/urandom > bigfile --Create a 10M file from the random device
# head -c 3M bigfile > file1 --Put the first 3M of data into file1
# tail -c 4M bigfile > file3 --Put the last 4M of data into file3
# head -c 6M bigfile | tail -c 3M > file2 --Put the middle 3M of data (bytes 3M to 6M) into file2
# ls -lh file*
# cat file1 file2 file3 > newbigfile --Use cat to merge the three files into a new big file
# md5sum newbigfile bigfile --Compare the md5sum of the old big file and the new big file; they should match