The representation methods for characters and numerical values are different. The digits in ASCII are not numerical values but characters represented by an encoding, so each digit character occupies 7 bits (extended ASCII occupies 8 bits). For example, 12 is expressed in ASCII as 0110001 0110010: a string composed of the two digit characters '1' and '2', which does not carry the meaning of the quantity twelve. As an int it is expressed as 00000000 00000000 00000000 00001100, which represents a single integer with the value twelve; the digits one and two are not separable. In short, numerical values and characters are represented differently in a computer, and int is not represented with ASCII codes.
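A minimal sketch in C (assuming a typical platform with 8-bit chars and 4-byte ints) that puts the two encodings side by side: the string "12" is two character codes, while the int 12 is one number:

```c
#include <stdio.h>

int main(void) {
    char s[] = "12";  /* two ASCII characters: '1' (code 49) and '2' (code 50) */
    int  n   = 12;    /* one binary integer with the value twelve              */

    printf("'%c' = %d, '%c' = %d\n", s[0], s[0], s[1], s[1]); /* '1' = 49, '2' = 50 */
    printf("n = %d\n", n);                                    /* n = 12 */
    return 0;
}
```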
ASCII defines the encoding of characters, not the representation of numbers. Computers store numbers and strings in different ways.
What the computer actually stores is binary. A byte has 8 bits, so expressed in binary the maximum number it can represent is 11111111, which is 2 to the 8th power - 1. Similarly, 4 bytes have 32 bits, and the maximum number is 32 ones, which is 2 to the 32nd power - 1. This follows from basic microcomputer principles. At the same time, since the first bit is usually the sign bit (0 for positive, 1 for negative), the maximum signed number is 2 to the 31st power - 1. The word length of int, long, double, etc. is a convention of the compiler. For example, on early 16-bit machines int was 16 bits, with a maximum unsigned value of 65535; in the latest VS versions, int is 4 bytes, i.e. 32 bits.
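A small sketch of those limits using the standard limits.h constants; the exact values are compiler conventions, as noted above, but on a typical 32-bit-int compiler this prints 255, 4, 2147483647 and -2147483648:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("UCHAR_MAX   = %u\n", (unsigned)UCHAR_MAX); /* 2^8  - 1 = 255 */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));  /* 4 on most current compilers */
    printf("INT_MAX     = %d\n", INT_MAX);             /* 2^31 - 1 with a 32-bit int  */
    printf("INT_MIN     = %d\n", INT_MIN);             /* -2^31 */
    return 0;
}
```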
I just missed the ASCII encoding. An ASCII code occupies one byte, which is 8 bits. You will see it if you look at the table: the high 4 bits and the low 4 bits index the table, so one byte can represent up to 256 characters (standard ASCII uses only 7 bits, for 128 characters). A sequence of ASCII codes is what makes up a string.
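To see the high-4-bits/low-4-bits layout of the table, a tiny sketch that splits a character code into its two nibbles ('1' is code 49 = 0x31, so row 3, column 1):

```c
#include <stdio.h>

int main(void) {
    unsigned char c = '1';  /* ASCII code 49 = 0x31 */
    printf("'%c' = %d (0x%02X)\n", c, c, c);
    printf("high nibble = %X, low nibble = %X\n",
           (unsigned)(c >> 4), (unsigned)(c & 0x0F));
    return 0;
}
```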
They feel like two different things. The limit that comes from int occupying 4 bytes is the maximum value an int can reach. ASCII says that each digit character occupies one byte, which is about physical space: how much storage it takes. In theory that space can grow without bound; as long as the hard disk is big enough, the string of digits can be arbitrarily long. The 4 bytes of int refer to the space occupied by a declared value; if the value reaches the upper or lower limit (about plus or minus 2 to the 31st power), it overflows. No matter how big your hard drive is, this limit will not change on the same machine. That is how I understand it; I hope it makes sense.
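A sketch of that fixed limit, using limits.h (note that actually evaluating INT_MAX + 1 is undefined behavior in C, so the check is done against the bound instead):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    int n = INT_MAX;  /* 2147483647, the upper limit of a 4-byte int */

    /* Signed overflow is undefined behavior, so test before adding. */
    if (n > INT_MAX - 1)
        printf("%d + 1 would overflow a 4-byte int\n", n);
    return 0;
}
```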
The "digits" in ASCII refer to characters like '1', that is, they are viewed from the perspective of characters. The perspectives are different, and the two cannot be confused at all. ASCII is the encoding specification, and int is a storage type represented in memory. It is like a car that holds at most 5 people, where the traffic police will fine you if there are more: that is int. Who is in the car, what their names are, what their ID numbers are: that is what the ASCII code regulates.
int = 4 bytes (Byte), not 4 bits (bit)
1 byte = 8 bits
1 bit = 0 or 1
1 byte = 0-255
And compare it with the decimal system in real life:
One decimal digit = 0-9
Carry one when you reach 10
Binary works the same way
1 bit = 0-1
Carry one when you reach 2
The binary of an int 0 is: 00000000000000000000000000000000
+1 gives: 00000000000000000000000000000001
+1 again gives: 00000000000000000000000000000010
+1 again gives: 00000000000000000000000000000011
Then +1000 gives: 00000000000000000000001111101011
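A small helper to reproduce that counting sequence, printing the 32-bit pattern of any int (a sketch assuming a 32-bit int):

```c
#include <stdio.h>

/* Print the 32-bit binary representation of n, highest bit first. */
static void print_bits(int n) {
    for (int i = 31; i >= 0; i--)
        putchar((unsigned)n >> i & 1u ? '1' : '0');
    putchar('\n');
}

int main(void) {
    print_bits(0);     /* 32 zeros */
    print_bits(1);
    print_bits(2);
    print_bits(3);
    print_bits(1003);  /* ...1111101011, i.e. 3 + 1000 */
    return 0;
}
```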
'1', '2' and 12 are different concepts. 1 byte is an 8-bit binary number, so a 4-byte int has 32 bits and, with a sign bit, can represent numbers within 2 to the 31st power. For example, 1212: if it is stored in ASCII, it is the four characters 'one two one two'; if it is stored as an int, it is the single number one thousand two hundred and twelve.
int stores numbers, not char
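The conversion between the two is explicit. A minimal sketch of turning the four digit characters "1212" into the single int 1212, the same thing atoi does:

```c
#include <stdio.h>

int main(void) {
    const char *s = "1212";  /* four ASCII digit characters */
    int n = 0;

    /* Each character code minus '0' (48) gives the digit's value. */
    for (int i = 0; s[i] != '\0'; i++)
        n = n * 10 + (s[i] - '0');

    printf("%d\n", n);  /* prints 1212: one number, not four characters */
    return 0;
}
```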
One byte is 8 bits
4 bytes is 32 bits
So int is 32 bits
But int is a signed integer,
The maximum number that can be represented is 31 ones in binary,
That is, 2 to the 31st power - 1
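A quick check of that, assuming a 32-bit int: 31 binary ones equal 2 to the 31st power minus 1, which is exactly INT_MAX:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned max = (1u << 31) - 1u;  /* 0 sign bit followed by 31 ones */
    printf("%u == %d ? %s\n", max, INT_MAX,
           max == (unsigned)INT_MAX ? "yes" : "no");
    return 0;
}
```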