Cheat Engine Forum Index Cheat Engine
The Official Site of Cheat Engine
 


what do you guys think about this game idea
Cheat Engine Forum Index -> Random spam
paupav
Master Cheater
Reputation: 13

Joined: 15 Apr 2011
Posts: 317
Location: P. Sherman 42, Wallaby Way, Sydney

Posted: Sun Jan 05, 2014 9:08 pm    Post subject: what do you guys think about this game idea

Like, in the game, PCs are infected with "dark bytes" and you have to install cheat engines on those PCs, but you have to find some resources to get to those PCs or fight some enemies?
Volictic
Cheater
Reputation: 61

Joined: 15 Aug 2007
Posts: 41

Posted: Sun Jan 05, 2014 9:08 pm

what
Nirojan
How do I cheat?
Reputation: 108

Joined: 16 Sep 2008
Posts: 0
Location: seshville

Posted: Sun Jan 05, 2014 10:03 pm

Brolock wrote:
what

_________________
Quote:
yo i b 22 tryna make it in dis rap game but da steetz dont got luv for no wun na mean so im out hea tryna holla at da fams on dis innernet shit u no way i sayin
InternetIsSeriousBusiness
Grandmaster Cheater Supreme
Reputation: 8

Joined: 12 Jul 2010
Posts: 1269

Posted: Sun Jan 05, 2014 10:10 pm

I just read the post without looking at the author and I knew it was a paupav thread
_________________
FLAME FLAME [email protected]@@
SF
I'm a spammer
Reputation: 119

Joined: 19 Mar 2007
Posts: 6029

Posted: Sun Jan 05, 2014 10:15 pm

old fag wrote:
I just read the post without looking at the author and I knew it was a paupav thread

i literally did the same thing, when i scrolled up to see the name my instant reaction was "well who was i expecting it to be besides him?"

Nirojan
How do I cheat?
Reputation: 108

Joined: 16 Sep 2008
Posts: 0
Location: seshville

Posted: Sun Jan 05, 2014 11:24 pm

is paupav trying to be the new hiroshi or something
_________________
Quote:
yo i b 22 tryna make it in dis rap game but da steetz dont got luv for no wun na mean so im out hea tryna holla at da fams on dis innernet shit u no way i sayin
SinStar87
Master Cheater
Reputation: 7

Joined: 23 Sep 2010
Posts: 420

Posted: Sun Jan 05, 2014 11:45 pm

You used dark byte as a negative entity.
paupav
Master Cheater
Reputation: 13

Joined: 15 Apr 2011
Posts: 317
Location: P. Sherman 42, Wallaby Way, Sydney

Posted: Mon Jan 06, 2014 6:41 am

so, no?
Fafaffy
Cheater
Reputation: 65

Joined: 12 Dec 2007
Posts: 36

Posted: Mon Jan 06, 2014 8:10 am

"OH NO! THIS COMPUTER IS INFECTED WITH A DARK BYTE!!!"

User:
"What the fuck is a "Dark Byte." Actually, wtf is a regular "byte"?

_________________
Brillia wrote:
I FUCKING FUCK SEX
Evil_Intentions
Expert Cheater
Reputation: 64

Joined: 07 Jan 2010
Posts: 214

Posted: Mon Jan 06, 2014 8:38 am

So it's like Skyrim with bytes?
Womanizer
Grandmaster Cheater
Reputation: 2

Joined: 30 May 2009
Posts: 958

Posted: Mon Jan 06, 2014 11:04 am

blablfy wrote:
"OH NO! THIS COMPUTER IS INFECTED WITH A DARK BYTE!!!"

User:
"What the fuck is a "Dark Byte." Actually, wtf is a regular "byte"?


The byte /ˈbaɪt/ is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer[1][2] and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size.[3]
The unit octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated at the time with the byte.[4]
History

The term byte was coined by Werner Buchholz in July 1956, during the early design phase for the IBM Stretch computer.[5][6] It is a deliberate respelling of bite to avoid accidental mutation to bit.[1]
Early computers used a variety of 4-bit binary coded decimal (BCD) representations and the 6-bit codes for printable graphic patterns common in the U.S. Army (Fieldata) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to 7 bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard which replaced the incompatible teleprinter codes in use by different branches of the U.S. government. ASCII included the distinction of upper and lower case alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the 8-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their 6-bit binary-coded decimal (BCDIC) representation used in earlier card punches.[7] The prominence of the System/360 led to the ubiquitous adoption of the 8-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different.
In the early 1960s, AT&T introduced digital telephony first on long-distance trunk lines. These used the 8-bit μ-law encoding. This large investment promised to reduce transmission costs for 8-bit data. The use of 8-bit codes for digital telephony also caused 8-bit data octets to be adopted as the basic data unit of the early Internet.[citation needed]
The development of 8-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on four bits, such as the DAA (decimal add adjust) instruction, and the auxiliary carry (AC/NA) flag, which were used to implement decimal arithmetic routines. These four-bit quantities are sometimes called nibbles, and correspond to hexadecimal digits.
The term octet is used to unambiguously specify a size of eight bits, and is used extensively in protocol definitions, for example.
Unit symbol

Prefixes for multiples of bits (b) or bytes (B):

Decimal (metric):
  1000^1  k (kilo)
  1000^2  M (mega)
  1000^3  G (giga)
  1000^4  T (tera)
  1000^5  P (peta)
  1000^6  E (exa)
  1000^7  Z (zetta)
  1000^8  Y (yotta)

Binary:
  Value     JEDEC       IEC
  1024^1    K (kilo)    Ki (kibi)
  1024^2    M (mega)    Mi (mebi)
  1024^3    G (giga)    Gi (gibi)
  1024^4    -           Ti (tebi)
  1024^5    -           Pi (pebi)
  1024^6    -           Ei (exbi)
  1024^7    -           Zi (zebi)
  1024^8    -           Yi (yobi)
The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format[8] as the upper-case character B.
In the International System of Units (SI), B is the symbol of the bel, a unit of logarithmic power ratios named after Alexander Graham Bell. The usage of B for byte therefore conflicts with this definition. It is also not consistent with the SI convention that only units named after persons should be capitalized. However, there is little danger of confusion because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one tenth of a byte, i.e. the decibyte, is never used.
The unit symbol kB is commonly used for kilobyte, but may be confused with the still often-used abbreviation of kb for kilobit. IEEE 1541 specifies the lower case character b as the symbol for bit; however, IEC 80000-13 and Metric-Interchange-Format specify the abbreviation bit (e.g., Mbit for megabit) for the symbol, providing disambiguation from B for byte.
The lowercase letter o for octet is defined as the symbol for octet in IEC 80000-13 and is commonly used in several non-English languages (e.g., French[9] and Romanian), and is also used with metric prefixes (for example, ko and Mo)
Unit multiples



[Figure: percentage difference between decimal and binary interpretations of the unit prefixes grows with increasing storage size]
See also: Binary prefix
Considerable confusion exists about the meanings of the SI (or metric) prefixes used with the unit byte, especially concerning prefixes such as kilo (k or K) and mega (M) as shown in the chart Prefixes for bit and byte. Computer memory is designed with binary logic, so multiples are expressed in powers of 2. Some portions of the software and computer industries often use powers-of-2 approximations of the SI-prefixed quantities, while producers of computer storage devices prefer strict adherence to SI powers-of-10 values. This is why a hard drive advertised as, say, 100 GB actually holds about 93 GiB of storage space.
While the numerical difference between the decimal and binary interpretations is relatively small for the prefixes kilo and mega, it grows to over 20% for prefix yotta, illustrated in the linear-log graph (at right) of difference versus storage size.
Common uses

The byte is also defined as a data type in certain programming languages. The C and C++ programming languages, for example, define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the char integral data type is capable of holding at least 256 different values, and is represented by at least 8 bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte.[10][11] The actual number of bits in a particular implementation is documented as CHAR_BIT as implemented in the limits.h file. Java's primitive byte data type is always defined as consisting of 8 bits and being a signed data type, holding values from −128 to 127. The C# programming language, along with other .NET-languages, has both the unsigned byte (named byte) and the signed byte (named sbyte), holding values from 0 to 255 and -128 to 127, respectively.
In addition, the C and C++ standards require that there are no "gaps" between two bytes. This means every bit in memory is part of a byte.[12]
In data transmission systems, a byte is defined as a contiguous sequence of binary bits in a serial data stream, such as in modem or satellite communications, which is the smallest meaningful unit of data. These bytes might include start bits, stop bits, or parity bits, and thus could vary from 7 to 12 bits to contain a single 7-bit ASCII code.[citation needed]

Fafaffy
Cheater
Reputation: 65

Joined: 12 Dec 2007
Posts: 36

Posted: Mon Jan 06, 2014 6:59 pm

Womanizer wrote:
blablfy wrote:
"OH NO! THIS COMPUTER IS INFECTED WITH A DARK BYTE!!!"

User:
"What the fuck is a "Dark Byte." Actually, wtf is a regular "byte"?


[snip: full Wikipedia "Byte" article, quoted in Womanizer's post above]

tl;dr
Back to top
View user's profile Send private message Send e-mail
paupav
Master Cheater
Reputation: 13

Joined: 15 Apr 2011
Posts: 317
Location: P. Sherman 42, Wallaby Way, Sydney

Posted: Mon Jan 06, 2014 7:41 pm

blablfy wrote:
Womanizer wrote:
blablfy wrote:
"OH NO! THIS COMPUTER IS INFECTED WITH A DARK BYTE!!!"

User:
"What the fuck is a "Dark Byte." Actually, wtf is a regular "byte"?


[snip: full Wikipedia "Byte" article, quoted in Womanizer's post above]

tl;dr

just copied article from wikipedia.

Volictic
Cheater
Reputation: 61

Joined: 15 Aug 2007
Posts: 41

Posted: Mon Jan 06, 2014 7:50 pm

wiki articles look hilarious in comic sans
Fafaffy
Cheater
Reputation: 65

Joined: 12 Dec 2007
Posts: 36

Posted: Mon Jan 06, 2014 8:00 pm

paupav wrote:
blablfy wrote:
Womanizer wrote:
blablfy wrote:
"OH NO! THIS COMPUTER IS INFECTED WITH A DARK BYTE!!!"

User:
"What the fuck is a "Dark Byte." Actually, wtf is a regular "byte"?


The byte /ˈbaɪt/ is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer[1][2] and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size.[3]
The unit octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated at the time with the byte.[4]
Contents [hide]
1 History
2 Unit symbol
3 Unit multiples
4 Common uses
5 See also
6 References
History[edit]

The term byte was coined by Werner Buchholz in July 1956, during the early design phase for the IBM Stretch computer.[5][6] It is a deliberate respelling of bite to avoid accidental mutation to bit.[1]
Early computers used a variety of 4-bit binary coded decimal (BCD) representations and the 6-bit codes for printable graphic patterns common in the U.S. Army (Fieldata) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to 7 bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard which replaced the incompatible teleprinter codes in use by different branches of the U.S. government. ASCII included the distinction of upper and lower case alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the 8-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their 6-bit binary-coded decimal (BCDIC) representation used in earlier card punches.[7] The prominence of the System/360 led to the ubiquitous adoption of the 8-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different.
In the early 1960s, AT&T introduced digital telephony first on long-distance trunk lines. These used the 8-bit μ-law encoding. This large investment promised to reduce transmission costs for 8-bit data. The use of 8-bit codes for digital telephony also caused 8-bit data octets to be adopted as the basic data unit of the early Internet.[citation needed]
The development of 8-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8008, the direct predecessor of the 8080 and the 8086, used in early personal computers, could also perform a small number of operations on four bits, such as the DAA (decimal adjust accumulator) instruction and the auxiliary carry (AC/NA) flag, which were used to implement decimal arithmetic routines. These four-bit quantities are sometimes called nibbles, and correspond to hexadecimal digits.
The term octet is used to unambiguously specify a size of eight bits and appears extensively in protocol definitions, for example.
Unit symbol

Prefixes for multiples of bits (b) or bytes (B)

Decimal (SI)
Value    Symbol  Prefix
1000^1   k       kilo
1000^2   M       mega
1000^3   G       giga
1000^4   T       tera
1000^5   P       peta
1000^6   E       exa
1000^7   Z       zetta
1000^8   Y       yotta

Binary
Value    JEDEC     IEC
1024^1   K (kilo)  Ki (kibi)
1024^2   M (mega)  Mi (mebi)
1024^3   G (giga)  Gi (gibi)
1024^4   -         Ti (tebi)
1024^5   -         Pi (pebi)
1024^6   -         Ei (exbi)
1024^7   -         Zi (zebi)
1024^8   -         Yi (yobi)
The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format[8] as the upper-case character B.
In the International System of Units (SI), B is the symbol of the bel, a unit of logarithmic power ratios named after Alexander Graham Bell. The usage of B for byte therefore conflicts with this definition. It is also not consistent with the SI convention that only units named after persons should be capitalized. However, there is little danger of confusion because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one tenth of a byte, i.e. the decibyte, is never used.
The unit symbol kB is commonly used for kilobyte, but may be confused with the still often-used abbreviation kb for kilobit. IEEE 1541 specifies the lower-case character b as the symbol for bit; however, IEC 80000-13 and the Metric Interchange Format specify bit (e.g., Mbit for megabit) as the symbol, providing disambiguation from B for byte.
The lowercase letter o is defined as the symbol for octet in IEC 80000-13 and is commonly used in several non-English languages (e.g., French[9] and Romanian); it is also used with metric prefixes (for example, ko and Mo).
Unit multiples



Percentage difference between decimal and binary interpretations of the unit prefixes grows with increasing storage size
See also: Binary prefix
Considerable confusion exists about the meanings of the SI (or metric) prefixes used with the unit byte, especially concerning prefixes such as kilo (k or K) and mega (M), as shown in the chart of prefixes for bits and bytes. Because computer memory is designed with binary logic, its multiples are expressed in powers of 2. Some portions of the software and computer industries therefore use powers-of-2 approximations of the SI-prefixed quantities, while producers of computer storage devices prefer strict adherence to the SI powers-of-10 values. This is why a hard drive specified as 100 GB contains about 93 GiB of storage space.
While the numerical difference between the decimal and binary interpretations is relatively small for the prefixes kilo and mega, it grows to over 20% for the prefix yotta, as illustrated in the linear-log graph (at right) of difference versus storage size.
Common uses

The byte is also defined as a data type in certain programming languages. The C and C++ programming languages, for example, define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the char integral data type be capable of holding at least 256 different values and be represented by at least 8 bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte.[10][11] The actual number of bits in a particular implementation is exposed as the CHAR_BIT macro in limits.h. Java's primitive byte data type is always defined as 8 bits and signed, holding values from −128 to 127. The C# programming language, along with other .NET languages, has both an unsigned byte (named byte) and a signed byte (named sbyte), holding values from 0 to 255 and −128 to 127, respectively.
In addition, the C and C++ standards require that there are no "gaps" between two bytes. This means every bit in memory is part of a byte.[12]
In data transmission systems, a byte is defined as a contiguous sequence of bits in a serial data stream, such as in modem or satellite communications, representing the smallest meaningful unit of data. Such bytes might include start bits, stop bits, or parity bits, and could therefore range from 7 to 12 bits to carry a single 7-bit ASCII code.[citation needed]

tl;dr

just copied article from wikipedia.

Thanks, so a dark byte is just a copied wikipedia article. The more you know.