Curiosity on the bit API
Started by HDeffo, Apr 10 2015 10:47 AM
3 replies to this topic

#1
Posted 10 April 2015 - 10:47 AM

Most of the bit functions appear to compare only the bits the two numbers actually use. For example, bit.bxor(0, 15) returns 15: since 15 is 4 bits, the result still fits in 4 bits, as expected. The bnot function, however, evaluates the given number as a 32-bit integer, so no matter what number you put in, you get a ridiculously large, seemingly unusable number. What is the practicality of, or reason for, this function running differently from the other bit functions?
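The behaviour being described can be sketched in Python (the Lua bit API itself isn't run here; as an assumption, `^` stands in for bit.bxor, and `~n & 0xFFFFFFFF` mimics bit.bnot treating its input as an unsigned 32-bit integer):

```python
# XOR of two small numbers stays small -- only differing bits are set.
print(0 ^ 15)            # 15

# bnot flips every one of the 32 bits, including all the high zeros,
# which is where the "ridiculously large" number comes from.
# (Python ints are unbounded, so we mask down to 32 bits by hand.)
print(~3 & 0xFFFFFFFF)   # 4294967292
```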
#2
Posted 10 April 2015 - 11:31 AM
bit.bxor() isn't just checking "the first four bits". It's checking all those provided to it, same as bnot. The functions are behaving the same in that regard.

If you only want to invert a certain number of bits, then bit.band() your result.
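A quick Python sketch of that masking idea (0xF is an assumed 4-bit mask; `&` plays the role of bit.band, and `~n & 0xFFFFFFFF` the role of bit.bnot):

```python
MASK_4BIT = 0xF             # keep only the low 4 bits

huge = ~3 & 0xFFFFFFFF      # 32-bit NOT: the huge number
print(huge & MASK_4BIT)     # 12 -- the NOT restricted to 4 bits
```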
#3
Posted 10 April 2015 - 11:48 AM
Right, I had just woken up when I was testing this and temporarily forgot logic.

Ignore me; I just got shocked by the massive output at 3 in the morning. For now I'm just doing 15 - number instead, since that will always equal the bitwise inversion as long as the number is 4 bits or less.
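The 15 - number trick can be sanity-checked in Python (a sketch, assuming 4-bit values; it works because 15 is all ones in 4 bits, so the subtraction never borrows):

```python
# For every 4-bit value, 15 - n equals the 4-bit bitwise NOT of n.
for n in range(16):
    assert 15 - n == (~n) & 0xF
print("15 - n matches 4-bit NOT for all n in 0..15")
```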
#4
Posted 10 April 2015 - 11:49 AM
To illustrate Bomb Bloke's answer:

The "practicality" of always using 32 bits is that it doesn't take any longer to execute than it would on a specified set of bits, and it elegantly sidesteps all the problems you get when working with different bit widths.

Also: how would you give your function a zero at the beginning? You wouldn't be able to, since your misconception assumes that the highest bit set to 1 defines the length of your bit chain.

    0...0000011 =  3
XOR 0...0001110 = 14
  = 0...0001101 = 13
    ^^^^^^^^^^^
    affects all bits; you always provide a ton of zeros to the function when you choose small numbers.

NOT 0...0000011 =  3
  = 1...1111100 = the large number you are talking about
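The numbers in that illustration can be verified in Python (`^` and `~` with a 32-bit mask standing in, as an assumption, for bit.bxor and bit.bnot):

```python
print(3 ^ 14)              # 13 -- XOR only touches the bits provided
print(~3 & 0xFFFFFFFF)     # 4294967292 -- NOT flips all 32 bits: 1...1111100
```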