On the other hand, C has designated initializers for arrays (http://en.cppreference.com/w/c/language/array_initialization) and structs (http://en.cppreference.com/w/c/language/struct_initialization) (and has since C99)
C++11 has something similar (though not identical) with brace-enclosed initializer lists. It's nice to be able to initialize a std::map<std::string, int> like this:
std::map<std::string, int> whatever =
{
    { "whatever1", 1 },
    { "whatever2", 2 },
    /* ... snip ... */
    { "whatever99", 99 }
};
For production/live code that does this see ProductPool.cpp (https://github.com/ldicker83/OPHD/blob/master/src/ProductPool.cpp) from the OutpostHD project.
but because the C++ ISO committee prefers insanity
I haven't laughed so hard in quite a while. But yes, I agree, C++ has lagged behind in a lot of very useful features. C++17 thankfully includes a lot of them.
I would say it's valuable to learn a little about how C++'s extra features over C (OOP, overloading, operator overloading, the STL, exceptions (yuck), etc.) are typically implemented and how they get compiled, and to understand their overhead and how to mitigate it. If you're conscious of that, it's not too difficult to write C++ code that's just as efficient as the equivalent C.
You have a point. I didn't realize how amazingly useful the STL was until I started pulling apart the idTech 2 engine. Holy shit, the ways to avoid pointer bugs in C++ make C look like I'm carving stone tablets in Sumerian. But I also do appreciate what I can do with it on a very low level... e.g., if I were going to write software for an embedded system like a Raspberry Pi, or something with even lower specs, I'd probably stick with C with potentially some hand-tuned assembly.
I'm assuming multiple inheritance means something like:
great-grandparent->grandparent->parent->child->grandchild ... etc.
What you described is a single inheritance hierarchy. Each child class only inherits from a single parent class.
In C++, multiple inheritance is when an object inherits from more than one base class. E.g.:
class Foo
{
/* Whatever stuff here */
};
class Bar
{
/* Whatever stuff here */
};
class FooBar: public Foo, public Bar
{
/* I inherit from two parents! */
};
Does C++ allow for Composition, like Python does?
Yes. It's just a fancy way of saying that a class can use other classes as member variables. E.g.:
class Contrived
{
/* useless stuff here */
};
class SomeClass
{
public:
SomeClass() {}
~SomeClass() {}
private:
int _someInt;
std::string _someString; /* std::string is a class */
Contrived _someContrivedObject; /* Contrived is a class declared above */
};
The above code is basically useless but it does illustrate composition. Basically, you're composing a class using other classes (and also non-classes if you want to be very broad about the definition).
A good way to think about it is like this: Inheritance means Class B is a Class A vs. composition in which Class B has a Class A.
I read through the chapter on bit operations and did a couple of the exercises associated with bit operations. Very interesting. Also relevant since MapData uses a couple of bit operations. Not that my solutions matter to anyone, but they are pasted below.
// Checks if the compiler performs bit right shifts as arithmetic or logical
_Bool ArithmeticRightShift(void)
{
signed int test = -1;
test >>= 1;
return test < 0;
}
// Returns how many bits an unsigned int contains for this compiler/platform
unsigned int IntBitSize(void)
{
unsigned int test = ~0;
unsigned int intBitSize = 0;
while (test > 0)
{
intBitSize++;
test <<= 1;
}
return intBitSize;
}
// Checks if the supplied bit is set to 1 (ON) in the number. Bit counts from left to right, starting at 1.
_Bool IsBitSet(unsigned int number, unsigned int bitToSet)
{
    /* bitShift is unsigned, so a "bitShift < 0" test can never be true;
       validate bitToSet before subtracting instead. */
    if (bitToSet < 1 || bitToSet > IntBitSize()) {
        return 0;
    }
    unsigned int bitShift = IntBitSize() - bitToSet;
    number >>= bitShift;
    return number & 1;
}
// Sets the supplied bit in the number to 1. Bit counts from left to right.
unsigned int SetBit(unsigned int number, unsigned int bitToSet)
{
unsigned int bitShift = IntBitSize() - bitToSet;
if (bitShift < 0) {
return number;
}
unsigned int bitToAdd = 1 << bitShift;
return number | bitToAdd;
}
int main(void)
{
    _Bool arithmeticRightShift = ArithmeticRightShift();
    unsigned int intBitSize = IntBitSize();
    _Bool isBitSet = IsBitSet(100, 2); // Should be false
    unsigned int newNumber = SetBit(0, 30); // Should be 4 (with 32-bit ints)
    return 0;
}
Vagabond, I love that you went and wrote a paper on this. I feel inspired.
Actually, writing a paper to explain stuff is probably a really good way to learn something. I've heard it said that if you really want to learn something well, you should try teaching it to someone else. I recently came across mention of some research with a similar conclusion: you learn best when you struggle to explain a concept.
I had a few thoughts reading through the paper. Some of it is additional knowledge, which you might find interesting.
The fastest 2-input logic gate is NAND. The second fastest is NOR. An AND gate is typically implemented as a NAND gate followed by a NOT gate. An OR gate is typically implemented as a NOR gate followed by a NOT gate. I'm willing to bet that was unexpected. If you're curious, I could give a considerable explanation (read "rant") as to why. =)
But wait, there's more! NAND is Functionally Complete (https://en.wikipedia.org/wiki/Functional_completeness). Any boolean expression can be represented entirely by NAND. NOR is also functionally complete. NAND and NOR are known as universal gates. Neither AND, nor OR is functionally complete. (See how I expressed that there? ;)) Even together, the set {AND, OR} is still not functionally complete. The set {NOT, AND, OR} is functionally complete, however, it's not minimal. You can remove an operation and still have a functionally complete set. The sets {NOT, AND} and {NOT, OR} are both functionally complete (and minimal).
Though that last part shouldn't be surprising due to De Morgan's Laws (https://en.wikipedia.org/wiki/De_Morgan%27s_laws), which express AND and OR in terms of each other and NOT. Using fairly typical digital logic notation, these laws can be expressed as:
(AB)' = (A' + B') (read: "not (A and B) equals (not A) or (not B)")
(A + B)' = (A'B') (read: "not (A or B) equals (not A) and (not B)")
Which brings me to notation. There are a lot of variants, and it often depends on context, such as mathematically founded boolean algebra, digital logic textbooks, or programming languages. Here are a few common symbols:
NOT: ¬, x̄ (overbar), ', !, ~
AND: ∧, ·, (no symbol), &&, &
OR: ∨, +, ||, |
Examples:
C++:
(x && y) || !z (logical)
(x & y) | ~z (bitwise)
Mathematical Boolean Algebra:
(x ∧ y) ∨ ¬z = (x AND y) OR NOT(z)
Digital Logic textbooks:
AB + C̅ = (A AND B) OR NOT(C)
AB + C' = (A AND B) OR NOT(C)
You'll sometimes see a dot used for AND, much like for multiplication in Algebra: · (Unicode U+22C5)
Like multiplication, AND is the default assumed operation when the symbol is omitted.
ab = a · b = a * b ("a times b") (Math)
AB = A · B = A AND B (Digital Logic)
There's some fun stuff where, given a truth table, you can implement a corresponding boolean expression, minimize the number of gates using Karnaugh maps (https://en.wikipedia.org/wiki/Karnaugh_map), and convert it to NAND form. Though perhaps I'll save that for another time.
Feedback specific to your paper:
16-bit Word 0 to 65,536 -32,767 to 32,767
-32,768 ;) (and the unsigned range tops out at 65,535, not 65,536)
0 ^ 3 = 0 + 2 ^ 2 = 4 + 0 ^ 1 = 0 + 2 ^ 0 = 1 Decimal 5
The notation looks a bit off. I assume you mean something like:
0*2^3 + 1*2^2 + 0*2^1 + 1*2^0
= 0 + 4 + 0 + 1 = 5
XNOR ~(a ^ b ) OR a == b
Not wrong, but slightly odd. XNOR does represent equivalence of boolean values, but as C++ code the two expressions behave quite differently for non-boolean values such as ints. The left side uses bitwise operators, which work on corresponding bits in parallel; the right side is a logical comparison that produces a single boolean value. This is the same distinction as between & and &&, or | and ||: the single-character forms are bitwise operations, while the double-character forms are logical operations.
You're correct in that the size of a WORD is the register size for a particular processor. In the case of x86, the original processor in that family had 16-bit registers. When the architecture was extended in a backwards compatible way to 32-bit, and later to 64-bit, they kept referring to WORD as a 16-bit datatype and began calling 32-bit values DWORDs (double words), and 64-bit values QWORDs (quad words). Other architectures may not support operations smaller than the WORD size, relying on bitmasks and AND to work on smaller sets of bytes.
This one is kind of a nitpick, especially considering there isn't some obvious inherent ordering of operations, but I find it odd your tables have AND and OR next to each other, but split up and re-arrange NAND and NOR. ;)
Leeor, there's usually a way to use references. ;)