I was working with the key-value database RocksDB for a project and came up with a method of using structs as keys that works surprisingly well. Why would you want to do this? The main benefit is avoiding writing functions to convert each struct key to a string and back again, which matters especially when there are many types of keys. There are some definite drawbacks to this approach, but also some significant benefits.
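The post's own struct layout isn't shown in this excerpt, so the key type below is hypothetical. The idea, as I understand it, is to reinterpret a tightly packed POD struct's bytes as the key, which is the form RocksDB's `Slice(const char*, size_t)` constructor accepts. A minimal sketch:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical key type: a POD struct of fixed-width fields.
// Packing matters: padding bytes would hold indeterminate values and
// make byte-wise key comparison unreliable.
#pragma pack(push, 1)
struct UserEventKey {
    uint64_t user_id;
    uint32_t event_type;
    uint64_t timestamp;
};
#pragma pack(pop)

// Copy the struct's bytes into a string, suitable for passing to
// rocksdb::Slice(key.data(), key.size()).
std::string ToKey(const UserEventKey& k) {
    return std::string(reinterpret_cast<const char*>(&k), sizeof(k));
}

// Recover the struct from the raw key bytes.
UserEventKey FromKey(const std::string& bytes) {
    UserEventKey k;
    assert(bytes.size() == sizeof(k));
    std::memcpy(&k, bytes.data(), sizeof(k));
    return k;
}
```

One caveat worth noting: under RocksDB's default bytewise comparator, little-endian integer fields will not iterate in numeric order; encoding the fields big-endian, or installing a custom comparator, is the usual fix.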
Calculating Accurate Means Using 2048
The goal here is to find an algorithm that can calculate the mean of very large lists of numbers with high accuracy and good performance. For small lists, using double, or even float, is more than good enough in most cases. If, however, you start averaging lists that are millions of numbers long, you can get significant error buildup. With the most straightforward approach, summing all the numbers and dividing by the count, the running sum grows very large, and once it dwarfs the next element, the low-order bits of that element are lost in the addition.
Shadowed Variables in the Debugger
I had an interesting run-in with a C++ debugger recently. Using Visual Studio Code and GDB, I was debugging code similar to the following:

    int a = 0;
    int b = 5;
    while (a < b) {
        a = b;
        printf("a = %d\n", a);
        // more code in the loop
    }

What was strange is that when I set the debugger to stop on the statement a = b; and checked the value of b at that point, it reported b = 32767, which was wrong.
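The excerpt doesn't show the rest of the loop body, so the reconstruction below is my guess at the shape of code that produces this symptom: a later declaration of `b` inside the loop shadows the outer one, and when the debugger stops at `a = b;` it can resolve the name `b` to the inner variable, whose stack slot hasn't been written yet on that iteration.

```cpp
#include <cstdio>

// Minimal reconstruction (hypothetical): the loop body declares its
// own `b`, shadowing the outer one. Stopped on `a = b;`, a debugger
// may show the inner `b`, which is not yet initialized at that point,
// so it displays whatever garbage is on the stack (e.g. 32767).
int RunLoop() {
    int a = 0;
    int b = 5;
    while (a < b) {
        a = b;                // breakpoint here: which `b` is shown?
        printf("a = %d\n", a);
        int b = a * 2;        // hypothetical shadowing declaration
        printf("inner b = %d\n", b);
    }                         // the loop condition uses the outer `b`
    return b;                 // outer `b`, still 5
}
```

The program itself behaves correctly, since `a = b;` and the loop condition both bind to the outer `b`; only the debugger's view of the name is surprising.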