A common problem in every programming language except those highly specialized for mathematics (and even then it must arise for irrational numbers) is that of floating-point precision. There are only so many different numbers you can represent in eighty bits, and they provide only an approximation to the set of real numbers, but usually a sufficient approximation.
The C++ data type double, the most popular floating-point type, on a current-generation Intel x86 processor holds a mantissa worth about 16 decimal digits of precision and an exponent part that can represent rather large values. The problem is what happens when operations exceed this precision: digits drop off the end, so a value that looks like it should be within precision has actually long since succumbed to what's known as catastrophic cancellation. The classic example:
#include &lt;iostream&gt;
using namespace std;

int main() {
    double x = 2.49E80; // that's 2.49 x 10^80
    x = x + 1;
    cout << x << endl; // print x
    x = x - 2.49E80;
    cout << x << endl;
}
This block of code makes a very large number x, adds one to it, prints the result, subtracts the original number, and prints the final result. If this were done with real numbers instead of doubles, the obvious result would be that the last line prints one. That is not what happens: the 1 gets lost in the noise. The result is an error of greater than 100%; in fact it is a relative error of infinite magnitude, since no matter what you multiply the zero by, you're not getting the right answer. The +/- error at this point is actually on the order of 10^64, as a conservative estimate, and a 1 is long gone.
It is quite possible to be completely unaware of this error until it slaps you in the face. This sort of error has killed a number of programs at TopCoder, where answers have to be within 10^-9, either relative or absolute (plus or minus that, or plus or minus that times the value, whichever gives the more generous range), to be deemed correct; error accumulated through a loop or recursion is enough to make a program fail.
What would be useful is a class that behaves like a long double, with eighty-bit precision (a double is 64 bits), but is actually an object and so has methods to report the error on the value. Operator overloading and implicit constructor casts would make it appear like a double in all basic mathematical operations, but it would keep track of the growing error in the value, a +/- range maintained by proper statistical rules. This error would start out at -inf (INT_MIN; given the limits of long double exponents, this could even be a short int) if an integer has been represented exactly and cast to an ErrorDouble (for lack of a better idea), or at -18 (representing 10^-18, which for a long double is actually worse than the real error, and I need to find out what that is) relative to the base-10 exponent of the value; realistically, the error would be tracked relative to powers of 2 rather than powers of 10. You could also construct one with an explicit level of error; if that error is less than the inherent error of a long double, it would be kept only until actual math is done (same with an int-constructed "perfect double"), at which point it would revert to the error of the math, capped at the best precision the long double could offer. Error propagates, of course, and this would be tracked: multiplying or adding values does the appropriate things to the error.
The thing is, I'm a lazy sort of lizard, and I expect that this already exists. If it does, where is it? And if it doesn't, is it really worth the trouble to write?