The usual definition of the decibel is of course that the dB value y is related to the proportion x by

y = 10 · log10(x).

It bothers me a bit that there's two operations in there. After all, if we expect that y can be manipulated as a logarithm is, shouldn't there be simply some log base we can use, since changing log base is also a multiplication (rather, division, but same difference) operation? With a small amount of algebra I found that there is:

y = log_(10^0.1)(x).

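(The small amount of algebra is just the change-of-base identity, written out here for completeness: 10·log10(x) = log10(x)/0.1 = log10(x)/log10(10^0.1) = log_(10^0.1)(x).)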
Of course, this is not all that additionally useful in most cases. If you're using a calculator or a programming language, you usually have log_e and maybe log10, and 10·log10 will have less floating-point error than involving the irrational value 10^0.1. If you're doing things by hand, you either have a table (or memorized approximations) of dB (or log10) and are done already, or you have a tedious job which carrying around 10^0.1 is not going to help.

I've finally gotten around to uploading my custom Mac OS X keyboard layout. The most significant additions over the standard U.S. keyboard layout are: mathematical symbols (relevant to calculus and symbolic logic), Greek characters, and arrows. It also makes (, ), :, and | unshifted.

It is an often-used fact (in examples of elementary trigonometry and problems that involve it) that the sines of certain simple angles have simple expressions themselves. Specifically,

sin(0)=0
sin(π/6)=sin(30°)=1/2
sin(π/4)=sin(45°)=√(2)/2
sin(π/3)=sin(60°)=√(3)/2
sin(π/2)=sin(90°)=1

Furthermore, these statements about angles in the first quadrant can be reflected to handle similar common angles in the other three quadrants, and cosines. There is a common diagram illustrating this, considering sine and cosine as the coordinates of points on the unit circle.

When attempting to memorize these facts as one is expected to, I observed a pattern: each of the values may be expressed in the form √(x)/2.

sin(0)=√(0)/2
sin(π/6)=sin(30°)=√(1)/2
sin(π/4)=sin(45°)=√(2)/2
sin(π/3)=sin(60°)=√(3)/2
sin(π/2)=sin(90°)=√(4)/2

I found this pattern to be quite useful. It does not, however, explain why those particular angles (quarters, sixths, eighths, and twelfths — but not tenths! — of circles) form this pattern of sines.
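Here's a quick numerical check of the pattern (my own addition, not from the original post), in a few lines of Haskell:

-- Compare sin of 0°, 30°, 45°, 60°, 90° (in radians) with sqrt n / 2 for n = 0..4.
main :: IO ()
main = mapM_ check (zip [0, pi/6, pi/4, pi/3, pi/2] [0 .. 4 :: Int])
  where
    check (theta, n) =
      putStrLn (show theta ++ ": sin = " ++ show (sin theta)
                ++ " vs sqrt " ++ show n ++ " / 2 = "
                ++ show (sqrt (fromIntegral n) / 2))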

YRFOTD #2

Monday, December 6th, 2010 07:59

10,000 minutes ≈ 1 week
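(Checking: 10,000 min ÷ (60 min/h × 24 h/day) ≈ 6.9 days, close enough to a week.)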

Has anything been done in logic programming (especially in languages not too far from the classic Prolog) which is analogous to monads and the IO monad in Haskell; that is, providing a means of performing IO or other side-effects which does not have the programmer depend on the evaluation/search order of the language?

In other words, what is to Prolog as Haskell is to ML?

(no subject)

Monday, October 4th, 2010 18:39
dx_{leaves ∈ air, avg}/dt ≠ 0     when t ≈ now (mod 1 solar year)

I forget why I wrote this Haskell program, but it's cluttering up my do-something-with-this folder, so I’ll just publish it.

-- This program calculates all the ways to combine [1,2,3,4] using + - * / and ^
-- to produce *rational* numbers (i.e. no fractional exponents). (It could be so
-- extended given a data type for algebraic numbers (or by using floats instead
-- of exact rationals, but that would be boring).)
--
-- Written September 7, 2009.
-- Revised August 25, 2010 to show the expressions which produce the numbers.
-- Revised August 26, 2010 to use Data.List.permutations and a fold in combine.
-- 
-- In the unlikely event you actually want to reuse this code, here's a license
-- statement:
-- Copyright 2009-2010 Kevin Reid, under the terms of the MIT X license
-- found at http://www.opensource.org/licenses/mit-license.html

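The code itself was behind a cut and isn't reproduced here, but as a rough sketch (my own reconstruction, not the original; the names Expr, combine, eval, and showRat are invented, and the formula count it reports need not match the output quoted below), the approach the comments describe might look like this:

import Data.List (nubBy, permutations, sortOn)
import Data.Ratio (denominator, numerator)

data Expr = Lit Rational | Op Char Expr Expr

instance Show Expr where
  show (Lit r)    = show (numerator r)
  show (Op c a b) = "(" ++ show a ++ " " ++ [c] ++ " " ++ show b ++ ")"

-- All ways to combine an ordered list of leaves into binary expression trees.
combine :: [Expr] -> [Expr]
combine [x] = [x]
combine xs  = [ Op c l r
              | (ls, rs) <- [ splitAt k xs | k <- [1 .. length xs - 1] ]
              , l <- combine ls, r <- combine rs, c <- "+-*/^" ]

-- Evaluate over exact rationals, rejecting division by zero and
-- fractional exponents (and negative exponents of zero).
eval :: Expr -> Maybe Rational
eval (Lit r)    = Just r
eval (Op c a b) = do
  x <- eval a
  y <- eval b
  case c of
    '+' -> Just (x + y)
    '-' -> Just (x - y)
    '*' -> Just (x * y)
    '/' -> if y == 0 then Nothing else Just (x / y)
    '^' -> if denominator y /= 1 || (x == 0 && numerator y < 0)
             then Nothing
             else Just (x ^^ numerator y)
    _   -> Nothing

showRat :: Rational -> String
showRat r | denominator r == 1 = show (numerator r)
          | otherwise          = show (numerator r) ++ "/" ++ show (denominator r)

main :: IO ()
main = do
  let leaves  = map (map (Lit . fromInteger)) (permutations [1, 2, 3, 4])
      exprs   = concatMap combine leaves
      results = [ (v, e) | e <- exprs, Just v <- [eval e] ]
      unique  = nubBy (\a b -> fst a == fst b) (sortOn fst results)
  mapM_ (\(v, e) -> putStrLn (show e ++ " = " ++ showRat v)) unique
  putStrLn ("Tried " ++ show (length exprs) ++ " formulas, got "
            ++ show (length unique) ++ " unique results.")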

I'd include the output here, but that would spam several aggregators, so I'll just show some highlights. The results are listed in increasing numerical order, and only one of the expressions giving each distinct result is shown.

(1 - (2 ^ (3 ^ 4))) = -2417851639229258349412351
(1 - (2 ^ (4 ^ 3))) = -18446744073709551615
(1 - (3 ^ (2 ^ 4))) = -43046720
(1 - (4 ^ (3 ^ 2))) = -262143
(1 - (4 ^ (2 ^ 3))) = -65535
...all integers...
((1 - (2 ^ 4)) * 3) = -45
(((1 / 2) - 4) ^ 3) = -343/8
((1 - (3 ^ 4)) / 2) = -40
(1 - ((3 ^ 4) / 2)) = -79/2
(1 - ((3 ^ 2) * 4)) = -35
...various short fractions...
(1 / (2 - (3 ^ 4))) = -1/79
(((1 + 2) - 3) * 4) = 0
(1 / (2 ^ (3 ^ 4))) = 1/2417851639229258349412352
(2 ^ (1 - (3 ^ 4))) = 1/1208925819614629174706176
(1 / (2 ^ (4 ^ 3))) = 1/18446744073709551616
(2 ^ (1 - (4 ^ 3))) = 1/9223372036854775808
(2 ^ ((1 - 4) ^ 3)) = 1/134217728
...various short fractions...
(((3 ^ 2) + 1) ^ 4) = 10000      (the longest string of zeros produced)
...all integers...
(2 ^ (3 ^ (1 + 4))) = 14134776518227074636666380005943348126619871175004951664972849610340958208
(2 ^ ((1 + 3) ^ 4)) = 115792089237316195423570985008687907853269984665640564039457584007913129639936
Tried 23090 formulas, got 554 unique results.

In the unlikely event that you haven't heard of it already, the barber paradox is:

The barber [who is the only barber in town] shaves every man who does not shave himself. Does the barber shave himself?

Now, this can be considered just logically contradictory, or a gotcha (“the barber is a woman”). But how about considering it as a poorly-written specification? Under this principle I propose a correction:

The barber shaves every man who would not otherwise be shaved.

Ask them what the length of a vector is.

(no subject)

Monday, April 26th, 2010 09:32

(Images are public domain from Wikimedia Commons: 1 2)

One bit of advice sometimes given to the novice programmer is: don't ever compare floating-point numbers for equality, the reason being that floating-point calculations are inexact; one should instead use a small epsilon (allowable error), e.g. if (abs(value - 1.0) < 0.0001).

This advice is actually wrong, or rather, overly strong. There is a situation in which it is 100% valid to compare floats, and that is a cache, or anything else which is comparing a float not with a specific constant (in which case the epsilon notion is appropriate) but rather with a previous value from the same source; floating-point numbers may be approximations of exact arithmetic, but that doesn't mean you won't get the same result from the same inputs.

So, don't get any bright ideas about outlawing aFloat == anotherFloat.

Unfortunately, there's a case in which the common equality on floats isn't what you want for previous-value comparison anyway: for most definitions of ==, NaN ≠ NaN. This definition makes sense for numerics (and conforms to the IEEE floating-point specification), because NaN is “not a number”; it's an error marker, provided as an alternative to exceptions (or rather, floating-point error signals/traps/whateveryoucallit) which propagates to the end of your calculation rather than aborting it and requiring immediate error handling, which can be advantageous in both code simplicity and efficiency. So if you think about calculating within the space of “numbers”, then NaN is outside of that. But if you're working in the space of “results of calculations”, then you probably want to see NaN == NaN, but that may not be what you get.

Mathematically, the floating-point comparison is not an equivalence relation, because it is not reflexive on NaN.

(It's also typically the case that 0 == -0, even though positive and negative zero are distinct values. Oh, and NaNs carry data, but I'm not talking about that.)
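As a concrete illustration (my own, in Haskell rather than the languages discussed below), both quirks are easy to observe:

main :: IO ()
main = do
  let nan = 0 / 0 :: Double
  print (nan == nan)                       -- False: == is not reflexive on NaN
  print (0.0 == (-0.0 :: Double))          -- True, though the two zeros are distinct values
  print (isNegativeZero (-0.0 :: Double))  -- True: the distinction is observable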

What to do about it, in a few languages:

JavaScript

Even the === operator compares numeric values rather than identities, so if you want to compare NaN you have to do it as a special case. Google Caja handles it this way:

/**
 * Are x and y not observably distinguishable?
 */
function identical(x, y) {
  if (x === y) {
    // 0 === -0, but they are not identical
    return x !== 0 || 1/x === 1/y;
  } else {
    // NaN !== NaN, but they are identical.
    // NaNs are the only non-reflexive value, i.e., if x !== x,
    // then x is a NaN.
    return x !== x && y !== y;
  }
}
Common Lisp

The = operator generally follows the IEEE comparison (if the implementation has NaN at all) and the eql operator does the identical-object comparison.

E

The == operator is guaranteed to be reflexive, and to return false for distinguishable objects, so it is appropriate for the “cache-like” use cases; the <=> operator does the conventional floating-point comparison (!(NaN <=> NaN), 0.0 <=> -0.0).

2000 food calories/day ≈ 100 watts
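(Checking: 2000 kcal/day × 4184 J/kcal ÷ 86,400 s/day ≈ 97 W.)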

(no subject)

Sunday, March 14th, 2010 14:28

“Extended subset” makes perfect sense as long as you don’t think about it with set theory.

Normal People Things:

  • Gone to a party unrelated to my family.

Never thought I'd do:

  • Worn a t-shirt with text on it.
  • Written an essay structured using a gratuitous extended metaphor.

Extra nerd points:

  • Programmed a number type which carries units and error values, to reduce the tedium of lab reports.
  • Learned to write all my assignments in LaTeX.

Java programming class (yes, I know Java already) today, introducing loops. Task: write a factorial program. I wrote mine to use int arithmetic if possible, floating-point if not. Test the edge cases, of course. It says:

500! ≈ Infinity

I'm glad I used the “approximately equals” sign.
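(Not the original Java, but a quick Haskell illustration of why the output reads that way: 500! overflows even double precision, whose range tops out near 1.8 × 10^308, while exact integers handle it fine.)

factExact :: Integer -> Integer
factExact n = product [1 .. n]

factFloat :: Double -> Double
factFloat n = product [1 .. n]

main :: IO ()
main = do
  print (factFloat 500)                  -- prints Infinity
  print (length (show (factExact 500)))  -- digit count of the exact value (over a thousand)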

(This is (finally, now that the semester is over and I have some free time, heh) one of those posts-related-to-schoolwork I mentioned before.)

Does the infinite series $ \displaystyle\sum_{n=2}^{∞} \dfrac{n^2}{n^3 + 1}$ converge?

First, rewrite it to have n = 1 as $ \displaystyle -\frac{1}{2} + \sum_{n=1}^{∞} \dfrac{n^2}{n^3 + 1}$ , and discard the constant term since it does not affect convergence. The terms of this series are strictly less than those of $ \displaystyle \sum_{n=1}^{∞} \dfrac{n^2}{n^3} = \sum_{n=1}^{∞} \dfrac{1}{n}$ ; therefore, there is some x such that

$\displaystyle \sum_{n=1}^{∞} \dfrac{n^2}{n^3 + 1} < x < \sum_{n=1}^{∞} \dfrac{1}{n}$

Let k be the upper bound of the sum as we take the limit: $ \displaystyle \sum_{n=1}^{∞} \dfrac{n^2}{n^3 + 1} = \lim_{k\to ∞}\sum_{n=1}^{k} \dfrac{n^2}{n^3 + 1}$ .

Since $ \displaystyle \sum_{n=1}^k \dfrac{1}{n^p}$ is a continuous function of p, for any k there is some p greater than 1 such that

$\displaystyle \sum_{n=1}^k \dfrac{n^2}{n^3 + 1} < x = \sum_{n=1}^k \dfrac{1}{n^p} < \sum_{n=1}^k \dfrac{1}{n}$

Since $ \displaystyle \sum_{n=1}^{∞} \dfrac{1}{n^p}$ is a p-series which converges, i.e. has a finite sum, and the series under consideration has a lesser sum by the above inequality, it converges.

Furthermore, the above may be generalized to a proof that any series whose terms are eventually less than those of the harmonic series converges.


However, it is invalid, and in fact $ \displaystyle\sum_{n=2}^{∞} \dfrac{n^2}{n^3 + 1}$ diverges.
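(For the record, a standard way to see the divergence is the limit comparison test against the harmonic series: $ \displaystyle \lim_{n\to ∞} \dfrac{n^2/(n^3 + 1)}{1/n} = \lim_{n\to ∞} \dfrac{n^3}{n^3 + 1} = 1$ , which is finite and positive, so the series diverges together with $ \displaystyle \sum_{n=1}^{∞} \dfrac{1}{n}$ .)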

I managed to convince myself and my calculus teacher with it, but we realized it must be invalid after he presented a counterexample to the general case. I then realized which step was invalid.


You can't use the same k for all three series in the second inequality; each infinite sum has its own independent limit, and what this proof is doing is along the lines of ∞ - ∞ = 0 — assuming that “two infinities are the same size”. Or rather, the inequality itself (among partial sums) is true, but that fact has nothing to do with the properties of the true infinite series.

I would be mildly interested in a more formal description of this sort of failure: how the k inequality is true yet the independent series do not have the same relation.


Update: I have received many informative comments and replied to some; one pointed out an earlier mistake: $\displaystyle \sum_{n=1}^{∞} \dfrac{n^2}{n^3 + 1} < x < \sum_{n=1}^{∞} \dfrac{1}{n}$ is bogus because we can't compare the series until we know they converge.