II.5b: Structure of the Surreal Numbers

In addition to ONAG, some of the claims in this post are coming from Foundations of Analysis over Surreal Number Fields.

Ordinals
An “ordinal” is a Number equivalent to one with no right options. There are a few basic facts you should know about ordinal numbers.

  • If G is an ordinal, G={ordinals H<G |} (in particular, with this setup the collection of left options really is a set if we only pick equivalence class representatives of this standard form).
  • Any non-empty class (for example, a nonempty set) of ordinals has a least ordinal.
  • For any set of ordinals S, {S|} is an ordinal bigger than any member of S.
  • The nonnegative integers are ordinals, and ω={0,1,2,…|} is the next biggest ordinal after those.

If you happen to have a sense of ordinal arithmetic already, be warned that Surreal Number arithmetic works differently, in general.

Birthdays
With the ordinals in hand, we can give names to the steps we take to build all the Numbers (in fact, all the Conway positions). Before we have any positions, there’s nothing we can assign to be left or right options, so we say “on day 0, 0={|} was born”. Then on day 1, 1={0|} and -1={|0} (and *={0|0}) were born, etc. In general: the Numbers (or Conway positions) made by day α (α being an ordinal) are defined to be all the Numbers you get where left options and right options were “made by day β” for β<α. With this setup, each Number (or Conway position) has a “birthday”: the first ordinal α such that it was “made by day α”.

By convention, we usually only worry about the equivalence class of a position: {*|*}=0, so we’d usually say that {*|*} was born on day 0, not day 2. Also, as a consequence of the simplicity rule from II.5, a Number is equal to the unique one of earliest birthday that fits between the sets of left and right options.
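
Since short positions bottom out after finitely many steps, the formal (pre-equivalence) birthday is easy to compute. A minimal Python sketch, with a position encoded as a pair (left_options, right_options) of tuples (this encoding is my own convention, reused in later snippets):

    def birthday(g):
        # formal birthday of a short position: one more than the
        # largest birthday among its options
        opts = g[0] + g[1]
        return 0 if not opts else 1 + max(birthday(x) for x in opts)

    ZERO = ((), ())
    STAR = ((ZERO,), (ZERO,))            # * = {0|0}, born on day 1
    print(birthday(STAR))                # 1
    print(birthday(((STAR,), (STAR,))))  # 2: {*|*} as a position, even though {*|*}=0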

Infinite days and real Numbers
Dyadic fractions were born on finite days. What Numbers were born on day ω? Well, the infinite ω={0,1,2,…|}=”LLLL…” was born (and its negative). So was the infinitesimal 1/ω={0|1,1/2,1/4,…}=”LRRR…” (and its negative) and dyadic translations of it like 1+1/ω={1|1+1,1+1/2,1+1/4,…}.

The other things that were born on day ω are every real number that’s not a dyadic! Like 1/3={0,1/4,1/4+1/16,…|1/2,1/4+1/8,1/4+1/16+1/32,…} and π={3,3+1/8,3+1/8+1/64,…|4,3+1/2,3+1/4,3+1/8+1/16,…}. If you’re familiar with Dedekind cuts, you can probably figure out how to make an explicit definition of the reals this way (you can do it without the dyadic restriction, but then you won’t get all of the reals as early as possible).

On day ω+1 you get things like ω-1={0,1,2,…|ω}, 1/3+1/ω, and 1/(2ω)={0|1/ω}. If you want ω/2, you actually have to wait until day ω+ω, since the right options ω-n aren’t born until day ω+n; ω/2={0,1,2,…|ω,ω-1,ω-2,…}.

However, there’s an implicit definition of the reals that’s rather elegant. A Number G is said to be “real” if 1. There are some integers m and n such that m<G<n (so its value isn’t infinite) and 2. G={G-q|G+q} where q ranges over the positive rationals (or positive dyadics, if you prefer). ω isn’t a real number because it’s not less than any integer. 1/ω={0|1,1/2,1/4,…} isn’t a real number because 1/ω-q is negative for every positive rational q, so {G-q|G+q}=0 (not 1/ω) when G=1/ω.

Infinite Hackenstrings
The game of Hackenstrings actually covers all of the Surreal Numbers, but this takes a little care to see. Given a Number G with birthday α, we can form the day-β approximation to it, Gβ (for ordinal β≤α), by giving Gβ all Numbers born before day β that are less than G as left options, and similarly for the right options. Note that G0=0 for all Numbers G.

With these approximations, we can build a (possibly infinite; it’s really a function from the left set of some ordinal in standard form to {“L”,”R”}) Hackenstring equal to G in the following way. For each ordinal β less than the birthday of G, give the βth spot an “L” if G>Gβ, and an “R” otherwise. (We can’t ever have equality by the definition of “birthday” and day-β approximation.) Every one of these Hackenstrings is in a distinct equivalence class: to compare two distinct Hackenstrings, read them until the first spot they differ, and then “L”>blank>”R”.
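
For finite strings, that comparison rule is completely mechanical. A small Python sketch (my own illustration; finite strings only):

    RANK = {"L": 1, "": 0, "R": -1}    # "L" beats running out, which beats "R"

    def compare(a, b):
        """+1, 0, or -1 as the finite Hackenstring a is >, =, or < b."""
        for i in range(max(len(a), len(b))):
            x = a[i] if i < len(a) else ""
            y = b[i] if i < len(b) else ""
            if x != y:
                return 1 if RANK[x] > RANK[y] else -1
        return 0

    assert compare("LR", "LRR") == 1    # 1/2 > 1/4
    assert compare("RL", "") == -1      # -1/2 < 0
    assert compare("LL", "LL") == 0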

Scattered Closing Remarks

There are lots of different equivalent (up to FIELD isomorphism, I guess) ways to axiomatize the Surreals. There are more technical versions of statements like “they’re the unique FIELD with all the ordered fields in it”, or “they’re the unique FIELD where ordinal birthdays make sense”. You can adjoin i and get the Surcomplex Numbers (which are algebraically closed). You can define things like infinite series and something like a limit and lots of other crazy things that could fill at least one whole book.

Everything in the FIELD On2 was equivalent to a Nim pile: basically a Hackenstring except all the letters were “E”s, which can be taken by either player. Everything in the FIELD of Surreals was equivalent to a Hackenstring with no “E”s, just “L”s and “R”s. You could mix them up and get weird games like “EL”, but there’s no nice arithmetic for those. And I must emphasize: not every Conway position is equivalent to a Hackenstring.

II.5a: Arithmetic of the Surreal Numbers

Knowledge about the ordinals isn’t required for this post or the next one, but it would definitely add to your understanding. I’ll be talking about the Knuth-named “Surreal Numbers”, which for our purposes are just the Numbers (no “short” restriction) from the previous post. Conway denoted the proper class (bunch that’s too big to be a set) of surreal numbers “No”, but nowadays “𝕊” (for “Surreals”) is pretty popular, in analogy to ℝ for the reals. They’re the Surreals because they include the reals but go further. In fact, in some sense, the Surreals contain everything you might want to think of as a number (including the ordinals). Since the previous post contained the facts about Surreals that are relevant to CGT, and it would take pages and pages to prove all the basic assertions about the Surreals, let alone the non-basic ones, this post will omit most of the proofs (which can generally be found in On Numbers and Games, the primary source for this post and the next).

Where unspecified, Gℓ will range over the left options of G, etc. Remember that Numbers can be added (inductively) via the formula G+H={Gℓ+H,G+Hℓ|Gr+H,G+Hr} and they can be subtracted by adding the negative: -G={ -Gr | -Gℓ }. It turns out we can do more with Numbers, though.
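
Here’s a minimal Python sketch of those two formulas for short positions (the pair encoding (left_options, right_options) is my own convention, not from the post):

    ZERO = ((), ())
    ONE = ((ZERO,), ())

    def neg(g):
        # -G = {-Gr | -Gl}
        return (tuple(map(neg, g[1])), tuple(map(neg, g[0])))

    def add(g, h):
        # G+H = {Gl+H, G+Hl | Gr+H, G+Hr}
        return (tuple(add(x, h) for x in g[0]) + tuple(add(g, y) for y in h[0]),
                tuple(add(x, h) for x in g[1]) + tuple(add(g, y) for y in h[1]))

    def sub(g, h):
        return add(g, neg(h))

    # 1+1 comes out as the position {1,1|}, which is equal to, but not
    # identical to, 2={1|}: the formulas act position-by-position
    assert add(ONE, ONE) == ((ONE, ONE), ())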

Numbers can be multiplied.
With all of these Numbers that act like dyadic fractions and integers, it would be nice if we had a procedure that let us multiply 2={1|}={{0|}|}={{{|}|}|} by 1/2={0|1}={{|}|{{|}|}} to get 1={{|}|}. To get a formula for G*H, we just need a bunch of things less than it for the left options, and a bunch of things more than it for the right options.

A common first guess (perhaps even of Conway’s?) is to define it recursively by G*H={Gℓ*H,G*Hℓ|Gr*H,G*Hr}. However, a moment’s thought will show that this doesn’t actually make any sense for negative numbers, and another moment’s thought will show that this is literally the same definition as that for G+H! With a lot of thought, Conway came up with a janky definition that required G and H to be in some simplified form. With a lot more thought (months), Conway found the “right” definition.

The key idea is that we have four inequalities (coming from property 4. from the previous post and intuition about multiplication) (G-Gℓ)*(H-Hℓ)>0, (Gr-G)*(H-Hℓ)>0, etc. If we expand these out and manipulate, we get things like G*H>Gℓ*H+G*Hℓ-Gℓ*Hℓ, G*H<Gr*H+G*Hℓ-Gr*Hℓ, etc. This leads to the recursive definition G*H={Gℓ*H+G*Hℓ-Gℓ*Hℓ, Gr*H+G*Hr-Gr*Hr | Gℓ*H+G*Hr-Gℓ*Hr, Gr*H+G*Hℓ-Gr*Hℓ}. All the properties that you expect of multiplication hold up to equivalence (the product of positive numbers is positive, 1 times anything is itself, multiplication distributes over addition, etc.).
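
Here’s that product formula as a Python sketch on short positions (same pair convention as the earlier snippet, with its helpers repeated; le is the usual recursive comparison test), checking the 2 times 1/2 example from above:

    from itertools import product

    def neg(g): return (tuple(map(neg, g[1])), tuple(map(neg, g[0])))
    def add(g, h):
        return (tuple(add(x, h) for x in g[0]) + tuple(add(g, y) for y in h[0]),
                tuple(add(x, h) for x in g[1]) + tuple(add(g, y) for y in h[1]))
    def sub(g, h): return add(g, neg(h))
    def le(g, h):   # G<=H unless some Gl>=H or some Hr<=G
        return (not any(le(h, x) for x in g[0]) and
                not any(le(y, g) for y in h[1]))

    def mul(g, h):
        def term(x, y):   # X*H + G*Y - X*Y, the pattern in every option
            return sub(add(mul(x, h), mul(g, y)), mul(x, y))
        lefts = ([term(x, y) for x, y in product(g[0], h[0])] +
                 [term(x, y) for x, y in product(g[1], h[1])])
        rights = ([term(x, y) for x, y in product(g[0], h[1])] +
                  [term(x, y) for x, y in product(g[1], h[0])])
        return (tuple(lefts), tuple(rights))

    ZERO = ((), ())
    ONE = ((ZERO,), ())
    TWO = ((ONE,), ())
    HALF = ((ZERO,), (ONE,))
    assert le(mul(TWO, HALF), ONE) and le(ONE, mul(TWO, HALF))   # 2*(1/2) = 1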

Numbers can be divided.
This is a little surprising since if we imposed the finiteness condition, there’d be no Number corresponding to 1/3 (even though we have 1/2={0|1} and 1/4={0|1/2}). To find H/G, we really just need to multiply H by 1/G. To find 1/G, we really just need to be able to find 1/G for positive G (we can take the negative afterwards if necessary). Even then, it’s tricky to write down a formula (and it took a rather long while to find).

If G is positive, then we don’t need any of the negative left options (one of the left options has to be nonnegative to make G positive), so we’ll throw them out. Also, we’ll throw in 0 to the left options of G (G was positive so this is equal by the simplicity rule from the previous post). Then we can define H=1/G recursively by the following: H={0,(1+(Gr-G)*Hℓ)/Gr, (1+(Gℓ-G)*Hr)/Gℓ | (1+(Gℓ-G)*Hℓ)/Gℓ, (1+(Gr-G)*Hr)/Gr}. Now this looks problematic because it has options of H in the formula, but you can build the left and right sets with induction. Start with 0 as an option of H and use all of G’s options. Then go through all of the options of H you just got, etc. (Technical note: This transfinite induction doesn’t need more steps than the cardinality of the set of G’s options raised to the ℵ0th power.)

If you don’t want to use a crazy double-inductive definition, Kruskal solved the H part of the recursion: you can have the options (both left and right) of 1/G be (1-Π(1-G/Gi))/G where Gi ranges over the positive options of G, the products are always finite (the empty product of no terms is 1) and the dividing by G in the formula isn’t actually a problem since there’s a G in the expansion of the numerator that can cancel. That expression is a left option of 1/G exactly when an even number of the Gi are left options of G.
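
To see Kruskal’s formula produce something familiar, here’s a value-level Python illustration (the choice G=3, whose only positive option is its left option 2, is my own):

    from fractions import Fraction

    G = Fraction(3)
    prod = Fraction(1)    # running product of the (1 - G/Gi); the empty product is 1
    options = []
    for k in range(6):    # k = how many factors (all equal to 1 - 3/2) so far
        options.append((1 - prod) / G)
        prod *= 1 - G / 2
    print(options)
    # [0, 1/2, 1/4, 3/8, 5/16, 11/32]: even k gives the left options
    # 0, 1/4, 5/16, ... creeping up to 1/3, and odd k gives the right
    # options 1/2, 3/8, 11/32, ... creeping down to it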

In technical language, the Surreal Numbers are an ordered FIELD (they’re too big to be a field), and they contain an isomorphic copy of every ordered field.

You can take square roots (and beyond)!
There’s a doubly-recursive formula for the square root of a nonnegative Number, found by Clive Bach, that’s vaguely similar to the one for 1/G, and also a similar singly-recursive version; they’re both kinda complicated. To take other roots and do other things, there’s lots of complex machinery (you can get a different definition of square root using infinite series stuff, for instance).

Similarity to arithmetic for On2.
I talked about the “ordinal Nimbers” On2 in a previous post. As far as I know, these multiplication, division, and square root formulas were found for the Surreals first, and then people just tried the same formulas on the impartial games and they all magically worked. (There’s a lot of simplification that comes from the fact that G=-G for impartial G.) I admit I don’t know of a good reason why the same definitions should work for such a different class of positions; they break down horribly for general positions.

II.5: The Theory of Numbers

Almost all of the results in this post will apply equally well to the non-short case (you can add “transfinite” before “induction” and “recursion” where appropriate). If you’re not comfortable with that, you can just assume everything is finite and you won’t miss much. Also, when I say “positions” I really mean “positions” not “any equivalent position”; it will actually matter for some of these arguments.

In the positions we’ve looked at so far, some play nicely with the order (-5, 1/4, etc.) and others don’t (*3, ±2, etc.). It would be nice if we could find a nice big class of positions that are all comparable with each other (this means that A<B or A=B or A>B; remember that {|}={-1|1} even though they’re different positions). Also, since we’d like to use some sort of induction, it should be “closed under options”: the options (the positions that can be moved to) of something in this class should all be part of this class as well.

You might guess that all we have to do is take the positions that aren’t N-positions, but {0|*} is an L-position with an N-position as an option. A second guess might be that all we have to do is take positions that aren’t N-positions and whose options (and options’ options, etc.) aren’t N-positions, but that still wouldn’t work: {4|2} is one of those, but {4|2}-2=±1+1 is an N-position, so {4|2}‖2! From this sort of example, there’s one restriction that stands out (and it turns out to be the only restriction, in some sense):

1. If a position G has A as a right option and B as a left option and A≤B, then G‖A.
Why? We have to show that G-A is an N-position. If Right goes first, they can move to A-A=0 and win. If Left goes first, they can move to B-A≥0 and they either win because they handed Right a P-position or because they handed Right an L-position. ✓

With this in mind, we make the following recursive definition (since we’re dealing with non-loopy things, recursion is fine):
A position is said to be a “Number” if all of its options are Numbers and none of its right options are less than or equal to any of its left options.
Basically, if something doesn’t have any non-Numbers as options, and isn’t a position where theorem 1. would apply, it’s a Number. There are a few ways of motivating this definition, but they come down to the fact that a whole lot of things you’d want from numbers are satisfied by Numbers.
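
Since the definition is recursive and so is the standard comparison test, both transcribe directly into code for short positions (same pair convention as in the earlier snippets; a sketch):

    def le(g, h):
        # G <= H unless some left option of G is >= H
        # or some right option of H is <= G
        return (not any(le(h, x) for x in g[0]) and
                not any(le(y, g) for y in h[1]))

    def is_number(g):
        lefts, rights = g
        return (all(is_number(x) for x in lefts + rights) and
                not any(le(r, l) for r in rights for l in lefts))

    ZERO = ((), ())
    ONE = ((ZERO,), ())
    STAR = ((ZERO,), (ZERO,))   # {0|0}: a right option <= a left option
    HALF = ((ZERO,), (ONE,))    # {0|1}
    assert is_number(HALF) and not is_number(STAR)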

2. Every integer (as defined earlier) is a Number.
Why? Well, { | }=0 is a number since it has no options at all so there’s nothing that can go wrong. Note that here I’m using 0 to refer to that particular position, not any equivalent position like {*|*}. { 0 | }=1 is a number since 0 is a number and there are no right options to be less than or equal to any of the left options. By induction, the same can be said of {1|}=2, {2|}=3, etc. Similarly for {|0}=-1, {|-1}=-2, etc. ✓

3. The negative of a Number is a Number.
Why? We can prove it by induction. -{ | }={ | } so it’s true for the base case of 0. Now assume, as an inductive hypothesis, that it’s true for all of the options of some Number G={ Gℓ1,Gℓ2,… | Gr1,Gr2,… }. For convenience, I’ve ordered the left and right options; by the axiom of choice I can do this in the infinite case too, but it’s not actually necessary. Then -G={ -Gr1,-Gr2,… | -Gℓ1,-Gℓ2,… } passes the first test for being a Number by hypothesis. Suppose it failed the second test. Then there would be -Gℓn≤-Grm for some naturals (or ordinals) n and m. This implies that Grm≤Gℓn, which would have made G not a Number: contradiction. ✓

4. A Number G is greater than all of its left options and less than all of its right options.
Why? We’ll prove that G is greater than all of its left options; the proof that it’s less than all of its right options is exactly analogous. It’s trivially true for { | }=0. Now assume, as an inductive hypothesis, that it’s true for all of the options of some Number G. Call an arbitrary left option of G “Gℓ”. Then to check G>Gℓ, we need to see that Left can always win G-Gℓ. If Left goes first, they can just move to Gℓ-Gℓ=0 and win. If Right moves first, and they move in G to Gr-Gℓ, then since G is a Number, we know that Gr≰Gℓ so that either Gr>Gℓ (so Gr-Gℓ is an L-position) or Gr‖Gℓ (so Gr-Gℓ is an N-position); either way, Left wins. Finally, suppose Right goes first and moves in Gℓ to G+(-Gℓ)r=G-(Gℓℓ) (by this I mean that a right option of -Gℓ is the negative of a left option of Gℓ). Then Left can move to Gℓ-Gℓℓ, which is >0 by inductive hypothesis, and win. ✓

Incidentally, I think this fact 4. is a big part of the justification for the convention that L-positions are positive, rather than R-positions. If you have a Number like {0,1|3,7}, then the left options are to the left on the “Number line” (they’re lower) and the right options are to the right (they’re greater). Unfortunately, this convention will cause problems with other views of the number line: the positive games that Left can win either have to be on the right or the picture of the number line has to be flipped (this is common in CGT).

5. The sum of two Numbers is a Number.
Why? Recall G+H={Gℓ+H,G+Hℓ|Gr+H,G+Hr} from II.2. As usual, we’ll prove fact 5. by induction, it being clear for 0+0=0. Assume, by inductive hypothesis, that all of the options of G+H are Numbers. We must show that G+H can’t have a right option less than or equal to a left option. Suppose Gr+H≤Gℓ+H where Gr and Gℓ are some particular options of G (elements of sets). Then Gr≤Gℓ which is impossible since G is a Number. Suppose Gr+H≤G+Hℓ. By 4., which we just proved, Hℓ<H, so G+Hℓ<G+H. Also by 4., G<Gr, so G+H<Gr+H. We have G+H<Gr+H≤G+Hℓ<G+H, which is a contradiction. The two cases with things of the form G+Hr are analogous. ✓

For those who know what this means, 0 being a Number and facts 3. and 5. make the Numbers a subGROUP of the proper class of all positions. However, they’re not just any subGROUP:

6. Numbers are never confused with other Numbers. (i.e. Two Numbers are always comparable.)
Why? Suppose G and H are Numbers and G‖H, so that G+(-H)‖0. By facts 3. and 5., G+(-H) is a Number too, so it will suffice to show that no Number is an N-position. Suppose, for sake of contradiction, that G is a Number that’s also an N-position. Then for Left to have a winning move, G must have a left option that’s either a P-position or an L-position. For Right to have a winning move, G must have a right option that’s either a P-position or an R-position. Since P/L-positions are ≥ P/R-positions, all four combinations would contradict the assumption that G is a Number. ✓

Note that in the proof of 4. we considered a hypothetical case where a Number might be an N-position, but we didn’t know it was a Number because we didn’t have 5. yet, and we needed 4. to prove 5. and 5. to prove 6. But anyway, Numbers really do play nicely with the order, as we had hoped from the beginning, and they also play nicely with + and -, but right now they’re just some nebulous bunch of positions with nice properties. For example, we know that {0|1}=1/2 and “LRR”={0|1/2,1}=1/4 are Numbers, but we don’t really know (immediately) what the value of the Number {1+1/2|2+1/4} is.

To really understand the Numbers, we need a series of lemmas, beginning with something which Conway calls the “simplicity rule”. We’ll say that a position H “fits (for G)” if H is greater than all of G’s left options and less than all of G’s right options. In this language, fact 4. says “a Number fits for itself”.

7. Simplicity Rule: If G and H are Numbers, and H fits for G and none of H’s options fit for G, then G=H.
Why? To test that G=H, we need to show that the second player always has a winning strategy in G-H. If Left starts and moves to Gℓ-H, then since H fits, this is negative, and Right wins. Similarly, if Right starts and moves to Gr-H, then since H fits, this is positive, and Left wins. Now suppose Left starts and moves to G-(Hr). Since none of H’s options fit, Hr must not sit in between all of G’s left and right options. Since H is a number, fact 4. gives us that Hr>H. Therefore, there exists a right option of G, call it Gr, such that Gr≤Hr (remember fact 6.). Right moves to Gr-(Hr), which is negative or 0, and wins. The case of Right starting and moving to G-(Hℓ) is analogous. ✓

8. It makes sense to recursively define non-integer dyadic fractions by (2p+1)/2^(q+1)={p/2^q | (p+1)/2^q}.
Why? Base case: q=0. To verify that we can call {p|p+1} “(2p+1)/2”, we need to check that {p|p+1}+{p|p+1}=2p+1 for integers p. We’ll assume that p≥0 as the p+1≤0 case is analogous. We must verify that the following is a P-position:
{p|p+1}+{p|p+1}-(2p+1)
={p|p+1}+{p|p+1}-{2p | } by definition/proven fact about 2p+1
={p|p+1}+{p|p+1}+{ | -2p} by definition of –
={p+{p|p+1}-(2p+1) | p+1+{p|p+1}-(2p+1), {p|p+1}+{p|p+1}-2p} by definition of +
=0 since 0 fits (use fact 4. on {p|p+1} to see that 0 fits and fact 7. to see that it must therefore equal 0)
(Notice how no “If Left goes first…” type arguments were necessary.)
The induction step is essentially identical in style to this base case argument, except for the fact that everything has the denominator 2^q attached. ✓
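
For small p, the base case can also be checked by brute force (a sketch, reusing the pair convention and helpers from the earlier snippets):

    def le(g, h):
        return (not any(le(h, x) for x in g[0]) and
                not any(le(y, g) for y in h[1]))
    def eq(g, h): return le(g, h) and le(h, g)
    def neg(g): return (tuple(map(neg, g[1])), tuple(map(neg, g[0])))
    def add(g, h):
        return (tuple(add(x, h) for x in g[0]) + tuple(add(g, y) for y in h[0]),
                tuple(add(x, h) for x in g[1]) + tuple(add(g, y) for y in h[1]))

    def integer(n):   # the position n = {n-1|} for n >= 0
        g = ((), ())
        for _ in range(n):
            g = ((g,), ())
        return g

    for p in range(3):
        g = ((integer(p),), (integer(p + 1),))   # {p|p+1}
        assert eq(add(g, g), integer(2 * p + 1))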

9. If some dyadic fraction fits for a Number G, then G is equivalent to some dyadic fraction (maybe not the same one).
Why? First, suppose that an integer fits for G. Then suppose that n is the integer of lowest absolute value that fits for G (if both positive integers and negative integers fit, then so does 0). Then either n is 0 and n has no options at all or n has only one option which is of lower absolute value, so it doesn’t fit. By fact 7., G=n. Now suppose that no integer fits but a half-integer (see fact 8.) fits. Then since the options of a half-integer are integers, fact 7. says we’re done. Similarly for quarter-integers, etc. ✓

10. Canonical form for short Numbers: A short Number is equivalent to a dyadic fraction.
Why? From fact 9., it suffices to prove that for a short Number G, some dyadic fraction fits. Suppose, for sake of contradiction, that some short Number is not equivalent to a dyadic fraction. Then by non-loopiness, we can go through the options until we get to a “minimal” one: a Number whose options are all equivalent to dyadic fractions, but which is not equivalent to one. By shortness, there are only finitely many left options and finitely many right options, so any dyadic greater than the greatest left option and lower than the lowest right option will fit. ✓
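
Facts 7.-10. amount to an evaluation algorithm for short Numbers: recursively evaluate the options, then take the earliest-birthday dyadic strictly between them. Here’s a sketch (the search below is my own phrasing of that hunt); it settles the example {1+1/2|2+1/4}=2 from a few paragraphs back:

    from fractions import Fraction
    from math import floor, inf

    def simplest(lo, hi):
        # the earliest-born dyadic strictly between lo and hi (either may be infinite)
        if lo < 0 < hi:
            return Fraction(0)
        if hi <= 0:
            return -simplest(-hi, -lo)
        # 0 <= lo < hi: try the smallest integer above lo, else halve the scale
        n = floor(lo) + 1
        return Fraction(n) if n < hi else simplest(2 * lo, 2 * hi) / 2

    def value(g):
        # value of a short Number in the pair convention, via the simplicity rule
        lo = max((value(x) for x in g[0]), default=-inf)
        hi = min((value(y) for y in g[1]), default=inf)
        return simplest(lo, hi)

    ZERO = ((), ())
    ONE = ((ZERO,), ())
    TWO = ((ONE,), ())
    THREE = ((TWO,), ())
    THREE_HALVES = ((ONE,), (TWO,))                    # {1|2} = 1+1/2
    NINE_QUARTERS = ((TWO,), (((TWO,), (THREE,)),))    # {2|{2|3}} = 2+1/4
    print(value(((THREE_HALVES,), (NINE_QUARTERS,))))  # 2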

This was a lot of theory, but it’s important theory because Numbers are very tidy positions, and they’re (in a non-technical sense) sort of orthogonal to the impartial ones. They also still have more nice properties, especially if we don’t require them to be short, in which case they’re generally called (or isomorphic to, depending on who you ask) the “Surreal numbers”, and I’ll talk more about them next time.

II.4: Hackenstrings and 0.999…vs.1

A significant portion of this page was inspired by Dr. A. N. Walker’s Hackenstrings, And The 0.999… ?= 1 Faq, but the page seems to be gone. It’s linked as a reference at this much shorter page, but Dr. Walker’s treatment was very well done and I do not have a copy of it.

Basic definitions
Now that we have an order on some positions (at least the integer ones), I think we can fill in the “number line” a little bit. To make this less abstract, we can look at an actual game instead of merely Conway representations of positions. “Hackenstrings” is a game vaguely similar to Nim in which players hack away at strings. For now, an individual string will be comprised of “L”s and “R”s. On their turn, Left gets to remove an L (and everything to the right of it) and Right gets to remove an R (and everything to the right of it). The integers we’ve already discussed pop up as positions in Hackenstrings: “”=0, “LLL”=3, “RR”=-2, etc. However, there are lots of new positions like “LR”, “RLL”, and even non-short games like “LLLL…”.

What is “LR”?
Let’s consider the game “LR”. Right can remove the R, leaving “L”=1, and Left can remove the L (and the R to the right of it) leaving “”=0. Therefore, “LR”={0|1}. Our order allows us to say more about this position, though. Since Left can win by taking the “L”, “LR”>0. How does {0|1} compare to 1? In {0|1}-1=”LR”+”R”, Right can move to “L”+”R”=1-1=0 (so Left loses), and Left can move to “R”=-1 (which Left still loses). Thus, {0|1}-1=”LR”+”R” is an R-position and 0<{0|1}<1. But is it almost 0 or almost 1 or neither?

We can get an even better understanding of the value of “LR” by comparing “LR”+”LR” with “L”=1. We must play “LR”+”LR”-“L”=”LR”+”LR”+”R”. If Left goes first, they move to “LR”+”R”<0 and Left loses. If Right goes first and takes the lone “R” then “LR”+”LR”>0 remains and Right loses. If Right goes first and takes one of the other Rs, then they leave “LR”+”L”+”R”=”LR” and Right loses. In any case, the person who goes first loses (if the other player plays perfectly), so this is a P-position! “LR”+”LR”-“L”=0, so “LR”+”LR”=”L”=1. In summary, “LR” is a game between 0 and 1 such that “LR”+”LR”=1. These are pretty good reasons to call “LR” “1/2”, and that’s exactly what people do. As an aside, “LR”+* is also between 0 and 1 and (“LR”+*)+(“LR”+*)=1, but there are other reasons to give “LR” the title as opposed to “LR”+*. (Note: At this point, “1/2” is just a name; a priori there’s no concept of multiplication/division of games.)
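
Case analyses like this can be brute-forced. A Python sketch (my own; a position is a sorted tuple of strings so that memoization recognizes rearranged sums):

    from functools import lru_cache

    def moves(pos, player):              # player is "L" or "R"
        for i, s in enumerate(pos):
            for j, c in enumerate(s):
                if c == player:          # hack c and everything to its right
                    yield tuple(sorted(pos[:i] + (s[:j],) + pos[i + 1:]))

    @lru_cache(maxsize=None)
    def wins(pos, player):
        # True if `player`, moving first in this sum of strings, can win
        other = "R" if player == "L" else "L"
        return any(not wins(m, other) for m in moves(pos, player))

    pos = tuple(sorted(("LR", "LR", "R")))   # "LR"+"LR"-"L"
    print(wins(pos, "L"), wins(pos, "R"))    # False False: a P-position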

Other fractions?
“LR”=1/2 and similarly, “RL”=-1/2. But what happens with more complicated strings? Consider “LRR”; it’s clearly positive since Left wins even if Right goes first, and it’s less than 1 since “LRR”+”R” is a win for Right. If you play “LRR”+”LRR”-“LR”=”LRR”+”LRR”+”RL”, then if Right goes first and they leave “LRR”+”LR”+”RL”=”LRR”>0, they lose. If Right goes first and they leave “LRR”+”LRR”>0, they lose. If Left goes first and they leave “LRR”+”RL” then Right can respond with “LR”+”RL”=0 and they lose. If Left goes first and they leave “LRR”+”LRR”+”R” then Right can move to “LRR”+”LR”+”R” and Left’s only moves are to “LR”+”R” (a win for Right) or “LRR”+”R” (also a win for Right). In all cases, the second player has a winning strategy, so “LRR”+”LRR”=”LR”. Thus, we can/should call “LRR” “1/4”. This pattern continues: “LRRR”=1/8, “LRRRR”=1/16, etc.

Let’s look at another family of positions, starting with “LLR”={0,1|2}. Left can take the middle L which the R is sitting on, but the bottom L intuitively seems independent of that top “LR” action. For this reason, I’d guess that “LLR”=”L”+”LR”. To test this, play “LLR”+”R”+”RL”. If Right goes first, they might take the first R, leaving “LL”+”R”+”RL”=”L”+”RL”>0, but they’d lose. If they take the third R, they’d leave “LLR”+”R” and Left could respond with “L”+”R”=0 and Right would still lose. “LLR”+”RL”>”LLR”+”R”, so Left would win if Right started by taking the other R, too. If Left goes first, they can move to “LLR”+”R”+”R”<“LL”+”R”+”R”=0, but they’d lose. They could also move to “L”+”R”+”RL”=”RL”<0, but they’d still lose. And “R”+”RL” is even worse! “LLR”+”R”+”RL” is a P-position, so “LLR”=”L”+”LR”=1+1/2. Similarly, “LLLR”=2+1/2, etc.

Finally, let’s look at one more family before I make a general claim. Consider “LRL”; this is potentially better for Left than “LR”, but worse than “L”, so it’s somewhere in between 1/2 and 1. It actually turns out to be 3/4. We can prove that “LRL”+”LRR”=”L” by playing “LRL”+”LRR”+”R”. It’s easy to see (from previously known facts) that all of Left’s starting moves result in negative games, so this is ≤0. If Right takes the first R, they leave “L”+”LRR”+”R”=”LRR”>0. If Right takes the third R (the second is worse) then Left moves to 0 by leaving “LR”+”LR”+”R”. If Right takes the last R, Left leaves “LRL”>0. There will be a way to avoid these case analyses with a general theorem later. “LRLL” is even better for left, and equals 7/8, etc.

General Hackenstring Values
Since the negative of a basic Hackenstring position switches all of the Ls and Rs, and we know “”=0, it suffices to describe the values of strings starting with L. We will focus only on finite strings for now. A string consisting only of n Ls just has the integer value n. Otherwise, suppose there are n Ls before the first R. Then the value is n-(1/2)+a1(1/4)+a2(1/8)+a3(1/16)… where ai=1 if the ith character after the first R is L, -1 if it’s R, and 0 if the string has no character i spaces after the first R. In this way, it’s relatively easy to convert finite binary representations of numbers (the so-called dyadic fractions) to Hackenstrings: 10.1100101(2)=”LLLRLLRRLR”, etc. The book Lessons in Play brought to my attention an equivalent method due to Berlekamp: the value is (n-1)+the binary number you get by reading “L”s as 1s and “R”s as 0s after the first “LR”, with a 1 appended. This can be seen in: 10.1100101(2)=”LLLRLLRRLR“. I make these claims without proof because they’d be really annoying to prove without some better general theorems we will cover later.
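
The two rules are easy to cross-check by machine (a sketch, my own illustration; it assumes finite strings that start with “L” and contain at least one “R”):

    from fractions import Fraction

    def direct_value(s):
        n = s.index("R")                    # n Ls before the first R
        v = Fraction(n) - Fraction(1, 2)
        for i, c in enumerate(s[n + 1:], start=2):
            v += Fraction(1 if c == "L" else -1, 2 ** i)
        return v

    def berlekamp_value(s):
        n = s.index("R")
        bits = s[n + 1:].replace("L", "1").replace("R", "0") + "1"
        return n - 1 + Fraction(int(bits, 2), 2 ** len(bits))

    for s in ["LR", "LRR", "LRL", "LLR", "LLLRLLRRLR"]:
        assert direct_value(s) == berlekamp_value(s)
    print(direct_value("LLLRLLRRLR"))   # 357/128, i.e. 10.1100101 in binary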

To help you visualize this pattern, here’s a tree diagram (except it uses ↿s instead of Ls and ⇃s instead of Rs): [figure: Hackenstrings Tree]

0.999…=1?
We can use this idea to set up a funny comparison to the classic “.999…=1” argument. Note that “LR”=.1(2), “LRL”=.11(2), “LRLL”=.111(2), “LRLLL”=.1111(2), etc. Now consider the game “LRLLLL…” where Right’s only move is to “L” and Left can move to any finite “LRLLL…L” (or to 0). However, this game (which seems like it might have been .1111…(2)=1) is not equal to “L”. To see this, we have to play “LRLLL…”+”R”. If Left moves first, they either move to “R” and lose, or to something of the form “LRLLL…L”+”R” and Right moves to “L”+”R”=0 for the win. But if Right moves first, they just move to “L”+”R”=0 to win immediately. This is an R-position, so “LRLLL…”<“L”!

People often say things like “If .999… were less than 1, then their difference would be nonzero, but that’s ridiculous.” In this case, we know that “L”-“LRLLL…”>0, but there’s actually more that can be said. I claim that “L”-“LRLLL…”=”LRRRR…”. We have to check that “LRRRR…”+”LRLLL…”+”R” is a P-position. Recall that “LR”=1/2, “LRR”=1/4, etc., and “LRL”=3/4, “LRLL”=7/8, etc. If Left moves to, say, “LRRRR…”+”LRLL”+”R” then Right moves to “LRRR”+”LRLL”+”R”=1/8+7/8-1=0 and Left loses. If Left takes a whole string then Right moves to “L”+”R”=0 and Left loses (or, if Left took the “LRLLL…”, Right hacks “LRRRR…” down to “LRR”, leaving a negative position). If Right goes first, they can’t afford to hack “LRLLL…” down to “L” (leaving “LRRRR…”+”L”+”R”=”LRRRR…”>0) or to take the lone “R” (leaving a sum of two positive strings), and if they move to something like “LRR”+”LRLLL…”+”R” then Left can move to “LRR”+”LRL”+”R”=1/4+3/4-1=0 and Right loses. Therefore, the difference between “LRLLL…” and “L” is the infinitesimal “LRRR…”.

Exercises

  • How do things like * and ±1 compare to fractional values like 1/2 or 1+1/2?
  • The thing that should be 0.111…(2)=1, wasn’t. What about the thing that should be 1/3=0.010101…(2), “LRRLRLRLRLR…”? Does “LRRLRLR…”+”LRRLRLR…”+”LRRLRLR…”=”L”?

I.6a: The Nimber FIELD

This post is going to rely on more background than my average posts. In particular, facts about finite fields and ordinals will come into play. If you don’t know about these things at all, don’t worry, they won’t be necessary for I.7, etc. As a disclaimer, a good portion of this post is coming from Conway’s excellent, if slightly inaccessible, On Numbers and Games.

As discussed before, ⊕ (binary xor) makes ω (the set of nonnegative integers) into an abelian group in which every nonzero element has order 2. Also, by the rules of Nim and the Sprague-Grundy theorem, we know that p⊕q=mex({p’⊕q,p⊕q’:p'<p,q'<q}). To go along with this, there is a nice multiplication operation, similarly defined: p⊗q=mex({(p'⊗q)⊕(p⊗q')⊕(p'⊗q'):p'<p,q'<q}). It’s easy to show (as in, it requires no background other than definitions) that ⊕ and ⊗ make ω into a commutative ring with 1. With a terrible bit of induction, it can be shown that 2^(2^n)⊗2^(2^m) is their regular product if n≠m, and is 3*2^(2^n)/2 if n=m. With this in hand, you can show (in a terribly ugly way) that if you use these operations on the set of numbers below a “Fermat 2-power” 2^(2^n), you actually have a finite field! Thus, ω is the direct limit (think “union”) of the finite fields of size 2^(2^n). ω is not an algebraically closed field, though.
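
Both mex recursions transcribe directly into (slow) code for finite nimbers, where ⊕ is literally ^; a sketch for checking small cases of the claims above:

    from functools import lru_cache

    def mex(s):
        n = 0
        while n in s:
            n += 1
        return n

    @lru_cache(maxsize=None)
    def nim_mul(p, q):
        return mex({nim_mul(a, q) ^ nim_mul(p, b) ^ nim_mul(a, b)
                    for a in range(p) for b in range(q)})

    assert nim_mul(2, 2) == 3                # 3*2^(2^0)/2
    assert nim_mul(4, 4) == 6                # 3*2^(2^1)/2
    assert nim_mul(2, 4) == 8                # distinct Fermat 2-powers multiply normally
    assert nim_mul(nim_mul(2, 2), 2) == 1    # 2⊗2⊗2=1, which comes up below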

Since the ordinals are well-ordered, the mex definitions for ⊕ and ⊗ extend just fine to the class of all ordinals, On. When we give it the operations ⊕ and ⊗, the same proof that ω became a commutative ring works: On becomes a commutative RING with 1 (in fact it becomes an algebraically closed FIELD), denoted On2. A technical note: On is a proper class, not a set, so it can’t really be a ring or a field; the capitalization of “RING” and “FIELD” in the previous sentence is a standard way to represent this distinction.

As mentioned in I.6, ⊕ works essentially the same way in the infinite case as it did in the finite case: like binary xor. ⊗ in the infinite realm will be a little weirder; the pattern for finite numbers doesn’t really continue in any meaningful way. We can still get some preliminary results to help us understand things.

n⊗ω=nω for finite n.
Why?
We’ll prove it by induction on n; it’s trivial for n=0. By definition, n⊗ω=mex({(m⊗ω)⊕(n⊗o)⊕(m⊗o):m<n,o<ω}). By inductive hypothesis, m⊗ω=mω for any m<n, so we have to deal with mex({(mω)⊕(n⊗o)⊕(m⊗o):m<n,o<ω}). Consider a Fermat 2-power above n. Then the set of numbers below it forms a field, so for any choice of m, we can solve the equation (n⊗o)⊕(m⊗o)=x for o. Since m and x are arbitrary, any ordinal less than nω is in the set (and no others), so the mex is just nω, as desired. ✓

ω⊗ω=ω².
Why?
By definition, ω⊗ω=mex({(m⊗ω)⊕(ω⊗n)⊕(m⊗n):m,n<ω}). From the previous fact and the commutative ring structure of On2, this is mex({(m⊕n)ω+(m⊗n):m,n<ω}). To see that this is ω², we must show that m⊕n and m⊗n are independently arbitrary. I.e. we need to be able to solve m⊕n=x and m⊗n=y for m and n. This system reduces to a quadratic in m, and we can solve any desired quadratic in a sufficiently big finite field of characteristic 2. ✓

ω⊗ω²=2.
That’s not a typo. Why?
By definition, ω⊗ω²=mex({(n⊗ω²)⊕(ω⊗(pω+q))⊕(n⊗(pω+q)):n,p,q<ω}). Collecting like terms and using previously shown facts, this reduces to mex({((n⊕p)⊗ω²)⊕((q⊕(n⊗p))ω)⊕(n⊗q):n,p,q<ω}). If the expression inside the set is finite, then there must be no ω² term, so n⊕p=0, so n=p. Similarly, there must be no ω term, so q=n⊗p=n⊗n. Thus, the only finite numbers attainable are those of the form n⊗q where q=n⊗n. 0=0⊗0⊗0, so it’s not the mex, and similarly for 1. The question boils down to “does 2 have a cube root in any of the finite fields of size 2^(2^k)?” Since 2⊗2⊗2=1, the question becomes “does 1 have any primitive 9th roots in one of those fields?” Since the multiplicative groups are cyclic, and 2^(2^k) is never 1 modulo 9, the answer is no, and the mex is just 2, as desired. ✓

Very do-able exercise (but requiring some background): What are the other cube roots of 2?

At the beginning of this post I alluded to some inelegant inductions to prove some of the basic facts, and haven’t provided any justification at all for claims like “On2 is a FIELD”. Almost everything you need to know about On2 actually follows from a beautiful collection of theorems I’m quoting straight from Conway (after dealing with notation differences) which aren’t terribly hard to prove (once you know the statements), but which I won’t spend time proving here. Be warned, Δ and Γ are ordinals playing double-duty as both an ordinal and the set of all lower ordinals; also, I’ll use α^⊗3 to denote α⊗α⊗α, etc.

  • If Δ is not a group under ⊕, then Δ=α⊕β, where (α,β) is the lexicographically earliest pair such that α⊕β is not in Δ.
  • If Δ is a group under ⊕, then (Δα)⊕β=Δα+β for all α and all β∈Δ.
  • If Δ is a group under ⊕ but not a ring with ⊗, then Δ=α⊗β, where (α,β) is the lexicographically earliest pair such that α⊗β is not in Δ.
  • If Δ is a ring, and Γ≤Δ is an additive subgroup all of whose non-zero elements have ⊗-multiplicative inverses in Δ, then Δ⊗γ=Δγ for all γ∈Γ.
  • If Δ is a ring, but not a field, then Δ is the multiplicative inverse for the smallest element α of Δ without an inverse in Δ.
  • If Δ and α are as in the previous line, and Γ is the largest ordinal ≤α which is a group under ⊕, then (Γ is a field and) (Δ^⊗n⊗γn)⊕(Δ^⊗(n-1)⊗γn-1)⊕…⊕(Δ⊗γ1)⊕δ=Δ(Γ^(n-1)γn+…+γ1)+δ for all n∈ω, all δ∈Δ, and all γ1,…,γn∈Γ. (This one is actually a little subtle; Conway’s proof contains an error that was patched by Lenstra.)
  • If Δ is a field but not algebraically closed, then Δ is a root of the lexicographically earliest polynomial having no root in Δ. [In the lexicographic order, we examine high degree coefficients first.]
  • If Δ is as in the previous line, and N is the degree of that “earliest” polynomial, then (Δ^⊗n⊗δn)⊕…⊕δ0=Δ^nδn+…+δ0 for all n<N and all δ0,…,δn∈Δ.
  • If Δ is an algebraically closed field, then Δ is transcendental over Δ and the previous equation holds for all n∈ω.

In Conway’s words, “Each ordinal Δ extends the set of all previous ordinals in the simplest possible way.” With this in hand it’s not too bad to prove things like “distinct powers of 2 add normally”, or ω^⊗3=2, or anything else you’d want to know, really. There are more things that can be said about On2, but I’ll leave you with a couple more facts: (ω³)^⊗3=ω and ω^(ω^ω) is algebraically closed.

Wait a second, if we have all of these fields, there’s gotta be a way to divide. In fact, finding inverses does not require trial and error. The inverse of p, call it q, is given recursively by q=1/p=mex({0,((p'⊕p)⊗q'⊕1)⊗(1/p'):0<p'<p,q'<q}).

Since "q'<q" is part of the expression, it might not seem like this calculation can be done. Let’s calculate the inverse of 2. I’ll build up the list of things we’re taking the mex of recursively:
We know 0 is in there, so the mex is at least 1.
Since the mex is at least 1, we may take q’=0, and since p=2, p’ is always forced to be 1.
Thus, we also have in the set: ((1⊕2)⊗0⊕1)⊗(1/1)=1.
So the mex is at least 2, and we may take q’=1, yielding ((1⊕2)⊗1⊕1)⊗(1/1)=3⊕1=2.
So the mex is at least 3, and we may take q’=2, yielding
((1⊕2)⊗2⊕1)⊗(1/1)=3⊗2⊕1=(1⊗2)⊕(2⊗2)⊕1=2⊕3⊕1=1⊕1=0.
But we already knew 0 was in the set, so we can’t get anything new anymore, and 1/2=3.
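
Here’s that bootstrap written out as a Python sketch (the stopping rule, of quitting once no new candidates appear below the current mex, is my own phrasing of it; mex and nim_mul repeat the earlier snippet):

    from functools import lru_cache

    def mex(s):
        n = 0
        while n in s:
            n += 1
        return n

    @lru_cache(maxsize=None)
    def nim_mul(p, q):
        return mex({nim_mul(a, q) ^ nim_mul(p, b) ^ nim_mul(a, b)
                    for a in range(p) for b in range(q)})

    @lru_cache(maxsize=None)
    def nim_inv(p):
        # 1/p for p > 0: grow the candidate set until the mex stabilizes
        cand = {0}
        while True:
            q = mex(cand)
            new = {nim_mul(nim_mul(pp ^ p, qq) ^ 1, nim_inv(pp))
                   for pp in range(1, p) for qq in range(q)}
            if new <= cand:
                return q
            cand |= new

    assert nim_inv(2) == 3
    assert all(nim_mul(p, nim_inv(p)) == 1 for p in range(1, 16))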

With the previous calculation we’ve already multiplied 2⊗3 to get 1, so this calculation really worked. It takes a bit of thought to see why this weird recursive definition will always work. Incidentally, there’s a similar formula that gives the square root: r=√s= mex({√s’,(p⊗q⊕s)⊗(1/(p⊕q)):s'<s, and p,q<r distinct}). Both of these formulas (and perhaps the formula for ⊗ as well) seem to come out of nowhere without the context of the Surreal Numbers, which will be discussed in a much later post.