II.5b: Structure of the Surreal Numbers

In addition to ONAG, some of the claims in this post come from Foundations of Analysis over Surreal Number Fields.

Ordinals
An “ordinal” is a Number equivalent to one with no right options. There are a few basic facts you should know about ordinal numbers.

  • If G is an ordinal, G={ordinals H<G |} (in particular, with this setup the collection of left options really is a set, as long as we pick just one representative of each equivalence class for this standard form).
  • Any non-empty class (for example, a nonempty set) of ordinals has a least ordinal.
  • For any set of ordinals S, {S|} is an ordinal bigger than any member of S.
  • The nonnegative integers are ordinals, and ω={0,1,2,…|} is the next biggest ordinal after those.

If you happen to have a sense of ordinal arithmetic already, be warned: Surreal Number arithmetic works differently, in general.

Birthdays
With the ordinals in hand, we can give names to the steps we take to build all the Numbers (in fact, all the Conway positions). Before we have any positions, there’s nothing we can assign to be left or right options, so we say “on day 0, 0={|} was born”. Then on day 1, 1={0|} and -1={|0} (and *={0|0}) were born, etc. In general: the Numbers (or Conway positions) made by day α (α being an ordinal) are defined to be all the Numbers you get where left options and right options were “made by day β” for β<α. With this setup, each Number (or Conway position) has a “birthday”: the first ordinal α such that it was “made by day α”.
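
If it helps to see the recursion pinned down, here’s a tiny sketch (my own encoding, not anything from the sources) that computes the birthday of a short position represented as a pair of frozensets of options. It computes the birthday of the literal form, so {*|*} comes out as day 2 (see the convention in the next paragraph).

```python
def birthday(g):
    """Birthday of a short position g = (left_options, right_options):
    the first day by which all of its options have already been made.
    (A sketch for short positions only; transfinite birthdays won't fit.)"""
    gl, gr = g
    return max((birthday(x) + 1 for x in gl | gr), default=0)

ZERO = (frozenset(), frozenset())                    # { | }, born on day 0
STAR = (frozenset({ZERO}), frozenset({ZERO}))        # {0|0}, born on day 1
STAR_STAR = (frozenset({STAR}), frozenset({STAR}))   # {*|*}, born on day 2 as a form
assert birthday(ZERO) == 0 and birthday(STAR) == 1 and birthday(STAR_STAR) == 2
```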

By convention, we usually only worry about the equivalence class of a position: {*|*}=0, so we’d usually say that {*|*} was born on day 0, not day 2. Also, as a consequence of the simplicity rule from II.5, a Number is equal to the unique one of earliest birthday that fits between the sets of left and right options.

Infinite days and real Numbers
Dyadic fractions were born on finite days. What Numbers were born on day ω? Well, the infinite ω={0,1,2,…|}=”LLLL…” was born (and its negative). So was the infinitesimal 1/ω={0|1,1/2,1/4,…}=”LRRR…” (and its negative) and dyadic translations of it like 1+1/ω={1|1+1,1+1/2,1+1/4,…}.

The other things born on day ω are exactly the real numbers that aren’t dyadic! Like 1/3={0,1/4,1/4+1/16,…|1/2,1/4+1/8,1/4+1/16+1/32,…} and π={3,3+1/8,3+1/8+1/64,…|4,3+1/2,3+1/4,3+1/8+1/16,…}. If you’re familiar with Dedekind cuts, you can probably figure out how to make an explicit definition of the reals this way (you can do it without the dyadic restriction, but then you won’t get all of the reals as early as possible).
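
For concreteness, here’s a little sketch of that truncation idea (my own code, for positive non-dyadic rationals only): bisecting by dyadic fractions reproduces exactly the option lists written above.

```python
from fractions import Fraction

def truncation_options(x, steps):
    """First few left/right options for a positive non-dyadic rational x,
    generated by bisecting with dyadics (a sketch; {lefts|rights} only
    pins down x in the limit, via the simplicity rule)."""
    lo, hi = Fraction(int(x)), Fraction(int(x)) + 1
    lefts, rights = [lo], []
    for _ in range(steps):
        mid = (lo + hi) / 2
        if mid < x:
            lo = mid
            lefts.append(mid)
        else:
            hi = mid
            rights.append(mid)
    return lefts, rights

lefts, rights = truncation_options(Fraction(1, 3), 6)
print([str(q) for q in lefts], [str(q) for q in rights])
# ['0', '1/4', '5/16', '21/64'] ['1/2', '3/8', '11/32'] — matching 1/3's options above
```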

On day ω+1 you get things like ω-1={0,1,2,…|ω}, 1/3+1/ω, and 1/(2ω)={0|1/ω}. If you want ω/2, you actually have to wait until day ω*2=ω+ω; ω/2={0,1,2,…|ω,ω-1,ω-2,…}.

However, there’s an implicit definition of the reals that’s rather elegant. A Number G is said to be “real” if 1. there are some integers m and n such that m<G<n (so its value isn’t infinite) and 2. G={G-q|G+q} where q ranges over the positive rationals (or positive dyadics, if you prefer). ω isn’t a real number because it’s not less than any integer. 1/ω={0|1,1/2,1/4,…} isn’t a real number because 1/ω-q is negative for every positive rational q, so {G-q|G+q}=0 when G=1/ω, even though 1/ω≠0.

Infinite Hackenstrings
The game of Hackenstrings actually covers all of the Surreal Numbers, though it takes a little care to see why. Given a Number G with birthday α, we can form its day-β approximation Gβ (for ordinal β≤α) by giving Gβ all Numbers born before day β that are less than G as left options, and similarly for the right options. Note that G0=0 for all Numbers G.

With these approximations, we can build a (possibly infinite; it’s really a function from the left set of some ordinal in standard form to {“L”,”R”}) Hackenstring equal to G in the following way. For each ordinal β less than the birthday of G, give the βth spot an “L” if G>Gβ, and an “R” otherwise. (We can’t ever have equality by the definition of “birthday” and day-β approximation.) Every one of these Hackenstrings is in a distinct equivalence class: to compare two distinct Hackenstrings, read them until the first spot they differ, and then “L”>blank>”R”.
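
For finite strings, that comparison rule is a few lines of code (a sketch; the encoding as Python strings and the function name are mine):

```python
def compare_hackenstrings(g, h):
    """Compare two finite Hackenstrings: scan to the first spot where they
    differ, with 'L' > end-of-string > 'R'. Returns 1, 0, or -1."""
    rank = {'L': 2, '': 1, 'R': 0}
    for i in range(max(len(g), len(h))):
        a = g[i] if i < len(g) else ''
        b = h[i] if i < len(h) else ''
        if a != b:
            return 1 if rank[a] > rank[b] else -1
    return 0

assert compare_hackenstrings("LR", "LRR") == 1   # 1/2 > 1/4
assert compare_hackenstrings("LL", "LRL") == 1   # 2 > 3/4
```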

Scattered Closing Remarks

There are lots of different equivalent (up to FIELD isomorphism, I guess) ways to axiomatize the Surreals. There are more technical versions of statements like “they’re the unique FIELD with all the ordered fields in it”, or “they’re the unique FIELD where ordinal birthdays make sense”. You can adjoin i and get the Surcomplex Numbers (which are algebraically closed). You can define things like infinite series and something like a limit and lots of other crazy things that could fill at least one whole book.

Everything in the FIELD On2 was equivalent to a Nim pile: basically a Hackenstring except all the letters were “E”s, which can be taken by either player. Everything in the FIELD of Surreals was equivalent to a Hackenstring with no “E”s, just “L”s and “R”s. You could mix them up and get weird games like “EL”, but there’s no nice arithmetic for those. And I must emphasize: not every Conway position is equivalent to a Hackenstring.


II.5a: Arithmetic of the Surreal Numbers

Knowledge about the ordinals isn’t required for this post or the next one, but it would definitely add to your understanding. I’ll be talking about the Knuth-named “Surreal Numbers”, which for our purposes are just the Numbers (no “short” restriction) from the previous post. Conway denoted the proper class (bunch that’s too big to be a set) of surreal numbers “No”, but nowadays “\mathbb{S}” (for “Surreals”) is pretty popular, in analogy to ℝ for the reals. They’re the Surreals because they include the reals but go further. In fact, in some sense, the Surreals contain everything you might want to think of as a number (including the ordinals). Since the previous post contained the facts about Surreals that are relevant to CGT, and it would take pages and pages to prove all the basic assertions about the Surreals, let alone the non-basic ones, this post will omit most of the proofs (which can generally be found in On Numbers and Games, the primary source for this post and the next).

Where unspecified, Gℓ will range over the left options of G, etc. Remember that Numbers can be added (inductively) via the formula G+H={Gℓ+H,G+Hℓ|Gr+H,G+Hr} and they can be subtracted by adding the negative: -G={ -Gr | -Gℓ }. It turns out we can do more with Numbers, though.

Numbers can be multiplied.
With all of these Numbers that act like dyadic fractions and integers, it would be nice if we had a procedure that let us multiply 2={1|}={{0|}|}={{{|}|}|} by 1/2={0|1}={{|}|{{|}|}} to get 1={{|}|}. To get a formula for G*H, we just need a bunch of things less than it for the left options, and a bunch of things more than it for the right options.

A common first guess (perhaps even of Conway’s?) is to define it recursively by G*H={Gℓ*H,G*Hℓ|Gr*H,G*Hr}. However, a moment’s thought will show that this doesn’t actually make any sense for negative numbers, and another moment’s thought will show that this is literally the same definition as that for G+H! With a lot of thought, Conway came up with a janky definition that required G and H to be in some simplified form. With a lot more thought (months), Conway found the “right” definition.

The key idea is that we have four inequalities (coming from property 4. from the previous post and intuition about multiplication) (G-Gℓ)*(H-Hℓ)>0, (Gr-G)*(H-Hℓ)>0, etc. If we expand these out and manipulate, we get things like G*H>Gℓ*H+G*Hℓ-Gℓ*Hℓ, G*H<Gr*H+G*Hℓ-Gr*Hℓ, etc. This leads to the recursive definition G*H={Gℓ*H+G*Hℓ-Gℓ*Hℓ, Gr*H+G*Hr-Gr*Hr | Gℓ*H+G*Hr-Gℓ*Hr, Gr*H+G*Hℓ-Gr*Hℓ}. All the properties that you expect of multiplication hold up to equivalence (the product of positive numbers is positive, 1 times anything is itself, multiplication distributes over addition, etc.).
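
To make the recursion concrete, here’s a minimal sketch of all three operations on literal game trees (my own Python encoding, not anything from ONAG: a position is a pair of frozensets of positions, and the results are positions equal in value to what you want, not canonical forms):

```python
from functools import lru_cache

ZERO = (frozenset(), frozenset())

def game(lefts=(), rights=()):
    """Build a position from iterables of left and right options."""
    return (frozenset(lefts), frozenset(rights))

@lru_cache(maxsize=None)
def neg(g):
    gl, gr = g
    return (frozenset(neg(x) for x in gr), frozenset(neg(x) for x in gl))

@lru_cache(maxsize=None)
def add(g, h):
    (gl, gr), (hl, hr) = g, h
    return (frozenset({add(x, h) for x in gl} | {add(g, y) for y in hl}),
            frozenset({add(x, h) for x in gr} | {add(g, y) for y in hr}))

@lru_cache(maxsize=None)
def mul(g, h):
    (gl, gr), (hl, hr) = g, h
    def term(a, b):
        # one option of G*H: a*H + G*b - a*b
        return add(add(mul(a, h), mul(g, b)), neg(mul(a, b)))
    return (frozenset({term(a, b) for a in gl for b in hl} |
                      {term(a, b) for a in gr for b in hr}),
            frozenset({term(a, b) for a in gl for b in hr} |
                      {term(a, b) for a in gr for b in hl}))

one = game([ZERO])            # 1 = {0|}
two = game([one])             # 2 = {1|}
half = game([ZERO], [one])    # 1/2 = {0|1}
product = mul(two, half)      # a position equal in value to 1 (not literally {0|})
```

The recursion terminates because every term uses at least one strictly simpler argument; the caching just keeps the blow-up manageable.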

Numbers can be divided.
This is a little surprising, since if we imposed the finiteness condition, there’d be no Number corresponding to 1/3 (even though we have 1/2={0|1} and 1/4={0|1/2}). To find H/G, we really just need to multiply H by 1/G. To find 1/G, we really just need to be able to find 1/G for positive G (we can take the negative afterwards if necessary). Even then, it’s tricky to write down a formula (and it took a rather long while to find one).

If G is positive, then we don’t need any of the negative left options (one of the left options has to be nonnegative to make G positive), so we’ll throw them out. Also, we’ll throw 0 into the left options of G (G was positive, so this gives an equal position by the simplicity rule from the previous post). Then we can define H=1/G recursively by the following: H={0,(1+(Gr-G)*Hℓ)/Gr, (1+(Gℓ-G)*Hr)/Gℓ | (1+(Gℓ-G)*Hℓ)/Gℓ, (1+(Gr-G)*Hr)/Gr}. Now this looks problematic because it has options of H in the formula, but you can build the left and right sets with induction. Start with 0 as an option of H and use all of G’s options. Then go through all of the options of H you just got, etc. (Technical note: this transfinite induction doesn’t need more steps than the cardinality of the set of G’s options raised to the power ℵ0.)
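
To see the double recursion in action, here’s a worked example of my own (not from ONAG). Take G=2: after throwing in 0 we have G={0,1|}, whose only positive left option is 1 and which has no right options, so the surviving terms give H={0, (1+(1-2)*Hr)/1 | (1+(1-2)*Hℓ)/1}={0, 1-Hr | 1-Hℓ}. Seeding with Hℓ=0 produces the right option 1-0=1; feeding Hr=1 back in produces the left option 1-1=0, which we already have. The induction closes immediately, and H={0|1}=1/2, as it should be.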

If you don’t want to use a crazy double-inductive definition, Kruskal solved the H part of the recursion: you can have the options (both left and right) of 1/G be (1-Π(1-G/Gi))/G, where Gi ranges over the positive options of G and the products are always finite (the empty product of no terms is 1). The dividing by G in the formula isn’t actually a problem, since there’s a G in the expansion of the numerator that cancels it. That expression is a left option of 1/G exactly when an even number of the Gi are left options of G.
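
Checking Kruskal’s version on the same example (again my own check): for G=2={0,1|}, the only positive option is Gi=1, so the candidate options are (1-(1-2/1)^k)/2=(1-(-1)^k)/2 using k factors. Even k (including the empty product) gives 0, which is a left option by the parity rule; odd k gives 1, a right option. So we recover 1/2={0|1} again.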

In technical language, the Surreal Numbers are an ordered FIELD (they’re too big to be a field), and they contain an isomorphic copy of every ordered field.

You can take square roots (and beyond)!
Clive Bach found a doubly-recursive formula for the square root of a nonnegative Number, vaguely similar to the one for 1/G, and there’s also a similar singly-recursive version; they’re both kind of complicated. To take other roots and do other things, there’s lots of complex machinery (you can get a different definition of square root using infinite series stuff, for instance).

Similarity to arithmetic for On2.
I talked about the “ordinal Nimbers” On2 in a previous post. As far as I know, these multiplication, division, and square root formulas were found for the Surreals first, and then they just tried the same formulas on the impartial games and they all magically worked. (There’s a lot of simplification that comes from the fact that G=-G for impartial G.) I admit I don’t know of a good reason why the same definitions should work for such a different class of positions; they break down horribly for general positions.

II.5: The Theory of Numbers

Almost all of the results in this post will apply equally well to the non-short case (you can add “transfinite” before “induction” and “recursion” where appropriate). If you’re not comfortable with that, you can just assume everything is finite and you won’t miss much. Also, when I say “positions” I really mean “positions” not “any equivalent position”; it will actually matter for some of these arguments.

In the positions we’ve looked at so far, some play nicely with the order (-5, 1/4, etc.) and others don’t (*3, ±2, etc.). It would be nice if we could find a nice big class of positions that are all comparable with each other (this means that A<B or A=B or A>B; remember that {|}={-1|1} even though they’re different positions). Also, since we’d like to use some sort of induction, it should be “closed under options”: the options (the positions that can be moved to) of something in this class should all be part of this class as well.

You might guess that all we have to do is take the positions that aren’t N-positions, but {0|*} is an L-position with an N-position as an option. A second guess might be that all we have to do is take positions that aren’t N-positions and whose options (and options’ options, etc.) aren’t N-positions, but that still wouldn’t work: {4|2} is one of those, but {4|2}-2=±1+1 is an N-position, so {4|2}‖2! From this sort of example, there’s one restriction that stands out (and it turns out to be the only restriction, in some sense):

1. If a position G has A as a right option and B as a left option and A≤B, then G‖A.
Why? We have to show that G-A is an N-position. If Right goes first, they can move to A-A=0 and win. If Left goes first, they can move to B-A≥0 and they either win because they handed Right a P-position or because they handed Right an L-position. ✓

With this in mind, we make the following recursive definition (since we’re dealing with non-loopy things, recursion is fine):
A position is said to be a “Number” if all of its options are Numbers and none of its right options are less than or equal to any of its left options.
Basically, if something doesn’t have any non-Numbers as options, and isn’t a position where theorem 1. would apply, it’s a Number. There are a few ways of motivating this definition, but they come down to the fact that a whole lot of things you’d want from numbers are satisfied by Numbers.

2. Every integer (as defined earlier) is a Number.
Why? Well, { | }=0 is a Number since it has no options at all, so there’s nothing that can go wrong. Note that here I’m using 0 to refer to that particular position, not any equivalent position like {*|*}. { 0 | }=1 is a Number since 0 is a Number and there are no right options to be less than or equal to any of the left options. By induction, the same can be said of {1|}=2, {2|}=3, etc. Similarly for {|0}=-1, {|-1}=-2, etc. ✓

3. The negative of a Number is a Number.
Why? We can prove it by induction. -{ | }={ | } so it’s true for the base case of 0. Now assume, as an inductive hypothesis, that it’s true for all of the options of some Number G={ Gℓ1,Gℓ2,… | Gr1,Gr2,… }. For convenience, I’ve ordered the left and right options; by the axiom of choice I can do this in the infinite case too, but it’s not actually necessary. Then -G={ -Gr1,-Gr2,… | -Gℓ1,-Gℓ2,… } passes the first test for being a Number by hypothesis. Suppose it failed the second test. Then there would be -Gℓn≤-Grm for some naturals (or ordinals) n and m. This implies that Grm≤Gℓn, which would have made G not a Number: contradiction. ✓

4. A Number G is greater than all of its left options and less than all of its right options.
Why? We’ll prove that G is greater than all of its left options; the proof that it’s less than all of its right options is exactly analogous. It’s trivially true for { | }=0. Now assume, as an inductive hypothesis, that it’s true for all of the options of some Number G. Call an arbitrary left option of G “Gℓ”. Then to check G>Gℓ, we need to see that Left can always win G-Gℓ. If Left goes first, they can just move to Gℓ-Gℓ=0 and win. If Right moves first, and they move in G to Gr-Gℓ, then since G is a Number, we know that Gr≰Gℓ so that either Gr>Gℓ (so Gr-Gℓ is an L-position) or Gr‖Gℓ (so Gr-Gℓ is an N-position); either way, Left wins. Finally, suppose Right goes first and moves in Gℓ to G+(-Gℓ)r=G-(Gℓℓ) (by this I mean that a right option of -Gℓ is the negative of a left option of Gℓ). Then Left can move to Gℓ-Gℓℓ, which is >0 by inductive hypothesis, and win. ✓

Incidentally, I think this fact 4. is a big part of the justification for the convention that L-positions are positive, rather than R-positions. If you have a Number like {0,1|3,7}, then the left options are to the left on the “Number line” (they’re lower) and the right options are to the right (they’re greater). Unfortunately, this convention clashes with other pictures of the number line: the positive games (the ones Left wins) either have to sit on the right side of the line, or else the picture has to be flipped (which is common in CGT).

5. The sum of two Numbers is a Number.
Why? Recall G+H={Gℓ+H,G+Hℓ|Gr+H,G+Hr} from II.2. As usual, we’ll prove fact 5. by induction, it being clear for 0+0=0. Assume, by inductive hypothesis, that all of the options of G+H are Numbers. We must show that G+H can’t have a right option less than or equal to a left option. Suppose Gr+H≤Gℓ+H where Gr and Gℓ are some particular options of G (elements of sets). Then Gr≤Gℓ which is impossible since G is a Number. Suppose Gr+H≤G+Hℓ. By 4., which we just proved, Hℓ<H, so G+Hℓ<G+H. Also by 4., G<Gr, so G+H<Gr+H. We have G+H<Gr+H≤G+Hℓ<G+H, which is a contradiction. The two cases with things of the form G+Hr are analogous. ✓

For those who know what this means, 0 being a Number and facts 3. and 5. make the Numbers a subGROUP of the proper class of all positions. However, they’re not just any subGROUP:

6. Numbers are never confused with other Numbers. (i.e. Two Numbers are always comparable.)
Why? Suppose G and H are Numbers and G‖H, so that G+(-H)‖0. By facts 3. and 5., G+(-H) is a Number too, so it will suffice to show that no Number is an N-position. Suppose, for sake of contradiction, that G is a Number that’s also an N-position. Then for Left to have a winning move, G must have a left option that’s either a P-position or an L-position. For Right to have a winning move, G must have a right option that’s either a P-position or an R-position. Since P/L-positions are ≥ P/R-positions, all four combinations would contradict the assumption that G is a Number. ✓

Note that in the proof of 4. we considered a hypothetical case where a Number might be an N-position, but we didn’t know it was a Number because we didn’t have 5. yet, and we needed 4. to prove 5. and 5. to prove 6. But anyway, Numbers really do play nicely with the order, as we had hoped from the beginning, and they also play nicely with + and -, but right now they’re just some nebulous bunch of positions with nice properties. For example, we know that {0|1}=1/2 and “LRR”={0|1/2,1}=1/4 are Numbers, but we don’t really know (immediately) what the value of the Number {1+1/2|2+1/4} is.

To really understand the Numbers, we need a series of lemmas, beginning with something which Conway calls the “simplicity rule”. We’ll say that a position H “fits (for G)” if H is greater than all of G’s left options and less than all of G’s right options. In this language, fact 4. says “a Number fits for itself”.

7. Simplicity Rule: If G and H are Numbers, and H fits for G and none of H’s options fit for G, then G=H.
Why? To test that G=H, we need to show that the second player always has a winning strategy in G-H. If Left starts and moves to Gℓ-H, then since H fits, this is negative, and Right wins. Similarly, if Right starts and moves to Gr-H, then since H fits, this is positive, and Left wins. Now suppose Left starts and moves to G-(Hr). Since none of H’s options fit, Hr must fail to sit between G’s left and right options. Since H is a Number, fact 4. gives us Hr>H, and since H fits, Hr>H>Gℓ for every left option Gℓ; so the only way Hr can fail to fit is on the right side: there exists a right option of G, call it Gr, such that Gr≤Hr (remember fact 6.). Right moves to Gr-(Hr), which is negative or 0, and wins. The case of Right starting and moving to G-(Hℓ) is analogous. ✓

8. It makes sense to recursively define non-integer dyadic fractions by (2p+1)/2^(q+1)={p/2^q | (p+1)/2^q}.
Why? Base case: q=0. To verify that we can call {p|p+1} “(2p+1)/2”, we need to check that {p|p+1}+{p|p+1}=2p+1 for integers p. We’ll assume that p≥0 as the p+1≤0 case is analogous. We must verify that the following is a P-position:
{p|p+1}+{p|p+1}-(2p+1)
={p|p+1}+{p|p+1}-{2p | } by definition/proven fact about 2p+1
={p|p+1}+{p|p+1}+{ | -2p} by definition of –
={p+{p|p+1}-(2p+1) | p+1+{p|p+1}-(2p+1), {p|p+1}+{p|p+1}-2p} by definition of +
=0 since 0 fits (use fact 4. on {p|p+1} to see that 0 fits and fact 7. to see that it must therefore equal 0)
(Notice how no “If Left goes first…” type arguments were necessary.)
The induction step is essentially identical in style to this base case argument, except that everything has the denominator 2^q attached. ✓

9. If some dyadic fraction fits for a Number G, then G is equivalent to some dyadic fraction (maybe not the same one).
Why? First, suppose that an integer fits for G. Then suppose that n is the integer of lowest absolute value that fits for G (if both positive integers and negative integers fit, then so does 0). Then either n is 0 and n has no options at all or n has only one option which is of lower absolute value, so it doesn’t fit. By fact 7., G=n. Now suppose that no integer fits but a half-integer (see fact 8.) fits. Then since the options of a half-integer are integers, fact 7. says we’re done. Similarly for quarter-integers, etc. ✓

10. Canonical form for short Numbers: A short Number is equivalent to a dyadic fraction.
Why? From fact 9., it suffices to show that for a short Number G, some dyadic fraction fits. Suppose, for sake of contradiction, that some short Number is not equivalent to a dyadic fraction. Then by non-loopiness, we can go through the options until we get to a “minimal” one: a Number whose options are all equivalent to dyadic fractions, but which is not equivalent to one itself. By shortness, there are only finitely many left options and finitely many right options, so some dyadic greater than the greatest left option and lower than the lowest right option fits (the dyadics are dense and unbounded). But then fact 9. makes that Number equivalent to a dyadic after all: contradiction. ✓
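
Facts 7.–10. amount to an evaluation algorithm for short Numbers: recursively evaluate the options, then take the simplest dyadic that fits. Here’s a sketch of that last step (my own helper, not code from any source; Python’s Fraction stands in for the dyadics, and None encodes an empty set of options):

```python
from fractions import Fraction
from math import ceil, floor

def simplest_between(lo, hi):
    """Simplest (earliest-birthday) dyadic strictly between lo and hi,
    where lo is the greatest left option and hi the least right option
    (either may be None if that side is empty)."""
    if lo is None and hi is None:
        return Fraction(0)
    if hi is None:                         # { lo | }: least integer that fits
        return Fraction(max(0, floor(lo) + 1))
    if lo is None:                         # { | hi }
        return Fraction(min(0, ceil(hi) - 1))
    assert lo < hi
    if lo < 0 < hi:                        # integer of least absolute value
        return Fraction(0)
    if lo >= 0 and floor(lo) + 1 < hi:
        return Fraction(floor(lo) + 1)
    if hi <= 0 and ceil(hi) - 1 > lo:
        return Fraction(ceil(hi) - 1)
    # no integer fits: bisect by dyadics until one lands strictly inside
    a, b = Fraction(floor(lo)), Fraction(floor(lo)) + 1
    while True:
        mid = (a + b) / 2
        if mid <= lo:
            a = mid
        elif mid >= hi:
            b = mid
        else:
            return mid
```

For the position {1+1/2|2+1/4} from earlier, simplest_between(Fraction(3,2), Fraction(9,4)) returns 2: that mysterious Number was just 2 all along.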

This was a lot of theory, but it’s important theory because Numbers are very tidy positions, and they’re (in a non-technical sense) sort of orthogonal to the impartial ones. They also still have more nice properties, especially if we don’t require them to be short, in which case they’re generally called (or isomorphic to, depending on who you ask) the “Surreal numbers”, and I’ll talk more about them next time.

II.4: Hackenstrings and 0.999…vs.1

A significant portion of this page was inspired by Dr. A. N. Walker’s Hackenstrings, And The 0.999… ?= 1 Faq, but the page seems to be gone. It’s linked as a reference at this much shorter page, but Dr. Walker’s treatment was very well done and I do not have a copy of it.

Basic definitions
Now that we have an order on some positions (at least the integer ones), I think we can fill in the “number line” a little bit. To make this less abstract, we can look at an actual game instead of merely Conway representations of positions. “Hackenstrings” is a game vaguely similar to Nim in which players hack away at strings. For now, an individual string will consist of “L”s and “R”s. On their turn, Left gets to remove an L (and everything to the right of it) and Right gets to remove an R (and everything to the right of it). The integers we’ve already discussed pop up as positions in Hackenstrings: “”=0, “LLL”=3, “RR”=-2, etc. However, there are lots of new positions like “LR”, “RLL”, and even non-short games like “LLLL…”.

What is “LR”?
Let’s consider the game “LR”. Right can remove the R, leaving “L”=1, and Left can remove the L (and the R to the right of it) leaving “”=0. Therefore, “LR”={0|1}. Our order allows us to say more about this position, though. Since Left can win by taking the “L”, “LR”>0. How does {0|1} compare to 1? In {0|1}-1=”LR”+”R”, Right can move to “L”+”R”=1-1=0 (so Left loses), and Left can move to “R”=-1 (which Left still loses). Thus, {0|1}-1=”LR”+”R” is an R-position and 0<{0|1}<1. But is it almost 0 or almost 1 or neither?

We can get an even better understanding of the value of “LR” by comparing “LR”+”LR” with “L”=1. We must play “LR”+”LR”-“L”=”LR”+”LR”+”R”. If Left goes first, they move to “LR”+”R”<0 and Left loses. If Right goes first and takes the lone “R” then “LR”+”LR”>0 remains and Right loses. If Right goes first and takes another R, then they leave “LR”+”L”+”R”=”LR” and Right loses. In any case, the person who goes first loses (if the other player plays perfectly), so this is a P-position! “LR”+”LR”-“L”=0, so “LR”+”LR”=”L”=1. In summary, “LR” is a game between 0 and 1 such that “LR”+”LR”=1. These are pretty good reasons to call “LR” “1/2”, and that’s exactly what people do. As an aside, “LR”+* is also between 0 and 1 and (“LR”+*)+(“LR”+*)=1, but there are other reasons to give “LR” the title as opposed to “LR”+*. (Note: At this point, “1/2” is just a name; a priori there’s no concept of multiplication/division of games.)
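
Instead of case-bashing every sum by hand, you can make a computer exhaust the game tree. Here’s a brute-force sketch (my own code and encoding: a position is a tuple of strings, and a negative is fed in as the flipped string); it confirms this P-position and can check the other sums in this post the same way:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins_moving(position, player):
    """True if `player` ('L' or 'R') to move first has a winning strategy.
    A move truncates some string at any occurrence of the player's letter."""
    other = 'R' if player == 'L' else 'L'
    for i, s in enumerate(position):
        for j, c in enumerate(s):
            if c == player:
                if not wins_moving(position[:i] + (s[:j],) + position[i+1:], other):
                    return True
    return False   # no winning move (maybe no move at all): player loses

def outcome(*strings):
    l, r = wins_moving(strings, 'L'), wins_moving(strings, 'R')
    return {(True, True): 'N', (False, False): 'P',
            (True, False): 'L', (False, True): 'R'}[(l, r)]

print(outcome("LR", "LR", "R"))    # 'P', confirming "LR"+"LR" = "L"
```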

Other fractions?
“LR”=1/2 and similarly, “RL”=-1/2. But what happens with more complicated strings? Consider “LRR”; it’s clearly positive since Left wins even if Right goes first, and it’s less than 1 since “LRR”+”R” is a win for Right. If you play “LRR”+”LRR”-“LR”=”LRR”+”LRR”+”RL”, then if Right goes first and they leave “LRR”+”LR”+”RL”=”LRR”>0, they lose. If Right goes first and they leave “LRR”+”LRR”>0, they lose. (Right’s remaining first move, taking the first R of an “LRR”, leaves “L”+”LRR”+”RL”, and a similar check shows they lose there too.) If Left goes first and they leave “LRR”+”RL” then Right can respond with “LR”+”RL”=0 and they lose. If Left goes first and they leave “LRR”+”LRR”+”R” then Right can move to “LRR”+”LR”+”R” and Left’s only moves are to “LR”+”R” (a win for Right) or “LRR”+”R” (also a win for Right). In all cases, the second player has a winning strategy, so “LRR”+”LRR”=”LR”. Thus, we can/should call “LRR” “1/4”. This pattern continues: “LRRR”=1/8, “LRRRR”=1/16, etc.

Let’s look at another family of positions, starting with “LLR”={0,1|2}. Left can take the middle L which the R is sitting on, but the bottom L intuitively seems independent of that top “LR” action. For this reason, I’d guess that “LLR”=”L”+”LR”. To test this, play “LLR”+”R”+”RL”. If Right goes first, they might take the first R, leaving “LL”+”R”+”RL”=”L”+”RL”>0, but they’d lose. If they take the third R, they’d leave “LLR”+”R” and Left could respond with “L”+”R”=0 and Right would still lose. “LLR”+”RL”>”LLR”+”R”, so Left would win if Right started by taking the other R, too. If Left goes first, they can move to “LLR”+”R”+”R”<“LL”+”R”+”R”=0, but they’d lose. They could also move to “L”+”R”+”RL”=”RL”<0, but they’d still lose. And “R”+”RL” is even worse! “LLR”+”R”+”RL” is a P-position, so “LLR”=”L”+”LR”=1+1/2. Similarly, “LLLR”=2+1/2, etc.

Finally, let’s look at one more family before I make a general claim. Consider “LRL”; this is potentially better for Left than “LR”, but worse than “L”, so it’s somewhere in between 1/2 and 1. It actually turns out to be 3/4. We can prove that “LRL”+”LRR”=”L” by playing “LRL”+”LRR”+”R”. It’s easy to see (from previously known facts) that all of Left’s starting moves result in negative games, so this is ≤0. If Right takes the first R, they leave “L”+”LRR”+”R”=”LRR”>0. If Right takes the third R (the second is worse) then Left moves to 0 by leaving “LR”+”LR”+”R”. If Right takes the last R, Left leaves “LRL”>0. There will be a way to avoid these case analyses with a general theorem later. “LRLL” is even better for Left, and equals 7/8, etc.

General Hackenstring Values
Since the negative of a basic Hackenstring position switches all of the Ls and Rs, and we know “”=0, it suffices to describe the values of strings starting with L. We will focus only on finite strings for now. A string consisting only of n Ls just has the integer value n. Otherwise, suppose there are n Ls before the first R. Then the value is n-(1/2)+a1(1/4)+a2(1/8)+a3(1/16)… where ai=1 if the ith character after the first R is L, -1 if it’s R, and 0 if the string has no character i spaces after the first R. In this way, it’s relatively easy to convert finite binary representations of numbers (the so-called dyadic fractions) to Hackenstrings: 10.1100101(2)=”LLLRLLRRLR”, etc. The book Lessons in Play brought to my attention an equivalent method due to Berlekamp: the value is (n-1)+the binary number you get by reading “L”s as 1s and “R”s as 0s after the first “LR”, with a 1 appended. This can be seen in 10.1100101(2)=”LLLRLLRRLR”: the integer part is n-1=2=10(2), and after the first “LR” (the third and fourth letters) the remaining “LLRRLR” reads as 110010, which becomes the fractional part .1100101 once the final 1 is appended. I make these claims without proof because they’d be really annoying to prove without some better general theorems we will cover later.
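
Here’s the digit rule as a quick sketch (my code; Fraction keeps the dyadics exact, and the rule above is applied verbatim):

```python
from fractions import Fraction

def hackenstring_value(s):
    """Value of a finite Hackenstring over {'L','R'} via the rule above:
    n leading L's count n, the first R costs 1/2, and each later letter
    adds or subtracts the next power of 1/2 starting from 1/4."""
    if s.startswith('R'):   # negative: flip all the letters and negate
        return -hackenstring_value(''.join('L' if c == 'R' else 'R' for c in s))
    n = len(s) - len(s.lstrip('L'))       # number of leading L's
    if n == len(s):
        return Fraction(n)                # "LL...L" = n (covers "" = 0)
    value, power = Fraction(n) - Fraction(1, 2), Fraction(1, 4)
    for c in s[n + 1:]:                   # digits after the first R
        value += power if c == 'L' else -power
        power /= 2
    return value

assert hackenstring_value("LR") == Fraction(1, 2)
assert hackenstring_value("LRR") == Fraction(1, 4)
assert hackenstring_value("LLR") == Fraction(3, 2)
assert hackenstring_value("LLLRLLRRLR") == Fraction(357, 128)  # 10.1100101 in binary
```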

To help you visualize this pattern, here’s a tree diagram (except here they’re using ↿s instead of Ls and ⇃s instead of Rs): [image: Hackenstrings tree diagram]

0.999…=1?
We can use this idea to set up a funny comparison to the classic “.999…=1” argument. Note that “LR”=.1(2), “LRL”=.11(2), “LRLL”=.111(2), “LRLLL”=.1111(2), etc. Now consider the game “LRLLLL…” where Right’s only move is to “L” and Left can move to any finite “LRLLL…L” (or to 0). However, this game (which seems like it might have been .1111…(2)=1) is not equal to “L”. To see this, we have to play “LRLLL…”+”R”. If Left moves first, they either move to “R” and lose, or to something of the form “LRLLL…L”+”R” and Right moves to “L”+”R”=0 for the win. But if Right moves first, they just move to “L”+”R”=0 to win immediately. This is an R-position, so “LRLLL…”<“L”!

People often say things like “If .999… were less than 1, then their difference would be nonzero, but that’s ridiculous.” In this case, we know that “L”-“LRLLL…”>0, but there’s actually more that can be said. I claim that “L”-“LRLLL…”=”LRRRR…”. We have to check that “LRRRR…”+”LRLLL…”+”R” is a P-position. Recall that “LR”=1/2, “LRR”=1/4, etc., and “LRL”=3/4, “LRLL”=7/8, etc. If Left moves to, say, “LRRRR…”+”LRLL”+”R” then Right moves to “LRRR”+”LRLL”+”R”=1/8+7/8-1=0 and Left loses. If Left takes a whole string then Right moves to “L”+”R”=0 and Left loses. If Right goes first, they can’t afford to leave “L”+”LRLLL…”+”R”=”LRLLL…”>0, but if they move to something like “LRR”+”LRLLL…”+”R” then Left can move to “LRR”+”LRL”+”R”=1/4+3/4-1=0 and Right loses. (Right’s moves in the other two strings leave a positive position outright.) Therefore, the difference between “LRLLL…” and “L” is the infinitesimal “LRRR…”.

Exercises

  • How do things like * and ±1 compare to fractional values like 1/2 or 1+1/2?
  • The thing that should be 0.111…(2)=1, wasn’t. What about the thing that should be 1/3=0.010101…(2), “LRRLRLRLRLR…”? Does “LRRLRLR…”+”LRRLRLR…”+”LRRLRLR…”=”L”?

II.3: An Order on Conway Games

Defining an Order
Now that we have verified the table of possibilities for adding games (at the end of II.1), and understand P-positions as “0”, we can note a couple things: An L-position plus an L-position is another L-position. The negative of an L-position is an R-position (so not an L-position), and vice versa. This makes the L-positions kind of like positive numbers: When you add two positives you get a positive, and the negative of a positive is never positive. For this reason, we say that a Conway position is “positive” if it’s an L-position, and “negative” if it’s an R-position. Why not the other way around? There are good reasons for both choices, but this is the one that stuck (a later post will have a better reason for this one).

With the concept of positive in hand, we can go on to define > and ≥, etc. G>H when G-H is an L-position. G≥H when G>H or G=H (equivalently, when Left has a winning strategy when Right goes first in G-H). Under these definitions, all of the regular inequalities with integers like “-2>-3” hold for the Conway position versions of them. Also, most of the basic facts about inequalities hold: G>H implies -G<-H, etc. Basically, G>H means something like “G is better for Left than H no matter who goes first”.

The order isn’t ideal
However, be warned: there are two things that “go wrong” compared to common inequalities you’re used to. Firstly, not every Conway position is positive (L-position), negative (R-position) or 0 (P-positions), so not every pair of positions is comparable. For instance, since * is an N-position, “*>0”, “*=0”, and “*<0” are all false! We write “*‖0” and say “star is incomparable with zero” or “star is fuzzy (with respect to zero)” or “star is confused with zero”. Slightly confusingly for some programmers, the notation “*<>0” is sometimes used for this “confused” relationship. The other main thing that goes wrong is that there are games like * and {1+*|-1} which aren’t nonnegative to begin with, but become nonnegative (or even positive, in the latter case) when you add them to themselves: *+*=0, {1+*|-1}+{1+*|-1}>0. In technical language, these Conway positions form a “perforated partially ordered abelian GROUP“.

More Confusion
To become a little more comfortable with this “confused” idea, let’s look more carefully at some familiar positions. *‖0, but how does * compare with 1? Well, in *-1, if Left goes first they move to -1, and then Right moves to 0 and wins (eventually, if Left happens to have moves in the particular representation of 0, like {-1|1}=0). If Right goes first then they move to -1 and win (eventually). Therefore *-1 is an R-position so *<1. Since -*=*, we have -1<* for free. So somehow, * occupies a rough position between -1 and 1, but it's confused with 0.

With ±1={1|-1}, there’s even more confusion. If Left goes first in (±1)-1, they move to 1-1=0, and win. If Right goes first, they move to -1-1=-2, and win. Therefore, ±1-1‖0, so ±1‖1 (similarly, ±1‖-1). However, let’s see how ±1 compares to 2: If Left goes first in ±1-2, their only move (up to equivalence) is to 1-2=-1, and they lose. If Right goes first, they move to -1-2=-3 and win. Either way Right wins, so ±1-2<0 and ±1<2 (and likewise -2<±1). You might visualize this as ±1’s location on a number line being unclear: like a cloud over the interval [-1,1], but definitely between -2 and 2. Analogously, ±n‖m for -n≤m≤n but ±n<m for m>n and ±n>m for m<-n.
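
These verdicts are mechanical enough to compute. Here’s a brute-force sketch mirroring the definitions in this post (my own encoding again: a position is a pair of frozensets of positions):

```python
from functools import lru_cache

ZERO = (frozenset(), frozenset())
STAR = (frozenset({ZERO}), frozenset({ZERO}))        # * = {0|0}
ONE = (frozenset({ZERO}), frozenset())               # 1 = {0|}
NEG_ONE = (frozenset(), frozenset({ZERO}))           # -1 = {|0}
TWO = (frozenset({ONE}), frozenset())                # 2 = {1|}
PM_ONE = (frozenset({ONE}), frozenset({NEG_ONE}))    # ±1 = {1|-1}

@lru_cache(maxsize=None)
def wins(g, player):
    """True if `player` ('L' or 'R') moving first in g has a winning strategy."""
    options = g[0] if player == 'L' else g[1]
    other = 'R' if player == 'L' else 'L'
    return any(not wins(opt, other) for opt in options)

def neg(g):
    return (frozenset(map(neg, g[1])), frozenset(map(neg, g[0])))

def add(g, h):
    return (frozenset({add(x, h) for x in g[0]} | {add(g, y) for y in h[0]}),
            frozenset({add(x, h) for x in g[1]} | {add(g, y) for y in h[1]}))

def compare(g, h):
    """'>', '<', '=', or '‖', read off from the outcome type of g-h."""
    d = add(g, neg(h))
    left_first, right_first = wins(d, 'L'), wins(d, 'R')
    return {(True, False): '>', (False, True): '<',
            (False, False): '=', (True, True): '‖'}[(left_first, right_first)]

print(compare(STAR, ZERO), compare(STAR, ONE), compare(PM_ONE, ONE), compare(PM_ONE, TWO))
# prints: ‖ < ‖ <
```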

II.2: Basics of Conway Games

Conway Notation
Working with Conway games is complicated, so we need some notation to make them easier to work with. To really understand a given position G, all we need to do is understand what positions Left can move to and what positions Right can move to; these are called the “left options of G” (resp. “right options”). Now traditionally, if we had an ordered pair of sets, it would be written like ({a,b,c},{x,y}). However, that’s a lot of brackets and the commas mean two different things, which is confusing. Instead, we’ll write things like G={a,b,c|x,y} for the position with left options a, b, and c, and right options x and y. Note that by definition of the negative of a game, -G={-x,-y|-a,-b,-c}.

In general, we may use symbols for sets of options, e.g. a general game G={Gℓ|Gr} whose set of left (resp. right) options is Gℓ (resp. Gr). This makes general arguments and recursive definitions easier to write. For instance, the negative of a game G={Gℓ|Gr} is defined recursively by -G={-Gr|-Gℓ} (where -Gr means the set of the negatives of the positions in Gr). In G+H, Left can either move in G or move in H, and similarly for Right. Thus, G+H={Gℓ+H,G+Hℓ|Gr+H,G+Hr}. Note that in those expressions H and G stand for single positions, not sets. The Left-moves-first and Right-moves-first Simple versions of G={Gℓ|Gr} can be given recursively by LeftFirst(G)={RightFirst(Gℓ)|RightFirst(Gℓ)} and RightFirst(G)={LeftFirst(Gr)|LeftFirst(Gr)}, etc. (both players get the same set of options because, once whose turn it is has been built in, the game is effectively a Simple one).

When using this notation, it would be annoying if we always had to keep track of the differences between equivalent positions. Luckily, we can generalize the lemma for the Sprague-Grundy theorem at the beginning of I.5:

1. If Gℓi=Hℓi for all i and Grj=Hrj for all j, then G:={Gℓ1,Gℓ2,…|Gr1,Gr2,…}=H:={Hℓ1,Hℓ2,…|Hr1,Hr2,…}.
Why? Let’s play G-H=G+(-H)={Gℓ1,Gℓ2,…|Gr1,Gr2,…}+{-Hr1,-Hr2,…|-Hℓ1,-Hℓ2,…} with you going first. If you’re Left and you move to Gℓi-H or you’re Right and you move to G-Hℓi, I’ll move to Gℓi-Hℓi which is a P-position since Gℓi=Hℓi. Similarly for the other two cases. Since I can always move to a P-position, G-H is a P-position, and G=H. ✓

Now that we have a way of writing things down and are sure that for most purposes we only have to worry about positions up to equivalence, let’s look at some simple Conway positions.

Integers
The simplest position of all is {|}, where neither Left nor Right have any legal moves (the P-position). This game was denoted [0] before, but we’ll just call it “0”; note that -0={|}=0. With this in hand, we can build some new positions: {0|} is the position where Left has one available move (which ends the game), and Right can’t move at all. {0|} is traditionally denoted “1”, and it’s the simplest L-position. Now, -1={|-0}={|0} is the same thing except Right has the move, not Left. 1+1={0|}+{0|} is a game where Right has no moves, and Left’s only moves turn one component into 0, which turns the whole game into 1. Thus, 1+1={1|}; this is called “2”. Similarly, “3” denotes 2+1={1|}+{0|}={{1|},{0|}+{0|}|}={2,1+1|}={2|}, and so on. In general, if n is a nonnegative integer, then n+1={n|} is an L-position. Similarly, if n is a nonnegative integer, then -n-1={|-n} is an R-position.

Some N-positions
Another position we can build from 0 is {0|0}. This is a game in which either player has a game-ending move, and no other moves. We already had a name for this before: [1]. However, to make CGT things readable, I’ll introduce Conway’s notation for this position: “*” (read “star”). {0,*|0,*} is a position where both players can move to *, or move to 0. We called this [2] before, but now we’ll call it “*2” (“star two”). Analogously, *3={0,*,*2|0,*,*2}=[3], etc. All of these stars are N-positions.

Another bunch of N-positions are written with a ±, and are special cases of a type of position called a “switch”. ±1:={1|-1} is a position in which if Left gets to move first, they secure a future move for themselves, and similarly for Right. Analogously we have ±n:={n|-n} for any positive integer n.

Equivalent positions?
There’s a whole menagerie of other positions I haven’t mentioned, but I want to point out that some of the positions you might build are equivalent to others we’ve discussed. For example, {-1|1} (not to be confused with ±1={1|-1}) is a game in which Left’s only move gives Right a move, and vice versa. In detail: If Left goes first, they move to -1={|0} and then Right moves to 0 and Left loses. Similarly, Right loses if they go first. Thus, {-1|1} is a P-position, and hence {-1|1}=0. In general, {-n|m}=0 for positive integers n and m. Another bunch of positions you might worry about are ones in which both Left and Right can move to the same bunch of stars (or zero), e.g. {0,*,*2,*5|0,*,*2,*5}. In such a case, the Sprague-Grundy theorem tells us that the position is the star whose index is the mex of the indices: {0,*,*2,*5|0,*,*2,*5}=*3, etc. We’ll find some more general techniques for simplifying Conway positions later.
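
For reference, the mex computation is tiny; here’s a sketch (encoding stars by their indices, with 0 standing for *0=0, is my shorthand):

```python
def mex(indices):
    """Minimal excludant: the least nonnegative integer not in the set."""
    n = 0
    while n in indices:
        n += 1
    return n

# {0,*,*2,*5 | 0,*,*2,*5}: the options are *0(=0), *1, *2, *5, so the game is *3.
assert mex({0, 1, 2, 5}) == 3
```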

Solutions to Exercise from II.1

L+L=L
In either component, Left has a winning strategy whether they go first or second. All they have to do is always respond (appropriately) immediately in the same component that Right plays in (if Right were to get two consecutive moves in the same component, the strategy goes out the window).
R+R=R is exactly analogous.

N+L=NL
Left can win in the sum if they play first by playing first in N, and then responding to each move of Right’s in the same component they move in. Thus, N+L is either N or L. Both are possible: *+1 is an L-position and (±1)+1 is an N-position.
N+R=NR is exactly analogous.

L+R can be any type.
2+(-1) is L, 1+(-2) is R, 1+(-1) is P, (1+*)+(-1) is N (note: 1+* is L).

N+N can be any type.
*+* is P, *+*2 is N (symmetric answer that takes a little thought to verify: both {1+*|-1} and {1+*|-1}+{1+*|-1} are N), (±1+1)+(±1) is L since ±1=-(±1) (symmetric answer: {2|-1} is N but {2|-1}+{2|-1} is L), similarly for R.

Next time: Fuzziness and the meaning of “G≤H”.

II.1: Introduction to Partizan “Conway Games”

In I.1, we talked about Simple games, but the field of Combinatorial Game Theory didn’t really come into its own until Conway’s work on a more general class of games. Since he pioneered the study of these games, I’ll call these “Conway games”. As with Simple games, a Conway game:

  1. is two-player.
  2. is “normal play”, meaning when someone has no legal moves, they lose. (If a player with no legal moves were to win instead, it would be “misère play”.)
  3. has no ties; the only way for the game to end is for someone to lose/win.
  4. is “non-loopy”, meaning the game definitely ends; it can’t go on forever.

However, instead of being impartial, they’ll be “partizan”, so that in a given position, the moves available (right now) to one player might not be the same as the moves available to the other player. As with Simple games, we may occasionally (especially later on) impose the “short” condition (meaning there are finitely many different positions reachable), as it won’t affect much of the general theory as long as we keep the other conditions (2-player, normal-play, no ties, non-loopy). To begin with, we’ll try to reproduce some of the basic results we had for impartial/Simple games; maybe we can even get to something like the Sprague-Grundy theorem (not gonna happen).

The first thing we need to check is the existence of a winning strategy for someone. In a partizan game, the players are distinguished in the moves they can make: we’ll call them Left and Right. If you take a Conway game and add a switch that says “Left moves now” on one side and “Right moves now” on the other and stipulate a starting position for it, then the game effectively becomes Simple. We know that Simple games have a strategy for the “Next” player or the “Previous” player, so for a Conway game position there are four possibilities which need names:

  1. Both Left-goes-first and Right-goes-first are N-positions. We’ll call this an N-position since the next player to move wins no matter if they’re Left or Right.
  2. Both are P-positions. We’ll call this a P-position.
  3. Left-goes-first is an N-position and Right-goes-first is a P-position. We’ll call this an L-position, since Left has a winning strategy no matter who goes first.
  4. The remaining case is an R-position.

In chess, a position where White (call them Left) is strong even if Black (Right) gets the next move may be an L-position. The initial board setup may be an N-position (if the person who goes first wins even if it were Black). If you’ve heard of trébuchet, those are P-positions.

Now that we have our types of positions, we can use it to define a sort of equivalence. If G and H are two Conway game positions, we’ll define G+H to be the “(disjunctive) sum” position: you can move in either component if you have a legal move there (this is actually the same definition as the ⊕ combination from before). We’ll say that G1 and G2 are equivalent if G1+H is always the same type of position (N or P or L or R) as G2+H, for any Conway game position H; we write G1=G2 if they’re equivalent.

If adding a physical switch turns a Conway game into two Simple games, why do we study Conway games at all? Suppose a Conway game splits into a sum (similar to isolated regions in Go): G=H+J. Then if Left moves in H, Right in J, and Left in H again, Left would have made two consecutive moves in H, and that possibility would be lost by only considering the two corresponding simple game versions of H.

It may be instructive to compare/contrast the following results with those about Simple games from I.4.

1. If G1=G2 then G1+H=G2+H for any H.
The proof is identical to the Simple game case: For each H, we can say that for any H’, (G1+H)+H’=G1+(H+H’) has the same winner as G2+(H+H’)=(G2+H)+H’. ✓

2. All P-positions are equivalent!
This is a little complicated. Suppose G is a P-position. If H is a position where Left wins going second (i.e. either a P-position or an L-position), then if Left goes second in G+H, Left has a response in each component for any moves that come up, so Left will win. Hence “H is P or L implies G+H is P or L.” If H is a position where Left wins going first (i.e. either an N-position or an L-position), then if Left goes first in G+H, they can make their first move in H, and then respond to every subsequent move in the same component their opponent moved in. Hence “H is N or L implies G+H is N or L.” Similarly, we have those same implications where L is replaced with R. These four implications show that a P-position like G does not change the winner of H (e.g. If H is an L-position, then both “N or L” and “P or L” are true of H, and hence are true of G+H, which forces G+H to be an L-position too.). Since any P-position doesn’t change the winner of a game, two P-positions have the same result (no change) when added to a game, so they must be equivalent. ✓

The next thing we’d want to do is copy the G⊕G=[0] result. The problem is that if G is, say, a Checkers/Draughts position where Right’s pieces are blocked but Left could still move, then there’s no way for Right to play some sort of “copy moves in the other component” strategy in G+G. We need to fix this by introducing some symmetry: The “negative” of a Conway position G, denoted “-G” is just like G, except in -G, Left can move to the negatives of the positions Right could move to in G, and vice versa. (This definition is recursive, but since Conway games are non-loopy, that’s not a real problem.) To simplify notation, “G-H” will denote “G+(-H)”.

3. G-G=[0]=0 for any G.
Why? Since the roles are reversed in -G, no matter who goes first in whichever component, the other player can always copy the move in the other component. Therefore, G-G is a P-position. Since all P-positions are equivalent, we can just write G-G=[0]. To simplify matters/make it look more like regular arithmetic, we’ll just write G-G=0. ✓

4. If G-H is a P-position, then G=H (and vice versa).
Why? By 2, G-H=0. By 1, G-H+H=0+H=H. By 3, G-H+H=G+(-H)+H=G+H-H=G+0=G. Putting the previous two equations together, we have G=H. For the “vice versa” part, if G=H, then G+(-H)=H+(-H) by 1, so G-H=0 by 3, so G-H is a P-position, by 2. ✓

By splitting ⊕ into the operations + and – (for a Simple game those operations are identical), we were able to recover a nice test for equivalence of Conway positions. In the Simple game case there were few enough types of positions that all the possibilities for sums came up naturally in the discussion. Here, there are many cases, so I’ll use a table (“NX” means “N-position or X-position”, “?” means any of the four types of positions are possible.):

+. P L R N
..---------
P| P L R N
L| . L ? NL
R| . . R NR
N| . . . ?

Our argument that all P-positions are equivalent was essentially a proof of the correctness of the first line of this table.
Exercise: Prove the correctness of the rest of the table. (Solutions in II.2.)

Disclaimer: There are a ton of small facts/things to check that I’ll gloss/skip over. For example, one that’s not too hard to check is “-(A+B)=(-A)+(-B)”; if there’s any detail like this you’d like to see checked, I would be happy to do so, but I skip these things for a bit of increased readability (while still proving a lot of the important stuff, some of which is glossed over in some major sources).