**Introduction to Toads and Frogs**

A fairly rich example of a Conway game is Guy’s “Toads and Frogs”. It is played on one or more strips of boxes, each of which is either empty, contains a toad, or contains a frog. A position might look something like the seven-box strip “TT□FTF□”.

On their turn, Left may move a toad one space to the right (if it has an empty space to move to), or hop it over exactly one frog (if it has an empty space to hop to). Similarly, Right moves/hops with frogs to the left. As with all two-player normal play games, you lose if it’s your turn to move and you cannot. Because pictures of toads and frogs take up a lot of space, it’s traditional to use text: for example, the position above is represented by the string “TT□FTF□” (occasionally “.” is used instead of “□” when □ is hard to type). If you’d like to get a feel for the game, you can play various examples of the special case “TT…T□FF…F” at this page. Be warned: they call toads “frogs” and vice versa, in contrast to essentially all literature on the game.
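If you’d like to experiment, the movement rules are easy to code. Here’s a minimal Python sketch (the helper names are my own; I use “.” for the empty box, as mentioned above):

```python
# Toads and Frogs positions as strings over 'T', 'F', '.' ('.' = empty box).
SWAP = str.maketrans("TF", "FT")

def left_moves(p):
    """Left's options: a toad slides one box right, or hops one frog into an empty box."""
    out = []
    for i, c in enumerate(p):
        if c != "T":
            continue
        if i + 1 < len(p) and p[i + 1] == ".":                        # slide
            out.append(p[:i] + ".T" + p[i + 2:])
        elif i + 2 < len(p) and p[i + 1] == "F" and p[i + 2] == ".":  # hop
            out.append(p[:i] + ".FT" + p[i + 3:])
    return out

def right_moves(p):
    """Right's options, by mirroring the strip and swapping toads with frogs."""
    flip = lambda s: s.translate(SWAP)[::-1]
    return [flip(m) for m in left_moves(flip(p))]
```

For the position above, `left_moves("TT.FTF.")` returns `['T.TFTF.', 'TT.F.FT']` and `right_moves("TT.FTF.")` returns `['TTF.TF.']`.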

By analyzing small Toads and Frogs positions, lots of familiar values come up. You could make a huge table of positions with mostly uninteresting values (and people have), so let’s consider some carefully-chosen examples together that will lead us somewhere interesting:

In TFTF□, Right has no legal moves (frogs can’t go leftwards), and Left’s only move is to TF□FT. But from there, Right would respond by moving to TFF□T, from which neither player has a legal move. Thus, TFTF□=TFF□T=0.

In TF□TF, if it were Right’s turn, they’d move to TFFT□ and then Left would move to TFF□T=0. If it were Left’s turn, they’d move to □FTTF and then Right would move to F□TTF which has no legal moves. Thus, TF□TF=F□TTF=0.

Now consider TFT□F. Left would move to TF□TF=0 and Right would move to TFTF□=0, so TFT□F=*! Apparently, Toads and Frogs, which seemed so partizan, has a position equivalent to a Nim pile of one piece.
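Hand analyses like these three can also be checked by brute force. Here’s a hedged Python sketch (helper names mine) that computes who can win a single strip going first, straight from the movement rules; the results are consistent with the values we just computed:

```python
from functools import lru_cache

SWAP = str.maketrans("TF", "FT")

def left_moves(p):
    """Toads slide one box right, or hop one frog into an empty box."""
    out = []
    for i, c in enumerate(p):
        if c != "T":
            continue
        if i + 1 < len(p) and p[i + 1] == ".":
            out.append(p[:i] + ".T" + p[i + 2:])
        elif i + 2 < len(p) and p[i + 1] == "F" and p[i + 2] == ".":
            out.append(p[:i] + ".FT" + p[i + 3:])
    return out

def right_moves(p):
    flip = lambda s: s.translate(SWAP)[::-1]
    return [flip(m) for m in left_moves(flip(p))]

@lru_cache(maxsize=None)
def left_wins_first(p):
    """Can Left win moving first? (Normal play: you lose when you can't move.)"""
    return any(not right_wins_first(q) for q in left_moves(p))

@lru_cache(maxsize=None)
def right_wins_first(p):
    return any(not left_wins_first(q) for q in right_moves(p))

# TFTF□ and TF□TF are second-player wins (consistent with value 0)...
assert not left_wins_first("TFTF.") and not right_wins_first("TFTF.")
assert not left_wins_first("TF.TF") and not right_wins_first("TF.TF")
# ...while TFT□F is a first-player win (consistent with value *).
assert left_wins_first("TFT.F") and right_wins_first("TFT.F")
```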

**A curious position**

Next consider T□TFF. Left would move to □TTFF, which has no legal moves (so it’s 0), and Right would move to TFT□F=*. In other words, T□TFF={0|*}. This is a position that we haven’t discussed in any detail before, so let’s study it carefully.

Firstly, since 0 and * don’t have many options (when written in canonical form), it’s easy to check that {0|*} has no dominated or reversible options. To make things easier, let’s define u={0|*} for “unknown”. From II.6.1, we know 0⊳ıu⊳ı*. This doesn’t tell us very much, because that’s something that’s also true of every *n for n>1 (e.g. 0‖(*2)‖*). Since this isn’t too complicated a position, it should be easy to find out who wins. If Left gets to go first, they simply move to 0 and win. If Right gets to go first, they move to *={0|0} and then Left moves to 0 and wins. Therefore, u is an *L*-position, i.e. u>0. At this point, we know that 0<u⊳ı*, but we can’t have 0<u<* since 0‖*. So it must be that u‖*; unlike positive Numbers, u is a positive position that’s still not bigger than *.

For any positive Number x, we have u‖*<x, so by II.6.3, we have u⊳ıx. Let’s figure out *which* positive Numbers u is less than. We can start by comparing u to 1={0|}. In u-1={0|*}+{|0}, Right could start by moving to *-1<0 and win. If Left starts, they’re forced to move to 0-1<0 and Right wins anyway. Thus, 0<u<1. In general, if Right gets to go first in u-x for a Number x>0, then they’ll win by moving to *-x<0, so we only really have to consider what happens if Left goes first. By Number Avoidance, we only have to consider the moves in u. If x is a positive Number, then Left’s move is to -x and they lose, so u<x for *all* Numbers x! This should be somewhat surprising if you’ve thought about the Surreals in any detail: u is a positive position that’s less than 1/4, and less than 1/ω, and less than 1/[the ordinal corresponding to the size of the set of all functions from ℝ to ℝ], and… In other words, it’s so small that in some sense *anything* that makes sense as a positive number is bigger than it (even though it’s exemplified by the really accessible T□TFF). Since it’s just barely positive, u is denoted ↑, read “up”. For symmetry, -↑=↓, read “down”.

There’s even more we can ask about ↑, because there are a lot of comparisons we don’t know. For instance, how does ↑ compare to *2? We must see who wins ↑-*2={0|*}+{0,*|0,*}. If Right goes first, they can move to ↑>0 or ↑+*‖0 or *+*2‖0, so they’ve lost. If Left goes first, they can move to ↑ and win. Thus, ↑>*2 (similarly for *3, etc.), even though ↑‖*. Another thing we could ask is “how does ↑+↑ compare to *?” In ↑+↑-*, Left could move to ↑+↑ and win. If Right goes first, they either move to ↑+↑>0 or to ↑+*+*=↑>0, and lose. Thus, ↑+↑>*. The convention is to write ⇑ (read “double-up”) for ↑+↑, but that notation doesn’t work well past quadruple-up, so we also write 2.↑=⇑, 3.↑=⇑+↑, etc. Incidentally, n.↑<x for any integer n and positive Number x by an inductive argument (for once, it’s traditional induction).
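If you want to double-check comparisons like these without drawing game trees by hand, they mechanize nicely. Below is a rough Python sketch (the representation and helper names are my own): a position is a pair of tuples of options, and `leq` implements the standard recursion “G≤H unless some left option of G is ≥H or some right option of H is ≤G”.

```python
from functools import lru_cache

# A position is a pair (left options, right options), each a tuple of positions.
ZERO  = ((), ())                       # 0 = {|}
STAR  = ((ZERO,), (ZERO,))             # * = {0|0}
UP    = ((ZERO,), (STAR,))             # ↑ = {0|*}
ONE   = ((ZERO,), ())                  # 1 = {0|}
STAR2 = ((ZERO, STAR), (ZERO, STAR))   # *2 = {0,*|0,*}

def add(g, h):
    return (tuple(add(x, h) for x in g[0]) + tuple(add(g, x) for x in h[0]),
            tuple(add(x, h) for x in g[1]) + tuple(add(g, x) for x in h[1]))

@lru_cache(maxsize=None)
def leq(g, h):
    """G <= H unless some left option of G is >= H or some right option of H is <= G."""
    return (all(not leq(h, gl) for gl in g[0]) and
            all(not leq(hr, g) for hr in h[1]))

def lt(g, h): return leq(g, h) and not leq(h, g)
def confused(g, h): return not leq(g, h) and not leq(h, g)

assert lt(ZERO, UP)            # up is positive...
assert confused(UP, STAR)      # ...but confused with *
assert lt(UP, ONE)             # and less than the Number 1
assert lt(STAR2, UP)           # up exceeds *2
assert lt(STAR, add(UP, UP))   # and double-up exceeds *
```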

**The Conway “Line”**

We now know a number of things about some Conway positions, and they’re not all Numbers so perhaps we shouldn’t speak of the Number line. However, we can still imagine filling in gaps in our knowledge. For instance, suppose 0<y<↑ for some Conway position y (it can’t be a Number). Then y<↑‖* implies y⊳ı* by fact 3. from the previous post. But y<* is impossible since it would imply 0<y<*, contradicting 0‖*. Thus, as we might hope, y‖* for any y between 0 and ↑ (not that we’ve encountered any such positions yet).

Putting many of the inequalities collected so far to use, we obtain the following picture of the Number line plus other positions. Here ☆ represents *n for any n>1, positions are above/below ones they’re confused with, and question marks represent unknown relationships (e.g. we don’t know if there are positions greater than ↑ that * is confused with):

```
                    ?----*----?
                       ?-☆-?
…  -1  -1/2  -1/ω  ⇓  ↓  0  ↑  ⇑  1/ω  1/2  1  …
```

With this sort of picture in hand, you should be able to answer any question of the form “Who would win this particular given sum of Numbers, arrows, and stars?”. Incidentally, the argument we used above to justify the * line of this picture generalizes to show that any position has a connected “cloud” of positions it’s confused with, in some sense (if it’s confused with both a and b with a<b then it’s confused with anything in between). Also, in some contexts, the number line may be flipped so as to have the *L*-positions on the left.

**Positions with ↑ and ↓ as options**

We learned about new positions ↑ and ↓, and we can add them to each other or to stars or Numbers just fine. But the game of Toads and Frogs can even show us positions where arrows are options. Consider TT□FF. Left’s only move is to T□TFF=↑, and by symmetry, Right’s only move is to -↑=↓, so this is {↑|↓}. Certainly this is an *N*-position, since Left would get to make it positive and Right would get to make it negative, but where does it fit on our Conway “line”?

Well, first we can try and put it in canonical form, to make it simpler to deal with. Obviously neither option is dominated, but what about reversible? If Left moves to ↑={0|*} then Right’s move is to *; is this at least as good for Right as {↑|↓}? We need to check if *-{↑|↓}≤0. For that to be *false*, Left would have to have a winning move right now. If Left moves in *, then Left leaves -{↑|↓}={↑|↓}, an *N*-position, for Right to win with. If Left moves in -{↑|↓} (which equals {↑|↓} by symmetry), they leave ↑+*, which is an *N*-position by our Conway “line” picture. Thus, * really is at least as good for Right as the original game, and Left’s move ↑ reverses through * to 0: {↑|↓}={0|↓}. But in our check of reversing, we learned more: every first move Left had from *-{↑|↓} led to an *N*-position, and *-{↑|↓} equals its own negative, so Right doesn’t have a winning first move *either*. In other words, *-{↑|↓}=0, and in fact we have {↑|↓}={0|↓}={↑|0}={0|0}=*.
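This whole reduction can also be confirmed by brute force. Here’s a small Python sketch (same kind of tuple-pair representation as before; all names mine) checking that {↑|↓}, {0|↓}, and {↑|0} are each equal to *:

```python
from functools import lru_cache

# A position is a pair (left options, right options), each a tuple of positions.
ZERO = ((), ())            # 0 = {|}
STAR = ((ZERO,), (ZERO,))  # * = {0|0}
UP   = ((ZERO,), (STAR,))  # ↑ = {0|*}

def neg(g):
    return (tuple(neg(x) for x in g[1]), tuple(neg(x) for x in g[0]))

DOWN = neg(UP)             # ↓ = {*|0}

@lru_cache(maxsize=None)
def leq(g, h):
    """G <= H unless some left option of G is >= H or some right option of H is <= G."""
    return (all(not leq(h, gl) for gl in g[0]) and
            all(not leq(hr, g) for hr in h[1]))

def eq(g, h):
    return leq(g, h) and leq(h, g)

assert eq(((UP,), (DOWN,)), STAR)    # {↑|↓} = *
assert eq(((ZERO,), (DOWN,)), STAR)  # {0|↓} = *
assert eq(((UP,), (ZERO,)), STAR)    # {↑|0} = *
```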

Another basic position with arrow options we may wonder about is {↑|↑} (note that {↓|↓}=-{↑|↑}). Right off the bat we can say that this is an *L*-position, because the only moves are to ↑>0. We can also say that, by II.6.1, ↑⊳ı{↑|↑}⊳ı↑, so it must be that {↑|↑}‖↑. Being a positive position confused with ↑, maybe {↑|↑} is ↑+*2, or maybe it’s a totally new position.

To help us find out, we first put {↑|↑} in canonical form. Look at the left option ↑ and its right option *: we claim *≤{↑|↑}, i.e. that Right loses going first in {↑|↑}-*={↑|↑}+*. Indeed, Right’s moves lead to ↑+*‖0 or to {↑|↑}>0, and Left wins from either. Therefore, ↑ reverses through *, and gets replaced by *’s left option 0: {↑|↑}={0|↑}. The right option ↑ doesn’t reverse because 0 (the left option of ↑) isn’t ≥{0|↑} (we know {0|↑}={↑|↑}>0). Thus, we have the canonical form. **Exercise:** Compare {0|↑} to various positions of the form n.↑+*m to find its value (it is equal to one such position).

If you want to cheat, that exercise is actually part of the text of Winning Ways, as well as a chart (which only takes time to derive) of canonical forms for all positions of the form n.↑+*m.

**Weak Number Avoidance**

As someone suggested, future details of verification will be simplified if we have something called the “(Weak) Number Avoidance Theorem”, whose statement I first encountered in Winning Ways, although the name may have first appeared in Lessons in Play. It will allow us to bypass silly cases like “If Right moves from x+* to y+*, then…” in our arguments. The idea is that Numbers are extra benefits for Left or Right, and they’re not really where the action of a game is happening. This theorem will apply equally well to Numbers (and other positions) that aren’t short, like ω={0,1,2…|}.

**1. If x is equal to a Number, and G is not equal to a Number, then if a player has a winning move in x+G this turn, they can win by moving in G.**

WLOG, assume that Left has a winning move in x+G, so that 0⊳ıx+G. We want to show that Left can win by moving in G: i.e. that there is an option Gℓ of G with x+Gℓ≥0.

Now, assume for the sake of contradiction that we have a Number x of lowest birthday where x+G is a counterexample: 0⊳ıx+G, but x+Gℓ⊳ı0 for all options Gℓ of G. Since Left can win x+G, but not by moving in G, some xℓ_{1}+G≥0. Note that we can’t have xℓ_{1}+G=0, since then G=-xℓ_{1}, but G is not equal to a Number. Thus, xℓ_{1}+G>0.

Since we assumed x had least birthday among counterexamples, xℓ_{1}+G must be a situation where the theorem applies: Left has a move in the G component to win xℓ_{1}+G, i.e. there is a Gℓ_{1} such that xℓ_{1}+Gℓ_{1}≥0. Since x is a Number, x>xℓ_{1}, and we have x+Gℓ_{1}>xℓ_{1}+Gℓ_{1}≥0. But this contradicts our assumption that x+Gℓ⊳ı0 for all Gℓ! ✓

Since you can ignore a Number until there’s nowhere else to move, the Number components of a position are kind of like score, although Fraser Stewart has written multiple papers on a theory of combinatorial games that would explicitly include score as we usually think of it.

Incidentally, you might think (as claimed in Winning Ways, although it may have been one of the many intentional errors) that repeated application of this result would yield “if you have a sum of a Number and a bunch of non-Numbers, and you can win, then you can win by moving in one of the non-Numbers”, but that’s false for non-short games (and harder to prove for short games). The following example essentially comes from a counterexample of “the translation property” (“strong number avoidance”) for non-short games introduced in ONAG:

Let ℤ denote the set of integer Numbers, and ±ℤ denote {ℤ|ℤ}. Then certainly Left can win 1+±ℤ+±ℤ by moving in the 1 component to get to ±ℤ+±ℤ=0. But if Left tries to win by avoiding the Number 1, and moves to 1+n+±ℤ for some integer n, then Right can respond by moving in the remaining ±ℤ to get to 1+n+(-1-n)=0, regardless of what integer Number n is.

**Hackenstrings and Nim piles**

With this tool in hand, we can try to understand, once and for all, how *={0|0} (a Nim pile of size one) compares to various Numbers. Suppose that x is a positive Number (e.g. 5 or 1/ω). To see how it compares with *, we need to see who wins x-*=x+* (remember -*=*). If Left goes first, they can move in * to make the whole position equal x>0, so they win. If Right could win going first, **1.** says they’d win by moving in *, but that would make the whole position x>0 and Left would *still* win. Therefore, *<x for any positive Number x, no matter how infinitesimally small. Similarly, z<* for any negative Number z. Incidentally, an inductive argument shows that any Nim pile *n={0,*,*2,…,*(n-1)|0,*,*2,…,*(n-1)} (or an infinite Nim pile) also lies between all the positive and all the negative Numbers.

This information essentially tells us who wins a game of “Hackenstrings and Nim piles” (and by extension, yields a strategy). Use Surreal arithmetic to combine all of the Hackenstrings into a single string, and use Sprague-Grundy to combine all of the Nim piles into a single pile. If the Hackenstring is the empty string “”, then the winner is the winner of the Nim pile (the next player to move or not, depending on whether or not the Nim pile is empty). If the Hackenstring isn’t empty, then the player who can take the first letter of the string (remember: that determines the sign of the value) can win. (In the simplified game, they’d win by taking the Nim pile if they get a chance, and taking the Hackenstring pile after that if they get a chance, but in the more complex game you’d have to make a table of who wins the various reachable positions, as with Nim.)
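The decision procedure just described is easy to code for finite Hackenstrings and finite Nim piles. A Python sketch (function names mine; the value of a finite Hackenstring is computed by a standard rule: letters count ±1 until the first change of direction, after which each letter counts half the previous one):

```python
from fractions import Fraction
from functools import reduce

def string_value(s):
    """Value of a finite Hackenstring over {'L','R'}."""
    v, step, broken = Fraction(0), Fraction(1), False
    for i, ch in enumerate(s):
        if i and ch != s[0]:
            broken = True    # first change of direction
        if broken:
            step /= 2        # from then on, each letter moves half as far
        v += step if ch == "L" else -step
    return v

def winner(strings, piles, left_moves_first):
    """Winner of a sum of finite Hackenstrings and Nim piles, per the analysis above."""
    total = sum((string_value(s) for s in strings), Fraction(0))
    nim = reduce(lambda a, b: a ^ b, piles, 0)
    if total > 0:
        return "Left"   # a positive Number beats any pile of stars
    if total < 0:
        return "Right"
    # Number part is 0: the position is just a Nim pile of size `nim`
    first, second = ("Left", "Right") if left_moves_first else ("Right", "Left")
    return first if nim else second
```

For example, `winner(["LR", "R"], [3, 5], True)` returns `"Right"`, since the Hackenstrings sum to 1/2-1=-1/2<0.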

We can understand x+* even better, by finding its canonical form. By definition, x+*={xℓ+*,x|xr+*,x} where xℓ and xr range over the corresponding options of x. Now let’s check for dominated options. x-(xℓ+*)=(x-xℓ)-*=(x-xℓ)+*>0, since x-xℓ is a positive Number and positive Numbers are still positive after you add *. Thus, each left option xℓ+* is dominated by the left option x, and we can delete all of the “xℓ+*” options. Similarly, xr+*>x so we can delete all of the “xr+*” options. This leaves us with x+*={x|x}.

**Exercise:** Verify that neither of these “x” options could be reversible. By a very similar argument, the canonical form of x+*2 is {x+*,x|x+*,x}, etc.
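As a quick sanity check of the identity x+*={x|x} for a couple of small Numbers, here’s a hedged Python sketch (representation and names mine):

```python
from functools import lru_cache

# Positions as (left options, right options) pairs of tuples.
ZERO = ((), ())
STAR = ((ZERO,), (ZERO,))  # *
ONE  = ((ZERO,), ())       # 1
HALF = ((ZERO,), (ONE,))   # 1/2 = {0|1}

def add(g, h):
    return (tuple(add(x, h) for x in g[0]) + tuple(add(g, x) for x in h[0]),
            tuple(add(x, h) for x in g[1]) + tuple(add(g, x) for x in h[1]))

@lru_cache(maxsize=None)
def leq(g, h):
    return (all(not leq(h, gl) for gl in g[0]) and
            all(not leq(hr, g) for hr in h[1]))

def eq(g, h):
    return leq(g, h) and leq(h, g)

assert eq(add(ONE, STAR), ((ONE,), (ONE,)))     # 1+* = {1|1}
assert eq(add(HALF, STAR), ((HALF,), (HALF,)))  # 1/2+* = {1/2|1/2}
```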

**Inequality Identities**

As I’ve alluded to before, all of the familiar facts about inequalities hold true for Conway positions: e.g. “F<G≤H implies F<H”. However, sometimes we have positions that are incomparable/confused: G‖H. This ends up giving us a whole collection of *un*familiar facts about inequalities. Just like “G≤H” makes writing “G<H or G=H” tidier, we’ll use “G⊳ıH” to make writing “G<H or G‖H” tidier. Note that “G⊳ıH” really just means “Left can win H-G if they go first”, or equivalently, “Right can win G-H if they go first”.

**1. If Gℓ is a left option of G and Gr is a right option, then Gℓ⊳ıG⊳ıGr.**

Why? We need to show Left can win G-Gℓ if they get to go first. Gℓ is a left option of G, so Left can move to Gℓ-Gℓ=0 on their first move and win. The right option part is analogous. ✓ (This is similar to the proof of II.5.4.)

**2. If F<G‖H or F‖G<H then F⊳ıH.**

Why? Assume F<G‖H. We need to show that Left can win H-F, going first. But H-F=(H-G)+(G-F), and *N*+*L*=*NL*†, so F⊳ıH. The F‖G<H case is very analogous. ✓

(† If you don’t want to recreate/click back to read the argument I cited: Left can win H-G if they go first since G‖H, and Left can win G-F even if Right gets to go first since F<G. Therefore, in the sum (H-G)+(G-F), Left simply has to start by making the right move in H-G and then respond in the same component to each of Right’s moves.)

**3. If F≤G⊳ıH or F⊳ıG≤H, then F⊳ıH**

Why? Assume F≤G⊳ıH. If F=G, we’re done automatically. If F<G<H then F<H so certainly F⊳ıH. In the remaining case F<G‖H, the result follows from **2.**. The F⊳ıG≤H case is very similar. ✓

**Dominated Options**

The first main idea is based on the intuition that if a position is worse for Left (or Right) than another, no matter who’s going first, then there’s no reason for Left to make that move in any perfect play; it shouldn’t affect who wins in any combination of games. Suppose a position G has two left options Gℓ_{1}≤Gℓ_{2}. Then we say that Gℓ_{1} is “dominated” by Gℓ_{2}; the left option Gℓ_{2} is not worse for Left no matter what. Similarly, if Gr_{1}≤Gr_{2} then Gr_{2} is “dominated” by Gr_{1} (remember that *lower* positions are better for Right).

**4. If F⊳ıG, then introducing F as a left option gives you an equivalent position (and analogously for Right).**

Why? By symmetry, I’ll only prove the Left case. We need to show that {F,Gℓ|Gr}={Gℓ|Gr} (here Gℓ and Gr range over the relevant options). In other words, we have to show that {F,Gℓ|Gr}-{Gℓ|Gr}={F,Gℓ|Gr}+{-Gr|-Gℓ} is a *P*-position. Almost every first move can be answered by the mirror move in the other component, leaving a position of the form X-X=0. The only unpaired first move is Left’s move to F-{Gℓ|Gr}; since F⊳ı{Gℓ|Gr}, Right can win that going first, too. ✓

In Winning Ways Vol.1, theorem **4.** is called “The Gift Horse Principle”.

**5. If you delete a dominated option, you get an equivalent position.**

Why? By symmetry, I’ll only look at the Left option case. Suppose Gℓ_{1}≤Gℓ_{2}. For tidiness, let H denote “G with Gℓ_{1} deleted”. Note that by **1.**, Gℓ_{2}⊳ıH. Combining that with Gℓ_{1}≤Gℓ_{2} and using **3.** tells us that Gℓ_{1}⊳ıH. From that inequality and **4.**, “H with Gℓ_{1} added”=H, so G=H. ✓.

This fact has some implications we already knew: In that {0,1|2} example, 0 is dominated by 1, so we may delete it, leaving {1|2}=1+1/2. In { -5 | 0, {*,*4|*,*4} }, {*,*4|*,*4} is dominated by 0 (in fact, they’re equal since mex({1,4})=0), so we may delete it, leaving {-5|0}=-1. We already could have done that because the equality case is also a consequence of II.2.1. However, this theorem about deleting dominated options that took a little work is much more general.
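Strict domination is also easy to detect mechanically. The following Python sketch (names mine; it only deletes *strictly* dominated options, which is all the {0,1|2} example requires) checks that deleting 0 from {0,1|2} leaves an equal position:

```python
from functools import lru_cache

ZERO = ((), ())
ONE  = ((ZERO,), ())
TWO  = ((ONE,), ())

@lru_cache(maxsize=None)
def leq(g, h):
    return (all(not leq(h, gl) for gl in g[0]) and
            all(not leq(hr, g) for hr in h[1]))

def eq(g, h):
    return leq(g, h) and leq(h, g)

def delete_dominated(g):
    """Drop left options with a strictly greater left option available,
    and right options with a strictly smaller right option available."""
    L = tuple(x for x in g[0] if not any(leq(x, y) and not leq(y, x) for y in g[0]))
    R = tuple(x for x in g[1] if not any(leq(y, x) and not leq(x, y) for y in g[1]))
    return (L, R)

G = ((ZERO, ONE), (TWO,))                       # {0,1|2}
assert delete_dominated(G) == ((ONE,), (TWO,))  # 0 was dominated by 1
assert eq(G, delete_dominated(G))               # and the value is unchanged
```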

**Reversible Options**

As with the deleting of dominated options, this next method of simplification is also motivated by relatively simple strategic concerns. Suppose you move in F to a position G where I can move to a position H that’s at least as good for me as F was (as if I’ve at least undone your move to G, and perhaps done even better). Then instead of moving to G, it would be simpler if you could just jump straight to your options in H. It may not be obvious that this sort of replacement wouldn’t change the value of the game in combinations. For instance, you may want to think about the case where I have an even better option than H (call it H′) in G; why is it ok to let you jump to options of H instead of options of H′?

More formally, suppose G has a left option Gℓ_{1} which has a right option Gℓ_{1}r_{1}≤G. Then Gℓ_{1} is said to be “reversible”, and we say that it “reverses through” Gℓ_{1}r_{1}. Similarly, a right option of G is reversible if it has a left option ≥G. To formalize the “jumping”, we can talk about replacing reversible options:

**6. If Gℓ_{1} reverses through Gℓ_{1}r_{1}, then replacing G’s option Gℓ_{1} with all the left options Gℓ_{1}r_{1}ℓ yields an equivalent position (and similarly for reversible right options).**

Why? By symmetry, I’ll just prove the reversible left option case. We have to prove that {Gℓ_{1}r_{1}ℓ, Gℓ′ | Gr}-{Gℓ_{1}, Gℓ′ | Gr} is a *P*-position, where Gℓ′ ranges over the left options of G other than Gℓ_{1}.

- Moves by the first player to one of the ±Gℓ′ or ±Gr can be responded to in the other component to leave 0 (e.g. Gr_{5}-Gr_{5}).
- A move by Left to Gℓ_{1}r_{1}ℓ-G is bad for Left since Gℓ_{1}r_{1}ℓ⊳ıGℓ_{1}r_{1} by **1.** and Gℓ_{1}r_{1}≤G by reversibility. Combining these and using **3.** tells us that Right would win.
- The remaining case is if Right moves to {Gℓ_{1}r_{1}ℓ, Gℓ′ | Gr}-Gℓ_{1}. Left can respond by moving to {Gℓ_{1}r_{1}ℓ, Gℓ′ | Gr}-Gℓ_{1}r_{1}.
  - Then, if Right tries to move to something like {Gℓ_{1}r_{1}ℓ, Gℓ′ | Gr}-Gℓ_{1}r_{1}ℓ, Left will respond by moving to Gℓ_{1}r_{1}ℓ-Gℓ_{1}r_{1}ℓ=0.
  - If instead, Right moves to something like Gr-Gℓ_{1}r_{1}, they’ve lost because Gr-G≤Gr-Gℓ_{1}r_{1} (by reversibility) and 0⊳ıGr-G (by **1.**), so by **3.** this is bad for Right.

✓

It’s good to think about when we’ve replaced reversible options in the past. Consider {0|5}. Since the right option 5={4|} has a left option 4≥{0|5} (this inequality can be checked without knowing what number {0|5} is), 5 reverses through 4, and we may replace 5 with the right options of 4={3|}. Since Right has no options in 4, we can replace 5 with nothing, and have {0|5}={0|}=1. Of course we knew this already from our analysis of Numbers, but this is just an example of the power of replacing reversible options.

A slightly more complicated example would be the impartial game G={ {*|*},*,*3|{*|*},*,*3}. *3 has a right option *2 that’s equal to G (and hence at least as good for Right as G), so the left *3 reverses through *2 and we can replace it with all of *2’s left options, leaving G={{*|*},*, 0,*|{*|*},*,*3}. Similarly, the other *3 reverses through *2 and we have G={{*|*},*, 0,*|{*|*},*, 0,*}. Since * is the same position as *, we don’t need two copies on each side, so G={{*|*},*,0|{*|*},*,0}. Since {*|*}=0 (they’re equivalent positions), we can delete {*|*} from each side for being a dominated option. This leaves us with a much simpler G={0,*|0,*}. (Of course two applications of the mex rule would have told us the same thing.)
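Deleting dominated options and replacing reversible ones can be combined into a tiny simplification routine for short positions. This is a sketch under my own tuple-pair representation (names mine; it leans on the fact, discussed next, that equal options in fully simplified form end up syntactically identical, so duplicates can be merged directly). It reproduces both examples above:

```python
from functools import lru_cache

# Positions as (left options, right options) pairs of tuples.
ZERO  = ((), ())
STAR  = ((ZERO,), (ZERO,))
STAR2 = ((ZERO, STAR), (ZERO, STAR))
STAR3 = ((ZERO, STAR, STAR2), (ZERO, STAR, STAR2))

@lru_cache(maxsize=None)
def leq(g, h):
    return (all(not leq(h, gl) for gl in g[0]) and
            all(not leq(hr, g) for hr in h[1]))

def canonical(g):
    """Fully simplify a short position: simplify the options, then repeatedly
    replace reversible options and delete strictly dominated ones."""
    L = list(dict.fromkeys(canonical(x) for x in g[0]))
    R = list(dict.fromkeys(canonical(x) for x in g[1]))
    while True:
        cur = (tuple(L), tuple(R))
        # a left option x reverses through a right option xr of x with xr <= G
        rev = next(((x, xr) for x in L for xr in x[1] if leq(xr, cur)), None)
        if rev is not None:
            x, xr = rev
            keep = [y for y in L if y != x]
            L = keep + [l for l in xr[0] if l not in keep]  # replace by xr's left options
            continue
        rev = next(((y, yl) for y in R for yl in y[0] if leq(cur, yl)), None)
        if rev is not None:
            y, yl = rev
            keep = [z for z in R if z != y]
            R = keep + [r for r in yl[1] if r not in keep]
            continue
        # delete strictly dominated options
        L2 = [x for x in L if not any(leq(x, y) and not leq(y, x) for y in L)]
        R2 = [x for x in R if not any(leq(y, x) and not leq(x, y) for y in R)]
        if L2 == L and R2 == R:
            return cur
        L, R = L2, R2

def integer(n):
    g = ZERO
    for _ in range(n):
        g = ((g,), ())
    return g

assert canonical(((ZERO,), (integer(5),))) == integer(1)  # {0|5} = 1
assert canonical(((STAR,), (STAR,))) == ZERO              # {*|*} = 0
G = ((((STAR,), (STAR,)), STAR, STAR3), (((STAR,), (STAR,)), STAR, STAR3))
assert canonical(G) == STAR2                              # the example above
```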

**Canonical Forms**

Let’s say you’ve deleted dominated options and replaced reversible options repeatedly and were able to get to a point where you couldn’t do it anymore. Then it turns out that in some sense there’s no other simplification you could possibly do after that. Also, sort of like putting fractions in simplest form, finite Conway positions have a “canonical form”. Also like fractions, you can simplify partially, and full simplification is tedious and not always necessary, although it makes it very easy to check equality once you’ve done it.

**7. If G and H have no dominated or reversible options, then G=H exactly when every left/right option of G is equal to a corresponding left/right option of H and vice versa.**

Why? If the corresponding options are equal, then by II.2.1, G=H (even without the assumption).

Now assume G=H and there are no dominated/reversible options. Since G=H, G-H is a *P*-position, so no matter what move Left makes, Right has a winning response (to a position ≤0). Suppose Left moves to a position like Gℓ+(-H). It can’t be that something like Gℓr-H≤0 because then we’d have Gℓr≤H=G so Gℓ would be reversible. Therefore, Right’s response must be in -H. A right option of -H is the negative of a left option in H, so we have ∀Gℓ∃Hℓ Gℓ≤Hℓ. By the same argument with G and H switched, we have ∀Hℓ∃Gℓ Hℓ≤Gℓ.

Combining these two facts gives statements like “for any Gℓ_{1}, we can write something like Gℓ_{1}≤Hℓ_{1}≤Gℓ_{2}“, but that would mean that G has dominated left options unless all of these inequalities were actually equivalences. Thus, every left option of G is equal to a left option of H. The other three cases (e.g. “every right option of H is equal to a right option of G”) are all similar.

✓

The reason **7.** was so awkwardly worded is that there are different positions that are equivalent that might come up. For instance, { {|} | } is {0|}=1 and has no dominated or reversible options. Note that { {*|*} | } is equivalent to {0|}=1, and it *also* has no dominated/reversible options. Therefore, by **7.**, its left option {*|*}=0, as already known. Incidentally, it’s worth thinking about **7.** in non-short cases: If you have a position G that’s equal to ω+1 (which is {ω|}), and G has no dominated or reversible options, then G must have no right options and only one left option equal to ω.

**8. Short Conway positions have a unique equivalent “canonical form”.**

Note that 0={|} has no options at all, so it certainly has no dominated/reversible options; {|} will be the canonical form for any position equivalent to 0. By induction, assume that all of G’s options are written in their unique canonical forms. Then delete dominated options and replace reversible options until you can’t do so anymore (this terminates for short positions). By **5.** and **6.**, the result is equivalent to G and has no dominated or reversible options, and by **7.**, any two positions of that sort that are equal to G must have options that are equal — hence, by induction, identical. So the result is *the* canonical form of G. ✓

We’ve seen some canonical forms before: for Numbers and stars. For example, the canonical form for {0,-7|1,1+1/2,3/4} (or any equivalent position) is {0|1} where 1 stands for {0|} exactly and 0 stands for {|} exactly. The canonical form for {0,*5|0,*5}=* is {0|0}, i.e. { {|} | {|} }.

The shortness condition in **8.** is essential. Consider ω={0,1,2,3,…|}. It has no reversible options since nonnegative integers have no right options. However, *all* of its options are dominated. **5.** tells us we can delete one of the options (and hence any finite bunch of them), but we can’t delete them all (we’d end up with {|}=0). And we aren’t limited to finite bunches, either: our analysis of Numbers tells us that as long as we leave an infinite bunch of nonnegative integers, the position is still equivalent to ω. For example, we can delete all of the non-squares, and find that ω={0,1,4,9,…|}. So the simplification process never bottoms out.

**Ordinals**

An “ordinal” is a Number equivalent to one with no right options. There are a few basic facts you should know about ordinal numbers.

- If G is an ordinal, G={ordinals H<G |} (in particular, with this set up the collection of left options really is a set if we only pick equivalence class representatives of this standard form).
- Any non-empty class (for example, a nonempty set) of ordinals has a least ordinal.
- For any set of ordinals S, {S|} is an ordinal bigger than any member of S.
- The nonnegative integers are ordinals, and ω={0,1,2,…|} is the next biggest ordinal after those.

If you happen to have a sense of ordinal arithmetic already, Surreal Number arithmetic works differently, in general.

**Birthdays**

With the ordinals in hand, we can give names to the steps we take to build all the Numbers (in fact, all the Conway positions). Before we have any positions, there’s nothing we can assign to be left or right options, so we say “on day 0, 0={|} was born”. Then on day 1, 1={0|} and -1={|0} (and *={0|0}) were born, etc. In general: the Numbers (or Conway positions) made by day α (α being an ordinal) are defined to be all the Numbers you get where left options and right options were “made by day β” for β<α. With this setup, each Number (or Conway position) has a “birthday”: the first ordinal α such that it was “made by day α”.

By convention, we usually only worry about the equivalence class of a position: {*|*}=0, so we’d usually say that {*|*} was born on day 0, not day 2. Also, as a consequence of the simplicity rule from II.5, a Number is equal to the unique one of earliest birthday that fits between the sets of left and right options.

**Infinite days and real Numbers**

Dyadic fractions were born on finite days. What Numbers were born on day ω? Well, the infinite ω={0,1,2,…|}=”LLLL…” was born (and its negative). So was the infinitesimal 1/ω={0|1,1/2,1/4,…}=”LRRR…” (and its negative) and dyadic translations of it like 1+1/ω={1|1+1,1+1/2,1+1/4,…}.

The other things that were born on day ω are *every real number that’s not a dyadic*! Like 1/3={0,1/4,1/4+1/16,…|1/2,1/4+1/8,1/4+1/16+1/32,…} and π={3,3+1/8,3+1/8+1/64,…|4,3+1/2,3+1/4,3+1/8+1/16,…}. If you’re familiar with Dedekind cuts, you can probably figure out how to make an explicit definition of the reals this way (you can do it without the dyadic restriction, but then you won’t get all of the reals as early as possible).

On day ω+1 you get things like ω-1={0,1,2,…|ω}, 1/3+1/ω, and 1/(2ω)={0|1/ω}. If you want ω/2, you actually have to wait until day ω·2 (=ω+ω); ω/2={0,1,2,…|ω,ω-1,ω-2,…}.

However, there’s an implicit definition of the reals that’s rather elegant. A Number G is said to be “real” if 1. There are some integers m and n such that m<G<n (so its value isn’t infinite) and 2. G={G-q|G+q} where q ranges over the positive rationals (or positive dyadics, if you prefer). ω isn’t a real number because it’s not less than any integer. 1/ω={0|1,1/2,1/4,…} isn’t a real number because 1/ω-1/n for any positive integer n is negative, so {G-q|G+q}=0 when G=1/ω.

**Infinite Hackenstrings**

The game of Hackenstrings actually covers all of the Surreal Numbers. But it’s a little complicated to see carefully. Given a Number G with birthday α, we can form the day-β approximation to it G_{β} (for ordinal β≤α) by giving G_{β} all Numbers born *before* day β that are less than G as left options, and similarly for the right options. Note that G_{0}=0 for all Numbers G.

With these approximations, we can build a (possibly infinite; it’s really a function from the left set of some ordinal in standard form to {“L”,”R”}) Hackenstring equal to G in the following way. For each ordinal β less than the birthday of G, give the β^{th} spot an “L” if G>G_{β}, and an “R” otherwise. (We can’t ever have equality by the definition of “birthday” and day-β approximation.) Every one of these Hackenstrings is in a distinct equivalence class: to compare two distinct Hackenstrings, read them until the first spot they differ, and then “L”>blank>”R”.
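For dyadic Numbers, this approximation process is completely mechanical. Here’s a short Python sketch (the function name is mine) that produces the Hackenstring of a dyadic rational by recording “L” or “R” against each successive approximation, using the rule that letters move by ±1 until the first change of direction, and half as far with each letter after that:

```python
from fractions import Fraction

def sign_expansion(x):
    """Hackenstring of a dyadic rational x.
    Only terminates for dyadics (other reals need infinitely many letters)."""
    x = Fraction(x)
    s, g = "", Fraction(0)
    step, broken = Fraction(1), False
    while g != x:
        ch = "L" if x > g else "R"
        if s and ch != s[0]:
            broken = True    # first change of direction
        if broken:
            step /= 2        # after that, each letter moves half as far
        s += ch
        g += step if ch == "L" else -step
    return s

assert sign_expansion(2) == "LL"
assert sign_expansion(Fraction(1, 2)) == "LR"
assert sign_expansion(Fraction(3, 4)) == "LRL"
assert sign_expansion(Fraction(-1, 4)) == "RLL"
```

Notice that the outputs are consistent with the comparison rule just stated: e.g. “LR” (=1/2) and “LRL” (=3/4) first differ at the third spot, where “L” beats blank.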

**Scattered Closing Remarks**

There are lots of different equivalent (up to `FIELD` isomorphism, I guess) ways to axiomatize the Surreals. There are more technical versions of statements like “they’re the unique `FIELD` with all the ordered fields in it”, or “they’re the unique `FIELD` where ordinal birthdays make sense”. You can adjoin *i* and get the Surcomplex Numbers (which are algebraically closed). You can define things like infinite series and something like a limit and lots of other crazy things that could fill at least one whole book.

Everything in the `FIELD` On_{2} was equivalent to a Nim pile: basically a Hackenstring except all the letters were “E”s, which can be taken by either player. Everything in the `FIELD` of Surreals was equivalent to a Hackenstring with no “E”s, just “L”s and “R”s. You could mix them up and get weird games like “EL”, but there’s no nice arithmetic for those. And I must emphasize: *not every Conway position is equivalent to a Hackenstring*.

Where unspecified, Gℓ will range over the left options of G, etc. Remember that Numbers can be added (inductively) via the formula G+H={Gℓ+H,G+Hℓ|Gr+H,G+Hr} and they can be subtracted by adding the negative: -G={ -Gr | -Gℓ }. It turns out we can do more with Numbers, though.

**Numbers can be multiplied.**

With all of these Numbers that act like dyadic fractions and integers, it would be nice if we had a procedure that let us multiply 2={1|}={{0|}|}={{{|}|}|} by 1/2={0|1}={{|}|{{|}|}} to get 1={{|}|}. To get a formula for G*H, we just need a bunch of things less than it for the left options, and a bunch of things more than it for the right options.

A common first guess (perhaps even Conway’s?) is to define it recursively by G*H={Gℓ*H,G*Hℓ|Gr*H,G*Hr}. However, a moment’s thought will show that this doesn’t actually make any sense for negative numbers, and *another* moment’s thought will show that this is literally the same recursion as the one defining G+H! With a lot of thought, Conway came up with a janky definition that required G and H to be in some simplified form. With a lot more thought (months), Conway found the “right” definition.

The key idea is that we have four inequalities (coming from property 4. from the previous post and intuition about multiplication) (G-Gℓ)*(H-Hℓ)>0, (Gr-G)*(H-Hℓ)>0, etc. If we expand these out and manipulate, we get things like G*H>Gℓ*H+G*Hℓ-Gℓ*Hℓ, G*H<Gr*H+G*Hℓ-Gr*Hℓ, etc. This leads to the recursive definition G*H={Gℓ*H+G*Hℓ-Gℓ*Hℓ, Gr*H+G*Hr-Gr*Hr | Gℓ*H+G*Hr-Gℓ*Hr, Gr*H+G*Hℓ-Gr*Hℓ}. All the properties that you expect of multiplication hold up to equivalence (the product of positive numbers is positive, 1 times anything is itself, multiplication distributes over addition, etc.).
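For short positions, Conway’s formula can be run directly. Here’s a hedged Python sketch (representation and names mine) that multiplies 2={1|} by 1/2={0|1} and confirms the product equals 1={0|}, testing equality with the usual recursive comparison:

```python
from functools import lru_cache

# Positions as (left options, right options) pairs of tuples.
ZERO = ((), ())
ONE  = ((ZERO,), ())
TWO  = ((ONE,), ())
HALF = ((ZERO,), (ONE,))

def neg(g):
    return (tuple(neg(x) for x in g[1]), tuple(neg(x) for x in g[0]))

def add(g, h):
    return (tuple(add(x, h) for x in g[0]) + tuple(add(g, x) for x in h[0]),
            tuple(add(x, h) for x in g[1]) + tuple(add(g, x) for x in h[1]))

def sub(g, h):
    return add(g, neg(h))

def mul(g, h):
    """Conway's product: options of the form Gl*H + G*Hl - Gl*Hl, etc."""
    gl, gr = g
    hl, hr = h
    def term(a, b):
        return sub(add(mul(a, h), mul(g, b)), mul(a, b))
    L = (tuple(term(a, b) for a in gl for b in hl) +
         tuple(term(a, b) for a in gr for b in hr))
    R = (tuple(term(a, b) for a in gl for b in hr) +
         tuple(term(a, b) for a in gr for b in hl))
    return (L, R)

@lru_cache(maxsize=None)
def leq(g, h):
    return (all(not leq(h, gl) for gl in g[0]) and
            all(not leq(hr, g) for hr in h[1]))

def eq(g, h):
    return leq(g, h) and leq(h, g)

assert eq(mul(TWO, HALF), ONE)  # 2 * 1/2 = 1
assert eq(mul(HALF, TWO), ONE)  # and in the other order too
assert eq(mul(TWO, ONE), TWO)   # 1 times anything is itself
```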

**Numbers can be divided.**

This is a little surprising, since if we imposed the finiteness condition, there’d be no Number corresponding to 1/3 (even though we have 1/2={0|1} and 1/4={0|1/2}). To find H/G, we really just need to multiply H by 1/G. To find 1/G, we really just need to be able to find 1/G for positive G (we can take the negative afterwards if necessary). Even then, it’s tricky to write down a formula (and it took a rather long while to find).

If G is positive, then we don’t need any of the negative left options (one of the left options has to be nonnegative to make G positive), so we’ll throw them out. Also, we’ll throw in 0 to the left options of G (G was positive so this is equal by the simplicity rule from the previous post). Then we can define H=1/G recursively by the following: H={0,(1+(Gr-G)*Hℓ)/Gr, (1+(Gℓ-G)*Hr)/Gℓ | (1+(Gℓ-G)*Hℓ)/Gℓ, (1+(Gr-G)*Hr)/Gr}. Now this looks problematic because it has options of H in the formula, but you can build the left and right sets with induction: start with 0 as an option of H and use all of G’s options, then go through all of the options of H you just got, etc. (Technical note: this transfinite induction doesn’t need more steps than the cardinality of the set of G’s options raised to the power ℵ_{0}.)

If you don’t want to use a crazy double-inductive definition, Kruskal solved the H part of the recursion: you can take the options (both left and right) of 1/G to be (1-Π(1-G/Gi))/G, where Gi ranges over the positive options of G and the products are always finite (the empty product of no terms is 1). The dividing by G in the formula isn’t actually a problem since there’s a G in the expansion of the numerator that can cancel. Such an expression is a *left* option of 1/G exactly when an *even* number of the Gi are left options of G.

In technical language, the Surreal Numbers are an ordered `FIELD` (they’re too big to be a field), and they contain an isomorphic copy of *every* ordered field.

**You can take square roots (and beyond)!**

There’s a vaguely similar doubly-recursive formula, found by Clive Bach, for the square root of a nonnegative Number, and also a similar singly-recursive version; they’re both kinda complicated. To take other roots and do other things, there’s lots of complex machinery (you can get a different definition of square root using infinite series stuff, for instance).

**Similarity to arithmetic for On_{2}.**

I talked about the “ordinal Nimbers” On_{2} before, and the recursive definitions above are similar in spirit to the arithmetic there.

In the positions we’ve looked at so far, some play nicely with the order (-5, 1/4, etc.) and others don’t (*3, ±2, etc.). It would be nice if we could find a nice big class of positions that are all comparable with each other (this means that A<B or A=B or A>B; remember that {|}={-1|1} even though they’re different positions). Also, since we’d like to use some sort of induction, it should be “closed under options”: the options (the positions that can be moved to) of something in this class should all be part of this class as well.

You might guess that all we have to do is take the positions that aren’t *N*-positions, but {0|*} is an *L*-position with an *N*-position as an option. A second guess might be that all we have to do is take positions that aren’t *N*-positions and whose options (and options’ options, etc.) aren’t *N*-positions, but that still wouldn’t work: {4|2} is one of those, but {4|2}-2=±1+1 is an *N*-position, so {4|2}‖2! From this sort of example, there’s one restriction that stands out (and it turns out to be the only restriction, in some sense):

**1. If a position G has A as a right option and B as a left option and A≤B, then G‖A.**

Why? We have to show that G-A is an *N*-position. If Left goes first, they can move (in G) to B-A, which is ≥0 since A≤B, so Left wins. If Right goes first, they can move (in G) to A-A=0 and win. ✓

With this in mind, we make the following recursive definition (since we’re dealing with non-loopy things, recursion is fine):

**A position is said to be a “Number” if all of its options are Numbers and none of its right options are less than or equal to any of its left options.**

Basically, if something doesn’t have any non-Numbers as options, and isn’t a position where theorem 1. would apply, it’s a Number. There are a few ways of motivating this definition, but they come down to the fact that a whole lot of things you’d want from numbers are satisfied by Numbers.
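The definition is directly executable for short positions. A Python sketch — the pair-of-option-tuples encoding and the names are mine, and `leq` is the standard recursive comparison test (G≤H exactly when no left option of G is ≥H and no right option of H is ≤G):

```python
from functools import lru_cache

# Positions as (left_options, right_options) pairs of tuples, so they're hashable.
ZERO = ((), ())
ONE = ((ZERO,), ())
HALF = ((ZERO,), (ONE,))        # {0|1}
STAR = ((ZERO,), (ZERO,))       # {0|0}
ZERO_STAR = ((ZERO,), (STAR,))  # {0|*}, the L-position mentioned above

def integer(n):
    """Nonnegative integer n as the position {n-1 | } (a small helper for examples)."""
    return ZERO if n == 0 else ((integer(n - 1),), ())

@lru_cache(maxsize=None)
def leq(g, h):
    """G <= H iff no left option of G is >= H and no right option of H is <= G."""
    return (all(not leq(h, gl) for gl in g[0]) and
            all(not leq(hr, g) for hr in h[1]))

def is_number(g):
    """All options are Numbers, and no right option is <= any left option."""
    return (all(is_number(x) for x in g[0] + g[1]) and
            not any(leq(gr, gl) for gl in g[0] for gr in g[1]))
```

As expected, {4|2} and {0|*} both fail the test: the first because its right option 2 is ≤ its left option 4, the second because its option * isn’t a Number.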

**2. Every integer (as defined earlier) is a Number.**

Why? Well, { | }=0 is a Number since it has no options at all, so there’s nothing that can go wrong. Note that here I’m using 0 to refer to that particular position, not any equivalent position like {*|*}. { 0 | }=1 is a Number since 0 is a Number and there are no right options to be less than or equal to any of the left options. By induction, the same can be said of {1|}=2, {2|}=3, etc. Similarly for {|0}=-1, {|-1}=-2, etc. ✓

**3. The negative of a Number is a Number.**

Why? We can prove it by induction. -{ | }={ | } so it’s true for the base case of 0. Now assume, as an inductive hypothesis, that it’s true for all of the options of some Number G={ Gℓ_{1},Gℓ_{2},… | Gr_{1},Gr_{2},… }. For convenience, I’ve ordered the left and right options; by the axiom of choice I can do this in the infinite case too, but it’s not actually necessary. Then -G={ -Gr_{1},-Gr_{2},… | -Gℓ_{1},-Gℓ_{2},… } passes the first test for being a Number by hypothesis. Suppose it failed the second test. Then there would be -Gℓ_{n}≤-Gr_{m} for some naturals (or ordinals) n and m. This implies that Gr_{m}≤Gℓ_{n}, which would have made G not a Number: contradiction. ✓

**4. A Number G is greater than all of its left options and less than all of its right options.**

Why? We’ll prove that G is greater than all of its left options; the proof that it’s less than all of its right options is exactly analogous. It’s trivially true for { | }=0. Now assume, as an inductive hypothesis, that it’s true for all of the options of some Number G. Call an arbitrary left option of G “Gℓ”. Then to check G>Gℓ, we need to see that Left can always win G-Gℓ. If Left goes first, they can just move to Gℓ-Gℓ=0 and win. If Right moves first, and they move in G to Gr-Gℓ, then since G is a Number, we know that Gr≰Gℓ so that either Gr>Gℓ (so Gr-Gℓ is an *L*-position) or Gr‖Gℓ (so Gr-Gℓ is an *N*-position); either way, Left wins. Finally, suppose Right goes first and moves in Gℓ to G+(-Gℓ)r=G-(Gℓℓ) (by this I mean that a right option of -Gℓ is the negative of a left option of Gℓ). Then Left can move to Gℓ-Gℓℓ, which is >0 by inductive hypothesis, and win. ✓

Incidentally, I think this fact 4. is a big part of the justification for the convention that *L*-positions are positive, rather than *R*-positions. If you have a Number like {0,1|3,7}, then the left options are to the left on the “Number line” (they’re lower) and the right options are to the right (they’re greater). Unfortunately, this convention will cause problems with other views of the number line: the positive games that Left can win either have to be on the *right* or the picture of the number line has to be flipped (this is common in CGT).

**5. The sum of two Numbers is a Number.**

Why? Recall G+H={Gℓ+H,G+Hℓ|Gr+H,G+Hr} from II.2. As usual, we’ll prove fact 5. by induction, it being clear for 0+0=0. Assume, by inductive hypothesis, that all of the options of G+H are Numbers. We must show that G+H can’t have a right option less than or equal to a left option. Suppose Gr+H≤Gℓ+H where Gr and Gℓ are some *particular* options of G (elements of sets). Then Gr≤Gℓ which is impossible since G is a Number. Suppose Gr+H≤G+Hℓ. By 4., which we just proved, Hℓ<H, so G+Hℓ<G+H. Also by 4., G<Gr, so G+H<Gr+H. We have G+H<Gr+H≤G+Hℓ<G+H, which is a contradiction. The two cases with things of the form G+Hr are analogous. ✓

For those who know what this means, 0 being a Number and facts 3. and 5. make the Numbers a sub`GROUP` of the proper class of all positions. However, they’re not just any sub`GROUP`:

**6. Numbers are never confused with other Numbers. (i.e. Two Numbers are always comparable.)**

Why? Suppose G and H are Numbers and G‖H, so that G+(-H)‖0. By facts 3. and 5., G+(-H) is a Number too, so it will suffice to show that no Number is an *N*-position. Suppose, for sake of contradiction, that G is a Number that’s also an *N*-position. Then for Left to have a winning move, G must have a left option that’s either a *P*-position or an *L*-position. For Right to have a winning move, G must have a right option that’s either a *P*-position or an *R*-position. Since *P*/*L*-positions are ≥ *P*/*R*-positions, all four combinations would contradict the assumption that G is a Number. ✓

Note that in the proof of 4. we considered a hypothetical case where a Number might be an *N*-position, but we didn’t know it was a Number because we didn’t have 5. yet, and we needed 4. to prove 5. and 5. to prove 6. But anyway, Numbers really do play nicely with the order, as we had hoped from the beginning, and they also play nicely with + and -, but right now they’re just some nebulous bunch of positions with nice properties. For example, we know that {0|1}=1/2 and “LRR”={0|1/2,1}=1/4 are Numbers, but we don’t really know (immediately) what the value of the Number {1+1/2|2+1/4} is.

To really understand the Numbers, we need a series of lemmas, beginning with something which Conway calls the “simplicity rule”. We’ll say that a position H “fits (for G)” if H is greater than all of G’s left options and less than all of G’s right options. In this language, fact 4. says “a Number fits for itself”.

**7. Simplicity Rule: If G and H are Numbers, and H fits for G and none of H’s options fit for G, then G=H.**

Why? To test that G=H, we need to show that the second player always has a winning strategy in G-H. If Left starts and moves to Gℓ-H, then since H fits, this is negative, and Right wins. Similarly, if Right starts and moves to Gr-H, then since H fits, this is positive, and Left wins. Now suppose Left starts and moves to G-(Hr). Since none of H’s options fit, Hr must not sit in between all of G’s left and right options. Since H is a number, fact 4. gives us that Hr>H. Therefore, there exists a right option of G, call it Gr, such that Gr≤Hr (remember fact 6.). Right moves to Gr-(Hr), which is negative or 0, and wins. The case of Right starting and moving to G-(Hℓ) is analogous. ✓

**8. It makes sense to recursively define non-integer dyadic fractions by (2p+1)/2^{q+1}={p/2^{q} | (p+1)/2^{q}}.**

Why? Base case: q=0. To verify that we can call {p|p+1} “(2p+1)/2”, we need to check that {p|p+1}+{p|p+1}=2p+1 for integers p. We’ll assume that p≥0 as the p+1≤0 case is analogous. We must verify that the following is a *P*-position (so that it equals 0):

{p|p+1}+{p|p+1}-(2p+1)

={p|p+1}+{p|p+1}-{2p | } by definition/proven fact about 2p+1

={p|p+1}+{p|p+1}+{ | -2p} by definition of –

={p+{p|p+1}-(2p+1) | p+1+{p|p+1}-(2p+1), {p|p+1}+{p|p+1}-2p} by definition of +

=0 since 0 fits (use fact 4. on {p|p+1} to see that 0 fits and fact 7. to see that it must therefore equal 0)

(Notice how no “If Left goes first…” type arguments were necessary.)

The induction step is essentially identical in style to this base case argument, except for the fact that everything has the denominator 2^{q+1} instead of 2.

**9. If some dyadic fraction fits for a Number G, then G is equivalent to some dyadic fraction (maybe not the same one).**

Why? First, suppose that an integer fits for G. Then suppose that n is the integer of lowest absolute value that fits for G (if both positive integers and negative integers fit, then so does 0). Then either n is 0 and n has no options at all or n has only one option which is of lower absolute value, so it doesn’t fit. By fact 7., G=n. Now suppose that no integer fits but a half-integer (see fact 8.) fits. Then since the options of a half-integer are integers, fact 7. says we’re done. Similarly for quarter-integers, etc. ✓

**10. Canonical form for short Numbers: A short Number is equivalent to a dyadic fraction.**

Why? From fact 9., it suffices to prove that for a short Number G, *some* dyadic fraction fits. Suppose, for sake of contradiction, that some short Number is not equivalent to a dyadic fraction. Then by non-loopiness, we can go through the options until we get to a “minimal” one: a Number whose options are all equivalent to dyadic fractions, but which is not equivalent to one. By shortness, there are only finitely many left options and finitely many right options, so any dyadic greater than the greatest left option and lower than the lowest right option fits: contradiction. ✓
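Facts 9. and 10. amount to an actual algorithm for evaluating a short Number: try integers of smallest absolute value first, then halves, then quarters, and so on, until something fits strictly between the options’ values. A Python sketch (the names and the `None`-for-no-options-on-that-side convention are mine):

```python
import math
from fractions import Fraction

def simplest_between(lo, hi):
    """The simplest dyadic strictly between lo and hi (None = no options on that side):
    integers of smallest absolute value first, then halves, quarters, etc."""
    if lo is None and hi is None:
        return Fraction(0)
    if hi is None:                 # only left options to beat
        n = 0
        while n <= lo:
            n += 1
        return Fraction(n)
    if lo is None:                 # only right options to stay under
        n = 0
        while n >= hi:
            n -= 1
        return Fraction(n)
    assert lo < hi
    # An integer of smallest absolute value, if any fits.
    for n in sorted(range(math.floor(lo), math.ceil(hi) + 1), key=abs):
        if lo < n < hi:
            return Fraction(n)
    # Otherwise the first dyadic p/2^q above lo, for the smallest q that works.
    q = 1
    while True:
        p = math.floor(lo * 2**q) + 1
        if Fraction(p, 2**q) < hi:
            return Fraction(p, 2**q)
        q += 1
```

For example, the simplest Number strictly between 1+1/2 and 2+1/4 is 2, which pins down the value of {1+1/2|2+1/4} mentioned earlier.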

This was a lot of theory, but it’s important theory because Numbers are very tidy positions, and they’re (in a non-technical sense) sort of orthogonal to the impartial ones. They also still have more nice properties, especially if we don’t require them to be short, in which case they’re generally called (or isomorphic to, depending on who you ask) the “Surreal numbers”, and I’ll talk more about them next time.

**Basic definitions**

Now that we have an order on some positions (at least the integer ones), I think we can fill in the “number line” a little bit. To make this less abstract, we can look at an actual game instead of merely Conway representations of positions. “Hackenstrings” is a game vaguely similar to Nim in which players hack away at strings. For now, an individual string will be comprised of “L”s and “R”s. On their turn, Left gets to remove an L (and everything to the right of it) and Right gets to remove an R (and everything to the right of it). The integers we’ve already discussed pop up as positions in Hackenstrings: “”=0, “LLL”=3, “RR”=-2, etc. However, there are lots of new positions like “LR”, “RLL”, and even non-short games like “LLLL…”.

**What is “LR”?**

Let’s consider the game “LR”. Right can remove the R, leaving “L”=1, and Left can remove the L (and the R to the right of it) leaving “”=0. Therefore, “LR”={0|1}. Our order allows us to say more about this position, though. Since Left can win by taking the “L”, “LR”>0. How does {0|1} compare to 1? In {0|1}-1=”LR”+”R”, Right can move to “L”+”R”=1-1=0 (so Left loses), and Left can move to “R”=-1 (which Left still loses). Thus, {0|1}-1=”LR”+”R” is an *R*-position and 0<{0|1}<1. But is it almost 0 or almost 1 or neither?

We can get an even better understanding of the value of “LR” by comparing “LR”+”LR” with “L”=1. We must play “LR”+”LR”-“L”=”LR”+”LR”+”R”. If Left goes first, they move to “LR”+”R”<0 and Left loses. If Right goes first and takes the lone “R” then “LR”+”LR”>0 remains and Right loses. If Right goes first and takes one of the other Rs, then they leave “LR”+”L”+”R”=”LR” and Right loses. In any case, the person who goes first loses (if the other player plays perfectly), so this is a *P*-position! “LR”+”LR”-“L”=0, so “LR”+”LR”=”L”=1. In summary, “LR” is a game between 0 and 1 such that “LR”+”LR”=1. These are pretty good reasons to call “LR” “1/2”, and that’s exactly what people do. As an aside, “LR”+* is *also* between 0 and 1 and (“LR”+*)+(“LR”+*)=1, but there are other reasons to give “LR” the title as opposed to “LR”+*. (Note: At this point, “1/2” is just a name; a priori there’s no concept of multiplication/division of games.)

**Other fractions?**

“LR”=1/2 and similarly, “RL”=-1/2. But what happens with more complicated strings? Consider “LRR”; it’s clearly positive since Left wins even if Right goes first, and it’s less than 1 since “LRR”+”R” is a win for Right. If you play “LRR”+”LRR”-“LR”=”LRR”+”LRR”+”RL”, then if Right goes first and they leave “LRR”+”LR”+”RL”=”LRR”>0, they lose. If Right goes first and they leave “LRR”+”LRR”>0, they lose. If Left goes first and they leave “LRR”+”RL” then Right can respond with “LR”+”RL”=0 and they lose. If Left goes first and they leave “LRR”+”LRR”+”R” then Right can move to “LRR”+”LR”+”R” and Left’s only moves are to “LR”+”R” (a win for Right) or “LRR”+”R” (also a win for Right). In all cases, the second player has a winning strategy, so “LRR”+”LRR”=”LR”. Thus, we can/should call “LRR” “1/4”. This pattern continues: “LRRR”=1/8, “LRRRR”=1/16, etc.

Let’s look at another family of positions, starting with “LLR”={0,1|2}. Left can take the middle L which the R is sitting on, but the bottom L intuitively seems independent of that top “LR” action. For this reason, I’d guess that “LLR”=”L”+”LR”. To test this, play “LLR”+”R”+”RL”. If Right goes first, they might take the first R, leaving “LL”+”R”+”RL”=”L”+”RL”>0, but they’d lose. If they take the third R, they’d leave “LLR”+”R” and Left could respond with “L”+”R”=0 and Right would still lose. “LLR”+”RL”>”LLR”+”R”, so Left would win if Right started by taking the other R, too. If Left goes first, they can move to “LLR”+”R”+”R”<“LL”+”R”+”R”=0, but they’d lose. They could also move to “L”+”R”+”RL”=”RL”<0, but they’d still lose. And “R”+”RL” is even worse! “LLR”+”R”+”RL” is a *P*-position, so “LLR”=”L”+”LR”=1+1/2. Similarly, “LLLR”=2+1/2, etc.

Finally, let’s look at one more family before I make a general claim. Consider “LRL”; this is potentially better for Left than “LR”, but worse than “L”, so it’s somewhere in between 1/2 and 1. It actually turns out to be 3/4. We can prove that “LRL”+”LRR”=”L” by playing “LRL”+”LRR”+”R”. It’s easy to see (from previously known facts) that all of Left’s starting moves result in negative games, so this is ≤0. If Right takes the first R, they leave “L”+”LRR”+”R”=”LRR”>0. If Right takes the third R (the second is worse) then Left moves to 0 by leaving “LR”+”LR”+”R”. If Right takes the last R, Left leaves “LRL”>0. There will be a way to avoid these case analyses with a general theorem later. “LRLL” is even better for left, and equals 7/8, etc.
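These case analyses are mechanical enough to hand off to a brute-force search. A Python sketch (the tuple-of-strings encoding is mine); note that the negative of a string is the string with Ls and Rs swapped, so adding ”R” plays the role of subtracting ”L”:

```python
from functools import lru_cache

# A sum of Hackenstrings as a tuple of strings; a move deletes one of your own
# letters together with everything to its right in that string.
def moves(pos, player):
    return [pos[:i] + (s[:j],) + pos[i+1:]
            for i, s in enumerate(pos)
            for j, c in enumerate(s) if c == player]

@lru_cache(maxsize=None)
def wins_moving_first(pos, player):
    other = 'R' if player == 'L' else 'L'
    return any(not wins_moving_first(m, other) for m in moves(pos, player))

def is_P(pos):
    """P-position: whoever moves first loses."""
    return not wins_moving_first(pos, 'L') and not wins_moving_first(pos, 'R')
```

This replays the arguments above: “LR”+”LR”+”R”, “LRR”+”LRR”+”RL”, and “LRL”+”LRR”+”R” are all *P*-positions, while “LR”+”R” is not.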

**General Hackenstring Values**

Since the negative of a basic Hackenstring position switches all of the Ls and Rs, and we know “”=0, it suffices to describe the values of strings starting with L. We will focus only on finite strings for now. A string consisting only of n Ls just has the integer value n. Otherwise, suppose there are n Ls before the first R. Then the value is n-(1/2)+a_{1}(1/4)+a_{2}(1/8)+a_{3}(1/16)… where a_{i}=1 if the i^{th} character after the first R is L, -1 if it’s R, and 0 if the string has no character i spaces after the first R. In this way, it’s relatively easy to convert finite binary representations of numbers (the so-called dyadic fractions) to Hackenstrings: 10.1100101_{(2)}=”LLLRLLRRLR”, etc. The book Lessons in Play brought to my attention an equivalent method due to Berlekamp: the value is (n-1)+the binary number you get by reading “L”s as 1s and “R”s as 0s after the first “LR”, with a 1 appended. This can be seen in: *10*.**110010**1_{(2)}=”*LL*LR**LLRRLR**“. I make these claims without proof because they’d be really annoying to prove without some better general theorems we will cover later.
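The conversion rule is easy to sketch in Python (the function name is mine), using exact `Fraction` arithmetic to keep the dyadic values honest:

```python
from fractions import Fraction

def hackenstring_value(s):
    """Value of a finite Hackenstring of 'L's and 'R's, per the formula above."""
    if not s:
        return Fraction(0)
    if s[0] == 'R':
        # The negative of a string swaps Ls and Rs.
        return -hackenstring_value(''.join('L' if c == 'R' else 'R' for c in s))
    n = 0
    while n < len(s) and s[n] == 'L':
        n += 1
    if n == len(s):
        return Fraction(n)          # all Ls: the integer n
    # n Ls before the first R: value = n - 1/2 + a_1/4 + a_2/8 + ...
    value = Fraction(n) - Fraction(1, 2)
    for i, c in enumerate(s[n+1:], start=2):
        value += (1 if c == 'L' else -1) * Fraction(1, 2**i)
    return value
```

For instance, it reproduces ”LR”=1/2, ”LRL”=3/4, and the 10.1100101₂ example: ”LLLRLLRRLR” evaluates to 357/128.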

To help you visualize this pattern, here’s a tree diagram (except here they’re using ↿s instead of Ls and ⇃s instead of Rs):

**0.999…=1?**

We can use this idea to set up a funny comparison to the classic “.999…=1” argument. Note that “LR”=.1_{(2)}, “LRL”=.11_{(2)}, “LRLL”=.111_{(2)}, “LRLLL”=.1111_{(2)}, etc. Now consider the game “LRLLLL…” where Right’s only move is to “L” and Left can move to any finite “LRLLL…L” (or to 0). However, this game (which seems like it might have been .1111…_{(2)}=1) is *not* equal to “L”. To see this, we have to play “LRLLL…”+”R”. If Left moves first, they either move to “R” and lose, or to something of the form “LRLLL…L”+”R” and Right moves to “L”+”R”=0 for the win. But if Right moves first, they just move to “L”+”R”=0 to win immediately. This is an *R*-position, so “LRLLL…”<“L”!

People often say things like “If .999… were less than 1, then their difference would be nonzero, but that’s ridiculous.” In this case, we know that “L”-“LRLLL…”>0, but there’s actually more that can be said. I claim that “L”-“LRLLL…”=”LRRRR…”. We have to check that “LRRRR…”+”LRLLL…”+”R” is a *P*-position, using the values “LR”=1/2, “LRR”=1/4, etc. and “LRL”=3/4, “LRLL”=7/8, etc. If Left moves to, say, “LRRRR…”+”LRLL”+”R” then Right moves to “LRRR”+”LRLL”+”R”=1/8+7/8-1=0 and Left loses. If Left takes a whole string then Right moves to “L”+”R”=0 and Left loses. If Right goes first, then taking the R in “LRLLL…” or the lone “R” leaves a positive position, and if they move in “LRRRR…” to something like “LRR”+”LRLLL…”+”R” then Left can move to “LRR”+”LRL”+”R”=1/4+3/4-1=0 and Right loses. Therefore, the difference between “LRLLL…” and “L” is the infinitesimal “LRRR…”.

**Exercises**

- How do things like * and ±1 compare to fractional values like 1/2 or 1+1/2?
- The thing that should be 0.111…_{(2)}=1, wasn’t. What about the thing that should be 1/3=0.010101…_{(2)}, “LRRLRLRLRLR…”? Does “LRRLRLR…”+”LRRLRLR…”+”LRRLRLR…”=”L”?

Now that we have verified the table of possibilities for adding games (at the end of II.1), and understand which positions count as positive (*L*-positions) and negative (*R*-positions), we can start comparing positions to one another.

With the concept of positive in hand, we can go on to define > and ≥, etc. G>H when G-H is an *L*-position. G≥H when G>H or G=H (equivalently, when Left has a winning strategy when *Right* goes first in G-H). Under these definitions, all of the regular inequalities with integers like “-2>-3” hold for the Conway position versions of them. Also, most of the basic facts about inequalities hold: G>H implies -G<-H, etc. Basically, G>H means something like “G is better for Left than H no matter who goes first”.

**The order isn’t ideal**

However, be warned: there are two things that “go wrong” compared to common inequalities you’re used to. Firstly, not every Conway position is positive (an *L*-position), negative (an *R*-position), or 0 (a *P*-position), so not every pair of positions is comparable. For instance, since * is an *N*-position, “*>0”, “*=0”, and “*<0” are all false! We write “*‖0” and say “star is incomparable with zero” or “star is fuzzy (with respect to zero)” or “star is confused with zero”. Slightly confusingly for some programmers, the notation “*<>0” is sometimes used for this “confused” relationship. The other main thing that goes wrong is that there are games like * and {1+*|-1} which aren’t nonnegative to begin with, but become nonnegative (or even positive, in the latter case) when you add them to themselves: *+*=0, {1+*|-1}+{1+*|-1}>0. In technical language, these Conway positions form a “perforated partially ordered abelian `GROUP`”.

**More Confusion**

To become a little more comfortable with this “confused” idea, let’s look more carefully at some familiar positions. *‖0, but how does * compare with 1? Well, in *-1, if Left goes first they move to -1, and then Right moves to 0 and wins (eventually, if Left happens to have moves in the particular representation of 0, like {-1|1}=0). If Right goes first then they move to -1 and win (eventually). Therefore *-1 is an *R*-position so *<1. Since -*=*, we have -1<* for free. So somehow, * occupies a rough position between -1 and 1, but it's confused with 0.

With ±1={1|-1}, there’s even more confusion. If Left goes first in (±1)-1, they move to 1-1=0, and win. If Right goes first, they move to -1-1=-2, and win. Therefore, ±1-1‖0, so ±1‖1 (similarly, ±1‖-1). However, let’s see how ±1 compares to 2: If Left goes first in ±1-2, their only move (up to equivalence) is to 1-2=-1, and they lose. If Right goes first, they move to -1-2=-3 and win. Either way Right wins, so ±1-2<0, i.e., ±1<2 (and similarly ±1>-2). You might visualize this as ±1’s location on a number line being unclear: like a cloud over the interval [-1,1], but definitely between -2 and 2. Analogously, ±n‖m for -n≤m≤n but ±n<m for m>n and ±n>m for m<-n.
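All of these comparisons come down to playing out the difference game, which can be automated for short positions. A Python sketch (the pair-of-option-tuples encoding and the names are mine):

```python
# Positions as (left_options, right_options) pairs of tuples.
ZERO = ((), ())

def integer(n):
    """n as a position: {n-1 | } if n > 0, { | n+1} if n < 0."""
    if n == 0:
        return ZERO
    return ((integer(n - 1),), ()) if n > 0 else ((), (integer(n + 1),))

PM_ONE = ((integer(1),), (integer(-1),))  # the switch ±1 = {1|-1}

def neg(g):
    return (tuple(neg(x) for x in g[1]), tuple(neg(x) for x in g[0]))

def add(g, h):
    return (tuple(add(gl, h) for gl in g[0]) + tuple(add(g, hl) for hl in h[0]),
            tuple(add(gr, h) for gr in g[1]) + tuple(add(g, hr) for hr in h[1]))

def left_first_wins(g):
    return any(not right_first_wins(gl) for gl in g[0])

def right_first_wins(g):
    return any(not left_first_wins(gr) for gr in g[1])

def compare(g, h):
    """'>', '<', '=', or '‖', according to the type of the difference G-H."""
    diff = add(g, neg(h))
    return {(True, False): ">", (False, True): "<",
            (False, False): "=", (True, True): "‖"}[
                (left_first_wins(diff), right_first_wins(diff))]
```

This confirms that ±1 is confused with 1 and -1 but sits strictly between -2 and 2.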

Working with Conway games is complicated, so we need some notation to make them easier to work with. To really understand a given position G, all we need to do is understand what positions Left can move to and what positions Right can move to; these are called the “left options of G” (resp. “right options”). Now traditionally, if we had an ordered pair of sets, it would be written like ({a,b,c},{x,y}). However, that’s a lot of brackets and the commas mean two different things, which is confusing. Instead, we’ll write things like G={a,b,c|x,y} for the position with left options a, b, and c, and right options x and y. Note that by definition of the negative of a game, -G={-x,-y|-a,-b,-c}.

In general, we may use symbols for sets of options, e.g. a general game G={Gℓ|Gr} whose set of left (resp. right) options is Gℓ (resp. Gr). This makes general arguments and recursive definitions easier to write. For instance, the negative of a game G={Gℓ|Gr} is defined recursively by -G={-Gr|-Gℓ} (where -Gr means the set of the negatives of the positions in Gr). In G+H, Left can either move in G or move in H, and similarly for Right. Thus, G+H={Gℓ+H,G+Hℓ|Gr+H,G+Hr}. Note that H and G are *not* sets. The Left-moves-first and Right-moves-first Simple versions of G={Gℓ|Gr} can be given recursively by `LeftFirst`(G)={`RightFirst`(Gℓ)|`RightFirst`(Gℓ)} and `RightFirst`(G)={`LeftFirst`(Gr)|`LeftFirst`(Gr)}, etc.

When using this notation, it would be annoying if we always had to keep track of the differences between equivalent positions. Luckily, we can generalize the lemma for the Sprague-Grundy theorem at the beginning of I.5:

**1. If Gℓ_{i}=Hℓ_{i} for all i and Gr_{j}=Hr_{j} for all j, then G:={Gℓ_{1},Gℓ_{2},…|Gr_{1},Gr_{2},…}=H:={Hℓ_{1},Hℓ_{2},…|Hr_{1},Hr_{2},…}.**

Why? Let’s play G-H=G+(-H). Whichever component the first player moves in, the second player can answer in the matching option of the other component: if, say, Left moves to Gℓ_{i}+(-H), then Right responds by moving -H to -Hℓ_{i}, leaving Gℓ_{i}-Hℓ_{i}, which is a *P*-position since Gℓ_{i}=Hℓ_{i} (by fact 4. from II.1). Playing this way, the second player always has a winning response, so G-H is a *P*-position and G=H. ✓

Now that we have a way of writing things down and are sure that for most purposes we only have to worry about positions up to equivalence, let’s look at some simple Conway positions.

**Integers**

The simplest position of all is {|}, where neither Left nor Right have any legal moves (the *P*-position). This game was denoted [0] before, but we’ll just call it “0”; note that -0={|}=0. With this in hand, we can build some new positions: {0|} is the position where Left has one available move (which ends the game), and Right can’t move at all. {0|} is traditionally denoted “1”, and it’s the simplest *L*-position. Now, -1={|-0}={|0} is the same thing except Right has the move, not Left. 1+1={0|}+{0|} is a game where Right has no moves, and Left’s only moves turn one component into 0, which turns the whole game into 1. Thus, 1+1={1|}; this is called “2”. Similarly, “3” denotes 2+1={1|}+{0|}={{1|},{0|}+{0|}|}={2,1+1|}={2|}, and so on. In general, if n is a nonnegative integer, then n+1={n|} is an *L*-position. Similarly, if n is a nonnegative integer, then -n-1={|-n} is an *R*-position.

**Some N-positions**

Another position we can build from 0 is {0|0}. This is a game in which either player has a game-ending move, and no other moves. We already had a name for this before: [1]. However, to make CGT things readable, I’ll introduce Conway’s notation for this position: “*” (read “star”). {0,*|0,*} is a position where both players can move to *, or move to 0. We called this [2] before, but now we’ll call it “*2” (“star two”). Analogously, *3={0,*,*2|0,*,*2}=[3], etc. All of these stars are *N*-positions: either player can win immediately by moving to 0.

Another bunch of *N*-positions are written with a ±, and are special cases of a type of position called a “switch”. ±1:={1|-1} is a position in which if Left gets to move first, they secure a future move for themselves, and similarly for Right. Analogously we have ±n:={n|-n} for any positive integer n.

**Equivalent positions?**

There’s a whole menagerie of other positions I haven’t mentioned, but I want to point out that some of the positions you might build are equivalent to others we’ve discussed. For example, {-1|1} (not to be confused with ±1={1|-1}) is a game in which Left’s only move gives Right a move, and vice versa. In detail: If Left goes first, they move to -1={|0} and then Right moves to 0 and Left loses. Similarly, Right loses if they go first. Thus, {-1|1} is a *P*-position, and hence {-1|1}=0. In general, {-n|m}=0 for positive integers n and m. Another bunch of positions you might worry about are ones in which both Left and Right can move to the same bunch of stars (or zero), e.g. {0,*,*2,*5|0,*,*2,*5}. In such a case, the Sprague-Grundy theorem tells us that the position is the mex of the stars: {0,*,*2,*5|0,*,*2,*5}=*3, etc. We’ll find some more general techniques for simplifying Conway positions later.
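That mex claim is easy to confirm by brute force, using the fact that G=H exactly when G-H is a *P*-position. A Python sketch (the pair-of-option-tuples encoding and the names are mine):

```python
from functools import lru_cache

# Positions as (left_options, right_options) pairs of tuples.
def star(n):
    """*n = {0, *, ..., *(n-1) | 0, *, ..., *(n-1)}; star(0) is just 0."""
    opts = tuple(star(k) for k in range(n))
    return (opts, opts)

POS = ((star(0), star(1), star(2), star(5)),) * 2  # {0,*,*2,*5|0,*,*2,*5}

def neg(g):
    return (tuple(neg(x) for x in g[1]), tuple(neg(x) for x in g[0]))

def add(g, h):
    return (tuple(add(gl, h) for gl in g[0]) + tuple(add(g, hl) for hl in h[0]),
            tuple(add(gr, h) for gr in g[1]) + tuple(add(g, hr) for hr in h[1]))

@lru_cache(maxsize=None)
def left_first_wins(g):
    return any(not right_first_wins(gl) for gl in g[0])

@lru_cache(maxsize=None)
def right_first_wins(g):
    return any(not left_first_wins(gr) for gr in g[1])

def equals(g, h):
    """G = H exactly when G-H is a P-position."""
    diff = add(g, neg(h))
    return not left_first_wins(diff) and not right_first_wins(diff)
```

Since mex{0,1,2,5}=3, the position equals *3 but not *2, and the brute-force check agrees.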

**Solutions to Exercise from II.1**

*L*+*L*=*L*

In either component, Left has a winning strategy whether they go first or second. All they have to do is always respond (appropriately) immediately in the same component that Right plays in (if Right were to get two consecutive moves in the same component, the strategy goes out the window).

*R*+*R*=*R* is exactly analogous.

*N*+*L*=*NL*

Left can win in the sum if they play first by playing first in N, and then responding to each move of Right’s in the same component they move in. Thus, N+L is either N or L. Both are possible: *+1 is an *L*-position and (±1)+1 is an *N*-position.

*N*+*R*=*NR* is exactly analogous.

*L*+*R* can be any type.

2+(-1) is *L*, 1+(-2) is *R*, 1+(-1) is *P*, (1+*)+(-1) is *N* (note: 1+* is *L*).

*N*+*N* can be any type.

*+* is *P*, *+*2 is *N* (symmetric answer that takes a little thought to verify: both {1+*|-1} and {1+*|-1}+{1+*|-1} are *N*), (±1+1)+(±1) is *L* since ±1=-(±1) (symmetric answer: {2|-1} is *N* but {2|-1}+{2|-1} is *L*), similarly for *R*.
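All of these solution examples are small enough to verify mechanically. A Python sketch (the position encoding and names are mine) that replays the claims above:

```python
# Positions as (left_options, right_options) pairs of tuples.
ZERO = ((), ())
ONE = ((ZERO,), ())
TWO = ((ONE,), ())
NEG_ONE = ((), (ZERO,))
NEG_TWO = ((), (NEG_ONE,))
STAR = ((ZERO,), (ZERO,))
STAR2 = ((ZERO, STAR), (ZERO, STAR))
SWITCH = ((TWO,), (NEG_ONE,))  # {2|-1}

def add(g, h):
    return (tuple(add(gl, h) for gl in g[0]) + tuple(add(g, hl) for hl in h[0]),
            tuple(add(gr, h) for gr in g[1]) + tuple(add(g, hr) for hr in h[1]))

def left_first_wins(g):
    return any(not right_first_wins(gl) for gl in g[0])

def right_first_wins(g):
    return any(not left_first_wins(gr) for gr in g[1])

def outcome(g):
    return {(True, True): "N", (False, False): "P",
            (True, False): "L", (False, True): "R"}[
                (left_first_wins(g), right_first_wins(g))]
```

For instance, it confirms that (1+*)+(-1) is an *N*-position while {2|-1}+{2|-1} is an *L*-position.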

Next time: Fuzziness and the meaning of “G≤H”.

Like the Simple games from before, each Conway game:

- is two-player.
- is “normal play”, meaning when someone has no legal moves, they lose. (If a player with no legal moves were to *win* instead, it would be “misère play”.)
- has no ties; the only way for the game to end is for someone to lose/win.
- is “non-loopy”, meaning the game definitely ends; it can’t go on forever.

However, instead of being impartial, they’ll be “partizan”, so that in a given position, the moves available (right now) to one player might not be the same as the moves available to the other player. As with Simple games, we may occasionally (especially later on) impose the “short” condition (meaning there are finitely many different positions reachable), as it won’t affect much of the general theory as long as we keep the other conditions (2-player, normal-play, no ties, non-loopy). To begin with, we’ll try to reproduce some of the basic results we had for impartial/Simple games; maybe we can even get to something like the Sprague-Grundy theorem (not gonna happen).

The first thing we need to check is the existence of a winning strategy for someone. In a partizan game, the players are distinguished in the moves they can make: we’ll call them Left and Right. If you take a Conway game and add a switch that says “Left moves now” on one side and “Right moves now” on the other and stipulate a starting position for it, then the game effectively becomes Simple. We know that Simple games have a strategy for the “Next” player or the “Previous” player, so for a Conway game position there are four possibilities which need names:

- Both Left-goes-first and Right-goes-first are *N*-positions. We’ll call this an *N*-position since the next player to move wins no matter if they’re Left or Right.
- Both are *P*-positions. We’ll call this a *P*-position.
- Left-goes-first is an *N*-position and Right-goes-first is a *P*-position. We’ll call this an *L*-position, since Left has a winning strategy no matter who goes first.
- The remaining case is an *R*-position.

In chess, a position where White (call them Left) is strong even if Black (Right) gets the next move may be an *L*-position. The initial board setup *may* be an *N*-position (if the person who goes first wins even if it were Black). If you’ve heard of trébuchet, those are *P*-positions.
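For short games, these four cases can be computed by directly playing out the Left-goes-first and Right-goes-first versions. A minimal Python sketch — the encoding of a position as a (left options, right options) pair of tuples is my own convention, not from the post:

```python
# Positions as (left_options, right_options) pairs of tuples of positions.
ZERO = ((), ())                # neither player can move
ONE = ((ZERO,), ())            # Left has one game-ending move, Right has none
NEG_ONE = ((), (ZERO,))        # the mirror image
STAR = ((ZERO,), (ZERO,))      # both players can end the game
SWITCH = ((ONE,), (NEG_ONE,))  # whoever moves first banks a spare move

def left_first_wins(g):
    # Left, moving first, wins iff some left option is a loss for Right moving first.
    return any(not right_first_wins(gl) for gl in g[0])

def right_first_wins(g):
    return any(not left_first_wins(gr) for gr in g[1])

def outcome(g):
    return {(True, True): "N", (False, False): "P",
            (True, False): "L", (False, True): "R"}[
                (left_first_wins(g), right_first_wins(g))]
```

The base case is a player with no options, who loses by the normal play convention; that is exactly what the empty `any(...)` returning `False` encodes.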

Now that we have our types of positions, we can use it to define a sort of equivalence. If G and H are two Conway game positions, we’ll define G+H to be the “(disjunctive) sum” position: you can move in either component if you have a legal move there (this is actually the same definition as the ⊕ combination from before). We’ll say that G_{1} and G_{2} are equivalent if G_{1}+H is always the same type of position (*N* or *P* or *L* or *R*) as G_{2}+H, for any Conway game position H; we write G_{1}=G_{2} if they’re equivalent.

If adding a physical switch turns a Conway game into two Simple games, why do we study Conway games at all? Suppose a Conway game splits into a sum (similar to isolated regions in Go): G=H+J. Then if Left moves in H, Right in J, and Left in H *again*, Left would have made two consecutive moves in H, and that possibility would be lost by only considering the two corresponding simple game versions of H.

It may be instructive to compare/contrast the following results with those about Simple games from I.4.

**1. If G_{1}=G_{2} then G_{1}+H=G_{2}+H for any H.**

The proof is identical to the Simple game case: For each H, we can say that for any H’, (G_{1}+H)+H’=G_{1}+(H+H’) is the same type of position as G_{2}+(H+H’)=(G_{2}+H)+H’ (since G_{1}=G_{2}). Since H’ was arbitrary, G_{1}+H=G_{2}+H. ✓

**2. All P-positions are equivalent!**

This is a little complicated. Suppose G is a *P*-position; we want to show that G+H is the same type of position as H=[0]+H for every H. Whoever has a winning strategy in H (going first or going second) can win G+H the same way: play the H strategy, and whenever the opponent moves in the G component, respond with G’s winning second-player strategy there. Following this recipe, they always have a response until the opponent runs out of moves entirely, so they win G+H too. Since this holds for every H, G=[0]. ✓

The next thing we’d want to do is copy the G⊕G=[0] result. The problem is that if G is, say, a Checkers/Draughts position where Right’s pieces are blocked but Left could still move, then there’s no way for Right to play some sort of “copy moves in the other component” strategy in G+G. We need to fix this by introducing some symmetry: The “negative” of a Conway position G, denoted “-G” is just like G, except in -G, Left can move to the negatives of the positions *Right* could move to in G, and vice versa. (This definition is recursive, but since Conway games are non-loopy, that’s not a real problem.) To simplify notation, “G-H” will denote “G+(-H)”.

**3. G-G=[0]=0 for any G.**

Why? Since the roles are reversed in -G, no matter who goes first in whichever component, the other player can always copy the move in the other component. Therefore, G-G is a *P*-position. Since all *P*-positions are equivalent, we can just write G-G=[0]. To simplify matters/make it look more like regular arithmetic, we’ll just write G-G=0. ✓
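The mirroring argument can also be checked by brute force on small examples. A Python sketch (positions encoded as (left options, right options) pairs of tuples — my convention, not the post’s):

```python
# Checking the claim G-G=0 by brute force on a few small positions.
ZERO = ((), ())
ONE = ((ZERO,), ())
STAR = ((ZERO,), (ZERO,))
LOPSIDED = ((ONE, STAR), (ZERO,))  # an asymmetric example

def neg(g):
    # Swap the roles of Left and Right, recursively.
    return (tuple(neg(x) for x in g[1]), tuple(neg(x) for x in g[0]))

def add(g, h):
    # Either player may move in either component.
    return (tuple(add(gl, h) for gl in g[0]) + tuple(add(g, hl) for hl in h[0]),
            tuple(add(gr, h) for gr in g[1]) + tuple(add(g, hr) for hr in h[1]))

def left_first_wins(g):
    return any(not right_first_wins(gl) for gl in g[0])

def right_first_wins(g):
    return any(not left_first_wins(gr) for gr in g[1])

def is_P(g):
    return not left_first_wins(g) and not right_first_wins(g)
```

Even for the asymmetric example, G-G comes out as a *P*-position, as the copying strategy predicts.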

**4. If G-H is a P-position, then G=H (and vice versa).**

Why? By 2, G-H=0. By 1, G-H+H=0+H=H. By 3, G-H+H=G+(-H)+H=G+H-H=G+0=G. Putting the previous two equations together, we have G=H. For the “vice versa” part, if G=H, then G+(-H)=H+(-H) by 1, so G-H=0 by 3, so G-H is a *P*-position. ✓

By splitting ⊕ into the operations + and – (for a Simple game those operations are identical), we were able to recover a nice test for equivalence of Conway positions. In the Simple game case there were few enough types of positions that all the possibilities for sums came up naturally in the discussion. Here, there are many cases, so I’ll use a table (“NX” means “*N*-position or *X*-position”, “?” means any of the four types of positions are possible, and “.” entries are determined by symmetry since G+H=H+G):

```
+ | P  L  R  N
--+-----------
P | P  L  R  N
L | .  L  ?  NL
R | .  .  R  NR
N | .  .  .  ?
```

Our argument that all *P*-positions are equivalent was essentially a proof of the correctness of the first line of this table.

**Exercise:** Prove the correctness of the rest of the table. (Solutions in II.2.)

**Disclaimer:** There are a ton of small facts/things to check that I’ll gloss/skip over. For example, one that’s not too hard to check is “-(A+B)=(-A)+(-B)”; if there’s any detail like this you’d like to see checked, I would be happy to do so, but I skip these things for a bit of increased readability (while still proving a lot of the important stuff, some of which is glossed over in some major sources).