Extending Conway Games to $n$ players
Has anyone attempted extending the definition of Conway Games to $n$ players? And if not, what would the definition of $-G$ look like in such a theory?
Motivation
Most of the definitions of a game from ONAG seem to have a natural extension to $n$ players. For instance: let $G = \{G_1|G_2|\ldots|G_n\}$ be a game with $n$ players where player $p$ has the set of options $G_p$. $n$-player addition also extends easily: $G+H = \{G_1 + H, G + H_1|\ldots|G_n + H, G + H_n\}$ (where $G_p + H$ denotes the set $\{G' + H : G' \in G_p\}$). The real difficulty seems to be finding a reasonable definition of $-G$ and some canonical form for $G$. I've attempted a definition of $-G$.
(In the following, I'll take for granted that game addition is commutative and associative as the proof is essentially the same as in the 2-player case. Our games are finite in all cases.)
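To make the definitions concrete, here is a minimal Python sketch of the representation and the disjunctive sum (the encoding and names are my own, not from ONAG): a game is a tuple of $n$ option sets, one per player, with players 0-indexed.

```python
# A game is a nested tuple: entry p holds player p's options (0-indexed).

def add(g, h):
    """Disjunctive sum: player p may move in either component."""
    n = len(g)
    return tuple(
        tuple(add(gp, h) for gp in g[p]) + tuple(add(g, hp) for hp in h[p])
        for p in range(n)
    )

zero = ((), (), ())          # the 3-player game {||}
H    = ((zero,), (), ())     # the game {0||}: only player 0 has a move, to 0

assert add(zero, zero) == zero       # 0 + 0 = 0
assert add(H, zero) == H             # G + 0 = G, structurally
assert add(H, H) == ((H, H), (), ()) # in H + H, player 0 may move in either copy
```

Note that `add(g, h)` and `add(h, g)` produce structurally different tuples (the options are listed in a different order), so commutativity holds up to the obvious identification of options, just as in the 2-player case.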
Negative $G$
Let $\sigma^n$ be a permutation of $(1,2,\ldots,n)$ and $\sigma_p^n$ be the $p$th $(1 \leq p \leq n)$ entry of $\sigma^n$. We will just write $\sigma$ or $\sigma_p$ when $n$ is understood, and we take $G + H$ as above. Define $\stackrel{\sigma}{-}G := \{\stackrel{\sigma}{-}G_{\sigma_1}|\ldots|\stackrel{\sigma}{-}G_{\sigma_n}\}$. (The games $\stackrel{\sigma}{-}G$ are called partial inverses of $G$, or permutations of $G$.)
$$ -G = \sum_{\sigma^n/e} \stackrel{\sigma}{-}G $$
Where we sum over all permutations except the identity ($e=(1,2,\ldots,n)$) permutation. Example: $$ -G = -\{G_1|G_2|G_3\} = \stackrel{132}{-}G +\stackrel{213}{-}G+\stackrel{231}{-}G+\stackrel{312}{-}G+\stackrel{321}{-}G = $$ $$ \{\stackrel{132}{-}G_1|\stackrel{132}{-}G_3|\stackrel{132}{-}G_2\} + \ldots+ \{\stackrel{321}{-}G_3|\stackrel{321}{-}G_2|\stackrel{321}{-}G_1\} $$
The idea is that, if we have the game tree of $G$ with the arrows labeled by the players, then $-G$ is the sum of every other permutation of the labels. Some more examples:
$$ -0_3=-\{||\}=\stackrel{132}{-}0+\ldots+\stackrel{321}{-}0=0 $$ $$ -H=-\{0||\}=\stackrel{132}{-}\{0||\}+\ldots+\stackrel{321}{-}\{0||\}= $$ $$ \{0||\}+\{|0|\}+\{||0\}+\{|0|\}+\{||0\} $$
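The examples above can be checked mechanically. Here is a short Python sketch (my own encoding: a game is a tuple of option sets, one per 0-indexed player) that computes the partial inverses and lists the summands of $-G$:

```python
# A game is a nested tuple: entry p holds player p's options (0-indexed).
from itertools import permutations

def pinv(g, sigma):
    """Partial inverse: player p inherits player sigma[p]'s options."""
    return tuple(
        tuple(pinv(o, sigma) for o in g[sigma[p]]) for p in range(len(g))
    )

def neg_summands(g):
    """The n!-1 summands of -G: partial inverses over non-identity sigma."""
    n = len(g)
    e = tuple(range(n))
    return [pinv(g, s) for s in permutations(range(n)) if s != e]

zero = ((), (), ())
H    = ((zero,), (), ())     # {0||}

# -0 is a sum of copies of 0, hence 0.
assert all(s == zero for s in neg_summands(zero))

# The five summands of -H match {0||} + {|0|} + {||0} + {|0|} + {||0} above.
assert neg_summands(H).count(((zero,), (), ())) == 1   # {0||}
assert neg_summands(H).count(((), (zero,), ())) == 2   # {|0|}
assert neg_summands(H).count(((), (), (zero,))) == 2   # {||0}
```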
We'd of course need to check that $G+(-G)=0$ and $-(-G)=G$ for this to be the $-G$. I suspect we could do this if we equate all games that are first-player losses with the zero game. (With the convention that there is only one loser and that the players play optimally: see the Best-Play Convention below.) Note that $H+(-H)$ from the last example is a first-player loss.
As an aside, we have $G=\stackrel{e}{-}G \implies \sum_{\sigma^n} \stackrel{\sigma}{-}G = 0 \iff G + (-G) = 0$.
But I know the move order!
There's also a strict version of $-G$, which we'll call $\ominus G$, that we can define if we know the order the players move in. Assume the players $(1,2,\ldots,n)$ move in some order $(1+p,2+p,\ldots,n+p)$, with indices taken $\bmod\ n$. Then $\ominus G = \sum_{\sigma^{n}/e} \stackrel{\sigma}{-}G$, but here $\sigma$ ranges over the shifts of $(1+p,2+p,\ldots,n+p)$ (excluding the identity shift $e$) rather than over all permutations, so that our sum contains only $n-1$ terms instead of $n!-1$. (In this case, we can call the $\stackrel{\sigma}{-}G$ strict partial inverses, or shifts, of $G$.)
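A small Python sketch of the strict inverse's index bookkeeping (again with games as nested tuples of option sets, players 0-indexed, names my own): the relabelings used by $\ominus G$ are the $n-1$ non-identity rotations of the play order.

```python
# A game is a nested tuple: entry p holds player p's options (0-indexed).

def shifts(n):
    """The n-1 non-identity cyclic shifts of (0, 1, ..., n-1)."""
    return [tuple((i + k) % n for i in range(n)) for k in range(1, n)]

def pinv(g, sigma):
    """Partial inverse: player p inherits player sigma[p]'s options."""
    return tuple(
        tuple(pinv(o, sigma) for o in g[sigma[p]]) for p in range(len(g))
    )

assert shifts(3) == [(1, 2, 0), (2, 0, 1)]   # 2 = n-1 summands, not 5 = n!-1

zero = ((), (), ())
H    = ((zero,), (), ())                     # {0||}
# The two summands of ⊖H are {||0} and {|0|}.
assert [pinv(H, s) for s in shifts(3)] == [((), (), (zero,)), ((), (zero,), ())]
```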
Strategy Stealing
I've found the strategy stealing argument for $G + (-G) = 0$ from 2-player games doesn't quite work. Consider the following game:
Here the dots indicate a very long, but finite, continuation. WLOG, let $Red$ play first. Then, if the other players appropriately mirror $Red$'s moves, the game will end, after a very long (but finite) sequence of moves, as a loss for $Red$. However, $Red$ and $Blue$ can conspire to end the game after just $8$ moves (with the play order $(\ldots,Red,Blue,Green,Red,\ldots)$) by playing only in $G$ ($=\stackrel{e}{-}G$); $Green$ will be forced to use up his only $2$ available moves. So is this game not the $0$ game?
It is, in fact, the $0$ game as $Blue$ and $Green$ can conspire against $Red$ to end the game even sooner, in a mere $6$ moves (!), by playing in just the $\stackrel{RBG}{-}G$ component. This convention of best play will force a first-player loss (the soonest such loss that can be forced, see below). Note that, taking $G$ from this example, $G+(\ominus G)$ is also a first-player loss.
Best-Play Convention
Suppose the players of some game $G$ are $\mathbb{P} = \{p_1,\ldots,p_n\}$. By best play we mean that, for each $p_k \in \mathbb{P}$, the players $\mathbb{P}/p_k$ cooperate against $p_k$ to force a loss in the fewest moves possible; the outcome of $G$ is the soonest such loss that some coalition can force. If $\mathbb{P}/p_m$ forces that loss, then we say that $p_m$ loses $G$ and we call $\mathbb{P}/p_m$ the winning coalition. What we really mean by $G = 0$ is that $G$ is a first-player loss under this best-play convention.
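The convention can be made precise as a minimax search. Below is a self-contained Python sketch (my own encoding: a game is a tuple of option sets per 0-indexed player, play order cyclic from `start`): for each candidate victim, the coalition minimizes the number of moves until the victim cannot move, the victim maximizes, and a coalition member running out of moves first means the attempt fails.

```python
# A game is a nested tuple: entry p holds player p's options (0-indexed).
INF = float("inf")

def force_loss(g, turn, victim):
    """Fewest moves in which the coalition can strand `victim`, or inf."""
    n = len(g)
    opts = g[turn]
    if not opts:
        return 0 if turn == victim else INF   # wrong player stranded: failure
    results = [1 + force_loss(o, (turn + 1) % n, victim) for o in opts]
    return max(results) if turn == victim else min(results)

def loser(g, start):
    """The player whose loss can be forced soonest (best-play convention)."""
    costs = [force_loss(g, start, v) for v in range(len(g))]
    return min(range(len(g)), key=lambda v: costs[v])

zero = ((), (), ())
H    = ((zero,), (), ())                 # {0||}
assert loser(zero, 0) == 0               # {||} is a first-player loss
assert loser(H, 0) == 1                  # in {0||}, player 1 is stranded
```

Ties between victims are broken arbitrarily here (lowest index); the convention as stated doesn't settle that case.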
Not Quite There...Yet
Unfortunately, under our conventions, our inverse fails at $G = \{0|0|\}$. Either our conventions need to be revisited or we'll have to redefine $-G$. Mark S.'s answer below assumes a different convention and gets a workable inverse, but perhaps too many games are equivalent to the $0$ game under such a convention. I've considered the equivalence $G \equiv 0$ if $G = \stackrel{\sigma}{-}G$, $\forall \sigma$, but it still seems far less interesting than $G \equiv 0$ if $G$ is a first-player loss under Best Play.
In an attempt to derive $-G$, I calculated by hand the (possible) inverses of some simple games under the constraints that the inverse of an $n$-player game of depth $d$ should be a sum of $n!-1$ depth-$d$ games and that the sum of $G$ and $-G$ is a first-player loss. Here are a few inverses of depth-1 3-player games that suggest a simple pattern for similar depth-1 $n$-player games:
By $nG$, we mean $\sum_{1}^{n}G$. If game addition and subtraction work as we'd hope, the second equation above would follow easily from the first by just moving terms around. To figure out more complex inverses, I also assume that inverses distribute over game addition and then check by hand that the result is a first-player loss. I also picked up another trick which works for reasons I can't explain, but which I'll try to demonstrate.
Let's try to figure out $-\{|\{0||0\}|\}$ using just the table of inverses above. We first note that this is just a partial game tree of $-(\{|0|\}+\{0||0\})$. We can directly compute this:
Now we add the terms inside the parentheses and rearrange the terms on the RHS:
At this point I just guess that a subtree of the LHS would be a sum of subtrees on the RHS. As noted above, I have no idea why this would be true, but, at least in this case, it leads me to a working inverse:
Here, the root of the tree is on the dotted line, and I've put the subtrees that give me the inverse I'm looking for above the dotted line. It's worth noting that what's left below the dotted line is not a correct equation, as it is a loss for $Green$.
This may or may not be a useful method for deriving more difficult inverses. Checking these equations by hand is pretty tedious even for these low-depth games, and it may be more useful to get a program to brute-force some of these and see if there are any recognizable patterns. I may or may not attempt this soon, though I'd invite anyone else with an interest in it to try.
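As a starting point for such a program, here is a self-contained Python sketch (encoding and names are mine: a game is a tuple of option sets per 0-indexed player) that builds $G+(-G)$ explicitly and asks whether it is a first-player loss under the best-play convention:

```python
# A game is a nested tuple: entry p holds player p's options (0-indexed).
from functools import reduce
from itertools import permutations

INF = float("inf")

def add(g, h):
    n = len(g)
    return tuple(
        tuple(add(gp, h) for gp in g[p]) + tuple(add(g, hp) for hp in h[p])
        for p in range(n)
    )

def pinv(g, sigma):
    return tuple(
        tuple(pinv(o, sigma) for o in g[sigma[p]]) for p in range(len(g))
    )

def neg(g):
    """-G: the sum of the n!-1 non-identity partial inverses."""
    n = len(g)
    e = tuple(range(n))
    return reduce(add, [pinv(g, s) for s in permutations(range(n)) if s != e])

def force_loss(g, turn, victim):
    n = len(g)
    opts = g[turn]
    if not opts:
        return 0 if turn == victim else INF
    results = [1 + force_loss(o, (turn + 1) % n, victim) for o in opts]
    return max(results) if turn == victim else min(results)

def first_player_loss(g, start=0):
    costs = [force_loss(g, start, v) for v in range(len(g))]
    return costs[start] == min(costs) != INF

zero = ((), (), ())
H    = ((zero,), (), ())
# H + (-H) is a first-player loss, as claimed earlier: play is forced
# and the first player runs out of moves after 6 of them.
assert first_player_loss(add(H, neg(H)))
```

The explicit game trees blow up quickly, so this naive version only scales to very small games; memoizing on positions would be the first improvement to try.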
Definitions
Sum of Games
The OP says "One might hope that [definition of sum]". I think we should take that as the definition of disjunctive sum. I think it is a very natural extension of the 2-player definition, and agrees with the definition in A. Cincotti's "N-player partizan games".
Equaling Zero
For the definition of equality, in the middle of the question, the OP includes some very important information: "I suspect we could do this if we equivocate all games that are first-player losses to the zero game.".
Concern
While that is not a complete definition of equality of games, I would like to point out that it makes the proposed definition of equality considerably coarser than the definitions traditionally used in CGT. It happens to be a sort of coincidental theorem that in normal play, all 2-player games that are first-player losses are equivalent (in the usual sense of preserving outcomes in all sums), but even in 2-player misère play, the spirit of the definition proposed in the OP is violated. For example, the Nim positions $0$ and $G=2+2+1$ are both misère wins for the next player (for $G$, the winning move is to take the heap of size $1$). But since $G+G$ is a misère win for the previous player (mirror until the very end) and $0+G$ is not, we say $G\ne0$.
Possible meanings
In any case, we can break from convention and declare by fiat that $G+\left(-G\right)=0$ just as long as $G+\left(-G\right)$ is a "first-player loss". But then we have another problem: what does it mean here to be a “first-player loss”?
"$0$" could mean:

1. "a game that the first player loses regardless of how the others play",
2. "a game that the first player loses because some other player has a winning strategy",
3. "a game that the first player loses if all other players conspire to make that happen", or
4. "a game that the first player loses if some small coalition conspires to make it happen".

For all of these possibilities, there's also the question of A. "under at least one play order" vs. B. "under all play orders" vs. C. "in the context of a particular play order". The discussion of the example towards the end of the question suggests 3B or 4B was the OP's main intent, though they also seem content to consider 3C or 4C (given the discussion of $\ominus$). The discussion of coalitions in Salt's answer makes it clear that 4 is not an option, so it seems like 3B (or 3C, depending on context) was intended.
Definition
To clear up this ambiguity, I will let "$=0$" mean "a game that the first player will always lose (under any play order) if the other players are colluding with that goal". And let a notation like "$\equiv0$" mean "a game that the first player will always lose (under a play order determined by context) if the other players are colluding with that goal".
Negatives
We will define $-G$ and $\ominus G$ as in the original question. Briefly, $-G$ is the sum of the copies of $G$ relabeled by all non-identity permutations of the players, and $\ominus G$ is the sum of the copies of $G$ relabeled by the non-identity cyclic shifts of the play order.
Negatives are inverses
Theorem 1
$G+\left(\ominus G\right)\equiv0$
Proof:
Given a particular play order for $n$ players, each player other than the first can mirror the first player's moves in their corresponding component, so that after $n$ moves we have $G'+\left(\ominus G'\right)$ for some option $G'$. By structural induction, this is enough to show the claim.
Lemma:
A finite sum of games that are $\equiv0$ is also $\equiv0$.
Proof
First, consider a sum of two games: we wish to show that if $G\equiv0$ and $H\equiv0$ then $G+H\equiv0$. For each move by the first player, the coalition of everyone else can respond in the same component to reduce things to, WLOG, $G+H'$ where $H'\equiv0$. By induction, this is enough for two components. Then by induction on the number of components, we have that any finite sum like ${\displaystyle \sum_{i=1}^{m}G_{i}\equiv0}$ if $G_{i}\equiv0$ for all $i$.
Theorem 2
$G+\left(-G\right)=0$
Proof
The $n!$ permutations of the players are all represented in the sum $G+\left(-G\right)$, and for any choice of play order, they break up into $\left(n-1\right)!$ copies of something of the form $G'+\ominus G'$ for appropriate $G'$s. For example, with $4$ players and a turn order like $\left(1234\right)$, the sum could be suggested by $\left(G^{1234}+G^{2341}+G^{3412}+G^{4123}\right)+\left(G^{1324}+\cdots\right)+\left(G^{1432}+\cdots\right)+\left(G^{1243}+\cdots\right)+\left(G^{1342}+\cdots\right)+\left(G^{1423}+\cdots\right)$. By Theorem 1 and the lemma, this must be $\equiv0$. Since this argument works for any play order, we have $G+\left(-G\right)=0$.
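The counting step in this proof can be verified directly: for a fixed play order, composing a relabeling with a cyclic shift rotates it as a sequence, so the $n!$ relabelings partition into $\left(n-1\right)!$ classes of $n$ sequence rotations each, and each class sums to a copy of $G'+\ominus G'$. A quick Python sketch (names are mine):

```python
# Partition the permutations of n players into classes that are closed
# under cyclic rotation of the sequence (each class = one G' + ⊖G' block).
from itertools import permutations

def rotation_classes(n):
    classes = {}
    for p in permutations(range(n)):
        canon = min(p[k:] + p[:k] for k in range(n))   # canonical rotation
        classes.setdefault(canon, []).append(p)
    return classes

classes = rotation_classes(4)
assert len(classes) == 6                           # (4-1)! classes
assert all(len(c) == 4 for c in classes.values())  # n rotations in each
# e.g. (2,3,4,1) in 1-indexed notation sits in the class of the identity:
assert (1, 2, 3, 0) in classes[(0, 1, 2, 3)]
```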
Properties of $=$
Neither the OP nor Salt's answer contains a definition of $=$ in general. But in Salt's answer, it seems that they assume 1. $=$ is transitive, and 2. $X=0\Rightarrow G+X=G$.
The $-G$
As outlined in Salt's answer, we can use those properties to show algebraically that $-\left(-G\right)=G$: note that $G+\left(-G\right)+\left(-\left(-G\right)\right)=G$ since $\left(-G\right)+\left(-\left(-G\right)\right)=0$, and $G+\left(-G\right)+\left(-\left(-G\right)\right)=-\left(-G\right)$ since $G+\left(-G\right)=0$. By transitivity, we have $G=-\left(-G\right)$.
Also, the OP mentioned "the $-G$" in a way that seemed to implicitly assume uniqueness, as in $G+G_{1}=G+G_{2}=0\Rightarrow G_{1}=G_{2}$. This can be checked with very similar algebra, just considering $G_{1}+G+G_{2}$ to show that $G_{1}=G_{2}$.
Comments
Since $X\equiv0$ is true of many games, "$X\equiv0\Rightarrow G+X\equiv G$" would make a lot of games equal under $\equiv$. For example, if $X$ is any game where the first player never has a move, then $X\equiv0$, so that $G+X\equiv G$ for all $G$, even if $X$ does something like give the seventh player a million free moves so that they would win $G+X$ for $G$ of reasonably small size. For $=$, rather than $\equiv$, the situation would be less extreme, but still coarser than a standard definition, as noted earlier.