Solving very large matrices in "pieces"
The first question that should be asked of any person presenting you with a large-ish matrix is: "is the matrix dense or sparse?" In the latter case, one definitely does not need to allocate space for the zero entries, and should instead look into special techniques for storing them (which, as I understand it, nowadays rely on a liberal dose of graph theory in the general case, though band matrices are still handled by storing their bands in an appropriate format).
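To make this concrete, here is a minimal sketch of storing a band matrix in compressed sparse row (CSR) format with scipy.sparse (an assumed dependency, used here only for illustration); only the nonzero entries are kept:

```python
import numpy as np
from scipy import sparse

# Store only the nonzero entries (here, the three diagonals of a band matrix)
# instead of allocating the full n-by-n array of mostly zeros.
n = 1000
rows = np.concatenate([np.arange(n), np.arange(n - 1), np.arange(1, n)])
cols = np.concatenate([np.arange(n), np.arange(1, n), np.arange(n - 1)])
vals = np.concatenate([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)])
A = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))

print(A.nnz, "stored entries out of", n * n)
```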
Now, if even after that you have the perfect excuse for having a large dense matrix (which I still believe is quite unlikely), there is a way to compute the inverse and the determinant of a large matrix via partitioning.
Say we have
$\textbf{A}=\begin{pmatrix}\textbf{E}&\textbf{F}\\ \textbf{G}&\textbf{H}\end{pmatrix}$
where $\textbf{E}$ and $\textbf{H}$ are square matrices with dimensions $m\times m$ and $n\times n$ respectively, and $\textbf{F}$ and $\textbf{G}$ are dimensioned appropriately (so the dimension of $\textbf{A}$ is $(m+n)\times(m+n)$). The inverse can then be computed as
$\textbf{A}^{-1}=\begin{pmatrix}\textbf{E}^{-1}+\left(\textbf{E}^{-1}\textbf{F}\right)(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\left(\textbf{G}\textbf{E}^{-1}\right)&-\left(\textbf{E}^{-1}\textbf{F}\right)(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\\-(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\left(\textbf{G}\textbf{E}^{-1}\right)&(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\end{pmatrix}$
where the parentheses emphasize common factors that you might profit from computing only once.
As for the determinant, it is given by
$\det\;\textbf{A}=\det\left(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F}\right)\;\det\;\textbf{E}$
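As a sanity check on the bookkeeping, here is a minimal sketch (using NumPy, with block sizes chosen purely for illustration) that forms the Schur complement $\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F}$ and verifies the determinant identity above:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3                              # illustrative block sizes
A = rng.standard_normal((m + n, m + n))

# Partition A into the blocks E, F, G, H.
E, F = A[:m, :m], A[:m, m:]
G, H = A[m:, :m], A[m:, m:]

# Schur complement of E; solve(E, F) plays the role of E^{-1} F.
S = H - G @ np.linalg.solve(E, F)

# det A = det(H - G E^{-1} F) * det(E)
print(np.isclose(np.linalg.det(A), np.linalg.det(S) * np.linalg.det(E)))
```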
EDIT:
I'm not entirely sure what seemed unsatisfactory about this answer, so here is a bit more exposition: as always, the "inverse" here is merely notation! One would most certainly perform LU decompositions of $\textbf{E}$ and $\textbf{H}$ first. One would also partition the right-hand side $\textbf{b}$ accordingly:
$\textbf{b}=\begin{pmatrix}\textbf{c}\\ \textbf{d}\end{pmatrix}$
so $\textbf{c}$ is a length-$m$ vector and $\textbf{d}$ is a length-$n$ vector.
Formally multiplying the partitioned inverse with the partitioned right-hand side gives
$\begin{pmatrix}\textbf{E}^{-1}+\left(\textbf{E}^{-1}\textbf{F}\right)(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\left(\textbf{G}\textbf{E}^{-1}\right)&-\left(\textbf{E}^{-1}\textbf{F}\right)(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\\-(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\left(\textbf{G}\textbf{E}^{-1}\right)&(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\end{pmatrix}\cdot\begin{pmatrix}\textbf{c}\\ \textbf{d}\end{pmatrix}$
which, when expanded and simplified, is
$\begin{pmatrix}\textbf{E}^{-1}\textbf{c}+\left(\textbf{E}^{-1}\textbf{F}\right)(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\left(\textbf{G}\textbf{E}^{-1}\textbf{c}-\textbf{d}\right)\\-(\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F})^{-1}\left(\textbf{G}\textbf{E}^{-1}\textbf{c}-\textbf{d}\right)\end{pmatrix}$
At this point you should be able to figure out how you would use an available decomposition of $\textbf{E}$ or $\textbf{H}$, and which common subexpressions need be computed only once.
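For concreteness, here is a minimal sketch of such a block solve (using SciPy's lu_factor/lu_solve; the function name and block sizes are my own choices for illustration). It follows the expression above: factor $\textbf{E}$ once, reuse that factorization for every solve involving $\textbf{E}$, and form the Schur complement only once:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def block_solve(E, F, G, H, c, d):
    """Solve [[E, F], [G, H]] [x; y] = [c; d] via the partitioned formula."""
    lu_E = lu_factor(E)                      # LU of E, computed once and reused
    Einv_F = lu_solve(lu_E, F)               # E^{-1} F
    Einv_c = lu_solve(lu_E, c)               # E^{-1} c
    S = H - G @ Einv_F                       # Schur complement H - G E^{-1} F
    y = np.linalg.solve(S, d - G @ Einv_c)   # lower block of the solution
    x = Einv_c - Einv_F @ y                  # upper block of the solution
    return x, y

# Quick consistency check on a random system.
rng = np.random.default_rng(1)
m, n = 4, 3
A = rng.standard_normal((m + n, m + n))
b = rng.standard_normal(m + n)
x, y = block_solve(A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:], b[:m], b[m:])
print(np.allclose(A @ np.concatenate([x, y]), b))   # expect True
```

In practice the LU factorization of the Schur complement (here hidden inside np.linalg.solve) would also be kept if there are multiple right-hand sides.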
This is too long for a comment, so I'll throw in this choice piece by Forman Acton, from his Numerical Methods That Work, on the subject of dealing with large matrices:
Whenever a person eagerly inquires if my computer can solve a set of 300 equations in 300 unknowns, I must quickly suppress the temptation to retort, "Yes, but why bother?" There are, indeed, legitimate sets of equations that large. They arise from replacing a partial differential equation on a set of grid points, and the person who knows enough to tackle this type of problem also usually knows what kind of computer he needs. The odds are all too high that our inquiring friend is suffering from a quite different problem: he probably has collected a set of experimental data and is now attempting to fit a 300-parameter model to it - by Least Squares! The sooner this guy can be eased out of your office, the sooner you will be able to get back to useful work - but these chaps are persistent. They have fitted three-parameter models on desk machines with no serious difficulties and now the electronic computer permits them more grandiose visions. They leap from the known to the unknown with a terrifying innocence and the perennial self-confidence that every parameter is totally justified. It does no good to point out that several parameters are nearly certain to be competing to "explain" the same variations in the data and hence the equation system will be nearly indeterminate. It does no good to point out that all large least-squares matrices are striving mightily to be proper subsets of the Hilbert matrix-which is virtually indeterminate and uninvertible—and so even if all 300 parameters were beautifully independent, the fitting equations would still be violently unstable. All of this, I repeat, does no good—and you end up by getting angry and throwing the guy out of your office...
...The computer center's director must prevent the looting of valuable computer time by these would-be fitters of many parameters. The task is not a pleasant one, but the legitimate computer users have rights, too. The alternative commits everybody to a miserable two weeks of sloshing around in great quantities of "Results" that are manifestly impossible, with no visible way of finding the trouble. The trouble, of course, arises from looking for a good answer to a poorly posed problem, but a computer director seldom knows enough about the subject matter to win any of those arguments with the problem's proposer, and the impasse finally has to be broken by violence—which therefore might as well be used in the very beginning.
If all you are interested in is solving $Ax = B$, I would suggest looking at iterative and/or block-iterative methods. Noting your other question about positivity of solutions, I would suggest that you look at Applied and Computational Linear Algebra: A First Course by Charlie Byrne. The MART algorithm described therein is used for finding positive solutions $x$ to $Ax = B$, and it addresses many similar problems from tomography and signal processing. If you like it, get his Applied Iterative Methods.
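As a minimal sketch of the iterative route (using scipy.sparse.linalg.cg on an assumed symmetric positive definite example; this is plain conjugate gradients, not the MART algorithm from Byrne's book):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

# Illustrative symmetric positive definite sparse system: a 1-D Laplacian.
n = 10000
A = sparse.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
                 offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Conjugate gradients never forms A^{-1}; it only needs matrix-vector products.
x, info = cg(A, b)
print(info == 0, np.linalg.norm(A @ x - b))
```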