Have there been efforts to introduce non-Greek or non-Latin alphabets into mathematics?

As a physics student, I often find when doing blackboard problems that the lecturer struggles to find a good name for a variable, e.g. "Oh, I cannot use B for this matrix, that's the magnetic field."

Even ignoring the many letters used for common physical concepts, it seems most of the usual Greek and Latin letters already have connotations that would make their usage for other purposes potentially confusing. For instance, one would associate $p$ and $q$ with integer arguments, $i,j,k$ with indices or quaternions, $\delta$ and $\varepsilon$ with small values, $w$ with complex numbers, $A$ and $B$ with matrices, and so forth.

It then seems strange to me that there has been no effort to introduce additional alphabets into mathematics; two obvious candidates, chosen for their visual clarity, would be Norse runes or Japanese katakana.

The only example I can think of offhand of a non-Greek, non-Latin character that has mainstream acceptance in mathematics is the Hebrew character aleph ($\aleph$), though perhaps there are more.

My question, then, is: have there been any serious mainstream efforts, perhaps through using them in books, or by directly advocating them in lectures or articles, to introduce characters from other alphabets into mathematics? If there have been, why have they failed, and if there haven't been, why is it generally seen as unnecessary?

Thank you, and sorry if this isn't an appropriate question for math.stackexchange.com; reading the FAQ made it appear that questions of this type are right on the borderline of acceptability.


To add to some of the letters and alphabets mentioned, in set theory, the Hebrew letters $\aleph$ and $\beth$ are used.

It is common, as you mention, to use specific letters for specific purposes. More obscure, foreign letters are probably seldom used simply because there is no need to introduce them. Mathematicians already have two alphabets to choose variables from!
However, things that have specific purposes, like constants or special functions, cannot be given the same letter without causing some confusion.


In the 1960s, a fellow at IBM by the name of Kenneth Iverson created a new mathematical notation that originally held the name "Iverson's Better Math".

He published it in a book called A Programming Language, and since IBM wasn't too keen on the internal nickname, the notation itself came to be known as APL. (Iverson didn't mean programming language in the computer programming sense, though an interpreter was in fact soon implemented, and you can now execute APL on a computer.)

You can see the symbol set used in this on-line APL interpreter, and you'll note that there aren't many Greek or Latin characters at all. (The iota generates a vector of sequential integers, rho reshapes the rows and columns of an n-dimensional array, and alpha and omega are used for defining functions of one or two variables.)

APL relies heavily on function composition, and often variable names are not needed at all. The number we call pi is represented by a circle (which can also be used for all the trigonometric functions), and one of those symbols does the work of both e and log.

It also uses an explicit multiplication sign, which means any word at all can be used to represent a variable, whenever you actually do need one.
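To make the primitives mentioned above concrete for readers who don't know APL, here is a loose Python sketch of their behavior. This is not APL itself, and the function names (`iota`, `rho`, `circle`) are just my transliterations of the symbols, not anything from a real library:

```python
import math

# Rough Python analogues of a few APL primitives (illustrative only).

def iota(n):
    """APL's iota: the vector of the first n positive integers."""
    return list(range(1, n + 1))

def rho(shape, data):
    """APL's rho (reshape): refill `data` cyclically into a rows x cols matrix."""
    rows, cols = shape
    flat = [data[i % len(data)] for i in range(rows * cols)]
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

def circle(k, x):
    """APL's circle function: the left argument k selects a pi-related
    function of x.  A small subset: 0 gives sqrt(1 - x^2), 1/2/3 give
    sin/cos/tan."""
    return {0: lambda v: math.sqrt(1 - v * v),
            1: math.sin, 2: math.cos, 3: math.tan}[k](x)

print(iota(5))               # [1, 2, 3, 4, 5]
print(rho((2, 3), iota(6)))  # [[1, 2, 3], [4, 5, 6]]
```

In actual APL each of these is a single symbol (⍳, ⍴, ○), which is exactly the point of the notation: the composition `2 3 ⍴ ⍳ 6` reads as one short expression.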

Iverson's book is online at http://www.jsoftware.com/papers/APL.htm if you're interested. There's also a shorter article called Notation as a Tool of Thought, which I believe was his Turing Award lecture.

This was a serious effort to reform mathematical notation, and it was actually quite popular at the time. You could even get a typewriter with the APL characters (in fact the character set was partially chosen based on what you could type on an IBM typewriter). But the commercial book publishers of the day had trouble with all those new characters, and of course the notation has a rather steep learning curve.

People still use APL today, especially in the financial markets, along with modern variations like J and K that stick to ASCII symbols while managing to remain just as cryptic.

I'm not sure that's exactly mainstream acceptance, but there you go :)


In advanced number theory, arithmeticians have introduced the Russian letter Ш, pronounced "sha", as in the Tate–Shafarevich group it denotes.
But this is very localized.

Apart from the Greek alphabet, the only different alphabet I know of used in a Latin environment is Fraktur, popularly known as Gothic.
It is used extensively in algebra for ideals in rings.
Actually, essentially all standard references in commutative algebra and algebraic geometry make use of Fraktur: Atiyah-Macdonald, Dieudonné-Grothendieck's EGA, Görtz-Wedhorn, Hartshorne, Jacobson, Matsumura, Qing Liu, Shafarevitch, Zariski-Samuel,...

Edit The $\LaTeX$ command for Fraktur is \mathfrak. For example:

Let $\mathfrak p$ be a prime ideal, $\mathfrak q$ a primary ideal and $\mathfrak a,\mathfrak b, \mathfrak c \:$ arbitrary ideals of the ring $A$, then...


The letter eth (ð), present in the Old English, Icelandic, Faroese, and Elfdalian alphabets, "is sometimes used in mathematics and engineering textbooks as a symbol for a spin-weighted partial derivative", according to Wikipedia — i.e. it's used as a variation on the usual partial derivative symbol ∂.

(Which makes me wonder if Kip Thorne has ever used the letter thorn (þ)... if not, someone should persuade him to!)

In a similar vein, I've also just learnt that ħ, which in quantum physics represents the reduced Planck constant (i.e. the Planck constant h divided by 2π), is "used in Maltese for a voiceless pharyngeal fricative consonant". (I'm ashamed to say I didn't even know Maltese was a language!)


The empty-set symbol $\varnothing$ is actually derived from the Danish–Norwegian letter Ø.

In addition to $\aleph$ and $\beth$, which were mentioned by Argon, there is also $\gimel$ (gimel, the third letter of the Hebrew alphabet); Cantor also used tav (the last letter), but that one didn't stick.


I should add that it has become a convention in mathematics to name variables with certain types of letters. Of course $n$, being a free variable, can denote anything, but it alerts the reader that the variable is a positive integer; similarly, $\varepsilon$ suggests a very small quantity.

Think about it for a moment: what is $0$ if not a convention for the additive neutral element? This goes further: $0$ and $1$ are used as the neutral elements in rings and groups that may have little to do with the real numbers. Why? Because the notation alerts the reader that this is a special element.

Similarly, fonts can indicate things: in a measure theory course, the professor told us that "lowercase is for elements, capital for sets, and cursive for collections of sets". You often see people use $\cal F$ for a filter, even when $F$ has not been used before, simply because this is the customary font for filters.

Due to these conventions, it is often hard to find letters once you have already used the basic ones (sometimes in various fonts).

(I actually heard a story about a professor who used $x,{\rm x}, X, {\scr X}$ and, after a few minutes, just wrote a huge $x$ symbol because he had run out of letters.)