A better way to replace many strings - obfuscation in C#
I'm trying to obfuscate a large amount of data. I've created a list of words (tokens) which I want to replace and I am replacing the words one by one using the StringBuilder class, like so:
var sb = new StringBuilder(one_MB_string);
foreach (var token in tokens)
{
    sb.Replace(token, "new string");
}
It's pretty slow! Are there any simple things that I can do to speed it up?
tokens is a list of about one thousand strings, each 5 to 15 characters in length.
Solution 1:
Instead of doing the replacements in one huge string (which means moving a lot of data around), work through the string and replace one token at a time.
Make a list containing the next occurrence index for each token, locate the token that comes first, then copy the text up to that token to the result, followed by the token's replacement. Then find the next occurrence of that token in the string to keep the list up to date. Repeat until no more tokens are found, then copy the remaining text to the result.
I made a simple test: this method did 125,000 replacements on a 1,000,000-character string in 208 milliseconds.
Token and TokenList classes:
using System.Collections.Generic;
using System.Text;

public class Token {
    public string Text { get; private set; }
    public string Replacement { get; private set; }
    public int Index { get; set; }

    public Token(string text, string replacement) {
        Text = text;
        Replacement = replacement;
    }
}

public class TokenList : List<Token> {
    public void Add(string text, string replacement) {
        Add(new Token(text, replacement));
    }

    // Returns the token with the smallest index, i.e. the next one
    // to occur in the text, or null if no token occurs any more.
    private Token GetFirstToken() {
        Token result = null;
        int index = int.MaxValue;
        foreach (Token token in this) {
            if (token.Index != -1 && token.Index < index) {
                index = token.Index;
                result = token;
            }
        }
        return result;
    }

    public string Replace(string text) {
        StringBuilder result = new StringBuilder();
        // Find the first occurrence of each token.
        foreach (Token token in this) {
            token.Index = text.IndexOf(token.Text);
        }
        int index = 0;
        Token next;
        while ((next = GetFirstToken()) != null) {
            // Copy the text between the previous token and this one,
            // then append the replacement instead of the token itself.
            if (index < next.Index) {
                result.Append(text, index, next.Index - index);
                index = next.Index;
            }
            result.Append(next.Replacement);
            index += next.Text.Length;
            // Find the next occurrence of this token.
            next.Index = text.IndexOf(next.Text, index);
        }
        // Copy the text after the last token.
        if (index < text.Length) {
            result.Append(text, index, text.Length - index);
        }
        return result.ToString();
    }
}
Example of usage:
string text =
    "This is a text with some words that will be replaced by tokens.";
var tokens = new TokenList();
tokens.Add("text", "TXT");
tokens.Add("words", "WRD");
tokens.Add("replaced", "RPL");
string result = tokens.Replace(text);
Console.WriteLine(result);
Output:
This is a TXT with some WRD that will be RPL by tokens.
Note: This code does not handle overlapping tokens. If, for example, you have the tokens "pineapple" and "apple", the code doesn't work properly.
Edit:
To make the code work with overlapping tokens, replace this line:
next.Index = text.IndexOf(next.Text, index);
with this code:
foreach (Token token in this) {
    if (token.Index != -1 && token.Index < index) {
        token.Index = text.IndexOf(token.Text, index);
    }
}
Solution 2:
OK, you see why it's taking so long, right?
You have a 1 MB string, and for each token, Replace iterates through the 1 MB and makes a new 1 MB copy. Well, not an exact copy, as any token found is replaced with the new token value. But for each token you're reading 1 MB, newing up 1 MB of storage, and writing 1 MB.
Now, can we think of a better way of doing this? How about, instead of iterating the 1 MB string once per token, we walk it once in total?
Before walking it, we'll create an empty output string.
As we walk the source string, if we find a token, we'll jump token.Length characters forward and write out the obfuscated token. Otherwise we'll proceed to the next character.
Essentially, we're turning the process inside out: doing the for loop on the long string, and at each point looking for a token. To make this fast, we'll want quick lookup of the tokens, so we put them into some sort of associative array (a set).
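That inside-out walk can be sketched like this. This is hypothetical code, not from any of the answers: it assumes the token/replacement pairs live in a Dictionary, and it tries longer candidates first so that overlapping tokens like "pineapple" and "apple" resolve to the longer one:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

static class SinglePassReplace
{
    // Walk the input once; at each position, try to match a token from the
    // dictionary (longest candidate first) and append its replacement,
    // otherwise copy the current character.
    public static string Replace(string text, Dictionary<string, string> map)
    {
        int minLen = int.MaxValue, maxLen = 0;
        foreach (string t in map.Keys)
        {
            minLen = Math.Min(minLen, t.Length);
            maxLen = Math.Max(maxLen, t.Length);
        }

        var result = new StringBuilder(text.Length);
        int i = 0;
        while (i < text.Length)
        {
            bool matched = false;
            // Longest first, so "pineapple" beats "apple" at the same position.
            for (int len = Math.Min(maxLen, text.Length - i); len >= minLen; len--)
            {
                string replacement;
                if (map.TryGetValue(text.Substring(i, len), out replacement))
                {
                    result.Append(replacement);
                    i += len;
                    matched = true;
                    break;
                }
            }
            if (!matched)
            {
                result.Append(text[i]);
                i++;
            }
        }
        return result.ToString();
    }
}
```

With the 5-to-15-character tokens from the question this tries at most 11 substring lookups per position; a trie or an Aho-Corasick automaton would avoid even that.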
I see why it is taking long all right, but I'm not sure about the fix. For each 1 MB string on which I'm performing replacements, I have 1 to 2 thousand tokens I want to replace. So walking character by character looking for any of a thousand tokens doesn't seem faster.
In general, what takes longest in programming? Newing up memory.
Now, when we create a StringBuilder, what likely happens is that some amount of space is allocated (say, 64 bytes), and whenever we append more than its current capacity, it probably doubles its space, then copies the old character buffer to the new one. (It's possible it can use C's realloc and not have to copy.)
So if we start with 64 bytes, to get up to 1 MB we allocate and copy: 64, then 128, then 256, then 512, then 1024, then 2048... we do this fourteen times to get up to 1 MB. And in getting there, we've allocated about 1 MB just to throw it away.
Pre-allocating, by using something analogous to C++'s reserve() function, will at least let us do that all at once. But it's still all at once for each token. You're at least producing a 1 MB temporary string for each token. If you have 2000 tokens, you're allocating about 2 billion bytes of memory, all to end up with 1 MB. Each 1 MB throwaway contains the transformation of the previous resulting string, with the current token applied.
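In C#, that pre-allocation is just the StringBuilder capacity constructor. A small sketch of the difference (note that modern .NET StringBuilder actually chains buffer chunks rather than repeatedly doubling one array, but pre-sizing still skips the intermediate growth):

```csharp
using System;
using System.Text;

class CapacityDemo
{
    static void Main()
    {
        // Default capacity: the builder has to grow repeatedly on the way
        // to one million characters.
        var grown = new StringBuilder();

        // Pre-allocated: the analogue of C++'s reserve() -- one buffer
        // sized for the final result, nothing thrown away.
        var reserved = new StringBuilder(1_000_000);

        for (int i = 0; i < 100_000; i++)
        {
            grown.Append("0123456789");
            reserved.Append("0123456789");
        }

        // Same content either way; only the allocation pattern differs.
        Console.WriteLine(grown.Length == reserved.Length && reserved.Length == 1_000_000);
        // prints: True
    }
}
```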
And that's why this is taking so long.
Now yes, deciding which token to apply (if any), at each character, also takes time. You may wish to use a regular expression, which internally builds a state machine to run through all possibilities, rather than a set lookup, as I suggested initially. But what's really killing you is the time to allocate all that memory, for 2000 copies of a 1 MB string.
Dan Gibson suggests:
Sort your tokens so you don't have to look for a thousand tokens each character. The sort would take some time, but it would probably end up being faster since you don't have to search thousands of tokens each character.
That was my reasoning behind putting them into an associative array (e.g., a HashSet). But there's another problem: matching. For example, if one token is "a" and another is "an" -- that is, if tokens share common prefixes -- how do we decide which one matches?
This is where Keltex's answer comes in handy: he delegates the matching to a Regex, which is a great idea, as a Regex already defines and implements how to do this (e.g., greedy matching). Once the match is made, we can examine what's captured, then use a Map/Dictionary (also an associative array) to find the obfuscated token for the matched, unobfuscated one.
I wanted to concentrate my answer on the not just how to fix this, but on why there was a problem in the first place.
Solution 3:
If you can find your tokens via a regular expression, you can do something like this:
Regex tokenFinder = new Regex("(tokencriteria)");
string newstring = tokenFinder.Replace(one_MB_string, new MatchEvaluator(Replacer));
Then define Replacer as:
private string Replacer(Match match)
{
    string token = match.Groups[1].Value;
    return GetObfuscatedString(token);
}
Solution 4:
Would it be faster to build the string one token at a time, only replacing when needed? For this, GetObfuscatedString() could be implemented like so:
string GetObfuscatedString(string token)
{
    if (TokenShouldBeReplaced(token))
        return ReplacementForToken(token);
    else
        return token;
}
Now, you can add each token to the builder like this (here tokens means the input string split into words, not the replacement list):
StringBuilder sb = new StringBuilder(one_MB_string.Length);
foreach (string token in tokens)
{
    sb.Append(GetObfuscatedString(token));
}
You'll only have to make one pass over the string, and it might be faster.