Perfect NtHash for Robust Minimizers

NtHash

NtHash (Mohamadi et al. 2016) is a rolling hash originally designed for DNA, but suitable for hashing any kind of text. For a string of length \(k\), it is a \(64\)-bit value computed as:

\begin{equation} h(x) = \bigoplus_{i=0}^{k-1} rot^i(h(x_i)) \end{equation}

where \(h(x_i)\) assigns a fixed random \(64\)-bit value to each character, \(rot^i\) rotates the bits by \(i\) places, and \(\bigoplus\) is the xor over all terms.
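
To make the formula concrete, here is a minimal sketch (not one of the published implementations, which may order the rotations differently) that evaluates it directly and checks the rolling update derived from it. It uses the original per-character constants from Code Snippet 1 below and assumes \(rot\) is a left rotation; the function names are hypothetical.

// A minimal sketch following the formula above: h(x) = xor over i of rot^i(h(x_i)).
// Uses the original constants from Code Snippet 1 and assumes rot is a left rotation.
fn char_hash(c: u8) -> u64 {
    match c {
        b'A' => 0x3c8b_fbb3_95c6_0474,
        b'C' => 0x3193_c185_62a0_2b4c,
        b'G' => 0x2032_3ed0_8257_2324,
        b'T' => 0x2955_49f5_4be2_4456,
        _ => panic!("only ACGT supported"),
    }
}

/// Direct evaluation of h(x) = xor_i rot^i(h(x_i)).
fn nthash(kmer: &[u8]) -> u64 {
    kmer.iter()
        .enumerate()
        .fold(0, |h, (i, &c)| h ^ char_hash(c).rotate_left(i as u32))
}

fn main() {
    let text = b"ACGTTGCATGACGT";
    let k = 5usize;
    // Rolling update derived from this formula: remove the outgoing character,
    // add the incoming one rotated by k, and rotate the whole hash back by one.
    let mut h = nthash(&text[..k]);
    for p in 1..=text.len() - k {
        h = (h ^ char_hash(text[p - 1]) ^ char_hash(text[p + k - 1]).rotate_left(k as u32))
            .rotate_right(1);
        assert_eq!(h, nthash(&text[p..p + k]));
    }
    println!("rolling and direct evaluation agree");
}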

A nice property of NtHash is that it works well directly on uncompressed data, and hence can handle both DNA strings and other text.

There is also NtHash2 (Kazemi et al. 2022), which improves on NtHash by increasing the period from \(64\) bits to \(31\cdot 33 = 1023\) bits and by improving the hashing of canonical kmers, but neither improvement is relevant here.

Some Rust implementations are available, such as the nthash and kmerutils crates.

Minimizers

Minimizers (also called winnowing) (Schleimer, Wilkerson, and Aiken 2003) are a way to find anchors in a text in predictable positions. Such anchors can be used for estimating similarity between sequences (Belbasi et al. 2022) and for sparse data structures (Grabowski and Raniszewski 2017).

Minimizers are defined in terms of a kmer length \(k\), a window length \(w\), and a hash function \(h\) on kmers. First, each kmer \(k_i\) in the text is hashed to \(h(k_i)\). Then, for each window of \(w\) consecutive kmers, the rightmost kmer with minimal hash is stored.
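
To make the definition concrete, here is a small unoptimized sketch. The hash is a stand-in (std's default hasher), not NtHash, and the helper names are hypothetical; only the per-window selection rule matters here.

// A small, unoptimized sketch of the minimizer definition above.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in kmer hash for this sketch; any kmer hash (e.g. NtHash) works here.
fn hash(kmer: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    kmer.hash(&mut h);
    h.finish()
}

/// Positions of the rightmost minimal kmer in each window of w consecutive kmers.
fn minimizers(text: &[u8], k: usize, w: usize) -> Vec<usize> {
    let hashes: Vec<u64> = (0..=text.len() - k).map(|i| hash(&text[i..i + k])).collect();
    let mut positions = Vec::new();
    for start in 0..=hashes.len() - w {
        let window = &hashes[start..start + w];
        let min = window.iter().min().unwrap();
        // Rightmost position attaining the minimal hash in this window.
        let pos = start + window.iter().rposition(|h| h == min).unwrap();
        // Consecutive windows often share their minimizer; store each position once.
        if positions.last() != Some(&pos) {
            positions.push(pos);
        }
    }
    positions
}

fn main() {
    println!("{:?}", minimizers(b"ACGTACGTTTTTACGT", 4, 3));
}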

Robust minimizers

Minimizers have the problem that they do not compress repetitive regions: when all kmers in a window are identical (and hence share the minimal hash), the rightmost rule keeps selecting a new position in almost every window.

Robust minimizers improve over minimizers by reusing the previously chosen minimizer when possible: if the minimizer of the preceding window also minimizes the hash in the current window, that kmer is always chosen instead of the rightmost minimal one.
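
A sketch of only this selection rule, written as a hypothetical helper operating on precomputed kmer hashes (not taken from any particular library):

// Robust selection rule sketch: prefer the previous minimizer if it is still
// inside the current window and still has minimal hash; otherwise fall back
// to the rightmost minimal kmer. `prev` is the minimizer of the window
// starting at start-1, so it is always < start + w.
fn robust_min(hashes: &[u64], start: usize, w: usize, prev: Option<usize>) -> usize {
    let window = &hashes[start..start + w];
    let min = *window.iter().min().unwrap();
    if let Some(p) = prev {
        if p >= start && hashes[p] == min {
            return p;
        }
    }
    start + window.iter().rposition(|&h| h == min).unwrap()
}

fn main() {
    // A repetitive region: all kmer hashes are equal. The plain rule would pick
    // a new (rightmost) position in every window; the robust rule keeps reusing
    // its earlier choice for as long as that position stays inside the window.
    let hashes = vec![7u64; 10];
    let mut prev = None;
    for start in 0..=hashes.len() - 4 {
        let m = robust_min(&hashes, start, 4, prev);
        prev = Some(m);
        print!("{m} ");
    }
    println!();
}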

This works well, but introduces an edge case when distinct kmers in a window have the same hash. In this case, some specific kmer with minimal hash could be skipped because other kmers around it have the same hash. That is bad for exact applications like building a suffix array, where all distinct kmer strings are required.

Thus, we would like our hash function to be injective on kmers to avoid this. (An alternative is to order kmers by (h(kmer), kmer) instead of just h(kmer), but this introduces computational overhead in a hot loop.)

Is NtHash injective on kmers?

Clearly NtHash is not injective in general, since for \(k>32\) there are more than \(2^{64}\) kmers and only \(2^{64}\) possible hashes. But for \(k\leq 32\) it would be nice to know whether NtHash is indeed injective.

It turns out that:

  • NtHash is injective on kmers for \(k\leq 22\).
  • There is a hash collision for \(k=23\), and hence also for all larger \(k\). Specifically:
    nthash(b"AAGCAACAAAAGAAAGCAAAGAA") == nthash(b"CATTCAGAGTCTTTGTGGATTAC");
    

The good news is that with a simple modification to the randomly chosen parameters, we can prove that NtHash is injective for \(k\) up to \(32\).

//  a = 0x3c8b_fbb3_95c6_0474; (old)
let a = 0x3c8b_fbb3_95c6_0470;
let c = 0x3193_c185_62a0_2b4c;
let g = 0x2032_3ed0_8257_2324;
//  t = 0x2955_49f5_4be2_4456; (old)
let t = 0x2d2a_04e6_7531_0c18; // = a ^ c ^ g
Code Snippet 1: Fixed NtHash parameters: change the last hex digit of \(h(A)\) from \(4\) to \(0\), and make \(h(T)\) the xor of the other values.

Searching for a collision

A first step is a brute-force search: iterate over all kmers, store their hashes, and find duplicates by sorting the list. This works up to \(k=16\), where we store \(4^{16} \cdot 8\,\mathrm{B} = 32\,\mathrm{GB}\) of hashes.
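
A sketch of this brute-force search (not the author's code), using the original constants from Code Snippet 1 and the same left-rotation convention as the earlier sketch; it is only feasible for small \(k\):

// Brute-force search sketch: hash all 4^k kmers (2-bit packed), sort, and
// look for adjacent duplicates. Uses the original per-character constants.
const H: [u64; 4] = [
    0x3c8b_fbb3_95c6_0474, // A
    0x3193_c185_62a0_2b4c, // C
    0x2032_3ed0_8257_2324, // G
    0x2955_49f5_4be2_4456, // T
];

// NtHash of a kmer packed as 2 bits per character, per the formula above,
// assuming rot is a left rotation.
fn nthash_packed(mut kmer: u64, k: u32) -> u64 {
    let mut h = 0;
    for i in 0..k {
        h ^= H[(kmer & 3) as usize].rotate_left(i);
        kmer >>= 2;
    }
    h
}

fn main() {
    let k = 10u32; // 4^10 ~ 10^6 kmers; memory is the limit for larger k
    let mut hashes: Vec<(u64, u64)> =
        (0..(1u64 << (2 * k))).map(|m| (nthash_packed(m, k), m)).collect();
    hashes.sort_unstable();
    let collisions = hashes.windows(2).filter(|w| w[0].0 == w[1].0).count();
    println!("collisions for k={k}: {collisions}");
}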

We can improve a bit on this by only storing hashes of value at most \(2^{64} / 2^{a}\), for e.g. \(a=10\). This reduces memory usage by a factor \(1024\), at the cost of only finding collisions among small hash values. I could get this to work up to \(k=21\), but that already took \(4\) hours, and going to \(k=22\) would take \(4\) times longer.

A nicer approach is this: instead of trying all possibilities for \(X\) and \(Y\), we can split them into two halves \(X = x + x'\) and \(Y = y + y'\) (\(+\) is concatenation). Then, we can iterate over all \(x\) and \(y\) and find all possible values of \(h(x) \oplus h(y)\). Assume \(k\) is even, so that \(x\) and \(y\) have length \(k/2\). At first it seems that we still have to iterate over all possibilities for \(2\cdot (k/2) = k\) characters, but we can do better! NtHash is linear, and the characters at each position can be considered independently:

\begin{align*} h(x)\oplus h(y) &= \left(\bigoplus_{i=0}^{k/2-1} rot^i(h(x_i))\right) \oplus \left(\bigoplus_{i=0}^{k/2-1} rot^i(h(y_i))\right)\\ &= \bigoplus_{i=0}^{k/2-1} rot^i\big(h(x_i) \oplus h(y_i)\big) \end{align*}

While \(x_i\) and \(y_i\) have a total of \(4\times 4 = 16\) possibilities, the number of possibilities for \(h(x_i) \oplus h(y_i)\) is much smaller: AA, CC, GG, TT all have value \(0\), and all other values come in pairs such as AC and CA. In total only \(7\) distinct combinations remain: AA, AC, AG, AT, CG, CT, GT. This means that we only have to iterate over \(7^{k/2}\) possibilities, which is much smaller than \(16^{k/2}\). (\(1.4\cdot 10^{10}\) vs \(2.8\cdot 10^{14}\) for \(k=24\).)

The algorithm now works by listing all \(7^{k/2}\) possible values of \(h(x) \oplus h(y)\), and all possible values of \(rot^{k/2}(h(x') \oplus h(y'))\), and then checking for collisions between the two lists; a sketch follows after the list below.

  • If there are no collisions, we can be sure that NtHash is injective for \(k\).
  • If there are collisions, we can reconstruct \(X\) and \(Y\) of length \(k\) such that \(h(X) = h(Y)\).
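
The following sketch illustrates this search; it only counts matches rather than reconstructing \(X\) and \(Y\), and it is not the author's polished code. It reuses the original constants and the left-rotation assumption from the earlier sketches, and is run for a small even \(k\) only.

// Meet-in-the-middle sketch: list all values of h(x)^h(y) for the left halves
// and all values of rot^{k/2}(h(x')^h(y')) for the right halves, then look
// for matches. k=24 needs ~1.4e10 entries per side plus thresholding.
const A: u64 = 0x3c8b_fbb3_95c6_0474;
const C: u64 = 0x3193_c185_62a0_2b4c;
const G: u64 = 0x2032_3ed0_8257_2324;
const T: u64 = 0x2955_49f5_4be2_4456;

/// All 7^len values of xor_i rot^i(delta_i), where delta_i ranges over the 7
/// distinct values of h(x_i) ^ h(y_i).
fn half_xors(len: u32) -> Vec<u64> {
    // 0 covers AA, CC, GG, TT; the rest are one representative per pair.
    let deltas = [0, A ^ C, A ^ G, A ^ T, C ^ G, C ^ T, G ^ T];
    let mut out = vec![0u64];
    for i in 0..len {
        let next: Vec<u64> = out
            .iter()
            .flat_map(|&h| deltas.iter().map(move |&d| h ^ d.rotate_left(i)))
            .collect();
        out = next;
    }
    out
}

fn main() {
    let k = 12u32; // even
    let mut left = half_xors(k / 2); // candidate values of h(x) ^ h(y)
    left.sort_unstable();
    // A nonzero value appearing on both sides yields X != Y with h(X) = h(Y).
    // (The value 0 always matches via the trivial case X = Y, so skip it.)
    let matches = half_xors(k / 2)
        .into_iter()
        .map(|h| h.rotate_left(k / 2)) // rot^{k/2}(h(x') ^ h(y'))
        .filter(|&h| h != 0 && left.binary_search(&h).is_ok())
        .count();
    println!("nonzero matches for k={k}: {matches}");
}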

As before, we only store hashes below some threshold to save memory for \(k=24\), but it turns out this is still good enough: up to \(k=22\) there are no collisions, but we do find some collisions at \(k=24\). One of these collisions ends in the same character, so it is actually already a collision for \(k=23\), as shown above.

My code for this is on GitHub, but note that I didn’t polish it for external readability.

Proving perfection

Let’s replace the original value of \(h(T)\) by \(h(T) = h(A) \oplus h(C) \oplus h(G)\). Now, \(h(X_i) \oplus h(Y_i)\) can take only four distinct values:

\begin{align*} h(A) \oplus h(A) = h(C) \oplus h(C) &= h(G) \oplus h(G) = h(T) \oplus h(T) = 0,\\ h(A) \oplus h(C) = h(G) \oplus h(T) &=: u,\\ h(A) \oplus h(G) = h(C) \oplus h(T) &=: v,\\ h(A) \oplus h(T) = h(C) \oplus h(G) &= u\oplus v. \end{align*}

This means that the four options split into two binary choices: \(\{0, u\} \oplus \{0, v\}\). The set of all possible values of \(h(X) \oplus h(Y)\) is thus all linear combinations of the \(32\) rotations of \(u\) and the \(32\) rotations of \(v\). I.e. we have a linear space with basis

\begin{align*} B=\{rot^i(u) : 0\leq i < 32\} \cup \{rot^i(v) : 0\leq i < 32\}. \end{align*}

If all these \(64\) bitvectors are linearly independent, the xors of all possible subsets are distinct, and no hash collisions are possible. If they are not independent, there is some collision. We can easily test whether the \(64\) bitvectors are independent using Gaussian elimination.
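
Here is a minimal sketch of this check (not the author's code): compute the rank over GF(2) of the \(64\) rotations of \(u\) and \(v\), using the fixed constants from Code Snippet 1 and again assuming a left rotation.

// GF(2) rank via Gaussian elimination on u64 rows.
fn rank_gf2(rows: impl IntoIterator<Item = u64>) -> usize {
    // basis[b] holds a reduced row whose highest set bit is b.
    let mut basis = [0u64; 64];
    let mut rank = 0;
    for mut row in rows {
        while row != 0 {
            let b = 63 - row.leading_zeros() as usize;
            if basis[b] == 0 {
                basis[b] = row;
                rank += 1;
                break;
            }
            row ^= basis[b]; // eliminate the leading bit and continue reducing
        }
    }
    rank
}

fn main() {
    let a = 0x3c8b_fbb3_95c6_0470u64; // fixed h(A)
    let c = 0x3193_c185_62a0_2b4cu64;
    let g = 0x2032_3ed0_8257_2324u64;
    // With h(T) = a ^ c ^ g, every per-position xor is 0, u, v, or u ^ v.
    let (u, v) = (a ^ c, a ^ g);
    let rows = (0..32u32).flat_map(|i| [u.rotate_left(i), v.rotate_left(i)]);
    // Rank 64 means the rotations are independent, i.e. no collisions for k <= 32.
    println!("rank = {}", rank_gf2(rows));
}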

It turns out that just replacing \(h(T)\) with the xor of the other characters results in a matrix of rank \(63=64-1\), which is not invertible, so there are collisions. Changing the last hex digit of \(h(A)\) from \(4\) to \(0\) fixes this.

In fact, this math.stackexchange answer states that random binary matrices are invertible with probability at least \(28\%\). Our matrices are not completely random (since rows are rotations of each other), but this makes me conjecture that the probability that randomly chosen \((h(A), h(C), h(G))\) result in an invertible matrix is at least \(10\%\) or so.

Alternatives

An alternative to NtHash is simply taking the bit representation of a kmer and multiplying by a large random odd constant, as in FxHash. Since multiplication by an odd constant is invertible modulo \(2^{64}\), this is guaranteed to be injective for \(k \leq 32\). I plan to benchmark both methods.
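
A minimal sketch of this alternative, with an arbitrary odd constant (not necessarily the one FxHash uses): pack the kmer into \(2k\) bits and multiply.

// Multiplicative kmer hash sketch: pack 2 bits per character and multiply by
// an odd constant. The constant is arbitrary; any odd value keeps injectivity.
fn pack(kmer: &[u8]) -> u64 {
    kmer.iter().fold(0, |v, &c| {
        (v << 2) | match c { b'A' => 0, b'C' => 1, b'G' => 2, b'T' => 3, _ => panic!() }
    })
}

fn mulhash(kmer: &[u8]) -> u64 {
    pack(kmer).wrapping_mul(0x9e37_79b9_7f4a_7c15)
}

fn main() {
    println!("{:016x}", mulhash(b"ACGTACGTACGT"));
}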

SmHasher results

I added NtHash to a fork of SmHasher. Note that I convert each &[u8] input of arbitrary characters to a four-times-larger &[u8] containing only ACTG, since the nthash and kmerutils crates do not support arbitrary characters. This makes the tests quite slow, but I only care about quality for now.
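
The conversion simply maps each \(2\)-bit chunk of every input byte to a base, roughly as in this sketch (not the exact code in the fork):

// Input expansion sketch: every byte becomes four bases, one per 2-bit chunk,
// so arbitrary input turns into an ACGT-only string of four times the length.
fn expand(input: &[u8]) -> Vec<u8> {
    const BASES: [u8; 4] = [b'A', b'C', b'G', b'T'];
    input
        .iter()
        .flat_map(|&b| (0..4).map(move |i| BASES[((b >> (2 * i)) & 3) as usize]))
        .collect()
}

fn main() {
    // "hi" = [0x68, 0x69] expands to an 8-character ACGT string.
    println!("{}", String::from_utf8(expand(b"hi")).unwrap());
}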

Results are here. NtHash fails almost all of the tests.

TODO benchmark NtHash, NtHash2, FxHash

References

Belbasi, Mahdi, Antonio Blanca, Robert S Harris, David Koslicki, and Paul Medvedev. 2022. “The Minimizer Jaccard Estimator Is Biased and Inconsistent.” Bioinformatics 38 (Supplement_1): i169–76. https://doi.org/10.1093/bioinformatics/btac244.
Grabowski, Szymon, and Marcin Raniszewski. 2017. “Sampled Suffix Array with Minimizers.” Software: Practice and Experience 47 (11): 1755–71. https://doi.org/10.1002/spe.2481.
Kazemi, Parham, Johnathan Wong, Vladimir Nikolić, Hamid Mohamadi, René L Warren, and Inanç Birol. 2022. “Nthash2: Recursive Spaced Seed Hashing for Nucleotide Sequences.” Edited by Peter Robinson. Bioinformatics 38 (20): 4812–13. https://doi.org/10.1093/bioinformatics/btac564.
Mohamadi, Hamid, Justin Chu, Benjamin P. Vandervalk, and Inanc Birol. 2016. “Nthash: Recursive Nucleotide Hashing.” Bioinformatics 32 (22): 3492–94. https://doi.org/10.1093/bioinformatics/btw397.
Schleimer, Saul, Daniel S. Wilkerson, and Alex Aiken. 2003. “Winnowing: Local Algorithms for Document Fingerprinting.” In Proceedings of the 2003 Acm Sigmod International Conference on Management of Data. Sigmod/Pods03. ACM. https://doi.org/10.1145/872757.872770.