Abstract
Background
Motifs are significant patterns in DNA, RNA, and protein sequences that play important roles in biological processes and functions, such as the identification of open reading frames, RNA transcription, and protein binding. Several versions of the motif search problem have been studied in the literature. One such version is called the Planted Motif Search (PMS), or (l, d)-motif search. PMS is known to be NP-complete, and the time complexities of most planted motif search algorithms depend exponentially on the alphabet size. Recently, a new version of the motif search problem was introduced by Kuksa and Pavlovic. We call this version the Motif Stem Search (MSS) problem. A motif stem is an l-mer (for some relevant value of l) with some wildcard characters, and hence corresponds to a set of l-mers (without wildcards), some of which are (l, d)-motifs. Kuksa and Pavlovic presented an efficient algorithm to find motif stems for inputs from large alphabets. Ideally, the number of stems output should be as small as possible, since the stems form a superset of the motifs.
Results
In this paper we propose an efficient algorithm for MSS and evaluate it on both synthetic and real data. This evaluation reveals that our algorithm is much faster than Kuksa and Pavlovic’s algorithm.
Conclusions
Our MSS algorithm outperforms the algorithm of Kuksa and Pavlovic in terms of the run time as well as the number of stems output. Specifically, the stems output by our algorithm form a proper (and much smaller) subset of the stems output by Kuksa and Pavlovic's algorithm.
Background
Motifs, or sequence motifs, are patterns of nucleotides or amino acids. Motifs are often related to primer selection, transcription factor binding sites, mRNA processing, transcription termination, etc. Sequence motifs in proteins are typically involved in functions such as binding to a target protein, protein trafficking, post-translational modifications, and so on. The motif search problem has been studied extensively owing to its pivotal biological significance, and several types of algorithms have been proposed for it. In one class of methods, putative motifs in an input biological query sequence are predicted based on a database of known motifs; examples include [1-3]. In another class of methods, motifs are assumed to appear frequently in a set of sequences, such as the same protein sequence from different species. Here the problem of motif search is reduced to that of finding subsequences that occur in many of the input sequences. Planted motif search (PMS) is one such formulation.
Numerous algorithms have been proposed to solve the PMS problem. The WINNOWER algorithm transforms the problem into one of finding large cliques in a graph [4]. The PatternBranching algorithm introduces a scoring technique for all the motif candidates [5]. The PROJECTION algorithm repeatedly picks several random positions and uses a hash table with a threshold to limit the motif candidates [6]. Bailey and Elkan [7] employ expectation maximization, while Gibbs sampling is used in [8,9]. MULTIPROFILER [10], MEME [11], and CONSENSUS [12] are also well-known PMS algorithms.
An exact PMS algorithm always outputs all the motifs present in a given set of sequences. MITRA employs a mismatch tree structure to generate the motif candidates efficiently [13]. RISOTTO constructs a suffix tree to compare sequences [14]. PMS1 [15] considers all the motif candidates and evaluates them using sorting. Voting uses a hash table to locate the motifs [16]. PMS2 is based on PMS1 and extends smaller motifs to get longer motifs, while PMS3 forms a motif of a desired length using two smaller motifs [15]. PMSPrune introduces a tree structure for the motif candidates and uses a branch-and-bound algorithm to reduce the search space [17]. PMS4 is a speedup technique that finds a superset of the motifs using a subset of the input sequences and then validates those candidates [18]. PMS5 employs Integer Linear Programming (ILP) in its branch-and-bound pruning for speedup [19], and PMS6 uses the solutions of such ILPs to generate motif candidates [20].
Most of the work on exact algorithms for PMS has focused on DNA or RNA sequences, where |Σ|=4. Little work has been done on larger alphabets, such as proteins. A recent work focuses on protein sequences [21]. However, the stemming algorithm proposed in that paper does not solve the PMS problem: it does not find motifs but rather motif stems. A motif stem (denoted as a stem from here on) can be thought of as an l-mer (for some relevant value of l) with some wildcards present in it. As a result, a stem stands for a set of motifs. A stemming algorithm generates stems (i.e., motifs with wildcards) to represent motifs for large-alphabet inputs [21]. The stemming algorithm of [21] generates a very large set of stems (and hence a very large superset of motifs). In this paper we propose two algorithms for Motif Stem Search, MSS1 and MSS2, which outperform the stemming algorithm of [21]: the new algorithms generate a much smaller set of stems. The stems generated by the algorithm of [21], as well as by MSS1 and MSS2, are guaranteed to represent supersets of all the motifs present in a given set of input sequences.
Methods
Motif search on large alphabets
In this section we provide some definitions pertinent to PMS and MSS problems.
Definition 1
A sequence x=x[1]x[2]⋯x[l] (|x|=l) on Σ (x[i]∈Σ, 1≤i≤l) is an l-mer.
Definition 2
Given two l-mers x and y, the Hamming distance between them is defined as the number of positions in which they differ:

HD(x,y) = |{i : x[i]≠y[i], 1≤i≤l}|
Definition 3
Given an l-mer x and a sequence s of length m, the Hamming distance between x and s is defined as the minimum Hamming distance between x and any l-mer of s:

HD(x,s) = min{HD(x,y) : y is an l-mer of s}
Definition 4
(Planted Motif Search (PMS) Problem). Let S be a set of n sequences of length m each on an alphabet Σ. Specifically, let S={s_{i} : |s_{i}|=m, 1≤i≤n}. The planted motif search problem, or (l,d)-motif search problem, takes as input S and two integers l and d, and finds every string x of length l such that for every s_{i} the Hamming distance between x and s_{i} is no more than d. In particular, we want to compute the following set:

{x : |x|=l and HD(x,s_{i})≤d for all 1≤i≤n}
Any such x is called an (l,d)-motif. Any l-mer of s_{i} that is at a Hamming distance of ≤d from x is called an occurrence or instance of x.
Definition 5
Given an l-mer x, the d-neighbors of x are defined to be {y : |y|=l and HD(x,y)≤d}. The d-neighbors of x in any sequence s are defined to be the intersection of the d-neighbors of x and the set of l-mers of s.
Observation 1
If the Hamming distance between two l-mers x_{1} and x_{2} is larger than 2d (i.e., HD(x_{1},x_{2})>2d), then no l-mer x_{3} exists such that HD(x_{1},x_{3})≤d and HD(x_{2},x_{3})≤d. This follows from the triangle inequality: such an x_{3} would imply HD(x_{1},x_{2})≤HD(x_{1},x_{3})+HD(x_{3},x_{2})≤2d, a contradiction.
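For concreteness, the test implied by OBSERVATION 1 is straightforward to implement. The following is a minimal Python sketch (the function names are ours, for illustration only):

```python
def hamming(x, y):
    """Hamming distance between two equal-length strings (Definition 2)."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def may_share_d_neighbor(x1, x2, d):
    """OBSERVATION 1: if HD(x1, x2) > 2d, no l-mer can be a d-neighbor
    of both x1 and x2 (by the triangle inequality)."""
    return hamming(x1, x2) <= 2 * d
```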
PMS algorithms are typically tested on random data with n=20 and m=600. Each input string is randomly generated such that each symbol in each string is equally likely to be any character from the alphabet. A motif is generated randomly, and randomly mutated versions of this motif are planted in the input strings (one mutated motif per string). For a given value of l, we call the pair (l,d) a challenging instance if d is the smallest value for which the expected number of (l,d)-motifs occurring in the input strings by random chance is ≥1. Some of the challenging instances are: (9, 2), (11, 3), (13, 4), (15, 5), (17, 6), (19, 7), and so on. One of the performance measures of interest for any exact algorithm is the largest challenging instance that it can solve. MITRA can solve the instance (13, 4) [13], RISOTTO [14] and Voting [16] successfully run on (15, 5), PMSPrune solves up to (19, 7) [17], and PMS5 [19] and PMS6 [20] can handle (23, 9). These statistics are for DNA sequences, where |Σ|=4.
The time complexities of exact algorithms typically depend exponentially on the size of Σ. The run-time bounds of PMS0 and PMS1 [15] (the latter also involving the word length w of the computer), of RISOTTO [14], of Voting [16], and of PMSPrune [17] all contain a factor on the order of the d-neighborhood size N(l,d)=∑_{i=0}^{d}C(l,i)(σ−1)^{i} (where σ=|Σ|), which grows rapidly with the alphabet size.
When the size of the alphabet is large (e.g., |Σ|=20 for proteins), the above exact algorithms take a very long time. Kuksa and Pavlovic [21] have introduced a new version of the motif search problem and proposed an efficient algorithm to solve it on large alphabets. A motif stem is an l-mer with wildcards; thus a stem represents a set of l-mers without wildcards. For example, if g∗acc is a DNA stem, it represents the following 5-mers without wildcards: gaacc, gcacc, ggacc, and gtacc. Given a set of strings from some alphabet, the problem of finding motif stems in them is known as the Motif Stem Search (MSS) problem. We focus on MSS in this paper.
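To illustrate how a stem expands into plain l-mers, here is a small Python sketch (the function name and the use of '*' as the wildcard symbol are our conventions):

```python
from itertools import product

def expand_stem(stem, alphabet="acgt"):
    """All l-mers represented by a stem: each wildcard '*' ranges over
    the alphabet. E.g., expand_stem("g*acc") yields gaacc, gcacc,
    ggacc, and gtacc."""
    slots = [alphabet if c == "*" else c for c in stem]
    return ["".join(t) for t in product(*slots)]
```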
Definition 6
Motif Stem Search (MSS) Problem. The input is n sequences and two integers l and d. The problem is to find a set of stems such that the set of l-mers represented by these stems is a superset of all the (l,d)-motifs present in the n sequences.
As stated above, there are many possible solutions to the MSS problem. The challenge, then, is to come up with as small a superset as possible that covers all the (l,d)-motifs. In other words, we want the number of l-mers (without wildcards) represented by the stems to be as small as possible.
MSS1: A basic algorithm
Based on OBSERVATION 1, if the Hamming distance between an l-mer x and a sequence s is larger than 2d, then no l-mer x' exists such that HD(x,x')≤d and HD(x',s)≤d. This leads us to the following observation.
Observation 2
Given an l-mer x, if ∃s_{i} such that HD(x,s_{i})>2d, then none of x's d-neighbors can be a motif.
The stemming algorithm of [21] works as follows. It makes crucial use of OBSERVATION 2, which states that an l-mer x in any input string cannot be an instance of an (l,d)-motif if there exists at least one input string s such that HD(x,s)>2d. The algorithm of [21] first identifies a set I of possible motif instances: an l-mer x occurring in any input string is included in I if and only if HD(x,s')≤2d for every input string s'. Having found such a set I, the algorithm then uses I to generate stems as follows: for every x,y∈I, it generates the common d-neighbors of x and y as stems. The union of all such stems constitutes the candidate motif stems, which form a superset of the motif stems. Finally, for each candidate stem, the algorithm checks whether it is a correct answer; all valid stems that pass this test are output.
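A direct (unoptimized) Python sketch of this filtering step is given below; it reuses hamming from the earlier sketch, and the names are ours. The actual implementation of [21] computes I differently (using sorting; see DISCUSSION):

```python
def lmers(s, l):
    """All l-mers (length-l substrings) of s, left to right."""
    return [s[i:i + l] for i in range(len(s) - l + 1)]

def hd_to_sequence(x, s):
    """Definition 3: minimum Hamming distance from l-mer x to any l-mer of s."""
    return min(hamming(x, y) for y in lmers(s, len(x)))

def candidate_instances(sequences, l, d):
    """The set I of [21]: every l-mer x occurring in some input string
    such that HD(x, s') <= 2d for every input string s'."""
    return {x
            for s in sequences
            for x in lmers(s, l)
            if all(hd_to_sequence(x, t) <= 2 * d for t in sequences)}
```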
In Algorithm 1 and Algorithm 2 we present a faster algorithm (than that of [21]) for generating motif stems. Algorithm 1 generates the set I; the set I that we generate is a much smaller subset of the I generated in the stemming algorithm of [21]. For any pair of l-mers (x,x') in the set I, we begin with x and replace some characters in x with wildcards to generate MSS candidates. The positions in which x and x' match are referred to as the matching region, and the positions in which x and x' differ are referred to as the non-matching region. Wildcards can be placed in the matching region and/or the non-matching region. Any stem t is generated by placing wildcards in x. Therefore, wildcards in the generated stem t are always treated as mismatches between t and x, independent of whether they are in the matching or the non-matching region. For x', however, wildcards of t in the non-matching region are counted as matches between t and x', while those in the matching region are still treated as mismatches between t and x'. The number of wildcards depends on the Hamming distance between x and x' and on d. Let HD(x,x')=d_{x}. Table 1 shows how many wildcards should be placed in the different cases.
Table 1. Numbers of wildcards
Assume that i wildcards are placed in the non-matching region of x to form t, resulting in i mismatches between t and x and (d_{x}−i) mismatches between t and x'. We consider the following two cases (a short code sketch after the list illustrates the resulting placements):
1. d_{x}≤d: The number of wildcards i in the non-matching region can vary from 0 to the size of the non-matching region. To keep the total number of mismatches against x no more than d, at most d−i wildcards can be placed in the matching region. Similarly, to keep the total number of mismatches against x' no more than d, at most d−(d_{x}−i) wildcards can be placed in the matching region. Together, at most d−max(i,d_{x}−i) wildcards fit in the matching region.
2. d_{x}>d: At least d_{x}−d wildcards have to be placed in the non-matching region to eliminate enough of the mismatches against x'. As in case 1, at most d−max(i,d_{x}−i) wildcards can be placed in the matching region.
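As a concrete reading of Table 1, the following Python sketch enumerates the stems for a single pair (x, x'); following the discussion below (and in DISCUSSION), it places exactly the maximum allowed number of wildcards, d−max(i,d_{x}−i), in the matching region. It is a sketch of the case analysis above, not a verbatim transcription of Algorithm 2:

```python
from itertools import combinations

WILDCARD = "*"

def place_wildcards(x, xp, d):
    """Stems from the pair (x, xp), per Table 1: i wildcards in the
    non-matching region, d - max(i, d_x - i) in the matching region."""
    l = len(x)
    nonmatch = [p for p in range(l) if x[p] != xp[p]]
    match = [p for p in range(l) if x[p] == xp[p]]
    d_x = len(nonmatch)
    stems = set()
    i_lo = max(0, d_x - d)       # case d_x > d forces at least d_x - d wildcards
    i_hi = min(d_x, d)           # at most d mismatches against x
    for i in range(i_lo, i_hi + 1):
        j = d - max(i, d_x - i)  # maximum wildcards in the matching region
        for nm in combinations(nonmatch, i):
            for mt in combinations(match, j):
                t = list(x)
                for p in nm + mt:
                    t[p] = WILDCARD
                stems.add("".join(t))
    return stems
```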
Algorithm 1 MSS1
In the matching region, d−max(i,d_{x}−i) is an upper bound on the number of wildcards. However, it is not necessary to enumerate all the cases from 0 to d−max(i,d_{x}−i) wildcards (see DISCUSSION). Similarly, it is not necessary to repeat stem generation for all pairs in I. For any x, let I_{i} be the set of x's 2d-neighbors in sequence s_{i} (i.e., I_{i}={y : y is an l-mer of s_{i} and HD(x,y)≤2d}), and let O_{i} be the set of motif instances in s_{i}. Then, clearly, O_{i}⊆I_{i}. Every motif having x as an instance is among the stems that can be obtained between x and the members of O_{i}, and hence among the stems obtainable from the pairs {(x,y) : y∈I_{i}}. To minimize the number of stems generated, we therefore use the sequence s_{i} for which |I_{i}| is smallest.
Observation 3
For any l-mer x, let I_{i} be the set of x's 2d-neighbors in sequence s_{i} (for 1≤i≤n). Then, for any fixed i, the (l,d)-motifs having x as an instance are included in the set of stems generated from the pairs {(x,y) : y∈I_{i}}.
The detailed MSS1 algorithm is given in Algorithm 1 and Algorithm 2.
In lines 2-18 we find the sequence in which x has the minimum number of 2d-neighbors. Also, if some sequence has no 2d-neighbor of x, the current l-mer x is skipped (line 12). The stems are then generated by placing wildcards for each pair in {x}×I_{min}, as shown in Algorithm 2.
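In outline, the whole of MSS1 can be sketched as follows in Python (building on hamming, lmers, and place_wildcards from the earlier sketches; this mirrors the structure of the pseudocode rather than reproducing it line by line):

```python
def mss1(sequences, l, d):
    """For each l-mer x of s1: find the sequence in which x has the
    fewest 2d-neighbors (I_min); skip x if some sequence has none;
    otherwise generate stems from the pairs (x, y) with y in I_min."""
    s1 = sequences[0]
    stems = set()
    for x in lmers(s1, l):
        i_min = None
        for s in sequences[1:]:
            nbrs = [y for y in lmers(s, l) if hamming(x, y) <= 2 * d]
            if not nbrs:         # some sequence has no 2d-neighbor: prune x
                i_min = None
                break
            if i_min is None or len(nbrs) < len(i_min):
                i_min = nbrs
        if i_min is None:
            continue
        for y in i_min:
            stems |= place_wildcards(x, y, d)
    return stems
```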
The Hamming distance computation is invoked O(m^{2}n) times. Therefore, excluding wildcard placement, Algorithm 1 takes O(m^{2}nl) time.
The wildcard placement procedure is called (m−l+1) times, and in the worst case |I_{min}|=m−l+1 each time, so wildcard placement (lines 4-16 of Algorithm 2) is run O(m^{2}) times. Since the number of wildcards placed is no more than d, each run takes time proportional to the number of stems it outputs, and wildcard placement in MSS1 as a whole takes time proportional to the total number of stems generated. In the best case, the wildcard placement procedure is called only once (when all other l-mers of s_{1} have no 2d-neighbors) with |I_{min}|=1; the best case for lines 4-16 occurs when d_{x}=2d, where i is forced to equal d and no wildcards can be placed in the matching region (see DISCUSSION for more analysis).
In summary, MSS1 takes O(m^{2}nl + stems) time, where stems denotes the total number of stems generated.
Algorithm 2 PlaceWildcards
MSS2: A speedup algorithm
The computation of the 2d-neighbors in a sequence s_{i} of all the l-mers of s_{1} can be thought of as the calculation of a distance matrix between all (m−l+1) l-mers of s_{1} and those of s_{i}, as shown in Figure 1B. A straightforward algorithm takes O(m^{2}l) time, and as i ranges from 2 to n the total time is O(m^{2}nl). In this section we show how to reduce this total time from O(m^{2}nl) to O(m^{2}n).
Figure 1. Illustration of speeding up the 2d-neighbors computation. A: l-mer alignments. B: computation order in the matrix.
Assume that we have computed the Hamming distance between the l-mer x_{1} of s_{1} and the l-mer x_{j} of s_{i}; call it d_{1}=HD(x_{1},x_{j}). Then HD(x_{2},x_{j+1}), where x_{2} and x_{j+1} are the next l-mers of s_{1} and s_{i}, can be obtained by comparing: 1) the first characters of x_{1} and x_{j}; and 2) the last characters of x_{2} and x_{j+1}. Observe that the (l−1)-length prefix of x_{2} is the (l−1)-length suffix of x_{1}, and that x_{j} and x_{j+1} likewise share an (l−1)-mer.
If the first characters of x_{1} and x_{j} match, then all d_{1} mismatches occur in the (l−1)-length suffixes of x_{1} and x_{j}. In this case, HD(x_{2},x_{j+1})=d_{1} if the last characters of x_{2} and x_{j+1} match, and HD(x_{2},x_{j+1})=d_{1}+1 otherwise. Similarly, if the first characters of x_{1} and x_{j} do not match, then there are (d_{1}−1) mismatches in the (l−1)-length suffixes of x_{1} and x_{j}. In this case, HD(x_{2},x_{j+1})=d_{1}−1 if the last characters of x_{2} and x_{j+1} match, and HD(x_{2},x_{j+1})=d_{1} otherwise. This observation is also mentioned in [4].
Observation 4
Given d_{1}=HD(x_{1},x_{j}), where x_{1} and x_{j} are l-mers of s_{1} and s_{i} respectively, the Hamming distance between the next two l-mers of s_{1} and s_{i} can be calculated in O(1) time as in (1):

HD(x_{2},x_{j+1}) = d_{1} − [x_{1}[1]≠x_{j}[1]] + [x_{2}[l]≠x_{j+1}[l]]    (1)

where [c] is 1 if condition c holds and 0 otherwise.
However, when OBSERVATION 4 is applied repeatedly along a diagonal, the diagonal stops at the end of s_{i}, and the alignments in which the l-mer of s_{1} starts at a later position than the l-mer of s_{i} would be left out. We simply append a copy of s_{i} to s_{i} to cover all the pairwise alignments (Figure 1A). Then, by calculating the Hamming distance directly only once per diagonal and applying OBSERVATION 4 repeatedly, each diagonal in the matrix of Figure 1B can be computed in O(l+m) time.
We do the above for the m diagonals of the matrix in Figure 1B. The first and last (m−l+1) rows are then used to form a complete (m−l+1)×(m−l+1) matrix; the l rows in the middle are eliminated, since they are the extra rows caused by appending a copy of s_{i}. Therefore, the 2d-neighbors in any sequence s_{i} of all the l-mers of s_{1} can be computed in O(m(m+l))=O(m^{2}) time, and the computation for all the sequences from s_{2} to s_{n} takes a total of O(m^{2}n) time.
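The following Python sketch illustrates the diagonal computation; for clarity it appends an explicit copy of s_{i}, whereas Algorithm 3 (described next) uses the mod operation on indices instead. Names and indexing details are illustrative:

```python
def hd_matrix(s1, si, l):
    """All-pairs Hamming distances between the l-mers of s1 and of si
    in O(m^2) time via the OBSERVATION 4 recurrence, one diagonal at a
    time. A copy of si is appended (Figure 1A) so that every alignment
    lies on some diagonal; windows straddling the copy boundary are
    skipped, and rows from the copy are mapped back with a mod."""
    m = len(s1)                 # assume |s1| = |si| = m
    n1 = m - l + 1              # number of l-mers per sequence
    D = [[0] * n1 for _ in range(n1)]
    s2 = si + si                # appended copy of si
    for j in range(m):          # one diagonal per starting offset j
        # seed cell of the diagonal: direct O(l) computation
        d1 = sum(s1[t] != s2[j + t] for t in range(l))
        if j < n1:
            D[0][j] = d1
        for k in range(1, n1):  # slide down the diagonal, O(1) per step
            d1 -= s1[k - 1] != s2[j + k - 1]          # leading characters leave
            d1 += s1[k + l - 1] != s2[j + k + l - 1]  # trailing characters enter
            col = (j + k) % m   # map rows from the copy back onto si
            if col < n1:        # skip windows straddling the copy boundary
                D[k][col] = d1
    return D
```

The 2d-neighbors in s_{i} of the k-th l-mer of s_{1} are then exactly the columns c with D[k][c]≤2d.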
The pseudocode is given in Algorithm 3. In lines 6-10, the Hamming distance is calculated directly for the alignment of s_{1} with the j-th position of s_{i}. Each of the remaining Hamming distances in this alignment is then obtained in constant time using OBSERVATION 4 (lines 12-26). Instead of appending a copy of s_{i}, the mod operation is used.
N_{2d}[k][i] stores the 2d-neighbors in the i-th sequence s_{i} of the k-th l-mer of s_{1}. Building this matrix of 2d-neighbors for all the l-mers of s_{1} takes O(m^{2}n) time (lines 3-28). Lines 29-41 search the 2d-neighbors of each l-mer of s_{1}. If some sequence s_{j} has no 2d-neighbors, the current (i-th) l-mer of s_{1} is skipped (lines 32-34). Otherwise, the 2d-neighbors in the sequence with the smallest neighbor set, I_{min}, are used and the wildcards are placed.
MSS2 takes O(m^{2}n + stems) time.
Optionally, a post-processing phase can follow both MSS1 and MSS2 to refine the output stems: a stem is retained only if it has at least one neighbor at a distance of ≤d in each sequence. This phase takes a total of O(mnl·stems) time.
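A minimal Python sketch of this post-processing phase follows; it assumes, for illustration, that a wildcard position matches any character:

```python
def postprocess(stems, sequences, d):
    """Keep a stem only if every sequence contains at least one l-mer
    within Hamming distance d of it (wildcards match anything)."""
    def stem_hd(t, y):
        return sum(a != WILDCARD and a != b for a, b in zip(t, y))
    return [t for t in stems
            if all(any(stem_hd(t, y) <= d for y in lmers(s, len(t)))
                   for s in sequences)]
```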
An estimation on the number of stems
We can estimate the expected number of stems generated by our algorithms as follows. Let q be any l-mer in s_{1}. What can we say about the I_{min} corresponding to q? Consider any sequence s other than s_{1}, and let Q be any l-mer of s. The probability p that HD(q,Q)≤2d is p=∑_{i=0}^{2d}C(l,i)(1−1/σ)^{i}(1/σ)^{l−i}, where σ=|Σ|. This implies that the expected number of such Q's is mp. As σ increases, p decreases drastically; example values are shown in Table 2 for σ=4 and σ=20. In all the previous works (see, e.g., [6]), analyses have been done assuming that all the l-mers in any sequence are independent. Under this assumption we can apply Chernoff bounds and show that the number of such Q's is O(mp) with high probability, which in turn implies that |I_{min}|=O(mp) with high probability. N_{stems}, the number of stems generated between any two l-mers at Hamming distance d_{x}, follows from the case analysis of Table 1 and is crudely bounded by O(2^{l}l^{d}). As a result, the expected number of stems generated by our algorithms is O(m^{2}p2^{l}l^{d}).
Table 2. Example values of p for |Σ|=4 and |Σ|=20
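The probability p is easy to evaluate numerically; the short Python snippet below (the function name is ours) shows how sharply p falls when moving from σ=4 to σ=20, which is what Table 2 tabulates:

```python
from math import comb

def p_le(l, k, sigma):
    """P(HD(q, Q) <= k) for two independent l-mers drawn IID uniformly
    from an alphabet of size sigma."""
    q = 1.0 / sigma
    return sum(comb(l, i) * (1 - q) ** i * q ** (l - i)
               for i in range(k + 1))

# 2d-neighbor probability p for, e.g., (l, d) = (13, 4):
print(p_le(13, 2 * 4, 4))    # DNA, sigma = 4
print(p_le(13, 2 * 4, 20))   # protein, sigma = 20: drastically smaller
```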
Algorithm 3 MSS2
Challenging instances
Consider a PMS instance with n sequences of length m each. For a given value of l, let d be the smallest integer such that the expected number of (l,d)-motifs that occur by random chance is ≥1. We refer to (l,d) as a challenging instance. We can compute challenging instances as follows. Let the alphabet under concern be Σ with |Σ|=σ. The probability that two random characters in this alphabet match is 1/σ. Then, assuming an IID background, the probability that the Hamming distance between two l-mers is no more than d is p=∑_{i=0}^{d}C(l,i)(1−1/σ)^{i}(1/σ)^{l−i}. For each sequence, the probability that a random l-mer has at least one d-neighbor (i.e., an l-mer at a Hamming distance of no more than d) in this sequence is P=1−(1−p)^{m−l+1}. This means that the expected number of randomly occurring (l,d)-motifs in the n sequences is σ^{l}P^{n}. From this we can calculate the challenging instances. For σ=4, the challenging instances are (7,1), (9,2), etc. When σ=20, the challenging instances are (7,4), (9,5), etc. Because of OBSERVATION 1, in our tests on challenging instances of protein sequences, we have used the cases (7,3), (9,4), and (11,5).
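This calculation can be scripted directly. The sketch below (reusing p_le from the previous snippet) returns the smallest such d for given l, σ, n, and m under the stated IID assumptions:

```python
def challenging_d(l, sigma, n=20, m=600):
    """Smallest d such that the expected number of random (l, d)-motifs,
    sigma^l * P^n with P = 1 - (1 - p)^(m - l + 1), is at least 1."""
    for d in range(l + 1):
        p = p_le(l, d, sigma)
        P = 1 - (1 - p) ** (m - l + 1)
        if sigma ** l * P ** n >= 1:
            return d
    return None

# e.g., challenging_d(9, 4) and challenging_d(11, 4) give the DNA
# instances (9, 2) and (11, 3); challenging_d(7, 20) gives (7, 4).
```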
Results
We have evaluated our algorithms on the standard benchmark where n=20 and m=600, with |Σ|=20 (for proteins). We have used (l,d) values starting from (7,1) and going up to (21,8).
The test data was generated as follows: 1) 20 sequences of length 600 each were generated such that each character in each sequence is equally likely to be any character of the alphabet; 2) a motif of length l was generated randomly; 3) a random number (at most d) of mismatch positions was selected, and the characters in these positions were replaced by other amino acids at random to form a motif instance; 4) step 3) was repeated 20 times to generate 20 such instances, which were planted in the 20 sequences (at random positions, one instance per sequence).
We have implemented our algorithms and compared them with RISOTTO [14] and the stemming algorithm of [21]. Please note that we implemented the algorithm of [21] ourselves, since we had no access to a running version of the corresponding program. Both the running time and the number of stems generated were used as performance measures. The machine used had an Intel Core i7-2760QM 2.40 GHz processor and 4 GB of memory. The results are shown in Table 3 and Table 4; in these tables, a dash indicates that the algorithm took too long to finish. These tables show that MSS1 and MSS2 run faster than RISOTTO [14] and the stemming algorithm [21]. Since the set of stems is a superset of the true motifs, the stem set contains true motifs as well as false motifs (false-positive predictions); a smaller number of stems therefore indicates fewer false predictions. The proposed algorithms generate a much smaller subset of the stems generated by the stemming algorithm [21]. Since instances such as (7,1), (9,2), (11,3), etc. are commonly used for DNA sequences, we have also tested the algorithms on more challenging cases such as (7,3), (9,4), and (11,5), as shown in Table 5. In addition to the case of σ=20, we have also tried alphabet sizes 40, 60, 80, and 100. Table 6 displays the running times for the various alphabet sizes. The fact that the run times are nearly the same for different alphabet sizes indicates that the running times of all the algorithms are essentially independent of the alphabet size. The post-processing phase takes longer as the alphabet size increases, since the number of stems increases.
Table 3. Time comparison of MSS, RISOTTO, and stemming algorithms
Table 4. Number of stems generated by MSS and stemming algorithms
Table 5. Comparison of MSS, RISOTTO, and stemming algorithms on challenging instances
Table 6. Statistics on different alphabet sizes
Owing to its better performance, we have used MSS2 on real biological protein data. From the Minimotif Miner 3.0 database [1], we randomly sampled 14 protein motifs, each of which has multiple source proteins. A comparison of MSS, RISOTTO, and stemming is shown in Table 7.
Table 7. Motif search on protein data
Finally, we have compared the MSS2 algorithm with PMSPrune, a well-known Planted Motif Search (PMS) algorithm, on DNA sequences [22]. As is clear from Table 8, MSS2 is not as fast as PMSPrune. On DNA sequences, the number of spurious motifs is very large; therefore, the Motif Stem Search algorithms, which are efficient for large alphabets, are not as efficient for small alphabets.
Table 8. MSS2 vs. PMSPrune on DNA data
Discussion and conclusions
The analysis in [21] shows that, assuming an IID background, the expected number of (l,2d)-motifs depends strongly on the alphabet size |Σ|. Therefore, when |Σ| is large, the expected number of 2d-neighbors in the n sequences of length m each is very small in comparison with the total number of l-mers, n(m−l+1).
The proposed algorithms consider an even smaller set of candidates by introducing I_{min}: for any given l-mer x, we focus on the sequence that has the smallest number of 2d-neighbors of x. Since the minimum is no more than the average, the expected size of I_{min} is at most 1/(n−1) times the total number of 2d-neighbors of x in all the other sequences. Please note that we do not miss any of the valid motifs.
On the other hand, when generating the stems, as shown in Table 1, once i wildcards are placed in the non-matching region, the number of wildcards in the matching region is upper-bounded by d−max(i,d_{x}−i). However, it is not necessary to enumerate all the cases from 0 to d−max(i,d_{x}−i) wildcards in the matching region: as long as the placement of (d−max(i,d_{x}−i)) wildcards is not eliminated, the placements of 0 to (d−max(i,d_{x}−i)−1) wildcards are subsumed by it. Therefore, the proposed algorithms do not enumerate the placements of 0 to (d−max(i,d_{x}−i)−1) wildcards in the output.
In the computation of the 2d-neighbors, MSS1 takes O(m^{2}nl) time and O(m) space, while MSS2 takes O(m^{2}n) time and O(m^{2}) space. The stemming algorithm of [21] uses sorting to compute the set I.
The proposed algorithms MSS1 and MSS2 provide an efficient way to solve the Motif Stem Search problem in terms of both time and space. Moreover, the stems generated by MSS1 and MSS2 form a much smaller subset, with fewer false predictions, of the stems generated by the algorithm of [21].
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
TM contributed to the implementation of the algorithms, manuscript preparation, algorithms development, and performance analysis. SR contributed to algorithms development, analysis of the results, performance analysis, and manuscript preparation. Both authors read and approved the final manuscript.
Acknowledgements
This work has been supported in part by the following grants: NSF 0829916 and NIH R01LM010101.
References

1. Mi T, Merlin JC, Deverasetty S, Gryk MR, Bill TJ, Brooks AW, Lee LY, Rathnayake V, Ross CA, Sargeant DP, Strong CL, Watts P, Rajasekaran S, Schiller MR: Minimotif Miner 3.0: database expansion and significantly improved reduction of false-positive predictions from consensus sequences. Nucleic Acids Res 2012, 40:D252-D260.
2. Gould CM, Diella F, Via A, Puntervoll P, Gemund C, Chabanis-Davidson S, Michael S, Sayadi A, Bryne JC, Chica C, Seiler M, Davey NE, Haslam NJ, Weatheritt RJ, Budd A, Hughes T, Pas J, Rychlewski L, Trave G, Aasland R, Helmer-Citterich M, Linding R, Gibson TJ: ELM: the status of the 2010 eukaryotic linear motif resource.
3. Obenauer JC, Cantley LC, Yaffe MB: Scansite 2.0: proteome-wide prediction of cell signaling interactions using short sequence motifs. Nucleic Acids Res 2003, 31:3635-3641.
4. Pevzner PA, Sze SH: Combinatorial approaches to finding subtle signals in DNA sequences. In Proceedings of the Eighth International Conference on Intelligent Systems for Molecular Biology. Menlo Park: AAAI Press; 2000:269-278.
5. Price A, Ramabhadran S, Pevzner PA: Finding subtle motifs by branching from sample strings. Bioinformatics 2003, 19:149-155.
6. Buhler J, Tompa M: Finding motifs using random projections. J Comput Biol 2002, 9:225-242.
7. Bailey TL, Elkan C: Unsupervised learning of multiple motifs in biopolymers using expectation maximization.
8. Lawrence CE, Altschul SF, Boguski MS, Liu JS, Neuwald AF, Wootton JC: Detecting subtle sequence signals: a Gibbs sampling strategy for multiple alignment. Science 1993, 262:208-214.
9. Rocke E, Tompa M: An algorithm for finding novel gapped motifs in DNA sequences. In Proceedings of the Second Annual International Conference on Computational Molecular Biology, RECOMB '98. New York: ACM; 1998:228-233.
10. Keich U, Pevzner P: Finding motifs in the twilight zone. Bioinformatics 2002, 18:1374-1381.
11. Bailey TL, Elkan C: Fitting a mixture model by expectation maximization to discover motifs in biopolymers. In Proceedings of the Second International Conference on Intelligent Systems for Molecular Biology. Stanford, CA: AAAI Press; 1994:28-36.
12. Hertz GZ, Stormo GD: Identifying DNA and protein patterns with statistically significant alignments of multiple sequences. Bioinformatics 1999, 15:563-577.
13. Eskin E, Pevzner PA: Finding composite regulatory patterns in DNA sequences. Bioinformatics 2002, 18:354-363.
14. Pisanti N, Carvalho A, Marsan L, Sagot MF: RISOTTO: fast extraction of motifs with mismatches. In LATIN 2006: Theoretical Informatics, Volume 3887 of Lecture Notes in Computer Science. Edited by Correa J, Hevia A, Kiwi M. Berlin/Heidelberg: Springer; 2006:757-768.
15. Rajasekaran S, Balla S, Huang CH: Exact algorithm for planted motif challenge problems.
16. Chin FYL, Leung HCM: Voting algorithms for discovering long motifs. In Proceedings of the Third Asia-Pacific Bioinformatics Conference (APBC 2005). London: Imperial College Press; 2005:261-271.
17. Davila J, Balla S, Rajasekaran S: Fast and practical algorithms for planted (l, d) motif search. IEEE/ACM Trans Comput Biol Bioinform 2007, 4(4):544-552.
18. Rajasekaran S, Dinh H: A speedup technique for (l, d) motif finding algorithms. BMC Res Notes 2011, 4:54.
19. Dinh H, Rajasekaran S, Kundeti V: PMS5: an efficient exact algorithm for the (l, d)-motif finding problem. BMC Bioinformatics 2011, 12:410.
20. Bandyopadhyay S, Sahni S, Rajasekaran S: PMS6: a fast algorithm for motif discovery. In Proc. 2012 IEEE 2nd International Conference on Computational Advances in Bio and Medical Sciences (ICCABS). New York: IEEE; 2012:1-6.
21. Kuksa P, Pavlovic V: Efficient motif finding algorithms for large-alphabet inputs. BMC Bioinformatics 2010, 11:S1.
22. Davila J, Balla S, Rajasekaran S: Fast and practical algorithms for planted (l, d) motif search. IEEE/ACM Trans Comput Biol Bioinform 2007, 4(4):544-552.