By Rossano Venturini

Data compression is essential to manage massive datasets; indexing is fundamental to query them. However, their goals appear to be in opposition: the former aims at minimizing data redundancies, while the latter augments the dataset with auxiliary information to speed up query resolution. In this monograph we introduce solutions that overcome this dichotomy. We begin by presenting the use of optimization techniques to improve the compression achieved by classical data compression algorithms, then we move to the design of compressed data structures that provide fast random access or efficient pattern-matching queries on the compressed dataset. These theoretical studies are supported by experimental evidence of their impact in practical scenarios.

Similar logic books

Sets, logic & numbers

This text is designed to give the student a background in the foundations of algebra and analysis. The algebra of symbolic logic and the concept of set are introduced early in the text so that the main definitional development of the complex number system flows easily from a set of postulates for the natural numbers.

Fuzzy Logic Foundations and Industrial Applications

Fuzzy Logic Foundations and Industrial Applications is an organized, edited collection of contributed chapters covering basic fuzzy logic theory, fuzzy linear programming, and applications. Special emphasis has been given to coverage of recent research results and to industrial applications of fuzzy logic.

Additional resources for Compressed Data Structures for Strings: On Searching and Extracting Strings from Compressed Textual Data

Sample text

We consider the alternative partition, which is not achievable by the booster, that subdivides L into σ/α² substrings denoted S₁, S₂, …, S_{σ/α²} of size α³ symbols each (recall that |T| = σα). Notice that each Sᵢ is drawn from an alphabet of size smaller than α³. The strings Sᵢ are compressed separately. The cost of compressing each string Sᵢ is O(α³ log α³ + log σ) = O(α³ log α + log σ). Since there are σ/α² strings Sᵢ, the cost of this partition is P = O(σα log α + (σ/α²) log σ). Therefore, by setting α = O(√(log σ / log log σ)), we have that P = O(σ log σ) bits.
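The bound on P follows by multiplying the per-string cost by the number of strings; the short derivation below restates that accounting in the excerpt's notation and then substitutes the stated α. Read it as a sketch under that reading of α, not as a quotation from the book.

```latex
\[
P \;=\; \frac{\sigma}{\alpha^{2}}\cdot O\!\bigl(\alpha^{3}\log\alpha+\log\sigma\bigr)
  \;=\; O\!\Bigl(\sigma\alpha\log\alpha+\frac{\sigma}{\alpha^{2}}\log\sigma\Bigr).
\]
% Substituting \alpha = \sqrt{\log\sigma/\log\log\sigma} and using
% \log\alpha = \Theta(\log\log\sigma):
\[
\sigma\alpha\log\alpha \;=\; O\!\bigl(\sigma\sqrt{\log\sigma\,\log\log\sigma}\bigr),
\qquad
\frac{\sigma}{\alpha^{2}}\,\log\sigma \;=\; O(\sigma\log\log\sigma),
\]
% both of which are $O(\sigma\log\sigma)$, hence $P = O(\sigma\log\sigma)$ bits.
```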

We are left with showing how to support Remove(), whose computation requires evaluating the value of Aᵢ[T[l]] for each window wᵢ. Each of these values can be computed as R[t] − B[T[l]], where t is the last occurrence of symbol T[l] in T[l : rᵢ]. The problem here is that we do not know the position t. We solve this issue by resorting to a doubly linked list L_c for each symbol c. The list L_c links together the last occurrences of c in all those windows, ordered by increasing position.
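To make the bookkeeping concrete, here is a minimal Python sketch of the underlying container: one doubly linked list per symbol, holding positions in increasing order with O(1) append and O(1) unlink of a known node. The class and variable names, the driver string, and the omission of the arrays R, B and Aᵢ are all illustrative choices, not the book's code.

```python
class Node:
    """One entry of a per-symbol list: a text position holding that symbol."""
    __slots__ = ("pos", "prev", "next")

    def __init__(self, pos):
        self.pos = pos
        self.prev = None
        self.next = None


class OccurrenceList:
    """Doubly linked list of positions in increasing order:
    O(1) append at the back, O(1) unlink of a known node."""

    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, pos):
        node = Node(pos)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

    def unlink(self, node):
        if node.prev is not None:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next is not None:
            node.next.prev = node.prev
        else:
            self.tail = node.prev

    def __iter__(self):
        cur = self.head
        while cur is not None:
            yield cur.pos
            cur = cur.next


# Hypothetical driver: index every position of every symbol of T.
T = "abracadabra"
lists = {c: OccurrenceList() for c in set(T)}
handles = [lists[c].append(i) for i, c in enumerate(T)]

# Walking the list for 'a' yields its occurrences in increasing position,
# from which the last occurrence inside any window T[l:r] can be read off.
print(list(lists["a"]))         # [0, 3, 5, 7, 10]
lists[T[0]].unlink(handles[0])  # drop position 0 once it leaves all windows
print(list(lists["a"]))         # [3, 5, 7, 10]
```

The point of a doubly linked list over a plain array is the O(1) unlink given a node handle, which is what allows last-occurrence information to be updated cheaply as the windows shrink.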

In this situation the time performance of our solution reduces to O(n log^{1+ε}(B log σ)). The map of the chapter is as follows. Sect. 2 describes the reduction from the optimal partitioning problem of T to an SSSP problem over a weighted DAG in which edges represent substrings of T and edge costs are entropy-based estimations of the compression of these substrings via C. After that, we show some properties of this DAG that permit our fast solution to the SSSP problem. The subsequent sections will address the problem of incrementally and efficiently computing those edge costs as they are needed by the SSSP computation, distinguishing the two cases of 0-th order estimators (Sect.
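As a concrete, deliberately naive picture of this reduction, the Python sketch below treats the prefix boundaries 0…n of the text as DAG nodes, every pair (i, j) with i < j as an edge standing for the segment text[i:j], and solves the SSSP by dynamic programming in left-to-right order. The cost function h0_cost_bits and the fixed 32-bit segment header are illustrative stand-ins for the entropy-based estimators via C, and the sketch materialises all Θ(n²) edges, which is precisely the cost the chapter's techniques avoid.

```python
import math
from collections import Counter


def h0_cost_bits(s):
    """Toy edge cost: |s| times the 0-th order empirical entropy of s, plus
    an arbitrary 32-bit per-segment header (an illustrative stand-in for the
    entropy-based estimators used in the chapter)."""
    n = len(s)
    counts = Counter(s)
    h0 = -sum((f / n) * math.log2(f / n) for f in counts.values())
    return n * h0 + 32


def optimal_partition(text, cost=h0_cost_bits):
    """SSSP over the partition DAG: node i is the prefix text[:i]; edge
    i -> j (i < j) is the segment text[i:j] weighted by cost(text[i:j]).
    Solved by DP in left-to-right (topological) order over O(n^2) edges."""
    n = len(text)
    dist = [math.inf] * (n + 1)
    parent = [-1] * (n + 1)
    dist[0] = 0.0
    for i in range(n):
        for j in range(i + 1, n + 1):
            w = cost(text[i:j])
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
                parent[j] = i
    # Recover the partition from the shortest 0 -> n path.
    cuts, j = [], n
    while j > 0:
        cuts.append((parent[j], j))
        j = parent[j]
    return dist[n], [text[a:b] for a, b in reversed(cuts)]


bits, segments = optimal_partition("aaaaaaaaaabbbbbbbbbbacgtacgtacgt")
print(round(bits, 1), segments)
```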
