Complexity seminar

The seminar usually takes place each Friday at 13:30 in IM, rear building, ground floor.
Chair: Michal Koucký, Pavel Pudlák
The programme is announced via the mailing list.

Complexity theory beyond deterministic exponential time and applications, Part II

Igor Carboni Oliveira
MFF, UK
Friday, 17. March 2017 - 13:30 to 15:00

in IM, rear building, ground floor

Part I (this will be given at the Seminar on limits of efficient computation on March 15). I'll survey a variety of connections between non-uniform circuit lower bounds, non-trivial proof systems, non-trivial learning algorithms, and variants of natural properties. In particular, I plan to discuss how one can prove new unconditional lower bounds for standard complexity classes such as REXP, ZPEXP, BPEXP, and NEXP by designing algorithms with a non-trivial running time, and to present some open problems.

Part II. I'll focus on a recent application of some of the complexity-theoretic techniques behind these results. I'll describe in more detail recent progress on the problem of efficiently generating large canonical prime numbers, via pseudodeterministic pseudorandomness. If there is enough time and interest, we will also discuss some non-constructive aspects of the result, and connections to derandomization. Based on joint work with Rahul Santhanam.
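
The notion of pseudodeterminism may be unfamiliar, so here is a toy Python sketch, offered as an illustration only (the Oliveira-Santhanam construction is far more sophisticated, working unconditionally in subexponential time for infinitely many input lengths): the procedure below is internally randomized, via the Miller-Rabin primality test, yet with high probability every run returns the same canonical output, the least prime at or above the input.

    import random

    def miller_rabin(n, rounds=40):
        # Randomized primality test; errs with probability at most 4**(-rounds).
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def canonical_prime_above(n):
        # Pseudodeterministic: uses randomness internally, but with high
        # probability every run returns the same value, the least prime >= n.
        k = n
        while not miller_rabin(k):
            k += 1
        return k

    print(canonical_prime_above(10**6))  # prints 1000003 on (almost) every run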

Lower bounds on non-adaptive data structures for median and predecessor search

Anup Rao
Univ. of Washington, Seattle, USA
Friday, 10. March 2017 - 13:30 to 15:00

in IM, rear building, ground floor

What is the best way to maintain a subset of {1,2,...,m} so that the median of the set can be easily recovered? We are interested in algorithms that access the fewest memory locations when adding an element to the set and when computing the median of the set. We prove the first lower bounds for such algorithms. We say that an algorithm handles insertions non-adaptively if the memory locations accessed depend only on the element being inserted, and not on the contents of the memory. If the algorithm handles insertions non-adaptively, then our lower bounds imply that Binary Search Trees are essentially optimal (our results give a more general tradeoff between the parameters of the algorithm). In addition, we investigate the predecessor search problem, where the algorithm is required to compute the predecessor of any element in the set. Here we prove that if predecessors can be computed non-adaptively, then again Binary Search Trees are essentially optimal. Our…
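
To make the model concrete, here is a minimal Python sketch, an illustration rather than the construction analysed in the talk, of a structure with non-adaptive insertions: a Fenwick (binary indexed) tree over {1, ..., m}. Inserting x writes to cells whose addresses are determined by x alone, and the median is recovered by an adaptive binary descent, giving O(log m) accesses on both sides -- the BST-like tradeoff the lower bounds show to be essentially optimal.

    class FenwickMedian:
        # Maintains a multiset of elements from {1, ..., m}.
        # insert(x) writes O(log m) cells whose addresses depend only on x,
        # not on the memory contents -- i.e. insertions are non-adaptive.
        # median() adaptively descends the implicit tree in O(log m) probes.

        def __init__(self, m):
            self.m = m
            self.tree = [0] * (m + 1)  # 1-indexed Fenwick array
            self.size = 0

        def insert(self, x):
            self.size += 1
            while x <= self.m:         # cells x, x + (x & -x), ... fixed by x alone
                self.tree[x] += 1
                x += x & -x

        def median(self):
            k = (self.size + 1) // 2   # rank of the (lower) median
            pos = 0
            for step in (1 << i for i in range(self.m.bit_length(), -1, -1)):
                if pos + step <= self.m and self.tree[pos + step] < k:
                    pos += step
                    k -= self.tree[pos]
            return pos + 1

    s = FenwickMedian(8)
    for x in (3, 5, 7):
        s.insert(x)
    print(s.median())  # 5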

Interactive Oracle Proofs

Ariel Gabizon
Technion - Israel Institute of Technology, Haifa
Friday, 2. December 2016 - 13:30 to 15:00

in IM, rear building, ground floor

We investigate the Interactive Oracle Proof (IOP) model, which generalizes both Probabilistically Checkable Proofs (PCPs) and classical Interactive Proofs (IPs). An IOP is an interactive protocol in which the prover sends long "oracles" as messages, of which the verifier reads only a small number of locations. The main justification for the model is that, similarly to PCPs, an IOP can be compiled into a short computationally sound proof whose length depends almost solely on the number of bits actually read by the verifier. We show that this added interaction is beneficial. For instance, we obtain:
- a perfect zero-knowledge IOP for #P languages, whereas for standard IPs only computational zero-knowledge is known;
- a linear proof-size IOP for circuit satisfiability with constant verifier query complexity, whereas for PCPs only quasilinear proof size is known.
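
To fix the access pattern that defines the model, here is a schematic Python fragment (all names and parameters are invented for this sketch, not taken from the paper): the prover's message is a long oracle, and the verifier's view of a round consists of just the few positions it chooses to probe.

    import random

    def verifier_view(oracle, num_queries, rng):
        # The verifier reads only num_queries positions of the prover's
        # (possibly huge) oracle message, chosen with its own randomness.
        positions = rng.sample(range(len(oracle)), num_queries)
        return {i: oracle[i] for i in positions}

    rng = random.Random(0)
    oracle = [rng.randrange(2) for _ in range(1 << 20)]  # a 2^20-bit prover message
    print(verifier_view(oracle, num_queries=3, rng=rng))  # the verifier's entire view

The compiled proof mentioned above then only has to certify those few queried bits, which is why its length tracks the verifier's query complexity rather than the oracle length.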

Communication complexity of syntactically-guaranteed intersection search

Dmitry Gavinsky
IM
Friday, 18. November 2016 - 13:30 to 15:00

in IM, rear building, ground floor

We define a new 2-party version of the (unstructured) intersection-search problem, where the input comes from a certain bipartite product distribution but, at the same time, every possible pair of input subsets shares at least one element. In other words, the existence of a target element is guaranteed syntactically for every possible instance of the problem, and so its complexity analysis cannot be based on the hardness of "recognising" disjoint pairs. The methods we are aware of for proving communication hardness of unstructured search can be viewed as arguing discrepancy of one form or another with respect to the disjointness predicate, and therefore they are not directly applicable to our problem (where this predicate is always a contradiction). To analyse the complexity of the proposed problem, we investigate the global "rectangular structure" of a communication protocol and identify such properties of large rectangles that make…
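
The distribution in the talk is specific, but a toy Python example shows how an intersection can be guaranteed syntactically (an illustration of the phenomenon only, not the construction from the paper): if both players hold subsets of {1, ..., n} of size greater than n/2, then every pair of inputs intersects by counting, so a target element always exists even though no disjoint pairs ever need to be "recognised".

    import random

    n = 10

    def majority_subset(rng):
        # A random subset of {1, ..., n} of size n//2 + 1; any two such
        # subsets intersect, since their sizes sum to more than n.
        return set(rng.sample(range(1, n + 1), n // 2 + 1))

    rng = random.Random(42)
    for _ in range(5):
        A, B = majority_subset(rng), majority_subset(rng)
        assert A & B, "impossible: |A| + |B| > n forces a common element"
        print(min(A & B), "is a guaranteed common element")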
