The complexity class TFNP is the search analogue of the class NP, with the additional guarantee that every
instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses, which
capture the computational complexity of important search problems from algorithmic game theory,
combinatorial optimization, and computational topology. Thus, one of the main research objectives in the
context of TFNP is to design efficient algorithms for its subclasses and, at the same time, to prove
hardness results where efficient algorithms cannot exist.
In my talk, I will show that hard-on-average TFNP problems exist under the weak assumption that a
hard-on-average language exists in NP. In terms of techniques, our results demonstrate an interesting interplay
between problems in TFNP, derandomization techniques, and zero-knowledge proofs.
Based on joint work with Moni Naor and Eylon Yogev.