DIMACS REU 2022

General Information

Student: Saachi Mutreja
School: University of California, Berkeley
E-mail: saachi@berkeley.edu
Project 1: Hardness Amplification of One-Way Functions Computed in NC0
Project 2: Metacomplexity and Non-Interactive Zero-Knowledge Proofs

Project Description

I am working on two projects this summer.

Hardness Amplification of One-Way Functions Computed in NC0 -- Given an NC0 circuit C that computes a polynomially hard one-way function, does there exist an NC0 circuit C' that computes an exponentially hard one-way function? A certain combinatorial design in C suggests possible directions.

Metacomplexity and Non-Interactive Zero-Knowledge Proofs -- Prove or disprove the equivalence of NISZK complexity classes when the verifier and simulator are restricted to compute in various complexity classes such as AC0, L, NL, and NC1.


Weekly Log

Week 1:
I read background material on hardness amplification -- relevant sections of "Computational Complexity: A Modern Approach", Yao's XOR Lemma, the Direct Product Lemma, "P = BPP if E Requires Exponential Circuits: Derandomizing the XOR Lemma" by Impagliazzo and Wigderson, and Chapter 7 of Salil Vadhan's monograph on Pseudorandomness.
Week 2:

I have quite a few ideas on the hardness amplification question -- if a circuit taking inputs of length log n is proven to compute a polynomially hard one-way function, then we get exponential hardness in n. This leads me to believe that the hardness of the circuit C is concentrated in one of the subcircuits of its combinatorial design. My goal is to prove that the one-way function computed by this circuit is strong. However, Yao's XOR Lemma and the Direct Product Lemma imply that concatenating many weak one-way functions yields a strong one-way function. Thus, I must assume that the given one-way function is weak.
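
For reference, the standard definitions behind this reasoning (stated roughly, without the exact parameters I am tracking) are the following. A function f is weakly one-way if there is a polynomial q such that every probabilistic polynomial-time adversary A satisfies, for all sufficiently large n,

    \Pr_{x \leftarrow \{0,1\}^n} [ A(f(x)) \in f^{-1}(f(x)) ] \le 1 - 1/q(n),

and strongly one-way if for every such A and every polynomial p the inversion probability is at most 1/p(n). Yao's amplification (the direct product construction) says that if f is weakly one-way, then

    f'(x_1, \ldots, x_t) = ( f(x_1), \ldots, f(x_t) )

is strongly one-way for a suitably large polynomial t = t(n).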

Week 3:

I have made major progress this week. I have managed to prove that none of the subcircuits compute weak one-way functions. I have constructed an adversary that attacks these functions -- in particular, the seminal paper by Håstad, Impagliazzo, Levin and Luby proves that every one-way function implies the existence of a pseudorandom generator. Arguing by contradiction, I assumed that the function computed by the subcircuits is a weak OWF. I read the HILL paper and calculated the exact parameters of the PRG that it implies. Then I constructed a protocol that breaks the one-way function by iterating over the seeds of the PRG. Since the range of the PRG is much smaller, whether or not this search finds a preimage depends on how far the function is from being a bijection. However, this does not necessarily pose any issues, because I have managed to tweak the protocol to create a distinguisher for the PRG in the case when f is close to a bijection. After working carefully through the parameters, it seems that most of them are fully preserved. I will write up the formal proof next week.
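
To make the seed-enumeration idea concrete, here is a minimal toy sketch in Python (G, f, and all parameters are placeholders, not my actual construction or the HILL parameters) of why a PRG whose seed space is small enough to enumerate is easy to attack: one can search its image for preimages and accept exactly the strings it can output.

    from itertools import product

    def image_of_prg(G, seed_len):
        """Enumerate the full image of a PRG G by trying every seed.

        This takes 2^seed_len evaluations of G, which is feasible whenever
        the seed length is logarithmic in the relevant input length."""
        return {G(seed): seed for seed in product((0, 1), repeat=seed_len)}

    def distinguish_from_uniform(G, seed_len, z):
        """Accept z iff z is a possible output of G.

        Pseudorandom strings are always accepted, while a uniformly random
        string of length m > seed_len is accepted with probability at most
        2^(seed_len - m)."""
        return z in image_of_prg(G, seed_len)

    def search_preimage(f, candidates, y):
        """Brute-force search for a preimage of y under f among a candidate
        set -- here, the strings obtained by enumerating the PRG's seeds."""
        for x in candidates:
            if f(x) == y:
                return x
        return None

Whether search_preimage succeeds is exactly the bijection-closeness issue above: the candidate set produced by the PRG only covers part of the domain.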

Week 4:

I wrote a proof showing that the one-way functions computed by the subcircuits are stronger than weak one-way functions -- it remains to show an upper bound on their strength, since this would determine how much hardness is preserved over a log n length input. I am working on modifying my attack protocol to invert stronger one-way functions.

I have started reading background material on the metacomplexity project. My particular question is to analyze whether the class of problems recognized by NISZK with the verifier and simulator restricted to AC0 is equal to the class of problems recognized by NISZK with the verifier and simulator restricted to log space (L). Professor Allender believes that they are unequal, since some of the distributions that the simulator needs to produce to prove zero knowledge can only be computed in L, or in AC0 with parity. I am reading about randomness extraction in AC0 and seminal papers on NISZK.

Week 5:

I have made a lot of progress on the NISZK_L vs. NISZK_AC0 question this week. In particular, I showed Professor Allender an argument that could potentially prove that NISZK_L = NISZK_AC0. He believes it is a promising argument and that the classes could indeed be equal. The crux of my protocol is as follows: randomness extractors in AC0 do not give inverse-exponential closeness to the uniform distribution in total variation distance. Rather, the closeness is only inverse-polynomial, and only when the entropy of the input is above a threshold. I managed to use this to put the Entropy Approximation problem, a complete problem for NISZK_L, in NISZK_AC0. I was able to create a protocol that uses the AC0 extractor in a way that still gives an inverse-exponential guarantee in the completeness case. I proved zero knowledge by showing that a simulator restricted to AC0 can produce a distribution that is very close to the distribution produced by the unbounded prover. I plan to write this up formally next week.
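
For context, the Entropy Approximation promise problem (as I recall it from the Goldreich-Sahai-Vadhan work on NISZK) takes as input a circuit X, viewed as a sampler for a distribution, together with a threshold k, and asks to distinguish

    YES instances:  H(X) \ge k + 1        NO instances:  H(X) \le k - 1,

where H denotes Shannon entropy. The point of the protocol above is to certify YES instances with an inverse-exponential completeness error even though the AC0 extractor itself only guarantees inverse-polynomial closeness to uniform.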

For the hardness amplification project, I have written up my result formally, and everything seems to work out. I have also managed to modify the protocol so that an adversary using it can now attack much stronger one-way functions, which means that a considerable amount of hardness is preserved. My mentor seems to agree with the results, and the next task is to prove similar results for more complicated combinatorial designs, potentially using the communicating streams model, Barrington's theorem, and the paper "Cryptography in NC0" by Applebaum, Ishai, and Kushilevitz.

Week 6:
I formally wrote down the metacomplexity result today. Professor Allender thinks it looks essentially okay. I am currently thinking about 3-4 more problems in this area, and I mainly spent this week reading relevant papers to try to gain some intuition.

Week 7:

I am moving on to a new problem in my metacomplexity project this week. I am working on proving that SZK_L is closed under L-Turing reductions. To get started, I am reading the paper "A Complete Problem for Statistical Zero Knowledge" by Amit Sahai and Salil Vadhan. I am analyzing how this paper uses the completeness of the Statistical Difference problem to prove closure of SZK under NC1 truth-table reductions and Karp reductions. I have been able to deduce that the computations in this paper's proof can be performed (with some modifications) in log space, and hence to conclude that SZK_L is also closed under NC1 truth-table reductions and Karp reductions. To prove closure under L-Turing reductions, I am thinking about possible ways to create a formula encoding (a fan-out-1 circuit) from the non-adaptive query table that results from an L-Turing reduction.
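
For reference, the Statistical Difference problem mentioned above takes two circuits C_0 and C_1, viewed as samplers, and (up to the exact constants) asks to distinguish

    YES instances:  \Delta(C_0, C_1) \ge 2/3        NO instances:  \Delta(C_0, C_1) \le 1/3,

where \Delta denotes the statistical (total variation) distance between the circuits' output distributions.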

Week 8:

I am still trying to prove that SZK_L is closed under L-Turing reductions. Professor Allender suggested looking at the queries to the oracle as a branching program, rather than trying to convert the given query truth table into a formula. I am currently thinking about a reduction that uses certain properties of the branching program's queries that can be computed in log space (for example, how many queries were answered 1). This would help me reduce the given problem to a log space complete problem such as the product of permutations or the existence of an undirected path from s to t in a graph; a toy sketch of this path-following view appears at the end of this entry.

For the project on hardness amplification of one-way functions, my current construction of an adversary against certain one-way functions proves a lower bound on the hardness of the subcircuits in the given model of the one-way function. However, this adversary exceeds its time bounds when it acts as a distinguisher. I met with Periklis and discussed possible ways to get around this issue, as well as future plans for this project (post-REU).
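
As a toy illustration of the branching-program view mentioned above -- evaluating the query structure is just following a path while maintaining a small amount of state, such as a count of 1-answers -- here is a minimal Python sketch (the data layout and names are my own placeholders, not the actual reduction):

    def run_branching_program(program, source, answers):
        """Walk a deterministic branching program whose internal nodes are
        labeled with oracle queries and whose out-edges are chosen by the
        0/1 answer to that query.

        program: dict mapping node -> (query_index, next_if_0, next_if_1)
                 for internal nodes, and node -> ("accept",) or ("reject",)
                 for sink nodes.
        answers: dict mapping query_index -> 0 or 1 (the oracle's answers).

        Returns the sink reached and the number of 1-answers seen along the
        path -- the kind of statistic that is easy to maintain in log space."""
        node, ones_seen = source, 0
        while len(program[node]) == 3:      # internal node: make a query
            query, next_if_0, next_if_1 = program[node]
            bit = answers[query]
            ones_seen += bit
            node = next_if_1 if bit else next_if_0
        return program[node][0], ones_seen

    # Tiny made-up example: two queries, accept iff their answers agree.
    prog = {
        "s":   (0, "u", "v"),
        "u":   (1, "acc", "rej"),
        "v":   (1, "rej", "acc"),
        "acc": ("accept",),
        "rej": ("reject",),
    }
    print(run_branching_program(prog, "s", {0: 1, 1: 1}))   # ('accept', 2)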

Week 9:

This week I attended the Czech-Slovak Conference on Combinatorics and Graph Theory in Prague. I also worked on my final reports for both my projects and met with Professor Allender to discuss SZK_L closure in detail.