On this page, I discuss some results on equivalence checking (EC) of circuits
with a common specification.
These results provide strong
evidence that EC is unsolvable even for combinational circuits, and even from "a practical point of view".
Below, I describe these results and then explain my point of view in a Q&A session. Some questions are "real" and were asked explicitly or implicitly by reviewers, and some I made up myself. (This is work in progress. I am going to add new questions and answers in the future.)
The formulation of the EC problem can be found in . Here, I discuss the "simplest version" of EC, namely the EC of combinational circuits.
Here is the main idea. Let N and N* be two single-output combinational circuits to be checked for equivalence. Let N and N* be partitioned into subcircuits N1,...,Nk and N*1,...,N*k respectively so that each pair of subcircuits Ni and N*i is toggle equivalent (the notion of toggle equivalence is defined below).
Let circuits M and M* have identical sets of input variables. M and M* are said to be toggle equivalent if M(y) ≠ M(y') ⇔ M*(y) ≠ M*(y'). That is, for each pair of input assignments y and y' on which M toggles (i.e. produces different output assignments), M* toggles too, and vice versa. (More about toggle equivalence can be found, for example, in .) Toggle equivalence of M and M* means that there is a one-to-one mapping between their output assignments. In other words, M and M* can be viewed as two different implementations of the same multi-valued function. Circuits N and N* with a common specification can be viewed as two different implementations of the same network of multi-valued gates (the pair of subcircuits Ni and N*i being two different implementations of the i-th multi-valued gate). Originally, we considered this (hypothetical) network of multi-valued gates as a common specification of N and N*. However, it is more convenient to view a partition of N (or N*) into subcircuits Ni (respectively N*i) as its specification.
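For small circuits, this definition can be checked directly by enumerating all pairs of input assignments. Below is a minimal sketch of such a brute-force check; the helper names and the example gates (AND, NAND, NOR) are my own illustrative choices, not taken from the papers discussed here.

```python
from itertools import product

def all_outputs(circuit, n_inputs):
    """Output assignment of the circuit for every input assignment, in a fixed order."""
    return [circuit(bits) for bits in product([0, 1], repeat=n_inputs)]

def toggle_equivalent(m, m_star, n_inputs):
    """M and M* are toggle equivalent iff for every pair of input assignments y, y':
    M(y) != M(y')  <=>  M*(y) != M*(y')."""
    out_m = all_outputs(m, n_inputs)
    out_s = all_outputs(m_star, n_inputs)
    return all((a != b) == (c != d)
               for (a, c), (b, d) in product(zip(out_m, out_s), repeat=2))

# Single-output example circuits over two inputs (outputs are tuples).
AND  = lambda y: (y[0] & y[1],)
NAND = lambda y: (1 - (y[0] & y[1]),)
NOR  = lambda y: (1 - (y[0] | y[1]),)

print(toggle_equivalent(AND, NAND, 2))  # True: AND and NAND toggle on the same pairs
print(toggle_equivalent(AND, NOR, 2))   # False: AND toggles on (01, 11), NOR does not
```

Note that AND and NAND are toggle equivalent without being functionally equivalent: toggle equivalence only requires a one-to-one correspondence between output assignments.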
Importantly, circuits N and N* with a common specification can be efficiently checked for equivalence if the specifications of N and N* are known. The corresponding EC procedure can be found, for example, in . (We will refer to this procedure as EC_TE, where EC stands for Equivalence Checking and TE for Toggle Equivalence.) The EC_TE procedure is linear in the number of subcircuits in Spec(N) and Spec(N*) and exponential in the granularity of Spec(N) and Spec(N*) (where the granularity of, say, Spec(N) is the size of its largest subcircuit Ni). This means that if we consider a class of equivalence checking problems where the granularities of Spec(N) and Spec(N*) are bounded by a constant, this class can be solved in linear time.
Following paper , we call EC external if no extra information about the circuits N and N* to be checked for equivalence is given. Otherwise, EC is called internal. So the EC_TE procedure is internal, because Spec(N) and Spec(N*) provide extra knowledge about N and N*. This knowledge is exactly what makes EC_TE efficient. However, if the specifications of N and N* are not known, then efficient EC of N and N* (if they are "wide") is most likely infeasible. (In other words, external EC of circuits with a common specification most likely cannot be done efficiently.) Our reasoning is as follows. If N and N* are "wide", one cannot check their equivalence by building, say, BDDs. The only way to do the job efficiently is to relate intermediate points of N and N*. However, finding toggle equivalent subcircuits Ni and N*i efficiently is very unlikely due to the huge number of subcircuits one has to enumerate. (If the granularity of Spec(N) and Spec(N*) is 100, then, roughly speaking, one has to enumerate |N|^100 × |N*|^100 pairs of subcircuits.)
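To get a feel for this blow-up, one can plug in concrete numbers. The circuit size below is my own illustrative assumption (the granularity of 100 is from the text above):

```python
# Illustrative only: the circuit size is an assumed figure, not from the paper.
N_SIZE = 10_000        # |N| = |N*| = 10,000 gates (hypothetical)
GRANULARITY = 100      # bound on subcircuit size from the text

# Rough count of candidate subcircuit pairs: |N|^100 * |N*|^100.
pairs = N_SIZE ** GRANULARITY * N_SIZE ** GRANULARITY

print(len(str(pairs)) - 1)   # prints 800, i.e. the count is 10^800
```

Even at a modest 10,000 gates per circuit, the enumeration space is about 10^800 pairs, which is far beyond any conceivable computation.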
Q. How do your results relate to state-of-the-art EC?
A. In a sense, EC of circuits N and N* with a common specification (and EC in general) can be viewed as a "decryption process". Here N is the original "message", an optimized circuit N* is an "encrypted message", and EC of N and N* is a process of recovering N from N*. The common specification of N and N* is the "key". If this key is known, the decryption process is easy; otherwise it is hard. The current design flow is similar to the behavior of a person who throws away his door key every time he leaves the house. There are basically two strategies to get back in. The first strategy is to kick the door in (if it is not strong enough). This is exactly what is done in BDD-based verification. Building BDDs of circuits N and N* means destroying their original structure. This can be done efficiently only for "fragile" (i.e. narrow) circuits. If N and N* are wide, then some method exploiting their structural similarity is employed (like finding functionally equivalent internal points). In terms of our analogy, this approach is very similar to picking the door lock. For simple locks it is possible. However, our theory implies that these locks can be made arbitrarily complex (by increasing the granularity of the common specification of N and N*). Importantly, increasing the granularity of the common specification means making logic synthesis more powerful. In other words, EC of wide circuits (without any knowledge of their common specification) is possible only for very "weak" logic synthesis procedures.
Q. Are these results of any practical importance? Current logic synthesis algorithms do not produce circuits with a common specification.
A. The main point is that the complexity of EC and power of logic synthesis are tightly related. The more perturbed an initial circuit N is by logic synthesis, the harder it is to prove the equivalence of N and the optimized circuit N* without any additional information.
In , we sketch a powerful logic synthesis procedure enabled by EC_TE. Given a circuit N and its partition into subcircuits N1,...,Nk (i.e. its specification), this procedure builds an optimized circuit N* with the same specification as N. We will refer to this procedure as LS_TE (here LS stands for Logic Synthesis and TE for Toggle Equivalence). LS_TE builds circuit N* by replacing subcircuits Ni of Spec(N) with toggle equivalent counterparts N*i in topological order. Importantly, LS_TE has the same complexity as EC_TE, that is, it is linear in the number of subcircuits Ni and exponential in the granularities of Spec(N) and Spec(N*). So if the granularity of Spec(N) is bounded by a constant (which essentially means that the granularity of Spec(N*) is also bounded by a constant), then LS_TE is linear in circuit size and so it is scalable. On the other hand, LS_TE enjoys great flexibility, because the number of subcircuits toggle equivalent to Ni is astronomical even if the number of outputs in Ni is small. (Let a subcircuit Ni have k outputs. Then the number of, say, k-output Boolean functions toggle equivalent to the function implemented by Ni is, roughly speaking, (2^k)!. For k=5 this number is equal to 2.6 × 10^35.) In , we describe a procedure that, given a subcircuit Ni, builds an optimized toggle equivalent subcircuit N*i. This procedure makes LS_TE "a reality".
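The (2^k)! count is easy to reproduce: a k-output circuit has 2^k possible output assignments, and any permutation of these assignments preserves the toggle pattern. A quick sanity check (this script is mine, not from the papers):

```python
import math

k = 5
# A k-output circuit has 2^k distinct output assignments; each of the (2^k)!
# permutations of these assignments yields a toggle equivalent function.
count = math.factorial(2 ** k)

print(count)   # 32! = 263130836933693530167218012160000000, i.e. about 2.6 * 10^35
```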
In , we show that LS_TE can be viewed as a way to address the problem of entrapment in local minima. The depth of the minimum that can (potentially) be escaped depends on the granularity of Spec(N): the larger this granularity, the deeper the local minimum that can be escaped.
Q. How is the claim that EC is "unsolvable" different from the well-known fact that EC is coNP-complete and so, most probably, cannot be solved efficiently in the general case?
A. The fact that a particular class of EC problems is hard to solve may mean two things. First, circuits N and N* of this class are very different from each other, and so there is no short proof (in a particular proof system) that they are functionally equivalent. In this case, EC is "naturally" hard. Second, circuits N and N* of this class are actually "similar", and so there are short EC proofs. However, these proofs cannot be efficiently found without some extra information. In this case, EC of N and N* is "unnaturally" hard. Our results introduce a class of EC problems (namely, EC of circuits with a common specification) that are most likely "unnaturally" hard. Another way to look at these two kinds of complexity is as follows. The first type of complexity concerns non-deterministic algorithms. That is, EC is hard even if one has an oracle suggesting the best decisions. The second type of complexity concerns deterministic algorithms. In this case, EC is easy with an oracle (i.e. for a non-deterministic algorithm) but becomes hard if one may not use an oracle (i.e. for a deterministic algorithm).
Note that the very existence of these two types of complexity is still an open question. If NP=coNP, the first type of complexity does not exist for EC problems. (In this case, there should exist a powerful enough proof system in which all EC problems have short proofs.) However, the second kind of complexity still remains even if NP=coNP (because short proofs may be exponentially hard to find) and "disappears" only if P = NP (in which case not only all EC problems should have short proofs, but there should exist an efficient algorithm for finding such proofs).