SQuAD2.0

The Stanford Question Answering Dataset

Predictions

Scores

Computational_complexity_theory

The Stanford Question Answering Dataset

Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.

What branch of theoretical computer science deals with broadly classifying computational problems by difficulty and class of relationship?

  • Ground Truth Answers: Computational complexity theoryComputational complexity theoryComputational complexity theory

  • Prediction:

By what main attribute are computational problems classified utilizing computational complexity theory?

  • Ground Truth Answers: inherent difficultytheir inherent difficultyinherent difficulty

  • Prediction:

What is the term for a task that generally lends itself to being solved by a computer?

  • Ground Truth Answers: computational problemsA computational problemcomputational problem

  • Prediction:

A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.

What measure of a computational problem broadly defines the inherent difficulty of the solution?

  • Ground Truth Answers: if its solution requires significant resourcesits solution requires significant resourcesif its solution requires significant resources

  • Prediction:

What method is used to intuitively assess or quantify the amount of resources required to solve a computational problem?

  • Ground Truth Answers: mathematical models of computationmathematical models of computationmathematical models of computation

  • Prediction:

What are two basic primary resources used to guage complexity?

  • Ground Truth Answers: time and storagetime and storagetime and storage

  • Prediction:

What unit is measured to determine circuit complexity?

  • Ground Truth Answers: number of gates in a circuitnumber of gates in a circuitnumber of gates

  • Prediction:

What practical role does defining the complexity of problems play in everyday computing?

  • Ground Truth Answers: determine the practical limits on what computers can and cannot dowhat computers can and cannot dodetermine the practical limits on what computers can and cannot do

  • Prediction:

Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can, in principle, be solved algorithmically.

What two fields of theoretical computer science closely mirror computational complexity theory?

  • Ground Truth Answers: analysis of algorithms and computability theoryanalysis of algorithms and computability theoryanalysis of algorithms and computability theory

  • Prediction:

What field of computer science analyzes the resource requirements of a specific algorithm isolated unto itself within a given problem?

  • Ground Truth Answers: analysis of algorithmsanalysis of algorithmsanalysis of algorithms

  • Prediction:

What field of computer science analyzes all possible algorithms in aggregate to determine the resource requirements needed to solve to a given problem?

  • Ground Truth Answers: computational complexity theorycomputational complexity theorycomputational complexity theory

  • Prediction:

What field of computer science is primarily concerned with determining the likelihood of whether or not a problem can ultimately be solved using algorithms?

  • Ground Truth Answers: computability theorycomputability theorycomputability theory

  • Prediction:

A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g. 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case "no"). Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input.

What is the name given to the input string of a computational problem?

  • Ground Truth Answers: problem instancea problem instanceproblem instance

  • Prediction:

In computational complexity theory, what is the term given to describe the baseline abstract question needing to be solved?

  • Ground Truth Answers: the problema problemproblem

  • Prediction:

Is a problem instance typically characterized as abstract or concrete?

  • Ground Truth Answers: concreteconcreteabstract

  • Prediction:

What is another name for any given measure of input associated with a problem?

  • Ground Truth Answers: instancesthe instanceinstance

  • Prediction:

What is the general term used to describe the output to any given input in a problem instance?

  • Ground Truth Answers: solutionthe solutionsolution

  • Prediction:

To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.

By how many kilometers does the traveling salesman problem seek to classify a route between the 15 largest cities in Germany?

  • Ground Truth Answers: 200020002000

  • Prediction:

What is one example of an instance that the quantitative answer to the traveling salesman problem fails to answer?

  • Ground Truth Answers: round trip through all sites in Milanasking for a round trip through all sites in Milan whose total length is at most 10 kma round trip through all sites in Milan whose total length is at most 10 km

  • Prediction:

What does computational complexity theory most specifically seek to answer?

  • Ground Truth Answers: computational problemscomputational problemscomputational problems

  • Prediction:

When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary.

In a computational problem, what can be described as a string over an alphabet?

  • Ground Truth Answers: problem instancea problem instanceproblem instance

  • Prediction:

What is the name of the alphabet is most commonly used in a problem instance?

  • Ground Truth Answers: binary alphabetbinarybinary

  • Prediction:

What is another term for the string of a problem instance?

  • Ground Truth Answers: bitstringsbitstringsbitstrings

  • Prediction:

In the encoding of mathematical objects, what is the way in which integers are commonly expressed?

  • Ground Truth Answers: binary notationbinary notationbinary notation

  • Prediction:

What is one way in which graphs can be encoded?

  • Ground Truth Answers: adjacency matricesdirectly via their adjacency matrices

  • Prediction:

Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input.

What kind of problems are one of the main topics studied in computational complexity theory?

  • Ground Truth Answers: Decision problemsDecision problemsDecision

  • Prediction:

What are the two simple word responses to a decision problem?

  • Ground Truth Answers: yes or noyes or noyes or no

  • Prediction:

What are the two integer responses to a decision problem?

  • Ground Truth Answers: 1 or 01 or 01 or 0

  • Prediction:

What will the output be for a member of the language of a decision problem?

  • Ground Truth Answers: yesyesyes

  • Prediction:

What answer denotes that an algorithm has accepted an input string?

  • Ground Truth Answers: yesyesyes

  • Prediction:

An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected, or not. The formal language associated with this decision problem is then the set of all connected graphs—of course, to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.

What kind of graph is an example of an input used in a decision problem?

  • Ground Truth Answers: arbitrary grapharbitraryarbitrary

  • Prediction:

What is the term for the set of all connected graphs related to this decision problem?

  • Ground Truth Answers: formal languageThe formal languageThe formal language associated with this decision problem

  • Prediction:

What encoding decision needs to be made in order to determine an exact definition of the formal language?

  • Ground Truth Answers: how graphs are encoded as binary stringshow graphs are encoded as binary stringshow graphs are encoded as binary strings

  • Prediction:

A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem, that is, it isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.

A function problem is an example of what?

  • Ground Truth Answers: a computational problema computational problema computational problem

  • Prediction:

How many outputs are expected for each input in a function problem?

  • Ground Truth Answers: a single outputsinglesingle

  • Prediction:

The traveling salesman problem is an example of what type of problem?

  • Ground Truth Answers: A function problemfunctionfunction problem

  • Prediction:

In addition to the traveling salesman problem, what is another example of a function problem?

  • Ground Truth Answers: the integer factorization probleminteger factorizationinteger factorization problem

  • Prediction:

Is the output of a functional problem typically characterized by a simple or complex answer?

  • Ground Truth Answers: complexcomplexcomplex

  • Prediction:

It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.

How can function problems typically be restated?

  • Ground Truth Answers: decision problemsas decision problemsas decision problems

  • Prediction:

If two integers are multiplied and output a value, what is this expression set called?

  • Ground Truth Answers: set of triplestriplethe set of triples (a, b, c) such that the relation a × b = c holds

  • Prediction:

To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits. Complexity theory is interested in how algorithms scale with an increase in the input size. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices?

What is a commonly used measurement used to determine the complexity of a computational problem?

  • Ground Truth Answers: how much time the best algorithm requires to solve the problemtimetime

  • Prediction:

What is one variable on which the running time may be contingent?

  • Ground Truth Answers: the instancethe instancethe size of the instance

  • Prediction:

How is the time needed to obtain the solution to a problem calculated?

  • Ground Truth Answers: as a function of the size of the instanceas a function of the size of the instancea function of the size of the instance

  • Prediction:

In what unit is the size of the input measured?

  • Ground Truth Answers: bitsbitsbits

  • Prediction:

Complexity theory seeks to define the relationship between the scale of algorithms with respect to what other variable?

  • Ground Truth Answers: an increase in the input sizeinput sizeinput size

  • Prediction:

If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis says that a problem can be solved with a feasible amount of resources if it admits a polynomial time algorithm.

Whose thesis states that the solution to a problem is solvable with reasonable resources assuming it allows for a polynomial time algorithm?

  • Ground Truth Answers: Cobham's thesisCobham'sCobham

  • Prediction:

If input size is is equal to n, what can respectively be assumed is the function of n?

  • Ground Truth Answers: the time takenthe time takenthe time taken

  • Prediction:

What term corresponds to the maximum measurement of time across all functions of n?

  • Ground Truth Answers: worst-case time complexityworst-case time complexitythe worst-case time complexity

  • Prediction:

How is worst-case time complexity written as an expression?

  • Ground Truth Answers: T(n)T(n)T(n)

  • Prediction:

Assuming that T represents a polynomial in T(n), what is the term given to the corresponding algorithm?

  • Ground Truth Answers: polynomial time algorithmpolynomial timepolynomial time algorithm

  • Prediction:

A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a thought experiment representing a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.

What is the term for a mathematical model that theoretically represents a general computing machine?

  • Ground Truth Answers: A Turing machineA Turing machineTuring machine

  • Prediction:

It is generally assumed that a Turing machine can solve anything capable of also being solved using what?

  • Ground Truth Answers: an algorithman algorithman algorithm

  • Prediction:

What is the most commonplace model utilized in complexity theory?

  • Ground Truth Answers: the Turing machinethe Turing machineTuring machine

  • Prediction:

What does a Turing machine handle on a strip of tape?

  • Ground Truth Answers: symbolssymbolssymbols

  • Prediction:

A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm.

What is generally considered to be the most basic iteration of a Turing machine?

  • Ground Truth Answers: A deterministic Turing machinedeterministicdeterministic Turing machine

  • Prediction:

What fixed set of factors determine the actions of a deterministic Turing machine

  • Ground Truth Answers: rulesrulesa fixed set of rules to determine its future actions

  • Prediction:

What is the term used to identify a deterministic Turing machine that has additional random bits?

  • Ground Truth Answers: A probabilistic Turing machineprobabilisticprobabilistic Turing machine

  • Prediction:

What type of Turing machine is capable of multiple actions and extends into a variety of computational paths?

  • Ground Truth Answers: A non-deterministic Turing machinenon-deterministicnon-deterministic Turing machine

  • Prediction:

What is the term given to algorithms that utilize random bits?

  • Ground Truth Answers: randomized algorithmsrandomized algorithmsrandomized algorithms

  • Prediction:

Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.

Turing machines are commonly employed to define what?

  • Ground Truth Answers: complexity classescomplexity classescomplexity classes

  • Prediction:

What are two factors that directly effect how powerful a Turing machine may or may not be?

  • Ground Truth Answers: time or spacetime or spacetime or space

  • Prediction:

In the determination of complexity classes, what are two examples of types of Turing machines?

  • Ground Truth Answers: probabilistic Turing machines, non-deterministic Turing machinesprobabilistic Turing machines, non-deterministic Turing machines

  • Prediction:

Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary. What all these models have in common is that the machines operate deterministically.

What is an example of a machine model that deviates from a generally accepted multi-tape Turing machine?

  • Ground Truth Answers: random access machinesrandom access machinesrandom access machines

  • Prediction:

In considering Turing machines and alternate variables, what measurement left unaffected by conversion between machine models?

  • Ground Truth Answers: computational powercomputational powercomputational power

  • Prediction:

What two resources commonly consumed by alternate models are typically known to vary?

  • Ground Truth Answers: time and memorytime and memory consumptiontime and memory consumption

  • Prediction:

What commonality do alternate machine models, such as random access machines, share with Turing machines?

  • Ground Truth Answers: the machines operate deterministicallydeterministicallythe machines operate deterministically

  • Prediction:

However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems.

What type of Turing machine can be characterized by checking multiple possibilities at the same time?

  • Ground Truth Answers: non-deterministicnon-deterministicnon-deterministic Turing machine

  • Prediction:

What often affects or facilitates ease of analysis in computational problems?

  • Ground Truth Answers: unusual resourcesmore unusual resourcesmore unusual resources

  • Prediction:

A non-deterministic Turing machine has the ability to capture what facet of useful analysis?

  • Ground Truth Answers: mathematical modelsmathematical modelsbranching

  • Prediction:

What is the most critical resource in the analysis of computational problems associated with non-deterministic Turing machines?

  • Ground Truth Answers: timenon-deterministic timenon-deterministic time

  • Prediction:

For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine M is said to operate within time f(n), if the time required by M on each input of length n is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)).

The time required to output an answer on a deterministic Turing machine is expressed as what?

  • Ground Truth Answers: state transitionsthe total number of state transitions, or stepstotal number of state transitions, or steps, the machine makes before it halts and outputs the answer

  • Prediction:

Complexity theory classifies problems based on what primary attribute?

  • Ground Truth Answers: difficultydifficultydifficulty

  • Prediction:

What is the expression used to identify any given series of problems capable of being solved within time on a deterministic Turing machine?

  • Ground Truth Answers: DTIME(f(n))DTIME(f(n)).DTIME(f(n))

  • Prediction:

What is the most critical resource measured to in assessing the determination of a Turing machine's ability to solve any given set of problems?

  • Ground Truth Answers: timetimetime

  • Prediction:

Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity.

Time and space are both examples of what type of resource?

  • Ground Truth Answers: complexity resourcescomplexity resourcescomplexity

  • Prediction:

A complexity resource can also be described as what other type of resource?

  • Ground Truth Answers: computational resourcecomputationalcomputational

  • Prediction:

What is typically used to broadly define complexity measures?

  • Ground Truth Answers: Blum complexity axiomsthe Blum complexity axiomsthe Blum complexity axioms

  • Prediction:

Communication complexity is an example of what type of measure?

  • Ground Truth Answers: Complexity measurescomplexity measurescomplexity

  • Prediction:

Decision tree is an example of what type of measure?

  • Ground Truth Answers: Complexity measurescomplexity measurescomplexity

  • Prediction:

The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities:

What are the three primary expressions used to represent case complexity?

  • Ground Truth Answers: best, worst and averagebest, worst and average casebest, worst and average case complexity

  • Prediction:

Case complexity likelihoods provide variable probabilities of what general measure?

  • Ground Truth Answers: complexity measurecomplexitycomplexity

  • Prediction:

What is one common example of a critical complexity measure?

  • Ground Truth Answers: timetime complexitytime complexity

  • Prediction:

Case complexities provide three likelihoods of what differing variable that remains the same size?

  • Ground Truth Answers: inputsinputsinputs

  • Prediction:

For example, consider the deterministic sorting algorithm quicksort. This solves the problem of sorting a list of integers that is given as the input. The worst-case is when the input is sorted or sorted in reverse order, and the algorithm takes time O(n2) for this case. If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.

What provides a solution to a list of integers provided as input that ned to be sorted?

  • Ground Truth Answers: deterministic sorting algorithm quicksortquicksortthe deterministic sorting algorithm quicksort

  • Prediction:

When extensive time is required to sort integers, this represents what case complexity?

  • Ground Truth Answers: worst-caseworstworst-case

  • Prediction:

What is the expression used to denote a worst case complexity as expressed by time taken?

  • Ground Truth Answers: O(n2)O(n2)O(n2)

  • Prediction:

To classify the computation time (or similar resources, such as space consumption), one is interested in proving upper and lower bounds on the minimum amount of time required by the most efficient algorithm solving a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity, unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound T(n) on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T(n). However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n).

Classification of resources is contingent on determining the upper and lower bounds of minimum time required by what?

  • Ground Truth Answers: the most efficient algorithmthe most efficient algorithmthe most efficient algorithm solving a given problem

  • Prediction:

The analysis of a specific algorithm is typically assigned to what field of computational science?

  • Ground Truth Answers: analysis of algorithmsanalysis of algorithmsanalysis of algorithms

  • Prediction:

Which bound of time is more difficult to establish?

  • Ground Truth Answers: lower boundslowerlower bounds

  • Prediction:

A specific algorithm demonstrating T(n) represents what measure of time complexity?

  • Ground Truth Answers: upper boundupper and lower boundsupper bound

  • Prediction:

What is the colloquial phrase used to convey the continuum of algorithms with unlimited availability irrespective of time?

  • Ground Truth Answers: all possible algorithmsall possible algorithmsall possible algorithms

  • Prediction:

Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T(n) = 7n2 + 15n + 40, in big O notation one would write T(n) = O(n2).

What expression is generally used to convey upper or lower bounds?

  • Ground Truth Answers: big O notationbig O notationbig O notation

  • Prediction:

What does a big O notation hide?

  • Ground Truth Answers: constant factors and smaller termsconstant factors and smaller termsconstant factors and smaller terms

  • Prediction:

How would one write T(n) = 7n2 + 15n + 40 in big O notation?

  • Ground Truth Answers: T(n) = O(n2)T(n) = O(n2)T(n) = O(n2)

  • Prediction:

Big O notation provides autonomy to upper and lower bounds with relationship to what?

  • Ground Truth Answers: the computational modelspecific details of the computational model usedthe specific details of the computational model used

  • Prediction:

Of course, some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following:

What has complicated definitions that prevent classification into a framework?

  • Ground Truth Answers: complexity classescomplexity classessome complexity classes

  • Prediction:

Complexity classes are generally classified into what?

  • Ground Truth Answers: frameworkframeworkframework

  • Prediction:

Difficulty in establishing a framework for complexity classes can be caused by what variable?

  • Ground Truth Answers: complicated definitionscomplicated definitionsdefinitions

  • Prediction:

But bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, Cobham-Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.

Concrete bounding of computation time frequently produces complexity classes contingent upon what?

  • Ground Truth Answers: chosen machine modelthe chosen machine modelthe chosen machine model

  • Prediction:

A multi-tape Turing machine requires what type of time for a solution?

  • Ground Truth Answers: linear timelinearlinear

  • Prediction:

A language solved in quadratic time implies the use of what type of Turing machine?

  • Ground Truth Answers: single-tape Turing machinessingle-tapesingle-tape

  • Prediction:

What thesis specifies that a polynomial relationship exists within time complexities in a computational model?

  • Ground Truth Answers: Cobham-Edmonds thesisCobham-EdmondsCobham-Edmonds thesis

  • Prediction:

Decision problems capable of being solved by a deterministic Turing machine while maintaining adherence to polynomial time belong to what class?

  • Ground Truth Answers: complexity class PPcomplexity class P

  • Prediction:

Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:

What are two examples of measurements are bound within algorithms to establish complexity classes?

  • Ground Truth Answers: time or spacetime or spacetime or space

  • Prediction:

What function is used by algorithms to define measurements like time or space?

  • Ground Truth Answers: boundingboundingbounding

  • Prediction:

Bounding of time and space or similar measurements is often used by algorithms to define what?

  • Ground Truth Answers: complexity classescomplexity classescomplexity classes

  • Prediction:

Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using Interactive proof systems. ALL is the class of all decision problems.

What are three examples of complexity classes associated with definitions established by probabilistic Turing machines?

  • Ground Truth Answers: BPP, ZPP and RPBPP, ZPP and RPBPP, ZPP and RP

  • Prediction:

AC and NC are complexity classes typically associated with what type of circuit?

  • Ground Truth Answers: BooleanBooleanBoolean circuits;

  • Prediction:

BQP and QMA are examples of complexity classes most commonly associated with what type of Turing machine?

  • Ground Truth Answers: quantumquantumquantum

  • Prediction:

What is the expression used to represent a complexity class of counting problems?

  • Ground Truth Answers: #P#P#P

  • Prediction:

IP and AM are most commonly defined by what type of proof system?

  • Ground Truth Answers: InteractiveInteractiveInteractive

  • Prediction:

For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n2), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.

What is an example of a measurement within a complexity class that would create a bigger set of problems if the bounds were relaxed?

  • Ground Truth Answers: computation timecomputation timecomputation time

  • Prediction:

In what expression can one expect to find DTIME(n)

  • Ground Truth Answers: DTIME(n2)DTIME(n2)DTIME(n2)

  • Prediction:

What theorems are responsible for determining questions of time and space requirements?

  • Ground Truth Answers: time and space hierarchy theoremstime and space hierarchy theoremstime and space hierarchy theorems

  • Prediction:

Resources are constrained by hierarchy theorems to produce what?

  • Ground Truth Answers: a proper hierarchy on the classes defineda proper hierarchy on the classesa proper hierarchy

  • Prediction:

What kind of statement is made in the effort of establishing the time and space requirements needed to enhance the ultimate number of problems solved?

  • Ground Truth Answers: quantitative statementsquantitativequantitative

  • Prediction:

The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.

What is the foundation for separation results within complexity classes?

  • Ground Truth Answers: time and space hierarchy theoremsThe time and space hierarchy theoremstime and space hierarchy theorems

  • Prediction:

What is responsible for constraining P according to the time hierarchy theorem?

  • Ground Truth Answers: EXPTIMEEXPTIMEEXPTIME

  • Prediction:

Within what variable is L constrained according to the space hierarchy theorem?

  • Ground Truth Answers: PSPACEPSPACEPSPACE

  • Prediction:

Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at least as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.

What concept is frequently used to define complexity classes?

  • Ground Truth Answers: reductiona reductionreduction

  • Prediction:

Reduction essentially takes one problem and converts into what?

  • Ground Truth Answers: another problemanother problemanother problem

  • Prediction:

According to reduction, if X and Y can be solved by the same algorithm then X performs what function in relationship to Y?

  • Ground Truth Answers: reducesreducesX reduces to Y

  • Prediction:

What are two examples of different types of reduction?

  • Ground Truth Answers: Karp reductions and Levin reductionsCook reductions, Karp reductions

  • Prediction:

Polynomial time reductions are an example of what?

  • Ground Truth Answers: the bound on the complexity of reductionstypes of reductionsthe bound on the complexity of reductions

  • Prediction:

The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.

What is the most frequently employed type of reduction?

  • Ground Truth Answers: polynomial-time reductionpolynomial-timepolynomial-time reduction

  • Prediction:

What equates to a squared integer according to polynomial time reduction?

  • Ground Truth Answers: multiplying two integersmultiplying two integersmultiplying two integers

  • Prediction:

What measurement of time is used in polynomial time reduction?

  • Ground Truth Answers: polynomial timepolynomialpolynomial time

  • Prediction:

What would need to remain constant in a multiplication algorithm to produce the same outcome whether multiplying or squaring two integers?

  • Ground Truth Answers: inputinputinput

  • Prediction:

According to polynomial time reduction squaring can ultimately be logically reduced to what?

  • Ground Truth Answers: multiplicationmultiplicationmultiplication

  • Prediction:

This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. Of course, the notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.

The complexity of problems often depends on what?

  • Ground Truth Answers: the type of reduction being usedthe type of reduction being used

  • Prediction:

What would create a conflict between a problem X and problem C within the context of reduction?

  • Ground Truth Answers: if every problem in C can be reduced to Xproblem in C is harder than X

  • Prediction:

An algorithm for X which reduces to C would us to do what?

  • Ground Truth Answers: solve any problem in Csolve any problem in Csolve any problem in C

  • Prediction:

A problem set that that is hard for the expression NP can also be stated how?

  • Ground Truth Answers: NP-hardNP-hardNP-hard problems

  • Prediction:

If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.

The hardest problems in NP can be analogously written as what class of problems?

  • Ground Truth Answers: NP-completeNP-completeNP-complete

  • Prediction:

NP complete problems contain the lowest likelihood of being located in what problem class?

  • Ground Truth Answers: NPPP

  • Prediction:

If P = NP is unsolved, and reduction is applied to a known NP-complete problem vis a vis Π2 to Π1, what conclusion can be drawn for Π1?

  • Ground Truth Answers: there is no known polynomial-time solutionno known polynomial-time solutionthere is no known polynomial-time solution

  • Prediction:

If polynomial time can be utilized within an NP-complete problem, what does the imply P is equal to?

  • Ground Truth Answers: NPNPNP

  • Prediction:

The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also member of the class NP.

What complexity class is characterized by a computational tasks and efficient algorithms?

  • Ground Truth Answers: PPP

  • Prediction:

What hypothesis is associated with the complexity class of P viewed as a mathematical abstraction with efficient algorithmic functionality?

  • Ground Truth Answers: Cobham–Edmonds thesisCobham–Edmonds thesisCobham–Edmonds thesis

  • Prediction:

What complexity class is commonly characterized by unknown algorithms to enhance solvability?

  • Ground Truth Answers: NPNPNP

  • Prediction:

What is an example of a problem that rests within the NP complexity class?

  • Ground Truth Answers: Boolean satisfiability problemBoolean satisfiability problem

  • Prediction:

In what theoretical machine is it confirmed that a problem in P belies membership in the NP class?

  • Ground Truth Answers: Turing machinesdeterministic Turing machinesdeterministic Turing machines

  • Prediction:

The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.

If P is ultimately proven to be equal tot NP, what effect would this have on the efficiency of problems?

  • Ground Truth Answers: more efficient solutionsshown to have more efficient solutionsmany important problems can be shown to have more efficient solutions

  • Prediction:

What is a particular problem in biology that would benefit from determining that P = NP?

  • Ground Truth Answers: protein structure predictionprotein structure predictionprotein structure prediction

  • Prediction:

What is the prize offered for finding a solution to P=NP?

  • Ground Truth Answers: $1,000,000US$1,000,000US$1,000,000

  • Prediction:

It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.

Who demonstrated that P= NP implies problems not present in P or NP-complete?

  • Ground Truth Answers: LadnerLadnerLadner

  • Prediction:

What is the name for a problem that meets Ladner's assertion?

  • Ground Truth Answers: NP-intermediate problemsNP-intermediate problemsNP-intermediate

  • Prediction:

What is an example of an NP-intermediate problem not known to exist in P or NP-complete?

  • Ground Truth Answers: graph isomorphism problemthe discrete logarithm problemgraph isomorphism problem, the discrete logarithm problem and the integer factorization problem

  • Prediction:

The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to Laszlo Babai and Eugene Luks has run time 2O(√(n log(n))) for graphs with n vertices.

What is the problem attributed to defining if two finite graphs are isomorphic?

  • Ground Truth Answers: The graph isomorphism problemgraph isomorphismThe graph isomorphism problem

  • Prediction:

What class is most commonly not ascribed to the graph isomorphism problem in spite of definitive determination?

  • Ground Truth Answers: NP-completeNP-completeNP-complete

  • Prediction:

What finite hierarchy implies that the graph isomorphism problem is NP-complete?

  • Ground Truth Answers: polynomial time hierarchypolynomial timepolynomial time hierarchy

  • Prediction:

To what level would the polynomial time hierarchy collapse if graph isomorphism is NP-complete?

  • Ground Truth Answers: second levelsecondsecond

  • Prediction:

Who are commonly associated with the algorithm typically considered the most effective with respect to finite polynomial hierarchy and graph isomorphism?

  • Ground Truth Answers: Laszlo Babai and Eugene LuksBabai and Eugene LuksLaszlo Babai and Eugene Luks

  • Prediction:

The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes time O(e(64/9)1/3(n.log 2)1/3(log (n.log 2))2/3) to factor an n-bit integer. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.

What computational problem is commonly associated with prime factorization?

  • Ground Truth Answers: The integer factorization probleminteger factorizationinteger factorization problem

  • Prediction:

The integer factorization problem essentially seeks to determine if the value of of an input is less than what variable?

  • Ground Truth Answers: kkk

  • Prediction:

That there currently exists no known integer factorization problem underpins what commonly used system?

  • Ground Truth Answers: modern cryptographic systemsmodern cryptographic systemsRSA algorithm

  • Prediction:

What is the most well-known algorithm associated with the integer factorization problem?

  • Ground Truth Answers: the general number field sieveRSAgeneral number field sieve

  • Prediction:

Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE, but it is possible that P = PSPACE. If P is not equal to NP, then P is not equal to PSPACE either. Since there are many known complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.

What is the unproven assumption generally ascribed to the value of complexity classes?

  • Ground Truth Answers: suspected to be unequalunequalMany known complexity classes are suspected to be unequal

  • Prediction:

What is an expression that can be used to illustrate the suspected inequality of complexity classes?

  • Ground Truth Answers: P ⊆ NP ⊆ PP ⊆ PSPACEP ⊆ NP ⊆ PP ⊆ PSPACEP ⊆ NP ⊆ PP ⊆ PSPACE

  • Prediction:

Where can the complexity classes RP, BPP, PP, BQP, MA, and PH be located?

  • Ground Truth Answers: between P and PSPACEbetween P and PSPACEbetween P and PSPACE

  • Prediction:

What evidence between and among complexity classes would signify a theoretical watershed for complexity theory?

  • Ground Truth Answers: Proving that any of these classes are unequalProving that any of these classes are unequalProving that any of these classes are unequal

  • Prediction:

Along the same lines, co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed that NP is not equal to co-NP; however, it has not yet been proven. It has been shown that if these two complexity classes are not equal then P is not equal to NP.

In what complexity class do complement problems of NP problems exist?

  • Ground Truth Answers: co-NPco-NPco-NP

  • Prediction:

How do the yes/no answers of a complement problem of NP appear?

  • Ground Truth Answers: reversedreversedreversed

  • Prediction:

What is commonly believed to be the value relationship between P and co-NP

  • Ground Truth Answers: not equalnot equalnot equal

  • Prediction:

What implication can be derived for P and NP if P and co-NP are established to be unequal?

  • Ground Truth Answers: P is not equal to NPnot equalP is not equal to NP

  • Prediction:

Similarly, it is not known if L (the set of all problems that can be solved in logarithmic space) is strictly contained in P or equal to P. Again, there are many complexity classes between the two, such as NL and NC, and it is not known if they are distinct or equal classes.

What variable is associated with all problems solved within logarithmic space?

  • Ground Truth Answers: LLL

  • Prediction:

Though unkown, what are the most commonly ascribed attributes of L in relation to P

  • Ground Truth Answers: strictly contained in P or equal to Pcontained in P or equal to P.strictly contained in P or equal to P

  • Prediction:

What lies between L and P that prevents a definitive determination of the relationship between L and P?

  • Ground Truth Answers: complexity classesmany complexity classesmany complexity classes

  • Prediction:

What are two complexity classes between L and P?

  • Ground Truth Answers: NL and NCNL and NCNL and NC

  • Prediction:

What is unknown about the complexity classes between L and P that further prevents determining the value relationship between L and P?

  • Ground Truth Answers: if they are distinct or equal classesif they are distinct or equal classesif they are distinct or equal classes

  • Prediction:

Problems that can be solved in theory (e.g., given large but finite time), but which in practice take too long for their solutions to be useful, are known as intractable problems. In complexity theory, problems that lack polynomial-time solutions are considered to be intractable for more than the smallest inputs. In fact, the Cobham–Edmonds thesis states that only those problems that can be solved in polynomial time can be feasibly computed on some computational device. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then the NP-complete problems are also intractable in this sense. To see why exponential-time algorithms might be unusable in practice, consider a program that makes 2n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 1012 operations each second, the program would run for about 4 × 1010 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. Nevertheless, a polynomial time algorithm is not always practical. If its running time is, say, n15, it is unreasonable to consider it efficient and it is still useless except on small instances.

Problems capable of theoretical solutions but consuming unreasonable time in practical application are known as what?

  • Ground Truth Answers: intractable problemsintractable problemsintractableintractable

  • Prediction:

For intractable problems lacking polynomial-time solutions, what type of algorithm might be unusable in practice?

  • Ground Truth Answers: exponential-time algorithmsexponential-timeexponential-time algorithmsexponential-time algorithms

  • Prediction:

If NP is not equal to P, viewed through this lens, what type of problems can also be considered intractable?

  • Ground Truth Answers: NP-complete problemsNP-completeNP-completeNP-complete

  • Prediction:

What intractability means in practice is open to debate. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.
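As one illustration of why NP-completeness does not rule out usable algorithms (a sketch added here, not the specific sub-quadratic algorithms the passage alludes to), the textbook dynamic-programming solution to the 0/1 knapsack problem runs in O(n·W) time, which is fast whenever the capacity W is moderate, even though the problem is NP-complete in general. The function name and example numbers below are illustrative.

```python
# Textbook 0/1 knapsack by dynamic programming: O(n * W) time.
# Pseudo-polynomial, hence fast in practice for moderate capacities W,
# even though knapsack is NP-complete in general.
def knapsack(weights, values, capacity):
    """Return the maximum total value achievable within the given capacity."""
    best = [0] * (capacity + 1)          # best[c] = best value using capacity c
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Example: items as (weight, value) pairs, capacity 10 -> optimal value 90.
print(knapsack([5, 4, 6, 3], [10, 40, 30, 50], 10))
```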

What eponymous variation of arithmetic presents a decision problem shown not to be in P?

  • Ground Truth Answers: Presburger arithmeticPresburgerPresburger arithmetic

  • Prediction:

Despite the Presburger decision problem not being in P, what has been done to obtain solutions in reasonable times in most cases?

  • Ground Truth Answers: algorithms have been writtenalgorithms have been writtenalgorithms have been written that solve the problem in reasonable times in most cases

  • Prediction:

What is an example of a problem that effective algorithms can solve over a wide range of sizes despite its theoretical intractability?

  • Ground Truth Answers: NP-complete knapsack problemNP-complete knapsackthe NP-complete knapsack problem

  • Prediction:

How quickly can an algorithm solve an NP-complete knapsack problem?

  • Ground Truth Answers: in less than quadratic timeless than quadratic timeless than quadratic time

  • Prediction:

What is an example of another problem whose large instances are routinely handled by SAT solvers?

  • Ground Truth Answers: NP-complete Boolean satisfiability problemNP-complete Boolean satisfiabilitythe NP-complete Boolean satisfiability problem

  • Prediction:

Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer.

Before research explicitly devoted to the complexity of algorithmic problems began, what had various researchers done?

  • Ground Truth Answers: foundations were laid outnumerous foundations were laid outnumerous foundations were laid out by various researchers

  • Prediction:

Who was the most influential of the researchers who laid these early foundations for the study of the complexity of algorithmic problems?

  • Ground Truth Answers: Alan TuringAlan TuringAlan Turing

  • Prediction:

What theoretical device is attributed to Alan Turing?

  • Ground Truth Answers: Turing machinesTuring machinesTuring machines

  • Prediction:

In what year did Alan Turing introduce his definitional model of a computing device?

  • Ground Truth Answers: 193619361936

  • Prediction:

In the most basic sense, what did a Turing machine emulate?

  • Ground Truth Answers: a computera computera computer

  • Prediction:

As Fortnow & Homer (2003) point out, the beginning of systematic studies in computational complexity is attributed to the seminal paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard Stearns (1965), which laid out the definitions of time and space complexity and proved the hierarchy theorems. Also, in 1965 Edmonds defined a "good" algorithm as one with running time bounded by a polynomial of the input size.
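Edmonds' criterion can be stated symbolically (an added gloss, not part of the passage): an algorithm is "good" if its worst-case running time T(n) on inputs of size n is bounded by some polynomial in n.

```latex
% Edmonds (1965): a "good" algorithm has running time bounded by a
% polynomial of the input size n.
T(n) \le c \cdot n^{k} \quad \text{for some constants } c, k
\text{ and all sufficiently large } n,
\qquad \text{i.e. } T(n) = O(n^{k}).
```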

What paper is commonly considered the bellwether ushering in systematic studies of computational complexity?

  • Ground Truth Answers: On the Computational Complexity of AlgorithmsOn the Computational Complexity of Algorithms"On the Computational Complexity of Algorithms"

  • Prediction:

What individuals were responsible for authoring "On the Computational Complexity of Algorithms"?

  • Ground Truth Answers: Juris Hartmanis and Richard StearnsJuris Hartmanis and Richard StearnsJuris Hartmanis and Richard Stearns

  • Prediction:

In what year was Hartmanis and Stearns's seminal work in computational complexity published?

  • Ground Truth Answers: 196519651965

  • Prediction:

What complexity measures were defined in "On the Computational Complexity of Algorithms"?

  • Ground Truth Answers: time and spacedefinitions of time and space complexitytime and space complexity

  • Prediction:

In what year did Edmonds characterize a "good" algorithm?

  • Ground Truth Answers: 196519651965

  • Prediction:

Earlier papers studying problems solvable by Turing machines with specific bounded resources include  John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure. As he remembers:

Who provided a definition of linear bounded automata in 1960?

  • Ground Truth Answers: John MyhillJohn MyhillJohn Myhill

  • Prediction:

In what year did Raymond Smullyan publish a study of rudimentary sets?

  • Ground Truth Answers: 196119611961

  • Prediction:

In 1962, who authored a paper on real-time computations?

  • Ground Truth Answers: Hisao YamadaHisao YamadaHisao Yamada

  • Prediction:

Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
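A small Python illustration of this point (added here, not drawn from the passage): reasonable encodings of the same object, such as binary and decimal strings for a number, differ only polynomially in length and can be converted into one another in polynomial time, so complexity classes defined up to such efficient transformations look the same under either choice. A unary encoding, by contrast, is exponentially longer than a binary one, which is why it is usually excluded.

```python
# Reasonable encodings of the same number differ only polynomially in length,
# while a unary encoding is exponentially longer than a binary one.
n = 1_000_000

binary  = format(n, "b")     # binary string, about log2(n) symbols
decimal = str(n)             # decimal string, about log10(n) symbols
unary   = "1" * n            # unary string, n symbols

print(len(binary), len(decimal), len(unary))   # 20 7 1000000
# Converting between binary and decimal (e.g., int(binary, 2)) takes time
# polynomial in the length of the string, so the choice between such
# representations does not affect which problems are feasibly solvable.
```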

What concrete choice do some proofs of complexity-theoretic theorems typically assume?

  • Ground Truth Answers: input encodinginput encodinginput encoding

  • Prediction:

To maintain a level of abstraction, the discussion is kept independent of what choice?

  • Ground Truth Answers: encodingencodingencoding

  • Prediction:

In 1967, Manuel Blum developed an axiomatic complexity theory based on his axioms and proved an important result, the so-called speed-up theorem. The field really began to flourish in 1971 when the US researcher Stephen Cook and, working independently, Leonid Levin in the USSR proved that there exist practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete.

Who is responsible for axiomatic complexity theory?

  • Ground Truth Answers: Manuel BlumManuel BlumManuel Blum

  • Prediction:

What important theorem did Manuel Blum prove based on his axioms?

  • Ground Truth Answers: speed-up theoremspeed-up theoremspeed-up theorem

  • Prediction:

What landmark paper, written by Richard Karp in 1972, advanced the understanding of the relationship between intractability and NP-completeness?

  • Ground Truth Answers: "Reducibility Among Combinatorial Problems"Reducibility Among Combinatorial Problems"Reducibility Among Combinatorial Problems"

  • Prediction:

How many diverse combinatorial and graph-theoretical problems, each infamous for its computational intractability, did Karp's paper show to be NP-complete?

  • Ground Truth Answers: 212121

  • Prediction: