No. A quantum computer restricted to Clifford gates (a "Clifford-only" quantum computer) is not inherently faster than a classical computer running the stabilizer simulation prescribed by the Gottesman-Knill theorem.
The Gottesman-Knill theorem states that quantum circuits built from Clifford gates, acting on computational-basis input states and followed by Pauli measurements, can be simulated efficiently on a classical computer. The reason is that such a circuit is fully described by how it transforms the state's stabilizer generators: instead of tracking 2^n amplitudes, the simulation tracks n Pauli strings and updates them with a simple rule for each gate, which costs only polynomial time and memory.
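To make this concrete, here is a minimal Python sketch written for this answer (an illustration under simplifying assumptions, not a full simulator: global signs and Pauli measurements are omitted). It prepares a Bell state with H and CNOT while tracking only the stabilizer generators, never the state vector.

```python
# Minimal sketch of stabilizer propagation under Clifford gates.
# Each generator is a Pauli string; each gate updates it with a fixed local rule,
# so the cost per gate is polynomial in the number of qubits n.

def pauli_from_bits(x, z):
    """Map an (X-part, Z-part) bit pair back to a single-qubit Pauli letter."""
    return {(False, False): "I", (True, False): "X",
            (False, True): "Z", (True, True): "Y"}[(x, z)]

def apply_h(gen, q):
    """Conjugate a generator by H on qubit q: X <-> Z (sign of Y ignored here)."""
    gen[q] = {"I": "I", "X": "Z", "Z": "X", "Y": "Y"}[gen[q]]

def apply_cnot(gen, c, t):
    """Conjugate by CNOT(c, t): X on the control spreads to the target,
    Z on the target spreads to the control."""
    x_c, z_c = gen[c] in ("X", "Y"), gen[c] in ("Z", "Y")
    x_t, z_t = gen[t] in ("X", "Y"), gen[t] in ("Z", "Y")
    gen[c] = pauli_from_bits(x_c, z_c ^ z_t)
    gen[t] = pauli_from_bits(x_t ^ x_c, z_t)

# |00> is stabilized by ZI and IZ.
generators = [["Z", "I"], ["I", "Z"]]

# Clifford circuit: H on qubit 0, then CNOT(0, 1), i.e. Bell-state preparation.
for g in generators:
    apply_h(g, 0)
for g in generators:
    apply_cnot(g, 0, 1)

print(["".join(g) for g in generators])  # prints ['XX', 'ZZ']
```

Each gate touches only one or two letters of each of the n generators, so the cost grows polynomially with circuit size, whereas a state-vector simulation would need 2^n amplitudes. A full implementation, such as the Aaronson-Gottesman tableau algorithm, also tracks signs and handles Pauli measurements.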
Clifford gates are the gates that map Pauli operators to Pauli operators under conjugation; standard examples are the Hadamard, phase (S), and CNOT gates, and they are a central ingredient of quantum error-correcting codes. Circuits built only from Clifford gates can be simulated efficiently classically, so on their own they cannot provide a quantum computational advantage; that requires non-Clifford gates such as the T gate or arbitrary single-qubit rotations.
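To see where the Clifford boundary lies, one can check the conjugation rule directly (a standard calculation, stated here for illustration): the T gate sends X to T X T† = (X + Y)/√2, which is not a Pauli operator. A circuit containing T gates therefore cannot be described by updating a list of Pauli stabilizer generators, and the efficient classical simulation argument no longer applies.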
Achieving a quantum speedup therefore requires algorithms that use non-Clifford gates, together with the entanglement those circuits generate. Examples include Shor's algorithm for factoring large numbers and the quantum phase estimation routine used in quantum chemistry simulations, both of which rely on non-Clifford operations such as the small-angle controlled phase rotations in the quantum Fourier transform.
It's worth noting that quantum error correction, an essential ingredient of large-scale fault-tolerant quantum computing, is itself built on stabilizer codes and Clifford gates. However, universal fault-tolerant quantum computation extends beyond Clifford gates: non-Clifford gates such as the T gate are typically realized with additional techniques such as magic state distillation and injection, on top of the machinery that protects quantum information against noise.
In summary, Clifford-only quantum computers and classical stabilizer simulations both have their uses, particularly in quantum error correction, but a significant speedup over classical computation generally requires quantum algorithms that employ non-Clifford gates.