
Commit 2dbb0df

more typos
1 parent d789d70 commit 2dbb0df

8 files changed: +275, -285 lines changed


doc/pub/week15/html/week15-bs.html

Lines changed: 21 additions & 21 deletions
@@ -213,7 +213,10 @@
 None,
 'a-simple-feedforward-qnn-structure'),
 ('Example', 2, None, 'example'),
-('Training Output and Loss', 2, None, 'training-output-and-loss'),
+('Training output and Cost/Loss-function',
+2,
+None,
+'training-output-and-cost-loss-function'),
 ('Exampe: Variational Classifier',
 2,
 None,
@@ -229,10 +232,10 @@
 'training-qnns-and-loss-landscapes'),
 ('Gradient Computation', 2, None, 'gradient-computation'),
 ('Barren Plateaus', 2, None, 'barren-plateaus'),
-('Loss Landscape Visualization',
+('Cost/Loss-landscape visualization',
 2,
 None,
-'loss-landscape-visualization'),
+'cost-loss-landscape-visualization'),
 ('Exercises', 2, None, 'exercises'),
 ('Implementing QNNs with PennyLane',
 2,
@@ -365,14 +368,14 @@
 <!-- navigation toc: --> <li><a href="#qnn-architecture-and-models" style="font-size: 80%;">QNN Architecture and Models</a></li>
 <!-- navigation toc: --> <li><a href="#a-simple-feedforward-qnn-structure" style="font-size: 80%;">A simple feedforward QNN structure</a></li>
 <!-- navigation toc: --> <li><a href="#example" style="font-size: 80%;">Example</a></li>
-<!-- navigation toc: --> <li><a href="#training-output-and-loss" style="font-size: 80%;">Training Output and Loss</a></li>
+<!-- navigation toc: --> <li><a href="#training-output-and-cost-loss-function" style="font-size: 80%;">Training output and Cost/Loss-function</a></li>
 <!-- navigation toc: --> <li><a href="#exampe-variational-classifier" style="font-size: 80%;">Exampe: Variational Classifier</a></li>
 <!-- navigation toc: --> <li><a href="#variational-layer-algebra" style="font-size: 80%;">Variational Layer Algebra</a></li>
 <!-- navigation toc: --> <li><a href="#short-summary" style="font-size: 80%;">Short summary</a></li>
 <!-- navigation toc: --> <li><a href="#training-qnns-and-loss-landscapes" style="font-size: 80%;">Training QNNs and Loss Landscapes</a></li>
 <!-- navigation toc: --> <li><a href="#gradient-computation" style="font-size: 80%;">Gradient Computation</a></li>
 <!-- navigation toc: --> <li><a href="#barren-plateaus" style="font-size: 80%;">Barren Plateaus</a></li>
-<!-- navigation toc: --> <li><a href="#loss-landscape-visualization" style="font-size: 80%;">Loss Landscape Visualization</a></li>
+<!-- navigation toc: --> <li><a href="#cost-loss-landscape-visualization" style="font-size: 80%;">Cost/Loss-landscape visualization</a></li>
 <!-- navigation toc: --> <li><a href="#exercises" style="font-size: 80%;">Exercises</a></li>
 <!-- navigation toc: --> <li><a href="#implementing-qnns-with-pennylane" style="font-size: 80%;">Implementing QNNs with PennyLane</a></li>
 <!-- navigation toc: --> <li><a href="#using-pennylane" style="font-size: 80%;">Using PennyLane</a></li>
@@ -2189,19 +2192,19 @@ <h2 id="example" class="anchor">Example </h2>
 </p>

 <!-- !split -->
-<h2 id="training-output-and-loss" class="anchor">Training Output and Loss </h2>
+<h2 id="training-output-and-cost-loss-function" class="anchor">Training output and Cost/Loss-function </h2>

 <p>Given a QNN with output \( f(\mathbf{x};\boldsymbol\theta) \) (a real
 number or vector of real values), one must define a loss function to
 train on data. Common choices are the mean squared error (MSE) for
 regression or cross-entropy for classification. For a training set
-\( {\mathbf{x}i,y_i} \), the MSE loss is
+\( {\mathbf{x}i,y_i} \), the MSE cost/loss-function is
 </p>
 $$
-L(\boldsymbol\theta) = \frac{1}{N} \sum{i=1}^N \bigl(f(\mathbf{x}i;\boldsymbol\theta) - y_i\bigr)^2.
+C(\boldsymbol\theta) = \frac{1}{N} \sum_{i=1}^N \bigl(f(\mathbf{x}i;\boldsymbol\theta) - y_i\bigr)^2.
 $$

-<p>One then computes gradients \( \nabla{\boldsymbol\theta}L \) and updates
+<p>One then computes gradients \( \nabla{\boldsymbol\theta}C \) and updates
 parameters via gradient descent or other optimizers.
 </p>

@@ -2210,9 +2213,7 @@ <h2 id="exampe-variational-classifier" class="anchor">Exampe: Variational Classi

 <p>A binary classifier can output
 \( f(\mathbf{x};\boldsymbol\theta)=\langle Z_0\rangle \) on qubit 0, and
-predict label \( +1 \) if \( f\ge0 \), else \( -1 \). In [50], a code snippet
-defines such a classifier and uses a square loss . We will build
-similar models in Chapter 4.
+predict label \( +1 \) if \( f\ge0 \), else \( -1 \).
 </p>

 <!-- !split -->
@@ -2240,7 +2241,7 @@ <h2 id="short-summary" class="anchor">Short summary </h2>
 <!-- !split -->
 <h2 id="training-qnns-and-loss-landscapes" class="anchor">Training QNNs and Loss Landscapes </h2>

-<p>Training a QNN involves optimizing a nonconvex quantum circuit cost
+<p>Training a QNN involves optimizing a non-convex quantum circuit cost
 function. Like classical neural networks, one typically uses
 gradient-based methods. However, VQCs have unique features, as listed here.
 </p>
@@ -2259,7 +2260,7 @@ <h2 id="gradient-computation" class="anchor">Gradient Computation </h2>
 gradients by two circuit evaluations per parameter (independent of
 circuit size). PennyLane automatically applies parameter-shift rule
 when you call backward on a QNode . Optimizers: One can use gradient
-descent or more advanced optimizers (Adam, SPSA, etc.). PennyLane
+descent or more advanced optimizers (Adam, ADAgrad, RMSprop, etc.). PennyLane
 provides a qml.GradientDescentOptimizer and others. Gradients flow
 through the classical loss into the quantum circuit via the
 parameter-shift trick. In our code examples below we will see
@@ -2289,12 +2290,12 @@ <h2 id="barren-plateaus" class="anchor">Barren Plateaus </h2>
 </p>

 <!-- !split -->
-<h2 id="loss-landscape-visualization" class="anchor">Loss Landscape Visualization </h2>
+<h2 id="cost-loss-landscape-visualization" class="anchor">Cost/Loss-landscape visualization </h2>

-<p>One can imagine the loss function \( L(\boldsymbol\theta) \) over the
+<p>One can imagine the cost/loss function \( C(\boldsymbol\theta) \) over the
 parameter space. Unlike convex classical problems, this landscape may
 have many local minima and saddle points. Barren plateaus correspond
-to regions where \( \nabla L\approx 0 \) almost everywhere. Even if
+to regions where \( \nabla C\approx 0 \) almost everywhere. Even if
 plateaus are avoided, poor minima can still trap the optimizer . In
 practice, careful tuning of learning rates and adding small random
 noise can help escape shallow minima.
@@ -2319,8 +2320,7 @@ <h2 id="exercises" class="anchor">Exercises </h2>
 <!-- !split -->
 <h2 id="implementing-qnns-with-pennylane" class="anchor">Implementing QNNs with PennyLane </h2>

-<p>PennyLane is a software library for hybrid quantum-classical
-computation. It provides QNodes, differentiable quantum functions that
+<p>PennyLane provides QNodes, differentiable quantum functions that
 can be integrated with Python ML frameworks. Here we illustrate
 building and training a simple variational quantum classifier using
 PennyLane.
@@ -2648,8 +2648,8 @@ <h2 id="essential-steps-in-the-code" class="anchor">Essential steps in the code
 <p>After embedding, we apply a layer of trainable rotations and
 entangling gates (StronglyEntanglingLayers), creating a parameterized
 circuit whose outputs depend on adjustable weights . Measuring the
-expectation &#10216;Z&#10217; of the first qubit yields a value in \( [&#8211;1,1] \), which we
-convert to a class probability via \( (1 &#8211; &#10216;Z&#10217;)/2 \).
+expectation \( \langle Z\rangle \) of the first qubit yields a value in \( [-1,1] \), which we
+convert to a class probability via \( (1&#8211;\langle Z\rangle)/2 \).
 </p>
 </div>
 </div>
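
The renamed sections in this file describe a full recipe: embed the data, apply StronglyEntanglingLayers of trainable rotations and entangling gates, measure the expectation of Z on the first qubit to get \( f(\mathbf{x};\boldsymbol\theta)\in[-1,1] \), convert it to a class probability via \( (1-\langle Z\rangle)/2 \), and minimize the MSE cost \( C(\boldsymbol\theta) \) by gradient descent. The following is a minimal PennyLane sketch of that recipe, not code taken from this commit; the AngleEmbedding encoding, the toy data, and the hyperparameters are illustrative assumptions.

import pennylane as qml
from pennylane import numpy as np   # autograd-aware NumPy bundled with PennyLane

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode the features as rotation angles (the embedding choice is an assumption here)
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # Trainable rotations plus entangling gates, as in the text
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # f(x; theta) = <Z_0>, a value in [-1, 1]
    return qml.expval(qml.PauliZ(0))

def class_probability(weights, x):
    # Map <Z> in [-1, 1] to a probability in [0, 1]: (1 - <Z>)/2
    return (1 - circuit(weights, x)) / 2

def cost(weights, X, y):
    # MSE cost C(theta) = (1/N) sum_i (f(x_i; theta) - y_i)^2 with labels y_i in {-1, +1}
    loss = 0.0
    for x, target in zip(X, y):
        loss = loss + (circuit(weights, x) - target) ** 2
    return loss / len(X)

# Illustrative toy data with labels in {-1, +1}
X = np.array([[0.1, 0.9], [0.9, 0.1], [0.2, 0.8], [0.8, 0.2]])
y = np.array([1.0, -1.0, 1.0, -1.0])

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.array(0.01 * np.random.randn(*shape), requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for step in range(50):
    weights = opt.step(lambda w: cost(w, X, y), weights)

# Predict +1 if f >= 0, else -1, as stated in the text
predictions = [1 if circuit(weights, x) >= 0 else -1 for x in X]
print(predictions, float(cost(weights, X, y)))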

doc/pub/week15/html/week15-reveal.html

Lines changed: 13 additions & 16 deletions
@@ -2067,21 +2067,21 @@ <h2 id="example">Example </h2>
 </section>

 <section>
-<h2 id="training-output-and-loss">Training Output and Loss </h2>
+<h2 id="training-output-and-cost-loss-function">Training output and Cost/Loss-function </h2>

 <p>Given a QNN with output \( f(\mathbf{x};\boldsymbol\theta) \) (a real
 number or vector of real values), one must define a loss function to
 train on data. Common choices are the mean squared error (MSE) for
 regression or cross-entropy for classification. For a training set
-\( {\mathbf{x}i,y_i} \), the MSE loss is
+\( {\mathbf{x}i,y_i} \), the MSE cost/loss-function is
 </p>
 <p>&nbsp;<br>
 $$
-L(\boldsymbol\theta) = \frac{1}{N} \sum{i=1}^N \bigl(f(\mathbf{x}i;\boldsymbol\theta) - y_i\bigr)^2.
+C(\boldsymbol\theta) = \frac{1}{N} \sum_{i=1}^N \bigl(f(\mathbf{x}i;\boldsymbol\theta) - y_i\bigr)^2.
 $$
 <p>&nbsp;<br>

-<p>One then computes gradients \( \nabla{\boldsymbol\theta}L \) and updates
+<p>One then computes gradients \( \nabla{\boldsymbol\theta}C \) and updates
 parameters via gradient descent or other optimizers.
 </p>
 </section>
@@ -2091,9 +2091,7 @@ <h2 id="exampe-variational-classifier">Exampe: Variational Classifier </h2>

 <p>A binary classifier can output
 \( f(\mathbf{x};\boldsymbol\theta)=\langle Z_0\rangle \) on qubit 0, and
-predict label \( +1 \) if \( f\ge0 \), else \( -1 \). In [50], a code snippet
-defines such a classifier and uses a square loss . We will build
-similar models in Chapter 4.
+predict label \( +1 \) if \( f\ge0 \), else \( -1 \).
 </p>
 </section>

@@ -2124,7 +2122,7 @@ <h2 id="short-summary">Short summary </h2>
 <section>
 <h2 id="training-qnns-and-loss-landscapes">Training QNNs and Loss Landscapes </h2>

-<p>Training a QNN involves optimizing a nonconvex quantum circuit cost
+<p>Training a QNN involves optimizing a non-convex quantum circuit cost
 function. Like classical neural networks, one typically uses
 gradient-based methods. However, VQCs have unique features, as listed here.
 </p>
@@ -2146,7 +2144,7 @@ <h2 id="gradient-computation">Gradient Computation </h2>
 gradients by two circuit evaluations per parameter (independent of
 circuit size). PennyLane automatically applies parameter-shift rule
 when you call backward on a QNode . Optimizers: One can use gradient
-descent or more advanced optimizers (Adam, SPSA, etc.). PennyLane
+descent or more advanced optimizers (Adam, ADAgrad, RMSprop, etc.). PennyLane
 provides a qml.GradientDescentOptimizer and others. Gradients flow
 through the classical loss into the quantum circuit via the
 parameter-shift trick. In our code examples below we will see
@@ -2178,12 +2176,12 @@ <h2 id="barren-plateaus">Barren Plateaus </h2>
 </section>

 <section>
-<h2 id="loss-landscape-visualization">Loss Landscape Visualization </h2>
+<h2 id="cost-loss-landscape-visualization">Cost/Loss-landscape visualization </h2>

-<p>One can imagine the loss function \( L(\boldsymbol\theta) \) over the
+<p>One can imagine the cost/loss function \( C(\boldsymbol\theta) \) over the
 parameter space. Unlike convex classical problems, this landscape may
 have many local minima and saddle points. Barren plateaus correspond
-to regions where \( \nabla L\approx 0 \) almost everywhere. Even if
+to regions where \( \nabla C\approx 0 \) almost everywhere. Even if
 plateaus are avoided, poor minima can still trap the optimizer . In
 practice, careful tuning of learning rates and adding small random
 noise can help escape shallow minima.
@@ -2211,8 +2209,7 @@ <h2 id="exercises">Exercises </h2>
 <section>
 <h2 id="implementing-qnns-with-pennylane">Implementing QNNs with PennyLane </h2>

-<p>PennyLane is a software library for hybrid quantum-classical
-computation. It provides QNodes, differentiable quantum functions that
+<p>PennyLane provides QNodes, differentiable quantum functions that
 can be integrated with Python ML frameworks. Here we illustrate
 building and training a simple variational quantum classifier using
 PennyLane.
@@ -2543,8 +2540,8 @@ <h2 id="essential-steps-in-the-code">Essential steps in the code </h2>
 <p>After embedding, we apply a layer of trainable rotations and
 entangling gates (StronglyEntanglingLayers), creating a parameterized
 circuit whose outputs depend on adjustable weights . Measuring the
-expectation &#10216;Z&#10217; of the first qubit yields a value in \( [&#8211;1,1] \), which we
-convert to a class probability via \( (1 &#8211; &#10216;Z&#10217;)/2 \).
+expectation \( \langle Z\rangle \) of the first qubit yields a value in \( [-1,1] \), which we
+convert to a class probability via \( (1&#8211;\langle Z\rangle)/2 \).
 </p>
 </div>
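
The Gradient Computation paragraph in this file states that the parameter-shift rule gives exact gradients from two circuit evaluations per parameter, independent of circuit size. A minimal check of that statement, using an assumed single-qubit RY circuit rather than code from this commit:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def f(theta):
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))   # f(theta) = cos(theta)

theta = np.array(0.4, requires_grad=True)

# Parameter-shift rule: two circuit evaluations per parameter
shift_grad = (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2

# PennyLane's automatic differentiation applies the same rule under the hood
auto_grad = qml.grad(f)(theta)

print(shift_grad, auto_grad, -np.sin(0.4))   # all three agree: d cos(theta)/d theta = -sin(theta)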

doc/pub/week15/html/week15-solarized.html

Lines changed: 19 additions & 19 deletions
@@ -240,7 +240,10 @@
 None,
 'a-simple-feedforward-qnn-structure'),
 ('Example', 2, None, 'example'),
-('Training Output and Loss', 2, None, 'training-output-and-loss'),
+('Training output and Cost/Loss-function',
+2,
+None,
+'training-output-and-cost-loss-function'),
 ('Exampe: Variational Classifier',
 2,
 None,
@@ -256,10 +259,10 @@
 'training-qnns-and-loss-landscapes'),
 ('Gradient Computation', 2, None, 'gradient-computation'),
 ('Barren Plateaus', 2, None, 'barren-plateaus'),
-('Loss Landscape Visualization',
+('Cost/Loss-landscape visualization',
 2,
 None,
-'loss-landscape-visualization'),
+'cost-loss-landscape-visualization'),
 ('Exercises', 2, None, 'exercises'),
 ('Implementing QNNs with PennyLane',
 2,
@@ -2087,19 +2090,19 @@ <h2 id="example">Example </h2>
 </p>

 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
-<h2 id="training-output-and-loss">Training Output and Loss </h2>
+<h2 id="training-output-and-cost-loss-function">Training output and Cost/Loss-function </h2>

 <p>Given a QNN with output \( f(\mathbf{x};\boldsymbol\theta) \) (a real
 number or vector of real values), one must define a loss function to
 train on data. Common choices are the mean squared error (MSE) for
 regression or cross-entropy for classification. For a training set
-\( {\mathbf{x}i,y_i} \), the MSE loss is
+\( {\mathbf{x}i,y_i} \), the MSE cost/loss-function is
 </p>
 $$
-L(\boldsymbol\theta) = \frac{1}{N} \sum{i=1}^N \bigl(f(\mathbf{x}i;\boldsymbol\theta) - y_i\bigr)^2.
+C(\boldsymbol\theta) = \frac{1}{N} \sum_{i=1}^N \bigl(f(\mathbf{x}i;\boldsymbol\theta) - y_i\bigr)^2.
 $$

-<p>One then computes gradients \( \nabla{\boldsymbol\theta}L \) and updates
+<p>One then computes gradients \( \nabla{\boldsymbol\theta}C \) and updates
 parameters via gradient descent or other optimizers.
 </p>

@@ -2108,9 +2111,7 @@ <h2 id="exampe-variational-classifier">Exampe: Variational Classifier </h2>

 <p>A binary classifier can output
 \( f(\mathbf{x};\boldsymbol\theta)=\langle Z_0\rangle \) on qubit 0, and
-predict label \( +1 \) if \( f\ge0 \), else \( -1 \). In [50], a code snippet
-defines such a classifier and uses a square loss . We will build
-similar models in Chapter 4.
+predict label \( +1 \) if \( f\ge0 \), else \( -1 \).
 </p>

 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
@@ -2138,7 +2139,7 @@ <h2 id="short-summary">Short summary </h2>
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="training-qnns-and-loss-landscapes">Training QNNs and Loss Landscapes </h2>

-<p>Training a QNN involves optimizing a nonconvex quantum circuit cost
+<p>Training a QNN involves optimizing a non-convex quantum circuit cost
 function. Like classical neural networks, one typically uses
 gradient-based methods. However, VQCs have unique features, as listed here.
 </p>
@@ -2157,7 +2158,7 @@ <h2 id="gradient-computation">Gradient Computation </h2>
 gradients by two circuit evaluations per parameter (independent of
 circuit size). PennyLane automatically applies parameter-shift rule
 when you call backward on a QNode . Optimizers: One can use gradient
-descent or more advanced optimizers (Adam, SPSA, etc.). PennyLane
+descent or more advanced optimizers (Adam, ADAgrad, RMSprop, etc.). PennyLane
 provides a qml.GradientDescentOptimizer and others. Gradients flow
 through the classical loss into the quantum circuit via the
 parameter-shift trick. In our code examples below we will see
@@ -2187,12 +2188,12 @@ <h2 id="barren-plateaus">Barren Plateaus </h2>
 </p>

 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
-<h2 id="loss-landscape-visualization">Loss Landscape Visualization </h2>
+<h2 id="cost-loss-landscape-visualization">Cost/Loss-landscape visualization </h2>

-<p>One can imagine the loss function \( L(\boldsymbol\theta) \) over the
+<p>One can imagine the cost/loss function \( C(\boldsymbol\theta) \) over the
 parameter space. Unlike convex classical problems, this landscape may
 have many local minima and saddle points. Barren plateaus correspond
-to regions where \( \nabla L\approx 0 \) almost everywhere. Even if
+to regions where \( \nabla C\approx 0 \) almost everywhere. Even if
 plateaus are avoided, poor minima can still trap the optimizer . In
 practice, careful tuning of learning rates and adding small random
 noise can help escape shallow minima.
@@ -2217,8 +2218,7 @@ <h2 id="exercises">Exercises </h2>
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="implementing-qnns-with-pennylane">Implementing QNNs with PennyLane </h2>

-<p>PennyLane is a software library for hybrid quantum-classical
-computation. It provides QNodes, differentiable quantum functions that
+<p>PennyLane provides QNodes, differentiable quantum functions that
 can be integrated with Python ML frameworks. Here we illustrate
 building and training a simple variational quantum classifier using
 PennyLane.
@@ -2545,8 +2545,8 @@ <h2 id="essential-steps-in-the-code">Essential steps in the code </h2>
 <p>After embedding, we apply a layer of trainable rotations and
 entangling gates (StronglyEntanglingLayers), creating a parameterized
 circuit whose outputs depend on adjustable weights . Measuring the
-expectation &#10216;Z&#10217; of the first qubit yields a value in \( [&#8211;1,1] \), which we
-convert to a class probability via \( (1 &#8211; &#10216;Z&#10217;)/2 \).
+expectation \( \langle Z\rangle \) of the first qubit yields a value in \( [-1,1] \), which we
+convert to a class probability via \( (1&#8211;\langle Z\rangle)/2 \).
 </p>
 </div>
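
The renamed Cost/Loss-landscape visualization section suggests imagining \( C(\boldsymbol\theta) \) over the parameter space. One hedged way to do that for a toy two-parameter circuit is to scan a 2-D grid of parameter values and plot the resulting surface; the circuit, target value, and grid below are illustrative assumptions, not material from this commit.

import numpy as np
import matplotlib.pyplot as plt
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def f(t0, t1):
    # A deliberately small two-parameter circuit so the full landscape can be scanned
    qml.RY(t0, wires=0)
    qml.RY(t1, wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

target = -1.0                        # illustrative target output
grid = np.linspace(-np.pi, np.pi, 61)
C = np.array([[(f(a, b) - target) ** 2 for b in grid] for a in grid])

plt.contourf(grid, grid, C, levels=30)   # rows of C vary theta_0, columns vary theta_1
plt.colorbar(label="C(theta)")
plt.xlabel("theta_1")
plt.ylabel("theta_0")
plt.title("Cost over a 2-D slice of parameter space")
plt.show()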
