Backpropagation Algorithm

Question 1
Marks : +2 | -2
Pass Ratio : 10%
Is backpropagation learning based on gradient descent along the error surface?
yes
no
cannot be said
it depends on gradient descent but not error surface
Explanation:
The weight adjustment is proportional to the negative gradient of the error with respect to the weight.
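The update described above can be sketched in a few lines. This is a minimal illustration (the function name, learning rate, and the gradient values are hypothetical), showing each weight moving opposite to the error gradient:

```python
import numpy as np

def update_weights(w, grad_E, lr=0.1):
    """Delta-rule step: the change in w is proportional to the
    NEGATIVE gradient of the error E with respect to w."""
    return w - lr * grad_E

w = np.array([0.5, -0.3])
grad = np.array([0.2, -0.4])    # hypothetical dE/dw values
w_new = update_weights(w, grad) # each weight moves downhill on the error surface
```

A positive gradient component decreases the corresponding weight, and a negative one increases it, which is exactly descent along the error surface.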
Question 2
Marks : +2 | -2
Pass Ratio : 10%
What is true regarding the backpropagation rule?
it is also called generalized delta rule
error in output is propagated backwards only to determine weight updates
there is no feedback of signal at any stage
all of the mentioned
Explanation:
All of these statements describe the backpropagation algorithm.
Question 3
Marks : +2 | -2
Pass Ratio : 20%
How can the learning process be stopped in the backpropagation rule?
there is convergence involved
no heuristic criteria exist
on basis of average gradient value
none of the mentioned
Explanation:
If the average gradient value falls below a preset threshold, the process may be stopped.
Question 4
Marks : +2 | -2
Pass Ratio : 20%
What is meant by “generalized” in the statement “backpropagation is a generalized delta rule”?
because delta rule can be extended to hidden layer units
because delta is applied to only input and output layers, thus making it more simple and generalized
it has no significance
none of the mentioned
Explanation:
The term generalized is used because delta rule could be extended to hidden layer units.
Question 5
Marks : +2 | -2
Pass Ratio : 20%
The backpropagation law is also known as the generalized delta rule. Is this true?
yes
no
Explanation:
Yes, because it fulfils the basic conditions of the delta rule.
Question 6
Marks : +2 | -2
Pass Ratio : 30%
What is true regarding the backpropagation rule?
it is a feedback neural network
actual output is determined by computing the outputs of units for each hidden layer
hidden layer outputs are not important; they are only meant to support the input and output layers
none of the mentioned
Explanation:
In the backpropagation rule, the actual output is determined by computing the outputs of the units for each hidden layer in turn.
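The forward computation described above can be sketched as follows. This is an illustrative sketch (the function names and the `(W, b)` parameter layout are assumptions), where the input is propagated layer by layer until the actual output emerges:

```python
import numpy as np

def sigmoid(z):
    """Standard logistic activation."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Compute the actual output by evaluating the units of each
    layer in sequence; `layers` is a list of (W, b) pairs."""
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)  # output of this layer feeds the next
    return a

# Toy example: one hidden layer with all-zero parameters, so every
# unit outputs sigmoid(0) = 0.5.
out = forward(np.zeros(2), [(np.zeros((2, 2)), np.zeros(2))])
```

Each layer's output is the next layer's input, which is why the hidden-layer outputs matter: the final output is built entirely from them.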
Question 7
Marks : +2 | -2
Pass Ratio : 10%
What are the general limitations of the backpropagation rule?
local minima problem
slow convergence
scaling
all of the mentioned
Explanation:
All of these are general limitations of the backpropagation algorithm.
Question 8
Marks : +2 | -2
Pass Ratio : 20%
Is there feedback in the final stage of the backpropagation algorithm?
yes
no
Explanation:
No feedback of the signal is involved at any stage, as it is a feedforward neural network; only the error is propagated backwards to determine the weight updates.
Question 9
Marks : +2 | -2
Pass Ratio : 20%
What are the general tasks that are performed with backpropagation algorithm?
pattern mapping
function approximation
prediction
all of the mentioned
Explanation:
All of these are tasks that can be performed with the backpropagation algorithm in general.
Question 10
Marks : +2 | -2
Pass Ratio : 20%
What is the objective of backpropagation algorithm?
to develop learning algorithm for multilayer feedforward neural network
to develop learning algorithm for single layer feedforward neural network
to develop learning algorithm for multilayer feedforward neural network, so that network can be trained to capture the mapping implicitly
none of the mentioned
Explanation:
The objective of the backpropagation algorithm is to develop a learning algorithm for a multilayer feedforward neural network, so that the network can be trained to capture the mapping implicitly.
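The whole idea can be illustrated end to end with a tiny network. This is a minimal sketch, not a production implementation: the dataset (XOR), network sizes, learning rate, and epoch count are all illustrative choices. It shows the forward pass, the backward propagation of the output error through the hidden layer, and the resulting decrease in error:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy dataset: the XOR mapping the network should capture implicitly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Multilayer feedforward network: input(2) -> hidden(4) -> output(1)
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def mean_squared_error():
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return np.mean((out - y) ** 2)

initial_error = mean_squared_error()
lr = 1.0
for _ in range(2000):
    # Forward pass: compute outputs of the hidden units, then the actual output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back to update ALL layers,
    # including the hidden-layer weights (the "generalized" delta rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

final_error = mean_squared_error()  # error decreases as training proceeds
```

After training, the error on the toy mapping is lower than it was at the start, which is precisely the sense in which the network "captures the mapping implicitly" in its weights.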