# Quiz 5

## Milestones

| Milestone | Date |
|---|---|
| Quiz Deadline | 2023-07-27T23:59:00-04:00 |
| Late Deadline | not applicable |
| Grading Deadline | 2023-08-01T23:59:00-04:00 |

## Rubric

All questions on the quizzes in this course are out of 3 points. When assessing these quizzes, score answers as follows:

- **3** is reserved for **excellent** responses that fully explain the answer to the question, show a true understanding of the question being asked, and don't contain any errors. *As a general guideline, we expect no more than 25-30% of answers are likely to receive a 3 on any particular question.*
- **2** is awarded for a **good** response that's getting at the right idea, but might not explain it fully or might have a few minor errors. 2, indeed, is what the course expects most students are likely to earn. *As a general guideline, we expect approximately 50-60% of answers are likely to receive a 2 on any particular question.*
- **1** is awarded for an **inadequate** response that misses the mark, is wholly incorrect, or otherwise fails to answer what the question is asking.
- **0** is given to answers that are exceedingly brief, or blank, and as such are effectively **non-responses**.

The above rubric should be applied for all questions on the quiz.

**If you give a score lower than full credit, please include a brief comment** in the “Provide comments specific to this submission” section of the form. You can also re-use comments via the “Apply Previously Used Comments” option to save time, so that you don’t find yourself typing the same thing over and over!

## Grading Responsibilities

| Question | Grader |
|---|---|
| Question 1 | Ben |
| Question 2 | Taha |
| Question 3 | Inno |
| Question 4 | Chris |

## Answer Key

### Question 1

- 0, as the calculated result is less than the activation threshold.
- 0, as the calculated result is less than the activation threshold.
- 1, since the value is greater than the activation threshold.
- 11, since the value is greater than the activation threshold, we preserve the value.
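The distinction between the last two answers can be illustrated in code. This is a minimal sketch, assuming the question contrasts a step activation (which outputs only 0 or 1) with a ReLU-style activation (which preserves values above the threshold); the function names and threshold of 0 are assumptions for illustration.

```python
def step(x, threshold=0):
    """Step activation: output 1 if x exceeds the threshold, else 0."""
    return 1 if x > threshold else 0


def relu(x):
    """ReLU activation: preserve positive values, clamp negatives to 0."""
    return max(0, x)


# With a step activation, any value above the threshold yields 1;
# with ReLU, a positive value such as 11 is preserved as-is.
print(step(-3), step(11))   # a value below the threshold gives 0; above gives 1
print(relu(-3), relu(11))   # ReLU clamps to 0 below, preserves 11 above
```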

### Question 2

- Dropout helps to avoid overfitting. By randomly dropping out nodes from the network while training, the network learns not to rely too heavily on any one connection.
- Hidden layers allow the network to learn nonlinear decision boundaries. A single-layer network is only capable of learning a linear separation between classes, which isn’t a good fit (it may “underfit”) for many data sets.
- Backpropagation allows us to determine how to adjust the weights in hidden layers when training a neural network in such a way that it minimizes error/loss in the network. This makes it possible to learn parameters to maximize the accuracy of a multi-layer neural network.

### Question 3

- **15**. Each of the 4 inputs is connected to each of the 3 outputs, for a total of 12 weights. Each of the 3 outputs also has a bias, bringing the total to 15.
- **44**. Each of the 3 inputs is connected to each of the 5 hidden units, for a total of 15 weights; including the 5 hidden-unit biases brings that to 20. Each of the 5 hidden units is then connected to each of the 4 output units, adding 20 more weights, plus an additional 4 biases for the output units, for a total of 44.
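The counting in both answers follows the same pattern, which can be sketched as a small helper (the function name is mine, not from the quiz): for each consecutive pair of layers, count one weight per input–output connection plus one bias per output unit.

```python
def count_parameters(layer_sizes):
    """Total weights + biases for a fully connected network,
    given the number of units in each layer."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # one weight per connection
        total += n_out         # one bias per output unit
    return total


print(count_parameters([4, 3]))     # 4*3 + 3 = 15
print(count_parameters([3, 5, 4]))  # (3*5 + 5) + (5*4 + 4) = 44
```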

### Question 4

1.

| 18 | 8 |
|---|---|
| 16 | 26 |

2.

| 16 | 12 |
|---|---|
| 32 | 2 |
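Assuming Question 4 asks students to apply 2×2 max pooling to an input grid (a common exercise when covering convolutional networks; the input below is a hypothetical example, not the quiz's actual matrix), the operation can be sketched as:

```python
def max_pool_2x2(matrix):
    """2x2 max pooling with stride 2: each output cell is the maximum
    of the corresponding 2x2 block of the input."""
    return [
        [max(matrix[i][j], matrix[i][j + 1],
             matrix[i + 1][j], matrix[i + 1][j + 1])
         for j in range(0, len(matrix[0]), 2)]
        for i in range(0, len(matrix), 2)
    ]


# Hypothetical 4x4 input for illustration only:
grid = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
print(max_pool_2x2(grid))  # each 2x2 block collapses to its maximum
```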