You run gradient descent for 15 iterations with learning rate α = 0.3 and compute the cost J(θ) after each iteration. You find that the value of J(θ) decreases quickly and then levels off. Based on this, which of the following conclusions seems most plausible?

(a) Rather than using the current value of α, use a larger value (say α = 1.0)

(b) Rather than using the current value of α, use a smaller value (say α = 0.1)

(c) α = 0.3 is an effective choice of learning rate

(d) None of the above

Correct Option - c. Since J(θ) decreases quickly and then levels off, gradient descent is converging to a minimum, so α = 0.3 is working well and neither a larger nor a smaller learning rate is warranted.
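
The following is a minimal Python sketch (not part of the exam) of the scenario the question describes: batch gradient descent on a toy linear-regression problem, recording J(θ) after each of 15 iterations with α = 0.3. All data and names here (X, y, gradient_descent, n_iters) are illustrative assumptions; the point is that a cost curve which drops rapidly and then flattens is the signature of a well-chosen learning rate.

import numpy as np

def cost(theta, X, y):
    # Mean squared error cost: J(theta) = (1/2m) * sum((X @ theta - y)^2)
    m = len(y)
    residual = X @ theta - y
    return (residual @ residual) / (2 * m)

def gradient_descent(X, y, alpha=0.3, n_iters=15):
    # Batch gradient descent; returns final theta and J(theta) after each iteration.
    m, n = X.shape
    theta = np.zeros(n)
    history = []
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y) / m   # gradient of J with respect to theta
        theta -= alpha * grad              # simultaneous update of all parameters
        history.append(cost(theta, X, y))
    return theta, history

# Toy data: y = 1 + 2x plus noise, with a bias column prepended to X.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x + 0.1 * rng.standard_normal(50)

theta, history = gradient_descent(X, y, alpha=0.3, n_iters=15)
for i, J in enumerate(history, 1):
    print(f"iter {i:2d}: J(theta) = {J:.5f}")
# The printed J(theta) falls rapidly over the first few iterations and then
# levels off near its minimum, matching the behavior described in the question.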
