Error calculation Flashcards
Error calculation
- Supervised training
- Sum of squares error
- Mean square error
- Root mean square error
- Data sets
Supervised training
Error calculation is very important for supervised training. In supervised training, a training set is used. The training set consists of vector pairs, each of which pairs an input with an output, or ideal. For example, vector pairs for the XOR function could be constructed as follows:
Input [0,0] -> Ideal [0]
Input [0,1] -> Ideal [1]
XOR function
The XOR function always returns either true or false: true when the two inputs are different, and false when both inputs are the same. In the above example, true is normalized to 1 and false to 0.
A common introductory task in machine learning is to use a training set like this to teach a model to emulate the XOR function.
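A minimal sketch of how the full XOR training set could be represented as input/ideal vector pairs in Python (the variable name `xor_training_set` is just for illustration):

```python
# Full XOR training set: each pair maps a two-element input vector
# to a one-element ideal (expected) output vector.
xor_training_set = [
    ([0, 0], [0]),
    ([0, 1], [1]),
    ([1, 0], [1]),
    ([1, 1], [0]),
]
```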
Types of error calculation?
In machine learning algorithms, a variety of error calculation methods are commonly used. The following are a few of them:
- Sum of squares error (SSE)
- Mean square error (MSE)
- Root mean square error (RMS)
The most commonly used error calculation method (ECM) is MSE. However, this does not mean MSE is the best choice in every scenario. Sometimes the machine learning algorithm in use dictates the ECM you should use. Multiple ECMs can also be used if a comparison is needed.
Sum of squares error (SSE)
SSE is a very simple ECM that is used broadly by some machine learning algorithms. A high SSE value indicates a large difference between the expected (ideal) output and the actual output.
SSE is essentially the sum of the squares of the individual differences between the ideal and actual outputs. Because of this, a bigger training set will always tend to produce a larger SSE value. This is one of the weaknesses of SSE: you cannot directly compare the SSE values from two training sets of different sizes.
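As a rough sketch (plain Python, assuming `ideal` and `actual` are equal-length lists of output values), SSE could be computed like this:

```python
def sum_of_squares_error(ideal, actual):
    # Sum of the squared differences between each ideal and actual output.
    return sum((i - a) ** 2 for i, a in zip(ideal, actual))

# Example: ideal XOR outputs vs. a hypothetical network's actual outputs.
print(sum_of_squares_error([0, 1, 1, 0], [0.1, 0.8, 0.9, 0.2]))  # ~0.10
```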
Root mean square error (RMS)
Root mean square error is similar to SSE in that it is based on the squares of the individual differences between the expected output and the actual output; however, the mean of all the squares is taken, and then the square root of that mean. Because RMS is based on a mean, we can compare the RMS values of two different training sets.
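A minimal sketch of RMS under the same assumptions (equal-length `ideal` and `actual` lists):

```python
import math

def root_mean_square_error(ideal, actual):
    # Mean of the squared differences, then the square root of that mean.
    mean_square = sum((i - a) ** 2 for i, a in zip(ideal, actual)) / len(ideal)
    return math.sqrt(mean_square)

print(root_mean_square_error([0, 1, 1, 0], [0.1, 0.8, 0.9, 0.2]))  # ~0.158
```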
Mean square error (MSE)
MSE is the most commonly used ECM for machine learning: most, but not all, Internet examples of neural networks, support vector machines, and other models make use of mean square error.
The mean square error is essentially the mean of the squares of the individual differences. Because the individual differences are squared, it does not matter to MSE whether a difference is negative or positive.
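MSE, sketched the same way as above, is simply the RMS calculation without the final square root:

```python
def mean_square_error(ideal, actual):
    # Mean of the squared differences; squaring discards the sign of each difference.
    return sum((i - a) ** 2 for i, a in zip(ideal, actual)) / len(ideal)

print(mean_square_error([0, 1, 1, 0], [0.1, 0.8, 0.9, 0.2]))  # ~0.025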
You may be wondering how to choose between RMS and MSE, since the two are very similar. One important difference is that RMS is linear with respect to the size of the errors, whereas MSE is not.
In practice, if every error in the training set were to double, the RMS would also double, whereas the MSE would not (it would quadruple).
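A quick numeric check of that claim, using the simple definitions sketched above: doubling every individual error doubles the RMS but quadruples the MSE.

```python
def mse(errors):
    # Mean of the squared individual errors.
    return sum(e ** 2 for e in errors) / len(errors)

def rms(errors):
    # Square root of the mean of the squared individual errors.
    return mse(errors) ** 0.5

errors = [0.1, 0.2, 0.3]
doubled = [e * 2 for e in errors]

print(rms(doubled) / rms(errors))  # 2.0 -> RMS scales linearly with the errors
print(mse(doubled) / mse(errors))  # 4.0 -> MSE does not; it scales with the square
```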