After training the network on the mean value of every attribute and getting 97.2% correctness, I will now try the GUI version 😀 because it is a lot nicer than before in graphics, of course. The pre-processing steps are similar to the previous one: all you need to do is prepare the data, load it into your workspace, and transpose it. But please notice that this experiment uses the standard error value and the worst value of each field. So the class attribute is exactly the same as before, but the atts attributes are not — you have to copy the next ten attributes to get them.
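As a rough sketch of that preparation (the file name and column range here are just placeholders — adjust them to however your copy of the dataset is stored):

>> load('cancer_data.mat')
>> atts = raw(:, 11:20)

Here `raw` is assumed to hold all attributes column-wise, with the next ten attributes (e.g. the standard error values) in columns 11–20.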
First, launch the Neural Network Fitting Tool with this script, and a window will immediately pop up on your screen (yay!)
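The command that opens it is simply:

>> nftool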
The first page of the GUI just gives a brief explanation of the NNF Tool. As with the variable and target formats used in the Command Window, this GUI uses exactly the same rules to define variables. Please prepare your variables in the MATLAB workspace before doing the classification, to ease the next steps. Click the Next button.
If you get a warning sign, it indicates that you haven't transposed your variables. Please transpose them before doing the next step. Click the Next button.
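In that case, the quick fix in the Command Window (using the same variable names as before) is just the transpose operator, since the tool expects one sample per column:

>> atts = atts'
>> class2 = class2'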
This step was skipped in the Command Window version, but it is actually adjustable by the user. By default, the proportions of Training, Validation, and Testing are 70:15:15. Other references suggest different splits, for example 80:0:20, 60:30:10, or 90:0:10. The validation part is negotiable; in several books it is eliminated altogether.
A bigger portion of training data might yield a near-perfect classifier rule, but it also increases the probability of an over-fitted model, which leads to low performance on new data. In contrast, a smaller portion of training data can cause an under-fitted model, meaning the model does not learn enough from the fewer samples it is given. The error of an under-fitted model is usually worse than that of an over-fitted one. To keep an apples-to-apples comparison with the previous experiment, leave it at the default. Click the Next button.
Congratulations! You no longer have to type anything in the Command Window to view your network architecture. Enter your preferred number of hidden neurons for the hidden layer; 20 is the default. To keep an apples-to-apples comparison with the previous experiment, leave it at the default. Click the Next button.
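For the record, the choices made in these last two steps map onto Command Window equivalents (fitnet is the function behind this tool, and trainRatio/valRatio/testRatio are the parameters of its default data division):

>> net = fitnet(20)
>> net.divideParam.trainRatio = 0.70
>> net.divideParam.valRatio = 0.15
>> net.divideParam.testRatio = 0.15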
To train the built neural network, click the Train button. The results will be displayed in the top-right panel. After the training phase, the MSE and R values will be filled in for each data portion. Click the Next button.
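This step, too, has a Command Window equivalent — a sketch, assuming `net` is a fitnet network and `atts`/`class2` are the transposed inputs and targets:

>> [net, tr] = train(net, atts, class2)
>> output = net(atts)
>> performance = perform(net, class2, output)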
This screen offers the opportunity to re-train the network if the results are not satisfying yet, either by adjusting the network size or by importing a larger dataset. It can also run another test on an additional test dataset. Click the Next button.
This GUI makes a big difference in how easily the results can be saved. It also helps users generate an M-file to operate the created neural network architecture on other occasions. Click the Save Results button, then the Finish button.
Then your workspace will look like this:
Let's do the confusion matrix. But do not forget that the output has not been transformed into binary digits yet; we still have to round it and plot the confusion matrix manually 😀
>> output = round(output)
>> plotconfusion(class2, output)
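If you also want the overall correctness as a single number, one way (assuming the targets are stored one-row-per-class, so that vec2ind can map each column back to a class index) is:

>> accuracy = 100 * mean(vec2ind(output) == vec2ind(class2))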
I bet my result is worse than the previous one, because the performance values are very different: the first one was 0.0151, while this one is 0.4 😀 Here is my confusion matrix.
Huwaaa, my correctness decreased from 97.2% to 92.1% hahaha.. It's okay; it means the standard error values make a worse classifier than the mean values. Let's compare with the worst-value one..
Wow, I got 98.8% accuracy with the neural net 😀 Well, it can be concluded that the worst (i.e. highest) values predict cancer better than the mean and standard error values 🙂
Let's try another tool and classifier in the next trial 🙂