Extend/improve the code to do the following (PyTorch)
————————————————————————— The Code: —————————————————————————

Changing from regression to classification: we use "logistic regression" for classification tasks because the logistic function (a.k.a. the sigmoid) looks like a sideways "S" and ranges between 0 and 1. It essentially produces a "percent likelihood" that the data is in class A or class B, etc.

The dataset: 32561 rows × 15 columns.

We need to turn some of these categorical columns into "one-hot" columns. For example, with three workclass categories we would have Private = {1,0,0}, Self-emp-not-inc = {0,1,0}, etc. This way, the neural net does not see ordinal labels like Private = 1, Self-emp-not-inc = 2, etc., which might lead it to believe Private is "close to" Self-emp-not-inc.

One-hot workclass columns (32561 rows × 9 columns; each header is prefixed "workclass_"):

       ?   Federal-gov   Local-gov   Never-worked   Private   Self-emp-inc   Self-emp-not-inc   State-gov   Without-pay
0      0   0             0           0              0         0              0                  1           0
1      0   0             0           0              0         0              1                  0           0
2      0   0             0           0              1         0              0                  0           0
3      0   0             0           0              1         0              0                  0           0
4      0   0             0           0              1         0              0                  0           0
…      …   …             …           …              …         …              …                  …           …
32556  0   0             0           0              1         0              0                  0           0
32557  0   0             0           0              1         0              0                  0           0
32558  0   0             0           0              1         0              0                  0           0
32559  0   0             0           0              1         0              0                  0           0
32560  0   0             0           0              0         1              0                  0           0

One-hot relationship columns (32561 rows × 6 columns; each header is prefixed "relationship_"):

       Husband   Not-in-family   Other-relative   Own-child   Unmarried   Wife
0      0         1               0                0           0           0
1      1         0               0                0           0           0
2      0         1               0                0           0           0
3      1         0               0                0           0           0
4      0         0               0                0           0           1
…      …         …               …                …           …           …
32556  0         0               0                0           0           1
32557  1         0               0                0           0           0
32558  0         0               0                0           1           0
32559  0         0               0                1           0           0
32560  0         0               0                0           0           1

Because we are doing classification, we will use CrossEntropyLoss as our criterion. The documentation shows that the "input" to the loss (the last layer of the network) should have 2 neurons, because we have two classes (<=50K, >50K) for our target. The docs also show that the y target values must each be a single number that is 0 or 1 (since we have two classes); we do NOT one-hot encode these y values. NOTE:
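The one-hot expansion described above can be sketched with pandas get_dummies. This is a minimal illustration, not the original notebook's code: the tiny stand-in DataFrame and the column name "salary" are assumptions based on the Adult census dataset.

```python
import pandas as pd

# Small stand-in for the Adult census data; in the real notebook this
# would be the 32561-row DataFrame loaded from the CSV (an assumption).
df = pd.DataFrame({
    "workclass": ["State-gov", "Self-emp-not-inc", "Private"],
    "relationship": ["Not-in-family", "Husband", "Not-in-family"],
    "salary": ["<=50K", "<=50K", ">50K"],
})

# One-hot encode the categorical input columns so the network never sees
# ordinal labels like Private = 1, Self-emp-not-inc = 2.
X = pd.get_dummies(df[["workclass", "relationship"]], dtype=int)

# The target is NOT one-hot encoded: CrossEntropyLoss expects a single
# class index per row (0 for <=50K, 1 for >50K).
y = (df["salary"] == ">50K").astype(int)
```

On the full dataset, get_dummies produces exactly the "workclass_…" and "relationship_…" columns shown in the tables above.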
Only about 24% of the rows belong to one class, so our model must at least get less error than this! A model could simply learn to always predict the majority class, ignoring the input data entirely, and still be wrong only 24% of the time.

Tasks:
- Graph train/val loss and accuracy per epoch (two separate graphs). Use a legend to indicate which line is train and which is val.
- Make sure you do not cause overfitting (as indicated by your plots).
- Improve the accuracy so the error on the test dataset (as computed by sklearn; we are talking about error rate here, not loss) is <17%.
- Show a confusion matrix of the prediction errors.

Hints:
- Use as many output nodes as you have choices (2 choices = 2 nodes).
- Use CrossEntropyLoss to better measure error for classification. Note that PyTorch's CrossEntropyLoss applies log-softmax internally, so the last layer should output raw logits; a final Sigmoid (giving a value between 0 and 1) is only appropriate with a single output node and BCELoss.
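The classification head and evaluation described above can be sketched as follows. This is a minimal example under stated assumptions, not the original notebook's code: the layer sizes, the 15 input features, and the random toy batch are all placeholders.

```python
import torch
import torch.nn as nn
from sklearn.metrics import confusion_matrix

torch.manual_seed(0)

# Two output nodes, one per class (<=50K, >50K).  No Sigmoid/Softmax on
# the last layer: CrossEntropyLoss applies log-softmax internally.
model = nn.Sequential(
    nn.Linear(15, 32),   # 15 input features is an assumption
    nn.ReLU(),
    nn.Linear(32, 2),
)
criterion = nn.CrossEntropyLoss()

# Toy batch standing in for the real (one-hot encoded) data.
X = torch.randn(8, 15)
y = torch.randint(0, 2, (8,))   # class indices 0/1, NOT one-hot

logits = model(X)               # shape (8, 2): raw scores per class
loss = criterion(logits, y)
loss.backward()                 # in training, an optimizer step follows

# Predicted class = argmax over the two logits.
preds = logits.argmax(dim=1)

# Error rate and confusion matrix computed with sklearn, as the
# assignment asks (on real data this would be the held-out test set).
error = (preds != y).float().mean().item()
cm = confusion_matrix(y.numpy(), preds.numpy())
```

The same pattern applied per epoch, recording train and validation loss/accuracy into lists, gives the data for the two required plots.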