Perceptron - Which step function to choose
I'm studying the Perceptron algorithm. Some books use this step function:

1 if x >= 0 else -1

where x is the dot product of the weight vector w and a sample. Other books use:

1 if x >= 0 else 0

What are the practical differences between these two step functions?
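For reference, the two variants written out in code (a minimal sketch; the function names are mine):

```python
def step_sign(z):
    # first convention: output in {-1, +1}
    return 1 if z >= 0 else -1

def step_binary(z):
    # second convention: output in {0, 1}
    return 1 if z >= 0 else 0

# both agree for z >= 0 and differ only in the "else" branch
print(step_sign(-0.5), step_binary(-0.5))  # -1 0
print(step_sign(0.0), step_binary(0.0))    # 1 1
```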
machine-learning neural-network deep-learning perceptron
asked Dec 31 '17 at 8:54 by Poiera
edited Jan 22 '18 at 1:20 by Vaalizaadeh
2 Answers
Both step functions carry the same meaning in this context, although with the Rosenblatt update rule the first convention can produce larger changes at each update. The perceptron is a binary classifier: if the inner product (here, the dot product of the weights and the input) is greater than or equal to zero, the input is assigned to the first class; if it is smaller than zero, it is assigned to the other class. A perceptron has just one neuron and is a simple linear classifier, so only the threshold matters: a non-negative product puts the input in, say, the positive class, and a negative product puts it in the negative class. Hard step functions and the Rosenblatt update rule are rarely used any more because they oscillate so much; today, networks are trained with gradient-descent algorithms, which rely on derivatives. As you progress you will see that networks using other activation functions, such as the sigmoid or tanh, behave differently: the sigmoid has an expected value of 0.5 and tanh an expected value of 0, which makes the latter learn much faster. Nowadays, though, ReLU is the most popular activation function.
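A minimal sketch of the point about update size (my own illustration, assuming the common Rosenblatt-style update w ← w + lr · (y − ŷ) · x): on the same misclassified sample, the {-1, +1} convention moves the weights twice as far as the {0, 1} convention, because the error y − ŷ is ±2 rather than ±1.

```python
import numpy as np

def step_sign(z):
    # {-1, +1} convention
    return 1 if z >= 0 else -1

def step_binary(z):
    # {0, 1} convention
    return 1 if z >= 0 else 0

def perceptron_update(w, x, y, step, lr=1.0):
    """One Rosenblatt-style update: w <- w + lr * (y - y_hat) * x."""
    y_hat = step(np.dot(w, x))
    return w + lr * (y - y_hat) * x

w = np.zeros(2)
x = np.array([1.0, 2.0])

# the same misclassified sample under each label convention:
w_sign = perceptron_update(w, x, y=-1, step=step_sign)    # error = -1 - 1 = -2
w_bin  = perceptron_update(w, x, y=0,  step=step_binary)  # error =  0 - 1 = -1

print(w_sign)  # twice the step of w_bin
print(w_bin)
```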
answered Dec 31 '17 at 9:21, edited Dec 31 '17 at 9:46, by Vaalizaadeh
I think that depends on how the next step of the algorithm is defined in the respective textbook(s); there may be slight differences. Your if-statements can be read as the following half-sentences. The first: "If there is a change in sign, [update the weights; if there isn't, do nothing]." The second: "If the value is nonzero, [update the weights; otherwise do nothing]." Your textbooks may differ in how the parts between the [ ... ] are written.
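One way to see that the bracketed parts differ only in scale (a hedged sketch, not from either textbook; the training loop, data, and learning rate here are made up for illustration): with the usual update w ← w + lr · (y − ŷ) · x and zero initial weights, the two conventions update on exactly the same samples, and the {-1, +1} weights stay exactly twice the {0, 1} weights, so both learn the same decision boundary.

```python
import numpy as np

def train(X, Y, step, epochs=10, lr=1.0):
    # plain Rosenblatt-style loop: w <- w + lr * (y - step(w.x)) * x
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            y_hat = step(np.dot(w, x))
            w = w + lr * (y - y_hat) * x
    return w

def step_sign(z):
    return 1 if z >= 0 else -1

def step_binary(z):
    return 1 if z >= 0 else 0

# tiny linearly separable set, labelled in each convention
X = np.array([[1.0, 1.0], [2.0, 0.5], [-1.0, -1.5], [-2.0, -0.5]])
Y01 = np.array([1, 1, 0, 0])
Ypm = 2 * Y01 - 1   # map {0, 1} labels to {-1, +1}

w_pm = train(X, Ypm, step_sign)
w_01 = train(X, Y01, step_binary)

# starting from zero weights, the trajectories coincide up to a factor of 2,
# so the learned decision boundaries are identical
print(w_pm, w_01)
```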
answered Aug 20 '18 at 13:52 by knb