Why is finite precision a problem in machine learning?
Can you explain what finite precision is? Why is finite precision a problem in machine learning?
Tags: machine-learning, terminology, definitions, finite-precision
asked Dec 8 '15 at 16:37 by GeorgeOfTheRF (last edited by nbro)
1 Answer
Finite precision is a decimal (or binary) representation of a number that has been rounded or truncated. There are many cases where this is necessary or appropriate. For example, 1/3 and the transcendental numbers $e$ and $\pi$ have infinite decimal representations. In the programming language C, a double value is 8 bytes (64 bits) and precise to approximately 16 significant decimal digits. See here.
http://www.learncpp.com/cpp-tutorial/25-floating-point-numbers/
To concretely represent one of these numbers on a (finite) computer there must be some compromise. We could write 1/3 to 9 digits as 0.333333333, which is slightly less than 1/3.
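As a small sketch (Python, assuming the usual IEEE 754 double precision; the numbers are illustrative), both the truncation and the way the small errors add up across operations are easy to see:

```python
# Finite precision of Python floats (IEEE 754 doubles): 1/3 is stored
# rounded to roughly 16 significant decimal digits.
third = 1 / 3
print(third)              # 0.3333333333333333

# 0.1, 0.2 and 0.3 are not exactly representable in binary either.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# The tiny per-number errors accumulate across arithmetic operations:
total = sum(1 / 3 for _ in range(3000))
print(total == 1000)      # False -- the sum is only approximately 1000
```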
These compromises are compounded by arithmetic operations, and numerically unstable algorithms are prone to accumulating such errors. This is why the SVD is often used to compute PCA: explicitly forming the covariance matrix squares the condition number of the data matrix, amplifying rounding error.
http://www.sandia.gov/~smartin/presentations/SMartin_Stability.pdf
https://en.wikipedia.org/wiki/Numerical_stability
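A hedged sketch of the two routes (NumPy, synthetic centered data; the variable names are my own): the SVD route never forms the covariance matrix, so it avoids squaring the conditioning of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X = X - X.mean(axis=0)                         # PCA assumes centered data

# Route 1: eigendecomposition of the explicitly formed covariance matrix.
# Forming X.T @ X squares the condition number of X, so rounding errors
# are amplified when X is ill-conditioned.
cov = X.T @ X / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)

# Route 2: SVD of the centered data matrix itself (the usual, more stable way).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt                                # principal directions (rows)
explained_variance = s**2 / (X.shape[0] - 1)   # matches eigvals up to ordering
```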
In the naive Bayes classifier you will often see the product of per-feature probabilities transformed into a sum of logarithms, which is less prone to rounding error and underflow.
https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Multinomial_naive_Bayes
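For instance, a toy sketch (Python; made-up per-feature likelihoods, not a real classifier) of the underflow that the log transform avoids:

```python
import math

# 100 features, each contributing a small likelihood to one class's score.
likelihoods = [1e-5] * 100

# Naive product of probabilities: 1e-5 ** 100 = 1e-500, which underflows
# to 0.0 in double precision, so the class score is lost.
product = 1.0
for p in likelihoods:
    product *= p
print(product)        # 0.0

# The equivalent sum of log-probabilities stays comfortably representable.
log_score = sum(math.log(p) for p in likelihoods)
print(log_score)      # about -1151.3
```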
answered Dec 9 '15 at 2:41 by user13684
Thanks. Can you please explain how SVD solves the problem in PCA and how taking a sum of logs reduces the problem? Where is this sum of logs used in the naive Bayes classifier?
– GeorgeOfTheRF, Dec 9 '15 at 2:53
These are more in-depth questions, but I can provide some pointers. It "solves" it because you can obtain PCA from SVD. See here for an excellent article: arxiv.org/pdf/1404.1100.pdf. SVD is preferred because its computation does not involve the covariance matrix. Sum of logs in naive Bayes: blog.datumbox.com/…
– user13684, Dec 9 '15 at 3:37