Interpreting the C-Index














I have some trouble understanding and interpreting the C-Index cluster quality measure. Suppose we have



$c(x_i, x_j) = 1$ if $x_i, x_j$ are in the same cluster, and $0$ otherwise



$\Gamma = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} d(x_i,x_j)\,c(x_i,x_j)$



$\alpha = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} c(x_i,x_j)$



$\min =$ sum of the $\alpha$ smallest $d(x_i,x_j)$ over all distinct pairs $x_i \neq x_j$



$\max =$ sum of the $\alpha$ largest $d(x_i,x_j)$ over all distinct pairs $x_i \neq x_j$



then the C-Index is defined as $C=\frac{\Gamma - \min}{\max - \min}$



The result is a value in $[0, 1]$, where lower values indicate better cluster quality.
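To make sure I'm applying the definition correctly, here is the minimal sketch I use to compute it (Python with NumPy; I assume Euclidean distance, and the function name `c_index` is my own):

```python
import numpy as np
from itertools import combinations

def c_index(X, labels):
    """C-Index of a clustering. X: (n, d) array; labels: n cluster ids."""
    pairs = list(combinations(range(len(X)), 2))   # all distinct pairs i < j
    d = np.array([np.linalg.norm(X[i] - X[j]) for i, j in pairs])
    same = np.array([labels[i] == labels[j] for i, j in pairs])

    gamma = d[same].sum()     # sum of within-cluster distances
    alpha = int(same.sum())   # number of within-cluster pairs
    if alpha == 0:            # k == N: no within-cluster pairs
        raise ValueError("C-Index undefined: no within-cluster pairs")

    d_sorted = np.sort(d)
    d_min = d_sorted[:alpha].sum()    # sum of the alpha smallest distances
    d_max = d_sorted[-alpha:].sum()   # sum of the alpha largest distances
    if d_max == d_min:                # e.g. k == 1: every pair is within-cluster
        raise ValueError("C-Index undefined: max == min")
    return (gamma - d_min) / (d_max - d_min)
```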



So, here are some things I get from this value:




  • If all elements in each cluster are close together and all clusters are far apart, we can get $\Gamma=\min$, which means $C=0$.

  • Analogously, in a worst-case scenario, the observations that are furthest apart might all be in the same cluster, so we would get $\Gamma=\max$, which means $C=1$.


Now, these are the things I'm unsure about:



First: If we only have a single cluster in our clustering (e.g. k-Means with $k=1$), then $\alpha$ equals the number of distinct pairs of observations, so $\max=\min$, which means $C=\frac{\Gamma - \min}{\max - \min} = \frac{\Gamma - \min}{\min - \min} = \frac{\Gamma - \min}{0}$, i.e. a division by zero. A similar problem occurs if we have $N$ observations in $N$ different clusters, since $c$ is always $0$ in that case. So, is it correct to say that the C-Index can only be used for clusterings with $k$ clusters where $1 < k < N$, for $N$ observations?



Second: Is it reasonable to say that the C-Index is agnostic to the number of clusters (e.g. the value of $k$ in k-Means)? For instance, we might have 5 observations $x_1,\dots,x_5$ close to each other, but each put into a separate cluster $C_1,\dots,C_5$. Then we might have one cluster $C_6=\{x_6, x_7\}$ where $x_6, x_7$ are very close to each other but far apart from all other observations. In that case $\Gamma=d(x_6,x_7)$, $\alpha=1$, and $\min=d(x_6,x_7)$, so $\Gamma=\min$, which means $C=0$. That is, we get the best possible C-Index value, even though, intuitively, it might have been better to put $x_1,\dots,x_5$ into a single cluster.
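A quick numeric check of this scenario with the sketch above (the coordinates are made up for illustration):

```python
X = np.array([[0.0], [0.1], [0.2], [0.3], [0.4],   # x_1..x_5: close together
              [100.0], [100.1]])                   # x_6, x_7: far from the rest
labels = [0, 1, 2, 3, 4, 5, 5]                     # five singletons plus {x_6, x_7}
print(c_index(X, labels))                          # 0.0, since Gamma == min here
```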



Lastly, this is more about k-Means: if we use plain k-Means (not global k-Means), are we always guaranteed to reach $C=0$ given an unbounded number of iterations? I can't seem to find an example where this wouldn't happen.










clustering k-means

asked Mar 11 '18 at 16:42 by Silas Berger (edited Mar 11 '18 at 19:18)
























          1 Answer

For the first Q you already give a counterexample yourself:



It is biased with respect to $k$: it prefers $k = N$, and it will also overrate $k = N-1, N-2, \dots$ So it is not agnostic to $k$.



If k-Means always found the best C-Index, then the C-Index would just be redundant with SSQ, which is much cheaper to compute. But you have probably just been looking at much too simple toy datasets; use real data.






answered Mar 13 '18 at 8:49 by Anony-Mousse













• Great, that makes sense, thanks for the answer! I still have some trouble picturing exactly how the C-Index can increase while the SSQ decreases (although I do work on a dataset where that happens), but I'll work on that ;-) Still, I don't quite see how it can prefer $k = N$. That would leave $\alpha=0$, and if $\min$ and $\max$ are the sums of the $\alpha$ shortest/longest distances, then both are $0$, so $\max-\min=0$ and we get a division by zero, right? – Silas Berger, Mar 14 '18 at 7:43












• More precisely, you get $0/0$, and in most cases (you'll need to check the math yourself for this particular case, though) the proper substitute then is $0$. Or intuitively: if the best case equals the worst case ($\max=\min$), then any solution is perfect ($C=0$). But what you need to consider is $k$ almost $N$! – Anony-Mousse, Mar 14 '18 at 8:48
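One way to encode the $0/0$ convention from the last comment in the earlier sketch (my own choice of handling, not part of the original definition):

```python
def c_index_or_zero(X, labels):
    """C-Index with the degenerate cases (no within-cluster pairs, or
    max == min) mapped to 0, following the 0/0 convention above."""
    try:
        return c_index(X, labels)
    except ValueError:
        return 0.0
```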











