{"id":3798,"date":"2013-09-10T13:00:35","date_gmt":"2013-09-10T04:00:35","guid":{"rendered":"http:\/\/fujiitoshiki.com\/improvesociety\/?p=3798"},"modified":"2017-04-27T15:17:15","modified_gmt":"2017-04-27T06:17:15","slug":"how-to-calculate-akaike-information-criterion-with-probability-distribution-function","status":"publish","type":"post","link":"https:\/\/www.fujiitoshiki.com\/improvesociety\/?p=3798","title":{"rendered":"How to calculate Akaike information criterion with probability distribution function?"},"content":{"rendered":"<div class=\"theContentWrap-ccc\"><p>The Akaike information criterion (AIC) is a widely used criterion for variable selection in multivariate analysis. If N is the number of free parameters, AIC is calculated as follows:<\/p>\n<p><img src='https:\/\/s0.wp.com\/latex.php?latex=%5Cdisplaystyle+AIC+%3D+-2%28Maximum%5C+Log%5C+Likelihood%29%2B2N&#038;bg=T&#038;fg=000000&#038;s=0' alt='\\displaystyle AIC = -2(Maximum\\ Log\\ Likelihood)+2N' title='\\displaystyle AIC = -2(Maximum\\ Log\\ Likelihood)+2N' class='latex' \/><\/p>\n<p>The number of free parameters of a model is the dimension of the space of values that the parameters can take in the candidate models. AIC is an evaluation criterion for models estimated with the maximum likelihood method; it reflects the fact that the bias of the log likelihood approximates the number of free parameters included in the model. <\/p>\n<p>How do we find the maximum log likelihood? 
Let&#8217;s define the log likelihood function by the following equation: <\/p>\n<p><img src='https:\/\/s0.wp.com\/latex.php?latex=%5Cdisplaystyle+l%28%5Ctheta%29+%3D+%5Csum_%7B%5Calpha%3D1%7D%5E%7Bn%7D%5Clog+f%28x_%7B%5Calpha%7D%7C%5Ctheta%29&#038;bg=T&#038;fg=000000&#038;s=0' alt='\\displaystyle l(\\theta) = \\sum_{\\alpha=1}^{n}\\log f(x_{\\alpha}|\\theta)' title='\\displaystyle l(\\theta) = \\sum_{\\alpha=1}^{n}\\log f(x_{\\alpha}|\\theta)' class='latex' \/><\/p>\n<p>The maximum likelihood estimator <img src='https:\/\/s0.wp.com\/latex.php?latex=%5Chat%5Ctheta&#038;bg=T&#038;fg=000000&#038;s=0' alt='\\hat\\theta' title='\\hat\\theta' class='latex' \/> maximizes <em>l<\/em>(<em>&theta;<\/em>); this procedure is called the maximum likelihood method. <img src='https:\/\/s0.wp.com\/latex.php?latex=l%28%5Chat%5Ctheta%29+%3D+%5CSigma_%7B%5Calpha%3D1%7D%5E%7Bn%7D%5Clog+f%28x_%5Calpha+%7C%5Chat%5Ctheta%29&#038;bg=T&#038;fg=000000&#038;s=0' alt='l(\\hat\\theta) = \\Sigma_{\\alpha=1}^{n}\\log f(x_\\alpha |\\hat\\theta)' title='l(\\hat\\theta) = \\Sigma_{\\alpha=1}^{n}\\log f(x_\\alpha |\\hat\\theta)' class='latex' \/> is called the maximum log likelihood. <\/p>\n<p>If the log likelihood function <em>l<\/em>(<em>&theta;<\/em>) is differentiable, the maximum likelihood estimator <img src='https:\/\/s0.wp.com\/latex.php?latex=%5Chat%5Ctheta&#038;bg=T&#038;fg=000000&#038;s=0' alt='\\hat\\theta' title='\\hat\\theta' class='latex' \/> is obtained by solving the following likelihood equation. 
<\/p>\n<p><img src='https:\/\/s0.wp.com\/latex.php?latex=%5Cdisplaystyle+%5Cfrac%7B%5Cpartial+l%28%5Ctheta%29%7D%7B%5Cpartial+%5Ctheta%7D+%3D+0&#038;bg=T&#038;fg=000000&#038;s=0' alt='\\displaystyle \\frac{\\partial l(\\theta)}{\\partial \\theta} = 0' title='\\displaystyle \\frac{\\partial l(\\theta)}{\\partial \\theta} = 0' class='latex' \/><\/p>\n<p>For example, for a sample drawn from a normal distribution, solving this equation gives the sample mean and the sample variance as the maximum likelihood estimators of the mean and the variance.<\/p>\n<p>References:<br \/>\n<a href=\"\/\/fujiitoshiki.com\/improvesociety\/?p=3020\" target=\"_blank\">Probability density function, expected value and variance of each probability distribution<\/a><\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>The Akaike information criterion (AIC) is a widely used criterion for variable selection in multivariate analysis. &hellip; <a href=\"https:\/\/www.fujiitoshiki.com\/improvesociety\/?p=3798\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">&#8220;How to calculate Akaike information criterion with probability distribution function?&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_crdt_document":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[7],"tags":[],"class_list":["post-3798","post","type-post","status-publish","format-standard","hentry","category-statistics"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=\/wp\/v2\/posts\/3798","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=\/w
p\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3798"}],"version-history":[{"count":7,"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=\/wp\/v2\/posts\/3798\/revisions"}],"predecessor-version":[{"id":7731,"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=\/wp\/v2\/posts\/3798\/revisions\/7731"}],"wp:attachment":[{"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3798"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3798"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fujiitoshiki.com\/improvesociety\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3798"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}