# 1.1 Exact Bayes Classifier
We would like to classify a categorical output $(k_1, k_2, \dots, k_m)$ given some attributes $(x_1, x_2, \dots, x_n)$.
For example, suppose we would like to predict whether the output is $k_1$ or $k_2$ given three binary attributes $A, B, C$.
If $P(k_1|A, B, C) > P(k_2|A, B, C)$,
we say that an instance with attributes A, B, C is more likely to belong to $k_1$; and vice versa.
Notation:
- If A exists, write A; if A does not exist, write -A.
- If B exists, write B; if B does not exist, write -B.
- If C exists, write C; if C does not exist, write -C.
Then, if we apply Bayes' theorem,
$$P(k_1|A, B, C) = \frac{P(k_1)\,P(A,B,C|k_1)}{P(A,B,C)}$$
By applying the law of total probability, this becomes
$$\frac{P(k_1)\,P(A,B,C|k_1)}{P(k_1)\,P(A,B,C|k_1)+P(k_2)\,P(A,B,C|k_2)}$$
However, storing the likelihood $P(A,B,C|k_1)$ requires $2^i$ table entries, where $i$ is the number of attributes ($i = 3$ in this case), and storing $P(A,B,C|k_2)$ requires another $2^i$ entries.
The frequency table looks like the following (the counts are illustrative):

Frequency | A, B, C | A, B, -C | A, -B, C | A, -B, -C | -A, B, C | -A, B, -C | -A, -B, C | -A, -B, -C
---|---|---|---|---|---|---|---|---
k1 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
k2 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2
Therefore, we introduce the Naive Bayes algorithm to reduce both the storage requirement and the computation time.
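To make the storage issue concrete, here is a minimal Python sketch (our own illustration, not from the original text; `joint_freq` and `exact_posterior` are hypothetical names, and the counts are the made-up ones from the table above) that reads the exact-Bayes posterior straight off the full joint table of $2^i$ entries per class:

```python
from itertools import product

# Joint frequency table from the text: one count per class for each of the
# 2^3 combinations of the binary attributes, in the table's column order
# (True = attribute present, False = absent).
combos = list(product([True, False], repeat=3))
joint_freq = {
    "k1": dict(zip(combos, [1, 2, 3, 4, 5, 6, 7, 8])),
    "k2": dict(zip(combos, [9, 8, 7, 6, 5, 4, 3, 2])),
}

def exact_posterior(a, b, c):
    """P(class | A, B, C), read directly off the joint table."""
    totals = {k: sum(t.values()) for k, t in joint_freq.items()}
    n = sum(totals.values())
    # P(k) * P(A, B, C | k) for each class, then normalise.
    scores = {k: (totals[k] / n) * (joint_freq[k][(a, b, c)] / totals[k])
              for k in joint_freq}
    z = sum(scores.values())
    return {k: s / z for k, s in scores.items()}

print(exact_posterior(True, True, True))  # {'k1': 0.1, 'k2': 0.9}
```

Note that `joint_freq` already needs $2^i$ entries per class; with, say, 30 binary attributes it would need over a billion.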
# 1.2 Naive Bayes Classifier
We assume class-conditional independence, so that
$$P(A,B,C|k_1) = P(A|k_1)\,P(B|k_1)\,P(C|k_1)$$
$$P(A,B,C|k_2) = P(A|k_2)\,P(B|k_2)\,P(C|k_2)$$
And now we need only $2 \times i \times n$ records, where $i$ is the number of attributes and $n$ is the number of output classes we predict.
Frequency | A | -A | B | -B | C | -C
---|---|---|---|---|---|---
k1 | 1 | 2 | 3 | 4 | 5 | 6
k2 | 7 | 6 | 5 | 4 | 3 | 2
Therefore, our problem
$$P(k_1|A, B, C) = \frac{P(k_1)\,P(A,B,C|k_1)}{P(k_1)\,P(A,B,C|k_1)+P(k_2)\,P(A,B,C|k_2)}$$
becomes
$$\frac{P(k_1)\,[P(A|k_1)P(B|k_1)P(C|k_1)]}{P(k_1)\,[P(A|k_1)P(B|k_1)P(C|k_1)]+P(k_2)\,[P(A|k_2)P(B|k_2)P(C|k_2)]} \qquad (i)$$
and likewise
$$P(k_2|A, B, C) = \frac{P(k_2)\,P(A,B,C|k_2)}{P(k_1)\,P(A,B,C|k_1)+P(k_2)\,P(A,B,C|k_2)}$$
becomes
$$\frac{P(k_2)\,[P(A|k_2)P(B|k_2)P(C|k_2)]}{P(k_1)\,[P(A|k_1)P(B|k_1)P(C|k_1)]+P(k_2)\,[P(A|k_2)P(B|k_2)P(C|k_2)]} \qquad (ii)$$
We notice that (i) and (ii) share the same denominator, so we can focus only on the numerators:
$$P(k_1|A, B, C) = \frac{P(k_1)\,P(A,B,C|k_1)}{P(k_1)\,P(A,B,C|k_1)+P(k_2)\,P(A,B,C|k_2)} \propto P(k_1)\,[P(A|k_1)P(B|k_1)P(C|k_1)]$$
$$P(k_2|A, B, C) = \frac{P(k_2)\,P(A,B,C|k_2)}{P(k_1)\,P(A,B,C|k_1)+P(k_2)\,P(A,B,C|k_2)} \propto P(k_2)\,[P(A|k_2)P(B|k_2)P(C|k_2)]$$
If $P(k_1)[P(A|k_1)P(B|k_1)P(C|k_1)] > P(k_2)[P(A|k_2)P(B|k_2)P(C|k_2)]$, we say that an instance with attributes A, B, C is more likely to belong to $k_1$; and vice versa.
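For comparison, a minimal naive-Bayes sketch of the same decision rule (again our own illustration with the placeholder counts from the table in this section; the equal priors and the helper name `nb_score` are assumptions, not from the text). Only $2 \times i \times n$ numbers are stored:

```python
# Per-attribute table from this section (placeholder counts):
# attr_freq[cls][attr] = (count with attribute present, count with it absent)
attr_freq = {
    "k1": {"A": (1, 2), "B": (3, 4), "C": (5, 6)},
    "k2": {"A": (7, 6), "B": (5, 4), "C": (3, 2)},
}
priors = {"k1": 0.5, "k2": 0.5}  # assumed equal priors (not given in the text)

def nb_score(cls, evidence):
    """P(cls) times the product of per-attribute likelihoods (naive Bayes)."""
    score = priors[cls]
    for attr, present in evidence.items():
        yes, no = attr_freq[cls][attr]
        score *= (yes if present else no) / (yes + no)
    return score

evidence = {"A": True, "B": True, "C": False}  # observed: A, B, -C
scores = {cls: nb_score(cls, evidence) for cls in attr_freq}
print(max(scores, key=scores.get))  # the more likely class, as in (i)/(ii)
```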
# 1.3 Why not P(C|A,B) = P(C|A)P(C|B)?

From 1.2, we know that in the Naive Bayes algorithm we assume class-conditional independence, so that
$$P(A, B \mid C) = P(A \mid C)\,P(B \mid C)$$
But why not directly say that
$$P(C \mid A, B) = P(C \mid A)\,P(C \mid B)?$$
This is because the equality would force $P(C) = 0$ or $P(C) = 1$, which is meaningless:
$$P(C \mid A, B) = P(C \mid A)\,P(C \mid B)$$
$$\Longrightarrow \frac{P(C,A,B)}{P(A,B)} = \frac{P(C,A)}{P(A)} \cdot \frac{P(C,B)}{P(B)}$$
Now suppose, for illustration, that $A$, $B$, and $C$ are mutually independent; then every joint probability factorises, and
$$\frac{P(C)\,P(A)\,P(B)}{P(A)\,P(B)} = \frac{P(C)\,P(A)}{P(A)} \cdot \frac{P(C)\,P(B)}{P(B)}$$
$$\Longrightarrow P(C) = P(C) \cdot P(C)$$
which is possible only if $P(C) = 0$ or $P(C) = 1$.
Therefore, we use Bayes' theorem to turn $P(C \mid A, B)$ into $P(A, B \mid C)$ before applying the Naive Bayes independence assumption.
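A quick numerical sanity check of this point (our own illustration, not from the text): draw a random joint distribution over $(A, B, C)$ and compare the two sides of the rejected identity; they almost never match.

```python
import random

random.seed(0)
# A random joint distribution over the 8 outcomes of (A, B, C).
outcomes = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
weights = [random.random() for _ in outcomes]
joint = {o: w / sum(weights) for o, w in zip(outcomes, weights)}

def prob(**fixed):
    """Marginal probability: sum the joint over outcomes matching `fixed`."""
    return sum(p for (a, b, c), p in joint.items()
               if all({"a": a, "b": b, "c": c}[k] == v for k, v in fixed.items()))

lhs = prob(a=1, b=1, c=1) / prob(a=1, b=1)                         # P(C | A, B)
rhs = (prob(a=1, c=1) / prob(a=1)) * (prob(b=1, c=1) / prob(b=1))  # P(C|A)P(C|B)
print(lhs, rhs)  # generically different values
```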
# 2. Example

Consider the following 4 SMS messages:
Message | Label
---|---
I am not coming | ham
Good work | ham
Do you need viagra | spam
win an IMac | spam
Let $P(spam)$ be the prior probability of a new SMS message being spam, and $P(ham)$ the prior probability of it being ham. Since two of the four training messages belong to each class,
$$P(spam) = 0.5$$
$$P(ham) = 0.5$$
Each lower-cased keyword is given two rows (a word row and a -word row):

- the (word row, ham column) cell counts the ham messages in which the keyword appears;
- the (-word row, ham column) cell counts the ham messages in which it does not appear;
- the (word row, spam column) cell counts the spam messages in which the keyword appears;
- the (-word row, spam column) cell counts the spam messages in which it does not appear.

We can construct the following frequency table (a code sketch that builds it appears after the table):
Frequency | Ham | Spam |
---|---|---|
-am | 1 | 2 |
-an | 2 | 1 |
-coming | 1 | 2 |
-do | 2 | 1 |
-good | 1 | 2 |
-i | 1 | 2 |
-imac | 2 | 1 |
-need | 2 | 1 |
-not | 1 | 2 |
-viagra | 2 | 1 |
-win | 2 | 1 |
-work | 1 | 2 |
-you | 2 | 1 |
am | 1 | 0 |
an | 0 | 1 |
coming | 1 | 0 |
do | 0 | 1 |
good | 1 | 0 |
i | 1 | 0 |
imac | 0 | 1 |
need | 0 | 1 |
not | 1 | 0 |
viagra | 0 | 1 |
win | 0 | 1 |
work | 1 | 0 |
you | 0 | 1 |
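The table above can be reproduced mechanically. A short sketch (our own code, assuming tokens are obtained by lower-casing and splitting on whitespace; `docs`, `vocab`, and `freq` are our hypothetical names):

```python
messages = [
    ("I am not coming", "ham"),
    ("Good work", "ham"),
    ("Do you need viagra", "spam"),
    ("win an IMac", "spam"),
]

# Lower-case each message and treat it as a set of keywords.
docs = [(set(text.lower().split()), label) for text, label in messages]
vocab = sorted(set().union(*(words for words, _ in docs)))

# freq[(row, label)] with row = "word" (present) or "-word" (absent).
freq = {}
for word in vocab:
    for label in ("ham", "spam"):
        present = sum(1 for words, l in docs if l == label and word in words)
        absent = sum(1 for words, l in docs if l == label and word not in words)
        freq[(word, label)] = present
        freq[("-" + word, label)] = absent

for word in vocab:  # matches the frequency table above, row by row
    print(f'{word}: ham={freq[(word, "ham")]}, spam={freq[(word, "spam")]}')
```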
Then, to obtain the likelihoods P(W|ham), P(-W|ham), P(W|spam), and P(-W|spam), we divide each entry in the ham column by 2 (the total number of ham messages) and each entry in the spam column by 2 (the total number of spam messages).
In addition, to prevent likelihoods of exactly 0 or 1, we replace any likelihood smaller than 0.05 with 0.05, and any likelihood larger than 0.95 with 0.95, using a variant of the Laplace estimator.
Therefore, we get the following likelihood table:
Probability of the row name given the column name | Ham | Spam |
---|---|---|
-am | 0.50 | 0.95 |
-an | 0.95 | 0.50 |
-coming | 0.50 | 0.95 |
-do | 0.95 | 0.50 |
-good | 0.50 | 0.95 |
-i | 0.50 | 0.95 |
-imac | 0.95 | 0.50 |
-need | 0.95 | 0.50 |
-not | 0.50 | 0.95 |
-viagra | 0.95 | 0.50 |
-win | 0.95 | 0.50 |
-work | 0.50 | 0.95 |
-you | 0.95 | 0.50 |
am | 0.50 | 0.05 |
an | 0.05 | 0.50 |
coming | 0.50 | 0.05 |
do | 0.05 | 0.50 |
good | 0.50 | 0.05 |
i | 0.50 | 0.05 |
imac | 0.05 | 0.50 |
need | 0.05 | 0.50 |
not | 0.50 | 0.05 |
viagra | 0.05 | 0.50 |
win | 0.05 | 0.50 |
work | 0.50 | 0.05 |
you | 0.05 | 0.50 |
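Continuing the sketch above, the likelihood table and the 0.05/0.95 clamping can be written as follows (again our own code; `freq` and `vocab` come from the previous snippet, and `clamp` and `propensity` are hypothetical helper names):

```python
def clamp(p, lo=0.05, hi=0.95):
    """The text's stand-in for a Laplace estimator: keep likelihoods off 0 and 1."""
    return min(max(p, lo), hi)

n_msgs = {"ham": 2, "spam": 2}  # training messages per class
likelihood = {(row, label): clamp(count / n_msgs[label])
              for (row, label), count in freq.items()}

def propensity(label, message, prior=0.5):
    """Prior times the product of clamped likelihoods over the whole vocabulary.
    Words outside the vocabulary are ignored, as in the text."""
    present = set(message.lower().split())
    score = prior
    for word in vocab:
        row = word if word in present else "-" + word
        score *= likelihood[(row, label)]
    return score
```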
## 2.3.1 For message "coming home":

The word "home" does not appear in the training vocabulary, so it is ignored; "coming" is present and every other keyword is absent. If
$$P(ham \mid -i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac)$$
is greater than
$$P(spam \mid -i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac)$$
we say that the message "coming home" is more likely to be a ham message; vice versa.
According to Bayes' theorem,
$$P(ham \mid -i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac)$$
$$= \frac{P(ham)\,P(-i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac \mid ham)}{P(-i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac)}$$
By the law of total probability, this equals
$$\frac{P(ham)\,P(-i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac \mid ham)}{P(ham)\,P(-i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac \mid ham) + P(spam)\,P(-i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac \mid spam)}$$
and by the naive Bayes simplification,
$$\frac{P(ham)\,[P(-i|ham)P(-am|ham)P(-not|ham)P(coming|ham)P(-good|ham)P(-work|ham)P(-do|ham)P(-you|ham)P(-need|ham)P(-viagra|ham)P(-win|ham)P(-an|ham)P(-imac|ham)]}{P(ham)\,[P(-i|ham)P(-am|ham)P(-not|ham)P(coming|ham)P(-good|ham)P(-work|ham)P(-do|ham)P(-you|ham)P(-need|ham)P(-viagra|ham)P(-win|ham)P(-an|ham)P(-imac|ham)] + P(spam)\,[P(-i|spam)P(-am|spam)P(-not|spam)P(coming|spam)P(-good|spam)P(-work|spam)P(-do|spam)P(-you|spam)P(-need|spam)P(-viagra|spam)P(-win|spam)P(-an|spam)P(-imac|spam)]}$$
$$= \frac{0.5 \times (0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95)}{0.5 \times (0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95) + 0.5 \times (0.95 \times 0.95 \times 0.95 \times 0.05 \times 0.95 \times 0.95 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5)}$$
$$= \frac{0.00545576012}{0.00545576012 + 0.00015112908} = 0.97304582369$$
Alternatively, we can work with the propensities, which are proportional to the posterior probabilities:
$$P(ham \mid -i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac)$$
$$\propto P(ham)\,[P(-i|ham)P(-am|ham)P(-not|ham)P(coming|ham)P(-good|ham)P(-work|ham)P(-do|ham)P(-you|ham)P(-need|ham)P(-viagra|ham)P(-win|ham)P(-an|ham)P(-imac|ham)]$$
$$= 0.5 \times (0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95) = 0.00545576012$$
Similarly, by Bayes' theorem, the law of total probability, and the naive Bayes simplification,
$$P(spam \mid -i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac)$$
$$= \frac{P(spam)\,[P(-i|spam)P(-am|spam)P(-not|spam)P(coming|spam)P(-good|spam)P(-work|spam)P(-do|spam)P(-you|spam)P(-need|spam)P(-viagra|spam)P(-win|spam)P(-an|spam)P(-imac|spam)]}{P(ham)\,[P(-i|ham)P(-am|ham)P(-not|ham)P(coming|ham)P(-good|ham)P(-work|ham)P(-do|ham)P(-you|ham)P(-need|ham)P(-viagra|ham)P(-win|ham)P(-an|ham)P(-imac|ham)] + P(spam)\,[P(-i|spam)P(-am|spam)P(-not|spam)P(coming|spam)P(-good|spam)P(-work|spam)P(-do|spam)P(-you|spam)P(-need|spam)P(-viagra|spam)P(-win|spam)P(-an|spam)P(-imac|spam)]}$$
$$= \frac{0.5 \times (0.95 \times 0.95 \times 0.95 \times 0.05 \times 0.95 \times 0.95 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5)}{0.5 \times (0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95) + 0.5 \times (0.95 \times 0.95 \times 0.95 \times 0.05 \times 0.95 \times 0.95 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5)}$$
$$= \frac{0.00015112908}{0.00545576012 + 0.00015112908} = 0.0269541763$$
Alternatively, using the propensity:
$$P(spam \mid -i, -am, -not, coming, -good, -work, -do, -you, -need, -viagra, -win, -an, -imac)$$
$$\propto P(spam)\,[P(-i|spam)P(-am|spam)P(-not|spam)P(coming|spam)P(-good|spam)P(-work|spam)P(-do|spam)P(-you|spam)P(-need|spam)P(-viagra|spam)P(-win|spam)P(-an|spam)P(-imac|spam)]$$
$$= 0.5 \times (0.95 \times 0.95 \times 0.95 \times 0.05 \times 0.95 \times 0.95 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5) = 0.00015112908$$
Since the posterior probability 0.97304582369 > 0.0269541763, or equivalently the propensity 0.00545576012 > 0.00015112908, we conclude that the message "coming home" is more likely to be a ham message.
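Using the hypothetical `propensity` helper sketched after the likelihood table, the whole calculation collapses to a few lines:

```python
h = propensity("ham", "coming home")   # 0.00545576012...
s = propensity("spam", "coming home")  # 0.00015112908...
print(h / (h + s))                     # ~0.973, so "coming home" is ham
```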
## 2.3.2 For message "Get Viagra now":

The words "get" and "now" are not in the training vocabulary, so they are ignored; "viagra" is the only known word, and it is present.
$$P(ham \mid -i, -am, -not, -coming, -good, -work, -do, -you, -need, viagra, -win, -an, -imac)$$
$$\propto P(ham)\,[P(-i|ham)P(-am|ham)P(-not|ham)P(-coming|ham)P(-good|ham)P(-work|ham)P(-do|ham)P(-you|ham)P(-need|ham)P(viagra|ham)P(-win|ham)P(-an|ham)P(-imac|ham)]$$
$$= 0.5 \times (0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.95 \times 0.95 \times 0.95 \times 0.05 \times 0.95 \times 0.95 \times 0.95) = 0.00028714526$$
$$P(spam \mid -i, -am, -not, -coming, -good, -work, -do, -you, -need, viagra, -win, -an, -imac)$$
$$\propto P(spam)\,[P(-i|spam)P(-am|spam)P(-not|spam)P(-coming|spam)P(-good|spam)P(-work|spam)P(-do|spam)P(-you|spam)P(-need|spam)P(viagra|spam)P(-win|spam)P(-an|spam)P(-imac|spam)]$$
$$= 0.5 \times (0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.95 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5 \times 0.5) = 0.00287145269$$
To convert the propensities into probabilities, we divide 0.00028714526 and 0.00287145269 each by their sum (0.00028714526 + 0.00287145269), giving 0.09090908831 and 0.90909091168 respectively.
Since the propensity 0.00028714526 < 0.00287145269, or the probability 0.09090908831 < 0.90909091168, we conclude that the message "Get Viagra now" is more likely to be a spam message.
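And the same check for the second message, again with the hypothetical `propensity` helper:

```python
h = propensity("ham", "Get Viagra now")   # 0.00028714526...
s = propensity("spam", "Get Viagra now")  # 0.00287145269...
print(s / (h + s))                        # ~0.909, so "Get Viagra now" is spam
```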