
Monthly Archives: April 2011
Four Methods for Estimating Query Language Models in IR
Four methods for estimating a query language model with pseudo feedback: 1. relevance model, 2. divergence minimization model, 3. simple mixture model, 4. regularized mixture model. Reference: A Comparative Study of Methods for Estimating Query Language Models with Pseudo Feedback.
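A minimal sketch of the third method, the simple mixture model: feedback documents are modeled as a mixture of an unknown topic model and the collection background, and the topic model is estimated with EM. The function name, the toy documents, the background distribution, and the weight `lam` are all illustrative assumptions, not taken from the paper.

```python
# Sketch of the simple mixture model for pseudo feedback: each feedback word is
# drawn from the topic model theta_F with prob (1 - lam) or from the background
# p(w|C) with prob lam. All inputs here are toy examples.
from collections import Counter

def estimate_feedback_model(feedback_docs, collection_probs, lam=0.5, iters=50):
    counts = Counter(w for doc in feedback_docs for w in doc)
    vocab = list(counts)
    # Uniform initialization of the topic model theta_F.
    theta = {w: 1.0 / len(vocab) for w in vocab}
    for _ in range(iters):
        # E-step: posterior probability that word w came from the topic model.
        post = {w: (1 - lam) * theta[w] /
                   ((1 - lam) * theta[w] + lam * collection_probs[w])
                for w in vocab}
        # M-step: re-estimate theta_F from the expected topic counts.
        total = sum(counts[w] * post[w] for w in vocab)
        theta = {w: counts[w] * post[w] / total for w in vocab}
    return theta

docs = [["ir", "query", "model", "the"], ["query", "model", "the", "the"]]
background = {"ir": 0.05, "query": 0.05, "model": 0.05, "the": 0.85}
theta_F = estimate_feedback_model(docs, background)
```

The effect to notice: "the" occurs most often in the feedback documents, but the background explains it away, so the estimated topic model concentrates on the content words.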
Posted in Uncategorized
Leave a comment
MEMM vs. CRF
The advantage of the CRF is that it resolves the label bias problem, which can occur in the MEMM, by using global normalization. Reference: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data.
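The contrast can be written out explicitly (notation roughly follows the CRF paper; the f_k are feature functions and the lambda_k their weights): the MEMM normalizes each transition locally, while the CRF normalizes once over the whole label sequence.

```latex
% MEMM: per-state (local) normalization of each transition
P_{\mathrm{MEMM}}(s \mid x) \;=\; \prod_{t}
  \frac{\exp\bigl(\sum_k \lambda_k f_k(s_t, s_{t-1}, x_t)\bigr)}
       {Z(s_{t-1}, x_t)}

% CRF: a single global normalizer over all label sequences
P_{\mathrm{CRF}}(s \mid x) \;=\;
  \frac{\exp\bigl(\sum_t \sum_k \lambda_k f_k(s_t, s_{t-1}, x_t)\bigr)}{Z(x)},
\qquad
Z(x) = \sum_{s'} \exp\Bigl(\sum_t \sum_k \lambda_k f_k(s'_t, s'_{t-1}, x_t)\Bigr)
```

Because Z(x) sums over entire sequences, no single state is forced to pass all of its incoming probability mass to its successors, which is exactly what removes the label bias.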
The Label Bias Problem
"This per-state normalization of transition scores implies a 'conservation of score mass' (Bottou, 1991) whereby all the mass that arrives at a state must be distributed among the possible successor states. An observation can affect which destination states get the …
HMM vs. MEMM
The HMM is a useful model with a long history that has been applied in many domains. The MEMM is a newer model inspired by the HMM and maximum entropy theory. It is more flexible than the HMM and can incorporate many features …
The Maximum Entropy Model
A useful website: http://homepages.inf.ed.ac.uk/lzhang10/maxent.html. References: A Maximum Entropy Approach to Natural Language Processing; Lagrangian duality and algorithms for the Lagrangian dual problem.
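For reference, the maximum entropy model takes the exponential (log-linear) form below, where the f_i are feature functions and the lambda_i are the Lagrange multipliers of the feature-expectation constraints:

```latex
p_\lambda(y \mid x) \;=\; \frac{1}{Z_\lambda(x)}
  \exp\Bigl( \sum_i \lambda_i f_i(x, y) \Bigr),
\qquad
Z_\lambda(x) \;=\; \sum_{y} \exp\Bigl( \sum_i \lambda_i f_i(x, y) \Bigr)
```

The weights lambda_i are found by maximizing the conditional log-likelihood, which is the Lagrangian dual of the constrained entropy-maximization problem — this is the connection to the second reference above.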
The Basic Model of EM Algorithm
The EM algorithm is a general method for maximum-likelihood estimation when the data are "incomplete" or the likelihood function involves latent variables. References: The Basic Model of EM Algorithm; A Note on the Expectation-Maximization (EM) Algorithm.
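A minimal runnable sketch of EM for the classic latent-variable example, a two-component 1D Gaussian mixture; the data, initialization, and function name are my own illustrative choices.

```python
# EM for a two-component 1D Gaussian mixture. The latent variable is which
# component generated each point; E-step computes its posterior, M-step
# re-estimates the parameters. Toy data, illustrative initialization.
import math
import random

def em_gmm(data, iters=100):
    mu = [min(data), max(data)]      # split the means apart to start
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k]) *
                    math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: maximize the expected complete-data log-likelihood.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return pi, mu, var

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(5, 1) for _ in range(200)])
pi, mu, var = em_gmm(data)
```

Run on data drawn from N(0,1) and N(5,1) in equal proportion, the recovered means land near 0 and 5 and the mixing weights near 0.5 each.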
The Lagrange Multiplier Method
The Lagrange multiplier method is a useful approach for solving optimization problems under constraints. The attachment is in Chinese. Thanks to the website http://www.survivor99.com/entropy/zxw/C12a.htm. Attachment: The Lagrange Multiplier Method.
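A one-line worked example (my own, not from the attachment): maximize the entropy H(p) = -sum_i p_i log p_i subject to the single constraint sum_i p_i = 1.

```latex
\mathcal{L}(p, \lambda) \;=\; -\sum_i p_i \log p_i
  \;+\; \lambda \Bigl( \sum_i p_i - 1 \Bigr)
\;\Rightarrow\;
\frac{\partial \mathcal{L}}{\partial p_i} = -\log p_i - 1 + \lambda = 0
\;\Rightarrow\;
p_i = e^{\lambda - 1}
```

Every p_i equals the same constant, so the normalization constraint forces p_i = 1/n: the uniform distribution maximizes entropy, which is also the starting point of the maximum entropy model.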
The Proof That the KL Divergence Is Not Smaller Than Zero
KL divergence: D(p||q) = sum_x [p(x) log(p(x)/q(x))], and D(p||q) >= 0; please see the attachment for the proof. Attachment: The proof of why the KL divergence is not smaller than zero.
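The standard argument uses Jensen's inequality (the logarithm is concave), a sketch of which is:

```latex
-D(p \,\|\, q) \;=\; \sum_x p(x) \log \frac{q(x)}{p(x)}
\;\le\; \log \sum_x p(x) \, \frac{q(x)}{p(x)}
\;=\; \log \sum_x q(x) \;=\; \log 1 \;=\; 0
```

Hence D(p||q) >= 0, with equality if and only if p = q.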
Publication List
Implementation of Multi-Keyword Query in a Full-Text Retrieval System (with cover); Implementation of Multi-Keyword Query in a Full-Text Retrieval System; A Barter Model and Its Solution Algorithm; Population Model; 2008mum; Jilin University Code Repository