\name{optimalPMParams}
\alias{genericPMoptimizer}
\alias{optimalPMParams}
\title{Find a set of proficiency model parameters that fit the data.}
\description{
  The generic function \code{optimalPMParams} performs a general-purpose
  optimization to find a set of proficiency model parameters which
  maximizes the weighted likelihood of the given latent variables.  The
  function \code{genericPMoptimizer} provides a possible implementation
  using iterative search (via the \code{\link[stats]{optim}} function),
  which can be used when no closed-form solution (e.g., normal updating)
  is available.
}
\usage{
optimalPMParams(pm, latent, weights = 1, background = NA,
                param = parameters(pm), control = list(), maxit = 100,
                nowarn = FALSE, Bayes = TRUE)
genericPMoptimizer(pm, latent, weights = 1, background = NA,
                   param = parameters(pm), control = list(), maxit = 100,
                   nowarn = FALSE, Bayes = TRUE)
}
\arguments{
  \item{pm}{ An object of class \code{\link{ProficiencyModel}}. }
  \item{latent}{ A matrix of latent vectors corresponding to the
    individuals represented in the observations. }
  \item{weights}{ A vector of weights which should have either length 1
    or \code{nrow(latent)}.  If weights are supplied, a weighted maximum
    posterior estimate is computed instead of the unweighted version. }
  \item{background}{ An optional set of background (demographic)
    variables.  Some proficiency model types ignore background
    variables. }
  \item{param}{ An object representing the initial values for the
    parameters of \code{pm}.  The type required is determined by the
    class of \code{pm}. }
  \item{control}{ A list of control flags.  The default method uses
    \code{\link[stats]{optim}} to perform the optimization, so the
    control flags used by \code{optim} can be used. }
  \item{maxit}{ The maximum number of iterations to perform.  It may be
    useful to set this to a low value if the call to
    \code{optimalPMParams} occurs inside the main PFEM loop.
  }
  \item{nowarn}{ If \code{TRUE}, suppresses the warning issued when the
    algorithm does not converge.  Again, this is useful for calls inside
    the main PFEM loop. }
  \item{Bayes}{ If \code{TRUE}, the log posterior is optimized rather
    than the log likelihood. }
}
\details{
  This function attempts to find a set of proficiency model parameters
  that maximizes the likelihood of the currently hypothesized values for
  the latent variables.  If \code{Bayes} is \code{TRUE}, it maximizes
  the log posterior instead of the log likelihood.  (This depends on the
  fact that proficiency models contain prior distributions for their
  parameters.)

  The default method generates an error; however, a fairly generic
  utility implementation is available as \code{genericPMoptimizer()},
  which uses the function \code{\link[stats]{optim}}.  The quantity to
  be maximized is based on an open implementation protocol, which relies
  on the following other methods:
  \describe{
    \item{\code{\link{pvec}}}{Returns the parameters in a vectorized
      form which can be used by \code{\link[stats]{optim}}.}
    \item{\code{\link{lpriorPMParam}}}{Evaluates the prior distribution
      of the parameters (only if \code{Bayes==TRUE}).  Returns
      \code{-Inf} if the parameter values are not legal (e.g., a
      singular covariance matrix).}
    \item{\code{\link{lpriorLatent}}}{Evaluates the likelihood of the
      latent variables.  Returns \code{-Inf} if the parameter values are
      not legal (e.g., a singular covariance matrix).}
  }

  The \code{genericPMoptimizer()} method uses
  \code{\link[stats]{optim}}; the values of \code{control} should be
  appropriate for that function.  Note that other implementations may
  assign different meanings to \code{control}.  The \code{maxit}
  argument overrides any value in \code{control}.  The \code{nowarn}
  argument suppresses warning messages for lack of convergence: setting
  \code{nowarn=TRUE} and \code{maxit} to a small value is useful for the
  M-step of a generalized EM algorithm.
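  As a sketch only (not the package source), an implementation following
  this protocol might be structured as below.  The generics
  \code{pvec()}, \code{lpriorPMParam()}, and \code{lpriorLatent()} are
  assumed to behave as described above, and the replacement form
  \code{pvec(param) <- value} is assumed to rebuild a parameter object
  from its vectorized form; that inverse operation is an assumption, as
  it is not documented here.

```r
## Hedged sketch of a genericPMoptimizer-style implementation.
## Not runnable standalone: pvec(), pvec<-, lpriorPMParam(), and
## lpriorLatent() come from the package and are assumed, not defined.
sketchPMoptimizer <- function(pm, latent, weights = 1,
                              param = parameters(pm), control = list(),
                              maxit = 100, nowarn = FALSE, Bayes = TRUE) {
  objective <- function(pv) {
    pvec(param) <- pv                     # rebuild parameter object
    lp <- sum(weights * lpriorLatent(pm, latent, param = param))
    if (Bayes) lp <- lp + lpriorPMParam(pm, param)
    lp                                    # -Inf flags illegal values
  }
  control$maxit <- maxit                  # maxit overrides control
  control$fnscale <- -1                   # optim() minimizes; flip sign
  opt <- stats::optim(pvec(param), objective, control = control)
  if (opt$convergence != 0 && !nowarn)
    warning("Optimization did not converge.")
  pvec(param) <- opt$par
  list(param = param, deviance = -2 * opt$value,
       convergence = opt$convergence == 0, optout = opt)
}
```

  Note the use of \code{fnscale = -1}: \code{\link[stats]{optim}}
  minimizes by default, so the sign of the objective is flipped to turn
  the problem into a maximization.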
}
\value{
  A list with the following components:
  \item{param}{A parameter object of the same class as the \code{param}
    argument containing the optimal parameters.}
  \item{deviance}{-2 times the log likelihood of the latent variables at
    the end of the run.}
  \item{convergence}{A logical value indicating whether or not the
    optimization converged.}
  \item{optout}{A method-specific list of outputs from the
    optimization.}
}
\author{Russell Almond}
\seealso{
  \code{\link{pvec}}, \code{\link{lpriorLatent}},
  \code{\link{lpriorPMParam}}, \code{\link{optimalEMParams}}
}
\examples{
pm1 <- new("TimelessNormalPM",
           muMean = c(Mechanics = 2, Fluency = 2), varWeight = 3,
           precMean = solve(matrix(c(.7, .3, .3, .7), 2, 2)), Sdf = 3)
parameters(pm1) <- drawPMParam(pm1)
pparam1 <- drawPMParam(pm1)
stud1k <- drawInitialLatent(pm1, 1000, param = pparam1)

## MLE
pparam1.mle <- optimalPMParams(pm1, stud1k, Bayes = FALSE)
## MAP
pparam1.map <- optimalPMParams(pm1, stud1k, Bayes = TRUE)
## Weighted MAP
pparam2.map <- optimalPMParams(pm1, stud1k, Bayes = TRUE, weights = 2)

Q <- matrix(c(1, 0, 1, 1), 2, 2)
em1 <- new("FixedQNormalEM", Q = Q,
           zMean = c(Mechanics = 0, Fluency = 0), zStd = diag(c(1, 1)),
           Rmean = matrix(c(.7, .3, .3, .7), 2, 2), Rdf = 3,
           hMean = c(1, 1, .5), hStd = rep(.25, 3))
parameters(em1) <- drawEMParam(em1)
eparam1 <- drawEMParam(em1)
obs1k <- drawObs(em1, stud1k, param = eparam1)

## MLE
eparam1.mle <- optimalEMParams(em1, obs1k, stud1k, Bayes = FALSE)
eparam1a.mle <- optimalEMParams(em1, obs1k, stud1k,
                                param = eparam1.mle$param, Bayes = FALSE)
## MAP
eparam1.map <- optimalEMParams(em1, obs1k, stud1k, Bayes = TRUE)
}
\keyword{ manip }