Sakineh Dehghan, Mohammadreza Farid-Rohani, Volume 20, Issue 1 (4-2015)
Abstract
In this article, we first introduce the depth function as a tool for center-outward ranking. We then present the half-space (Tukey) depth function, one of the most popular depth functions, and use it in what follows. Next, multivariate nonparametric tests for location and scale differences between two populations are expressed via depth-based ranking and statistics based on the depth-versus-depth (DD) plot. Finally, using these tests, the performance of the suggested non-invasive distraction method is evaluated for osteoarthritis patients in terms of pain intensity, quality of life, operative ability and inflammation rate, and is compared with the usual invasive distraction method.
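For intuition, the half-space (Tukey) depth of a point can be approximated by scanning directions and, for each direction, counting the fraction of data points in the closed half-space on one side of the point; the depth is the minimum such fraction. The following is a minimal brute-force sketch, not the authors' implementation, and the 2-D dataset is invented for illustration:

```python
import math

def tukey_depth(point, data, n_dirs=360):
    """Approximate half-space (Tukey) depth of a 2-D point.

    For each scanned direction u, count the fraction of data points in the
    closed half-space {x : u.x >= u.point}; depth is the minimum fraction.
    """
    px, py = point
    depth = 1.0
    for k in range(n_dirs):
        theta = 2 * math.pi * k / n_dirs
        ux, uy = math.cos(theta), math.sin(theta)
        proj_p = ux * px + uy * py
        count = sum(1 for (x, y) in data if ux * x + uy * y >= proj_p - 1e-12)
        depth = min(depth, count / len(data))
    return depth

data = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]
print(tukey_depth((0, 0), data))  # 0.6: the center is deepest
print(tukey_depth((1, 0), data))  # 0.2: an extreme point is shallow
```

Points with higher depth are more "central", which is exactly the center-outward ranking the depth function provides.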
Raziyeh Ansari, Volume 20, Issue 1 (4-2015)
Abstract
In industry and in nature, there are systems subjected to a sequence of shocks occurring randomly in time; these shocks cause aging or failure of the system. According to the type of shock, shock models are divided into two major groups: extreme shock models and cumulative shock models. In an extreme shock model, only the impact of the last shock, called the fatal shock, is studied, whereas in a cumulative shock model the accumulated effect of the occurred shocks is studied.
In reality, the effect of shocks on a system may coincide with neither of these models, so introducing other types of shock models and studying the aging of such systems is necessary. In this article we introduce some new shock models and, for each model, derive the survival probability and the corresponding failure rate function.
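The survival probability in a cumulative shock model can be approximated by Monte Carlo simulation: shocks arrive as a Poisson process and the system fails once accumulated damage crosses a threshold. A hypothetical sketch (the arrival rate, damage law and threshold below are invented for illustration, not taken from the article):

```python
import random

def cumulative_shock_survival(t, rate=1.0, threshold=5.0, n_sim=20000, seed=1):
    """Estimate P(system survives past time t) in a cumulative shock model:
    shocks arrive as a Poisson process with the given rate, each shock adds
    an Exp(1) damage, and the system fails once total damage > threshold."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_sim):
        time, damage = 0.0, 0.0
        alive = True
        while True:
            time += rng.expovariate(rate)    # waiting time to next shock
            if time > t:
                break                        # no further shocks before t
            damage += rng.expovariate(1.0)   # accumulate this shock's damage
            if damage > threshold:
                alive = False                # fatal accumulation
                break
        survived += alive
    return survived / n_sim

print(cumulative_shock_survival(1.0))   # high survival for small t
print(cumulative_shock_survival(20.0))  # low survival for large t
```

An extreme shock model would replace the accumulation step by a check on each individual shock's magnitude against a critical level.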
, , , Volume 20, Issue 1 (4-2015)
Abstract
The problem of sample size estimation is important in medical applications, especially in cases of expensive measurements of immune biomarkers. This paper describes the problem of logistic regression analysis together with sample size determination algorithms, namely the methods of univariate statistics, logistic regression, cross-validation and Bayesian inference. The authors, treating the regression model parameters as a multivariate variable, propose to estimate the sample size using the distance between parameter distribution functions on cross-validated data sets. Herewith, the authors give a new contribution to data mining and statistical learning, supported by applied mathematics.
Mrs Zahra Niknam, Dr Mohammad Hossein Alamatsaz, Volume 20, Issue 1 (4-2015)
Abstract
In many statistical modeling problems, the common assumption is that observations are normally distributed. In many real data applications, however, the true distribution deviates from the normal. Thus, the main concern of most recent studies on data analysis has been the construction and use of alternative distributions. In this regard, new classes of distributions, such as the slash and skew-slash families, have been introduced, and these have been the focus of many researchers' investigations in recent decades. The slash distribution, as a heavy-tailed symmetric distribution, is well known in robustness studies. However, since in empirical examples there are many situations where symmetric distributions are not suitable for fitting the data, the study of skew distributions has become particularly important. In this paper we introduce the skew-slash distribution and study its properties. Finally, applications to several real data sets illustrate the importance of this distribution in regression models.
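As background, the standard slash distribution arises as the ratio Z/U of an independent standard normal Z and a uniform U on (0, 1), which is why it is heavier-tailed than the normal. A small simulation sketch (purely illustrative, not from the paper):

```python
import random

def sample_slash(n, seed=0):
    """Draw n values from the standard slash distribution Z/U,
    with Z ~ N(0,1) and U ~ Uniform(0,1) independent."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], avoiding division by zero
    return [rng.gauss(0.0, 1.0) / (1.0 - rng.random()) for _ in range(n)]

def tail_fraction(xs, c=3.0):
    """Fraction of the sample with |x| > c."""
    return sum(1 for x in xs if abs(x) > c) / len(xs)

n = 100_000
rng = random.Random(1)
normal = [rng.gauss(0.0, 1.0) for _ in range(n)]
slash = sample_slash(n)
print(tail_fraction(normal))  # small: normal tails are light
print(tail_fraction(slash))   # much larger: slash tails are heavy
```

The skew-slash family studied in the paper replaces the normal numerator by a skew-normal variable, keeping the heavy tails while allowing asymmetry.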
Eisa Mahmoudi, , Volume 20, Issue 2 (10-2015)
Abstract
Sequential estimation is used when the total sample size is not fixed and the problem cannot be solved with a fixed sample size. Sequentially estimating the mean of an exponential distribution (one- or two-parameter) is an important problem that has attracted attention from authors over the years. In this paper, the two-stage sampling procedure introduced by Mukhopadhyay and Zacks (2007) is employed to estimate linear combinations of the location and scale parameters of a two-parameter negative exponential distribution with a bounded quadratic risk function. Furthermore, some simulation results are provided.
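The idea of a two-stage bounded-risk procedure can be sketched as follows: since the squared-error risk of the sample mean of an Exp(θ) sample of size n is θ²/n, a pilot sample of size m estimates θ, and the final sample size is chosen so that the estimated risk falls below a prescribed bound w. This is a simplified illustration of the two-stage idea, not the Mukhopadhyay-Zacks (2007) procedure itself:

```python
import math
import random

def two_stage_sample_size(pilot, w):
    """Given a pilot Exp(theta) sample and a risk bound w, return the final
    sample size n so that the estimated risk theta_hat^2 / n is below w."""
    m = len(pilot)
    theta_hat = sum(pilot) / m            # pilot estimate of the mean theta
    return max(m, math.ceil(theta_hat ** 2 / w))

rng = random.Random(0)
theta = 2.0
pilot = [rng.expovariate(1.0 / theta) for _ in range(20)]  # first stage
n = two_stage_sample_size(pilot, w=0.05)
print(n)  # second-stage size; estimated risk theta_hat^2 / n <= w
```

After the second stage, the mean is estimated from all n observations; the extra randomness of n is what the bounded-risk analysis must account for.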
Shirin Shahsanam, Masoud Yarmohammadi, Volume 20, Issue 2 (10-2015)
Abstract
Nowadays factor analysis has been greatly extended, and one of its applications is the analysis of attributes that are not directly measurable. When the response variable has a Bernoulli distribution, using the factor analysis method for continuous quantities leads to invalid and misleading results. In this paper, factor analysis for a Bernoulli response variable based on the logit model is developed, and its application is explained for real data from a research project on high school mathematics textbooks in Iran.
Anita Abdollahi, Volume 20, Issue 2 (10-2015)
Abstract
In this paper, after stating the characteristics of some continuous distributions, including the gamma, Crovelli's gamma, Rayleigh, Weibull, Pareto, exponential and generalized gamma distributions, these distributions were fitted to drought data from Guilan province and the best-fitting distribution was presented. Then, the severity and duration of drought at different sites were investigated using the standardized precipitation index.
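The model-comparison step can be mimicked with closed-form maximum likelihood fits: the exponential and Rayleigh distributions, for instance, both have closed-form MLEs, so candidate fits can be ranked by AIC. A generic sketch on simulated data (not the Guilan drought series):

```python
import math
import random

def aic_exponential(xs):
    """AIC of the best-fitting exponential: MLE rate lambda = 1/mean."""
    n = len(xs)
    lam = n / sum(xs)
    loglik = n * math.log(lam) - lam * sum(xs)
    return 2 * 1 - 2 * loglik          # one fitted parameter

def aic_rayleigh(xs):
    """AIC of the best-fitting Rayleigh: MLE sigma^2 = sum(x^2)/(2n)."""
    n = len(xs)
    sum_sq = sum(x * x for x in xs)
    s2 = sum_sq / (2 * n)
    loglik = sum(math.log(x) for x in xs) - n * math.log(s2) - sum_sq / (2 * s2)
    return 2 * 1 - 2 * loglik          # one fitted parameter

rng = random.Random(42)
data = [rng.expovariate(0.5) for _ in range(2000)]  # stand-in for drought data
print(aic_exponential(data) < aic_rayleigh(data))   # True: exponential wins
```

With real drought data, each candidate family in the paper's list would be fitted in the same way and the smallest AIC (or a goodness-of-fit statistic) would pick the best distribution.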
Dr. Jalal Chachi, Volume 20, Issue 2 (10-2015)
Abstract
The problem of testing fuzzy hypotheses in the presence of vague data is considered. A new method based on the necessity index of strict dominance (NSD) is suggested. An example shows how to apply the proposed test in statistical quality control.
Abazar Khalaji, , Volume 20, Issue 2 (10-2015)
Abstract
Assume that we have m independent random samples, each of size n, from N_p(μ, Σ), and our goal is to test whether or not the ith sample is an outlier (i = 1, 2, ..., m). It is well known that a test statistic exists whose null distribution is Beta, and given the relationship between the Beta and F distributions, an F test statistic can be used. In the statistical literature, however, a clear and precise proof is not accessible, and in some cases the proof is incomplete. In this paper a precise and relatively clear proof is given, and through simulation the capability and weakness of the test are considered.
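The core quantity behind such a test is a Mahalanobis-type distance from the estimated center, of which the Beta/F statistic is a monotone function. A toy 2-D illustration with hand-rolled matrix algebra (a simplified single-observation version, not the paper's m-sample statistic; the data are invented):

```python
def mahalanobis2_2d(x, mean, cov):
    """Squared Mahalanobis distance of a 2-D point, inverting the
    2x2 covariance matrix in closed form."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mean[0], x[1] - mean[1])
    y0 = inv[0][0] * dx[0] + inv[0][1] * dx[1]
    y1 = inv[1][0] * dx[0] + inv[1][1] * dx[1]
    return dx[0] * y0 + dx[1] * y1

def sample_mean_cov(pts):
    """Sample mean vector and (unbiased) covariance matrix of 2-D points."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in pts) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / (n - 1)
    return (mx, my), ((sxx, sxy), (sxy, syy))

# four points near the origin plus one planted outlier
pts = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1), (5.0, 5.0)]
mean, cov = sample_mean_cov(pts)
d2 = [mahalanobis2_2d(p, mean, cov) for p in pts]
print(d2.index(max(d2)))  # 4: flags the planted outlier
```

The formal test then compares the (suitably scaled) maximal distance with a Beta or F critical value rather than simply taking the maximum.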
Mehran Naghizadeh Qomi, Mohammad Taghi Kamel Mirmostafaee, , Volume 20, Issue 2 (10-2015)
Abstract
A tolerance interval is a random interval that contains a given proportion of the population with a specified confidence level and is applied in many fields such as reliability and quality control. In this paper, based on record data, we obtain a two-sided tolerance interval for the exponential population. An example of real record data is presented. Finally, we discuss the accuracy of the proposed tolerance intervals through a simulation study.
Miss Azade Ghazanfari Hesari, , Volume 20, Issue 2 (10-2015)
Abstract
One of the most important problems in any statistical analysis is the existence of unexpected observations. Some observations are not part of the study and are known as outliers. Studies have shown that outliers affect the performance of standard statistical methods in models and predictions. The aim of this work is to present a statistical package for the R software, written by the author, that identifies outliers in circular-circular regression. We give a brief explanation of circular data and circular regression, then introduce the R packages for circular regression. Afterwards, the functions in the package CircOutlier are described.
, Volume 20, Issue 2 (10-2015)
Abstract
Methods for small area estimation have received great attention in recent years due to the growing demand for reliable small area estimates, which are needed in development planning, allocation of government funds and making business decisions. The key question in small area estimation is how to obtain reliable estimates when the sample size is small. When only a few observations (or even no observations) are available from a given small area, the small sample sizes lead to undesirably large standard errors. The only possible solution to the estimation problem is to borrow strength from available data sets. This is accomplished by using appropriate linking models (including explicit and implicit models) to increase the effective sample size for estimation. Generalized linear mixed models and the empirical best linear unbiased predictor are extensively used to obtain reliable estimates of small area means. In this article, we first introduce small area estimation. Then, to obtain reliable small area estimates, we introduce the Fay-Herriot model as a special case of the generalized linear mixed model. Finally, in a simulation study we use data from the Iran 1382 agricultural census to estimate orange production in the cities of Fars (the small areas) in the year 1382 based on the Fay-Herriot model.
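The Fay-Herriot predictor for an area mean is a convex combination of the direct survey estimate and a regression-synthetic estimate, with weight γ_i = A/(A + D_i), where A is the model variance and D_i the sampling variance of area i. A minimal sketch with made-up numbers (not the census data of the paper):

```python
def fay_herriot_eblup(y_i, x_beta_i, A, D_i):
    """Best linear unbiased predictor of a small area mean under the
    Fay-Herriot model: shrink the direct estimate y_i toward the
    synthetic estimate x_beta_i with weight gamma = A / (A + D_i)."""
    gamma = A / (A + D_i)
    return gamma * y_i + (1.0 - gamma) * x_beta_i

# hypothetical area: noisy direct estimate 120, synthetic estimate 100,
# model variance A = 25, sampling variance D_i = 75 (all invented)
est = fay_herriot_eblup(y_i=120.0, x_beta_i=100.0, A=25.0, D_i=75.0)
print(est)  # 105.0: the direct estimate is shrunk toward the synthetic one
```

The larger the sampling variance D_i relative to A (i.e. the less reliable the direct estimate), the more weight goes to the synthetic, model-based part; in practice A and the regression coefficients are themselves estimated from all areas, which is how strength is borrowed.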
Mehran Naghizadeh Qomi, Azita Norozi Firoz, Volume 21, Issue 1 (9-2016)
Abstract
A tolerance interval is a random interval that contains a given proportion of the population with a specified confidence level and is applied in many fields such as reliability and quality control. In this educational paper, we investigate different methods for computing tolerance intervals for the binomial random variable using the Tolerance package in the statistical software R.
Ameneh Abyar, Mohsen Mohammadzadeh, Kiomars Motarjem, Volume 21, Issue 1 (9-2016)
Abstract
Because of censoring and skewness in survival data, models such as the Weibull are used to analyze such data.
In addition, parametric and semiparametric models can be obtained from the baseline hazard function of the Cox model and fitted to survival data. Although these models are popular because of their simple use, they do not account for unknown risk factors, and therefore do not necessarily provide the best fit to the data.
In this paper, by incorporating multiple random effects into the Cox model, frailty models are introduced. The presented models are then used to model esophageal cancer data from Golestan, and the fitted models are evaluated and compared based on a generalized coefficient of determination criterion.
Dr Farzad Eskandari, Ms Imaneh Khodayari Samghabadi, Volume 21, Issue 1 (9-2016)
Abstract
There are different types of classification methods for classifying certain data. The values of variables are not always certain, however; they may belong to intervals, in which case the data are called uncertain data. In recent years, under the assumption that uncertain data are normally distributed, several estimators of the mean and variance of this distribution have been proposed. In this paper, we consider the means and variances of both the start and the end points of the intervals; thus we assume that the distribution of the uncertain data is bivariate normal. We use maximum likelihood to estimate the means and variances of the bivariate normal distribution. Finally, based on naive Bayesian classification, we propose a Bayesian mixture algorithm for classifying certain and uncertain data. The experimental results show that the proposed algorithm has high accuracy.
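The classification backbone here is naive Bayes with normal class-conditional densities. A plain Gaussian naive Bayes for certain data can be sketched as follows (the paper's extension would treat the two interval endpoints as a bivariate normal feature; the toy data and labels below are invented):

```python
import math

def gauss_logpdf(x, mu, var):
    """Log-density of N(mu, var) at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def fit_gnb(X, y):
    """Estimate class priors and per-class mean/variance of each feature."""
    params = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        stats = []
        for j in range(len(X[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / (len(col) - 1)
            stats.append((mu, var))
        params[c] = (len(rows) / len(X), stats)
    return params

def predict_gnb(params, x):
    """Pick the class maximizing log prior + sum of feature log-densities."""
    best, best_score = None, -math.inf
    for c, (prior, stats) in params.items():
        score = math.log(prior) + sum(
            gauss_logpdf(xj, mu, var) for xj, (mu, var) in zip(x, stats))
        if score > best_score:
            best, best_score = c, score
    return best

X = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9], [5.0, 5.2], [4.9, 5.1], [5.2, 4.8]]
y = [0, 0, 0, 1, 1, 1]
model = fit_gnb(X, y)
print(predict_gnb(model, [1.0, 1.0]))  # 0
print(predict_gnb(model, [5.0, 5.0]))  # 1
```

In the uncertain-data setting, the per-feature univariate density would be replaced by a bivariate normal over the interval's start and end points, with the means and variances fitted by maximum likelihood as the abstract describes.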
, , Volume 21, Issue 1 (9-2016)
Abstract
In this paper, collinearity in regression models is introduced and procedures for removing it are studied, after the necessary preliminary definitions are given. At the end of the paper, collinearity in a regression model is diagnosed and a solution for removing it is introduced.
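Collinearity is commonly diagnosed with the variance inflation factor, VIF_j = 1/(1 - R_j²), where R_j² comes from regressing predictor j on the other predictors; values above about 10 are usually taken to signal serious collinearity. A self-contained sketch on simulated data (one deliberately collinear pair):

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def vif(X, j):
    """VIF of column j: regress it on the other columns plus an intercept,
    via ordinary least squares on the normal equations."""
    y = [row[j] for row in X]
    Z = [[1.0] + [v for k, v in enumerate(row) if k != j] for row in X]
    p = len(Z[0])
    XtX = [[sum(r[a] * r[b] for r in Z) for b in range(p)] for a in range(p)]
    Xty = [sum(r[a] * yi for r, yi in zip(Z, y)) for a in range(p)]
    beta = solve(XtX, Xty)
    yhat = [sum(b * z for b, z in zip(beta, row)) for row in Z]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 / (1.0 - r2)

rng = random.Random(0)
x1 = [rng.gauss(0, 1) for _ in range(200)]
x2 = [v + rng.gauss(0, 0.05) for v in x1]   # nearly collinear with x1
x3 = [rng.gauss(0, 1) for _ in range(200)]  # independent predictor
X = [list(t) for t in zip(x1, x2, x3)]
print(vif(X, 0) > 10)  # True: collinear pair inflates the VIF
print(vif(X, 2) < 2)   # True: independent predictor has VIF near 1
```

Once a large VIF identifies the offending predictor, the usual remedies are dropping or combining the correlated variables, or switching to ridge or principal-component regression.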
, Volume 21, Issue 1 (9-2016)
Abstract
Basu's theorem is one of the most elegant results of classical statistics. Succinctly put, the theorem says: if T is a complete sufficient statistic for a family of probability measures, and V is an ancillary statistic, then T and V are independent. A very novel application of Basu's theorem appeared recently in proving the infinite divisibility of certain statistics. In addition to Basu's theorem, this application requires a version of the Goldie-Steutel law. Using Basu's theorem, it is shown that a large class of functions of random variables, two of which are independent standard normal, is infinitely divisible. A further result provides a representation of functions of normal variables as the product of two random variables, where one is infinitely divisible, the other is not, and the two are independently distributed.
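A textbook instance of the theorem (standard material, added here for orientation, not taken from this abstract): for an i.i.d. normal sample with known variance,

```latex
X_1,\dots,X_n \overset{\text{iid}}{\sim} N(\mu,\sigma^2), \qquad
\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad
S^2 = \frac{1}{n-1}\sum_{i=1}^{n} \bigl(X_i - \bar{X}\bigr)^2 ,
```

the sample mean \bar{X} is complete sufficient for \mu, while S^2 is ancillary because (n-1)S^2/\sigma^2 \sim \chi^2_{n-1} does not depend on \mu; Basu's theorem then yields the classical independence of \bar{X} and S^2 without any direct distributional computation.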
, , Volume 21, Issue 1 (9-2016)
Abstract
In this paper, we study interval linear regression models for fuzzy data.
In section one, we introduce the required concepts and then illustrate fuzzy linear regression and some primary definitions. In section two, we introduce various methods of interval linear regression analysis. In section three, we implement numerical examples for the methods of section two. In section four, we improve some of the methods of interval linear regression analysis considered earlier, and we show the performance of three methods through several examples. All computations in the examples are done with the alabama package in the R software.
Hossein Nadeb, Hamzeh Torabi, Volume 21, Issue 1 (9-2016)
Abstract
Censored samples arise in life-testing experiments, i.e., whenever the experimenter does not observe the failure times of all units placed on a life test. In recent years, inference based on censored sampling has received considerable attention, and the parameters of various distributions, such as the normal, exponential, gamma, Rayleigh, Weibull, log-normal, inverse Gaussian, logistic, Laplace, and Pareto, have been inferred from censored samples.
In this paper, a procedure for exact hypothesis testing and for obtaining a confidence interval for the mean of the exponential distribution under Type-I progressive hybrid censoring is proposed. The performance of the proposed confidence interval is then evaluated using simulation. Finally, the proposed procedures are applied to a data set.
Dr Vahid Rezaeitabar, Selva Salimi, Volume 21, Issue 1 (9-2016)
Abstract
A Bayesian network is a graphical model that represents a set of random variables and their causal relationships via a directed acyclic graph (DAG). There are basically two tasks in learning a Bayesian network: parameter learning and structure learning. One of the most effective structure-learning methods is the K2 algorithm. Because the performance of the K2 algorithm depends on the node ordering, more effective node-ordering inference methods are needed. In this paper, based on the fact that parent and child variables are identified by the estimated Markov blanket (MB), we first estimate the MB of a variable using the Grow-Shrink algorithm, then determine the candidate parents of the variable by evaluating the conditional frequencies using the Dirichlet probability density function. The candidate parents are then used as input for the K2 algorithm. Experimental results on most of the datasets indicate that our proposed method significantly outperforms previous methods.
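For context, the K2 algorithm greedily adds parents that raise the Cooper-Herskovits (K2) score of a variable; in log form, for a child with r states, the score sums, over each parent configuration with N_j matching records and per-state counts N_jk, the term log[(r-1)!/(N_j+r-1)!] + Σ_k log(N_jk!). A compact sketch of the score on toy binary data (illustrative, not the paper's implementation):

```python
from math import lgamma
from collections import Counter, defaultdict

def k2_log_score(data, child, parents):
    """Log Cooper-Herskovits K2 score of `child` given a parent set.
    `data` is a list of dicts mapping variable name -> discrete value."""
    r = len({row[child] for row in data})   # number of child states
    counts = defaultdict(Counter)           # parent config -> child counts
    for row in data:
        config = tuple(row[p] for p in parents)
        counts[config][row[child]] += 1
    score = 0.0
    for child_counts in counts.values():
        n_j = sum(child_counts.values())
        # log (r-1)! / (N_j + r - 1)!  via the gamma function
        score += lgamma(r) - lgamma(n_j + r)
        # sum of log N_jk! over child states
        score += sum(lgamma(n + 1) for n in child_counts.values())
    return score

# toy data in which B is an exact copy of A,
# so the parent set {A} should score higher than the empty set
data = [{"A": a, "B": a} for a in [0, 1] * 10]
print(k2_log_score(data, "B", ["A"]) > k2_log_score(data, "B", []))  # True
```

The node-ordering question the paper addresses arises because K2 only considers parents that precede the child in a fixed ordering; a Markov-blanket estimate restricts and orders the candidate parents before this scoring step.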