:: Search published articles ::
Showing 201 results for Type of Study: Research

Mahdi Tavangar, Miri,
Volume 19, Issue 1 (6-2014)
Abstract

Equilibrium distributions have many applications in reliability theory, stochastic orderings and random processes. The purpose of this paper is to introduce equilibrium distributions and to present some results related to them. Some results are based on order statistics. The generalized Pareto distributions are also analyzed, and some basic relationships between the equilibrium distributions are presented.
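
As a brief added illustration (not taken from the paper): for a nonnegative lifetime X with survival function S and finite mean mu, the equilibrium distribution has density f_e(x) = S(x)/mu. A minimal R sketch for a Weibull lifetime, with hypothetical parameter values:

    # Equilibrium density f_e(x) = S(x) / mu for a Weibull(shape = 2, scale = 1.5) lifetime
    shape <- 2; scale <- 1.5
    mu  <- scale * gamma(1 + 1 / shape)                       # mean of the Weibull
    f_e <- function(x) pweibull(x, shape, scale, lower.tail = FALSE) / mu
    # Sanity checks: f_e integrates to 1, and its mean equals E[X^2] / (2 * mu)
    integrate(f_e, 0, Inf)$value
    integrate(function(x) x * f_e(x), 0, Inf)$value
    (scale^2 * gamma(1 + 2 / shape)) / (2 * mu)
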
Zeynab Aghabazaz, Mohammad Hossein Alamatsaz,
Volume 19, Issue 2 (2-2015)
Abstract

The two-parameter Birnbaum–Saunders (BS) distribution was originally proposed as a failure time distribution for fatigue failure caused by cyclic loading. The BS model is a positively skewed statistical distribution which has received great attention in recent decades. Several extensions of this distribution with various degrees of skewness, kurtosis and modality have been considered. In particular, a generalized version of this model was derived based on symmetric distributions on the real line, named the generalized BS (GBS) distribution. In this article, we propose a new family of life distributions generated from elliptically contoured distributions, and obtain the density and some of its properties. Explicit expressions for the density are found for a number of specific elliptical distributions, such as the Pearson type VII, t, Cauchy, Kotz type, normal, Laplace and logistic. Another generalization of the BS distribution is also presented using skew-elliptical distributions, which makes its symmetry more flexible. Finally, some examples are provided to illustrate applications of the distribution.
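
As a hedged aside (standard facts, not code from the article): the BS family has the stochastic representation T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2 with Z standard normal, and GBS-type variants replace Z by another symmetric kernel. A minimal R sketch:

    # Stochastic representation of the Birnbaum-Saunders family:
    # T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2.
    # Classical BS uses Z ~ N(0,1); a GBS-type variant swaps in another symmetric kernel.
    rbs <- function(n, alpha, beta, kernel = function(n) rnorm(n)) {
      z <- kernel(n)
      w <- alpha * z / 2
      beta * (w + sqrt(w^2 + 1))^2
    }
    set.seed(1)
    t_bs  <- rbs(5000, alpha = 0.5, beta = 2)                                   # normal kernel
    t_gbs <- rbs(5000, alpha = 0.5, beta = 2, kernel = function(n) rt(n, df = 4))  # t kernel
    c(median(t_bs), median(t_gbs))   # both medians are close to beta = 2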


Dr Yadollah Mehrabi, Parvin Sarbakhsh, Dr Farid Zayeri, Dr Maryam Daneshpour,
Volume 19, Issue 2 (2-2015)
Abstract

Logic regression is a generalized regression and classification method that is able to build Boolean combinations of the original binary variables as new predictive variables. Logic regression was introduced for case-control or cohort studies with independent observations. Although correlated observations arise in various studies for different reasons, logic regression has not been studied, in theory or in application, for the analysis of correlated observations and longitudinal data.
Because identifying and accounting for interactions between variables is important in longitudinal studies, in this paper we propose transition logic regression as an extension of logic regression to binary longitudinal data. The AIC of the candidate models is used as the score function of the simulated annealing algorithm. To assess the performance of the method, a simulation study is carried out under various conditions of sample size, first-order dependence and interaction effect. According to the results of the simulation study, as the sample size increases, the percentage of correctly identified interactions and the MSE of the estimates improve. As an application, we use the proposed model to assess the interaction effects of some SNPs on HDL levels over time in the TLGS study.
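
Transition logic regression itself is the authors' proposal; purely as a hedged sketch of the "transition" idea it builds on, a first-order transition model for binary longitudinal data can be fitted by adding the lagged response to an ordinary logistic regression that contains a Boolean term. The variable names and simulated data below are illustrative only:

    # First-order transition (Markov) logistic model: logit P(y_t = 1) depends on
    # a Boolean combination of binary covariates and on the previous response y_{t-1}.
    set.seed(2)
    n_id <- 200; n_time <- 5
    dat <- expand.grid(time = 1:n_time, id = 1:n_id)
    dat <- dat[order(dat$id, dat$time), ]            # consecutive rows = same subject
    dat$x1 <- rbinom(nrow(dat), 1, 0.5)              # e.g. binary SNP-type covariates
    dat$x2 <- rbinom(nrow(dat), 1, 0.5)
    dat$y <- NA_integer_
    for (i in seq_len(nrow(dat))) {
      prev <- if (dat$time[i] == 1) 0 else dat$y[i - 1]
      eta  <- -1 + 1.2 * (dat$x1[i] & dat$x2[i]) + 0.8 * prev   # Boolean interaction + transition
      dat$y[i] <- rbinom(1, 1, plogis(eta))
    }
    dat$y_lag <- ave(dat$y, dat$id, FUN = function(v) c(NA, head(v, -1)))
    fit <- glm(y ~ I(x1 & x2) + y_lag, family = binomial,
               data = subset(dat, !is.na(y_lag)))
    AIC(fit)   # a score of the kind used to compare candidate Boolean terms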


Ms. Marzieh Baghban,
Volume 19, Issue 2 (2-2015)
Abstract

In reliability theory, some measures, called importance measures, are introduced to evaluate the relative importance of individual components or groups of components in a system. Importance measures are quantitative criteria that rank the components according to their importance. In the literature, different importance measures are presented based on different scenarios. These measures can be determined from the system structure, the reliability of the components and/or the component lifetime distributions. The purpose of this paper is to study different importance measures of the components of a system in reliability theory.
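
As a small added example of one widely used measure (not specific to this paper), the Birnbaum importance of component i is I_B(i) = h(1_i, p) - h(0_i, p), where h is the system reliability function. A minimal R sketch for a series-parallel system:

    # Birnbaum importance for the system "component 1 in series with (2 parallel 3)":
    # reliability function h(p) = p1 * (1 - (1 - p2) * (1 - p3))
    h <- function(p) p[1] * (1 - (1 - p[2]) * (1 - p[3]))
    birnbaum <- function(i, p) {
      p1 <- p; p1[i] <- 1     # component i works for sure
      p0 <- p; p0[i] <- 0     # component i has failed for sure
      h(p1) - h(p0)
    }
    p <- c(0.9, 0.8, 0.7)
    sapply(1:3, birnbaum, p = p)   # component 1 (the series component) ranks highest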


Fatemeh Asgari,
Volume 19, Issue 2 (2-2015)
Abstract

Unimodality is a structural property of distributions that, like skewness, kurtosis and symmetry, is visible in the shape of the density or mass function. Comparing two different distributions can be a very difficult task, but if both distributions are of the same type, for example both unimodal, we may simply compare their modes, dispersions and skewness. Hence the concept of unimodality of distributions and its characterizations is important. In this paper, we discuss the concept of unimodality and its generalization, namely α-unimodality, for discrete and continuous random variables. We also review the concept of α-monotonicity of distributions. Finally, we present certain upper bounds for the variance of a discrete α-unimodal distribution.
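
As a quick added illustration of the classical case (Khinchine's representation, not a result of this paper): X is unimodal about 0 if and only if X has the same distribution as U*Z with U ~ Uniform(0,1) independent of Z, and replacing U by U^(1/α) gives α-unimodality. A minimal R sketch:

    # Khinchine-type representation: U * Z is unimodal about 0 for any Z independent
    # of U ~ Uniform(0,1); U^(1/alpha) * Z gives an alpha-unimodal variable.
    set.seed(3)
    n <- 1e5
    z <- rnorm(n, mean = 2, sd = 1)       # an arbitrary "mixing" variable Z
    u <- runif(n)
    x1 <- u * z                           # unimodal about 0 (alpha = 1)
    alpha <- 0.5
    x2 <- u^(1 / alpha) * z               # alpha-unimodal about 0
    # Kernel density estimates of x1 and x2 peak at 0 and decay away from it
    plot(density(x1)); lines(density(x2), lty = 2)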


Raziyeh Ansari,
Volume 20, Issue 1 (4-2015)
Abstract

In industry and in nature, there are systems subjected to a sequence of shocks occurring randomly in time. These shocks cause aging or failure of the system. According to the type of shocks, shock models are divided into two major groups: extreme shock models and cumulative shock models. In extreme shock models only the impact of the last shock, called the fatal shock, is studied, while in cumulative shock models the accumulated effect of the occurred shocks is studied.

In reality, the effect of shocks on the system may not coincide with either of these models, so introducing other types of shock models and studying the aging of such systems is necessary. In this article we introduce some new shock models and, for each model, derive the survival probability and the corresponding failure rate function.

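As a hedged illustration of the two classical cases only (not of the new models in the article), suppose shocks arrive according to a Poisson process with independent magnitudes: an extreme shock model fails at the first magnitude exceeding a threshold, while a cumulative shock model fails when the accumulated magnitude exceeds a limit. A minimal R simulation:

    # Poisson shock arrivals with iid Exp(1) magnitudes; compare the failure times of
    # an extreme shock model (first magnitude > z) and a cumulative shock model
    # (accumulated magnitude > cum_limit).
    simulate_failure <- function(rate = 1, z = 3, cum_limit = 10, horizon = 500) {
      arrival   <- cumsum(rexp(horizon, rate))   # shock times
      magnitude <- rexp(horizon, 1)              # shock magnitudes
      t_extreme    <- arrival[which(magnitude > z)[1]]
      t_cumulative <- arrival[which(cumsum(magnitude) > cum_limit)[1]]
      c(extreme = t_extreme, cumulative = t_cumulative)
    }
    set.seed(4)
    sims <- t(replicate(5000, simulate_failure()))
    colMeans(sims)   # Monte Carlo estimates of the two mean failure times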

, , ,
Volume 20, Issue 1 (4-2015)
Abstract

The problem of sample size estimation is important in medical applications, especially in cases of expensive measurements of immune biomarkers. This paper considers the problem of logistic regression analysis together with sample size determination algorithms, namely the methods of univariate statistics, logistic regression, cross-validation and Bayesian inference. The authors, treating the regression model parameters as a multivariate random variable, propose to estimate the sample size using the distance between the parameter distribution functions on cross-validated data sets. In this way, the authors contribute to data mining and statistical learning, supported by applied mathematics.
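
As a loose, hedged sketch of the general idea only (not the authors' algorithm), one can track how the variability of logistic-regression coefficient estimates over repeated random subsamples shrinks as the subsample size grows, and pick the size where it becomes acceptable. Toy data, illustrative only:

    # For increasing n, refit a logistic regression on repeated random subsamples
    # and record the spread of the estimated slope; a sample size can be chosen
    # where this spread becomes acceptably small.
    set.seed(5)
    N <- 2000
    x <- rnorm(N)
    y <- rbinom(N, 1, plogis(-0.5 + 1 * x))
    spread <- sapply(c(100, 200, 400, 800), function(n) {
      beta <- replicate(200, {
        idx <- sample(N, n)
        coef(glm(y[idx] ~ x[idx], family = binomial))[2]
      })
      sd(beta)
    })
    spread   # decreasing Monte Carlo spread of the slope estimate as n grows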


Dr. A. Asgharzadeh, Mr. Hamed Yahyaee, Mr. M. Abdi,
Volume 20, Issue 1 (4-2015)
Abstract

Confidence intervals are one of the most important topics in mathematical statistics and are closely related to statistical hypothesis tests. In a confidence interval, the aim is to find a random interval that covers the unknown parameter with high probability. Confidence intervals and their different forms have been discussed extensively in standard statistical books. Since most statistical distributions have more than one parameter, joint confidence regions are often more relevant than confidence intervals. In this paper, we discuss joint confidence regions. Some examples are given for illustration.
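
A standard textbook example (added here purely for illustration) is a joint confidence region for the mean and variance of a normal distribution: since the sample mean and sample variance are independent, combining each pivot at confidence sqrt(1 - alpha) gives joint coverage 1 - alpha. A quick R coverage check:

    # Coverage check of a joint confidence region for (mu, sigma^2) of a normal sample:
    # each pivot is used at level sqrt(1 - alpha); independence of the sample mean and
    # variance makes the joint coverage (sqrt(1 - alpha))^2 = 1 - alpha.
    alpha <- 0.05; gam <- sqrt(1 - alpha)
    n <- 20; mu <- 1; sigma2 <- 4
    covered <- replicate(10000, {
      x <- rnorm(n, mu, sqrt(sigma2))
      z <- sqrt(n) * (mean(x) - mu) / sqrt(sigma2)     # N(0,1) pivot
      q <- (n - 1) * var(x) / sigma2                   # chi-square(n-1) pivot
      in_mu  <- abs(z) <= qnorm((1 + gam) / 2)
      in_var <- q >= qchisq((1 - gam) / 2, n - 1) & q <= qchisq((1 + gam) / 2, n - 1)
      in_mu && in_var
    })
    mean(covered)   # close to 0.95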


, ,
Volume 20, Issue 1 (4-2015)
Abstract

This paper is a Persian translation of R. T. Cox's (1946) famous work on subjective probability. It establishes an axiomatic foundation for subjective probability, akin to Kolmogorov's work.


Eisa Mahmoudi, ,
Volume 20, Issue 2 (10-2015)
Abstract

Sequential estimation is used when the total sample size is not fixed and the problem cannot be solved with a fixed sample size. Sequentially estimating the mean of an exponential distribution (one- or two-parameter) is an important problem that has attracted attention from authors over the years; this work has largely addressed exponential distributions with one or two parameters. In this paper, the two-stage sampling scheme introduced by Mukhopadhyay and Zacks (2007) is employed to estimate linear combinations of the location and scale parameters of a two-parameter negative exponential distribution under a bounded quadratic risk function. Furthermore, some simulation results are provided.
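
As a hedged sketch of the general two-stage idea only (a generic Stein-type rule, not the exact Mukhopadhyay and Zacks (2007) procedure): a pilot sample estimates the scale, the final sample size is chosen so that the estimated quadratic risk stays below the prescribed bound, and the remaining observations are then drawn.

    # Two-stage estimation of an exponential mean theta with bounded quadratic risk:
    # Var(xbar) = theta^2 / n <= w requires n >= theta^2 / w.  The pilot stage
    # estimates theta; the second stage tops the sample up to that size.
    two_stage <- function(theta = 2, w = 0.05, m = 10) {
      pilot <- rexp(m, rate = 1 / theta)
      N <- max(m, ceiling(mean(pilot)^2 / w))        # data-driven final sample size
      extra <- if (N > m) rexp(N - m, rate = 1 / theta) else numeric(0)
      c(N = N, estimate = mean(c(pilot, extra)))
    }
    set.seed(6)
    res <- t(replicate(5000, two_stage()))
    mean((res[, "estimate"] - 2)^2)   # empirical risk, to be compared with the bound w = 0.05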


Dr. Jalal Chachi,
Volume 20, Issue 2 (10-2015)
Abstract

The problem of testing fuzzy hypotheses in the presence of vague data is considered. A new method based on the necessity index of strict dominance (NSD) is suggested. An example of how to apply the proposed test in statistical quality control is given.


Abazar Khalaji, ,
Volume 20, Issue 2 (10-2015)
Abstract

Assume that we have m independent random samples, each of size n, from a p-variate normal distribution N_p(μ, Σ), and our goal is to test whether or not the i-th sample is an outlier (i = 1, 2, ..., m). It is well known that a test statistic exists whose null distribution is Beta and, given the relationship between the Beta and F distributions, an F test statistic can be used. In the statistical literature, however, a clear and precise proof is not accessible, and in some cases the proof is incomplete. In this paper a precise and relatively clear proof is given, and the capabilities and weaknesses of the test are examined through simulation.
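
The Beta-F link mentioned above is standard (and not specific to the paper): if B ~ Beta(a/2, b/2), then (b/a)·B/(1 - B) ~ F(a, b). A quick Monte Carlo check in R:

    # If B ~ Beta(a/2, b/2) then (b/a) * B / (1 - B) ~ F(a, b)
    set.seed(7)
    a <- 3; b <- 12
    B <- rbeta(1e5, a / 2, b / 2)
    Fstat <- (b / a) * B / (1 - B)
    ks.test(Fstat, "pf", df1 = a, df2 = b)   # large p-value: the distributions agree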


Mehran Naghizadeh Qomi, Mohammad Taghi Kamel Mirmostafaee, ,
Volume 20, Issue 2 (10-2015)
Abstract

A tolerance interval is a random interval that contains a specified proportion of the population with a given confidence level; it is applied in many fields such as reliability and quality control. In this paper, based on record data, we obtain a two-sided tolerance interval for the exponential population. An example with real record data is presented. Finally, we discuss the accuracy of the proposed tolerance intervals through a simulation study.


, ,
Volume 20, Issue 2 (10-2015)
Abstract

Nadarajah and Haghighi (2011) introduced a new generalization of the exponential distribution as an alternative to the gamma, Weibull and exponentiated exponential distributions. In this paper, a generalization of the Nadarajah–Haghighi (NH) distribution, namely the exponentiated generalized NH distribution, is introduced and discussed. The properties of the proposed model and its application to real data are discussed. A Monte Carlo simulation experiment is conducted to evaluate the maximum likelihood estimators of the unknown parameters.
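
For reference (standard facts about the baseline model, not the paper's new results): the NH distribution has cdf F(x) = 1 - exp{1 - (1 + lambda*x)^alpha} for x > 0, and one common "exponentiated generalized" construction applies G(x) = [1 - (1 - F(x))^a]^b to a baseline cdf F. A minimal R sketch with inverse-transform sampling:

    # Nadarajah-Haghighi (NH) cdf, density, quantile function and a sampler
    pnh <- function(x, alpha, lambda) 1 - exp(1 - (1 + lambda * x)^alpha)
    dnh <- function(x, alpha, lambda)
      alpha * lambda * (1 + lambda * x)^(alpha - 1) * exp(1 - (1 + lambda * x)^alpha)
    qnh <- function(u, alpha, lambda) ((1 - log(1 - u))^(1 / alpha) - 1) / lambda
    # One "exponentiated generalized" transform of a baseline cdf: G(x) = (1 - (1 - F(x))^a)^b
    peg <- function(x, a, b, cdf, ...) (1 - (1 - cdf(x, ...))^a)^b
    set.seed(8)
    x <- qnh(runif(1e4), alpha = 1.5, lambda = 0.5)   # inverse-transform sample from NH
    ks.test(x, pnh, alpha = 1.5, lambda = 0.5)        # sampler matches the cdf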


Mehran Naghizadeh Qomi, Azita Norozi Firoz,
Volume 21, Issue 1 (9-2016)
Abstract

A tolerance interval is a random interval that contains a specified proportion of the population with a given confidence level; it is applied in many fields such as reliability and quality control. In this educational paper, we investigate different methods for computing tolerance intervals for the binomial random variable using the tolerance package in the statistical software R.
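
A minimal, hedged example of the kind of call involved; the function and argument names below follow my recollection of the tolerance package and should be checked against its documentation:

    # Two-sided tolerance interval (content P = 0.90, confidence 0.95) for a binomial
    # proportion, based on x successes out of n trials, for a future group of m trials.
    # install.packages("tolerance")
    library(tolerance)
    bintol.int(x = 15, n = 50, m = 50, alpha = 0.05, P = 0.90,
               side = 2, method = "WS")   # "WS": Wilson score method (assumed name)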


Ameneh Abyar, Mohsen Mohammadzadeh, Kiomars Motarjem,
Volume 21, Issue 1 (9-2016)
Abstract

Because of censoring and skewness in survival data, models such as the Weibull are commonly used to analyze them.

In addition, parametric and semiparametric models can be obtained from the baseline hazard function of the Cox model and fitted to survival data. Although these models are popular because of their simple use, they do not account for unknown risk factors and therefore do not necessarily provide the best fit to the data.

In this paper, by including random effects in the Cox model, frailty models are introduced. Using the presented models, esophageal cancer data from Golestan are modeled, and the fitted models are evaluated and compared based on a generalized coefficient of determination criterion.

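A hedged sketch of fitting a shared (gamma) frailty Cox model in R with the survival package; the kidney example data shipped with that package stand in for the article's esophageal cancer data:

    # Shared (gamma) frailty Cox model: a random effect per subject/cluster is added
    # to the Cox proportional hazards model.
    library(survival)
    fit <- coxph(Surv(time, status) ~ age + sex + frailty(id, distribution = "gamma"),
                 data = kidney)
    summary(fit)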

, ,
Volume 21, Issue 1 (9-2016)
Abstract

In this paper, collinearity in regression models is introduced, and procedures for removing it are studied. Preliminary definitions are given. At the end of the paper, collinearity in a regression model is diagnosed and a solution for removing it is introduced.

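As a small added illustration of the usual diagnostic (not taken from the paper): the variance inflation factor of predictor j is VIF_j = 1/(1 - R_j^2), where R_j^2 is obtained by regressing that predictor on the remaining ones. A minimal R sketch with simulated, nearly collinear predictors:

    # Variance inflation factors computed from scratch: VIF_j = 1 / (1 - R_j^2)
    set.seed(9)
    n  <- 100
    x1 <- rnorm(n)
    x2 <- x1 + rnorm(n, sd = 0.1)     # nearly collinear with x1
    x3 <- rnorm(n)
    X  <- data.frame(x1, x2, x3)
    vif <- sapply(names(X), function(j) {
      r2 <- summary(lm(reformulate(setdiff(names(X), j), response = j), data = X))$r.squared
      1 / (1 - r2)
    })
    vif   # x1 and x2 have large VIFs; dropping one of them removes the collinearity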

,
Volume 21, Issue 1 (9-2016)
Abstract

Basu's theorem is one of the most elegant results of classical statistics. Succinctly put, the theorem says: if T is a complete sufficient statistic for a family of probability measures, and V is an ancillary statistic, then T and V are independent. A very novel application of Basu's theorem has appeared recently in proving the infinite divisibility of certain statistics. In addition to Basu's theorem, this application requires a version of the Goldie–Steutel law. Using Basu's theorem, it is shown that a large class of functions of random variables, two of which are independent standard normals, is infinitely divisible. A further result provides a representation of functions of normal variables as the product of two random variables, where one is infinitely divisible, the other is not, and the two are independently distributed.

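A classical corollary of Basu's theorem (added only as a quick illustration): for a normal sample with known variance, the sample mean is complete sufficient for the mean and the sample variance is ancillary, so the two are independent. A simple Monte Carlo check of (un)correlatedness:

    # Basu's theorem in the normal model with known sigma: the sample mean and the
    # sample variance are independent, so functions of them are uncorrelated.
    set.seed(10)
    sims <- replicate(20000, { x <- rnorm(10, mean = 3, sd = 2); c(mean(x), var(x)) })
    cor(sims[1, ], sims[2, ])            # close to 0
    cor(sims[1, ]^2, log(sims[2, ]))     # still close to 0 for nonlinear functions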

, ,
Volume 21, Issue 1 (9-2016)
Abstract

In this paper, we study interval linear regression models for fuzzy data.

In Section 1, we introduce the required concepts, fuzzy sets for linear regression and some preliminary definitions. In Section 2, we introduce various methods of interval linear regression analysis. In Section 3, we implement numerical examples of the methods of Section 2. Finally, in Section 4, we improve some of the methods of interval linear regression analysis considered earlier. The performance of three methods is shown through several examples. All computations in the examples are done with the alabama package in R.

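A hedged sketch of one classical interval-regression formulation of the Tanaka type, solved as a nonlinear program with alabama::auglag (the paper's own methods and examples may differ): the fitted interval with center a + b*x and half-width c0 + c1*|x| must cover every observation while the total spread is minimized.

    # Tanaka-type interval regression: minimize the total spread sum(c0 + c1*|x_i|)
    # subject to (a + b*x_i) - (c0 + c1*|x_i|) <= y_i <= (a + b*x_i) + (c0 + c1*|x_i|)
    # and c0, c1 >= 0.  Solved with the augmented Lagrangian routine in alabama.
    library(alabama)
    set.seed(11)
    x <- runif(30, 0, 10)
    y <- 1 + 0.5 * x + rnorm(30, sd = 0.4)
    obj <- function(p) sum(p[3] + p[4] * abs(x))     # p = (a, b, c0, c1)
    hin <- function(p) {                             # all components must be >= 0
      center <- p[1] + p[2] * x
      spread <- p[3] + p[4] * abs(x)
      c(center + spread - y, y - (center - spread), p[3], p[4])
    }
    fit <- auglag(par = c(0, 0, 1, 1), fn = obj, hin = hin,
                  control.outer = list(trace = FALSE))
    round(fit$par, 3)   # fitted interval model: center a + b*x, half-width c0 + c1*|x|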

Hossein Nadeb, Hamzeh Torabi,
Volume 21, Issue 1 (9-2016)
Abstract

Censored samples arise in life-testing experiments whenever the experimenter does not observe the failure times of all units placed on test. In recent years, inference based on censored sampling has been considered for the parameters of various distributions such as the normal, exponential, gamma, Rayleigh, Weibull, log-normal, inverse Gaussian, logistic, Laplace and Pareto.

In this paper, a procedure for exact hypothesis testing and for obtaining a confidence interval for the mean of the exponential distribution under Type-I progressive hybrid censoring is proposed. The performance of the proposed confidence interval is then evaluated using simulation. Finally, the proposed procedures are applied to a data set.

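As a hedged, simpler companion to the paper's setting (ordinary Type-II censoring rather than Type-I progressive hybrid censoring): with r observed failures out of n from an exponential distribution with mean theta, the total time on test T satisfies 2T/theta ~ chi-square(2r), which yields an exact confidence interval and test for theta. A minimal R sketch:

    # Exact CI for the exponential mean theta under Type-II censoring:
    # with r observed failures out of n, 2 * T / theta ~ chi-square(2r),
    # where T is the total time on test.
    set.seed(12)
    n <- 20; r <- 12; theta <- 5
    x  <- sort(rexp(n, rate = 1 / theta))[1:r]     # first r failure times
    TT <- sum(x) + (n - r) * x[r]                  # total time on test
    alpha <- 0.05
    c(lower = 2 * TT / qchisq(1 - alpha / 2, 2 * r),
      upper = 2 * TT / qchisq(alpha / 2, 2 * r))   # exact 95% CI for theta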


Page 4 of 11

Andishe-ye Amari (Statistical Thinking) Journal