:: Search published articles ::

Volume 24, Issue 2 (3-2020)
Abstract

The Kumaraswamy distribution is a two-parameter distribution on the interval (0,1) that is very similar to the beta distribution. It is applicable to many natural phenomena whose outcomes have lower and upper bounds, such as the proportion of people in a population who consume a certain product in a given interval.

In this paper, we introduce the Kumaraswamy-G family of distributions and derive its hazard rate function, reversed hazard rate function, mean residual life function, and mean past life function, studying the behavior of each of them. We also investigate stochastic ordering within the Kumaraswamy-G family. Finally, through a practical example, we assess the suitability of the Kumaraswamy distribution for real data.
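
As a minimal illustration of the construction named above, the sketch below evaluates the commonly used Kumaraswamy-G distribution function F(x) = 1 - (1 - G(x)^a)^b, its density, and the resulting hazard and reversed hazard rates for an arbitrary baseline G; the baseline choice, parameter values, and function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def kumaraswamy_g_cdf(x, a, b, G):
    """CDF of the Kumaraswamy-G family: F(x) = 1 - (1 - G(x)**a)**b."""
    return 1.0 - (1.0 - G(x) ** a) ** b

def kumaraswamy_g_pdf(x, a, b, G, g):
    """pdf: f(x) = a*b*g(x)*G(x)**(a-1) * (1 - G(x)**a)**(b-1)."""
    Gx = G(x)
    return a * b * g(x) * Gx ** (a - 1) * (1.0 - Gx ** a) ** (b - 1)

def hazard(x, a, b, G, g):
    """Hazard rate h(x) = f(x) / (1 - F(x))."""
    return kumaraswamy_g_pdf(x, a, b, G, g) / (1.0 - kumaraswamy_g_cdf(x, a, b, G))

def reversed_hazard(x, a, b, G, g):
    """Reversed hazard rate r(x) = f(x) / F(x)."""
    return kumaraswamy_g_pdf(x, a, b, G, g) / kumaraswamy_g_cdf(x, a, b, G)

# Baseline G uniform on (0,1) recovers the ordinary Kumaraswamy distribution.
a, b = 2.0, 3.0
x = np.linspace(0.05, 0.95, 5)
print(hazard(x, a, b, G=lambda t: t, g=lambda t: np.ones_like(t)))
print(reversed_hazard(x, a, b, G=lambda t: t, g=lambda t: np.ones_like(t)))
```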


Volume 24, Issue 2 (3-2020)
Abstract

In recent years, declining educational motivation among students at various levels of education has become increasingly widespread, affecting many university departments in the country, and this can have devastating and irreparable effects in the near future. In this research, using statistical analysis of questionnaire data, the social factors affecting the educational motivation of students at the Persian Gulf University of Bushehr are investigated, and the relationship between students' educational motivation and variables such as employment status, graduates' income, gender, marital status, family income, textbook content, university educational facilities, and weather conditions is studied. Unequal two-stage cluster sampling is used to collect the sample data and the required information, and multivariate factor analysis is used to examine the internal correlations of the variables and to identify the main factors through the variance of the variables explained by the factors. The effect of these variables on students' educational motivation is also studied.
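
The questionnaire items themselves are not reproduced here, so the sketch below only illustrates the factor-analysis step on simulated Likert-style responses with scikit-learn; the number of factors, items, and variable names are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical stand-in for questionnaire data: 300 respondents, 8 items
# driven by two latent factors (e.g., an "economic" and an "educational" factor).
n, p, k = 300, 8, 2
loadings = rng.normal(size=(p, k))
scores = rng.normal(size=(n, k))
items = scores @ loadings.T + rng.normal(scale=0.5, size=(n, p))

fa = FactorAnalysis(n_components=k, random_state=0)
fa.fit(items)

# Estimated loadings show which items cluster on which factor; the
# item-specific (uniqueness) variances indicate what the factors leave unexplained.
print(np.round(fa.components_.T, 2))    # p x k loading matrix
print(np.round(fa.noise_variance_, 2))  # uniqueness variances
```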


Volume 24, Issue 2 (3-2020)
Abstract

In the analysis of Bernoulli variables, investigating their dependence is of prime importance. In this paper, the Markov logarithmic series distribution is introduced by imposing first-order dependence among the Bernoulli variables. To estimate the parameters of this distribution, maximum likelihood, the method of moments, Bayesian estimation, and a new method called the expected Bayesian (E-Bayesian) method are employed. A simulation study then shows that the E-Bayesian estimator outperforms the other estimators.
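
The probability function of the Markov logarithmic series distribution is not given in this abstract, so the sketch below only illustrates the E-Bayesian idea (averaging the Bayes estimator over a hyperprior placed on the prior's hyperparameters) in a toy binomial-beta setting; the model, hyperprior, and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

# Toy illustration of E-Bayesian estimation (not the paper's Markov
# logarithmic series model): estimate a binomial probability p.
# Bayes estimator under a Beta(a, b) prior and squared-error loss:
#   p_hat(a, b) = (x + a) / (n + a + b)
# The E-Bayesian estimator averages p_hat(a, b) over a hyperprior on (a, b).

rng = np.random.default_rng(1)
n, p_true = 30, 0.35
x = rng.binomial(n, p_true)

def bayes_estimate(x, n, a, b):
    return (x + a) / (n + a + b)

# One possible hyperprior: a, b independent Uniform(0, c); the expectation
# is approximated by Monte Carlo.
c = 2.0
a_draws = rng.uniform(0, c, size=100_000)
b_draws = rng.uniform(0, c, size=100_000)
e_bayes = bayes_estimate(x, n, a_draws, b_draws).mean()

print(f"MLE = {x / n:.3f},  E-Bayes estimate = {e_bayes:.3f}")
```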


Volume 24, Issue 2 (3-2020)
Abstract

In practice, data are sometimes functions of another variable; such data are called functional data. If the scalar response variable is categorical or discrete and the covariates are functional, a generalized functional linear model is used to analyze this type of data. In this paper, a truncated generalized functional linear model is studied, and a maximum likelihood approach is used to estimate the model parameters. Finally, the model and the proposed methods are implemented in a simulation study and two practical examples.
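
As a rough illustration of how a generalized functional linear model with a binary response can be fitted after truncating the functional covariate to a finite basis, the sketch below projects simulated curves onto a small Fourier-type basis and fits a logistic regression on the resulting scores; the basis, data, and truncation level are assumptions, and the paper's specific truncated estimator is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Simulated functional covariates: n curves observed on a common grid.
n, m = 200, 101
t = np.linspace(0, 1, m)
curves = np.array([np.sin(2 * np.pi * (t + rng.uniform()))
                   + rng.normal(scale=0.2, size=m) for _ in range(n)])

# Small Fourier-type basis (a simple stand-in for the truncated expansion
# used in generalized functional linear models).
basis = np.column_stack([np.ones(m), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                         np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
scores = curves @ basis * (t[1] - t[0])   # Riemann-sum approximation of the scores

# Binary scalar response generated from the basis scores through a logit link.
eta = 1.5 * scores[:, 1] - 1.0 * scores[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Logistic regression on the scores approximates the functional coefficient
# beta(t) by a finite linear combination of the basis functions.
model = LogisticRegression().fit(scores, y)
beta_hat = basis @ model.coef_.ravel()    # estimated beta(t) on the grid
print(np.round(beta_hat[:5], 3))
```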


Volume 24, Issue 2 (3-2020)
Abstract

The analysis of discrete mixed responses is an important statistical issue in various sciences. Ordinal and overdispersed binomial variables are discrete; overdispersed binomial data are a sum of correlated Bernoulli trials with equal success probabilities. In this paper, a joint model with random effects is proposed for analyzing mixed overdispersed binomial and ordinal longitudinal responses. In this model, the overdispersed binomial response is assumed to follow a beta-binomial distribution, and a latent variable approach is used for modeling the ordinal response. The model parameters are estimated by maximum likelihood, and the estimates are evaluated in a Monte Carlo simulation study. Finally, an application of the proposed model to real data is presented.
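
A minimal check of the beta-binomial ingredient is sketched below: it compares the beta-binomial variance with the ordinary binomial variance and confirms the extra-binomial variation by simulating correlated Bernoulli sums through a shared beta-distributed success probability. The parameter values are illustrative, and the joint longitudinal model with random effects is not reproduced.

```python
import numpy as np
from scipy import stats

# Overdispersed binomial response: a sum of correlated Bernoulli trials.
# For Y ~ BetaBinomial(n, a, b): E[Y] = n*a/(a+b), and Var[Y] exceeds the
# binomial variance n*p*(1-p) whenever a + b is finite.
n, a, b = 10, 2.0, 3.0
p = a / (a + b)

bb = stats.betabinom(n, a, b)
bino = stats.binom(n, p)
print("mean:", bb.mean(), "vs binomial", bino.mean())
print("var :", bb.var(),  "vs binomial", bino.var())

# Monte Carlo check: a shared beta-distributed success probability per cluster
# is the latent mechanism that induces the correlation among the Bernoulli trials.
rng = np.random.default_rng(3)
probs = rng.beta(a, b, size=50_000)
y = rng.binomial(n, probs)
print("simulated var:", round(y.var(), 3))
```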


Volume 24, Issue 2 (3-2020)
Abstract

Life testing is often very time-consuming, so engineers and statisticians look for ways to reduce the test duration. A recommended method is to increase the stress level applied to the test units so that they fail earlier than under normal operating conditions; such approaches are called accelerated life tests. One of the most common of these is the step-stress accelerated life test, in which the stress applied to the units under test is increased step by step at predetermined times. The most important aspect of the step-stress model is the optimization of the test design: to optimize the test plan, the best time to increase the stress level must be chosen. In this paper, step-stress testing is first described and then applied to the exponential lifetime distribution. Since life data are often incomplete, the model is applied to type-I censored data. The optimal test plan is obtained by minimizing the asymptotic variance of the maximum likelihood estimator of reliability at time $\xi$. Finally, simulation studies and a real data set are discussed to illustrate the results, and a sensitivity analysis shows that the proposed optimal plan is robust.
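
The sketch below simulates a simple step-stress test with exponential lifetimes under the cumulative exposure model and type-I censoring, computes the closed-form maximum likelihood estimates, and compares a few candidate stress-change times by the Monte Carlo variance of the estimated reliability; the paper instead works with the asymptotic variance, and all parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_sample(n, theta1, theta2, tau, c):
    """Step-stress lifetimes under the cumulative exposure model,
    type-I censored at time c (stress is raised once, at time tau < c)."""
    e = rng.exponential(size=n)   # standard exponentials
    t = np.where(e < tau / theta1, theta1 * e, tau + theta2 * (e - tau / theta1))
    return np.minimum(t, c), (t <= c)   # observed times, failure indicators

def mle(times, failed, tau):
    """Closed-form MLEs: theta_k = (total time on test at stress k) / (failures at stress k)."""
    ttt1 = np.minimum(times, tau).sum()
    ttt2 = np.clip(times - tau, 0.0, None).sum()
    n1 = np.sum(failed & (times <= tau))
    n2 = np.sum(failed & (times > tau))
    if n1 == 0 or n2 == 0:   # degenerate sample, no estimate
        return None
    return ttt1 / n1, ttt2 / n2

# Crude Monte Carlo comparison of stress-change times tau: choose the tau that
# gives the smallest variance of the estimated reliability at time xi.
n, theta1, theta2, c, xi = 60, 8.0, 3.0, 12.0, 10.0
for tau in (3.0, 5.0, 7.0, 9.0):
    rel = []
    for _ in range(2000):
        est = mle(*simulate_sample(n, theta1, theta2, tau, c), tau)
        if est is not None:
            th1, th2 = est
            rel.append(np.exp(-tau / th1 - (xi - tau) / th2))
    print(f"tau = {tau}:  Var(R_hat({xi})) ~ {np.var(rel):.2e}")
```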


Volume 24, Issue 2 (3-2020)
Abstract

The area under the ROC curve (AUC) is a common index for evaluating the ability of biomarkers to classify. In practice, a single biomarker has limited classification ability, so to improve classification performance we are interested in combining biomarkers linearly and nonlinearly. In this study, while introducing various types of loss functions, the ramp AUC (RAUC) method and some of its features are introduced as a statistical model based on the AUC index. The aim of this method is to combine biomarkers linearly or nonlinearly so as to improve the classification performance of the biomarkers by minimizing the empirical loss function based on the ramp AUC loss. As an application, data on 378 diabetic patients referred to the Ardabil and Tabriz diabetes centers in 1393-1394 are used: the RAUC method was fitted to classify diabetic patients in terms of functional limitation based on demographic and clinical biomarkers, and model validation was assessed with a training/test split. The results on the test data set showed that the area under the curve for classifying the patients according to functional limitation was 0.81 with a linear kernel of the biomarkers and 1.00 with a radial basis function (RBF) kernel. These results indicate a strong nonlinear pattern in the data, so that the nonlinear combination of the biomarkers had higher classification performance than the linear combination.
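
The sketch below illustrates the pairwise ramp surrogate for the AUC with a linear combination of two simulated biomarkers: the ramp (truncated hinge) loss is averaged over all positive-negative pairs and minimized over the combination weights. The data, starting weights, and optimizer are illustrative assumptions; the paper's exact RAUC formulation and its kernelized (RBF) version are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

# Simulated biomarkers: two markers, with positives shifted relative to negatives.
n_pos, n_neg = 80, 120
X_pos = rng.normal(loc=[1.0, 0.5], size=(n_pos, 2))
X_neg = rng.normal(loc=[0.0, 0.0], size=(n_neg, 2))

def ramp(z):
    """Ramp (truncated hinge) loss: 1 for z <= 0, 0 for z >= 1, linear between."""
    return np.clip(1.0 - z, 0.0, 1.0)

def ramp_auc_loss(w):
    """Empirical pairwise surrogate for 1 - AUC of the linear score w'x."""
    s_pos, s_neg = X_pos @ w, X_neg @ w
    margins = s_pos[:, None] - s_neg[None, :]   # all positive-negative pairs
    return ramp(margins).mean()

# Derivative-free minimization of the surrogate over the combination weights,
# since the ramp loss is non-smooth.
res = minimize(ramp_auc_loss, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
w_hat = res.x

scores = np.concatenate([X_pos @ w_hat, X_neg @ w_hat])
labels = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
print("weights:", np.round(w_hat, 3), " AUC:", round(roc_auc_score(labels, scores), 3))
```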


Volume 24, Issue 2 (3-2020)
Abstract

A sequence of functions (curves) collected over time is called a functional time series. Functional time series analysis is a popular research area, since such data are frequently observed in practice. Its main purpose is to describe and predict the random mechanism that generated the data. To do so, the functional time series must be decomposed into trend, periodic, and error components, which first requires identifying these components. Hence, in this study, a nonparametric method based on record functions is presented for detecting and testing the existence of a trend in a functional time series. The method is then applied to a real functional time series: its effectiveness in determining the trend is investigated on a real data set of Australian fertility rates.
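
The sketch below illustrates the record-based idea in the simpler scalar case: under an i.i.d. (trend-free) series the expected number of upper records among n observations is the harmonic number sum of 1/i, so an excess of records points to an increasing trend. The series are simulated, and the paper's functional record construction and formal test are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

def count_upper_records(x):
    """Number of upper records: observations larger than everything before them."""
    running_max = np.maximum.accumulate(x)
    return 1 + np.sum(x[1:] > running_max[:-1])

n = 100
expected_iid = np.sum(1.0 / np.arange(1, n + 1))   # harmonic number H_n

no_trend = rng.normal(size=n)
with_trend = rng.normal(size=n) + 0.03 * np.arange(n)

print("expected records under i.i.d.:", round(expected_iid, 2))
print("records, no trend  :", count_upper_records(no_trend))
print("records, with trend:", count_upper_records(with_trend))
```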


Volume 24, Issue 2 (3-2020)
Abstract

One of the most common censoring methods is progressive type-II censoring. In this method, a total of $n$ units are placed on test, and at the time of failure of each unit, some of the remaining units are randomly removed. This continues until $m$ failure times have been recorded, where $m$ is a predetermined value, and then the experiment ends. The problem of determining the optimal censoring scheme in progressive type-II censoring has been studied under various criteria. Another issue in progressive type-II censoring is the choice of the sample size $n$ at the start of the experiment. In this paper, assuming the Pareto distribution for the data, we determine the optimal sample size $n_{opt}$ as well as the optimal censoring scheme by means of the Fisher information. Finally, to evaluate the results, numerical calculations are presented using the R software.
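
A minimal generator of a progressive type-II censored sample is sketched below (in Python rather than R): after each observed failure, the prescribed number of surviving units is withdrawn at random, and the lifetimes are drawn from a Pareto distribution by inversion. The removal scheme, parameters, and sample size are illustrative assumptions; the paper's Fisher-information computation and the optimization over $n$ and the scheme are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def progressive_type2_sample(lifetimes, scheme):
    """Progressive type-II censoring: after the i-th observed failure,
    remove scheme[i] surviving units at random; return the m failure times."""
    remaining = list(lifetimes)
    observed = []
    for r in scheme:
        t = min(remaining)            # next failure among units still on test
        observed.append(t)
        remaining.remove(t)
        if r:                          # withdraw r surviving units at random
            drop = set(rng.choice(len(remaining), size=r, replace=False))
            remaining = [u for i, u in enumerate(remaining) if i not in drop]
    return np.array(observed)

# Pareto(alpha, sigma) lifetimes by inversion: X = sigma * U**(-1/alpha), U ~ Uniform(0,1).
alpha, sigma, n, m = 2.5, 1.0, 20, 8
scheme = [2, 1, 1, 2, 0, 0, 2, 4]      # removals after each failure; sum(scheme) + m = n
lifetimes = sigma * rng.uniform(size=n) ** (-1.0 / alpha)

print(np.round(progressive_type2_sample(lifetimes, scheme), 3))
```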


Samaneh Beheshtizadeh, Hamidreza Navvabpour,
Volume 25, Issue 1 (1-2021)
Abstract

Evidence-based management and development planning rely on official statistics. Several obstacles can make a single-mode survey infeasible: the sampling frame, time, budget, and the measurement accuracy of each mode. Because of these factors, a single-mode survey cannot always be used, so other data collection methods are needed to overcome these obstacles. Such a design is called a mixed-mode survey, which combines several modes. In this article, we show that mixed-mode surveys can produce more accurate official statistics than single-mode surveys.
Miss Zahra Eslami, Miss Mina Norouzirad, Mr Mohammad Arashi,
Volume 25, Issue 1 (1-2021)
Abstract

Proportional hazards Cox regression models play a key role in analyzing censored survival data. We use penalized methods in high-dimensional scenarios to achieve more efficient models. This article reviews penalized Cox regression for several frequently used penalty functions. Analysis of the medical data set "mgus2" confirms that penalized Cox regression performs better than the ordinary Cox regression model. Among all penalty functions, LASSO provides the best fit.
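
A minimal sketch of LASSO-penalized Cox regression is given below, assuming the Python lifelines package (whose CoxPHFitter accepts penalizer and l1_ratio arguments in recent versions); since the mgus2 data set ships with R's survival package and is not loaded here, a small simulated survival data set stands in for it.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines package is installed

rng = np.random.default_rng(8)

# Simulated stand-in for a survival data set such as mgus2: two informative
# covariates, three noise covariates, exponential event times, random censoring.
n = 300
X = rng.normal(size=(n, 5))
hazard = np.exp(0.8 * X[:, 0] - 0.5 * X[:, 1])
event_time = rng.exponential(1.0 / hazard)
censor_time = rng.exponential(2.0, size=n)

df = pd.DataFrame(X, columns=[f"x{i}" for i in range(5)])
df["time"] = np.minimum(event_time, censor_time)
df["event"] = (event_time <= censor_time).astype(int)

# LASSO-penalized Cox regression: l1_ratio=1 makes the penalty pure L1,
# shrinking the noise coefficients toward zero.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")
print(cph.params_.round(3))
```
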
Reza Cheraghi, Dr. Reza Hashemi,
Volume 25, Issue 1 (1-2021)
Abstract

Varying coefficient models are among the most important tools for discovering dynamic patterns when a fixed pattern does not fit the data adequately because of diverse temporal or local patterns. These models are natural extensions of classical parametric models that have achieved great popularity in data analysis thanks to their good interpretability, and their high flexibility and interpretability have led to their use in many real applications. In this paper, after a brief review of varying coefficient models, we estimate the parameters using kernel-function and cubic spline methods, and then confidence bands and hypothesis testing are investigated. Finally, using real data on Iran's inflation rate from 1989 to 2017, we show the application and capabilities of the varying coefficient model in interpreting the results. The main challenge in this application is that panel and longitudinal models, and even time series models with heterogeneous variances such as ARCH and GARCH and their derivatives, did not fit this data set adequately, which justifies the use of varying coefficient models.
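
A minimal kernel-based sketch is given below: for the simple varying coefficient model y = beta(t) x + error, beta(t0) is estimated at each point by kernel-weighted least squares with a Gaussian kernel. The simulated data, bandwidth, and absence of an intercept are illustrative assumptions; the paper's spline estimator, confidence bands, and tests are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

# Varying coefficient model y_i = beta(t_i) * x_i + error, with a smooth beta(t).
n = 400
t = np.sort(rng.uniform(size=n))
x = rng.normal(size=n)
beta_true = np.sin(2 * np.pi * t) + 1.0
y = beta_true * x + rng.normal(scale=0.3, size=n)

def local_beta(t0, h=0.08):
    """Kernel-weighted least squares estimate of beta(t0):
    minimize sum_i K((t_i - t0)/h) * (y_i - b * x_i)**2 over b."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x)

grid = np.linspace(0.1, 0.9, 9)
beta_hat = np.array([local_beta(t0) for t0 in grid])
print(np.round(np.column_stack([grid, beta_hat, np.sin(2 * np.pi * grid) + 1.0]), 2))
```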


Mohammad Mollanoori, Habib Naderi, Hamed Ahmadzadeh, Salman Izadkhah,
Volume 25, Issue 1 (1-2021)
Abstract

Many populations encountered in survival analysis are not homogeneous: individuals differ in their susceptibility to causes of death, their response to treatment, and the influence of various risk factors. Ignoring this heterogeneity can lead to misleading conclusions. To deal with these problems, the proportional hazards frailty model was introduced. In this paper, the frailty model is formulated as the product of a frailty random variable and the baseline hazard rate. We examine the fit of the frailty model to right-censored data in the presence of explanatory (observable) variables and, as a practical example, fit the frailty model to the data with Weibull and exponential baseline distributions in the likelihood function; the model parameters are estimated and the fits of the models are compared using different criteria.
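
The sketch below simulates right-censored data from a frailty model with a gamma frailty and a Weibull baseline hazard by inverting the conditional survival function; the frailty distribution, parameter values, and censoring time are illustrative assumptions, and the likelihood-based fitting and model comparison described above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)

# Frailty model: conditional hazard h(t | Z) = Z * h0(t), with a gamma frailty Z
# (mean 1, variance 1/k) and a Weibull baseline with shape a and scale lam,
# so the cumulative baseline hazard is H0(t) = (t / lam) ** a.
n, a, lam, k = 2000, 1.5, 5.0, 2.0
Z = rng.gamma(shape=k, scale=1.0 / k, size=n)

# Inversion: S(t | Z) = exp(-Z * H0(t))  =>  T = lam * (E / Z) ** (1 / a),
# where E is a standard exponential variable.
E = rng.exponential(size=n)
T = lam * (E / Z) ** (1.0 / a)

# Right censoring at a fixed time, as in the right-censored data described above.
c = 8.0
time = np.minimum(T, c)
event = (T <= c).astype(int)

print("censoring rate:", round(1 - event.mean(), 3))
print("frailty variance 1/k =", 1.0 / k, "(heterogeneity beyond the Weibull baseline)")
```
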
Mr. Mohammad Hossein Poursaeed,
Volume 25, Issue 1 (1-2021)
Abstract

In this study, interval estimations are proposed for functions of the parameter of the exponential lifetime distribution when interval censoring is used. The optimal monitoring time and simulation studies are examined, as well as the applicability of the proposed methods.
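
As a rough illustration of inference from interval-censored exponential lifetimes, the sketch below records each lifetime only as the inspection interval (L, R] containing it and maximizes the corresponding likelihood numerically for the rate; the inspection grid and parameter values are assumptions, and the paper's particular interval estimators and optimal monitoring time are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(11)

# Interval censoring: each exponential lifetime T_i is only known to fall in
# (L_i, R_i], e.g. between two successive inspection (monitoring) times.
true_rate, n = 0.4, 200
T = rng.exponential(1.0 / true_rate, size=n)
inspections = np.arange(0.0, 50.0, 1.0)            # equally spaced monitoring times
L = inspections[np.searchsorted(inspections, T, side="right") - 1]
R = L + 1.0

def neg_log_lik(rate):
    """-log L(rate) = -sum log[ exp(-rate*L_i) - exp(-rate*R_i) ]."""
    return -np.sum(np.log(np.exp(-rate * L) - np.exp(-rate * R)))

fit = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded")
print("true rate:", true_rate, " interval-censored MLE:", round(fit.x, 3))
```
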
Omid Karimi, Fatemeh Hosseini,
Volume 25, Issue 1 (1-2021)
Abstract

Spatial count data arise in many sciences, such as environmental science, meteorology, geology, and medicine. Spatial generalized linear models based on the Poisson (Poisson-lognormal spatial model) and binomial (binomial-logitnormal spatial model) distributions are often used to analyze discrete count data in which spatial correlation is observed. The likelihood function of these models is analytically intractable and computationally demanding. A Bayesian approach using Markov chain Monte Carlo algorithms can be used to fit these models, although low acceptance rates and long runtimes are common problems; an appropriate solution is to use the Hamiltonian (hybrid) Monte Carlo algorithm within the Bayesian approach. In this paper, the Hamiltonian (hybrid) Monte Carlo method for the Bayesian analysis of spatial count models is studied and applied to air pollution data from Tehran. The common Markov chain Monte Carlo algorithms (Gibbs and Metropolis-Hastings) and the Langevin-Hastings algorithm are also used to apply a fully Bayesian approach to the data modeling. Finally, an appropriate approach to data analysis and prediction at all points of the city is introduced, together with diagnostic criteria.
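
A compact Hamiltonian Monte Carlo update for the latent Gaussian field of a Poisson-lognormal spatial model is sketched below on simulated locations (not the Tehran air pollution data); the covariance parameters are held fixed, the step size and trajectory length are illustrative choices, and the fully Bayesian treatment of all model parameters is omitted.

```python
import numpy as np

rng = np.random.default_rng(12)

# Poisson-lognormal spatial model: y_i ~ Poisson(exp(s_i)), s ~ N(mu, Sigma) with
# an exponential covariance. HMC samples the latent field s given fixed mu, Sigma.
n = 40
coords = rng.uniform(size=(n, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
sigma2, phi, mu = 1.0, 0.3, 1.0
Sigma = sigma2 * np.exp(-d / phi)
Sigma_inv = np.linalg.inv(Sigma + 1e-8 * np.eye(n))

s_true = rng.multivariate_normal(np.full(n, mu), Sigma)
y = rng.poisson(np.exp(s_true))

def log_post(s):
    r = s - mu
    return np.sum(y * s - np.exp(s)) - 0.5 * r @ Sigma_inv @ r

def grad_log_post(s):
    return y - np.exp(s) - Sigma_inv @ (s - mu)

def hmc_step(s, eps=0.02, n_leap=30):
    """One Hamiltonian Monte Carlo update of the latent field (leapfrog integrator)."""
    p = rng.normal(size=s.shape)
    s_new, p_new = s.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(s_new)
    for _ in range(n_leap - 1):
        s_new += eps * p_new
        p_new += eps * grad_log_post(s_new)
    s_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(s_new)
    log_accept = (log_post(s_new) - 0.5 * p_new @ p_new) - (log_post(s) - 0.5 * p @ p)
    return (s_new, True) if np.log(rng.uniform()) < log_accept else (s, False)

s, accepted = np.full(n, mu), 0
for _ in range(500):
    s, ok = hmc_step(s)
    accepted += ok
print("acceptance rate:", accepted / 500,
      " corr(s_draw, s_true):", round(np.corrcoef(s, s_true)[0, 1], 2))
```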


Fatemeh Hossini, Omid Karimi,
Volume 25, Issue 1 (1-2021)
Abstract

In spatial generalized linear mixed models, spatial correlation is modeled by adding normal latent variables to the model. In these models, because of the non-Gaussian spatial response and the presence of latent variables, the likelihood function usually has no closed form, so the maximum likelihood approach is very challenging. The main purpose of this paper is to introduce two new algorithms for maximum likelihood estimation of the parameters and to compare them, in terms of speed and accuracy, with existing algorithms. The presented algorithms are applied in a simulation study and their performances are compared.


Mohsen Hajitabar Firuzjayee, Maryam Talebi,
Volume 25, Issue 1 (1-2021)
Abstract

The purpose of the present study was to examine the factor validity and reliability of the questionnaire of students' perception of assessment and evaluation tasks [19]. To this end, the questionnaire was administered to 362 students of Mazandaran University selected by cluster sampling. Cronbach's alpha was used to check the reliability of the questionnaire, and confirmatory factor analysis was used to determine its factor validity. Following [15] and [14], the results showed that the questionnaire had acceptable internal consistency, with Cronbach's alpha for the subscales ranging from 0.71 to 0.78. The results of the confirmatory factor analysis also confirm that the structure of the questionnaire fits the data acceptably, and all goodness-of-fit indices support the model. Therefore, the questionnaire can be a useful tool for evaluating students' perceptions of assessment and evaluation tasks.
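
The sketch below shows the standard computation of Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), on simulated Likert-type responses; the actual questionnaire items are not available, so the data and the number of items are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(13)

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Illustrative Likert-type responses: 362 respondents, 6 items of one subscale,
# driven by a common latent trait so the items are internally consistent.
n, k = 362, 6
trait = rng.normal(size=n)
items = np.clip(np.round(3 + trait[:, None] + rng.normal(scale=0.9, size=(n, k))), 1, 5)

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```
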
Ms Monireh Maanavi, Dr Mahdi Roozbeh,
Volume 25, Issue 1 (1-2021)
Abstract

With the evolution of science, knowledge, and technology, new and precise methods for measuring, collecting, and recording information have been developed, resulting in the emergence and growth of high-dimensional data. A high-dimensional data set, i.e., a data set in which the number of explanatory variables is much larger than the number of observations, cannot easily be analyzed by traditional and classical methods such as ordinary least squares, and its interpretation is very complex. Although ordinary least-squares estimation is the best estimation method in classical regression analysis when the essential assumptions are met, it is not applicable to high-dimensional data, and in this situation modern methods must be applied. In this research, the drawbacks of classical methods in the analysis of high-dimensional data are first discussed, and then modern and common regression approaches for high-dimensional data, such as principal component analysis and penalized methods, are introduced and explained. Finally, a simulation study and a real-world data analysis are performed to apply and compare these methods on high-dimensional data.
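
The sketch below illustrates the point on a small simulated p >> n problem: an ordinary least-squares fit (scikit-learn returns the minimum-norm solution) overfits, whereas ridge and LASSO with cross-validated penalties predict much better. The dimensions, signal strength, and penalty grids are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV, LassoCV
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(14)

# High-dimensional setting: far more explanatory variables than observations.
n, p, p_active = 60, 300, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:p_active] = rng.uniform(1, 2, size=p_active)   # only a few true signals
y = X @ beta + rng.normal(scale=1.0, size=n)

X_test = rng.normal(size=(200, p))
y_test = X_test @ beta + rng.normal(scale=1.0, size=200)

models = {
    "OLS (minimum-norm fit, overfits)": LinearRegression(),
    "Ridge": RidgeCV(alphas=np.logspace(-2, 3, 30)),
    "Lasso": LassoCV(cv=5, random_state=0),
}
for name, model in models.items():
    model.fit(X, y)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name:35s} test MSE = {mse:6.2f}")
```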


Alireza Rezaee, Mojtaba Ganjali, Ehsan Bahrami,
Volume 25, Issue 1 (1-2021)
Abstract

Nonresponse is a source of error in survey results, and national statistical organizations are always looking for ways to control and reduce it. Predicting the nonresponding sampling units before conducting the survey is one solution that can substantially help in reducing and treating survey nonresponse. Recent advances in technology and the facilitation of complex calculations have made it possible to apply machine learning methods, such as regression and classification trees or support vector machines, to many problems, including predicting the nonresponse of sampling units in official statistics. In this article, while reviewing these methods, we use them to predict the nonresponding sampling units in an establishment survey, and we show that a combination of these methods predicts nonresponse more accurately than any single method.
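
A minimal sketch of this comparison is given below with scikit-learn: a classification tree, a support vector machine, and their soft-voting combination are trained on simulated establishment features with a nonlinear nonresponse mechanism; the features, model settings, and combination rule are illustrative assumptions, and the actual survey frame variables are not available.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(15)

# Simulated establishment features (e.g. size, activity code, past contact
# history) and a nonresponse indicator with a nonlinear dependence on them.
n = 1000
X = rng.normal(size=(n, 4))
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = nonresponding unit

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
combo = VotingClassifier([("tree", tree), ("svm", svm)], voting="soft")

for name, model in [("classification tree", tree), ("support vector machine", svm),
                    ("combined (soft voting)", combo)]:
    model.fit(X_train, y_train)
    print(f"{name:25s} accuracy = {model.score(X_test, y_test):.3f}")
```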

Mojtaba Rostami, Shahram Fattahi,
Volume 25, Issue 1 (1-2021)
Abstract

Economic theories seek a scientific explanation or prediction of economic phenomena using a set of axioms, defined terms, and theorems. Mathematically explicit economic models are one form of such theories. Because of the unknown structure of each model, the presence of measurement error in economic quantities, and the failure of the ceteris paribus assumption, the synthesis of any economic theory requires probabilistic and statistical modeling. Therefore, understanding current modeling practice and the importance of its proper use in economics requires economists to have an accurate knowledge of statistical modeling. The present study seeks to correct the view that statistical methods play only a secondary role in economic theories: although the purpose of statistical models is to test the claims of theories empirically, the appropriate method of economic modeling depends on the correct use of statistical methods and probability models at the stage of theory building.

