:: Search published articles ::
Showing 201 results for Type of Study: Research

Ali Bahami, Ebrahim Reyhani, Ehsan Bahami,
Volume 24, Issue 1 (9-2019)
Abstract

This descriptive survey study assessed eighth-grade students' understanding and misunderstanding of the concept of probability. The statistical population consisted of all eighth-grade boys and girls in Tehran province, from which a sample of 1330 students was randomly selected from different public, gifted, Shahed, and talented schools using stratified random sampling. The students answered 15 questions whose validity was examined by a number of mathematics professors and experienced mathematics teachers; their reliability was confirmed with a Cronbach's alpha coefficient of 0.961. After the descriptive statistical analysis, the students' misunderstandings were identified in seven groups: lack of understanding of rational numbers and their relationship to fractions, lack of understanding of some prerequisite concepts, language problems, using their own methods to calculate probabilities, inability to count all possible outcomes, inappropriate generalization, and inability to understand prerequisite problems.
Dr Fatemeh Hosseini, Dr Omid Karimi, Miss Fatemeh Hamedi,
Volume 24, Issue 1 (9-2019)
Abstract

Tree models represent a new and innovative way of analyzing large data sets by dividing the predictor space into simpler regions. The Bayesian Additive Regression Trees (BART) model, which we explain in this article, uses an ensemble of trees in its structure, since a combination of several trees generally attains higher accuracy than a single tree.

BART is therefore a tree-based, nonparametric model that uses general ensemble methods, and boosting algorithms in particular; in fact, it is an extension of the Classification and Regression Tree (CART) methods, in which a decision tree lies at the core of the procedure.

In this method, regularizing priors are placed on the parameters of the sum-of-trees model, and boosting-type algorithms are then used for the analysis. In this paper, the Bayesian Additive Regression Trees model is first introduced and then applied to the survival analysis of lung cancer patients.
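
For reference, the standard sum-of-trees formulation of BART (as in Chipman, George and McCulloch, 2010), which the abstract appears to refer to, writes the response as $Y = \sum_{j=1}^{m} g(x; T_j, M_j) + \varepsilon$, $\varepsilon \sim N(0, \sigma^2)$, where $T_j$ is the $j$-th tree structure, $M_j$ is the set of leaf parameters of that tree, and regularizing priors are placed on $(T_j, M_j)$ and $\sigma$ so that each tree remains a weak learner.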


Saeed Zhlzadeh, Sima Zamani,
Volume 24, Issue 1 (9-2019)
Abstract

Consider a coherent system consisting of independent or dependent components, and assume that the components are randomly chosen from two different batches, where the component lifetimes of the first batch are larger than those of the second in some stochastic order sense. In this paper, using different stochastic orders, we compare the reliability of such systems and show that the reliability of the system increases as the random number of components chosen from the first batch increases in different stochastic orders. We use a copula function to describe the dependence structure between the component lifetimes.
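
As an illustration only (not the paper's order-theoretic proofs), the following sketch estimates by Monte Carlo the reliability of a 2-out-of-3 system whose components come from two exponential batches, with dependence induced by a Gaussian copula; the batch rates, correlation, and mission time are hypothetical choices, not values from the paper.

import numpy as np
from scipy.stats import norm

def system_reliability(k_from_batch1, t=1.0, rate1=0.5, rate2=2.0,
                       rho=0.3, n_sim=100_000, seed=1):
    # Monte Carlo estimate of P(2-out-of-3 system survives past time t).
    # k_from_batch1 components come from batch 1 (rate1 < rate2, so batch-1
    # lifetimes are stochastically larger); dependence between the three
    # component lifetimes is induced by a Gaussian copula with correlation rho.
    rng = np.random.default_rng(seed)
    corr = np.full((3, 3), rho) + (1.0 - rho) * np.eye(3)
    z = rng.multivariate_normal(np.zeros(3), corr, size=n_sim)
    u = norm.cdf(z)                          # Gaussian copula: uniform margins
    rates = np.array([rate1] * k_from_batch1 + [rate2] * (3 - k_from_batch1))
    lifetimes = -np.log(1.0 - u) / rates     # exponential margins by inversion
    working_at_t = (lifetimes > t).sum(axis=1)
    return (working_at_t >= 2).mean()        # system works if at least 2 components work

for k in range(4):
    print(k, "components from batch 1 ->", round(system_reliability(k), 4))

Running the loop, the estimated reliability should increase with the number of batch-1 components, in line with the kind of ordering result the abstract describes.
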
Ma , ,
Volume 24, Issue 1 (9-2019)
Abstract

One of the most common reasons for corneal transplantation in Iran is keratoconus. Keratoconus is a non-inflammatory condition that usually affects the cornea of both eyes. Since a portion of corneal transplant recipients may never reject the transplanted tissue, a survival analysis with a cure fraction was used to study the factors affecting the survival time in these data.
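
Although the abstract does not specify the exact cure model, the most common choice is the mixture cure model, in which the population survival function is $S_{pop}(t) = \pi + (1 - \pi)\,S_u(t)$, where $\pi$ is the cured fraction (here, patients who will never reject the graft) and $S_u(t)$ is the survival function of the uncured patients.
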
Seyedeh Mona Ehsani Jokandan, Behrouz Fathi Vajargah,
Volume 24, Issue 2 (3-2020)
Abstract

In this paper, the difference between classical regression and fuzzy regression is discussed. In fuzzy regression, both crisp (non-fuzzy) and fuzzy data can be used for modeling, whereas classical regression uses only non-fuzzy data.
The purpose of the study is to investigate possibilistic regression, least-squares fuzzy regression, and a linear least-squares regression method based on fuzzy weight calculation, for non-fuzzy input and fuzzy output using symmetric triangular fuzzy numbers. In addition, reliability, confidence intervals, and a goodness-of-fit criterion are presented for choosing the optimal model.
Finally, through examples of the behavior of the proposed methods, the optimality of the hybrid regression model based on fuzzy linear least squares is shown.
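
As a rough illustration of least-squares fuzzy regression with crisp inputs and symmetric triangular fuzzy outputs (one of the model classes the abstract compares), the sketch below uses one simple variant in which the centers and the spreads each follow their own linear model; the data are invented and this is not the paper's exact method.

import numpy as np

# Crisp inputs x and symmetric triangular fuzzy outputs (center c_i, spread s_i).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
c = np.array([2.1, 3.9, 6.2, 7.8, 10.1])    # observed centers
s = np.array([0.4, 0.5, 0.7, 0.6, 0.9])     # observed spreads (half-widths)

X = np.column_stack([np.ones_like(x), x])

# With symmetric triangular numbers and unconstrained linear models for the
# centers and spreads, minimizing Diamond's squared distance (squared differences
# of left endpoints, modes and right endpoints) separates into two ordinary
# least-squares problems: one for the centers, one for the spreads.
beta_center, *_ = np.linalg.lstsq(X, c, rcond=None)
beta_spread, *_ = np.linalg.lstsq(X, s, rcond=None)

c_hat = X @ beta_center
s_hat = np.maximum(X @ beta_spread, 0.0)    # keep fitted spreads non-negative

print("center model :", beta_center.round(3))
print("spread model :", beta_spread.round(3))
print("fitted fuzzy outputs:", list(zip(c_hat.round(2), s_hat.round(2))))
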
, ,
Volume 24, Issue 2 (3-2020)
Abstract

The minimum density power divergence method provides a robust estimate when the data set includes a number of outliers.

In this study, we introduce and use the robust minimum density power divergence estimator to estimate the parameters of the linear regression model, and then, with some numerical examples of the linear regression model, we show the robustness of this estimator when the data set includes a number of outliers.
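
As a hedged illustration of the estimator described (not necessarily the authors' implementation), the sketch below computes the minimum density power divergence estimator for a normal linear regression model by numerically minimizing the empirical density power divergence objective of Basu et al. (1998); the data, the tuning constant alpha = 0.5, and the optimizer are assumptions made for the example.

import numpy as np
from scipy.optimize import minimize

# Simulated regression data with a few gross outliers (illustrative only).
rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)
y[:5] += 30.0                                    # contaminate five observations
X = np.column_stack([np.ones(n), x])

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares

def dpd_objective(params, alpha=0.5):
    # Empirical density power divergence for the normal linear model:
    #   mean_i [ c(sigma) - (1 + 1/alpha) * f_theta(y_i)^alpha ],
    # where c(sigma) = (2*pi*sigma^2)^(-alpha/2) / sqrt(1 + alpha) is the
    # integral of f_theta^(1+alpha).
    beta, sigma = params[:-1], np.exp(params[-1])
    resid = y - X @ beta
    f = np.exp(-0.5 * (resid / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    c = (2 * np.pi * sigma ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return np.mean(c - (1 + 1 / alpha) * f ** alpha)

start = np.append(beta_ls, np.log(np.std(y - X @ beta_ls)))
fit = minimize(dpd_objective, start, method="Nelder-Mead")

print("least squares :", beta_ls.round(2))       # pulled towards the outliers
print("MDPDE (a=0.5) :", fit.x[:2].round(2))     # typically close to the true (1, 2)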


, ,
Volume 24, Issue 2 (3-2020)
Abstract

The Kumaraswamy distribution is a two-parameter distribution on the interval (0,1) that is very similar to the beta distribution. This distribution is applicable to many natural phenomena whose outcomes have lower and upper bounds, such as the proportion of people in a society who consume a certain product in a given interval.

In this paper, we introduce the family of Kumaraswamy-G distributions and derive its hazard rate function, reversed hazard rate function, mean residual life function, and mean past lifetime function, and study the behavior of each of them. We also investigate stochastic ordering within the family of Kumaraswamy-G distributions. Finally, in a practical example, we analyze the suitability of the Kumaraswamy distribution for real data.
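
For concreteness, the two-parameter Kumaraswamy distribution with shape parameters $a, b > 0$ has cdf $F(x) = 1 - (1 - x^{a})^{b}$ and pdf $f(x) = a b\, x^{a-1} (1 - x^{a})^{b-1}$ for $0 < x < 1$, so its hazard rate is $h(x) = f(x)/(1 - F(x)) = a b\, x^{a-1}/(1 - x^{a})$; the Kumaraswamy-G family studied in the paper is obtained by replacing $x$ with a baseline cdf $G(x)$.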


, , ,
Volume 24, Issue 2 (3-2020)
Abstract

In the analysis of Bernoulli variables, investigating their dependence is of prime importance. In this paper, the Markov logarithmic series distribution is introduced by imposing a first-order dependence among the Bernoulli variables. To estimate the parameters of this distribution, the maximum likelihood, moment, Bayesian, and a new method called the expected Bayesian (E-Bayesian) method are employed. Then, using a simulation study, it is shown that the expected Bayesian estimator outperforms the other estimators.
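
Although the abstract does not give details, the expected Bayesian (E-Bayesian) estimator is usually defined as the Bayes estimate averaged over a hyperprior on the hyperparameters: $\hat{\theta}_{EB} = \int_{D} \hat{\theta}_{B}(a, b)\, \pi(a, b)\, \mathrm{d}a\, \mathrm{d}b$, where $\hat{\theta}_{B}(a, b)$ is the Bayes estimator of $\theta$ for fixed hyperparameters $(a, b)$ and $\pi(a, b)$ is a prior density on the hyperparameter region $D$.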


, , ,
Volume 24, Issue 2 (3-2020)
Abstract

Sometimes, in practice, data are functions of another variable; such data are called functional data. If the scalar response variable is categorical or discrete and the covariates are functional, a generalized functional linear model is used to analyze this type of data. In this paper, a truncated generalized functional linear model is studied, and a maximum likelihood approach is used to estimate the model parameters. Finally, the proposed model and methods are implemented in a simulation study and two practical examples.
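
In its usual form, a generalized functional linear model relates a scalar response $Y_i$ to a functional covariate $X_i(t)$, $t \in [0, T]$, through $g\big(E[Y_i]\big) = \alpha + \int_{0}^{T} X_i(t)\, \beta(t)\, \mathrm{d}t$, where $g$ is a link function; in a truncated version the coefficient function is represented by a truncated basis expansion, $\beta(t) \approx \sum_{k=1}^{K} b_k \phi_k(t)$, and the finitely many coefficients $b_1, \dots, b_K$ are estimated by maximum likelihood (the exact truncation used in the paper is not stated in the abstract).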


,
Volume 24, Issue 2 (3-2020)
Abstract

Unfortunately, in recent years, declining educational motivation among students at various levels of education has become increasingly widespread and has affected many university educational units in the country; this can have devastating and irreparable effects in the near future. In this research, using statistical analysis of questionnaire data, the social factors affecting the educational motivation of Persian Gulf University of Bushehr students are investigated, and the relationship between students' educational motivation and variables such as job status, income of graduates, gender, marital status, family income, the content presented in textbooks, university educational facilities, and weather conditions is studied. An unequal two-stage cluster sampling method is used to collect the sample data and the required information, and multivariate factor analysis is used to examine the internal correlations of the variables and to find the main factors by explaining the variance of the variables through the factors. The effect of these variables on students' educational motivation is also studied.


, ,
Volume 24, Issue 2 (3-2020)
Abstract

The analysis of discrete mixed responses is an important statistical issue in various sciences. Ordinal and overdispersed binomial variables are discrete; overdispersed binomial data are a sum of correlated Bernoulli experiments with equal success probabilities. In this paper, a joint model with random effects is proposed for analyzing mixed overdispersed binomial and ordinal longitudinal responses. In this model, we assume that the overdispersed binomial response variable follows a Beta-Binomial distribution, and we use a latent variable approach for modeling the ordinal response variable. The model parameters are estimated via the maximum likelihood method, and the estimates are evaluated in a simulation study via the Monte Carlo method. Finally, an application of the proposed model to real data is presented.
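
For reference, the Beta-Binomial distribution assumed for the overdispersed binomial response has probability mass function $P(Y = y) = \binom{n}{y} \frac{B(y + \alpha,\, n - y + \beta)}{B(\alpha, \beta)}$, $y = 0, 1, \dots, n$, which arises by giving each cluster of Bernoulli trials a common success probability drawn from a Beta$(\alpha, \beta)$ distribution; its variance therefore exceeds that of a binomial with the same mean.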


,
Volume 24, Issue 2 (3-2020)
Abstract

Life testing is often very time-consuming; therefore, engineers and statisticians look for approaches to reduce the running time. A recommended method for reducing the time to failure is to increase the stress level on the test units so that they fail earlier than under normal operating conditions. Such approaches are called accelerated life tests. One of the most common tests is the step-stress accelerated life test. In this procedure, the stress applied to the units under test is increased step by step at predetermined times. The most important aspect of the step-stress model is the optimization of the test design: to optimize the test plan, the best time to increase the stress level must be chosen. In this paper, step-stress testing is first described, and the test is then applied to the exponential lifetime distribution. Since life data are often incomplete, the model is applied to type I censored data. By minimizing the asymptotic variance of the maximum likelihood estimator of the reliability at time $\xi$, the optimal test plan is obtained. Finally, simulation studies and a real data set are discussed to illustrate the results. A sensitivity analysis shows that the proposed optimum plan is robust.
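
For a simple (two-level) step-stress test with exponential lifetimes, a standard way to link the stress levels is the cumulative exposure model: if the stress is raised at the pre-fixed time $\tau$ and $\theta_1, \theta_2$ are the mean lifetimes under the two stress levels, the lifetime cdf is $G(t) = 1 - \exp(-t/\theta_1)$ for $0 \le t < \tau$ and $G(t) = 1 - \exp\{-(t - \tau)/\theta_2 - \tau/\theta_1\}$ for $t \ge \tau$; with type I censoring the likelihood is built from this cdf, and the optimal $\tau$ minimizes the asymptotic variance of the maximum likelihood estimator of the reliability at the target time $\xi$. (This is the standard formulation; the abstract does not spell out the exact parameterization used.)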


, , ,
Volume 24, Issue 2 (3-2020)
Abstract

A sequence of functions (curves) collected over time is called a functional time series. Functional time series analysis is one of the popular research areas in which statistics of such data are frequently studied. The main purpose of functional time series analysis is to predict and describe the random mechanism that generated the data. To do so, the functional time series needs to be decomposed into trend, periodic, and error components, and these components must be identified beforehand. Hence, in this study, a non-parametric method based on records is presented for detecting and testing the existence of a trend in a functional time series. The method is then applied to a real functional time series: its effectiveness in determining the trend is investigated on a set of real data on fertility rates in Australia.
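
Although the abstract omits the details, record-based trend detection typically rests on the following fact: calling $X_j$ an upper record if $X_j > \max\{X_1, \dots, X_{j-1}\}$, in an i.i.d. (trend-free) sequence the probability that the $j$-th observation is a record is $1/j$, so the expected number of records among $n$ observations is $\sum_{j=1}^{n} 1/j \approx \ln n$; observing markedly more (or fewer) records than this signals an increasing (or decreasing) trend, and the same idea can be applied pointwise or functionally to curves.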


, , , ,
Volume 24, Issue 2 (3-2020)
Abstract

The area under the ROC curve (AUC) is a common index for evaluating the classification ability of biomarkers. In practice, a single biomarker has limited classification ability, so to improve classification performance we are interested in combining biomarkers linearly and nonlinearly. In this study, while introducing various types of loss functions, the Ramp AUC (RAUC) method and some of its features are introduced as a statistical model based on the AUC index. The aim of this method is to combine biomarkers linearly or nonlinearly to improve their classification performance by minimizing the empirical loss function based on the ramp AUC loss. As an application, the data of 378 diabetic patients referred to the Ardabil and Tabriz diabetes centers in 1393-1394 were used. The RAUC method was fitted to classify diabetic patients in terms of functional limitation, based on demographic and clinical biomarkers. The model was validated using a training and test split. The results in the test data set showed that the AUC for classifying patients according to functional limitation was 0.81 with a linear kernel of the biomarkers and 1.00 with a radial basis function (RBF) kernel. These results indicate a strong nonlinear pattern in the data, so that the nonlinear combination of the biomarkers had higher classification performance than the linear combination.
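
For context, with a score function $s(x)$ the empirical AUC is the proportion of correctly ordered case/control pairs, $\widehat{\mathrm{AUC}} = \frac{1}{n_1 n_0} \sum_{i: y_i = 1} \sum_{j: y_j = 0} I\big(s(x_i) > s(x_j)\big)$; since the indicator is discontinuous, ramp-type AUC methods replace it with the bounded ramp loss $R(z) = \min\big(1, \max(0, 1 - z)\big)$ applied to the pairwise differences $z = s(x_i) - s(x_j)$, giving a surrogate objective that can be minimized with linear or RBF kernels (the exact scaling used by the authors may differ).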


,
Volume 24, Issue 2 (3-2020)
Abstract

One of the most common censoring methods is progressive type-II censoring. In this method, a total of $n$ units are placed on test, and at the time of each failure some of the remaining units are randomly removed. This continues until $m$ failure times have been recorded, where $m$ is a pre-determined value, and the experiment then ends. The problem of determining the optimal censoring scheme under progressive type-II censoring has been studied with respect to various criteria. Another issue in progressive type-II censoring is the choice of the sample size $n$ at the start of the experiment. In this paper, assuming the Pareto distribution for the data, we determine the optimal sample size, $n_{opt}$, as well as the optimal censoring scheme, by means of the Fisher information. Finally, to evaluate the results, numerical calculations are presented using the R software.
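
For reference, under a progressive type-II censoring scheme $(R_1, \dots, R_m)$ with $\sum_{i=1}^{m} R_i = n - m$, the likelihood based on the observed failure times $x_{1:m:n} < \dots < x_{m:m:n}$ is $L(\theta) \propto \prod_{i=1}^{m} f(x_{i:m:n}; \theta)\, \big[1 - F(x_{i:m:n}; \theta)\big]^{R_i}$, and the Fisher information computed from this likelihood (here with $F$ the Pareto cdf) is the quantity used to choose both $n_{opt}$ and the optimal censoring plan.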


Mohammad Mollanoori, Habib Naderi, Hamed Ahmadzadeh, Salman Izadkhah,
Volume 25, Issue 1 (1-2021)
Abstract

Many populations encountered in survival analysis are not homogeneous: individuals differ in their susceptibility to causes of death, their response to treatment, and the influence of various risk factors. Ignoring this heterogeneity can lead to misleading conclusions. To deal with these problems, the proportional hazards frailty model was introduced. In this paper, the frailty model is explained as the product of a frailty random variable and the baseline hazard rate. We examine the fit of the frailty model to right-censored data in the presence of explanatory (observable) variables. In a practical example, we fit the frailty model to the data by considering Weibull and exponential baseline distributions in the likelihood function, estimate the model parameters, and compare the fit of the models with different criteria.
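
In its usual form, the proportional hazards frailty model specifies the hazard of an individual with covariates $x$ and unobserved frailty $z$ as $h(t \mid x, z) = z\, h_0(t)\, \exp(x^{\top}\beta)$, where $h_0(t)$ is the baseline hazard (here Weibull or exponential) and the frailty $Z$ is a positive random variable, commonly gamma distributed with mean $1$ and variance $\theta$, so that $\theta$ measures the unobserved heterogeneity.
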
Mr. Mohammad Hossein Poursaeed,
Volume 25, Issue 1 (1-2021)
Abstract

In this study, interval estimates are proposed for functions of the parameter of exponential lifetimes when interval censoring is used. The optimal monitoring time is examined, and simulation studies illustrate the applicability of the results.
Fatemeh Hossini, Omid Karimi,
Volume 25, Issue 1 (1-2021)
Abstract

In spatial generalized linear mixed models, spatial correlation is introduced by adding normally distributed latent variables to the model. In these models, because of the non-Gaussian spatial response and the presence of latent variables, the likelihood function usually cannot be written in closed form, so the maximum likelihood approach is very challenging. The main purpose of this paper is to introduce two new algorithms for the maximum likelihood estimation of the parameters and to compare them, in terms of speed and accuracy, with existing algorithms. The presented algorithms are applied to a simulation study and their performances are compared.
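
Concretely, in a spatial generalized linear mixed model the responses $Y_i$ at locations $s_i$ are conditionally independent given a latent Gaussian field $S(\cdot)$, with $g\big(E[Y_i \mid S]\big) = x_i^{\top}\beta + S(s_i)$ and $S = (S(s_1), \dots, S(s_n))^{\top} \sim N(0, \Sigma(\phi))$; the likelihood $L(\beta, \phi) = \int \prod_{i} f(y_i \mid S)\, \varphi_{\Sigma(\phi)}(S)\, \mathrm{d}S$ is an $n$-dimensional integral with no closed form, which is why algorithms that approximate or maximize it numerically are needed.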


Mohsen Hajitabar Firuzjayee, Maryam Talebi,
Volume 25, Issue 1 (1-2021)
Abstract

The purpose of the present study was to examine the factor validity and reliability of the questionnaire on students' perception of assessment and evaluation tasks [19]. To this end, the questionnaire was administered to 362 students of Mazandaran University selected by cluster sampling. Cronbach's alpha coefficient was used to check the reliability of the questionnaire, and confirmatory factor analysis was used to determine its factor validity. Consistent with [15] and [14], the results showed that the questionnaire had acceptable internal consistency, with Cronbach's alpha coefficients for the subscales ranging from 0.71 to 0.78. The results of the confirmatory factor analysis also confirm that the structure of the questionnaire has an acceptable fit to the data, and all goodness-of-fit indices support the model. Therefore, the questionnaire can be a useful tool for evaluating students' perceptions of assessment and evaluation tasks.
Ms Monireh Maanavi, Dr Mahdi Roozbeh,
Volume 25, Issue 1 (1-2021)
Abstract

With the evolution of science, knowledge, and technology, new and precise methods for measuring, collecting, and recording information have been developed, which have resulted in the appearance and growth of high-dimensional data. A high-dimensional data set, i.e., a data set in which the number of explanatory variables is much larger than the number of observations, cannot easily be analyzed by traditional and classical methods such as ordinary least squares, and its interpretation is very complex. Although in classical regression analysis the ordinary least-squares estimator is the best estimation method when the essential assumptions are met, it is not applicable to high-dimensional data, and in this setting modern methods are needed. In this research, the drawbacks of classical methods for analyzing high-dimensional data are first discussed, and then modern and common approaches to regression analysis for high-dimensional data, such as principal component analysis and penalized methods, are introduced and explained. Finally, a simulation study and a real-world data analysis are performed to apply and compare the mentioned methods on high-dimensional data.
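
As a small illustration of the issue and of two of the remedies mentioned (penalized regression and principal components), the following sketch contrasts ordinary least squares with lasso, ridge, and principal component regression when the number of predictors exceeds the number of observations; the simulated data and tuning values are arbitrary choices, not the paper's analysis.

import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 50, 200                                  # more variables than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 1, 2]                   # only a few truly active predictors
y = X @ beta + rng.normal(scale=1.0, size=n)

X_new = rng.normal(size=(1000, p))              # independent test data
y_new = X_new @ beta + rng.normal(scale=1.0, size=1000)

models = {
    "OLS (interpolates the training data)": LinearRegression(),
    "Lasso (L1 penalty)": Lasso(alpha=0.1),
    "Ridge (L2 penalty)": Ridge(alpha=10.0),
    "PCR (PCA + OLS)": make_pipeline(PCA(n_components=10), LinearRegression()),
}

for name, model in models.items():
    model.fit(X, y)
    mse = np.mean((model.predict(X_new) - y_new) ** 2)
    print(f"{name:40s} test MSE = {mse:7.2f}")

On such data the penalized and dimension-reduced fits should predict far better out of sample than ordinary least squares, which is the failure of classical methods that the abstract refers to.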


