This Journal is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
:: Search published articles ::
Showing 16 results for Mohammadi

Mehdi Akbarzadeh, Hamid Alavimajd, Yadollah Mehrabi, Maryam Daneshpoor, Anvar Mohammadi,
Volume 3, Issue 2 (3-2010)
Abstract

One of the important problems arising in genetics is determining the locus of a particular gene, in order to map genes and to develop more effective drugs. Genetic linkage analysis is an important stage in this process. The Haseman-Elston method is a quantitative statistical method used by biostatisticians and geneticists for genetic linkage analysis. The original Haseman-Elston method was presented in 1972, and many investigators have since suggested improvements to it. In this article, we review the Haseman-Elston regression method and its extensions from 1972 to 2009, and finally we illustrate the performance of these methods in a practical example.
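The core idea of the original Haseman-Elston regression can be illustrated with a toy simulation (a minimal sketch, not the authors' code; all function names and parameter values below are illustrative assumptions): squared trait differences of sib pairs are regressed on the proportion of alleles shared identical-by-descent (IBD), and a clearly negative slope suggests linkage.

```python
import random

random.seed(1)

def simulate_sib_pairs(n=2000, linked=True):
    """Toy sib-pair data: squared trait difference shrinks as IBD sharing rises."""
    pairs = []
    for _ in range(n):
        pi = random.choice([0.0, 0.5, 1.0])       # proportion of alleles shared IBD
        sd = 2.0 - (1.0 if linked else 0.0) * pi  # linkage narrows the difference
        d = random.gauss(0.0, sd)
        pairs.append((pi, d * d))                 # (IBD share, squared difference)
    return pairs

def slope(pairs):
    """Least-squares slope of squared trait difference on IBD proportion."""
    n = len(pairs)
    mx = sum(p for p, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((p - mx) * (y - my) for p, y in pairs)
    sxx = sum((p - mx) ** 2 for p, _ in pairs)
    return sxy / sxx

print(slope(simulate_sib_pairs(linked=True)))   # clearly negative under linkage
print(slope(simulate_sib_pairs(linked=False)))  # near zero without linkage
```

A formal test would attach a standard error to the slope; the sketch only shows the direction of the effect the regression looks for.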


Maliheh Abbasnejad Mashhadi, Davood Mohammadi,
Volume 4, Issue 1 (9-2010)
Abstract

In this paper, we characterize symmetric distributions based on the Rényi entropy of order statistics in subsamples. A test of symmetry is proposed based on the estimated Rényi entropy. Critical values of the test are computed by Monte Carlo simulation. We also compute the power of the test under different alternatives and show that it performs better than the test of Habibi and Arghami (1386).
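The Monte Carlo critical-value step used by such tests can be sketched generically (this uses a simple standardized mean-median gap as a stand-in statistic, not the paper's Rényi-entropy statistic; all names are illustrative): simulate the statistic many times under a symmetric null, and take the empirical upper quantile as the critical value.

```python
import random
import statistics

random.seed(7)

def sym_stat(xs):
    """Stand-in symmetry statistic: scaled gap between mean and median."""
    s = statistics.stdev(xs)
    return abs(statistics.mean(xs) - statistics.median(xs)) / s

def mc_critical_value(n, reps=2000, alpha=0.05):
    """Simulate the statistic under a symmetric null (standard normal)."""
    sims = sorted(sym_stat([random.gauss(0, 1) for _ in range(n)])
                  for _ in range(reps))
    return sims[int((1 - alpha) * reps)]

cv = mc_critical_value(n=50)
skewed = [random.expovariate(1.0) for _ in range(50)]
print(sym_stat(skewed), cv)   # compare the observed statistic to the critical value
```

Power under an alternative is estimated the same way: repeat the sampling under the skewed distribution and count how often the statistic exceeds the critical value.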
Shahram Yaghoubzadeh, Ali Shadrokh, Masoud Yarmohammadi,
Volume 9, Issue 1 (9-2015)
Abstract

In this paper, we introduce a new five-parameter distribution with increasing, decreasing, and bathtub-shaped failure rates, called the Beta Weibull-Geometric (BWG) distribution. Using Stirling polynomials, the probability density function and several properties of the new distribution, such as its reliability and failure rate functions, quantiles and moments, Rényi and Shannon entropies, moments of order statistics, mean residual life, and reversed mean residual life, are obtained. The maximum likelihood estimation procedure is also presented. We compare the fit of this distribution with some of its sub-models on a real data set and show that the BWG distribution fits this data set better.

Ali Aghamohammadi, Sakineh Mohammadi,
Volume 9, Issue 2 (2-2016)
Abstract

In many medical studies, longitudinal designs are used to describe the course of illness and treatment effects. In longitudinal studies, responses are measured repeatedly over time, but sometimes these responses are discrete and binary. Recently, binary quantile regression methods have been considered for analyzing this kind of data. In this paper, a quantile regression model with Lasso and adaptive Lasso penalties is provided for longitudinal data with dichotomous responses. Since in both methods the posterior distributions of the parameters are not in explicit form, the full conditional posterior distributions of the parameters are calculated and the Gibbs sampling algorithm is used for inference. To compare the performance of the proposed methods with conventional methods, a simulation study was conducted, and finally an application to a real data set is illustrated.

Hamed Mohamadghasemi, Ehsan Zamanzade, Mohammad Mohammadi,
Volume 10, Issue 1 (8-2016)
Abstract

Judgment post stratification is a sampling strategy that uses ranking information to provide more efficient statistical inference than simple random sampling. In this paper, we introduce a new mean estimator for judgment post stratification. The estimator is obtained by ordering the observations within post strata. Our simulation results indicate that the new estimator performs better than its leading competitors in the literature.
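The basic judgment post stratification estimator (not the paper's new ordered-observations variant) can be sketched as follows; ranking here is done with the actual values, an idealized "perfect ranking" assumption, and all names are illustrative:

```python
import random

random.seed(3)

def jps_sample(population, n, H=3):
    """Each measured unit gets a judgment rank among H comparison units."""
    strata = {h: [] for h in range(1, H + 1)}
    for _ in range(n):
        comp = [random.choice(population) for _ in range(H)]
        y = comp[0]                       # the unit actually measured
        rank = sorted(comp).index(y) + 1  # its rank within the comparison set
        strata[rank].append(y)
    return strata

def jps_mean(strata):
    """Basic JPS estimator: average of the non-empty stratum means."""
    means = [sum(v) / len(v) for v in strata.values() if v]
    return sum(means) / len(means)

pop = [random.gauss(50, 10) for _ in range(10000)]
print(jps_mean(jps_sample(pop, n=90)))   # close to the population mean of about 50
```

Averaging stratum means rather than pooling all observations is what lets the ranking information reduce the estimator's variance relative to a simple random sample mean.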


Ali Aghamohammadi, Mahdi Sojoudi,
Volume 10, Issue 2 (2-2017)
Abstract

Value-at-Risk and Average Value-at-Risk are two important risk measures, based on statistical methods, that are used to quantify market risk. Recently, linear regression models such as least squares and quantile methods have been introduced to estimate these risk measures. In this paper, these two risk measures are estimated using composite quantile regression. To evaluate the performance of the proposed model against the other models, a simulation study was conducted, and finally applications to a real data set from Iran's stock market are illustrated.
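The two risk measures themselves are easy to state on an empirical sample (this is only their plug-in definition, not the paper's composite quantile regression estimator; the data below are made up for illustration): VaR is an upper quantile of the loss distribution, and Average VaR is the mean loss beyond that quantile.

```python
def var_avar(returns, alpha=0.95):
    """Empirical Value-at-Risk and Average Value-at-Risk of a return sample."""
    losses = sorted(-r for r in returns)   # losses are negated returns
    k = int(alpha * len(losses))
    var = losses[k]                        # alpha-quantile of the losses
    tail = losses[k:]                      # losses at or beyond VaR
    return var, sum(tail) / len(tail)

rets = [0.01, -0.02, 0.005, -0.05, 0.02, -0.01, 0.03, -0.04, 0.0, 0.015,
        -0.03, 0.025, -0.015, 0.01, -0.06, 0.02, -0.005, 0.04, -0.025, 0.01]
v, av = var_avar(rets, alpha=0.9)
print(v, av)   # VaR = 0.05, AVaR ≈ 0.055
```

Average VaR is always at least as large as VaR, since it averages the losses in the tail beyond the quantile.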


Ali Shadrokh, Shahram Yaghoobzadeh, Masoud Yarmohammadi,
Volume 12, Issue 1 (9-2018)
Abstract

In this article, with the help of the exponentiated-G distribution, we obtain extensions of the probability density function and cumulative distribution function, moments and moment generating functions, mean deviation, Rényi and Shannon entropies, and order statistics of this family of distributions. We use the maximum likelihood method to estimate the parameters, and with the help of a real data set, we show that the Ristić-Balakrishnan-G family of distributions is a proper model for lifetime data.


Ali Mohammadian Mosammam, Serve Mohammadi,
Volume 12, Issue 2 (3-2019)
Abstract

In this paper, the parameters of spatial covariance functions are estimated using the block composite likelihood method. In this method, the block composite likelihood is constructed from the joint densities of pairs of spatial blocks. For this purpose, after differencing the data, large data sets are split into many smaller ones. Each pair of blocks is then evaluated separately, and the results are finally combined through a simple summation. The advantage of this method is that there is no need to invert, or compute the determinant of, high-dimensional matrices. Simulation shows that the block composite likelihood estimates perform as well as the pair composite likelihood. Finally, a real data set is analysed.
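The "split into blocks, sum the block log-likelihoods, maximize the sum" structure can be sketched on a deliberately simplified model (independent Gaussian data, so no matrix inversion is needed at all; in the spatial setting each block term would be a multivariate normal density over a small block, which is the whole point of avoiding one large covariance matrix). All names and values are illustrative.

```python
import math
import random

random.seed(5)

def block_loglik(block, mu, sigma):
    """Gaussian log-likelihood of one block (iid toy stand-in for a spatial block)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in block)

def composite_loglik(data, mu, sigma, block_size=50):
    """Sum of per-block log-likelihoods; blocks are handled independently."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return sum(block_loglik(b, mu, sigma) for b in blocks)

data = [random.gauss(0.0, 2.0) for _ in range(1000)]
# Grid search over sigma, maximizing the composite log-likelihood.
best = max((s / 100 for s in range(100, 400)),
           key=lambda s: composite_loglik(data, 0.0, s))
print(best)   # near the true sigma = 2
```

Because each block contributes an additive term, the blocks can also be evaluated in parallel, which is another practical advantage for large spatial data sets.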


Zahra Khadem Bashiri, Ali Shadrokh, Masoud Yarmohammadi,
Volume 15, Issue 1 (9-2021)
Abstract

One of the most critical issues in regression modelling is selecting the optimal model: identifying the important explanatory variables and the negligible ones, so as to express the relationship between the response and the explanatory variables more simply. Given the limitations of classical variable selection methods, such as stepwise selection, penalized regression methods can be used instead. One penalized regression model is the Lasso, in which errors are assumed to follow a normal distribution. In this paper, we introduce a Bayesian Lasso regression model with asymmetrically distributed errors in the high-dimensional setting. Then, using simulation studies and real data analysis, the performance of the proposed model is discussed.
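The mechanism by which the Lasso penalty discards negligible variables is soft thresholding: coefficients are shrunk toward zero and small ones are set exactly to zero. A minimal sketch of that operator (the Bayesian, asymmetric-error model of the paper is far richer; this only shows why the penalty performs variable selection):

```python
def soft_threshold(z, lam):
    """Lasso coordinate update: shrink z toward zero by lam, then clip at zero."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

print(soft_threshold(3.0, 1.0))    # 2.0  (shrunk but kept)
print(soft_threshold(-0.4, 1.0))   # 0.0  (set exactly to zero: variable dropped)
```

With standardized, orthogonal predictors, the Lasso solution is exactly the soft threshold applied to each least-squares coefficient, which is why weak predictors vanish from the fitted model rather than merely shrinking.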


Morteza Mohammadi, Mahdi Emadi, Mohammad Amini,
Volume 15, Issue 1 (9-2021)
Abstract

Divergence measures can be considered as criteria for analyzing dependency and can be rewritten in terms of the copula density function. In this paper, the Jeffreys and Hellinger dependency criteria are estimated using the improved probit-transformation method, and their asymptotic consistency is proved. In addition, a simulation study is performed to measure the accuracy of the estimators. The simulation results show that for small sample sizes or weak dependence, the Hellinger dependency criterion performs better than the Kullback-Leibler and Jeffreys dependency criteria. Finally, an application of the studied methods in hydrology is presented.
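The idea of measuring dependence as a divergence between the copula and independence can be sketched crudely with a rank histogram (a stand-in for illustration only; the paper's improved probit-transformation estimator is a genuine density estimate, not a histogram, and all names here are assumptions):

```python
import math
import random

random.seed(11)

def ranks(xs):
    """Pseudo-observations in (0, 1) via normalized ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = (pos + 1) / (len(xs) + 1)
    return r

def hellinger_dep(x, y, k=4):
    """Squared Hellinger distance between a k x k copula histogram and independence."""
    u, v = ranks(x), ranks(y)
    n = len(x)
    cells = [[0] * k for _ in range(k)]
    for ui, vi in zip(u, v):
        cells[min(int(ui * k), k - 1)][min(int(vi * k), k - 1)] += 1
    q = 1.0 / (k * k)   # independence puts equal mass in every cell
    return 1.0 - sum(math.sqrt((c / n) * q) for row in cells for c in row)

x = [random.gauss(0, 1) for _ in range(500)]
indep = [random.gauss(0, 1) for _ in range(500)]
dep = [xi + 0.3 * random.gauss(0, 1) for xi in x]
print(hellinger_dep(x, indep), hellinger_dep(x, dep))  # dependence scores higher
```

The statistic is zero only when the rank histogram matches the independence copula, and it is bounded, which is one reason Hellinger-type criteria are stable for small samples.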

Mr Reza Zabihi Moghadam, Dr Masoud Yarmohammadi, Dr Hossein Hassani, Dr Parviz Nasiri,
Volume 16, Issue 2 (3-2023)
Abstract

The Singular Spectrum Analysis (SSA) method is a powerful non-parametric method in time series analysis, notable for features such as requiring no stationarity assumptions and no minimum number of observations. The main purpose of SSA is to decompose a time series into interpretable components such as trend, oscillatory components, and unstructured noise. In recent years, researchers in various fields have made continuous efforts to improve this method, especially for time series prediction. In this paper, a new method for improving SSA prediction, using the Kalman filter algorithm in structural models, is introduced. The performance of this method and some generalized SSA methods is then compared with basic SSA using the root mean square error criterion. For this comparison, simulated data from structural models and real data on gas consumption in the UK are used. The results of this study show that the newly introduced method is more accurate than the other methods.
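The Kalman filter recursion for the simplest structural model, the local level model, can be sketched as follows (this is only the filtering building block, not its combination with SSA; all parameter values are illustrative):

```python
import random

random.seed(4)

def kalman_local_level(ys, q=0.01, r=1.0):
    """Kalman filter for the local level model: the state follows a random walk.

    q is the state (level) noise variance, r the observation noise variance."""
    level, p = ys[0], 1.0           # initial state estimate and its variance
    out = []
    for y in ys:
        p = p + q                   # predict: random-walk uncertainty grows by q
        k = p / (p + r)             # Kalman gain: how much to trust the new data
        level = level + k * (y - level)
        p = (1 - k) * p
        out.append(level)
    return out

# Noisy observations of a slowly drifting level.
truth, ys, lvl = [], [], 0.0
for _ in range(300):
    lvl += random.gauss(0, 0.1)
    truth.append(lvl)
    ys.append(lvl + random.gauss(0, 1.0))

filt = kalman_local_level(ys, q=0.01, r=1.0)
mse_raw = sum((y - t) ** 2 for y, t in zip(ys, truth)) / len(ys)
mse_filt = sum((f - t) ** 2 for f, t in zip(filt, truth)) / len(ys)
print(mse_filt < mse_raw)   # filtering reduces error against the true level
```

The gain adapts automatically: when prediction uncertainty p is large relative to observation noise r, the filter follows the data closely; otherwise it smooths.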
 
Ali Mohammadian Mosammam, Jorge Mateu,
Volume 16, Issue 2 (3-2023)
Abstract

An important issue in many cities is crime, and a spatio-temporal Bayesian approach makes it possible to identify crime patterns and hotspots. In Bayesian analysis of spatio-temporal crime data, there is no closed form for the posterior distribution, because of its non-Gaussian distribution and the existence of latent variables. In this case, we face challenges such as high-dimensional parameters, extensive simulation, and time-consuming computation when applying MCMC methods. In this paper, we use INLA to analyze crime data in Colombia. The advantages of this method include estimating criminal events at a specific time and location and exploring unusual patterns across places.


Hossein Mohammadi, Mohammad Ghasem Akbari, Gholamreza Hesamian,
Volume 18, Issue 1 (8-2024)
Abstract

First, this article defines a metric between fuzzy numbers using the support function. Based on the support function, the concepts of variance, covariance, and correlation coefficient between fuzzy random variables are then defined and their properties investigated. Using these concepts, the p-order fuzzy autoregressive model based on fuzzy random variables is introduced and its properties are investigated. Finally, to illustrate the approach, examples are presented and compared with similar models using some goodness-of-fit criteria.
Mr. Majid Hashempour, Mr. Morteza Mohammadi,
Volume 18, Issue 2 (2-2025)
Abstract

This paper introduces the dynamic weighted cumulative residual extropy criterion as a generalization of the weighted cumulative residual extropy criterion. The relationship of the proposed criterion with reliability criteria such as the weighted mean residual lifetime, the hazard rate function, and the second-order conditional moment is studied. Characterization properties, upper and lower bounds, inequalities, and stochastic orders based on dynamic weighted cumulative residual extropy, as well as the effect of linear transformations on it, are also presented. A non-parametric estimator for the introduced criterion, based on the empirical method, is then given and its asymptotic properties are studied. Finally, an application of the dynamic weighted cumulative residual extropy to selecting an appropriate distribution for a real data set is discussed.
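Empirical estimation of extropy-type criteria follows a common pattern, which can be sketched for the plain (unweighted, non-dynamic) cumulative residual extropy, defined for a nonnegative variable as minus one half of the integral of the squared survival function (a plug-in sketch for the simpler criterion, not the paper's dynamic weighted estimator):

```python
import random

random.seed(9)

def cre_extropy(sample):
    """Plug-in estimate of -1/2 * integral of (empirical survival function)^2."""
    xs = sorted(sample)
    n = len(xs)
    total = 0.0
    for i in range(1, n):            # survival equals (n - i)/n on [x_(i), x_(i+1))
        sbar = (n - i) / n
        total += sbar ** 2 * (xs[i] - xs[i - 1])
    return -0.5 * total

u = [random.random() for _ in range(20000)]
print(cre_extropy(u))   # close to the theoretical value -1/6 for Uniform(0, 1)
```

For Uniform(0, 1) the integral of (1 - x)^2 over [0, 1] is 1/3, so the criterion equals -1/6, which the plug-in estimate approaches as the sample grows.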
Tara Mohammadi, Hadi Jabbari, Sohrab Effati,
Volume 19, Issue 1 (9-2025)
Abstract

Support vector machine (SVM), as a supervised algorithm, was initially invented for the binary case; then, due to its applications, multi-class algorithms were also designed and remain an active research topic. Recently, models have been presented to improve multi-class methods. Most of them examine cases in which the inputs are non-random, while in the real world we are faced with uncertain and imprecise data. Therefore, this paper examines a model in which the inputs are uncertain and the problem's constraints are probabilistic. Using statistical theorems and mathematical expectation, the problem's constraints are freed from their random form. The method of moments is then used to estimate the mathematical expectation. Synthetic data are generated using Monte Carlo simulation, and bootstrap resampling is used to provide samples as input to the model, whose accuracy is then examined. Finally, the proposed model is trained on real data and its accuracy is evaluated with statistical indicators. Results from the simulated and real examples show the superiority of the proposed model over the model based on deterministic inputs.


Dr. Mahdi Alimohammadi, Mrs. Rezvan Gharebaghi,
Volume 19, Issue 2 (4-2025)
Abstract

It was proved about 60 years ago that if a continuous random variable X has an increasing failure rate, then its order statistics also have an increasing failure rate. The discrete case remained unproved until recently, when a proof using an integral inequality was provided. In this article, we present a completely different method to solve this problem.
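The classical continuous-case statement can be checked numerically for a concrete example (a sketch, not the article's proof): the maximum of n iid variables with cdf F has hazard n F^(n-1) f / (1 - F^n), and for an increasing-failure-rate parent such as a Weibull with shape 2, this hazard is again increasing.

```python
import math

def weibull_pdf(x, k=2.0):
    """Density of a standard Weibull with shape k (IFR when k > 1)."""
    return k * x ** (k - 1) * math.exp(-x ** k)

def weibull_cdf(x, k=2.0):
    return 1 - math.exp(-x ** k)

def hazard_of_max(x, n=3, k=2.0):
    """Hazard of the maximum (largest order statistic) of n iid Weibull(k)."""
    F, f = weibull_cdf(x, k), weibull_pdf(x, k)
    return n * F ** (n - 1) * f / (1 - F ** n)

grid = [0.1 + 0.05 * i for i in range(60)]
vals = [hazard_of_max(x) for x in grid]
print(all(b > a for a, b in zip(vals, vals[1:])))   # the hazard is increasing
```

For n = 1 the expression reduces to the parent hazard f / (1 - F), so the same function covers the base case.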


Journal of Statistical Sciences (scientific research journal of the Iranian Statistical Society)
