
Standardized difference calculation in complex weighted data

Hey guys,

I am trying to calculate standardized differences in complex weighted data. I was able to calculate them with the stddiff command on the unweighted data, but it does not work with the svy prefix. Please help me with this issue.
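Since stddiff does not accept the svy prefix, one workaround is to compute the weighted standardized difference by hand from the survey-adjusted means and standard deviations. A minimal sketch, assuming a binary group variable treat and a continuous covariate x (both names are placeholders), and that estat sd returns the r(mean) and r(sd) matrices as documented:

```stata
* survey-weighted means and SDs of x by treatment group
svy: mean x, over(treat)
estat sd
matrix M = r(mean)
matrix S = r(sd)
* standardized difference = (m1 - m0) / sqrt((s1^2 + s0^2)/2)
display (M[1,2] - M[1,1]) / sqrt((S[1,2]^2 + S[1,1]^2)/2)
```

Check the column order of the matrices against your over() categories before reading off the result.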

Thank you.

PPML Gravity Model: interacting continuous variables

Good afternoon! I am estimating a PPML gravity model of trade adding an index of ethnic fractionalisation for each country as independent variables. Ethnic fractionalisation is a continuous variable from 0 to 1 which is a probability that two random individuals in a country are from different ethnic groups.

I have two basic questions regarding the model. First, should I log-transform ethnic fractionalisation or keep it in levels (all other variables besides the dependent variable and the dummies are logged)? Second, if I want to study the interaction between ethnic diversity in two countries, can I add ethnic1 * ethnic2 to the model, or might that bias the estimates? Thank you for your help!
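For what it's worth, an index bounded in [0,1] that can equal zero cannot be logged for all observations, so keeping it in levels is the usual choice. The interaction itself is easiest to handle with factor-variable notation so that margins understands it. A sketch using the community-contributed ppmlhdfe command (all variable names here are placeholders for your gravity covariates):

```stata
* community-contributed: ssc install ppmlhdfe
* keep the fractionalisation indices in levels; they live in [0,1]
* and may be 0, so logging is not possible for every observation
ppmlhdfe trade ln_dist ln_gdp_o ln_gdp_d contig comlang ///
    c.ethnic_o##c.ethnic_d, absorb(year) cluster(pair_id)

* marginal effect of origin-country diversity at different
* levels of destination-country diversity
margins, dydx(ethnic_o) at(ethnic_d = (0 0.5 1))
```

Including the interaction does not bias the estimates per se; it changes the interpretation, since the effect of ethnic1 then varies with ethnic2.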

margins at percentiles

Hi Statalists,

I have code like this:

Code:
regress y x1 x2 x3
egen p10 = pctile(x1), p(10)
egen p20 = pctile(x1), p(20)
margins, at (x1= (p10 p20))
which gives me the error "invalid numlist".

I know that x1 is between 0 and 1, so my first (and preferred) attempt was

Code:
margins, at (x1=(0(0.1)1))
but margins only works if 0, 0.1, 0.2, ... appear explicitly in my data, which is not the case; my data rather contain values like 0.1223123.

Can you help me make either the first version (with percentiles) or the second (evaluating margins at values not in the data set) work? I have the feeling it's just a matter of passing the values in p10 and p20 correctly.

I really tried to find help online but very different errors are treated under "invalid numlist".
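The "invalid numlist" arises because at() expects literal numbers, not variable names like p10. One fix is to grab the percentiles as scalars with _pctile and splice them in via macros (sketch, using the model from the post):

```stata
regress y x1 x2 x3

* r(r1) and r(r2) hold the 10th and 20th percentiles of x1
_pctile x1, percentiles(10 20)
local p10 = r(r1)
local p20 = r(r2)

margins, at(x1 = (`p10' `p20'))
```

Note also that at() values do not have to appear in the data, so the second attempt with at(x1=(0(0.1)1)) should run as well; the values only need to be sensible points of evaluation.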

All your help is highly appreciated!

Best,
Chris

Interpolating missing data

Can we interpolate missing data for the central bank policy rate using a leading rate or some other economic variable, and if so, how can we do that in Stata?
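Two common options, sketched with placeholder variable names (policy_rate, interbank_rate, year): plain linear interpolation over time with ipolate, or predicting the gaps from a closely related series:

```stata
* linear interpolation over time, optionally extrapolating the endpoints
ipolate policy_rate year, gen(rate_ipol) epolate

* or fill gaps using a related series such as an interbank rate
regress policy_rate interbank_rate
predict rate_hat, xb
gen rate_filled = cond(missing(policy_rate), rate_hat, policy_rate)
```

Whether either is defensible depends on how the filled values will be used downstream; regression-based filling understates the uncertainty in the imputed observations.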

FE with four-way error-components

Hello everyone,

I am estimating a DiD (difference in difference) with Least-squares dummy-variables (LSDV).

My data set contains 8,232 students (i) from 60 schools (j) and 460 classes (c) with a panel data format with T=5 (wave). For each student, I have the test scores (profic_mat) and a list of observed variables ($controlvar) over the time period.
During the sample period (2003-2008), a policy change was implemented in state schools in 2007. Students from state schools are therefore my treatment group and students from municipal schools are the control group. My DiD indicator is 1 if a student is enrolled in state schools (treated) in the post-treatment period (time).

Since the educational achievements follow a hierarchical structure, I assume individual, time, school and class fixed effects.

Fitting two-way fixed-effects models is relatively simple. I estimate a model with individual and time fixed effects, including the time effects as dummies and eliminating the individual effects by the within transformation.

Code:
xtreg profic_mat DiD time treated $controlvar wave_*, fe i(IDstudent) nonest cluster(IDschool)
By defining a spell, I can fit a three-way fixed-effects model: I treat each unique combination of i and j as one spell.
Code:
egen spell=group(IDstudent IDschool)
xtreg profic_mat DiD time treated $controlvar wave_*, fe i(spell) cluster(IDschool)
My question now is how can I fit a model with a four-way error-components?
I have estimated the two following models, but neither produced reliable values.

Code:
egen spellThree=group(IDstudent IDschool IDclass)
xtreg profic_mat DiD time treated $controlvar wave_*, fe i(spellThree) cluster(IDschool)
                                                           /*(AND)*/
xtreg profic_mat DiD time treated $controlvar wave_* IDclass_*, fe i(IDstudent) nonest cluster(IDschool)
Any advice would be highly appreciated!
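One route worth considering is the community-contributed reghdfe command, which absorbs an arbitrary number of fixed-effect dimensions directly instead of relying on spells. A sketch using the variable names from the post:

```stata
* community-contributed: ssc install reghdfe
* absorb all four sets of fixed effects; the time and treated main
* effects are absorbed by the wave and student effects respectively
reghdfe profic_mat DiD $controlvar, ///
    absorb(IDstudent wave IDschool IDclass) cluster(IDschool)
```

Caveat: if students never change schools or classes, the school and class effects are nested within the student effects and are not separately identified; reghdfe will report such redundant levels, but the design itself cannot distinguish them.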

Is there a command for quickly splitting a categorical variable into multiple binary variables?

Is there a command to split a categorical variable into binary variables? For example, splitting a "race" variable with values "white, black, hispanic" to three variables: race1, race2, race3 where race1 = 1 if race = "white" etc.
Just wondering if someone has created a command like this, since it would save a lot of time!
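This is built in: tabulate's generate() option creates one 0/1 indicator per category.

```stata
* creates race1, race2, race3 as 0/1 indicators, one per category
tabulate race, generate(race)
```

Also note that for estimation commands you usually do not need the dummies at all; factor-variable notation (i.race) expands them on the fly.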

What does _rc 111 mean in merge?

I have a merge statement that returns code 111 but appears to have worked.

Code:
merge 1:m _taxsimid state year using `original'

Result                           # of obs.
    -----------------------------------------
    not matched                             0
    matched                                 1  (_merge==3)
    -----------------------------------------
Merge failed 111

                 _merge |      Freq.     Percent        Cum.
------------------------+-----------------------------------
            matched (3) |          1      100.00      100.00
------------------------+-----------------------------------
                  Total |          1      100.00
Note that all (one) of the records matched, and I can list the record to confirm that the merge was done correctly. So what could be the problem causing the "Merge failed 111" error message?
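Return code 111 is Stata's "variable not found" error. Since the merge table itself printed, the r(111) is likely raised by something after the merge, inside whatever capture block prints the "Merge failed" message (that text is not a Stata message). One way to track it down is to trace the step:

```stata
* tracing shows exactly which command raises r(111)
set trace on
merge 1:m _taxsimid state year using `original'
set trace off
```

A common culprit is a subsequent reference to a variable that exists in only one of the two datasets.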

group two variables by year

Hello all,

I want to create a group based on size and beta and update it once a year.

I tried to use
Code:
bys year: egen id = group(size_quin beta_decile)
However, group() can't be combined with the by prefix. Is there a way to generate an id by size and beta on a yearly basis?

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input double(firm year) float(date size_quin) double beta_decile
10001 2015 660 1 2
10001 2008 581 1 1
10001 2008 584 1 1
10001 2012 625 1 2
10001 2009 588 1 4
10001 2008 586 1 1
10001 2014 656 1 1
10001 2013 646 1 2
10001 2011 623 1 2
10001 2007 573 1 4
10001 2016 681 1 1
10001 2016 678 1 1
10001 2016 676 1 1
10001 2014 649 1 1
10001 2007 575 1 4
10001 2015 667 1 2
10001 2007 571 1 4
10001 2011 622 1 2
10001 2011 614 1 2
10001 2009 599 1 4
10001 2007 567 1 4
10001 2015 670 1 2
10001 2013 645 1 2
10001 2011 618 1 2
10001 2014 655 1 1
10001 2013 643 1 2
10001 2014 658 1 1
10001 2011 615 1 2
10001 2014 650 1 1
10001 2016 680 1 1
10001 2007 570 1 4
10001 2015 663 1 2
10001 2016 677 1 1
10001 2007 566 1 4
10001 2009 589 1 4
10001 2010 600 1 2
10001 2008 576 1 1
10001 2013 642 1 2
10001 2016 673 1 1
10001 2015 671 1 2
10001 2013 644 1 2
10001 2008 577 1 1
10001 2014 659 1 1
10001 2013 647 1 2
10001 2010 606 1 2
10001 2008 582 1 1
10001 2012 632 1 2
10001 2015 666 1 2
10001 2010 605 1 2
10001 2016 672 1 1
10001 2012 633 1 2
10001 2013 639 1 2
10001 2013 636 1 2
10001 2014 652 1 1
10001 2016 674 1 1
10001 2014 654 1 1
10001 2012 631 1 2
10001 2015 668 1 2
10001 2012 626 1 2
10001 2011 620 1 2
10001 2012 630 1 2
10001 2014 648 1 1
10001 2016 683 1 1
10001 2011 621 1 2
10001 2013 641 1 2
10001 2008 583 1 1
10001 2011 619 1 2
10001 2012 624 1 2
10001 2007 569 1 4
10001 2010 603 1 2
10001 2009 592 1 4
10001 2012 634 1 2
10001 2009 591 1 4
10001 2007 564 1 4
10001 2010 604 1 2
10001 2009 594 1 4
10001 2015 669 1 2
10001 2013 637 1 2
10001 2010 611 1 2
10001 2007 574 1 4
10001 2007 572 1 4
10001 2012 629 1 2
10001 2010 608 1 2
10001 2015 664 1 2
10001 2008 587 1 1
10001 2011 613 1 2
10001 2010 609 1 2
10001 2008 580 1 1
10001 2010 610 1 2
10001 2012 628 1 2
10001 2014 657 1 1
10001 2011 616 1 2
10001 2008 578 1 1
10001 2009 598 1 4
10001 2016 675 1 1
10001 2009 593 1 4
10001 2007 568 1 4
10001 2009 596 1 4
10001 2012 627 1 2
10001 2010 607 1 2
end
format %tm date
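The by: prefix is indeed not allowed here, but egen's group() function defines groups over all the variables you list, so the fix is simply to add year to the list:

```stata
* a fresh size-beta group id within each year
egen id = group(year size_quin beta_decile)
```

Each distinct (year, size_quin, beta_decile) combination then gets its own id, which is equivalent to grouping size and beta separately within each year.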



Problem with Optimal bandwidth regression discontinuity with large dataset

Hi all,

I am trying to estimate the effects of a conditional cash transfer program on labor market outcomes such as employment and wages using an RD design. The data I'm using has approx. 11 million observations. My running variable is per capita family income, and it takes only discrete (integer) values.

I have generated some results using different bandwidths values, but I am having problems in estimating the RD with the optimal bandwidth procedure created by Calonico, Cattaneo, and Titiunik (rdrobust). The command I am using for this is:

rdrobust `var' ${running_var}, fuzzy(${treatment}) kernel(triangular) p(1) c(70)

And I got the following messages:

Invertibility problem in the computation of preliminary bandwidth below the threshold
Invertibility problem in the computation of preliminary bandwidth above the threshold
Invertibility problem in the computation of bias bandwidth (b) below the threshold
Invertibility problem in the computation of bias bandwidth (b) above the threshold
Invertibility problem in the computation of loc. poly. bandwidth (h) below the threshold
Invertibility problem in the computation of loc. poly. bandwidth (h) above the threshold

I am not sure why I am getting this error. Could it be explained by the discrete nature of the running variable or by the number of observations?
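The discreteness is a plausible suspect: with an integer running variable, observations pile up at mass points and the local polynomial design matrix can become singular near the cutoff. Recent versions of rdrobust expose a masspoints() option for exactly this situation; a sketch reusing the command from the post (whether your installed version supports the option is worth checking with help rdrobust):

```stata
* ask rdrobust to detect and adjust for mass points in the running variable
rdrobust `var' ${running_var}, fuzzy(${treatment}) kernel(triangular) ///
    p(1) c(70) masspoints(adjust)
```

Updating rdrobust from its distribution site before trying this is advisable, since mass-point handling was added after the original release.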

I would be very thankful for any help or advice.

Best regards,
Sebastian

Applying the Heckman selection model in panel data with fixed effects

Hello all,

I run a fixed effects regression in a linear probability model of health outcomes/behaviors and local employment change over three waves. One of these behaviors is the quantity of cigarettes consumed. It was suggested that an OLS model with Heckman correction for number of cigarettes consumed would model the decision to smoke or not, and then conditional on this fact, the quantity smoked. I agree with this, but am not sure how to apply a Heckman selection model in panel data with fixed effects.

In my analysis I model several outcomes and behaviors in Stata as below, and would like to keep this approach when applying the heckman correction, for comparability across outcomes studied and also because I need to apply weights to my analysis of cigarette consumption.

I saw a suggestion on stack exchange to cluster the standard errors on the panel id (https://stats.stackexchange.com/ques...and-panel-data) so would that mean updating my current clustering from county to individual id? I xtset the data by id year.

Alternatively I saw a comment by Phil Bromiley that
Fixed effects can be done with i.panel in heckman. You'll probably need to increase matsize and you'll end up with a pile of parameter estimate on the panels that are not of interest. xtreg y x with the panel called panel is identical to reg y x i.panel
(https://www.statalist.org/forums/for...for-panel-data) but I don't know what that would mean in an applied sense in Stata.

I also found that UNESCAP suggests doing the following:

Heckman depvar indepvar1 indepvar2 … dum1 dum2 …, select(indepvar1 indepvar2 … dum1 dum2 … overidvar1…) options

https://artnet.unescap.org/tid/artnet/mtg/cbtr7-s12.pdf

But I'm not even sure what the dummies I'm supposed to add are....


I thought xtheckman might save me, but it's a random effects regression with selection and I need fixed effects (https://www.stata.com/new-in-stata/xtheckman/).

I would really appreciate applied advice on what I should do to my analysis to apply a Heckman correction.

Thanks for any help,

John

This is my core model:

Code:
. xtreg no_cigs_cons_deflated_y  psum_unemployed_total_cont_y i.yrlycurrent_county_y1 i.year age_y i.marita
> lstatus_y if has_y0_questionnaire==1 & has_y5_questionnaire==1, cluster (current_county_y1) fe robust 
note: 6.yrlycurrent_county_y1 omitted because of collinearity
note: 15.yrlycurrent_county_y1 omitted because of collinearity
note: 18.yrlycurrent_county_y1 omitted because of collinearity
note: 23.yrlycurrent_county_y1 omitted because of collinearity
note: 25.yrlycurrent_county_y1 omitted because of collinearity
note: 26.yrlycurrent_county_y1 omitted because of collinearity
note: 29.yrlycurrent_county_y1 omitted because of collinearity
note: 5.year omitted because of collinearity

Fixed-effects (within) regression               Number of obs      =      1152
Group variable: id                              Number of groups   =       642

R-sq:  within  = 0.0605                         Obs per group: min =         1
       between = 0.0179                                        avg =       1.8
       overall = 0.0145                                        max =         2

                                                F(13,28)           =         .
corr(u_i, Xb)  = -0.8476                        Prob > F           =         .

                                     (Std. Err. adjusted for 29 clusters in current_county_y1)
----------------------------------------------------------------------------------------------
                             |               Robust
     no_cigs_cons_deflated_y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-----------------------------+----------------------------------------------------------------
psum_unemployed_total_cont_y |  -.2387741   .1100417    -2.17   0.039    -.4641842    -.013364
                             |
       yrlycurrent_county_y1 |
                      Clare  |    1.84201   2.511288     0.73   0.469    -3.302129     6.98615
                       Cork  |   .9439361   2.271351     0.42   0.681    -3.708716    5.596588
                    Donegal  |          0  (omitted)
                  Dublin 16  |   .0798436   2.427069     0.03   0.974    -4.891781    5.051468
                Dublin City  |   1.268084   2.435825     0.52   0.607    -3.721478    6.257646
     Dún Laoghaire-Rathdown  |   .4580872   2.367576     0.19   0.848    -4.391673    5.307847
                     Fingal  |   .1145035   2.333406     0.05   0.961    -4.665262    4.894269
                     Galway  |  -16.52429   .3514215   -47.02   0.000    -17.24415   -15.80444
                Galway City  |  -17.09233   .4548787   -37.58   0.000     -18.0241   -16.16055
                      Kerry  |   1.898583   2.566648     0.74   0.466    -3.358958    7.156123
                    Kildare  |   1.688322   2.394418     0.71   0.487     -3.21642    6.593064
                   Kilkenny  |          0  (omitted)
                      Laois  |   2.852193   1.208139     2.36   0.025      .377433    5.326952
                    Leitrim  |   2.076192   2.333259     0.89   0.381    -2.703273    6.855657
                   Limerick  |          0  (omitted)
                   Longford  |   .5373577   2.372396     0.23   0.822    -4.322276    5.396991
                      Louth  |   1.385586   2.386451     0.58   0.566    -3.502838     6.27401
                       Mayo  |  -17.88611   .3841588   -46.56   0.000    -18.67302    -17.0992
                      Meath  |   .1920723   2.276061     0.08   0.933    -4.470227    4.854372
                   Monaghan  |          0  (omitted)
                     Offaly  |   .9486299   2.335269     0.41   0.688    -3.834952    5.732212
                  Roscommon  |          0  (omitted)
                      Sligo  |          0  (omitted)
               South Dublin  |   .0798436   2.427069     0.03   0.974    -4.891781    5.051468
                  Tipperary  |  -.0933459   .3734837    -0.25   0.804    -.8583927    .6717008
            Tipperary North  |          0  (omitted)
                  Waterford  |  -15.97167   .4552278   -35.09   0.000    -16.90416   -15.03918
                  Westmeath  |   1.313337   2.349551     0.56   0.581      -3.4995    6.126175
                    Wexford  |   -.604106   2.456075    -0.25   0.808    -5.635147    4.426935
                    Wicklow  |   3.927572    3.03076     1.30   0.206    -2.280659     10.1358
                             |
                      5.year |          0  (omitted)
                       age_y |   .0837821   .0470026     1.78   0.086    -.0124983    .1800625
                             |
             maritalstatus_y |
                 Cohabiting  |   .5289705   .4076338     1.30   0.205    -.3060295    1.363971
                  Separated  |   -.547115   .1271997    -4.30   0.000    -.8076718   -.2865582
                   Divorced  |  -6.950598   1.454566    -4.78   0.000    -9.930142   -3.971054
                    Widowed  |    3.47176   1.996616     1.74   0.093    -.6181229    7.561643
       Single/Never married  |  -1.460055   1.615909    -0.90   0.374    -4.770094    1.849984
                             |
                       _cons |   5.822622   2.518999     2.31   0.028     .6626857    10.98256
-----------------------------+----------------------------------------------------------------
                     sigma_u |  9.0440127
                     sigma_e |  3.4804153
                         rho |  .87100821   (fraction of variance due to u_i)
----------------------------------------------------------------------------------------------
And I want to model it as something like

Code:
heckman no_cigs_cons_y psum_unemployed_total_cont_y i.yrlycurrent_county_y1 i.year age_y i.maritalstatus_y [pw=ipw55] if has_y0_questionnaire==1 & has_y5_questionnaire==1, select(age_y medical_card_y i.year) vce (cluster id)
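One hand-rolled possibility is a two-step procedure: estimate the selection (smoker vs. non-smoker) equation by probit, construct the inverse Mills ratio, and include it as a regressor in the fixed-effects outcome equation. Variable names below follow the post; note that the resulting standard errors are not corrected for the generated regressor (bootstrapping the two steps jointly would be needed), so treat this strictly as a sketch rather than a definitive implementation:

```stata
* step 1: selection equation (smokes at all vs. not)
gen smokes = no_cigs_cons_y > 0 if !missing(no_cigs_cons_y)
probit smokes age_y medical_card_y i.year

* step 2: inverse Mills ratio from the selection index
predict xb_sel, xb
gen imr = normalden(xb_sel)/normal(xb_sel)

* step 3: fixed-effects outcome equation including the IMR
xtreg no_cigs_cons_y psum_unemployed_total_cont_y imr i.year age_y ///
    i.maritalstatus_y if smokes == 1, fe cluster(id)
```

The usual identification caveat applies: the selection equation should contain at least one credible exclusion (here medical_card_y, per your own heckman draft) that affects the decision to smoke but not the quantity smoked.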

Crossed effects multilevel models

Hi,

I collected data from an experience sampling study over 14 days, where participants belonged to five different batches (done for convenience, and was not a test variable), and responded to the same questionnaire five times each day of the 14 days. I'm building models to predict for eg sleep from phone use, and have the following syntax:

mixed sleep phone_use || _all: R.Questionnaire_number || _all: R.ID || Batch_No: || Study_day_number:

(questionnaire number and participant ID are crossed because each participant responded to all five questionnaires every day; study day number i.e. 1 to 14 is nested within batch number because the study was conducted over different days for participants belonging to separate batches).

a) Can anyone shed some light on the way I've built up my syntax? I'm a bit lost regarding the use of four levels in a crossed-effects model; any advice would be helpful! And b) what sort of post-estimation analyses can I conduct to verify that the model is doing what I've asked it to do, e.g. robustness checks?

I must mention that the models are converging, but I just want to make sure that they are the correct models and verify the output.
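One standard check is whether each variance component earns its keep: fit the full crossed model, then drop one random effect at a time and compare with a likelihood-ratio test. A sketch (variable names follow the post; the boundary-of-parameter-space caveat that lrtest prints for variance components applies, making the test conservative):

```stata
* full crossed specification
mixed sleep phone_use || _all: R.Questionnaire_number || _all: R.ID ///
    || Batch_No: || Study_day_number:
estimates store full

* reduced model without the questionnaire-occasion component
mixed sleep phone_use || _all: R.ID || Batch_No: || Study_day_number:
lrtest full .
```

Comparing AIC/BIC across such nested simplifications (estat ic after each fit) gives a second view on whether all four levels are supported by the data.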

best wishes,
Ahuti

Latent profile analysis with continuous indicators and local independence

Hello Stata users!

I am doing a Latent Profile Analysis with continuous indicators (gsem, lclass option). These indicators are probably not locally independent as some of them are symptoms that are probably dependent on each other (e.g. pain, anxiety, depression) and some are beliefs that relate to these symptoms (e.g. self-efficacy).

I read Canette's 2017 presentation where she states that conditional independence is not necessary with Gaussian variables and that we can include correlations among them.

Does this imply that we can disregard the assumption of local independence, or that we should explicitly relax the assumption for locally dependent variables within a class?
The latter seems to be suggested by others (e.g. by allowing error terms to covary within a class for these variables).

If the latter, how would one identify the locally dependent indicators in Stata, if possible?
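One pragmatic way to identify dependent pairs is to fit the conditional-independence model, then refit with the error covariance of a suspect pair freed and compare fit. A sketch with illustrative indicator names (pain, anxiety, depression, selfeff are placeholders), assuming your Stata version accepts error covariances in lclass models as described in the presentation you cite:

```stata
* baseline: conditional independence within classes
gsem (pain anxiety depression selfeff <- _cons), lclass(C 3)
estimates store indep

* relax independence for one suspect pair of symptoms
gsem (pain anxiety depression selfeff <- _cons), lclass(C 3) ///
    cov(e.pain*e.anxiety)
estimates store dep

* compare information criteria across the two specifications
estimates stats indep dep
```

Repeating this pair by pair (guided by theory about which symptoms should co-occur) is slow but keeps the model interpretable; freeing all covariances at once often costs identification.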

Any help would be greatly appreciated.

Best regards,
Martin

Adding quantity of particular product from all importers and one particular exporter

Dear all,

I have a dataset with information about products (numeric), their quantity, the importer (ISO code), and the exporter (ISO code) who sold the product to the importer. I would like to sum up the quantity of one product across all importers who imported it from one particular exporter (several importers could have bought it from the same exporter).
As I have a huge dataset, I would like one universal piece of code for this, if possible.
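If I've understood the structure, this is a group total over exporter-product pairs. A sketch with placeholder variable names (exporter, product, quantity):

```stata
* attach to each row the total quantity of that product
* shipped by that exporter, summed over all importers
bysort exporter product: egen total_qty = total(quantity)

* or reduce the data to one row per exporter-product pair
collapse (sum) quantity, by(exporter product)
```

The egen version keeps the original rows; collapse replaces the dataset, so preserve/restore or save a copy first if you need the detail back.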

I hope this was understandable.

Thank you so much in advance for your help!

Cheers

Maria

Is it possible to get Stata to output the p-value of a Q-residual value?

I am using the metareg command and ereturn list to view the stored scalars, but the output does not include the p-value of the residual Q statistic alongside the statistic itself. Any ideas?
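If the residual Q statistic and its degrees of freedom are among the stored results, the p-value is just the upper tail of the corresponding chi-squared distribution, which you can compute yourself. The scalar names below are placeholders; check ereturn list for the names your metareg version actually stores:

```stata
* p-value of residual heterogeneity Q on its degrees of freedom
* (e(Q) and e(df_Q) are assumed names; substitute yours)
display chi2tail(e(df_Q), e(Q))
```

If the scalars are not stored at all, the displayed values can be copied into chi2tail() directly.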

Checking and Relabeling Value Labels Across Separate Waves of Longitudinal Data

Hi all – I was wondering if anyone could help with a complicated problem I'm having.

I’m working with a longitudinal dataset that contains two waves of data – (aptly named W1 and W2). It contains just over 5000 variables (about 2500 per wave) and also just over 5000 observations total. Since it's longitudinal, most of the same questions were asked in both waves. Variables have the following naming convention: wave number + section abbreviation + question number. So the variable w1fs001 would translate to:

w1 --> Wave 1
fs -- > Food Security
001 --> question #001 within the Food Security section

While the dataset contains different types of variables (string, categorical, ordinal, nominal, dichotomous, etc.), for the purposes of this question, I’m looking at re-labeling some binary variables that are in the “YES/NO” format. Right now, there are some variables whose values are labeled “0 - YES/1 – NO” in W1, but “1 - YES/2 – NO” in W2 (or even vice versa - “1 - YES/2 – NO” in W1, or “0 - YES/1 – NO” in W2). However, regardless of whatever the labeling is in W1, I want to ‘align’ the value labels so they are consistent ACROSS waves (while not necessarily being consistent WITHIN waves). I guess stated another way, for each variable, whatever the “YES/NO” value label is in W1, I want to make sure the value label is the same for that variable’s W2 counterpart.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte(w1hc001 w1hc002s5 w1hc006 w1gt001s6 w2hc001 w2hc006 w2hc002s5 w2gt001s6)
2  5 1 . 1 2 1 2
1  5 1 6 . . . .
1  5 2 . 1 2 1 2
1  5 2 . . . . .
1  5 2 6 . . . .
1 .r 1 6 . . . .
1  5 2 6 1 1 1 2
1  5 1 6 1 2 1 .
2  5 2 6 . . . .
2  5 2 6 1 2 1 2
1  . 2 6 . . . .
1  5 1 6 . . . .
1  5 1 6 . . . .
1  5 2 6 1 2 1 2
1  5 2 . 1 2 1 2
1  5 1 6 . . . .
1  5 1 . 2 2 2 2
1  5 1 6 1 1 1 2
1  5 2 6 1 2 1 2
1  5 2 . 1 2 2 2
end
label values w1hc001 HAALSI_VL54F
label def HAALSI_VL54F 1 "1 (YES) Yes", modify
label def HAALSI_VL54F 2 "2 (NO) No", modify
label values w1hc002s5 spicesoils
label def spicesoils 5 "5 (Yes) Yes", modify
label values w1hc006 HAALSI_VL105F
label def HAALSI_VL105F 1 "1 (YES) Yes", modify
label def HAALSI_VL105F 2 "2 (NO) No", modify
label values w1gt001s6 oldage
label def oldage 6 "6 (YES) Yes", modify
label values w2hc001 YN
label values w2hc006 YN
label values w2hc002s5 YN
label values w2gt001s6 YN
label def YN 1 "Yes", modify
label def YN 2 "No", modify
Two things complicate this further:
  1. There are hundreds of different "YES/NO" value labels that were auto-generated/assigned to variables during data collection, and despite these labels being named slightly differently (VL105F, VL54F, etc.), they all apply some type of "YES/NO" value label to variables.
  2. There are some variables that have a "YES/NO" value label assigned to them, but the label is applied to values that are not 0, 1, or 2 (ex. "Do you have a 5th child?" - where even though the answer is a numeric "5", the label appears as "5 - YES", indicating that the respondent does have a 5th child - see variables w1hc002s5 or w1gt001s6 in the dataex above for similar examples). Despite these being coded oddly, I still need to include them in this value-label check since they are still in a "YES/NO" format.
  1. First, is there a way to limit the dataset to only variables with the “YES/NO" format?
  2. Second, is there any way to ‘check’ that two variables are assigned the same value label?
  3. Third, upon checking the value labels, is there a way to assign whatever the W1 value label is, to its W2 counterpart
I was envisioning some sort of command that loops through all W1 variables, and then checks the value label against its W2 counterpart but am totally lost on how to go about executing it – (especially using extended macro functions which I’m not great with). My thought process was something like this:
  1. Keep only those variables that have “Yes” or “No” in the value label – this would also keep those ‘oddly’ labeled variables too
  2. Order the variables “sequentially” alternating by wave - (w1pl001, w2pl001, w1pl002, w2pl002, etc)
  3. Then, cycle through all of the W1 variables only and put the name of each different value label in order in a local/macro
  4. Run another loop command that cycles through each different value label checking it against each separate ‘pair’ of variables (w1pl201, w2pl201) applying whatever the W1 value label is, to the W2 variable
This is all I have so far – however, the findname command keeps giving me an "invalid syntax" error, and I can't figure out what I am typing incorrectly. I am also unsure how to order each pair of variables alternating by wave and then check the value labels of each pair.

Code:
findname, vallabeltext(*YES* *NO*) insensitive local(VALUES)
 
gen valuelist = ""
local lcode = 0
foreach var of varlist w1* {
local lcode = `lcode' + 1
local valuelist : value label `var'
replace valuelist = "`valuelist'" in `lcode'
}
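For step 3 specifically (applying each W1 variable's value label to its W2 counterpart), a loop like the following sketch may get you most of the way. It relies only on the stated naming convention (w1/w2 prefix plus identical stem); the Yes/No filter is left as a comment because your label texts vary. Note the important caveat in the last comment: this copies the label *attachment* only, so if the underlying codes differ between waves (0/1 vs. 1/2), the values themselves must also be recoded for the waves to be comparable.

```stata
foreach v of varlist w1* {
    * value label attached to the W1 variable, if any
    local lbl : value label `v'
    if "`lbl'" == "" continue
    * build the W2 counterpart's name from the shared stem
    local v2 = "w2" + substr("`v'", 3, .)
    capture confirm variable `v2'
    if _rc continue
    * (optionally test here that `lbl' is a Yes/No-style label
    *  before proceeding, e.g. by inspecting its label text)
    label values `v2' `lbl'
    * WARNING: this aligns label text only; if the numeric codes
    * differ across waves, recode the W2 values as well
}
```

For the filtering step, labelbook lists every defined label with its text, which makes it easier to build the list of Yes/No-style label names first and restrict the loop to variables carrying one of them.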
Any insights are appreciated as I am thoroughly stumped!

Chen–Shapiro test for normality

Ipolate & Epolate - two identification variables


Hi Stata users, I have a data set from 2000 to 2016 with some missing values for both dependent and independent variables. My data are divided by state, year, and industrial sector. I need to interpolate (extrapolate) using industrial sector and state as identifiers. How should I write the command in my do-file?
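ipolate accepts the by prefix, so the usual pattern is to interpolate within each state-sector series over years. A sketch with placeholder names (state, sector, year, depvar):

```stata
* interpolate (and extrapolate the endpoints) within each
* state-sector panel, ordered by year
bysort state sector (year): ipolate depvar year, gen(depvar_i) epolate
```

Repeat per variable, or wrap the line in a foreach loop over the variables that need filling.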

Help with Program that Demonstrates Central Limit Theorem Nested Loop

Dear all,

I am using Stata 16 on a Mac. The following program selects a given number of observations from a chi-square distribution.

Code:
cap program drop randdraw

program define randdraw
    clear
    args N distribution
    set obs `N'
    gen x = `distribution'
    sum
end

simulate mean_x = r(mean), reps(1000): randdraw 50 "rchi2(2)"


Suppose I want to write a nested loop that simulates drawing a sample mean from a uniform distribution, a Poisson distribution, and a beta distribution, with sample sizes of 3, 40, 500, and 1,000, replicating each 500 times and plotting a histogram of the resulting distribution of sample means. The first histogram would then show 500 sample means, each taken from 3 observations drawn from the underlying distribution; the second would show 500 sample means taken from 40 observations each; and so on. Can I use the program listed above?
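Yes, the program can be reused as-is, since it takes both the sample size and the distribution expression as arguments. A sketch of the nested loop (the distribution parameters 4 and (2,5) are illustrative; substitute your own, and reps(500) matches your description):

```stata
local i = 0
foreach d in "runiform()" "rpoisson(4)" "rbeta(2,5)" {
    foreach n in 3 40 500 1000 {
        local ++i
        * 500 replications of the sample mean from n draws of `d'
        simulate mean_x = r(mean), reps(500): randdraw `n' "`d'"
        histogram mean_x, title("`d', N = `n'") name(g`i', replace)
    }
}
```

Each pass through the inner loop overwrites the data with the simulated means, so the histogram must be drawn (or the results saved) before the next iteration.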


Thank you in advance for your help


Jason Browen

How to Write codes to Draw a Sample Mean from Uniform distribution with Sample Size of 500

Dear all,


I am using Stata 16 on a Mac. I need help writing code to draw a sample mean from a uniform distribution with a sample size of 500.
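A minimal sketch: generate 500 uniform draws and summarize them (the seed value is arbitrary, included only for reproducibility):

```stata
clear
set obs 500
set seed 12345
gen x = runiform()      // uniform draws on [0,1)
summarize x
display "sample mean = " r(mean)
```

To draw many such sample means rather than one, wrap this in a program and call it with simulate, as in the central limit theorem thread above.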

Thank you in advance for your help


Jason Browen

_rmdcoll issues error after eclass program that does not post estimates

After running "rforest," an eclass program available on RePEc that does not post estimates, a subsequent call to _rmdcoll fails with the error message "last estimates not found" r(301). For example, if one tries to run "ivregress" after a call to rforest, Stata produces this error because ivregress in turn calls _rmdcoll.

I suspect the reason is that _rmdcoll attempts to "hold" previous estimates whenever the macro `e(cmd)' is not blank. That will be the case if an eclass program was called previously; but if that eclass program did not post estimates, the attempt to hold previous estimates fails and produces the error message. Programs that do not post estimates should probably not be classified as eclass (but rather rclass or sclass or something), and _rmdcoll should also be made more robust. This would be a good improvement.
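Until either is fixed, a practical workaround is to clear the stale e() results between the two commands. A sketch (the variable names and rforest/ivregress specifications are placeholders):

```stata
* rforest sets e(cmd) without posting estimates
rforest y x1 x2, type(reg)

* discard the stale e() results so _rmdcoll does not try to hold them
ereturn clear

* the subsequent estimator now runs without r(301)
ivregress 2sls y (x1 = z1) x2
```

This costs nothing if you have already extracted what you need from rforest's output before clearing.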