
PPML with various constraints

Hello all and Joao,

I have a question on imposing different types of constraints while estimating a gravity equation using PPML. Here's the general Stata code I use:

Code:
ppml bi_trade lnA lnB lnC, noconstant iter(50)

where lnA, lnB, and lnC are variables from a gravity equation.

During estimation, the coefficients of lnA and lnB are supposed to be constrained to unity. As an offset is the only type of constraint that works with PPML, I define a new variable lnD = lnA + lnB and use the following code:

Code:
ppml bi_trade lnC, offset(lnD) noconstant iter(50)

Is this the right way to proceed?
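One way to sanity-check the offset approach is against Stata's built-in glm, which fits the same Poisson pseudo-maximum-likelihood model; this is only a sketch, and the glm call is an assumption of mine rather than something from the thread:

Code:
* constrain the coefficients of lnA and lnB to unity via an offset
gen lnD = lnA + lnB
ppml bi_trade lnC, offset(lnD) noconstant iter(50)

* cross-check with official glm (Poisson PML, robust standard errors)
glm bi_trade lnC, family(poisson) link(log) offset(lnD) vce(robust) noconstant

If the two sets of estimates agree, the offset is doing what is intended.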

Another constraint I'd like to impose is that the coefficient of lnC be positive. I understand that with some transformation this type of inequality constraint can be imposed in linear regressions. However, is there a way to impose this kind of constraint with PPML?

Any help is greatly appreciated!

Create loop to store slope and standard error

Dear Statalisters,

Your help with the following problem would be highly appreciated! Using Stata 14.1, I am trying to compute an industry munificence and an industry dynamism variable as described by Misangyi et al. (2006, pages 581-582; see below for the reference and link):

"[Industry munificence] was calculated for each year by first regressing the annual average sales in each industry over the 5 years which contained the focal year as a midpoint (i.e., industry munificence for 1995 is based on the regression of sales for the years 1993–97). The regression slope coefficient obtained from this regression was then divided by the mean value of the sales for those years (to adjust for absolute industry size)...

[Industry Dynamism] was measured as the dispersion about the regression line estimated in the regressions used in arriving at the munificence variable just described, by dividing the standard error of the regression slope coefficient by the mean value of sales"

My panel data has the following (rough) structure:
Industry   firm   year    sales
    5010      1   2000    23000
    5010      1   2001    24000
    5010      1   2003    30000
    5230      8   2001   500000
    5230      9   2002      500
    5230      9   2004      600
    5800     10   2001    80000
    5800     10   2002    81000
    5800     10   2003    80000
Overall, there are about 50 different industries, 4,000 firms, and 14 years for which I have data.

I have no idea how to set this up in an elegant way without computing everything manually per industry and year. Browsing previous posts did not help.
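One possible route is the user-written rangestat command (SSC), which runs windowed regressions by group; the sketch below assumes the industry-year average of sales should be computed first, as in the quoted passage:

Code:
* a sketch using rangestat (ssc install rangestat)
* 1) one observation per industry-year, holding average sales
*    (work on a copy - collapse replaces the data in memory)
collapse (mean) avgsales=sales, by(industry year)

* 2) regress avgsales on year over the 5-year window centered on the focal year,
*    and compute the window mean of avgsales in the same pass
rangestat (reg) avgsales year (mean) avgsales, interval(year -2 2) by(industry)

* 3) munificence and dynamism as in Misangyi et al. (2006)
gen munificence = b_year / avgsales_mean
gen dynamism    = se_year / avgsales_mean

rangestat creates b_year and se_year (the slope and its standard error) plus avgsales_mean for each focal year, so nothing needs to be computed manually per industry and year.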

In case anyone could give advice, please, I would be more than thankful!
Best wishes,
Julia

Misangyi, V. F., H. Elms, T. Greckhamer and J. A. Lepine. 2006. A new perspective on a fundamental debate: a multilevel approach to industry, corporate, and business unit effects. Strategic Management Journal 27 (6): 571-590.
http://onlinelibrary.wiley.com/doi/1...lobalMessage=0

Two-way Simultaneous equations with range function graph

Dear all,

Can you please help me draw two simultaneous equations, each over its own range, on Cartesian axes?

I know that the command to draw a function is twoway function y=x, but what about this type of graph?

How can I plot y=x for x<2 and y=2x for x>=2?
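One way to get this (a sketch; the outer endpoints 0 and 4 are arbitrary choices): overlay two function plots and restrict each with the range() option:

Code:
twoway (function y = x, range(0 2)) (function y = 2*x, range(2 4)), legend(off)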

Thanks a lot!


Data collection: estimation of a respondent burden in a statistical survey

Dear All,

Sorry for an off-topic question not directly related to Stata, but I would like to ask the experts here for advice on respondent paperwork burden estimation methodology.

Respondent burden estimation is sometimes reported on standardized forms. See for example here:
https://en.wikipedia.org/wiki/Paperwork_Reduction_Act

A particular form or questionnaire may be estimated to carry, say, a 2-hour "respondent burden".

My question is the following: How is the paperwork burden estimated? Or more specifically, is it estimated "directly" or "indirectly"?
  • by directly I mean something like the following: N persons are given the form, time is measured for each one to complete it, then averaged;
  • by indirectly I mean something like the following: this form contains 30 open-ended questions (each assumed to take 2 minutes) and 60 yes/no questions (each assumed to take 30 seconds), hence the total burden is 30×2 + 60×0.5 = 90 minutes, or 1.5 hours.
In both cases I see significant savings from learning-by-doing: filling out a form of any kind for the first time will take me far more time than the 100th time. In estimating the respondent burden, is there any assumption about the position of the respondent on the learning curve (e.g., a first-timer? a professional?)

If the "indirect" method is applied: Is there any publicly available resource with "costs" per question? per skip condition, etc.

Thank you very much, Sergiy Radyakin



Pulling the log likelihood into outreg2

Hello,

I'd like to add the log likelihood to outreg2 output. I'm having two problems: (1) outreg2 is not able to open the documents I write to, and (2) I am unsure how to add the log likelihood.

In time, I think I will be able to figure out the file issue with outreg2; I am most keen on advice from the community about how to specify the log likelihood in the outreg2 command.

Thanks so much!
Preeti

Below is a code example using auto.dta from Stata:

Code:
. logit foreign rep78

Iteration 0:   log likelihood = -42.400729  
Iteration 1:   log likelihood = -28.730111  
Iteration 2:   log likelihood = -27.728894  
Iteration 3:   log likelihood = -27.716046  
Iteration 4:   log likelihood = -27.716037  
Iteration 5:   log likelihood = -27.716037  

Logistic regression                             Number of obs     =         69
                                                LR chi2(1)        =      29.37
                                                Prob > chi2       =     0.0000
Log likelihood = -27.716037                     Pseudo R2         =     0.3463

------------------------------------------------------------------------------
     foreign |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       rep78 |   1.969267   .4785224     4.12   0.000      1.03138    2.907154
       _cons |  -8.043597   1.848757    -4.35   0.000    -11.66709     -4.4201
------------------------------------------------------------------------------

. outreg2 using test.doc, replace ctitle(logit coeff)
file test.txt could not be opened
r(603);
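For problem (2), outreg2's addstat() option can append named scalars such as e(ll); the sketch below assumes that option, and note that r(603) usually means the target file is open in another program or the current working directory is not writable:

Code:
logit foreign rep78
outreg2 using test.doc, replace ctitle(logit coeff) addstat("Log likelihood", e(ll))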


Dynamic Error Correction Model

Hello,

My name is Paula and I am from Greece. I apologize for my English.

My issue is this: following the attached journal article (page 214, last paragraph, "Given the variables..."), I am trying to implement a dynamic error correction model in Stata.

Following the article, I have run the Engle-Granger ECM "in order to obtain the estimated residuals". The egranger command was installed from SSC (ssc install egranger).

However, the article then says to define these lagged residuals as the error correction term, and I am unsure how to do this in Stata.
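For reference, a minimal two-step Engle-Granger sketch with hypothetical variables y and x in tsset data (the textbook construction, not necessarily exactly what Apergis and Payne estimated):

Code:
tsset year
regress y x                  // long-run (cointegrating) regression
predict ehat, residuals      // estimated residuals
regress D.y D.x L.ehat       // L.ehat, the lagged residual, is the error correction term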

I have contacted both of the authors, Apergis and Payne, but they did not respond.

Thank you for your time.

Paula

What remedial action do I need to take to deal with heteroskedasticity?

Hi folks. I need to deal with heteroskedasticity in a regression model. I would be grateful if someone could tell me what remedial action to take. Thank you.
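A common first remedy, sketched for a linear model with hypothetical variable names, is to keep the point estimates and use heteroskedasticity-robust standard errors:

Code:
regress y x1 x2, vce(robust)   // Huber-White robust standard errors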

Special Characters in String Vars

Hi everyone--

I have a string variable that is riddled with special characters, which is ultimately preventing me from completing a fuzzy match across two data sets. I am trying to strip this string variable of all special characters (i.e., " , . - ( ) [ ] etc.), but have not been successful in finding a way to get rid of all of them. I have tried commands such as:

Code:
replace shortcounterpartyname = regexr(shortcounterpartyname, "`" "," "." "'" "inc." " inc." "/" " - " "-" " . " "[0-9][0-9][0-9][0-9]" " s.a. " "llc" " llc" " ltd." "(" ")", "")
replace shortcounterpartyname = subinstr(shortcounterpartyname, "(", "", .)

However, I have yet to find a way to get rid of quotation marks (even after reading a plethora of other help articles on this same issue, including http://www.stata.com/statalist/archi.../msg00179.html). Additionally, I have found that Stata is dropping the first letter of some names, even when the observation has no special characters in its name.
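One sketch of an alternative: subinstr() removes every occurrence of a substring (regexr() replaces only the first regex match), and the quote characters are easiest to handle through their ASCII codes with char():

Code:
foreach c in "," "." "/" "-" "(" ")" "[" "]" {
    replace shortcounterpartyname = subinstr(shortcounterpartyname, "`c'", "", .)
}
replace shortcounterpartyname = subinstr(shortcounterpartyname, char(34), "", .)  // double quote
replace shortcounterpartyname = subinstr(shortcounterpartyname, char(39), "", .)  // apostrophe
replace shortcounterpartyname = subinstr(shortcounterpartyname, char(96), "", .)  // backtick
replace shortcounterpartyname = strtrim(stritrim(shortcounterpartyname))          // tidy spacing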

If anyone has any suggestions regarding this issue, please let me know!

Thanks,

Rebecca

Nodraw option and .png - can I have both?

I would like to suppress pop-ups when I draw graphs; I hear the nodraw option can do this. I would also like to save in .png, but here's the rub: nodraw only works with the saving(filename) option, which defaults to .gph. If I try saving(filename.png), the file is corrupted, and the same happens with any extension other than .gph. So it appears I can either use nodraw and deal with .gph files down the road, or deal with the pop-ups now and use graph export filename.png. I like graph export, except that I can't seem to use it with nodraw. The goal is to employ nodraw AND .png. Is this possible? Thanks!
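One combination worth trying (a sketch using auto.dta; behavior can vary by platform): create the graph with nodraw and a name(), then export the named in-memory graph:

Code:
sysuse auto, clear
scatter price mpg, nodraw name(g1, replace)
graph export "g1.png", name(g1) replace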

Exporting a table

Hi everyone,

I am trying to export means and t-statistics of variables of a dataset to a table.
I would like to show the means and t-values of the variable "car" for 2 categories ("Acquirer" and "Target" from the variable "nfirm") as well as 2 subcategories ("Low Premium" and "High Premium" from the variable "premium_20d_cat") for both Acquirers and Targets. Also, I am only interested in "car" when "event_day" = 5. I also want to calculate the difference in the mean "car" between Acquirers-High Premium vs. Acquirers-Low Premium and between Targets-High Premium vs. Targets-Low Premium and test for significance.

Basically, the structure would be like this:
  • All Acquirers
    • Acquirers-High Premium
    • Acquirers-Low Premium
    • Difference High Premium vs. Low Premium for Acquirers
  • All Targets
    • Targets-High Premium
    • Targets-Low Premium
    • Difference High Premium vs. Low Premium for Targets
For your information, I attach an example of the table I am trying to approximate and an excerpt of my dataset.

I have succeeded in calculating the means and t-values using the following commands, but cannot quite get to formatting an output table...

Does anybody have an idea of how I could do that? Thank you very much for your help!


Code:
*** Obtaining means and t-statistics ***
by nfirm, sort : ttest car == 0 if event_day==5
by nfirm premium_20d_cat, sort : ttest car == 0 if event_day==5

*** Testing for significance between Low and High Premiums ***
by nfirm, sort : ttest car if event_day==5, by(premium_20d_cat)
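One possibility, assuming the estout package (ssc install estout): estpost can hold ttest results so that esttab can write them out; the cells() choices below are a sketch, not a reproduction of the attached table:

Code:
estpost ttest car if event_day==5, by(premium_20d_cat)
esttab using ttests.rtf, cells("mu_1 mu_2 b t") replace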
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float co_id byte event_id float event_day byte day str8 firm long nfirm float(ar tvalue car premium_20d premium_20d_cat)
 1 1 -5  1 "Target"   2 -.0059   -.3933       -.0059  .4660268 1
 1 1 -4  2 "Target"   2  -.017  -1.1333       -.0229  .4660268 1
 1 1 -3  3 "Target"   2 -.0208  -1.3867       -.0437  .4660268 1
 1 1 -2  4 "Target"   2  .0102      .68       -.0335  .4660268 1
 1 1 -1  5 "Target"   2 -.0217  -1.4467       -.0552  .4660268 1
 1 1  0  6 "Target"   2  .3365  22.4333    .28129998  .4660268 1
 1 1  1  7 "Target"   2   .004    .2667        .2853  .4660268 1
 1 1  2  8 "Target"   2   .016   1.0667        .3013  .4660268 1
 1 1  3  9 "Target"   2  .0196   1.3067        .3209  .4660268 1
 1 1  4 10 "Target"   2  .0117      .78        .3326  .4660268 1
 1 1  5 11 "Target"   2  -.003      -.2        .3296  .4660268 1
67 1 -5  1 "Acquirer" 1 -.0067   -.5447       -.0067  .4660268 1
67 1 -4  2 "Acquirer" 1 -.0123       -1        -.019  .4660268 1
67 1 -3  3 "Acquirer" 1  .0017    .1382  -.017299999  .4660268 1
67 1 -2  4 "Acquirer" 1  .0006    .0488  -.016699998  .4660268 1
67 1 -1  5 "Acquirer" 1 -.0091   -.7398  -.025799997  .4660268 1
67 1  0  6 "Acquirer" 1 -.0366  -2.9756       -.0624  .4660268 1
67 1  1  7 "Acquirer" 1   .006    .4878       -.0564  .4660268 1
67 1  2  8 "Acquirer" 1  .0249   2.0244  -.031499997  .4660268 1
67 1  3  9 "Acquirer" 1  .0019    .1545  -.029599996  .4660268 1
67 1  4 10 "Acquirer" 1 -.0015    -.122  -.031099996  .4660268 1
67 1  5 11 "Acquirer" 1   .002    .1626  -.029099995  .4660268 1
 2 2 -5  1 "Target"   2  .0132    .8627        .0132  .3043694 1
 2 2 -4  2 "Target"   2 -.0124   -.8105  .0007999996  .3043694 1
 2 2 -3  3 "Target"   2  .0821    5.366        .0829  .3043694 1
 2 2 -2  4 "Target"   2 -.0016   -.1046        .0813  .3043694 1
 2 2 -1  5 "Target"   2 -.0086   -.5621   .072699994  .3043694 1
 2 2  0  6 "Target"   2  .0155   1.0131        .0882  .3043694 1
 2 2  1  7 "Target"   2  .0933    6.098        .1815  .3043694 1
 2 2  2  8 "Target"   2  .0094    .6144        .1909  .3043694 1
 2 2  3  9 "Target"   2  .0169   1.1046        .2078  .3043694 1
 2 2  4 10 "Target"   2 -.0062   -.4052        .2016  .3043694 1
 2 2  5 11 "Target"   2 -.0058   -.3791        .1958  .3043694 1
68 2 -5  1 "Acquirer" 1  .0133   1.3434        .0133  .3043694 1
68 2 -4  2 "Acquirer" 1 -.0032   -.3232        .0101  .3043694 1
68 2 -3  3 "Acquirer" 1  .0217   2.1919        .0318  .3043694 1
68 2 -2  4 "Acquirer" 1 -.0055   -.5556        .0263  .3043694 1
68 2 -1  5 "Acquirer" 1  .0023    .2323        .0286  .3043694 1
68 2  0  6 "Acquirer" 1  .0023    .2323        .0309  .3043694 1
68 2  1  7 "Acquirer" 1 -.1009 -10.1919         -.07  .3043694 1
68 2  2  8 "Acquirer" 1 -.0086   -.8687       -.0786  .3043694 1
68 2  3  9 "Acquirer" 1  .0117   1.1818   -.06690001  .3043694 1
68 2  4 10 "Acquirer" 1 -.0088   -.8889       -.0757  .3043694 1
68 2  5 11 "Acquirer" 1 -.0037   -.3737       -.0794  .3043694 1
 3 3 -5  1 "Target"   2  .0432      4.5        .0432  .3158921 1
 3 3 -4  2 "Target"   2   .057   5.9375        .1002  .3158921 1
 3 3 -3  3 "Target"   2  .0164   1.7083        .1166  .3158921 1
 3 3 -2  4 "Target"   2  .0013    .1354        .1179  .3158921 1
 3 3 -1  5 "Target"   2  .0079    .8229        .1258  .3158921 1
 3 3  0  6 "Target"   2  .0198   2.0625    .14559999  .3158921 1
 3 3  1  7 "Target"   2  .0157   1.6354        .1613  .3158921 1
 3 3  2  8 "Target"   2   .015   1.5625        .1763  .3158921 1
 3 3  3  9 "Target"   2  .0092    .9583        .1855  .3158921 1
 3 3  4 10 "Target"   2  .0248   2.5833        .2103  .3158921 1
 3 3  5 11 "Target"   2  -.011  -1.1458        .1993  .3158921 1
69 3 -5  1 "Acquirer" 1  .0782    9.093        .0782  .3158921 1
69 3 -4  2 "Acquirer" 1 -.0195  -2.2674        .0587  .3158921 1
69 3 -3  3 "Acquirer" 1 -.0019   -.2209        .0568  .3158921 1
69 3 -2  4 "Acquirer" 1  .0117   1.3605        .0685  .3158921 1
69 3 -1  5 "Acquirer" 1 -.0221  -2.5698        .0464  .3158921 1
69 3  0  6 "Acquirer" 1 -.0775  -9.0116  -.031100005  .3158921 1
69 3  1  7 "Acquirer" 1  .0339   3.9419   .002799995  .3158921 1
69 3  2  8 "Acquirer" 1  .0074    .8605   .010199996  .3158921 1
69 3  3  9 "Acquirer" 1  .0071    .8256   .017299995  .3158921 1
69 3  4 10 "Acquirer" 1  .0156    1.814   .032899994  .3158921 1
69 3  5 11 "Acquirer" 1 -.0071   -.8256   .025799993  .3158921 1
 4 4 -5  1 "Target"   2  .0111    .2741        .0111 1.0607027 1
 4 4 -4  2 "Target"   2  .0036    .0889        .0147 1.0607027 1
 4 4 -3  3 "Target"   2 -.0168   -.4148 -.0021000002 1.0607027 1
 4 4 -2  4 "Target"   2  .0059    .1457        .0038 1.0607027 1
 4 4 -1  5 "Target"   2   .002    .0494        .0058 1.0607027 1
 4 4  0  6 "Target"   2  .4035    9.963        .4093 1.0607027 1
 4 4  1  7 "Target"   2  .0184    .4543        .4277 1.0607027 1
 4 4  2  8 "Target"   2  .0016    .0395        .4293 1.0607027 1
 4 4  3  9 "Target"   2 -.0077   -.1901        .4216 1.0607027 1
 4 4  4 10 "Target"   2 -.0051   -.1259        .4165 1.0607027 1
 4 4  5 11 "Target"   2  .0189    .4667        .4354 1.0607027 1
70 4 -5  1 "Acquirer" 1  .0142       .5        .0142 1.0607027 1
70 4 -4  2 "Acquirer" 1  .0083    .2923        .0225 1.0607027 1
70 4 -3  3 "Acquirer" 1  .0135    .4754   .036000002 1.0607027 1
70 4 -2  4 "Acquirer" 1 -.0012   -.0423        .0348 1.0607027 1
70 4 -1  5 "Acquirer" 1 -.0072   -.2535        .0276 1.0607027 1
70 4  0  6 "Acquirer" 1   .102   3.5915        .1296 1.0607027 1
70 4  1  7 "Acquirer" 1  .0062    .2183        .1358 1.0607027 1
70 4  2  8 "Acquirer" 1 -.0849  -2.9894        .0509 1.0607027 1
70 4  3  9 "Acquirer" 1  .0065    .2289        .0574 1.0607027 1
70 4  4 10 "Acquirer" 1  .0084    .2958        .0658 1.0607027 1
70 4  5 11 "Acquirer" 1 -.0015   -.0528        .0643 1.0607027 1
 5 5 -5  1 "Target"   2  .0052    .7429        .0052 .22611295 0
 5 5 -4  2 "Target"   2  .0066    .9429        .0118 .22611295 0
 5 5 -3  3 "Target"   2 -.0002   -.0286        .0116 .22611295 0
 5 5 -2  4 "Target"   2  .0034    .4857         .015 .22611295 0
 5 5 -1  5 "Target"   2  -.009  -1.2857   .006000001 .22611295 0
 5 5  0  6 "Target"   2  .0055    .7857        .0115 .22611295 0
 5 5  1  7 "Target"   2  .0685   9.7857          .08 .22611295 0
 5 5  2  8 "Target"   2  .0112      1.6        .0912 .22611295 0
 5 5  3  9 "Target"   2 -.0047   -.6714        .0865 .22611295 0
 5 5  4 10 "Target"   2  .0005    .0714         .087 .22611295 0
 5 5  5 11 "Target"   2  .0215   3.0714        .1085 .22611295 0
71 5 -5  1 "Acquirer" 1 -.0078    -.624       -.0078 .22611295 0
end
label values nfirm nfirm
label def nfirm 1 "Acquirer", modify
label def nfirm 2 "Target", modify
label values premium_20d_cat lblpremium
label def lblpremium 0 "Low Premium", modify
label def lblpremium 1 "High Premium", modify

The effect of financial crisis 2008 on firm productivity

I do not know if this is the right place to ask.

I have firm-level panel data for a European country. I would like to estimate the effect of the 2008 financial crisis on firm productivity. I would like to use a difference-in-differences method, but I cannot think of a control country for this type of estimation.

Is there any econometric or statistical method for estimating the effect of the 2008 financial crisis on firm productivity?

Please share your thoughts!

Thanks in advance!

Exporting output to MS Excel

Dear All,
Any help on how to export this output to Excel? The Copy and Copy Table options are not giving me what I want.

Code:
. dtcpov need96up need04up need10up , pline(1) alpha(2) appr(dag)  cbias(boot)
ESTIMATION IN PROGRESS
. . . . . . . . . .    10 %
. . . . . . . . . .    20 %
. . . . . . . . . .    30 %
. . . . . . . . . .    40 %
. . . . . . . . . .    50 %
. . . . . . . . . .    60 %
. . . . . . . . . .    70 %
. . . . . . . . . .    80 %
. . . . . . . . . .    90 %
. . . . . . . . . .   100 %
END
- Decomposition of total poverty into transient and chronic components.     
- Duclos, Araar and Giles (2006) approach.      
Poverty line      :        1.00
alpha             :        2.00
# of observations :       11813
# of periods      :           3
----------------------------------------------------------------------------------
Bias            |      With bias correction      |     Without bias correction
----------------+--------------------------------+--------------------------------
Components      |       Estimate            STE  |       Estimate            STE
----------------+--------------------------------+--------------------------------
Gamma_1         |           0.264           0.003|           0.264           0.003
C_alpha         |           0.089           0.001|           0.097           0.001
----------------+--------------------------------+--------------------------------
Transient       |           0.031           0.001|           0.023           0.000
Chronic         |           0.353           0.003|           0.362           0.003
----------------+--------------------------------+--------------------------------
Total           |           0.384           0.003|           0.384           0.003
----------------------------------------------------------------------------------
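If dtcpov leaves its results in r() or e() (check with return list / ereturn list right after it runs), putexcel (Stata 13+) can write them to a workbook; the matrix name r(est) below is hypothetical, so substitute whatever the command actually saves:

Code:
dtcpov need96up need04up need10up, pline(1) alpha(2) appr(dag) cbias(boot)
return list                            // see what the command saves
putexcel set results.xlsx, replace
putexcel A1 = matrix(r(est)), names    // r(est) is a placeholder name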
Thanks,
Dapel

Make a table with specific outcome

Hi,

I am trying to make a table containing specific numbers from my output.

On a daily basis, I use tabstat command to analyze data, using Stata 14.1.
The commands I often use are something like this:
Code:
tabstat var2 if var1 == "1" | var1 == "2", stat(n) by(var1)
Then the result would look like this:
Code:
            var1 |         N
-----------------+----------
               1 |      4145
               2 |       189
-----------------+----------
           Total |      4334
----------------------------
* var1 is a string variable and contains more than two distinct values, not just "1" and "2"

Is there any way to make a table with the numbers from the aforementioned tabstat command, such as 4145 and 189?
FYI, I run more than one command. Since tabstat is not an estimation command, I don't think I can use estimates store. The tabout command seems nice, but I need the specific numbers from the tabstat output.
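One route (a sketch): tabstat's save option leaves one r() matrix per group, from which the individual numbers can be pulled:

Code:
tabstat var2 if var1 == "1" | var1 == "2", stat(n) by(var1) save
return list                  // r(Stat1), r(Stat2), ... one matrix per group of var1
matrix n1 = r(Stat1)         // N for var1 == "1"
matrix n2 = r(Stat2)         // N for var1 == "2"
display n1[1,1] "   " n2[1,1]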

Thank you in advance.

K

copying additional packages to server without internet access

Hi there

I'm trying to install a few packages on Stata 14 MP (Windows Server) or Stata 12 IC on a Linux box.

Neither machine lets me use -ssc install-.

Is there a way to download the packages on my local PC and copy the files into the correct folders on the server, or to set up a path on the server that links to the packages I've already downloaded?
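One approach that should work (a sketch; the paths and the package name are hypothetical): fetch the package on a connected PC, copy the files across, and then either point the server's adopath at the copied tree or run net install against a local directory that holds the package's .pkg and .ado files:

Code:
* on the server, after copying the files over:
adopath + "D:\stata-extras\plus"                       // use the copied ado tree directly
* or, if the raw package files (.pkg, .ado, .sthlp) were copied:
net install somepackage, from("D:\stata-extras\downloads")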

Cheers
Dan

mimrgns and emargins - average marginal effects the same as coefficient values

Dear all,

I have run a logistic regression model on mi data and now I would like to get average marginal effects.

I have tried using both the mimrgns command and the emargins program, but both return average marginal effects that are identical to the original coefficient values. Please see my code and output below:

1. Example with mimrgns:

mi estimate: logit th ib2.cohort i.fert c.stat c.edu i.sex1
Output:
------------------------------------------------------------------------------
          th |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      cohort |  (ref. 2)
   1. Cohort1|  -.0611989   .0289399    -2.11   0.036    -.1182399    -.004158
   3. Cohort3|   .6489638   .0336769    19.27   0.000     .5825208    .7154069
        fert |  (ref. 1)
          2. |  -.3343231   .0612948    -5.45   0.000    -.4555199   -.2131262
          3. |  -.3815099    .077192    -4.94   0.000    -.5355125   -.2275072
          4. |     -.5661    .0702629   -8.06   0.000    -.7053152   -.4268849
          5. |  -.5632703    .069933    -8.05   0.000    -.7020042   -.4245364
          6. |  -.7277115   .0733454    -9.92   0.000    -.8734783   -.5819447
          7. |  -.9045366   .0735842   -12.29   0.000    -1.050667   -.7584065
        stat |   .8950652   .0698831    12.81   0.000     .7563983    1.033732
         edu |   .9673386   .0453642    21.32   0.000      .877517     1.05716
      1.sex1 |   .1226577   .0234141     5.24   0.000     .0765511    .1687643
       _cons |   .4311072    .077804     5.54   0.000     .2770226    .5851919
------------------------------------------------------------------------------
mimrgns, dydx(cohort fert stat edu sex1)
Output:
Multiple-imputation estimates     Imputations     =         20
Average marginal effects          Number of obs   =     52,537
                                  Average RVI     =     0.5409
                                  Largest FMI     =     0.5388
DF adjustment:   Large sample     DF:  min        =      68.78
                                       avg        =     134.24
Within VCE type: Delta-method          max        =     258.53

Expression   : Linear prediction (log odds), predict(xb)
dy/dx w.r.t. : 1.cohort 3.cohort 2.fert 3.fert 4.fert 5.fert 6.fert 7.fert stat edu..

------------------------------------------------------------------------------
             |      dy/dx   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      cohort |
  1. Cohort 1|  -.0611989   .0289399    -2.11   0.036    -.1182399    -.004158
  3. Cohort 3|   .6489638   .0336769    19.27   0.000     .5825208    .7154069
        fert |
          2. |  -.3343231   .0612948    -5.45   0.000    -.4555199   -.2131262
          3. |  -.3815099    .077192    -4.94   0.000    -.5355125   -.2275072
          4. |     -.5661    .0702629   -8.06   0.000    -.7053152   -.4268849
          5. |  -.5632703    .069933    -8.05   0.000    -.7020042   -.4245364
          6. |  -.7277115   .0733454    -9.92   0.000    -.8734783   -.5819447
          7. |  -.9045366   .0735842   -12.29   0.000    -1.050667   -.7584065
        stat |   .8950652   .0698831    12.81   0.000     .7563983    1.033732
         edu |   .9673386   .0453642    21.32   0.000      .877517     1.05716
      1.sex1 |   .1226577   .0234141     5.24   0.000     .0765511    .1687643
------------------------------------------------------------------------------
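A hedged observation: the header above reports Expression: Linear prediction (log odds), predict(xb), so these dy/dx are on the log-odds scale, where they coincide with the coefficients. Requesting the probability scale may give what is wanted (a sketch, assuming mimrgns passes predict() through to margins):

Code:
mimrgns, dydx(cohort fert stat edu sex1) predict(pr)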


2. Example with emargins:

program emargins, eclass properties(mi)
    version 12
    args outcome
    logit th ib2.cohort i.fert c.stat c.edu i.sex1
    margins, dydx(cohort fert stat edu sex1)
end

mi estimate, cmdok: emargins 1


Output:
Multiple-imputation estimates     Imputations     =         20
Logistic regression               Number of obs   =     51,576
                                  Average RVI     =     0.5032
                                  Largest FMI     =     0.5284
DF adjustment:   Large sample     DF:  min        =      71.54
                                       avg        =     138.78
                                       max        =     239.46
Model F test:    Equal FMI        F(  11, 1732.8) =     308.70
Within VCE type: OIM              Prob > F        =     0.0000

------------------------------------------------------------------------------
          th |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      cohort |
   1. Cohort1|  -.0625422   .0291314    -2.15   0.033     -.119964   -.0051203
   3. Cohort3|   .6486531   .0335712    19.32   0.000     .5824863    .7148199
        fert |
          2. |  -.3334438   .0601941    -5.54   0.000    -.4522741   -.2146135
          3. |  -.3806243   .0770744    -4.94   0.000    -.5342863   -.2269623
          4. |  -.5691863   .0703006    -8.10   0.000    -.7084076   -.4299649
          5. |  -.5655533   .0698688    -8.09   0.000     -.704074   -.4270326
          6. |  -.7310476   .0739668    -9.88   0.000    -.8780468   -.5840485
          7. |  -.9087949   .0740891   -12.27   0.000    -1.055914   -.7616757
        stat |   .8871267   .0699085    12.69   0.000     .7484814    1.025772
         edu |   .9617701   .0448082    21.46   0.000     .8731478    1.050392
        sex1 |
   2. Female |   .1203853   .0237677     5.07   0.000     .0735649    .1672058
       _cons |   .4392243   .0789807     5.56   0.000     .2827478    .5957008
------------------------------------------------------------------------------
Can anyone please advise on why these commands do not appear to be working?
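One possible explanation, offered as a sketch rather than a confirmed diagnosis: without margins' post option, e(b) still holds the logit coefficients when the program returns, so mi estimate pools the coefficients instead of the marginal effects. The variant below adds post:

Code:
program emargins, eclass properties(mi)
    version 12
    logit th ib2.cohort i.fert c.stat c.edu i.sex1
    margins, dydx(cohort fert stat edu sex1) post
end

mi estimate, cmdok: emargins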

Thank you in advance.


Mollie Bourne

disable automatic scrolling in output window

Hi there

When I'm running a long job I sometimes like to scroll up to view earlier results in the output window. The trouble with this is that the output window jumps to the latest results as they arrive - is there any way to disable this temporarily?

Adjusted odds ratio!

Dear All,

Could anyone share the Stata command to generate an adjusted odds ratio? I could not find the command. Also, what is the use of an adjusted odds ratio, and what is the difference between an odds ratio and an adjusted odds ratio?
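For what it's worth, there is no separate command: an adjusted odds ratio is simply the odds ratio for an exposure from a logistic model that also includes the adjustment covariates, while the crude odds ratio comes from the model without them. A sketch with hypothetical variable names:

Code:
logistic outcome exposure                  // crude (unadjusted) odds ratio
logistic outcome exposure age i.sex        // odds ratio adjusted for age and sex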

Looking forward to your kind help and support.

Thank you.

Respectfully,

Converting EFA and Cluster Analysis Data to Multi-Dimensional Scaling in Stata

I am doing research on the factors of student success using a 40-question survey composed of 5-point Likert-scale questions. I will be using the same dataset for exploratory factor analysis with factor rotation (EFA), K-cluster analysis (CA), and multidimensional scaling (MDS). I have already completed the design in Stata for the EFA and CA, and have successfully done a practice run with dummy data.

I am now having difficulty converting the same dataset (of factors) for use with MDS. Basically, I am looking to start by creating a 40x40 correlation coefficient matrix and then squaring its entries. I know the correlate/pwcorr commands and how to square values separately, but I just can't find the appropriate combination of commands for doing that and then passing the result on to the mds command for the MDS solution and map. Is this the proper way, or am I going in the wrong direction?
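A sketch of one possible route (the variable names q1-q40 are hypothetical, and the s2d() choice is an assumption): build the correlation matrix, square it elementwise, and hand the result to mdsmat, which accepts a proximity matrix directly:

Code:
correlate q1-q40
matrix R = r(C)
mata: st_matrix("R2", st_matrix("R"):^2)        // elementwise square
local rn : rownames R
matrix rownames R2 = `rn'
matrix colnames R2 = `rn'
mdsmat R2, shape(full) s2d(standard)            // treat entries as similarities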

Thank you so much for any help.

Jose

Getting around listwise deletion

Hi,

How can I run a regression without Stata dropping observations with missing values? Basically, I want to get around the listwise-deletion default. For example, SAS has a way to skip missing values. Do we have something similar in Stata?
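There is no regress option that keeps incomplete observations, but multiple imputation is the standard alternative to listwise deletion; a minimal sketch with hypothetical variable names:

Code:
mi set mlong
mi register imputed x1 x2                       // variables with missing values
mi impute chained (regress) x1 x2 = y, add(20) rseed(12345)
mi estimate: regress y x1 x2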

Thanks!
Sabrina