
Multiple m:1 merges

Hi all,

I'll try to keep this as brief as possible, and I don't know whether -dataex- is even needed here: basically, I successfully used -merge m:1- at the beginning of my code for a panel dataset, then did a bunch of work on it, and now towards the end I have to do another -merge m:1- with a different dataset from the first one.

But when I run -merge m:1- the second time, I get the error "variable _merge already defined", which makes sense given the earlier merge. Is there some easy workaround I am not aware of, or a way to do multiple m:1 merges at once? Thanks.
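A hedged sketch of the standard workarounds (the key variable id and the dataset seconddata.dta are made-up placeholder names):

Code:
* option 1: drop the flag left by the first merge before merging again
drop _merge
merge m:1 id using seconddata

* option 2: have merge write its result to a differently named variable
merge m:1 id using seconddata, generate(_merge2)

* option 3: skip the flag variable entirely
merge m:1 id using seconddata, nogenerate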

Transmitting the value of a variable to a new variable

Hi users,

Considering the example below, I wonder whether it is possible to generate a new variable (date2) that, for each observation (ISIN), carries the value of date from the row whose ISIN2 matches that ISIN. I am not sure which commands would produce the result shown in Table 1.

I would be grateful if someone could help me.

Example

Code:
clear
input str12(ISIN ISIN2) int date
"GB0001771426" "GB0001771426" 20463
"GB0001771426" "NL0009739416" 20494
"GB0001771426" "SE0000103814" 20521
"GB0001771426" "PLTAURN00011" 20489
"GB0001771426" "" .
"GB0001771426" "" .
"GB0001771426" "" .
"GB0001771426" "" .
"NL0009739416" "" .
"NL0009739416" "" .
"NL0009739416" "" .
"NL0009739416" "" .
"NL0009739416" "" .
"NL0009739416" "" .
"NL0009739416" "" .
"NL0009739416" "" .
"NL0009739416" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"PLTAURN00011" "" .
"SE0000103814" "" .
"SE0000103814" "" .
"SE0000103814" "" .
"SE0000103814" "" .
"SE0000103814" "" .
"SE0000103814" "" .
"SE0000103814" "" .
"SE0000103814" "" .
"SE0000103814" "" .
"SE0000103814" "" .
end

Table1
ISIN           date2 (new)    ISIN2          date
GB0001771426 01/10/2016 GB0001771426 01/10/2016
GB0001771426 01/10/2016 NL0009739416 02/10/2016
GB0001771426 01/10/2016 SE0000103814 03/08/2016
GB0001771426 01/10/2016 PLTAURN00011 02/05/2016
GB0001771426 01/10/2016
GB0001771426 01/10/2016
GB0001771426 01/10/2016
GB0001771426 01/10/2016
NL0009739416 02/10/2016
NL0009739416 02/10/2016
NL0009739416 02/10/2016
NL0009739416 02/10/2016
NL0009739416 02/10/2016
NL0009739416 02/10/2016
NL0009739416 02/10/2016
NL0009739416 02/10/2016
NL0009739416 02/10/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
PLTAURN00011 02/05/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
SE0000103814 03/08/2016
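One approach that matches Table 1 (a hedged sketch; it assumes each code appears at most once in ISIN2, so the lookup is unique):

Code:
preserve
keep if ISIN2 != ""
keep ISIN2 date
rename (ISIN2 date) (ISIN date2)   // build a lookup: ISIN2 -> its date
tempfile lookup
save `lookup'
restore
merge m:1 ISIN using `lookup', nogenerate keep(master match)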

Limit on number of items

Fellow Statalisters,

What is the maximum number of items that can be placed in a LIST element in Stata dialogs?
Based on my experiments it is 150 items (sadly, four short of what I need), but I want to be sure it doesn't depend on the length of the items, the Stata version or flavor, or Unicode content.

Where can I find this and other limits applicable to dialog programming online?
(Ideally by version, if these limits change between releases.)

Thank you, Sergiy Radyakin

Highlighting bug in doeditor

The following is observed in Stata 15.0 for Windows (see the screenshot below).

The do-file editor is confused by an unmatched closing curly brace that appears in the program text as part of a string constant. Interestingly, it is not confused when the brace is part of a plain double-quoted string, only when it is part of a compound double-quoted string.

The problem manifests itself as missing maroon coloring on line 8 and broken code folding from the point where the brace occurs.

The problem was discovered in a large Mata file, where a similar occurrence near the top of the file renders the region (folding) marks wrong throughout the rest of the file. The screenshot below is a trivial reproduction example.

While we are on the topic of code folding, here is a wish for Stata 16: a "collapse to definitions" hotkey in the do-file editor. Collapse to definitions folds all code to the level of definitions (hiding the implementations of the programs in the screenshot below but leaving the declarations visible); it is Ctrl+M, O in Visual Studio. I believe Scintilla has very powerful code-folding control, so this should be possible as long as the declarations can be identified as such. Thank you!

Best, Sergiy Radyakin



[screenshot: do-file editor showing the broken highlighting]

Code:
program define test
    // a } inside a plain double-quoted string highlights correctly
    display "}"
    display "a"
end

program define test2    // renamed so both programs can be defined in one session
    // the unmatched } inside a compound-quoted string breaks highlighting
    // and code folding from here on
    display `"}"'
    display `"a"'
end

ICPSR-style codebook creation in a Word doc?

Has anyone run across a command that produces an ICPSR-style codebook in Word format? Something that looks roughly like the attached screenshot for each variable in a dataset? It needs to include the variable name, variable label, type, actual values, value labels, frequencies, etc.

I'm aware of -codebook-, but there's no great way to get its results into a Word table without lots of post-paste formatting. I'm also aware of -codebookout- from SSC, which saves an Excel file but lacks frequencies.

Since I'm on Stata 15 and will need to do this more than once, I know I can roll my own solution using -putdocx-. But before I fall down that particular custom-code rabbit hole, I figured I'd ask around to see if anyone smarter than me has a better idea.


[screenshot: sample ICPSR codebook page]
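For reference, the skeleton of the -putdocx- route might look like this (a minimal sketch, not a finished codebook: value labels, formatting, and string handling are left out, and the loop over _all is only illustrative):

Code:
putdocx begin
foreach v of varlist _all {
    putdocx paragraph, style(Heading2)
    putdocx text ("`v' -- `: variable label `v'' (`: type `v'')")
    capture quietly tabulate `v', matcell(freq)   // one-way frequencies
    if !_rc {
        putdocx table t_`v' = matrix(freq)        // counts only; values/labels still to add
    }
}
putdocx save codebook.docx, replace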

Very high coefficient 2SLS

Hi

I have run a 2SLS regression with self-reported health (excellent, very good, good, fair, poor) as the dependent variable. My independent variables include exercise (ex1 below). The results show a very high coefficient on exercise. Is there something I've done wrong? This seems somewhat strange to me.

My results are shown below.

Code:

. ivregress 2sls  W8GENA (ex1 = W2ExPEYP W6FriendNumYP) W8DLOCUS sex  eth1 W8EVERMAR W8DIN
> CW W8WRKHRSA   wksearly W8DDEGP  W8SOCIALMED W1tvYP W1fameatYP W8AUDIT2 W8SLEEP2 W5agebd
> 10mum educp, first

First-stage regressions
-----------------------

                                                Number of obs     =      1,608
                                                F(  17,   1590)   =       8.68
                                                Prob > F          =     0.0000
                                                R-squared         =     0.0849
                                                Adj R-squared     =     0.0751
                                                Root MSE          =     0.5821

-------------------------------------------------------------------------------
          ex1 |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
--------------+----------------------------------------------------------------
     W8DLOCUS |  -.0293182    .008982    -3.26   0.001     -.046936   -.0117005
          sex |  -.1802063   .0322194    -5.59   0.000    -.2434034   -.1170093
         eth1 |  -.1202028   .0483257    -2.49   0.013    -.2149916    -.025414
    W8EVERMAR |  -.0049658    .049327    -0.10   0.920    -.1017187    .0917871
      W8DINCW |  -.0008338   .0003018    -2.76   0.006    -.0014258   -.0002418
    W8WRKHRSA |    .044626   .0484779     0.92   0.357    -.0504613    .1397134
     wksearly |   -.005298   .0081444    -0.65   0.515    -.0212728    .0106768
      W8DDEGP |  -.0153314   .0317504    -0.48   0.629    -.0776085    .0469456
  W8SOCIALMED |   .0034061   .0059422     0.57   0.567    -.0082493    .0150615
       W1tvYP |  -.0965746   .0269797    -3.58   0.000    -.1494941   -.0436551
   W1fameatYP |   .0016477   .0157384     0.10   0.917    -.0292224    .0325178
     W8AUDIT2 |   .0001801   .0139301     0.01   0.990    -.0271432    .0275035
     W8SLEEP2 |  -.0092335   .0141343    -0.65   0.514    -.0369573    .0184904
 W5agebd10mum |  -.0038787   .0256409    -0.15   0.880    -.0541722    .0464148
        educp |   .0032174   .0346187     0.09   0.926    -.0646858    .0711205
     W2ExPEYP |  -.1093906    .031162    -3.51   0.000    -.1705135   -.0482676
W6FriendNumYP |   .0606124   .0139416     4.35   0.000     .0332665    .0879582
        _cons |   3.259722    .248623    13.11   0.000     2.772059    3.747386
-------------------------------------------------------------------------------


Instrumental variables (2SLS) regression          Number of obs   =      1,608
                                                  Wald chi2(16)   =     171.32
                                                  Prob > chi2     =     0.0000
                                                  R-squared       =          .
                                                  Root MSE        =     .92091

------------------------------------------------------------------------------
      W8GENA |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         ex1 |  -.9303824   .2708754    -3.43   0.001    -1.461288   -.3994764
    W8DLOCUS |   .0868544   .0164686     5.27   0.000     .0545766    .1191323
         sex |  -.0104595   .0771549    -0.14   0.892    -.1616802    .1407613
        eth1 |  -.1772566   .0830206    -2.14   0.033     -.339974   -.0145393
   W8EVERMAR |   .0303651    .078027     0.39   0.697     -.122565    .1832953
     W8DINCW |  -.0015747   .0005072    -3.10   0.002    -.0025687   -.0005806
   W8WRKHRSA |   .1748691   .0769484     2.27   0.023     .0240531    .3256851
    wksearly |  -.0181418   .0130488    -1.39   0.164    -.0437169    .0074333
     W8DDEGP |  -.2017756   .0504776    -4.00   0.000    -.3007099   -.1028413
 W8SOCIALMED |   .0044162   .0094189     0.47   0.639    -.0140446     .022877
      W1tvYP |  -.0358402   .0510921    -0.70   0.483    -.1359788    .0642984
  W1fameatYP |  -.0200314   .0248962    -0.80   0.421     -.068827    .0287641
    W8AUDIT2 |   .0464691   .0220342     2.11   0.035     .0032829    .0896553
    W8SLEEP2 |    -.10722   .0224168    -4.78   0.000    -.1511561   -.0632838
W5agebd10mum |  -.0709148   .0405905    -1.75   0.081    -.1504707    .0086411
       educp |   .0155376   .0546157     0.28   0.776    -.0915071    .1225824
       _cons |   4.933059   .9799335     5.03   0.000     3.012425    6.853694
------------------------------------------------------------------------------
Instrumented:  ex1
Instruments:   W8DLOCUS sex eth1 W8EVERMAR W8DINCW W8WRKHRSA wksearly
               W8DDEGP W8SOCIALMED W1tvYP W1fameatYP W8AUDIT2 W8SLEEP2
               W5agebd10mum educp W2ExPEYP W6FriendNumYP

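One diagnostic that may help readers assess the size of the coefficient is the strength of the excluded instruments, available right after -ivregress-:

Code:
estat firststage   // first-stage F for ex1; a small value (rule of thumb: below 10) suggests weak instruments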

Gravity model, FoT (frequency of trade)

Dear all,

Hello, Sujin here.

I am working with panel trade data between 100 importing and exporting countries.

I have a trade indicator that is 1 if trade value > 0 and 0 if trade value = 0.

I would like the accumulated (cumulative) trade frequency for each country pair.

For example, if the US and Canada traded only in 2000 and 2001:

import, export, year, trade (0 or 1), accumulated frequency
US, Canada, 2000, 1, 1
US, Canada, 2001, 1, 2
US, Canada, 2002, 0, 2

I want to create this accumulated frequency but do not know the command. If anyone can share how, I would be really grateful.

Thank you
Sujin
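A minimal sketch of one way to do this (hedged; it assumes variables named importer, exporter, year, and trade):

Code:
sort importer exporter year
by importer exporter: generate accfreq = sum(trade)   // running count of trading years within each pair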

can't open the itsa command example data set

Hi, everyone,

I was trying to explore the -itsa- command by typing help itsa. In the help file, under Examples, I clicked "use cigsales_single, clear" but could not load the dataset. I also tried sysuse with the file name, but still could not bring in the data.

Could anyone please let me know how to solve this?

Thank you so much!
Difei
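In case it helps: the example datasets appear to ship as ancillary files of the package, so fetching them into the current working directory first should work (hedged; this assumes itsa was installed from SSC):

Code:
ssc install itsa, all replace   // -all- also copies the ancillary files (the example datasets)
use cigsales_single, clear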

Generate a conditional sum of variables


Hi all! I am new to Stata and have a question.
I want to calculate the sum of WithdrawJan, WithdrawFeb, WithdrawMar, WithdrawApr, WithdrawMay, WithdrawJun, and WithdrawJul by year (2005, 2006, 2007, and 2008). I want the final outcome to be a single value per month, in a single column, such as:
WUDSpriID eFactspriID Year Month Withdraw
ID1 ID1 2005 Jan XXX
ID1 ID1 2005 Feb XXX
ID1 ID1 2005 Mar XXX
ID1 ID1 2005 Apr XXX
ID1 ID1 2005 May XXX
ID1 ID1 2005 Jun XXX
ID1 ID1 2005 Jul XXX

ID1 ID1 2006 Jan XXX
ID1 ID1 2006 Feb XXX
ID1 ID1 2006 Mar XXX
ID1 ID1 2006 Apr XXX
ID1 ID1 2006 May XXX
ID1 ID1 2006 Jun XXX
ID1 ID1 2006 Jul XXX

ID1 ID1 2007 Jan XXX
ID1 ID1 2007 Feb XXX
ID1 ID1 2007 Mar XXX
ID1 ID1 2007 Apr XXX
ID1 ID1 2007 May XXX
ID1 ID1 2007 Jun XXX
ID1 ID1 2007 Jul XXX

My first step was to generate Withdraw_Jan = sum(WithdrawJan) if Year = 2005 and then apply the same command for 2006-2008, but Stata always shows an invalid-syntax error. Could you please help me fix that error? And, if possible, could you suggest how to transform the dataset below into the format I want above?
The version I use is Stata 14. Thank you very much!

Here is my dataset:
[screenshot: wide dataset with monthly Withdraw columns]
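Two hedged notes: the invalid-syntax error most likely comes from if Year = 2005, which needs the equality operator (if Year == 2005); and the sketch below assumes one row per WUDSpriID/eFactspriID/Year with monthly columns WithdrawJan-WithdrawJul:

Code:
* from wide monthly columns to the long layout shown above
reshape long Withdraw, i(WUDSpriID eFactspriID Year) j(Month) string
* if several rows share an ID-year-month, total them into a single value
collapse (sum) Withdraw, by(WUDSpriID eFactspriID Year Month)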


Getting Modification Indices using FIML in SEM

Hi,

I have recently started using Stata for my SEM analysis; before, I used AMOS. I know AMOS does not give you modification indices when using FIML to handle missing values. What about Stata? Can I get MIs using FIML (method(mlmv)) in Stata?

I did test this, but I got an odd outcome for the modification indices:

Code:
. estat mindices, showpclass(merrvar) min(1)

Modification indices

----------------------------------------------------------------------------
                             |                                      Standard
                             |        MI     df   P>MI        EPC        EPC
-----------------------------+----------------------------------------------
      cov(e.ILRein,e.ILRecol)|         .      1      .          .          .
      cov(e.ILRein,e.ILBreed)|         .      1      .          .          .
       cov(e.ILRein,e.CoRein)|         .      1      .          .          .
      cov(e.ILRein,e.CoRecol)|         .      1      .          .          .
      cov(e.ILRein,e.CoBreed)|         .      1      .          .          .
     cov(e.ILRein,e.DeerRisk)|         .      1      .          .          .
     cov(e.ILRein,e.WildRisk)|         .      1      .          .          .
     cov(e.ILRein,e.LiveRisk)|         .      1      .          .          .
    cov(e.ILRein,e.ChildRisk)|         .      1      .          .          .
       cov(e.ILRein,e.PPRisk)|         .      1      .          .          .
   cov(e.ILRein,e.SafetyRisk)|         .      1      .          .          .
       cov(e.ILRein,e.Danger)|         .      1      .          .          .
          cov(e.ILRein,e.Bad)|         .      1      .          .          .
And here is the code and model outcome:

Code:
. sem (ReInt -> ILRein, ) (ReInt -> ILRecol, ) (ReInt -> ILBreed, ) (ReInt -> CoRein, ) (ReInt -> CoRecol, ) (ReInt -> CoBreed, ) (PRA -> DeerRisk, ) (PRA -> WildRisk, ) (PRA -> LiveRisk, ) (PRH -> 
> ChildRisk, ) (PRH -> PPRisk, ) (PRH -> SafetyRisk, ) (ATT -> Danger, ) (ATT -> Bad, ) (ATT -> Harmful, ) (ATT -> Neg, ) (EMP -> Joy, ) (EMP -> Surp, ) (EMP -> Int, ) (EMP -> Awe, ) (EMN -> Fear, )
>  (EMN -> Anger, ) (EMN -> Hate, ) (EMN -> Disgust, ) (EMN -> Worry, ) (TR -> Resp, ) (TR -> Risk, ), covstruct(_lexogenous, diagonal) method(mlmv) latent(ReInt PRA PRH ATT EMP EMN TR ) cov( PRA*Re
> Int PRH*ReInt PRH*PRA ATT*ReInt ATT*PRA ATT*PRH EMP*ReInt EMP*PRA EMP*PRH EMP*ATT EMN*ReInt EMN*PRA EMN*PRH EMN*ATT EMN*EMP TR*ReInt TR*PRA TR*PRH TR*ATT TR*EMP TR*EMN) nocapslatent
(24 all-missing observations excluded)

Endogenous variables

Measurement:  ILRein ILRecol ILBreed CoRein CoRecol CoBreed DeerRisk WildRisk LiveRisk ChildRisk PPRisk SafetyRisk Danger Bad Harmful Neg Joy Surp Int Awe Fear Anger Hate Disgust Worry Resp Risk

Exogenous variables

Latent:       ReInt PRA PRH ATT EMP EMN TR

Fitting saturated model:

Iteration 0:   log likelihood = -275347.57  
Iteration 1:   log likelihood = -273836.29  
Iteration 2:   log likelihood = -273677.84  
Iteration 3:   log likelihood = -273673.58  
Iteration 4:   log likelihood = -273673.57  

Fitting baseline model:

Iteration 0:   log likelihood =  -372410.1  
Iteration 1:   log likelihood = -372390.03  
Iteration 2:   log likelihood = -372390.02  

Fitting target model:

Iteration 0:   log likelihood =  -289069.1  
Iteration 1:   log likelihood = -287233.42  
Iteration 2:   log likelihood = -285447.14  
Iteration 3:   log likelihood = -284549.52  
Iteration 4:   log likelihood = -284513.09  
Iteration 5:   log likelihood = -284512.73  
Iteration 6:   log likelihood = -284512.73  

Structural equation model                       Number of obs     =      7,726
Estimation method  = mlmv
Log likelihood     = -284512.73

 ( 1)  [ILRein]ReInt = 1
 ( 2)  [DeerRisk]PRA = 1
 ( 3)  [ChildRisk]PRH = 1
 ( 4)  [Danger]ATT = 1
 ( 5)  [Joy]EMP = 1
 ( 6)  [Fear]EMN = 1
 ( 7)  [Resp]TR = 1
----------------------------------------------------------------------------------
                 |                 OIM
                 |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-----------------+----------------------------------------------------------------
Measurement      |
  ILRein <-      |
           ReInt |          1  (constrained)
           _cons |   2.367832   .0157641   150.20   0.000     2.336935    2.398729
  ---------------+----------------------------------------------------------------
  ILRecol <-     |
           ReInt |   1.053291   .0083114   126.73   0.000     1.037001    1.069581
           _cons |   2.761155    .016473   167.62   0.000     2.728868    2.793441
  ---------------+----------------------------------------------------------------
  ILBreed <-     |
           ReInt |   1.066716   .0072229   147.69   0.000      1.05256    1.080873
           _cons |   2.487935   .0158568   156.90   0.000     2.456856    2.519013
  ---------------+----------------------------------------------------------------
  CoRein <-      |
           ReInt |   .9863733   .0075645   130.40   0.000     .9715472    1.001199
           _cons |    2.17125    .015305   141.87   0.000     2.141253    2.201247
  ---------------+----------------------------------------------------------------
  CoRecol <-     |
           ReInt |   1.063077   .0080581   131.93   0.000     1.047284    1.078871
           _cons |   2.512564   .0163285   153.88   0.000     2.480561    2.544567
  ---------------+----------------------------------------------------------------
  CoBreed <-     |
           ReInt |   1.049597   .0072204   145.37   0.000     1.035446    1.063749
           _cons |   2.296776     .01562   147.04   0.000     2.266161     2.32739
  ---------------+----------------------------------------------------------------
  DeerRisk <-    |
             PRA |          1  (constrained)
           _cons |   4.042988   .0154249   262.11   0.000     4.012756    4.073221
  ---------------+----------------------------------------------------------------
  WildRisk <-    |
             PRA |   .9673517   .0121951    79.32   0.000     .9434496    .9912537
           _cons |   4.014848   .0144888   277.10   0.000      3.98645    4.043246
  ---------------+----------------------------------------------------------------
  LiveRisk <-    |
             PRA |   1.021489   .0165212    61.83   0.000     .9891082     1.05387
           _cons |   4.220008   .0153918   274.17   0.000      4.18984    4.250175
  ---------------+----------------------------------------------------------------
  ChildRisk <-   |
             PRH |          1  (constrained)
           _cons |    3.95408   .0166497   237.49   0.000     3.921447    3.986713
  ---------------+----------------------------------------------------------------
  PPRisk <-      |
             PRH |   .9327239   .0090966   102.54   0.000     .9148949    .9505529
           _cons |   3.587606   .0165262   217.09   0.000     3.555216    3.619997
  ---------------+----------------------------------------------------------------
  SafetyRisk <-  |
             PRH |   1.082668   .0091939   117.76   0.000     1.064648    1.100688
           _cons |    3.42565   .0178095   192.35   0.000     3.390744    3.460556
  ---------------+----------------------------------------------------------------
  Danger <-      |
             ATT |          1  (constrained)
           _cons |   2.559816   .0172554   148.35   0.000     2.525996    2.593636
  ---------------+----------------------------------------------------------------
  Bad <-         |
             ATT |   1.553308    .022659    68.55   0.000     1.508897    1.597719
           _cons |    3.32363   .0198359   167.56   0.000     3.284752    3.362507
  ---------------+----------------------------------------------------------------
  Harmful <-     |
             ATT |   1.566248   .0226288    69.21   0.000     1.521896    1.610599
           _cons |   3.109494   .0201049   154.66   0.000     3.070089    3.148899
  ---------------+----------------------------------------------------------------
  Neg <-         |
             ATT |   1.565053   .0234165    66.84   0.000     1.519157    1.610948
           _cons |   3.370954     .02047   164.68   0.000     3.330834    3.411074
  ---------------+----------------------------------------------------------------
  Joy <-         |
             EMP |          1  (constrained)
           _cons |   3.049558   .0231144   131.93   0.000     3.004255    3.094862
  ---------------+----------------------------------------------------------------
  Surp <-        |
             EMP |    .320706   .0162341    19.76   0.000     .2888877    .3525243
           _cons |   4.908053    .021721   225.96   0.000     4.865481    4.950626
  ---------------+----------------------------------------------------------------
  Int <-         |
             EMP |    .751333   .0179425    41.87   0.000     .7161663    .7864998
           _cons |    4.58208   .0225093   203.56   0.000     4.537963    4.626197
  ---------------+----------------------------------------------------------------
  Awe <-         |
             EMP |   .8267311   .0188626    43.83   0.000     .7897611    .8637011
           _cons |    4.26194   .0235476   180.99   0.000     4.215788    4.308093
  ---------------+----------------------------------------------------------------
  Fear <-        |
             EMN |          1  (constrained)
           _cons |   3.491788   .0230882   151.24   0.000     3.446536     3.53704
  ---------------+----------------------------------------------------------------
  Anger <-       |
             EMN |   1.401427   .0200105    70.03   0.000     1.362208    1.440647
           _cons |   2.704232   .0240183   112.59   0.000     2.657157    2.751307
  ---------------+----------------------------------------------------------------
  Hate <-        |
             EMN |    1.24744   .0186942    66.73   0.000       1.2108     1.28408
           _cons |   2.317807   .0221732   104.53   0.000     2.274348    2.361266
  ---------------+----------------------------------------------------------------
  Disgust <-     |
             EMN |   1.356593   .0200441    67.68   0.000     1.317308    1.395879
           _cons |   2.441522   .0236347   103.30   0.000     2.395198    2.487845
  ---------------+----------------------------------------------------------------
  Worry <-       |
             EMN |   1.209849   .0199619    60.61   0.000     1.170725    1.248974
           _cons |   3.801556   .0250947   151.49   0.000     3.752372    3.850741
  ---------------+----------------------------------------------------------------
  Resp <-        |
              TR |          1  (constrained)
           _cons |   2.590681   .0141721   182.80   0.000     2.562904    2.618458
  ---------------+----------------------------------------------------------------
  Risk <-        |
              TR |   .9981851   .0095531   104.49   0.000     .9794613    1.016909
           _cons |   2.744144   .0147319   186.27   0.000      2.71527    2.773018
-----------------+----------------------------------------------------------------
    var(e.ILRein)|   .3682032   .0068186                      .3550786    .3818129
   var(e.ILRecol)|   .3739133   .0071168                      .3602216    .3881254
   var(e.ILBreed)|   .1780845    .003995                       .170424    .1860894
    var(e.CoRein)|   .2996124   .0057586                      .2885356    .3111144
   var(e.CoRecol)|   .3061637   .0059993                      .2946282    .3181509
   var(e.CoBreed)|   .1767298   .0039876                      .1690844    .1847208
  var(e.DeerRisk)|   .6267254   .0168351                      .5945827    .6606057
  var(e.WildRisk)|   .4837604   .0146835                      .4558206    .5134127
  var(e.LiveRisk)|   .5713034   .0177459                      .5375597    .6071652
 var(e.ChildRisk)|   .3824045   .0100904                      .3631303    .4027017
    var(e.PPRisk)|   .5689251   .0117176                      .5464164     .592361
var(e.SafetyRisk)|   .3909516   .0110825                      .3698226    .4132876
    var(e.Danger)|   1.113063   .0204305                      1.073731    1.153835
       var(e.Bad)|   .4305817   .0117229                      .4082075    .4541822
   var(e.Harmful)|   .4793582   .0126248                       .455242    .5047521
       var(e.Neg)|   .5540309   .0135703                       .528062    .5812769
       var(e.Joy)|   1.137634   .0498429                      1.044021    1.239642
      var(e.Surp)|   3.182667   .0546514                      3.077334    3.291604
       var(e.Int)|   2.133382   .0503627                      2.036922     2.23441
       var(e.Awe)|   2.135592   .0543521                      2.031677    2.244822
      var(e.Fear)|   2.132374   .0378683                       2.05943    2.207901
     var(e.Anger)|   .7166806   .0179196                      .6824055    .7526773
      var(e.Hate)|   .8104349   .0179142                      .7760734    .8463177
   var(e.Disgust)|   .8014416   .0189537                      .7651408    .8394646
     var(e.Worry)|    2.02764   .0372388                      1.955951    2.101956
      var(e.Resp)|    .207582   .0092488                      .1902235    .2265244
      var(e.Risk)|     .33481   .0101944                      .3154139     .355399
       var(ReInt)|   1.542687   .0304368                       1.48417     1.60351
         var(PRA)|   1.159012   .0304945                      1.100759    1.220348
         var(PRH)|   1.712068   .0345673                       1.64564    1.781177
         var(ATT)|   .9820016   .0310121                      .9230619    1.044705
         var(EMP)|    2.85128   .0779081                      2.702599     3.00814
         var(EMN)|   1.837868   .0570265                      1.729429    1.953107
          var(TR)|   1.324677   .0260932                      1.274509    1.376818
-----------------+----------------------------------------------------------------
   cov(ReInt,PRA)|  -.6824721   .0195921   -34.83   0.000    -.7208718   -.6440723
   cov(ReInt,PRH)|  -.9486406   .0235449   -40.29   0.000    -.9947878   -.9024934
   cov(ReInt,ATT)|   .8050419   .0209692    38.39   0.000     .7639429    .8461408
   cov(ReInt,EMP)|   1.514507   .0398869    37.97   0.000      1.43633    1.592684
   cov(ReInt,EMN)|  -1.027537   .0275451   -37.30   0.000    -1.081525   -.9735501
    cov(ReInt,TR)|   1.102744   .0226364    48.72   0.000     1.058377     1.14711
     cov(PRA,PRH)|   .9746683   .0232805    41.87   0.000     .9290394    1.020297
     cov(PRA,ATT)|  -.5145831   .0171027   -30.09   0.000    -.5481037   -.4810624
     cov(PRA,EMP)|  -.7113154   .0303917   -23.40   0.000     -.770882   -.6517487
     cov(PRA,EMN)|   .7170415   .0233436    30.72   0.000     .6712889    .7627941
      cov(PRA,TR)|  -.5527369   .0179764   -30.75   0.000    -.5879699   -.5175038
     cov(PRH,ATT)|  -.6663646   .0203852   -32.69   0.000    -.7063189   -.6264104
     cov(PRH,EMP)|  -1.104646   .0369136   -29.93   0.000    -1.176995   -1.032297
     cov(PRH,EMN)|   1.092838   .0297216    36.77   0.000     1.034584    1.151091
      cov(PRH,TR)|  -.7754056   .0214292   -36.18   0.000    -.8174061   -.7334051
     cov(ATT,EMP)|   1.035356   .0324461    31.91   0.000      .971763     1.09895
     cov(ATT,EMN)|  -.7593164   .0236375   -32.12   0.000    -.8056451   -.7129877
      cov(ATT,TR)|    .625531   .0182521    34.27   0.000     .5897576    .6613043
     cov(EMP,EMN)|  -1.026022   .0414317   -24.76   0.000    -1.107227   -.9448173
      cov(EMP,TR)|   1.111355   .0332034    33.47   0.000     1.046277    1.176433
      cov(EMN,TR)|  -.8763244   .0248456   -35.27   0.000    -.9250208   -.8276281
----------------------------------------------------------------------------------
LR test of model vs. saturated: chi2(303) =  21678.33, Prob > chi2 = 0.0000
I even used the -slow- option with -mindices-; still no results:

Code:
. estat mindices, slow

Modification indices

----------------------------------------------------------------------------
                             |                                      Standard
                             |        MI     df   P>MI        EPC        EPC
-----------------------------+----------------------------------------------
Measurement                  |
  ILRein <-                  |
                         PRA |         .      1      .          .          .
                         PRH |         .      1      .          .          .
                         ATT |         .      1      .          .          .
                         EMP |         .      1      .          .          .
                         EMN |         .      1      .          .          .
                          TR |         .      1      .          .          .
  ---------------------------+----------------------------------------------
  ILRecol <-                 |
                         PRA |         .      1      .          .          .
                         PRH |         .      1      .          .          .
                         ATT |         .      1      .          .          .
                         EMP |         .      1      .          .          .
                         EMN |         .      1      .          .          .
                          TR |         .      1      .          .          .
  ---------------------------+----------------------------------------------
  ILBreed <-                 |
                         PRA |         .      1      .          .          .

xtgee AND repeated time values within panel

Hello all,

I'm hoping I can get some help understanding something. I have what I believe is a panel dataset: many firms in each period, each operating in many states. I'm performing analysis that includes fixed effects for firm, state, and time, and my preference is to use -xtgee- if possible in order to account for within-group correlation. My question is: if I perform

xtset firm time

Then I get the message that there are repeated time values within panel. I am aware this is because there are multiple observations for each firm in each time period, so firm does not uniquely identify observations within a period. Given that, is it valid to perform only:

xtset firm

and go on with my analysis?

e.g., xtgee y x1 x2 x3 i.firm i.state i.time, family(gaussian) link(identity) robust

or would this not be appropriate?

I considered instead creating unique ids for each firm-state combination, but my concern is that would in essence be tricking Stata to xtset the panelvar and timevar for me. e.g.,

xtset firmstateid time

then:

xtgee y x1 x2 x3 i.firm i.state i.time, family(gaussian) link(identity) robust


In this scenario, I don't think of the firm-state combination as a real higher-order unit the way a 'firm' or a 'state' is; rather, it is an artifact of the dataset's structure that I am identifying for analysis purposes, right? However, the second approach does adjust for within-group correlation of the firm-state observations over time (i.e., the idea that firm 1's activity, the DV, in Texas at time 1 may be correlated with firm 1's activity in Texas at time 2), which I like and which sounds like it could be right.

The first approach, by contrast, adjusts equally for correlation among all within-firm observations, regardless of state and time, although I believe it is less likely that firm 1's activity in Texas at time 1 is correlated with firm 1's activity in Alaska at time 1.

Am I thinking about this correctly? What would be the valid approach to analyze the data?

I thank you all in advance for giving this a look
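For completeness, the firm-state id mentioned above is straightforward to construct (a sketch; the variable names firm, state, and time are assumed):

Code:
egen long firmstateid = group(firm state)   // one id per firm-state combination
xtset firmstateid time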



Effect of legal change on innovation - figure

Hello StataList-ers!

Help! I need somebody! I am trying to replicate the following figure.


[figure: event-study plot of LnPat and LnCit around ENDA adoption]


What I am doing is simply a difference-in-differences design with multiple treatment groups and multiple time periods. The figure depicts the effect of ENDAs (employment nondiscrimination acts) on innovation. Several U.S. states adopted ENDAs in different years during the sample period, and I am trying to examine the before-after effect of the legal change in affected states (the treatment group) relative to the before-after change in nonaffected states. On the y-axis: "LnPat" (ln of patent counts) and "LnCit" (ln of patent citations); the x-axis shows time relative to the adoption of the law, ranging from 5 years before the adoption year (year 0) to 10 years afterwards. "Pass" is a dummy equal to 1 if the ENDA is in place in state s in a given year, and 0 otherwise. I also have dummy variables "Year minus 5", "Year minus 4", ..., "Year 0", etc. I literally have no idea how to code it. Please help! I've been reading the help files on graphs, and it's still a total mess in my mind.
Thanks heaps!

Vania
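A minimal sketch of the usual event-study recipe (hedged: the dummy names lead5-lead1 and lag0-lag10 are assumptions standing in for "Year minus 5" ... "Year 10", the hyphen ranges assume the variables are stored in that order, state is assumed to be an encoded state identifier, and coefplot is a user-written command from SSC):

Code:
areg LnPat lead5-lead1 lag0-lag10 i.year, absorb(state) vce(cluster state)
coefplot, vertical keep(lead* lag*) yline(0) ///
    xtitle("Years relative to ENDA adoption") ytitle("Effect on LnPat")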

Check whether content of two variables is the same in different columns

Hey guys!

I am looking at the following dataset:

[screenshot: dataset with shareholder and director names]

What I essentially want is to check whether the owner of the company, classified as shareholder_type "GUO 50" or "DUO 50", shares a last name with one of the directors; that is, whether the shareholder_name of a "GUO 50" or "DUO 50" row matches ANY directors_name.
So far I have managed to isolate the last names and put them next to each other, but simply comparing the two columns breaks down when the names are not aligned row by row. This is the code I used for that:
Code:
gen director_name = word(directors_name, -1)
gen shareholder_name1 = word(shareholder_name, -2)
Unfortunately I do not know where to go from here. Any ideas?
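A hedged sketch of one way forward, assuming a company identifier variable (here called companyid) groups each firm's owner and director rows:

Code:
gen owner_last = word(shareholder_name, -2) if inlist(shareholder_type, "GUO 50", "DUO 50")
* spread the owner's last name to every row of the same company;
* "" sorts first, so the last observation holds a nonmissing value if one exists
bysort companyid (owner_last): replace owner_last = owner_last[_N]
gen byte name_match = owner_last != "" & word(directors_name, -1) == owner_last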

Using Markdoc Package with a referencing software

Hello

I wonder if someone knows a way to include references in a document generated with the Stata MarkDoc package (E.F. Haghish, 2014. "MARKDOC: Stata module for literate programming," Statistical Software Components S457868, Boston College Department of Economics, revised 20 Apr 201)?

I am mostly thinking of using it with Word and LaTeX.

Right now, I put the references in manually at the end of the day, but I suspect there are smarter ways to do this.

Thanks for your help.

All the best
G.Brückmann

F statistic not obtained using oneway

Hello,

I ran the following code, and the resulting output did not include the F statistic. The unequal group sizes and the similarity of the means could be problematic. Any advice on whether that is the likely problem, or whether it is some other issue, is appreciated.

If the output is too garbled and there is a better way to present it, let me know.
Regards

Bob



Code:
. oneway offencenum why_03, tabulate

            |      Summary of offencenum
        Why |        Mean   Std. Dev.       Freq.
------------+------------------------------------
     closed |           3           0          11
        new |   3.4056604           0         106
       open |           3           0          20
------------+------------------------------------
      Total |   3.3138686   .17035961         137

                        Analysis of Variance
    Source              SS         df      MS            F     Prob > F
------------------------------------------------------------------------
Between groups      3.94704612      2   1.97352306
 Within groups               0    134            0
------------------------------------------------------------------------
    Total           3.94704612    136   .029022398

What is the command to make a bar chart, concentration curve, and concentration index for the whole population

Dear all,

I am using a DHS dataset.

I have made the bar graph, concentration curve, and concentration index for the sample data.

But I do not know the commands to make the bar chart, concentration curve, and concentration index for the whole population using the DHS dataset, that is, how to insert sampling weights into the commands for the concentration curve and concentration index.

Please help me.

Thanks
Sabbir
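A hedged sketch (assumptions throughout: -conindex- is the user-written command in mind, its rank option is rankvar(), the outcome and wealth-rank variables are named outcome and wealth, and the DHS sampling weight is v005 divided by 1,000,000):

Code:
gen double wt = v005/1000000              // standard DHS sampling weight
conindex outcome [aweight = wt], rankvar(wealth)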

Problem with updating packages (spost13 related)

Dear Community,

I am facing problems when I try to update or download packages for Stata. In particular, I wanted to check whether the -mtable- command from the spost13 package needs an update, as it takes seemingly forever to compute (I stopped the program at some point). When I tried to check, I got the following error message:


Code:
. which mtable
j:\ado\plus\m\mtable.ado
*! version 1.0.6 2014-08-14 | long freese | allow non-estimable

. findit spost13

. net from http://www.indiana.edu/~jslsoc/stata/
server refused to send file
http://www.indiana.edu/~jslsoc/stata/ either
  1)  is not a valid URL, or
  2)  could not be contacted, or
  3)  is not a Stata download site (has no stata.toc file).

current site is still http://fmwww.bc.edu/repec/bocode/s/
r(672);
I would like to update my spost13 package, since I suspect the -mtable- command is somehow damaged.

Can anybody help?

Thanks a lot in advance!

Carsten
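Possibly useful, hedged as my understanding of the situation: the jslsoc materials moved off www.indiana.edu, so pointing -net- at the new host may work:

Code:
net from https://jslsoc.sitehost.iu.edu/stata/
net install spost13_ado, replace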

Display selected category + all categories combined

Hi

I think this question has a really simple answer, but I can't for the life of me work it out or find one online.

I'm trying to create a bar chart displaying values of a continuous variable stratified by age group (three categories) and by two comparison groups. The comparison groups I'm interested in are:
  • (A) all those in one category of an ordinal variable, and
  • (B) everyone in dataset, regardless of their value for this variable.
To make it clearer, the ordinal variable in question is quintiles of an index of multiple deprivation, so I want to compare:
  • (A) those living in most deprived area (IMD=1)
  • (B) the entire population (IMD=1-5)
I'm not interested in seeing the other categories on their own (i.e. I don't want to see bars for IMD=2, IMD=3, etc).

I can't seem to find a way to specify this in syntax. If I use -if-, I get only one category. If I use by/over, I get all of them. I've tried creating a dummy variable for IMD=1, with IMD~=1 set to missing, and using the total, but this leaves the missings out of the total.

I'm sure I'm missing something really obvious. Any advice on how to write this in syntax would be very gratefully received.

Many thanks

Emily
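A sketch of the usual expand-and-append trick (hedged; it assumes variables named imd, agegroup, and outcome):

Code:
expand 2, generate(copy)                 // copy == 1 marks the duplicated rows
gen group = cond(copy, "Whole population (IMD 1-5)", "Most deprived (IMD 1)")
drop if copy == 0 & imd != 1             // among the originals, keep only IMD 1
graph bar (mean) outcome, over(agegroup) over(group)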
