Channel: Statalist

SFpanel, model(bc95)

Hello everyone, I'm using sfpanel to estimate technical efficiency (TE) with model bc95.
Code:
sfpanel lntradeij lngdpppp lngdpcapppp lndistwij landlockj infrastructure tradeagreement if countryj!="Albania" & countryj!="Zambia" & countryj!="Bangladesh", model(bc95) emean(taxburdensum, noconstant)

In this command I use emean(), and the resulting TE ranges from 0.3 to 0.8. However, if I don't use emean(), TE ranges from 0.1 to 0.9. So I'm confused about the emean() option: what does it do? And what do the usigma() and vsigma() options do?
I hope you can give me some advice. Thank you very much.
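For context: in sfpanel's bc95 model, emean() lets the mean of the truncated-normal inefficiency term depend on covariates, while usigma() and vsigma() parameterize the (log) variances of the inefficiency and idiosyncratic error components. A hedged sketch contrasting the two specifications, with a shortened covariate list taken from the post:

Code:
* bc95 with a constant mean for the inefficiency term u_it
sfpanel lntradeij lngdpppp lngdpcapppp lndistwij, model(bc95)
predict te0, bc          // Battese-Coelli efficiency scores

* bc95 where E(u_it) depends on taxburdensum, with no constant in the mean equation
sfpanel lntradeij lngdpppp lngdpcapppp lndistwij, model(bc95) ///
    emean(taxburdensum, noconstant)
predict te1, bc

summarize te0 te1        // compare the two sets of TE scores

Because the two specifications model the inefficiency distribution differently, it is expected that the resulting TE ranges differ.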

Keep Running into Trouble with Data Imports - Time Series

I'm trying to aggregate data to begin work on some demand analysis. The data required in Stata will, almost necessarily, come from different sources (e.g. Department of Labor statistics on income, World Bank data on inflation, IMF interest rates, etc.), so I need to combine them. I've tried merging inside Stata and/or appending different saved datasets, but the problem I run into is always either incorrect formatting after append, or tsset issues when I try to declare the aggregated data as a time series. The latter is a concern because different series may have different start dates, so after append there are mismatched columns.

Most recently, after using append I got a bunch of missing values. It's a problem I seem to run into more often than not, so I'd love to understand the path of least resistance to a consistent, long dataset with time as a universal first column to use for tsset.

Here's the data now:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(anaheim92808 yorbalinda92887 newportbeach92625) str7 cityregionname str54 Private_Motor_Reg str37 T_MotorRegistered str272 Time float(HOWNRATEACS006059 CAORAN7URN) int FBITC006059 float MHICA06059A052NCEN
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . .     .
. . . "" "" "" "" .   . . 39592
. . . "" "" "" "" . 2.9 .     .
. . . "" "" "" "" . 2.8 .     .
. . . "" "" "" "" . 2.7 .     .
. . . "" "" "" "" .   3 .     .
. . . "" "" "" "" . 3.1 .     .
. . . "" "" "" "" . 3.4 .     .
. . . "" "" "" "" . 3.8 .     .
. . . "" "" "" "" . 3.8 .     .
. . . "" "" "" "" . 4.1 .     .
. . . "" "" "" "" .   4 .     .
. . . "" "" "" "" . 4.2 .     .
. . . "" "" "" "" . 4.1 .     .
. . . "" "" "" "" . 4.8 .     .
. . . "" "" "" "" .   5 .     .
. . . "" "" "" "" . 4.9 .     .
. . . "" "" "" "" . 4.8 .     .
. . . "" "" "" "" . 5.1 .     .
. . . "" "" "" "" . 5.5 .     .
. . . "" "" "" "" . 5.7 .     .
. . . "" "" "" "" . 5.5 .     .
. . . "" "" "" "" . 5.5 .     .
. . . "" "" "" "" . 5.5 .     .
. . . "" "" "" "" . 5.3 .     .
. . . "" "" "" "" . 5.2 .     .
. . . "" "" "" "" . 5.9 .     .
. . . "" "" "" "" . 6.1 .     .
. . . "" "" "" "" . 6.1 .     .
. . . "" "" "" "" . 6.1 .     .
. . . "" "" "" "" . 6.3 .     .
. . . "" "" "" "" .   7 .     .
. . . "" "" "" "" . 7.2 .     .
. . . "" "" "" "" . 7.1 .     .
. . . "" "" "" "" . 7.3 .     .
. . . "" "" "" "" . 6.9 .     .
. . . "" "" "" "" .   7 .     .
. . . "" "" "" "" . 6.7 .     .
. . . "" "" "" "" . 7.1 . 45116
. . . "" "" "" "" .   7 .     .
. . . "" "" "" "" . 6.8 .     .
. . . "" "" "" "" . 6.7 .     .
. . . "" "" "" "" . 6.6 .     .
. . . "" "" "" "" .   7 .     .
. . . "" "" "" "" . 7.3 .     .
. . . "" "" "" "" .   7 .     .
. . . "" "" "" "" .   7 .     .
. . . "" "" "" "" . 6.7 .     .
. . . "" "" "" "" . 6.4 .     .
. . . "" "" "" "" . 5.8 .     .
. . . "" "" "" "" . 6.7 .     .
. . . "" "" "" "" . 6.3 .     .
. . . "" "" "" "" . 6.1 .     .
. . . "" "" "" "" . 5.9 .     .
. . . "" "" "" "" . 5.5 .     .
. . . "" "" "" "" . 5.8 .     .
. . . "" "" "" "" . 6.2 .     .
. . . "" "" "" "" . 5.9 .     .
. . . "" "" "" "" . 5.6 .     .
. . . "" "" "" "" . 5.3 .     .
. . . "" "" "" "" . 4.8 .     .
. . . "" "" "" "" . 4.4 .     .
. . . "" "" "" "" . 5.2 . 48701
. . . "" "" "" "" . 4.9 .     .
. . . "" "" "" "" . 4.8 .     .
. . . "" "" "" "" . 5.2 .     .
. . . "" "" "" "" .   5 .     .
. . . "" "" "" "" . 5.3 .     .
. . . "" "" "" "" . 5.7 .     .
. . . "" "" "" "" . 5.4 .     .
. . . "" "" "" "" . 5.4 .     .
. . . "" "" "" "" .   5 .     .
. . . "" "" "" "" . 4.7 .     .
. . . "" "" "" "" . 4.1 .     .
. . . "" "" "" "" . 4.8 .     .
. . . "" "" "" "" . 4.5 .     .
. . . "" "" "" "" . 4.4 .     .
. . . "" "" "" "" . 4.2 .     .
. . . "" "" "" "" . 4.3 .     .
. . . "" "" "" "" . 4.2 .     .
. . . "" "" "" "" . 4.6 .     .
end
I still have all the constituent parts in separate CSVs that I could of course combine in Excel, but I'm trying to do this data aggregation in Stata. Maybe a form of merge is what I'm looking for, where I specify the date column that exists in both saved datasets as the key to align on?
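Merging on a date key (rather than appending) is indeed the usual approach. A hedged sketch of the workflow; the file names, the string-date variable, and the "YM" mask are illustrative assumptions, not from the post:

Code:
* Clean each source so it carries the same Stata date variable, then merge.
import delimited using income.csv, clear
gen mdate = monthly(datestr, "YM")     // build a Stata monthly date from a string
format mdate %tm
save income_clean, replace

import delimited using inflation.csv, clear
gen mdate = monthly(datestr, "YM")
format mdate %tm
merge 1:1 mdate using income_clean, nogenerate   // align on the date key
tsset mdate

Unlike append, merge places each source's variables in separate columns and fills with missing only where a date exists in one source but not the other.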

Merging two panel data sets with different variables and samples that do not perfectly match

Hello everyone,

I have two panel datasets that I want to merge. In both, observations are uniquely identified by the variables year and id. However, the datasets contain different variables and do not cover perfectly matching samples.

I have posted examples of my datasets below. What type of merge (e.g. 1:1, 1:m, etc.) would be most suitable? And why?

Thank you.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long id float year str8 cusip str4 ticker float(lag_IO_USD lag_IO lag_IO_PROP lag_HHI)
2091 2012 "40048D10" "GPFO"      86.5  7.501876e-09  7.917063e-10         1
 465 2006 "15234Q20" "CAIF"   94261.2  2.559048e-08  .00027465922  .9660543
 465 2007 "15234Q20" "CAIF"  196631.2   3.63018e-08  .00007682755  .8612706
2690 2010 "45069P10" "ITVP"    1436.5  4.371304e-08 .000012754213         1
4700 2012 "78658410" "SAFR"       604  4.889976e-08  5.528215e-09         1
4335 2013 "69367X10" "PTAI"   2137.92 6.7193675e-08  5.287189e-06         1
1988 2010 "37441W10" "GFSZ" 2209.2002  7.446808e-08  .00002032706         1
1988 2009 "37441W10" "GFSZ"   1636.14  7.883523e-08  .00001938505         1
1612 2002 "V3267N10" "EXM"        3.5  8.333333e-08 1.7598925e-09         1
2097 2009 "40048520" "GPOV"      1054  8.598452e-08  .00003277862         1
1988 2011 "37441W10" "GFSZ"    2564.9  9.213324e-08 2.3353177e-08         1
4644 2012 "46629410" "RSHY"    887.55 1.0330138e-07  8.119878e-09  .5153576
2163 2012 "39138C10" "GWO/"      2001 1.0526315e-07   1.83145e-08         1
 441 2012 "08861Q10" "BZQI"     556.8 1.1049724e-07  5.096209e-09         1
1817 2012 "30256310" "FPIC"     42.01      1.25e-07  7.163122e-10         1
3212 2007 "98157D30" "MCWE"       .32 1.3445378e-07  6.493758e-09         1
5376 2003 "88244310" "TGN"      104.5     1.375e-07 1.8053064e-06         1
1140 2010 "23282C40" "CYRB"  16584.31 1.3981042e-07  5.198925e-06         1
1140 2011 "23282C40" "CYRB"     15180 1.4184397e-07  1.382125e-07         1
 228 2011 "00753G10" "AVIF"      1415 1.6835017e-07 1.2883445e-08         1
1955 2010 "36159210" "GEAG"     691.3 1.6847827e-07  6.360716e-06         1
3172 2012 "59071710" "MBLT"       352  1.724138e-07  3.221741e-09         1
4200 2011 "74053610" "PMOI"       603  1.724138e-07   5.49026e-09         1
4725 2013 "78404D10" "SBFF"    545.75  1.957672e-07 3.5996813e-06         1
1566 2001 "29358R10" "ESGR"     15.06      2.00e-07  7.113488e-10         1
5958 2005 "96945510" "WCGR"         0  2.016129e-07             0         1
5958 2007 "96945510" "WCGR"         0  2.016129e-07             0         1
4725 2010 "78404D10" "SBFF"    770.64  2.378049e-07  7.090731e-06         1
6038 2013 "98088R50" "WLWH"   1537.92      2.40e-07  .00001014388         1
4549 2000 "76564130" "RICK"      2.94      2.50e-07 2.0426907e-09         1
5463 2003 "88027710" "TNAB"      8000 2.5706942e-07  .00009996756         1
5265 2013 "79584220" "SZGP"    812.76  2.883549e-07   4.68739e-06         1
2097 2011 "40048520" "GPOV"    5075.8   3.56212e-07  4.621469e-08         1
3279 2010 "56490510" "MFI/"    489.72  3.826087e-07 1.2393329e-07         1
3279 2009 "56490510" "MFI/"    394.68 4.1121496e-07 1.1641372e-07         1
3279 2008 "56490510" "MFI/"     662.2 4.1121496e-07 1.1650585e-07         1
5463 2002 "88027710" "TNAB"     14638  4.181409e-07  .00016215292         1
3212 2009 "98157D30" "MCWE"      1.02  4.285714e-07 1.0427176e-08  .8892734
4802 2011 "81748L10" "SECC"    1896.4  4.462475e-07 1.7266547e-08         1
 794 2011 "18047P10" "CLZN"      1995  4.524887e-07 1.8164293e-08         1
3307 2012 "59146510" "MHTL"    1479.6  4.651163e-07 1.3542296e-08         1
5754 2007 "91730220" "USNZ"   2036.34  4.778761e-07 .000012562863         1
4320 2013 "69367J10" "PSGT"    4654.5  4.882155e-07  8.371868e-06  .7330797
5463 1995 "88027710" "TNAB"      6162  5.131579e-07   .0000411108         1
5463 2001 "88027710" "TNAB"     19536  5.146349e-07  5.688995e-07  .6953125
3999 2011 "69606A10" "PALA"    1926.6  5.205479e-07 .000016224065         1
5264 2010 "86469110" "SZEV"   2996.63 5.2857143e-07  7.686928e-06  .3530508
4360 2012 "74343W10" "PUOD"     790.5  5.555556e-07  7.235189e-09         1
4543 2013 "74955W30" "RHDC"         0  5.797102e-07             0         1
4661 2012 "76012510" "RTOK"     997.5  5.801105e-07  9.129792e-09         1
4053 2002 "72365510" "PDC"         40      6.25e-07 1.8756312e-09         1
1879 2000 "31807140" "FTT"        465  6.329114e-07  4.649331e-06         1
6031 2011 "97143310" "WLMI" 18454.799    6.5625e-07 1.6802925e-07         1
1879 2001 "31807140" "FTT"        423  6.578948e-07 3.6879785e-06         1
1879 2002 "31807140" "FTT"      626.5  6.666667e-07  6.793011e-06         1
 267 2009 "21987A20" "BCA"    3044720  6.709121e-07  7.625424e-06 .29342243
1066 2011 "21077120" "CTTA"     10934      7.00e-07  9.955306e-08         1
4443 2011 "75279Q10" "RANJ"    6326.4  7.058824e-07  5.760129e-08         1
6138 2007 "70590460" "XAN"      16.88  7.272727e-07 1.0670233e-10         1
2767 2011 "47649310" "JRON"    7141.5   7.32484e-07 6.5022704e-08         1
3999 2013 "69606A10" "PALA"    698.88  7.619047e-07  4.609703e-06         1
2767 2012 "47649310" "JRON"      7920  7.643312e-07  7.248918e-08         1
5204 2012 "86959C10" "SVNL"   12758.5  7.761438e-07  1.167744e-07         1
4146 2013 "45577X10" "PIFM"   4160.69  7.784091e-07 .000010289606         1
4146 2012 "45577X10" "PIFM"    3479.8  7.784091e-07 .000011057588         1
1982 2011 "40231210" "GFKS"    1611.3  7.894737e-07 1.4670738e-08         1
5963 2013 "98157D10" "WCOE"        24  8.078088e-07  1.050122e-07    .53125
2720 2011 "46611010" "JBSA"    8930.1   8.16812e-07  8.130774e-08         1
2014 2012 "37251T10" "GIGN"     11931   8.40164e-07 1.0920056e-07         1
1988 2012 "37441W10" "GFSZ"      5064  8.510638e-07  4.634914e-08         1
6031 2012 "97143310" "WLMI"   22318.4    9.0625e-07 2.0427305e-07         1
 228 2012 "00753G10" "AVIF"   12025.2  9.192735e-07 1.0925412e-07  .3421061
  93 2010 "02660R10" "AHMI"         2   9.25926e-07  5.061394e-10         1
 267 2010 "21987A20" "BCA"    8695308  9.269607e-07  7.117299e-06  .2421576
5292 2012 "89236010" "TBLE"  9811.141  9.280245e-07  8.939402e-08  .8215861
2731 2003 "47214610" "JDWP"      2680  9.302325e-07  .00001312637         1
2058 2012 "36143A10" "GMHL"      1974  9.343937e-07 1.8067377e-08         1
3212 2013 "98157D30" "MCWE"      2.24  9.411765e-07  1.633016e-09   .502551
5933 2013 "93932210" "WAMU"     33.38  9.788856e-07  2.325995e-08  .5315127
1982 2012 "40231210" "GFKS"   2554.63      1.00e-06 2.3381695e-08         1
1803 2011 "34959F10" "FOJC"   28589.7 1.0603331e-06  2.603066e-07         1
1803 2012 "34959F10" "FOJC"  20049.22 1.0695633e-06 1.8350397e-07         1
5292 2011 "89236010" "TBLE"     11235 1.0719755e-06 1.0229363e-07         1
4024 2012 "69367Y10" "PBMR"    5307.5 1.0784314e-06  4.857782e-08         1
 267 2008 "21987A20" "BCA"    8839121 1.0790714e-06  9.483426e-06 .55533427
5265 2012 "79584220" "SZGP"   3060.24 1.1497227e-06  2.800938e-08         1
3212 2008 "98157D30" "MCWE"      2.74 1.1512604e-06 3.8968686e-09  .3346475
2772 2010 "48122U20" "JSFC"    150616 1.1647668e-06 .000010421089         1
1161 2013 "24736110" "DALR"       4.6 1.1675127e-06 1.6185604e-08  .8130813
 742 2013 "12558110" "CITG"  18315.36 1.1703704e-06  .00006367048         1
 579 2010 "17273710" "CCTY"         2  1.190476e-06 3.2118794e-09         1
1933 2009 "38991410" "GC/"        292 1.2195122e-06  2.422477e-06         1
2421 2012 "44915J10" "HYPM"      3688 1.2779552e-06  3.375506e-08         1
3286 2010 "59410T10" "MGDD"   2891.44 1.2789116e-06  .00002660441         1
3212 2011 "98157D30" "MCWE"      3.06 1.2857142e-06 2.2003188e-09  .3217993
1511 2011 "29248L10" "ENGG"    6206.2 1.2997904e-06  5.650688e-08         1
3889 2012 "67935P10" "OLMI"      5248 1.3114754e-06  4.803323e-08         1
1839 2012 "31543710" "FRRV"   11741.4 1.3487738e-06  1.074652e-07         1
3947 2011 "68633110" "ORKL"     13356  1.372549e-06 1.2160515e-07         1
2223 2012 "42281P20" "HDEL"   11185.3 1.4179104e-06  1.023754e-07         1
end
format %ty year
label values id id
label def id 93 "AHMI", modify
label def id 228 "AVIF", modify
label def id 267 "BCA", modify
label def id 441 "BZQI", modify
label def id 465 "CAIF", modify
label def id 579 "CCTY", modify
label def id 742 "CITG", modify
label def id 794 "CLZN", modify
label def id 1066 "CTTA", modify
label def id 1140 "CYRB", modify
label def id 1161 "DALR", modify
label def id 1511 "ENGG", modify
label def id 1566 "ESGR", modify
label def id 1612 "EXM", modify
label def id 1803 "FOJC", modify
label def id 1817 "FPIC", modify
label def id 1839 "FRRV", modify
label def id 1879 "FTT", modify
label def id 1933 "GC/", modify
label def id 1955 "GEAG", modify
label def id 1982 "GFKS", modify
label def id 1988 "GFSZ", modify
label def id 2014 "GIGN", modify
label def id 2058 "GMHL", modify
label def id 2091 "GPFO", modify
label def id 2097 "GPOV", modify
label def id 2163 "GWO/", modify
label def id 2223 "HDEL", modify
label def id 2421 "HYPM", modify
label def id 2690 "ITVP", modify
label def id 2720 "JBSA", modify
label def id 2731 "JDWP", modify
label def id 2767 "JRON", modify
label def id 2772 "JSFC", modify
label def id 3172 "MBLT", modify
label def id 3212 "MCWE", modify
label def id 3279 "MFI/", modify
label def id 3286 "MGDD", modify
label def id 3307 "MHTL", modify
label def id 3889 "OLMI", modify
label def id 3947 "ORKL", modify
label def id 3999 "PALA", modify
label def id 4024 "PBMR", modify
label def id 4053 "PDC", modify
label def id 4146 "PIFM", modify
label def id 4200 "PMOI", modify
label def id 4320 "PSGT", modify
label def id 4335 "PTAI", modify
label def id 4360 "PUOD", modify
label def id 4443 "RANJ", modify
label def id 4543 "RHDC", modify
label def id 4549 "RICK", modify
label def id 4644 "RSHY", modify
label def id 4661 "RTOK", modify
label def id 4700 "SAFR", modify
label def id 4725 "SBFF", modify
label def id 4802 "SECC", modify
label def id 5204 "SVNL", modify
label def id 5264 "SZEV", modify
label def id 5265 "SZGP", modify
label def id 5292 "TBLE", modify
label def id 5376 "TGN", modify
label def id 5463 "TNAB", modify
label def id 5754 "USNZ", modify
label def id 5933 "WAMU", modify
label def id 5958 "WCGR", modify
label def id 5963 "WCOE", modify
label def id 6031 "WLMI", modify
label def id 6038 "WLWH", modify
label def id 6138 "XAN", modify

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long id int year byte(CSP_t_s CSP_t_c CSP_e_s CSP_e_c CSP_s_s CSP_s_c CSP_g_s CSP_g_c)
 1 2000  1  2 0 0  1  1 0 1
 1 2001  2  3 0 0  2  2 0 1
 1 2002 11  8 0 0 10  7 1 1
 1 2003  8  9 0 0  7  8 1 1
 1 2004  9  9 0 0  8  8 1 1
 1 2005 13  7 3 0  9  7 1 0
 1 2006 14  5 4 0  9  5 1 0
 1 2007 16  6 4 0 11  6 1 0
 1 2008 13  6 3 0  9  6 1 0
 1 2009 13  6 3 0  9  6 1 0
 1 2010 11  5 4 0  6  5 1 0
 1 2011 10  3 3 0  6  3 1 0
 1 2012  4  3 0 0  4  2 0 1
 1 2013  9  2 2 0  7  2 0 0
 2 2013  6  0 1 0  5  0 0 0
 3 1991  2  2 1 2  1  0 0 0
 3 1992  3  2 1 2  1  0 1 0
 3 1993  3  3 1 3  1  0 1 0
 3 1994  5  4 1 3  3  1 1 0
 3 1995  5  5 1 3  3  1 1 1
 3 1996  6  3 1 3  4  0 1 0
 3 1997  6  5 1 3  4  1 1 1
 3 1998  4  4 2 2  2  1 0 1
 3 1999  4  3 1 2  3  1 0 0
 3 2000  4  5 1 3  3  1 0 1
 3 2001  6  5 1 2  4  2 1 1
 3 2002  7  7 1 4  5  2 1 1
 3 2003  6  7 1 4  4  2 1 1
 3 2004  5  7 1 4  3  2 1 1
 3 2005  9 10 2 4  6  5 1 1
 3 2006  9 10 2 4  6  5 1 1
 3 2007 10 11 3 4  5  6 2 1
 3 2008  9 11 3 4  5  6 1 1
 3 2009  9 11 3 4  5  6 1 1
 3 2010 17 13 5 3 10 10 2 0
 3 2011 17 13 5 3 10 10 2 0
 3 2012 16  7 3 2 11  4 2 1
 3 2013 15  5 2 1 12  3 1 1
 4 2004  0  1 0 0  0  1 0 0
 4 2005  0  2 0 0  0  2 0 0
 4 2006  1  3 0 0  0  2 1 1
 4 2007  1  2 0 0  0  1 1 1
 4 2008  1  2 0 0  0  1 1 1
 4 2009  1  2 0 0  0  1 1 1
 4 2010  0  0 0 0  0  0 0 0
 4 2011  0  1 0 0  0  0 0 1
 4 2012  0  0 0 0  0  0 0 0
 5 2004  0  2 0 0  0  2 0 0
 5 2005  1  2 0 0  0  2 1 0
 6 2013  0  0 0 0  0  0 0 0
 7 2003  0  0 0 0  0  0 0 0
 7 2004  2  1 0 0  2  1 0 0
 7 2005  2  1 0 0  2  1 0 0
 7 2006  2  1 0 0  2  1 0 0
 7 2007  2  1 0 0  2  1 0 0
 7 2008  2  1 0 0  2  1 0 0
 7 2009  2  1 0 0  2  1 0 0
 7 2010  0  1 0 0  0  1 0 0
 8 2003  0  0 0 0  0  0 0 0
 9 2013  0  0 0 0  0  0 0 0
10 1991  0  2 0 0  0  2 0 0
10 1992  1  2 0 0  1  2 0 0
10 1993  1  2 0 0  1  2 0 0
10 1994  1  2 0 0  1  1 0 1
10 1995  0  2 0 0  0  0 0 2
10 1996  2  1 0 0  1  0 1 1
10 2013  3  0 1 0  2  0 0 0
11 2013  0  0 0 0  0  0 0 0
12 2009  1  1 0 0  0  1 1 0
12 2012  0  0 0 0  0  0 0 0
12 2013  1  1 0 0  1  1 0 0
13 2010  0  0 0 0  0  0 0 0
13 2011  0  1 0 0  0  0 0 1
14 2003  1  0 0 0  0  0 1 0
14 2004  1  0 0 0  0  0 1 0
14 2005  0  0 0 0  0  0 0 0
14 2006  0  0 0 0  0  0 0 0
14 2007  0  0 0 0  0  0 0 0
14 2008  0  0 0 0  0  0 0 0
14 2009  0  0 0 0  0  0 0 0
14 2010  0  0 0 0  0  0 0 0
14 2011  0  1 0 0  0  0 0 1
14 2012  0  0 0 0  0  0 0 0
14 2013  0  0 0 0  0  0 0 0
15 2002  1  0 0 0  0  0 1 0
15 2003  0  0 0 0  0  0 0 0
15 2004  0  1 0 0  0  0 0 1
15 2005  0  0 0 0  0  0 0 0
15 2006  0  1 0 0  0  0 0 1
15 2007  0  3 0 0  0  2 0 1
15 2008  0  3 0 0  0  2 0 1
15 2009  0  3 0 0  0  2 0 1
15 2010  1  2 0 0  1  1 0 1
15 2011  1  2 0 0  1  1 0 1
15 2012  0  0 0 0  0  0 0 0
15 2013  0  0 0 0  0  0 0 0
16 1991  5  2 0 0  5  2 0 0
16 1992  6  2 1 0  5  2 0 0
16 1993  6  1 1 0  5  1 0 0
16 1994  6  3 0 0  6  3 0 0
end
format %ty year
label values id id1
label def id1 1 "A", modify
label def id1 2 "A17U", modify
label def id1 3 "AA", modify
label def id1 4 "AACC", modify
label def id1 5 "AACE", modify
label def id1 6 "AAD", modify
label def id1 7 "AAI", modify
label def id1 8 "AAII", modify
label def id1 9 "AAK", modify
label def id1 10 "AAL", modify
label def id1 11 "AALI", modify
label def id1 12 "AAN", modify
label def id1 13 "AAN/XX", modify
label def id1 14 "AAON", modify
label def id1 15 "AAP", modify
label def id1 16 "AAPL", modify
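Since each dataset is uniquely identified by (id, year), merge 1:1 id year is the natural fit; 1:m and m:1 are for when one side repeats the key. A hedged sketch (file names are illustrative; note that the differing value-label definitions above suggest the numeric id may be coded differently in the two files, in which case a shared identifier such as ticker and year would be the safer key):

Code:
use dataset1, clear
merge 1:1 id year using dataset2
tab _merge              // 1 = master only, 2 = using only, 3 = matched
keep if _merge == 3     // optional: keep only the overlapping id-years

Non-matching samples are not a problem for merge itself; _merge records which observations came from which file, and you decide afterwards what to keep.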

Confusion on how to properly interpret coefficients in relation to the p-value in logistic regression

Hello! In my analysis, I want to see the effect of education (college graduate or not), experience (more or less than 10 years), entrepreneurial characteristics (high or not), legal policies and support (received or not), and resources and finance (considers money a problem or not) on the success of a business. I used logistic regression to analyze which factors significantly affect success; however, I am very confused about how to interpret my results.


My questions are:
  1. Does the p-value indicate which factors are significant for success (the event of interest)? So in my case, the significant factors are education and entrepreneurial characteristics?
  2. What do the coefficients indicate? For example, is it correct to interpret the negative coefficient as meaning that college graduates are less likely to be successful? If so, how can education be a significant factor affecting success? The same goes for experience: I am very confused because the results show that business owners with more than 10 years of experience are more likely to be successful, yet experience is not a significant factor according to its p-value.
  3. Are my assumptions correct? How should I interpret my results?

I have uploaded a summary of the results I got from the logistic regression. Can someone please enlighten me on how to correctly interpret my results? Thank you very much!
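For reference, a hedged sketch of the distinction between sign and significance (the variable names are illustrative, not from the attached output):

Code:
* Coefficients are on the log-odds scale: a negative coefficient means
* lower odds of success, whatever its p-value says.
logit success i.college i.experience10 i.entre_char i.legal_support i.money_problem

* Redisplay the same fit as odds ratios; ratios below 1 correspond to
* negative coefficients.
logit, or

The p-value and the coefficient answer different questions: the p-value says whether the estimated effect is statistically distinguishable from zero, while the coefficient's sign and magnitude give the direction and size of the effect. A negative, significant coefficient on education means graduates have significantly lower odds of success; a positive but insignificant coefficient on experience means the apparent advantage cannot be distinguished from chance.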


Incidental truncation

I am measuring the effects of economic development on the total fertility rate in OECD countries. The data are a panel covering 1960-2015. I have complete total fertility rate data for all countries over that period. My issue, however, is that I have missing years for some countries in my economic development indicator (GDP per capita): for the ex-Soviet countries, GDP per capita is only available from 1990 to 2015.

I have determined that the missingness is systematic, so I cannot treat the missing observations as random, ignore them, and simply continue with unbalanced panels.

My question is: how do I handle incidental truncation in Stata? I am using Stata 12.
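For what it's worth, incidental truncation is usually handled with a sample-selection (Heckman-type) model. A hedged sketch of the mechanics with Stata's built-in heckman command; the variable names (tfr, gdppc) and the exclusion instruments z1 and z2 are illustrative assumptions, and whether this logic carries over cleanly when it is a regressor rather than the outcome that is unobserved deserves careful thought:

Code:
* Flag country-years for which the development indicator is observed
gen byte complete = !missing(gdppc)

* Selection equation models observability; z1 and z2 are hypothetical
* variables that affect selection but not fertility directly
heckman tfr gdppc, select(complete = z1 z2) twostep

The key substantive requirement is a credible exclusion restriction: something that predicts whether the data are observed but does not belong in the fertility equation.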

Synthetic Control Method and In Space Placebo

Hello,

I am using Stata 14 and the synth package for conducting the synthetic control method, by Jens Hainmueller: https://web.stanford.edu/~jhain/synthpage.html

I am trying to conduct an "in-space" placebo test, and have followed Statalist user Michael Jankowski's guide, which can be found here: https://www.statalist.org/forums/for...control-method

As I understand it, this method calculates the difference between the actual unit and the synthetic control unit; if the gap is around zero, the synthetic and actual units are essentially the same. As you can see from "Synthetic Control France", the fit is quite good. However, the placebo graph looks different from the France graph (orange represents the difference between actual France and the synthetic control). Am I doing something wrong here? I tried the exact same thing with Abadie and Hainmueller's "smoking" dataset, and it looks fine compared to their synthetic control.

Not sure if I'm clear enough, but I hope you'll understand my question.

Cheers
[Attached graphs: "Synthetic Control France" and the in-space placebo gap plot]



Sorting stacked bars based on the sum of two variables

Code:
graph hbar (asis) pc_cpd pc_cgd , over(country, sort(1) descending) ///
 blabel(bar, position(base) format(%4.1f)) stack nofill scheme(s1color) subtitle(, fcolor(white)) ///
 title("Per capita consumption in Arab Countries 2017") subtitle((US dollars/day)) name(consumption)
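As written, sort(1) orders the bars by the first y-variable (pc_cpd) alone. A hedged sketch of sorting by the sum of the two stacked variables instead, by generating a total and naming it in sort():

Code:
* Order countries by total per-capita consumption, not just pc_cpd
gen pc_total = pc_cpd + pc_cgd
graph hbar (asis) pc_cpd pc_cgd, over(country, sort(pc_total) descending) ///
    stack nofill blabel(bar, position(base) format(%4.1f)) scheme(s1color) ///
    title("Per capita consumption in Arab Countries 2017") ///
    subtitle("(US dollars/day)") name(consumption, replace)

The sort() suboption of over() accepts a variable name as well as a y-variable number, so sorting on any derived quantity is just a matter of generating it first.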

RMSE with asreg in rolling window regression

I recently found, and have been using, the asreg command for Stata; I find it incredibly useful. I am having trouble completing one task and wonder if someone might shed some light on how to achieve it with asreg. I have a dynamic panel dataset with multiple individuals over multiple years, and I am running both year-specific regressions (relatively easy) and person-specific regressions (rolling over the prior 10-year window), both of which asreg handles easily. The trouble is that I would like to capture the RMSE of each of these regressions. While I can save the residual of the latest observation in the window, I cannot figure out how to save either all residuals or the RMSE. Any suggestions on how this might be possible?


Regression table with output from many different regressions

I want to run 12 different regressions and display the estimated betas, standard errors, and significance stars in the same table. The regressions I want to run are:

Code:
loc covars "x2 x3 x4" // control variables

forval v = 1/4 {
    reg y1`v' x `covars', r
    reg y2`v' x `covars', r
    reg y3`v' x `covars', r
}

The table that I want to produce is of the following format:
beta_x11   beta_x21   beta_x31
(se_x11)   (se_x21)   (se_x31)
beta_x12   beta_x22   beta_x32
(se_x12)   (se_x22)   (se_x32)
beta_x13   beta_x23   beta_x33
(se_x13)   (se_x23)   (se_x33)
beta_x14   beta_x24   beta_x34
(se_x14)   (se_x24)   (se_x34)

where beta_x11 is the estimated beta on x from the regression of y11 on x, and se_x11 is the SE of the estimate, and so on.

What is the best way to accomplish this? I have tried using outreg2, but I can only successfully produce the first row. Do I have to produce rows 2, 3 and 4 in a separate file and then merge the tables somehow? Or how do I accomplish this? I am not dead set on using outreg2 if there is another better tool. I was thinking of just saving the regression output in a matrix, but then I can’t figure out how to make it display the stars for significance level.

Thanks in advance for help.
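One route that stores many models and prints stars automatically is estout/esttab from SSC. A hedged sketch under the post's naming scheme; the exact 4-row-by-3-column arrangement may still need hand-rearrangement afterwards:

Code:
ssc install estout
loc covars "x2 x3 x4"
eststo clear
forval v = 1/4 {
    eststo m1`v': quietly reg y1`v' x `covars', r
    eststo m2`v': quietly reg y2`v' x `covars', r
    eststo m3`v': quietly reg y3`v' x `covars', r
}
* One column per regression; keep(x) hides the controls
esttab, keep(x) se star(* 0.10 ** 0.05 *** 0.01) mtitles

This avoids the merge-the-files problem entirely: all 12 regressions live in one table, each showing only the coefficient on x with its standard error and significance stars.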

Extract date

Hi all,

How can I generate a new string date like this
Code:
"02/2018"
, based on an existing date
Code:
"02/05/2018"
?

Thanks,

Jack Liang
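Assuming the existing variable is a string in month/day/year order, a hedged sketch (olddate is an illustrative variable name):

Code:
* Convert the string to a Stata date, then rebuild it as "MM/YYYY"
gen double d = date(olddate, "MDY")       // "02/05/2018" -> 5feb2018
gen monthyear = string(month(d), "%02.0f") + "/" + string(year(d), "%04.0f")
list olddate monthyear in 1

Parsing through date() rather than cutting substrings makes the code robust to single-digit months or days in the source string.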

Bayesian MCMC Gibbs sampling

I want to use Bayesian MCMC to generate a sequence of draws from the posterior distribution. I am using Stata 14.1. I know the bayesmh command, which uses the Metropolis-Hastings algorithm.
  • What is the command for using Gibbs sampling to simulate draws sequentially for blocks of parameters?
  • How can instruments be incorporated within a Bayesian framework?
  • Could you give me a simple example with corresponding code?
  • Are there any papers to refer to?
I appreciate it very much!
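On the first question: Gibbs sampling in Stata 14.1 is requested through bayesmh's block() suboption gibbs, which is available when the prior for that block is conjugate. A hedged minimal sketch for a linear regression on the auto dataset, with normal and inverse-gamma priors:

Code:
sysuse auto, clear
* Normal likelihood with conjugate priors, sampled block-by-block
* with Gibbs updates instead of Metropolis-Hastings
bayesmh mpg weight, likelihood(normal({var}))   ///
    prior({mpg:}, normal(0, 100))               ///
    prior({var}, igamma(0.01, 0.01))            ///
    block({mpg:}, gibbs) block({var}, gibbs)

If a block's prior is not conjugate for the likelihood, bayesmh will refuse the gibbs suboption and MH sampling must be used for that block instead.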

Frequency matching multiple variables

Hello,
I am conducting a database analysis involving an exposed cohort and an unexposed cohort. Before running multivariable analysis, I would like to establish a 2:1 cohort match. However, instead of ensuring no significant difference in just age and gender, I would like to ensure no significant differences in other variables/comorbidities between the two cohorts. I've seen posts about matching 2:1 on age and/or gender, but I was wondering whether that could be expanded to multiple variables. My total sample size is in the thousands. Thank you for your help.

Fixed-effects model that takes into account dummy variables for the independent variable

I don't believe the way I've set out this fixed-effects model is correct for Stata.
I am using Stata 15.

The model I am using, for reference, is attached as an image.



X represents the time-varying covariates: age, mastat (marital status), nkids (number of children), jbstat (job status).
U is the set of dummy variables representing how many years before or after migration an individual is observed (movest).
The dependent variable is happiness (lfsato).


This is a snippet of the data I have:

Code:
input long pid float wave byte(lfsato sex mastat) int age byte(movest nkids) float(lead1 lead2 lead3 lead4 lead5 lag1 lag2 lag3 lag4 lag5)
10017933 6 6 2 5 54 1 0 . . . . . 0 0 0 0 0
10017933 7 6 2 4 55 1 0 0 . . . . 0 0 0 0 0
10017933 8 6 2 4 56 1 0 0 0 . . . 0 0 0 0 0
10017933 9 3 2 4 56 1 0 0 0 0 . . 0 0 0 0 0
10017933 10 5 2 4 58 1 0 0 0 0 0 . 0 0 0 0 0
10017933 11 . 2 4 59 1 0 0 0 0 0 0 0 0 0 0 0
10017933 12 5 2 4 60 1 0 0 0 0 0 0 0 0 0 0 0
10017933 13 6 2 4 60 1 0 0 0 0 0 0 0 0 0 0 0
10017933 14 4 2 4 62 1 0 0 0 0 0 0 0 0 0 0 .
10017933 15 6 2 4 63 1 0 0 0 0 0 0 0 0 0 . .
10017933 16 5 2 4 64 1 0 0 0 0 0 0 0 0 . . .
10017933 17 6 2 4 65 1 0 0 0 0 0 0 0 . . . .
10017933 18 6 2 4 66 1 0 0 0 0 0 0 . . . . .
10017992 6 5 2 6 17 1 0 . . . . . 0 0 0 0 0
10017992 7 6 2 6 18 1 0 0 . . . . 0 0 0 0 0
10017992 8 5 2 6 19 1 0 0 0 . . . 0 0 0 0 0
10017992 9 4 2 6 19 1 0 0 0 0 . . 0 0 0 0 0
10017992 10 6 2 6 21 1 0 0 0 0 0 . 0 0 0 0 0
10017992 11 . 2 6 22 1 0 0 0 0 0 0 0 0 0 0 0
10017992 12 5 2 6 23 1 0 0 0 0 0 0 0 0 0 0 0
10017992 13 . 2 6 23 1 0 0 0 0 0 0 0 0 0 0 .
10017992 14 5 2 6 25 1 0 0 0 0 0 0 0 0 0 . .
10017992 15 7 2 6 26 1 0 0 0 0 0 0 0 0 . . .
10017992 16 4 2 6 27 1 0 0 0 0 0 0 0 . . . .
10017992 17 5 2 6 28 1 0 0 0 0 0 0 . . . . .
10019057 6 . 2 6 64 2 0 . . . . . 0 0 0 0 0
10019057 7 6 2 6 65 1 0 1 . . . . 0 0 0 0 0
10019057 8 6 2 6 66 1 0 0 1 . . . 0 0 0 0 0
10019057 9 6 2 6 67 1 0 0 0 1 . . 0 0 0 0 0
10019057 10 5 2 6 67 1 0 0 0 0 1 . 0 0 0 0 0
10019057 11 . 2 6 68 1 0 0 0 0 0 1 0 0 0 0 0
10019057 12 5 2 6 69 1 0 0 0 0 0 0 0 0 0 0 0
10019057 13 . 2 6 71 1 0 0 0 0 0 0 0 0 0 0 0
10019057 14 6 2 6 71 1 0 0 0 0 0 0 0 0 0 0 .
10019057 15 5 2 6 73 1 0 0 0 0 0 0 0 0 0 . .
10019057 16 6 2 6 74 1 0 0 0 0 0 0 0 0 . . .
10019057 17 6 2 6 75 1 0 0 0 0 0 0 0 . . . .
10019057 18 5 2 6 76 1 0 0 0 0 0 0 . . . . .
10023526 6 4 2 4 43 1 0 . . . . . 0 . . . 0
10023526 7 5 2 4 44 1 0 0 . . . . . . . 0 0
10023526 11 . 2 1 48 1 0 . . . 0 0 0 0 0 0 0
10023526 12 5 2 4 49 1 0 0 . . . 0 0 0 0 0 0
10023526 13 5 2 4 50 1 0 0 0 . . . 0 0 0 0 0
10023526 14 5 2 4 51 1 0 0 0 0 . . 0 0 0 0 .
10023526 15 5 2 4 52 1 0 0 0 0 0 . 0 0 0 . .
10023526 16 5 2 4 53 1 0 0 0 0 0 0 0 0 . . .
10023526 17 5 2 4 54 1 0 0 0 0 0 0 0 . . . .
10023526 18 6 2 4 55 1 0 0 0 0 0 0 . . . . .
end

I was using the code
Code:
xtreg lfsato sex mastat age jbstat movest nkids lead1 lead2 lead3 lead4 lead5 lag1 lag2 lag3 lag4 lag5, fe
This is a follow-on from: https://www.statalist.org/forums/for...ational-period
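Two issues in the posted xtreg line are worth flagging: sex is time-invariant and will be dropped by the within transformation anyway, and categorical covariates such as mastat and jbstat enter as if they were linear scores. A hedged sketch using factor-variable notation instead (variable names taken from the post):

Code:
xtset pid wave
* i. treats mastat and jbstat as sets of category dummies rather than
* linear scores; time-invariant sex drops out of the FE estimation
xtreg lfsato i.mastat age i.jbstat nkids ///
    lead1 lead2 lead3 lead4 lead5 lag1 lag2 lag3 lag4 lag5, fe

The lead/lag dummies already encode the years-around-migration structure, so they can stay as they are.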

Impulse response functions in Stata 15.1

Hello,

I am trying to replicate the results of the paper "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy" by Christiano, Eichenbaum, and Evans, using the code by V. Ramey.
However, I can't obtain the IRF graphs and tables in Stata 15.1. My friend ran the same code in Stata 15 and it worked.

The code (apparently written for #delimit ; mode, hence the trailing semicolons) is:

Code:
var lip unemp lcpi lpcom ffr lnbr ltr lm1 if mdate>=m(1965m1) & mdate<=m(1995m6), lags(1/12) level(90);
irf create irf, step(48) bs reps(500) set(cee, replace);
irf table oirf, impulse(ffr) response(lip unemp lcpi lpcom ffr lnbr ltr lm1) level(90);
irf graph oirf, impulse(ffr) response(ffr lip lcpi unemp) byopts(rescale) level(90);

The error is "file cee.irf is not a valid irf file", r(198).


Could you please help me?

Thank you in advance

Produce table of counts, %, means and 95%CI for multiple variables according to other variables

I have a dataset of clinic attendances including a unique identifier for the attendance, characteristics of the person attending, binary variables which each define whether the attendance resulted in a certain diagnosis (or categories of diagnoses) which are not mutually exclusive, and finally some other characteristics of the attendance such as duration and cost.

Something like this:

ID  age  sex  diag1_cardiac  diag2_renal  diag3_fever  duration  cost
A   1    M    1              0            1            6         2000
B   4    F    0              1            0            3          300
C   34   F    1              0            0            1          500
etc.

I would like to produce several tables from this, some of which will include counts and %, e.g.:

                 Number of attendances   % of all attendances
Diag1_cardiac
Diag2_renal
Diag3_fever
All attendances  xx                      100

Others would include means and 95% CIs, e.g.:

                 Mean age   95% CI age
Diag1_cardiac
Diag2_renal
Diag3_fever
All attendances

I can't find a simple way of doing this. The best I have found is tabout, which lets me produce counts, %, and means for multiple variables; however, it has several problems. Firstly, for each diagnosis it reports the count and % when the diagnosis is 1, when it is 0, and the total, whereas I am only interested in the 1s; given that I have a large number of diagnoses, it is clunky to filter out what I need from the output tables. Secondly, there doesn't seem to be a way to produce 95% CIs for the means.

I thought there must be a way of collapsing the data, but I can't see how, given that the diagnoses are not mutually exclusive.

Does anyone have any suggestions for how I can do this?

Thanks very much,

Jamie
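One possible approach, sketched here using the hypothetical variable names from the example above, is to loop over the diagnosis dummies and post each count, percentage, and mean age with its 95% CI into a new dataset:

```stata
* Sketch assuming dummies diag1_cardiac, diag2_renal, diag3_fever
* and a variable age, as in the example in the question
tempname h
tempfile results
postfile `h' str20 diagnosis long n double pct mean_age lb95 ub95 ///
    using `results', replace
quietly count
local total = r(N)
foreach v of varlist diag1_cardiac diag2_renal diag3_fever {
    quietly count if `v' == 1
    local n = r(N)
    quietly ci means age if `v' == 1       // returns r(mean), r(lb), r(ub)
    post `h' ("`v'") (`n') (100*`n'/`total') (r(mean)) (r(lb)) (r(ub))
}
postclose `h'
use `results', clear
list, noobs
```

Because each diagnosis is handled in its own pass, the non-mutually-exclusive dummies are no obstacle; the same attendance simply contributes to several rows.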

Can you have two interaction terms in one model?

I mean, can you do A##B and A##C in the same model?
Thanks.
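For illustration, with placeholder names y, A, B, and C (whether to use i. or c. prefixes depends on the variable types), two interactions sit side by side like any other terms:

```stata
* y, A, B, C are placeholder names; the repeated main effect of A
* is handled automatically by the factor-variable machinery
regress y i.A##i.B i.A##i.C
```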

Reshaping data long

Hi,

I have some data that I am hoping to reshape so that each facility has one observation per year: one line for 2016 and one line for 2017 per facility, instead of the current wide structure. I tried -reshape long- to no avail, perhaps because a whole host of other variables in this dataset would just get duplicated and come along for the ride, if you will.


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str25 facility double(account2016 account2017 cert2016 cert2017 lesson2016 lesson2017)
"Magley Center"                              . 53.645700000000005                   .               .0772                  .             .4259
"Detwiler"                               343.1  938.8000000000001              1.3911  1.3515000000000001             8.5488            7.4541
"Surf City"                 19.605700000000002 53.645700000000005               .0795               .0772 .48850000000000005             .4259
"Humboldt Center"                            .                  .                   .                   .                  .                 .
"Trinity Center"                             .  751.0400000000001                   .  1.0812000000000002                  .            5.9633
"Monort Center"                              .           782.3333                   .              1.1262                  .            6.2118
"Canadens Center"           52.784600000000005           144.4308  .21400000000000002               .2079 1.3152000000000001            1.1468
"Madera Center"                          34.31  93.88000000000001               .1391               .1351              .8549 .7454000000000001
"Lassen  Center"                       87.9744 240.71790000000001               .3567  .34650000000000003              2.192            1.9113
"Lower Moreville"                       171.55 469.40000000000003               .6956   .6757000000000001             4.2744            3.7271
"Uptown East"                          34.8325            95.3096  .14120000000000002  .13720000000000002              .8679             .7568
"Decatur"                               13.724             37.552 .055600000000000004               .0541               .342             .2982
"Los Cabos"                             2.2873             6.2587 .009300000000000001 .009000000000000001               .057             .0497
"Millersville"                               .            45.2434                   .               .0651                  .             .3592
"Detroit Free"                         22.8733            62.5867               .0927               .0901  .5699000000000001             .4969
"Mitchell Franklin"                          .                  .                   .                   .                  .                 .
"Youth Center: Chicago"                 137.24 375.52000000000004               .5564   .5406000000000001             3.4195            2.9817
"Youth Center: Warrenville"              343.1  938.8000000000001              1.3911  1.3515000000000001             8.5488            7.4541
"Montgomery County"                    26.2409            71.8011  .10640000000000001               .1034              .6538             .5701
"Washburn Central "                      343.1  938.8000000000001              1.3911  1.3515000000000001             8.5488            7.4541
"Springfield "                               .            20.9437                   . .030100000000000002                  .             .1663
"Wyndate"                                    .           114.8379                   .               .1653                  .             .9118
"Ramsey Center"                              .             75.104                   .               .1081                  .             .5963
"Christian "                          142.9583           391.1667               .5796               .5631 3.5620000000000003            3.1059
"Ulster "                                    .            74.8048                   .               .1077                  . .5940000000000001
"Allegheny"                             5.4896 15.020800000000001               .0223               .0216              .1368             .1193
"Davis "                               22.3154            61.0602  .09050000000000001               .0879               .556             .4848
"Riverside Regional Center"             9.1493            25.0347               .0371 .036000000000000004               .228             .1988
"Ozark"                                      .           220.8941                   .                .318                  .            1.7539
end
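Given the stub names in the -dataex- output above, a minimal sketch may be all that is needed (any extra facility-level variables would simply be repeated on both year rows, which is the expected long layout):

```stata
* account2016/account2017 etc. share the stubs account, cert, lesson;
* j(year) picks up the 2016/2017 suffixes
reshape long account cert lesson, i(facility) j(year)
sort facility year
```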

Random slope repeated measures commands

Hello,

I have a repeated measure of a variable, so each patient has multiple measurements over time. I would like to fit a line connecting the repeated measurements for each patient, where x is time and y is the variable, and then display such lines for 50 random patients. Is this possible in Stata, and if so, can you please point me in the right direction?
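One common approach is a spaghetti plot of each patient's observed trajectory; here is a sketch with hypothetical names patid, time, and y (for fitted rather than observed lines, -mixed y time || patid: time- followed by -predict- would be one route):

```stata
* Sketch: keep 50 randomly chosen patients, then overlay one line each
preserve
keep patid
duplicates drop
sample 50, count               // 50 patients drawn at random
tempfile picks
save `picks'
restore
merge m:1 patid using `picks', keep(match) nogenerate
xtset patid time
xtline y, overlay legend(off)  // one connected line per patient
```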

Keep only those instances of var1, where another variable (date/time) is the maximum value

Hello,
I have followed Michael Blasnik's post on how to retain var1 when it is at the maximum value of var2:
Code:
bysort var1 (var2): keep if _n==_N
HTML Code:
https://www.stata.com/statalist/archive/2006-09/msg00763.html
I am now seeking to extend that code to identify and keep only those instances of var1 where var2 is the maximum value (or missing), and var3 is the maximum value.
In this example below, I would like to retain only the first, third and fourth rows.

Any guidance would be most appreciated.

var1   var2   var3
A      .
B      2      21sep2013 17:53:59
B      2      21sep2013 17:55:00
B      3
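Since missing values sort after all non-missing values in Stata, one reading of the desired output (keeping rows 1, 3, and 4) is one row per var1/var2 combination holding the largest var3; a sketch:

```stata
* Within each (var1, var2) group, missing sorts last, so the row with
* the largest (or missing) var3 ends up in position _N
bysort var1 var2 (var3): keep if _n == _N
```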

Outreg2 user-created function to export mediation tests like the suest command or the bootstrapping method of Preacher and Hayes?

Hi, I recently found out that one can use the user-written command outreg2 to export regression output and descriptive statistics, too. But I am wondering whether outreg2 can also export the output of a mediation test such as -suest- or the bootstrapping/Sobel-Goodman mediation test.
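Since outreg2 reads the current e() results after most estimation commands, and -suest- leaves e-class results behind, something along these lines may work; this is a sketch with placeholder variable and model names, not a tested recipe:

```stata
* y, x, m are placeholders; m1, m2 are placeholder model names
regress y x
estimates store m1
regress m x
estimates store m2
suest m1 m2
outreg2 using mediation.doc, replace
```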