Channel: Statalist

'Reshape Command'

Dear all

I am using the South African National Income Dynamics Study (NIDS) and have successfully created a panel dataset across all 5 waves.
I want to reshape my cleaned dataset from wide to long form. I know that the reshape command is simple enough, but after extensive research I still cannot understand it.
This may be because Stata does not finish processing the command and makes my computer very slow.

My code is:


reshape long w@_hhid w@_access w@_distance w@_sanitation ///
w@_educ_yrs w@_missed w@_hhsize w@_elec w@_province ///
w@_inc w@_nchild w@_age w@_race w@_order w@_age2 ///
w@_m_mhealth w@_f_mhealth w@_nli w@_head_empl ///
w@_feduc w@_meduc w@_dwell_type ///
w@_hhincome w@_gen_head w@_area w@_health_stat, i(pid) j(wave)

rename w_* *

tab wave, m


Could anyone advise me on whether these are the correct steps?
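For reference, here is a minimal sketch of this kind of reshape on a toy dataset (variable names are illustrative, not the actual NIDS names):

Code:
* minimal sketch: wave-varying variables named w1_hhid, w2_hhid, ...
clear
input pid w1_hhid w2_hhid
1 101 102
2 201 202
end
reshape long w@_hhid, i(pid) j(wave)
list, sepby(pid)

If memory is the bottleneck, it may help to keep only the variables needed before reshaping.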

Kind regards
Sophie Gebers

Transposing data from 2nd column onward + merging

Hello everyone,

I am fairly new to Stata, so I am having some trouble converting my data file into something workable for my analysis.
I have attached a screenshot of the data.

What I need is to convert all the information in the screenshot into a single row of data. Right now the variable "opleidingen" (column 1) is repeated 27 times, and column 2 ("variabel") lists all the variables I have data on. Is there a way to have the entry "vmbo economie", which is pasted 27 times, as a single entry in a row, and to have all the cell values of column 2 as variables in separate columns?

the end result would look like so:
verwachte uitbreidingsvraag tot 2020 verwachte vervangingsvraag tot 2020 verwachte baanopeningen tot 2020 ........ ........... niet-westerse allochtonen
vmbo economie 33
(next entry)
To summarize: how do I transpose the data from column 2 onward, and then merge everything into a single row?
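This is essentially a reshape from long to wide; a sketch, assuming hypothetical column names opleiding, variabel, and value:

Code:
* sketch, assuming the three columns are named opleiding, variabel, value
* note: the entries of variabel become variable names, so long labels
* with spaces would need to be shortened/cleaned first
reshape wide value, i(opleiding) j(variabel) string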

Any help is much appreciated.

Kind regards,
Niek

Setting panel data observations to same baseline

Hello,

I have panel data for arrivals in 116 municipalities from 1960-2018.

I want to generate the additional change of arrivals in each year to the base year 1960 for each municipality. I want the value in 1960 set to a baseline of 1 or 100 for each municipality so that each have the same initial baseline level.

Which should be: g_t = n_t / n_1960, for t = 1960, ..., 2018.

I used:

. gen arr_n=(arrivals[_n]/arrivals[_cons])

For municipality m_id 1, the value is set to 1 for 1960 and the calculation proceeds correctly from there, but the value is not reset to 1 for 1960 in m_id 2.

How can I tell Stata to restart the calculation whenever the year is 1960?
A second issue: some observations are missing in 1960. Should I drop this year and start from 1961, or is there another way to account for this?
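One common approach (a sketch, assuming the panel is identified by m_id and year) is to compute the 1960 base within each municipality and divide by it:

Code:
* sketch: normalize arrivals to the 1960 value within each municipality
egen base = max(cond(year == 1960, arrivals, .)), by(m_id)
gen arr_base100 = 100 * arrivals / base

Municipalities with no 1960 observation get a missing base; an alternative would be to use the earliest available year per municipality, e.g. bysort m_id (year): gen g = arrivals / arrivals[1].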

Thank you very much!

Luis

Accounting and Finance

Hi, I am working with two groups: high-intangible and low-intangible firms. My objective is to find the absolute error difference between the two groups for 4 valuation models, so that I can regress this difference against intangible assets to learn whether intangible assets explain the difference between the two groups. The issue is that each time I try to compute the difference, the values for the low-intangible group become missing. How do I overcome this?
I used the code:
gen diff = model_high - Model_low
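If the two groups sit in separate observations rather than separate variables, the subtraction compares different rows and yields missings. A sketch of one workaround, assuming hypothetical variables firm_pair, group (coded "high"/"low"), and abs_error, is to reshape wide first:

Code:
* sketch with hypothetical variable names
reshape wide abs_error, i(firm_pair) j(group) string
gen diff = abs_errorhigh - abs_errorlow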

Outreg2 1st Stage IV Regression Results

Hello,

I am new to Stata and have produced an IV regression using the ivregress 2sls command.

This command displays the first-stage regression followed by the second-stage regression. I want to extract the first-stage regression table to a Word document and used the outreg2 command to do this, yet outreg2 only extracts the last regression performed, which is the second stage. Is there any way to extract just the first-stage regression to a Word document using outreg2?
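One workaround (a sketch, with hypothetical names y, x_endog, z, and exogenous control x1) is to estimate the first stage explicitly with regress and export that:

Code:
* sketch: run the first stage by hand, then export it
regress x_endog z x1                 // first stage: endogenous variable on instrument + controls
outreg2 using firststage.doc, replace
ivregress 2sls y x1 (x_endog = z)    // second stage as before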

Chris

Winsorizing Data

I have a number of variables that I want to winsorize at the 1% and 99% levels.
Can someone guide me on how to do this?
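One common route is the community-contributed winsor2 command; a sketch:

Code:
ssc install winsor2
winsor2 var1 var2 var3, cuts(1 99) suffix(_w)   // creates winsorized copies var1_w, ...
* or overwrite the originals in place:
* winsor2 var1 var2 var3, cuts(1 99) replace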

Help in editing (replacing some values) a variable in panel data

I have a panel survey dataset. In it, one variable has a few wrongly entered values. For example, once an establishment has started in a particular year (starting_year), starting_year is not supposed to vary; in a few years, however, it has been entered as 0. I used the following commands to replace the wrongly entered values with the correct ones, but the replacement does not happen:

egen tag = tag(panel_id starting_year)

egen ndistinct = total(tag), by(panel_id)

bysort panel_id (starting_year) : replace starting_year = starting_year[_N] if missing(inityr) & ndistinct == 1

Stata's result:
(0 real changes made)

An analogous data file is attached. I would really appreciate the help.
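For comparison, a sketch that targets the 0 entries directly, assuming the correct value is the unique nonzero starting_year within each panel_id:

Code:
* sketch: propagate the nonzero starting_year within each establishment
egen good_year = max(cond(starting_year > 0, starting_year, .)), by(panel_id)
replace starting_year = good_year if starting_year == 0
drop good_year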

Significance of the main effects when main effects aren't significant but interaction is

Dear Statalisters,

Hi, I have a problem interpreting the significance of the main effects and the interaction term.

I have panel data on X1 and Y for each country over a few decades, and I have been analyzing the relationship between these two variables. I hypothesized that X1 does not have a significant effect on Y for the first few years, but after that it starts to show a significant and positive effect on Y. Hence I created a dummy variable X2 that equals 1 if more than 3 years have passed and 0 otherwise, and introduced an interaction term between X1 and X2. The results show a significant and positive effect of the interaction term but an insignificant effect of X1. From this I can see that X1 does not have a significant effect when X2 is 0, and that the effect of X1 increases significantly when X2 becomes 1. However, from these results I cannot tell whether the effect of X1 is significant when X2 is 1.

Would it be possible for me to know the significance of the effect of X1 conditional on X2 being 1?
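Yes: if the interaction entered the model with factor notation, the conditional effect and its test come straight from lincom or margins. A sketch (model details hypothetical):

Code:
* sketch: effect of X1 conditional on X2 == 1
xtreg Y c.X1##i.X2, fe               // controls omitted for brevity
lincom _b[X1] + _b[1.X2#c.X1]        // combined effect of X1 when X2 = 1
margins, dydx(X1) at(X2 = (0 1))     // same idea, at both values of X2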

Best,
Umito

Using frequency weights on graph bar to produce weighted averages.

Hello. I am working with a dataset that has the number of births and number of preterm births in different facilities in different districts. I want to show each district's preterm birth rate on an hbar graph. The first way (and more intuitive way, to me) I tried this was to just create variables pretermtotal and totaldels as the total of preterm and total births, respectively, and then divide them to get the district preterm birth rate.

I then thought, maybe it's more efficient to create one preterm birth variable, which in my code is pretermrate2 - the preterm birth rate for each individual observation - and then graph the mean of pretermrate2 using each observation's total deliveries as the frequency weight, which would in effect give me a weighted average. If it works I could cut out two lines of code and create fewer new variables.

The problem is, when I run both versions of this code, the final graphs show preterm birth numbers that are slightly different. In most cases they are off by between .05 and .5, and in only one case is the number the same. I suspect this problem lies in the ado-file handling of Stata weights, but I'm really not sure how to find out if that's true, and running this method on different data gave the same numbers for both graphs.

If anyone knows why one kind of code produces different numbers than the other, I would greatly appreciate it!
* note - in the sample code I gave, the final graphed averages are a bit farther apart than when using the full unedited dataset


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str10 district int(pregpreterm delinst)

"York" 10 158
"York" 18 272
"York" 11 155
"York" 13 153
"York" 12 206
"York" 14 321
"York" 12 215
"York" 12 222
"York" 14 194
"York" 18 208
"Jersey" 15 220
"Jersey" 18 299
"Jersey" 12 146
"Jersey" 16 175
"Jersey" 10 181
"Jersey" 13 179
"Jersey" 12 175
"Jersey" 15 274
"Jersey" 17 189
"Jersey"  9 160
"Jersey" 12 139
"Jersey" 16 210
"Jersey" 14 171
"Jersey" 14 207
"Jersey"  . 114
"Jersey"  .  84
"Jersey"  .  69
"Jersey"  .  88
"Jersey"  .  75
"Guernsey"  1  89
"Guernsey"  1 138
"Guernsey"  .  55
"Guernsey"  .  96
"Guernsey"  .  59
"Guernsey"  . 102
"Guernsey"  .  66
"Guernsey"  1  76
"Guernsey"  1  92
"Guernsey"  1 114
"Guernsey"  .  67
"Guernsey"  1  72
"Guernsey"  1 103
"Guernsey"  .  44
"Guernsey"  . 122
"Guernsey"  . 117
"Guernsey"  . 135
"Guernsey"  .  57
"Guernsey"  1  73
"Mersey"  .  35
"Mersey"  .  59
"Mersey"  .  31
"Mersey"  1  37
"Mersey"  .  46
"Mersey"  .  37
"Mersey"  1  32
"Mersey"  1  37
"Mersey"  .  46
"Mersey"  .  40
"Mersey"  .  35
"Mersey"  .  48
"Mersey"  .  34
"Mersey"  .  53
"Mersey"  .  50
"Mersey"  .  44
"Mersey"  .  35
"Mersey"  .  52
"Mersey"  1  41
"Mersey"  1  21
"Mersey"  .  32
"Mersey"  1  41
"Mersey"  .  56
"Mersey"  .  20
"Mersey"  .  94
"Mersey"  5 145
"Mersey"  . 117
"Mersey"  5 107
"Mersey"  0  83
"Mersey"  . 106
"Mersey"  2  78
"Mersey"  3  83
"Mersey"  2 101
"Mersey"  3 152
"Percy"  2 102
"Percy"  0  61
"Percy"  0 152
"Percy"  5 192
"Percy"  5  95
"Percy"  .  97
"Percy"  5 103
"Percy"  3 132
"Percy"  3  67
"Percy"  3  64
"Percy"  3  65
"Percy"  . 128
"Percy"  5 138
"Percy"  5  92
"Percy"  0  45
"Percy"  .  40
"Percy"  .  49
"Percy"  .  53
end

        tempfile g1
        tempfile g2
        
        bys district: egen pretermtotal=total(pregpreterm), missing
        bys district: egen totaldels=total(delinst), missing
        gen pretermrate1=100*pretermtotal/totaldels
        sum delinst, d
        local myn `r(sum)'
        graph hbar pretermrate1, over(district) blabel(bar, ///
            size(small)) title("Preterm birth rate by district") ytitle("rate of preterm births") ///
            note("Source: MP HMIS data for FY '17-'18 and '18-'19, n = `:di %-12.0fc `myn''") ///
            bargap(40) saving(`g1', replace)
        
        gen pretermrate2=100* pregpreterm/delinst
        graph hbar (mean) pretermrate2 [fw=delinst], over(district) blabel(bar, ///
            size(small)) title("Preterm birth rate by district") ytitle("rate of preterm births") ///
            note("Source: MP HMIS data for FY '17-'18 and '18-'19, n = `:di %-12.0fc `myn''") ///
            bargap(40) saving(`g2', replace)
        
        graph combine "`g1'" "`g2'" // compare output from both methods
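A plausible source of the discrepancy: observations with missing pregpreterm still contribute their deliveries to totaldels in the first method, but are dropped entirely from the [fw=delinst] mean in the second. A sketch to check this, restricting the denominator to rows with nonmissing pregpreterm:

Code:
* sketch: denominator over nonmissing-preterm rows only
bys district: egen dels_nonmiss = total(cond(missing(pregpreterm), ., delinst))
gen pretermrate3 = 100 * pretermtotal / dels_nonmiss
* pretermrate3 should line up with the [fw=delinst] mean of pretermrate2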

xtprobit with time invariant dependent variable

Hey,

I have a sample of start-ups and want to check whether the probability of an exit (IPO or M&A) increases if a particular type of investor participates.

I have a cross-sectional data set with the start-ups, the investor type (A, B, and C), a dummy which equals one if the start-up had an exit, and the total funding amount. Furthermore, I have another data set with the funding rounds of each company and the money raised in the specific round.

Right now I run:

Code:
probit exit investor_A investor_B total_funding_amount i.firstfundingyear, vce(robust)
Because I have the funding rounds and the specific year of each round, I also thought about a panel regression. On the other hand, exit is time invariant, always 0 or 1 for a given company. The investor type is also time invariant, and the funding amount, i.e. the money raised, is the only thing that changes over time.

Does it make sense to run xtprobit if the dependent variable is time invariant? If yes, how can I implement it in Stata?

Kind regards
Alex

extracting data from numbers

Dear Statalisters,
I have an Excel file with dates like 29.02.1980, 30.03.1980, etc. I appended "Y" in front of every date in the Excel file and then imported it into Stata; I had to append the "Y" because I have to reshape the data (from wide to long) in Stata.
I used code like:
gen year = year(date) - 60 // getting YEAR from the date
gen month = month(date) // getting MONTH from the date
With year everything worked fine, but with month I am getting wrong values. What could be the reason for that, and how can I fix it?
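The month() function expects a numeric Stata daily date, not a raw number or string. A sketch, assuming the imported variable is a string such as "Y29.02.1980" (hypothetical name datestr):

Code:
* sketch: strip the leading Y and convert to a Stata daily date
gen ddate = daily(substr(datestr, 2, .), "DMY")
format ddate %td
gen year  = year(ddate)
gen month = month(ddate)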

Dropping duplicates randomly

Hi


I'd like to drop duplicates randomly instead of just the first duplicate observation.
A snapshot of my data set: Array


Each patent-invt_id has several co_invt_id. I want to keep only one co_invt_id but picked randomly.

I found the following code on the predecessor of statalist:
Code:
 bys varnames  : gen rnd = uniform()
bys varnames (rnd) : keep if _n == 1
Does it make sense? (I'm not very familiar with Stata syntax.) I can execute it on my dataset, but because I have over 1 million observations it's hard to verify that duplicates were indeed dropped randomly. Any feedback would be welcome.
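A modern, reproducible version of that snippet (uniform() is the old name for runiform(), and set seed makes the random pick repeatable; patent and invt_id stand in for your actual identifiers):

Code:
set seed 12345
bysort patent invt_id : gen rnd = runiform()
bysort patent invt_id (rnd) : keep if _n == 1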

Can I overlay frequency bar and proportion line graph ?


Hi. Thanks as always to Statalist.

I have a binary variable, outcome (1: positive, 0: negative), and a categorical variable, grBMI.

First I drew the frequency graph and the proportion graph separately.

drop if outcome==0
histogram grBMI, frequency discrete xsize(7) ysize(4) scale(1.4) ylabel(0 (100) 600,labsize(small)) xlabel(15 18.5 23 25 30 40 50, labsize(small)) color(navy) fintensity(inten40) lcolor(gr1) lwidth(vvthin) ytitle("Frequency of Outcome(n)") b1title("BMI")

use "C:\Users\heebyung\data.dta", clear
egen mean=mean(outcome),by(grBMI)
gen mean100=100*mean
twoway bar mean100 grBMI if grBMI <55, color(navy)fintensity(inten40) lcolor(gr1) lwidth(vvthin) xsize(7) ysize(4) scale(1.4) xlabel(15 18.5 23 25 30 40 50) xtitle("BMI") ytitle("Prevalence of outcome(%)")



I want to make an overlaid graph that shows both the frequency (bar) and the proportion (connected line) of the outcome over grBMI, like the attached example.

I hope someone can help me.

Thank you.
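One way to overlay the two (a sketch, reusing mean100 from the code above plus a frequency variable built with egen; axis choices are illustrative):

Code:
* sketch: bar for frequency (left axis) plus connected line for prevalence (right axis)
egen freq = count(outcome) if outcome == 1, by(grBMI)
twoway (bar freq grBMI if grBMI < 55, color(navy) fintensity(inten40) yaxis(1)) ///
       (connected mean100 grBMI if grBMI < 55, yaxis(2)), ///
       ytitle("Frequency of outcome (n)", axis(1)) ///
       ytitle("Prevalence of outcome (%)", axis(2)) xtitle("BMI")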

how to export results of group regressions and subsequent calculations using runby

I need to run panel regressions with fixed effects multiple times. Regressions are run (1) for the dependent variables ret1_1, ret1_2, ret1_3, ret1_4, respectively; (2) for group aumrank = 3, 4, respectively; (3) for group amirank = 1, 2, respectively.
If I didn't need the nlcom command, this could be done via statsby. However, statsby does not seem to support multiple subsequent commands, so I tried runby but couldn't make it work.
Could someone please advise? The code and sample data are below.


global datapath `"D:..."'
display "$datapath"

/******code of sample data******************/
clear
input long dt float ret byte(news unsche sche) float(to ami) byte jump float(ret1_1 ret1_2 ret1_3 ret1_4 int_retami int_retto int_retsche int_retunsche) int year byte(etfnumid aumrank amirank)
1609666200 -.002180594 1 1 0 .001124295 .005375426 0 .002527068 .001685422 .00173495 .002368695 -.000011722 -2.45e-06 0 -.002180594 2011 4 3 2
1609667100 .002527068 1 1 0 .000629071 .0111055 0 -.000841647 -.000792118 -.000158374 .001038704 .0000280644 1.59e-06 0 .002527068 2011 4 3 2
1609670700 .001197078 0 0 0 .0001091 .03030171 0 -.0008952 -.000098878 -.000890252 .0000939248 .0000362735 1.31e-07 0 0 2011 4 3 2
1609677000 .000098956 1 0 1 .000108709 .002515881 0 -.000593883 -.000217716 -.000217716 -.001089055 2.49e-07 1.08e-08 .000098956 0 2011 4 3 2
1609688700 -.000495909 0 0 0 .000259986 .005285485 0 -.001515249 -.002973243 -.004960328 -.004274337 -2.62e-06 -1.29e-07 0 0 2011 4 3 2
1609752600 -.001515249 1 0 1 .000452209 .009275963 0 -.001457994 -.003445078 -.002759088 -.003440106 -.000014055 -6.85e-07 -.001515249 0 2011 4 3 2
1609753500 -.001457994 1 0 1 .002042456 .001979022 0 -.001987085 -.001301094 -.001982112 -.001092408 -2.89e-06 -2.98e-06 -.001457994 0 2011 4 3 2
1609754400 -.001987085 0 0 0 .000165913 .03326962 0 .00068599 4.97e-06 .000894677 -.0007412 -.00006611 -3.30e-07 0 0 2011 4 3 2
1609755300 .00068599 0 0 0 .00010033 .01898024 0 -.000681018 .000208686 -.00142719 -.000238552 .0000130203 6.88e-08 0 0 2011 4 3 2
1609756200 -.000681018 0 0 0 .000789788 .00239528 0 .000889704 -.000746172 .000442466 .000989063 -1.63e-06 -5.38e-07 0 0 2011 4 3 2
1609757100 .000889704 0 0 0 .000361377 .006832943 0 -.001635876 -.000447238 .0000993591 -.000894677 6.08e-06 3.22e-07 0 0 2011 4 3 2
1609758000 -.001635876 0 0 0 .000343836 .013226113 0 .001188638 .001735235 .0007412 -.000751709 -.000021636 -5.62e-07 0 0 2011 4 3 2
1609758900 .001188638 1 0 1 .000417686 .007901614 0 .000546597 -.000447438 -.001940347 -.002817226 9.39e-06 4.96e-07 .001188638 0 2011 4 3 2
1609759800 .000546597 1 0 1 .000184962 .008200959 0 -.000994036 -.002486945 -.003363823 -.003483457 4.48e-06 1.01e-07 .000546597 0 2011 4 3 2
1609760700 -.000994036 1 0 1 .000189598 .014563915 0 -.001492909 -.002369787 -.002489421 -.002798542 -.000014477 -1.88e-07 -.000994036 0 2011 4 3 2
1609761600 -.001492909 1 1 0 .0000598849 .06935453 0 -.000876878 -.000996512 -.001305633 -.000498132 -.00010354 -8.94e-08 0 -.001492909 2011 4 3 2
1609762500 -.000876878 1 1 0 .000314452 .00776471 0 -.000119634 -.000428754 .000378746 .001225423 -6.81e-06 -2.76e-07 0 -.000876878 2011 4 3 2
1609763400 -.000119634 1 1 0 .0000847997 .003928725 0 -.000309121 .00049838 .001345057 .001275358 -4.70e-07 -1.01e-08 0 -.000119634 2011 4 3 2
1609764300 -.000309121 1 1 0 .0000668678 .012877663 0 .000807501 .001654178 .001584478 .002301153 -3.98e-06 -2.07e-08 0 -.000309121 2011 4 3 2
1692287100 -.002313923 0 0 0 .000130497 .024277354 0 0 .00212581 .002724087 .002492538 -.000056176 -3.02e-07 0 0 2013 8 3 2
1692523800 0 1 0 1 .000250993 0 0 .00212581 .002724087 .002492538 .001565808 0 0 0 0 2013 8 3 2
1692524700 .00212581 1 0 1 .000117518 .02476693 0 .000598277 .000366728 -.000560002 -.000560002 .0000526498 2.50e-07 .00212581 0 2013 8 3 2
1692525600 .000598277 0 0 0 9.79e-06 .08364372 0 -.000231548 -.001158279 -.001158279 -.000791345 .0000500421 5.86e-09 0 0 2013 8 3 2
1692526500 -.000231548 0 0 0 .0000273759 .01157619 0 -.00092673 -.00092673 -.000559797 -.002651189 -2.68e-06 -6.34e-09 0 0 2013 8 3 2
1692527400 -.00092673 0 0 0 .0000660284 .01922725 0 0 .000366934 -.001724458 -.002338015 -.000017818 -6.12e-08 0 0 2013 8 3 2
1692528300 . 0 0 0 . . 0 .000366934 -.001724458 -.002338015 -.001747678 . . . . 2013 8 3 2
1692529200 .000366934 1 0 1 .0000934752 .005375589 0 -.002091392 -.002704949 -.002114611 -.002242327 1.97e-06 3.43e-08 .000366934 0 2013 8 3 2
1692530100 -.002091392 1 0 1 .0000760993 .03771361 0 -.000613557 -.000023219 -.000150935 -.000286409 -.000078874 -1.59e-07 -.002091392 0 2013 8 3 2
1692531000 -.000613557 1 0 1 .0000368794 .02284443 0 .000590338 .000462622 .000327148 .001157121 -.000014016 -2.26e-08 -.000613557 0 2013 8 3 2
1692531900 .000590338 1 0 1 .0000407801 .019865755 0 -.000127716 -.000263189 .000566783 7.74e-06 .0000117275 2.41e-08 .000590338 0 2013 8 3 2
1692612000 .001062853 1 1 0 .000156028 .00933105 0 .000096567 .00187945 .002025952 .002604043 9.92e-06 1.66e-07 0 .001062853 2013 8 3 2
1692612900 .000096567 1 1 0 .0000551064 .002400191 0 .001782883 .001929385 .002507476 .001352889 2.32e-07 5.32e-09 0 .000096567 2013 8 3 2
1692613800 .001782883 1 1 0 .000124255 .01961789 0 .000146503 .000724593 -.000429993 .000223601 .0000349764 2.22e-07 0 .001782883 2013 8 3 2
1692614700 .000146503 0 0 0 .0000301418 .006644412 0 .00057809 -.000576496 .000077098 .000221641 9.73e-07 4.42e-09 0 0 2013 8 3 2
1692615600 .00057809 0 0 0 .0000735461 .010739053 0 -.001154587 -.000500992 -.00035645 -.000366085 6.21e-06 4.25e-08 0 0 2013 8 3 2
1609932600 -.000771374 1 0 1 .00287378 .0000311048 0 .000257191 -.000171497 -.000171497 .000771374 -2.40e-08 -2.22e-06 -.000771374 0 2011 9 4 1
1609933500 .000257191 0 0 0 .001743209 .0000170927 0 -.000428688 -.000428688 .000514183 .000633264 4.40e-09 4.48e-07 0 0 2011 9 4 1
1609934400 -.000428688 0 0 0 .002290391 .0000216932 0 0 .000942871 .001061952 0 -9.30e-09 -9.82e-07 0 0 2011 9 4 1
1609935300 0 0 0 0 .000890945 0 0 .000942871 .001061952 0 0 0 0 0 0 2011 9 4 1
1609936200 .000942871 0 0 0 .001725612 .000063269 0 .000119081 -.000942871 -.000942871 -.000342759 5.97e-08 1.63e-06 0 0 2011 9 4 1
1609937100 .000119081 0 0 0 .001120097 .0000123089 0 -.001061952 -.001061952 -.000461841 .000822918 1.47e-09 1.33e-07 0 0 2011 9 4 1
1609938000 -.001061952 0 0 0 .002189755 .0000562083 0 0 .000600112 .00188487 .00171365 -5.97e-08 -2.33e-06 0 0 2011 9 4 1
1609938900 0 0 0 0 .001325774 0 0 .000600112 .00188487 .00171365 .001628037 0 0 0 0 2011 9 4 1
1609939800 .000600112 0 0 0 .000935329 .0000743187 0 .001284759 .001113538 .001027925 -8.57e-07 4.46e-08 5.61e-07 0 0 2011 9 4 1
1609940700 .001284759 0 0 0 .000942965 .000157615 0 -.000171221 -.000256833 -.001285616 -.001627636 2.02e-07 1.21e-06 0 0 2011 9 4 1
1609941600 -.000171221 0 0 0 .001539384 .0000128693 0 -.000085613 -.001114395 -.001456415 -.000428137 -2.20e-09 -2.64e-07 0 0 2011 9 4 1
1609942500 -.000085613 0 0 0 .000573713 .0000172675 0 -.001028782 -.001370802 -.000342524 -.001199349 -1.48e-09 -4.91e-08 0 0 2011 9 4 1
1609943400 -.001028782 0 0 0 .001452696 .0000820313 0 -.00034202 .000686258 -.000170567 .000686258 -8.44e-08 -1.49e-06 0 0 2011 9 4 1
1609944300 -.00034202 0 0 0 .002366901 .0000167437 0 .001028278 .000171453 .001028278 .001028278 -5.73e-09 -8.10e-07 0 0 2011 9 4 1
1609945200 .001028278 0 0 0 .001476578 .0000806096 0 -.000856825 0 0 3.42e-08 8.29e-08 1.52e-06 0 0 2011 9 4 1
1609946100 -.000856825 0 0 0 .003282025 .0000302451 0 .000856825 .000856825 .000856859 -.000684008 -2.59e-08 -2.81e-06 0 0 2011 9 4 1
1609947000 .000856825 0 0 0 .001758913 .0000563872 0 0 3.42e-08 -.001540832 -.00008554 4.83e-08 1.51e-06 0 0 2011 9 4 1
1609947900 0 0 0 0 .00512868 0 0 3.42e-08 -.001540832 -.00008554 .000171057 0 0 0 0 2011 9 4 1
1610013600 .001455293 1 1 0 .00358266 .0000469632 0 .000256597 -.001369629 -.001969432 -.002055147 6.83e-08 5.21e-06 0 .001455293 2011 9 4 1
1610014500 .000256597 1 1 0 .002846506 .0000104193 0 -.001626225 -.002226028 -.002311744 -.003770033 2.67e-09 7.30e-07 0 .000256597 2011 9 4 1
1610015400 -.001626225 1 1 0 .003627558 .0000519008 0 -.000599803 -.000685518 -.002143807 -.003088541 -8.44e-08 -5.90e-06 0 -.001626225 2011 9 4 1
1610016300 -.000599803 1 1 0 .004677037 .0000148561 0 -.000085716 -.001544004 -.002488738 -.002488738 -8.91e-09 -2.81e-06 0 -.000599803 2011 9 4 1
1610017200 -.000085716 0 0 0 .004742384 2.09e-06 0 -.001458289 -.002403022 -.002403022 -.003004681 -1.79e-10 -4.06e-07 0 0 2011 9 4 1
1610018100 -.001458289 0 0 0 .00370888 .0000456183 0 -.000944733 -.000944733 -.001546392 -.00266472 -6.65e-08 -5.41e-06 0 0 2011 9 4 1
1610019000 -.000944733 0 0 0 .005087634 .0000215646 0 0 -.000601659 -.001719987 -.003322234 -2.04e-08 -4.81e-06 0 0 2011 9 4 1
1610019900 0 0 0 0 .002857035 0 0 -.000601659 -.001719987 -.003322234 -.002494947 0 0 0 0 2011 9 4 1
1610020800 -.000601659 0 0 0 .000934937 .0000747787 0 -.001118328 -.002720575 -.001893288 -.001979432 -4.50e-08 -5.63e-07 0 0 2011 9 4 1
1610021700 -.001118328 0 0 0 .002839654 .0000458141 0 -.001602247 -.00077496 -.000861104 .0000860696 -5.12e-08 -3.18e-06 0 0 2011 9 4 1
1610022600 -.001602247 0 0 0 .024852196 7.51e-06 0 .000827287 .000741143 .001688317 .002548606 -1.20e-08 -.000039819 0 0 2011 9 4 1
1610023500 .000827287 0 0 0 .00608566 .0000158263 0 -.000086144 .00086103 .001721319 .001119194 1.31e-08 5.03e-06 0 0 2011 9 4 1
1610024400 -.000086144 0 0 0 .00283245 3.54e-06 0 .000947174 .001807463 .001205338 .001807463 -3.05e-10 -2.44e-07 0 0 2011 9 4 1
1610025300 .000947174 1 1 0 .003012407 .0000365741 0 .000860289 .000258165 .000860289 .001290156 3.46e-08 2.85e-06 0 .000947174 2011 9 4 1
1610026200 .000860289 1 1 0 .003948831 .0000253198 0 -.000602125 0 .000429867 .000773602 2.18e-08 3.40e-06 0 .000860289 2011 9 4 1
1610027100 -.000602125 1 1 0 .003316151 .0000211153 0 .000602125 .001031992 .001375727 .001461674 -1.27e-08 -2.00e-06 0 -.000602125 2011 9 4 1
1610028000 .000602125 0 0 0 .003355994 .0000208521 0 .000429867 .000773602 .00085955 .000945464 1.26e-08 2.02e-06 0 0 2011 9 4 1
1610028900 .000429867 0 0 0 .003425706 .0000145775 0 .000343735 .000429683 .000515597 .000601504 6.27e-09 1.47e-06 0 0 2011 9 4 1
1610029800 .000343735 0 0 0 .001377267 .0000289837 0 .0000859475 .000171862 .000257769 .001459692 9.96e-09 4.73e-07 0 0 2011 9 4 1
1610030700 .0000859475 0 0 0 .001588052 6.28e-06 0 .0000859143 .000171821 .001373745 .001459541 5.40e-10 1.36e-07 0 0 2011 9 4 1
1610031600 .0000859143 0 0 0 .00146709 6.80e-06 0 .000085907 .00128783 .001373627 .00085692 5.84e-10 1.26e-07 0 0 2011 9 4 1
1610032500 .000085907 0 0 0 .003985741 2.50e-06 0 .001201923 .00128772 .000771013 -.003199301 2.15e-10 3.42e-07 0 0 2011 9 4 1
1610033400 .001201923 0 0 0 .004587512 .0000303819 0 .0000857964 -.00043091 -.004401225 -.002586734 3.65e-08 5.51e-06 0 0 2011 9 4 1
1610034300 .0000857964 0 0 0 .005537546 1.80e-06 0 -.000516707 -.004487021 -.002672531 -.003017894 1.54e-10 4.75e-07 0 0 2011 9 4 1
1610272800 .001814491 0 0 0 .004811677 .0000439974 0 -.000345363 .000604099 .000172637 0 7.98e-08 8.73e-06 0 0 2011 9 4 1
1610273700 -.000345363 0 0 0 .0095684 4.21e-06 0 .000949463 .000518001 .000345364 .001121996 -1.45e-09 -3.30e-06 0 0 2011 9 4 1
1610274600 .000949463 0 0 0 .004819976 .0000229688 0 -.000431462 -.000604099 .000172533 .000603735 2.18e-08 4.58e-06 0 0 2011 9 4 1
1610275500 -.000431462 0 0 0 .003594783 .0000140011 0 -.000172637 .000603995 .001035197 .002413786 -6.04e-09 -1.55e-06 0 0 2011 9 4 1
1610276400 -.000172637 1 0 1 .002481471 8.12e-06 0 .000776632 .001207834 .002586423 .002155824 -1.40e-09 -4.28e-07 -.000172637 0 2011 9 4 1
1610277300 .000776632 1 0 1 .002075605 .0000436216 0 .000431202 .001809791 .001379192 .002068058 3.39e-08 1.61e-06 .000776632 0 2011 9 4 1
1610278200 .000431202 1 0 1 .001699338 .0000295695 0 .001378589 .00094799 .001636856 .00180902 1.28e-08 7.33e-07 .000431202 0 2011 9 4 1
1610279100 .001378589 1 0 1 .003114678 .0000515071 0 -.000430599 .000258267 .000430431 -.000175657 7.10e-08 4.29e-06 .001378589 0 2011 9 4 1
1610280000 -.000430599 0 0 0 .001723057 .0000290941 0 .000688866 .00086103 .000254942 .001033147 -1.25e-08 -7.42e-07 0 0 2011 9 4 1
1610280900 .000688866 0 0 0 .004363455 .000018367 0 .000172164 -.000433924 .000344281 .000258226 1.27e-08 3.01e-06 0 0 2011 9 4 1
1610281800 .000172164 0 0 0 .001737181 .000011528 0 -.000606088 .000172117 .0000860622 .0000869228 1.98e-09 2.99e-07 0 0 2011 9 4 1
1610282700 -.000606088 1 1 0 .001463373 .0000482061 0 .000778205 .00069215 .000693011 .000175666 -2.92e-08 -8.87e-07 0 -.000606088 2011 9 4 1
1610283600 .000778205 1 1 0 .001080038 .0000837989 0 -.000086055 -.000085194 -.000602539 -.001119291 6.52e-08 8.40e-07 0 .000778205 2011 9 4 1
1610284500 -.000086055 1 1 0 .002473997 4.05e-06 0 8.61e-07 -.000516484 -.001033236 -.000431246 -3.48e-10 -2.13e-07 0 -.000086055 2011 9 4 1
1610285400 8.61e-07 0 0 0 .000893851 1.12e-07 0 -.000517345 -.001034096 -.000432107 -.000345154 9.64e-14 7.69e-10 0 0 2011 9 4 1
1610286300 -.000517345 0 0 0 .000687728 .0000875404 0 -.000516751 .0000852383 .000172191 .000946685 -4.53e-08 -3.56e-07 0 0 2011 9 4 1
1610287200 -.000516751 0 0 0 .002884728 .0000208567 0 .00060199 .000688943 .001463436 .001377411 -1.08e-08 -1.49e-06 0 0 2011 9 4 1
1610288100 .00060199 0 0 0 .000938586 .0000746316 0 .0000869528 .000861446 .000775421 .001549382 4.49e-08 5.65e-07 0 0 2011 9 4 1
1610289000 .0000869528 0 0 0 .000827923 .0000122198 0 .000774493 .000688468 .001462429 .000774493 1.06e-09 7.20e-08 0 0 2011 9 4 1
1610289900 .000774493 0 0 0 .001850534 .000048658 0 -.000086025 .000687935 0 -.0004302 3.77e-08 1.43e-06 0 0 2011 9 4 1
1610290800 -.000086025 0 0 0 .002227571 4.49e-06 0 .000773961 .0000860252 -.000344175 .000170229 -3.86e-10 -1.92e-07 0 0 2011 9 4 1
1610291700 .000773961 0 0 0 .001962225 .0000458253 0 -.000687935 -.001118135 -.000603732 -.001437198 3.55e-08 1.52e-06 0 0 2011 9 4 1
1610292600 -.000687935 0 0 0 .001778497 .0000449706 0 -.0004302 .0000842033 -.000749263 -.000172974 -3.09e-08 -1.22e-06 0 0 2011 9 4 1
end

/*********single regression with nlcom*****/
use "$datapath\subtest.dta",clear
xtset etfnumid dt
set matsize 5400
xtreg ret1_4 ret int_retsche int_retunsche sche unsche int_retto to int_retami ami i.year if jump ==1,fe vce(robust)

nlcom (sche:_b[ret]+_b[int_retsche ]) (unsche:_b[ret]+_b[int_retunsche ])(unsche_sche:_b[int_retunsche]-_b[int_retsche])
return list
mat b1=r(b)
mat v1=r(V)

local sche=b1[1,1]
local sesche=sqrt(v1[1,1])
local zsche = `sche'/`sesche'
local psche = 2*normal(-abs(`zsche'))

local unsche=b1[1,2]
local seunsche=sqrt(v1[2,2])
local zunsche = `unsche'/`seunsche'
local punsche = 2*normal(-abs(`zunsche'))


outreg2 using fe5.doc, keep(ret int_retsche int_retunsche sche unsche int_retto to int_retami ami) nocons /*
*/adds(schen, `sche', psche,`psche', unschen, `unsche', punsche,`punsche') /*
*/ tstat bdec(4) tdec(2) rdec(4) adec(4) addtext(Firm FE, Yes, Year FE, Yes)
/************************************************** **/



//////////////////////group regression without nlcom using statsby//////////////////////////////
foreach depid of numlist 1/4 {
foreach groupid in aumrank amirank{
use "$datapath\subtest.dta",clear
xtset etfnumid dt
set matsize 5400

di "`groupid'"
sort `groupid'
statsby _b _se Rsq=e(r2) adjRsq=e(r2_a) nobs=e(N) , by(`groupid') saving($datapath\by_`groupid'.dta, replace):/*
*/xtreg ret1_`depid' ret int_retsche int_retunsche sche unsche int_retto to int_retami ami i.year if jump ==1,fe vce(robust)


use "$datapath\by_`groupid'.dta",clear
cls
local regid "5"
gen regid= "`regid'"
gen depid= "`depid'"
gen groupid= "`groupid'"
rename `groupid' rank
keep _b_* _se_* _eq2* regid depid groupid rank
save group_`groupid'_`regid'_`depid'toami.dta, replace
}
}
///////////////////////////////////////////////////////////////////////////////////


/******************test code using runby with bugs, not working**********************************/

capture program drop group_regression
program define group_regression
use "$datapath\subtest.dta",clear
xtset etfnumid dt
set matsize 5400
xtreg ret1_`depid' ret int_retsche int_retunsche sche unsche int_retto to int_retami ami i.year if jump ==1,fe vce(robust)

nlcom (sche:_b[ret]+_b[int_retsche ]) (unsche:_b[ret]+_b[int_retunsche ])(unsche_sche:_b[int_retunsche]-_b[int_retsche])
return list
mat b1=r(b)
mat v1=r(V)

local sche=b1[1,1]
local sesche=sqrt(v1[1,1])
local zsche = `sche'/`sesche'
local psche = 2*normal(-abs(`zsche'))

local unsche=b1[1,2]
local seunsche=sqrt(v1[2,2])
local zunsche = `unsche'/`seunsche'
local punsche = 2*normal(-abs(`zunsche'))


rename `groupid' rank
keep _b_* _se_* _eq2* rank , `sche', `psche', `unsche', `punsche'

exit

end

foreach depid of numlist 1/4 {
use "$datapath\subtest.dta",clear
xtset etfnumid dt
set matsize 5400
local groupid aumrank
di "`groupid'"
local regid "5"


runby group_regression, by(`groupid') status
save group_`groupid'_`regid'_`depid'toami.dta, replace }



/************************************************** **/
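A likely issue with the runby attempt above: a program called by runby receives each by-group already loaded in memory and must not -use- a new dataset itself; whatever data the program leaves behind is what runby assembles into the results. A minimal skeleton of that idea (hypothetical and untested against the real data):

Code:
capture program drop group_regression
program define group_regression
    // runby has already put one by-group in memory; do not -use- here
    xtreg ret1_4 ret int_retsche int_retunsche sche unsche int_retto to ///
        int_retami ami i.year if jump == 1, fe vce(robust)
    nlcom (sche:_b[ret]+_b[int_retsche]) (unsche:_b[ret]+_b[int_retunsche])
    matrix b1 = r(b)
    keep in 1                         // one row per group to collect results
    gen double sche_eff   = b1[1,1]
    gen double unsche_eff = b1[1,2]
end

use "$datapath\subtest.dta", clear
xtset etfnumid dt
runby group_regression, by(aumrank) status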



Comparing two datasets and identify mismatch


Hi,

I am working on two datasets which contain the same variables but different numbers of observations.
Using cf _all using mydata, verbose, I got the following answer: master has X observations; using X. r(9).
I wonder how I can identify the observations that mismatch between the two datasets.
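One way (a sketch, assuming a hypothetical key variable id that identifies observations in both files) is merge, which flags mismatches in _merge:

Code:
* sketch with a hypothetical key variable id
use mydata1, clear
merge 1:1 id using mydata2
tab _merge
list id if _merge != 3   // observations present in only one dataset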

Regards,
Krisztina

reproducible reports from stata to lyx

Is there a command to reproduce or copy tables from Stata into LyX (LyX is a front end for LaTeX)?
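One possibility (a sketch using the community-contributed estout package, which writes LaTeX tables that LyX can import or \input{} via an ERT box):

Code:
ssc install estout
sysuse auto, clear
regress price mpg weight
esttab using mytable.tex, replace   // LaTeX table for use in LyX/LaTeX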

A question about SVY data propoetion

Good day everybody.

I am trying to calculate the proportion of one of the survey variables, over a specific variable. That would be something like:
svy: prop perm3, over (pain)
but now, I want to have this calculation in a specific age (not for all of the observations)
So, I tried to do this:
svy: prop perm3, over (pain), if (age < 12)
Apparently this is not valid, and I can't add if at the end of this code when I am using the svy prefix. (It gives an answer, but it's wrong.)

I've read the svy manual, but didn't find any solution.
Any idea to help with this issue?

Thank you in advance
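For domain estimation the svy machinery provides subpop() rather than if: an if restriction drops observations from the design, which can misstate the standard errors, while subpop() restricts estimation but keeps the full design for variance estimation. A sketch:

```stata
* estimate the proportion within the under-12 domain while
* retaining the complete survey design for variance estimation
svy, subpop(if age < 12): proportion perm3, over(pain)
```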

Testing for selection bias

Hello everyone,

I was wondering if anyone knows how to test for differences in characteristics between people who dropped out of the study and people who remained. I know that you should use a chi-squared test for categorical/binary variables and a t-test for continuous ones. However, when I try to compare the group that remained with the group that dropped out using tab ..., chi, Stata says there are no observations, even though there should be 7000. Hopefully, someone can help me with this.
Thank you.

Elvire Landstra
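The "no observations" message usually means the dropout indicator or the characteristic being tested is missing for the observations in question, or the comparison is run within a subset that excludes one group. A minimal sketch, assuming a 0/1 variable dropout that is nonmissing for all 7000 subjects (all variable names here are placeholders):

```stata
* categorical characteristic: chi-squared test across dropout status
tab dropout female, chi2 row

* continuous characteristic: two-sample t-test
ttest age, by(dropout)

* check for missingness that could empty the cross-tabulation
misstable summarize dropout female age
```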

Generalized Diff-in-Diff with different treatment starts

Hello,

I am new to Statalist, but I have used the forum previously to find help, and it worked. However, I have now come to a point where I am stuck.

For my work, I want to estimate the effect of protected areas (PA) on tourism development (arrivals and overnight stays) in all 116 municipalities of South Tyrol. There are 8 nature parks (PA) which have been established in different years. I use panel data for arrivals and overnight stays in 116 municipalities from 1961 - 2018 (annually):

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int m_id str26 municipality int year long(arrivals overnights) byte PA float(log_arr arr_base100 log_arrb)
  6 "ABTEI"   1961   7522   71120 0  8.925588       100 4.6051702
  6 "ABTEI"   1962  13237  165871 0  9.490771 175.97713  5.170354
  6 "ABTEI"   1963  15657  191603 0  9.658673 208.14943  5.338256
  6 "ABTEI"   1964  14396  184650 0  9.574706 191.38527  5.254289
  6 "ABTEI"   1965  15928  184252 0  9.675834  211.7522  5.355417
  6 "ABTEI"   1966  19376  216295 0  9.871791 257.59106  5.551373
  6 "ABTEI"   1967  19704  213352 0  9.888577  261.9516   5.56816
  6 "ABTEI"   1968  19857  220955 0  9.896312 263.98566  5.575895
  6 "ABTEI"   1969  23981  251514 0 10.085017  318.8115    5.7646
  6 "ABTEI"   1970  26459  277988 0 10.183352 351.75485  5.862935
  6 "ABTEI"   1971  31739  320192 0   10.3653  421.9489  6.044884
  6 "ABTEI"   1972  39973  427143 0  10.59596  531.4145  6.275542
  6 "ABTEI"   1973  38053  413686 0 10.546735  505.8894  6.226318
  6 "ABTEI"   1974  39263  441310 0 10.578038  521.9755  6.257621
  6 "ABTEI"   1975  41858  449302 0 10.642038 556.47437  6.321621
  6 "ABTEI"   1976  46712  477531 0 10.751757  621.0051  6.431339
  6 "ABTEI"   1977  53115  544028 0 10.880215  706.1287  6.559797
  6 "ABTEI"   1978  54583  556876 1 10.907477  725.6448   6.58706
  6 "ABTEI"   1979  61419  571494 1 11.025475  816.5248  6.705057
  6 "ABTEI"   1980  64724  621425 1 11.077888  860.4626   6.75747
  6 "ABTEI"   1981  61869  602025 1 11.032775  822.5073  6.712358
  6 "ABTEI"   1982  70020  647032 1 11.156536  930.8694  6.836119
  6 "ABTEI"   1983  75717  700907 1 11.234758 1006.6073  6.914341
  6 "ABTEI"   1984  77979  706885 1 11.264194 1036.6791  6.943778
  6 "ABTEI"   1985  82779  773536 1  11.32393  1100.492  7.003512
  6 "ABTEI"   1986  90312  784212 1 11.411026 1200.6382  7.090609
  6 "ABTEI"   1987  93510  763064 1 11.445824 1243.1534  7.125407
  6 "ABTEI"   1988  99995  824433 1 11.512876 1329.3672  7.192458
  6 "ABTEI"   1989  87205  683006 1 11.376017 1159.3326    7.0556
  6 "ABTEI"   1990  92841  709036 1 11.438643 1234.2595  7.118227
  6 "ABTEI"   1991 112045  841027 1 11.626656  1489.564  7.306239
  6 "ABTEI"   1992 112577  828805 1 11.631392 1496.6365  7.310976
  6 "ABTEI"   1993 114378  818302 1 11.647264 1520.5796  7.326847
  6 "ABTEI"   1994 118910  852218 1 11.686122 1580.8296  7.365705
  6 "ABTEI"   1995 121083  857737 1  11.70423  1609.718  7.383814
  6 "ABTEI"   1996 125680  884659 1 11.741494 1670.8323  7.421077
  6 "ABTEI"   1997 122908  834693 1 11.719192 1633.9803  7.398774
  6 "ABTEI"   1998 125639  826321 1 11.741168  1670.287  7.420751
  6 "ABTEI"   1999 118384  784853 1  11.68169 1573.8368  7.361272
  6 "ABTEI"   2000 122733  789364 1 11.717767 1631.6538  7.397349
  6 "ABTEI"   2001 127152  812215 1 11.753139 1690.4015  7.432721
  6 "ABTEI"   2002 128549  822323 1 11.764066 1708.9736  7.443648
  6 "ABTEI"   2003 147372  901245 1 11.900715  1959.213  7.580298
  6 "ABTEI"   2004 148577  913264 1 11.908858 1975.2327  7.588441
  6 "ABTEI"   2005 154307  920089 1   11.9467 2051.4092  7.626282
  6 "ABTEI"   2006 152078  901103 1  11.93215  2021.776  7.611732
  6 "ABTEI"   2007 159660  937788 1 11.980802 2122.5737  7.660385
  6 "ABTEI"   2008 164169  964797 1 12.008652  2182.518  7.688234
  6 "ABTEI"   2009 169124  974105 1 12.038387 2248.3914   7.71797
  6 "ABTEI"   2010 176108 1018866 1 12.078853  2341.239  7.758436
  6 "ABTEI"   2011 176865 1009760 1 12.083142 2351.3027  7.762725
  6 "ABTEI"   2012 185005 1042815 1 12.128139  2459.519  7.807721
  6 "ABTEI"   2013 186238 1037318 1  12.13478  2475.911  7.814363
  6 "ABTEI"   2014 178510  977337 1   12.0924  2373.172  7.771983
  6 "ABTEI"   2015 190071 1012389 1 12.155153  2526.868  7.834736
  6 "ABTEI"   2016 208679 1091140 1 12.248552  2774.249  7.928135
  6 "ABTEI"   2017 213764 1097517 1 12.272628 2841.8506  7.952211
  6 "ABTEI"   2018 231154 1174720 1  12.35084  3073.039  8.030422
106 "AHRNTAL" 1961   1450   19199 0  7.279319       100 4.6051702
106 "AHRNTAL" 1962   1634   22648 0  7.398786 112.68965 4.7246375
106 "AHRNTAL" 1963   2752   36431 0  7.920083  189.7931  5.245934
106 "AHRNTAL" 1964   5282   52196 0   8.57206 364.27585  5.897912
106 "AHRNTAL" 1965   6313   73572 0  8.750366  435.3793  6.076218
106 "AHRNTAL" 1966   5443   60594 0  8.602086  375.3793  5.927937
106 "AHRNTAL" 1967   5364   65807 0  8.587465   369.931  5.913317
106 "AHRNTAL" 1968   6162   91599 0  8.726156  424.9655  6.052008
106 "AHRNTAL" 1969   7754  111594 0  8.955964  534.7586  6.281816
106 "AHRNTAL" 1970   9796  139704 0   9.18973  675.5862  6.515581
106 "AHRNTAL" 1971  10325  128624 0  9.242324   712.069  6.568175
106 "AHRNTAL" 1972  15321  189095 0   9.63698 1056.6207  6.962831
106 "AHRNTAL" 1973  20621  233070 0  9.934065  1422.138  7.259917
106 "AHRNTAL" 1974  25634  265451 0 10.151675  1767.862  7.477526
106 "AHRNTAL" 1975  29417  316031 0 10.289328 2028.7587   7.61518
106 "AHRNTAL" 1976  30294  312285 0 10.318705 2089.2415  7.644557
106 "AHRNTAL" 1977  34326  332213 0  10.44366 2367.3103   7.76951
106 "AHRNTAL" 1978  42504  397605 0 10.657353   2931.31  7.983205
106 "AHRNTAL" 1979  51702  475314 0 10.853251  3565.655  8.179103
106 "AHRNTAL" 1980  58479  517524 0 10.976423 4033.0344  8.302275
106 "AHRNTAL" 1981  60352  515650 0  11.00795  4162.207    8.3338
106 "AHRNTAL" 1982  57786  511556 0   10.9645 3985.2415  8.290353
106 "AHRNTAL" 1983  57291  464915 0   10.9559 3951.1035   8.28175
106 "AHRNTAL" 1984  60481  484940 0 11.010084 4171.1035  8.335936
106 "AHRNTAL" 1985  62218  490578 0   11.0384 4290.8965  8.364251
106 "AHRNTAL" 1986  68791  544121 0 11.138828  4744.207   8.46468
106 "AHRNTAL" 1987  69395  549393 0  11.14757 4785.8623  8.473421
106 "AHRNTAL" 1988  72740  570428 1 11.194647  5016.552  8.520498
106 "AHRNTAL" 1989  75714  587214 1 11.234718  5221.655   8.56057
106 "AHRNTAL" 1990  77451  577036 1   11.2574  5341.448  8.583252
106 "AHRNTAL" 1991  86199  620580 1 11.364414  5944.759  8.690266
106 "AHRNTAL" 1992  81578  595178 1 11.309315  5626.069  8.635166
106 "AHRNTAL" 1993  83797  599009 1 11.336152  5779.104 8.6620035
106 "AHRNTAL" 1994  83926  600698 1  11.33769      5788  8.663542
106 "AHRNTAL" 1995  86989  630703 1 11.373537  5999.241 8.6993885
106 "AHRNTAL" 1996  88198  635526 1  11.38734  6082.621  8.713191
106 "AHRNTAL" 1997  82290  571886 1 11.318005  5675.172  8.643856
106 "AHRNTAL" 1998  78051  545615 1 11.265118  5382.828  8.590969
106 "AHRNTAL" 1999  83301  585461 1 11.330215  5744.896  8.656067
106 "AHRNTAL" 2000  89884  601053 1 11.406275  6198.896  8.732126
106 "AHRNTAL" 2001  99748  647582 1 11.510403  6879.172  8.836253
106 "AHRNTAL" 2002 102753  673877 1 11.540083  7086.414  8.865934
end
label values PA label_PA
label def label_PA 0 "pre", modify
label def label_PA 1 "post", modify


Because the treatment starts at different points in time, I want to estimate a generalized DD model with the following form:

Y_it = a_i + b_t + Delta * PA_it + e_it

The treatment indicator 'PA' = 1 if a municipality is treated AND observed after the respective park establishment. So 'PA' indicates whether treatment is on (POST) or off (PRE) in a given observation. (In the dataex example, AHRNTAL became adjacent to a nature park in 1988, whereas ABTEI became adjacent in 1978.) a and b stand for municipality and time fixed effects.

My first question:
1) Which command best fits this model? (I will only show for arrivals to shorten things here) So far, I used the following but I am not entirely sure with interpretation:

xtreg arrivals i.year i.PA#i.year, fe r --> coefficient for each year gives me the additional change in arrivals for treatment in each year?

xtreg arrivals c.year i.PA#c.year, fe r --> coefficient over all years gives me the average additional increase in arrivals for the treatment?


If this is right, then my second concern matters and I'll post about that but otherwise I'll have to do the estimation with another command again.
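For reference, the textbook two-way fixed-effects version of the model above enters PA directly rather than interacted with year; its coefficient is then the average effect of park adjacency across all post-establishment years, identified off the staggered switch-on dates. A sketch using the variables from the dataex excerpt, with clustering at the municipality level (an assumption, not something specified in the post):

```stata
* generalized DD: PA switches from 0 to 1 at each municipality's
* park-establishment year; i.year absorbs common time shocks and
* the fixed effects absorb time-invariant municipality differences
xtset m_id year
xtreg log_arr i.PA i.year, fe vce(cluster m_id)
```

The i.PA#i.year interactions in the first specification instead trace out year-by-year differences between treated and untreated observations, which is closer to an event-study reading than to a single average effect.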

I use Stata 13 and I hope I have been as precise as possible.

Thank you very much and kind regards,

Luis Meier

Procedure to estimate effects with repeated binary outcomes after propensity score

Hello. I'm using Stata 15.1.

I have a dataset of 88 observations in the treatment group and need to find a control group. Since there are some variables with respect to which I want to enforce a perfect match, I selected the control group through "calipmatch" (Stepner and Garland, 2017), creating the "match" variable equal to 1 for the first match, ..., N for the Nth match. After selecting the control group, I want to estimate effects on repeated binary outcomes ("month" being the count variable). I have thought about this strategy:

1) xtgee outcome i.month group, family(binomial) eform

to get population-averaged effects,

and

2) melogit cont_bin i.month group || match: || id:, or

to get individual-level effects.

Is this the correct strategy? I've found here:

https://www.statalist.org/forums/for...=1563786760018

the suggestion to introduce a match-level random effect in the case of a continuous outcome, where there are no problems of non-collapsibility, so I guess this only has to do with the standard error.

I have found (with a fake dataset) that such an approach leads to an odds ratio further from the null (1), consistent with the behaviour of the conditional (vs. marginal) odds ratio (OR). Thus, I think 1) is the model to estimate marginal ORs and 2) the model to estimate conditional ORs.

I think that melogit cont_bin i.month group || id:, or would lead to an estimate intermediate between the two, but not corresponding to any relevant quantity, because it would ignore that the control group has been selected through matching, so that observations are paired. Is this correct?
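For comparison, the two estimators described can be written side by side; the variable names follow the post (group and match are assumed to be the treatment indicator and the matched-pair identifier, and id the subject identifier):

```stata
* (1) marginal (population-averaged) odds ratios via GEE
xtset id month
xtgee outcome i.month i.group, family(binomial) link(logit) eform vce(robust)

* (2) conditional odds ratios with match- and subject-level
*     random intercepts (subjects nested within matched sets)
melogit outcome i.month i.group || match: || id:, or
```

The GEE model averages over the random effects, so its OR is the marginal one; the melogit model conditions on them, which is why its OR sits further from 1 under non-collapsibility.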

Michael Stepner & Allan Garland, 2017. "CALIPMATCH: Stata module for caliper matching without replacement," Statistical Software Components S458342, Boston College Department of Economics.

