
-markstat-: How to include .docx template?

Dear all

I am starting to use the user-written -markstat- command and I am interested in generating .docx documents. My question is: How can I change the default template markstat is using?

I have worked with R Markdown before, where a header along the lines of:

Code:
---
title: Fantastic Manuscript
author: Go Natak
date: April 14, 2020
output:
  word_document:
    reference_docx: template.dotx
bibliography: bibliography.bib
---
would create a Word document based on the styles defined in template.dotx, but this does not seem to work with markstat. Or am I just doing it wrong?
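For what it's worth, markstat drives pandoc under the hood, so one workaround (a sketch with hypothetical file names, not markstat's documented template mechanism) is to rebuild the document with pandoc and a reference document; note that pandoc's --reference-doc generally expects a .docx rather than a .dotx:

Code:
* sketch with hypothetical file names; assumes -markstat- leaves its
* intermediate Markdown file and that pandoc is on the system path
markstat using manuscript, docx
shell pandoc manuscript.md -o manuscript.docx --reference-doc=template.docx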

Thanks
Go

Difference-in-differences trend line

Hello,

I would like to ask for advice on how to plot the parallel trends (a critical assumption) for a difference-in-differences analysis. I have a firm-level data set covering 50+ countries from 1990 to 2000. Some countries adopted a law at different points in time; for example, France implemented it in 1995 and Belgium in 1998. More than half of the countries (the control group) have no such law in place. I am going to include year fixed effects and firm fixed effects. Because of these fixed effects, the treatment-group dummy and the post-treatment dummy get dropped. Thus, the model will look like this:

y_it = beta0 + beta1*treatmentinplace_it + gamma*X_it + alpha_i (firm FE) + delta_t (year FE) + e_it

where i = firm, t = year, X = a set of controls, and treatmentinplace = 1 if a country has the law in effect in a given year.

Before I carry out the analysis, however, I need to make sure that the parallel trends assumption holds. Usually this is checked visually with a chart, but I am not sure how to plot these trends effectively since the treatment takes place in different years for different countries. My first thought is to take the mean of the y variable for each country in a given year and plot those means, with the x-axis representing the number of years since treatment. For example, for France, 1995 would be 0, 1994 would be -1, 1996 would be 1; for Belgium, 1998 would be 0, 1999 would be 1, and so on. The problem with this approach is that it is difficult to include the control group (where no such law exists) on this x-axis.
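A minimal sketch of that relative-time idea, assuming variables country, year, y, and an adoptyear variable (the year the law was adopted, missing for never-treated countries):

Code:
* minimal sketch; adoptyear is missing for never-treated countries
gen event_time = year - adoptyear                 // years relative to adoption
preserve
drop if missing(event_time)                       // controls have no event time
collapse (mean) y, by(event_time)
twoway connected y event_time, xline(0) ///
    xtitle("Years since law adopted") ytitle("Mean of y")
restore
* controls have no natural event time; one common workaround is to plot
* their mean of y by calendar year in a separate panel as a benchmark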

If you have any comments, I would really appreciate them. Thank you!

New variable, with value based on other variables

hi - newbie question

I'm trying to generate a new variable whose values are based on other variables.

I'm working on a medical database and I have two variables (lab test and result) as shown:

ID  Lab test       Result
1   WBC            12.3
1   Blood culture  S. aureus

I would like to generate a third variable called "WBC"
and set its value to 12.3 for ID #1:

ID  Lab test       Result     WBC
1   WBC            12.3       12.3
1   Blood culture  S. aureus


What commands do I need to use in order to do that? Can anyone chip in?

I really appreciate your time in advance.
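A sketch, assuming the variables are strings named labtest and result (the result column mixes numbers and text, so it must be a string):

Code:
* sketch: pull the numeric WBC result into its own variable
gen double wbc = real(result) if labtest == "WBC"
* optionally copy the value onto every row of the same patient:
bysort id (wbc): replace wbc = wbc[1] if missing(wbc)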

Plotting panel data

Hi,
I am working with panel data where I have collapsed my observations; year is the time variable and treatment is the panel variable, a 0/1 dummy. When I plot the graph, the legend labels are still shown as treatment 0 and treatment 1. What I want to do is rename these to Control and Treatment. If anybody knows the code, that would be helpful.
This is what I have come up with so far.

xtline employed, overlay i(treatment) t(year) title(Employment rate)
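A sketch: xtline takes its legend text from the value labels of the panel variable, so attaching labels before plotting should do it (or override the legend keys directly):

Code:
* attach value labels to the panel variable before plotting
label define treat_lbl 0 "Control" 1 "Treatment"
label values treatment treat_lbl
xtline employed, overlay i(treatment) t(year) title(Employment rate)
* if the labels still do not show, override the legend keys instead:
* xtline employed, overlay i(treatment) t(year) legend(order(1 "Control" 2 "Treatment"))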

/David

Labeling keys in a heatplot

Hi

I am working with the heatplot command and I have a figure that looks a bit like this:

Code:
webuse nhanes2, clear
heatplot height age weight
[figure: heatplot of height by age and weight]

I realized that there are some ways to manipulate the legend on the right-hand side, but here's my question: Is there a way to remove the numbers that label the keys in the legend and replace them with words, e.g. to put the word "Tall" at the top (instead of 191.23) and the word "Short" at the bottom?
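I cannot confirm this from the heatplot documentation, but since heatplot builds a standard Stata legend, an untested sketch is to override the legend keys directly; the key indices below are assumptions that depend on how many cut points the legend contains:

Code:
webuse nhanes2, clear
* untested sketch: the key indices (here 13 and 1) are assumptions and
* depend on how many legend keys heatplot creates for the color ramp
heatplot height age weight, legend(order(13 "Tall" 1 "Short"))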

Thanks so much
Nora

RESET test (ovtest)

I am running a Ramsey RESET test in Stata to assess whether OLS is the best linear unbiased estimator and to examine the impact of key omitted variables on our regression. I am new to Stata and I have run the tests (attached), but I am unsure how to interpret the ovtest output; I checked online but could not find any material to help me.
Any help will be appreciated
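For reference, estat ovtest runs the Ramsey RESET test: the null hypothesis is that the model has no omitted higher-order terms, i.e. that powers of the fitted values add no explanatory power, so a small p-value is evidence of misspecification (possibly omitted variables or a wrong functional form). A self-contained example:

Code:
* Ramsey RESET after OLS: H0 = no omitted higher-order terms
sysuse auto, clear
regress price mpg weight
estat ovtest    // small p-value -> reject H0, evidence of misspecification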

Is there a quicker way to generate a variable?

This thread continues from the previous thread below:

https://www.statalist.org/forums/for...pping-interval


I'd like to create a lower and an upper bound for a variable (lambda).

I first compute lambda in Excel with the formula lambda = a^(1-r); the results are in Table 1.
Then, also in Excel, I create the interval of lambda for each "number of accepted options", shown in Table 2.

Table 1:
Hypothetical option   a     r=2.91  r=1.96  r=0.66  r=0.31
1                     3     0.12    0.35    1.45    2.13
2                     2     0.27    0.51    1.27    1.61
3                     1.5   0.46    0.68    1.15    1.32
4                     1.2   0.71    0.84    1.06    1.13
5                     1     1.00    1.00    1.00    1.00
6                     0.86  1.34    1.16    0.95    0.90

Table 2: for each h_safeoption there is one corresponding value of r
h_accepted  h_safeoption=3 (r=2.91)  h_safeoption=2 (r=1.96)  h_safeoption=1 (r=0.66)  h_safeoption=0 (r=0.31)
0           λ ≤ 0.12                 λ ≤ 0.35                 λ ≥ 1.45                 λ ≥ 2.13
1           0.12 < λ ≤ 0.27          0.35 < λ ≤ 0.51          1.27 ≤ λ < 1.45          1.61 ≤ λ < 2.13
2           0.27 < λ ≤ 0.46          0.51 < λ ≤ 0.68          1.15 ≤ λ < 1.27          1.32 ≤ λ < 1.61
3           0.46 < λ ≤ 0.71          0.68 < λ ≤ 0.84          1.06 ≤ λ < 1.15          1.13 ≤ λ < 1.32
4           0.71 < λ ≤ 1             0.84 < λ ≤ 1             1 ≤ λ < 1.06             1 ≤ λ < 1.13
5           1 < λ ≤ 1.34             1 < λ ≤ 1.16             0.95 ≤ λ < 1             0.90 ≤ λ < 1
6           λ > 1.34                 λ > 1.16                 λ < 0.95                 λ < 0.90

Then I generate lambda1h (the minimum) in Stata with the following commands, and repeat the same pattern for lambda2h (the maximum):

gen lambda1h=0.12 if h_safeoption==3&h_accepted==1
replace lambda1h=0.27 if h_safeoption==3&h_accepted==2
replace lambda1h=0.46 if h_safeoption==3&h_accepted==3
replace lambda1h=0.71 if h_safeoption==3&h_accepted==4
replace lambda1h=1 if h_safeoption==3&h_accepted==5
replace lambda1h=1.34 if h_safeoption==3&h_accepted==6

replace lambda1h=0.35 if h_safeoption==2&h_accepted==1
replace lambda1h=0.51 if h_safeoption==2&h_accepted==2
replace lambda1h=0.68 if h_safeoption==2&h_accepted==3
replace lambda1h=0.84 if h_safeoption==2&h_accepted==4
replace lambda1h=1 if h_safeoption==2&h_accepted==5
replace lambda1h=1.16 if h_safeoption==2&h_accepted==6

replace lambda1h=1.45 if h_safeoption==1&h_accepted==0
replace lambda1h=1.27 if h_safeoption==1&h_accepted==1
replace lambda1h=1.15 if h_safeoption==1&h_accepted==2
replace lambda1h=1.06 if h_safeoption==1&h_accepted==3
replace lambda1h=1 if h_safeoption==1&h_accepted==4
replace lambda1h=0.95 if h_safeoption==1&h_accepted==5

replace lambda1h=2.13 if h_safeoption==0&h_accepted==0
replace lambda1h=1.61 if h_safeoption==0&h_accepted==1
replace lambda1h=1.32 if h_safeoption==0&h_accepted==2
replace lambda1h=1.13 if h_safeoption==0&h_accepted==3
replace lambda1h=1 if h_safeoption==0&h_accepted==4
replace lambda1h=0.9 if h_safeoption==0&h_accepted==5

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(h_accepted h_safeoption)
0 3
0 3
0 3
5 1
6 4
4 4
0 1
4 1
3 1
4 0
6 0
6 0
2 2
2 0
3 0
6 0
3 4
0 3
2 0
0 4
0 4
0 3
0 4
0 1
0 4
0 1
0 4
4 0
0 0
0 4
6 0
3 0
2 0
1 0
0 3
0 0
5 4
6 3
6 4
1 3
1 3
0 3
4 0
0 0
0 3
0 1
6 0
6 2
2 0
0 3
3 4
5 0
1 2
0 4
1 3
2 1
2 3
3 0
4 1
0 3
0 0
2 0
5 0
0 4
0 4
5 1
0 4
2 0
0 4
6 0
6 4
3 1
0 4
4 0
6 0
3 4
1 3
0 0
4 0
0 1
0 4
5 2
1 1
4 2
4 0
1 0
4 3
6 0
2 0
6 0
4 1
6 0
6 0
6 0
3 0
6 0
2 0
6 1
0 4
6 4
end


Is there a faster way to create lambda1h and lambda2h?
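A merge-based sketch: type Table 2 once as a small crosswalk dataset and merge it on, instead of one replace per cell. Only the h_safeoption==3 rows are typed out below; the remaining rows follow the same pattern.

Code:
* sketch: build a crosswalk of the bounds once, then merge m:1 on both keys
preserve
clear
input byte(h_safeoption h_accepted) double(lambda1h lambda2h)
3 0    .  .12
3 1  .12  .27
3 2  .27  .46
3 3  .46  .71
3 4  .71    1
3 5    1 1.34
3 6 1.34    .
end
* ... add the rows for h_safeoption 2, 1, and 0 from Table 2 the same way
tempfile bounds
save `bounds'
restore
merge m:1 h_safeoption h_accepted using `bounds', nogen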

Thank you,

CDC forum

Is it just me, or do others also see the CDC forum on this board (screenshot)? I don't recall it being there before.

Regardless, is there a feature for creating one's own forum sections for private discussions? If so, how do I do it?

Thank you, Sergiy



Creating observations from already existing observations in Stata


Dear all,

I have a Stata dataset in which I want to create new observations by splitting an existing observation.

dist           TA_mphc          ta_ta    Total  aLess_1  a1_4
Lilongwe City  Area 12          Area 12  1209   18       71
Lilongwe City  Area 13          Area 13  8717   146      540
Lilongwe City  Area 14/Area 32  Area 14  8180   800      560

I want to split TA_mphc = "Area 14/Area 32" in two, Area 14 and Area 32 separately, so that each of the two new observations carries half the values of the other variables (Total, aLess_1 and a1_4), as in the last two rows below.

dist           TA_mphc          ta_ta    Total  aLess_1  a1_4
Lilongwe City  Area 12          Area 12  1209   18       71
Lilongwe City  Area 13          Area 13  8717   146      540
Lilongwe City  Area 14/Area 32  Area 14  8180   800      560
Lilongwe City  Area 14/Area 32  Area 14  4090   400      280
Lilongwe City  Area 14/Area 32  Area 32  4090   400      280

Kindly help.
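A sketch using expand, assuming TA_mphc and ta_ta are strings and that the combined row is kept alongside the two halves, as in the example output:

Code:
* sketch: duplicate each combined row twice, then relabel and halve the copies
expand 3 if strpos(TA_mphc, "/"), generate(dup)
bysort dist TA_mphc (dup): gen byte piece = sum(dup)   // 0 = original, 1/2 = halves
replace ta_ta = trim(substr(TA_mphc, 1, strpos(TA_mphc, "/") - 1)) if piece == 1
replace ta_ta = trim(substr(TA_mphc, strpos(TA_mphc, "/") + 1, .)) if piece == 2
foreach v of varlist Total aLess_1 a1_4 {
    replace `v' = `v'/2 if piece > 0
}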

Thanks in advance for your help,

Peter

Shifting up by a column

Hi all. Currently I have two values of y for each country: y in 1988 and y in 2014. My goal is to calculate the ratio of y2014 to y1988 for each country, so I have separated the y values into y1988 and y2014. However, I'm not sure how to shift y2014 up by one row so that I can delete the repeated country entries and calculate the ratio for each country.
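A reshape-based sketch (assuming variables country, year, and y) that avoids shifting cells altogether:

Code:
* sketch: one row per country with both years side by side
keep country year y
reshape wide y, i(country) j(year)     // creates y1988 and y2014
gen ratio = y2014 / y1988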

Can someone please help me with this? Thanks in advance!

How to write a loop in Stata

Hello,

I have a question about how to write a loop in Stata. Say I have 1,000 observations with id from 1 to 1000, and I want to divide them into 50 groups and assign a group id (gid) from 1 to 50. I can do this by typing:

replace gid = 1 if id<=20
replace gid = 2 if id>20 & id<=40
replace gid = 3 if id>40 & id<=60
replace gid = 4 if id>60 & id<=80
replace gid = 5 if id>80 & id<=100
replace gid = 6 if id>100 & id<=120
...
replace gid = 50 if id>980 & id<=1000

This works, but it requires too much typing. Can anyone help me write a short loop to achieve this? Thanks in advance.
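No loop is actually needed here: with consecutive ids, integer arithmetic builds the 50 groups of 20 in one line; an equivalent forvalues loop is shown for comparison.

Code:
* one line: ids 1-20 -> group 1, 21-40 -> group 2, ..., 981-1000 -> group 50
gen gid = ceil(id/20)

* the same thing as an explicit loop
gen gid2 = .
forvalues g = 1/50 {
    replace gid2 = `g' if id > (`g' - 1)*20 & id <= `g'*20
}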

Pooled OLS: correcting for autocorrelation

Hello, everyone.
I am working on a pooled OLS model (chosen after running xttest0). I then conducted diagnostic tests:
regress $ylist $xlist, vce(cluster id)
estat vif
estat ovtest

All of these tests gave me the expected results, except for the Wooldridge test for autocorrelation, xtserial $ylist $xlist.

Stata gave me:

. xtserial $ylist $xlist

Wooldridge test for autocorrelation in panel data
H0: no first-order autocorrelation
F( 1, 16) = 1432.673
Prob > F = 0.0000
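A common follow-up when xtserial firmly rejects H0 (a sketch, not the only option): either rely on cluster-robust standard errors, which remain valid under within-panel serial correlation and are already part of the vce(cluster id) specification above, or model the AR(1) disturbance directly, e.g. with xtregar if a within estimator is acceptable.

Code:
* cluster-robust SEs remain valid under serial correlation within panels
regress $ylist $xlist, vce(cluster id)

* or model first-order autocorrelation in the errors directly
xtregar $ylist $xlist, fe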

Thank you.

How to find and identify increase in a variable based on the first value of the variable?

I am working with data where I need to create a variable, "required".

I have id, time, and sentiment as variables. Based on the first value of sentiment for each id, I want to see when sentiment first increases by at least 0.25 (or by 25%) relative to that first value. As soon as that happens, "required" should be 1; otherwise it should be missing.

The tricky part is that I need to identify only the first such increase. For example, for id 1, the values of sentiment at times 5 and 6 are 1.25 and 1.4 respectively; although sentiment increased in both instances, I am only interested in the first increase.


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte(id time) double sentiment byte required
1  1    1 .
1  2    1 .
1  3  1.2 .
1  4 1.22 .
1  5 1.25 1
1  6  1.4 .
1  7  1.2 .
1  8  1.3 .
1  9  1.4 .
1 10  1.2 .
2  1  1.2 .
2  2   .9 .
2  3    . .
2  4   .8 .
2  5  .95 .
2  6    1 .
2  7  1.5 1
2  8  1.2 .
2  9  1.3 .
2 10   .7 .
end


I have written some simple code, but I am not sure about its reliability for more than 50,000 ids and more than 10 million observations. Plus it looks very messy.

Code:
generate a = sentiment if time == 1 // this generates the first value of sentiment
bysort id: replace a = a[_n-1] if a ==. // now the first value of sentiment is carried forward to populate "a"
generate b = sentiment - a // this will let me know if the value is greater than 0.25 or not.
generate c = 1 if b >= 0.25 & !missing(sentiment) // This will generate value of 1 if "b" is equal to or greater than 0.25. Furthermore, it will ignore any missing value in the original sentiment
// QUESTION: for "c" is there any way to find the increase by lets say 25%? not by a specific number like 0.25?

by id (time), sort: gen d = sum(c) // this will sum my non missing values
by id: gen e = d if d == 1  & d[_n - 1] != d // this will give me answer of 1 for my first occurence
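A more compact sketch of the same logic, which also covers the 25% relative rule (use base + 0.25 instead of base*1.25 for the absolute version):

Code:
* compact sketch: flag only the first observation where sentiment is at
* least 25% above each id's first value
bysort id (time): gen double base = sentiment[1]
by id: gen byte up = sentiment >= base*1.25 & !missing(sentiment)
by id: gen required2 = up & sum(up) == 1    // 1 at the first qualifying row only
replace required2 = . if required2 == 0     // missing elsewhere, as in the example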
Any help would be greatly appreciated.
Thanks

Effect of sanctions: three cross-sectional data sets

Good day, everyone.
I am fairly new to Stata, so my apologies if my question is rather basic.

I would like to analyze the effect of sanctions imposed by the European Union in year X on trade flows between two countries, A and B, in years X+1 and X+2. (Country B does not sanction country A.)
As raw data, I have three separate cross sections of the trade flows of all countries worldwide (years X, X+1, X+2).
The goods are separated into different categories; some of these are sanctioned by the European Union and some are not.

My aim is to analyze whether, after the sanctions went into effect in year X, countries A and B trade relatively more with each other in years X+1 and X+2 in sanctioned goods compared with non-sanctioned goods.

Since I am new to the program, I have encountered several problems in my attempts to analyze this; perhaps one of you would be so kind as to help with these issues.

a) How would you use the three different datasets (for years X, X+1, X+2)? Would you just append them, or would you suggest a better alternative? (A sketch follows below.)
b) I am unsure about the number of dummy variables I should create. Currently I have dummies for (1) sanctioned goods, (2) country A, and (3) country B. Would you suggest creating others?
c) How would you analyze the difference between sanctioned and non-sanctioned goods?
If the questions are too basic or unclear, my apologies again.

I would be very grateful for any help.
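For (a), a sketch with hypothetical file and variable names: tag each cross section with a year marker before appending; for (b)/(c), the comparison of interest is a triple interaction of "sanctioned good", "A-B pair", and "post-sanction year".

Code:
* sketch for (a): stack the three cross sections with a year tag
use trade_yearX, clear                 // hypothetical file names
gen relyear = 0
append using trade_yearX1
replace relyear = 1 if missing(relyear)
append using trade_yearX2
replace relyear = 2 if missing(relyear)

* sketch for (b)/(c): triple-interaction dummy (variable names assumed)
gen byte ab_pair = (reporter == "A" & partner == "B")
gen byte post    = relyear > 0
gen byte did     = sanctioned * ab_pair * post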

IV 2SLS on a categorical dependent variable

Hi, I have a more general econometric question.

I am trying to estimate the causal effect of retirement on health using IV 2SLS. Retirement is binary, and health is categorical. I am using OLS in both stages, even though this does not provide the most efficient results. But I am wondering how to interpret the coefficient when I use OLS on a categorical outcome.

E.g., I have a variable taking on 12 categories, 1, 2, ..., 12, counting the number of specific diseases. When I use Stata's -regress- command in the second stage to estimate the effect of retirement on this health variable, what is the interpretation?

If the variable were continuous instead, it would be straightforward. If it were binary, it would also be straightforward using a linear probability model. But I am in doubt when my variable is categorical.

(Extra question: to get more efficient estimates, I have been told I could use a control-function method instead of IV. This method is newer and more complicated, but is there a built-in command in Stata for it?)
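As a side note (a sketch with hypothetical variable names: health = disease count, retired = endogenous regressor, eligible = instrument): running the two stages manually by OLS gets the second-stage standard errors wrong, so ivregress 2sls is the usual one-step route; and for a count-like outcome Stata does ship a control-function estimator, ivpoisson cfunction, though its assumptions for a binary endogenous regressor deserve checking.

Code:
* one-step 2SLS (correct standard errors)
ivregress 2sls health age female (retired = eligible), vce(robust)

* built-in control-function estimator for count outcomes
ivpoisson cfunction health age female (retired = eligible)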

Thank you.

Problem with rdplot

Hi all,

When using rdplot from the rdrobust package, Stata can plot this graph:

rdplot y s , c(4) nbins(24 16) ///
graph_options(title(Bin size of = 0.25)) ///

But it cannot plot this one; it doesn't give me any kind of error, it just executes the command without drawing anything:

rdplot y s , c(4) nbins(12 8) ///
graph_options(title(Bin size of = 0.5)) ///

The output is:

rdplot y s , c(4) nbins(12, 8) graph_options(title(Bin size of = 0.5)) ///

end of do-file

Afterwards, I cannot draw even the first graph. Weird.

How is this possible?
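Two things stand out (my reading, not verified against the full do-file): the echoed command shows nbins(12, 8) with a comma, while nbins() expects a space-separated pair; and the command ends with a trailing ///, which tells Stata to continue the line, so the next command gets absorbed silently. A cleaned-up sketch:

Code:
* nbins() takes a space-separated pair, and no trailing /// on the last line
rdplot y s, c(4) nbins(12 8) ///
    graph_options(title("Bin size of 0.5"))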

How to deal with variables where many values are reported as "less than detection threshold"?

Hi Statalisters,

In clinical medicine it is very common for a machine to have a detection threshold. For example, if the detection threshold for one indicator is 100, the values the machine reports could look like this:

Code:
321.24
320.26
298.54
254.56
216.87
180.65
156.47
123.15
118.46
105.23
<100
The percentage of "<100" values is too high for them to be ignored (treated as missing).

I want to include it in a multiple regression model as an independent variable.

One solution is to transform the continuous variable into a categorical variable.

Any better ideas?
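A common alternative for left-censored covariates (a sketch, with rawvalue as a hypothetical variable name) is to keep the variable continuous, substitute a conventional value such as threshold/2 or threshold/sqrt(2) for the censored readings, and add an indicator for being below the limit:

Code:
* sketch: rawvalue is a hypothetical string variable holding "321.24", "<100", ...
gen byte below = strpos(rawvalue, "<") > 0
gen double value = real(rawvalue)            // "<100" becomes missing
replace value = 100/2 if below               // threshold/2 is one common convention
* include both terms so the substituted value is not taken literally:
* regress y value below <other covariates>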

Thank you for your time!

Merge error (ID matched but merged data sometimes erroneous)

Hello Everybody,

This is my second time posting, so I'm going to add additional information to see if it helps. I have an existing dataset of 5,597 families that participated in a treatment program, and I noticed that some of their termination dates were missing. I therefore received an updated dataset in Excel to fill in the missing termination dates, which I converted to a Stata dataset using Stat/Transfer version 14.

The first time I tried to merge in the additional dataset, it did not recognize the identifier although it was a numeric variable. I therefore used the following code, which helped me match the identifiers:

tostring fpid, replace format (%07.0f)
encode fpid, g (fpid2)
rename fpid fpid_original
rename fpid2 fpid
sort fpid
save

Below is the dataex from this new dataset
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str7 fpid_original long(FP_REFERRAL_START FP_REFERRAL_TERM FP_REFERRAL_END fpid)
"1087671" 19361 19509 19541  1
"1087672" 19361 19395 19541  2
"1087674" 19361 19416 19541  3
"1087698" 19360 19524 19539  4
"1087703" 19360 19477 19540  5
"1087704" 19366 19473 19534  6
"1087722" 19365 19438 19545  7
"1087756" 19361 19446 19541  8
"1087759" 19361 19386 19541  9
"1087760" 19361 19375 19541 10
"1087762" 19361 19477 19541 11
"1087765" 19366 19414 19540 12
"1087799" 19360 19526 19540 13
"1087813" 19366 19730 19730 14
"1087814" 19366 19502 19546 15
"1087818" 19365 19754 19788 16
"1087822" 19365 19563 19607 17
"1087823" 19365 19458 19545 18
"1087837" 19361 19607 19633 19
"1087839" 19372 19446 19542 20
end
format %tdD_m_Y FP_REFERRAL_START
format %tdD_m_Y FP_REFERRAL_TERM
format %tdD_m_Y FP_REFERRAL_END
label values fpid fpid2
label def fpid2 1 "1087671", modify
label def fpid2 2 "1087672", modify
label def fpid2 3 "1087674", modify
label def fpid2 4 "1087698", modify
label def fpid2 5 "1087703", modify
label def fpid2 6 "1087704", modify
label def fpid2 7 "1087722", modify
label def fpid2 8 "1087756", modify
label def fpid2 9 "1087759", modify
label def fpid2 10 "1087760", modify
label def fpid2 11 "1087762", modify
label def fpid2 12 "1087765", modify
label def fpid2 13 "1087799", modify
label def fpid2 14 "1087813", modify
label def fpid2 15 "1087814", modify
label def fpid2 16 "1087818", modify
label def fpid2 17 "1087822", modify
label def fpid2 18 "1087823", modify
label def fpid2 19 "1087837", modify
label def fpid2 20 "1087839", modify


I successfully merged this dataset to the new one on fpid; the merged dataset is below. However, I am finding a curious error when I double-check the data: some of the merged dates are incorrect, i.e. the start dates do not match although they should match every time. The two variables that should match are fpstart and FP_REFERRAL_START. I pasted the dataex below (the mismatches begin at fpid 6); I am using Stata 15.1.


Below is the dataex from this merged dataset
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long(fpstart term_date fpid FP_REFERRAL_START FP_REFERRAL_TERM)
19361 19509  1 19361 19509
19361 19395  2 19361 19395
19361 19416  3 19361 19416
19360 19524  4 19360 19524
19360 19477  5 19360 19477
19365 19438  6 19366 19473
19361 19477  7 19365 19438
19360 19526  8 19361 19446
19366 19730  9 19361 19386
19366 19502 10 19361 19375
19365 19754 11 19361 19477
19365 19563 12 19366 19414
19365 19458 13 19360 19526
19361 19607 14 19366 19730
19372 19446 15 19366 19502
19360 19477 16 19365 19754
19372 19558 17 19365 19563
19372 19537 18 19365 19458
19362 19705 19 19361 19607
19372 19409 20 19372 19446
end
format %tdD_m_Y fpstart
format %tdD_m_Y term_date
format %tdD_m_Y FP_REFERRAL_START
format %tdD_m_Y FP_REFERRAL_TERM
label values fpid fpid
label def fpid 1 "1087671", modify
label def fpid 2 "1087672", modify
label def fpid 3 "1087674", modify
label def fpid 4 "1087698", modify
label def fpid 5 "1087703", modify
label def fpid 6 "1087722", modify
label def fpid 7 "1087762", modify
label def fpid 8 "1087799", modify
label def fpid 9 "1087813", modify
label def fpid 10 "1087814", modify
label def fpid 11 "1087818", modify
label def fpid 12 "1087822", modify
label def fpid 13 "1087823", modify
label def fpid 14 "1087837", modify
label def fpid 15 "1087839", modify
label def fpid 16 "1087843", modify
label def fpid 17 "1087844", modify
label def fpid 18 "1087846", modify
label def fpid 19 "1087847", modify
label def fpid 20 "1087848", modify

I would appreciate any help in figuring out why these start dates do not match. To see the error, compare FP_REFERRAL_START in the old dataset and in the merged one. I wonder whether it has something to do with the initial code I used to convert fpid from string back to numeric. Thanks!
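For what it's worth, the symptom is consistent with -encode- assigning codes alphabetically within each dataset separately, so code 6 in one file need not be the same family as code 6 in the other (compare fpid 6 = "1087704" in the first dataex with fpid 6 = "1087722" in the merged one). A sketch of a safer route, with hypothetical file names, merging on the numeric form of the original string:

Code:
* sketch: merge on the real identifier, not on -encode-d codes
* created separately in each file
use update_data, clear
destring fpid_original, gen(fpid_num)      // "1087671" -> 1087671
tempfile update
save `update'
use main_data, clear
destring fpid_original, gen(fpid_num)
merge 1:1 fpid_num using `update'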

How to deal with overlapping obs in healthcare data

Hi all,

I have two data sets on healthcare costs during the past 12 months, one at the individual level and the other at the household level. I want to merge the two, but I run into an issue of overlapping observations in the individual data. Specifically, the individual data contain information on: 1) the healthcare centers (public or private) that individuals visit for treatment; 2) the services they use (out-patient or in-patient); 3) costs incurred by out-patient visits; and 4) costs incurred by in-patient visits. Thus an individual may have used healthcare services more than once during the last 12 months, at different healthcare centers and for different services. For example, a person may have gone to hospital three times in a given year, using out-patient services at public healthcare centers the first and second times but being hospitalized at a private healthcare center the third time (this may not occur in my example data, but similar things happen). That makes it hard to generate a unique identifier for each individual.

My question is how to generate a unique identifier for each individual so that I can merge the individual data set into the household one, without dropping any variables in either data set. Any help is highly appreciated.

Note: prid comid hhid uniquely identify observations in the household data.
prid comid hhid invid should uniquely identify observations in the individual data, but because of the overlap issue they do not.

Individual data
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int prid long(comid hhid invid opcost ipcost) float hcenter byte service
101 1010103 14101 1410101 1000     0 1 0
101 1010103 14102 1410201  500     0 1 0
101 1010103 14102 1410201  700     0 0 0
101 1010103 14103 1410301  500     0 1 0
101 1010103 14103 1410301    0  1000 1 1
101 1010103 14103 1410303    0  3000 1 1
101 1010103 14103 1410303  300     0 0 0
101 1010103 14104 1410401 1500     0 1 0
101 1010109  2002  200201   90     0 1 0
101 1010109  2011  201101  200     0 0 0
101 1010109  2011  201102  210     0 0 0
101 1010109  2011  201103  500     0 1 0
101 1010109  2011  201104  300     0 0 0
101 1010109  2013  201302  150     0 1 0
101 1010109  2014  201403   20     0 1 0
101 1010109  2014  201403  480     0 0 0
101 1010109  2014  201403  800     0 0 0
101 1010109  2014  201404    .   400 1 1
101 1010109  2019  201901    .   200 1 1
101 1010109  2020  202002    .  5000 1 1
101 1010109  2021  202102  700     0 0 0
101 1010109  2021  202102  500     0 0 0
101 1010109  2021  202104  300     0 0 0
101 1010109  2022  202201    0     0 1 0
101 1010115  5006  500601 2000     0 1 0
101 1010115  5006  500603   15     0 1 0
101 1010115  5006  500604   15     0 1 0
101 1010115  5007  500701    .   300 1 1
101 1010115  5007  500701    .   400 1 1
101 1010115  5008  500801    .   315 1 1
101 1010115  5008  500801    .   210 1 1
101 1010115  5008  500801    .   315 1 1
101 1010115  5008  500801    .   315 1 1
101 1010115  5008  500802  200     0 1 0
101 1010115  5010  501001  400     0 1 0
101 1010115  5010  501001    .   500 1 1
101 1010115  5011  501101 3000     0 1 0
101 1010115  5011  501102 1500     0 1 0
101 1010115  5012  501204  195     0 1 0
101 1010115  5012  501204  220     0 1 0
101 1010115  5013  501304    .   600 1 1
101 1010115  5013  501304    .   200 1 1
101 1010115  5015  501501    .   100 1 1
101 1010115  5015  501501    .   120 1 1
101 1010115  5015  501501    .   130 1 1
101 1010115  5015  501503  280     0 1 0
101 1010115  5016  501601  335     0 1 0
101 1010115  5016  501603  335     0 1 0
101 1010115  5016  501604  335     0 1 0
101 1010115  5016  501605  335     0 1 0
101 1010115  5016  501606  440     0 1 0
101 1010115  5020  502002    .  1100 1 1
101 1010115  5020  502002 1000     0 0 0
101 1010115  5020  502003  150     0 0 0
101 1010115  5020  502003    .  1500 1 1
101 1010115  5020  502003   30     0 1 0
101 1010115  5020  502004   30     0 1 0
101 1010115  5020  502004    .  1500 1 1
101 1010115  5020  502004  150     0 0 0
101 1010115  5020  502005   30     0 1 0
101 1010115  5103  510301    0  1000 1 1
101 1010115  5106  510601   80     0 1 0
101 1010115  5107  510702  200     0 1 0
101 1010115  5108  510801   80     0 1 0
101 1010115  5108  510802   30     0 1 0
101 1010115  5109  510901  200     0 1 0
101 1010117 27101 2710101   50     0 1 0
101 1010117 27101 2710102  110     0 1 0
101 1010117 27103 2710301  300     0 0 0
101 1010117 27103 2710304  550     0 0 0
101 1010117 27104 2710401   20     0 1 0
101 1010117 27104 2710402   40     0 1 0
101 1010117 27105 2710501  300     0 1 0
101 1010117 27105 2710503 1150     0 0 0
101 1010117 27105 2710505  850     0 0 0
101 1010117 27106 2710602  500     0 1 0
101 1010123  5101  510101    0  2500 1 1
101 1010123  5102  510201  500     0 0 0
101 1010123  5102  510202 2000     0 0 0
101 1010123  5104  510401   80     0 1 0
101 1010303  5101  510101  100     0 1 0
101 1010303  5101  510102  100     0 1 0
101 1010311  1102  110205  500     0 1 0
101 1010311  1102  110206  600     0 1 0
101 1010311  1104  110401    0  1000 1 1
101 1010313 13001 1300102  400     0 1 0
101 1010313 13002 1300202 2000     0 1 0
101 1010313 13004 1300403  500     0 0 0
101 1010313 13006 1300602    .  4000 1 1
101 1010313 13009 1300901  100     0 1 0
101 1010313 13012 1301201  100     0 0 0
101 1010313 13013 1301302  200     0 1 0
101 1010313 13014 1301402  500     0 1 0
101 1010313 13016 1301603    . 35000 1 1
101 1010313 13019 1301902    .   120 1 1
101 1010313 13101 1310102  200     0 1 0
101 1010313 13103 1310301    0  3000 1 1
101 1010313 13103 1310302    0  2000 1 1
101 1010313 13105 1310504    0   500 1 1
101 1010503  4101  410103    0   200 1 1
end
label values hcenter hcenter
label values service service
Household data
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int prid long(comid hhid topcost tipcost)
101 1010103 14101 1000     0
101 1010103 14102 1200     0
101 1010103 14103  800  4000
101 1010103 14104 1500     0
101 1010103 14105    0     0
101 1010109  2001    0     0
101 1010109  2002   90     0
101 1010109  2004    0     0
101 1010109  2006    0     0
101 1010109  2007    0     0
101 1010109  2008    0     0
101 1010109  2009    0     0
101 1010109  2010    0     0
101 1010109  2011 1210     0
101 1010109  2012    0     0
101 1010109  2013  150     0
101 1010109  2014 1300   400
101 1010109  2015    0     0
101 1010109  2016    0     0
101 1010109  2019    0   200
101 1010109  2020    0  5000
101 1010109  2021 1500     0
101 1010109  2022    0     0
101 1010109  2023    0     0
101 1010109  2024    0     0
101 1010115  5001    0     0
101 1010115  5002    0     0
101 1010115  5003    0     0
101 1010115  5004    0     0
101 1010115  5005    0     0
101 1010115  5006 2030     0
101 1010115  5007    0   700
101 1010115  5008  200  1155
101 1010115  5009    0     0
101 1010115  5010  400   500
101 1010115  5011 4500     0
101 1010115  5012  415     0
101 1010115  5013    0   800
101 1010115  5014    0     0
101 1010115  5015  280   350
101 1010115  5016 1780     0
101 1010115  5017    0     0
101 1010115  5018    0     0
101 1010115  5019    0     0
101 1010115  5020 1390  4100
101 1010115  5103    0  1000
101 1010115  5106   80     0
101 1010115  5107  200     0
101 1010115  5108  110     0
101 1010115  5109  200     0
101 1010117 27101  160     0
101 1010117 27103  850     0
101 1010117 27104   60     0
101 1010117 27105 2300     0
101 1010117 27106  500     0
101 1010123  5101    0  2500
101 1010123  5102 2500     0
101 1010123  5103    0     0
101 1010123  5104   80     0
101 1010123  5105    0     0
101 1010303  5101  200     0
101 1010303  5102    0     0
101 1010303  5103    0     0
101 1010303  5104    0     0
101 1010303  5105    0     0
101 1010311  1101    0     0
101 1010311  1102 1100     0
101 1010311  1103    0     0
101 1010311  1104    0  1000
101 1010311  1105    0     0
101 1010313 13001  400     0
101 1010313 13002 2000     0
101 1010313 13003    0     0
101 1010313 13004  500     0
101 1010313 13005    0     0
101 1010313 13006    0  4000
101 1010313 13007    0     0
101 1010313 13008    0     0
101 1010313 13009  100     0
101 1010313 13010    0     0
101 1010313 13011    0     0
101 1010313 13012  100     0
101 1010313 13013  200     0
101 1010313 13014  500     0
101 1010313 13015    0     0
101 1010313 13016    0 35000
101 1010313 13017    0     0
101 1010313 13018    0     0
101 1010313 13019    0   120
101 1010313 13020    0     0
101 1010313 13101  200     0
101 1010313 13102    0     0
101 1010313 13103    0  5000
101 1010313 13104    0     0
101 1010313 13105    0   500
101 1010503  4101    0   200
101 1010503  4103    0     0
101 1010503  4104    0     0
101 1010503  4106  200     0
101 1010503  4107    0     0
end
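One observation that may help (a sketch, with hypothetical file names): a unique individual identifier is not actually required for this direction of merge, because -merge m:1- allows repeated keys on the master side. Each visit row keeps its hcenter and service values and simply gains the household totals.

Code:
* sketch: visit-level rows keep all their variables and gain household totals
use individual_data, clear
merge m:1 prid comid hhid using household_data
* alternatively, first sum the costs to one row per individual:
* collapse (sum) opcost ipcost, by(prid comid hhid invid)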

Outreg2: specific tables, not just the last one by default

Hi,

We wonder whether we can "outreg" the results of a regression that we run with "bysort" per country, which gives 32 tables (we have 32 countries). Can we "outreg" every one of the tables, or some of them, rather than just the last table by default?

Many thanks in advance.

Our code is:

> bysort i: xtivreg2 dlogsigma KAPITALSKAT logsigma INF ÅBEN VALUTA YMIDDEL STAT INFSD ÅBENSD VALUTASD STATSD, fe robust

> outreg2 using results if== (i=6), excel
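By design outreg2 only sees the estimates left in memory, which after bysort means the last country's. A sketch of the usual workaround: loop over the countries, estimate per country, and append each table to one file.

Code:
* sketch: one regression per country, appended to a single Excel file
levelsof i, local(countries)
foreach c of local countries {
    xtivreg2 dlogsigma KAPITALSKAT logsigma INF ÅBEN VALUTA YMIDDEL ///
        STAT INFSD ÅBENSD VALUTASD STATSD if i == `c', fe robust
    outreg2 using results, excel append ctitle(Country `c')
}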
<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>