Channel: Statalist

gr_edit documentation

Sometimes it would be convenient to edit a graph from the command line or from a do file.

This 2010 post describes a way to do that by recording a manual editing session and then prepending gr_edit to the recorded commands. That works, but it's clunky.

It seems to me I should be able to write gr_edit commands directly into a do file, but help gr_edit returns no documentation.
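A minimal sketch of what the recording-based approach looks like in a do file. The .grec-style line here is only illustrative (it is the kind of command the Graph Recorder writes); since gr_edit itself is undocumented, the exact syntax is whatever your own recording contains.

Code:
sysuse auto, clear
scatter mpg weight
* a line like the next one comes from a recorded editing session (a .grec file);
* prepending gr_edit lets it run from a do file
gr_edit .plotregion1.plot1.style.editstyle marker(fillcolor(red)) editcopy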

Limit axis range even if it cuts off results

I apologize in advance if this query exists already; it seems like it should. Occasionally, I want to limit the range of my y-axis in a marginsplot, even when it might mean cutting off the display of confidence intervals after a certain point on the x-axis. Usually this is for comparability across graphics. Here is an example of what I wish I could do:

Code:
use http://www.stata-press.com/data/r13/auto
generate tons = weight/2000
regress mpg foreign##c.tons##c.tons
margins, at(tons=(.8(.05)2.4)) over(r.for)
marginsplot, yline(0) yscale(r(-20 20))
But I wish that the yscale range would actually cut the scale, cutting off the error bars along with the scale --- exactly as it would look if I altered the graphic by hand, basically, to cut that y-range above 20 and between 0 and -20.

This is a general question for two-way plots, I think, but the application makes most sense with marginsplots, where one can clearly follow the error bars in one's mind even if they are cut off for easy display or comparability across graphics.

Thanks!
Leah
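One possible workaround, sketched only: write the margins results to a dataset, clip the confidence limits by hand, and draw the plot with twoway rather than marginsplot. The over() dimension is omitted here to keep the sketch short, and the variable names used below (_margin, _ci_lb, _ci_ub, _at1) are what I believe margins, saving() writes; run describe on the saved file to confirm before relying on them.

Code:
use http://www.stata-press.com/data/r13/auto, clear
generate tons = weight/2000
regress mpg foreign##c.tons##c.tons
margins, at(tons=(.8(.05)2.4)) saving(marg, replace)

preserve
use marg, clear
describe                                   // check the saved variable names
replace _ci_ub =  20 if _ci_ub >  20 & !missing(_ci_ub)
replace _ci_lb = -20 if _ci_lb < -20
twoway (rcap _ci_lb _ci_ub _at1) (line _margin _at1), yline(0) ylabel(-20(10)20)
restore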

Storing labels and restoring after reshape wide

Hi all,

I have a variable 'fglabel' that contains labels (as strings) corresponding to values of the variable 'gfrpfg'. I want to (1) temporarily store the values of 'gfrpfg' and the corresponding values of fglabel before reshaping wide, and then (2) apply the stored labels to the reshaped wide variables according to their suffixes. Here is my code:

clear all

input seqn kcal grams gfrpfg str200 fglabel
31127 48 16 20200 "Poultry"
31127 146 42.75 20300 "Processed meats, poultry, & products"
31127 76 60 30200 "Eggs & egg dishes"
31128 70 24 40100 "Breads, rolls, and tortillas"
31128 20 60 50100 "Fruit, fresh, frozen, canned, or dried"
31129 16 12.06 50310 "Fried starchy vegetables or starchy vegetable dishes"
31129 42 13.3 50400 "Fried potatoes"
31129 25 4.77 60100 "Vegetable oils & animal fats"
end

net install gr0034.pkg
labmask gfrpfg, values(fglabel)
/*This program 'writes' over values of gfrpfg so that while actual values are unchanged, displayed values correspond to fglabel.
Note: this is where I could use some help, as I am not sure this is the best way to temporarily store the values of gfrpfg and corresponding value labels from fglabel before collapsing*/

collapse (sum) kcal grams, by(seqn gfrpfg)
reshape wide kcal grams, i(seqn) j(gfrpfg)
/*

Data long -> wide
-----------------------------------------------------------------------------------------------------------
Number of obs. 8 -> 3
Number of variables 4 -> 17
j variable (8 values) gfrpfg -> (dropped)
xij variables: kcal -> kcal20200 kcal20300 ... kcal60100
grams -> grams20200 grams20300 ... grams60100
-----------------------------------------------------------------------------------------------------------

In this example, I would like to apply the stored labels to the new wide variables (e.g., kcal20200, grams20200) based on their suffixes (e.g., 20200).
Desired: the variable kcal20200 would be labeled "Poultry"; grams20200 would also be labeled "Poultry"
*/

local i = 1
local j = 2
foreach v of varlist kcal20200 kcal20300 kcal60100 {
mean `v'
putexcel set "$results\Table 2.xlsx", sheet("kcal") modify
matrix results = r(table)
matrix results = results[1...,1...]'
local label : variable label `v'
putexcel A`j' = "`label'" B`j' = results[`i',1]
local j = `j' + 1
}

/*Desired result: variable kcal20200 is labeled "Poultry"
Actual result: variable kcal20200 is labeled "kcal 20200"
*/

* END OF FILE
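One way to approach this without relying on value labels surviving the collapse and reshape (a sketch only, using the variable names from the example above): store each group's label in a local keyed by its gfrpfg code before collapsing, then loop over the new wide variables and attach the stored label by suffix.

Code:
* store each label in a local named after its gfrpfg code
levelsof gfrpfg, local(codes)
foreach c of local codes {
    levelsof fglabel if gfrpfg == `c', local(lbl) clean
    local lab`c' `"`lbl'"'
}

collapse (sum) kcal grams, by(seqn gfrpfg)
reshape wide kcal grams, i(seqn) j(gfrpfg)

* apply the stored labels to the new wide variables according to their suffixes
foreach v of varlist kcal* grams* {
    if regexm("`v'", "([0-9]+)$") local c = regexs(1)
    label variable `v' `"`lab`c''"'
}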

Averaging Yearly Data to Construct Panel

Hi,

I have a panel dataset which spans 12 countries over 5 years at the country level (1 observation per country per year). I want to merge it with another dataset that spans the same countries and years but at the individual level (c. 2000 observations per country per year). I want to average the observations per country per year so that I can merge the two datasets. Is there a command in Stata that allows me to do that?

Thank you,

Joan
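A rough sketch of one way to do this; the file and variable names (individual.dta, country.dta, country, year, x) are placeholders for your own.

Code:
use individual.dta, clear
collapse (mean) x, by(country year)          // one averaged row per country-year
merge 1:1 country year using country.dta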

Keeping the order in reshape

Dear Statalisters,
I am using the reshape wide command and my ids are numbers stored as strings (i.e., "1", "2", "3", ... "22", "23", "24", etc.).
When I run the reshape wide command it creates the new variables as var1 var10 var2 var20 ... instead of var1 var2 var3, etc.
Is there an option to keep the order of the variables?
Thank you and warm regards
LR
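Two things that may help, sketched with placeholder names (id for the j() variable, var for the stub, panelid for i()): make the j variable numeric before reshaping, or reorder the new variables afterwards with order, sequential.

Code:
destring id, replace                 // numeric j gives var1 var2 ... var10 var11
reshape wide var, i(panelid) j(id)
* or, if the reshape has already been run with the string j:
order var*, sequential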

goodness of fit outcomes

Dear Statalists,

I hope you are well. I would like to ask, please: if the Pearson chi2 test (estat gof) is significant but the Hosmer-Lemeshow test is insignificant, does this mean that my model fits reasonably well? I have used these tests on a probit regression (11 categorical independent variables for 300 firms).

These are the outcomes of the goodness of fit tests

. estat gof

Probit model for Disc_APP_NO_informal_01, goodness-of-fit test

number of observations = 152
number of covariate patterns = 142
Pearson chi2(115) = 141.36
Prob > chi2 = 0.0481



estat gof, group(10) table

number of observations = 152
number of groups = 10
Hosmer-Lemeshow chi2(8) = 11.45
Prob > chi2 = 0.1774

lroc

Probit model for Disc_APP_NO_informal_01

number of observations = 152
area under ROC curve = 0.8294


Can you please help in interpreting these results? Do they show an acceptable fit for the model? I have also attached graphs from the lroc and lsens commands.

Greatly appreciate your help and support

Kind regards,
Rabab

tabout option stpos() not allowed

I am new to Stata. I am trying to produce output tables from a survey. I am trying to run this code:

Code:
tabout age_cat1 female hem_type using test.xls, replace style(xlsx) c( col) svy pop f(2) ptotal(none) percent  font(bold) dropc(5) h1(Table 1. Estimated Descriptive Statistics of Patient Demographics and Clinical Characteristics after Intracerebral Hemorrhage NIS 2015Q4-2017) 

tabout age_cat1 female hem_type using test.xls, append style(xlsx) c( freq lb ub) svy  pop  stats(chi2) stpos(col) ptotal(none) dropc(1) location (2 5)
I can't get the column percent and the confidence interval bounds in the same statement, so I had to break it up and put the tables side by side.
I am trying to put the chi-square statistics as a column at the end of the table instead of rows below, and I want to drop the total column from the first statement and the first column from the second statement.

I keep getting the following two errors:
option dropc() not allowed
option stpos() not allowed

What am I doing wrong?

Thanks for all the help.
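A hedged first check only: dropc() and stpos() are, as far as I know, options introduced in tabout version 3, so an older installed copy would reject them. Confirming which version is installed (and updating it) is a reasonable first step.

Code:
which tabout                      // the version line is shown at the top
ssc install tabout, replace       // then retry the tables with the updated copy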

Panel data - Following the death of a participant, how to remove all following observations for them

Hi everyone!

I am currently cleaning a very large dataset (52 variables, 82,284 observations) for longitudinal analysis. The dataset is based on information returned from 6 different surveys. I have converted it to long format, so currently there are about 6 observations (one per survey year) for each ID, and there are approximately 13,000 unique IDs. This dataset is confidential, so I have created a fake example dataset to use for this question (hopefully inserted correctly below).

Here is my issue: I have created a "death after this wave" variable, which indicates the last wave of data the person provided before dying. I therefore need to delete the waves the person did not participate in (so if someone participated in only three waves and then died, they should have only 3 rows of data, whereas someone who was alive for all waves will have 6 rows of data). However, I am struggling to find code that will achieve this. Does anyone have any ideas? Apologies, I am quite a novice!

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input double idalias int year float(wave_sg Death_After_This_Wave)
1 1901 0 .
1 1904 1 .
1 1907 2 1
1 1910 3 .
1 1913 4 .
1 1916 5 .
2 1901 0 .
2 1904 1 .
2 1907 2 .
2 1910 3 .
2 1913 4 1
2 1916 5 .
3 1901 0 .
3 1904 1 .
3 1907 2 1
3 1910 3 .
3 1913 4 .
3 1916 5 .
4 1901 0 .
4 1904 1 .
4 1907 2 .
4 1910 3 .
4 1913 4 .
4 1916 5 1
5 1901 0 .
5 1904 1 .
5 1907 2 1
5 1910 3 .
5 1913 4 .
5 1916 5 .
6 1901 0 .
6 1904 1 .
6 1907 2 .
6 1910 3 .
6 1913 4 1
6 1916 5 .
7 1901 0 .
7 1904 1 1
7 1907 2 .
7 1910 3 .
7 1913 4 .
7 1916 5 .
end

I was thinking something like this: by idalias, sort: drop in 2/5 if _n=1 for Death_After_This_Wave (which to me means: for each ID, drop the years 1904, 1907, 1910, 1913, and 1916 (i.e. observations 2 to 5) if the person died just after the first observation (1901)). I could then just edit this code and repeat it for the remaining years.


Thanks for taking the time to read my query.
Warm regards,
Sarah
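A sketch of one way to do this in a single step rather than wave by wave (untested on the real data): a running sum of the death flag within each person turns on at the flagged wave, so every later wave can be dropped while the flagged wave itself is kept.

Code:
bysort idalias (wave_sg): gen byte after_death = sum(Death_After_This_Wave == 1)
drop if after_death == 1 & Death_After_This_Wave != 1   // waves after the flagged one
drop after_death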

How to download API data from the Bureau of Labor Statistics?

Importing a matrix into stata file: Col numbers

Dear Statalisters
I have a matrix whose column names are numbers. When I run svmat matrix, Stata creates a file with variable names like X1, ..., Xn. How can I keep the numbers from the original matrix in the variable names in Stata, something like var99 var100 var101 instead of X1 X2 X3?
Thank you and regards
LR
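A sketch of one way to do this, assuming the matrix is called mymat: prefix each numeric column name so it becomes a valid variable name, then let svmat use the column names.

Code:
local cols : colnames mymat
local newnames
foreach c of local cols {
    local newnames `newnames' var`c'     // 99 becomes var99, 100 becomes var100, ...
}
matrix colnames mymat = `newnames'
svmat mymat, names(col)                  // names(col) uses the (now valid) column names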

mtefe with interactive fixed effects

Hello everyone!
First time poster, long time lurker here

I'm working on a project that requires obtaining the PRTEs for a few hypothetical policies. The main specification requires interactive fixed effects (age × district), but whenever I try to do that, I get the following error:

invalid interaction specification;
multiple 'o' operators attached to a single variable are not allowed within an interaction specification
Here's an example of the code I'm running:

Code:
qui probit var2 instrument $controls i.age i.district i.age#i.district 
gen temp1=instrument
replace instrument=1 if instrument>1
predict double policy_1
replace instrument=temp1

mtefe var1 $controls  i.age i.district i.age#i.district  (var2 = instrument) , trimsupport(.05) prte(policy_1) pol(2) vce(robust)
Am I doing something wrong?

Thanks in advance for any input!
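One workaround that may be worth trying (a sketch, not a fix for mtefe itself): collapse the two-way interaction into a single categorical variable with egen group(). A full set of i.agedist indicators spans the same fixed effects as i.age i.district i.age#i.district, and no interaction operator then appears in the varlist. agedist is a new variable name introduced here for illustration.

Code:
egen agedist = group(age district)
qui probit var2 instrument $controls i.agedist
* ... build policy_1 exactly as in the original code ...
mtefe var1 $controls i.agedist (var2 = instrument), trimsupport(.05) prte(policy_1) pol(2) vce(robust)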

Converting from long to wide format

Dear all,

I'm trying to convert from long to wide format:

Code:
reshape wide _proc_code _proc_eye _slt_eye _age istent hydrus istent_add goniotomy trabectome concur_phaco ab_int_canal other_glau_proc_code other_glau_proc_date other_glau_proc_eye, i(patient_guid) j(_proc_date)
However I'm being met with the following error code:

Code:
macro substitution results in line that is too long
    The line resulting from substituting macros would be longer than allowed.  The
    maximum allowed length is 264,408 characters, which is calculated on the basis of
    set maxvar.

    You can change that in Stata/SE and Stata/MP.  What follows is relevant only if you
    are using Stata/SE or Stata/MP.

    The maximum line length is defined as 16 more than the maximum macro length, which
    is currently 264,392 characters.  Each unit increase in set maxvar increases the
    length maximums by 129.  The maximum value of set maxvar is 32,767.  Thus, the
    maximum line length may be set up to 4,227,159 characters if you set maxvar to its
    largest value.
r(920);
The data set is large (~20 million rows), so I'm not sure whether that is part of the problem, or whether the problem is the number of variables I want to include under each "_proc_date" (i.e. the 14 variables: _proc_code _proc_eye _slt_eye _age istent hydrus istent_add goniotomy trabectome concur_phaco ab_int_canal other_glau_proc_code other_glau_proc_date other_glau_proc_eye).

Any help would be so appreciated; thanks for your time.

Will
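A workaround that may help (a sketch only): the macro that reshape builds grows with the number of distinct j() values, and with raw procedure dates that list can be enormous. Using a within-patient visit counter as j() keeps the list short, and the date itself survives as one of the reshaped variables.

Code:
bysort patient_guid (_proc_date): gen long visit = _n
reshape wide _proc_code _proc_eye _slt_eye _age istent hydrus istent_add goniotomy trabectome concur_phaco ab_int_canal other_glau_proc_code other_glau_proc_date other_glau_proc_eye _proc_date, i(patient_guid) j(visit)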

Polygons hidden behind others when creating choropleth map using spmap

Hi everyone - this is my first ever post on this forum.

I've been having a problem using the spmap command. I'm plotting proportions of antibiotic-resistant isolates by region, with the data also split into community and hospital isolates. To compare the proportion resistant among community isolates with hospital isolates within a particular region, I created artificial circular "regions" to represent the hospitals; as far as I can tell this was the only way to do it, since there isn't an option in spmap to insert points/markers on a map and have them display the data using the same (color) scale as the community isolates. The problem is that some of these circles end up either completely or partially hidden under the base map.

Here is the code I used:

Code:
   
levelsof abx_class, local(abx)
    *E. coli
        foreach a of local abx {
            #delimit ;
            spmap percent_resistant
                if organism == "E.coli" & abx_class == "`a'"
                using coordinates_XY.dta, id(_ID)
                title("`a'", size(small))
                clbreaks(0(10)100) clmethod(custom) fcolor(Reds2) ndfcolor(white) ndlab("no isolates")
                legtitle("% resistant") legstyle(2)
                graphregion(color(white) margin(b=0 t=0)) plotregion(margin(b=0 t=0)) bgcolor(white)
                name("`a'", replace)
            ;
            #delimit cr
        }
        grc1leg ESBLs Fluoroquinolones Gentamicin SXT, col(2) graphregion(color(white) margin(b=0 t=0)) title("{it:E. coli}", size(medsmall)) position(9) name(Ecoli, replace)
Here are the data:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str3 jurisdiction str36 region str48 organism str33 abx_class double(num_of_tests resistant susceptible) float percent_resistant long _ID
"FNQ" "Mackay"                "E.coli" "ESBLs"               .   .    .         .       1
"FNQ" "Mackay"                "E.coli" "Fluoroquinolones"    .   .    .         .       1
"FNQ" "Mackay"                "E.coli" "Gentamicin"          .   .    .         .       1
"FNQ" "Mackay"                "E.coli" "SXT"                 .   .    .         .       1
"FNQ" "Townsville"            "E.coli" "SXT"                 .   .    .         .       2
"FNQ" "Townsville"            "E.coli" "Gentamicin"          .   .    .         .       2
"FNQ" "Townsville"            "E.coli" "ESBLs"               .   .    .         .       2
"FNQ" "Townsville"            "E.coli" "Fluoroquinolones"    .   .    .         .       2
"FNQ" "North West"            "E.coli" "SXT"                 .   .    .         .       3
"FNQ" "North West"            "E.coli" "Gentamicin"          .   .    .         .       3
"FNQ" "North West"            "E.coli" "ESBLs"               .   .    .         .       3
"FNQ" "North West"            "E.coli" "Fluoroquinolones"    .   .    .         .       3
"FNQ" "Cairns and Hinterland" "E.coli" "SXT"                 .   .    .         .       4
"FNQ" "Cairns and Hinterland" "E.coli" "Gentamicin"         15   0   15         0       4
"FNQ" "Cairns and Hinterland" "E.coli" "ESBLs"              10   0   10         0       4
"FNQ" "Cairns and Hinterland" "E.coli" "Fluoroquinolones"   10   0   10         0       4
"FNQ" "Torres and Cape"       "E.coli" "Fluoroquinolones"    .   .    .         .       5
"FNQ" "Torres and Cape"       "E.coli" "Gentamicin"          .   .    .         .       5
"FNQ" "Torres and Cape"       "E.coli" "ESBLs"               .   .    .         .       5
"FNQ" "Torres and Cape"       "E.coli" "SXT"                 .   .    .         .       5
"WA"  "Kimberley"             "E.coli" "Fluoroquinolones"   55   7   48 12.727273     304
"WA"  "Kimberley"             "E.coli" "SXT"                 9   5    4  55.55556     304
"WA"  "Kimberley"             "E.coli" "Gentamicin"         13   3   10 23.076923     304
"WA"  "Kimberley"             "E.coli" "ESBLs"               9   5    4  55.55556     304
"NT"  "Alice Springs"         "E.coli" "Fluoroquinolones"  523  67  456 12.810707     334
"NT"  "Alice Springs"         "E.coli" "SXT"               123  61   62  49.59349     334
"NT"  "Alice Springs"         "E.coli" "ESBLs"             123  40   83 32.520325     334
"NT"  "Alice Springs"         "E.coli" "Gentamicin"        506  61  445 12.055336     334
"NT"  "Barkly"                "E.coli" "Fluoroquinolones"  745  92  653 12.348993     335
"NT"  "Barkly"                "E.coli" "SXT"               212 106  106        50     335
"NT"  "Barkly"                "E.coli" "Gentamicin"        724 106  618 14.640884     335
"NT"  "Barkly"                "E.coli" "ESBLs"             213  56  157  26.29108     335
"NT"  "East Arnhem"           "E.coli" "Fluoroquinolones"  206  12  194  5.825243     337
"NT"  "East Arnhem"           "E.coli" "SXT"                36  24   12 66.666664     337
"NT"  "East Arnhem"           "E.coli" "ESBLs"              36  11   25 30.555555     337
"NT"  "East Arnhem"           "E.coli" "Gentamicin"        197  27  170 13.705584     337
"NT"  "Katherine"             "E.coli" "Gentamicin"       1375  94 1281  6.836364     338
"NT"  "Katherine"             "E.coli" "SXT"               315 162  153  51.42857     338
"NT"  "Katherine"             "E.coli" "Fluoroquinolones" 1429  88 1341  6.158153     338
"NT"  "Katherine"             "E.coli" "ESBLs"             316  61  255 19.303797     338
"WA"  "Pilbara"               "E.coli" "Gentamicin"         47   3   44  6.382979   99999
"WA"  "Pilbara"               "E.coli" "ESBLs"              44  12   32  27.27273   99999
"WA"  "Pilbara"               "E.coli" "SXT"                44  21   23  47.72727   99999
"WA"  "Pilbara"               "E.coli" "Fluoroquinolones"  281  21  260   7.47331   99999
"NT"  "Darwin"                "E.coli" "SXT"               323 155  168  47.98762  999999
"NT"  "Darwin"                "E.coli" "Fluoroquinolones" 1856 147 1709  7.920259  999999
"NT"  "Darwin"                "E.coli" "ESBLs"             323 105  218  32.50774  999999
"NT"  "Darwin"                "E.coli" "Gentamicin"       1719 110 1609  6.399069  999999
"NT"  "Darwin_hospital"       "E.coli" "Gentamicin"       1172 109 1063  9.300342 9000000
"NT"  "Darwin_hospital"       "E.coli" "SXT"              1169 393  776 33.618477 9000000
"NT"  "Darwin_hospital"       "E.coli" "ESBLs"            1172 118 1054  10.06826 9000000
"NT"  "Darwin_hospital"       "E.coli" "Fluoroquinolones" 1172 130 1042  11.09215 9000000
"NT"  "Gove_hospital"         "E.coli" "Gentamicin"        127  20  107 15.748032 9000001
"NT"  "Gove_hospital"         "E.coli" "Fluoroquinolones"  127  10  117  7.874016 9000001
"NT"  "Gove_hospital"         "E.coli" "SXT"               127  48   79 37.795277 9000001
"NT"  "Gove_hospital"         "E.coli" "ESBLs"             127  10  117  7.874016 9000001
"NT"  "Tennant_hospital"      "E.coli" "ESBLs"             131   9  122  6.870229 9000002
"NT"  "Tennant_hospital"      "E.coli" "SXT"               130  53   77  40.76923 9000002
"NT"  "Tennant_hospital"      "E.coli" "Gentamicin"        131  18  113 13.740458 9000002
"NT"  "Tennant_hospital"      "E.coli" "Fluoroquinolones"  131  18  113 13.740458 9000002
"NT"  "Katherine_hospital"    "E.coli" "SXT"                37  13   24 35.135136 9000003
"NT"  "Katherine_hospital"    "E.coli" "Fluoroquinolones"   37   1   36  2.702703 9000003
"NT"  "Katherine_hospital"    "E.coli" "ESBLs"              37   2   35  5.405406 9000003
"NT"  "Katherine_hospital"    "E.coli" "Gentamicin"         37   3   34 8.1081085 9000003
"NT"  "Alice_hospital"        "E.coli" "Fluoroquinolones"   67  14   53 20.895523 9000004
"NT"  "Alice_hospital"        "E.coli" "Gentamicin"         67  11   56  16.41791 9000004
"NT"  "Alice_hospital"        "E.coli" "SXT"                67  33   34  49.25373 9000004
"NT"  "Alice_hospital"        "E.coli" "ESBLs"              67   8   59 11.940298 9000004
"FNQ" "Cairns_hospital"       "E.coli" "Fluoroquinolones" 1721 125 1596  7.263219 9000007
"FNQ" "Cairns_hospital"       "E.coli" "Gentamicin"       2082  93 1989  4.466859 9000007
"FNQ" "Cairns_hospital"       "E.coli" "ESBLs"            1720  35 1685 2.0348837 9000007
end
I've attached the corresponding coordinates .dta file.

Any help much appreciated.

Will
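A low-confidence guess, sketched only: spmap appears to draw the base polygons in the order they occur in the coordinates dataset, so moving the artificial hospital circles to the end of that file might bring them to the front. The _ID cut-off below (9000000 and above for the circles) is taken from the example data.

Code:
use coordinates_XY, clear
gen long row0 = _n
gen byte is_circle = _ID >= 9000000      // the artificial hospital-circle IDs
sort is_circle row0                      // circles last, vertex order preserved
drop row0 is_circle
save coordinates_XY_front.dta, replace
* then point spmap at coordinates_XY_front.dta instead of coordinates_XY.dta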


query of correlation and coefficient

Dear all,
I have a question of correlation and my coefficient.

I use pwcorr to test the correlation of my variables, but the sign of the correlation differs from that of the coefficient in my regression. I know it may be a statistical issue, but before going there, I am wondering whether it might be influenced by my use of reghdfe, absorb(sic_2 fiscal_year analys) vce(cluster analys) in the regression. I use the same control variables as the previous literature, so it should not be a multicollinearity issue.

For example, the correlation between forecast error and high_tech is negative and significant, whereas the coefficient in the regression is positive and significant.

Thank you so much.

mvnp ml maxmize plugin not loaded : use the adoonly option

I am currently running the test_mc_mvp3_train_h.do file from Cappellari and Jenkins (2006), "Calculation of multivariate normal probabilities by simulation, with applications to maximum simulated likelihood estimation". However, this message appears after running the ml maximize command: "mvnp ml maxmize plugin not loaded : use the adoonly option". I am using Stata 14, 64-bit.

May I know how to solve this? I tried uninstalling and reinstalling the mvnp package, but the same problem persists. I have also read about the adoonly option but cannot figure it out on my own.

How do I load data in Stata from the internet? I am not able to make -webuse- work...

Good morning,

I am trying to load into Stata a file from this website: https://www.lisdatacenter.org/resources/self-teaching/. It is in the "LIS sample files" column, US16 Household dataset.

So I went to the site, right-clicked on the file, and copied the link address. Then from within Stata I typed

Code:
. webuse https://www.lisdatacenter.org/wp-content/uploads/files/us16ih.dta
file
    http://www.stata-press.com/data/r15/https://www.lisdatacenter.org/wp-content/uploads/files/us1
    > 6ih.dta not found
r(601);

.
So this did not do the trick. Then I tried

Code:
. webuse set https://www.lisdatacenter.org/wp-content/uploads/files/
(prefix now "https://www.lisdatacenter.org/wp-content/uploads/files")

. webuse us16ih.dta
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath
> .SunCertPathBuilderException: unable to find valid certification path to requested target
r(5100);
and this did not do the trick either.

Do you know how I can retrieve a file from the web from within Stata?
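For what it is worth, a sketch of the usual route: webuse prepends the Stata Press prefix (as the first error message shows), so it is only meant for the manual's example datasets. For an arbitrary .dta on the web, use or copy with the full URL is the standard approach, although the certificate error above may still need to be resolved separately.

Code:
use "https://www.lisdatacenter.org/wp-content/uploads/files/us16ih.dta", clear
* or download a local copy first, then open it
copy "https://www.lisdatacenter.org/wp-content/uploads/files/us16ih.dta" us16ih.dta, replace
use us16ih.dta, clear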

Problems in obtaining the right hand side confidence interval of a regression

Hi there,

I calculated the abnormal returns of German federal state bonds and plotted them with their lower and upper confidence bounds in a graph, which worked.
Unfortunately, it is not working the same way it did yesterday. When I tried to run the regression again this morning, I obtained an upper confidence limit of 0 on every dummy instead of a positive one. Since I did not change anything in my do file (I just ran the regression a second time), I would really appreciate your guidance!

Many thanks in advance
Freddy

Here is my do file (shortened to just two bonds and one regression):

clear

cd "G:\GAST\Praktikanten\Frederik Witzemann\Stata intro\NRW floating"

local bonds NR2008FR0538S BRCPN040740

foreach x in `bonds'{
clear
import excel nrwbrd_final.xlsx, sheet("`x'")
drop if _n<5
ren A date
ren B bp_`x'
destring bp_`x', replace ignore("NA")

save bp_`x', replace
}
clear
use bp_NR2008FR0538S

local bonds BRCPN040740


foreach x in `bonds'{
merge 1:1 date using bp_`x'.dta
drop _merge*

}

gen date2 = date(date, "MDY")
format date2 %td
gen event_date = date("5/15/2017", "MDY")
gen date3 = date(date, "MDY")

sort date2

tsset date2

local bonds NR2008FR0538S BRCPN040740


foreach i in `bonds'{
gen r_`i'=(bp_`i'- bp_`i'[_n-1])/ bp_`i'[_n-1]

}


gen dif = event_date - date3

gen event_window=0
replace event_window=1 if dif>=-10 & dif<=10
egen count_event_obs=count(event_window),
gen estimation_window=1 if dif<45 & dif>-45
egen count_est_obs=count(estimation_window),
replace event_window=0 if event_window==.
replace estimation_window=0 if estimation_window==.



forval i = 0(1)20{
gen dummy_`i' = 1 if dif>`i'-11 & dif<`i'+-9
replace dummy_`i'=0 if dummy_`i'==.

}

reg r_NR2008FR0538S r_BRCPN040740 dummy_0 dummy_1 dummy_2 dummy_3 dummy_4 dummy_5 dummy_6 dummy_7 dummy_8 dummy_9 dummy_10 dummy_11 dummy_12 dummy_13 dummy_14 dummy_15 dummy_16 dummy_17 dummy_18 dummy_19 dummy_20 if estimation_window==1


Here is my regression output with the value of 0 in the upper (right-hand) confidence limits:


------------------------------------------------------------------------------
r_NR20~0538S |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
r_BRC~040740 |   .0153346   .0077454     1.98   0.054    -.000256    .0309252
     dummy_0 |  -.0022093   .0004813    -4.59   0.000   -.0031781   -.0012406
     dummy_1 |  -.0003903   .0004772    -0.82   0.418   -.0013508    .0005702
     dummy_2 |   -.000165   .0004772    -0.35   0.731   -.0011256    .0007956
     dummy_3 |  -.0000542   .0004811    -0.11   0.911   -.0010225    .0009142
     dummy_4 |          0  (omitted)
     dummy_5 |          0  (omitted)
     dummy_6 |  -.0001159   .0004781    -0.24   0.809   -.0010783    .0008465
     dummy_7 |  -.0002397   .0004787    -0.50   0.619   -.0012033    .0007238
     dummy_8 |  -.0001289   .0004845    -0.27   0.791   -.0011041    .0008464
     dummy_9 |  -.0004905     .00048    -1.02   0.312   -.0014568    .0004757
    dummy_10 |   .0000395   .0004893     0.08   0.936   -.0009455    .0010244
    dummy_11 |          0  (omitted)
    dummy_12 |          0  (omitted)
    dummy_13 |  -.0001012   .0004823    -0.21   0.835    -.001072    .0008696
    dummy_14 |   .0001426   .0004796     0.30   0.768   -.0008228    .0011079
    dummy_15 |  -.0002955   .0004811    -0.61   0.542   -.0012638    .0006728
    dummy_16 |  -.0001507   .0004773    -0.32   0.754   -.0011116    .0008101
    dummy_17 |   .0001299   .0004795     0.27   0.788   -.0008352     .001095
    dummy_18 |          0  (omitted)
    dummy_19 |          0  (omitted)
    dummy_20 |  -.0005231   .0004787    -1.09   0.280   -.0014868    .0004406
       _cons |   .0001736   .0000682     2.55   0.014    .0000364    .0003108
------------------------------------------------------------------------------





Here is my regression output with positive values in the upper (right-hand) confidence limits:
      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
   .0153346   .0077454     1.98   0.054    -.000256    .0309252
  -.0022093   .0004813    -4.59   0.000   -.0031781   -.0012406
  -.0003903   .0004772    -0.82   0.418   -.0013508    .0005702
   -.000165   .0004772    -0.35   0.731   -.0011256    .0007956
  -.0000542   .0004811    -0.11   0.911   -.0010225    .0009142
  -.0001159   .0004781    -0.24   0.809   -.0010783    .0008465
  -.0002397   .0004787    -0.50   0.619   -.0012033    .0007238
  -.0001289   .0004845    -0.27   0.791   -.0011041    .0008464
  -.0004905     .00048    -1.02   0.312   -.0014568    .0004757
   .0000395   .0004893     0.08   0.936   -.0009455    .0010244
  -.0001012   .0004823    -0.21   0.835    -.001072    .0008696
   .0001426   .0004796     0.30   0.768   -.0008228    .0011079
  -.0002955   .0004811    -0.61   0.542   -.0012638    .0006728
  -.0001507   .0004773    -0.32   0.754   -.0011116    .0008101
   .0001299   .0004795     0.27   0.788   -.0008352     .001095
  -.0005231   .0004787    -1.09   0.280   -.0014868    .0004406
   .0001736   .0000682     2.55   0.014    .0000364    .0003108

psgraph

I'm using the psgraph command to generate a propensity score histogram with the binary treatment variable and the pscore variable. The command is: "psgraph, NHIS mypscore". However, the result I get is the error "unknown egen function sum()".
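A sketch of the syntax I believe psgraph (from the psmatch2 package) expects, with the treatment indicator and the propensity score passed as named options rather than bare arguments:

Code:
psgraph, treated(NHIS) pscore(mypscore)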

pscore

Is the command pscore present in Stata 16?
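For reference, pscore is a user-written Stata Journal command rather than part of official Stata, so a quick way to check from within Stata 16 is to locate (and, if needed, install) it:

Code:
findit pscore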

Questions about asdoc with option keep

Hello, everyone!

I am using asdoc to report some nested regression tables to Word. For the sake of simplicity, I report only specific coefficients with the keep option. The code is as follows:
Code:
    asdoc reg y1 x1, vce(cluster id), nest add(Month, NO, Id, NO) tzok replace save(Regression.doc)
    asdoc reg y2 x1, vce(cluster id), nest add(Month, NO, Id, NO) tzok
    asdoc xtreg y1 x1 i.month, fe cluster(id), nest add(Month, YES, Id, YES) tzok 
    asdoc xtreg y2 x1 i.month, fe cluster(id), nest add(Month, YES, Id, YES) tzok keep(x1 _cons) title(Table 1:)

    asdoc reg y3 x1, vce(cluster id), nest add(Month, NO, Id, NO) tzok replace save(Regression.doc)
    asdoc reg y4 x1, vce(cluster id), nest add(Month, NO, Id, NO) tzok
    asdoc xtreg y3 x1 i.month, fe cluster(id), nest add(Month, YES, Id, YES) tzok 
    asdoc xtreg y4 x1 i.month, fe cluster(id), nest add(Month, YES, Id, YES) tzok keep(x1 _cons) title(Table 2:)
However, only Table 1 appears in the Word document. Could anyone tell me why, and how to solve it?

Thank you in advance and best wishes!

Jishuang Yu