Channel: Statalist

Sample set and variable creation

Dear Stata Experts,

Please help me overcome the following issue; it concerns a question I asked some time ago. "personid" is the ID of a person, "cusip6" is the firm ID, and "trandate" is the date on which a person ("personid") makes a transaction. I need to find individuals ("personid") who make transactions in the same month in three consecutive years within a given firm. (Say person A made transactions in 2000m1, 1999m1, and 1998m1 with firm XXXX; then he gets "1" in all observations for year 2000. This variable is "consecutive".) I use the code below, but it does not necessarily take consecutive years into account. I attach a sample of the data. Please advise me on how I can improve it.
Code:
sort personid
gen exp_no = _n
gen year = year(trandate)
gen ym = mofd(trandate)
format %tm ym
 
* exports in the same month for 3 consecutive years
gen trade_month = month(trandate)
isid cusip6 personid trade_month year exp_no, sort
by cusip6 personid trade_month year: gen cm_tag = _n == 1
bysort cusip6 personid cm_tag trade_month (year): gen has3m = (year[_n-3] == year - 3) & cm_tag
 
bysort personid cusip6 year: egen consequtive=sum(has3m)


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input double personid str6 cusip6 long trandate float(exp_no year ym trade_month cm_tag has3m consequtive)
12099844 "909911" 11526  5 1991 378  7 1 0 0
12099844 "909911" 11548  1 1991 379  8 1 0 0
12099844 "909911" 11834  6 1992 388  5 1 0 0
12099844 "909911" 11848  3 1992 389  6 1 0 0
12099844 "909911" 13205  4 1996 433  2 0 0 0
12099844 "909911" 13205  2 1996 433  2 1 0 0
12099846 "909911" 13020  7 1995 427  8 1 0 0
12099846 "909911" 13179  8 1996 432  1 1 0 0
12099847 "909911" 12597  9 1994 413  6 1 0 0
12099849 "909911" 16029 10 2003 526 11 1 0 0
12099849 "909911" 16007 14 2003 525 10 1 0 0
12099849 "909911" 15740 24 2003 517  2 1 0 0
12099849 "909911" 15923 23 2003 523  8 1 0 0
12099849 "909911" 16103 21 2004 529  2 1 0 0
12099849 "909911" 16198 15 2004 532  5 1 0 0
12099849 "909911" 16310 16 2004 535  8 1 0 0
12099849 "909911" 16572 13 2005 544  5 1 0 0
12099849 "909911" 16496 18 2005 542  3 1 0 0
12099849 "909911" 16685 11 2005 548  9 1 0 0
12099849 "909911" 16762 20 2005 550 11 1 0 0
12099849 "909911" 16855 19 2006 553  2 1 0 0
12099849 "909911" 16937 22 2006 556  5 1 0 0
12099849 "909911" 17730 17 2008 582  7 1 0 0
12099849 "909911" 17933 12 2009 589  2 1 0 0
12099858 "909911" 16007 33 2003 525 10 1 0 0
12099858 "909911" 16198 29 2004 532  5 1 0 0
12099858 "909911" 16685 32 2005 548  9 1 0 0
12099858 "909911" 16762 27 2005 550 11 1 0 0
12099858 "909911" 16572 31 2005 544  5 1 0 0
12099858 "909911" 16490 30 2005 541  2 1 0 0
12099858 "909911" 16926 38 2006 556  5 1 0 0
12099858 "909911" 17135 28 2006 562 11 0 0 0
12099858 "909911" 16985 35 2006 558  7 1 0 0
12099858 "909911" 17115 25 2006 562 11 1 0 0
12099858 "909911" 17321 42 2007 569  6 1 0 0
12099858 "909911" 17231 41 2007 566  3 1 0 0
12099858 "909911" 17493 26 2007 574 11 1 0 0
12099858 "909911" 17563 36 2008 577  2 1 0 1
12099858 "909911" 17850 39 2008 586 11 1 1 1
12099858 "909911" 18697 34 2011 614  3 1 0 0
12099858 "909911" 19068 40 2012 626  3 1 0 0
12099858 "909911" 19492 37 2013 640  5 1 0 0
12099861 "909911" 17021 52 2006 559  8 1 0 0
12099861 "909911" 17218 47 2007 565  2 1 0 0
12099861 "909911" 17400 44 2007 571  8 1 0 0
12099861 "909911" 17399 53 2007 571  8 0 0 0
12099861 "909911" 18400 46 2010 604  5 1 0 0
12099861 "909911" 18498 43 2010 607  8 1 0 0
12099861 "909911" 18697 48 2011 614  3 1 0 0
12099861 "909911" 18949 54 2011 622 11 1 0 0
12099861 "909911" 19134 51 2012 628  5 1 0 0
12099861 "909911" 19683 49 2013 646 11 1 0 0
12099861 "909911" 20041 50 2014 658 11 1 0 0
12099861 "909911" 20235 45 2015 664  5 1 0 0
12305414 "007974" 13368 58 1996 439  8 1 0 0
12305414 "007974" 13440 63 1996 441 10 1 0 0
12305414 "007974" 13986 59 1998 459  4 1 0 0
12305414 "007974" 14637 69 2000 480  1 0 0 0
12305414 "007974" 14812 68 2000 486  7 0 0 0
12305414 "007974" 14815 70 2000 486  7 0 0 0
12305414 "007974" 14741 67 2000 484  5 1 0 0
12305414 "007974" 14640 64 2000 480  1 1 0 0
12305414 "007974" 14817 57 2000 486  7 1 0 0
12305414 "007974" 15299 56 2001 502 11 1 0 0
12305414 "007974" 15034 62 2001 493  2 1 0 0
12305414 "007974" 15110 65 2001 496  5 1 0 0
12305414 "007974" 15091 61 2001 495  4 1 0 0
12305414 "007974" 15299 60 2001 502 11 0 0 0
12305414 "007974" 15267 55 2001 501 10 1 0 0
12305414 "007974" 15020 66 2001 493  2 0 0 0
12901687 "007974" 15916 71 2003 522  7 1 0 0
12901687 "007974" 16188 72 2004 531  4 1 0 0
13088267 "007974" 18109 80 2009 594  7 1 0 0
13088267 "007974" 18112 90 2009 595  8 1 0 0
13088267 "007974" 18571 91 2010 610 11 0 0 0
13088267 "007974" 18507 81 2010 608  9 1 0 0
13088267 "007974" 18323 92 2010 602  3 0 0 0
13088267 "007974" 18574 76 2010 610 11 1 0 0
13088267 "007974" 18571 79 2010 610 11 0 0 0
13088267 "007974" 18323 73 2010 602  3 1 0 0
13088267 "007974" 18323 82 2010 602  3 0 0 0
13088267 "007974" 18571 89 2010 610 11 0 0 0
13088267 "007974" 18507 87 2010 608  9 0 0 0
13088267 "007974" 18931 77 2011 621 10 0 0 0
13088267 "007974" 18928 74 2011 621 10 1 0 0
13088267 "007974" 19117 83 2012 628  5 1 0 0
13088267 "007974" 19120 86 2012 628  5 0 0 0
13088267 "007974" 20026 88 2014 657 10 1 0 0
13088267 "007974" 19956 93 2014 655  8 0 0 0
13088267 "007974" 19955 85 2014 655  8 1 0 0
13088267 "007974" 20151 84 2015 662  3 0 0 0
13088267 "007974" 20150 75 2015 662  3 1 0 0
13088267 "007974" 20152 78 2015 662  3 0 0 0
end
format %d trandate
format %tm ym
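A sketch of one possible repair, offered only as an illustration, not a definitive answer: first reduce to one record per person-firm-month-year, then test within each person-firm-month whether the two preceding years are also present, and finally merge the flag back onto the full data. The names run3, consecutive, and runs below are made up for this sketch:

Code:
preserve
* one record per person-firm-calendar month-year
keep personid cusip6 trade_month year
duplicates drop
* flag the third year of a run of three consecutive years
bysort personid cusip6 trade_month (year): ///
    gen byte run3 = (year[_n-1] == year - 1) & (year[_n-2] == year - 2)
keep if run3
keep personid cusip6 year
duplicates drop
gen byte consecutive = 1
tempfile runs
save `runs'
restore
* mark all observations in the qualifying person-firm-year
merge m:1 personid cusip6 year using `runs', keep(master match) nogenerate
replace consecutive = 0 if missing(consecutive)

The key difference from the original code is that the comparison runs over distinct years only, so repeated transactions within the same month cannot push the three-year check onto the wrong observation.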


Reshaping a panel dataset from wide to long

Hello everyone,

I was wondering if anyone could help me understand what I may be doing wrong with my ID variables; I would like to reshape my dataset from wide to long.

I have a panel dataset with 4 waves (but I only want to use waves 3 and 4); each wave has a person variable and a case/household variable. I merged the two waves using these person and case variables.

I have attached a screenshot of a section of what these variables look like:

I then created one ID variable:
egen personID = group(person household)

And finally tried to reshape using this:
reshape long PresyrW PresmonW TypeW sexw DVGIEMPw PermJbW EmpStYW EdLevelW DVHasDCW PFTyp1W POMeth1W POEmFr1W ORetIncW OriskaW OriskcW OSaferetW OunderW OPenSavW DVAge9W PDCVal1W SpendMW LvTdayW, i(personID) j(wave)

When I try to reshape the dataset I get error r(9): variable personID does not uniquely identify the observations.

I was wondering how I might deal with duplicates in my ID variable or how to generate one that doesn't contain duplicates so that I am able to reshape.
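One way to see why r(9) occurs is to inspect the duplicate person-household pairs first, and then decide whether they are genuine repeats to drop or distinct records that need a counter. The names dup, seq, and personID2 below are hypothetical:

Code:
* how many person-household pairs occur more than once?
duplicates report person household
duplicates tag person household, generate(dup)
list person household if dup > 0, sepby(person)

* if the duplicates are true repeats, drop them;
* otherwise a within-pair counter makes the identifier unique:
bysort person household: gen seq = _n
egen personID2 = group(person household seq)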

Any help is greatly appreciated, thanks,

Alice

Hosmer-Lemeshow test


Hi, I'm trying to use -estat gof- to perform the Hosmer-Lemeshow test after xtlogit. However, the result is:

. estat gof
invalid subcommand gof
r(321);

Could you help me clarify this error?
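For what it's worth, -estat gof- (which implements the Hosmer-Lemeshow test) is only available after single-equation estimators such as logit, logistic, or probit; it is not implemented after xtlogit, which is why the subcommand is rejected. A rough, pooled check that ignores the panel structure might look like this (y, x1, x2 are placeholder names):

Code:
logit y x1 x2
estat gof, group(10) table

Bear in mind that a pooled goodness-of-fit test says nothing about the adequacy of the random-effects specification itself.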

Replacing missing values by sample average to retain sample size

I intend to replace missing values with the sample average in order to retain the sample size.
Is this an acceptable approach, and if so, how can I proceed? (I need help with the code.)
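If you do go this route, a minimal sketch for one variable (here a hypothetical x) is below; note that mean imputation shrinks the variance and can bias coefficients and standard errors, so multiple imputation with -mi impute- is usually preferred:

Code:
* replace missing values of x with the sample mean
summarize x, meanonly
replace x = r(mean) if missing(x)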

Thank you for your usual assistance and knowledge sharing.

Jean

Convert SAS Programs to Stata?

Hi

I am a current SAS user and I want to convert some of my SAS programs to Stata. Since it may take me a while to learn Stata well enough to do that, is there a Stata add-in or consulting group that offers such services? Thanks.

How to format?

Suppose that I have the following data:
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int id float(tm x1 y0 x2)
1101 564        . 11922113 11922113
1101 565        . 11922113        .
1101 566        . 11922113        .
1101 567        . 11922113        .
1101 568        . 11922113        .
1101 569 11922113 11922113        .
1101 570        .  9817626  9817626
1101 571        .  9817626        .
1101 572        .  9817626        .
1101 573        .  9817626        .
1101 574        .  9817626        .
1101 575  9817626  9817626        .
1101 576        . 12423117 12423117
1101 577        . 12423117        .
1101 578 12423117 12423117        .
1101 579        . 11760556 10615331
1101 580        . 11760556        .
1101 581 11760556 11760556        .
1101 582        . 10615331 10615331
1101 583        . 10615331        .
1101 584 10615331 10615331        .
1101 585        . 13033535 13033535
1101 586        . 13033535        .
1101 587 13033535 13033535        .
1102 564        .  6037845  6037845
1102 565        .  6037845        .
1102 566        .  6037845        .
1102 567        .  6037845        .
1102 568        .  6037845        .
1102 569  6037845  6037845        .
1102 570        .  6207117  6207117
1102 571        .  6207117        .
1102 572        .  6207117        .
1102 573        .  6207117        .
1102 574        .  6207117        .
1102 575  6207117  6207117        .
1102 576        . 12815413 12815413
1102 577        . 12815413        .
1102 578 12815413 12815413        .
1102 579        . 20180704 20180704
1102 580        . 20180704        .
1102 581 20180704 20180704        .
1102 582        . 17554606 17554606
1102 583        . 17554606        .
1102 584 17554606 17554606        .
1102 585        . 16931368 16931368
1102 586        . 16931368        .
1102 587 16931368 16931368        .
end
format %tm tm
I performed the `mipolate' command as follows:
Code:
set seed 8633
gen r0 = floor((100)*runiform())/10

set seed 3368
gen r = floor((100)*runiform())/10

replace x1 = 0 if x1 < .
replace x1 = x1 + r
replace x2 = 0 if x2 < .
replace x2 = x2 + r

keep id tm x1 x2

mipolate x2 tm, by(id) gen(x2f) forward

gen x3 = sqrt(x1)
foreach v of varlist x1 x3 {
  mipolate `v' tm, by(id) gen(`v'b) backward
}
and obtained the following results:
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int id float(tm x1 x2) double x2f float x3 double(x1b x3b)
1101 564   .  .7   .699999988079071         . 2.5999999046325684 1.6124515533447266
1101 565   .   .   .699999988079071         . 2.5999999046325684 1.6124515533447266
1101 566   .   .   .699999988079071         . 2.5999999046325684 1.6124515533447266
1101 567   .   .   .699999988079071         . 2.5999999046325684 1.6124515533447266
1101 568   .   .   .699999988079071         . 2.5999999046325684 1.6124515533447266
1101 569 2.6   .   .699999988079071 1.6124516 2.5999999046325684 1.6124515533447266
1101 570   .   7                  7         .  3.200000047683716 1.7888543605804443
1101 571   .   .                  7         .  3.200000047683716 1.7888543605804443
1101 572   .   .                  7         .  3.200000047683716 1.7888543605804443
1101 573   .   .                  7         .  3.200000047683716 1.7888543605804443
1101 574   .   .                  7         .  3.200000047683716 1.7888543605804443
1101 575 3.2   .                  7 1.7888544  3.200000047683716 1.7888543605804443
1101 576   . 4.8  4.800000190734863         .                 .5  .7071067690849304
1101 577   .   .  4.800000190734863         .                 .5  .7071067690849304
1101 578  .5   .  4.800000190734863  .7071068                 .5  .7071067690849304
1101 579   . 9.5                9.5         .                  8 2.8284270763397217
1101 580   .   .                9.5         .                  8 2.8284270763397217
1101 581   8   .                9.5  2.828427                  8 2.8284270763397217
1101 582   . 3.8  3.799999952316284         .  9.699999809265137 3.1144821643829346
1101 583   .   .  3.799999952316284         .  9.699999809265137 3.1144821643829346
1101 584 9.7   .  3.799999952316284  3.114482  9.699999809265137 3.1144821643829346
1101 585   . 9.2  9.199999809265137         .  4.199999809265137 2.0493900775909424
1101 586   .   .  9.199999809265137         .  4.199999809265137 2.0493900775909424
1101 587 4.2   .  9.199999809265137   2.04939  4.199999809265137 2.0493900775909424
1102 564   .  .6  .6000000238418579         .   .699999988079071  .8366600275039673
1102 565   .   .  .6000000238418579         .   .699999988079071  .8366600275039673
1102 566   .   .  .6000000238418579         .   .699999988079071  .8366600275039673
1102 567   .   .  .6000000238418579         .   .699999988079071  .8366600275039673
1102 568   .   .  .6000000238418579         .   .699999988079071  .8366600275039673
1102 569  .7   .  .6000000238418579    .83666   .699999988079071  .8366600275039673
1102 570   .  .9  .8999999761581421         .  7.800000190734863 2.7928481101989746
1102 571   .   .  .8999999761581421         .  7.800000190734863 2.7928481101989746
1102 572   .   .  .8999999761581421         .  7.800000190734863 2.7928481101989746
1102 573   .   .  .8999999761581421         .  7.800000190734863 2.7928481101989746
1102 574   .   .  .8999999761581421         .  7.800000190734863 2.7928481101989746
1102 575 7.8   .  .8999999761581421  2.792848  7.800000190734863 2.7928481101989746
1102 576   . 7.6  7.599999904632568         .  7.199999809265137  2.683281421661377
1102 577   .   .  7.599999904632568         .  7.199999809265137  2.683281421661377
1102 578 7.2   .  7.599999904632568 2.6832814  7.199999809265137  2.683281421661377
1102 579   . 7.8  7.800000190734863         .  8.800000190734863  2.966479539871216
1102 580   .   .  7.800000190734863         .  8.800000190734863  2.966479539871216
1102 581 8.8   .  7.800000190734863 2.9664795  8.800000190734863  2.966479539871216
1102 582   .   6                  6         .   .800000011920929  .8944271802902222
1102 583   .   .                  6         .   .800000011920929  .8944271802902222
1102 584  .8   .                  6  .8944272   .800000011920929  .8944271802902222
1102 585   . 1.8 1.7999999523162842         .  5.800000190734863 2.4083189964294434
1102 586   .   . 1.7999999523162842         .  5.800000190734863 2.4083189964294434
1102 587 5.8   . 1.7999999523162842  2.408319  5.800000190734863 2.4083189964294434
end
format %tm tm
As you can see, x2f is generated from x2 but displays many more digits. My goal is to reformat x2f (and x1b, x3b) so that, when I type the `list' command, the values are displayed with only one decimal digit. I think this is related to the float or double display format, but I don't know exactly how to deal with it.
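One possible fix: display formats change how values are shown without altering the stored values, so something like this should give one decimal digit in -list- output:

Code:
format x2f x1b x3b %9.1f
list id tm x2f x1b x3b in 1/5

The extra digits come from the variables being stored as double; the %9.1f format only affects display, so any later calculations still use the full-precision values.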

IRF Graph axis values

How can I program Stata to show the IRF graph results on the y axis from 0.00 to 0.5 instead of from 0 to 1? When the results are shown from 0 to 1, it looks as if there is no elasticity at all.
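-irf graph- accepts standard twoway axis options, so one possible approach is to set the scale and labels explicitly (the impulse and response names below are placeholders):

Code:
irf graph irf, impulse(x) response(y) yscale(range(0 0.5)) ylabel(0(0.1)0.5)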

Interpreting effect of X on Y when X has a linear and quadratic term in a mixed model

Hello,

I am using Stata 14. I have a panel dataset - firms and years (firm is the higher level). I am running a mixed model which includes both fixed and random effects.

The model equation looks like: Y = (b0 + b0i) + (b1 + b1i) X1 + (b2) X1sq + (b3 + b3i) X2 for firm i
where X1sq = X1^2; I have a random intercept, a random slope for X1, and one for X2. The bi's denote the random effects (BLUPs).

b3 gives the mean effect of X2, and if I want a firm-specific value with respect to X2, then for each firm i in year t the value = b3 + b3i.

How do I get the same for X1? In a 'non-mixed' model, the marginal effect of X1 = b1 + 2(b2)(X1), but I have a random slope for b1 and none for b2. In that case, would the firm-specific value = (b1 + b1i) + 2(b2)(X1) for firm i in year t?

Thanks.

How to create dummy variables

I have 8 countries in my data. I want to create a dummy variable for each country. How can I do that?
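One standard approach: -tabulate- with the generate() option creates one indicator per country, or factor-variable notation can be used directly in estimation without creating the dummies at all (y and x below are placeholder names):

Code:
* creates cdum1 ... cdum8, one 0/1 indicator per country
tabulate country, generate(cdum)

* alternatively, let Stata handle the dummies at estimation time
regress y x i.country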

Overlay two histograms (with just the normal plots)

Hi,

I am trying to overlay two graphs. (Stata version 13)

Graph 1:

Code:
histogram depvar if (cat==0), normal lcolor(white) fcolor(white) xtitle("X variable") title("Results - Cat=0") legend(off) addplot(pci 0 0 8 0, lpattern(shortdash) lcolor(black))
As you can see, I am completely removing the histogram bars and only the normal density plot is visible (with a line going through the '0' value (mean) added via addplot()).

The second graph is the same, but for cat==1

Code:
histogram depvar if (cat==1), normal lcolor(white) fcolor(white) xtitle("X variable") title("Results - Cat=1") legend(off) addplot(pci 0 0 8 0, lpattern(shortdash) lcolor(black))
How do I combine both, so that the normal curves appear on the same graph rather than one by one?
Also, please suggest how to make one curve solid and the other dashed, with a legend.
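One possible way to do this without -histogram- at all (a sketch): compute each group's mean and sd, then overlay two normal densities with -twoway function-, one solid and one dashed:

Code:
summarize depvar if cat == 0
local m0 = r(mean)
local s0 = r(sd)
summarize depvar if cat == 1
local m1 = r(mean)
local s1 = r(sd)
twoway (function normalden(x, `m0', `s0'), range(depvar) lpattern(solid))  ///
       (function normalden(x, `m1', `s1'), range(depvar) lpattern(dash)),  ///
       xtitle("X variable") legend(order(1 "Cat=0" 2 "Cat=1"))

The vertical reference line at 0 from the original graphs can be added back with a (pci 0 0 8 0, ...) overlay if needed.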

Thanks in advance.

Correlated random effects model to solve endogeneity problem

Hi,
I have estimated a random-effects model to check the impact of time-invariant variables. Now I cannot find a suitable IV to control for endogeneity. After performing the correlated random effects approach proposed by Mundlak, my results still hold (I incorporated the means of the time-varying variables as additional regressors). Can I say my results are consistent and unbiased in spite of endogeneity? (ref: Bell, A., & Jones, K. (2015). Explaining fixed effects: Random effects modeling of time-series cross-sectional and panel data. Political Science Research and Methods, 3(01), 133-153.)

reestimating a model of immigration on the native wage - help with stata input/output

Hello,

I'm trying to use a reg3 model similar to a paper by Bodvarsson et al. (http://repec.iza.org/dp2919.pdf) to find the effects of immigration on the native wage for my undergrad dissertation. The dependent variables from each channel equation are included as explanatory variables in the aggregate equation, while other variables may also affect each channel.

aggregate equation: nativewage = immigrantwage + retailsalesgrowthflorida(%) + minwage + fairrent + usrealgdp (billions)
immigrantwage channel: immigrantwage = immigrantpopulationshare(%) + experience + highestgrade
retailsales channel: retailsalesgrowthflorida(%) = immigrantpopulationshare(%) + nationalunemployment + growthofusrealgdp(%) + fedfundsrate(%)


I am using CPS data for Florida between 2010-2015, with around 6,000 observations, of which 1,200 are foreign born, making up the 'immigrant' wage variable of my model. Most of my other variables are yearly data which I merged with the CPS data. To overcome this and include immigrant wage as a variable, I collapsed the data so that each observation is a year-retail sector-county combination, weighted by subgroup (white, black, Hispanic, Cuban):

collapse (mean) nativewage immigrantwage immigrantpopulationshare ... [aweight=subgroup], by(year retailsector county)

This collapses the data to 304 observations, where each observation has a corresponding native and immigrant wage.

However, when I then run a reg3 regression (or even OLS on each equation) my output is incredibly erratic. In the retail sales equation, for example, every coefficient has a t value > 10 and P>|t| of 0.000. For most other variables in each equation, the signs are not even the way I would expect them to be (nationalunemployment significantly positive for retail sales, etc.). Could this be because I'm regressing annual data with a high n?

Can anyone explain where I may have gone wrong, either in setting up my data or in why I am getting these results? I have tried this 150 different ways and am totally out of ideas. Thank you in advance!


Odds ratio table with forest plot


Hello,

I saw this image (attached) in the following journal article:

Chromosomal Instability Portends Superior Response of Rectal Adenocarcinoma to Chemoradiation Therapy
Cancer 120(11) · June 2014.

It was produced with XLSTAT. Is it possible to produce such a table/image in Stata?

Thank you,
Ritu

xtprobit - endogeneity tests?

Hi,

I have a panel dataset and I am using a Probit model with random effects.

Could you please suggest a way (if there is one) to test for endogeneity of a variable, and to conduct tests of instrument relevance and instrument exogeneity, that is compatible with xtprobit, re?

My understanding is that ivprobit is not applicable to panel data.

Many thanks

Count groups per country

Hi,

my panel dataset consists of about 900 companies from all over the world over several years. For my descriptive analysis I would like to show a table including:

1. the number of observations per country. I get this with -tab country-. (Just for the sake of completeness; not my question.)

2. the number of companies per country. How can I obtain this information? A company's unique identifier is its ISIN.
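One standard approach (assuming the identifier variable is called isin): tag exactly one observation per company within each country with -egen, tag()-, then tabulate only the tagged rows:

Code:
egen byte firsttag = tag(country isin)
* frequencies are now the number of distinct companies per country
tab country if firsttag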

Many thanks
Marc

fractional probit model added to -cmp-

The latest version of cmp, now on SSC, adds the fractional probit model of Papke and Wooldridge (1996) as a model type. This is the same model implemented in isolation in Stata 14's -fracreg- command.

As usual, the purpose of cmp is not to mimic other commands, but showing that it can do so is informative. In Stata 14, these give the same results:

Code:
webuse 401k
fracreg probit prate mrate ltotemp age i.sole
margins, dydx(mrate)

cmp setup
cmp (prate = mrate ltotemp age i.sole), ind($cmp_frac) qui
margins, dydx(mrate) predict(pr)
You can also do bivariate fractional probits, IV fractional probits, etc.

Install with "ssc install cmp, replace". Comments welcome.

--David

Save certain coefficients from many regressions for the graph

Hello to all,

I think my question is trivial, but I still can't work it out.
I have quarterly data on unemployment by age group; my regressors include "state unemployment" and many others. So there are 12 (panel) regressions, one per age group. I'm interested in saving only the estimates of "state unemployment" and plotting them in one graph; for the latter I want to use coefplot. If I saved all the estimates from each regression, the graph would be impossible to read, because my model includes many regressors.
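coefplot's keep() option restricts the plot to the named coefficients, so there is no need to discard the other estimates beforehand. A sketch, using hypothetical model and variable names (stateunemp for the regressor of interest):

Code:
xtreg unemp_age1 stateunemp controls*, fe
estimates store m1
xtreg unemp_age2 stateunemp controls*, fe
estimates store m2
* ... repeat for the remaining age groups ...
coefplot m1 m2, keep(stateunemp) xline(0)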

Hope to get an answer...

lincom and interactions in linear mixed models

Hi, I'm Laure ROUCH, PharmD, PhD, working on linear mixed models in Stata to assess the effect of hypertension on cognitive decline in a population of middle-aged subjects. Patients were followed for 10 years (inclusion: 1996; 5-year follow-up: 2001; 10-year follow-up: 2006). Cognitive function was assessed at the 5- and 10-year follow-ups. Hypertension was assessed at baseline.

My model can be written as follows (in a very simplified way just to explain my problem).

xi:xtmixed perfgen hypertension##year sex##year || numsujet : i.year

perfgen is a cognitive score ranging from 0 to 100
hypertension is the hypertension status at baseline coded as 1 or 0 (year=1996)
sex is coded as 1 or 0
year is a dummy variable with 1996 as a reference

I would like to assess the evolution in cognitive performance in the hypertensive group between the inclusion (1996) and 10-year follow-up (2006).

Without the interaction term sex##year, I had no problems. I had done :

margins hypertension#year, atmeans
lincom 2006.year+1.hypertension#2006.year

And I find the same thing with the lincom as manually when I look at the results of the margins.

But when I fit my model with the interaction term sex##year, the lincom needed to answer the same question (the evolution in cognitive performance in the hypertensive group between inclusion (1996) and the 10-year follow-up (2006)) obviously depends on sex, since there is a sex##year interaction.

Is it possible to obtain the beta and its p-value for the evolution in cognitive performance in the hypertensive group between 1996 and 2006, regardless of sex? Is there any way to get this result with the lincom command? That is, the same result as I get manually from the margins output, but with the p-value of the difference in cognition between 1996 and 2006 for hypertensive people?
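One possible lincom, offered only as a sketch: weight the sex#year interaction coefficient by the sample share of sex==1, which averages the 1996-to-2006 change in the hypertensive group over the sex distribution. The 0.5 below is a placeholder for the actual proportion of sex==1 in your sample:

Code:
lincom 2006.year + 1.hypertension#2006.year + 0.5*1.sex#2006.year

This should reproduce the difference you read off manually from the margins output, together with a test and p-value for it.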

Could you please help me to answer this question?

Thank you so much +++ in advance for your help,

Best regards,

Dr Laure ROUCH

changing CI on the graph (from horizontal to vertical) in the coefplot command

Hello everyone,
I made a graph, but the CIs point in the wrong direction: they should be vertical. I tried the ciopts() option, but it only changes the width, not the height. Graph.gph

I mainly used coefplot mod1 mod2 .... ciopts(height(5 ..)) vert
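For what it's worth, coefplot's option is spelled vertical rather than vert; with it, the confidence intervals are drawn as vertical spikes by default, without needing ciopts():

Code:
coefplot mod1 mod2, vertical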

P.S. Somehow I can't upload the picture due to 'Image resize failed due to your image library not having support for this image type. jpg'.

Announcing labeldatasyntax: Stata module to produce syntax to label variables and values, given a data dictionary

Dear Statalisters,

I have just posted my program labeldatasyntax on SSC.

It creates a syntax file (with syntax like that below) to label variables and/or values, given that a data dictionary is provided in one of a few specific formats.

label define regionlbl 1 "North" 2 "East" 3 "South" 4 "West"
label define sexlbl 1 "Male" 2 "Female"
label define yesnolbl 0 "No" 1 "Yes"

label values sex sexlbl
label values region regionlbl
label values badears yesnolbl
label values respprobs yesnolbl

label variable sex "Gender"
label variable region "Where the child lives"
label variable age "Age of child in years"
label variable dob "Date of birth"
label variable badears "Has bad ears?"
label variable respprobs "Has respiratory problems?"

It is also hoped that the .csv files provided in this package will be useful for communicating to data providers what a convenient format is for the data dictionary associated with a dataset.

Feedback welcome.

Best wishes, Mark