
Odds ratio for continuous variables in logistic regression

Hi everybody

I have a question about the interpretation of the following logistic regression:

The dependent variable and its coding: 0 (eGFR >90), 1 (eGFR 60-89)
One of my independent variables is continuous, and it has an odds ratio of 0.999

Can someone help me interpret this result?

Is it something like:
- For every one-unit increase in variable X, the odds of having an eGFR of 60-89 are multiplied by 0.999 (i.e., they decrease slightly)

Or is it:
- For every one-unit increase in variable X, the odds of having an eGFR >90 are multiplied by 0.999?
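
For reference, a minimal sketch of how such a model might be fit (the variable names here are hypothetical); in logistic regression the odds ratio always refers to the odds of the outcome category coded 1:

Code:
* outcome egfr_cat: 0 = eGFR >90, 1 = eGFR 60-89; x is the continuous predictor
logistic egfr_cat x
* the OR reported for x multiplies the odds that egfr_cat==1
* for each one-unit increase in x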

I hope someone can help

Hakan Hakansen.

Comparison of coefficients

Dear Stata Users,


I am trying to interpret a comparison of coefficients. I have two different ways of operationalizing a variable, so basically two different models, and I am supposed to compare the coefficients to see whether they are similar. This is the command I was told to use:

reg av uv if wave==1             // model for wave 1
est sto compare1                 // store first set of estimates
reg av uv if wave==0             // model for wave 0
est sto compare2                 // store second set of estimates
suest compare1 compare2          // combine estimates for cross-model inference
test [compare1]uv=[compare2]uv   // test equality of the two uv coefficients

This is my output (rollenbild is the variable). My professor told me that what matters in the output is only the significance. Now I don't really know how to interpret the whole output and what this means for my model. Can someone help? I would also be thankful if someone has literature on the topic or knows where I can find the answer, because I couldn't find it anywhere.

--------------------------------------------------------------------------------
               |               Robust
               |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
---------------+----------------------------------------------------------------
compare1_mean  |
    rollenbild |   .2413509   .0298794     8.08   0.000     .1827883    .2999135
         _cons |   .7316354   .0248671    29.42   0.000     .6828967     .780374
---------------+----------------------------------------------------------------
compare1_lnvar |
         _cons |   .2346791   .0465184     5.04   0.000     .1435047    .3258535
---------------+----------------------------------------------------------------
compare2_mean  |
    rollenbild |  -.0551894    .031637    -1.74   0.081    -.1171968     .006818
         _cons |   .9364011   .0253071    37.00   0.000        .8868    .9860022
---------------+----------------------------------------------------------------
compare2_lnvar |
         _cons |   .5572786    .034583    16.11   0.000     .4894971    .6250601
--------------------------------------------------------------------------------


Thank You!


Dropping all observations from a subgroup if a variable is negative for one or more of them

Hi there,

I'm trying to drop all observations from a subgroup if a variable is negative for one or more of them.
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long(hhnum permno) float edate str9 date float(quantityb priceb quantitys prices inv)
  629 19561 11820 "12may1992"  300   43.75    .      .  300
  629 19561 13059 "03oct1995"    .       .  300   66.5    0
  629 40707 11611 "16oct1991"  300    40.5    .      .  300
  629 40707 12387 "30nov1993"  300  26.375    .      .  600
  629 40707 12417 "30dec1993"    .       .  300 26.375  300
  629 40707 12737 "15nov1994"    .       .  300 24.125    0
  629 43916 13075 "19oct1995"  100    52.5    .      .  100
  629 43916 13333 "03jul1996"    .       .  100     39    0
  629 51692 12093 "09feb1993"  500   11.25    .      .  500
  629 51692 12198 "25may1993"    .       .  500  8.625    0
  629 55976 11458 "16may1991"  200 41.0625    .      .  200
  629 55976 11550 "16aug1991"    .       .  200  47.25    0
  629 55976 13268 "29apr1996"  200    23.5    .      .  200
  629 56223 13275 "06may1996"  200  27.375    .      .  200
  629 57795 11806 "28apr1992"  800    12.5    .      .  800
  629 57795 12015 "23nov1992"  800   7.375    .      . 1600
  629 57795 12052 "30dec1992"    .       .  800      7  800
  629 57795 12540 "02may1994"    .       .  800   .375    0
  629 62148 11753 "06mar1992"  200    57.5    .      .  200
  629 62148 11819 "11may1992"    .       .  200     66    0
  629 65568 12501 "24mar1994"  300  13.125    .      .  300
  629 66069 11491 "18jun1991" 1000   9.375    .      . 1000
  629 66069 11826 "18may1992"    .       .  500 12.625  500
  629 66069 12079 "26jan1993"    .       .  500 14.625    0
  629 69489 12737 "15nov1994"  300  10.875    .      .  300
  629 76129 12211 "07jun1993"  200  26.375    .      .  200
  629 76129 12220 "16jun1993"    .       .  200  22.25    0
  629 77064 12205 "01jun1993"  300  16.375    .      .  300
  629 77064 12282 "17aug1993"    .       .  300 14.375    0
  629 78203 12222 "18jun1993"  100      42    .      .  100
  629 78203 12492 "15mar1994"    .       .  200  22.75 -100
  629 78203 12715 "24oct1994"  300    15.5    .      .  200
  629 79856 12246 "12jul1993" 1000  3.8125    .      . 1000
 6664 11749 12303 "07sep1993"  800     2.5    .      .  800
 6664 11749 12436 "18jan1994"    .       .  800 3.4375    0
 6664 76351 13200 "21feb1996" 4000    .625    .      . 4000
 6664 76809 12269 "04aug1993"  500     5.5    .      .  500
 6664 76809 12758 "06dec1994"    .       .  500 3.0625    0
 6664 87127 12390 "03dec1993"  200   23.25    .      .  200
 7980 10775 12214 "10jun1993" 1000   6.375    .      . 1000
 7980 10775 12395 "08dec1993"    .       . 1000   6.25    0
 7980 75607 12312 "16sep1993"  300    17.5    .      .  300
 7980 75607 13181 "02feb1996"  300  13.125    .      .  600
Basically, the data track stock purchases and sales by households. I'm tracking the running inventory in inv, but sometimes it turns negative (which means the stocks were bought before the sample period). If a stock has negative inventory, all purchases and sales of that stock by that specific investor should be dropped. Any ideas?
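
A minimal sketch of one way to do this, assuming hhnum and permno together identify the investor-stock pair:

Code:
* flag investor-stock groups whose inventory is ever negative, then drop them
bysort hhnum permno: egen byte anyneg = max(inv < 0)
drop if anyneg
drop anyneg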

Many thanks in advance!

Linear Regression or Panel Regression

Hello

I just read a paper and would like to try to replicate it. However, I don't exactly understand whether the author of the paper is running a linear regression or a panel regression. His regression equation looks like this:

y_it = β·East_it + γ'x_it + λ_t + ε_it

with y_it denoting subjective inequality indices for individual i in year t. The regressor of main interest is East_it, a dummy variable denoting current residency in East Germany. The regression also includes a series of control variables and survey-year fixed effects, denoted by x_it and λ_t, respectively.


I know that he has a dataset for 3 years (1987, 1992 and 1999) on which he performs the regression, but he does not mention whether he is performing a linear regression or a panel regression; he only mentions, and I quote: "I run a series of simple regression models".
What do you think the author does in his paper: a panel regression or a linear regression? I'm confused because I'm not quite sure whether you can do a linear regression with a dataset over 3 years with a dummy.
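
For what it's worth, a pooled OLS with survey-year dummies is itself a linear regression on the pooled data; a sketch with hypothetical variable names:

Code:
* pooled OLS across the three survey years, with survey-year fixed effects
reg y i.east x1 x2 i.year, vce(robust)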

Thank you so much for your help.

Using destring to remove special characters

Hi all,

I'm trying to remove special characters from a string variable. I have two variables: one contains only "-", the other contains both "-" and "+".
I successfully removed "-" from the first variable. When I try to remove "-" and "+" together, Stata shows 'option "+ not allowed'. Here is my code: destring var, gen(new) ignore("-", "+"). How can I fix this? Thank you!
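
A likely fix, since destring's ignore() option takes a single string listing every character to strip rather than a comma-separated list:

Code:
* list both characters inside one ignore() string
destring var, gen(new) ignore("-+")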

Creation of New Variable from Long Variable

Hello Everyone,

I am currently working with a "long" variable of 9 digits and want to make a new variable consisting of the first 4 digits of the long variable. I used the following commands, where x is the long variable and y is the new first-four-digits variable:

gen id=string(x)
gen y=substr(id,1,4)

However, I am getting decimals, whereas the original long variable does not have decimals. For example:
for x = 110100101
I am getting y = 1.10
whereas what I want is 1101
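
A sketch of a likely fix: string() with its default format can fall back to scientific notation for large values (e.g. 1.101e+08, whose first four characters are "1.10"); forcing a fixed integer format avoids this:

Code:
* force a fixed (non-scientific) format before taking the substring
gen id = string(x, "%12.0f")
gen y = substr(id, 1, 4)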

Please Help. Thanks in advance.

Regards





Multicollinearity in simple difference-in-differences model

Hi All,
I have a simple 2-period panel data and I am trying to run a difference-in-differences model using the xtreg command.

generate treatxpost = treatment*post
xtreg sales post treatment treatxpost, fe

treatment and post are dummy variables for the treatment group and the post period.

I have balanced data for households and each household has sales in the pre and post period.

I am not getting an estimate for treatment as Stata is telling me: treatment omitted because of collinearity.

How do I get an estimate for both the treatment and post along with the interaction term treatxpost?
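
For context: treatment is constant within each household, so the household fixed effects absorb it and xtreg, fe must omit it; the interaction (the DiD effect) is still identified. A sketch using factor-variable syntax (the cluster variable hhid is hypothetical):

Code:
* treatment's main effect stays omitted under fe; post and post#treatment are
* estimable, and the coefficient on 1.post#1.treatment is the DiD estimate
xtreg sales i.post##i.treatment, fe vce(cluster hhid)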

Appreciate your pointers on this.

Thanks,
Laxman.

Help with nested loops in regression and interaction terms

Hi,
I am trying to create a nested loop to run a series of interactions in a regression command, and I have the following questions:
  1. I have tried the following command and receive the error "invalid vce":
local hetvar "prosociality intrinsic combined_motivation high_income pracdoc male"

foreach t of local hetvar {
eststo clear
foreach y of global treatment {
eststo, title("`y'"):reghdfe `y' nodrug_b nodrug_a `t'##(nodrug_b nodrug_a) if levelcode==2, vce(robust) absorb(diseasecode clinicid)
su `y' if nodrug_b==0 & nodrug_a==0 & e(sample)==1
estadd scalar conmean = `r(mean)'
}
}

However, on running it in the following format, it runs fine

eststo clear
foreach y of global treatment {
eststo, title("`y'"):reghdfe `y' nodrug_b nodrug_a prosociality##(nodrug_b nodrug_a) intrinsic##(nodrug_b nodrug_a) combined##(nodrug_b nodrug_a) ///
high_income##(nodrug_b nodrug_a) pracdoc##(nodrug_b nodrug_a) male##(nodrug_b nodrug_a) if levelcode==2, vce(robust) absorb(diseasecode clinicid)
su `y' if nodrug_b==0 & nodrug_a==0 & e(sample)==1
estadd scalar conmean = `r(mean)'
}

Could you please tell me why this is happening? And how can I build a nested loop in which each term in the local "hetvar" is interacted with the variables "nodrug_b" and "nodrug_a" in separate regressions? (A sketch appears after question 3 below.)

2. When outputting using esttab and the following commands, while it appears to run fine, there is no output file generated. How can I generate an output file for this?

esttab using "$path/Output/Motivation/treat_motivation_thc.csv", b(%9.3fc) se(%9.3fc) ///
starlevels( * 0.1 ** 0.05 *** 0.01) ar2(2) keep(nodrug_b nodrug_a 1.prosociality##(1.nodrug_b 1.nodrug_a)) intrinsic##(nodrug_b nodrug_a) ///
combined##(nodrug_b nodrug_a) high_income##(nodrug_b nodrug_a) pracdoc##(nodrug_b nodrug_a) ///

3. Is there any way for me to keep only the valid interaction terms in the output file, i.e. 1.prosociality#1.nodrug_a? And how can I label these interaction terms in the output file?
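
As referenced in question 1, a sketch of the intended nested loop written with plain ASCII quotes; the curly quotes in the posted local definition and in `t' are one plausible cause of the parsing error, since they break macro expansion:

Code:
local hetvar "prosociality intrinsic combined_motivation high_income pracdoc male"
foreach t of local hetvar {
    eststo clear
    foreach y of global treatment {
        eststo, title("`y'"): reghdfe `y' nodrug_b nodrug_a `t'##(nodrug_b nodrug_a) ///
            if levelcode==2, vce(robust) absorb(diseasecode clinicid)
        su `y' if nodrug_b==0 & nodrug_a==0 & e(sample)==1
        estadd scalar conmean = `r(mean)'
    }
}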

Many thanks,
Karishma

Stacked bar chart for panel data

Hi Statalist

My question is about creating a stacked bar chart for panel data. To illustrate:

Code:
. webuse citytemp

* the rest of the code creates a made-up panel dataset
. generate id = _n
. generate year = 2000
. expand 4
. bysort id: replace year = year + _n
From this I want to see the changing composition of, say, the various regions over time (it doesn't change in this made-up example, but in my actual dataset there are differences over time).

Code:
. tabulate region year, column nofreq

    Census |                    year
    Region |      2001       2002       2003       2004 |     Total
-----------+--------------------------------------------+----------
        NE |     17.36      17.36      17.36      17.36 |     17.36
   N Cntrl |     29.71      29.71      29.71      29.71 |     29.71
     South |     26.15      26.15      26.15      26.15 |     26.15
      West |     26.78      26.78      26.78      26.78 |     26.78
-----------+--------------------------------------------+----------
     Total |    100.00     100.00     100.00     100.00 |    100.00
Question: I would like to graph the above two-way tabulation as a stacked bar chart (one bar per year, with segments for each region's percentage) using Stata.

The code I tried so far is
Code:
. graph hbar region, over(year) stack percent
which doesn't work. Any suggestions would be greatly appreciated.
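
One approach that may work is to build an indicator per region and stack their means by year; a sketch (the legend labels are assumed to match the order of the generated indicators):

Code:
* one 0/1 indicator per region, then stacked percentage bars by year
tabulate region, generate(reg_)
graph bar (mean) reg_1 reg_2 reg_3 reg_4, over(year) stack percentages ///
    legend(order(1 "NE" 2 "N Cntrl" 3 "South" 4 "West"))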

Thanks

Interpretation of Chow test

Hi guys, I am new to Stata and also a non-native English speaker. I have been learning Stata on my own by watching various videos on YouTube and reading posts on different websites. Currently I am stuck on how to interpret the Chow test. I have read different posts, including https://www.stata.com/support/faqs/s...how-statistic/ and https://www.stata.com/support/faqs/s...cs/chow-tests/ , but I am failing to understand the interpretation of the results.

For example, after calculating a Chow test, another author found this: the Chow test is F(k, N_1+N_2-2*k) = F(3,174), so the test statistic is F(3,174) = 5.0064466. From his final results I failed to understand whether the grouping into 2 groups is justified or not.

Perhaps I am failing to understand because of my poor level of English.

Please, can someone explain to me in simple English how to interpret the Chow test, or how to tell whether the data are justified to be separated into 2 groups?
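
For the quoted example, the p-value of the statistic can be computed directly in Stata; a quick sketch using the built-in Ftail() upper-tail function:

Code:
* p-value for the Chow statistic F(3,174) = 5.0064466
display Ftail(3, 174, 5.0064466)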

Split data frame by years


Hi everybody,

I am new to Stata. I want to split a dataset into several smaller ones. This looks like a very trivial question; however, I cannot find a solution from a web search.
My data look like this:

Stock   Year   Return
AAA     2001   0.1
ABC     2001   0.2
AAA     2002   0.15
ABC     2002   0.2
AAA     2003   0.12
ABC     2003   0.21
ABS     2003   0.3
AAA     2004   0.1
ABC     2004   0.2
ABS     2004   0.31
HSC     2004   0.4
AAA     2005   0.13
ABS     2005   0.2
ABC     2005   0.12
Now I want to split this data file (covering 5 years) into one file per year: 2001, 2002, 2003, 2004, 2005. How can I do it?
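
A minimal sketch of one way, assuming the variables are named as above and one .dta file per year in the working directory is wanted:

Code:
* save one file per year
forvalues y = 2001/2005 {
    preserve
    keep if Year == `y'
    save "returns_`y'.dta", replace
    restore
}
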
Thank you in advance.


Power calculation in Cohen's kappa

Can anyone please tell me the command for power calculation for Cohen's kappa?

Many thanks!

Error: Repeated time values within panel

Hello

I'm trying to run a pooled OLS regression for two years (1992 and 1999) and two regions (East and West).

These are survey data: I have 749 observations for the year 1992 and 880 observations for the year 1999.

My code looks like this:

Code:
sort region year
egen panel_id = group(region)
sort panel_id year
xtset panel_id year
As soon as I type "xtset panel_id year", I get the error "repeated time values within panel". I tried to find something about it, but I don't understand exactly where the problem lies.
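
For context on the error: xtset requires the time variable to uniquely identify observations within each panel, and with only two region panels many survey respondents share the same region-year pair. A quick diagnostic sketch:

Code:
* count observations sharing each panel-year combination
duplicates report panel_id year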

Data Envelopment Analysis Subscript invalid

Hello everyone,

I am currently working on an efficiency study. For this I am using the data envelopment analysis (dea) package.

With my first runs everything worked fine with simple commands like

Code:
dea Expenses = Revenue
But after a few runs I get, with exactly the same data, the following error:

      name:  dealog
       log:  C:\Users\chris\OneDrive\Desktop\Stata Uni\dea.log
  log type:  text
 opened on:  14 Jul 2019, 17:03:28

             rankdea():  3301  subscript invalid
               <istmt>:     -  function returned error
r(3301);



Could somebody please help me fix this problem? I have already deleted the ado-files and reinstalled the package, but the problem remains the same...

Thank you very much!

Best Regards
Christoph

Question regarding a loop.

Dear Statalist members,

I am trying to set up a loop, but so far my efforts have been fruitless. The loop should add, depending on the value in column "k", that number of lagged values of the variable sum_ed (I inserted the manual approach below for illustrative purposes). Since k can have a maximum value of 25, I would prefer a loop to work the problem out.



xtset c_id year, yearly
gen sum_ed = e+d
gen cum_ed = .
replace cum_ed = sum_ed if k == 0
replace cum_ed = sum_ed + L.sum_ed if k == 1
replace cum_ed = sum_ed + L.sum_ed + L2.sum_ed if k == 2
replace cum_ed = sum_ed + L.sum_ed + L2.sum_ed + L3.sum_ed if k == 3
replace cum_ed = sum_ed + L.sum_ed + L2.sum_ed + L3.sum_ed + L4.sum_ed if k == 4
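
For reference, a sketch of one way to generalize the manual approach, assuming the panel is xtset and k runs from 0 to 25 (missing lags propagate to missing, as in the manual version):

Code:
* start with the current value, then add the l-th lag wherever k is at least l
gen cum_ed = sum_ed
forvalues l = 1/25 {
    replace cum_ed = cum_ed + L`l'.sum_ed if k >= `l' & !missing(k)
}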

I would be very grateful for any tips on how to solve my problem!

Best regards

Jan

Merging the same dates into my data

Dear statalist members,

I'm trying to manage my data. My data look like this dataset:

clear all
input double id str6 V1 str6 V2 long V3 long V4 long V5
1 "A" "p" 34 20180103 20180430
1 "B" "p" 35 20160409 20160718
1 "B" "c" 34 20180203 20180530

end
gen mdate1 = mofd(daily(string(V4, "%8.0f"), "YMD"))
gen mdate2 = mofd(daily(string(V5, "%8.0f"), "YMD"))
gen length = mdate2 - mdate1 + 1
expand length
bys id mdate1 mdate2: gen mdate = mdate1[1] + _n-1

format mdate* %tm

list, sepby(id)


Because I don't want to have repetitive dates (see the mdate variable), I want to reconfigure my data like this:

clear all
input double id str6 mdate int A int B int D str6 A_V2 long A_V3 str6 B_V2 str6 B_V3
1 "2016m4" 0 1 0 "." 0 "p" 35
1 "2016m5" 0 1 0 "." 0 "p" 35
1 "2016m6" 0 1 0 "." 0 "p" 35
1 "2016m7" 0 1 0 "." 0 "p" 35
1 "2018m1" 1 0 0 "p" 34 "." .
1 "2018m2" 1 1 0 "p" 34 "c" 34
1 "2018m3" 1 1 1 "p" 34 "c" 34
1 "2018m4" 1 1 0 "p" 34 "c" 34
1 "2018m5" 0 1 0 "." 0 "c" 34
end


Thanks for your help

Why some control variables are dropped when I run a regression?

Dear Stata Users,

I have the following problem. When I run the regression as specified below, my variables "d" and "i" are dropped:

Code:
reghdfe y a b c d e f g h i j, ab( industry fyear) vce(cluster gvkey)
However, when I run the regression using the code below, nothing is dropped:

Code:
reg y a b c d e f g h i j i.industry i.fyear, vce(cluster gvkey)
I thought that both estimations should lead to the same results, but what I get is different. I attach a sample of the data to show what kind of data I have. The sample is pretty small; thus it may not be possible to run the reghdfe command on it. Can anyone explain to me what the problem is, please?
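
One hypothetical diagnostic, in case it helps: reghdfe drops regressors that are collinear with the absorbed fixed effects (and drops singleton groups before estimating), so it may be worth checking whether d and i vary within the absorbed categories; a sketch:

Code:
* does d take more than one value within each fiscal year?
bysort fyear (d): gen byte d_varies = d[1] != d[_N]
tabulate fyear d_varies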


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long gvkey double fyear float(g c i) byte industry float(y b f h j a e d)
1004 1996  6.198286 .14575116 0 10  .6421934 1.1670092   .02176793  .2216945            0      20.6   .001643579  .01068376
1004 1997   6.06051  .1543568 0 10  .6084191 1.3271438   .04309609  .3866963            0 15.558333  .0013579838 .009689922
1004 1998  5.986406  .2231196 0 10  .7718859 1.1737745   .04253913    .43845            0        15  .0010117558  .00956023
1004 1999  6.005518  .1931751 0 10  .9570713 1.1157874   .01383235  .6116424            0        15  .0008473565 .009451796
1004 2000  6.055693 .17841566 1 10  .9500433  .8534871   .06220395 .54848164            0        15   .000720369  .00929368
1004 2001  7.025525 .21986166 1 10  .9289029  .7305889  -.04746714  .7052776  -.023762355        15  .0007418485 .011210762
1004 2002  7.012822 .25137508 0 10 1.2834938  .9492987   .04890601  .6708933  -.027913507 15.166667   .000889821 .011160715
1004 2003  7.459666  .2385747 1 10  .9899191 1.0752404   .02122277  .4158287  -.020995585        17  .0011152189 .011415525
1004 2004  7.387214  .4020428 1 10   .778831 1.1470801  .071815275  .3976725  -.027012004 12.958334  .0009360462 .010504202
1004 2005  7.183852  .5648304 1 10  .6803353 1.1998214  -.05528591  .3535434  -.014141532    27.875  .0008638606 .011299435
1004 2006  7.089853  .4018006 1 10   .593267 1.1826457   -.0216986  .3613859   -.01301852    22.125  .0007991635 .011454754
1004 2007  6.990122  .3374212 1 10  .8935324  1.305088  .015853763  .7383251  -.009553527    18.125  .0007201883 .011682243
1004 2008  6.927704   .280715 1 10 1.0660111 1.0282017    .0473205  .5276916  -.017419824    15.875  .0006946016 .015698588
1004 2009  6.849563  .3665982 0 10  .9798111  .9495602   .11118314   .422771   -.01975028   14.3125  .0010070831 .015082956
1004 2010  5.396974  .2368699 0 10  .8884209 1.3133016   .07234841  .4521519  -.010943655 12.321428  .0009334664 .013927577
1004 2011  5.313925  .2032794 1 10 1.2088646 1.1682166   .05530053  .4200464   -.02513603     13.44  .0008223532  .01485884
1004 2012  5.310477 .23935854 0 10 1.0640327 1.0446383  .074192055  .3026769  -.019888625 10.583333  .0007734007  .01305483
1004 2013  5.245927 .17233406 0 10 1.0176708   .939043   .06542187  .3322792   -.01332121 10.954545    .00067975 .012626262
1004 2014  6.214704 .15596236 0 10  .8827152  .7834398 -.019549897  .3079313   -.02666667        11 .00056234695 .011890606
1013 2002  8.321442 .13910107 0  1  .6829544 .43603295  .024202904  .9282407  -.012847404 15.166667  .0009728841 .011160715
1013 2003   7.85847 .23123246 1  1   .472948  .7379975  .033997554  .7301185  -.016346673        15  .0010656086 .011415525
1013 2004    6.8807  .2401816 1  1  .5580432 1.0143559  .002390315  .5022993   -.00945312 13.158334    .00089606 .010504202
1013 2005  6.788796  .1923098 1  1  .5493866  1.490756   .04103354  .4548816  -.016677525 13.541667  .0008344221 .011299435
1013 2006  6.740957 .19932663 1  1  .6672375 1.0963907   .05674267  .3609815    -.0063299 14.166667   .000774268 .011454754
1013 2007  6.653339 .26965496 1  1  .5969584 1.0314378   .08799802  .3976329  .0015299184 11.458333  .0006815634 .011682243
1013 2008  7.576073 .26374537 1  1 1.1217898 1.1014975   .09950136  .6186197  -.007339927 10.083333   .000762744 .015698588
1013 2009 8.1798725  .2985752 0  1  .7493402  .6843587   .04159292  .6890941  -.009973207  8.666667   .001100561 .015082956
1013 2010  6.982082  .3852441 0  1  .6477264 1.1604294   .10166717  .3634293   -.01356392 10.818182  .0009095285 .013927577
1021 1997  9.846075  .6136998 0 30  .7580069 1.0142353   .27016488  .8539172            0 24.499075     .0013258 .009689922
1021 1998 11.488844  .3661151 0 30  .9612135  .9115766  -.03207253 1.1125025            0 22.320833  .0009909763  .00956023
1021 1999  11.71965  .6010197 0 30 1.1632849  .8696296  .017308826  1.247252            0   21.6625   .000833301 .009451796
1034 1996  7.196516  .2846989 0 13  .8226163  .9333861   .03909724  .3851183            0        25    .00148182  .01068376
1034 1997  6.271467  .3382001 0 11  .6689221 1.0290096   .05047709   .405334            0 24.916666  .0011417317 .009689922
1034 1998  6.100126  .3301407 0 11  .5701097  1.208472     .089663   .314932            0 26.666666  .0009192395  .00956023
1034 1999  5.913328  .6212426 0 11  .6749921 1.2114826    .0769449  .4647437   -.02947686 24.791666    .00075155 .009451796
1034 2000  5.501762  .7284945 1 11  .6373071 1.2298486  .028548626  .5461127            0 25.583334    .00070436  .00929368
1034 2001  6.611051  .2851371 1 11  .8949319 1.0823673  .074131526 .51649714   -.04148103    22.525  .0008433457 .011210762
1034 2002  6.608616 .26189217 0 11 1.2039704  1.262333   .06786588  .6128323  -.008750834      17.5  .0010115434 .011160715
1034 2003  5.277863 .27360022 1 11 1.0383857 1.0540502   .06835228  .4066382    .04112193   15.7125  .0010331325 .011415525
1034 2004   6.31967  .2160992 1 11  .9942163 1.0325257   .07992296 .42664465    .08064608   14.1875  .0008818614 .010504202
1034 2005  5.431157 .23490778 1 11  .7224447  .4133074   .12340344  .4061404   .029476717 17.229166  .0008279722 .011299435
1034 2006   5.58317 .16981913 1 11  .7466138 1.1810114  .026446624  .3685225    .06281013      17.8  .0007585493 .011454754
1034 2007  7.764644 .09624628 1 11  .8948778  1.104916   .05067841 .24631956    .05459006      12.2  .0006765461 .011682243
1036 1999  5.266032 .27062204 0 12  1.066985 1.0633367   .08517072  .3148709            0 11.051666    .00075155 .009451796
1038 1996  8.027528  .4926955 0  3  .8012618 1.1394268   .27732295 .39965925            0      17.5  .0017079517  .01068376
1038 1997 8.8081875  .2010124 0  3  .6956688  1.129667    .1264235  .5050235            0 19.083334   .001410056 .009689922
1038 1998  8.753667 .21488856 0  3  .7990577  1.212479   .08440398  .4985398            0    21.175  .0010612312  .00956023
1038 1999  8.698174 .17775317 0  3  .9518452 1.1365716   .09124143  .9601218            0 13.558333  .0008741743 .009451796
1038 2000  8.849781  .2652055 1  3  .8230169 1.0410123   .03655604  .7333021            0 14.083333  .0007307217  .00929368
1038 2001  8.462026  .1849345 1  3  .8629745  1.104301   .09610662  .5407371   -.01326407        15  .0007253578 .011210762
1038 2002  8.335834 .15700454 0  3  .9847959 1.3354917   .10064886 .35991305  -.005881217  8.958333  .0008666087 .011160715
1038 2003 8.3184805  .6012077 1  3  .8407105   .995116   .12286535 .26329944  -.001322904       8.5  .0010885568 .011415525
1045 1996  4.926919 .10391614 0 10  .8970879 1.0498521    .1388832 .27936104            0        10    .00148182  .01068376
1045 1997  4.883658 .10285097 0 10  .8098123 1.0460204   .14245987 .29136774            0  8.366667  .0011417317 .009689922
1045 1998  4.817433 .09687072 0 10  .8855592  1.034195   .15276118  .4392747            0  8.533334  .0009192395  .00956023
1045 1999  4.827485 .10309393 0 10   .887993  .9231971     .101511  .4327255            0      8.75    .00075155 .009451796
1045 2000  4.758995 .13574487 1 10 1.0486891 1.1112803   .12890786 .51182884            0  7.916667    .00070436  .00929368
1045 2001  5.552554  .1064688 1 10  1.062368  .9624423  .019494144  .4825249 -.0044456623  8.541667  .0008433457 .011210762
1045 2002  5.731862 .09763461 0 10  .9975878  .9122502 -.033829663    .80065   -.03555027  8.583333  .0010115434 .011160715
1045 2003  5.761844 .10436606 1 10  .9355487 1.0081508   .01985661  .9045359  -.026764406  7.208333  .0010331325 .011415525
1045 2004  5.785875 .12626468 1 10  .9246221 1.0690941   .02444596 .56725377   -.02307719    6.6875  .0008818614 .010504202
1045 2005   5.77018 .13546236 1 10  .8418692 1.1108608   .03558892 .48650345  -.033192065        13  .0008279722 .011299435
1045 2006  4.895624 .13846496 1 10  .7991756 1.0893685   .06573996  .5022422   -.04429576         6  .0007585493 .011454754
1045 2007   4.84094 .14339505 1 10  .9713714 1.0147587  .066392176  .5074719    .02345035 19.139166  .0006765461 .011682243
1045 2008  5.862623 .12202184 1 10    .80984  1.037998  -.04879073  1.030656   -.12619662 25.083334  .0008228951 .015698588
1045 2009  5.860704 .13366649 0 10  .8076021   .838046   .03694141  .8323458   -.10708389 22.916666  .0010542768 .015082956
1045 2010   5.83002 .12797599 0 10  .7931566 1.1131195   .04878528  .5005696   -.10981346       2.1  .0008844222 .013927577
1045 2011  5.820522 .14982237 1 10  .7674004 1.0835363   .02710459  .7161176   -.15113264         3   .000780789  .01485884
1045 2012  5.845049 .16866033 0 10  .7401564 1.0346766   .05363133  .1663352   -.12675457         3   .000721238  .01305483
1045 2013  5.431897  .1706839 0 10  .8193253 1.0747133  .028711187  .3460524   -.04806282    28.875  .0006052191 .012626262
1045 2014 3.1949916 .16441144 0 10  .5529742 1.5966607   .07285113  .3852571   -.10415572     44.74 .00051429373 .011890606
1050 1996 10.386128  .3504654 0 12  .4974858 1.1675164  .012724118   .922819            0 22.333334    .00148182  .01068376
1050 1997 10.910416  .3763798 0 12  .4415233  1.475528    .2239974  .8131528            0        23  .0011417317 .009689922
1050 1998 10.037476 .19709544 0 12  .4736617 1.8155668  -.05436573  .8309203            0 17.041666  .0009192395  .00956023
1050 1999  9.997805  .3888539 0 12  .8150489  .9044803  -.05408724  .8893306            0        13    .00075155 .009451796
1050 2000 10.700335   .320147 1 12  .9357916  3.764018    .0515145  .8429219            0        13    .00070436  .00929368
1050 2001 10.615652   .174635 1 12  .7076781 1.0131044    .0783956  .6977643   -.01295493 12.333333  .0008433457 .011210762
1050 2002 10.695136   .397152 0 12   .848557  .8668374   .06979068  .5690974  -.018531611 11.347222  .0010115434 .011160715
1050 2003  10.77963  .4065734 1 12  .8297303  .8641176  .034128156  .6510108  -.021723283    11.625  .0010331325 .011415525
1050 2004 10.739077  .5007769 1 12  .6155913 1.0177085   .04573067  .6675558  -.017494993 14.946428  .0008818614 .010504202
1050 2005  10.70977  .3997813 1 12  .4589422 1.1752299   .05952902  .6023861   -.01843823 15.583334  .0008279722 .011299435
1050 2006  9.488387  .3279868 1 12   .417674 1.6604187   -.0967366 .58406126   -.02025701 14.311508  .0007585493 .011454754
1050 2007  8.568949  .2759072 1 12  .4428369 1.7431644   .06263848  .6125492    -.0138499  13.83712  .0006765461 .011682243
1050 2008  8.584995  .2736984 1 12  1.089423  .9234466   .05259232  .7774599    -.0275211 15.966666  .0008228951 .015698588
1050 2009  9.834237  .4026526 0 12  .7584894  .6378677   .12408242  .7541596  -.026072374   16.8125  .0010542768 .015082956
1050 2010  8.860461  .6315836 0 12   .598528 1.0116343   .03790234 .47264585  -.021499913 10.672222  .0008844222 .013927577
1050 2011   7.42163  .3515422 1 12  .6786334  .9899717   .11683224 .43550175  -.029617494 14.316667   .000780789  .01485884
1050 2012  7.183993  .3270205 0 12   .468524  .9702569   .21209906   .356545  -.024568563 11.795834   .000721238  .01305483
1050 2013  6.542821  .3585493 0 12 .58932936 1.4610447    .2569604   .388982  -.002788808        20  .0006052191 .012626262
1050 2014  6.424712 .26572967 0 12  .6461024 1.3339803   .04666089    .34191  -.015978666        18 .00051429373 .011890606
1055 1996   8.52476 .06239302 0  2  .7634674  .8524424  -.13604979  .6584575            0 11.458333    .00148182  .01068376
1056 1996 10.043566 .11398792 0  1  .6455593 1.0457581   .06266682 .56957924            0 23.375164   .001615722  .01068376
1056 1997  8.977699 .07738095 0  1   .736703  1.268022   .10754105   .623451            0        30     .0013258 .009689922
1056 1998  8.427081 .13074987 0  1 .57096946 1.2604693    .1685812  .6989759            0        30  .0009909763  .00956023
1056 1999   8.31446 .13502447 0  1  .3869433 1.3217455   .09168339  .6170912            0        30   .000833301 .009451796
1056 2000   7.90638  .2022348 1  1  .1740207 1.1834453   .08861127 1.0825646            0      32.5   .000716845  .00929368
1056 2001  7.727344  .1712505 1  1 .45513785 1.2521676    .1898137 1.0419106 -.0004963707  40.83333  .0007525573 .011210762
1056 2002   8.60405   .090686 0  1   .655282  .8703911   .05194165  .7921726   .005906458        25   .000905575 .011160715
1056 2003  7.706922  .0442023 1  1  .6192387  1.439936    .0671471  .6056529    .01154209 21.333334  .0011168079 .011415525
1056 2004  7.315645 .29206565 1  1  .4653152 1.4192234   .07289423  .4830334    .02065141 17.416666  .0009240564 .010504202
end

Multiple variables in one matrix - to putexcel

Hello folks,

Been trying to figure this one out but keep getting roadblocks. Wondered if anyone else here has any suggestions.

Basically I am trying to send the results of multiple tabulated variables to Excel in a usable format (i.e. output the variable names, e(b), e(pct), e(cumpct), all in the same respective columns). See below for context.

I can get as far as getting Stata to compute all the tables in a similar format, which I currently copy-paste into Excel. This works, but it is clunky, slow, and error-prone. Is it possible to have Stata send this output into Excel all in one hit (i.e. starting from a particular cell)?

Thanks in advance

Code:
putexcel set "$RESULTS/Tables/Main Table.xlsx", sheet("Descriptives") modify

foreach var of varlist CVD_total  ///
sex edlevel09 social_class smokstat prev_dm_base_dmdrg /*dmdrg*/ antihyp lipid antidep  ///
fh_cvd fh_mi fh_cva fh_ca fh_dm {
    estpost tabulate `var' if incl_ex_prevCVD==1   //ALL (analytic sample)
    
    matrix freq = e(b)
    matrix pct = e(pct)

    matrix mtx_`var' = freq \ pct
    matrix mtx_`var'_long = mtx_`var''

    }
matrix dir

// FROM HERE SOMEHOW PUTEXCEL THE LARGE OUTPUT OF TABLES TO EXCEL IN ONE HIT (STARTING FROM A CERTAIN CELL (e.g. A1)
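
A hedged sketch of one way to write everything in one hit: stack the transposed matrices into a single matrix and send it to Excel with one putexcel call (the starting cell, the stacking order, and the shortened variable list are assumptions):

Code:
* stack the per-variable matrices, then write them from A1 in one call
matrix combined = mtx_CVD_total_long
foreach var of varlist sex edlevel09 social_class smokstat {   // extend as needed
    matrix combined = combined \ mtx_`var'_long
}
putexcel A1 = matrix(combined), names
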
Below is an example of the output I get in Stata, which I simply copy-paste into Excel; this keeps all the results aligned so I can work with them systematically.

[screenshot of the tabulated output omitted]


Bottom of the output:

[screenshot omitted]



I hope I am making sense.

Renaming variables into more accurate names

Hello, I want to rename a variable to something more easily understandable; however, I keep running into error 198. The problem follows below:

Code:
rename (lefthandside) (ln[H1/Ht]-2lnlnt)
syntax error
    In parsing newname, either nothing or something invalid precedes the dash.  The dash syntax is varname-varname, and the variable names
    cannot have wildcard characters in them.
r(198);
I thought I could just relabel my axis, since I want to create a scatterplot with ln[H1/Ht]-2lnlnt on the y-axis, but then I run into the same problem:

Code:
scatter lefthandside lnt, title("Log-t regression: all clubs") xtitle("lnt") ytitle ("lnH1/Ht-2lnlnt") scheme (economist)
"lnH1/Ht-2lnlnt invalid name
r(198);
How can I rename my variable lefthandside to ln[H1/Ht]-2lnlnt, or label the y-axis of my scatterplot with ln[H1/Ht]-2lnlnt?
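
A likely workaround: Stata variable names may contain only letters, digits, and underscores, so the expression cannot serve as a name, but it can go into a variable label and an axis title; a sketch:

Code:
* keep the legal variable name; put the math in the label and the axis title
label variable lefthandside "ln[H1/Ht]-2lnlnt"
scatter lefthandside lnt, title("Log-t regression: all clubs") ///
    xtitle("lnt") ytitle("ln[H1/Ht]-2lnlnt") scheme(economist)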

Reflow code in do file keyboard shortcut

I'm looking for a way to reflow code so that code extending beyond the page guide is automatically moved to the following line. Many editors, such as Atom and RStudio, have this feature. I know Stata has the option to wrap lines in a do-file that are not in view, but I'm looking for a slightly different solution that wraps lines based on the page guide. For example, in Atom you can press alt-cmd-q after selecting multiple lines, and the code will be automatically reflowed based on the vertical page guide you have set. Is there something similar in Stata? I find that reflowing text in this way makes it easier for me to read.

I'm anticipating answers that point out the ability to manually reflow long lines using three forward slashes, but I'd like to preemptively note that this is not what I'm looking for.

Also, I seem to be unable to change the page guide to a value greater than 99 in the do-file editor settings. When I set a value greater than 99 the application preferences window crashes. Is this a bug? Can someone else confirm this behavior?