Channel: Statalist

multilevel mixed effects, close-to-zero random-effects variance


I am experiencing some strange behavior with the -mixed- command: the random-effects standard deviation comes out on the order of 1e-9. My Stata version is 13.1. Only two levels are used in the analysis, and the mle option is employed. The residual standard deviation is as expected. Estimating the same model with -xtreg, re-, using the default GLS estimator, yields reasonable sigma_u and sigma_e estimates and a rho of about 0.25. My question: can the coefficient estimates from the -xtreg, re- model be plugged into -mixed- so that the starting point for maximum likelihood estimation is closer to a solution? Perhaps then I can avoid the degenerate random-effects variance. If not, I may have to turn to -ml- programming for a solution, and I am not looking forward to that exercise.

I would provide the output and data, but the dataset is large and proprietary. If there is interest, I can provide the output.
Thanks for your help.
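For reference, a minimal sketch of the starting-values idea I have in mind, assuming a two-level model with group variable id and that -mixed- in this release accepts the from() maximization option (check [ME] mixed; y, x1, x2, and id are placeholders):

Code:
* a sketch -- variable names are placeholders
xtreg y x1 x2, re mle
matrix b0 = e(b)                            // estimates to reuse as starting values
mixed y x1 x2 || id:, mle from(b0, skip)    // skip: match parameters by name, skip the rest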





Triple-difference regression: interpretation and t-test in a panel-data setting

Dear Statalisters,

I am trying to estimate the effect of a drop in capital gains taxes on trading volume. In Germany, capital gains on stocks bought before 01.01.2009 became tax-free after a one-year holding period; for stocks bought after 01.01.2009, this regulation was abolished.

I therefore use a panel dataset of about 200 IPOs issued between 2000 and 2018 with daily trading volumes, in Stata 14. I am running a random-effects regression with a triple difference, as three dummies define my effect:
(1) d_2009: 1 for every stock issued before 2009 (old regulation)
(2) e_m20_p20p: 1 in a 40-day window around IssueDate + 1 year (the effect period)
(3) cg_m20_p20p: 1 for stocks with capital gains that accrued in the effect period

(4) I also created the dummy combi__m20_p20p, which is 1 if all three dummies above are 1. I use it to run a t-test on whether the mean trading volume in the effect period differs from the other trading volumes.

So I have two control groups: stocks issued after 2009, and stocks with no capital gains accrued in the effect period.


This is my data, with panel id = id and time = DaysAfter. I use the log of trading volume, l_VC = log(VC), as the dependent variable, since VC is not normally distributed.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int(id DaysAfter) float(VC l_VC d_2009 e_m20_p20p cg_m20_p20p combi__m20_p20p)
1 1 20608.193 9.933444 0 0 0 0
1 2    4888.4  8.49462 0 0 0 0
1 3    2365.1 7.768576 0 0 0 0
1 4 1113.4003 7.015174 0 0 0 0
1 5 1038.7002 6.945725 0 0 0 0
end
And this is the regression result:

Code:
. xtreg l_VC  i.e_m20_p20p##i.cg_m20_p20p##i.d_2009, re

Random-effects GLS regression                   Number of obs     =     91,638
Group variable: id                              Number of groups  =        160

R-sq:                                           Obs per group:
     within  = 0.0003                                         min =         28
     between = 0.0841                                         avg =      572.7
     overall = 0.0573                                         max =        600

                                                Wald chi2(7)      =      41.57
corr(u_i, X)   = 0 (assumed)                    Prob > chi2       =     0.0000

-----------------------------------------------------------------------------------------------
                         l_VC |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------+----------------------------------------------------------------
                 1.e_m20_p20p |  -.0619384   .0445417    -1.39   0.164    -.1492386    .0253618
                1.cg_m20_p20p |   1.477089   .4878709     3.03   0.002     .5208795    2.433298
                              |
       e_m20_p20p#cg_m20_p20p |
                         1 1  |   .1816618   .0571906     3.18   0.001     .0695703    .2937532
                              |
                     1.d_2009 |   .2777146   .4466912     0.62   0.534     -.597784    1.153213
                              |
            e_m20_p20p#d_2009 |
                         1 1  |   .1966056   .0574507     3.42   0.001     .0840042    .3092069
                              |
           cg_m20_p20p#d_2009 |
                         1 1  |  -.5789455   .6348865    -0.91   0.362      -1.8233    .6654092
                              |
e_m20_p20p#cg_m20_p20p#d_2009 |
                       1 1 1  |   -.318946   .0768454    -4.15   0.000    -.4695602   -.1683318
                              |
                        _cons |   2.194597   .3709917     5.92   0.000     1.467466    2.921727
------------------------------+----------------------------------------------------------------
                      sigma_u |  1.9265326
                      sigma_e |  1.1342814
                          rho |  .74258446   (fraction of variance due to u_i)
-----------------------------------------------------------------------------------------------

. ttest l_VC, by(combi__m20_p20p)

Two-sample t test with equal variances
------------------------------------------------------------------------------
   Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
       0 |  90,613    2.936599    .0075864    2.283643     2.92173    2.951468
       1 |   1,025    3.356913    .0691598    2.214195    3.221202    3.492624
---------+--------------------------------------------------------------------
combined |  91,638      2.9413    .0075427    2.283294    2.926517    2.956084
---------+--------------------------------------------------------------------
    diff |           -.4203143    .0717073               -.5608599   -.2797687
------------------------------------------------------------------------------
    diff = mean(0) - mean(1)                                      t =  -5.8615
Ho: diff = 0                                     degrees of freedom =    91636

    Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
 Pr(T < t) = 0.0000         Pr(|T| > |t|) = 0.0000          Pr(T > t) = 1.0000

So, it would be great if you could help me or give me some suggestions on the following questions:

(a) I am trying to calculate the effect of the drop in capital gains taxes on trading volume. As this is a triple difference-in-differences, I would suggest it is the sum of (1.e_m20_p20p + 1.cg_m20_p20p + 1.d_2009 + e_m20_p20p#cg_m20_p20p#d_2009), isn't it? I am trying to replicate the -margins, dydx()- approach from https://www.statalist.org/forums/for...ation-in-stata , but I cannot manage it, as I don't know what to put in dydx() for a triple difference.
My interpretation so far: the isolated effect of capital gains accrued during the effect period, for stocks issued before 2009, is a significant growth in trading volume of approximately (-6.1% + 147.7% + 27.7% - 31.8%).
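One possibility I have been experimenting with (a sketch; I am not sure this is the right specification for a triple difference) is to ask for the effect of the event window within each cg_m20_p20p x d_2009 cell, so that the triple difference is the difference-in-differences of those four effects:

Code:
* a sketch -- run directly after the xtreg above
margins cg_m20_p20p#d_2009, dydx(e_m20_p20p)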

(b) Is it appropriate to use the logarithm of trading volume (VC) in the regression and also in the t-test?

(c) Is the t-test an appropriate way to test whether the mean of my target group is significantly different from the mean of my control group, given that this is a panel dataset?


I am very thankful to everyone helping me and giving suggestions,

Best wishes,

Phill


Common method bias in panel data?

A reviewer asked whether I have checked for common method bias in my data. From my understanding, common method bias only applies to survey data (not to secondary panel data), correct? If not, how should I run such a test with secondary data? Thank you all.


stcurve with dummy variables

Hi all,

I'm using -stcurve- after a Cox model, but I want to plot the curves not at the 2 levels of a binary variable like sex, but at the 3 levels of a categorical variable (tertiles).

My variable is bmicat, entered as i.bmicat.

This doesn't work: stcurve, survival at1(bmicat=1) at2(bmicat=2) at3(bmicat=3)

ERROR: at() variable bmicat not in the estimated model

I don't understand why, because my model is

xi: stcox i.dyslipidemia i.bmicat etc...


In the Cox model, the categorical variable appears as _Ibmicat_2 and _Ibmicat_3.

What is the correct syntax for the command?
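From what I can tell, at() only accepts variables that literally appear in the estimated model, and with -xi- those are the generated indicators. A sketch of two possible workarounds, simplified to the bmicat terms:

Code:
* option 1: factor variables instead of xi (Stata 11+), so at() can use bmicat
stcox i.dyslipidemia i.bmicat
stcurve, survival at1(bmicat=1) at2(bmicat=2) at3(bmicat=3)

* option 2: keep xi and refer to the generated indicator variables
xi: stcox i.dyslipidemia i.bmicat
stcurve, survival at1(_Ibmicat_2=0 _Ibmicat_3=0) ///
                  at2(_Ibmicat_2=1 _Ibmicat_3=0) ///
                  at3(_Ibmicat_2=0 _Ibmicat_3=1)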

Thank you very much!

Javier




Creating a variable that indicates a change - loop

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str1 group int year str5(name1 name2 name3)
"A" 2007 "Ben" "Eli"   "Kate" 
"A" 2008 "Ben" "Kate"  "Eli"  
"A" 2009 "Ben" "Kate"  "Adel" 
"B" 2007 "Mia" "Sue"   "Guy"  
"B" 2008 "Mia" "Guy"   "Suzie"
"B" 2009 "Guy" "Lizzy" "Alex" 
end

I have groups (in this example, A and B), and each group has the same number of members. I consider the composition of each group over 2007-2009. I want to generate a variable change = 1 if there is AT LEAST ONE name change in a given year relative to the previous year. For 2007 I assume the variable is missing (.).
I want to get this result:

Code:
"A" 2007 "Ben" "Eli"   "Kate" .
"A" 2008 "Ben" "Kate"  "Eli"  0
"A" 2009 "Ben" "Kate"  "Adel" 1
"B" 2007 "Mia" "Sue"   "Guy"  .
"B" 2008 "Mia" "Guy"   "Suzie"1
"B" 2009 "Guy" "Lizzy" "Alex" 1

I have been trying to solve my problem using a loop. The most difficult case is when, in the next year, the names are the same but listed in a different order.
I would be grateful for advice.
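A sketch of one approach: sort the names alphabetically within each row first (a three-element bubble sort), so ordering differences no longer matter, then compare the combined roster with the previous year's:

Code:
* a sketch -- assumes exactly three name variables per row
gen a = name1
gen b = name2
gen c = name3
foreach pair in "a b" "b c" "a b" {        // bubble sort of three strings
    tokenize `pair'
    gen tmp = `1' if `1' > `2'
    replace `1' = `2' if tmp != ""
    replace `2' = tmp if tmp != ""
    drop tmp
}
egen roster = concat(a b c), punct("|")    // order-free roster of the group
bysort group (year): gen change = roster != roster[_n-1] if _n > 1
drop a b c roster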

Looping over correlation function

I am trying to produce a dataset that includes the autocorrelations of a variable (ret) across groups (portfolio). I am able to store the first autocorrelation, corr(ret, lag_ret1), for each portfolio, but I cannot figure out how to loop through and save the 2nd, corr(ret, lag_ret2), and 3rd, corr(ret, lag_ret3), autocorrelations for each portfolio.
Ultimately, I want to sum all three correlation coefficients, so if the matrix methodology is not the most efficient way, I am open to other options.

Code:
matrix corre = J(3,3,0)
matrix list corre

forvalues port = 1/3 {
    forvalues lag = 1/3 {
        correlate ret lag_ret`lag' if portfolio == `port'
        * r(C) is always 2 x 2 here (ret and the current lag), so the
        * correlation sits at [2,1] for every lag -- c[`lag'+1,1] runs
        * out of range once `lag' > 1
        matrix c = r(C)
        matrix corre[`lag', `port'] = c[2,1]
    }
}
matrix list corre
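For the final step, premultiplying by a row vector of ones collapses the columns, giving the sum of the three autocorrelations for each portfolio (a sketch):

Code:
* column sums of corre: one total per portfolio
matrix sums = J(1, 3, 1) * corre
matrix list sums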
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float portfolio double ret float(lag_ret1 lag_ret2 lag_ret3)
1  .0023099415536437717 -.00009128346   -.001888566  -.0013676324
1 -.0005077351815998554   .0023099415 -.00009128346   -.001888566
1 -.0011798939152530073  -.0005077352   .0023099415 -.00009128346
1  .0018680473880964184   -.001179894  -.0005077352   .0023099415
1  .0052912740931999915   .0018680474   -.001179894  -.0005077352
1  -.002362792730488092    .005291274   .0018680474   -.001179894
1  -.004439577525402656   -.002362793    .005291274   .0018680474
1   .011633304806676608  -.0044395775   -.002362793    .005291274
2   .010083204463551768    .009573128    -.00733168    -.00981641
2  .0029652350642756235    .010083204    .009573128    -.00733168
2   .002781096110813135    .002965235    .010083204    .009573128
2 -.0004845770619188746    .002781096    .002965235    .010083204
2   .009512511099169611  -.0004845771    .002781096    .002965235
2 -.0031710873696614394     .00951251  -.0004845771    .002781096
2  .0022258799652889617  -.0031710875     .00951251  -.0004845771
2   .010312552598198174     .00222588  -.0031710875     .00951251
3  .0014073778989679012    .005959528   -.007621187  -.0040159286
3   .014511975238374511    .001407378    .005959528   -.007621187
3  -.014570034809472146    .014511975    .001407378    .005959528
3   .011700462176626132   -.014570035    .014511975    .001407378
3 -.0024378968148746276    .011700463   -.014570035    .014511975
3 -.0025385521429901322   -.002437897    .011700463   -.014570035
3  -.004463553329410612   -.002538552   -.002437897    .011700463
3    .01900731705770385  -.0044635534   -.002538552   -.002437897
end

Logit Regression Model / Plotting Interactions

Hi everyone,

I am working on my final paper and need some guidance.

I want to look at party identification with a specific party_x over the period 2014 to 2017. My dependent variable is identification with party_x (binary coding: 0 = identification with another party, 1 = identification with party_x). My independent variables are theoretically grounded and relate to right-wing extremism.

I would like to answer two questions:

First, I want to look at the general trend regarding my indicators and party_x identification. I plan to do so with:

logit party_x independent_variable_1 independent_variable_2 independent_variable_n, or

My idea is to look at the odds ratios to get information about the general trend over that time period with regard to the independent variables.


Second, I want to check for an interaction by looking at just one independent variable and its interaction with a binary time dummy (1 = 2016/2017 and 0 = 2014/2015):

logit party_x independent_variable_1##time_dummy independent_variable_2 independent_variable_n

Then I would look at -margins independent_variable_1, at(time_dummy=(0 1))- for the predictions and plot them with -marginsplot-.

The idea behind this is to visualize the different potential interactions via the slopes of the predictions.
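For concreteness, a minimal sketch of that plan (all names are placeholders; I assume independent_variable_1 is continuous, hence the c. prefix, and the at() range would need to match its actual scale):

Code:
* a sketch -- placeholder names and an assumed 0-10 scale
logit party_x c.independent_variable_1##i.time_dummy independent_variable_2
margins time_dummy, at(independent_variable_1 = (0(1)10))
marginsplot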

Is that a suitable way to achieve my goals?

I am really thankful for any help or advice.

Best

ICD-10 to ICD-9 mapping using GEMs

Hi Stata users,
Does anybody have experience using GEM files in Stata to map backwards from ICD-10 to ICD-9 codes? I have downloaded the GEM file from the link below, but I am not sure how to do the mapping accurately.
https://data.nber.org/data/icd9-icd-...e-mapping.html
Can you please share any tips?
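In case a concrete starting point helps: a minimal sketch of the usual crosswalk merge, assuming the GEM file has been saved as gem.dta with string variables icd10 and icd9, one row per mapping (the file and variable names are assumptions, and one ICD-10 code can map to several ICD-9 codes):

Code:
* a sketch -- gem.dta, claims.dta, and the variable names are assumptions
use claims, clear                          // your data, with an icd10 variable
joinby icd10 using gem, unmatched(master)  // keeps every matching ICD-9 row
* _merge == 3 flags ICD-10 codes that found at least one ICD-9 match;
* joinby duplicates a record once per matching ICD-9 code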

Loop over multiple arrays via foreach

Hello everyone!

I have been trying to fix my problem for a few hours now and cannot come up with an adequate solution. I need your help/advice!

I would like to run the Stata code shown below. My problem is that it only works for one observation per ID (example1 @ 1999 and example2 @ 2005), because each -local- retains only the last value assigned to it.

How can I make this work with multiple entries per ID? Any ideas?

Code:
tsset ID YEAR

* a local keeps only its last assignment, so store ALL years for an
* ID in a single local (a sketch; the IDs and years are examples)
local y100005 "1999 2000"
local y100006 "1999 2005"

foreach i in 100005 100006 {
    foreach t of local y`i' {
        replace TEST_variable = 10 if ID == "`i'" & YEAR == `t'
    }
}
Thank you for your help!

Konstantin

-margins- after -xtlogit,fe-

Dear Statalist

This is a question about interpreting the results from a panel data fixed-effects logistic regression. The outcome variable is binary & the main regressor is categorical with 4 levels.

As the estimated odds ratios change depending on which base level is selected, in a cross-sectional setting I prefer to use -margins- and interpret the results in terms of average adjusted predictions (which are unaffected by the base level). However, when using -xtlogit-, the average adjusted predictions appear to change depending on the base level.

Question: is this the expected behaviour for -margins- after -xtlogit-? If so, would it be preferable to interpret the results in terms of odds ratio instead of probabilities in a panel-data setting?

Code:
use http://www.stata-press.com/data/r16/union.dta, clear

xtset idcode year, yearly

* Discretize the -grade- variable into 4 levels for illustration purpose
egen grade_category = cut(grade), at(0,7,13,16,19) icodes
label define grade_category 0 "primary" 1 "secondary" 2 "undergraduate" 3 "postgraduate"
label values grade_category grade_category
If we treat the data as cross-sectional, the results from -margins- are unchanged by the base level of the regressor.

Code:
quietly logit union i.year ib(0).grade_category

margins grade_category

Predictive margins                              Number of obs     =     26,200
Model VCE    : OIM

Expression   : Pr(union), predict()

--------------------------------------------------------------------------------
               |            Delta-method
               |     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
---------------+----------------------------------------------------------------
grade_category |
      primary  |   .2349991   .0276247     8.51   0.000     .1808556    .2891425
    secondary  |   .2073589   .0031732    65.35   0.000     .2011395    .2135782
undergraduate  |   .1943004   .0058311    33.32   0.000     .1828718    .2057291
 postgraduate  |   .2937748   .0064781    45.35   0.000      .281078    .3064717
--------------------------------------------------------------------------------

quietly logit union i.year ib(1).grade_category
margins grade_category
*(output omitted)

quietly logit union i.year ib(2).grade_category
margins grade_category
*(output omitted)

quietly logit union i.year ib(3).grade_category
margins grade_category
*(output omitted)
This is not the case, however, with panel-data -xtlogit-
Code:
. quietly xtlogit union i.year ib(0).grade_category, fe

. margins grade_category

Predictive margins                              Number of obs     =     12,035
Model VCE    : OIM

Expression   : Pr(union|fixed effect is 0), predict(pu0)

--------------------------------------------------------------------------------
               |            Delta-method
               |     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
---------------+----------------------------------------------------------------
grade_category |
      primary  |   .5184114   .0215869    24.02   0.000     .4761018    .5607209
    secondary  |   .5703154   .2774507     2.06   0.040      .026522    1.114109
undergraduate  |   .5507514   .2823345     1.95   0.051    -.0026142    1.104117
 postgraduate  |   .6687735   .2569906     2.60   0.009     .1650813    1.172466
--------------------------------------------------------------------------------

. quietly xtlogit union i.year ib(1).grade_category, fe

. margins grade_category

Predictive margins                              Number of obs     =     12,035
Model VCE    : OIM

Expression   : Pr(union|fixed effect is 0), predict(pu0)

--------------------------------------------------------------------------------
               |            Delta-method
               |     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
---------------+----------------------------------------------------------------
grade_category |
      primary  |   .4661257   .2823028     1.65   0.099    -.0871777    1.019429
    secondary  |   .5184114   .0215869    24.02   0.000     .4761018    .5607209
undergraduate  |   .4985708   .0396701    12.57   0.000     .4208188    .5763228
 postgraduate  |   .6207837   .0584854    10.61   0.000     .5061544     .735413
--------------------------------------------------------------------------------

*and so on
Thanks,
Junran

Dummy variables for neonatal mortality, infant mortality, and child mortality

Hi, I am working with Demographic and Health Survey (DHS) data. I have to compute neonatal mortality (death during the first 28 days of life, days 0-27), infant mortality (death within the first 12 months), and child mortality (death within the first 60 months).

I have the following variables:

Code:
              storage   display    value
variable name   type    format     label      variable label
--------------------------------------------------------------------------------
b1              byte    %8.0g                 month of birth
b2              int     %8.0g                 year of birth
b3              int     %8.0g                 date of birth (cmc)
b4              byte    %8.0g      LABL       sex of child
b5              byte    %8.0g      LABN       child is alive
b6              int     %8.0g      b6         age at death
b7              int     %8.0g                 age at death (months-imputed)
b8              byte    %8.0g                 current age of child
b9              byte    %8.0g      b9         child lives with whom
b10             byte    %8.0g      LABB       completeness of information
b11             int     %8.0g                 preceding birth interval
b12             int     %8.0g                 succeeding birth interval
b13             byte    %8.0g      b13        flag for age at death
b15             byte    %8.0g      LABN       live birth between births
b16             byte    %8.0g      b16        child's line number in household


Could anyone please show me how to compute neonatal mortality, infant mortality, and child mortality as dummy variables taking the value 1 if the child died within the specified period and 0 otherwise?
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte b1 int(b2 b3) byte(b4 b5) int(b6 b7) byte(b8 b9 b10) int(b11 b12) byte(b13 b15 b16)
10 1994 1138 1 1   . . 12 0 1  11   . . 0 5
11 1993 1127 1 0 100 0  . . 5  62  11 0 0 .
 9 1988 1065 2 1   . . 18 0 1  25  62 . 0 4
 8 1986 1040 2 1   . . 20 0 1  16  25 . 0 3
 4 1985 1024 2 1   . . 21 4 1  10  16 . 0 0
 6 1984 1014 1 0 100 0  . . 5  12  10 0 0 .
 6 1983 1002 1 0 100 0  . . 5  13  12 0 0 .
 5 1982  989 2 1   . . 24 4 1   .  13 . 0 0
 9 2001 1221 2 1   . .  5 0 1 105   . . 0 2
12 1992 1116 1 0 105 0  . . 1 156 105 0 0 .
end
label values b4 LABL
label def LABL 1 "male", modify
label def LABL 2 "female", modify
label values b5 LABN
label values b15 LABN
label def LABN 0 "no", modify
label def LABN 1 "yes", modify
label values b6 b6
label def b6 100 "0 days", modify
label values b9 b9
label def b9 0 "respondent", modify
label def b9 4 "lives elsewhere", modify
label values b10 LABB
label def LABB 1 "month and year", modify
label def LABB 5 "year - a, m imp", modify
label values b13 b13
label def b13 0 "no flag", modify
label values b16 b16
label def b16 0 "not listed in household", modify
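For discussion, a minimal sketch using the codings visible above; please verify them against your DHS recode manual (b5 = child is alive; b6 = age at death, with 100 meaning "0 days" so 100-127 covers days 0-27; b7 = imputed age at death in months). Note that children who are still alive but younger than a cutoff are right-censored, which this simple 0/1 coding ignores:

Code:
* a sketch -- verify the codings against the DHS recode manual
gen byte neonatal = b5 == 0 & inrange(b6, 100, 127)  // died at 0-27 days
gen byte infant   = b5 == 0 & b7 < 12                // died within 12 months
gen byte child    = b5 == 0 & b7 < 60                // died within 60 months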

SEM model group analysis not concave

Hello there!

I am trying to use Stata to check my SEM model for measurement invariance across two different groups. The issue I am facing is that my model does not converge when I apply NO constraints to it. However, as soon as I apply constraints to the measurement intercepts, and thereafter to other parameters (e.g. measurement coefficients, structural coefficients), the model converges. Essentially, I would like to estimate the model without any constraints and then check the goodness of fit when applying the constraints, but I can't get it to run. What is your idea about that? I sense that the variability in the measurement intercepts is not "helping" Stata to fit the model!?


Thanks in advance for your response.
George

Destring issue with many decimals

Hi,

I am fairly new to Stata, and have searched a lot on my issue but have not yet found a solution.

My issue is with -destring-ing my variables. I have imported an Excel sheet, and some of my observations (due to currency conversion) contain a lot of decimals, e.g.:

Code:
FirmID  fias
1       1604.26972699244
2       1454.1477388612

I then use the following -destring- command (my missing data are coded as "n.a."):

Code:
destring fias, gen(fias_num) ignore("n.a.")
I get the following result:

Code:
FirmID  fias_num
1       1.60e+14
2       1.45e+13

Hence, that is a completely different number!

Does anyone have a solution for this?
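A note on the likely cause, for what it is worth: -destring-'s ignore() drops the listed characters individually, not the literal string, so ignore("n.a.") also strips every decimal point — which matches the result shown above. A sketch of a workaround:

Code:
* a sketch: recode "n.a." to missing first, then destring without ignore()
replace fias = "" if fias == "n.a."
destring fias, gen(fias_num)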

Fingers crossed
Rasmus

How to filter out string observations with common characters?

Hi Statalisters,

I am new to Stata and am working on my data. I want to filter out string observations sharing common characters. For example, as the following listing shows, the string variable is InstituteName and it contains many observations. The five observations below share the common substring "Government", so I want to filter them out by that substring. What command should I use to deal with this problem?

Code:
InstituteName
US Government
US Government
U.S Government
U.S Government
U S Government
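A minimal sketch of the usual substring test, using strpos() (regexm() would work similarly):

Code:
* a sketch: flag, inspect, or drop rows containing "Government"
gen byte is_gov = strpos(InstituteName, "Government") > 0
list InstituteName if is_gov
drop if is_gov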

Sample size calculation for panel data / use of Stata PSS

Good morning all,
I need assistance in calculating the minimum sample size for a piece of hospital-based research. I have read through the Stata documentation for power and sample size (release 15) and I still have no sense of direction about how to do it.
The aim is to determine the appropriate time for the estimation of haemoglobin concentration after blood transfusion in children aged 1-17 years.
The objectives are: to determine the time to equilibration of haemoglobin concentration after a blood transfusion; to determine the relationship between time to equilibration and recipient variables such as age, sex, body mass index, and pre-transfusion Hb concentration; and to determine the relationship between donor Hb concentration, duration of blood storage, and equilibration time.
I wish to know what level of change occurs in haemoglobin concentration at 1, 6, 12, and 24 hours after blood transfusion.
I suppose this is a time-series or panel (repeated-measures) design, and I need to show how I will derive the minimum sample size.
I could not decide which part of the power and sample size features (Stata 16) to use, nor how to calculate it manually, and would appreciate help.
Thank you
Ezeanosike Obum
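If I am reading the [PSS] manual correctly, the closest built-in entry is -power repeated- (power analysis for repeated-measures ANOVA). A sketch with purely illustrative numbers — four measurement times, and the means, within-subject correlation, and error variance are placeholders, not recommendations:

Code:
* a sketch -- all numbers are placeholders, to be replaced by pilot or
* literature values for Hb (g/dL) at 1, 6, 12, and 24 hours
power repeated 10 10.5 11 11.2, corr(0.5) varerror(1) power(0.9)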

Manipulating forest plot

Hello,

I am working on my first meta-analysis, in which I am assessing the incidence of traumatic brain injury in LMICs. To pool the rate, I needed the incidence rate reported by each study together with its lower and upper bounds. After using the -metan- command, this is how the forest plot looks (attached below).
How do I zoom in on the plot so that the confidence interval lines of each study can be seen?
What command can I use to set the weights of the studies? I.e., I would want population-based studies/cohort studies to be given more weight.

Regards,
Gideon.
[attached: forest plot]

Creating a sum variable of the prior 12 months

Dear Users,
For a current project I need to create a new variable (call it X), which is the sum over the previous 12 months.
I need to compute the total loan facility amount originated in the prior 12 months per lender. Lender is the lender, Mdate is the monthly date, and Money is the loan facility amount reported at that date. The full dataset has about 200,000 observations covering 2000-2018. I have tried a few approaches, but none of them produced a logical, or even close to logical, number.

My question: could you help me with code to generate this variable in Stata?

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str74 Lender float(Mdate Money)
"Bank of America" 480 126420000
"General Electric Capital Corp" 480 1.20e+08
"ABN AMRO Bank NV [RBS]" 480 166666672
"Bank of America" 480 133333336
"Bank of New York" 480 7.50e+07
"Guaranty Federal Bank" 480 100625000
"Chase Manhattan Bank" 480 5.45e+08
"Chase Manhattan Bank" 480 6.00e+07
"Bank of America" 480 7000000
"Heller Financial Inc" 480 6500000
end
format %tm Mdate

The method is based on the paper by Cai et al. (2018):*
*Cai, Jian, Frederik Eidam, Anthony Saunders, and Sascha Steffen (2017). Syndication, Interconnectedness, and Systemic Risk. Journal of Financial Stability 34. doi:10.1016/j.jfs.2017.12.005.

(I'm using the most recent version of Stata).
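A minimal sketch of one common approach, using the community-contributed -rangestat- command; the interval runs from 12 months before the current month up to the month before it:

Code:
* a sketch -- assumes one observation per loan facility
ssc install rangestat                  // community-contributed, run once
egen lender_id = group(Lender)
rangestat (sum) X = Money, interval(Mdate -12 -1) by(lender_id)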