Channel: Statalist

Generating data according to a pattern

Hello list,
I am trying to generate data according to the following pattern:

Obs A B C
----------------
1 0 0 0
2 0 0 1
3 0 1 0
4 0 1 1
5 1 0 0
6 1 0 1
7 1 1 0
8 1 1 1

Any help would be appreciated.
Thanks
André
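A standard way to build this pattern is to treat the observation number as a binary counter; a minimal sketch (the variable names A, B, C come from the post):

```stata
* Generate all 2^3 = 8 combinations of three binary variables
clear
set obs 8
generate A = floor((_n - 1) / 4)
generate B = mod(floor((_n - 1) / 2), 2)
generate C = mod(_n - 1, 2)
list A B C, noobs sepby(A)
```

An alternative is -fillin-: create one observation per value of each variable and let `fillin A B C` produce the full cross of combinations.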

Marginsplots for different values of x

Hello,
First post, so apologies if something similar has been asked before.
I'm investigating the relationship between intergroup ethnic contact at the workplace and tolerance, with years of education as a moderator.

After recoding, my X (intergroup contact) has 3 levels: no contact (baseline), some contact, and a lot of contact. I'd like to produce two marginsplots displaying the effect of contact interacted with years of education: one showing the effect of some contact at different levels of education, and another showing the effect of a lot of contact at different levels of education.

My regression looks like this:
xtreg tolerance i.RCimgclg##c.eduyrs i.gndr agea i.empl i.domicil hincfel lrscale, fe robust

So far my marginsplot looks like the picture attached:

https://imgur.com/a/2nDpJbc

Thank you in advance
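A sketch of the kind of -margins- call that produces such plots, assuming RCimgclg is coded 0/1/2 and that an education range of 8 to 20 years is plausible for these data:

```stata
* After the fixed-effects fit, average marginal effects of each contact
* level (vs. the no-contact base) at several values of education
xtreg tolerance i.RCimgclg##c.eduyrs i.gndr agea i.empl i.domicil ///
    hincfel lrscale, fe robust
margins, dydx(RCimgclg) at(eduyrs = (8(2)20))
marginsplot, yline(0)
```

This yields one curve per non-base contact level; restricting -dydx()- to a single level gives the two separate plots described in the post.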

Testing of adjusted Kaplan-Meier survival

How does one test adjusted Kaplan-Meier survival curves?
I am able to create an adjusted KM graph, but I don't know how to test the adjusted curves.
Thanks
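For reference, adjusted survival curves are usually compared through the model used to adjust them rather than through the curves themselves; a sketch (time, died, group, age, sex, and agecat are placeholder names):

```stata
* Declare the survival data, then test the group effect adjusted
* for covariates via a Cox model, or use a stratified log-rank test
stset time, failure(died)
stcox i.group age i.sex          // Wald test of group, covariate-adjusted
sts test group, strata(agecat)   // log-rank test stratified on age groups
```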

'End Duplicates' error in Mata programming


Hello, everyone. I'm Zhang.

I would like to ask a question about an 'End Duplicates' error in my Mata programming.

My program computes some matrices. Given Stata's computational limits, I need to use Mata. I wrote a do-file for the Mata-Stata interface. I also want to wrap this in an ado-program that runs such do-files. The problem is that, in my program, the ado-program and the Mata block both terminate with the same `end`, so Stata reports an 'End Duplicates' error.

So I would like to ask two questions.
First, is it wrong to write it this way; does Stata not allow this?
Second, if my idea is reasonable, how should I code the interface between the Mata program and the ado-program?


My codes are:
Code:
/*Define program*/

program define MYPROGRAM
version 14.0

/*Define syntax*/

syntax using/, [name(string) ... ]   // plus some other options

use "`using'", clear   // import data
confirm name `name'

     /*Create and compute matrix using Mata*/

     mata:
      ...
      create a matrix named MATRIX
      ...

     /*End Mata*/
     end

/*End program*/
end   // two 'end's here: Stata reports 'end duplicates' and 'end unrecognized'
Your answer is very important to me.
Thank you very much for your answer!
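One common workaround is to define the Mata code as a function outside the program, so the ado-program body contains only a one-line `mata:` call and each `end` closes exactly one block; a sketch (MYPROGRAM, makematrix, and the placeholder computation are illustrative):

```stata
version 14.0

mata:
void makematrix()
{
    real matrix MATRIX
    MATRIX = J(3, 3, 0)          // placeholder for the real computation
    st_matrix("MATRIX", MATRIX)  // hand the result back to Stata
}
end

program define MYPROGRAM
    syntax using/, [name(string)]
    use "`using'", clear
    mata: makematrix()           // one-line call needs no closing 'end'
    matrix list MATRIX
end
```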

Mediation analysis with Stata and control variables?

I've seen many ways to do a mediation analysis, Baron and Kenny's (1986) steps being the most popular. However, I see that they run the regressions with just the three variables of interest (reg DV IV; reg Mediator IV; reg DV IV Mediator). My first question is: is it necessary to also include the control variables in the analysis? And, if so, how can it be done in Stata? I've read that SEM is a good way, but it is normally done without control variables.
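If controls are to be included, the usual practice is simply to add them to each of the three Baron-Kenny equations; a sketch (DV, IV, med, and the controls c1 c2 are placeholder names):

```stata
* Baron & Kenny (1986) steps with control variables added throughout
regress DV IV c1 c2            // step 1: total effect of IV on DV
regress med IV c1 c2           // step 2: IV -> mediator
regress DV IV med c1 c2        // step 3: direct and mediated effects

* The same structure in -sem-, with controls in both equations
sem (med <- IV c1 c2) (DV <- med IV c1 c2)
```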

georoute command and HERE API not working?

hello all,

I'm having some trouble with the HERE API and the georoute package. I registered for a HERE account and generated both a JavaScript and a REST ID and code, and neither will work with georoute! I keep getting the Stata message "There seems to be a problem with your HERE account". Am I missing something obvious? Thanks so much; I'm very confused.

Difference of a variable based on corresponding variables in other columns

Hi, Based on a subset of data pasted below:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input double year str2 state double growth str2(neigh1 neigh2 neigh3) str3 neigh4
1989 "WY"  9.05 "MT" "SD" "NE" "CO" 
1989 "IN"  4.62 "MI" "OH" "KY" "IL" 
1989 "CO"  3.01 "WY" "NE" "KS" "OK" 
1989 "MN"  2.57 "WI" "IA" "SD" "ND" 
1989 "ID"  5.31 "MT" "WY" "UT" "NV" 
1989 "NH"  3.41 "ME" "MA" "VT" ""   
1989 "DE"  2.87 "PA" "NJ" "MD" ""   
1989 "KS"  1.91 "NE" "MO" "OK" "CO" 
1989 "MI"  3.81 "OH" "IN" "WI" ""   
1989 "NV"  3.55 "ID" "UT" "AZ" "CA" 
1989 "RI"  6.21 "MA" "CT" ""   ""   
1989 "WV"  9.03 "PA" "MD" "VA" "KY" 
1989 "ND" -7.22 "MN" "SD" "MT" ""   
1989 "HI"  5.65 ""   ""   ""   ""   
1989 "NC"  4.54 "VA" "SC" "GA" "TN" 
1989 "IL"  5.65 "WI" "IN" "KY" "MO" 
1989 "AZ"  1.87 "UT" "CO" "NM" "CA" 
1989 "IA"  5.15 "MN" "WI" "IL" "MO" 
1989 "OH"  3.45 "PA" "WV" "KY" "IN" 
1989 "TX"   5.7 "OK" "AR" "LA" "NM" 
1989 "VA"  3.38 "MD" "NC" "TN" "KY" 
1989 "UT"   4.6 "ID" "WY" "CO" "NM" 
1989 "MD"  4.79 "PA" "DE" "VA" "WV" 
1989 "AL"  4.67 "TN" "GA" "FL" "MS" 
1989 "NM"  1.17 "CO" "OK" "TX" "AZ" 
1989 "AR"  3.82 "MO" "TN" "MS" "LA "
1989 "KY"  6.79 "OH" "WV" "VA" "TN" 
1989 "FL"  3.54 "GA" "AL" ""   ""   
1989 "LA"  5.04 "AR" "MS" "TX" ""   
1989 "TN"  3.83 "KY" "VA" "NC" "GA" 
1989 "CA"  3.11 "OR" "NV" "AZ" ""   
1989 "MO"  3.89 "IA" "IL" "KY" "TN" 
1989 "PA"  4.55 "NY" "NJ" "DE" "MD" 
1989 "WI"  5.14 "MI" "IL" "IA" "MN" 
1989 "VT"  6.79 "NH" "MA" "NY" ""   
1989 "ME"  5.82 "NH" ""   ""   ""   
1989 "GA"  2.68 "NC" "SC" "FL" "AL" 
1989 "OK"  6.23 "KS" "MO" "AR" "TX" 
1989 "OR"  4.72 "WA" "ID" "NV" "CA" 
1989 "NE"  5.35 "SD" "IA" "MO" "KS" 
1989 "SC"  4.19 "NC" "GA" ""   ""   
1989 "WA"  3.58 "ID" "OR" ""   ""   
1989 "MA"  5.14 "NH" "RI" "CT" "NY" 
1989 "NJ"  6.79 "NY" "CT" "DE" "PA" 
1989 "MS"  2.69 "TN" "AL" "LA" "AR" 
1989 "AK" -6.16 ""   ""   ""   ""   
1989 "CT"  5.29 "MA" "RI" "NY" ""   
1989 "SD"    .5 "ND" "MN" "IA" "NE" 
1989 "NY"  5.34 "VT" "MA" "CT" "NJ" 
1989 "MT"  -.27 "ND" "SD" "WY" "ID" 
1990 "MS"  1.18 "TN" "AL" "LA" "AR" 
1990 "IA"  3.54 "MN" "WI" "IL" "MO" 
1990 "AL"  -.28 "TN" "GA" "FL" "MS" 
1990 "WY"  1.23 "MT" "SD" "NE" "CO" 
1990 "IL"  1.76 "WI" "IN" "KY" "MO" 
1990 "CT"   .96 "MA" "RI" "NY" ""   
1990 "NV"  1.74 "ID" "UT" "AZ" "CA" 
1990 "FL"   .92 "GA" "AL" ""   ""   
1990 "WA"  2.74 "ID" "OR" ""   ""   
1990 "ME"   .62 "NH" ""   ""   ""   
1990 "MN"  2.01 "WI" "IA" "SD" "ND" 
1990 "CO"   .77 "WY" "NE" "KS" "OK" 
1990 "ID"  5.46 "MT" "WY" "UT" "NV" 
1990 "UT"   .23 "ID" "WY" "CO" "NM" 
1990 "AR"  1.82 "MO" "TN" "MS" "LA "
1990 "IN"  3.02 "MI" "OH" "KY" "IL" 
1990 "NJ"  1.23 "NY" "CT" "DE" "PA" 
1990 "LA"   .82 "AR" "MS" "TX" ""   
1990 "WI"  1.31 "MI" "IL" "IA" "MN" 
1990 "PA"  1.47 "NY" "NJ" "DE" "MD" 
1990 "ND"  7.37 "MN" "SD" "MT" ""   
1990 "AZ" -1.54 "UT" "CO" "NM" "CA" 
1990 "MO"   1.8 "IA" "IL" "KY" "TN" 
1990 "MA"  -.08 "NH" "RI" "CT" "NY" 
1990 "AK"  3.12 ""   ""   ""   ""   
1990 "GA"   .52 "NC" "SC" "FL" "AL" 
1990 "WV"  1.15 "PA" "MD" "VA" "KY" 
1990 "OR"  1.03 "WA" "ID" "NV" "CA" 
1990 "NC"  1.96 "VA" "SC" "GA" "TN" 
1990 "NM"  1.24 "CO" "OK" "TX" "AZ" 
1990 "NE"  2.77 "SD" "IA" "MO" "KS" 
1990 "MT"  4.16 "ND" "SD" "WY" "ID" 
1990 "NY"  -.41 "VT" "MA" "CT" "NJ" 
1990 "MD"   .65 "PA" "DE" "VA" "WV" 
1990 "VT"  3.07 "NH" "MA" "NY" ""   
1990 "SC"  1.95 "NC" "GA" ""   ""   
1990 "OK"    .8 "KS" "MO" "AR" "TX" 
1990 "DE"  5.52 "PA" "NJ" "MD" ""   
1990 "KS"   .31 "NE" "MO" "OK" "CO" 
1990 "OH"  1.84 "PA" "WV" "KY" "IN" 
1990 "MI"   1.2 "OH" "IN" "WI" ""   
1990 "RI"  2.25 "MA" "CT" ""   ""   
1990 "TX"   1.7 "OK" "AR" "LA" "NM" 
1990 "NH" -2.13 "ME" "MA" "VT" ""   
1990 "KY"  2.28 "OH" "WV" "VA" "TN" 
1990 "TN"   .33 "KY" "VA" "NC" "GA" 
1990 "HI"  4.51 ""   ""   ""   ""   
1990 "SD"  1.87 "ND" "MN" "IA" "NE" 
1990 "VA"  2.06 "MD" "NC" "TN" "KY" 
1990 "CA"   1.1 "OR" "NV" "AZ" ""   
end

For every state (by year) I would like to take the difference between that state's growth and the growth of each of its neighboring states (where neighbor data exist), denoted by the columns neigh1-neigh4. E.g., for 1989 and state WY, I would like the differences between the growth of WY and that of its neighbors MT, SD, NE, and CO.

I would appreciate help in this regard. Thanks.
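One possible approach (a sketch using the variables from the dataex above) is to reshape the neighbor columns long, merge in each neighbor's own growth, and then difference:

```stata
* Reshape neighbours long, merge in each neighbour's growth, difference
preserve
keep year state growth
rename (state growth) (nstate ngrowth)
tempfile ng
save `ng'
restore

reshape long neigh, i(year state) j(n)
rename neigh nstate
replace nstate = strtrim(nstate)               // fixes entries such as "LA "
merge m:1 year nstate using `ng', keep(master match) nogenerate
generate diff = growth - ngrowth if nstate != ""
```

This leaves one row per state-neighbor pair; collapse or reshape back to wide if one row per state is wanted.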

PPML multicollinearity

Hi,


I have estimated a model with OLS and PPML.

With OLS I obtain R2 = 0.78, and estat vif gives 1.2.

With PPML I obtain R2 = 0.95.

Does this mean that I have a multicollinearity problem?

Thank you!



Resolving "Initial values not feasible" error after using melogit command and the choice between melogit and meqrlogit

Dear Statalist members,

I'm working on a multilevel model using cross-country survey data for the year 2016. But I am encountering a problem with the Stata command melogit, and I hope you can help me overcome it. This is the first time I have worked with multilevel models.

You can see an extract of my data structure below:
countryID is the country's identification number,
id is the ID number of the respondent (which is very long),
health and pensions are the binary outcomes,
AGE1 (grand-mean-centered) and SEX1 are individual-level predictors,
Primary, Secondary and Tertiary are country-level variables representing the proportion of immigrants with primary, secondary, and tertiary education in each country.

I select only one country here, 56, which is the ISO 3166 code for Belgium.

Code:
clear
input float(countryID id health pensions AGE1) float SEX1 double(Primary Secondary Tertiary)

56 2.016056e+15 1 1 -7.302176 0 43.7 31.4 24.9
56 2.016056e+15 1 0 -7.302176 1 43.7 31.4 24.9
56 2.016056e+15 0 0 -8.3021755 0 43.7 31.4 24.9
56 2.016056e+15 1 1 -9.3021755 0 43.7 31.4 24.9
56 2.016056e+15 1 1 -14.302176 1 43.7 31.4 24.9
56 2.016056e+15 1 0 -8.3021755 1 43.7 31.4 24.9
56 2.016056e+15 1 1 -15.302176 0 43.7 31.4 24.9
56 2.016056e+15 0 0 -1.3021756 1 43.7 31.4 24.9
56 2.016056e+15 0 0 -10.302176 1 43.7 31.4 24.9
56 2.016056e+15 1 1 32.697823 0 43.7 31.4 24.9
56 2.016056e+15 1 0 -11.302176 1 43.7 31.4 24.9
56 2.016056e+15 0 0 -12.302176 1 43.7 31.4 24.9
56 2.016056e+15 0 0 -11.302176 0 43.7 31.4 24.9
56 2.016056e+15 1 0 -12.302176 0 43.7 31.4 24.9
56 2.016056e+15 0 0 26.697824 1 43.7 31.4 24.9
56 2.016056e+15 1 1 -5.302176 0 43.7 31.4 24.9
56 2.016056e+15 0 1 -13.302176 1 43.7 31.4 24.9
56 2.016056e+15 1 0 -14.302176 1 43.7 31.4 24.9
56 2.016056e+15 1 1 -25.302176 1 43.7 31.4 24.9
56 2.016056e+15 1 1 -13.302176 0 43.7 31.4 24.9
end

When I run the melogit command, I obtain this result:

melogit health SEX1 AGE1 Primary Secondary Tertiary || countryID:

Fitting fixed-effects model:

Iteration 0: log likelihood = -13691.836
Iteration 1: log likelihood = -13670.184
Iteration 2: log likelihood = -13670.165
Iteration 3: log likelihood = -13670.165

Refining starting values:

Grid node 0: log likelihood = -13053.881

Fitting full model:

initial values not feasible
r(1400);



But if I use the meqrlogit command, I obtain the following result:

meqrlogit health SEX1 AGE1 Primary Secondary Tertiary || countryID:


Refining starting values:

Iteration 0: log likelihood = -12323.137 (not concave)
Iteration 1: log likelihood = -12291.611 (not concave)
Iteration 2: log likelihood = -12283.832

Performing gradient-based optimization:

Iteration 0: log likelihood = -12283.832
Iteration 1: log likelihood = -12281.103
Iteration 2: log likelihood = -12280.951
Iteration 3: log likelihood = -12280.95
Iteration 4: log likelihood = -12280.949
Iteration 5: log likelihood = -12280.949 (not concave)
Iteration 6: log likelihood = -12280.949 (backed up)

Mixed-effects logistic regression Number of obs = 25769
Group variable: countryID Number of groups = 16

Obs per group: min = 1002
avg = 1610.6
max = 2269

Integration points = 7 Wald chi2(5) = 27.60
Log likelihood = -12280.949 Prob > chi2 = 0.0000

------------------------------------------------------------------------------
health | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
SEX1 | .1093097 .033151 3.30 0.001 .0443349 .1742844
AGE1 | -.0037471 .0009407 -3.98 0.000 -.0055908 -.0019034
Primary | -.6085568 3.400016 -0.18 0.858 -7.272466 6.055352
Secondary | -.5969638 3.397986 -0.18 0.861 -7.256895 6.062967
Tertiary | -.6036249 3.393375 -0.18 0.859 -7.254519 6.047269
_cons | 61.6622 339.7135 0.18 0.856 -604.1641 727.4885
------------------------------------------------------------------------------

------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
countryID: Identity |
var(_cons) | .7042374 .2518047 .349435 1.419292
------------------------------------------------------------------------------
LR test vs. logistic regression: chibar2(01) = 2778.43 Prob>=chibar2 = 0.0000


Questions:
What is the problem with melogit, and with what command can I fix it?
What do you think of the meqrlogit estimation result? Is it better than the melogit one?
If so, why?
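On the first question, an r(1400) at the start of full-model fitting can sometimes be worked around with melogit's starting-value and maximization options; a sketch of things to try (see help melogit):

```stata
* Alternative starting values and/or a more careful optimizer
melogit health SEX1 AGE1 Primary Secondary Tertiary || countryID:, ///
    startvalues(constantonly)
melogit health SEX1 AGE1 Primary Secondary Tertiary || countryID:, ///
    startvalues(fixedonly) difficult
```

Note also that with only 16 countries, the near-collinear country-level shares (Primary + Secondary + Tertiary close to 100) may itself be part of the problem; dropping one of the three is worth considering.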

Many thanks

Cisse abs





Panel Data

Hi,
How do I estimate random individual and time effects in a panel-data model with Stata? (I want the final estimate of the model.)

{ xtreg Variables, fe            Prob > F = 0.0000 }
{ hausman fe re                  Prob > chi2 = 0.4301 }
{ Breusch-Pagan, Honda, King-Wu, SLM, GHM (in EViews): prob cross-section = 0.0000, prob period = 0.0000, both = 0.0000 }


How do I carry out the final model estimation?
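For what it's worth, xtreg, re fits random individual effects only; two-way effects are usually handled with time dummies or with crossed random effects. A sketch with placeholder names (y, x, id, year):

```stata
* Random individual effects plus fixed time effects
xtset id year
xtreg y x i.year, re

* Or crossed (individual and time) random effects via -mixed-
mixed y x || _all: R.year || id:
```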

CMP model for multinomial probit with varying choice set

Hello,

I am looking to build a multinomial probit with a varying choice set for each individual via CMP. The idea is to later add other choice dimensions for building a joint model and hence I am looking for a workaround using CMP.

I understand that asmprobit handles this well by simply omitting the rows corresponding to alternatives that are not available.

Multi-logistic regression on DHS data

Hi All,

I am using DHS India data for an analysis on a topic related to child health. I have completed my analysis, but before I publish or present my findings I need to verify that it is robust.

My request: has anyone here conducted a multi-logistic regression using DHS data and is willing to share their do-file? I would be very grateful.

Thank You.

Standardized concentration indices with conindex

Hello everyone,

I am trying to modify the conindex user-written program so that it also calculates indirectly standardized concentration indices. However, I get error 102, "too few variables specified", and I am not sure how to fix it. The option that I have added is [, STvar(varname)]. Below you can find the code; the added code was highlighted in red in the original post. I haven't tried to modify the compare option so that it incorporates the comparison of standardized coefficients, but that would be great too.

Code:
capture program drop conindex2
program define conindex2, rclass sortpreserve byable(recall)
version 11.0
syntax varname [if] [in] [fweight aweight pweight]  , [RANKvar(varname)] [, robust] [, CLUSter(varname)] [, truezero] [, LIMits(numlist min=1 max=2 missingokay)] [, generalized][, generalised] [, bounded] [, WAGstaff] [, ERReygers]  [, v(string)] [,beta(string)] [, graph] [, loud] [, COMPare(varname)] [, KEEPrank(string)] [, ytitle(string)] [, xtitle(string)] [,compkeep(numlist)] [,extended] [,symmetric] [,bygroup(numlist)] [,svy] [, STvar(varname)]
marksample touse
tempname grouptest counter
tempvar wght sumw cumw cumw_1 cumwr cumwr_1 frnk temp sigma2 meanlhs meanlhs_star cumlhs cumlhs1 lhs rhs1 rhs2 xmin xmax varlist_star weight1 meanweight1 tempx temp1x sumlhsx  temps tempex lhsex rhs1ex rhs2ex sigma2ex exrank tempgx  lhsgex lhsgexstar symrank smrankmean tempsym sigma2sym lhssym lhssymstar rhs1sym rhs2sym lhsgsym tempgxstar raw_rank_c wi_c cusum_c wj_c rank_c var_rank_c mean_c lhs_c split_c ranking  extwght temp1 meanweight  sumlhs sumwr  counts meanoverall tempdis temp0 meanlhs2  rhs temp2  frnktest meanlhsex2  equality group lhscomp  rhs1comp rhs2comp rhscomp intercept scale standvar   // note: the tempvar must be standvar (the variable created below); stvar is already the local filled by the new STvar() option, so declaring it here leaves `standvar' empty and triggers error 102
if "`weight'" != "" local weighted [`weight'`exp']
if "`weight'" == "" qui gen byte `wght' = 1
else qui gen double `wght' `exp'

if "`svy'"!=""{
    if "`weight'" != ""  {
        di as error "When the svy option is used, weights should only be specified using svyset."
        exit 498
    }
    if "`cluster'"!="" {
        di as error "Warning: cluster option is redundant when using the svy option. svyset should be used to identify the survey design characteristics"
    }
    if "`robust'"!="" {
        di as error "Warning: robust option is redundant when using the svy option. svyset should be used to identify the survey design characteristics"
    }
    qui svyset
    if r(settings) == ", clear"{
        di as error "svyset must be used to identify the survey design characteristics prior to running conindex2 with the svy option."
        exit 498
    }
    local wtype = r(wtype)
    local wvar = r(wvar)
    if "`wtype'" != "." {
        local weighted "[`wtype' = `wvar']"
        qui replace `wght'=`wvar'
    }
    else replace `wght'=1
    local survey "svy:"
}

markout `touse' `rankvar' `wght' `clus' `compare'

quietly {
    local xxmin: word 1 of `limits'
    local xxmax: word 2 of `limits'

    if _by()==1 {
        if "`compare'"!="" {
            di as error "The option compare cannot be used in conjunction with by."
            exit 498
        }
    }
    if "`compkeep'"=="" local bygroup = _byindex()
    
    if "`generalised'"=="generalised" local generalized="generalized"
    
    if "`extended'"!="" | "`symmetric'"!="" {
        di as error "Please see the help file for the correct syntax for the extended and symmetric indices"
        exit 498
    }
    
    if "`xxmin'"=="" {
        scalar xmin=.
    }
    else scalar xmin=`xxmin'
    if "`xxmax'"=="" {
        scalar xmax=.
    }
    else scalar xmax=`xxmax'
    
    if "`weight'"!="" {
        sum `varlist' [aweight`exp'] if `touse'
    }
    else sum `varlist' if `touse'
    return scalar N=r(N)
    
    scalar testmean=r(mean)
    count if `varlist' < 0 & `touse'
    if r(N) > 0 {
        noisily disp as txt _n "Note: `varlist' has `r(N)' values less than 0"
    }
    
    if "`rankvar'" == "`varlist'" | "`rankvar'" ==""{
        local index = "Gini"
    }
    else local index = "CI"
    
       gen double `standvar'=`varlist'
    if "`stvar'" != "" {
        replace `standvar'=`stvar'    
        local label : variable label `stvar'
        label variable `standvar' `"`label'"'    
    }    
    
    gen double `ranking'=`varlist'
    if "`rankvar'" != "" {
        replace `ranking'=`rankvar'    
        local label : variable label `rankvar'
        label variable `ranking' `"`label'"'    
    }    
    gen double `varlist_star'=`varlist'
    
    local CompWT_options = " `varlist'"
    if "`if'"!="" {
        local compif0="`if' & `compare'==0"
        local compif1="`if' & `compare'==1"
    }
    else {
        local compif0=" if `compare'==0"
        local compif1=" if `compare'==1"
    }
    forvalues i=0(1)1 {
        if "`weight'"!=""{
            local CompWT_options`i' = "`CompWT_options' [`weight'`exp'] `compif`i'' `in',"
        }
        else local CompWT_options`i' = "`CompWT_options' `compif`i'' `in',"
    }
    if "`rankvar'"!="" {
        local Comp_options = "`Comp_options' rankvar(`rankvar')"
    }
    if "`cluster'"!="" {
        local Comp_options = "`Comp_options' cluster(`cluster')"
    }
    if xmin!=. {
        local Comp_options = "`Comp_options' limits(`limits')"
    }
    if "`v'"!="" {
        local Comp_options = "`Comp_options' v(`v')"
    }
    if "`beta'"!="" {
        local Comp_options = "`Comp_options' beta(`beta')"
    }
    if "`loud'"!="" {
        local Comp_options = "`Comp_options' loud"
    }
    if "`'"!="" {
        local Comp_options = "`Comp_options' "
    }
    foreach opt in robust truezero generalized bounded wagstaff erreygers svy{
        if "``opt''"!="" {
            local Comp_options = "`Comp_options' `opt'"
        }
    }
    
    local extended=0
    local symmetric=0
    local modified=0
    local problem=0
    
    if "`truezero'"=="truezero" {
        if testmean==0 {
            if `problem'==0  di as err="The mean of the variable (`varlist') is 0 - the standard concentration index is not defined in this case."
            local problem=1
        }
        if xmin != . {
            if xmin>0 {
                if `problem'==0 di as err="The lower bound for a ratio scale variable cannot be greater than 0."
                local problem=1
            }
        }
    }    
    if "`generalized'"=="generalized" {
        local generalized=1
    }
    else local generalized=0
    if "`truezero'"!="truezero" {
        if `generalized'==1 {
            if `problem'==0  di as err="The option truezero must be used when specifying the generalized option."
            local problem=1
        }    
        else local generalized=0
    }
    
    if "`bounded'"!="" {
        if xmax==. {
            if `problem'==0 di as err="For bounded variables, the limits option must be specified as limits(#1 #2) where #1 is the minimum and #2 is the maximum."
            local problem=1    
        }
        local bounded=1
        if xmin > xmax |xmin == xmax | xmin ==.{
            if `problem'==0 di as err="For bounded variables, the limits option must be specified as limits(#1 #2) where #1 is the minimum and #2 is the maximum."
            local problem=1
        }
        sum `varlist'
        if xmin!=.{
            if r(min)<xmin |r(max)>xmax{
                if `problem'==0 di as err="The variable (`varlist') takes values outside of the specified limits."
                local problem=1
            }    
            if r(min)>=xmin & r(max)<=xmax{        
                replace `varlist_star'=(`varlist'-xmin)/(xmax-xmin)        
            }
        }
    }
    else local bounded=0
    if "`wagstaff'"=="wagstaff" local wagstaff=1
        else local wagstaff=0
    if "`erreygers'"=="erreygers" local erreygers=1
        else local erreygers=0    
    if `bounded'==0 & (`erreygers'==1| `wagstaff'==1){
        di as err="Wagstaff and Erreygers Normalisations are only for use with bounded variables."
        di as err="Hence the bounded and limits(#1 #2) options must be used to specify the theoretical minimum (#1) and maximum (#2)."
        local problem=1
    }    
    if (`erreygers'==1 & `wagstaff'==1){
        di as err="The option wagstaff cannot be used in conjunction with the option erreygers."    
        local problem=1
    }
    if "`v'"!="" {
        capture confirm number `v'
        if _rc {
            di as err="For the option v(#), # must be a number greater than 1."
            local problem=1
        }
        if `v'<=1 & _rc==0 {
            di as err="For the option v(#), # must not be less than 1."
            local problem=1
        }
        local extended=1
    }
    if "`beta'"!=""  {
        capture  confirm number `beta'
        if _rc {
            di as err="For the option beta(#), # must be a number greater than 1."
            local problem=1
        }
        if `beta'<=1 & _rc==0 {
            di as err="For the option beta(#), # must not be less than 1."
            local problem=1
        }
        local symmetric=1
    }
    
    if `extended'==1 & `symmetric'==1{
        di as err="The option v(#) cannot be used in conjunction with the option beta(#)."
        local problem=1
    }
    
    if (`extended'==1 | `symmetric'==1) & (`erreygers'==1| `wagstaff'==1){
        di as err="Wagstaff and Erreygers Normalisations are not supported for extended/symmetric indices."
        local problem=1
    }    
    
    if (`generalized'==1) & (`erreygers'==1| `wagstaff'==1){
        di as err="Cannot specify generalized in conjunction with Wagstaff or Erreygers Normalisations."
        local problem=1
    }    
    
    if xmin != . {
        sum `varlist'
        if r(min)<xmin{
            if `problem'==0 di as err="The variable (`varlist') takes values outside of the specified limits."
            exit 498
        }
        if "`truezero'"=="truezero" {
            di as txt="Note: The option truezero has been specified in conjunction with the limits option."
            if `extended'==1 | `symmetric'==1{
                di as txt="      The index will be calculated using the standardised variable (`varlist' - min)/(max - min)."
            }
            else di as txt="      The limits are redundant as the variable is assumed to be ratio scaled (or fixed)."
        }
    }
        
    if "`truezero'"!="truezero" & `extended'==0 & `symmetric'==0 & `erreygers'==0 & `wagstaff'==0  & `generalized'==0 & `bounded'==0{
        local modified=1
        if xmin == . | xmax != . {
            di as err="For the modified concentration index, the limits option must be specified as limits(#1) where #1 is the minimum."
            di as err="If you require an alternative index, please look at the help file by typing - help conindex2 - to find the correct syntax."
            local problem=1
        }    
        if xmin == . {
            di as err="For the modified concentration index (the default), a missing value (.) may not be used as the lower limit. "
            local problem=1
        }
        sum `varlist'
        if r(min)==r(max){
            di as err="The modified concentration index cannot be computed since the variable (`varlist') is always equal to its minimum value."
            local problem=1
        }
    }
    
    if "`truezero'"!="truezero" {
        if `extended'==1 | `symmetric'==1{
            di as err="The extended and symmetric indices should be used for ratio-scale variables and hence truezero must be specified also."
            local problem=1
        }
    }    
    
    if "`graph'"=="graph"{
        if "`truezero'"!="truezero" & `bounded'!=0{
            di as err="Graph option only available for ratio-scale variables - please also specify the truezero option if the variable is ratio-scale or the bounded option if the variable is bounded."
            local problem=1
        }
        if "`wagstaff'"=="wagstaff" | "`erreygers'"=="erreygers"{
            di as err="Graph option not supported for Wagstaff or Erreygers Normalisations."
            local problem=1
        }
        if `extended'==1 | `symmetric'==1{
            di as err="Graph option not supported for Extended or Symmetric Indices."
            local problem=1
        }
    }
    
    if "`loud'"=="loud" local noisily="noisily"    
    if `problem'==1  exit 498
    if `generalized'==1 & `extended'==1 noisily disp as txt _n "Note: The extended index equals the Erreygers normalised CI when v=2"
    if `generalized'==1 & `symmetric'==1 noisily disp as txt _n "Note: The symmetric index equals the Erreygers normalised CI when beta=2"
    
    if "`robust'"=="robust" | "`cluster'"!=""{
        local SEtype="Robust std. error"
    }
    else local SEtype="Std. error"


    if "`svy'"!="" & (`extended'==0 & `symmetric'==0) gen `scale'=1
    else gen double `scale'=sqrt(`wght')
    
    gsort -`touse' `ranking'
    egen double `sumw'=sum(`wght') if `touse'
    gen double `cumw'=sum(`wght') if `touse'
    gen double `cumw_1'=`cumw'[_n-1] if `touse'
    replace `cumw_1'=0 if `cumw_1'==.
    bys `ranking': egen double `cumwr'=max(`cumw') if `touse'
    bys `ranking': egen double `cumwr_1'=min(`cumw_1') if `touse'
    gen double `frnk'=(`cumwr_1'+0.5*(`cumwr'-`cumwr_1'))/`sumw' if `touse'
    gen double `temp'=(`wght'/`sumw')*((`frnk'-0.5)^2) if `touse'
    egen double `sigma2'=sum(`temp') if `touse'
    replace `temp'=`wght'*`varlist_star'
    egen double `meanlhs'=sum(`temp') if `touse'
    replace `meanlhs'=`meanlhs'/`sumw'
    
    if  `modified'==1 & `bounded'==0{
        replace `meanlhs'=`meanlhs'-xmin
    }


    if "`graph'"=="graph" {
         capture which lorenz
         if _rc==111 disp "conindex2 requires lorenz.ado by Ben Jann to produce graphs. Please install this before using conindex2."
        if "`ytitle'" ==""{
            local ytext : variable label `varlist'
            if "`ytext'" == "" local ytext "`varlist'"
            local ytitle = "Cumulative share of `ytext'"
            if `generalized'==1 {
                if "`ytext'" == "" local ytext "`varlist'"
                local ytitle = "Cumulative average of `ytext'"
            }
        }
        if "`xtitle'" ==""{
            if "`rankvar'"  == "" local xtext : variable label `varlist'
            if "`rankvar'"  != "" local xtext : variable label `ranking'
            if "`xtext'" == "" local xtext "`rankvar'"
            if "`xtext'" == "" local xtext "`varlist'"
            local xtitle = "Rank of `xtext'"
        }    
        if `generalized'== 0{
            lorenz estimate `varlist_star', pvar(`ranking')
            lorenz graph, ytitle(`ytitle', size(medsmall)) yscale(titlegap(5))  xtitle(`xtitle', size(medsmall))  ytitle(`ytitle', size(medsmall)) graphregion(color(white)) bgcolor(white)
        }
        if `generalized'==1 {
            lorenz estimate `varlist_star', pvar(`ranking') generalized
            lorenz graph, ytitle(`ytitle', size(medsmall)) yscale(titlegap(5))  xtitle(`xtitle', size(medsmall))  ytitle(`ytitle', size(medsmall)) graphregion(color(white)) bgcolor(white)
        }    
    }

    
    noisily  di in smcl ///
        "{hline 19}{c TT}{hline 13}{c TT}{hline 13}{c TT}{hline 19}" _c
    noi di in smcl  "{c TT}{hline 10}{c TRC}"

    noisily  di in text "Index:" _col(20) "{c |} No. of obs." _col(34) ///
          "{c |} Index value" _col(48) "{c |} `SEtype'" _col(68) ///
          "{c |} p-value" _col(79) "{c |}"
    noisily  di in smcl ///
        "{hline 19}{c +}{hline 13}{c +}{hline 13}{c +}{hline 19}" _c
    noi di in smcl  "{c +}{hline 10}{c RT}"
    
    gen double `lhs'=2*`sigma2'*(`varlist_star'/`meanlhs')*`scale' if `touse'
    gen double `intercept'=`scale' if `touse'
    gen double `rhs'=`frnk'*`scale' if `touse'
    
    local type = "`index'"
    
    if  `modified'==1 & `bounded'==0{
        replace `meanlhs'=`meanlhs'+xmin
    }
    
    if `generalized'==0 & `erreygers'==0 & `wagstaff'==0{
        `noisily'  disp "`index'"
        local type = "`index'"
    }
    if `modified'==1 {
        `noisily'  disp "Modified `index'"
        local type = "Modified `index'"
        replace `lhs'=`lhs'*(`meanlhs')/(`meanlhs'-xmin) if `touse' ==1
    }    
    if `wagstaff'==1{
        `noisily'  disp "Wagstaff Normalisation"
        local type = "Wagstaff norm. `index'"
        replace `lhs'= `lhs'/(1-`meanlhs') if `touse'
    }
    if `erreygers'==1{
        `noisily'  disp "Errygers Normalisation"
        local type = "Erreygers norm. `index'"
        replace `lhs'= `lhs'*(4*`meanlhs') if `touse'
    }
    if `generalized'==1 {
        `noisily'  disp "Gen. standard `index'"
        local type = "Gen. `index'"
        replace `lhs'=`lhs'*`meanlhs' if `touse'
    }    
    
    if `extended'==1 | `symmetric'==1{
        gsort -`touse' `frnk'
        gen double `temp1'=`wght'*`varlist_star' if `touse'
        egen double `sumlhs'=sum(`temp1') if `touse'
        bys `ranking': egen double `sumwr'=sum(`wght') if `touse'
        bys `ranking': egen double `counts'=count(`temp1') if `touse'
        gen `meanoverall'=`sumlhs'/`sumw' if `touse'
        bys `ranking': egen double `temp0'=rank(`ranking') if `touse', unique
        bys `ranking': egen double `meanlhs2'=sum(`temp1') if `touse'
        replace `meanlhs2'=`meanlhs2'/`sumwr' if `touse'
    }    
    
    
    if `extended'==1{
        capture drop `lhs'
        capture drop `rhs'
        capture drop `temp2'
        gen double `rhs'=((`sumwr'/`sumw')+((1-(`cumwr'/`sumw'))^`v')-((1-(`cumwr_1'/`sumw'))^`v')) if `temp0'==1
        egen double `temp2'=sum(`rhs'^2) if `temp0'==1
        gen double `lhs'=(`meanlhs2'/`meanoverall')*`temp2' if `touse' & `temp0'==1
        local type = "Extended `index'"    
        if `generalized'==1{
            local type = "Gen. extended `index'"
            replace `lhs'=(`meanlhs2'*(`v'^(`v'/(`v'-1)))/(`v'-1))*`temp2' if `touse' & `temp0'==1
        }
    }            
    
    if `symmetric'==1{
        capture drop `lhs'
        capture drop `rhs'
        capture drop `temp2'
        gen double `rhs'=(2^(`beta'-2))*(abs((`cumwr'/`sumw'-0.5))^`beta'-(abs(`cumwr_1'/`sumw'-0.5))^`beta') if `temp0'==1
        egen double `temp2'=sum(`rhs'^2) if `temp0'==1
        gen double `lhs'=(`meanlhs2'/`meanoverall')*`temp2' if `touse' & `temp0'==1
        local type = "Symmetric `index'"
    
        if `generalized'==1{
            local type = "Gen. symmetric `index'"
            replace `lhs'=`meanlhs2'*4*`temp2' if `touse' & `temp0'==1
        }
    }
    if "`survey'"=="" `noisily'  regress `lhs' `rhs' `intercept' `standvar' if `touse'==1, `robust' cluster(`cluster') noconstant
    if "`survey'"=="svy:" `noisily' svy: regress `lhs' `rhs' `intercept' `standvar' if `touse'==1,  noconstant

    
    return scalar RSS=e(rss)
     mat b=e(b)
     mat V=e(V)
     return scalar CI= b[1,1]
     return scalar CIse= sqrt(V[1,1])

    if `extended'==1 | `symmetric'==1{
        `noisily'   regress `lhs' `rhs' `standvar' if `temp0'==1, robust
        return scalar RSS=e(rss)
        mat b=e(b)
        mat V=e(V)
        return scalar CI= b[1,1]
        return scalar CIse = .
    }
    
    return scalar Nunique= e(N)
    local nclus= e(N_clust)
    local t=return(CI)/return(CIse)
     local p=2*ttail(e(df_r),abs(`t'))
     noisily  di in text "`type'" _col(20) "{c |} " as result return(N) ///
        _col(34) "{c |} " as result return(CI) _col(48) "{c |} " ///
         as result return(CIse) _col(68) "{c |} " as result %7.4f ///
        `p' _col(79)"{c |}"
     noisily  di in smcl ///
        "{hline 19}{c BT}{hline 13}{c BT}{hline 13}{c BT}{hline 19}" _c
    noi di in smcl  "{c BT}{hline 10}{c BRC}"

    if `nclus'!=. noisily  di in text "(Note: Std. error adjusted for `nclus' clusters in `cluster')"
    if return(Nunique)!=return(N) noisily  di in text "(Note: Only " return(Nunique) " unique values for `rankvar')"
    if `extended'==1 | `symmetric'==1{
        noisily  di in text "(Note: Standard errors for the extended and symmetric indices are not calculated by the current version of conindex2.)"
    }
    
    if "`keeprank'"!="" {
        tempname savedrank
        gen  double `savedrank'=`frnk'
        if _by()==0  {
            confirm new variable `keeprank'`compkeep'
            gen  double `keeprank'`compkeep'=`savedrank'
        }
        if _by()==1 {
            gen  double `keeprank'_`bygroup'=`savedrank'
            }            
    }
    



    if "`compkeep'"!="" {
        confirm new variable templhs
        gen double templhs=`lhs'
        confirm new variable temprhs
        gen double temprhs=`rhs'
    }
    if "`compare'"!=""{
        egen `group' = group(`compare')
        qui sum `group' if `touse' , meanonly
        scalar gmax=r(max)
        noisily  di in text ""
        noisily  di in text ""
        noisily  di in text "For groups:"
        noisily  di in text ""
        noisily  di in text ""
        
        gen double `lhscomp'=.  
        gen double `rhscomp'=.
        foreach i of num 1/`=scalar(gmax)'  {
            if "`if'"!="" {
                local compif`i'="`if' & `group'==`i'"
            }
            else {
                local compif`i'=" if `group'==`i'"
            }
            if "`weight'"!=""{
                local CompWT_options`i' = "`CompWT_options' [`weight'`exp'] `compif`i'' `in',"
            }
            else local CompWT_options`i' = "`CompWT_options' `compif`i'' `in',"
            qui sum `compare' if `touse' & `group'==`i', meanonly
            noisily  di in text "CI for group `i': `compare' = "r(mean)
            noisily conindex2 `CompWT_options`i'' `Comp_options' keeprank(`keeprank') compkeep(`i')
            noisily  di in text ""
            replace `lhscomp'=templhs if `touse' & `group'==`i'
            replace `rhscomp'=temprhs if `touse' & `group'==`i'
            drop templhs temprhs
            }    
        `noisily'  regress `lhscomp' c.`rhscomp' i.`group' if `touse',  `robust' cluster(`cluster')
        return scalar N_restricted=e(N)
        return scalar SSE_restricted=e(rss)
        `noisily'  regress `lhscomp' c.`rhscomp'##i.`group' if `touse',  `robust' cluster(`cluster')
        noisily  di in text ""
        return scalar SSE_unrestricted=e(rss)
        return scalar N_unrestricted=e(N)

        return scalar F=((return(SSE_restricted)-return(SSE_unrestricted))/(gmax-1))/(return(SSE_unrestricted)/(return(N_restricted)-2*gmax))
        local p=1 - F(gmax-1,(return(N_restricted)- 2*gmax), return(F))                        /* OO'D made two changes to second df 28.5.14 */
        noisily  di in text "Test for stat. significant differences with Ho: diff=0 (assuming equal variances)" _col(50) "
        noi di in smcl "{hline 19}{c TT}{hline 19}{c TRC}"
        noisily  di in text "F-stat = " as result return(F) _col(20) "{c |} p-value= "  as result %7.4f `p' _col(40) "{c |}"        
        noi di in smcl "{hline 19}{c BT}{hline 19}{c BRC}"

        if gmax==2{
            disp "Group: `compare'=0"
            conindex2 `CompWT_options1' `Comp_options'
            return scalar CI0=r(CI)
            return scalar CIse0=r(CIse)
            disp "Group: `compare'=1"

            conindex2 `CompWT_options2' `Comp_options'
            return scalar CI1=r(CI)
            return scalar CIse1=r(CIse)
            return scalar Diff= return(CI1)-return(CI0)
    
            return scalar Diffse= sqrt((return(CIse0))^2 + (return(CIse1))^2)
            return scalar z=return(Diff)/return(Diffse)
            local p=2*(1-normal(abs(return(z))))
            noisily  di in text "Test for stat. significant differences with Ho: diff=0 " _col(50) "(large sample assumed)"
            noi di in smcl ///
                "{hline 19}{c TT}{hline 23}{c TT}{hline 17}{c TT}{hline 18}{c TRC}"
            noisily  di in text "Diff. = " as result return(Diff) _col(20) ///
                "{c |} Std. err. = " as result return(Diffse) _col(44) ///
                "{c |} z-stat = " as result %7.2f return(z) _col(59) "{c |} p-value = " as result %7.4f `p' _col(79)"{c |}"                
            noi di in smcl ///
                "{hline 19}{c BT}{hline 23}{c BT}{hline 17}{c BT}{hline 18}{c BRC}"
        }
    }    
}
end


Any help would be much appreciated.

Thanos


Firm fixed effects

Hi Statalist,


I have a question about firm-fixed effects.

My regression looks like:

Dependent var = independent var + controls

My dependent var is a continuous variable, and my independent var is a dummy variable. This dummy variable can, of course, be 1 or 0. It can go from 1 to 0 in consecutive years, but NOT from 0 to 1.

I declared the panel data with xtset CIK fyear, where CIK is the company identifier.

My research supervisor said that when I include firm fixed effects, Stata identifies the B1 coefficient only from those firms that go from 1 to 0 in consecutive years (because the dummy is constant for all other firm-years).

Is this true, and can anyone elaborate on it so that I can defend this explanation more convincingly?
If you need more information please feel free to ask...
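For what it's worth, here is a minimal sketch of the specification described above (depvar, treat, and controls are placeholder names):

Code:
xtset CIK fyear
xtreg depvar i.treat controls, fe vce(cluster CIK)

With the fixed-effects (within) estimator, a firm whose dummy never changes contributes nothing to the estimation of the treat coefficient: after demeaning, its dummy is identically zero. Only the firms that switch from 1 to 0 provide within-firm variation, although the non-switchers still help estimate the coefficients on time-varying controls.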

Delete records with missing values

How can I delete all observations that have missing data in any of my variables?

I have more than 61,000 observations with 3,809 variables, and I need to keep only the observations that are complete.
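A hedged sketch of one way to do this, looping over every variable (missing() treats both numeric missing and empty strings as missing):

Code:
foreach v of varlist _all {
    quietly drop if missing(`v')
}

With this many variables the loop is slow but simple; if all the relevant variables are numeric, egen nmiss = rowmiss(varlist) followed by drop if nmiss > 0 should do the same in one pass.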

ttest for time series

Hello all,

I have monthly data on the standard deviation of my betas (Sd_Beta) and three dummy variables P1-P3, which indicate whether the volatility of Ted (Ted_Vol) is in the first, second, or third tercile.
I've regressed these dummies on the above-mentioned standard deviation and got the coefficients for P1 and P3. In the next step I have to determine whether the difference (P3-P1) in the standard deviation, given that P1=1 or P3=1, is statistically significant. I'm not quite sure how to approach this task. Is there a way to compute monthly differences and use them for a t-test?
Another thought of mine was to calculate the difference of Ted_Vol if P1=1 and P3=1, but I do not know how to match these numbers to my time variable.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float date int Jahr byte Monat double(Sd_Beta Ted_Vol) float(P1 P3)
469 1999  2  .2162872850894928 .00042292120633646846 1 0
470 1999  3 .21683630347251892  .0005120532005093992 1 0
471 1999  4 .21574001014232636  .0005089303012937307 1 0
472 1999  5 .21684658527374268   .001667482778429985 0 0
473 1999  6 .21839885413646698  .0011676736176013947 0 0
474 1999  7 .21961617469787598  .0005640562158077955 0 0
475 1999  8 .22290168702602386  .0003095806168857962 1 0
476 1999  9 .23819047212600708   .001692846417427063 0 1
477 1999 10  .2405623197555542   .001672371756285429 0 0
478 1999 11 .23849868774414063  .0029762284830212593 0 1
479 1999 12 .24061444401741028   .002350094262510538 0 1
480 2000  1 .23947422206401825  .0033114443067461252 0 1
481 2000  2 .23897820711135864  .0017416990594938397 0 1
482 2000  3 .23724332451820374  .0005756043246947229 0 0
483 2000  4 .23789244890213013  .0004283250018488616 1 0
484 2000  5 .23350174725055695   .001657421002164483 0 1
485 2000  6 .23567992448806763   .002668452449142933 0 1
486 2000  7 .23621943593025208  .0005725464434362948 1 0
487 2000  8 .23868688941001892  .0008709787507541478 0 0
488 2000  9 .24259942770004272  .0012692536693066359 0 0
489 2000 10 .24906545877456665 .00048570294165983796 1 0
end
format %tm date
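If the monthly observations can be treated as roughly independent (a strong assumption for time series), one rough sketch is to keep only the tercile-1 and tercile-3 months and compare group means:

Code:
ttest Sd_Beta if P1==1 | P3==1, by(P3)

Alternatively, regressing Sd_Beta on P3 over the same subsample gives the P3-P1 difference directly as the coefficient on P3, and newey lets the standard errors allow for serial correlation (force is needed because the subsetting creates gaps in the series):

Code:
tsset date
newey Sd_Beta P3 if P1==1 | P3==1, lag(3) force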

Thank you in advance.

IV estimation with ordinal endogenous variable and ordinal instrumental variable

Hello, everyone. I am actually quite new to Statalist, and just beginning to learn Stata beyond what was taught in our syllabi. I need your help with IV estimation. I wish to estimate mortality risk with BMI as the main predictor (with survival analysis). To address the issue of reverse causation involving BMI and comorbid illness, I would like to use BMI at time t-1 as an instrumental variable for BMI, with both BMIs as ordinal variables (underweight, normal [baseline], overweight, obese 1, obese 2, obese 3). What should I use in Stata? Is it ivregress or ivpoisson? Also, can anyone help me code this in Stata? So far, the Stata manual hasn't been very helpful (the instrument is treated as continuous), and I've searched as extensively as I can but came up with nothing. Do I create a separate dummy variable for each BMI class (e.g., BMI_t-1_underweight = 0 or 1, etc.)?

For completeness, my other exogenous variables are age, sex, and current smoking status, and my other instruments are diabetes at time t-1 (0 or 1), cardiovascular disease at time t-1 (0 or 1), and smoking status at time t-1 (smoker vs nonsmoker).
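Not an answer to the survival part, but as a hedged illustration of the dummy-variable question: factor-variable notation can create the category dummies on the fly, so hand-made 0/1 variables shouldn't be needed. With hypothetical variable names (bmi_cat coded 1-6, with ib2. setting category 2 = normal as the base), a linear 2SLS sketch might look like:

Code:
ivregress 2sls died age i.sex i.smoker ///
    (ib2.bmi_cat = ib2.bmi_cat_lag i.diabetes_lag i.cvd_lag i.smoker_lag), first

Each non-base BMI category is then treated as a separate endogenous dummy with its own set of instrument dummies; whether a linear IV model is appropriate for a survival outcome is a separate question.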

Thank you all so much for your time and understanding.

Running sum of observations by group for last 3 years

Dear Statalists,

my dataset includes company IDs and patents the companies invented per year. Each line is a patent invented in a certain year by a certain company, so there may be several lines per company/year. I am struggling with the running sum of the number of patents (= number of obs.) per company over the last 3 years.

In my example, for 1994 I would like to have 2, as in that year two patents were invented and there are no previous years for that company. For 1995, I would like to have 8 (6 from 1995 and 2 from 1994). For 1996 it's 11, for 1997 it's 12 (1994 drops out), and so on...

Any ideas? Thanks in advance!

I am using Stata MP 15.0.


Code:
clear
input long permno float grant_year
10016 1994
10016 1994
10016 1995
10016 1995
10016 1995
10016 1995
10016 1995
10016 1995
10016 1996
10016 1996
10016 1996
10016 1997
10016 1997
10016 1997
10016 1998
10016 1998
10016 1998
10016 1998
10016 1998
10016 1998
10016 1998
10016 1998
end
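A hedged sketch of one approach: collapse to one patent count per firm-year, fill the gaps in the panel, and then add the current year's count to the two previous years' counts (note that contract replaces the data in memory, so work on a copy or use preserve/restore):

Code:
contract permno grant_year, freq(npat)
xtset permno grant_year
tsfill
replace npat = 0 if missing(npat)
gen npat3 = npat + cond(missing(L1.npat), 0, L1.npat) ///
                 + cond(missing(L2.npat), 0, L2.npat)

On the example data this gives 2 for 1994, 8 for 1995, 11 for 1996, and 12 for 1997; the result can then be merged back onto the patent-level data on permno and grant_year if needed.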

Creating a local list from a variable

I have the following string values for two variables. I would like to create a local list from Var1 and/or Var2.

Code:
clear
input str4 Var1 str3 Var2
"A f" "H O"
"B" "L"
"C" "Z"
"D" "N t"
"E g" "m o"
"F" "a p"
"G" "w"
"" "q"
"" "po"
end

When I use levelsof, this is what I get:
levelsof Var1, local(levels)
`"A f"' `"B"' `"C"' `"D"' `"E g"' `"F"' `"G"'

The local List1 I desire is:
`" "A f" "B" "C" "D" "E g" "F" "G" "'

Similarly,
levelsof Var2, local(levels) is:
`"H O"' `"L"' `"N t"' `"Z"' `"a p"' `"m o"' `"po"' `"q"' `"w"'

The local List2 I desire is:
`" "H O" "L" "N t" "Z" "a p" "m o" "po" "q" "w" "'

The goal is to eliminate manual entry when creating List1 and List2, and instead just grab the values from Var1 or Var2 to build each local list.
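A hedged sketch: build the list in a loop over the levels, wrapping each level in its own double quotes (compound quotes keep levels containing spaces intact):

Code:
levelsof Var1, local(levels)
local List1
foreach l of local levels {
    local List1 `"`List1' "`l'""'
}
display `"`List1'"'

The same loop over levelsof Var2 builds List2; referencing the result as `"`List1'"' preserves the internal quotes.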

Any help would be appreciated.
Thanks

Estimating adjusted means and 95% CI using regression stata

Dear all,

I am analyzing repeated-measures longitudinal data.

I fit a linear regression model: Y on x, covariates (age, sex, education, income), and i.obesity.

I would like to get the adjusted means (95% confidence intervals) of Y at the different levels of obesity.

Do I use the margins command?

What is the correct code to get these results?
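A minimal sketch along those lines (variable names assumed from the description; for repeated measures a mixed model may be more appropriate than plain regress, but the margins step is the same):

Code:
regress Y x age i.sex i.education income i.obesity
margins obesity

margins then reports the adjusted mean of Y at each level of obesity, averaged over the sample covariate distribution, together with 95% confidence intervals.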

I am grateful for your help.

Jianbo