Signal-to-noise (lambda) coefficient and significance when using sfpanel in stochastic frontier analysis

I am estimating a stochastic frontier model in Stata with the command "sfpanel". If I use a frontier model (such as the True Fixed Effects model of Greene, 2005) in the "normal" way with "sfpanel", I get the coefficient of the signal-to-noise (lambda) parameter along with its significance. The code and output look similar to:


sfpanel dependent_variable var1 var2 var3, model(tfe)

-----------------+----------------------------------------------------------------
Usigma |
_cons | -5.816884 .2313009 -25.15 0.000 -6.270225 -5.363542
-----------------+----------------------------------------------------------------
Vsigma |
_cons | -6.935131 .2543468 -27.27 0.000 -7.433641 -6.43662
-----------------+----------------------------------------------------------------
sigma_u | .0545607 .00631 8.65 0.000 .0434949 .0684418
sigma_v | .0311929 .0039669 7.86 0.000 .0243111 .0400226
lambda | 1.749138 .0093071 187.94 0.000 1.730897 1.76738
------------------------------------------------------------------------------



However, when I estimate the same model including an exogenous factor for the inefficiency term (u), the Stata output provides neither the coefficient nor the significance of lambda. I can calculate the lambda coefficient as the ratio E(sigma_u)/sigma_v, but how can I obtain its significance? Here are the code and the output:

sfpanel dependent_variable var1 var2 var3, model(tfe) u(IA)

-----------------+----------------------------------------------------------------
Usigma |
IA | -.125778 .1331336 -0.94 0.345 -.3867151 .1351591
_cons | -5.440955 .4137336 -13.15 0.000 -6.251858 -4.630052
-----------------+----------------------------------------------------------------
Vsigma |
_cons | -6.944392 .2449492 -28.35 0.000 -7.424483 -6.4643
-----------------+----------------------------------------------------------------
E(sigma_u) | .0563411 .0560036 .0566787
sigma_v | .0310488 .0038027 8.16 0.000 .0244227 .0394725
------------------------------------------------------------------------------


How can I get the p-value of the lambda parameter? Is there also another way to obtain the lambda coefficient?


I have tried using ereturn list, but it does not provide any information on the lambda parameter. Can anybody help me?
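One hedged route, consistent with the Usigma/Vsigma parameterization visible in the output above (equations for ln sigma^2, so that sigma = exp(0.5*_cons)): express lambda through the stored coefficients and let nlcom deliver a delta-method standard error and p-value. With u(IA), sigma_u varies by observation, so the sketch below evaluates lambda at the sample mean of IA; verify the equation and coefficient names against matrix list e(b) for your own run.

Code:
sfpanel dependent_variable var1 var2 var3, model(tfe) u(IA)
summarize IA, meanonly
local mIA = r(mean)
* lambda = sigma_u/sigma_v at the mean of IA, with delta-method inference
nlcom (lambda: exp(0.5*(_b[Usigma:_cons] + _b[Usigma:IA]*`mIA')) / exp(0.5*_b[Vsigma:_cons]))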

Using weighted least squares on Heteroskedasticity

I used the Breusch-Pagan test to check for heteroskedasticity, which I found (1st attachment). Then I used weighted least squares to mitigate it; I did this by trial and error, using [aw=1/experience]. I have attached the results (2nd attachment). Does the 2nd image still show heteroskedasticity?

Also, I am running two regressions. This one (the images attached) is for returns to education for white ethnicity in the UK; my other regression, which does not show any heteroskedasticity, is for returns to education for ethnic minorities in the UK. Although I do not have any heteroskedasticity in the ethnic-minorities regression, do I still need to use weighted least squares? Both regressions use the same variables and I don't want to create any bias in either one.

I am new to Stata and any responses will be much appreciated.
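For concreteness, a minimal sketch of the workflow described, with placeholder variable names (logwage, educ, and experience are not from the post):

Code:
* Hedged sketch: Breusch-Pagan test, then analytic weights as described
regress logwage educ experience
estat hettest                              // Breusch-Pagan / Cook-Weisberg test
regress logwage educ experience [aw=1/experience]

Note that heteroskedasticity-robust standard errors (regress ..., vce(robust)) are a common alternative that avoids choosing weights, and applying the same estimator to both samples keeps the two regressions comparable.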

Calculate % coefficient effect after regression

Hey everybody,
I have a regression:

Code:
xtreg IMD ulc GDPpc capitalform mor sen  pop pat fd van_index i.year , fe ro
and I would like to calculate the % effect of ulc, which is the ratio of its coefficient to the linear prediction (multiplied by 100). However, I don't know how to calculate the "linear prediction" after the FE regression (the coefficient of ulc in my model is -0.018).
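A hedged sketch of one way to compute this; whether the linear prediction should include the estimated fixed effect (option xbu instead of xb) depends on the intended definition:

Code:
xtreg IMD ulc GDPpc capitalform mor sen pop pat fd van_index i.year, fe robust
predict xbhat, xb                  // linear prediction; use xbu to add the fixed effect
summarize xbhat, meanonly
display "% effect of ulc = " 100*_b[ulc]/r(mean)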

Thank you in advance!
Stay safe!
Sincerely,
John Economou.

[HELP] Merging two graphs into one?

Hello Statalist,

I am trying to create a graph that visually reflects, in one figure, the proportion of children covered by WIC by noncitizen tercile over time, combining the data for the states of Washington and California.

Before creating the graphs, I did some light cleaning to set some limits for the graphs:
Code:
format %tmNN/CCYY date
drop if wic_tot_per==.
drop if year<2015
drop if year>2019
bysort date noncit_co_ter: egen wic_per_ter_tot= mean(wic_tot_per)
I've come up with the following two commands for the two separate graphs I would like to merge into one. The lines look correct and are what I want; I just want them all in one graph:

Code:
twoway (line wic_per_ter_tot date if noncit_co_ter==1  & state_name=="Washington",  lwidth(medthick) lpattern(solid)) || (line wic_per_ter_tot date  if noncit_co_ter==2 & state_name=="Washington",  lwidth(medthick) lpattern(solid)) || (line wic_per_ter_tot date  if noncit_co_ter==3   & state_name=="Washington",  lwidth(medthick) lpattern(solid))
Code:
twoway (line wic_per_ter_tot date if noncit_co_ter==1  & state_name=="California",  lwidth(medthick) lpattern(solid)) || (line wic_per_ter_tot date  if noncit_co_ter==2 & state_name=="California",  lwidth(medthick) lpattern(solid)) || (line wic_per_ter_tot date  if noncit_co_ter==3   & state_name=="California",  lwidth(medthick) lpattern(solid))
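A hedged sketch of one way to get everything into a single figure: draw the three tercile lines once and panel by state with by() (the legend labels are assumptions about the tercile coding). Note also that the egen call above pools both states when computing wic_per_ter_tot; if the tercile means should be computed within state, state_name may need to be added to the bysort varlist.

Code:
twoway (line wic_per_ter_tot date if noncit_co_ter==1, lwidth(medthick) lpattern(solid)) ///
       (line wic_per_ter_tot date if noncit_co_ter==2, lwidth(medthick) lpattern(dash))  ///
       (line wic_per_ter_tot date if noncit_co_ter==3, lwidth(medthick) lpattern(dot)),  ///
       by(state_name) legend(order(1 "Tercile 1" 2 "Tercile 2" 3 "Tercile 3"))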
In case it is of interest, here is the dataset I am working with (no worries, this is all public data):

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str21 state_name double(date wic_tot_per) byte noncit_co_ter float wic_per_ter_tot
"Washington" 660  2.763554020049756 1 2.3100698
"California" 660 1.1407267827655025 1 2.3100698
"California" 660 3.1217586804290174 1 2.3100698
"California" 660  1.178381891042998 1 2.3100698
"California" 660  2.250131424829609 1 2.3100698
"Washington" 660 2.9252925292529253 1 2.3100698
"California" 660  1.771440915964571 1 2.3100698
"Washington" 660 2.0967297446261255 1 2.3100698
"Washington" 660  5.624109793272367 1 2.3100698
"Washington" 660 1.8868417321483206 1 2.3100698
"Washington" 660  2.763554020049756 1 2.3100698
"California" 660 3.3190679980979554 1 2.3100698
"California" 660  3.710636423695221 1 2.3100698
"California" 660 1.3392350471002157 1 2.3100698
"California" 660 1.8558118222086453 1 2.3100698
"Washington" 660  2.032551386872469 1 2.3100698
"California" 660 2.3624494870997825 1 2.3100698
"Washington" 660 1.6469081837649293 1 2.3100698
"Washington" 660 2.0967297446261255 1 2.3100698
"California" 660  .9148711460416342 1 2.3100698
"Washington" 660  2.774021021014877 1 2.3100698
"Washington" 660 2.4806252355283207 1 2.3100698
"Washington" 660  2.135295173156289 1 2.3100698
"Washington" 660 2.4993688462509467 1 2.3100698
"Washington" 660 1.1703684000531984 1 2.3100698
"California" 660 2.7238519709202498 1 2.3100698
"California" 660 2.6337562051847767 1 2.3100698
"California" 660 3.8805352638322854 1 2.3100698
"Washington" 660 2.4711532398747345 1 2.3100698
"California" 660 1.2812242809796026 1 2.3100698
"California" 660 3.6760357331839266 1 2.3100698
"Washington" 660  1.833286921006219 1 2.3100698
"Washington" 660 2.4993688462509467 1 2.3100698
"California" 660  2.072829131652661 1 2.3100698
"California" 660  2.778694838780676 1 2.3100698
"Washington" 660 2.1868014384293906 1 2.3100698
"Washington" 660 1.7630990812018872 1 2.3100698
"California" 660 2.9596585410903073 1 2.3100698
"Washington" 660  2.347980257798649 1 2.3100698
"Washington" 660 3.3333775308609237 1 2.3100698
"California" 660  .9148711460416342 1 2.3100698
"Washington" 660 1.1074002460889436 1 2.3100698
"Washington" 660 1.5812001329787235 1 2.3100698
"Washington" 660 3.4874061976431494 1 2.3100698
"Washington" 660 1.7332679382345486 1 2.3100698
"California" 660  2.376215738284704 1 2.3100698
"Washington" 660  5.039659508609015 1 2.3100698
"California" 660 1.8162699550712167 1 2.3100698
"Washington" 660  2.139284703217892 1 2.3100698
"Washington" 660 2.1629118716497358 1 2.3100698
"Washington" 660  .5029709805468576 1 2.3100698
"California" 660  1.178381891042998 1 2.3100698
"Washington" 660 1.3418614618706934 1 2.3100698
"California" 660 3.8805352638322854 1 2.3100698
"California" 660  .9424709596346584 1 2.3100698
"California" 660  1.979101979101979 1 2.3100698
"California" 660 4.6933844471741155 1 2.3100698
"Washington" 660 1.7332679382345486 1 2.3100698
"California" 660 2.5613808381022705 1 2.3100698
"Washington" 660 3.5409995147986413 1 2.3100698
"California" 660  1.719969841187523 1 2.3100698
"Washington" 660 1.4075811828620817 1 2.3100698
"Washington" 660  2.032551386872469 1 2.3100698
"Washington" 660  .5029709805468576 1 2.3100698
"Washington" 660 2.4711532398747345 1 2.3100698
"Washington" 660 1.1074002460889436 1 2.3100698
"California" 660 2.9596585410903073 1 2.3100698
"California" 660 3.0791116005873715 1 2.3100698
"Washington" 660 2.8004000571510215 1 2.3100698
"Washington" 660  3.184105785993882 1 2.3100698
"Washington" 660  2.045833219524187 1 2.3100698
"Washington" 660  3.039549152994531 1 2.3100698
"California" 660 1.4276995120836917 1 2.3100698
"Washington" 660   2.44277571699991 1 2.3100698
"Washington" 660  2.019512571340429 1 2.3100698
"California" 660 1.1359795146300917 2 3.4305794
"California" 660  3.174630427690881 2 3.4305794
"California" 660   5.62963621626613 2 3.4305794
"Washington" 660 7.1449015379033725 2 3.4305794
"California" 660 1.1359795146300917 2 3.4305794
"California" 660 2.5037714468341723 2 3.4305794
"California" 660 1.6692068155425102 2 3.4305794
"California" 660 3.4556528946246488 2 3.4305794
"California" 660  2.907771810572337 2 3.4305794
"California" 660 2.3823917668923467 2 3.4305794
"California" 660  5.932657490092144 2 3.4305794
"California" 660  4.213318237094812 2 3.4305794
"California" 660  5.389516574677594 2 3.4305794
"California" 660  4.047489434556583 2 3.4305794
"California" 660 2.3182797527924253 2 3.4305794
"California" 660 1.6692068155425102 2 3.4305794
"California" 660  4.213318237094812 2 3.4305794
"California" 660  2.326776796366946 2 3.4305794
"California" 660  3.174630427690881 2 3.4305794
"California" 660  4.773905529953917 2 3.4305794
"California" 660 3.7699565538466153 2 3.4305794
"California" 660 2.5037714468341723 2 3.4305794
"California" 660  5.775409947073305 3  4.581469
"California" 660  4.734590248718792 3  4.581469
"California" 660 3.6975896988393724 3  4.581469
end
format %tmNN/CCYY date

massive reshape long terminated by error r(608)

Hi Everyone,

I was troubled by error code r(608) when running "reshape long" on a large dataset. There is nothing wrong with the reshape command itself, and it runs fine at the command prompt (it takes a long while, though). However, if I run the same line of code from the do-file editor, I receive the following error: "file XX.tmp cannot be modified or erased; likely cause is read-only directory or file." The same error appears if I run the entire do-file containing the reshape line. I looked it up on the forum and tried adding a sleep command after the reshape command; sadly, that did not help. Because I need to execute the reshape on 12 datasets, I would very much like to avoid error r(608) and run it in a do-file. Do you have any suggestions?
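A hedged note, not a confirmed diagnosis: r(608) on .tmp files is often caused by an antivirus scanner or a sync tool (Dropbox, OneDrive) briefly locking Stata's temporary directory, which long do-file runs hit more readily than interactive ones. One first step is to find out where that directory is and exclude it from scanning/syncing, or point the STATATMP environment variable at a local, unscanned folder before launching Stata:

Code:
* Hedged sketch: locate the directory Stata uses for temporary files
tempfile probe
display "`probe'"        // the path shown is where reshape's .tmp files live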

Best,
Karen

selection test algorithm

Dear statalist member,

I am trying to conduct a selection-bias test for my analysis, but I can't seem to come up with an algorithm for implementing it in Stata.

Here is a subset of the dataset

Code:
input byte project_type long start_date double project_id int manager_id
1 17599   52000471 194
1 17478   83000442 206
1 16869   62000028 214
1 16917   62000054 214
1 16974   45006794 216
2 17021   45007248 216
2 17275   45009016 216
2 17329   45009408 216
2 17333   45009422 216
3 17360   79000073 216
3 17373   45009664 216
3 17436   45009892 216
3 17457   45010174 216
3 17480   45010360 216
3 17508   45010381 216
3 17541   45010657 216
3 17553   45010451 216
4 17574   45010819 216
4 17584   45010902 216
4 17597   45010951 216
4 17603   45011012 216
4 17668   45011378 216
4 17728   45011644 216
4 17967   45012631 216
4 17858   48004687 237
4 17282   67000265 286
4 17968   80000702 291
end

and here is what I am trying to do


Step 1. Count the managers who started (start_date) a project of the same project_type as the current observation within a +/- 1 month window. Alternatively, it might be easier to use the previous/next 10 occurrences?
Step 2. Store the result of Step 1 as the variable n_count.
Step 3. Using expand n_count, create a new dataset.
Step 4. In the expanded dataset, replace the project_id values of the new observations (all except the original observation) with the project_id values of the managers who started this type of project during the +/- 1 month window (the same managers counted in Step 1).
Step 5. Use the new dataset to calculate the "selection bias correction" value to include in the regression on the original dataset.

I can't figure out how to complete Steps 1 and 4 other than manually, which would take forever given that my original dataset has 20,000 observations.
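A hedged sketch for Steps 1 and 4, assuming start_date is a daily date variable so that +/- 1 month can be approximated by +/- 30 days. Step 1 uses rangestat (from SSC); Step 4 uses a self-joinby, which expands each observation by all same-type projects started in the window (a close cousin of the expand-and-replace plan). Note the count is of project starts, not distinct managers, and the window includes the observation itself.

Code:
* Step 1: count same-type project starts within +/- 30 days (SSC package)
ssc install rangestat
rangestat (count) n_count = start_date, interval(start_date -30 30) by(project_type)
replace n_count = n_count - 1            // exclude the observation itself

* Step 4: pair each project with every same-type project in the window
preserve
keep project_type start_date project_id manager_id
rename (start_date project_id manager_id) (start_date2 project_id2 manager_id2)
tempfile pool
save `pool'
restore
joinby project_type using `pool'
keep if abs(start_date - start_date2) <= 30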


I hope someone can help with this or suggest an alternative approach.

How do I create a cumulative index?

Hello,

I am trying to perform an operation that would be very easy in Excel or Python, but I have tried different approaches and can't figure out how to do it in Stata.

This is what the pattern of my data looks like; in reality there are many more groups and observations:
Group  Counter  Deduct  Desired aux. variable  Desired output
A      1        0       0                      1
A      2        0       0                      2
A      3        1       1                      2
A      4        0       1                      3
A      5        0       1                      4
A      6        0       1                      5
A      7        0       1                      6
A      8        1       2                      6
A      9        0       2                      7
A      10       0       2                      8
A      11       0       2                      9
A      12       1       3                      9
B      1        1       1                      0
B      2        0       1                      1
B      3        0       1                      2
B      4        1       2                      2
B      5        0       2                      3
...    ...      ...     ...                    ...
The three columns on the left are what I have; the fourth column is a kind of cumulative index that I tried to create in order to reach my desired output in the fifth column. The fifth column is simply the second column minus the fourth.

My first thought was to write
Code:
gen desired_aux_variable = 0,
replace desired_aux_variable = desired_aux_variable + 1 if deduct ==1 else desired_aux_variable = desired_aux_variable[_n-1]
but then I learned about the difference between if commands and if qualifiers and saw that the latter don't allow an else statement. I then looked for a way to process the observations line by line, but I read that such an approach is atypical for Stata.

Which other approach can I use in this situation?
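A hedged sketch, assuming the columns are stored as lowercase variables group, counter, and deduct: a by-group running sum reproduces both the auxiliary variable and the desired output in the table above.

Code:
bysort group (counter): gen desired_aux_variable = sum(deduct)   // running total of deduct
generate desired_output = counter - desired_aux_variable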

Testing linearity with a plot of standardized residuals

Hi forum!
I have a question for you. I tried to assess the linearity assumption of my multiple linear regression model by plotting the standardized residuals against the values of my predictors, but I'm not sure this is the best way to do that. Below is an example of what I've done.
Code:
. regress consumilog disoccup inattivitàlog dem_impreselog retrib_medialog componenti_f
> amsqr

      Source |       SS       df       MS              Number of obs =     107
-------------+------------------------------           F(  5,   101) =  161.28
       Model |  5.44754193     5  1.08950839           Prob > F      =  0.0000
    Residual |  .682297905   101  .006755425           R-squared     =  0.8887
-------------+------------------------------           Adj R-squared =  0.8832
       Total |  6.12983984   106  .057828678           Root MSE      =  .08219

-----------------------------------------------------------------------------------
       consumilog |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
------------------+----------------------------------------------------------------
         disoccup |  -.0139411   .0028199    -4.94   0.000     -.019535   -.0083473
    inattivitàlog |  -.6843887   .0814612    -8.40   0.000    -.8459859   -.5227915
   dem_impreselog |    .015953   .0126544     1.26   0.210      -.00915     .041056
  retrib_medialog |   .3489728   .1838609     1.90   0.061    -.0157578    .7137034
componenti_famsqr |   .0385899   .0133636     2.89   0.005     .0120802    .0650997
            _cons |   6.341054   1.937408     3.27   0.001     2.497758    10.18435
-----------------------------------------------------------------------------------
Code:
predict consumires, rstandard
Code:
 scatter consumires disoccup
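As a hedged aside, Stata ships several post-estimation diagnostics for exactly this purpose; they must be run immediately after the regress call above:

Code:
rvfplot                        // residuals versus fitted values
rvpplot disoccup               // residuals versus a single predictor
acprplot disoccup, lowess      // augmented component-plus-residual plot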

How to take fixed effects into account using "bysort i"

Hi,

I would like to estimate an equation using panel data, but I want estimates for every country separately. I can obtain one estimate for all countries (panel data) with fixed effects, but when I want to look at the effects in only one country and use "bysort i:" (where "i" is a country), it does not take the fixed effects into account. Do you know how to handle this?

I get the same results from the two different commands (which I think is a problem, as the fixed effects are not taken into account with "bysort i:"):

Code:
bysort i: xtivreg2 dlogsigma KAPITALSKAT INF ÅBEN VALUTA YMIDDEL STAT INFSD ÅBENSD VALUTASD STATSD, fe robust
bysort i: reg dlogsigma KAPITALSKAT INF ÅBEN VALUTA YMIDDEL STAT INFSD ÅBENSD VALUTASD STATSD
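A hedged note rather than a definitive answer: within a single country, a country fixed effect is just a constant absorbed by the intercept, which would explain why the two commands return identical slopes. If the fixed effects you want to keep are time effects, they can stay in the per-country runs as time dummies (t below is an assumed time-variable name):

Code:
* Hedged sketch: per-country regressions retaining time dummies
bysort i: reg dlogsigma KAPITALSKAT INF ÅBEN VALUTA YMIDDEL STAT INFSD ÅBENSD VALUTASD STATSD i.t, robust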

Many thanks.

Error with the XTLSDVC command

I have run the xtlsdvc command for a panel estimation and got the following error; I would appreciate it if someone could tell me what to do. I have only established that I cannot use the bootstrap option vcov(#), but without this option the standard errors and p-values are not displayed.
Code:
xtlsdvc y x1 x2 x3 x4, initial(ab) bias(3) vcov(500)
factor-variable and time-series operators not allowed

command -> xtlsdvc_b longterm debt debt2 inflation grdp_gr ca ecb elections europe fitch2 councils lnm3 , initial(ab) bi(3)
an error occurred when command was executed on original dataset
please rerun and specify the trace option to see a trace of the commands executed
r(101);
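A hedged note: the message suggests that somewhere in the actual call, factor-variable or time-series notation is being rejected; xtlsdvc (and its bootstrap replays) predates those operators. One common workaround is to generate any dummies explicitly before calling it; year and the yr* range below are assumptions to adapt to your data:

Code:
* Hedged sketch: explicit dummies instead of i.year-style notation
tabulate year, generate(yr)                        // creates yr1, yr2, ...
xtlsdvc y x1 x2 x3 x4 yr2-yr20, initial(ab) bias(3) vcov(50)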


Ridge Parameter K

Can somebody please tell me how to calculate the ridge parameter k, which determines by how much the coefficients are shrunk?
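There is no single answer; k is a tuning constant. As one classical, hedged example, the Hoerl-Kennard-Baldwin choice sets k = p*s^2/(b'b) from an OLS fit (y x1 x2 x3 are placeholder names):

Code:
* Hedged sketch: Hoerl-Kennard-Baldwin ridge parameter from OLS output
regress y x1 x2 x3
matrix b  = e(b)                         // row vector: slopes then _cons
matrix bs = b[1, 1..colsof(b)-1]         // drop the intercept
matrix bb = bs * bs'                     // 1x1 matrix holding b'b
scalar k  = e(df_m) * e(rmse)^2 / bb[1,1]
display "HKB ridge parameter k = " k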

Calling a program inside a loop

Dear All,

I have been trying to call a certain program inside a loop. (I have placed the program definition outside the loop, as placing it inside caused problems.)

Code:

//program drop name
program name
drop `2'
end

clear
local myfilelist : dir "/Users/adrianomariani/Desktop/Research Assistant scheme/Forecasts excel" files "*.xlsx"

foreach filename of local myfilelist {

.....

findname instit , not local(g)
name `g'
program drop name

......

}

When I run this loop, it stops at the first call to 'name'. The program executes its command (it drops the variable in the second position of the list of variables provided by 'findname' and stored in local macro 'g'), yet it then halts the loop with the error message r(111), "variable E not found", where E is the variable that was just successfully dropped.
I have checked whether there might be a problem with the ordering of the variables, but this is not the case (and even if there were, the program should still drop the second variable in the list provided by the local).

If I run this outside of a loop, i.e., if I do something like:

Code:
//program drop name
program name
drop `2'
end

findname instit , not local(g)
name `g'
program drop name

It runs without problems: the variable is dropped and no error message is returned. Thus I believe this specific problem occurs only when the program is called within a loop.

Any suggestions would be largely appreciated.

Thanks to all
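A hedged sketch of a more defensive pattern (offered as a workaround, not a diagnosis of the r(111) itself): define the program once, outside the loop, with explicit parsing, and guard the drop with capture so a missing variable cannot abort the loop:

Code:
capture program drop dropsecond
program define dropsecond
    syntax varlist
    local second : word 2 of `varlist'
    capture drop `second'           // ignore the error if already gone
end

foreach filename of local myfilelist {
    // ... load `filename' ...
    findname instit, not local(g)
    dropsecond `g'
}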

Ordinal or negative binomial regression

I have an outcome variable representing the number of aggravating clinical occurrences; for example, if someone is obese with high triglycerides and high blood pressure, the variable takes the value 3. I first thought to analyze it as a count variable using Poisson regression, but I found that it does not follow the Poisson distribution. The histogram shows an approximately normally distributed variable; however, with only seven possible values (0-6), linear regression is not appropriate. Then I considered that each level of the variable is worse than the previous one (a higher burden on health), so ordinal regression may be suitable. What would be the best regression to use with this variable as the dependent (outcome) variable?
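A hedged sketch of the ordered-logit option described (the covariate names are placeholders):

Code:
ologit n_occurrences age sex bmi
ologit, or                     // redisplay the coefficients as odds ratios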

parameter K (ridge regression)

How do I calculate the ridge parameter k in ridge regression?

Table with correlations per sector (dataset containing observations from different sectors)

Hello

I have panel data comprising companies from multiple economic sectors. I am trying to create a table containing, for each economic sector (sector), the correlations between a variable Y and several variables X1 X2 X3. Please note that I am not interested in the correlations among X1, X2, X3. I have looked at the statsby documentation but can't find how to do this. Does anybody know of a command that would do it? Any help would be much appreciated!

Kind regards,
Joao
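A hedged sketch using statsby, one pass per X variable, then merging the per-sector results into a single table (the corr_* file names are placeholders):

Code:
foreach x in X1 X2 X3 {
    preserve
    statsby r_`x' = r(rho), by(sector) clear: correlate Y `x'
    save corr_`x', replace
    restore
}
preserve
use corr_X1, clear
merge 1:1 sector using corr_X2, nogenerate
merge 1:1 sector using corr_X3, nogenerate
list sector r_X1 r_X2 r_X3, clean
restore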

Help needed to create a (potentially simple) variable

Hi,

I need to create a variable that sums the number of 1s each participant has accumulated so far. The variable I need to create is I2. On the left of the image is an example of some data with the I2 column blank; on the right, the I2 column is filled in with the correct values. In Stata, how would I create variable I2?

So, for example, participant 14973 had a 0 in I1 at the first date, so I2 = 0. At the next date, that participant had a 1 in I1, so I2 = 1. At the next date, that participant had a 0 in I1, so I2 = 1 (I2 is a running total of how many 1s in I1 the participant has accumulated so far), etc.

I would like to know how to create that variable in Stata; I have so many participants that it would be impossible to do this individually for each person.

Please any help would be greatly appreciated!

Many thanks!

[attachment: screenshot of example data showing I1 and the desired I2 column]
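A hedged sketch, assuming the data contain variables named participant_id, date, and I1 as described: a running sum within participant, ordered by date, produces I2.

Code:
bysort participant_id (date): generate I2 = sum(I1)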

PVAR problems: how to run pvarsoc command

Hello everyone,
I am new to PVAR methods and I am having issues trying to run the pvarsoc command to determine the optimal lag. The id variable is the id of the respective country in the panel. See the output below. Can someone tell me why Stata is stating that the panel variable is not set?

. xtset id
panel variable: id (unbalanced)

. xtset TIME
panel variable: TIME (unbalanced)

. pvarsoc RealHouseCost RealGDPGrowth
Running panel VAR lag order selection on estimation sample
panel variable not set; use xtset varname ...
r(459);
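A hedged reading of the output above: each xtset varname call replaces the previous setting, so the second call made TIME the panel variable and left no time variable declared. Declaring both in one call should satisfy pvarsoc:

Code:
xtset id TIME
pvarsoc RealHouseCost RealGDPGrowth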

Create a dummy with 2 condition

Hi everyone,
I am a university student, so I am a beginner with Stata.
My question is simple:
I have 2 dummies; Fondation (0,1) and Export (0,1).
I need to create a new dummy that takes the value 1 only if both Fondation and Export equal 1.
What command should I use?
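A hedged sketch (it assumes both dummies are coded 0/1):

Code:
generate both = (Fondation == 1) & (Export == 1)
replace both = . if missing(Fondation, Export)   // optional: propagate missings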
Thank you, Michele :D

Esttab produces wrong results

Hello,

I ran logit regressions using eststo for three different countries and then put the results together in a nice format using esttab. The problem arises when I get odds ratios lower than 1, which are displayed with a leading dot instead of a leading zero (for example .7 instead of 0.7): esttab not only changes this to a minus sign but also displays a different odds ratio.

In the example below, you can see it if you run the following commands:

Code:
eststo: quietly logit achieved_all  i.rural i.not_poor i.Bicycle i.Motor_cycle i.car_all_type  inc_cap_oecd_doll indirect clean_fuels tot_resid_fuels share_dirty_fuels F2 F8 F7 F9  F11 if country_id==1, or
esttab,pr2  scalars (chi2)  drop(   0.rural  0.not_poor 0.Bicycle 0.Motor_cycle 0.car_all_type ) wide compress
Does anyone know how to fix this? I need esttab to export the results in a nice-looking format to Excel, but I want these results to be correct.
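A hedged note on one likely cause, not confirmed here: eststo stores the log-odds even when logit is run with the or option, so esttab prints negative log-odds for any odds ratio below 1, which looks like a sign flip and a "different" number. esttab's eform option exponentiates the stored coefficients back into odds ratios:

Code:
esttab, eform pr2 scalars(chi2) drop(0.rural 0.not_poor 0.Bicycle 0.Motor_cycle 0.car_all_type) wide compress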

Thank you for your help,

Cheers,

Marta

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(country_id achieved_all rural not_poor Bicycle Motor_cycle car_all_type inc_cap_oecd_doll indirect clean_fuels tot_resid_fuels share_dirty_fuels F2 F8 F7 F9 F11)
1 0 1 0 0 0 0  757.9745  1.818205 1.3209975 1.3209975 0        0  .7429588  1.0246736 1.1186874   1.0292457
1 0 1 1 1 0 0 2673.8345 1.4272217 1.8167483 1.8167483 0        0  .7429588   .2917905  .9167902    .1812828
1 1 0 1 1 0 0  991.9992  5.540638  2.540866  2.540866 0        0  .7429588  1.0246736 1.1186874    .1787506
1 1 0 1 1 1 0         0     5.508  2.800494  2.800494 0 .9923012  .7429588  1.0427761 1.1186874   1.0267136
1 1 0 1 0 0 0 1706.7218 3.9105165  6.034572  6.034572 0        0  .5373792  1.0246736 1.1186874   1.0267136
1 1 0 0 0 1 0   3462.32  4.135731   3.46402   3.46402 0 .4961506  .7429588  1.0427761 1.1186874    .1787506
1 1 0 1 0 1 0 2483.2886 10.465042 3.6580694 3.6580694 0 .4961506  .7429588   .9914134 1.1186874    .1787506
1 1 0 1 0 0 0 1280.9094  2.086266 1.8023653 1.8023653 0        0  .5373792  1.0246736 1.1186874    .1812828
1 1 0 1 1 1 0 4134.4067   8.81647  3.972038  3.972038 0 .9923012  .7429588  1.0427761 1.1186874    .1787506
1 1 0 1 0 0 0 1358.3536  3.374758 1.0038222 1.0038222 0        0  .7429588   .9914134 1.1186874    .1787506
1 1 0 0 0 1 0  1490.486  6.854953  4.976077  4.976077 0 .4961506  .7429588  1.0246736 1.1186874   1.0267136
1 0 0 1 0 0 0 1089.1621 4.0606194  7.891852  7.891852 0        0  .7429588  1.0427761 1.1186874    .1787506
1 0 1 0 1 0 0  574.3931 1.6343614  .3921904  .3921904 0        0  .5373792          0 .10576556           0
1 0 1 1 1 0 0 139.79893  .7215557 2.3682427 2.3682427 0        0 .20557962  1.0246736  .5258843    .1787506
1 0 1 0 1 0 0  276.6452  .7938556 .10501961 .10501961 0        0  .5373792 .018102502  .3873815    .1812828
1 0 0 1 1 1 0  8215.799  3.288267  2.196237  2.196237 0 .4961506  .7429588  1.0246736 1.1186874    .1787506
1 0 0 1 0 0 0  23319.09 18.159407  2.623492  2.623492 0        0  .9305042  1.0246736 1.1186874   1.0267136
1 1 0 1 0 1 0 3734.9194  8.443437 4.6054773 4.6054773 0 .4961506  .7429588  1.0427761 1.1186874    .1787506
1 0 0 1 0 1 0  6255.727  8.710135  3.639527  3.639527 0        0  .7429588  1.0246736  .9167902   1.0267136
1 0 0 1 0 0 0  1375.408 2.4173565   1.29989   1.29989 0        0  .7429588  1.0246736 1.1186874    .1812828
1 0 1 1 1 1 0  6045.034  6.661806 1.5452955 1.5452955 0 .4961506 1.1360838  1.0246736  .9167902    .1812828
1 0 1 0 0 0 0 546.68915   .623929 .03063987 .03063987 0        0         0          0  .5258843    .1787506
1 0 0 1 1 0 0 2222.9133 1.0951575  .3325109  .3325109 0        0  .7429588  1.0246736 1.1186874    .1787506
1 0 0 0 0 0 0   750.361  8.661407 2.1615949 2.1615949 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 0 0 0 0 1 0  501.7462 2.0649343  2.773471  2.773471 0 .4961506  .7429588  1.0246736 1.1186874    .1787506
1 0 1 0 1 0 0  289.3862  .4500268 .13277277 .13277277 0        0  .5373792          0  .7277814 -.002532199
1 1 0 1 1 1 1  7724.405  5.886311  2.737499  2.737499 0 .9923012  .7429588   .9914134 1.1186874   1.0267136
1 1 0 1 1 0 0  521.1075  8.950868 4.2190456 4.2190456 0        0  .7429588  1.0246736 1.1186874    .1812828
1 0 0 0 0 0 0  380.3408 2.1151793  1.886821  1.886821 0        0  .7429588  1.0246736  .9167902   1.0267136
1 0 1 0 1 0 0 272.80994  12.74034 3.6933355 3.6933355 0        0  .7429588  .24042773  .5258843    .1787506
1 0 1 1 1 1 0  738.4012  .8541672 .13288856 .13288856 0        0  .7429588  .06946525  .4151386    .1787506
1 1 0 1 0 0 0  2456.785 2.4539504 3.6827474 3.6827474 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 1 1 1 1 0 0 378.91965 1.8898576 1.7207745 1.7207745 0 .4961506  .7429588  1.0427761 1.1186874    .1787506
1 0 0 1 0 1 0 2846.7114 2.0611656 4.4000807 4.4000807 0 .4961506 1.1360838  1.0427761 1.1186874   1.0267136
1 0 1 1 0 1 0  2250.412  5.783936 1.6091415 1.6091415 0        0  .7429588  1.0427761  .9167902    .1812828
1 1 0 1 0 0 0  1278.676  2.058258 2.0978172 2.0978172 0 .4961506  .7429588  1.0246736 1.1186874   1.0267136
1 1 0 1 0 1 0  51.34093 1.5561166  3.349987  3.349987 0 .9923012  .7429588  1.0427761 1.1186874    .1812828
1 0 1 1 0 0 0  476.5765  2.985764 1.2590555 1.2590555 0        0  .5373792  1.0427761 1.1186874           0
1 0 0 1 0 0 0 1908.5078 10.664527  4.489582  4.489582 0 .4961506  .7429588  1.0246736 1.1186874   1.0267136
1 0 1 1 1 0 0  815.5109  .8282692   .140068   .140068 0        0  .5373792          0 .27663592    .1787506
1 0 1 0 0 0 0   41.9593 .49353355 .26947635 .26947635 0        0         0  .05136275  .2442683    .1787506
1 0 1 1 0 0 0  479.6558  3.404394  2.752832  2.752832 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 0 0 1 0 0 0  886.7067  3.116522 2.4641626 2.4641626 0        0 1.1360838  1.0246736 1.1186874   1.0267136
1 0 1 1 0 0 0 428.92395  2.911554  .1823392  .1823392 0        0  .5373792  .05136275  .3873815    .1787506
1 1 0 1 1 0 0  1235.829  2.055065   .628662   .628662 0        0  .7429588   .9914134 1.1186874   1.0292457
1 1 0 1 1 1 0  2643.436  2.485381 1.9447215 1.9447215 0        0  .7429588  1.0246736 1.1186874    .1812828
1 0 0 1 0 0 0  967.5109 1.8017677  2.411617  2.411617 0        0  .5373792   .9733109 1.1186874   1.0267136
1 0 1 1 1 0 0  784.4089  1.990735 .05106645 .05106645 0        0  .7429588 .018102502  .4151386    .1787506
1 0 0 0 0 0 0  613.3909 2.0426724 2.2747047 2.2747047 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 0 0 0 0 0 0  2751.211   2.99116 3.8082156 3.8082156 0 .4961506  .7429588  1.0246736 1.1186874   1.0267136
1 0 0 1 0 0 0  14484.42 2.1035259 1.9708703 1.9708703 0        0  .7429588   .9914134  .9167902   1.0267136
1 0 0 1 0 1 0 1591.8506 3.1408675  3.718132  3.718132 0 .4961506  .7429588  1.0246736 1.1186874    .1787506
1 1 0 1 0 0 0 2803.3215  5.308203 1.9413234 1.9413234 0        0  .5373792  1.0246736 1.1186874   1.0267136
1 0 0 1 0 1 0  2860.043  4.305994 2.0286179 2.0286179 0        0  .7429588  .24042773 1.1186874    .1787506
1 0 0 1 0 1 0  893.7509  4.813969  1.370874  1.370874 0 .4961506  .7429588  1.0427761 1.1186874   1.0267136
1 0 0 1 0 0 0  1474.343 2.0205126 1.6385516 1.6385516 0        0  .7429588   .9733109 1.1186874   1.0267136
1 1 0 1 0 0 0  2623.758  3.017717  3.140761  3.140761 0        0  .7429588  1.0427761 1.1186874   1.0292457
1 0 1 0 0 0 0  685.8723  4.924563  3.944714  3.944714 0        0  .7429588   .9914134  .5258843    .1812828
1 0 0 0 0 1 1  1841.878  43.23208 4.3874226 4.3874226 0 .4961506  .7429588  1.0246736 1.1186874    .1787506
1 0 0 0 0 0 0 1524.4933   8.48151  6.750727  6.750727 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 1 0 0 1 0 0  1881.155 2.0411909 2.2448184 2.2448184 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 0 1 1 1 0 0 14438.918  6.117969  2.534173  2.534173 0        0  .7429588  1.0427761  .9167902    .1812828
1 0 0 1 0 0 0 1600.8265 15.319732 1.9584242 1.9584242 0        0  .5373792  1.0427761 1.1186874   1.0267136
1 0 0 1 1 0 0 1370.4452  4.357891  3.068029  3.068029 0        0  .7429588   .9733109 1.1186874    .1787506
1 0 0 0 0 1 0  901.9169 4.3425703   .925101   .925101 0        0  .7429588  1.0246736 1.1186874    .1787506
1 0 1 0 1 0 0  772.8143  .6059626  .4289582  .4289582 0        0         0          0  .7277814           0
1 0 0 1 0 0 0  2476.276    3.9078 1.7750047 1.7750047 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 0 0 0 0 0 0  656.4601  .9557348 2.0507808 2.0507808 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 0 1 1 1 1 0  7146.722  7.156996 2.5807204 2.5807204 0        0  .7429588   .7509856 1.1186874    .1812828
1 0 0 0 0 0 0  8356.845 1.3894944   2.42955   2.42955 0        0  .7429588  1.0427761  .9167902    .1787506
1 1 0 1 0 0 0  2722.076 10.581377  6.614033  6.614033 0 .4961506  .7429588   .9914134 1.1186874   1.0267136
1 1 0 1 0 1 0 2003.3496 14.024404  2.992872  2.992872 0 .4961506  .5373792  1.0427761 1.1186874    .1812828
1 1 0 1 0 0 0  2455.269   2.03014   1.74597   1.74597 0        0  .7429588  1.0246736  .9167902   1.0267136
1 1 0 0 1 0 0  4003.053  6.668846  3.325474  3.325474 0        0  .7429588  1.0427761 1.1186874   1.0267136
1 1 0 1 0 0 0  2817.865   8.70369 2.3510478 2.3510478 0        0  .7429588  1.0246736 1.1186874   1.0267136
1 0 1 1 1 0 0  224.4214  .8666341 .09191962 .09191962 0        0  .5373792 .018102502  .5258843 -.002532199
1 0 0 1 0 0 0  1796.755  3.286916 2.2554588 2.2554588 0 .4961506  .7429588   .9733109 1.1186874   1.0267136
1 0 0 1 0 0 0 246.34174  .8734494   .694689   .694689 0        0         0   .9733109 1.1186874   1.0292457
1 1 0 1 0 0 0  1664.386  4.675694  2.916072  2.916072 0        0  .9305042  1.0246736 1.1186874   1.0267136
1 0 1 1 1 0 0 140.60735 1.1907806 .12255949 .12255949 0        0         0  .06946525  .5258843 -.002532199
1 0 0 1 0 1 0 3151.4595 1.5809453  3.140761  3.140761 0 .9923012  .7429588  1.0246736 1.1186874    .1787506
1 0 1 1 1 1 0 1057.8483 3.7813365  .6355289  .6355289 0        0  .7429588   .9733109  .9167902   1.0267136
1 1 0 1 0 0 0  1705.443 4.1641617  2.448053  2.448053 0        0  .7429588   .9733109 1.1186874    .1787506
1 0 0 1 1 0 0  1724.392  2.340013 3.7173524 3.7173524 0        0  .7429588  1.0427761 1.1186874    .1787506
1 1 0 1 1 1 1   9518.36  9.579969  6.614033  6.614033 0 .9923012  .7429588  1.0427761 1.1186874   1.0267136
1 0 0 1 0 0 0  966.0508 1.3245606  1.393438  1.393438 0        0  .7429588   .9914134 1.1186874   1.0267136
1 0 0 1 1 0 0  1270.177  1.747799  .8372993  .8372993 0        0  .7429588   .2917905 1.1186874    .1812828
1 1 1 1 0 0 0 1168.8821  3.062119  .1983231  .1983231 0        0  .7429588   .9914134  .5258843    .1787506
1 0 0 1 0 0 0  506.2879  2.591333 2.3306327 2.3306327 0        0  .7429588  1.0427761 1.1186874   1.0267136
1 0 0 1 0 0 0   963.946 1.0315026  2.641995  2.641995 0        0  .7429588   .9733109 1.1186874   1.0267136
1 0 0 0 0 1 1  5606.643  5.723766 1.9952846 1.9952846 0 .4961506 1.1360838  1.0246736 1.1186874    .1787506
1 0 0 0 0 0 0  1592.931   3.78114  2.540333  2.540333 0 .4961506  .7429588   .9733109  .9167902    .1812828
1 0 0 1 0 1 0  3753.912 1.7877698   2.43646   2.43646 0 .4961506 1.1360838  1.0427761 1.1186874   1.0267136
1 0 1 1 0 0 0  781.6613 1.6665493  2.318872  2.318872 0        0  .5373792  1.0246736 1.1186874    .1812828
1 0 0 1 0 0 0 2011.0865 4.7554226  9.690102  9.690102 0 .4961506  .7429588  1.0427761 1.1186874    .1787506
1 1 0 1 1 0 0  9456.769 17.186018  2.257081  2.257081 0        0 1.1360838  1.0427761 1.1186874   1.0267136
1 0 0 1 1 0 0 2077.8503 2.0123734  1.546868  1.546868 0        0  .7429588   .9914134 1.1186874   1.0267136
1 0 0 1 0 1 0 1703.4293 3.8445356  1.733584  1.733584 0        0  .7429588   .9733109 1.1186874   1.0292457
1 0 0 1 0 0 0  6551.735  5.382673  2.641995  2.641995 0        0  .7429588  1.0246736 1.1186874    .1812828
1 0 0 0 1 0 0 2225.0776 2.2129316  2.729646  2.729646 0 .4961506  .7429588  1.0427761 1.1186874    .1787506
end
label values country_id country
label def country 1 "Nepal", modify
label values achieved_all achieved_all
label def achieved_all 0 "Not achieved", modify
label def achieved_all 1 "Achieved", modify
label values Bicycle V06_05
label values Motor_cycle V06_05
label values car_all_type V06_05
label def V06_05 1 "yes", modify

Quantile regression

Hello,
I'm writing a thesis based on the RAND HIE data. I would like to run a quantile regression showing the effect of the different coinsurance plans on average spending at different income levels.

Since plans were assigned at the family level (not the individual level), the regressions cluster the standard errors on the family.


This is the original code, which looks at spending levels in the different insurance groups. It does not, however, account for the different income levels that I would like to incorporate.

Code:
quietly myqreg spending_infl rand_plan_group2 rand_plan_group3 rand_plan_group4 rand_plan_group5 rand_plan_group6 demeaned_fam_start_month_site* demeaned_cal_year*, quantile(`q')
    bs, cluster(ifamily) reps(2): myqreg spending_infl rand_plan_group2 rand_plan_group3 rand_plan_group4 rand_plan_group5 rand_plan_group6 demeaned_fam_start_month_site* demeaned_cal_year*, quantile(`q')
I have no clue how to "add another level" to the regression.
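A hedged sketch of one simple route, assuming a family income variable exists (family_income is a placeholder) and that the local q is set as in the code above: split the sample into income terciles and rerun the clustered bootstrap within each group. Interacting the plan dummies with income-group dummies inside myqreg would be an alternative if that wrapper accepts interactions.

Code:
xtile inc_ter = family_income, nq(3)        // income terciles; name assumed
forvalues g = 1/3 {
    preserve
    keep if inc_ter == `g'
    bs, cluster(ifamily) reps(200): myqreg spending_infl rand_plan_group2 rand_plan_group3 rand_plan_group4 rand_plan_group5 rand_plan_group6 demeaned_fam_start_month_site* demeaned_cal_year*, quantile(`q')
    restore
}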

Thank you in advance!