Channel: Statalist

Do you need to use the same number of data observations for different types of regression?

Hello,

I want to run a simple linear regression (one independent variable) and a multiple regression on a sample of 30,000 observations.
Let's say I can use the full sample for the simple regression, but before running the multiple regression I want to keep only the non-negative values of the additional independent variables (using something like keep if var1 >= 0 & var2 >= 0). This reduces the sample to 16,000 observations. Can I still discuss both regressions together in an unbiased way, or do I also need to use the same smaller sample of 16,000 observations for the simple regression?
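If it turns out that both models should be fit on the same sample, this is roughly how I would set that up (a sketch; y, x1, var1, and var2 are placeholder names):

Code:
* sketch: restrict both regressions to the same estimation sample
* note: missing values count as larger than any number in Stata, so exclude them explicitly
keep if var1 >= 0 & var2 >= 0 & !missing(var1, var2)
regress y x1                  // simple regression on the restricted sample
regress y x1 var1 var2        // multiple regression on the same sample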

PS: I have a good reason for removing the negative values, as they are irrelevant to my research.

Thanks!

Show a dummy's effect over time

Hello guys,

I am analysing a panel data set of bilateral trade flows among 15 EU countries and 7 industrial countries between 1960 and 2018. My dependent variable is total trade, and my explanatory variables are GDP, distance, population, and a few more, plus a dummy for common EU membership (EU2), which is the variable of most interest to me. I would like to obtain the coefficient on this dummy for every year, for example:
year    EU2 coefficient
...
1990    ?
1991    ?
1992    ?
1993    ?
...

Any ideas which command I can use to get these results in one table?
My regression looks as follows:

regress total_trade GDP_importer GDP_exporter Distance Pop_importer ... EU2
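Something along these lines is what I imagine the answer might look like, though I am not sure it is the right specification (a sketch; the remaining controls from the command above are omitted):

Code:
* sketch: year-specific EU2 coefficients via a full set of year interactions
regress total_trade GDP_importer GDP_exporter Distance Pop_importer i.year ibn.year#c.EU2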

Let me know if you need more information. Thanks a lot!!

Daniel

Help with graphing % of frequencies of two different variables on one graph

Hello Statalist,

I am a new Stata user (using Stata 15). I am attempting to graph, on one bar graph, the frequencies of one categorical variable split by a second variable.

I have variable "pindeye" which has 5 possible results. "pindeye" was measured over two years: 2018 and 2019 (variable year). Without actually splitting the data, I was wondering if it is possible to graph the frequencies of the 5 possible results of "pindeye" for 2018 and again for 2019, but to combine these on one graph.

. graph bar, by(year) over(pindeye) asyvars bar(1, bfcolor(blue*0.5)) bar(2, bfcolor(green*0.5))
This gives me the following, which displays the correct data, but as separate panels rather than combined in one plot (image not reproduced here).


graph bar, over(year) over(pindeye) asyvars bar(1, bfcolor(blue*0.5)) bar(2, bfcolor(green*0.5))
This gives me the visual layout I am attempting to achieve, but with the wrong data: it shows the overall frequency within each category of "pindeye" rather than calculating the frequencies separately for 2018 and 2019 (image not reproduced here).


I understand that a simple fix would be to split the 2018 and 2019 data into separate variables, but as I have many variables beyond "pindeye" to test, that would be rather time consuming. I have searched the forums and reviewed the manuals I have access to, but haven't found a combination of commands that achieves this. I appreciate any assistance. Thank you!
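One route I have been considering is to precompute the within-year percentages and then graph those, along these lines (a sketch only; n, total, and pct are placeholder names, and I have not tested it):

Code:
* sketch: compute the % distribution of pindeye within each year, then plot the years side by side
preserve
contract year pindeye, freq(n)
bysort year: egen total = total(n)
gen pct = 100 * n / total
graph bar pct, over(year) over(pindeye) asyvars ///
    bar(1, bfcolor(blue*0.5)) bar(2, bfcolor(green*0.5))
restore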

Latent growth curve model in SEM framework: iterations report (not concave) / (backed up)

Dear all,

I am trying to fit a latent growth curve model in a structural equation modelling framework.
I have 8 waves of data and want to fit a conditional LGCM with a multiple-group comparison (5 groups) and six time-invariant covariates. The aim is to examine hourly wage growth across the groups.
My unconditional model runs, and the conditional model also works with just one or two covariates. But when I add more covariates, Stata keeps iterating and reports either (not concave) or (backed up).

This is my code (I use Stata 15.0 on Windows 10):
sem (Intercept@1 Slope@0 Slopepiec@0 -> stundenlohnvebreal2010) ///
(Intercept@1 Slope@1 Slopepiec@0 -> stundenlohnvebreal2011) ///
(Intercept@1 Slope@2 Slopepiec@0 -> stundenlohnvebreal2012) ///
(Intercept@1 Slope@3 Slopepiec@0 -> stundenlohnvebreal2013) ///
(Intercept@1 Slope@4 Slopepiec@0 -> stundenlohnvebreal2014) ///
(Intercept@1 Slope@5 Slopepiec@1 -> stundenlohnvebreal2015) ///
(Intercept@1 Slope@6 Slopepiec@2 -> stundenlohnvebreal2016) ///
(Intercept@1 Slope@7 Slopepiec@3 -> stundenlohnvebreal2017) ///
(Intercept Slope Slopepiec <- male west education _cons), ///
group(groupvar) method(mlmv) noconstant
estat gof, stats(all)



The output looks like this (the log-likelihood value doesn't change anymore, no matter how long I wait):
Iteration 22: log likelihood = -429635.32 (not concave)
Iteration 23: log likelihood = -429635.32 (not concave)
Iteration 24: log likelihood = -429635.32 (not concave)

or like that:
Iteration 20: log likelihood = -458401.74 (backed up)
Iteration 21: log likelihood = -458401.74 (backed up)
Iteration 22: log likelihood = -458401.74 (backed up)


It happens while fitting the conditional model.

Or I get an error message like the one below (I changed nothing in the code above except adding even more covariates):

Fitting saturated model for group 4:

initial values not feasible
r(1400);


It only happens when Stata is fitting the saturated model for group 4.

It also doesn't matter which covariates are in the model; it stops working when there are too many.

I hope you can help me with this problem. This is my first post here, so I hope I have given you all the information you need.
Thanks a lot in advance
Best regards
Katharina

VAR vs MVREG for time series data

Hello, I am working with multivariate time-series data with 5 dependent variables and 10-15 independent variables. I have run the basic diagnostic tests (stationarity, ARCH effects, serial correlation) and find no problems, so a multivariate regression using the mvreg command seems appropriate. However, because the dependent variables are index returns, the value of y1 at time t is likely to depend on lagged values of y1 as well as of y2 to y5. If I fit a 2-lag VAR, it actually seems to fit better, with a higher R-squared and some significant interdependencies. My question is how to objectively determine which model fits better, mvreg or VAR; apparently R-squared is not a good criterion for the comparison, and I am not able to perform an lrtest between the two models. Can you please help me compare my mvreg model and my VAR model? The first includes only the exogenous variables, while the second also includes lagged values of the 5 dependent variables and their cross-autoregressions.
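For reference, the two specifications I am comparing look roughly like this (a sketch; y1-y5 and x1-x10 are placeholder names, and any comparison would presumably need both fit on the same estimation sample):

Code:
* sketch of the two candidate models (placeholder variable names)
mvreg y1 y2 y3 y4 y5 = x1-x10                  // contemporaneous exogenous regressors only
var y1 y2 y3 y4 y5, lags(1/2) exog(x1-x10)     // adds own lags and cross-lags of the dependent variables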

Creating connected scatter plot with pre and post data

Hello,

I am trying to create a scatter (bubble?) plot showing the values of my pre- and post-intervention data for both control and intervention participants. I have two outcome variables, one for each time period, labeled prop2011 and prop2017. I would like 2011 and 2017 on the x-axis and the values of prop2011 and prop2017 on the y-axis, with a line connecting the two points for each ID number (idn). I would also like the size of each circle to correspond to denom2011 and denom2017. I have included a dataex sample of my data below, followed by a sketch of the kind of command I have in mind. Any help would be much appreciated!

Thank you,

Sarah


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(idn intervention prop2011 prop2017) double(denom2011 denom2017)
 1 0          0  .15151516   19   33
 2 0        .35   .4545455   40   33
 3 0  .14035088        .48   57   25
 4 0  .14583333    .171875  144  192
 5 0   .3421053   .4864865   76   37
 6 0  .24489796  .17567568   98   74
 7 0   .3333333   .8333333    6    6
 8 0   .2108626   .3072289  313  166
 9 0  .05769231  .06060606   52   66
10 0  .14285715   .0888889   14   45
11 0   .3529412  .08333334   34   48
12 0  .14503817        .15  131  280
13 0   .4340659         .4  182  110
14 0   .3305085   .3970588  236  136
15 0   .1948052   .2368421   77   38
16 0  .23741007  .28703704  139  216
17 0  .12244898   .1764706   49   17
18 0  .24324325   .3571429  111  168
19 0          0  .13761468   40  109
20 0  .06666667      .1875   30   32
21 0  .11688311  .15966387   77  119
22 0   .0909091          0   11    2
23 0  .14814815   .1627907   27   43
24 0  .15463917  .20689656   97   29
25 0   .3376623  .21518987   77   79
26 0  .26666668  .24444444   15   45
27 0   .2857143   .3809524  105   63
28 0   .2105263   .3181818   19   22
29 1   .6666667        .25    6    8
30 0  .19786096  .14141414  187  198
31 0  .15068494   .2352941   73   34
32 0  .04166667   .1521739   72   92
33 0   .1392405  .23030303  158  165
34 0         .2  .22702703   15  185
35 0   .2142857  .05555556   14   18
36 0  .09803922  .25581396   51   86
37 0   .0775862  .10454545  116  220
38 0   .1846154  .13043478   65   46
39 0  .11428571  .06896552   35   58
40 0  .19902913   .2747604  206  313
41 0  .06896552  .16030534   58  131
42 0  .12121212   .1719457  132  221
43 0   .7733333 .037037037  150   27
44 0  .13513513  .10526316   37   76
45 1  .13793103        .25   29   20
46 0          0          0    1   10
47 0  .07792208   .1588785   77  107
48 0        .25   .2857143   16    7
49 0  .02222222   .0406504   90  123
50 0  .09302326          0   43    3
51 0   .3918919      .2875   74   80
52 0  .06666667         .2   45   45
53 0        .44   .6578947   25   38
54 0   .0952381  .16666667   21   18
55 0  .08571429  .27272728   35   22
56 0  .12322275   .1796875  211  128
57 0  .12658228  .15384616   79  117
58 0   .0923077  .14346895  325  467
59 0  .11111111  .09243698  135  119
60 0   .3448276  .17021276   29   47
61 0  .27407408        .25  135   56
62 1     .21875   .1764706   32   17
63 0  .08396947  .13061224  131  245
64 0  .29166666  .15384616   24   26
65 0  .22330096   .3219178  103  146
66 0   .0909091  .08333334   44   36
67 1  .05813954         .2   86  120
68 0  .06451613   .3235294   31   34
69 1  .13333334        .05   15   60
70 0  .16216215  .27184466  111  103
71 0   .3370166   .4136126  181  191
72 0  .15315315  .30612245  222  147
73 0   .2826087   .2063492   92   63
74 0  .13475177  .14754099  141   61
75 0   .4166667  .13043478   36   23
76 0  .29411766   .2352941   17   17
77 0   .3333333   .6666667   30    6
78 0  .14285715  .29032257   56   62
79 0   .7333333   .6666667   15    6
80 0  .24752475   .3492064  101   63
81 0  .08163265       .125   49   48
82 0   .2631579        .25   19    8
83 0   .1818182        .25   55   20
84 0   .3303965   .4039216  227  255
85 0   .1728395  .19148937   81   94
86 0  .06324111 .027333334 1265 1500
87 0  .08333334  .05882353   60   51
88 1  .03448276 .071428575   29   28
89 0  .04225352  .10447761   71  134
90 0  .25757575   .2093023  132  129
91 1  .10714286        .04   28   50
92 0  .12280702  .09920635  114  252
93 0       .125   .2857143   16    7
94 0 .069518715  .20714286  187  140
end
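The sketch mentioned above (untested on the full data; it assumes the variable names from the dataex listing and first reshapes to one row per idn-year):

Code:
* sketch: reshape to long form, connect each idn's two points, size markers by the denominators
reshape long prop denom, i(idn) j(year)
sort idn year
twoway (line prop year, connect(ascending) lcolor(gs12)) ///
       (scatter prop year [aweight = denom] if intervention == 0, msymbol(Oh)) ///
       (scatter prop year [aweight = denom] if intervention == 1, msymbol(O)), ///
       xlabel(2011 2017) ytitle("Proportion") ///
       legend(order(2 "Control" 3 "Intervention"))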

Combining Heckman selection and 2SLS

Dear All,

I am estimating the effect of a public policy on wages. However, in my estimation I face a couple of econometric challenges that I would like to tackle in Stata.

The gist of what I am trying to do is to estimate the causal effect of a public policy (T) on wages (W) at the individual level with panel data. Whether one is exposed to the policy (T) can be instrumented with a plausible instrumental variable (Z), which I have. So I can implement a straightforward 2SLS approach, estimating the effect of T on W using Z as an instrument for T. However, the policy T directly influences labor supply (L), so wages (my ultimate dependent variable) are unobserved for people who choose not to be in the labor force. I therefore need a Heckman selection adjustment when estimating the effect of T on W, because T affects L and W is not observed for people with L = 0.

Implementing the 2SLS estimation is straightforward in Stata (T is endogenous but there is a plausible instrument Z). However, how do I implement the second part, the Heckman selection, on top of the 2SLS estimation? T affects labor supply L (L is not my outcome variable; wages W are!), but L causes some values of W to be unobserved in the sample, which is the essence of Heckman selection.
  • Is there a command in Stata that can combine both estimations at the same time? I have looked extensively online and cannot find one, although I see references suggesting that some Stata commands might be able to handle the two estimations together.
  • If there is no single command that implements both approaches at once, can anyone help me implement them in Stata? A rough sketch of the two-step procedure I have in mind follows below. I'd be very grateful!
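The two-step route I keep coming back to looks roughly like this (a sketch only: inlf, lnwage, z_sel, x1, and x2 are placeholder names, the selection equation needs its own exclusion restriction, and the second-step standard errors would have to be bootstrapped):

Code:
* sketch: selection probit, inverse Mills ratio, then 2SLS on the selected sample
probit inlf x1 x2 z_sel                               // labor-force participation (selection) equation
predict double xb_sel, xb
gen double imr = normalden(xb_sel)/normal(xb_sel)     // inverse Mills ratio
ivregress 2sls lnwage x1 x2 imr (T = Z) if inlf == 1, vce(robust)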

Fama-MacBeth regressions with dataset containing time-series data

Dear community,

For a while now I have been struggling with how to perform Fama-MacBeth regressions using my time-series dataset. A screenshot of my dataset is attached so you can see how it is constructed. The data continue to me5bm5, where the variables me1bm1 to me5bm5 are the returns on 25 portfolios sorted on size (me) and book-to-market equity (bm). I found the commands asreg and xtfmb, but asreg is for panel data (while I have time-series data), and xtfmb results in an error saying I have no observations. Please note that all my data are downloaded from Kenneth French's data library. I would truly appreciate it if someone could help me, since I have been struggling with this problem for weeks!
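In case it clarifies what I am after, the restructuring I think is needed looks roughly like this (a sketch; the factor names mktrf, smb, and hml are placeholders for whatever right-hand-side variables apply, and I have not verified it against my actual file):

Code:
* sketch: stack the 25 portfolio return series into a panel, then run Fama-MacBeth
gen long obs = _n                              // or use the existing date variable
reshape long me, i(obs) j(portfolio) string    // me1bm1 ... me5bm5 -> long form
rename me ret
encode portfolio, gen(pid)
xtset pid obs
xtfmb ret mktrf smb hml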

Yours truly,

Niek Schaaf

Merge two datasets, one with monthly observations and the other with yearly observations

Hi,

For my thesis I am trying to merge datasets A and B. Dataset A contains monthly stock price data for firms. Dataset B contains supplier-customer relationships at the year level, with the amount of sales in $ for every relationship and year.

As an example:

Dataset A:
firmname   CUSIP       date        year   stockprice
HP         40434L105   31oct2012   2012   20
HP         40434L105   30nov2012   2012   30
HP         40434L105   31dec2012   2012   15
APPLE      037833100   30june2008  2008   20
APPLE      037833100   31aug2008   2008   10

Dataset B:
firmname_supplier   CUSIP_supplier   firmname_customer   CUSIP_customer   year   sales
ASML                5656565          HP                  40434L105        2012   500
ASML                5656565          HP                  40434L105        2013   650
ASML                5656565          APPLE               03783310         2008   950
Samsung             465556           APPLE               03783310         2012   850
Samsung             465556           APPLE               03783310         2013   950
Samsung             465556           APPLE               03783310         2014   999
I think the correct way to merge would be an m:1 merge starting from dataset A, using the variables CUSIP and year as the key?
I have different ideas for this, but I am somewhat lost as to the right approach. I tend to fall back on the m:m merge method, but with that it seems Stata 'destroys' my dataset.
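To make my current thinking concrete, something like this is what I have in mind (a sketch; the file names and the underscored variable names are placeholders, and it assumes dataset B is unique on customer CUSIP and year):

Code:
* sketch: attach the yearly relationship data to the monthly price data,
* matching on the customer's CUSIP and the year
use datasetA, clear                              // monthly stock prices
rename CUSIP CUSIP_customer
merge m:1 CUSIP_customer year using datasetB
* if a customer can have several suppliers in the same year, dataset B is not unique on this
* key; joinby CUSIP_customer year using datasetB would then be the alternative to consider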

Could someone help me out here? As you can probably tell, I am still a beginner with research and Stata; my apologies if some easy things go over my head.

Thanks in advance.

Regards,
Arslan

Usage of a sum sign (Σ) equivalent in Stata

Hello all,
I've been sitting here for hours now and I just can't seem to find a solution. I have never really worked with Stata or similar statistical tools, although I used to program a little when I was younger.
My problem is the following:

I have data on daily returns of assets. What I want to do is calculate the exponentially weighted average daily return for each time period t, using the formula from the paper shown in the picture.
There doesn't seem to be a sum-sign (capital sigma) equivalent function in Stata. I thought about using a loop and incrementing i each time up to t-2, but I don't yet know how to write that in Stata syntax. I read the help entries for forvalues, foreach, etc., but they didn't really help me.
[Picture omitted: the paper's formula for the exponentially weighted average daily return, plus a three-period example.]


Short description of the picture:
It shows three periods t with their returns. When I calculate the EWA daily return for period t = 3, i starts at 0, and r_{t-1-i} is the return of period 2 (0.003).

In case you're wondering about the 60/61: the decay parameter was chosen so that the center of mass of the weights is 60.
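The loop-based translation of the sum that I have been trying to write would look roughly like this (a sketch only; it assumes the returns are in a variable ret, the data are ordered by t, and the weights are (1/61)*(60/61)^i, which should be checked against the exact normalisation in the paper):

Code:
* sketch: build the exponentially weighted average of lagged returns by looping over lags i
sort t
gen double ewa = 0
forvalues i = 0/`=_N - 2' {
    quietly replace ewa = ewa + (1/61) * (60/61)^`i' * ret[_n - 1 - `i'] if _n > `i' + 1
}
replace ewa = . if _n == 1        // no lagged returns are available in the first period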

I'd appreciate it if anyone could point me in the right direction.

Thank you a lot!

Regards,
- Markus Stein

Somers' D for survey data/bootstrapping

I'm trying to test for the significance of differences in an ordinal measure across two groups in enterprise survey data. For reference, I have the following variables:
  • invest_plan: 0 - Withdraw investment, 1 - Reduce Investment, 2 - Maintain investment, 3 - Expand investment
  • sector_num: 0 - Manufacturing, 1 - Services
My data consist of 20 strata (Countrysector, e.g. 11 to denote Manufacturing in Nigeria), and I have a survey-weight variable (svyweight) that accounts for two things: 1) differences in the likelihood of being sampled within each stratum (driven by some complications in our sampling frame), and 2) the desire to give each stratum equal weight (a somewhat arbitrary feature of the calculations we want to make).

I'm looking to use Somers' D because I'd like to test for differences (accounting for ordinality) in invest_plan between manufacturing and services firms, but I'm not too familiar with how to accomplish this via bootstrapping.

As I understand it, I'll need to create my own replicate weights (which have not been provided). Can somebody provide guidance on how to correctly implement this in Stata?

Alternatively, am I better off just doing an ordered logit like the following?

Code:
svyset [pweight=svyweight], strata(Countrysector)
svy: ologit invest_plan sector_num

Heatmap in Stata

I am trying to create a heatmap gridded by deciles (10% groups) of two variables, with the cells coloured according to Y. So I have one series, say x1, and another, x2, and used the following:
Code:
xtile dx1 = x1, nquantiles(10)
xtile dx2 = x2, nquantiles(10)
hmap dx1 dx2 y, noscatter
and I get the heat map shown in the attachment (image not reproduced here).

I want to put labels for the axes and a legend for what each colour represents. The documentation for -hmap- is severely lacking in options.
Side note: the grey squares imply missing observations which is ok.

In short: is anyone familiar with -hmap-, or does anyone have an alternative? I may just settle for a contour plot if I can't figure this out by the end of the week; a sketch of that fallback is below.
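That fallback would look something like this (a sketch; it reuses dx1 and dx2 from above, ybar is a placeholder for the cell means, and I have not checked the contour options against my data):

Code:
* sketch: built-in alternative using twoway contour on the decile grid
preserve
collapse (mean) ybar = y, by(dx1 dx2)
twoway contour ybar dx2 dx1, heatmap ///
    xtitle("Deciles of x1") ytitle("Deciles of x2") ztitle("Mean of y")
restore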
Thanks ahead!

Regression with two endogenous variables (one is interaction) and possible exclusion restriction violation?

I am trying to estimate the naive regression:

d = b + s + s*b, where s is endogenous. I'm instrumenting for s with iv.

I've seen other posters run something similar to the following regression
Code:
ivreg2 d b (s s_b = iv iv_b)
where s_b and iv_b are interaction terms.

However, I would really like to do something like this instead:
Code:
ivreg2 d b (s = iv) (s_b = iv_b)
because I don't think s should be predicted with iv_b, since I worry that b might influence s.

My questions are:
(1) is there a way to run my second regression?
(2) if my concern is correct, would that constitute a violation of the exclusion restriction?
(3) is my second regression correct?

Import each excel column as a separate dataset into Stata

Hi Statalist,

I am trying to convert an Excel file so that each column becomes a separate Stata dataset. Does anyone happen to have an easy solution for this in Stata? I appreciate your time and any suggestions you might be able to provide.
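The brute-force version I can picture is a loop like the following (a sketch; the file and sheet names are placeholders):

Code:
* sketch: read the sheet once, then save each variable as its own dataset
import excel using "myfile.xlsx", sheet("Sheet1") firstrow clear
foreach v of varlist _all {
    preserve
    keep `v'
    save "`v'.dta", replace
    restore
}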

Al Bothwell

Tabs for multiple subgroups in one table

I am hoping to run tabs by various subgroups (different job roles, in this instance).

For my two variables in the data example below, I know this would be relatively simple to do if I am content with getting a separate table for each subgroup. One subgroup is those in some type of managerial position, which corresponds to a response of 1 or 2:

Code:
tab satisf if jobtitle ==1 | jobtitle == 2
That would give me one table. To look at those in a non-full-time job, the corresponding values are 3, 4, or 7:

Code:
tab satisf if jobtitle == 3 | jobtitle == 4 | jobtitle == 7
That would give me a separate second table, and so on.

But is there a way to, essentially, run those two (and eventually more) together, so that each column of a single table is one of the subgroups I define in this manner? A sketch of the kind of table I am picturing follows the data example. Thank you very much in advance for any tips.


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte(jobtitle satisf)
 2 5
19 5
 5 5
19 5
 4 5
 1 5
19 5
 1 5
 7 5
17 5
 5 5
 3 5
 3 5
 1 5
 1 5
19 5
11 5
 3 5
 1 5
 7 3
 1 5
 2 5
 2 .
 2 5
 2 5
 3 5
 8 5
 7 5
12 5
 2 5
 2 5
 5 5
 1 5
 7 5
 4 5
 1 5
 3 3
 3 5
 4 5
 3 5
 2 5
 3 5
 2 5
 9 5
 4 5
19 5
 1 5
13 5
 3 5
 1 5
 2 5
10 5
 8 5
19 5
17 5
 5 5
 3 5
 5 5
 5 5
 1 5
 1 5
 2 5
 1 5
 5 3
 9 5
 3 5
17 5
 9 5
 5 5
 3 5
 1 5
 5 5
 1 5
 3 5
 3 5
 2 5
 7 5
 1 5
 5 5
19 6
 5 5
 2 5
 3 5
11 5
 3 5
 4 5
 4 5
 9 5
 9 5
 1 5
17 6
 2 5
 4 5
 5 1
 2 5
 2 5
 8 5
17 5
 5 1
 3 5
end
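To make the goal concrete, the kind of combined table I am picturing would come from something like this (a sketch; the subgroup labels are only illustrative, and it does not yet deal with the zero-count rows mentioned in the edit below):

Code:
* sketch: define a subgroup variable, then one two-way table with a column per subgroup
gen byte subgroup = .
replace subgroup = 1 if inlist(jobtitle, 1, 2)        // managerial
replace subgroup = 2 if inlist(jobtitle, 3, 4, 7)     // non-full-time
label define subgroup 1 "Managerial" 2 "Non-full-time"
label values subgroup subgroup
tab satisf subgroup, column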

EDIT: I've also realized that, because the subgroups mean relatively small samples, there will be quite a few instances where several rows of a table have zero counts and therefore will not show up with -tab-. I'm only slightly familiar with -tabcount- and looked at the help file to no avail, so if anyone out there is more familiar with it and knows a way to do what I'm aiming for above while using -tabcount-, please assist if possible!

Question on line chart output

Hello,

I created a twoway line chart but the output is too small and I'm not sure how to make it larger. Can anyone help me?

This is the command I used:
. twoway (line uniqfirm3_id year), by(, legend(on)) by(contnt_id naics1)

And this is the output I got (image not reproduced here).

Another question: I labeled the variables and turned the legend on, but as you can see in the picture, the title of each panel still shows just the id numbers instead of their labels. Does anyone know how to fix this?

Also, I would like the label of each line within each panel to be shown as well. I was wondering if anyone could help me with the command; a sketch of the direction I have been looking in is below.
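For the size and panel-title issues, the kind of adjustment I have been looking at is this (a sketch; the value-label text is made up, and I am not sure it addresses the per-line labels):

Code:
* sketch: value-label the by() variables so panel titles show names rather than codes,
* and enlarge the overall graph region
label define contnt_lbl 1 "Continent A" 2 "Continent B"     // label text is a placeholder
label values contnt_id contnt_lbl
twoway (line uniqfirm3_id year), by(contnt_id naics1, legend(on)) xsize(8) ysize(5)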

Thank you for your help in advance!!

Replace missing values in panel dataset by the mean, by pasting up, by pasting down

Hello, I have the following dataset:

clear
input year id value
1990 1 .
1991 1 .
1992 1 4
1993 1 .
1994 1 .
1995 1 8
1996 1 .
1997 1 .
1990 2 .
1991 2 2
1992 2 .
1993 2 .
1994 2 .
1995 2 6
1996 2 .
1997 2 .
end

I would like to replace the missing values in three ways:
  1. Sorted by id and year, I would like to replace the leading missing values (before the first non-missing value) with that first non-missing value.
  2. Sorted by id and year, I would like to replace missing values that are enclosed by two non-missing values with the average of those two values.
  3. Sorted by id and year, I would like to replace the trailing missing values (after the last non-missing value) with that last non-missing value.
The output dataset should look like this:

clear
input year id value
1990 1 4
1991 1 4
1992 1 4
1993 1 6
1994 1 6
1995 1 8
1996 1 8
1997 1 8
1990 2 2
1991 2 2
1992 2 4
1993 2 4
1994 2 4
1995 2 6
1996 2 6
1997 2 6
end
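A carry-forward/carry-backward sketch along the following lines reproduces the desired output on this example (I have not checked it beyond the example; interior gaps simply get the average of the two bracketing values, not a linear interpolation):

Code:
* sketch: previous and next non-missing value per id, then fill according to rules 1-3
sort id year
by id: gen double prevval = value
by id: replace prevval = prevval[_n-1] if missing(prevval)    // last value carried forward
gsort id -year
by id: gen double nextval = value
by id: replace nextval = nextval[_n-1] if missing(nextval)    // next value carried backward
sort id year
gen double value_new = value
replace value_new = (prevval + nextval)/2 if missing(value) & !missing(prevval) & !missing(nextval)
replace value_new = nextval if missing(value) & missing(prevval)    // leading gaps (rule 1)
replace value_new = prevval if missing(value) & missing(nextval)    // trailing gaps (rule 3)
drop prevval nextval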

Thank you.

Convert date and time to Stata format

Hi all,

I would like to convert the following date-and-time format into Stata format, so that I can subtract one date-time from another to find the interval.

1/19/2015 11:33:00 AM

After conversion, I can do:

XXXXXXX - YYYYYY = ZZZZZZ

I have been looking at a few documents on the use of the clock() function, but couldn't crack it. I would appreciate any help with the code.
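My reading of the documentation suggests something along these lines (a sketch; the variable names are placeholders, and I have not verified that the mask copes with the AM/PM marker in my data):

Code:
* sketch: string -> Stata datetime (milliseconds since 01jan1960), then a difference in hours
gen double start_time = clock(start_str, "MDYhms")
gen double end_time   = clock(end_str, "MDYhms")
format start_time end_time %tc
gen double interval_hours = hours(end_time - start_time)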

Thanks,
Fred

Creating dummy variable based on percentiles

Hi Everyone,

I have a variable G-Index with the following distribution:


Governance |
Index |
(Gompers, |
Ishii, |
Metrick) | Freq. Percent Cum.
------------+-----------------------------------
1 | 1 0.02 0.02
2 | 9 0.14 0.16
3 | 55 0.87 1.02
4 | 161 2.53 3.56
5 | 331 5.21 8.77
6 | 549 8.64 17.41
7 | 738 11.61 29.02
8 | 857 13.49 42.51
9 | 917 14.43 56.94
10 | 803 12.64 69.58
11 | 698 10.99 80.56
12 | 521 8.20 88.76
13 | 396 6.23 95.00
14 | 189 2.97 97.97
15 | 98 1.54 99.51
16 | 20 0.31 99.83
17 | 6 0.09 99.92
18 | 4 0.06 99.98
19 | 1 0.02 100.00
------------+-----------------------------------
Total | 6,354 100.00

I am trying to create a variable, treat, which is equal to 0 if the value of the G-index is in the top 25% of the distribution and 1 if it is in the bottom 75%. Can anyone help me with the proper code for that?
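Something along these lines is what I have pieced together so far, but I am not sure it is right (a sketch; gindex is a placeholder for the actual variable name, and observations exactly at the cutoff would need a decision):

Code:
* sketch: 0 for the top quartile of the G-index, 1 for the bottom 75%
_pctile gindex, p(75)
gen byte treat = (gindex <= r(r1)) if !missing(gindex)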

Thanks!

How many waves, at a minimum, are needed to run system GMM?

Dear all,

I'm estimating a system GMM model on a panel of 249 individuals across three waves. I want to look at how once-lagged expenditure affects a categorical health outcome, and my control variables include the once-lagged health outcome. I am using xtabond2 and Stata 15.1.

I could not get the AR(1) and AR(2) tests. I noticed similar issues/posts on this, and the explanation was that there is no 3rd lag in levels. But I'm still wondering:
1. In this case, would the results still be valid?
2. How many waves, at a minimum, are needed to run system GMM?

My code and results are the following:


Code:
xtabond2 health L(1/1).i.health L(1/1).exp L(1/1).age L(1/1).female, robust small gmm(L(1/1).i.health) ivstyle(L(1/1).exp L(1/1).age L(1/1).gender, equation(level))
Code:
Group variable: pid    Number of obs    =    469
Time variable : Year    Number of groups    =    248
Number of instruments = 15    Obs per group: min    =    1
F(13, 247)    =      3.70    avg    =    1.89
Prob > F      =     0.000    max    =    2
Code:
Instruments for first differences equation
GMM-type (missing=0, separate instruments for each period unless collapsed)
L(1/2).(1bL.health 2L.health 3L.health)
Instruments for levels equation
Standard
L.exp L.age L.gender _cons
GMM-type (missing=0, separate instruments for each period unless collapsed)
D.(1bL.health 2L.health 3L.health)

Arellano-Bond test for AR(1) in first differences: z =      .  Pr > z =      .
Arellano-Bond test for AR(2) in first differences: z =      .  Pr > z =      .

Sargan test of overid. restrictions: chi2(1)    =  28.61  Prob > chi2 =  0.000
(Not robust, but not weakened by many instruments.)
Hansen test of overid. restrictions: chi2(1)    =   8.96  Prob > chi2 =  0.003
(Robust, but weakened by many instruments.)

Many many thanks in advance.


Cheers,
Tianxin