simultaneous equation model

I need to run a simultaneous equation model where one of my dependent variables is continuous and the other dependent variables are endogenous and binary. Following Heckman's (1978) paper "Dummy Endogenous Variables in a Simultaneous Equation System", I know that when we have only two equations in the continuous-dummy endogenous case, the command in Stata is cdsimeq. But now I have four equations, as follows:
y1it = α0 + α2 y2it + α3 y3it + α4 y4it + Xit α + ε1it
y2it = β0 + β1 y1it + β4 y4it + Zit β + ε2it
y3it = γ0 + γ1 y1it + γ2 y2it + γ3 y3it + Wit γ + ε3it
y4it = λ0 + λ1 y1it + λ2 y2it + λ3 y3it + Vit λ + ε4it
where y1it is continuous and the other dependent variables (y2it, y3it, y4it) are endogenous and binary. X, Z, W, and V are vectors of control and explanatory variables.
I have not found anything for this system of equations in Stata. Can someone please advise?
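For concreteness, one way to think about the problem is to generalize the two-stage idea behind cdsimeq by hand (reduced forms first, then structural equations with fitted values). The rough, untested sketch below uses placeholder variable names (x*, z*, w*, v* standing in for the exogenous variables) and ignores the second-stage standard-error correction, so it only illustrates the structure:
Code:
* Stage 1: reduced forms on all exogenous variables
regress y1 x* z* w* v*
predict y1_hat, xb
probit y2 x* z* w* v*
predict y2_hat, xb
probit y3 x* z* w* v*
predict y3_hat, xb
probit y4 x* z* w* v*
predict y4_hat, xb

* Stage 2: structural equations with fitted values replacing the endogenous regressors
regress y1 y2_hat y3_hat y4_hat x*
probit y2 y1_hat y4_hat z*
probit y3 y1_hat y2_hat w*
probit y4 y1_hat y2_hat y3_hat v*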


Is there an obvious reason why pca does not allow pweights?

I'm working with a PISA database of student answers to socioeconomic/background questions. Each student (i.e. each observation) has an assigned sampling weight. As there are many questions in the survey (i.e. many variables), I would like to run PCA on some groups of questions and see whether the information they contain can be reduced to a small number of indicators.

A similar question was asked here some time ago, but no answer was given:
http://www.stata.com/statalist/archi.../msg00543.html

I think I know how to apply the weights to my data before doing PCA, but the fact that pweights are not simply available as an option makes me nervous that I might be missing something. Is there an obvious reason why pca does not allow pweights?
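For reference, the workaround I had in mind looks something like this (a sketch with placeholder variable names): pca accepts aweights and fweights, and my understanding is that the weighted correlation matrix, and hence the loadings, would match what a pweight would give; what is lost is design-based inference, which is what worries me.
Code:
pca q1 q2 q3 q4 q5 [aweight = w_student]
predict index1, score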

Seemingly unrelated regression & within transformation

Hello,

I am currently estimating labor demand elasticities using a panel dataset on firms. In this context, I run a seemingly unrelated regression (command: sureg, version: 14) for my system of three equations. In order to account for firm fixed effects, I want to run SUR on within-transformed data. As I have not found a way to combine SUR and the within transformation in a single command, I decided to within-transform my data manually first and then run sureg.

I think my procedure is correct if the data do not contain any missing values, which render the corresponding observations useless for the estimation. But my data contain many missing values. Depending on whether I delete observations with missing values before or after the within transformation, the within-transformed data differ, since the firm-specific means change. I checked both possibilities and the results differ greatly.

I think the procedure should be consistent with a standard FE estimation using xtreg ..., fe, but I am not sure whether Stata first within-transforms the data or first deletes observations with missing values. So my question is: should I first delete observations with missing values and then subtract the firm-specific means, or should I first subtract the firm-specific means and then exclude observations with missing values from the SUR estimation?
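For concreteness, this is the kind of manual within-transformation I mean (a sketch with placeholder variable names; here the missings are dropped first, which is exactly the ordering I am unsure about):
Code:
* placeholder names: firm identifier = firm, dependent variables y1-y3, regressors x1 x2
egen nmiss = rowmiss(y1 y2 y3 x1 x2)
drop if nmiss > 0                        // ordering in question: drop missings first ...
foreach v of varlist y1 y2 y3 x1 x2 {
    bysort firm: egen m_`v' = mean(`v')  // ... then take firm-specific means
    generate w_`v' = `v' - m_`v'
}
sureg (w_y1 w_x1 w_x2) (w_y2 w_x1 w_x2) (w_y3 w_x1 w_x2)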

It would be great if you could help me. Thank you in advance!

Martin Popp

Data manipulation (change address).

I have the following data. A1 is the corporation's address. A2 equals 0 if the corporation does not change address in that year; otherwise A2 records the new address the firm moves to (address 2 for id==1; addresses 1 and 3 for id==2).
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(id year A1 A2)
1 2010 1 0
1 2011 1 2
1 2012 1 0
1 2013 1 0
2 2010 2 1
2 2011 2 0
2 2012 2 3
2 2013 2 0
end
My question is: how can I obtain the following result (where A1 now denotes the current address)?
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(id year A1 A2)
1 2010 1 0
1 2011 2 2
1 2012 2 0
1 2013 2 0
2 2010 1 1
2 2011 1 0
2 2012 3 3
2 2013 3 0
end
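One possible sketch (untested; it assumes A2 records the new address in the year of a move and 0 otherwise, so the current address is the last recorded move carried forward, starting from the original A1):
Code:
gen A1_new = A1
bysort id (year): replace A1_new = A2 if A2 != 0
bysort id (year): replace A1_new = A1_new[_n-1] if A2 == 0 & _n > 1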

Interactions with coefplot. Comparing (some) coefficients across two models

Dear statalist,

I have panel data over two periods (2000 and 2010) and I am employing a model that, in addition to some control variables, includes my variable of interest and its interaction with a time dummy, to assess how the effect of x has changed over time. I then run the same model for a restricted sample. In order to compare the coefficients of interest from both models, I store each individual estimate using lincomest and then use coefplot. However, I am stuck: I more or less get the output I want, but split across two different graphs (each of which also contains an empty category).

This is the code I am using:

gen x_d_time = x*d_time; /*to generate the interaction between x and a time dummy*/

local controls c1 c2 c3;

reg y x x_d_time `controls';
estimates store base;

lincomest x; /*to save the effect of x*/
est store x;

est restore base;
lincomest x + x_d_time; /*to save the effect in the next period (adding the interaction term to the base coefficient)*/
est store x_t;

/*I now run the same model but only in those units smaller than 100*/
reg y x x_d_time `controls' if size<100;
estimates store base2;

lincomest x; /*and save the same information as above*/
est store x_2;

est restore base2;
lincomest x + x_d_time;
est store x_t_2;

/*to compare the coefficients*/
coefplot (x, rename((1) = x) label(Whole sample)) (x_2, rename((1) = x) label(Size<100))
|| (x_t, rename((1) = x_t)) (x_t_2, rename((1) = x_t))
, drop(_cons `controls') vertical bycoefs bylabels(2000 2010);

I would like to have both comparisons (x and x_2 on the one hand and x_t and x_t_2 on the other) in the same graph. Could anyone tell me what I am doing wrong? Many thanks!


Creating variables from loops

Hi, I'm doing a Monte Carlo experiment to create some panel data.
To do this I need a column vector for the ID and the time ID of the observations (together with the y and x values, which should have been calculated correctly - not included here).

I made the following loop:
Code:
// Loop for the time period
for (t=1; t<=T; t++) {

    // Loop for sample size
    for (i=1; i<=N; i++) {

        id[(t-1)*N+i, 1]      = i
        time_id[(t-1)*N+i, 1] = t

    }
}
id and time_id have both been created before the loop begins with the dimensions [N*T, 1]
T is 50 and sample size N differs, but let's say it's 100 this time.

The time_id column vector looks exactly like I want, with the first 100 observations = 1, the next 100 = 2 and etc.

The column vector id doesn't, however, and I can't seem to grasp why.
In it, the first 50 observations are listed as id = 1, the next 50 as id = 2, and so on until observations 4951-5000, where id = 100.

I would like the first 100 observations, corresponding to the 100 people in the sample at T = 1, to have the id from 1-100 and then repeat until end.
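For reference, a loop-free way to build the same two vectors (a sketch in Mata; it assumes N and T are already defined and uses Mata's Kronecker product operator #):
Code:
id      = J(T, 1, 1) # (1::N)      // 1,2,...,N repeated T times
time_id = (1::T) # J(N, 1, 1)      // N ones, then N twos, and so on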

Any help is appreciated

cgmwildboot and esttab

Hello everyone,

I am running a pooled OLS with 60 observations which belong to 15 clusters. I read that I can't use the vce(cluster varname) option since my clusters are too few, so I ran cgmwildboot, a user-written command which I got here: http://www.statalist.org/forums/foru...nd-cgmwildboot

my command is:

cgmwildboot y x1 x2 x3 x4, cluster(nation) bootcluster(nation)

Some of the output coefficients are significant at the 5% and 10% levels. But when I use the command esttab, r2 ar2 p starlevels( * 0.10 ** 0.05 *** 0.010)
to produce the table of results, it shows neither p-values nor significance stars.

Am I doing something wrong?
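For reference, one way to see which results the command leaves behind for esttab to pick up (esttab can only display statistics that the estimation command actually stores in e()); this is only a diagnostic sketch:
Code:
cgmwildboot y x1 x2 x3 x4, cluster(nation) bootcluster(nation)
ereturn list               // scalars, macros, and matrices stored in e()
matrix list e(b)           // coefficients
capture matrix list e(V)   // variance matrix, if the command stores one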

I really appreciate your help. Thank you!

Best,
Carolina

Chow test, Panel Data, Stata 13

Dear all,

I have panel data and want to examine the time trend of Internet expenses for the DAX30 companies from 2000-2016. After running "regress" on a time trend for each company, I suspect there is a structural break in 2012. I know how to check for this using the Chow test, and I could do it for every single company, but I still would not know whether the structural break is significant overall if not all companies show it. I was wondering whether Stata has a tool for this.
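For concreteness, a sketch (untested, placeholder variable names; company is assumed to be a numeric firm identifier) of one pooled version of the Chow test: interact the time trend with a post-2012 indicator, control for company effects, and jointly test the break terms across all companies at once.
Code:
gen post2012 = year >= 2012
regress internet_exp c.year##i.post2012 i.company
testparm i.post2012 i.post2012#c.year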

I hope I have been clear enough.


Thank you!

PSM seems to depend on the outvar in teffects psmatch

Hi Everyone,

My problem is the following: I cannot use pscore for matching (I cannot download it to the server), so I chose teffects psmatch to tag the observations that are matched. teffects psmatch requires an outcome variable, so I tested whether I end up with the same final sample if I change the outcome variable. To my great surprise, the final sample -- the observations that are matched -- was different. Does anyone have an explanation for this? Given that I do not want to estimate any ATE in this step and only need the PSM to get a balanced sample, this phenomenon troubles me. Do you have a suggestion for how to get around this problem?
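One possible workaround (a sketch, untested; variable names are placeholders) is to estimate the propensity score directly, which does not involve the outcome at all, and then build the balanced sample from that score by hand:
Code:
logit treated x1 x2 x3
predict pscore, pr
* inspect overlap before doing any matching on pscore
summarize pscore if treated == 1
summarize pscore if treated == 0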

Using "statsby" to present multiple panel results

Hi all,

I used the following commands to produce multiple panel residual unit root test results.

//predict the residuals for all panels
su group, meanonly
forvalues i = 1/`r(max)' {
    regress ln_TX_R ln_REER ln_RGDP_P gfc gfc_ln_RGDP_P t if group == `i'
    predict res`i' if group == `i', res
}

//DF unit root test of the residuals [null: unit root (error term not stationary) = no cointegration]
su group, meanonly
forvalues i = 1/`r(max)' {
    dfuller res`i' if group == `i'
}


And the results are as below.
I would like to have the test statistic and the 1%, 5%, and 10% critical values all in one table, or exported to Excel for further analysis.
I know someone posted before about using the command "statsby".

I tried the following, but to no avail:

statsby, by(group): dfuller res`i' if group==`i'

and sometimes I get the following message:
no; data in memory would be lost

Any help will be greatly appreciated.
Thanks
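One workaround might be to collect the stored results from each dfuller call with postfile and export the resulting dataset (a sketch, untested; run `return list` after one dfuller to confirm exactly which results, e.g. the test statistic r(Zt) and the MacKinnon p-value r(p), your Stata version stores):
Code:
tempname memhold
postfile `memhold' group zt pvalue using dfuller_results, replace
su group, meanonly
forvalues i = 1/`r(max)' {
    quietly dfuller res`i' if group == `i'
    post `memhold' (`i') (r(Zt)) (r(p))
}
postclose `memhold'

use dfuller_results, clear
export excel using dfuller_results.xlsx, firstrow(variables) replace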

saving coefficient

Dear all,

I have the following Stata command:

by portfolio month: regress return mktrf smb hml wml

and I want Stata to generate a column with the e(r2) of each portfolio-month regression (portfolio is 1 or 2). For example:
Portfolio Month e(r2)
1 2000m2 0,40
1 2000m2 0,40
1 2000m2 0,40
1 2000m3 0,34
1 2000m3 0,34
...
2 2000m2 0,23
2 2000m2 0,23
2 2000m2 0,23
2 2000m2 0,23
2 2000m3 0,37
2 2000m3 0,37
..


Does anyone know how this can be done?
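One pattern that might do this (a sketch, untested; it assumes each portfolio-month has enough observations for the regression) is to run the regressions with statsby on a copy of the data and merge the resulting R-squared back in:
Code:
preserve
statsby r2 = e(r2), by(portfolio month) clear: regress return mktrf smb hml wml
tempfile r2file
save `r2file'
restore
merge m:1 portfolio month using `r2file', nogenerate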

customizing legend text for stacked bar graphs

dear all,

I am graphing stacked bars by a category and somehow cannot find a way to customize the text of the legend. Say the dataset looks something like this:

country thing1 thing2
A .5 .5
B .3 .7

graph bar thing1 thing2, stack over(country)

It will then say "mean of thing1" and "mean of thing2" in the legend. I know how to turn the legend off completely. However, I would rather either use the variable names directly (without "mean of"), the variable labels, or custom text.

I know this is probably very easy to solve, but I couldn't find a solution despite looking for a long time now.
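For reference, a sketch of one way to override the legend text with legend(label()), which replaces the text shown for each key:
Code:
graph bar thing1 thing2, stack over(country) ///
    legend(label(1 "thing1") label(2 "thing2"))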
Best,
Swati

Foreach

Dear Experts

This may be a very basic question, but I have been struggling with foreach for several days and would love to get support from you.
I tried to look up answers and advice, but most of them refer to file names that differ only by a number,
for example code like: forvalues n = 2000/2015

In my case, I saved csv data set using different names as follows.
TB_coepidemics of TB and HIV_country_WHO
TB_coepidemics of TB and HIV_income groups_WHO
TB_coepidemics of TB and HIV_region_WHO
TB_drug resistant_country_WHO
TB_drug resistant_income groups_WHO
TB_drug resistant_region_WHO
TB_incidence_country_WHO
TB_incidence_income groups_WHO
TB_incidence_region_WHO
TB_new case_country_WHO
TB_new case_income groups_WHO
TB_new case_region_WHO
.....

1. My first task is to import each csv file and save it as a .dta file.
I used the code below. If I don't trim the name with substr, the file is saved as ....csv.dta, but I would like the name to end in .dta only.

global FOLDER `"TB/Data/STATA"'

local csvdir : dir "TB/Data/CSV" files "TB_*.csv"

foreach file of local csvdir {

    * insheet reads the first row as variable names by default
    insheet using "TB/Data/CSV/`file'", clear

    * strip the trailing ".csv" so the saved file is not named ....csv.dta
    local filename = substr("`file'", 1, strlen("`file'") - 4)

    save `"$FOLDER/`filename'.dta"', replace

}


2. My second task would be appending the 3 files in the same group.
For example, one TB_coepidemics of TB and HIV_all file for:

TB_coepidemics of TB and HIV_country_WHO
TB_coepidemics of TB and HIV_income groups_WHO
TB_coepidemics of TB and HIV_region_WHO

and one TB_drug resistant_all file for:

TB_drug resistant_country_WHO
TB_drug resistant_income groups_WHO
TB_drug resistant_region_WHO


3. Then merge these appended files using the variables country and year.
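For concreteness, a sketch of what I mean by tasks 2 and 3 (untested; the group prefixes are taken from the file names above, the *_all names are placeholders, and the final merge assumes country and year uniquely identify rows):
Code:
local groups `" "TB_coepidemics of TB and HIV" "TB_drug resistant" "TB_incidence" "TB_new case" "'
foreach g of local groups {
    use `"$FOLDER/`g'_country_WHO.dta"', clear
    append using `"$FOLDER/`g'_income groups_WHO.dta"'
    append using `"$FOLDER/`g'_region_WHO.dta"'
    save `"$FOLDER/`g'_all.dta"', replace
}

* task 3, e.g. for the first two groups:
use `"$FOLDER/TB_coepidemics of TB and HIV_all.dta"', clear
merge 1:1 country year using `"$FOLDER/TB_drug resistant_all.dta"', nogenerate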


Could you please advise me how I can do this repetitive work using foreach?

Thank you so much.

Cross-level interaction with binary outcome

Hello,

I would like to hear any advice on the best way to specify the following requirements in Stata.

I have a binary outcome and a level 1 predictor of interest. All observations come from one of 12 clusters. I want to see whether the effect of my level 1 predictor changes in response to a continuous level 2 predictor, i.e., a cross-level interaction. I understand that this introduces a random component into the level 1 effect and should be a straightforward specification for a continuous outcome.

However, since my outcome is binary, should I analyze this effect using xtlogit with random effects, or would a GEE/population-averaged specification make more sense?
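For concreteness, the random-effects version of the cross-level interaction would look something like this (a sketch, untested; y is the binary outcome, x the level 1 predictor, z the continuous level 2 predictor, and cluster_id identifies the 12 clusters). It uses melogit rather than xtlogit because melogit allows a random slope on x:
Code:
melogit y c.x##c.z || cluster_id: x, covariance(unstructured)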

Best,
John L.

How to make and interpret bivariate statistics for population survey analysis?

$
0
0
Dear experts

Regarding statistics for a population survey, could you please tell me which of the syntaxes below to use for bivariate analysis [chi square], and what the difference in meaning between them is:

Code:
1.
svy: tabulate sex malaria
and output here :
Number of strata   =         1                 Number of obs     =     259,885
Number of PSUs     =     4,418                 Population size   =  30,152,652
                                               Design df         =       4,417
 
-------------------------------
gender of |
responden |       malaria     
ts        |    no    yes  Total
----------+--------------------
     male | .4744  .0185  .4929
   female | .4909  .0162  .5071
          |
    Total | .9653  .0347      1
-------------------------------
  Key:  cell proportion
 
  Pearson:
    Uncorrected   chi2(1)         =   58.3020
    Design-based  F(1, 4417)      =   49.6352     P = 0.0000
Code:
2.
.  svy: tabulate sex malaria, row
and output here :
 (running tabulate on estimation sample)
 
Number of strata   =         1                 Number of obs     =     259,885
Number of PSUs     =     4,418                 Population size   =  30,152,652
                                               Design df         =       4,417
 
-------------------------------
gender of |
responden |       malaria     
ts        |    no    yes  Total
----------+--------------------
     male | .9625  .0375      1
   female |  .968   .032      1
          |
    Total | .9653  .0347      1
-------------------------------
  Key:  row proportion
 
  Pearson:
    Uncorrected   chi2(1)         =   58.3020
    Design-based  F(1, 4417)      =   49.6352     P = 0.0000
Code:
3.
. svy linearized : tabulate sex  malaria, obs row percent ci

and output here :
 (running tabulate on estimation sample)
 
Number of strata   =         1                 Number of obs     =     259,885
Number of PSUs     =     4,418                 Population size   =  30,152,652
                                               Design df         =       4,417
 
-------------------------------------------------------
gender of |
responden |                   malaria                 
ts        |            no            yes          Total
----------+--------------------------------------------
     male |         96.25          3.746            100
          | [96.01,96.48]  [3.518,3.987]              
          |       1.2e+05           5595        1.3e+05
          |
   female |          96.8          3.198            100
          | [96.57,97.02]  [2.979,3.431]              
          |       1.3e+05           4971        1.3e+05
          |
    Total |         96.53          3.468            100
          | [96.31,96.74]  [3.257,3.692]              
          |       2.5e+05        1.1e+04        2.6e+05
-------------------------------------------------------
  Key:  row percentage
        [95% confidence interval for row percentage]
        number of observations
 
  Pearson:
    Uncorrected   chi2(1)         =   58.3020
    Design-based  F(1, 4417)      =   49.6352     P = 0.0000
How do I obtain an odds ratio for a cross-sectional survey design? Should I write syntax for a prevalence ratio, or may I take the odds ratio directly from the syntax below?


Code:
4.
. svy linearized : logistic sex malaria

and output here :
 (running logistic on estimation sample)
 
Survey: Logistic regression
 
Number of strata   =         1                 Number of obs     =     259,885
Number of PSUs     =     4,418                 Population size   =  30,152,652
                                               Design df         =       4,417
                                               F(   1,   4417)   =       49.54
                                               Prob > F          =      0.0000
 
------------------------------------------------------------------------------
             |             Linearized
         sex | Odds Ratio   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     malaria |   .8488294   .0197667    -7.04   0.000     .8109481    .8884803
       _cons |   1.034818   .0042681     8.30   0.000     1.026484    1.043219
------------------------------------------------------------------------------
Based on the tables above [chi square and binary logistic]:

The sex variable is coded male = 0 and female = 1.
Malaria prevalence differs by sex: males are more likely to have malaria than females (1.85% males versus 1.62% females, P = 0.000). Based on the odds ratio (OR), females have 0.85 times the odds of getting malaria compared with males (the reference category).

How do I interpret an odds ratio less than 1 in a logistic regression?
May I write that males have 1/0.85, or about 1.2, times the odds of getting malaria compared with females?

or

The odds of malaria in males are lower by (1 - 0.85) = 15% compared with females? In general, for each unit increase in the predictor the odds decrease by a multiple of (1 - OR).
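For what it is worth, a sketch of two related specifications (untested on these data; both assume malaria, rather than sex, is the outcome of interest): the first is the conventional direction for the logistic model, and the second is one commonly suggested way to estimate a prevalence ratio rather than an odds ratio in a cross-sectional survey.
Code:
* odds ratio with malaria as the outcome
svy linearized : logistic malaria i.sex

* prevalence ratio via a log-binomial GLM (may fail to converge)
svy linearized : glm malaria i.sex, family(binomial) link(log) eform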


Thank you in advance for your reply


Sincerely yours,



Hamzah

Prefix for first observation in panel?

I have three variables: id, year, and effect. After using xtset id year, I want to calculate, for each id, the difference between the value of effect in each year and its value in the first year. Can I do this using lag prefixes? The help file for tsvarlist didn't provide an answer (as far as I can tell).
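For reference, a subscripting sketch of the quantity in question (it assumes the first year within each id is the one to difference against):
Code:
bysort id (year): gen diff = effect - effect[1]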

Sean

cdsimeq individual panel effects with probit

Hi,
I'm trying to estimate a simultaneous equation system, say:
y=f(X1 X2 X3 X4) ==> individual effects model with y continuous
X1= f(y X2 X4 X5) ==> probit model with X1 binary

I know that "cdsimeq" in STATA do such thing however, I'm not sure it's suitable for panel specific individual effects. It simply run OLS for continuous variable and Probit for binary.
My question is, what program is suitable to my model? is "xi: reg3" command suitable? As I know reg3 is suitable when the to endogeneous variables are continuous!!!!

HELP please

network map

Dear Stata users,
I am doing a network meta-analysis.
I produced a network map, but I want to include the odds ratio and 95% CI beside the plot for each comparison:
Group counselling Vs No contact
Group counselling Vs Self help
Group counselling Vs Individual counselling
No contact Vs Self help
Self help Vs Individual counselling
Individual counselling Vs No contact

like in the attachment.
I used the example smoking data set.

Please advise me how I can develop this kind of network map plot.


Thanks
sugan
Code:
 
use smoking.dta, clear
      study     d      n                      trt  
        1     9    140               No contact  
        1    23    140   Individual counselling  
        1    10    138        Group counselling  
        2    11     78                Self help  
        2    12     85   Individual counselling  
        2    29    170        Group counselling  
        3    79    702               No contact  
        3    77    694                Self help  
        4    18    671               No contact  
        4    21    535                Self help  
        5     8    116               No contact  
        5    19    146                Self help  
        6    75    731               No contact  
        6   363    714   Individual counselling  
        7     2    106               No contact  
        7     9    205   Individual counselling  
        8    58    549               No contact  
        8   237   1561   Individual counselling  
        9     0     33               No contact  
        9     9     48   Individual counselling  
       10     3    100               No contact  
       10    31     98   Individual counselling  
       11     1     31               No contact  
       11    26     95   Individual counselling  
       12     6     39               No contact  
       12    17     77   Individual counselling  
       13    95   1107               No contact  
       13   134   1031   Individual counselling  
       14    15    187               No contact  
       14    35    504   Individual counselling  
       15    78    584               No contact  
       15    73    675   Individual counselling  
       16    69   1177               No contact  
       16    54    888   Individual counselling  
       17    64    642               No contact  
       17   107    761   Individual counselling  
       18     5     62               No contact  
       18     8     90   Individual counselling  
       19    20    234               No contact  
       19    34    237   Individual counselling  
       20     0     20               No contact  
       20     9     20        Group counselling  
       21    20     49                Self help  
       21    16     43   Individual counselling  
       22     7     66                Self help  
       22    32    127        Group counselling  
       23    12     76   Individual counselling  
       23    20     74        Group counselling  
       24     9     55   Individual counselling  
       24     3     26        Group counselling  

network setup d n, studyvar(study) trtvar(trt)
network table
network pattern
network map

network meta consistency

Merging a Multiple-Obs.-Per-Year Dataset with a Country-Year Dataset

Hello Statalisters,


I apologize if this question has already been asked (and solved, hopefully). I have two datasets that I want to merge, but I have a problem stemming mainly from the way the data are structured.

In one dataset I have what most datasets look like:

Country year score
X 1998 5
X 1999 4
Y 1998 5
Y 1999 3

and so forth. The other dataset, however, has multiple observations per country-year:

Country Year aid agency
X 1999 2000 UN
X 1999 3000 World Bank
X 1999 3500 IMF


The question now is: how do I merge the two datasets? Since I cannot xtset the latter dataset, the conventional way of merging datasets has failed me.
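For reference, a sketch of a many-to-one merge (untested; file names are placeholders), which does not require either dataset to be xtset:
Code:
use aid_data, clear
merge m:1 country year using scores, keep(match master) nogenerate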

Thanks for your time,
BA

Cohort analysis in hours of work model

Hi,
I am running a linear regression model using pooled cross-section data where hours of work is the dependent variable. I have demographics as independent variables (age, age_sq, b1.native, b1.white, b1.married, children), time dummies (20 years, from 1994 to 2014), and a year-of-birth variable in 5-year bands (this is my cohort variable, which I generated by subtracting age from the survey year).

In the past I ran the model with no cohort variable, showing changes in hours of work over the life cycle in 3 different years, which provides the time effect only. But I have read that it is more appropriate to follow a cohort approach.

If I run the following code:
reg hours $xvars $timevars

How do I include the cohort variable in 5-year bands?
I tried the following:
reg hours $xvars $timevars i.cohort

But the output I get is difficult to interpret. I am interested in comparing the hours worked over the life cycle in 3 years - 1994 (base year), 2004 and 2014 - to see how the hours changed over time.
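For reference, a sketch of how the comparison might be pulled out of such a model with factor variables and margins (untested; variable names follow the output below and are partly assumed, including a single numeric year variable in place of the year dummies):
Code:
reg hours c.age##c.age i.native i.white_race i.married_cohabit      ///
    children_05 children_0509 children_1015 i.live_london_se        ///
    i.cohort_birth5 i.year
margins year            // predicted hours by survey year
margins cohort_birth5   // predicted hours by birth cohort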

depvar (hours) Coef. Std. Err. t P>t [95% Conf. Interval]

age 2.176652 .3098079 7.03 0.000 1.569278 2.784027
age_sq -.0293843 .0027635 -10.63 0.000 -.0348021 -.0239665

native
[0] Other 1.653617 .6987043 2.37 0.018 .2838153 3.023418

white_race
[0] Other 3.317547 .7791355 4.26 0.000 1.790061 4.845033

married_cohabit
[0] Other -3.163612 .7469612 -4.24 0.000 -4.628021 -1.699204

children_05 -1.183119 .6823782 -1.73 0.083 -2.520913 .1546752
children_0509 -1.238106 .5847977 -2.12 0.034 -2.384595 -.0916174
children_1015 1.435132 .5602626 2.56 0.010 .3367438 2.53352

live_london_se
[0] Outside .8404913 .4815423 1.75 0.081 -.1035666 1.784549

cohort_birth5
1930 17.44648 14.50962 1.20 0.229 -10.99945 45.89242
1935 21.26571 14.53338 1.46 0.143 -7.226798 49.75823
1940 28.39575 14.6183 1.94 0.052 -.2632477 57.05474
1945 29.28223 14.73948 1.99 0.047 .3856644 58.1788
1950 31.81939 14.90086 2.14 0.033 2.606436 61.03234
1955 31.03861 15.11305 2.05 0.040 1.409649 60.66757
1960 30.5975 15.35071 1.99 0.046 .5026206 60.69237
1965 25.27036 15.60428 1.62 0.105 -5.321634 55.86235
1970 23.51994 15.90992 1.48 0.139 -7.671263 54.71114
1975 22.50421 16.26165 1.38 0.166 -9.376565 54.38499
1980 17.87636 16.62613 1.08 0.282 -14.71898 50.47169
1985 27.17076 17.17711 1.58 0.114 -6.504746 60.84627
1990 17.97325 20.08565 0.89 0.371 -21.40443 57.35094

year1995 -4.574128 1.220225 -3.75 0.000 -6.966365 -2.18189
year1996 -7.860776 1.253371 -6.27 0.000 -10.318 -5.403556
year1997 -12.32974 1.378639 -8.94 0.000 -15.03254 -9.626932
year1998 -12.43727 1.445566 -8.60 0.000 -15.27128 -9.603254
year1999 -10.58144 1.546404 -6.84 0.000 -13.61315 -7.549737
year2000 -13.44297 1.66281 -8.08 0.000 -16.70289 -10.18305
year2001 -14.23594 1.772534 -8.03 0.000 -17.71097 -10.76091
year2002 -13.89415 1.811528 -7.67 0.000 -17.44563 -10.34267
year2003 -19.15668 1.951345 -9.82 0.000 -22.98227 -15.33109
year2004 -16.93614 2.095543 -8.08 0.000 -21.04443 -12.82786
year2005 -18.61459 2.157636 -8.63 0.000 -22.84461 -14.38457
year2006 -17.00503 2.246228 -7.57 0.000 -21.40874 -12.60133
year2007 -15.83526 2.364347 -6.70 0.000 -20.47053 -11.19999
year2008 -15.71916 2.550629 -6.16 0.000 -20.71964 -10.71869
year2009 -14.89414 2.707031 -5.50 0.000 -20.20124 -9.587039
year2010 -15.61056 2.854735 -5.47 0.000 -21.20723 -10.01388
year2011 -17.38405 2.964909 -5.86 0.000 -23.19672 -11.57138
year2012 -15.69827 3.098829 -5.07 0.000 -21.77349 -9.623052
year2013 -15.76747 3.272106 -4.82 0.000 -22.18239 -9.352545
year2014 -15.74998 3.429686 -4.59 0.000 -22.47384 -9.026118
_cons -.2902722 18.744 -0.02 0.988 -37.03767 36.45712

R_sq 0.3209
Adj R_sq 0.3146
N 4,571

