Channel: Statalist

Interaction terms between a dummy and continuous variable

Hi all,

I am running a fixed effects model on a data set with 18 states from 1970 to 2018.
State-wise agricultural GDP is my dependent variable, and my independent variables include a dummy that takes the value 1 if a heat wave occurred and 0 otherwise. I also have irrigation as an independent regressor.
To see how effective irrigation is in reducing the adverse effects of heat waves on agricultural GDP, I run the following model:

AGRI_GDP = B0 + B1*HEAT_WAVE + B2*IRRIGATION + B12*HEAT_WAVE*IRRIGATION + B3*OTHER_CONTROLS + e
I see some research papers (for example Dell, M., Jones, B.F. and Olken, B.A., 2012, "Temperature Shocks and Economic Growth: Evidence from the Last Half Century," American Economic Journal: Macroeconomics, 4(3), 66-95, https://scholar.harvard.edu/files/de...emperature.pdf) that do not include the base dummy term (B1*HEAT_WAVE in my case) when running similar models; instead they include only the base term of the continuous variable (IRRIGATION in my case) and the interaction between the continuous and dummy variables (HEAT_WAVE*IRRIGATION).

Which of the two approaches would be correct if the objective was to see the efficiency of irrigation in reducing the adverse effects of heat waves on agricultural GDP?
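In case it helps, the full-interaction specification can be written with Stata's factor-variable notation, which keeps the main effects and the interaction consistent automatically; a sketch with placeholder variable names:

```stata
* placeholder names; ## adds both main effects plus the interaction
xtset state year
xtreg agri_gdp i.heat_wave##c.irrigation other_controls, fe vce(cluster state)
```

Dropping B1 then amounts to replacing i.heat_wave##c.irrigation with c.irrigation plus the product term alone, which constrains the heat-wave effect to be zero when irrigation is zero.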


Thank you for your time.

Scatterplot only displaying means


Hello everyone,

I have the following problem: I have produced scatterplots displaying median income from 1997 to 2012, with a visual cutoff in 2004, for a treatment and a control group. Since this gets messy as soon as I include multiple groups, I would like to reduce the points before and after 2004 to the means of the medians, so that I display only the pre- and post-mean for each group. I have already generated variables with the means, but I am not able to display both means with a line (2004) in the middle.

How I have produced the scatterplot with all medians:

twoway (scatter med_inc_t year, xlabel(1997(3)2012) xtitle("Years") ytitle("Median Income in EUR") title("Wage Employment: Treated vs. Control") xline(2004, lcolor (red)) connect(line) lcolor(black) mcolor(black) legend(label(1 "Treated"))) ///
(scatter med_inc_c year , connect(line) lpattern(dash) lcolor(black) mcolor(black) legend(label(2 "Control")) msymbol(Oh))

How I have generated the mean of the medians pre and post:

egen mean_inc_male_t_post=mean(med_inc_male_t) if year>2004
egen mean_inc_male_t_pre=mean(med_inc_male_t) if year<2004
egen mean_inc_male_c_post=mean(med_inc_male_c) if year>2004
egen mean_inc_male_c_pre=mean(med_inc_male_c) if year<2004
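One way to display those means as flat segments around the cutoff is to plot each mean variable over its own subperiod; a sketch, reusing the variable names from the -egen- commands above:

```stata
* pre- and post-2004 means as flat lines, with the cutoff marked
twoway (line mean_inc_male_t_pre year if year<2004, lcolor(black)) ///
       (line mean_inc_male_t_post year if year>2004, lcolor(black)) ///
       (line mean_inc_male_c_pre year if year<2004, lpattern(dash) lcolor(black)) ///
       (line mean_inc_male_c_post year if year>2004, lpattern(dash) lcolor(black)), ///
       xline(2004, lcolor(red)) xlabel(1997(3)2012) xtitle("Years") ///
       ytitle("Median Income in EUR") legend(order(1 "Treated" 3 "Control"))
```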

Thanks for your help!

collinearity

I'm not sure I pasted the result here the way you described, but the output is below.
I ran into another issue, with collinearity. As you can see, my model is a simple difference model of panel data with a trend, but this time the trend is omitted in every period. I analyzed the difference model for two time periods based on 2014, so the time gaps are 1, 2, 3, 4 for 2015, 2016, 2017, 2018 respectively, and I used the time gap as the trend variable.
Is the problem in how I built the model, or in my code?
Thank you in advance for taking the time to answer.


drop in 39/40
(2 observations deleted)

. encode occupation, gen(occ)


sort occ, stable

. by occ: gen time=_n

. tsset occ time
panel variable: occ (strongly balanced)
time variable: time, 1 to 2
delta: 1 unit

. gen lemp=log(근로자수)

. reg D.lemp trend D.lnMWRIIWMW

note: trend omitted because of collinearity

Source | SS df MS Number of obs = 19
-------------+---------------------------------- F(1, 17) = 0.41
Model | .016417834 1 .016417834 Prob > F = 0.5315
Residual | .684092334 17 .040240726 R-squared = 0.0234
-------------+---------------------------------- Adj R-squared = -0.0340
Total | .700510168 18 .038917232 Root MSE = .2006

------------------------------------------------------------------------------
D.lemp | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
trend | 0 (omitted)
|
lnMWRIIWMW |
D1. | -.0059717 .0093492 -0.64 0.532 -.0256969 .0137534
|
_cons | .083711 .0534071 1.57 0.135 -.0289683 .1963902
------------------------------------------------------------------------------
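One way to see why -trend- is dropped is to check whether it varies within the estimation sample; a regressor that takes a single value there is collinear with the constant term and gets omitted. A sketch:

```stata
* if trend is constant among the observations actually used,
* it is collinear with _cons and Stata omits it
reg D.lemp trend D.lnMWRIIWMW
summarize trend if e(sample)
```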

"Data in memory have changed" warning

Does anyone know whether it's possible to disable the pop-up interface prompt that says "Data in memory have changed. Do you want to save the changes before exiting?" Since I always save files as needed using dofile syntax, I've always found this warning box somewhat annoying and unnecessary (since my answer always is "Don't save"). I'm guessing it can't be disabled from appearing, but thought I'd ask around to be sure.
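I don't believe the dialog itself can be switched off, but -exit, clear- (see help exit) quits Stata without the prompt even when the data in memory have changed; a one-line sketch:

```stata
* quits Stata without the save-changes prompt, discarding unsaved data
exit, clear
```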

Dealing with multiple or statements

I am mapping ICD-9 codes to CCS, and the coding can be quite inefficient: some CCS categories include hundreds of ICD-9 codes, and it is even worse for ICD-10. Is there a better way to write conditions with many -or- statements? Currently I am doing the following:

Code:
replace PRCCS =9    if proc_p ==0120 |proc_p ==0129 |proc_p ==016 |proc_p ==0201 ///
|proc_p ==0202 |proc_p ==0203 |proc_p ==0204 |proc_p ==0205 |proc_p ==0206 ///
|proc_p ==0207 |proc_p ==0211 |proc_p ==0212 |proc_p ==0213 |proc_p ==0214 ///
|proc_p ==022 |proc_p ==0221 |proc_p ==0222 |proc_p ==0291 |proc_p ==0292 ///
|proc_p ==0293|proc_p ==0294 |proc_p ==0296 |proc_p ==0299 |proc_p ==0301 ///
|proc_p ==031 |proc_p ==0329 |proc_p ==034 |proc_p ==0351 |proc_p ==0352 |proc_p ==0353 ///
|proc_p ==0359 |proc_p ==036 |proc_p ==0371 |proc_p ==0372 |proc_p ==0379 |proc_p ==0397 ///
|proc_p ==0398 |proc_p ==0399 |proc_p ==0401 |proc_p ==0402 |proc_p ==0403 |proc_p ==0404 ///
|proc_p ==0405 |proc_p ==0406 |proc_p ==0407 |proc_p ==042 |proc_p ==043 |proc_p ==0441 ///
|proc_p ==0442 |proc_p ==045 |proc_p ==046 |proc_p ==0471 |proc_p ==0472 |proc_p ==0473 ///
|proc_p ==0474 |proc_p ==0475 |proc_p ==0476 |proc_p ==0479 |proc_p ==0491 |proc_p ==0492 ///
|proc_p ==0493 |proc_p ==0499 |proc_p ==050 |proc_p ==0521 |proc_p ==0522 |proc_p ==0523 ///
|proc_p ==0524 |proc_p ==0525 |proc_p ==0529 |proc_p ==0581 |proc_p ==0589 |proc_p ==059 ///
|proc_p ==1761 |proc_p ==8053 |proc_p ==8054 |proc_p ==8458 |proc_p ==8694 |proc_p ==8695 ///
|proc_p ==8696 |proc_p ==8697 |proc_p ==8698
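One common alternative is to keep the codes in a local macro and loop over them (or use -inlist()- in chunks; it accepts up to 255 numeric arguments per call). A sketch with the code list shortened for illustration; note that if proc_p is numeric, leading zeros are meaningless (0120 is just 120), so ICD codes are usually best stored as strings, in which case compare against "`c'" in quotes:

```stata
* loop over a list of codes; easier to maintain than a wall of "or"s
local ccs9 0120 0129 016 0201 0202 0203
foreach c of local ccs9 {
    replace PRCCS = 9 if proc_p == `c'
}
```

For the full ICD-to-CCS mapping, building a small crosswalk dataset and using -merge- is often cleaner still.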

test equality of coefficients from different models estimated by user-written commands

Dear All,

I want to test equality of regression coefficients from two models. My first model is a basic wage equation, estimated by OLS (the -reg- command). My second model is a wage equation corrected for selectivity, where selection is estimated by -mlogit- (labour market status has three categories), using the user-written command -selmlog- by François Bourguignon, Martin Fournier and Marc Gurgand. Their paper and help file are attached. The problem is that I don't know how to save the estimates after -selmlog- and use them with, e.g., -suest-. If I type -estimates store-, I get the message "last estimation results not found, nothing to store".

I tried also to save coefficients as matrix, and then test equality of matrix elements from two matrices, but it does not work.

selmlog wage $employment, sel(labour market status=$selection) dmf(2) mlop(b(3)) showmlogit

matrix list e(b)
mat bsel=e(b)

reg wage $employment
mat bmincer=e(b)

test bmincer[1,1]=bsel[1,1]

I get the following result. It is obvious that something is wrong.

( 1) = .0273823
Constraint 1 dropped

F( 0, 3447) = .
Prob > F = .
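Since -test- works on the current estimation results rather than on stored matrices, one fallback (a sketch only, and only valid if selmlog leaves e(V) behind and the two estimates can be treated as independent) is a hand-rolled z-test from the stored coefficient and variance matrices:

```stata
* run right after -selmlog-:
mat bsel = e(b)
mat Vsel = e(V)

* run right after -reg wage $employment-:
mat bols = e(b)
mat Vols = e(V)

* z-test of equality of the first coefficients, treating the two
* estimates as independent (an approximation)
scalar z = (bols[1,1] - bsel[1,1]) / sqrt(Vols[1,1] + Vsel[1,1])
display "z = " z ", two-sided p = " 2*normal(-abs(z))
```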
Could you please help me?

Best wishes,
Aleksandra

Does -xtmixed- use the full data set?

Dear all,

In textbooks, we are told that mixed-effects models use the "full" data set.
For example, say we have 100 participants and repeated measurements (baseline, 12 weeks and 24 weeks).
The beauty of mixed-effects models is that they can use data from participants who have only one time point (e.g. baseline only, n = 10), participants who have all time points (n = 50), and participants who have only the last measurement (n = 40).

However, in Stata, what we appear to get is listwise deletion of all participants with any missing data.


Code:
* generate a simple dataset. lazy coding. sorry.
* treatment (1 or 0)


* outcome is continuous
* time is categorical (0,12 and 24 weeks)

***************** simulation of data ********************
set seed 12345
clear
set obs 100
gene treatment = cond(_n>50,1,0)
gene id = _n
gene outcome = rnormal(100,10)
gene time = 0
tempfile baseline followup1 followup2
save `baseline', replace
clear
set obs 100
gene treatment = cond(_n>50,1,0)
gene id = _n
gene outcome = cond(treatment==1,rnormal(50,10), rnormal(100,10))
gene time = 12
save `followup1', replace
clear
set obs 100
gene treatment = cond(_n>50,1,0)
gene id = _n
gene outcome = cond(treatment==1,rnormal(25,10), rnormal(100,10))
gene time = 24
save `followup2', replace
clear
use `baseline'
append using `followup1'
append using `followup2'
***************** end of simulation  ********************


************ ANALYSIS 1: WITH THE FULL DATA SET *********************
xtmixed outcome treatment#time || id:, var reml
*! generate missing outcome data for 5% of the sample (MCAR)
replace outcome = . if runiform()>0.95
************ ANALYSIS 2:  WITH MISSING DATA *************************
xtmixed outcome treatment#time || id:, var reml
drop if outcome==.
************ ANALYSIS 3: MANUAL LIST WISE DELETION ******************
xtmixed outcome treatment#time || id:, var reml

Does it make sense to say that Analysis 2 uses the full data set? Any suggestions on how to use the full data set?
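One quick check is e(N): -xtmixed- drops only rows with missing values, not whole participants, so Analyses 2 and 3 should report the same number of observations used, while Analysis 1 uses all 300 rows. A sketch:

```stata
* e(N) reports the rows actually used in estimation
quietly xtmixed outcome treatment#time || id:, var reml
display "observations used: " e(N)
```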

Thank you so much for any comment or reference on that topic.

All the best,

Tiago

Maximum CSV size in Stata SE and what to do about 'fat' data

I have a dataset with 4600 observations of 3500 variables. The .xlsx file is 90 MB and the .csv version of it is 75 MB.

I'm currently running Stata 13 IC, which has served me fine in the past. I'm running into an issue trying to import my data, however, since the maximum number of variables in Stata IC is 2,048. I also ran into the 40 MB maximum import size for .xlsx files.

I'm thinking about purchasing Stata 16 SE to let me work with this bigger data set. I just wanted to check with people here: would Stata 16 SE handle my dataset, at least in .csv format? Would it handle it in .xlsx format?
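For what it's worth, Stata/SE raises the variable limit to 32,767 (via -set maxvar-, which IC does not allow), and the .csv route through -import delimited- avoids the .xlsx import size cap. A sketch, with a hypothetical filename:

```stata
* Stata/SE or MP only; -set maxvar- is not available in IC
clear
set maxvar 5000
import delimited using "mydata.csv", varnames(1)
describe, short
```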

-estimates use-, error message when reading stored estimates across Stata versions

I have a model-estimates file created with Stata 15.1. My co-worker is trying to read it with -estimates use-, but they are running Stata 14.1 and get the following error message:

> file is from a more recent version of Stata; you need to upgrade your Stata

so the model cannot be read. Are model estimates really not usable from one Stata version to the next?

Weighting

Greetings Stata community,

I am using Stata version 13. I would like to perform a chi-square test for the variables v024 and v013, both of which are categorical, with weighted results. When I entered the command (ta v024 v013, chi2 row [iweight=wt]), Stata returned "option [ not allowed". Kindly help me with the right command to use. Thank you.
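The immediate syntax problem is that the weight goes in square brackets right after the variable list, before the comma, not among the options. If I remember right, -tabulate- also refuses chi2 together with iweights even once the brackets are fixed, which is why the survey route is usually recommended for weighted tests. A sketch:

```stata
* weights come right after the varlist, before the options
tabulate v024 v013 [iweight=wt], row

* design-based alternative: a Rao-Scott corrected test
svyset [pweight=wt]
svy: tabulate v024 v013, row
```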

Vincent

graph

Hi all,

I have data on companies' performance over time and a variable that identifies uncertainty periods by state of incorporation. I would like to graph companies' performance during uncertainty periods, but the uncertainty periods vary across companies, so I am not sure of the best way to graph these data.

see example below


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte stateid int(firmid year) float(performance uncertainty)
1 10010 1994 .0894412 0
1 10010 1991 .0207386 0
1 10010 1992 .0387597 0
1 10010 1993 .0966105 0
1 10012 2002 .0738338 1
1 10012 2000 .0391909 1
1 10012 1995 .0050087 0
1 10012 2003 .0986469 1
1 10012 1997   .02295 0
1 10012 1991 .0143952 0
1 10012 1999  .008036 0
1 10012 1998 .0302348 0
1 10012 1996 .0177345 0
1 10012 1992 .0079708 0
1 10012 1993 .0124058 0
1 10012 2001 .0288672 1
1 10012 1994 .0182261 0
2 12012 1991      .22 0
2 12012 1992       .1 0
2 12012 1995      .12 0
2 12012 2000      .11 1
2 12012 2002      .25 0
2  3115 2000      .01 1
2  3115 2001      .09 0
2  3115 2002       .1 0
3  2020 1997      .08 0
3  2020 1998      .07 0
3  2020 1999      .05 0
3  2020 2000      .02 1
3  2020 2002      .07 0
3  2021 2000      .05 1
3  2021 2001       .1 1
3  2021 2002      .01 0
3  2021 2003      .09 0
3  2021 2004        0 0
3  2025 2000      .01 1
3  2025 2001     .022 1
3  2025 2002     .099 0
3  2025 2003      .02 0
3  2025 2004      .01 0
end
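One option for the example data above is a line of performance per firm, with the uncertainty years flagged, one panel per firm; a sketch, not the only sensible design:

```stata
* one panel per firm; uncertainty years marked in red
sort firmid year
twoway (line performance year) ///
       (scatter performance year if uncertainty==1, mcolor(red)), ///
       by(firmid, note("")) legend(order(2 "uncertainty year"))
```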


Thanks in advance

Have any other users of Stata under Windows had problems with the shell command disallowing UNC paths?

Fellow Statalisters

I use the very latest Stata Version 16 (dated 11 Dec 2019) under Windows 10, and all of a sudden I find that I cannot really use the shell command any more in the way that I have always routinely taken for granted in the past. This is because, when I am using Stata in a current directory which is a networked drive and I type

shell

a Windows 10 CMD window opens and tells me that I cannot use the network drive as the current folder, because UNC paths are not supported as current directories, and it changes the current directory to C:\Windows. An example of this behaviour is given below:

**** BEGINNING OF CMD OUTPUT - CUT HERE
'\\icnas4.cc.ic.ac.uk\rnewson\rnewson\projects\sme eton\pracnonparstats\ansmlatex'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
Microsoft Windows [Version 10.0.17134.1184]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Windows>
**** END OF CMD OUTPUT - CUT HERE


This seems to be saying that Windows 10 no longer does as Stata asks when a CMD window is initiated with a shell command, at least on my machine, because Windows 10 no longer allows the current folder to be a networked drive.

I have encountered this behaviour before, both when launching a CMD window from Stata using shell and when opening one from within TextPad (my favourite Windows text editor, whose Tools menu includes "Command prompt window from current folder"); that tool used to open a CMD window in the current folder but now makes a similar complaint about UNC paths not being supported. Previously, however, the behaviour would cease after I shut down my machine and restarted it, after which UNC paths seemed to be supported again. Now that no longer happens, and Windows 10 disallows UNC paths as the current folder even after a restart.

I would guess that this problem is the fault of my Microsoft Windows 10 operating environment, and not of StataCorp. And, if I do a Web search on

how-do-you-handle-cmd-does-not-support-unc-paths-as-current-directories

I get advice either to use commands called pushd and popd in my shell script or to use a Microsoft-distributed utility called PowerShell. This seems to suggest that opening a CMD window under Windows 10 to execute an ordinary CMD shell script in the current networked folder is no longer thought to be a reasonable thing to expect to be able to do. But have any other Stata users under Windows 10 encountered this problem when they try to use the shell command in a networked current folder with a UNC path? And is there a non-drastic workaround?
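For completeness, the pushd workaround mentioned above maps the UNC path to a temporary drive letter, runs the script there, and then releases the mapping; a sketch with a hypothetical path and batch-file name:

```bat
:: pushd maps \\server\share to a free drive letter and changes to it
pushd \\server\share\myfolder
mybatch.cmd
:: popd releases the temporary drive mapping
popd
```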

Best wishes

Roger

How to plot multilevel coefficients from mixed? Can you use coefplot? If not, a loop? A macro?

Hello Statalist & happy holidays!

I need to plot the curves from my mixed analysis, but I'm not quite sure what the best method is. Should I do this manually? Is there a command I can use? A loop? A macro?

The exposure is categorical so I have 3 curves to plot. Finding a way to do this automatically would be great.
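One common route after -mixed- is -margins- over the exposure (and time, if the model has it) followed by -marginsplot-, which draws one curve per exposure level automatically; a sketch with placeholder variable names:

```stata
* exposure assumed categorical with 3 levels; names are placeholders
mixed outcome i.exposure##i.time || id:
margins exposure#time
marginsplot, xdimension(time)
```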

Thank you for your insights.


Splitting sample by dependent variable

Does it ever make sense for one to split their sample by values of the dependent variable and then run a separate regression for each group?

Correlation between two matrices

Hi,

I have 4 variables called x, y, x1 and y1. I define two matrices A = [x y] and B = [x1 y1], and I want to calculate the correlation between A and B. I can do this in Matlab with the corr2 function, but I don't know whether it is possible in Stata. Thanks in advance for any help.
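For what it's worth, corr2's formula (the correlation of the two matrices' elements, each centred on its own grand mean) is a few lines of Mata; a sketch, assuming x, y, x1 and y1 are numeric variables in memory:

```stata
mata:
    A = st_data(., ("x", "y"))
    B = st_data(., ("x1", "y1"))
    // deviations from each matrix's overall (grand) mean
    Ad = A :- mean(vec(A))
    Bd = B :- mean(vec(B))
    // corr2-style correlation coefficient between the two matrices
    r = sum(Ad :* Bd) / sqrt(sum(Ad :^ 2) * sum(Bd :^ 2))
    r
end
```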

Ulas

Local holding list of model specifications

Hi,

I was hoping someone might have a more elegant solution to the following issue. I'm running a ton of PCAs using a variety of model specifications. I'd like to hold all of the model specifications in a local that I can use within a foreach loop. However, I'm running into a strange issue when creating this local. An example below:

Code:
*The following refer to variable names in the dataset:

local z_be bank_z childcarecenter_z conveniencestore_z conveniencestoreorsupermarket_z ///
                 creditunion_z fastfood_z fedex_z firestations_z hospital_z landfill_z lawenforcement_z ///
                 library_z mobilehomeparks_z nursinghome_z pharmacy_z playground_z primarycareproviders_z ///
                 privateschool_z publicbuildings_z publichealthdepartments_z publicpark_z publicpool_z publicschool_z

local z_ses z_income z_mortgage z_rent z_unemployment z_poverty z_education

*I've tried the below with differing amounts of quotes and skew quotes;
*the following has gotten me closest to what I would expect.

local specifications "`z_ses'" "`z_be'" "`z_ses' `z_be'"

*Testing out if the above local holds my model specifications correctly:

local n 0
foreach x in `specifications'{
    di "Model `n': "
    di "`x'"

    local ++n
}
Oddly, when I do this, the output lists the individual elements of z_ses one by one but correctly lists the rest of the model specifications (output below). I'm at a loss for why Stata is disaggregating the first term but not the second. How should I create this local so that it correctly holds different combinations of variables for model specifications?
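The usual fix is to define the outer local with compound double quotes, so the inner quotes survive as part of the local's value instead of being interpreted as delimiters; a sketch:

```stata
* compound quotes (`" "') preserve the inner quotes inside the local
local specifications `" "`z_ses'" "`z_be'" "`z_ses' `z_be'" "'

local n 0
foreach x in `specifications' {
    di "Model `n': "
    di "`x'"
    local ++n
}
```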


Kind regards,
Max Griswold


Portfolio analysis: measuring re-occurrence and making panel data from repeated time values within panel (r451)

Hello,

First post, here goes:

I have data as follow:
Firm_ID | Year | SIC Code | Equity Amount Invested | Total Amount Inv. by the Firm | Target Company_ID | Co-investment | Count of Patents
1 | 1995 | 7372 | 54.4 | 1500 | 10001 | 2 | 400
1 | 1995 | 4565 | 8.7 | 1500 | 10003 | 1 | 440
1 | 1996 | 6383 | 7.9 | 1500 | 10007 | 1 | 528
1 | 2001 | 1781 | 15.4 | 1500 | 10012 | 1 | 652
2 | 1995 | 7372 | 29.9 | 1480 | 10001 | 2 | 150
2 | 2003 | 9773 | 22.9 | 1480 | 10005 | 2 | 175
3 | 1996 | 7372 | 77.8 | 980 | 10001 | 3 | 8129
3 | 1997 | 9444 | 139.9 | 980 | 10002 | 1 | 8129
3 | 2001 | 9773 | 48.8 | 980 | 10005 | 1 | 9220
1. These data are of larger firms/investment portfolios (indicated by Firm_ID) investing in smaller companies (Target Company_ID).

2. The assigned SIC (Standard Industrial Classification) codes are the codes of the targeted companies. SIC codes vary between 1,000 and 10,000 and indicate an industry.

3. As you can see, the data are unbalanced: Firm_ID 1 has 4 observations, 2 has 2, and 3 has 3.

4. Time values are sometimes repeated within a firm (e.g. the first 2 rows, both in 1995). In this case the investments in companies 10001 and 10003 were both made in 1995, but multiple rounds of investment in the same company also occur, so company 10003 might just as well have been another row for 10001.

5. The variable Co-investment shows 2 in the first row because in 1995 both Firm_ID 1 and Firm_ID 2 invested in that company, as 2 investors. It shows 1 when only one investor in the data set invested in that specific company on that specific date. However, in 1996 Firm_ID 3 decided to invest in the same company that Firm_IDs 1 & 2 had invested in a year earlier; that is why it shows a 3.


My goals and scream for your help:

1. I want to analyze how the SIC dispersion in a firm's portfolio influences variable Y (patents).
The theory states that the larger the distance (variance?) of the SIC codes from the mean or the yearly mean, the stronger the growth of patents: the more explorative a firm becomes by investing in SIC codes distant from each other (1781, 6383, 4565, 7372), the more patents it acquires. What would be the right measure of a portfolio's dispersion in SIC codes: variance, squared percentage growth, or some other measure? Which commands would I use?
Eventually, I want to regress the growth of patents on this dispersion (or its growth) to show the relationship.


2. Variable 7, "Co-investment", does not yet exist in my data; I would like to generate it, I am guessing from Firm_ID and Target Company_ID by measuring re-occurrence. Is there a command for this in Stata? This is going to be a dummy moderator, something that will add to the explorative power of an investment.
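For goal 2, counting how many investors back the same target is a -bysort- one-liner; a sketch with hypothetical variable names (firm_id, target_id, year), which you would adapt to your own:

```stata
* rows per target-year: how many investments hit the target that year
bysort target_id year: gen coinvest_year = _N

* distinct firms ever invested in each target (tag one row per firm-target)
egen tagged = tag(target_id firm_id)
bysort target_id: egen n_firms = total(tagged)
```

If one firm appears in several rounds for the same target, the tag()/total() version counts firms rather than rows, which seems closer to your cumulative definition.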

3. How do I order/structure my dataset so that time is equally spaced, making it proper panel data ready for regression? Right now I get error r(451), repeated time values.
I have only around 2,500 observations for 19 firms and am not keen on dropping much of the data, considering that some portfolios consist of only 59 investments, while one has as many as 700 (this one repeats time values a lot).
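The r(451) error means the same firm-year pair occurs more than once; -duplicates report- shows where, and one pragmatic option is to aggregate to one row per firm-year before declaring the panel. A sketch with hypothetical variable names:

```stata
* locate the repeated firm-year pairs
duplicates report firm_id year

* one option: aggregate multiple same-year investments per firm
collapse (sum) equity_invested, by(firm_id year)
xtset firm_id year
```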

I am afraid that making my data set smaller will make it less relevant. If you have any other remarks or tips, please don't hesitate!
I am here to learn and have only had a beginner's course in Stata!


Thank you in advance!

Haik

GAM download in stata

How do I download/install GAM (generalized additive models) in Stata?
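GAM is not built into official Stata; the usual first step is to search the user-written archives from inside Stata, which returns click-to-install links for whatever it finds:

```stata
* searches official and user-written archives for GAM-related packages
findit gam
```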

Destring a variable

Dear Stata Users,

First, I want to create a year and month from "v1". I use the code below:

Code:
g year = substr(v1, 1, 4)
g month = substr(v1, 5, 6)
However, when I try to destring "year" and "month", I get the following error:

year contains nonnumeric characters; no replace
Can you please help me destring "year" and "month"? Below I attach a sample of my data.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str32 v1 float(me1 me2 me3 me4)
"192512" 5.82 15.61 50.01    1319
"192601" 5.91  15.9 50.92 1331.71
"192602" 5.55 14.62 46.63 1366.39
"192603"    5  13.2 44.56 1322.46
"192604" 5.03 13.75    45 1350.21
"192605"  5.1 13.27  45.6 1382.58
"192606" 5.17 13.76    48 1510.46
"192607" 5.28  13.8 49.09 1521.25
"192608" 5.38 14.52 47.43 1561.71
"192609" 5.15  13.8 47.07 1576.54
"192610" 4.94 13.13 45.96 1581.94
"192611"  5.1 13.98 45.47 1608.91
"192612" 5.28 14.38 49.83 1595.12
"192701" 5.69 14.94 50.92 1628.38
"192702" 6.12 15.85 50.87 1690.91
"192703" 6.12 15.14 50.05 1768.07
"192704" 5.96  14.7 49.95 1729.49
"192705" 6.17  15.4 50.86 1841.78
"192706" 6.19 14.66 48.62 1774.64
"192707" 6.51 15.12 53.55 1925.16
"192708" 6.49 15.13 54.31  2140.2
"192709" 6.72 16.32  58.4  2314.2
"192710" 6.37 16.24 56.94 2231.55
"192711" 6.84 17.01 58.27 2257.65
"192712" 7.15 18.84 60.27  2401.2
"192801" 7.26 18.13 60.97 2335.95
"192802" 6.95 17.96  59.5 2381.62
"192803" 7.49 20.02 62.81  3253.8
"192804"  7.7  20.9 65.93 3292.95
"192805" 7.97 21.11 65.99  3358.2
"192806"  7.1 19.97 62.29    3306
"192807"  6.9 20.36 62.27 3347.32
"192808" 7.19 21.59 67.02 3536.55
"192809"  7.5  22.8 69.53 3769.27
"192810" 7.93  22.4 70.07 3806.25
"192811" 8.69 24.89 76.04  3680.1
"192812" 8.92 26.06 83.62 3545.25
"192901" 9.04 26.85 85.08 3637.69
"192902"    9 27.69 83.31 3643.12
"192903" 8.75 25.95 83.07  3697.5
"192904" 8.69 26.04 81.13 3675.75
"192905" 7.81 23.81 77.34    3045
"192906" 8.37 25.62 83.76  3262.5
"192907" 8.61 25.82 82.68 3516.82
"192908" 8.64 26.43 88.06 3926.02
"192909" 8.14  24.7 80.61 3860.29
"192910" 6.44 20.55 70.46 3244.02
"192911" 5.39 16.25 57.75  2946.5
"192912"  5.2 15.75 58.65 2939.89
"193001" 5.68 16.95 65.17 2958.06
"193002" 5.97 18.03 64.92 3179.38
"193003" 6.25 18.85 64.83 3488.23
"193004" 6.33    18 60.68 3329.68
"193005" 5.86 17.28 60.24  3588.5
"193006" 4.44  14.4 50.54  3257.7
"193007" 4.58 14.96 52.29 3284.78
"193008" 4.21 14.08 51.69 3809.98
"193009" 3.68 12.07 42.64 3580.54
"193010" 3.36 10.84 38.11 3439.35
"193011" 3.23 10.61 37.93 3315.81
"193012"  2.6  9.04 33.44 3152.55
"193101" 3.21 10.11 37.44 3306.98
"193102" 3.59 11.32 39.51 3507.74
"193103"  3.3 10.42 37.25 3403.53
"193104" 2.73  8.75 33.18 3277.39
"193105" 2.34  7.49 27.16 2971.05
"193106" 2.81  9.05 33.74 3234.59
"193107" 2.59  8.05 30.66  3058.9
"193108" 2.37  7.99 30.71 3088.18
"193109" 1.52   5.4 20.37 2353.86
"193110" 1.75  5.49 22.91 2482.26
"193111"  1.5   5.1 19.86 2329.08
"193112" 1.12  4.06 16.42 2178.32
"193201" 1.25  4.27 16.34 2087.36
"193202" 1.25  4.24 16.67 2353.24
"193203"  1.1  3.64 15.04 2057.04
"193204"  .87  3.04 12.78 1826.15
"193205"  .65  2.26  8.86 1644.24
"193206"  .66  2.29  9.35 1434.33
"193207"  .94  3.03  12.3 1674.56
"193208" 1.58  4.78 18.26 2136.34
"193209" 1.47  4.38 17.54 2096.69
"193210" 1.21  3.66 15.06 1933.44
"193211" 1.06  3.41  14.1 1912.45
"193212"   .9  3.11 13.85 1943.18
"193301"  .97   3.3 15.07 1950.18
"193302"  .76  2.73 12.45 1810.21
"193303"  .87  2.87 13.42 1651.59
"193304" 1.21  4.25 20.71  1866.2
"193305"  2.1  6.26 25.76 2190.45
"193306" 2.56  7.58 28.71 2377.07
"193307" 2.42   6.7 26.06  2258.1
"193308" 2.48  7.81 31.17 2365.41
"193309" 2.09  6.43 26.24 2244.11
"193310" 1.77  5.73 23.72 2085.48
"193311" 1.86   5.9  25.3 2213.78
"193312" 1.84  6.21 26.67 2083.15
"193401" 2.62  7.73 31.81 2202.12
"193402" 2.81  7.86 30.55 2248.77
"193403" 2.69   7.7 31.26 2237.11
end
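Two things worth checking against the code above: substr()'s third argument is a length, so the month is substr(v1, 5, 2); and real() returns missing for nonnumeric text, which makes any offending rows easy to find. A sketch:

```stata
* real() converts what it can and returns missing otherwise
gen year  = real(substr(v1, 1, 4))
gen month = real(substr(v1, 5, 2))

* inspect rows where conversion failed (likely stray characters in v1)
list v1 if missing(year)
```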