Channel: Statalist

t-test of the hypothesis that the population mean is equal to a specified value given by a scalar

I want to run a t-test that the population mean is equal to a specified value given by a scalar. For example, the sample of the population is given by the variable "mpg" and the scalar has the name "alpha" and is equal to 23.

However, using the syntax
Code:
ttest mpg == alpha
yields "variable alpha not found".

Using the syntax
Code:
ttest mpg == `alpha'
yields "invalid syntax"

While using the syntax
Code:
ttest mpg == 23
works.

How can I make "ttest" read the numerical value of the scalar "alpha"?
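Not part of the original post, but a minimal sketch of one way to pass a scalar's value to ttest: use macro expansion `=scalar(alpha)', which substitutes the number into the command line before it is parsed.

Code:
sysuse auto, clear
scalar alpha = 23
* `=scalar(alpha)' expands to 23 before ttest sees the line
ttest mpg == `=scalar(alpha)'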

Reshaping data from monadic to dyadic

Dear,

I have data organized such that each observation assigns a country to an event (EventID) by year:

EventID Country Year
88 220 1870
88 271 1870
88 255 1870
220 325 1870
220 710 1870
220 2 1870
220 220 1870
2117 70 1870
2117 70 1870
2168 255 1871
2168 255 1871
2169 365 1871
2169 220 1871
256 230 1872
256 2 1872
256 200 1872

I need this reshaped to dyad/year data within EventID so that I can merge it with dyadic trade data. The following is the shape I want:

EventID Country Country2 Year
88 220 271 1870
88 220 255 1870
88 220 325 1870
88 271 220 1870
88 271 255 1870
88 271 325 1870
88 325 220 1870
88 325 271 1870
88 325 255 1870
.
.
.
256 2 200 1872
256 200 2 1872
.
.
.

I've tried to create a dyadic identifier using egen country2 = group(var var1) in order to merge with the dyadic trade data, but I think I need to reshape the data first before merging the two datasets.

Would anyone help me to reshape this data?
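Not part of the original post, but one common sketch for this kind of within-group pairing uses joinby on a renamed copy of the dataset (assuming the variables are named EventID, Country, and Year, and that duplicate country rows within an event have been dropped first):

Code:
preserve
rename Country Country2
tempfile partners
save `partners'
restore
* form all pairs of countries within the same event-year
joinby EventID Year using `partners'
* drop self-pairs
drop if Country == Country2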

Best Regards,
Woo

Latent growth curve SEM model - different slope factor loading between individuals

I am using Stata 15.1 to estimate a latent growth curve model. I have longitudinal data on infant length/height. The infants were supposed to be measured at birth, 1 month, 6 months, and 12 months, but in reality the follow-up visits occurred when practically possible.
A data example is below. SIN is the unique ID, agemonths is the infant's age in months when the follow-up visit actually occurred, and the length/height measurements are in cm.

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input int SIN double(agemonths0 agemonths1 agemonths6 agemonths12 length0 length1 length6 length12)
 3 .19726027397260273 2.0712328767123287  6.641095890410958  12.23013698630137 51   58   70 73.5
25 .39452054794520547  1.380821917808219  10.84931506849315                  . 51 54.4    .    .
46  .6246575342465753  .9205479452054794  7.035616438356164  17.49041095890411 49 54.4 75.1 79.7
70                  0  1.117808219178082 5.7534246575342465 16.306849315068494 50   53   61 74.5
end

For reference, I have shown the code for my unconstrained model for length below.

My question is – in the SEM framework, is it possible to include the actual time a measurement was taken, in this case the data contained in the agemonths variable instead of constraining the slope factor loadings to 0, 1, 6, and 12?


Code:
    sem (length0  <- Intercept@1 Slope@0)   ///
        (length1  <- Intercept@1 Slope@1)   ///
        (length6  <- Intercept@1 Slope@6)   ///
        (length12 <- Intercept@1 Slope@12), ///
        latent(Intercept Slope) ///
        means(Intercept Slope) noconstant iterate(10) method(mlmv) ///
        cov(e.length0*e.length1) cov(e.length0*e.length6) ///
        cov(e.length0*e.length12) cov(e.length6*e.length12)

generate a variable with value equal to the one of a particular group

Dear All, I have this dataset:
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float id int year float pre_rmspe
1 2011 2.1479816
1 2012 2.1479816
1 2013 2.1479816
1 2014 2.1479816
1 2015 2.1479816
1 2016 2.1479816
2 2011 1.1751639
2 2012 1.1751639
2 2013 1.1751639
2 2014 1.1751639
2 2015 1.1751639
2 2016 1.1751639
3 2011  23.91138
3 2012  23.91138
3 2013  23.91138
3 2014  23.91138
3 2015  23.91138
3 2016  23.91138
end
I wish to generate a variable `wanted' whose values are all equal to 23.91138 (the value for id==3). Is there a one-line command for this? Thanks.
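A one-line sketch using the egen max(cond()) idiom, which broadcasts the id==3 value to all observations:

Code:
egen wanted = max(cond(id == 3, pre_rmspe, .))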

randomselect

Hello,

I am trying to randomly select a subsample of participants from my data set. I found the command randomselect useful here, but I don't know how to set the seed in my syntax so that the same observations are selected on subsequent runs of the do-file.

Basically, I want to select two groups based on the following characteristics:

Group 1: N=3000, smokers, 50% female, aged 50-80
Group 2: N=3000, non smokers, 50% female, aged 20-80

Here is my syntax (with the seed command integrated but not working as expected):


Code:
randomselect if smoking == 1 & gender == 1, gen(sample_1) n(1500) seed(7492001)

randomselect if smoking == 1 & gender == 0 & sample_1 != 1, gen(sample_2) n(1500) seed(7492001)

randomselect if smoking == 0 & gender == 1, gen(sample_3) n(1500) seed(7492001)

randomselect if smoking == 0 & gender == 0 & sample_1 != 1, gen(sample_4) n(1500) seed(7492001)

g sample_smoking = 0 if inlist(1, sample_1, sample_2)
replace sample_smoking = 1 if inlist(1, sample_3, sample_4)

drop sample_1-sample_4
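Not from the original post, but a hedged sketch of one alternative, assuming randomselect (from SSC) draws from Stata's random-number generator: set the seed once at the top of the do-file instead of passing it to each call.

Code:
set seed 7492001
randomselect if smoking == 1 & gender == 1, gen(sample_1) n(1500)
randomselect if smoking == 1 & gender == 0 & sample_1 != 1, gen(sample_2) n(1500)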

Thank you in advance for any comment!

Giovanni

Error in processing a string

Hi all,

I have an issue in processing a string.
I have a data set that includes some regions, and each region has data by year.
I use a loop to process the data. However, I get an error. My code is:

Code:
local region_code "HN TH CT"
foreach region in `region_code' {
    local year_code "2000 2001 2003"
    foreach year in `year_code' {
        local filename ""G:\Working""`region'""""`year'""""`region'""RT"" `region'""RT.DTA""
        display `filename'
        use `filename', clear
    }
}

The result of the display command is G:\Working\HN\2000\HNRR\HNRR.DTA, but the use command returns an error: invalid '"HN' r(198).
I then ran display "use `filename', clear" and the result was: use G:\Working""HN""""2000""""HN""RR""HN""RR.DTA", clear" invalid name.
I guess the filename local is G:\Working\""HN""\""2000""\""HN""RR\""HN""RR.DTA, so the use command errors.
Why is the filename different between the display command and the use command?

Could you help me resolve this problem, please?
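Not from the original thread, but a hedged sketch of how the path could be built without the doubled quotes; compound double quotes (`"..."') guard against spaces in the path:

Code:
local region_code "HN TH CT"
local year_code "2000 2001 2003"
foreach region of local region_code {
    foreach year of local year_code {
        local filename "G:\Working\`region'\`year'\`region'RT\`region'RT.DTA"
        display `"`filename'"'
        use `"`filename'"', clear
    }
}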

Hut

How can I add 95% CI error bars to my multiple line graph?

Hi,

First, I used the collapse command to compute the mean and standard deviation of smoking by age and SES.
Second, I computed the upper and lower bounds of the confidence interval for smoking.
Third, I ran the command:
xtset SES age

Fourth, I ran the command:
xtline smoking, overlay

Now I have the figure I am looking for, but without error bars.
How can I add 95% error bars for each age group across all SES levels?
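Not part of the original post, but a minimal sketch of one approach using twoway rather than xtline, assuming the collapsed dataset has the mean in smoking and the interval bounds in ci_lo and ci_hi (hypothetical names):

Code:
* one panel per SES group, each with a connected mean line and CI bars
twoway (rcap ci_hi ci_lo age) (connected smoking age), by(SES)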

I really appreciate your help.

Panel data: building sums correcting for focal panel ID characteristics

Hello,

I am trying to calculate

sum_i(x_i * z_j), where
x_i = a dummy
z_j = a dummy in firm j that x_i is connected to

I have panel data with company IDs and individual IDs (multilevel) and years as identifiers. Basically, I want to calculate the sum of individuals in company i who are connected to other companies j with a certain characteristic. However, this characteristic is also present in the focal company i, and I am struggling to correct for that.

Here is what I tried so far:

Code:
gen Dummy1 = 1 if var1 > 0 & var1 < .  // firm characteristic I am interested in
gen xi = 1 if var2 == 5                // individuals I am looking at
gen product = xi*Dummy1                // individuals connected to the firm characteristic
egen sum_xizj = sum(product), by(compid year)  // sum of individuals connected to the firm characteristic, but this also includes the focal firm's own characteristic
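Not part of the original post, but one common leave-one-out pattern subtracts the focal group's own contribution from the yearly total (variable names follow the attempt above and are otherwise hypothetical):

Code:
egen double grand_total = total(product), by(year)        // sum over all firms in the year
egen double own_total   = total(product), by(compid year) // focal firm's own contribution
gen  double sum_excl_focal = grand_total - own_total      // leave-one-out sum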

I would be grateful for any comments on this!

Thank you

Problems with using autocompletion in do-files

Some things I noticed using autocompletion in Stata 16.0:
  • the command "mvdecode" does not appear
  • command options do not appear (it would be great to see a command's options after typing a comma)
  • variable names do not appear
  • "tabulate oneway" is offered as a suggestion; when you choose it as a command, you get an error message (variable oneway not found)

Weighted combined score predicting outcome

This is quite an open question, as I have a large longitudinal dataset with multiple options; I will do my best to describe the support I am looking for.

Is it possible to have Stata generate a combined score/index/indicator based on a model? Say I have four exposure variables, X1 X2 X3 and X4, which can be continuous, categorical, or a mix. I would like them to predict an outcome (continuous or categorical), and I would like Stata to assess the impact of each exposure variable. If X2 predicts the outcome better than X1, X2 should have more influence than X1 in the combined score.

An example: I would like to develop a screening tool to identify persons at risk of stroke. I have a longitudinal dataset of persons with stroke, describing smoking status, alcohol consumption, blood pressure, and activity level prior to the stroke. Can I make a weighted screening tool assessing smoking status, alcohol consumption, blood pressure, and activity level in order to identify the persons at highest risk of stroke?
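Not from the original post, but one textbook sketch of a regression-weighted score: fit a model for the outcome and use the predicted probability (or linear predictor) as the combined score, so each exposure is weighted by its estimated coefficient. The variable names below (stroke, smoking, alcohol, bp, activity) are hypothetical.

Code:
logit stroke smoking alcohol bp activity
predict double risk_score, pr   // each exposure weighted by its coefficient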

Stereotype logistic regression vs. Multinomial and Ordinal Logistic Regression

What are the ways to compare the fit of multinomial, ordinal, and stereotype logistic regression to a given dataset with ordinal response data using Stata?
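A hedged sketch of one common route: fit each model on the same estimation sample and compare information criteria (assuming an ordinal outcome y and covariates x1 x2, both hypothetical names):

Code:
mlogit y x1 x2
estat ic
ologit y x1 x2
estat ic
slogit y x1 x2
estat ic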

create a 2-way summary table

Hi all


My data include:

- id and industry_id: the firm ID and the industry of the firm
- first_certified: the first year the company got certification
- year_certified: the year the company got certification; firms are required to re-apply and get certified every 2 years (for example, firm id==4 got certification in 2015, and the firm re-applied and got certified in 2017)
- overall_score: the total score a firm got when certified, which is the sum of 6 measures (workers, benefits, community, customers, environment, governance)
- certification_cycle: the order of certification, i.e., company id==4 got its first certification (certification_cycle==1) in 2015 and its second (certification_cycle==2) in 2017
- current_status (certified/de-certified): the status at the current time (end of 2019)

The database includes more than 2,000 companies and 5,000 observations.
id industry_id first_certified year_certified overall_score workers benefits community customers environment governance certification_cycle current_status
1 37 2015 2015 80.5 33.7 11 25.4 3.1 9.5 8.8 1 de-certified
2 47 2017 2017 130.4 80.8 29 11.7 8.9 1 certified
3 27 2015 2015 115.3 86.6 23.1 5.6 1 de-certified
4 45 2015 2015 82.2 21.5 47.7 9.1 4 2 certified
4 45 2015 2017 91.9 23.8 10 32.7 14.2 5.6 15.6 1 certified
5 40 2015 2015 111.2 38.1 54.1 4.4 14.5 1 de-certified
6 27 2012 2012 108.6 26.1 9.1 43.6 27.4 11.4 4 certified
6 27 2012 2014 109.8 23.6 7.2 57.3 18.9 9.8 3 certified
6 27 2012 2015 108.9 22.6 10.9 59.1 15.6 11.7 2 certified
6 27 2012 2016 115.6 21.3 8 70.6 11.4 12.3 1 certified
7 27 2013 2013 84.2 22.6 5.5 20.6 30.9 10 3 de-certified
7 27 2013 2015 87.2 39.2 35.4 12.5 2 de-certified
7 27 2013 2017 84.2 15.6 3.2 26.1 25 17.5 1 de-certified
8 61 2019 2019 107.9 24.3 0.5 33.2 38.9 11.3 1 certified
9 45 2017 2017 80.3 30 13 27.7 5.4 7.5 9.7 1 certified
10 34 2014 2014 111 27 3.7 31.5 1.1 42.2 9.3 2 certified
10 34 2014 2016 92.8 23 3.8 25.7 32.9 11.2 1 certified
11 63 2013 2013 92.6 26.9 6 19.1 26.2 7.7 12.4 2 de-certified
11 63 2013 2016 85.4 27.1 8.5 27.3 9 10.1 11.9 1 de-certified
12 5 2013 2013 97.5 26.5 2.3 29 17.5 28.8 13.1 1 de-certified
13 45 2019 2019 82.2 33.8 10.2 24 5.9 7.6 10.7 1 certified
14 45 2016 2016 88.9 48.2 11.7 17.4 11.6 1 de-certified
15 27 2018 2018 104.6 69.5 22.2 12.8 1 certified
16 51 2018 2018 82.1 33.3 9 23.2 11.8 13.7 1 certified
17 22 2018 2018 99.1 20.1 10 25.2 41.7 2.2 9 1 certified
18 36 2014 2014 96.9 22.7 3.7 37.2 29.7 7.1 2 de-certified
18 36 2014 2016 103.6 18.7 3.3 48.2 29.5 7.3 1 de-certified
19 61 2017 2017 104.9 27.9 8.9 23.1 41.3 12.6 1 certified
20 45 2014 2014 99.4 21 6.9 24.7 36.1 9.3 8.3 2 certified
20 45 2014 2016 83.1 25.3 9.5 20.4 21.8 8.2 7.4 1 certified
I would like to create a summary table recording, by year, the number of firms certified for the first time, the number re-certified, and the number de-certified. For a firm whose current status is de-certified, we count the year of its last certification plus 2: for example, firm id==7 is counted as first certified in 2013, re-certified in 2015 and 2017, and de-certified in 2019 (2017+2), which is when the company would have needed to re-apply to remain certified.
year first certified re-certified Discontinue
2007
2008
2009
2019
I am new to Stata. What I can do is calculate each value in the table and then combine them all, but that is time-consuming and error-prone. Could anyone give me some direction on how to write the code for this?
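Not from the original post, but a hedged sketch of one approach, assuming variables named as in the listing (id, first_certified, year_certified, current_status):

Code:
gen byte status = cond(year_certified == first_certified, 1, 2)  // 1 = first certified, 2 = re-certified
bysort id (year_certified): gen byte islast = (_n == _N)
expand 2 if islast & current_status == "de-certified", gen(dup)
replace year_certified = year_certified + 2 if dup == 1          // year the firm discontinued
replace status = 3 if dup == 1
label define status 1 "first certified" 2 "re-certified" 3 "discontinue"
label values status status
tabulate year_certified status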

Thank you very much

10-year probabilities after stcrreg

Dear all,

I would like to calculate the 10-year risk of stroke for each individual in my dataset, given a set of covariates.
I am having trouble estimating 10-year individual risks based on a Fine and Gray model.
For a Cox model, I calculate 10-year risk using the following commands:

Code:
stset studytime, failure(stroke=1)
stcox age BP sm
predict double xb, xb
predict double basesurv, basesurv
sum basesurv if _t<10
scalar base10y = r(min)
gen risk10y = 1 - base10y^exp(xb)
replace risk10y = risk10y*100


Now the question is: how can I calculate the 10-year risk for each individual in my dataset after running a Fine and Gray model?
Code:
stcrreg age BP sm, compete(stroke_compete=2)
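Not from the original post, but a hedged sketch of one possibility, assuming stcrreg's predict supports the xb and basecif options and using the subdistribution relation CIF(t|x) = 1 - (1 - CIF0(t))^exp(xb):

Code:
predict double xb, xb
predict double cif0, basecif
sum cif0 if _t <= 10
scalar cif10 = r(max)
gen double risk10y = 100*(1 - (1 - cif10)^exp(xb))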

I would very much appreciate your thoughts.

Best,
John

Translog cost function using foreach and one_id program

Hi,

I have a panel of 270 firms with quarterly observations from 2000 to 2019.

I want to run the following translog cost function for each firm using foreach and the one_id program. Can anyone tell me the correct syntax, please? Mine did not work and gave errors for all groups.

lnTC = a + a1*lnW12 + (1/2)*a2*lnW3 + a3*lnW12*lnW3 + a4*lnTA + (1/2)*a5*(lnTA)^2 + a6*lnTA*lnW12 + a7*lnTA*lnW3 + e

where the a's denote the alpha coefficients in the equation above, TC = total cost, W12 and W3 are the prices of labor and capital, TA = total assets, and e is the error term.
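Not from the thread, but a hedged sketch using statsby with factor-variable notation for the squares and interactions (assuming variables lnTC, lnW12, lnW3, lnTA and a firm identifier firm_id, all hypothetical names):

Code:
statsby _b, by(firm_id) clear:                 ///
    regress lnTC c.lnW12 c.lnW3 c.lnW12#c.lnW3 ///
        c.lnTA c.lnTA#c.lnTA c.lnTA#c.lnW12 c.lnTA#c.lnW3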

Regards

How can I get total effect, indirect effect, and direct effect by using bias-corrected interval?

Hello

I have been working on a mediation analysis in Stata. My model is an L1 -> L2 -> L3 mediation.

I am trying to obtain the total effects of the mediation with bias-corrected confidence intervals via bootstrapping; however, I have only found results with normal-based intervals.

I used the command "estat teffects, all" for the total effects and "estat bootstrap" for bias-corrected confidence intervals.

Below are the results I got with normal-based intervals. If anyone knows how to get bias-corrected confidence intervals instead of normal-based ones, I would appreciate it.

Thank you

Direct effects
------------------------------------------------------------------------------
| Observed Bootstrap Normal-based
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
Measurement |
ZR651 |
L1| 1 (constrained)
-----------+----------------------------------------------------------------
ZR652 |
L1 | .8256664 .252527 3.27 0.001 .3307225 1.32061
-----------+----------------------------------------------------------------
ZR653 |
L1 | .5956981 .1836301 3.24 0.001 .2357898 .9556064
-----------+----------------------------------------------------------------
ZR631 |
L2| 1 (constrained)
L1| 0 (no path)
-----------+----------------------------------------------------------------
ZR632 |
L2 | 1.075239 .0048015 223.94 0.000 1.065829 1.08465
L1 | 0 (no path)
-----------+----------------------------------------------------------------
ZR633 |
L2| .9011307 .0049258 182.94 0.000 .8914762 .9107852
L1| 0 (no path)
-----------+----------------------------------------------------------------
ZR711 |
L2 | 0 (no path)
L3 | 1 (constrained)
L1 | 0 (no path)
-----------+----------------------------------------------------------------
ZR712 |
L2 | 0 (no path)
L3 | 1.088491 .0081486 133.58 0.000 1.07252 1.104462
L1| 0 (no path)
-----------+----------------------------------------------------------------
ZR713 |
L2 | 0 (no path)
L3| .8144637 .0073958 110.13 0.000 .7999683 .8289592
L1 | 0 (no path)
-------------+----------------------------------------------------------------
Structural |
L2|
L1| .2133648 .0538285 3.96 0.000 .1078629 .3188668
-----------+----------------------------------------------------------------
L3 |
L2| -.1299983 .0040167 -32.36 0.000 -.1378708 -.1221257
L1 | -.0070817 .0214143 -0.33 0.741 -.0490529 .0348895
------------------------------------------------------------------------------


Indirect effects
------------------------------------------------------------------------------
| Observed Bootstrap Normal-based
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
Measurement |
ZR651 |
L1 | 0 (no path)
-----------+----------------------------------------------------------------
ZR652 |
L1 | 0 (no path)
-----------+----------------------------------------------------------------
ZR653 |
L1 | 0 (no path)
-----------+----------------------------------------------------------------
ZR631 |
L2 | 0 (no path)
L1 | .2133648 .0538285 3.96 0.000 .1078629 .3188668
-----------+----------------------------------------------------------------
ZR632 |
L2 | 0 (no path)
L1 | .2294183 .057767 3.97 0.000 .116197 .3426396
-----------+----------------------------------------------------------------
ZR633 |
L2 | 0 (no path)
L1 | .1922696 .0486709 3.95 0.000 .0968764 .2876628
-----------+----------------------------------------------------------------
ZR711 |
L2 | -.1299983 .0040167 -32.36 0.000 -.1378708 -.1221257
L3 | 0 (no path)
L1 | -.0348187 .0226447 -1.54 0.124 -.0792015 .009564
-----------+----------------------------------------------------------------
ZR712 |
L2 | -.141502 .0044709 -31.65 0.000 -.1502649 -.1327391
L3| 0 (no path)
L1 | -.0378999 .024637 -1.54 0.124 -.0861876 .0103878
-----------+----------------------------------------------------------------
ZR713 |
L2 | -.1058789 .0034911 -30.33 0.000 -.1127212 -.0990365
L3 | 0 (no path)
L1 | -.0283586 .018442 -1.54 0.124 -.0645043 .0077871
-------------+----------------------------------------------------------------
Structural |
L2|
L1 | 0 (no path)
-----------+----------------------------------------------------------------
L3 |
L2 | 0 (no path)
L1 | -.0277371 .00692 -4.01 0.000 -.0413 -.0141741
------------------------------------------------------------------------------


Total effects
------------------------------------------------------------------------------
| Observed Bootstrap Normal-based
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
Measurement |
ZR651 |
L1 | 1 (constrained)
-----------+----------------------------------------------------------------
ZR652 |
L1 | .8256664 .252527 3.27 0.001 .3307225 1.32061
-----------+----------------------------------------------------------------
ZR653 |
L1 | .5956981 .1836301 3.24 0.001 .2357898 .9556064
-----------+----------------------------------------------------------------
ZR631 |
L2| 1 (constrained)
L1 | .2133648 .0538285 3.96 0.000 .1078629 .3188668
-----------+----------------------------------------------------------------
ZR632 |
L2 | 1.075239 .0048015 223.94 0.000 1.065829 1.08465
L1 | .2294183 .057767 3.97 0.000 .116197 .3426396
-----------+----------------------------------------------------------------
ZR633 |
L2 | .9011307 .0049258 182.94 0.000 .8914762 .9107852
L1| .1922696 .0486709 3.95 0.000 .0968764 .2876628
-----------+----------------------------------------------------------------
ZR711 |
L2 | -.1299983 .0040167 -32.36 0.000 -.1378708 -.1221257
L3 | 1 (constrained)
L1 | -.0348187 .0226447 -1.54 0.124 -.0792015 .009564
-----------+----------------------------------------------------------------
ZR712 |
L2 | -.141502 .0044709 -31.65 0.000 -.1502649 -.1327391
L3 | 1.088491 .0081486 133.58 0.000 1.07252 1.104462
L1 | -.0378999 .024637 -1.54 0.124 -.0861876 .0103878
-----------+----------------------------------------------------------------
ZR713 |
L2| -.1058789 .0034911 -30.33 0.000 -.1127212 -.0990365
L3 | .8144637 .0073958 110.13 0.000 .7999683 .8289592
L1| -.0283586 .018442 -1.54 0.124 -.0645043 .0077871
-------------+----------------------------------------------------------------
Structural |
L2 |
L1 | .2133648 .0538285 3.96 0.000 .1078629 .3188668
-----------+----------------------------------------------------------------
L3 |
L2 | -.1299983 .0040167 -32.36 0.000 -.1378708 -.1221257
L1 | -.0348187 .0226447 -1.54 0.124 -.0792015 .009564
------------------------------------------------------------------------------

Generate new variable if the string value of an existing variable is in a list

Dear All,

I have a dataset containing a Country variable, with country names as string values. I've been trying to generate a region variable; here's what I tried:
Code:
gen Region = " "
Code:
bysort Country Year: replace Region = "South America" if Country == "Argentina" | Country == "Brazil" | Country == "Peru"...
I'm wondering if there's an easier way, say generate Region = "South America" if Country is in a list (Argentina, Brazil, Peru, ...), so that I don't have to type the "Country ==" part many times.
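A minimal sketch using inlist(), which accepts a string variable plus up to 10 string values per call:

Code:
gen Region = ""
replace Region = "South America" if inlist(Country, "Argentina", "Brazil", "Peru")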

Any help is appreciated!

Thank you!

Best,
Craig

do files are now variable aware?

I just noticed this. When did this happen? That's really useful. Is this across frames or only for the active frame?

Reshaping Data Problem

Dear,


The following data show the countries involved in each dispute (Dispnum) by year.
I need help reshaping these data. The first three rows represent the three nations involved in event #88 in 1870.

Code:
Dispnum Country Year
88 220 1870  
88 271 1870
88 255 1870
220 325 1870
220 710 1870
220 2 1870
220 220 1870
2117 70 1870
2117 70 1870
2168 255 1871
2168 255 1871
2169 365 1871
2169 220 1871
256 230 1872
256 2 1872
256 200 1872
I want to reshape the data above into dyadic data of the following shape. I'd like to merge these data with dyadic trade data.

Code:
Dispnum Country Country2 Year
88 220 271 1870
88 220 255 1870
88 220 325 1870
88 271 220 1870
88 271 255 1870
88 271 325 1870
88 325 220 1870
88 325 271 1870
88 325 255 1870
220 325 710 1870
220 325 2 1870
220 325 220 1870
220 710 325 1870
220 710 2 1870
220 710 220 1870
220 2 325 1870
220 2 710 1870
220 2 220 1870
220 220 325 1870
220 220 710 1870
220 220 2 1870
2117 2 70 1870
2117 70 2 1870
256 230 2 1872
256 230 200 1872
256 2 230 1872
256 2 200 1872
256 200 230 1872
256 200 2 1872
I want all combinations between countries under the same Dispnum: for example, Dispnum 88 should have 6 rows from the combinations of three countries (220, 271, and 255), and Dispnum 220 should have 12 rows from the combinations of four countries (325, 710, 2, and 220).
Would anyone help me to reshape this data?
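Not part of the original post, but a hedged sketch of the standard joinby approach for within-group pairing (variable names as above; duplicate country rows within a dispute are dropped first):

Code:
duplicates drop Dispnum Country Year, force
preserve
rename Country Country2
tempfile partners
save `partners'
restore
* all pairs of countries within the same dispute-year, minus self-pairs
joinby Dispnum Year using `partners'
drop if Country == Country2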

'YRDIF': module to calculate daily date differences

I thank Kit for making the command yrdif available on SSC. It calculates the difference between two dates, yielding the difference in years. It approximates the SAS function yrdif with basis 'ACTUAL' or 'AGE'. In addition, the option yrunit(ageact) calculates the fractional part of the year as a 365th or a 366th. The option yrunit(age), like the SAS function yrdif with basis 'AGE', calculates the fractional part as a 365th.

Example

Code:
clear
mat mbdate = (29,29,19,16,28,19,28,29,29,29\ 2, 2, 1, 7, 3,11, 2, 2, 2, 2\1996,1996,2005,2014,1952,1952,2011,2012,2012,1996)
mat mcdate = (28,29,19,26,19,19,19,19,29,31\ 2, 2, 1,12,10,10,10,10, 2, 8\2000,2000,2020,2019,2012,2012,2012,2012,2012,2013)
set obs `=colsof(mbdate)'
gen bdate = mdy(mbdate[2, _n], mbdate[1, _n], mbdate[3, _n])
gen cdate = mdy(mcdate[2, _n], mcdate[1, _n], mcdate[3, _n])
format bdate cdate %td
yrdif bdate cdate , gen(actual) yrunit(actual)
yrdif bdate cdate , gen(age) yrunit(age)
yrdif bdate cdate , gen(ageact) yrunit(ageact) snm(agect)
list

keep bdate
yrdif bdate , currdate(mdy(2,29,2000)) gen(actual) yrunit(actual)
yrdif bdate , currdate(mdy(2,29,2000)) gen(age) yrunit(age)
yrdif bdate , currdate(mdy(2,29,2000)) gen(ageact) yrunit(ageact)
list

odbc connection extremely slow on another pc

Hi, I'm trying to use odbc load on another machine to download a rather big table from a server. Making the same odbc connection on a better PC (more RAM and disk space), the download can be 5 times slower than on the previous one.

Am I doing something wrong with the odbc setup? The new PC has a 3rd-gen i7 CPU while the old one has a 4th-gen i7, if that makes a huge difference.

Greetings