Channel: Statalist

Using esttab to report results for Lagged Variables

Hi All,

My dataset resembles the following. (I have panel data, so there are multiple observations per time period, but I think this example is without loss of generality.)

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float(year y x)
2001 12 321
2002 32  32
2003 12  12
2004  3   1
2005  1   2
end
In the above, I have data on individuals (identifier dropped) and variables y and x. I wish to explain y as a function of the first, second, third, fifth, and tenth lags of x, each in a different model. After declaring the data either as time series or as panel, I do the following:

Code:
foreach i in 1 2 3 5 10{
    eststo : qui xi : xtreg y l`i'.(x) i.year
    }
Now, I wish to use the esttab command, but the issue is that each of the variables is named differently. So, for instance, if I type:

Code:
    esttab using "table1.tex", scalar(F) stats(N r2 vce) varwidth(25) keep(L.x)  star(+ 0.15 * 0.10 ** 0.05 *** 0.01) p  label replace
all I would obtain would be the coefficients on the first lag of x. If I wanted more lags of x, I would have to add L2.x and so on. I do not wish to do this, as the table would look unnecessarily big, especially once I add other controls. Is it possible to subsume all the coefficients here under a single variable name "x", in a single row? I could then name the columns differently, corresponding to the particular lag.
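One possible route (a sketch, untested on this data): esttab's rename() option, from the estout package, can map each lag coefficient back to a common name so that all models share a single row, while mtitles() labels each column with the corresponding lag:

Code:
esttab using "table1.tex", rename(L.x x L2.x x L3.x x L5.x x L10.x x) ///
    keep(x) mtitles("L1" "L2" "L3" "L5" "L10") scalar(F) stats(N r2)  ///
    star(+ 0.15 * 0.10 ** 0.05 *** 0.01) p label replace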

Many thanks,
CS

Understanding Abadie, Athey, Imbens, and Wooldridge (2017) using a long-difference example

There are 100 counties j. There are many people in each county. People do not move across counties from Jan 1 to Dec 31. No time subscript is needed in this example. It's a long difference.
Y_{i} = 1 if person i got cancer by Dec 31, and 0 otherwise.

X_{j(i)} = Amount of pollutant that spilled into county j (in which person i lives) from Jan 1 to Dec 31
Before reading Abadie et al. (2017), I had been thinking I needed to cluster at the state level because there are state-level health-related policies.

But Abadie et al. (2017) say
"The researcher should assess whether the sampling process is clustered or not, and whether the assignment mechanism is clustered. If the answer to both is no, one should not adjust the standard errors for clustering, irrespective of whether such an adjustment would change the standard errors."
In this example, in what situation would "sampling process" and "assignment mechanism" be considered to be clustered?

Is Abadie et al. (2017) basically saying that clustering at the state level is too conservative an approach?

So in this example, do Abadie et al. (2017) recommend clustering at the county level?
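Under that reading, a county-clustered specification would be the natural benchmark here, since assignment (the pollutant spill) varies at the county level. A minimal sketch, with hypothetical variable names:

Code:
* assignment varies at the county level, so cluster there
regress cancer pollutant, vce(cluster county)

* state-level clustering for comparison; per Abadie et al. (2017) this
* may be unnecessarily conservative if assignment is not clustered by state
regress cancer pollutant, vce(cluster state)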

xtabond2: two way clustering-dynamic panel

Hello all.
For my master's thesis I am trying to analyse the determinants of non-performing loans (NPLs) on a sample of banks taken from the Bankscope database.
I have decided to use both difference GMM and system GMM with xtabond2. I have yet to try the command, since I am still making some preliminary considerations before deciding how to run the model.

My dependent variable is the percentage of NPLs at the individual bank level. As regressors I have the lagged dependent variable plus other individual bank variables and some macroeconomic variables for each country, which are equal for all banks in a given country (such as GDP growth).

As I have read in Roodman's paper, one of the key assumptions of both estimators is that the errors are not correlated across individuals in the panel. However, given that we are talking about banks, "individuals" that by their very nature are widely interconnected, I do not think this assumption is plausible.

Given the variables included in the regression, I am thinking about two-way clustering, at the individual level (bank_id) and the country level (ISO). When I run this regression, I will only consider countries for which I have at least 5 banks.

1) Is two-way clustering a correct way to proceed?
2) How can I implement two-way clustering in the xtabond2 syntax? (I am using Stata 14.)


How to merge different Excel File.

Hello Sir,

I have 4 separate Excel files that I need to merge into one. These are longitudinal data sets, but each covers a different set of individuals and years. For example, one file has 38 individuals over 30 years, while another has 50 individuals over 35 years. I need to keep the common individuals and years, so that the result is a balanced panel.

Note that my professor told me to use code along the following lines to map the codes (NACE Rev. 2 as the source, ISIC Rev. 4 as the target):

Code:
* NACE Rev. 2 (source) - ISIC Rev. 4 (target) correspondence
gen nace = .
replace nace = 1 if isic == 1
I don't have any idea how to do this. Thanks for your kind cooperation.
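A minimal sketch of one way to combine two of the workbooks, assuming each has an individual identifier id and a year variable (the file and variable names here are hypothetical):

Code:
import excel using "file1.xlsx", firstrow clear
save part1, replace
import excel using "file2.xlsx", firstrow clear
merge 1:1 id year using part1, keep(match) nogenerate

The keep(match) option retains only the id-year pairs present in both files, which moves you toward a balanced panel once the step is repeated for the remaining files.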

Autocorrelation (ac and pac)

I'm using the autocorrelation command ac and the partial autocorrelation command pac. I'm running them on over 1,000 different time series with length T=12. May I ask a couple of questions?

The commands are slow because they generate a plot, which I don't want to see. Is there a way to save time by suppressing the plot? It's no big deal if you have only one time series, but if you have over 1,000, it's a drag.

The pac command won't calculate the partial autocorrelation for lags beyond 4. Why not? Is this an arbitrary software limit, or is there some mathematical reason you can't calculate the lag-5 partial autocorrelation from a time series with T=12?
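One workaround worth noting: corrgram reports autocorrelations and partial autocorrelations in a table without drawing a graph, which should be much faster inside a loop over 1,000 series. On the lag limit, if I recall the documentation correctly, these commands cap the lags at min(floor(n/2) - 2, 40), which is 4 when T = 12. A sketch, with a hypothetical variable name:

Code:
* tabular AC/PAC, no plot drawn
corrgram y, lags(4)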

Many thanks.

How to save the "drift" parameter of a Dickey-Fuller unit-root test (dfuller)?

I would like to save the "drift" parameter of a Dickey-Fuller unit-root test, in order to export it with the help of "putexcel". With "drift parameter" I mean the regression constant of the estimated model.

As far as I can see, it is not available among the "stored results" of dfuller. It can, however, be displayed using the "regress" option of dfuller: it then appears in a table under the name "_cons". But "_cons" cannot be saved or exported from there.

Is there any possibility to save the "drift" parameter or would I have to manipulate the source code of "dfuller"?
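One workaround (a sketch, assuming a tsset series y): the drift specification of the Dickey-Fuller test is just a regression of D.y on L.y with a constant, so you can run that regression yourself and store the constant:

Code:
regress D.y L.y
scalar drift = _b[_cons]
putexcel set "results.xlsx", replace
putexcel A1 = (drift)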

Thanks for help!

How do I reshape my dataset with three identifiers (time, location, and index) when it's already wide?

Hello Statalist,
This is an air quality panel dataset, and it looks like this:
[screenshot of the data omitted]
The variable names, in Chinese, indicate different locations. However, I need the pollutant type as the variables across the top, with the locations listed in a column. I read the examples in the reshape documentation and did not find a similar case. How should I do this? Thanks!
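A hedged sketch of the usual long-then-wide pattern, assuming the location columns have first been renamed to a common numeric stub (v1, v2, ...), that date and type identify the rows, and that the type codes are valid variable-name suffixes (all names here are hypothetical):

Code:
* one row per date-type-location
reshape long v, i(date type) j(location)
rename v value
* spread the types across the top; locations now form a column
reshape wide value, i(date location) j(type) string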
Best,
Sony Tian

Proportional sampling based on population mean, variance etc.

Hello.

I am curious whether there is an easy way to select a proportional sample out of panel data that …
  • … is restricted to a specific number of observations (i.e. sample = 100) in the new sample
  • … comes up with the best sample fit (in terms of mean, variance, etc.) compared to the population (the panel data)
  • … considers (calculates and selects the new sample based on) more than five different measurements/variables
I would like to reduce my panel data, which contain the whole population, to a much smaller sample.

It should be still very similar to the original population in the panel data.

Is there anyone that could help me with that?

Thank you.

Konstantin

Data cleaning question for string variables

I have what I hope is a simple data-cleaning question. I'm importing some data, and some individuals in my dataset have *'s beside their names. I want to get rid of these *'s but keep the player name. How would I do so? (Note that this is within a string variable.)

Here's an example of the data:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input str59 Player
"Tom Brady"    
"Aaron Rodgers*"
"Aaron Rodgers*"
end
I would like my data to look like:
Code:
clear
input str59 Player
"Tom Brady"    
"Aaron Rodgers"
"Aaron Rodgers"
end
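subinstr() should handle this; strtrim() also removes the trailing blanks visible in the example:

Code:
replace Player = subinstr(Player, "*", "", .)
replace Player = strtrim(Player)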

Help with a gravity model!

Good morning,

At the moment I am estimating a gravity model for trade in music. My problem is very simple: the coefficients I am getting are simply too big for a regression on logged variables.

Code:
note: dexp40 omitted because of collinearity
note: dimp2 omitted because of collinearity

      Source |       SS       df       MS              Number of obs =    4608
-------------+------------------------------           F( 97,  4510) =  115.74
       Model |   200664.63    97  2068.70752           Prob > F      =  0.0000
    Residual |   80613.966  4510  17.8744936           R-squared     =  0.7134
-------------+------------------------------           Adj R-squared =  0.7072
       Total |  281278.596  4607  61.0546116           Root MSE      =  4.2278

------------------------------------------------------------------------------
    lstreams |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       ldist |  -.6587691     .08396    -7.85   0.000    -.8233719   -.4941664
 comlang_off |   2.953035   .2436569    12.12   0.000     2.475348    3.430722
        home |   6.002062   .5270833    11.39   0.000     4.968721    7.035404
       dexp1 |   4.945245   .6250663     7.91   0.000     3.719809    6.170682
       dexp2 |   14.58816   .6283887    23.22   0.000     13.35621    15.82011
       dexp3 |   7.035149   .6104235    11.53   0.000      5.83842    8.231878
       dexp4 |   9.383499   .6109546    15.36   0.000     8.185729    10.58127
       dexp5 |  -1.895036   .6235645    -3.04   0.002    -3.117528   -.6725441
       dexp6 |   11.25079   .6157759    18.27   0.000     10.04357    12.45801
       dexp7 |   14.62706    .618263    23.66   0.000     13.41496    15.83916
       dexp8 |   .6458567   .6256846     1.03   0.302    -.5807919    1.872505
       dexp9 |   13.03582   .6214147    20.98   0.000     11.81755     14.2541
      dexp10 |  -2.044303   .6204987    -3.29   0.001    -3.260784   -.8278211
      dexp11 |   2.191108   .6102339     3.59   0.000     .9947502    3.387465
      dexp12 |    12.5533    .610251    20.57   0.000      11.3569    13.74969
      dexp13 |    5.28296   .6209701     8.51   0.000     4.065554    6.500366
      dexp14 |  -1.749257   .6217986    -2.81   0.005    -2.968287   -.5302275
      dexp15 |   -2.03346   .6207022    -3.28   0.001    -3.250341   -.8165796
      dexp16 |   5.865316   .6107381     9.60   0.000      4.66797    7.062662
      dexp17 |   14.32632   .6105227    23.47   0.000     13.12939    15.52324
      dexp18 |   14.22159   .6104212    23.30   0.000     13.02487    15.41832
      dexp19 |   4.442653   .6110472     7.27   0.000     3.244701    5.640605
      dexp20 |  -2.040904   .6205621    -3.29   0.001     -3.25751   -.8242982
      dexp21 |   .6050663   .6102633     0.99   0.322    -.5913489    1.801482
      dexp22 |   2.676738   .6114521     4.38   0.000     1.477992    3.875484
      dexp23 |   1.845969   .6213763     2.97   0.003     .6277672    3.064171
      dexp24 |   10.20732   .6119518    16.68   0.000     9.007591    11.40704
      dexp25 |   13.39599   .6105503    21.94   0.000     12.19901    14.59297
      dexp26 |   .6241502   .6103874     1.02   0.307    -.5725083    1.820809
      dexp27 |  -.7422275   .6103943    -1.22   0.224      -1.9389    .4544446
      dexp28 |  -1.635686    .612465    -2.67   0.008    -2.836418   -.4349548
      dexp29 |   4.974551    .623605     7.98   0.000      3.75198    6.197123
      dexp30 |   13.80133   .6102519    22.62   0.000     12.60494    14.99772
      dexp31 |   8.408377   .6281037    13.39   0.000     7.176986    9.639768
      dexp32 |  -1.386433   .6205055    -2.23   0.026    -2.602927   -.1699379
      dexp33 |   13.71171   .6104243    22.46   0.000     12.51498    14.90844
      dexp34 |   2.674934   .6206225     4.31   0.000     1.458209    3.891658
      dexp35 |  -1.739016   .6239791    -2.79   0.005    -2.962321   -.5157109
      dexp36 |  -1.454858   .6235239    -2.33   0.020     -2.67727   -.2324452
      dexp37 |   1.662482   .6243582     2.66   0.008     .4384339     2.88653
      dexp38 |   13.61984   .6102974    22.32   0.000     12.42336    14.81632
      dexp39 |    5.13073   .6111923     8.39   0.000     3.932493    6.328966
      dexp40 |          0  (omitted)
      dexp41 |   13.50532    .617114    21.88   0.000     12.29547    14.71516
      dexp42 |   13.59787   .6104942    22.27   0.000       12.401    14.79474
      dexp43 |   2.070785    .611107     3.39   0.001     .8727162    3.268855
      dexp44 |  -.5465918   .6194238    -0.88   0.378    -1.760966    .6677826
      dexp45 |   1.800519   .6112067     2.95   0.003     .6022548    2.998784
      dexp46 |   15.61918   .6115066    25.54   0.000     14.42033    16.81804
      dexp47 |   16.84393   .6161336    27.34   0.000     15.63601    18.05186
      dexp48 |   3.484983   .6246923     5.58   0.000      2.26028    4.709686
       dimp1 |  -.5169499   .6119338    -0.84   0.398     -1.71664    .6827402
       dimp2 |          0  (omitted)
       dimp3 |  -.9982875   .6261751    -1.59   0.111    -2.225898    .2293226
       dimp4 |  -1.543314   .6246724    -2.47   0.014    -2.767979   -.3186504
       dimp5 |  -2.493593   .6123757    -4.07   0.000    -3.694149   -1.293036
       dimp6 |  -1.488432   .6141977    -2.42   0.015     -2.69256    -.284303
       dimp7 |  -1.744134   .6131087    -2.84   0.004    -2.946127   -.5421401
       dimp8 |  -.8792039   .6117905    -1.44   0.151    -2.078613    .3202052
       dimp9 |  -1.716964   .6132944    -2.80   0.005    -2.919321   -.5146062
      dimp10 |  -1.692138    .613818    -2.76   0.006    -2.895522   -.4887536
      dimp11 |  -1.012701   .6283196    -1.61   0.107    -2.244515    .2191136
      dimp12 |  -.4275758   .6273332    -0.68   0.496    -1.657456    .8023047
      dimp13 |  -2.224384   .6135369    -3.63   0.000    -3.427217   -1.021551
      dimp14 |  -1.841544    .613101    -3.00   0.003    -3.043523   -.6395659
      dimp15 |  -2.867881   .6136935    -4.67   0.000    -4.071021   -1.664741
      dimp16 |  -.4457421   .6229358    -0.72   0.474    -1.667002    .7755174
      dimp17 |  -1.789707   .6249028    -2.86   0.004    -3.014822   -.5645909
      dimp18 |   .4755847   .6262185     0.76   0.448    -.7521106     1.70328
      dimp19 |  -1.084995   .6217852    -1.74   0.081    -2.303999    .1340087
      dimp20 |  -7.395029   .6137786   -12.05   0.000    -8.598335   -6.191722
      dimp21 |  -1.515205   .6270143    -2.42   0.016     -2.74446   -.2859493
      dimp22 |  -2.102666   .6205155    -3.39   0.001     -3.31918    -.886151
      dimp23 |  -.2907965   .6117342    -0.48   0.635    -1.490095    .9085023
      dimp24 |  -1.747562   .6205976    -2.82   0.005    -2.964238   -.5308865
      dimp25 |   .4170185   .6239907     0.67   0.504     -.806309    1.640346
      dimp26 |  -2.229249   .6253329    -3.56   0.000    -3.455209    -1.00329
      dimp27 |  -1.086725   .6252681    -1.74   0.082    -2.312557     .139107
      dimp28 |  -1.845894   .6191436    -2.98   0.003    -3.059718   -.6320687
      dimp29 |  -.7199771   .6123618    -1.18   0.240    -1.920506    .4805522
      dimp30 |  -.8983727   .6278126    -1.43   0.153    -2.129193    .3324477
      dimp31 |  -.9845153    .610235    -1.61   0.107    -2.180875    .2118444
      dimp32 |  -7.489847   .6138138   -12.20   0.000    -8.693223   -6.286472
      dimp33 |  -1.422042   .6250039    -2.28   0.023    -2.647356   -.1967282
      dimp34 |  -2.550239   .6137416    -4.16   0.000    -3.753474   -1.347005
      dimp35 |  -2.119479   .6122393    -3.46   0.001    -3.319768   -.9191894
      dimp36 |  -1.066587   .6123896    -1.74   0.082    -2.267171    .1339966
      dimp37 |  -.5125992   .6105097    -0.84   0.401    -1.709498    .6842991
      dimp38 |  -.6311261   .6263899    -1.01   0.314    -1.859157    .5969051
      dimp39 |  -1.163926    .621138    -1.87   0.061    -2.381661     .053809
      dimp40 |  -1.190264   .6283887    -1.89   0.058    -2.422213    .0416862
      dimp41 |  -.3276512   .6168866    -0.53   0.595    -1.537051     .881749
      dimp42 |  -.3337024   .6243783    -0.53   0.593     -1.55779    .8903851
      dimp43 |  -.7770938   .6234663    -1.25   0.213    -1.999393    .4452057
      dimp44 |  -4.901238   .6123542    -8.00   0.000    -6.101752   -3.700723
      dimp45 |  -.8839441   .6212445    -1.42   0.155    -2.101888    .3339996
      dimp46 |  -.8803774   .6223687    -1.41   0.157    -2.100525    .3397702
      dimp47 |  -.8205589   .6139556    -1.34   0.181    -2.024213     .383095
      dimp48 |  -1.971208   .6120308    -3.22   0.001    -3.171088   -.7713272
       _cons |   8.175194   .9709411     8.42   0.000     6.271674    10.07871
------------------------------------------------------------------------------

Can anyone give me advice on how to fix this?

With kind regards,
Ramadan Aly

Interpretation of result for cross sectional independence test - xtcd2

Hi - I have obtained the following result, and the negative CD value leaves me unsure how to interpret it.

xtcd2 DADILL
Pesaran (2015) test for weak cross sectional dependence.
Unbalanced panel, test adjusted.

H0: errors are weakly cross sectional dependent.
CD=-0.133
p-value = 0.894

Also - with my dataset having many individuals (c.2500) and only 3 rounds in the panel - is autocorrelation a huge concern for analysis anyway?

Question: Creating an indicator for every fifth event

I have a question concerning an efficient way to create an indicator for the date at which an event occurs, as a function of a rolling sum. My dataset looks like the one below; here, AW denotes a credit applied to the account of a person (denoted by id).

I am interested in creating an indicator for the date at which a person reaches five credits in their account. For example, person 186 has 12 credits in their account, so I want to create an indicator for the dates at which 186 reaches the fifth and tenth credits, whereas person 010 would have only one indicator because they have only 6 credits.


I am using Stata 14.2. Thanks in advance for any assistance.

MDC

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input double id str9 trans_date str2 status_code 
186 "03JAN2019" "AW"
186 "03OCT2019" "AW" 
186 "17JAN2009" "AW" 
186 "26MAY2009" "AW" 
186 "10MAY2009" "AW" 
186 "23APR2011" "AW" 
186 "03JAN2019" "AW" 
186 "03OCT2019" "AW" 
186 "17JAN2009" "AW" 
186 "26MAY2009" "AW" 
186 "10MAY2009" "AW" 
186 "23APR2011" "AW" 
010 "23NOV2009" "AW" 
010 "30OCT2010" "AW" 
010 "08OCT2011" "AW" 
010 "23NOV2009" "AW" 
010 "30OCT2010" "AW" 
010 "08OCT2011" "AW" 
end
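One sketch: convert trans_date to a Stata date, build a running count of credits within each id, and flag the rows where the count hits a multiple of five:

Code:
gen date = daily(trans_date, "DMY")
format date %td
sort id date
by id: gen ncredits = sum(status_code == "AW")
* milestone == 1 on the dates of the 5th, 10th, ... credits
gen byte milestone = status_code == "AW" & mod(ncredits, 5) == 0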

Create numlist based on values?


I am currently struggling a bit with Stata syntax. As I am using the "margins" command, I want to specify an option. The following is from the "help margins":

at(age=(20(10)50)) does the same as at(age=(20 30 40 50)); that is,
you may specify a numlist.
If I do not want to hardcode the numeric values into the numlist, but want to have it dynamic, how can I do that? So for example, I want the numlist to go from min(age) to max(age) in 10 even steps. Or alternatively, I want to go from one standard deviation below mean to one standard deviation above mean in 3 even steps.

I feel like I probably cannot do this within the parentheses of margins, but even when trying to construct a numlist prior to the margins command with these values, it seems that I am constantly failing at the syntax. "help numlist" isn't that helpful for me, as it only shows hardcoded numbers.
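A sketch of one way to do it: compute the endpoints with summarize, store them in locals, and splice the locals into at(). This works because the locals expand to plain numbers before margins parses the numlist:

Code:
* min to max in 10 even steps
quietly summarize age
local lo = r(min)
local hi = r(max)
local step = (`hi' - `lo') / 10
margins, at(age = (`lo'(`step')`hi'))

* mean - 1 sd to mean + 1 sd in 3 even steps
local a = r(mean) - r(sd)
local b = r(mean) + r(sd)
local s = (`b' - `a') / 2
margins, at(age = (`a'(`s')`b'))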

Creating principal components output after pca

C:\Users\AppData\Local\Temp\ST_04000001.tmp * *not found

I'm converting SPSS files into .dta files through many loops, and one of them is like the following (the others only change the name of the SPSS file):

Code:
cd "$directorio/`i'/`i'-Modulo238";
usespss 10_IVCENAGRO_REC05.sav;
save 238, replace;
erase 10_IVCENAGRO_REC05.sav;

and after running it, the error in the title appears:

file
C:\Users\AppData\Local\Temp\ST_04000001.tmp
not found
r(601);


I tried searching manually for the file ST_04000001.tmp, but it indeed does not exist.

Thanks in advance for your responses and any possible solutions.

Literature on Stata's Extended Regression Models (ERMs): is there any?

I'm using extended probit (eprobit) and extended linear regression (eregress) in my dissertation. I'd like to reference some literature about these models.

However, the only references for ERMs I've found are Stata's own documentation.

I want to know exactly how ERMs handle endogeneity (if they do) and how they differ from IV models.

Thanks.

Recentered Influence Function (RIF) regression and decomposition

Dear all,
Thanks to Prof. Baum, the newest update to the -rif- package is now available, with the versions of the commands that are explained in an upcoming Stata Journal article.
Thanks also to the people who used these commands before and reported some bugs, which have now been fixed.
The commands include:

- rifvar(), a set of extensions to egen that estimate RIFs for a large set of distributional statistics. It can now be used in combination with other user-written commands that estimate RIF statistics.

- rifhdreg. This command estimates RIF-OLS regressions, also allowing for multiple fixed effects. Perhaps the most popular application is the estimation of unconditional quantile regressions. The command also has options to obtain treatment effects with binary or multivalued treatments, based on inverse probability weighting (IPW). This is similar to what -teffects ipw- does, but now also for inequality indices; these are also known as inequality treatment effects.
Something not many people may have noticed about this command: it can be used to estimate confidence intervals for any statistic for which a RIF exists, and it can even do so for multiple groups at the same time.

- oaxaca_rif. This command implements both simple and reweighted Oaxaca decompositions using RIFs as the dependent variable. The major update here is that the -noisily- option now displays the output of all intermediate steps and saves them as separate items among the stored results in e().

- rifsureg and rifsureg2. Something people often ask for in the forum is an equivalent of sqreg (the simultaneous quantile regression estimator) but for unconditional quantiles. This is exactly what rifsureg does: you can estimate simultaneous unconditional quantile regressions, using most of the same options as rifhdreg. The caveat: it only estimates unconditional quantile regressions.

rifsureg2 also runs simultaneous regressions, but in contrast with rifsureg it can handle other statistics: the Gini, quantiles, the variance, absolute indices, etc.

- uqreg. This is the last command added to the package. As the name says, it only estimates unconditional quantile regressions. The difference from rifhdreg is that it allows methods other than OLS (RIF-probit, RIF-logit, etc.).

If you are interested in the details, all the relevant references are available in the helpfiles.
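For readers landing here, a minimal usage sketch (variable names are hypothetical; check the helpfile for the exact option syntax):

Code:
ssc install rif, replace
* unconditional (RIF) median regression of log wages
rifhdreg lnwage educ exper, rif(q(50)) robust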

Best Regards.
Fernando

ppml_panel_sg: RESET test (p-value)

Hi,

I am running the RESET test (p-value) for a gravity model with fixed effects (exporter-year, importer-year, pair) as follows:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ppml_panel_sg TRADE RTA if exporter != importer, ex(exporter) im(importer) y(year) cluster(pairid)
estimates store PPMLfesPAIR

***RESET test

predict fit, xb
generate fit2 = fit^2
ppml_panel_sg TRADE RTA fit2 if exporter != importer, ex(exporter) im(importer) y(year) cluster(pairid)
test fit2 = 0
drop fit*
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

However, I get this error:

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
note: fit2 omitted because of collinearity over lhs>0 (creates possible existence issue)
Iterating...





test fit2 = 0
fit2 not found
r(111);

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

What should I do?

Thanks!

Modifying the collapse option in the xtabond2 command

I am implementing system GMM using the xtabond2 command. The data series is very long, and this long history generates a very large number of instrumental variables (IVs), which produces a heavily overidentified result. Unfortunately, when I include the collapse option, the results change from what is observed in the previous literature and don't make intuitive sense. I suspect that averaging the individual instruments across all observations is removing the information needed to produce meaningful coefficients.

Do you know how to modify the collapse option so that the IV matrix collapses only the earlier observations of the instrument matrix and leaves the more recent observations? Thank you, Dan

Generate mean depending on variable specification

Dear folks,

I am currently working on a dataset that contains sales data for items in stores across the country. Each row represents a purchase and contains information on the product name, product brand, units sold, unit price, date, store name, store brand, state of the store, etc. For the sake of a price analysis, I am currently trying to create a mean price across the whole observational period. However, as prices obviously differ across brands, store types, and time, one has to create several individual means that control for each of these variable specifications.
More precisely:

I want Stata to create individual mean prices for each of all observations that share:

The same product (variable 1)
The same store type (variable 2)
The same state of store (variable 3)
The same week of purchase (variable 4)
The same year of purchase (variable 5)

I have some working knowledge of creating new variables, means, etc. in Stata. However, I haven't managed to get it done when the mean depends on more than one variable, as in the case above.
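The list above maps directly onto egen's by() option; a sketch with hypothetical variable names:

Code:
* one mean price per product / store type / state / week / year cell
egen mean_price = mean(unit_price), by(product store_type state week year)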

Many thanks in advance!

