
Investor IQ: Focus List

May 22, 2019

Our Investor IQ weekly publication, which is updated every Monday morning (at the top right-hand corner of this blog), provides basic trend-following and relative strength (RS) signals for both US and Canadian ETFs and individual stocks. At the request of some of our readers, we recently added a "Focus List" that highlights both long and short positions to focus on. Focus List long positions have a relative strength above 90% and are either a buy or a hold based on a composite of trend-following and momentum signals. Focus List short positions have a relative strength below 10% and are either a sell or a hold. An example can be seen in the picture below:
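For readers who want to replicate the screen, the Focus List rules reduce to a simple filter. The sketch below is our own illustration with made-up tickers and signals, not the actual Investor IQ code:

```python
# Hypothetical illustration of the Focus List rules: longs need
# RS > 90% with a Buy/Hold signal; shorts need RS < 10% with a
# Sell/Hold signal. Tickers and signals are made up.

def build_focus_list(signals):
    """signals: dict of ticker -> (relative_strength_pct, signal)."""
    longs = [t for t, (rs, sig) in signals.items()
             if rs > 90 and sig in ("Buy", "Hold")]
    shorts = [t for t, (rs, sig) in signals.items()
              if rs < 10 and sig in ("Sell", "Hold")]
    return longs, shorts

signals = {
    "AAA": (95, "Buy"),   # strong RS with a buy signal -> focus long
    "BBB": (92, "Sell"),  # strong RS but a sell signal -> excluded
    "CCC": (50, "Hold"),  # middling RS -> excluded
    "DDD": (5, "Sell"),   # weak RS with a sell signal -> focus short
}

longs, shorts = build_focus_list(signals)
```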

We also recently added signals for the Dow 30 stocks and the S&P/TSX 60 stocks in Canada. We plan to expand the universe of both ETFs and stocks over time. As a result of the trade war, we have added the Chinese Yuan (CYB) to the currency section for US ETFs as one ticker worth watching, since it has been an early warning for the stock market in recent times.

Current and Historical S&P500 Economic Forecasting Model Predictions

May 15, 2019

In the last post we introduced the S&P500 Economic Forecasting Model, which seeks to predict the chances of a moderate or large drawdown over the next 90 days. The model considers a large range of macroeconomic variables and their derivatives to assess the likelihood of a given event. What the model can do is identify signs of economic weakness that may or may not be reflected in the current market price. What the model cannot do is identify sentiment-, liquidity-, or news-driven corrections (i.e., the Donald Trump effect). Currently the model is waiting on several new pieces of data before making a new prediction. The table below shows the model predictions using "point-in-time" data as of the first day of the respective month. Here is the current and historical output:

The most recent prediction, made as of the beginning of March, shows that the model does not expect a moderate to large correction through the end of May. The direction of the market is expected to be sideways or flat, so from a macroeconomic perspective the economy seems healthy for the time being. Absent any major news events or tweets, the economy is likely to keep chugging along and the stock market will probably remain in a trading range before climbing higher. The historical predictions have been fairly accurate in recent times and provide some guidance on when it might be worth buying on weakness (e.g., during December 2018) by looking at both the chance of a drawdown and the predicted market direction. While the model was created without running backtests, we will show some applications of using the signals for timing the S&P500.

Shiny New Toys

May 14, 2019

It's been a long time, folks, but we have some shiny new toys in the works. Current trends in the industry, and working with data scientists, have made me a believer in the benefits of using a machine learning approach. I have always been a proponent of "theory-free" approaches on this blog as long as they are designed with a robust architecture. In contrast, strict adherence to overly simplistic theories and rules is not optimal for complex systems like the stock market. After many years of getting whipsawed by traditional indicators, I have recently become convinced a la Philosophical Economics (see this great piece) that you need a model (or models) that can provide insight into market returns and risk without strictly using price-based indicators. A true macroeconomic model helps to gauge risk that may not be present in current prices and also helps to de-emphasize reliance on price movements that are false alarms. Predicting recessions is not necessarily the most useful goal for macro models because 1) you can have a bear market without a recession and 2) you can have a recession without a bear market. Furthermore, you can have large and damaging corrections that are neither. As a result, predicting drawdowns is potentially a more interesting and practical exercise.

S&P 500 Indicator Series

Economics Report

The S&P500 Indicator Series are machine learning forecasting models that use either 1) Macroeconomic, 2) Sentiment, 3) Technical, or 4) Seasonality data with a very wide range of indicators/inputs to make investing decisions.

S&P 500 Economic Forecasting Model Introduction

The S&P 500 Economic Forecasting Model employs a Gradient Boosting Model (GBM) to predict the future distribution of S&P500 returns over the next 90 days based on economic data. GBM is a machine learning methodology which can be used for either regression or classification.

The S&P500 Economic Forecasting Model is a classifier model that predicts the likelihood of equity market drawdowns (moderate or large corrections) and the direction of returns (positive, negative or flat) over a 90-day period. The input variables are derivatives of monthly aggregated macroeconomic data and do not include price-based or technical data. The choice of a classifier model reflects the fact that equity markets are driven by a wide variety of variables that are often nonlinear in nature. Furthermore, it is important to note that macroeconomic variables are just one component explaining the variation in equity market returns, and by using a classifier we avoid many of the issues that regression models have with unobserved features.

The model itself is based on an ensemble of GBM-style models (specifically using the XGBoost library). A large number of input macroeconomic data series are selected (see Model Importances for the list) and transformed to create derivative time series. Given that monthly economic data is still relatively sparse (60 years of backdata x 12 months/year), we wanted to choose a model technique that doesn't require huge amounts of data but is still very flexible. We excluded alternative models such as logistic regression and neural networks for this reason.

In a GBM model that is attempting to match similar periods together, it is important to make the input values 'comparable' in some sense, so the raw values are not appropriate in most cases. Otherwise, it is possible for the model to simply use the values to memorize where it is in time, which does not generalize well. Instead, values are transformed to make them relative (i.e. percentage change year over year, or log differences). It is not necessary to make the inputs stationary in a strict sense, but doing so is useful to maximize the generality of the model.
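As a rough illustration of these transforms (our own sketch, not the model's actual pipeline), a monthly macro series can be made comparable across time like so:

```python
import numpy as np
import pandas as pd

# Sketch of the transforms described above: turn a levels-based
# monthly macro series into relative values via year-over-year
# percentage change and month-over-month log differences.
# The series itself is a made-up placeholder.

series = pd.Series(np.linspace(100.0, 150.0, 36))  # 36 "months" of data

yoy_pct = series.pct_change(12)      # % change year over year
log_diff = np.log(series).diff()     # month-over-month log difference
```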

The models are trained using a k-fold training algorithm, using a Bayesian optimization routine to select the hyperparameters (tree depth, learning rate, etc). Again, this is done to maximize accuracy and generality while avoiding overfitting.
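A hedged sketch of this training loop follows. The post uses XGBoost with Bayesian optimization; as a stand-in here we use scikit-learn's GradientBoostingClassifier with a randomized search over the same kinds of hyperparameters (tree depth, learning rate), cross-validated with k-fold splits on toy data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold, RandomizedSearchCV

# Toy stand-in for the macro feature matrix and drawdown labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Hyperparameter space: tree depth, learning rate, number of trees.
param_space = {
    "max_depth": [2, 3, 4],
    "learning_rate": [0.01, 0.05, 0.1],
    "n_estimators": [50, 100],
}

# Randomized search here stands in for the Bayesian routine in the post;
# KFold provides the k-fold training described above.
search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_space,
    n_iter=5,
    cv=KFold(n_splits=3, shuffle=True, random_state=0),
    random_state=0,
)
search.fit(X, y)
best_params = search.best_params_
```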

The output of the model is a score; a cutoff is then optimized to maximize the Matthews Correlation Coefficient, which can be considered a robust accuracy measure for unbalanced classification sets (which the training data in fact are).
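A minimal sketch of optimizing a score cutoff for MCC on an unbalanced set (the labels and scores below are toy values, not the model's output):

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

# Toy unbalanced labels (rare positives) and model scores.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
scores = np.array([.1, .2, .15, .3, .25, .1, .2, .35, .8, .7])

# Sweep candidate cutoffs and keep the one with the highest MCC.
thresholds = np.linspace(0.05, 0.95, 19)
mccs = [matthews_corrcoef(y_true, (scores >= t).astype(int))
        for t in thresholds]
best_threshold = thresholds[int(np.argmax(mccs))]
```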

The model results over time are shown in the chart below. The blue and red bars show the periods where we expect a drawdown of 10%+ (Moderate Correction) or 15%+ (Large Correction), respectively, from the end of that period onwards.

More on this model to follow very soon along with weekly model updates on the predicted output.

Welcome to Investor IQ

May 13, 2019

There is some interesting new content on the CSSA blog that will be very useful for readers. Investor IQ is currently a free tool that shows basic trend signals (Buy, Hold or Sell) for a wide range of US and Canadian ETFs as well as a relative strength ranking. The signals will be updated as of the close of Friday and posted on Monday morning. This feature is currently in Beta and will be expanded to include individual stocks and other analytics. It can be found on the blog under the tab "CSSA" as a dropdown menu. A sample of some of the output can be seen below. More details to follow…

“2D Asset Allocation” using PCA (Part 2)

August 21, 2018

In the last post we showed how to use PCA to create Offense and Defense portfolios by focusing on the first principal component or "PC1." After rotation is completed, it is possible to derive weights or portfolios for each principal component. Another good primer on using PCA for asset allocation was written by a reader of the blog, Dr. Rufus Rankin; the link for his book is here. We can separate the PC1 portfolio, which represents broad systematic risk, into two dimensions, Offense (Risk On) and Defense (Risk Off), by isolating positive versus negative weights. To form each portfolio, take the absolute value of each weight and divide it by the sum of absolute values of the weights in the respective Offense or Defense portfolio. In this example we use 8 core asset classes for the sake of simplicity: Domestic Equity, Emerging Market Equity, International Equity, Commodities, High Yield Bonds, Gold, Intermediate Treasurys, and Long-Term Treasurys. Here is the PC1 Offense Portfolio using the in-sample period from 1995-2018 on various ETFs with extensions using indices:
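The weighting rule can be sketched as follows; the PC1 loadings below are illustrative placeholders, not the post's actual PCA output:

```python
import numpy as np

# Split illustrative PC1 loadings by sign, then normalize each side
# by the sum of absolute values to form Offense and Defense weights.
assets = ["DomEq", "EM", "IntlEq", "Cmdty", "HY", "Gold", "IT", "LT"]
pc1 = np.array([0.40, 0.45, 0.40, 0.30, 0.20, 0.10, -0.25, -0.45])

offense = np.where(pc1 > 0, np.abs(pc1), 0.0)   # positive loadings only
defense = np.where(pc1 < 0, np.abs(pc1), 0.0)   # negative loadings only
offense /= offense.sum()
defense /= defense.sum()
```

Each side then sums to 100%, with the largest absolute loadings getting the largest weights.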

This portfolio shows that some of the more aggressive asset classes such as emerging markets have the highest weighting, while international and domestic equity have nearly equal weightings. Equity overall has the highest weighting in the offense portfolio, which is logical. Commodities take second spot, while assets such as high yield bonds and gold have smaller weightings. In general this portfolio makes sense: for the most part, when the market goes down and systematic risk is very high, all of these asset classes have a tendency to fall, while during a bull market they tend to do very well. In contrast, the PC1 Defense Portfolio looks predictably like the opposite of the offense portfolio:

The PC1 Defense Portfolio is a high-duration portfolio tilted toward long-term treasurys that has historically performed quite well during recessionary periods or other periods when systematic risk is high. The performance of both the PC1 Offense and Defense Portfolios over time is plotted in the graph below.

In the graph we can clearly see the inverse correlation between the PC1 Offense and Defense Portfolios; both perform well at different times, as we would expect. A simple tactical model would be to hold the PC1 Offense portfolio when systematic risk is low and the PC1 Defense portfolio when risk is high. To do this, we can apply a 200-day simple moving average rule to the PC1 Offense portfolio on a daily basis (generating an equity curve using the PC1 weights and rebalancing the portfolio monthly): hold the PC1 Offense portfolio when risk is on (its equity curve is above its 200-day SMA) and hold the PC1 Defense portfolio when risk is off (the PC1 Offense equity curve is below its 200-day SMA). We can give this simple strategy a name, "2D Asset Allocation," which represents the two dimensions into which we have separated the asset class universe: Offense and Defense. The performance of this strategy is shown below:
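The switching rule can be sketched as follows; the return series here are simulated placeholders, not the actual PC1 portfolio returns:

```python
import numpy as np
import pandas as pd

# Simulated daily returns standing in for the PC1 Offense and
# Defense portfolios.
rng = np.random.default_rng(1)
n = 600
offense_ret = pd.Series(rng.normal(0.0005, 0.01, n))
defense_ret = pd.Series(rng.normal(0.0002, 0.005, n))

# Risk-on when the Offense equity curve is above its 200-day SMA.
offense_curve = (1 + offense_ret).cumprod()
risk_on = offense_curve > offense_curve.rolling(200).mean()

# Trade on the next day's return to avoid look-ahead bias.
strategy_ret = np.where(risk_on.shift(1, fill_value=False),
                        offense_ret, defense_ret)
strategy_curve = (1 + pd.Series(strategy_ret)).cumprod()
```

Before the 200-day window fills, the SMA is undefined and the comparison is False, so the sketch defaults to Defense in that period.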

The performance of this simple strategy is quite good; it manages to perform well even during the 2015 period, which was difficult for traditional momentum/trend-following strategies. Below is a table showing the summary statistics. A good tactical strategy should ideally perform better than the buy-and-hold version of its underlying offense/defense components over a full market cycle. Clearly, 2D Asset Allocation does substantially better than either component in isolation.

The best part about this strategy is that it was by no means "curve-fit," since the 200-day SMA is a well-established strategy and is not the optimal rule on the PC1 Offense portfolio. Using PCA to reduce dimensionality and derive this portfolio is a well-established statistical practice. The only caveat is that this portfolio was derived "in sample," which is less than ideal but no different than the starting place from which traditional system developers create trading strategies via backtests. Perhaps a better way would be to use a rolling or anchored PCA to derive the two portfolios on a walk-forward basis. The choice of asset class universe in this case was designed to capture major asset classes, but the good thing about PCA is that you can use just about any asset class universe you want without introducing undue bias by choosing an arbitrary subset. In either case, this is a good example of how tactical asset allocation can be greatly simplified. Refinements to the strategy could include holding a minimum allocation to PC1 Defense for diversification purposes or potentially using momentum within the PC1 Offense and Defense portfolios to overweight/underweight different holdings. The possibilities are endless.


This material is for informational purposes only. It is not intended to serve as a substitute for personalized investment advice or as a recommendation or solicitation of any particular security, strategy or investment product. Opinions expressed are based on economic or market conditions at the time this material was written. Economies and markets fluctuate. Actual economic or market events may turn out differently than anticipated. Facts presented have been obtained from sources believed to be reliable; however, we cannot guarantee the accuracy or completeness of such information, and certain information presented here may have been condensed or summarized from its original source.

“2D Asset Allocation” Using PCA (Part 1)

July 23, 2018

Asset allocation is a complex problem that can be solved using endless variations of different approaches that range from theoretical, like Mean-Variance, to heuristic, like Minimum Correlation or even "tactical strategies." Another challenge is defining an appropriate asset class universe, which can lead to insidious biases that even experienced practitioners can fail to grasp or appreciate. Reducing dimensionality and the number of assumptions is the ultimate goal. The simplest way to manage a portfolio is to revert to a CAPM world where there is a market portfolio and you can leverage or hold cash to meet your risk tolerance requirements. But this method also requires one to define a "market portfolio," which in theory is the market-cap weighted mix of investable asset classes, but in practice is elusive to define and determine on a real-time basis. What we really want is a sense of what drives systematic risk across a range of asset classes, a portfolio that best represents that systematic risk (offense), and a portfolio that is inversely correlated to that systematic risk (defense). A parsimonious way to make that determination is to use Principal Component Analysis (PCA) by isolating the PC1 or first principal component portfolio that explains most of the variation across a broad set of asset classes. In most cases, the first principal component will explain 60-70% of the variation across asset classes and represents a core systematic risk factor. If we take a large basket of core asset classes, we can use PCA to identify this PC1 portfolio over the period from 1995-2018 using ETFs with index extensions. In this case we used the R code provided in Jim Picerno's excellent new book, Quantitative Investment Portfolio Analytics in R.
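The post uses R code from Picerno's book; a rough numpy equivalent of extracting PC1 and its explained variance, run here on simulated returns rather than the actual asset data, might look like:

```python
import numpy as np

# PC1 is the eigenvector of the return correlation matrix with the
# largest eigenvalue; that eigenvalue's share of the trace is the
# fraction of variance explained. Returns are simulated from a
# single common factor with positive and negative betas.
rng = np.random.default_rng(2)
market = rng.normal(0, 0.01, 1000)                 # common risk factor
rets = np.column_stack([
    beta * market + rng.normal(0, 0.004, 1000)     # asset = beta*factor + noise
    for beta in [1.2, 1.0, 0.8, 0.5, -0.6, -0.9]
])

corr = np.corrcoef(rets, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)            # ascending eigenvalues
pc1 = eigvecs[:, -1]                               # largest eigenvalue's vector
explained = eigvals[-1] / eigvals.sum()
```

On simulated data like this, the positive-beta assets share one sign in PC1 and the negative-beta assets take the opposite sign, which is exactly the offense/defense split described above.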

We can see that this PC1 portfolio makes a lot of intuitive sense: the highest weights are in Emerging Markets, Nasdaq/Technology, and Small Cap (Offense). Asset classes with negative weights have an inverse relationship to this core systematic risk factor, and the lowest are Long-Term Treasurys followed by Intermediate Treasurys, Inflation-Protected Treasurys, the Aggregate Bond Index and Short-Term Treasurys (Defense). Effectively the "Offense" portfolio is positively tilted toward the most aggressive asset classes that likely perform the best during a bull market, while the "Defense" portfolio is positively tilted toward the most defensive asset classes that likely perform the best in a bear market. With one calculation we have mathematically separated the asset classes into two broad groups/dimensions which can be used to create a wide variety of different simple asset allocation schemes. In a subsequent post we will show some examples of how this can be done.

Adaptive Volatility: A Robustness Test Using Global Risk Parity

November 29, 2017

In the last post we introduced the concept of using adaptive volatility in order to have a flexible lookback as a function of market conditions. We used the R-squared of price as a proxy for the strength of the trend in the underlying market in order to vary the half-life in an exponential moving average framework. The transition function used an exponential formula to translate R-squared to a smoothing constant. There are many reasons why this approach might be desirable, from a regime- or state-dependent volatility framework to improved mitigation of tail risk by being more responsive to market feedback loops, as mentioned in this article from Alpha Architect. In the latter case, by shortening the volatility lookback when the market seems to be forming a bubble in either direction (as measured by trend measures such as the Hurst Exponent or R-squared), we can more rapidly adjust volatility to changes in market conditions.

In order to test the robustness of the adaptive volatility measure we decided to follow the approach of forming risk parity portfolios, which was inspired by this article by Newfound Research. Our simple Global Risk Parity portfolio uses five major asset classes: Domestic Equities (VTI), Commodities (DBC), International Equity (EFA), Bonds (IEF), and Real Estate (ICF). The choice of VTI was deliberate since we already did the first test using SPY. VTI contains the full spectrum of domestic equities including large, mid and small cap, whereas SPY is strictly large cap. We created simple risk parity portfolios (position size is equivalent to 1/vol scaled to the sum of inverse vol across assets) with weekly rebalancing and a 1-day delay in execution. For realized volatility portfolios we ran each individually using various parameters including 20-day, 60-day, 252-day and all history. To test adaptive volatility we ran 27 different portfolios that varied the maximum smoothing constant and the R-squared lookback. The smoothing constant was varied between 0.1, 0.5 and 0.9, and the R-squared lookback was varied using 10, 12, 15, 20, 25, 30, 40, 50 and 60 days. We chose to keep the multiplier (-10 in the original post) the same since it was a proxy for the longest possible lookback (all history) by design. The testing period was from 1995 to present and we used extensions with indices for the ETFs when necessary to go back in time. In the graph below we chart the return versus risk for each portfolio.
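The position-sizing rule quoted above (1/vol scaled by the sum of inverse vols) is simple to state in code; the volatilities below are illustrative placeholders, not the test's actual estimates:

```python
import numpy as np

# Inverse-volatility risk parity weights: each asset is weighted by
# 1/vol, normalized by the sum of inverse vols. Annualized vols here
# are illustrative values for VTI, DBC, EFA, IEF, ICF.
vols = np.array([0.15, 0.18, 0.16, 0.06, 0.14])

inv_vol = 1.0 / vols
weights = inv_vol / inv_vol.sum()
```

As expected, the lowest-volatility asset (bonds in this sketch) gets the largest weight.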


We used a line to separate the performance of the realized volatility portfolios to better illustrate the superiority in performance of the adaptive volatility portfolios. All parameter combinations outperformed the realized volatility portfolios on a return basis. In terms of risk-adjusted return or Sharpe ratio, the realized volatility portfolios fell at the 0, 3.3, 13.3 and 33 percentiles of the distribution of all portfolios; in other words, nearly all the adaptive portfolios also outperformed on a risk-adjusted basis. Was there any systematic advantage to using certain parameters for the adaptive volatility portfolios? As it turns out, the maximum smoothing constant was less important than the choice of R-squared lookback. We stated in the previous post that shorter R-squared parameters were on average more desirable than long parameters, as long as they weren't so short as to capture noise. Shorter lookbacks should allow the adaptive volatility to more rapidly adjust to current market conditions and therefore reduce drawdowns and improve returns. This pattern is validated when we average across smoothing constant values (hold them constant) and look at the return relative to maximum drawdown (MAR) as a function of R-squared lookback.


Clearly the shorter-term R-squared values improved the return relative to maximum drawdown. While not shown, the drawdowns were much lower and drove this effect, while the returns showed a more modest improvement. The drawback to shorter lookbacks is increased turnover, which can be reduced through longer rebalancing windows, improved smoothing measures, or rules that dampen allocation changes without materially affecting results. Another alternative is to average all possible R-squared and smoothing constant portfolios with a tilt toward shorter R-squared parameters, striking a good balance between responsiveness and smoothness while mitigating the risk of a poor parameter choice.

In conclusion, this simple robustness test suggests that adaptive volatility is relatively robust and may have practical value as a substitute for, or complement to, realized volatility. We will do some single-stock tests to further investigate this effect and possibly compare to traditional forecasting methods such as GARCH. Additional exploration of this concept could vary the transition formula or the choice of trend indicator. Finally, it may be valuable to test these methods in a more formal volatility forecasting model rather than just a backtest, and to calibrate the parameters daily according to which are most effective.


Information on this website is provided by David Varadi, CFA, with all rights reserved, has been prepared for informational purposes only and is not an offer to buy or sell any security, product or other financial instrument. All investments and strategies have risk, including loss of principal, and one cannot use graphs or charts alone in making investment decisions.    The author(s) of any blogs or articles are principally responsible for their preparation and are expressing their own opinions and viewpoints, which are subject to change without notice and may differ from the view or opinions of others affiliated with our firm or its affiliates. Any conclusions or forward-looking statements presented are speculative and are not intended to predict the future or performance of any specific investment strategy. Any reprinted material is done with permission of the owner.


Adaptive Volatility

November 15, 2017

One of the inherent challenges in designing strategies is the need to specify certain parameters. Volatility parameters tend to work fairly well regardless of lookback, but there are inherent trade-offs to using short-term versus longer-term volatility. The former is more responsive to current market conditions while the latter is more stable. One approach is to use a range of lookbacks, which reduces the variance of the estimator or strategy; i.e., you have less risk of being wrong. The flip side is that you have not increased accuracy or reduced bias. Ultimately you don't want to underfit relevant features any more than you want to overfit random noise in the data. Forecasting volatility can be beneficial toward achieving a solution but is more complicated to implement and exchanges lookback parameters for a new set of parameters. Using market-based measures such as the options market has the fewest parameters and inherent assumptions and can theoretically improve accuracy, but the data is not easily accessible and is more useful for individual equities than for macro markets.


An alternative approach is to create an "adaptive" volatility measure that varies its lookback as a function of market conditions. Using an exponential moving average framework we can apply a transition function that uses some variable that can help us decide what conditions should require shorter or longer lookbacks. More specifically, we vary the smoothing constant or alpha of the EMA using a mathematical transform of a chosen predictor variable. The benefit of this approach is that it can potentially improve outcomes by switching to shorter or longer volatility as a function of market conditions, and it can be superior to picking a single parameter or basket of multiple parameters. Furthermore, it can achieve a better trade-off between responsiveness and smoothness, which can lead to better outcomes when transaction costs become an issue.



How do we choose this predictor variable? There are two observations about volatility that can help us determine what to use:


  1. Volatility can be mean-reverting within a particular market regime- this favors longer lookbacks for volatility to avoid making inefficient and counterproductive changes in position size
  2. Volatility can trend higher or lower during a transition to a new market regime- this favors shorter lookbacks for volatility to rapidly respond by increasing or decreasing position size

We can't necessarily predict what regime we are in, so the simplest way to address these issues is to look at whether the market is trending or mean-reverting. The simplest method is to use the R-squared of the underlying price of a chosen market with respect to time. A high R-squared indicates a strong linear fit, or a high degree of trend, while the opposite indicates a rangebound or sideways market. If the market is trending (R-squared is high), then we want to shorten our lookbacks to ensure we can capture any sudden or abrupt changes in volatility. If the market is trendless or mean-reverting (R-squared is low), then we want to lengthen our lookbacks, since we would expect volatility to revert to its historical long-term mean.

Transition Function:

In order to translate the R-squared value into a smoothing constant (SC) or alpha for an exponential moving average we need a mathematical transform. Since markets are lognormally distributed an exponential function makes the most sense.

SC = EXP(-10 x (1 - R-squared(price vs. time, length)))

SC = MIN(SC, 0.5)

To get a more stable measure of R-squared we use a lookback of 20 days, but values between 15 and 60 days are all reasonable (shorter is noisier, longer has greater lag). By choosing -10 in the formula above, the default is an almost anchored or all-time lookback for historical volatility, which we expect to serve as an indication of "fair value" during periods in which volatility is mean-reverting. (Technically speaking, if the R-squared is zero, then substituting into (2-SC)/SC gives an effective lookback of 44,052 days.) By capping SC at 0.5 we are limiting the smoothing to a minimum lookback of effectively 3 days (SC = 2/(n+1)). Therefore the adaptive volatility metric can vary its effective lookback window between 3 days and all history.

This formula gets applied to take the exponential moving average of squared returns. Translating this to annualized volatility, you need to take the square root of the final value and multiply by the square root of 252 trading days.  We can compare this to the often used 20-day realized volatility on the S&P500 (SPY) to visualize the differences:
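Putting the pieces together, here is a sketch of the full recipe on hypothetical price data (our own implementation of the formulas above, not the original code):

```python
import numpy as np
import pandas as pd

# Hypothetical price series standing in for SPY.
rng = np.random.default_rng(3)
prices = pd.Series(100 * np.cumprod(1 + rng.normal(0.0003, 0.01, 500)))
returns = prices.pct_change().fillna(0.0)

def rolling_r2(x):
    """R-squared of price vs. time over the window."""
    t = np.arange(len(x))
    return np.corrcoef(t, x)[0, 1] ** 2

# 20-day R-squared -> smoothing constant, capped at 0.5.
r2 = prices.rolling(20).apply(rolling_r2, raw=True)
sc = np.minimum(np.exp(-10 * (1 - r2)), 0.5)

# Recursive EMA of squared returns with a time-varying alpha.
# Before the R-squared window fills (sc is NaN), alpha is treated
# as 0 so the running value is simply carried forward.
ema_var = np.zeros(len(prices))
ema_var[0] = returns.var()
for i in range(1, len(prices)):
    a = sc.iloc[i] if not np.isnan(sc.iloc[i]) else 0.0
    ema_var[i] = a * returns.iloc[i] ** 2 + (1 - a) * ema_var[i - 1]

# Annualize: square root of the EMA variance times sqrt(252).
adaptive_vol = np.sqrt(ema_var) * np.sqrt(252)
```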




Considering that the adaptive volatility uses a much longer average lookback than 20 days, we can see that it has comparable responsiveness during periods of trending volatility, and flat or unchanging volatility during periods of mean-reverting volatility. This leads to an ideal combination of greater accuracy and lower turnover. Even without considering transaction costs the results are impressive (note that leverage in the example below has not been constrained in order to isolate the pure differences):




The results show that adaptive volatility outperforms realized volatility, and while not shown, this is true across all realized lookback windows. Relative to 20-day realized volatility, adaptive volatility outperforms by 3% annually with the same standard deviation. Factoring in transaction costs would widen this gap in returns significantly. Risk-adjusted returns are higher, but more impressively this comes with lower drawdowns even at the same level of volatility. This is due to the better combination of responsiveness and smoothness. In either case, I believe that adaptive volatility is an area worth considering as an alternative tool for research. One could come up with a variety of different predictors and transition formulas that may be superior; R-squared was chosen because it is straightforward and intuitive, as is the exponential transition function.




Risk Management and Dynamic Beta Podcast

August 4, 2017

I had the honor of speaking with Mebane Faber of Cambria Investment Management recently, where I discussed the topic of risk management and applying a dynamic beta approach on his widely popular podcast "The Mebane Faber Show". The interview is almost an hour long and covers a wide range of topics, whether you are a quant geek like me or an investor.

Here is the link to the podcast.

Episode #64: David Varadi, “Managing Risk Is Absolutely Critical”

Tracking the Performance of Tactical Strategies

September 8, 2016


There is a cool new website that tracks the performance of well-known tactical strategies. AllocateSmartly has collected an extensive list of strategies from well-known hedge fund managers like Ray Dalio along with several other portfolio managers and financial bloggers. The backtests for these strategies use a very detailed and comprehensive method that is both conservative and realistic. Where possible, the author uses tradeable assets rather than indices and factors in transaction costs along with careful treatment of dividends. The current allocations and performance are tracked in real time, which allows investors to realistically trade these portfolios. Curiously, the best-performing model tracked on the website this year is the Minimum Correlation Algorithm from CSSA, which says a lot about the importance of diversification in 2016 versus momentum and managing risk via trend-following/time-series momentum. In fact, if you dig deeper you will notice that most of the best performers have a structural or dynamic diversification element, while the worst performers have been the most concentrated and oriented toward identifying the best performers. As the website correctly points out, the diversification-oriented strategies tend to do well during normal market conditions, but ultimately the dynamic and more tactical strategies outperform during bear markets. Over longer backtest periods, the more truly tactical performers had better long-term performance. Different market regimes will reward different approaches depending on how predictable and interrelated the markets happen to be that year. An umbrella is great for a rain storm but less than ideal for a sunny day. That is why it is important to understand the strategies you are following and why you are investing in them rather than blindly chase performance.
While many quant developers and investors chase the best-looking equity curves, it is important to consider two primary factors: 1) the utility curve that works best for any one individual is a very personal choice (i.e., risk/reward and tracking error); and 2) you need to choose a set of assumptions for capital markets, either going forward or over the long term: will returns, correlations or volatility be predictable, and if so, which will be the most predictable and why?

On a side note, I was informed that the very popular “A Simple Tactical Asset Allocation Strategy with Percentile Channels” by CSSA is also being added to the AllocateSmartly website very soon. This is a tactical and structural diversification hybrid that provides balanced factor risk with the ability to de-risk during market downturns. While it lacks the higher returns of more momentum-oriented or equity-centric strategies it provides a steady and low-risk profile across market conditions.



Disclosure: The author(s) principally responsible for the preparation of this material are expressing their own opinions and viewpoints, which are subject to change without notice and may differ from the view or opinions of others at BSAM or its affiliates. Any conclusions presented are speculative and are not intended to predict the future of any specific investment strategy. This material is based on publicly available data as of the publication date and largely dependent on third party research and information which we do not independently verify. We make no representation or warranty with respect to the accuracy or completeness of this material. One cannot use any graphs or charts, by themselves, to make an informed investment decision. Estimates of future performance are based on assumptions that may not be realized and actual events may differ from events assumed. BSAM is not acting as a fiduciary in presenting this material. Benchmark indices are presented or discussed for illustrative purposes only and do not account for deduction of fees and expenses incurred by investors.

The strategies discussed in this material may not be suitable for all investors. We urge you to talk with your investment adviser prior to making any investment decisions. Information taken from the Minimum Correlation Algorithm strategy article is publicly available and is used by a third party to generate the strategies and signals provided on AllocateSmartly. We have not reviewed and do not represent this information as accurately interpreted or utilized.