In the immortal words of Herm Edwards, the former Head Coach of the New York Jets, “You Play to Win the Game.”
Equally sage is the wisdom of John Madden, the legendary sports broadcaster and 2006 Pro Football Hall of Fame inductee: “You can’t win a game if you don’t score any points.”
Applying the guidance of Edwards and Madden to the Federal contracting environment, the emergence of “self-scoring” means that winning Federal contracts requires the bidder to “score a bunch of points” to “win.” But this raises the question: “How many points do I need to score, and how hard is it to score points in general?”
While self-scoring spreadsheets are somewhat self-explanatory, the strategy behind the bid/no-bid process, teaming, and optimization of the total “self-score” is not often discussed within the business and proposal community.
Since it appears that Self-Scoring is becoming more prevalent at the Federal level, this briefing addresses some of the questions clients have asked about the self-scoring process. We hope to provide some simple strategic guidance based on RWCO’s 20+ years of experience bidding and winning GWACs with a self-scoring evaluation component.
History and Approach of Self-Scoring
Before we discuss the strategic approach to self-scoring, we all must have a fundamental understanding of how we got here. Self-scoring was born out of the necessity to streamline the procurement and evaluation process as Technical Evaluation Teams (TETs) experienced a considerable uptick in bid volume across multiple GWACs. Increases in the bidding volume have a way of slowing down the awards process. Moreover, with more bidders and more time, industry scrutiny is also magnified. The result is, in part, increased bid protest activity tied to evaluation criteria, offeror capabilities, and past performance assessments.
The U.S. contracting rules and regulations, the FAR and DFARS, are based on the Competition in Contracting Act (CICA) of 1984, with origins in the 19th Century. Agencies have been trying to find ways to streamline this old system to conduct acquisitions for new kinds of requirements and do business with ever-evolving industries and markets. The Self-Scoring Methodology came to the forefront of Government contracting in 2013, when GSA finalized a new source selection technique to select contractors for its OASIS GWAC. The technique was protested; after GSA prevailed, the agency announced its intention to use it for future opportunities as well. There have been multiple reasons for GSA’s commitment to the Self-Scoring evaluation process, but the cynic quickly points out that the Self-Scoring Methodology is all about saving time.
Genesis of the Self-Scoring Methodology
A simple Google search for Federal contracting and the number of Federal contractors will yield results that highlight the robust increase in the total number of firms participating in the Federal Government’s contracting marketplace. That steady increase further stresses the procurement process, particularly as the Federal Government has reduced the personnel available to manage it. Consider that, in a typical GWAC such as CIO-SP4, the Government anticipates hundreds of interested firms across many functional requirements. The Self-Scoring Methodology alleviates what would otherwise be a massive evaluation burden on the Technical Evaluation Teams that oversee GWAC awards.
The Self-Scoring Methodology provides the Government a more simplified acquisition process while, at the same time, illuminating those bidders (or potential bidders) that lack the experience, past performance, and overall capability to compete for a GWAC award. In essence, the Self-Scoring Methodology forces bidding organizations to have an honest and objective conversation with themselves; a significant percentage of bidders consequently make a no-bid determination because of gaps in past performance and the relatively low score that results.
Competitive Landscape in Self-Scoring Environments
As a firm that supports Clients large and small in all sorts of procurements, we have a deep reservoir of hands-on experience working with Self-Scoring frameworks. It is critical to note that not all Self-Scoring scenarios are the same: some focus on simple pass/fail compliance factors and others on past performance and technical capability. A bidder must therefore examine the Self-Scoring worksheet(s) released by the Government in draft form well in advance of any final RFP. In reviewing the Self-Scoring worksheets, the potential offeror must understand how each factor is weighted in the overall total score and recognize where gaps may exist (and address those gaps well in advance of the final RFP through teaming or joint ventures).
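To make that worksheet review concrete, the following is a minimal Python sketch of a draft Self-Scoring worksheet modeled as weighted factors; the category names and point values are hypothetical and are not drawn from any actual GWAC.

# Hypothetical draft self-scoring worksheet; categories and point values
# are illustrative only -- actual worksheets vary by GWAC and RFP.
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    max_points: int      # maximum points the worksheet allows for this factor
    claimed_points: int  # points the offeror can honestly substantiate today

worksheet = [
    Factor("Relevant Experience", 4000, 3200),
    Factor("Past Performance (CPARS)", 2500, 1500),
    Factor("Certifications", 1500, 500),
    Factor("Systems and Clearances", 2000, 2000),
]

max_total = sum(f.max_points for f in worksheet)
claimed_total = sum(f.claimed_points for f in worksheet)
print(f"Claimed {claimed_total} of {max_total} points ({claimed_total / max_total:.0%})")

# Flag the gaps that teaming or a joint venture would need to close.
for f in worksheet:
    gap = f.max_points - f.claimed_points
    if gap:
        print(f"  Gap in {f.name}: {gap} points")

In this invented example, the offeror can substantiate 72% of the available points, which frames the teaming and strategy discussion in the guideposts below.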
Self-Scoring Strategy and Lessons Learned
Assuming the bidder conducts an honest self-assessment of past performance and capabilities, the Self-Scoring worksheet offers a look into the procurement’s crystal ball. The Self-Scoring dynamic gives bidders complete awareness of gaps in technical capability and past performance, which in turn illuminates the need for teaming, partnering, and/or joint venturing. However, taken by itself, the Self-Scoring Methodology does little to inform potential bidders of what score they need to obtain to have a viable chance at a subsequent GWAC award. What follows is a discussion of three (3) specific guideposts that support the self-assessment process and guide strategic thinking within a competitive bid environment.
Guidepost 1: 80% Rule – As a PRIME contractor on self-scoring GWAC procurements, the total target score should be 80% of the available point total or greater. By targeting 80%, the offeror sets a scoring level that generates high confidence when its experience and past performance are evaluated against its peers’.
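As a quick worked illustration of the 80% rule, assume a hypothetical 10,000-point worksheet (the ceiling and the sample scores are invented for the example):

# Hypothetical point ceiling; real GWAC worksheets use different totals.
MAX_POINTS = 10_000
TARGET_RATIO = 0.80  # Guidepost 1: target 80% or better as a prime

def meets_80_percent_rule(self_score: float, max_points: float = MAX_POINTS) -> bool:
    """True when a self-score clears the 80% planning threshold."""
    return self_score >= TARGET_RATIO * max_points

print(TARGET_RATIO * MAX_POINTS)        # 8000.0 -- the planning target
print(meets_80_percent_rule(7_200))     # False -> close gaps before bidding
print(meets_80_percent_rule(8_450))     # True  -> competitive planning range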
Guidepost 2: “Head Start” Rule – Potential offerors should review the Self-Scoring Worksheet in its totality. In many instances, the categories within the Self-Scoring Worksheet carry different point values. Thus, if an offeror does not maximize its score (shooting for the 80%), it may give competing organizations (those that obtain 80% or more of the point total) a “head start.” Because some categories are weighted more heavily than others, a competitor’s head start in the highest-value categories may be large enough that the potential offeror cannot earn enough points in the remaining categories to close the gap. Not all evaluation elements are created equal, and the potential offeror should give priority to the categories with the greatest point totals.
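The sketch below illustrates the head-start effect using invented category weights: an offeror that leaves points on the table in the heaviest category cannot recover them elsewhere, even after maxing every other category.

# Invented category ceilings -- the heaviest category dominates the total.
ceilings = {
    "Relevant Experience": 5000,
    "Past Performance": 2000,
    "Certifications": 1500,
    "Systems and Clearances": 1500,
}

# Competitor A maxes the heavy category; Offeror B substantiates only half of it.
competitor_a = {"Relevant Experience": 5000, "Past Performance": 1500,
                "Certifications": 1000, "Systems and Clearances": 1500}
offeror_b = {"Relevant Experience": 2500, "Past Performance": 2000,
             "Certifications": 1500, "Systems and Clearances": 1500}

head_start = sum(competitor_a.values()) - sum(offeror_b.values())
# Points B could still add outside the heavy category (B is already maxed there).
b_upside_elsewhere = sum(ceilings[c] - offeror_b[c] for c in ceilings
                         if c != "Relevant Experience")

print(head_start)          # 1500 -- A's lead in points
print(b_upside_elsewhere)  # 0    -- B cannot close the gap in other categories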
Guidepost 3: Gap Analysis and Teaming – With the 80% Rule and the “Head Start” Rule in place, a potential offeror may take the position that deficiencies in its scoring total can be fully mitigated through teaming. While it is true that point shortfalls may be remediated by onboarding team members with the requisite attributes to maximize a category score, the offeror can introduce a new problem while addressing the gap. There is a diminishing return to adding teaming members in a GWAC environment, as a multitude of teaming partners creates complexities that other offerors may not have. Issues with programmatic oversight, supervisory roles, efficiency in service delivery, and subsequent task order bids are all considered as the Government reviewer assesses the offeror’s management plan and associated performance risk. Experience suggests that less is more when it comes to filling gaps through teaming.
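One simple way to operationalize “less is more” is to close the gap with the fewest partners whose qualifications add the most points. The partner names, point values, and gap below are hypothetical, and real worksheets may cap how many partner projects count toward a given category.

# Hypothetical gap and partner contributions for illustration only.
gap_to_close = 1_500  # points short of the 80% planning target

candidate_partners = {
    "Partner A (two qualifying projects)": 1_200,
    "Partner B (CMMI appraisal)": 800,
    "Partner C (one qualifying project)": 600,
    "Partner D (facility clearance)": 400,
}

# Greedy selection: biggest contributors first, stop as soon as the gap is
# closed, keeping the team -- and its management complexity -- small.
team, covered = [], 0
for partner, points in sorted(candidate_partners.items(),
                              key=lambda kv: kv[1], reverse=True):
    if covered >= gap_to_close:
        break
    team.append(partner)
    covered += points

print(team)     # two partners suffice in this example
print(covered)  # 2000 points against a 1,500-point gap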
Downloads
PUBLIC SECTOR PROCUREMENT BAROMETER SURVEY RESULTS
RWCO conducts an annual research survey of comprehensive market trends across the Federal contractor community. The research survey, entitled “Public Sector Procurement Barometer,” is fielded via an online research portal. Respondents are invited to participate through an email outreach campaign conducted throughout the month of December, and the survey is open from January 2 to January 31 each calendar year.
Archived Library
SITE III WHITE PAPER
The DIA will combine two information technology contracting vehicles worth potentially $5.1B as a follow-on to the Enhanced Solutions for the IT Enterprise (E-SITE) contract. The DIA plans to merge the $3B Infrastructure Sustainment and Development 2 (IDS2) program with the $2.1B Application Development Support 2 (ADS2) solicitation to form a SITE III multiple-award contract. IDS2 covers cloud services and data center support work, while ADS2 seeks data integration, software engineering, and other technical support services.
LTASC SCOPE OF SUPPORT
RWCO has assembled a complete review of the LTASC III program in the form of a project plan, and we are providing you access to that project plan with no strings attached. Consider it our way of providing value in the form of market intelligence and LTASC guidance while demonstrating our capability to support LTASC responses in the future.