What is Scoop?

Scoop is a tool that lets business analysts and data analysts tap into any business data source, work with that data, produce powerful, live, real-time analyses, and share those analyses as beautiful data stories with colleagues, partners and customers. Scoop is both entirely new and extremely familiar to business professionals used to common productivity tools like spreadsheets and slide presentations. It brings those familiar tools together in a new way, blending them with live data and a powerful data analysis engine. And it does all this without requiring the skills or resources of a conventional data team: no expertise in SQL, databases, servers, infrastructure or APIs is required. A reasonable working knowledge of spreadsheets and spreadsheet formulas, however, enables the full use of Scoop to blend datasets together and create new calculations.

Business users can leverage their existing skills to connect to any business application as a data source and to manipulate that data. Scoop does this by intelligently using an application's reports as a data source, which allows business users (who are typically the ones creating reports) to define their own data sources and leverage them without any technical assistance. This intelligent "Scooping" lets business users achieve results with data that previously required expensive and complex infrastructure and skills: data warehouses, SQL, analytical databases, ETL (extract-transform-load) tools, server infrastructure, developer operations, and systems administration. Once a report with the desired data is defined, the user simply points Scoop to it, and Scoop will automatically grab that data every day, building a time series dataset and keeping the data live and fresh.

In doing so, Scoop dramatically lowers the barriers to adoption, providing unprecedented agility and affordability in leveraging data to make informed, intelligent business decisions. Scoop's mission is to break the logjam that has prevented all but a small subset of well-resourced business professionals from fully operationalizing data to drive better performance and decision-making.

The Basic Scoop Workflow

Scoop data applications that are created from scratch follow a basic flow:

  • Identify/build a report in a source application that has the data one wants to analyze
  • Either manually upload that report once, or point Scoop to that report so that it can be automatically captured each day (either by email or by robot)
  • If necessary, augment the dataset from that report with new columns and calculations using spreadsheet logic
  • Create charts, tables, KPIs and other visual summaries of that data using Scoop Explorer (or potentially do some ad hoc data exploration)
  • Drop those visualizations and summaries onto a Scoop Canvas for presentation (and potentially add interactivity to that canvas with prompts)
  • Share that presentation with others

Additionally, users can create Scoop live worksheets: spreadsheets that connect to Scoop data and are automatically refreshed and updated. Scoop can also present data from those live worksheets on canvases, and can even provide windows into those live worksheets on a canvas, so that users can interactively change or update the data while using the canvas.

Workspaces

In Scoop, a Workspace is a container for a group of related datasets, analyses, charts, tables, live worksheets, canvases etc. Datasets are added to a workspace and users are granted access to a workspace. Users can then build charts, analyses and data summaries on those datasets and display them on canvases. Those canvases can be shared both with other users of a workspace, and, optionally, to external users who may want to view the analysis on a canvas.

A new user to Scoop has two workspaces already defined for them:

  • A sample workspace showing them a sample dataset, canvas and other Scoop items to play with
  • An empty personal workspace, ready for adding new datasets and creating new analyses

The very first time a user enters Scoop, they should start in the sample workspace, allowing them to tour the features and capabilities of Scoop. To change workspaces, simply select the desired workspace from the workspace selector in the upper left next to the Scoop logo.

Scooping Reports into Datasets

Scoop employs a novel technology for accessing business data that bypasses all of the infrastructure typically required for things like data warehouses and data lakes. Scoop leverages an intelligent engine that emulates what a human does when attempting ad hoc analysis of application data. All applications have some sort of reporting layer that allows users to query and see their data. A user might typically download a report and then use a spreadsheet to create a single, bespoke, point-in-time analysis of that data. However, that analysis begins to grow stale almost immediately as the underlying data in the application changes, and updating it means manually re-doing the entire exercise. Scoop automates this whole process by intelligently reading those same reports and turning them into datasets automatically. Scoop can also automatically update those datasets by re-running the reports on a repeated basis, obviating the need for manual work and creating live analysis that can be reviewed much more frequently.

Datasets and the Scoop Analytical Time Series Database

The datasets that underlie Scoop are stored in a powerful analytical time series database that can be sliced, diced, filtered and modified far more flexibly than data in a spreadsheet alone. This allows users to work with far more data than would be feasible in a conventional spreadsheet, and to flexibly summarize and manipulate it, without requiring the enormous infrastructure typically involved in data warehousing.

Each report that is Scooped is analyzed: Scoop determines what columns are present in that report, what types of data those columns contain, the presentation formats of that data, and even how that data should be aggregated. Scoop then creates a unique dataset for each report. Each time a new report of the same type is provided, its data is added to the Scoop dataset. If new columns are added to the report in the future, Scoop will automatically add them to the dataset where it makes sense, without requiring any intervention from the user. See Understanding Scoop Datasets for more details on how Scoop intelligently accumulates and manages datasets.
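To give a feel for the kind of inference involved, the sketch below guesses a column's type from its sample values. This is a conceptual illustration only, not Scoop's actual logic; the function name and type labels are hypothetical.

```python
from datetime import date

def infer_column_type(values):
    """Guess a column's type from its sample string values:
    try numbers first, then ISO-style dates, else fall back to text."""
    def is_number(v):
        try:
            float(v.replace(",", ""))  # tolerate thousands separators
            return True
        except ValueError:
            return False

    def is_date(v):
        try:
            date.fromisoformat(v)
            return True
        except ValueError:
            return False

    if all(is_number(v) for v in values):
        return "number"
    if all(is_date(v) for v in values):
        return "date"
    return "text"
```

A real engine would also infer presentation formats (currency, percentages) and default aggregations, but the principle of sampling values and testing candidate types is the same.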

Intelligent Data Snapshotting to Track Changes

There are two types of datasets that Scoop supports:

  • Basic, transactional dataset: With this type, every row in a source report is treated as a unique item or transaction and simply added to the dataset. The dataset intuitively accumulates all rows from all the reports provided to Scoop.
  • Intelligent snapshot dataset: This powerful type is designed to track changes to items over time. Each report must have a column with a unique identifier for each item (e.g. sales opportunity ID, marketing lead ID, order number, service request ticket number). It is typically used with reports that show the status of business items (like leads or sales opportunities). Each report then becomes a snapshot in time; Scoop remembers every snapshot it has seen, enabling powerful analysis of how those items change over time. This kind of analysis is generally not possible in business applications and yields very powerful insights into how processes behave.

When adding a new dataset, the user selects which type it is at the beginning, and Scoop then manages that dataset accordingly when it processes new reports.
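The two accumulation strategies can be sketched in Python. This is a conceptual illustration only, not Scoop's implementation; the function names, the `snapshot_date` parameter, and the `OpportunityID` key are hypothetical.

```python
def accumulate_transactional(dataset, report_rows):
    """Transactional: every row of every report is simply appended."""
    dataset.extend(report_rows)
    return dataset

def accumulate_snapshot(snapshots, report_rows, snapshot_date, key="OpportunityID"):
    """Snapshot: each report becomes a dated picture of the items,
    keyed by a unique identifier, so changes can be replayed later."""
    snapshots[snapshot_date] = {row[key]: row for row in report_rows}
    return snapshots

# A transactional dataset just grows; a snapshot dataset remembers
# each day's state of every item so history can be compared.
snaps = {}
accumulate_snapshot(snaps, [{"OpportunityID": "A", "Stage": "Open"}], "2024-01-01")
accumulate_snapshot(snaps, [{"OpportunityID": "A", "Stage": "Won"}], "2024-01-02")
```

The snapshot structure is what makes later "how did this item change between day X and day Y" questions answerable.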

Canvases

Canvases allow you to create live data stories based on your datasets. Unlike tools that use dashboards as their presentation concept, Scoop embraces a more modern user experience, broadening the idea of what it means to present data and tell a story. Dashboards are typically limited to charts, tables and analytical controls that, while functional, keep data in its own place, separate from the conversation and business process.

Canvases, on the other hand, are designed for the broader purpose of including all the elements of data storytelling: descriptions, narratives, diagrams, and everything else a modern presentation tool offers. In fact, Scoop can import the full designs of your Microsoft PowerPoint or Google Slides presentations as a backdrop for data. Those presentation canvases can be presented just as you would a conventional presentation. They can also be shared as live, interactive data applications, allowing data to be utilized far more widely in an organization (or even outside it).

The idea is to create a fully live and interactive data story that can be directly shared and communicated with other business professionals, embracing the full human element of presentation, but enriching it with powerful, interactive analytical and data content. In effect, with Scoop, one can build powerful data applications that can be delivered to any business constituency.

Canvases are, in effect, just that: a nearly infinite background on which to place content. A canvas can start completely blank, and one can place text, charts, tables, arrows and more on top in any arrangement desired. One can zoom in and out to see the entire canvas. Additionally, one can layer frames onto a canvas to facilitate presentations: when one switches to presentation mode, each frame becomes an interactive slide that can be shared. Importing a Microsoft PowerPoint slide deck into Scoop creates a frame for each slide, importing all the slide content. Users can then layer additional interactive Scoop elements on top of those slides to enrich them with live data.

Exploring Data and Creating Visual Summaries

Once a dataset has been created, one can explore its data, producing a wide variety of highly styled charts and tables. These artifacts can be saved and dropped onto a canvas later. In Scoop Explorer, you can also define KPIs that can be saved and re-used. KPIs are very powerful, letting one control aggregations, time dimensions, prior-period comparisons and more, enabling sophisticated calculations that can be re-used.

Scoop makes extensive use of theming. Saved themes allow one to have consistent formatting across a wide variety of elements. This includes colors, fonts, backgrounds, formatting options, etc. In fact, when Scoop imports a PowerPoint presentation, it automatically analyzes that presentation and intelligently creates a color theme that will match. The idea is to make your analysis extremely presentable, as well as powerful.

Spreadsheet Data Preparation

To fully work with data, one almost always needs to manipulate it. This may include creating new calculations based on the data in a source report, or even combining the data from two different datasets to create a new, third dataset. Traditionally, in the expensive and technical world of data warehousing, this involved extraordinarily complex tools and processes that required sophisticated training and specialized skill sets. It typically meant setting up some sort of analytical database to hold a data warehouse and using extensive database tooling to build and automate complex, technical logic, typically in a language like SQL or XML - the purview of data teams with deep backgrounds, not most business users. Broadly, this process of data preparation has been an enormous barrier that has prevented broader use of data in organizations.

Scoop changes this dynamic by allowing business users to leverage a skill set they already have - the spreadsheet - to manipulate and prepare data. In fact, Scoop has created an entirely new way to prepare data, called Spreadsheet Data Preparation. With it, users can use normal spreadsheet formulas, in the actual spreadsheets they normally use (like Google Sheets or Microsoft Excel), to create new calculations and to blend data from different datasets. Moreover, Scoop preserves the full power and flexibility of the spreadsheet, allowing one to put data anywhere, reference it, create side calculations, build lookup tables, and so on. Users do not have to sacrifice the immense flexibility of the spreadsheet to use it for preparing data. This means Scoop is not only remarkably easy to use and fast to set up; thanks to the spreadsheet's expressiveness, the calculations a business user can create can actually go beyond what a typical data team can implement - i.e. it's more powerful too.

Calculated Columns (adding new columns to an existing report dataset)

A source report for Scoop should have all the raw data elements one needs. However, one might want to add a new column that leverages the raw report columns. For example, one might want to:

  • Combine first and last name into a full name
  • Extract the year, month or day from a date for some specific test
  • Lookup a value in a reference table to enrich it (e.g. lookup a discount rate based on a code, or determine if a specific item needs manual adjustment because of an error)
  • Bucket a number into categories for grouping (e.g. large, medium and small deal sizes)

These are all possible with Scoop's calculated fields. One can add calculations to any dataset that is based on raw data. After a dataset has been created, simply click on the calculated columns tab (as pictured above) and Scoop creates a new spreadsheet with a template for creating calculations. Scoop fills one set of cells with values for each of the columns present in the original report, and creates another section where one can add new columns with formulas to calculate them. Scoop progressively fills the source columns one row at a time from the raw source report, runs the calculations in the new columns' row, and adds the results to the dataset. Any spreadsheet calculation can be specified (e.g. a VLOOKUP into a table on another sheet), providing immense power to the user. For more information, see the section on Adding Calculated Columns.
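The row-at-a-time evaluation described above can be sketched in Python. In Scoop itself these would be ordinary spreadsheet formulas (e.g. CONCATENATE or IF); the code below is only a conceptual illustration, and the column names and bucket thresholds are hypothetical.

```python
def add_calculated_columns(rows):
    """Apply spreadsheet-style calculations to each source row, mirroring
    how a formula row is evaluated once per record of the source report."""
    out = []
    for row in rows:
        new = dict(row)
        # Combine first and last name into a full name
        new["Full Name"] = f"{row['First Name']} {row['Last Name']}"
        # Bucket a numeric amount into deal-size categories
        amount = row["Amount"]
        if amount >= 100_000:
            new["Deal Size"] = "Large"
        elif amount >= 10_000:
            new["Deal Size"] = "Medium"
        else:
            new["Deal Size"] = "Small"
        out.append(new)
    return out
```

The key point is that each calculated column is a pure function of the source row, so Scoop can re-run the calculations automatically whenever new report rows arrive.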

Blended Datasets (combining two datasets into a new one)

Calculated fields are very powerful, but blended datasets take Spreadsheet Data Preparation to another level by allowing a user to take data from two different datasets and flexibly blend them together into a new one. Blending builds on the concepts of calculated fields, but extends them to two source datasets, with powerful controls over how that data is aggregated and filtered.

The basic building block of a blended dataset is a dataset query. Here Scoop lets the user specify which columns from another dataset are to be used in the blending; one can also filter that dataset for specific values and aggregate by columns in the query. A user can specify a single query and use it by itself to create a new dataset based on the aggregation of the original, or specify two queries and create a new dataset from their combination. To do the latter, one specifies a "blending condition": Scoop scans each record from each query and tests which combinations yield TRUE for the condition. If the condition is TRUE, an output record is created in the blended dataset. Typical uses are to:

  • Test whether a column in one query equals a column in the other and link on that basis (i.e. a JOIN in the data world)
  • Test whether some fuzzy condition is met and link based on that (some sort of lookup)
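The scan-and-test mechanic can be sketched in Python. This is a conceptual illustration, not Scoop's implementation; the field names (`Owner`, `Region`, `Amount`) are hypothetical.

```python
def blend(query_a, query_b, condition):
    """Scan every combination of records from the two queries and emit a
    merged output record whenever the blending condition is TRUE.
    An equality condition makes this behave like a JOIN."""
    blended = []
    for a in query_a:
        for b in query_b:
            if condition(a, b):
                blended.append({**a, **b})
    return blended

# Hypothetical example: link opportunities to regions by matching owner
opps = [{"Owner": "Kim", "Amount": 5000}]
regions = [{"Owner": "Kim", "Region": "West"}, {"Owner": "Lee", "Region": "East"}]
result = blend(opps, regions, lambda a, b: a["Owner"] == b["Owner"])
# Only the Kim/West combination satisfies the condition
```

Because the condition is an arbitrary test, it can also express fuzzy lookups rather than strict key equality.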

For more details on how to combine aggregated data from two datasets to create a new one, see Blending Two Datasets.

Live Worksheets

Scoop augments the way you work, including what you can do with spreadsheets. In fact, Scoop tightly integrates with and extends your existing spreadsheets. Every spreadsheet element shown previously inside Scoop is backed by an actual cloud spreadsheet from Microsoft or Google. Notice that in each of the prior two screenshots there is a spreadsheet icon in the upper left of the spreadsheet element; clicking it opens the actual Google Sheet behind that Sheetlet. Scoop blends very tightly with your spreadsheet application: in addition to being able to read and utilize a Google Sheet directly, Scoop can populate a Google Sheet with live Scoop data. The relationship between Scoop and Live Worksheets is thus bi-directional: Scoop can populate sheets, and it can treat them as data sources. Nothing else like this exists in the data world.

Notice the small Scoop logo on the right-hand toolbar in Google Sheets, just above the plus at the bottom. This triggers the Scoop add-on that brings data into Google Sheets from Scoop. You need to install the add-on from the Extensions menu; once you do, all data in Scoop becomes available in your Google Sheets. See Installing Scoop Plugin for Google Sheets.

Live Worksheets Populated with Scoop Data

Clicking the Scoop for Sheets add-on button in the sidebar opens the plugin's main panel.

One can create new queries based on Scoop dataset data and Scoop will populate sheets in your workbook with the results of those queries.

You can also manually refresh the data in your worksheets or have Scoop do that for you automatically when a source dataset changes (e.g. every day when there is new data). See Dataset Queries for more info.

Scoop Leveraging Live Worksheets as a Data Source

Scoop can also leverage Live Worksheets directly as a data source. If you supply Scoop a named range containing a proper table of data, Scoop will let you visualize, aggregate and filter that data into expressive summaries. If that table is based on calculations, and those calculations are a function of canvas prompts or of values exposed in a Sheetlet (a spreadsheet window on a canvas), then changes to those inputs will update any dependent chart or table summary in real time. This enables what-if analysis in real time.

Insights and Advanced Analysis

In addition to helping users assemble datasets and produce analytical stories, Scoop has a powerful Insights engine that allows for deeper analysis of snapshot data. A key goal of a business analyst is to leverage data to understand what is working and what isn't working well in a process. Key questions often surface about how a process unfolds over time, where things move quickly or where they don't and where things tend to go sideways. Snapshot datasets provide a foundation for this type of insight. Each snapshot captures a picture of the current status of a set of objects, much like a single frame in a roll of movie film. By taking snapshots daily over a period of time, Scoop can replay that movie and help make a variety of very useful measurements that describe how that underlying process is performing or not performing.

Process Analysis

As mentioned above, Process Analysis plays back snapshot history and analyzes how a process evolves over time. Generally, each analysis is focused on a particular status item (e.g. sales stage for sales opportunities, or lead status for a lead). These business items can move through many different states throughout their lifetime and generally there is an ideal end-state (e.g. closed won for sales opportunities). The key is to understand how items move from state to state, how likely they are to go from any one state to another and how likely they are to ultimately wind up in a successful outcome.

With that as a goal, Scoop offers two types of visual analysis to explore processes - the process diagram and the Sankey chart. Both show how a process unfolds and provide different ways to explore that. A process diagram gives precise measurement of conversion rates from one state to another as well as cycle times (how long it takes to get to the next state, and/or to the final/successful state). The Sankey chart visually shows the proportions that move across different states and allows one to visualize how different groups move through several successive states. Both of these are based on snapshot datasets and both can be filtered for specific attributes (e.g. products, people or regions), or time periods to really help drill down into where processes are working well, and where they may be getting stuck.
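The raw material for a process diagram's conversion rates is the count of state-to-state transitions recovered by replaying snapshots. The sketch below illustrates that replay in Python; it is a conceptual illustration only, and the `Sales Stage` column name is a hypothetical example.

```python
from collections import Counter

def transition_counts(snapshots, status="Sales Stage"):
    """Replay date-ordered snapshots and count state-to-state moves:
    for each consecutive pair of snapshots, an item whose status
    changed contributes one (old state, new state) transition."""
    counts = Counter()
    dates = sorted(snapshots)
    for prev_date, next_date in zip(dates, dates[1:]):
        prev, curr = snapshots[prev_date], snapshots[next_date]
        for item_id, row in curr.items():
            if item_id in prev and prev[item_id][status] != row[status]:
                counts[(prev[item_id][status], row[status])] += 1
    return counts
```

Dividing these counts by the number of items that ever reached a given state yields the conversion rates shown on a process diagram; timestamps of the transitions yield cycle times.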

To learn more, see Process Analysis.

Recipes

Everything above describes the various tools Scoop can bring to bear to create a solution. Users connect to datasets, build analyses and create data stories to help business people understand and make better decisions. Since many businesses are trying to analyze the same processes and make the same decisions, Scoop has pre-created solutions for a number of use cases. These pre-built solutions are called Recipes. Like a meal recipe, each includes ingredients: the requirements needed to use it. Typically a recipe focuses on a specific business use case, like:

  • Sales forecasting
  • Product usage analysis and customer value
  • Marketing cost per lead

Each recipe typically relies on one or two source reports and requires that those reports have certain fields. For sales forecasting, for example, a single source report from your CRM is all that is required. That report should list all currently open sales opportunities and include the following fields:

  • OpportunityID (a unique identifier so that this list can be snapshotted)
  • Opportunity Name (something human readable)
  • Opportunity Amount (some amount indicating value)
  • Sales Stage (the deal status, with one potential value being closed won or success)
  • Owner
  • Expected close date
  • Created date
  • Any additional status or attribute fields that may be useful for analysis later

Your CRM may not use these exact names, or perhaps you've configured it to use different ones. However, as long as you can map the names you use to the ones expected by the recipe, Scoop can automatically set up a forecasting canvas for you that includes numerous best-practice analyses and all the process and forecast-accuracy insight you might need. In minutes you can have a fully functioning, best-practice data application up and running without any configuration. And since the underlying Scoop components are the ones described above, you can freely make changes or add more analysis beyond what comes out of the box; you are not bound to the one-size-fits-all approach of packaged applications. Moreover, should you want to extend this use case to include other data, like sales or financial data, you are free to do that as well.

For more information, see Understanding Recipes.