
Modified Datasets

Modified Datasets let you clean and transform data before it reaches your dashboard. By transforming at the source, you get consistent results across all widgets and better performance.

A Modified Dataset is a “virtual” version of a raw dataset with operations applied. The original data stays untouched.

To create one:

  1. Go to Data Settings → Modified Datasets
  2. Click + Add Modified Dataset
  3. Select the source dataset you want to transform

Tools are organized into four categories. Operations run sequentially from top to bottom.

Change values or reshape your data.

Replace values based on conditions.

  • Choose Columns: Select target columns. Toggle “Apply to all EXCEPT selected” if needed.
  • Conditions (If/Else): Add replacement rules. Each rule has a condition and a replacement value.
  • Advanced Value: Use another column’s value instead of static text.
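The If/Else rules behave like a top-down chain of conditionals: the first matching rule wins, and unmatched rows keep their original value. A minimal Python sketch of that logic (the column names, rules, and values here are made up for illustration, not Resplendent's actual engine):

```python
# Hypothetical rows and rules, for illustration only.
rows = [
    {"status": "open", "priority": 1},
    {"status": "closed", "priority": 3},
]

# Each rule is a (condition, replacement) pair, evaluated top-down.
rules = [
    (lambda r: r["priority"] >= 3, "Urgent"),
    (lambda r: r["status"] == "open", "Active"),
]

def replace_value(row, default):
    for condition, replacement in rules:
        if condition(row):
            return replacement  # first matching rule wins
    return default              # no rule matched: keep the original value

for row in rows:
    row["label"] = replace_value(row, row["status"])
```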

Pattern matching with regular expressions.

  • Columns: Select text columns to search
  • Find: Enter the regex pattern (e.g., ^Ref-\d+)
  • Replace: Enter the replacement string
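The pattern is applied to each cell in the selected columns. A sketch of the equivalent operation using the example pattern above, with Python's `re` module standing in for the engine (the sample cells are invented):

```python
import re

cells = ["Ref-1042 approved", "Ref-7 pending", "no reference"]

pattern = re.compile(r"^Ref-\d+")   # the example pattern from above
replaced = [pattern.sub("TICKET", c) for c in cells]
# cells without a match are left unchanged
```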

Round numeric values.

  • Columns: Select numeric columns
  • Decimal Places: Number of decimals (0 or higher)
  • Method: Round (standard), Floor (always down), Ceil (always up)
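The three methods differ only in which direction they move the value. A rough sketch of their semantics (note that "standard" rounding can differ between engines on exact ties, e.g. Python uses banker's rounding, so this is illustrative only):

```python
import math

value, places = 3.14159, 2
factor = 10 ** places

rounded = round(value, places)                  # standard: nearest value
floored = math.floor(value * factor) / factor   # Floor: always down
ceiled = math.ceil(value * factor) / factor     # Ceil: always up
```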

Split one row into multiple rows based on a delimiter.

  • Column: Select the column with delimited data (e.g., “Tag1, Tag2, Tag3”)
  • Delimiter: The separator character (e.g., ,)
  • Result: “Tag1, Tag2, Tag3” becomes three rows, one per tag, with the other columns duplicated in each
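In other words, each delimited part gets its own row while every other column is copied unchanged. A small sketch of that expansion (the `id`/`tags` columns are hypothetical):

```python
row = {"id": 7, "tags": "Tag1, Tag2, Tag3"}
delimiter = ","

# One output row per delimited part; the other columns are duplicated.
split_rows = [
    {**row, "tags": part.strip()}
    for part in row["tags"].split(delimiter)
]
```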

Replace empty values with a default.

  • Columns: Select columns to fill (must be same data type)
  • Value: The default (e.g., “0” for numbers, “N/A” for text)
  • Exclude: Fill nulls in all columns except the selected ones
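Only empty cells are touched; populated cells keep their values. A sketch with an invented `revenue` column and a default of 0:

```python
rows = [
    {"region": "EU", "revenue": 120.0},
    {"region": "US", "revenue": None},
]
default = 0  # the configured fill value

for r in rows:
    if r["revenue"] is None:   # only empty cells are replaced
        r["revenue"] = default
```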

Turn unique values in a column into new column headers. (Requires All Features tier)

  • Column (Headers): The column whose values become the new column names
  • Index (Group By): Columns that identify unique rows
  • Values: Column to fill the cells with
  • Aggregation: How to handle multiple values (Sum, Min, Max, First, Last, Mean)
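To make the three roles concrete: each unique value in the Headers column becomes a column, the Index columns identify each output row, and the aggregation combines any duplicates. A sketch with invented `customer`/`status`/`amount` columns and Sum aggregation:

```python
from collections import defaultdict

rows = [
    {"customer": "Acme", "status": "open", "amount": 10},
    {"customer": "Acme", "status": "open", "amount": 5},
    {"customer": "Acme", "status": "closed", "amount": 7},
    {"customer": "Bolt", "status": "open", "amount": 3},
]

# One output row per customer; one column per unique status value.
pivot = defaultdict(dict)
for r in rows:
    cell = pivot[r["customer"]]
    cell[r["status"]] = cell.get(r["status"], 0) + r["amount"]  # Sum aggregation
```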

Swap rows and columns. (Requires All Features tier)

  • Header Column: Column whose values become new column headers
  • Original Headers: Optionally include old column names as a new row
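A sketch of the reshape: values of the chosen header column become the new column names, and each of the remaining columns becomes a row (the `metric`/`Q1`/`Q2` names are made up for illustration):

```python
rows = [
    {"metric": "revenue", "Q1": 10, "Q2": 12},
    {"metric": "cost",    "Q1": 6,  "Q2": 7},
]

# "metric" is the header column; Q1/Q2 become rows.
transposed = [
    {"quarter": q, **{r["metric"]: r[q] for r in rows}}
    for q in ["Q1", "Q2"]
]
```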

Shorten date/time precision.

  • Columns: Select Date or Datetime columns
  • Unit: Year, Quarter, Month, Week, Day, Hour, Minute, or Second
  • Example: “2023-10-27 14:35:12” truncated to Month becomes “2023-10-01”
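Truncation zeroes out everything below the chosen unit. The example above, sketched with Python's `datetime` standing in for the engine:

```python
from datetime import datetime

ts = datetime(2023, 10, 27, 14, 35, 12)

# Truncate to Month: reset the day and all time components.
to_month = ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
```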

Reduce your dataset to relevant rows.

Remove rows that don’t match criteria.

  • Conditions: Build logic with AND/OR groups
  • Operators: Equals, Contains, Starts With, Is Null, Greater Than, etc.

See Filtering documentation for the full guide.
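Conceptually, an AND/OR group is just a predicate evaluated per row; rows where it is false are dropped. A sketch with invented columns and an example group like "(status equals open AND amount greater than 5) OR priority equals high":

```python
rows = [
    {"status": "open", "amount": 9, "priority": "low"},
    {"status": "open", "amount": 2, "priority": "low"},
    {"status": "closed", "amount": 1, "priority": "high"},
]

def matches(row):
    # (status == "open" AND amount > 5) OR priority == "high"
    return (row["status"] == "open" and row["amount"] > 5) or row["priority"] == "high"

filtered = [r for r in rows if matches(r)]
```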

Keep only a subset of rows.

  • Side: Keep from Top or Bottom
  • Count: Number of rows to keep

Remove repeated rows.

  • Keep Rule: First, Last, or Delete All duplicates
  • Method: Check all columns, or specific columns only
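With Keep Rule = First and a specific key column, only the first occurrence of each key survives. A sketch (the `email`/`visit` columns are hypothetical):

```python
rows = [
    {"email": "a@x.com", "visit": 1},
    {"email": "a@x.com", "visit": 2},
    {"email": "b@x.com", "visit": 3},
]
key_columns = ["email"]  # Method: check these columns only

seen = set()
deduped = []
for r in rows:
    key = tuple(r[c] for c in key_columns)
    if key not in seen:        # Keep First: only the first occurrence survives
        seen.add(key)
        deduped.append(r)
```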

Manage column metadata.

Sort your data.

  • Columns: Add columns to sort by (first column is primary sort)
  • Direction: Check “Ascending” for A-Z / 0-9

Remove columns from output.

  • Columns: Select fields to discard. This doesn’t delete source data.

Give columns friendlier names.

  • Map: Enter new names next to originals
  • Names must be unique within the dataset

Generate clickable links.

  • Column: Select target column
  • Configuration: URL pattern (e.g., https://crm.com/ticket/{id})
  • Filter: Optionally apply only to rows matching a condition
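The `{id}` placeholder in the URL pattern is filled from that row's column value, much like template substitution. A sketch using the example pattern above (the row values are invented):

```python
row = {"id": 481, "subject": "Login fails"}
pattern = "https://crm.com/ticket/{id}"

# Substitute the row's column values into the placeholder.
row["link"] = pattern.format(**row)
```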

Add a sequential row number column. (Requires Starter tier or higher)

  • Column Name: Name for the new index column (e.g., “Row Number”)
  • Offset: Starting number for the index (default is 0)
  • Example: With offset 1, rows are numbered 1, 2, 3…
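The index is simply the row's position plus the offset. The example above as a sketch:

```python
offset = 1  # starting number for the index
rows = [{"name": n} for n in ["a", "b", "c"]]

for i, r in enumerate(rows):
    r["Row Number"] = i + offset   # rows numbered 1, 2, 3...
```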

Group data and calculate summaries. Good for performance.

  • Grouping Columns: Dimensions to group by (e.g., Customer, Status)
  • Aggregations: Sum, Average, Count, Unique Count, Min, Max, Mode, Range
  • Rename: Give calculated columns custom names
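The tool collapses each group into a single summary row. A sketch grouping an invented `status` column and computing Sum and Count of `amount`:

```python
from collections import defaultdict

rows = [
    {"status": "open", "amount": 10},
    {"status": "open", "amount": 4},
    {"status": "closed", "amount": 6},
]

# Collect each group's values, then compute the aggregations.
groups = defaultdict(list)
for r in rows:
    groups[r["status"]].append(r["amount"])

summary = {
    status: {"sum_amount": sum(vals), "count": len(vals)}
    for status, vals in groups.items()
}
```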

Generate rows for a time range.

  • Start/End Columns: Date range boundaries
  • Unit: Create rows for every Day, Week, Month, Quarter, or Year
  • Spread Columns: Distribute numeric values across new rows (optional)
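For example, one row spanning three days with 6 hours of work becomes three daily rows of 2 hours each. A sketch of that expansion (the `start`/`end`/`hours` columns are made up, and the inclusive-range detail is an assumption):

```python
from datetime import date, timedelta

row = {"start": date(2024, 1, 1), "end": date(2024, 1, 3), "hours": 6}

days = (row["end"] - row["start"]).days + 1   # inclusive range: 3 days
expanded = [
    {"date": row["start"] + timedelta(days=i),
     "hours": row["hours"] / days}            # spread evenly across new rows
    for i in range(days)
]
```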

Create calculated fields. The most flexible tool.

Formula Builder

Stack items to build expressions:

  • Column: Reference an existing field
  • Value: Static number or text
  • Type: Convert data types (String → Int)
  • Operators: Add, Subtract, Multiply, Divide, Power
  • Parentheses: Group operations to control evaluation order

Advanced features

  • Grouping: Calculate based on filtered subsets (e.g., “Sum of Revenue where Status = Closed”)
  • Business Days: When subtracting dates, exclude weekends
  • Timezone: Set specific timezones for date calculations
  • Week Start: When extracting week numbers from dates, choose which day starts the week (Sunday through Saturday). Defaults to Sunday. This appears when converting a date or datetime column to a number with week-based units like “week of year”, “week of quarter”, or “week of month”.

Common formulas

  • Profit: [Revenue] - [Cost]
  • Days Open: [Resolved Date] - [Created Date] (returns Timedelta)
  • Status Label: Use Find & Replace instead of Virtual Column for text logic
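The first two formulas above, sketched as computed fields (the row values are invented; the date subtraction yields a duration, shown here in days):

```python
from datetime import date

row = {
    "Revenue": 1200.0, "Cost": 850.0,
    "Created Date": date(2024, 3, 1), "Resolved Date": date(2024, 3, 8),
}

profit = row["Revenue"] - row["Cost"]                          # [Revenue] - [Cost]
days_open = (row["Resolved Date"] - row["Created Date"]).days  # Timedelta, in days
```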

For performance:

  1. Filter early: Put the Filter tool at the top. Processing 1,000 rows is faster than 1,000,000.
  2. Aggregate early: If you only need monthly totals, aggregate before adding complex virtual columns.
  3. Hide unused columns: Drop large text columns early to save memory.

For data quality:

  1. Handle nulls: Use Fill Null before math operations. 10 + null often equals error or null.
  2. Check types: You can’t divide “Ten” (text) by 2 (number). Use Virtual Column > Type to convert first.
  3. Standardize dates: Use Truncate Datetime to align dates (e.g., all set to midnight) before grouping.

General tips:

  • Preview often: The preview updates after every change. Verify your data at each step.
  • Locking: If you use a Dataset Template, you can lock specific tools to prevent overwrites when the template updates.
  • Chaining: You can create a Modified Dataset from another Modified Dataset. Use this to separate “Cleaning” logic from “Analysis” logic.

When you edit an existing Modified Dataset and your changes remove one or more output columns, Resplendent runs a dependency check before saving.

If downstream items use those columns, the Columns Being Removed dialog appears. It can list:

  • Modified Datasets
  • Joined Datasets
  • Widgets, grouped by dashboard
  • Filter Variables

Use Cancel to return to the editor, or Save Anyway to keep the column removal and continue saving. If the dependency check fails, the save does not continue.


When you delete a Modified Dataset, Resplendent shows a preview of the downstream items that depend on it.

The delete dialog can list:

  • Joined Datasets built from that Modified Dataset, including nested Joined Datasets
  • Child Modified Datasets attached to those Joined Datasets
  • Widgets affected by those Joined Datasets
  • Filter Variables tied to those Joined Datasets or their child Modified Datasets

If you continue, the Modified Dataset is deleted and dependent Joined Datasets are deleted with it. Filter Variables listed in the dialog become static.

Before you confirm deletion:

  1. Read the full dependency list in the dialog
  2. Check any joined datasets you still need and recreate them from another source first
  3. Review affected widgets and dashboards so you know what will need cleanup after deletion