
WORKING DRAFT — Pending review by Lisa (CEO), Greg (President), and Jay (Operations). Not approved for publication.

One Pipeline. Every Source. Every Format. Every Loan.

Your LOS exports loans in one format. Your correspondent sellers send tapes in another. Agency files follow FNMA standards — except when they follow Freddie's. Ginnie has its own PDD format. Your servicing system exports something else entirely. And that one seller still sends you an Excel file with columns in a different order every quarter.

Before anyone on your desk can price a loan, run Best Execution, or measure risk exposure, someone has to take all of that data and make it make sense. Column names get mapped. Terminology gets normalized. Duplicates get flagged. Missing fields get investigated.

That's 2–4 hours a day. Every day. And if the person who built the import template goes on vacation, everyone else is guessing.

Data Manager eliminates the guessing.


What Data Manager Does

Import from Any Source

Data Manager handles file imports, database imports, and structured XML/ULDD imports. Whatever your counterparties send you, there's a path to get it into the pipeline cleanly:

  • File imports — CSV, Excel, dBASE, text, and custom delimited formats
  • ULDD/XML — MISMO 3.0, ULDD Phase 4A/5 (Fannie Mae, Freddie Mac), Ginnie Mae PDD 3.0
  • Agency formats — FNMA, FHLMC, GNMA standard file layouts
  • Database imports — direct connections to external databases for automated data pulls

Every import runs through a 15-step validation pipeline that includes staging, loading, prefix/case normalization, conversion rules, primary key checks, type validation, foreign key verification, duplicate detection, expression evaluation, acceptable/alarm value checks, associated table updates, and finally the write to the pipeline. Imports that fail validation get flagged, not silently swallowed.
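
To make the staged approach concrete, here is a minimal sketch of how a few of those steps compose, assuming a generic row-oriented import. The function names and loan fields are illustrative, not PowerSeller's internals:

```python
from dataclasses import dataclass, field

@dataclass
class ImportResult:
    accepted: list = field(default_factory=list)
    flagged: list = field(default_factory=list)   # failures are surfaced, never dropped

def check_primary_key(row, seen_keys):
    """Reject rows whose loan number is missing or already loaded."""
    key = row.get("loan_number")
    if not key or key in seen_keys:
        return f"duplicate or missing primary key: {key!r}"
    seen_keys.add(key)
    return None

def check_types(row):
    """Coerce numeric fields; flag anything that will not parse."""
    try:
        row["note_rate"] = float(row["note_rate"])
        row["fico"] = int(row["fico"])
    except (KeyError, ValueError) as exc:
        return f"type check failed: {exc}"
    return None

def run_import(staged_rows):
    """Run each staged row through the checks; flag failures instead of dropping them."""
    result, seen = ImportResult(), set()
    for row in staged_rows:
        error = check_primary_key(row, seen) or check_types(row)
        if error:
            result.flagged.append((row, error))
        else:
            result.accepted.append(row)
    return result
```

The point of the staging step is visible here: nothing touches the live pipeline until every check has run, and anything questionable lands in a flagged list a human can review.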

Automatic Normalization

Here's the part that saves your team hours every day: configurable conversion rules.

Every data source has its own way of saying the same thing. One source calls it "SFR," another calls it "Single Family," a third uses "1-Unit Detached." Conversion rules map every source's terminology to PowerSeller's standard vocabulary — automatically, on every import.

Set up the rules once. After that, no matter how many sources feed your pipeline, the data comes in clean and consistent. When a source changes its format (and they will), you update one mapping — not twenty spreadsheets.
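
Conceptually, a conversion rule is a per-source lookup applied to each incoming value. A minimal sketch, assuming a simple (source, field) rule table; the sellers and vocabulary below are invented for illustration:

```python
# Hypothetical conversion rules: (source, field) -> {incoming value: standard value}.
CONVERSION_RULES = {
    ("seller_a", "property_type"): {"SFR": "SingleFamily"},
    ("seller_b", "property_type"): {"Single Family": "SingleFamily"},
    ("seller_c", "property_type"): {"1-Unit Detached": "SingleFamily"},
}

def normalize(source, field, value):
    """Map a source's terminology onto the standard vocabulary; pass through unmatched values."""
    return CONVERSION_RULES.get((source, field), {}).get(value, value)

assert normalize("seller_c", "property_type", "1-Unit Detached") == "SingleFamily"
```

When seller_c changes its tape next quarter, the fix is one entry in one table, and every downstream report stays consistent.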

Visual Query Builder (VMD)

Not every question your desk asks fits a canned report. The Visual Model Designer (VMD) lets your team build ad-hoc queries without writing SQL: select fields, define joins, set filters, and run. Saved queries become reusable reports that anyone on the team can execute.

For teams that do know SQL, VMD also supports direct query editing. And queries built in VMD feed the data pipeline system for automated ETL workflows.
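
As a rough illustration of what "select fields, define joins, set filters" amounts to, here is a hypothetical saved model and the SQL it might compile to. The table and column names are invented, not VMD's actual representation:

```python
# Hypothetical shape of a saved visual query: fields, joins, and filters
# chosen in the designer rather than written by hand.
weekly_conforming = {
    "fields":  ["loans.loan_number", "loans.note_rate", "investors.investor_name"],
    "joins":   [("loans", "investors", "loans.investor_id = investors.investor_id")],
    "filters": ["loans.amortization_term = 360", "loans.conforming_flag = 'Y'"],
}

def to_sql(model):
    """Compile the visual model into the equivalent SELECT statement."""
    base, joined, condition = model["joins"][0]
    sql = f"SELECT {', '.join(model['fields'])}\nFROM {base}\nJOIN {joined} ON {condition}"
    if model["filters"]:
        sql += "\nWHERE " + "\n  AND ".join(model["filters"])
    return sql

print(to_sql(weekly_conforming))
```

A saved model like this is what makes the query reusable: the analyst who knows SQL can edit the generated statement directly, while everyone else just runs it.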

Reference Data Backbone

Data Manager is also where you configure the master data that every other module depends on:

  • Instruments — every financial product definition (loan, MBS, whole loan, futures, Treasury, Eurodollar) with associated investors, guarantors, and security classes; see the sketch after this list
  • Companies and contacts — investors, counterparties, broker-dealers, and their relationships
  • Guarantors — agency configurations for Fannie Mae, Freddie Mac, Ginnie Mae, and private guarantors
  • Marketing programs — program definitions tied to master agreements and commitments
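
A sketch of what an instrument definition might carry, with assumed field names rather than PowerSeller's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Instrument:
    """Illustrative shape of a master instrument record; field names are assumptions."""
    name: str
    product_type: str              # e.g. "loan", "MBS", "whole loan", "futures"
    guarantor: Optional[str]       # "FNMA", "FHLMC", "GNMA", a private guarantor, or None
    security_class: Optional[str]

# One definition like this backs pricing, hedging, and delivery everywhere else.
FN30 = Instrument(name="FN 30yr Fixed", product_type="MBS",
                  guarantor="FNMA", security_class="Pass-Through")
```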

If Secondary Manager is the profit engine, Data Manager is the fuel system. Wrong data in means wrong decisions out — no matter how good your pricing models are.

Macro Automation

Repetitive import tasks — the ones your team runs every morning at 7:00 AM — get automated with macros. Define the sequence, schedule it, and let it run. Macro groups chain multiple operations together for end-to-end data refresh workflows.
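
In spirit, a macro is an ordered list of operations plus a schedule, and a macro group is several of these chained together. A minimal sketch using Python's standard-library scheduler; the step names and file names are hypothetical, not PowerSeller's macro vocabulary:

```python
import sched
import time

# Hypothetical macro: an ordered sequence of operations executed as one unit.
MORNING_REFRESH = [
    ("import_file", "seller_a_daily_tape.csv"),
    ("import_uldd", "agency_commitments.xml"),
    ("apply_conversion_rules", None),
    ("run_validation", None),
]

def run_macro(steps):
    """Run each step in order; a macro group would chain several macros like this."""
    for op, arg in steps:
        print(f"{time.strftime('%H:%M:%S')}  {op}  {arg or ''}".rstrip())
        # each step would dispatch to the matching import/validation routine

scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0, 1, run_macro, (MORNING_REFRESH,))  # in production: queued for 6:30 AM
scheduler.run()
```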


The Real Impact

Before Data Manager: An analyst arrives at 7:00 AM. She opens six emails with attached loan tapes. Each one is in a different format. She spends until 10:00 AM normalizing columns, fixing terminology mismatches, deduplicating against yesterday's pipeline, and loading data into the system. By the time the pipeline is clean, the morning pricing window is half over.

After Data Manager: Import macros ran at 6:30 AM. Conversion rules normalized every source automatically. Validation flagged three loans with missing FICO scores — she reviews those in 15 minutes. By 7:15 AM, the pipeline is clean and BestEx is running. She spends her morning on analysis, not data entry.


By the Numbers

  • 2–4 hours/day of manual data reconciliation eliminated
  • 15-step automated validation pipeline on every import
  • 11 export formats including Excel, PDF, XML, CSV, and agency-standard layouts
  • MISMO 3.0 / ULDD Phase 4A/5 / GNMA PDD 3.0 — native support, not aftermarket adapters

Who Benefits

Secondary Marketing VP: "I can trust the numbers my team gives me because the data is clean at the source. When I ask 'how many conforming 30-years closed this week,' I get one answer — not three different answers from three different spreadsheets."

Analyst: "I spend my time analyzing, not cleaning data. Conversion rules handle the normalization I used to do manually. I set them up once, and they just work."

Executive: "Reports are consistent because everyone works from the same normalized pipeline. No more 'well, it depends on which spreadsheet you're looking at.'"

Correspondent Desk Manager: "We onboard new sellers in hours, not days. Map their tape format to our conversion rules, and their loans flow into the pipeline like everyone else's."


The Differentiator

Every lender imports data. The question is how.

You can do it with manual spreadsheet gymnastics — copy, paste, rename columns, fix terminology, cross your fingers, repeat tomorrow. Or you can do it with a system that knows your sources, applies your rules, validates automatically, and gives every downstream module a clean foundation to work from.

Clean data isn't glamorous. But every basis point of value your desk captures in Best Execution, every hedge ratio your risk team calculates, and every delivery deadline your shipping desk hits — all of it depends on the data being right.

Data Manager makes sure it is.


Want to see how your data sources map into a single, normalized pipeline? Let us walk you through a demo with your actual data formats. →