
Tech Stack

Databricks | Python (Django) | React | AWS S3 | Gemini

Client Profile

Industry

Automotive

Region

North America

Technology

Databricks

Overview

A leading automotive advisory firm that provides M&A and investment insights for the U.S. car dealership market struggled to leverage its raw data, which spanned decades and covered more than 18,000 dealerships. Each record had roughly 150 fields drawn from Polk, Helix, demographic and population datasets, and other open sources and APIs. The data suffered from inconsistent formats, missing common identifiers that prevented easy merging, and large gaps. These problems slowed the extraction of actionable insights: full data refreshes took more than a week and blocked timely strategic decisions such as dealership valuations.

 

To resolve the client's data challenges, Shorthills AI developed JumpIQ, an AI-powered platform that ingests and processes raw data from Polk, Helix, and other open APIs directly into Databricks. A robust data engineering pipeline was built for intelligent merging (using techniques like fuzzy matching and address normalization), cleaning, mapping, and formatting to create a unified “golden record” for each dealership. On this refined data foundation, advanced AI/ML models were deployed for predictive analytics, including revenue forecasting, sales efficiency, dealership valuation, and performance scoring—all accessible through a web-based dashboard offering detailed analytical reports and visual insights.

 

As a result, the client reduced data processing time from over a week to just 8 hours, gained a single clean and accurate database, and obtained significantly stronger predictive insights that enable faster, more confident strategic decisions.


Real-Time M&A Intelligence for 18,000+ Dealerships

Our Solutions

Data Foundation: Lakehouse & Entity Resolution

We stood up a Databricks-powered lakehouse with medallion layers (bronze → silver → gold) and survivorship rules to reconcile conflicts. Fuzzy matching plus brand/state heuristics created a durable golden dealer record across renames, mergers, and closures—an analytics-ready backbone with end-to-end lineage.
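The matching step described above can be sketched in a few lines of Python. This is a minimal illustration using the stdlib's `difflib` for fuzzy name similarity plus a state-blocking heuristic; the normalization rules, field names, and 0.85 threshold are illustrative assumptions, not the production pipeline.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Crude name normalization: lowercase, strip punctuation and common suffixes."""
    name = name.lower()
    for token in (",", ".", " llc", " inc", " of"):
        name = name.replace(token, "")
    return " ".join(name.split())

def is_same_dealer(a: dict, b: dict, threshold: float = 0.85) -> bool:
    """Candidate match: same state (cheap blocking heuristic) plus fuzzy name similarity."""
    if a["state"] != b["state"]:
        return False
    score = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    return score >= threshold

polk = {"name": "Smith Toyota of Austin, LLC", "state": "TX"}
helix = {"name": "Smith Toyota Austin Inc.", "state": "TX"}
print(is_same_dealer(polk, helix))  # similar names in the same state merge into one golden record
```

In a real medallion pipeline, pairs passing this gate would then go through survivorship rules in the silver layer to decide which source's value wins for each field of the golden record.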

Signals & Feature Engineering

On unified records, we built a reusable catalog of 150+ signals per dealership spanning performance, market, and macro indicators. Features are standardized across brands/states and versioned over time, so valuations, forecasts, and benchmarks stay fair and reproducible.
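One way the cross-brand standardization described above can work is a per-group z-score, so each store is compared against its peers rather than the national average. A minimal sketch, with illustrative field names and grouping key:

```python
from statistics import mean, pstdev

def standardize_by_group(records: list[dict], group_key: str, field: str) -> list[dict]:
    """Z-score a signal within each group (e.g. brand) so stores are
    benchmarked against comparable peers."""
    groups: dict[str, list[float]] = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r[field])
    stats = {g: (mean(v), pstdev(v)) for g, v in groups.items()}
    out = []
    for r in records:
        mu, sigma = stats[r[group_key]]
        z = 0.0 if sigma == 0 else (r[field] - mu) / sigma
        out.append({**r, f"{field}_z": z})
    return out

stores = [
    {"dealer": "A", "brand": "Toyota", "revenue": 12.0},
    {"dealer": "B", "brand": "Toyota", "revenue": 8.0},
    {"dealer": "C", "brand": "Kia", "revenue": 5.0},
    {"dealer": "D", "brand": "Kia", "revenue": 5.0},
]
scored = standardize_by_group(stores, "brand", "revenue")
```

Versioning these derived columns over time (rather than overwriting them) is what keeps historical valuations and benchmarks reproducible.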

Valuation & Forecasting Engines

A model suite blends store performance with market signals to produce explainable valuations and forward-looking forecasts. Scenario/sensitivity views test brand, geography, and macro assumptions—accelerating buy/no-buy calls with consistent methodology.
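The scenario/sensitivity views can be pictured as a baseline valuation perturbed by named adjustment factors. The formula and factor names below are illustrative assumptions for exposition, not the firm's actual methodology:

```python
def scenario_valuation(base_earnings: float, multiple: float, adjustments: dict[str, float]) -> float:
    """Valuation = earnings x multiple, scaled by scenario adjustment factors
    (e.g. brand strength, geography, macro outlook)."""
    value = base_earnings * multiple
    for factor in adjustments.values():
        value *= factor
    return round(value, 2)

base = scenario_valuation(2.0, 5.0, {})                                     # baseline ($M)
downturn = scenario_valuation(2.0, 5.0, {"macro": 0.9})                     # macro headwind
strong_brand = scenario_valuation(2.0, 5.0, {"brand": 1.2, "macro": 0.9})   # offsetting brand premium
```

Because every scenario runs through the same function, buy/no-buy comparisons stay methodologically consistent across deals.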

Delivery Experience: Analyst App for M&A Workflows

A secure analytics app streamlines real M&A tasks: search/filter/compare, geospatial views, and exportable diligence summaries. Built on governed tables and shared definitions, it keeps every stakeholder aligned—from board decks to deep dives.


Modernizing consumer support chat with a multi-intent LLM—parsing up to 5 intents with ~1.2s latency to lift automation and first-contact resolution.

Industry

Customer Experience 

Region

North America 

Technology

AWS

Executive Summary

A high-volume consumer support operation needed a chatbot that could accurately understand complex, multi-intent requests and respond near-instantly. The legacy bot misread intents, handled only one action at a time, and responded slowly—pushing users to call centers. We built a custom multi-intent LLM engine exposed via API that parses up to 5 intents in one message, fuzzy-matches products from a 40k+ catalog, and returns clean JSON for backend execution. Optimized with vLLM and FastAPI, latency dropped to ~1.2s, automation rose, and first-contact resolution improved—cutting support costs while lifting customer satisfaction.  

Tech Stack

GPT-4

Llama 3

vLLM

FastAPI

Docker

Python

AWS

MySQL


Modernizing Leading U.S. Automotive M&A with Databricks—unifying data from 18,000+ dealerships into golden records to deliver explainable valuations, standardized forecasts, and 8-hour refreshes

Industry

Automotive

Region

North America

Technology

Databricks

Tech Stack

Databricks | Python (Django) | React | AWS S3 | Gemini

Executive Summary

A leading U.S. automotive advisory firm struggled to turn decades of raw data from 18,000+ dealerships—spread across Polk, Helix, demographic datasets, and multiple APIs—into actionable insights. The fragmented and inconsistent data made full refreshes take over a week, delaying critical decisions like dealership valuations. Shorthills AI developed JumpIQ, an AI-powered platform that ingests this data into Databricks, creating unified “golden records” through intelligent cleaning, mapping, and merging. Advanced AI/ML models then deliver predictive analytics via a web dashboard with detailed reports and visual insights. The result: data processing dropped from over a week to 8 hours, the client gained a single accurate database, and predictive insights now support faster, more confident decisions.


Outcomes

Unify all your disparate sources into a governed data lakehouse, resolve duplicates to a single “golden record,” and standardize key signals so analysts can trust the data. That’s how we built JumpIQ for a leading U.S. automotive M&A firm: we consolidated decades of data across 18,000+ dealerships, cut refresh time from 7+ days to ~8 hours, and engineered 150+ metrics per store. On top, we added explainable valuation and forecasting models so you can run what-ifs on brand, geography, and macro factors. The result: faster, defensible diligence with scenario planning directly from your historical data.

Drastic Speed Improvement

Full data ingestion and refresh cycles reduced from over a week to 8 hours.

Enhanced Predictive Accuracy

More reliable forecasts for key performance indicators, sales, and valuations.

Comprehensive & Accurate Data

Unified, clean database for 18,000+ dealerships, each with ~150 data points.


Challenges

Consumer support operations see complex, multi-step requests that legacy bots misread, handle one-by-one, and answer slowly—driving escalations and cost. Fragmented intent handling and weak product matching stall self-service and hurt CSAT. With fast, multi-intent understanding and catalog-aware actions, teams resolve more in a single turn and keep customers in channel. 

Inaccurate intent recognition

Conversations stalled or escalated due to frequent misclassification.  

No multi-intent handling

Users had to start separate flows for add, remove, and relocate actions that belonged in a single request.

High latency & operational cost

Slow replies and frequent hand-offs to agents increased workload and spend.  

Outcomes

A high-volume support team was losing customers to slow, one-track chats that misread intent and pushed calls to agents. With Shorthills AI’s multi-intent engine, one message can cover add/remove/relocate and more—parsed correctly and executed together. Latency falls to about 1.2 seconds, keeping conversations fluid and reducing abandon. Automation and first-contact resolution rise as fewer queries need hand-offs, cutting support costs. Fuzzy catalog matching ensures the right products and packs are actioned, reducing rework. The net effect: faster replies, fewer escalations, and higher customer satisfaction—delivered via an API the team can extend as new products launch. 

Near-instant responses

Latency reduced to ~1.2 seconds for a fluid chat UX.

Higher automation & FCR

More issues resolved in one interaction via multi-intent parsing.

Lower support cost

Fewer escalations to human agents and easier updates for new products/services.



What Shorthills AI Did

We replaced the brittle bot with a single, fast brain for chat. It understands up to five intents in one message, matches product names against a 40k+ catalog, and returns a clean, ready-to-run JSON action list. Exposed as a simple API, it drops into existing flows, and is tuned for low latency—so customers get near-instant answers and agents see fewer escalations. 
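The "ready-to-run JSON action list" contract can be sketched as the validation layer that sits between the model and the backend. A minimal sketch, assuming an illustrative intent set and schema (the real engine's intents, fields, and rules are not specified in this write-up):

```python
import json

ALLOWED_INTENTS = {"add", "remove", "relocate", "pause", "upgrade"}  # illustrative intent set
MAX_INTENTS = 5  # the engine parses up to five intents per message

def to_action_list(model_output: str) -> list[dict]:
    """Validate the LLM's raw JSON and shape it into a ready-to-run action list:
    drop unknown intents, require a product entity, cap at five actions."""
    actions = json.loads(model_output)
    clean = [
        {"intent": a["intent"], "product": a["product"]}
        for a in actions
        if a.get("intent") in ALLOWED_INTENTS and a.get("product")
    ]
    return clean[:MAX_INTENTS]

raw = '[{"intent": "add", "product": "Sports Pack"}, {"intent": "remove", "product": "Kids Pack"}]'
print(to_action_list(raw))  # two validated actions, ready for backend execution
```

Returning a fixed, validated schema is what lets the backend execute several actions from one message without any intermediate human triage.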

Engineered a Multi-Intent LLM Core

We fine-tuned an open-source model on 15k+ synthetic examples to extract multiple intents and entities from a single query—scaling to 5 intents per turn.

Built Precision Post-Processing & Fuzzy Matching

We added a rules layer that fuzzy-matches extracted names to a 40k+ product catalog, validates business logic, and formats a clean JSON payload for downstream execution.

Delivered a High-Throughput Inference API

We packaged the model with vLLM and FastAPI, containerized with Docker, and exposed a simple API that partners can drop into existing workflows.

Optimized for Reliability & Speed

We applied prompt engineering, input cleaning, and response validation to consistently hit ~1.2s end-to-end latency under load. Go-live followed a 2-month build; production since Aug 2024.

Also Read


Elevating purchase decisions through product research with AI—analyzing 18.6M+ reviews across 1,500+ categories to deliver granular, feature-specific product insights.


Modernizing contact center support for a leading consumer brand—answering agent queries for ~30% faster resolution times and achieving ~40% higher CSAT scores.


Modernizing domain chatbot delivery with a reusable GenAI platform—launch bots in days/weeks (not months) and scale across industries without rebuilding NLU. 


Modernizing marketing ROI for an e-commerce brand—Python/Databricks MMM (PyMC Marketing) quantifies channel lift, models adstock/saturation, and recommends optimal budgets.
