The STAR experiment at RHIC collects data on billions of heavy-ion collisions, producing many petabytes of data each year. These data include information from many STAR subsystems, which must be carefully reconstructed to produce physicist-friendly lists of charged-particle tracks and of photon directions and energies. This reconstruction requires tens of millions of CPU hours each year. The collaboration needs to process the data quickly in order to avoid backlogs and the consequent delays in analysis. Backlogs can nevertheless occur, particularly when new detector components or algorithm fine-tuning delay the start of the next processing cycle. To help relieve such backlogs, STAR scientists at BNL and in LBNL's Nuclear Science Division (NSD) recently collaborated with LBNL's National Energy Research Scientific Computing Center (NERSC) on a demonstration project that used NERSC's supercomputer Cori to quickly process raw data from the experiment.