Global declines in biodiversity highlight the need to monitor the density and distribution of threatened species effectively. In recent years, molecular survey methods that detect DNA released by target species into their environment (eDNA) have expanded rapidly. Although they provide new, cost-effective tools for conservation, eDNA-based methods are prone to error. Best field and laboratory practices can mitigate some errors, but the risk cannot be eliminated and must be accounted for. Here, we synthesise recent advances in data-processing tools that increase the reliability of interpretations drawn from eDNA data. We review advances in occupancy models that accommodate spatial data structures and simultaneously estimate rates of false positive and false negative results. We then introduce process-based models and the integration of metabarcoding data as complementary approaches for increasing the reliability of target-species assessments. These tools will be most effective when they capitalise on multi-source datasets combining eDNA with classical surveys and citizen-science approaches, paving the way for more robust decision-making in conservation planning.
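The idea of occupancy models that jointly account for false positives and false negatives can be sketched with the standard false-positive occupancy likelihood (a minimal illustration in the spirit of Royle–Link-type models, not the specific models reviewed here; the parameter names `psi`, `p11`, and `p10` are conventional symbols, not taken from the source):

```python
from math import comb


def detection_history_likelihood(n_detections, n_visits, psi, p11, p10):
    """Marginal probability of observing `n_detections` positive eDNA samples
    out of `n_visits` replicate samples at one site.

    psi : probability the site is truly occupied
    p11 : per-sample detection probability given true presence
    p10 : per-sample false-positive probability given true absence
    """
    # Binomial probability of the detection count if the site is occupied
    occupied = comb(n_visits, n_detections) * p11**n_detections * (1 - p11) ** (n_visits - n_detections)
    # Binomial probability of the same count arising purely from false positives
    unoccupied = comb(n_visits, n_detections) * p10**n_detections * (1 - p10) ** (n_visits - n_detections)
    # Marginalise over the unknown occupancy state
    return psi * occupied + (1 - psi) * unoccupied
```

Because detections can arise at unoccupied sites (via `p10`), a few positive samples no longer guarantee presence; fitting `psi`, `p11`, and `p10` to replicated detection histories is what lets these models separate the two error types.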