With billions of network-connected embedded systems, the security historically provided by the isolation of embedded systems is no longer sufficient. Both proactive security measures that prevent intrusions and reactive measures that detect intrusions are essential. Anomaly-based detection is a common reactive approach that detects malware that has evaded proactive defenses by observing anomalous deviations in system execution. Timing-based anomaly detection detects malware by monitoring the system's internal timing, which offers unique protection against mimicry malware compared to sequence-based anomaly detection. However, previous timing-based anomaly detection methods consider each operation independently at the granularity of tasks, function calls, system calls, or basic blocks. These approaches neither consider the entire software execution path nor provide a quantitative estimate of the presence of malware. This paper presents a novel model for specifying the normal timing of execution paths in software applications using cumulative distribution functions of timing data in sliding execution windows. We present a probabilistic formulation for estimating the presence of malware for individual operations and sequences of operations within the paths, and we define thresholds that minimize false positives based on training data. Experimental results with a smart connected pacemaker and three sophisticated mimicry malware demonstrate improved performance and accuracy compared to state-of-the-art timing-based malware detection.
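To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of timing-based anomaly scoring with an empirical cumulative distribution function over sliding execution windows. All names, the window size, and the two-sided tail score are illustrative assumptions for exposition only.

```python
import bisect

def empirical_cdf(samples):
    """Build an empirical CDF from training timing measurements."""
    ordered = sorted(samples)
    n = len(ordered)
    def cdf(x):
        # Fraction of training samples <= x, via binary search.
        return bisect.bisect_right(ordered, x) / n
    return cdf

def window_anomaly_scores(train_timings, observed, window=5):
    """Score each sliding window of observed timings against the training CDF.

    Each timing is mapped to its two-sided tail probability
    min(F(t), 1 - F(t)); timings far from the training distribution
    score near 0, so lower window scores suggest anomalous execution.
    """
    cdf = empirical_cdf(train_timings)
    scores = []
    for i in range(len(observed) - window + 1):
        w = observed[i:i + window]
        tail = [min(cdf(t), 1.0 - cdf(t)) for t in w]
        scores.append(sum(tail) / window)
    return scores
```

In a deployment following the abstract's approach, a per-window threshold on such scores would be calibrated from training data to keep the false-positive rate low; a window containing a timing far outside the training distribution scores lower than an all-normal window.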