Capturing the PPS signal is as straightforward as setting up an any-edge interrupt on a GPIO pin and recording the system clock on entry to the ISR. To determine which edge fired (rising or falling), it suffices to read the pin's level.
#include <stdio.h>
#include "driver/gpio.h"
#include "esp_cpu.h"
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

#define PIN_PPS 7  // Pin number

static QueueHandle_t pps_queue;

void IRAM_ATTR pps_isr(void *_unused)
{
    unsigned t = esp_cpu_get_cycle_count();
    // Pack the timer value with the pin's level in the LSB
    t = t * 2 + gpio_get_level(PIN_PPS);
    xQueueSendFromISR(pps_queue, &t, NULL);
}

void app_main(void)
{
    // ...
    pps_queue = xQueueCreate(10, sizeof(unsigned));
    gpio_config(&(gpio_config_t){
        .pin_bit_mask = (1ull << PIN_PPS),
        .mode = GPIO_MODE_INPUT,
        .intr_type = GPIO_INTR_ANYEDGE,
    });
    gpio_install_isr_service(ESP_INTR_FLAG_LEVEL3);
    gpio_isr_handler_add(PIN_PPS, pps_isr, NULL);

    unsigned t;
    while (xQueueReceive(pps_queue, &t, portMAX_DELAY)) {
        // Unpack timestamp and pin level
        printf("PPS: %9u %u\n", t / 2, t % 2);
    }
    // NOTE: NMEA message dumping omitted
}
Here is an excerpt from the log, each rising edge annotated with its difference from the previous rising edge (period):
The system clock runs at 160 MHz. The ±1~2% variation in the period seems unexpectedly large, and the computed duration of 90 ms is even stranger. After some fiddling, I noticed that the recorded period changes with unrelated parts of the code. It is thus highly likely that the esp_cpu_get_cycle_count() counter freezes while the core is sleeping.
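One quick way to probe this hypothesis on the target is to compare the cycle counter against esp_timer_get_time() across a delay during which the idle task may let the core sleep. A sketch (the function name is mine):

```c
#include <stdio.h>
#include <stdint.h>
#include "esp_cpu.h"
#include "esp_timer.h"
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

// If the cycle counter halts while the core sleeps, the measured ratio
// will fall far short of the nominal 160 cycles per microsecond.
void check_cycle_counter(void)
{
    uint32_t c0 = esp_cpu_get_cycle_count();
    int64_t  u0 = esp_timer_get_time();
    vTaskDelay(pdMS_TO_TICKS(1000));  // idle task may put the core to sleep here
    uint32_t dc = esp_cpu_get_cycle_count() - c0;
    int64_t  du = esp_timer_get_time() - u0;
    printf("%.1f cycles/us (nominal 160)\n", (double)dc / (double)du);
}
```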
As of v5.5, ESP-IDF does not provide a public interface to read the system timer counter; the only public API, esp_timer_get_time(), has a resolution of 1 μs. Ideally, we need something finer to reach a more confident conclusion.
If we look into the esp_timer component, there is a private subroutine esp_timer_impl_get_counter_reg() (source) that has what we want. Replacing the timer call with this clears up all confusion:
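Since esp_timer_impl_get_counter_reg() is private to the esp_timer component and has no public header, one way to call it is to declare the prototype ourselves — a sketch that swaps it into the ISR from the listing above (PIN_PPS and pps_queue as defined there; the private symbol may change in future ESP-IDF releases):

```c
#include <stdint.h>
#include "driver/gpio.h"
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

// Private esp_timer symbol; prototype declared by hand (assumption: the
// signature matches the ESP-IDF source linked above).
uint64_t esp_timer_impl_get_counter_reg(void);

void IRAM_ATTR pps_isr(void *_unused)
{
    // Systimer-backed esp_timer count, truncated to 32 bits for the queue
    unsigned t = (unsigned)esp_timer_impl_get_counter_reg();
    t = t * 2 + gpio_get_level(PIN_PPS);  // pack pin level into the LSB
    xQueueSendFromISR(pps_queue, &t, NULL);
}
```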
The period is 16000221 ticks (one second; the esp_timer counter runs at 16 MHz, so one tick is 62.5 ns), consistent down to ±1 tick. This stayed stable over the few minutes of testing. In the optimistic estimate, we might thus have a temporal accuracy on the order of 100, even 10 ns! Could that be real?
The request to accurately timestamp electronic signals came from the GRID (Gamma-Ray Integrated Detectors) team. Specifically, the team is aiming to deploy a swarm of distributed detectors on the ground; each unit carries a SiPM sensor that produces an electronic pulse when hit by a muon. Muons are commonly spawned by cosmic rays entering the Earth's atmosphere, and the resulting particles spread out along a cone. If we can accurately timestamp and geotag each event, we can recover the conic section and, in turn, determine exactly when and from which direction the ray hit the atmosphere.
The actual GRID payloads flying in space have access to UTC pulse signals and interpolate accurate timestamps from timer counters, but the current data pipeline requires a postprocessing step after the raw data (counter values at each second's pulse, as well as at each particle event) is returned to the ground, in order to filter out possible outliers. Transmitting and processing raw counter values from an entire swarm would be cumbersome; I could not believe this was impossible to do on-site. (Just think of orchestra performances: the ensemble holds its tempo even as the conductor ad-libs.) However, no related attempts seem to be publicly documented, so I took up the challenge and set off to find an answer myself.
Ideally, we would like the temporal resolution to be on par with spatial resolution, say 30 m / c = 0.1 μs. Maintaining UTC with this accuracy sounds scary, but if we take a closer look, for a microcontroller running at 100 MHz this is a solid jitter tolerance of 10 cycles. Typical GNSS modules provide a 1 Hz pulse (PPS) with sub-microsecond alignment and nanosecond jitter, tagged with UTC timestamps. With these in hand, we can interpolate the absolute timestamp of an external interrupt by reading the system timer counter. Our target is surely worth an attempt!
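The interpolation mentioned above is plain linear arithmetic: an event's counter value maps onto UTC between the two surrounding PPS edges. A minimal sketch (the function name and nanosecond convention are mine, not from any library):

```c
#include <stdint.h>

/* Interpolate an event's UTC time (nanoseconds) from the free-running
 * counter values captured at the two most recent 1 Hz PPS edges.
 * Using the measured edge-to-edge period absorbs local clock drift. */
static int64_t interpolate_utc_ns(uint64_t pps_prev_cnt, /* counter at earlier PPS edge */
                                  uint64_t pps_next_cnt, /* counter at following PPS edge */
                                  int64_t  utc_prev_ns,  /* UTC tag of the earlier edge */
                                  uint64_t event_cnt)    /* counter at the event */
{
    int64_t period = (int64_t)(pps_next_cnt - pps_prev_cnt); /* ~1 s in ticks */
    int64_t offset = (int64_t)(event_cnt - pps_prev_cnt);
    /* Linear map: offset/period of one second past the earlier edge */
    return utc_prev_ns + offset * 1000000000LL / period;
}
```

Unsigned subtraction keeps the differences correct even when the raw counter wraps between edges.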
The production detectors will be built on the ESP32-S3. I dug up an ESP32-C3 development board — it runs at a comparable system clock frequency (160 MHz), and firmware is highly portable through ESP-IDF, so there is little to worry about across models. The prototype is built simply by connecting it to a GNSS module and an LCD display.
The rationale for the LCD is portability: the device can run from a power bank, making it easy to test in different environments and, at a later stage, to test multiple such prototypes side by side.
It must be noted that the ESP32-C3 has a hardwired UART0 on GPIO pins 20 (RX) and 21 (TX), to which the boot ROM writes logs. This cannot be disabled without blowing an eFuse (see ESP32-C3 TRM v1.3, Section 7.3), so we would like to avoid the default TX pin (21). Pin 20, however, can be safely reused by remapping it to UART1.
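The remapping can be sketched with ESP-IDF's UART driver; the 9600-baud rate is an assumption (a common NMEA default, but module-dependent), and the helper name is mine:

```c
#include "driver/uart.h"

#define GNSS_UART   UART_NUM_1
#define GNSS_RX_PIN 20    // remapped away from the boot-ROM UART0
#define GNSS_BAUD   9600  // common NMEA default; check your module

void gnss_uart_init(void)
{
    const uart_config_t cfg = {
        .baud_rate  = GNSS_BAUD,
        .data_bits  = UART_DATA_8_BITS,
        .parity     = UART_PARITY_DISABLE,
        .stop_bits  = UART_STOP_BITS_1,
        .flow_ctrl  = UART_HW_FLOWCTRL_DISABLE,
        .source_clk = UART_SCLK_DEFAULT,
    };
    ESP_ERROR_CHECK(uart_driver_install(GNSS_UART, 2048, 0, 0, NULL, 0));
    ESP_ERROR_CHECK(uart_param_config(GNSS_UART, &cfg));
    // Only RX is remapped; we just listen to the module
    ESP_ERROR_CHECK(uart_set_pin(GNSS_UART, UART_PIN_NO_CHANGE, GNSS_RX_PIN,
                                 UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE));
}
```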
Displaying serial messages from the GNSS module is straightforward. The messages are in the NMEA 0183 format, consisting of lines of printable characters (“sentences”), so we can simply dump everything onto the screen.
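Before dumping, it is cheap to sanity-check each sentence: NMEA 0183 terminates a sentence with '*' followed by a two-hex-digit XOR checksum of all characters between '$' and '*'. A small validator (a sketch; the helper name is mine):

```c
#include <stdbool.h>
#include <stdio.h>

/* Verify an NMEA 0183 sentence of the form "$...*HH", where HH is the
 * XOR of every byte strictly between '$' and '*', in hex. */
static bool nmea_checksum_ok(const char *s)
{
    if (*s++ != '$')
        return false;
    unsigned char sum = 0;
    while (*s && *s != '*')
        sum ^= (unsigned char)*s++;
    if (*s != '*')
        return false;
    unsigned expect;
    if (sscanf(s + 1, "%2x", &expect) != 1)
        return false;
    return sum == expect;
}
```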
The signal indoors is barely sufficient for a timestamp, but not for a position fix. Running the device from a power bank and holding it outside the window quickly elicited a latitude/longitude/altitude reading (see footnote [1]).
Everything appears to be going as expected. The next step is to inspect the PPS signal: we will capture its edges, try to timestamp them with the system timer counter, and see how the various sources of jitter play together.
Footnote [1]. It is worth noting that in mainland China, all published maps and related services are mandated to use the GCJ-02 coordinate system (a.k.a. “Mars coordinates”), which differs slightly from the WGS-84 used by most standard equipment. Many map services (including Google Maps) adhere to the regulation, so coordinates reported by GNSS modules (in WGS-84) need to be converted to GCJ-02 before being transcribed into the search bar. Approximate...