Recent developments in earthquake forecasting models have demonstrated the need for a robust method for identifying which model components are most beneficial to understanding spatial patterns of seismicity. Borrowing from ecology, we use Log‐Gaussian Cox process models to describe the spatially varying intensity of earthquake locations. These models are constructed from elements that may influence earthquake locations, including the underlying fault map and past seismicity models, together with a random field that accounts for any excess spatial variation the deterministic model components cannot explain. Comparing alternative models of varying complexity, composed of different components, allows us to assess their relative performance and thereby identify which elements are most useful for describing the distribution of earthquake locations. We demonstrate the effectiveness of this approach using synthetic data and the earthquake and fault information available for California, including an application to the 2019 Ridgecrest sequence. We show the flexibility of this modeling approach and how it might be applied in areas where the same abundance of detailed information is not available. Consistent with existing literature on the performance of past seismicity models, we find that slip rates are beneficial for describing the spatial locations of larger‐magnitude events and that strain rate maps can constrain the spatial limits of seismicity in California. We also demonstrate that maps of distance to the nearest fault can benefit spatial models of seismicity, even those that also include the primary fault geometry used to construct them.
Recently, many statistical models for earthquake occurrence have been developed with the aim of improving earthquake forecasting. Several different underlying factors might control the location of earthquakes, but testing the significance of each of these factors with traditional approaches has not been straightforward and has restricted how well we can combine different successful model elements. We present a new approach using a point process model to map the spatial intensity of events. This method allows us to combine maps of factors which might affect the location of earthquakes with a random element that accounts for other spatial variation. This allows us to rapidly compare models with different components to see which are most helpful for describing the observed locations. We demonstrate this approach using synthetic data and real data from California as a whole and the 2019 Ridgecrest sequence in particular. Slip rates are found to be beneficial for explaining the spatial distribution of large magnitude events, and strain rates are found useful for constraining spatial limits of observed seismicity. Constructing a fault distance map can also improve models where many events cannot be directly linked to an individual fault.
Spatially varying seismicity can be efficiently modeled as a Log‐Gaussian Cox process that includes deterministic and stochastic components
LGCPs can be analyzed with integrated nested Laplace approximations to compare seismicity models and identify useful model components
These models find maps of strain rate and distance to the nearest fault useful for constraining spatial seismicity
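The model structure described above — a log-linear intensity combining deterministic covariates (e.g., a distance-to-fault map) with a Gaussian random field, from which event counts are Poisson-sampled — can be sketched as follows. This is a minimal illustrative simulation, not the paper's implementation: the grid, the covariate, the correlation length, and the coefficients `beta0` and `beta1` are all assumed for the example, and inference (e.g., via integrated nested Laplace approximations) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid over a unit-square study area (illustrative, not the California grid)
n = 24
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
cell_area = (1.0 / n) ** 2  # approximate area of each grid cell

# Deterministic component: a hypothetical covariate, here distance
# to an assumed vertical fault trace at x = 0.5
dist_to_fault = np.abs(X - 0.5)

# Stochastic component: a Gaussian random field with exponential
# covariance, capturing spatial variation the covariate cannot explain
coords = np.column_stack([X.ravel(), Y.ravel()])
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = np.exp(-d / 0.2)  # correlation length 0.2 (assumed)
z = rng.multivariate_normal(np.zeros(n * n), cov).reshape(n, n)

# Log-Gaussian Cox process intensity: log-linear in the covariate
# plus the random field (coefficients chosen for illustration)
beta0, beta1 = 4.0, -6.0
intensity = np.exp(beta0 + beta1 * dist_to_fault + z)

# Expected counts per cell; Poisson sampling yields one synthetic catalogue
counts = rng.poisson(intensity * cell_area)
print(counts.sum())
```

Intensity decays away from the assumed fault, so most simulated events cluster near x = 0.5; comparing fits with and without each covariate (or the random field) is what allows the component-by-component model comparison the key points describe.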