Pushing the boundaries of AI capabilities.
While LLMs generate fluent natural language, it is still difficult to control the generation process, which leads to irrelevant, repetitive, and hallucinated content. At Google DeepMind, I have been working on planning methods that make language generation less opaque and more grounded. My work has shown that planning is particularly beneficial (a) when the input and output are long (e.g., a book, multiple news articles, or stories); (b) in human-in-the-loop settings where a user works together with the model to edit text; (c) in high-precision scenarios where the system's output must be faithful to the input (e.g., with built-in citations); and (d) in cross-lingual generation and low-resource settings, where plans can serve as a bridge between languages.
I got to be a small part of the large effort that led to Gemini, Google's largest and most capable AI model. Give it a try!
Over the last few decades, wildfires have become a massive problem, with longer fire seasons and larger fires each year. Assessing fire likelihood, the probability of a wildfire burning in a specific location, is critical for forestry management, disaster preparedness, and early-warning systems.
Traditionally, wildfire likelihood is estimated by simulating fire behavior across many parameter settings, which is time-consuming and computationally intensive. Instead of relying on simulations, I implemented and trained deep neural networks to predict wildfire occurrence from remote-sensing data on historical wildfires. I demonstrated that these models can successfully identify areas of high fire likelihood from aggregated data about vegetation, weather, and topography.
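To give a flavor of this setup, here is a minimal sketch of such a model: a small feed-forward network that maps aggregated per-location features (vegetation, weather, topography) to a fire-likelihood score. The feature count, layer sizes, and metric are illustrative assumptions, not the published architecture, and features are assumed to be standardized upstream.

```python
# Minimal sketch (assumptions, not the published model): a feed-forward
# network mapping aggregated per-location features to a fire-likelihood score.
import tensorflow as tf

NUM_FEATURES = 12  # hypothetical: e.g., vegetation indices, temperature, wind, slope, ...

def build_fire_likelihood_model() -> tf.keras.Model:
    # Features are assumed to be standardized before being fed to the model.
    inputs = tf.keras.Input(shape=(NUM_FEATURES,), name="aggregated_features")
    x = tf.keras.layers.Dense(64, activation="relu")(inputs)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    # Sigmoid output: probability that a wildfire burns at this location.
    outputs = tf.keras.layers.Dense(1, activation="sigmoid", name="fire_likelihood")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```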
I presented this work at the NeurIPS 2020 Artificial Intelligence for Humanitarian Assistance and Disaster Response Workshop.
Python
TensorFlow
Earth Engine
When we think of earthquakes, we think of the ground shaking. But there is an entirely different type of earthquake, much smaller in amplitude and more like a repeating rumble: these tiny, recurring events are called tectonic tremor.
Because tectonic tremor repeats regularly, it is valuable for mapping the precise location of faults. However, its amplitude is close to the background noise level, which makes it difficult to detect reliably. It is usually detected by matching waveforms against carefully crafted reference templates.
Using a catalog of more than 1 million tremor events detected along the San Andreas Fault over more than 15 years, I developed a novel approach to tremor detection based on a convolutional neural network. I demonstrated that this method can detect tremor of very low signal amplitude, even below the background noise level, without relying on prior templates.
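As a rough illustration, the sketch below shows a 1-D convolutional network that classifies fixed-length seismogram windows as tremor versus noise. The window length, channel count, and layer configuration are hypothetical choices for the example, not the detector used in the study.

```python
# Minimal sketch (illustrative assumptions, not the published detector):
# a 1-D CNN that classifies seismogram windows as "tremor" vs. "noise".
import tensorflow as tf

WINDOW_SAMPLES = 2000   # hypothetical window length in samples
NUM_CHANNELS = 3        # three-component seismogram (N, E, Z)

def build_tremor_detector() -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(WINDOW_SAMPLES, NUM_CHANNELS), name="waveform_window")
    x = inputs
    # Stacked convolutions with pooling extract features at increasing time scales.
    for filters in (16, 32, 64):
        x = tf.keras.layers.Conv1D(filters, kernel_size=7, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling1D(pool_size=4)(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    x = tf.keras.layers.Dense(32, activation="relu")(x)
    # Probability that the window contains tectonic tremor.
    outputs = tf.keras.layers.Dense(1, activation="sigmoid", name="tremor_probability")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```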
Python
C++
TensorFlow