{ "cells": [ { "cell_type": "markdown", "metadata": { "execution": { "iopub.execute_input": "2021-04-02T02:17:21.439720Z", "iopub.status.busy": "2021-04-02T02:17:21.439409Z", "iopub.status.idle": "2021-04-02T02:17:21.447577Z", "shell.execute_reply": "2021-04-02T02:17:21.445869Z", "shell.execute_reply.started": "2021-04-02T02:17:21.439693Z" } }, "source": [ "# Toy Example - Hurricane Walaka\n", "\n", "The first step is to play with a convenient, toy example!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Preprocess Track Data\n", "\n", "Well the actual first step is to import packages and get data. Since `ahlive` comes with tropical tutorial data, I will simply use that because it's convenient :)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "execution": { "iopub.execute_input": "2021-04-02T02:46:03.674994Z", "iopub.status.busy": "2021-04-02T02:46:03.674780Z", "iopub.status.idle": "2021-04-02T02:46:06.466262Z", "shell.execute_reply": "2021-04-02T02:46:06.465431Z", "shell.execute_reply.started": "2021-04-02T02:46:03.674971Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TC TRACKS | Source: IBTrACS v04 - USA | https://www.ncdc.noaa.gov/ibtracs/\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ ":6: DtypeWarning: Columns (8,9,23,24,25) have mixed types.Specify dtype option on import or set low_memory=False.\n", " df = ah.tutorial.open_dataset('tc_tracks')\n" ] } ], "source": [ "import os\n", "import xarray as xr\n", "import pandas as pd\n", "import ahlive as ah\n", "\n", "df = ah.tutorial.open_dataset('tc_tracks')" ] }, { "cell_type": "markdown", "metadata": { "execution": { "iopub.execute_input": "2021-04-02T02:22:13.755276Z", "iopub.status.busy": "2021-04-02T02:22:13.755042Z", "iopub.status.idle": "2021-04-02T02:22:13.761579Z", "shell.execute_reply": "2021-04-02T02:22:13.759045Z", "shell.execute_reply.started": "2021-04-02T02:22:13.755256Z" } }, "source": [ "I will further process the dataset:\n", " \n", "1. select East Pacific TCs\n", "2. resample the data to daily temporal resolution\n", "3. rename column to time\n", "4. shift longitudes from -180 to 180" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2021-04-02T02:46:06.467830Z", "iopub.status.busy": "2021-04-02T02:46:06.467636Z", "iopub.status.idle": "2021-04-02T02:46:06.605514Z", "shell.execute_reply": "2021-04-02T02:46:06.604986Z", "shell.execute_reply.started": "2021-04-02T02:46:06.467811Z" } }, "outputs": [], "source": [ "df = df.loc[df['basin'] == 'EP']\n", "df = df.groupby('name').resample('1D').mean().reset_index()\n", "df = df.rename(columns={'iso_time': 'time'})\n", "df.loc[df['lon'] < 0, 'lon'] += 360." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, since I am just playing with a toy example, I will simply select one tropical cyclone: Walaka!" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2021-04-02T02:46:06.607094Z", "iopub.status.busy": "2021-04-02T02:46:06.606912Z", "iopub.status.idle": "2021-04-02T02:46:06.615668Z", "shell.execute_reply": "2021-04-02T02:46:06.614875Z", "shell.execute_reply.started": "2021-04-02T02:46:06.607074Z" } }, "outputs": [], "source": [ "df_tc = df.loc[df['name'] == 'WALAKA'].copy()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Preprocess SST Data\n", "\n", "Then I will download some OISST data that correspond to Walaka's dates." 
 { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# download each day's OISST v2.1 AVHRR file (-nc skips files already present)\n", "for time in df_tc['time']:\n", "    _ = os.system(\n", "        f'wget -nc https://www.ncei.noaa.gov/data/'\n", "        f'sea-surface-temperature-optimum-interpolation/'\n", "        f'v2.1/access/avhrr/{time:%Y%m}/'\n", "        f'oisst-avhrr-v02r01.{time:%Y%m%d}.nc'\n", "    )" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "Then read in the gridded SST data!" ] },
 { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "ds = xr.open_mfdataset('oisst*.nc')[['sst']]" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "Since I am running this on my puny personal computer, and to keep its fans from spinning up to full blast, I will coarsen the gridded dataset from its native 0.25 degree resolution to 1 degree." ] },
 { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/solactus/anaconda3/lib/python3.8/site-packages/dask/array/numpy_compat.py:41: RuntimeWarning: invalid value encountered in true_divide\n", " x = np.divide(x1, x2, out)\n" ] } ], "source": [ "ds = ds.coarsen(lat=4, lon=4).mean().squeeze().load()" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "### Mimicking the Paper's Methodology\n", "\n", "Next, I wrote a function and a short snippet that reflect the methodology described in the paper (which happens to reference another paper):\n", "\n", "\"We employ a footprint method that samples storm properties within a 6 × 6 degree domain that is centered on the best track location and moves with each storm (as described in Li et al., 2016).\n", "\n", "...\n", "\n", "We characterize storm‐induced SST anomalies by following each storm track and subtracting the prestorm temperatures (−1 day relative to storm passage) from the poststorm temperatures (averaged from +3 to +5 days).\"" ] },
 { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def _sel_ds(ds, time, lat, lon):\n", "    # sample a 6 x 6 degree box centered on the track location,\n", "    # at either a single (nearest) time or a time slice\n", "    method = 'nearest' if not isinstance(time, slice) else None\n", "    return ds.sel(\n", "        time=time, method=method\n", "    ).sel(\n", "        lat=slice(lat - 3, lat + 3),\n", "        lon=slice(lon - 3, lon + 3),\n", "    ).load()\n", "\n", "ds_list = []\n", "for i, (r, row) in enumerate(df_tc.iterrows()):\n", "    # poststorm window: +3 to +5 days after storm passage\n", "    time_post = slice(\n", "        row['time'] + pd.Timedelta('3D'),\n", "        row['time'] + pd.Timedelta('5D')\n", "    )\n", "    # prestorm time: -1 day relative to storm passage\n", "    time_past = row['time'] - pd.Timedelta('1D')\n", "    ds_post = _sel_ds(ds, time_post, row['lat'], row['lon']).mean('time')\n", "    ds_past = _sel_ds(ds, time_past, row['lat'], row['lon'])\n", "    ds_anom = ds_post - ds_past\n", "    ds_anom = ds_anom.reindex_like(ds).fillna(0)\n", "    # accumulate the anomalies along the track to build up the footprint\n", "    if i == 0:\n", "        ds_list.append(ds_anom)\n", "    else:\n", "        ds_list.append(ds_list[-1] + ds_anom)\n", "\n", "ds_footprint = xr.concat(\n", "    ds_list, 'time'\n", ").assign_coords(**{\n", "    'time': df_tc['time'].values\n", "})" ] },
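 { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check before animating anything: a passing tropical cyclone typically cools the surface ocean, so the accumulated footprint should be dominated by negative SST anomalies (the exact values depend on the downloaded OISST files)." ] },
 { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# the strongest accumulated anomalies should be negative (storm-induced cooling)\n", "print(float(ds_footprint['sst'].min()), float(ds_footprint['sst'].max()))" ] },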
 { "cell_type": "markdown", "metadata": {}, "source": [ "### Animating the TC Footprint" ] },
 { "cell_type": "markdown", "metadata": {}, "source": [ "Just to validate that I did everything right (and also to test `ahlive`), I raise the radius of maximum winds (RMW) to an arbitrary power (to use as the marker size) and animate the TC footprint!" ] },
 { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[########################################] | 100% Completed | 32.8s\n" ] }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_tc['scale'] = df_tc['usa_rmw'] ** 1.25\n", "\n", "ah_df = ah.DataFrame(\n", "    df_tc, 'lon', 'lat', s='scale', color='black', inline_labels='usa_pres',\n", "    alpha=0.5, vmin=-4, vmax=4, coastline=True, xlims=(170, 230),\n", "    ylims=(0, 50), cmap='RdBu_r', preset='trail', crs='PlateCarree',\n", "    title='Hurricane Walaka SST Footprint', clabel='SST Anomaly [K]'\n", ").config('preset', chart='both').config('inline', suffix='hPa')\n", "ah_ds = ah.Dataset(ds_footprint, 'lon', 'lat', 'sst')\n", "\n", "(ah_df * ah_ds).render()" ] }
 ],
 "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } },
 "nbformat": 4, "nbformat_minor": 4 }