{ "cells": [ { "cell_type": "markdown", "id": "042fff0c", "metadata": {}, "source": [ "# Profiling notebook" ] }, { "cell_type": "markdown", "id": "adc40d63", "metadata": {}, "source": [ "This notebook collects and compares the run times of several notebooks. These notebooks are specified in the `experiment_notebooks` variable (see Configuration).\n", "Simply run the whole notebook, and the results will be displayed in tables at the end.\n", "\n", "Each notebook listed in `experiment_notebooks` (see Configuration) must define a `run_experiment()` function. Optionally, you can also define a `close_experiment()` function.\n", "\n", "The profiler profiles the `run_experiment` function and, once that is done, calls `close_experiment` (if it exists). The `close_experiment` function is not mandatory, but if any resources need to be released, you can do that there. The profiler measures the times listed in the `methods` variable (see Configuration), as well as the total time." ] }, { "cell_type": "markdown", "id": "cab05395", "metadata": {}, "source": [ "After the profiling is done, the notebook generates a file in this directory for each profiled notebook. This file contains the detailed profiling report. For a notebook `<name>.ipynb` it generates a `<name>.ipynb.prof` file, which can be opened with snakeviz (`pip install snakeviz`): `snakeviz <name>.ipynb.prof`." ] }, { "cell_type": "markdown", "id": "ea2ed6ef", "metadata": {}, "source": [ "## Configuration" ] }, { "cell_type": "code", "execution_count": null, "id": "d0f2e1bb", "metadata": {}, "outputs": [], "source": [ "# `benchmark_mode` sets whether we run the schedules in benchmark mode.\n", "# In benchmark mode, the reference measurements file is overwritten\n", "# with the current timing values, and those become the new reference values.\n", "benchmark_mode = False\n", "profiling_reference_filename = \"profiling_reference_values.pickle\"" ] }, { "cell_type": "code", "execution_count": null, "id": "a23f2159", "metadata": {}, "outputs": [], "source": [ "# How many times each notebook is run. (The results are averaged.)\n", "samples = 10" ] }, { "cell_type": "code", "execution_count": null, "id": "ef356093", "metadata": {}, "outputs": [], "source": [ "# The result tables color-code each cell.\n", "# Each value's \"sigma\" is effectively its measurement error;\n", "# if the current time lies above/below\n", "# `reference value ± sigma * sigma_multiplier_threshold`,\n", "# the cell is displayed in a different color.\n", "sigma_multiplier_threshold = 2.0  # 2.0 is a reasonable value." ] },
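{ "cell_type": "markdown", "id": "1f2e3d4c", "metadata": {}, "source": [ "The next cell is only an illustrative sketch of how `sigma_multiplier_threshold` is meant to be read; it is not part of the profiling pipeline and uses hypothetical numbers. Two timings count as significantly different when their `value ± sigma * sigma_multiplier_threshold` bands do not overlap, which is the same check that `diff_to_style` performs further below." ] }, { "cell_type": "code", "execution_count": null, "id": "1f2e3d4d", "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only; hypothetical numbers, not produced by any profiling run.\n", "_value, _sigma = 1.20, 0.05  # current timing and its sigma, in seconds\n", "_ref_value, _ref_sigma = 1.00, 0.04  # reference timing and its sigma, in seconds\n", "\n", "_significantly_slower = (_value - _sigma * sigma_multiplier_threshold) > (\n", "    _ref_value + _ref_sigma * sigma_multiplier_threshold\n", ")\n", "_significantly_faster = (_value + _sigma * sigma_multiplier_threshold) < (\n", "    _ref_value - _ref_sigma * sigma_multiplier_threshold\n", ")\n", "_significantly_slower, _significantly_faster" ] },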
{ "cell_type": "code", "execution_count": null, "id": "c508a258", "metadata": {}, "outputs": [], "source": [ "# Notebooks to profile.\n", "experiment_notebooks = [\n", "    \"./simple_binned_acquisition.ipynb\",\n", "    \"./resonator_spectroscopy.ipynb\",\n", "    \"./random_gates.ipynb\",\n", "]" ] }, { "cell_type": "code", "execution_count": null, "id": "f7526434", "metadata": {}, "outputs": [], "source": [ "# Methods to time, as (label, (class name, method name)) pairs.\n", "# A class name of None means the function is matched by name only.\n", "methods = [\n", "    (\"compile\", (\"QuantifyCompiler\", \"compile\")),\n", "    (\"prepare\", (\"InstrumentCoordinator\", \"prepare\")),\n", "    (\"schedule\", (None, \"create_schedule\")),\n", "    (\"run\", (\"InstrumentCoordinator\", \"start\")),\n", "    (\"process\", (\"InstrumentCoordinator\", \"retrieve_acquisition\")),\n", "]" ] }, { "cell_type": "markdown", "id": "5047bfef", "metadata": {}, "source": [ "## Loading reference data" ] }, { "cell_type": "code", "execution_count": null, "id": "19ee4226", "metadata": {}, "outputs": [], "source": [ "# Reference values for profiling.\n", "# Each notebook has a reference timing value.\n", "import pickle\n", "from os.path import exists\n", "\n", "if not benchmark_mode:\n", "    if not exists(profiling_reference_filename):\n", "        raise RuntimeError(\n", "            f\"Reference file '{profiling_reference_filename}' does not exist! \"\n", "            \"Make sure this file is created by first running the profiling with 'benchmark_mode=True'!\"\n", "        )\n", "    with open(profiling_reference_filename, \"rb\") as f:\n", "        reference = pickle.load(f)" ] }, { "cell_type": "markdown", "id": "fce3143d", "metadata": {}, "source": [ "## Profiling functions" ] }, { "cell_type": "code", "execution_count": null, "id": "8d105fd4", "metadata": { "scrolled": true }, "outputs": [], "source": [ "import cProfile\n", "import pstats\n", "import qcodes\n", "import importlib.util\n", "import inspect\n", "import numpy as np\n", "import pandas as pd" ] }, { "cell_type": "code", "execution_count": null, "id": "ee79a5fc", "metadata": {}, "outputs": [], "source": [ "def stat_experiment(experiment_notebook):\n", "    # Run the notebook (which defines run_experiment and, optionally, close_experiment),\n", "    # profile a single call to run_experiment() and return the pstats.Stats object.\n", "    qcodes.instrument.Instrument.close_all()\n", "    %run $experiment_notebook\n", "    with cProfile.Profile() as pr:\n", "        run_experiment()\n", "    if \"close_experiment\" in globals():\n", "        close_experiment()\n", "    return pstats.Stats(pr)" ] }, { "cell_type": "code", "execution_count": null, "id": "0b6a6963", "metadata": {}, "outputs": [], "source": [ "def match_class_method(class_name, method_name, module_path, line_number):\n", "    # Check whether the profiled function at (module_path, line_number) is the\n", "    # method `method_name` of the class `class_name`.\n", "    module_name = inspect.getmodulename(module_path)\n", "    spec = importlib.util.spec_from_file_location(module_name, module_path)\n", "    module = importlib.util.module_from_spec(spec)\n", "\n", "    try:\n", "        spec.loader.exec_module(module)\n", "    except Exception:\n", "        # print(f\"WARNING {class_name, method_name, module_path, line_number}\")\n", "        return False\n", "\n", "    classes = inspect.getmembers(module, inspect.isclass)\n", "    for _, cls in classes:\n", "        class_methods = inspect.getmembers(cls, inspect.isfunction)\n", "        for name, method in class_methods:\n", "            if name == method_name:\n", "                _, start_line = inspect.getsourcelines(method)\n", "                if (class_name == cls.__name__) and (start_line == line_number):\n", "                    return True\n", "    return False" ] },
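{ "cell_type": "markdown", "id": "3b4c5d6e", "metadata": {}, "source": [ "`get_stat` below walks over `stats.stats`, the raw `pstats` table: each entry is keyed by `(file_path, line_number, function_name)` and maps to a tuple `(cc, nc, tt, ct, callers)`, where `ct` at index 3 is the cumulative time that the profiler reports per method. The next cell is a small, self-contained demonstration of this layout; it profiles a throwaway function and is independent of the experiment notebooks." ] }, { "cell_type": "code", "execution_count": null, "id": "3b4c5d6f", "metadata": {}, "outputs": [], "source": [ "# Self-contained demo of the pstats layout used by get_stat below\n", "# (profiles a throwaway function; unrelated to the experiment notebooks).\n", "def _demo_function():\n", "    return sum(range(100_000))\n", "\n", "with cProfile.Profile() as _demo_pr:\n", "    _demo_function()\n", "\n", "_demo_stats = pstats.Stats(_demo_pr)\n", "# Each key is (file_path, line_number, function_name); each value is\n", "# (cc, nc, tt, ct, callers), with ct (index 3) the cumulative time.\n", "next(iter(_demo_stats.stats.items()))" ] },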
{ "cell_type": "code", "execution_count": null, "id": "7c5bc9f8", "metadata": {}, "outputs": [], "source": [ "def get_stat(stats, class_name, method_name):\n", "    # Find the pstats entry for `method_name`; if `class_name` is given, also require\n", "    # that the profiled function is a method of that class.\n", "    for stat_key in stats.stats:\n", "        module_path = stat_key[0]\n", "        line_number = stat_key[1]\n", "        current_method_name = stat_key[2]\n", "        if method_name == current_method_name:\n", "            if class_name is not None:\n", "                if match_class_method(class_name, method_name, module_path, line_number):\n", "                    return stats.stats[stat_key]\n", "            else:\n", "                return stats.stats[stat_key]\n", "    return None" ] }, { "cell_type": "code", "execution_count": null, "id": "b61d4dff", "metadata": {}, "outputs": [], "source": [ "def expected_value_and_sigma(t_sum, t_sq_sum, samples):\n", "    # Mean and standard deviation computed from the running sum and sum of squares.\n", "    expected_value = t_sum / samples\n", "    sigma = (t_sq_sum / samples - expected_value ** 2) ** 0.5\n", "    return (expected_value, sigma)" ] }, { "cell_type": "code", "execution_count": null, "id": "340e4d73", "metadata": {}, "outputs": [], "source": [ "def stat_experiment_detailed(experiment_notebook, samples):\n", "    total_time = [0, 0]\n", "    times = [[0, 0] for _ in range(len(methods))]\n", "    for sample in range(samples):\n", "        print(f\"Running notebook {experiment_notebook} {sample + 1}/{samples}\")\n", "        stats = stat_experiment(experiment_notebook)\n", "\n", "        for i, method in enumerate(methods):\n", "            current_stats = get_stat(stats, method[1][0], method[1][1])\n", "            if current_stats:\n", "                time = current_stats[3]  # cumulative time of the profiled call\n", "                times[i][0] += time\n", "                times[i][1] += time ** 2\n", "            else:\n", "                times[i][0] = np.nan\n", "                times[i][1] = np.nan\n", "        total_time[0] += stats.total_tt\n", "        total_time[1] += stats.total_tt ** 2\n", "\n", "    times = [expected_value_and_sigma(t[0], t[1], samples) for t in times]\n", "    total_time = expected_value_and_sigma(total_time[0], total_time[1], samples)\n", "\n", "    stats.dump_stats(f\"{experiment_notebook}.prof\")\n", "    print(f\"Generated `{experiment_notebook}.prof` profiling file\")\n", "\n", "    return times, total_time" ] }, { "cell_type": "markdown", "id": "6ae21570", "metadata": {}, "source": [ "## Running the profiling" ] }, { "cell_type": "code", "execution_count": null, "id": "832b49f7", "metadata": { "scrolled": false }, "outputs": [], "source": [ "measured_data = []\n", "for experiment_notebook in experiment_notebooks:\n", "    times, total_time = stat_experiment_detailed(experiment_notebook, samples=samples)\n", "    measured_data.append((experiment_notebook, times, total_time))" ] }, { "cell_type": "code", "execution_count": null, "id": "ecdec47f", "metadata": { "scrolled": true }, "outputs": [], "source": [ "measured_data" ] }, { "cell_type": "code", "execution_count": null, "id": "93d96a18", "metadata": {}, "outputs": [], "source": [ "if benchmark_mode:\n", "    with open(profiling_reference_filename, \"wb\") as f:\n", "        pickle.dump(measured_data, f)\n", "    reference = measured_data" ] }, { "cell_type": "markdown", "id": "37f0ce97", "metadata": {}, "source": [ "## Displaying the results" ] }, { "cell_type": "code", "execution_count": null, "id": "32916c92", "metadata": {}, "outputs": [], "source": [ "reference" ] },
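{ "cell_type": "markdown", "id": "2a3b4c5d", "metadata": {}, "source": [ "`measured_data` and `reference` share the same layout: one entry per notebook of the form `(notebook_path, times, total_time)`, where `times` holds one `(expected_value, sigma)` pair per entry in `methods` and `total_time` is a single `(expected_value, sigma)` pair. The cells below build two tables from this structure: the measured times, and their differences to the reference values." ] },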
{ "cell_type": "code", "execution_count": null, "id": "2072edb7", "metadata": { "scrolled": true }, "outputs": [], "source": [ "table = []\n", "header = []\n", "table_diff = []\n", "header_diff = []\n", "\n", "header.append(\"\")\n", "header_diff.append(\"\")\n", "for method in methods:\n", "    header.append(method[0])\n", "    header_diff.append(method[0])\n", "header.append(\"total\")\n", "header_diff.append(\"total\")\n", "\n", "for row_id, (experiment_notebook, times, total_time) in enumerate(measured_data):\n", "    row = []\n", "    row_diff = []\n", "    row.append(experiment_notebook)\n", "    row_diff.append(experiment_notebook)\n", "    for column_id, time in enumerate(times):\n", "        expected_value = time[0]\n", "        sigma = time[1]\n", "        row.append(f\"{expected_value:.2g} ± {sigma:.2g} s\")\n", "\n", "        time_diff = expected_value - reference[row_id][1][column_id][0]\n", "        row_diff.append(f\"{time_diff:.2g} ± {sigma:.2g} s\")\n", "\n", "    row.append(f\"{total_time[0]:.2g} ± {total_time[1]:.2g} s\")\n", "\n", "    total_time_diff = total_time[0] - reference[row_id][2][0]\n", "    row_diff.append(f\"{total_time_diff:.2g} ± {total_time[1]:.2g} s\")\n", "\n", "    table.append(row)\n", "    table_diff.append(row_diff)" ] }, { "cell_type": "code", "execution_count": null, "id": "199829ee", "metadata": {}, "outputs": [], "source": [ "def diff_to_style(current, ref):\n", "    # Color a cell red if the current value is significantly above the reference,\n", "    # green if it is significantly below, and leave it uncolored otherwise.\n", "    green = \"#d0ffd0\"\n", "    red = \"#ffd0d0\"\n", "    val, sigma = current[0], current[1]\n", "    ref_val, ref_sigma = ref[0], ref[1]\n", "    if (val - sigma * sigma_multiplier_threshold) > (ref_val + ref_sigma * sigma_multiplier_threshold):\n", "        return f\"background-color: {red}\"\n", "    if (val + sigma * sigma_multiplier_threshold) < (ref_val - ref_sigma * sigma_multiplier_threshold):\n", "        return f\"background-color: {green}\"\n", "    return \"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "e3331c14", "metadata": {}, "outputs": [], "source": [ "style_table = []\n", "\n", "for row_id, (experiment_notebook, times, total_time) in enumerate(measured_data):\n", "    row = []\n", "    row.append(\"\")\n", "    for column_id, time in enumerate(times):\n", "        if row_id < len(reference) and column_id < len(reference[row_id][1]):\n", "            row.append(diff_to_style(time, reference[row_id][1][column_id]))\n", "        else:\n", "            row.append(\"\")\n", "    if row_id < len(reference):\n", "        row.append(diff_to_style(total_time, reference[row_id][2]))\n", "    else:\n", "        row.append(\"\")\n", "    style_table.append(row)" ] }, { "cell_type": "code", "execution_count": null, "id": "da2084cf", "metadata": {}, "outputs": [], "source": [ "style_table = np.array(style_table)\n", "style_properties = {\"border\": \"1px solid gray\"}\n", "styles = [dict(selector=\"caption\", props=[(\"text-align\", \"center\"), (\"font-size\", \"200%\"), (\"color\", \"black\")])]" ] }, { "cell_type": "code", "execution_count": null, "id": "b80f4c47", "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(table, columns=header)\n", "df = df.style.set_properties(**style_properties).apply(lambda _: style_table, axis=None)\n", "df = df.set_caption(\"Measured times\").set_table_styles(styles)" ] }, { "cell_type": "code", "execution_count": null, "id": "8bd5327f", "metadata": {}, "outputs": [], "source": [ "df_diff = pd.DataFrame(table_diff, columns=header_diff)\n", "df_diff = df_diff.style.set_properties(**style_properties).apply(lambda _: style_table, axis=None)\n", "df_diff = df_diff.set_caption(\"Measured diffs to reference\").set_table_styles(styles)" ] }, { "cell_type": "code", "execution_count": null, "id": "60974026", "metadata": {}, "outputs": [], "source": [ "# If the cell is green (or red), the current time\n", "# is significantly less (or more) than the reference time.\n", "df" ] }, { "cell_type": "code", "execution_count": null, "id": "dc818961", "metadata": {}, "outputs": [], "source": [ "# All values are (current_time - reference_time).\n", "# If the cell is green (or red), the current time\n", "# is significantly less (or more) than the reference time.\n", "df_diff" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter":
"python", "pygments_lexer": "ipython3", "version": "3.9.18" } }, "nbformat": 4, "nbformat_minor": 5 }