hyperparameter_hunter.utils package

Submodules

hyperparameter_hunter.utils.boltons_utils module

hyperparameter_hunter.utils.file_utils module

This module defines utilities for reading, writing, and modifying different types of files

hyperparameter_hunter.utils.file_utils.default_json_write(obj)

Convert values that are not JSON-friendly to a more acceptable type

Parameters
obj: Object

The object that is expected to be of a type that is incompatible with JSON files

Returns
Object

The value of obj after being cast to a type accepted by JSON

Raises
TypeError

If the type of obj is unhandled

Examples

>>> assert default_json_write(np.array([1, 2, 3])) == [1, 2, 3]
>>> assert default_json_write(np.int8(32)) == 32
>>> assert np.isclose(default_json_write(np.float16(3.14)), 3.14, atol=0.001)
>>> assert default_json_write(pd.Index(["a", "b", "c"])) == ["a", "b", "c"]
>>> assert default_json_write((1, 2)) == {"__tuple__": [1, 2]}
>>> default_json_write(object())  
Traceback (most recent call last):
    File "file_utils.py", line ?, in default_json_write
TypeError: <object object at ...> is not JSON serializable
hyperparameter_hunter.utils.file_utils.hook_json_read(obj)

Hook function to decode JSON objects during reading

Parameters
obj: Object

JSON object to process, or return unchanged

Returns
Object

If obj contains the key “__tuple__”, its value is cast to a tuple and returned. Else, obj is returned unchanged

Examples

>>> assert hook_json_read({"__tuple__": [1, 2]}) == (1, 2)
>>> assert hook_json_read({"__tuple__": (1, 2)}) == (1, 2)
>>> assert hook_json_read({"a": "foo", "b": 42}) == {"a": "foo", "b": 42}
hyperparameter_hunter.utils.file_utils.write_json(file_path, data, do_clear=False)

Write data to the JSON file specified by file_path, optionally clearing the file before adding data

Parameters
file_path: String

The target .json file path to which data will be written

data: Object

The content to save at the .json file given by file_path

do_clear: Boolean, default=False

If True, the contents of the file at file_path will be cleared before saving data

hyperparameter_hunter.utils.file_utils.read_json(file_path, np_arr=False)

Get the contents of the .json file located at file_path

Parameters
file_path: String

The path of the .json file to be read

np_arr: Boolean, default=False

If True, the contents read from file_path will be cast to a numpy array before returning

Returns
content: Object

The contents of the .json file located at file_path
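
The write/read pair above is designed to round-trip tuples through JSON via the "__tuple__" marker described in default_json_write and hook_json_read. A minimal, self-contained sketch of the decode side, reimplementing hook_json_read from its documented doctests using only the standard library (not the library's own code):

```python
import json

def hook_json_read(obj):
    # Mirrors the documented decode hook: a dict carrying the key
    # "__tuple__" is converted back to a tuple; anything else is
    # returned unchanged
    if "__tuple__" in obj:
        return tuple(obj["__tuple__"])
    return obj

# Tuples are stored under the "__tuple__" marker and recovered on read
encoded = json.dumps({"point": {"__tuple__": [1, 2]}})
decoded = json.loads(encoded, object_hook=hook_json_read)
assert decoded == {"point": (1, 2)}
```

json.loads invokes object_hook on every decoded dict, so nested "__tuple__" markers are converted wherever they occur.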

hyperparameter_hunter.utils.file_utils.add_to_json(file_path, data_to_add, key=None, condition=None, default=None, append_value=False)

Append data_to_add to the contents of the .json file specified by file_path

Parameters
file_path: String

The target .json file path to which data_to_add will be added and saved

data_to_add: Object

The data to add to the contents of the .json file given by file_path

key: String, or None, default=None

If None, the original contents of the file at file_path should not be of type dict. If string, the original content at file_path is expected to be a dict, and data_to_add will be added to that dict under the key key. Therefore, key is expected to be a key not already present in the original dict contents of file_path, unless append_value is True

condition: Callable, or None, default=None

If callable, will be given the original contents of the .json file at file_path as input, and should return a boolean value. If condition(original_data) is truthy, data_to_add will be added to the contents of the file at file_path as usual. Otherwise, data_to_add will not be added to the file, and the contents at file_path will remain unchanged. If condition is None, it is treated as truthy, and data_to_add is appended to the target file as usual

default: Object, or None, default=None

If the attempt to read the original content at file_path raises a FileNotFoundError and default is not None, default will be used as the original data for the file. Otherwise, the error will be raised

append_value: Boolean, default=False

If True and the original data at file_path is a dict, then data_to_add will be appended as a list to the value of the original data at key key
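
The interplay of key, condition, and default can be sketched as a read-modify-write flow. The following is an illustrative reimplementation of the documented behavior (the function name add_to_json_sketch and the simplified logic are assumptions, not the library's code; append_value is omitted for brevity):

```python
import json
import os
import tempfile

def add_to_json_sketch(file_path, data_to_add, key=None, condition=None, default=None):
    # Read original contents, falling back to `default` on FileNotFoundError
    try:
        with open(file_path) as f:
            original = json.load(f)
    except FileNotFoundError:
        if default is None:
            raise
        original = default
    # A falsy `condition` result leaves the file unchanged
    if condition is not None and not condition(original):
        return
    if key is None:
        original.append(data_to_add)  # Non-dict contents, e.g. a list
    else:
        original[key] = data_to_add  # Dict contents, added under `key`
    with open(file_path, "w") as f:
        json.dump(original, f)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "log.json")
    # File does not exist yet, so `default={}` seeds the original data
    add_to_json_sketch(path, {"score": 0.9}, key="exp_0", default={})
    # Only add the second entry if the first one is already present
    add_to_json_sketch(path, {"score": 0.7}, key="exp_1",
                       condition=lambda data: "exp_0" in data)
    with open(path) as f:
        result = json.load(f)

assert result == {"exp_0": {"score": 0.9}, "exp_1": {"score": 0.7}}
```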

hyperparameter_hunter.utils.file_utils.make_dirs(name, mode=511, exist_ok=False)

Permissive version of os.makedirs that gives full permissions by default

Parameters
name: Str

Path/name of directory to create. Will make intermediate-level directories needed to contain the leaf directory

mode: Integer, default=511 (0o777)

File permission bits for creating the leaf directory

exist_ok: Boolean, default=False

If False, an OSError is raised if the directory targeted by name already exists
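
make_dirs appears to be a thin wrapper over os.makedirs with mode defaulting to 511 (0o777). The standard-library equivalent of the documented behavior:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "a", "b", "c")
    # Create the leaf directory and any missing intermediate directories,
    # with permissive mode bits (subject to the process umask)
    os.makedirs(target, mode=0o777, exist_ok=False)
    created = os.path.isdir(target)
    # With exist_ok=False, a second call would raise FileExistsError
    # (a subclass of OSError); exist_ok=True suppresses it
    os.makedirs(target, exist_ok=True)

assert created
```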

hyperparameter_hunter.utils.file_utils.clear_file(file_path)

Erase the contents of the file located at file_path

Parameters
file_path: String

The path of the file whose contents should be cleared out

class hyperparameter_hunter.utils.file_utils.RetryMakeDirs

Bases: object

Execute the decorated callable, but if an OSError is raised, call make_dirs() on the directory specified by the exception, then call the decorated callable again

Examples

>>> from tempfile import TemporaryDirectory
>>> with TemporaryDirectory(dir="") as d:  
...     def f_0():
...         os.mkdir(f"{d}/nonexistent_dir/subdir")
...     f_0()
Traceback (most recent call last):
    File "file_utils.py", line ?, in f_0
FileNotFoundError: [Errno 2] No such file or directory...
>>> with TemporaryDirectory(dir="") as d:
...     @RetryMakeDirs()
...     def f_1():
...         os.mkdir(f"{d}/nonexistent_dir/subdir")
...     f_1()

Methods

__call__

class hyperparameter_hunter.utils.file_utils.ParametersFromFile(key: Optional[Union[str, int]] = None, file: Optional[str] = None, verbose: bool = False)

Bases: object

Decorator to specify a .json file that defines default values for the decorated callable. The location of the file can either be specified explicitly with file, or it can be retrieved when the decorated callable is called through an argument key/index given by key

Parameters
key: String, or integer, default=None

Used only if file is not also given. Determines a value for file based on the parameters passed to the decorated callable. If string, represents a key in kwargs passed to ParametersFromFile.__call__(). In other words, this names a keyword argument passed to the decorated callable. If key is integer, it represents an index in args passed to ParametersFromFile.__call__(), the value at which specifies a filepath containing the default parameters dict to use

file: String, default=None

If not None, key will be ignored, and file will be used as the filepath from which to read the dict of default parameters for the decorated callable

verbose: Boolean, default=False

If True, will log messages when invalid keys are found in the parameters file, and when keys are set to the default values in the parameters file. Else, logging is silenced

Notes

The order of precedence for determining the value of each parameter is as follows, with items at the top having the highest priority, and deferring only to the items below if their own value is not given:

  • 1) parameters explicitly passed to the callable decorated by ParametersFromFile,

  • 2) parameters in the .json file denoted by key or file,

  • 3) parameter defaults defined in the signature of the decorated callable

Examples

>>> from tempfile import TemporaryDirectory
>>> with TemporaryDirectory(dir="") as d:
...     write_json(f"{d}/config.json", dict(b="I came from config.json", c="Me too!"))
...     @ParametersFromFile(file=f"{d}/config.json")
...     def f_0(a="first_a", b="first_b", c="first_c"):
...         print(f"{a}   ...   {b}   ...   {c}")
...     @ParametersFromFile(key="config_file")
...     def f_1(a="second_a", b="second_b", c="second_c", config_file=None):
...         print(f"{a}   ...   {b}   ...   {c}")
...     f_0(c="Hello, there")
...     f_0(b="General Kenobi")
...     f_1()
...     f_1(a="Generic prequel meme", config_file=f"{d}/config.json")
...     f_1(c="This is where the fun begins", config_file=None)
first_a   ...   I came from config.json   ...   Hello, there
first_a   ...   General Kenobi   ...   Me too!
second_a   ...   second_b   ...   second_c
Generic prequel meme   ...   I came from config.json   ...   Me too!
second_a   ...   second_b   ...   This is where the fun begins

Methods

__call__

hyperparameter_hunter.utils.file_utils.real_name(path, root=None)
hyperparameter_hunter.utils.file_utils.print_tree(start_path, depth=-1, pretty=True)

Print directory/file tree structure

Parameters
start_path: String

Root directory path, whose children should be traversed and printed

depth: Integer, default=-1

Maximum number of subdirectories allowed to be between the root start_path and the current element. -1 allows all child directories beneath start_path to be traversed

pretty: Boolean, default=True

If True, directory names will be bolded

Examples

>>> from tempfile import TemporaryDirectory
>>> with TemporaryDirectory(dir="") as d:
...     os.mkdir(f"{d}/root")
...     os.mkdir(f"{d}/root/sub_a")
...     os.mkdir(f"{d}/root/sub_a/sub_b")
...     _ = open(f"{d}/root/file_0.txt", "w+")
...     _ = open(f"{d}/root/file_1.py", "w+")
...     _ = open(f"{d}/root/sub_a/file_2.py", "w+")
...     _ = open(f"{d}/root/sub_a/sub_b/file_3.txt", "w+")
...     _ = open(f"{d}/root/sub_a/sub_b/file_4.py", "w+")
...     print_tree(f"{d}/root", pretty=False)
...     print("#" * 50)
...     print_tree(f"{d}/root", depth=2, pretty=False)
...     print("#" * 50)
...     print_tree(f"{d}/root/", pretty=False)
|-- root/
|   |-- file_0.txt
|   |-- file_1.py
|   |-- sub_a/
|   |   |-- file_2.py
|   |   |-- sub_b/
|   |   |   |-- file_3.txt
|   |   |   |-- file_4.py
##################################################
|-- root/
|   |-- file_0.txt
|   |-- file_1.py
|   |-- sub_a/
|   |   |-- file_2.py
##################################################
root/
|-- file_0.txt
|-- file_1.py
|-- sub_a/
|   |-- file_2.py
|   |-- sub_b/
|   |   |-- file_3.txt
|   |   |-- file_4.py

hyperparameter_hunter.utils.general_utils module

This module defines assorted general-use utilities used throughout the library. The contents are primarily small functions that perform oft-repeated tasks

hyperparameter_hunter.utils.general_utils.deep_restricted_update(default_vals, new_vals, iter_attrs=None)

Return an updated dictionary that mirrors default_vals, except where the key in new_vals matches the path in default_vals, in which case the new_vals value is used

Parameters
default_vals: Dict

Dict containing the values to return if an alternative is not found in new_vals

new_vals: Dict

Dict whose keys are expected to be tuples corresponding to key paths in default_vals

iter_attrs: Callable, list of callables, or None, default=None

If callable, must evaluate to True or False when given three inputs: (path, key, value). Callable should return True if the current value should be entered by remap. If callable returns False, default_enter will be called. If iter_attrs is a list of callables, the value will be entered if any evaluates to True. If None, default_enter will be called

Returns
Dict, or None

Examples

>>> deep_restricted_update({'a': 1, 'b': 2}, {('b',): 'foo', ('c',): 'bar'})
{'a': 1, 'b': 'foo'}
>>> deep_restricted_update({'a': 1, 'b': {'b1': 2, 'b2': 3}}, {('b', 'b1'): 'foo', ('c', 'c1'): 'bar'})
{'a': 1, 'b': {'b1': 'foo', 'b2': 3}}
hyperparameter_hunter.utils.general_utils.extra_enter_attrs(iter_attrs: Union[Callable[[Tuple[str, ...], Union[str, tuple], Any], bool], List[Callable[[Tuple[str, ...], Union[str, tuple], Any], bool]]]) → Callable[[Tuple[str, ...], Union[str, tuple], Any], Tuple[Any, Iterable]]

Build an enter function intended for use with boltons_utils.remap that enables entrance into non-standard objects defined by iter_attrs and iteration over their attributes as dicts

Parameters
iter_attrs: Callable, list of callables, or None

If callable, must evaluate to True or False when given three inputs: (path, key, value). Callable should return True if the current value should be entered by remap. If callable returns False, default_enter will be called. If iter_attrs is a list of callables, the value will be entered if any evaluates to True. If None, default_enter will be called

Returns
_enter: Callable

Function to enter non-standard objects according to iter_attrs (via remap)

hyperparameter_hunter.utils.general_utils.flatten(l)
hyperparameter_hunter.utils.general_utils.to_snake_case(s)

Convert a string to snake-case format

Parameters
s: String

String to convert to snake-case

Returns
String

Snake-case formatted string

Notes

Adapted from https://gist.github.com/jaytaylor/3660565

Examples

>>> to_snake_case("snakesOnAPlane") == "snakes_on_a_plane"
True
>>> to_snake_case("SnakesOnAPlane") == "snakes_on_a_plane"
True
>>> to_snake_case("snakes_on_a_plane") == "snakes_on_a_plane"
True
>>> to_snake_case("IPhoneHysteria") == "i_phone_hysteria"
True
>>> to_snake_case("iPhoneHysteria") == "i_phone_hysteria"
True
hyperparameter_hunter.utils.general_utils.now_time()
hyperparameter_hunter.utils.general_utils.sec_to_hms(seconds, ms_places=5, as_str=False)

Convert seconds to hours, minutes, and seconds

Parameters
seconds: Integer

Number of total seconds to be converted to hours, minutes, seconds format

ms_places: Integer, default=5

Rounding precision for calculating number of seconds

as_str: Boolean, default=False

If True, return string “{hours} h, {minutes} m, {seconds} s”. Else, return a triple

Returns
String or tuple

If as_str=True, return a formatted string containing the hours, minutes, and seconds. Else, return a 3-item tuple of (hours, minutes, seconds)

Examples

>>> assert sec_to_hms(55, as_str=True) == '55 s'
>>> assert sec_to_hms(86400) == (24, 0, 0)
>>> assert sec_to_hms(86400, as_str=True) == '24 h'
>>> assert sec_to_hms(86370) == (23, 59, 30)
>>> assert sec_to_hms(86370, as_str=True) == '23 h, 59 m, 30 s'
hyperparameter_hunter.utils.general_utils.expand_mins_secs(mins, secs)

Format string expansion of mins, secs to the appropriate units up to (days, hours)

Parameters
mins: Integer

Number of minutes to be expanded to the appropriate string format

secs: Integer

Number of seconds to be expanded to the appropriate string format

Returns
String

Formatted pair of one of the following: (minutes, seconds); (hours, minutes); or (days, hours) depending on the appropriate units given mins

Examples

>>> assert expand_mins_secs(34, 57) == "34m57s"
>>> assert expand_mins_secs(72, 57) == "01h12m"
>>> assert expand_mins_secs(1501, 57) == "01d01h"
>>> assert expand_mins_secs(2880, 57) == "02d00h"
hyperparameter_hunter.utils.general_utils.to_standard_string(a_string)
hyperparameter_hunter.utils.general_utils.standard_equality(string_1, string_2)
class hyperparameter_hunter.utils.general_utils.Alias(primary_name, aliases)

Bases: object

Convert uses of aliases to primary_name upon calling the decorated function/method

Parameters
primary_name: String

Preferred name for the parameter, the value of which will be set to the value of the used alias. If primary_name is already explicitly used on call in addition to any aliases, the value of primary_name will remain unchanged. It only assumes the value of an alias if the primary_name is not used

aliases: List, string

One or multiple string aliases for primary_name. If primary_name is not used on call, its value will be set to that of a random alias in aliases. Before calling the decorated callable, all aliases are removed from its kwargs

Examples

>>> class Foo():
...     @Alias("a", ["a2"])
...     def __init__(self, a, b=None):
...         print(a, b)
>>> @Alias("a", ["a2"])
... @Alias("b", ["b2"])
... def bar(a, b=None):
...    print(a, b)
>>> foo = Foo(a2="x", b="y")
x y
>>> bar(a2="x", b2="y")
x y

Methods

__call__

hyperparameter_hunter.utils.general_utils.set_default_attr(obj, name, value)

Set the name attribute of obj to value if the attribute does not already exist

Parameters
obj: Object

Object whose name attribute will be returned (after setting it to value, if necessary)

name: String

Name of the attribute to set to value, or to return

value: Object

Default value to give to obj.name if the attribute does not already exist

Returns
Object

obj.name if it exists. Else, value

Examples

>>> foo = type("Foo", tuple(), {"my_attr": 32})
>>> set_default_attr(foo, "my_attr", 99)
32
>>> set_default_attr(foo, "other_attr", 9000)
9000
>>> assert foo.my_attr == 32
>>> assert foo.other_attr == 9000
hyperparameter_hunter.utils.general_utils.short_repr(values: Optional[tuple], affix_size=3) → Optional[tuple]

Make a shortened representation of an iterable, replacing the midsection with an ellipsis

Parameters
values: Tuple, list, or None

Iterable to shorten if necessary. If None, None will be returned

affix_size: Int, default=3

Number of elements in values to include at the beginning and at the end of the shortened representation. This is not the total number of values to include. An affix_size of 3 includes the first 3 elements in values, followed by an ellipsis, then the last 3 elements in values. The length of the returned representation will be (2 * affix_size + 1). If length of values is less than or equal to (2 * affix_size + 1), it is returned unchanged

Returns
Tuple, list, or None

Shortened representation of values if necessary. Otherwise, unchanged values

Examples

>>> short_repr(list("abcdefghijklmnopqrstuvwxyz"))
['a', 'b', 'c', ..., 'x', 'y', 'z']
>>> short_repr(tuple("abcdefghijklmnopqrstuvwxyz"), affix_size=1)
('a', ..., 'z')
>>> short_repr(list("foo"))
['f', 'o', 'o']
>>> short_repr(list("foo"), affix_size=1)
['f', 'o', 'o']
>>> short_repr(list("foo2"), affix_size=1)
['f', ..., '2']
>>> assert short_repr(None) is None
hyperparameter_hunter.utils.general_utils.subdict(d, keep=None, drop=None, key=None, value=None)

Compute the “subdictionary” of a dict, d

Parameters
d: Dict

Dict whose keys will be filtered according to keep and drop

keep: List, or callable, default=`d.keys()`

Keys to retain in the returned subdict. If callable, return boolean given key input. keep may contain keys not in d without raising errors. keep may be better described as the keys allowed to be in the returned dict, whether or not they are in d. This means that if keep consists solely of a key not in d, an empty dict will be returned

drop: List, or callable, default=[]

Keys to remove from the returned subdict. If callable, return boolean given key input. drop may contain keys not in d, which will simply be ignored

key: Callable, or None, default=None

Transformation to apply to the keys included in the returned subdictionary

value: Callable, or None, default=None

Transformation to apply to the values included in the returned subdictionary

Returns
Dict

New dict with any keys in drop removed and any keys in keep still present, provided they were in d. Calling subdict with neither keep nor drop is equivalent to dict(d)

Examples

>>> subdict({"a": 1, "b": 2})
{'a': 1, 'b': 2}
>>> subdict({"a": 1, "b": 2, "c": 3}, drop=["b", "c"])
{'a': 1}
>>> subdict({"a": 1, "b": 2, "c": 3}, keep=["a", "c"])
{'a': 1, 'c': 3}
>>> subdict({"a": 1, "b": 2, "c": 3}, drop=["b", "c"], key=lambda _: _.upper())
{'A': 1}
>>> subdict({"a": 1, "b": 2, "c": 3}, keep=["a", "c"], value=lambda _: _ * 10)
{'a': 10, 'c': 30}
>>> subdict({("foo", "a"): 1, ("foo", "b"): 2, ("bar", "c"): 3}, drop=lambda _: _[0] == "foo")
{('bar', 'c'): 3}
>>> subdict({("foo", "a"): 1, ("foo", "b"): 2, ("bar", "c"): 3}, keep=lambda _: _[0] == "foo")
{('foo', 'a'): 1, ('foo', 'b'): 2}
>>> subdict({(6, "a"): 1, (6, "b"): 2, (7, "c"): 3}, lambda _: _[0] == 6, key=lambda _: _[1])
{'a': 1, 'b': 2}
>>> subdict({"a": 1, "b": 2, "c": 3}, drop=["d"])
{'a': 1, 'b': 2, 'c': 3}
>>> subdict({"a": 1, "b": 2, "c": 3}, keep=["d"])
{}
>>> subdict({"a": 1, "b": 2, "c": 3}, keep=["b", "d"])
{'b': 2}
>>> subdict({"a": 1, "b": 2, "c": 3}, drop=["b", "c"], key="foo")
Traceback (most recent call last):
    File "general_utils.py", line ?, in subdict
TypeError: Expected callable `key` function
>>> subdict({"a": 1, "b": 2, "c": 3}, drop=["b", "c"], value="foo")
Traceback (most recent call last):
    File "general_utils.py", line ?, in subdict
TypeError: Expected callable `value` function
hyperparameter_hunter.utils.general_utils.multi_visit(*visitors) → callable

Build a remap-compatible visit function by chaining together multiple visit functions

Parameters
*visitors: Tuple[callable]

Any number of visit functions of the form expected by remap() that each accept three positional arguments: “path”, “key”, and “value”. visitors need not explicitly return any of the values usually expected of a visit function. If one of visitors does not return anything (or explicitly returns None), the next function in visitors is invoked with the same input values. visitors are invoked in order until one of them actually returns something

Returns
visit: Callable

visit function of form used by remap() that accepts three positional arguments: “path”, “key”, and “value”. Behaves as if each function in visitors had been invoked in sequence, returning the first non-null value returned by one of the visitors
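
The chaining contract above can be sketched without remap itself. The following is an illustrative reimplementation of the documented "first non-None result wins" behavior (the name multi_visit_sketch and the fallback of returning (key, value) unchanged are assumptions based on the description, not the library's code):

```python
def multi_visit_sketch(*visitors):
    # Try each visitor in order; a None return means "pass to the next one"
    def visit(path, key, value):
        for v in visitors:
            result = v(path, key, value)
            if result is not None:
                return result
        return key, value  # No visitor claimed the item: keep it unchanged
    return visit

def drop_private(path, key, value):
    # In remap's visit protocol, returning False drops the item
    return False if str(key).startswith("_") else None

def stringify_floats(path, key, value):
    return (key, str(value)) if isinstance(value, float) else None

visit = multi_visit_sketch(drop_private, stringify_floats)
assert visit((), "_secret", 1) is False        # First visitor answers
assert visit((), "pi", 3.14) == ("pi", "3.14")  # Second visitor answers
assert visit((), "a", 1) == ("a", 1)            # Neither answers: unchanged
```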

hyperparameter_hunter.utils.learning_utils module

This module defines simple utilities for making toy datasets to be used in testing/examples

hyperparameter_hunter.utils.learning_utils.get_breast_cancer_data(target='diagnosis')

Get the Wisconsin Breast Cancer classification dataset, formatted as a DataFrame

Parameters
target: String, default=”diagnosis”

What to name the column in df that contains the target output values

Returns
df: pandas.DataFrame

The breast cancer dataset, with friendly column names

hyperparameter_hunter.utils.learning_utils.get_pima_indians_data(target='class')

Get the Pima Indians Diabetes binary classification dataset, formatted as a DataFrame

Parameters
target: String, default=”class”

What to name the column in df that contains the target output values

Returns
df: pandas.DataFrame

The Pima Indians dataset, of shape (768, 8 + 1), with column names of:

“pregnancies”, “glucose”, “bp” (shortened from “blood_pressure”), “skin_thickness”, “insulin”, “bmi”, “dpf” (shortened from “diabetes_pedigree_function”), “age”, “class” (or given target column)

Notes

This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. Thanks to Jason Brownlee (of MachineLearningMastery.com), who has generously made a public repository to collect copies of all the datasets he uses in his tutorials

Examples

>>> get_pima_indians_data().head()
   pregnancies  glucose  bp  skin_thickness  insulin   bmi    dpf  age  class
0            6      148  72              35        0  33.6  0.627   50      1
1            1       85  66              29        0  26.6  0.351   31      0
2            8      183  64               0        0  23.3  0.672   32      1
3            1       89  66              23       94  28.1  0.167   21      0
4            0      137  40              35      168  43.1  2.288   33      1
hyperparameter_hunter.utils.learning_utils.get_boston_data()

Get SKLearn’s Boston House Prices regression dataset

Returns
df: pandas.DataFrame

The Boston House Prices dataset of shape (506, 13 + 1)

Notes

The intended target column in this dataset is “MEDV”; however, the weighted distances column “DIS” can also be used as the target column

Examples

>>> pd.set_option("display.max_columns", 10000)  # Ensure all columns are printed below
>>> pd.set_option("display.width", 110)  # Ensure all columns are printed below
>>> get_boston_data().head()
      CRIM    ZN  INDUS  CHAS    NOX     RM   AGE     DIS  RAD    TAX  PTRATIO       B  LSTAT  MEDV
0  0.00632  18.0   2.31   0.0  0.538  6.575  65.2  4.0900  1.0  296.0     15.3  396.90   4.98  24.0
1  0.02731   0.0   7.07   0.0  0.469  6.421  78.9  4.9671  2.0  242.0     17.8  396.90   9.14  21.6
2  0.02729   0.0   7.07   0.0  0.469  7.185  61.1  4.9671  2.0  242.0     17.8  392.83   4.03  34.7
3  0.03237   0.0   2.18   0.0  0.458  6.998  45.8  6.0622  3.0  222.0     18.7  394.63   2.94  33.4
4  0.06905   0.0   2.18   0.0  0.458  7.147  54.2  6.0622  3.0  222.0     18.7  396.90   5.33  36.2
hyperparameter_hunter.utils.learning_utils.get_diabetes_data(target='progression')

Get the SKLearn Diabetes regression dataset, formatted as a DataFrame

Parameters
target: String, default=”progression”

What to name the column in df that contains the target output values

Returns
df: pandas.DataFrame

The diabetes dataset, with friendly column names

hyperparameter_hunter.utils.learning_utils.get_iris_data(target='species', use_str_target=False)

Get the Iris classification dataset, formatted as a DataFrame

Parameters
target: String, default=”species”

What to name the column in df that contains the target output values

use_str_target: Boolean, default=False

If True, replace label-encoded target values with string labels

Returns
df: pandas.DataFrame

The Iris dataset, with friendly column names

Examples

>>> get_iris_data(use_str_target=False).sample(n=5, random_state=32)
     sepal_length_(cm)  sepal_width_(cm)  petal_length_(cm)  petal_width_(cm)  species
55                 5.7               2.8                4.5               1.3        1
22                 4.6               3.6                1.0               0.2        0
26                 5.0               3.4                1.6               0.4        0
56                 6.3               3.3                4.7               1.6        1
134                6.1               2.6                5.6               1.4        2
>>> get_iris_data(use_str_target=True).sample(n=5, random_state=32)
     sepal_length_(cm)  sepal_width_(cm)  petal_length_(cm)  petal_width_(cm)     species
55                 5.7               2.8                4.5               1.3  versicolor
22                 4.6               3.6                1.0               0.2      setosa
26                 5.0               3.4                1.6               0.4      setosa
56                 6.3               3.3                4.7               1.6  versicolor
134                6.1               2.6                5.6               1.4   virginica
hyperparameter_hunter.utils.learning_utils.get_toy_classification_data(target='target', n_samples=300, n_classes=2, shuffle=True, random_state=32, **kwargs)

Wrapper around sklearn.datasets.make_classification to produce a pandas.DataFrame

hyperparameter_hunter.utils.optimization_utils module

This module defines utility functions used to organize hyperparameter optimization, specifically the gathering of saved Experiment files in order to identify similar Experiments that can be used as learning material for the current OptimizationProtocol

hyperparameter_hunter.utils.parsing_utils module

This module contains utilities for parsing Python source code. Its primary tasks include the following: 1) stringifying Python source code; 2) traversing Abstract Syntax Trees, especially to locate imports; and 3) preparing and cleaning source code for reuse

Related

hyperparameter_hunter.compat.keras_optimization_helper

Uses hyperparameter_hunter.utils.parsing_utils to prepare for Keras optimization

Notes

Many of these utilities are modified versions of utilities originally from the Hyperas library. Thank you to the Hyperas creators and contributors for their excellent work and fascinating approach to Keras hyperparameter optimization. Without them, Keras hyperparameter optimization in hyperparameter_hunter would be far less pretty

hyperparameter_hunter.utils.parsing_utils.stringify_model_builder(build_fn)

Get the stringified Python source code of build_fn

Parameters
build_fn: Callable

A Keras model-building function

Returns
build_fn_str: String

A stringified version of build_fn

hyperparameter_hunter.utils.parsing_utils.build_temp_model_file(build_fn_str, source_script)

Construct a string containing extracted imports from both build_fn_str and source_script

Parameters
build_fn_str: Str

The stringified source code of a callable

source_script: Str

Absolute path to a Python file. Expected to end with ‘.py’, or ‘.ipynb’

Returns
temp_file_str: Str

Combination of extracted imports, and clean build_fn_str in Python script format

hyperparameter_hunter.utils.parsing_utils.read_source_script(filepath)

Read the contents of filepath

Parameters
filepath: Str

Absolute path to a Python file. Expected to end with ‘.py’, or ‘.ipynb’

Returns
source: Str

The contents of filepath

hyperparameter_hunter.utils.parsing_utils.write_python(source_str, filepath='temp_modified.py')

Save source_str to the file located at filepath

Parameters
source_str: String

The content to write to the file at filepath

filepath: String

The filepath of the file to which source_str should be written

class hyperparameter_hunter.utils.parsing_utils.ImportParser

Bases: ast.NodeVisitor

Methods

generic_visit(node)

Called if no explicit visitor function exists for a node.

visit(node)

Visit a node.

visit_Import(node)
visit_ImportFrom(node)
hyperparameter_hunter.utils.parsing_utils.extract_imports(source)

(Taken from hyperas.utils). Construct a string containing all imports from source

Parameters
source: String

A stringified fragment of source code

Returns
imports_str: String

The stringified imports from source
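
Import extraction of this kind is typically done by walking the AST. A minimal, self-contained sketch of the idea using the standard library's ast module (the name extract_imports_sketch is hypothetical; the real function may differ, e.g. in how it handles multi-line imports):

```python
import ast

def extract_imports_sketch(source):
    # Parse the source and keep the first line of every import node
    tree = ast.parse(source)
    lines = source.splitlines()
    import_lines = [
        lines[node.lineno - 1]
        for node in ast.walk(tree)
        if isinstance(node, (ast.Import, ast.ImportFrom))
    ]
    return "\n".join(import_lines)

src = "import os\nfrom json import dumps\n\nx = dumps({})\n"
assert extract_imports_sketch(src) == "import os\nfrom json import dumps"
```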

hyperparameter_hunter.utils.parsing_utils.remove_imports(source)

(Taken from hyperas.utils). Remove all import statements from a source fragment

Parameters
source: String

A stringified fragment of source code

Returns
non_import_lines: String

source, less any lines containing imports

hyperparameter_hunter.utils.parsing_utils.remove_comments(source)

(Taken from hyperas.utils). Remove all comments from source fragment

Parameters
source: String

A stringified fragment of source code

Returns
string: String

source, less any comments

hyperparameter_hunter.utils.result_utils module

This module defines default helper functions used during an Experiment’s result-saving process

Related

hyperparameter_hunter.environment

Uses the contents of hyperparameter_hunter.utils.result_utils to set default values to help process Experiments’ result files if they are not explicitly provided. These values are then used by hyperparameter_hunter.recorders

hyperparameter_hunter.recorders

This module uses certain attributes set by hyperparameter_hunter.environment.Environment (Environment.prediction_formatter, and Environment.do_full_save) for the purpose of formatting and saving Experiment result files. Those attributes are, by default, the utilities defined in hyperparameter_hunter.utils.result_utils

Notes

The utilities defined herein are weird for a couple reasons: 1) They don’t do much, and 2) Despite the fact that they don’t do much, they are extremely sensitive. Because they are default values for Environment attributes that are included when generating Environment.cross_experiment_key, any seemingly insignificant change to them is likely to result in an entirely different cross_experiment_key. This will, in turn, result in Experiments not matching with other similar Experiments during hyperparameter optimization, despite the fact that the changes may not have done anything at all. So be careful, here

hyperparameter_hunter.utils.result_utils.format_predictions(raw_predictions: numpy.array, dataset_df: pandas.core.frame.DataFrame, target_column: str, id_column: Optional[str] = None)

Organize components into a pandas.DataFrame that is properly formatted and ready to save

Parameters
raw_predictions: np.array

The actual predictions that were made and that should inhabit the column named target_column in the result

dataset_df: pd.DataFrame

The original data provided that yielded raw_predictions. If id_column is not None, it must be in dataset_df. In practice, expect this value to be one of the following: experiments.BaseExperiment.train_dataset, experiments.BaseExperiment.holdout_dataset, or experiments.BaseExperiment.test_dataset

target_column: str

The name for the result column containing raw_predictions

id_column: str, or None, default=None

If not None, must be the name of a column in dataset_df, the contents of which will be included as a column in the result and are assumed to be sample identifiers of some kind

Returns
predictions: pd.DataFrame

Dataframe containing the formatted predictions
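The assembly described above can be sketched as follows. This is a minimal illustration with pandas, not the library’s actual implementation; the helper name and the choice to place the identifier column first are assumptions for the example:

```python
import numpy as np
import pandas as pd

def format_predictions_sketch(raw_predictions, dataset_df, target_column, id_column=None):
    """Illustrative sketch: place predictions in a DataFrame under `target_column`,
    optionally carrying over the identifier column from `dataset_df`"""
    predictions = pd.DataFrame({target_column: raw_predictions})
    if id_column is not None:
        # Copy sample identifiers over, keeping them row-aligned with the predictions
        predictions.insert(0, id_column, dataset_df[id_column].values)
    return predictions

df = pd.DataFrame({"id": [101, 102, 103], "feature": [0.1, 0.2, 0.3]})
preds = format_predictions_sketch(np.array([0, 1, 1]), df, "target", id_column="id")
print(list(preds.columns))  # ['id', 'target']
```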

hyperparameter_hunter.utils.result_utils.default_do_full_save(result_description: dict) → bool

Determine whether an Experiment’s full result should be saved, based on its Description dict

Parameters
result_description: dict

The formatted description of the Experiment’s results

Notes

This function is useless. It is included as an example for proper implementation of custom do_full_save functions
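For illustration, a custom do_full_save might look like the sketch below, which saves full results only when an evaluation in the Description dict clears a threshold. The “final_evaluations”/“oof”/“roc_auc_score” key path and the 0.75 cutoff are assumptions for the example, not guaranteed key names:

```python
def do_full_save_if_score_above(result_description: dict) -> bool:
    """Save an Experiment's full results only when its out-of-fold ROC-AUC clears a
    threshold. Key names below are illustrative assumptions about the dict's layout"""
    try:
        return result_description["final_evaluations"]["oof"]["roc_auc_score"] > 0.75
    except (KeyError, TypeError):
        # Missing or unexpectedly-shaped description: err on the side of saving
        return True

good = {"final_evaluations": {"oof": {"roc_auc_score": 0.81}}}
bad = {"final_evaluations": {"oof": {"roc_auc_score": 0.60}}}
print(do_full_save_if_score_above(good), do_full_save_if_score_above(bad))  # True False
```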

hyperparameter_hunter.utils.version_utils module

This module defines utilities for comparing versions of the library (HHVersion), as well as deprecation utilities, namely Deprecated

Related

hyperparameter_hunter.exceptions

Defines the deprecation warnings issued by Deprecated

class hyperparameter_hunter.utils.version_utils.HHVersion(v_str: str)

Bases: object

Parse and compare HyperparameterHunter version strings

Comparisons must be performed with a valid version string or another HHVersion instance.

HyperparameterHunter follows the “<major>.<minor>.<micro>” versioning scheme, with a few other variants, but all start with a triplet of period-delimited numbers. Supported version schemes are as follows (numbers given in examples may be greater than 9 in practice):

  • Final Release Version: “1.0.2”, “2.2.0”, “3.0.0”, etc.

  • Alpha: “3.0.0alpha0”, “3.0.0a1”, “3.0.0a2”, etc.

  • Beta: “3.0.0beta0”, “3.0.0b1”, “3.0.0b2”, etc.

  • Release Candidate: “3.0.0rc0”, “3.0.0rc1”, “3.0.0rc2”, etc.

  • Development Version: “1.8.0.dev-f1234afa” (git commit hash appended)

  • Development Version (Pre-Release): “1.8.0a1.dev-f1234afa”, “1.8.0b2.dev-f1234afa”, “1.8.1rc1.dev-f1234afa”, etc.

  • Development Version (no git hash available): “1.8.0.dev-Unknown”

Parameters
v_str: String

HyperparameterHunter version string, such as hyperparameter_hunter.__version__

Notes

Thank you to the brilliant [Ralf Gommers](https://github.com/rgommers), author of SciPy’s [NumpyVersion class](https://github.com/scipy/scipy/blob/master/scipy/_lib/_version.py). He generously gave his permission to adapt his code for use here. Ralf Gommers: a gentleman and a scholar, and a selfless contributor to NumPy, SciPy, and countless other libraries

Examples

>>> from hyperparameter_hunter import __version__
>>> HHVersion(__version__) > "1.0.2"
True
>>> HHVersion(__version__) > "3.0.0a1"
True
>>> HHVersion(__version__) <= "999.999.999"
True
>>> HHVersion("2.1")  # Missing "micro" number
Traceback (most recent call last):
    File "version_utils.py", line ?, in HHVersion
ValueError: Not a valid HyperparameterHunter version string

Attributes
v_str: String

Original value provided as input on initialization

version: String

Main version segment string of “<major>.<minor>.<micro>”

major: Integer

Major version number

minor: Integer

Minor version number

micro: Integer

Micro version number

pre_release: String

Pre-release segment of v_str, or “final” if there is no pre-release segment. Starts with “a”/”alpha”, “b”/”beta”, or “rc” for alpha, beta, and release-candidate segments, respectively (“alpha” and “beta” are shortened to “a” and “b”). If v_str has no pre-release segment but is a development version, pre_release is an empty string

is_dev_version: Boolean

True if the final segment of v_str starts with “.dev”, denoting a development version. All development versions of the same release or pre-release are considered equal
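The supported version schemes and attributes above can be parsed with a pattern like the following sketch. This is a simplified illustration, not HHVersion’s actual implementation, and the function name is hypothetical:

```python
import re

# Simplified pattern for "<major>.<minor>.<micro>", an optional pre-release segment
# (alpha/a, beta/b, or rc plus a number), and an optional ".dev-" development suffix
VERSION_PATTERN = re.compile(
    r"^(?P<major>\d+)\.(?P<minor>\d+)\.(?P<micro>\d+)"
    r"(?P<pre_release>(?:alpha|beta|a|b|rc)\d+)?"
    r"(?P<dev>\.dev-\S+)?$"
)

def parse_version_sketch(v_str: str) -> dict:
    match = VERSION_PATTERN.match(v_str)
    if match is None:
        raise ValueError("Not a valid HyperparameterHunter version string")
    parts = match.groupdict()
    return {
        "major": int(parts["major"]),
        "minor": int(parts["minor"]),
        "micro": int(parts["micro"]),
        # "final" only when there is neither a pre-release segment nor a dev suffix
        "pre_release": parts["pre_release"] or ("" if parts["dev"] else "final"),
        "is_dev_version": parts["dev"] is not None,
    }

print(parse_version_sketch("1.8.0a1.dev-f1234afa"))
```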

class hyperparameter_hunter.utils.version_utils.Deprecated(v_deprecate=None, v_remove=None, v_current=None, details='')

Bases: object

Decorator to mark a function or class as deprecated. Issue warning when the function is called or the class is instantiated, and add a warning to the docstring. The optional details argument will be appended to the deprecation message and the docstring

Parameters
v_deprecate: String, default=None

Version in which the decorated callable is considered deprecated. This will usually be the next version to be released when the decorator is added. If None, deprecation will be immediate, and the v_remove and v_current arguments are ignored

v_remove: String, default=None

Version in which the decorated callable will be removed. If None, the callable is not currently planned to be removed. Cannot be set if v_deprecate = None

v_current: String, default=None

Source of version information for the currently running code. When v_current = None, it is impossible to determine whether the wrapped callable is actually within its deprecation period or due for removal, so a DeprecatedWarning is raised in all cases

details: String, default=””

Extra details added to callable docstring/warning, such as a suggested replacement

Notes

Thank you to the ingenious [Brian Curtin](https://github.com/briancurtin), author of the excellent [deprecation library](https://github.com/briancurtin/deprecation). He generously gave his permission to adapt his code for use here. Brian Curtin: his magnanimity is surpassed only by his intelligence and imitability

Methods

__call__(obj)

Call method on callable obj to deprecate
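A minimal sketch of how such a decorator could be used and implemented is shown below. This is illustrative only: the actual Deprecated class additionally compares v_deprecate and v_remove against v_current and rewrites the docstring, and the `old_helper`/`new_helper` names are hypothetical:

```python
import functools
import warnings

class DeprecatedSketch:
    """Illustrative stand-in for Deprecated: warn whenever the wrapped callable is
    invoked. The real class also performs version comparisons and docstring updates"""

    def __init__(self, v_deprecate=None, v_remove=None, v_current=None, details=""):
        self.v_deprecate = v_deprecate
        self.v_remove = v_remove
        self.details = details

    def __call__(self, obj):
        @functools.wraps(obj)
        def wrapped(*args, **kwargs):
            message = f"{obj.__name__} is deprecated as of v{self.v_deprecate}"
            if self.v_remove:
                message += f" and will be removed in v{self.v_remove}"
            if self.details:
                message += f". {self.details}"
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return obj(*args, **kwargs)
        return wrapped

@DeprecatedSketch(v_deprecate="3.0.0", v_remove="3.2.0", details="Use `new_helper` instead")
def old_helper(x):
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_helper(21)
print(result, caught[0].category.__name__)
```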

Module contents