qualia_core.evaluation package
Module contents
- class qualia_core.evaluation.Evaluator[source]
Bases: ABC

- apply_dataaugmentations(framework: LearningFramework[Any], dataaugmentations: list[DataAugmentation] | None, test_x: numpy.typing.NDArray[np.float32], test_y: numpy.typing.NDArray[np.int32]) tuple[numpy.typing.NDArray[np.float32], numpy.typing.NDArray[np.int32]][source]
Apply evaluation qualia_core.dataaugmentation.DataAugmentation.DataAugmentation to the dataset.
Only the DataAugmentation modules with DataAugmentation.evaluate set are applied. This should not be used to apply actual data augmentation to the data; use it for conversion or transform DataAugmentation modules instead.
- Parameters:
framework – The qualia_core.learningframework.LearningFramework.LearningFramework instance providing the LearningFramework.apply_dataaugmentation() method compatible with the provided DataAugmentation
dataaugmentations – List of DataAugmentation objects
test_x – Input data to apply data augmentation to
test_y – Input labels to apply data augmentation to
- Returns:
Tuple of data and labels after applying the DataAugmentation modules sequentially
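The filter-and-apply behaviour described above can be sketched without qualia_core installed. The `FakeDataAugmentation` stand-in and `apply_eval_augmentations` helper below are hypothetical names, assuming only what the documentation states: each module carries an `evaluate` flag, and only flagged modules are applied, in order.

```python
from dataclasses import dataclass
from typing import Callable

import numpy as np

@dataclass
class FakeDataAugmentation:
    """Stand-in for a DataAugmentation module: `evaluate` gates application."""
    evaluate: bool
    fn: Callable[[np.ndarray], np.ndarray]

def apply_eval_augmentations(dataaugmentations, test_x, test_y):
    """Apply only the modules flagged for evaluation, sequentially."""
    for da in dataaugmentations or []:
        if da.evaluate:  # skip training-only augmentations
            test_x = da.fn(test_x)
    return test_x, test_y

# A conversion-style module (rescaling) flagged for evaluation,
# and a training-only module that must be skipped.
scale = FakeDataAugmentation(evaluate=True, fn=lambda x: x / 255.0)
noise = FakeDataAugmentation(evaluate=False, fn=lambda x: x + 1.0)
x = np.array([255.0, 127.5], dtype=np.float32)
y = np.array([0, 1], dtype=np.int32)
x2, y2 = apply_eval_augmentations([scale, noise], x, y)  # only `scale` applied
```

The real method additionally routes each module through the framework's `apply_dataaugmentation()`, which this sketch omits.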
- compute_accuracy(preds: list[int], truth: numpy.typing.NDArray[np.int32]) float[source]
Compute accuracy from the target results.
- Parameters:
preds – List of predicted class indices from inference on the target
truth – Array of one-hot encoded ground truth labels
- Returns:
Accuracy (micro) between 0 and 1
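A minimal sketch of micro accuracy over predicted class indices and one-hot ground truth, assuming only the signature documented above (the actual implementation may differ):

```python
import numpy as np

def compute_accuracy(preds, truth):
    """Micro accuracy: fraction of predictions matching the argmax of one-hot truth."""
    labels = truth.argmax(axis=-1)  # decode one-hot ground truth to class indices
    correct = sum(int(p == t) for p, t in zip(preds, labels))
    return correct / len(preds)

truth = np.eye(3, dtype=np.int32)[[0, 1, 2, 1]]  # one-hot labels 0, 1, 2, 1
preds = [0, 1, 1, 1]
acc = compute_accuracy(preds, truth)  # 3 of 4 correct -> 0.75
```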
- abstractmethod evaluate(framework: LearningFramework[Any], model_kind: str, dataset: RawDataModel, target: str, tag: str, limit: int | None = None, dataaugmentations: list[DataAugmentation] | None = None) Stats | None[source]
- limit_dataset(test_x: numpy.typing.NDArray[np.float32], test_y: numpy.typing.NDArray[np.int32], limit: int | None) tuple[numpy.typing.NDArray[np.float32], numpy.typing.NDArray[np.int32]][source]
Truncate dataset to limit samples.
- Parameters:
test_x – Input data
test_y – Input labels
limit – Number of samples to truncate to; data is returned as-is if None or 0
- Returns:
Tuple of data and labels limited to limit samples
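The truncation rule is simple enough to sketch directly; this illustrative version assumes only the documented behaviour (pass-through for None or 0, otherwise keep the first `limit` samples):

```python
import numpy as np

def limit_dataset(test_x, test_y, limit):
    """Keep the first `limit` samples; return data unchanged if limit is None or 0."""
    if not limit:  # covers both None and 0
        return test_x, test_y
    return test_x[:limit], test_y[:limit]

x = np.arange(10, dtype=np.float32)
y = np.arange(10, dtype=np.int32)
x5, y5 = limit_dataset(x, y, 5)        # truncated to 5 samples
x_all, y_all = limit_dataset(x, y, None)  # returned as-is
```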
- shuffle_dataset(test_x: numpy.typing.NDArray[np.float32], test_y: numpy.typing.NDArray[np.int32]) tuple[numpy.typing.NDArray[np.float32], numpy.typing.NDArray[np.int32]][source]
Shuffle the input data, keeping the labels in the same order as the shuffled data.
Shuffling uses the seeded shared random generator from qualia_core.random.shared.
- Parameters:
test_x – Input data
test_y – Input labels
- Returns:
Tuple of shuffled data and labels
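The key property here is that data and labels stay paired after shuffling. A common way to achieve this, sketched below with a local seeded generator standing in for qualia_core.random.shared, is to draw one permutation and index both arrays with it:

```python
import numpy as np

# Stand-in for the shared seeded generator; the real one lives in
# qualia_core.random.shared and the seed shown here is arbitrary.
shared_rng = np.random.default_rng(seed=42)

def shuffle_dataset(test_x, test_y):
    """Index data and labels with the same permutation so pairs stay aligned."""
    perm = shared_rng.permutation(len(test_x))
    return test_x[perm], test_y[perm]

x = np.arange(8, dtype=np.float32)
y = x.astype(np.int32)  # label i matches sample i
xs, ys = shuffle_dataset(x, y)  # order changes, pairing does not
```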
- class qualia_core.evaluation.Stats(name: str = '', i: int = -1, quantization: str = 'float32', params: int = -1, mem_params: int = -1, accuracy: float = -1, avg_time: float = -1, rom_size: int = -1, ram_size: int = -1, metrics: dict[str, float] = <factory>)[source]
Bases: object

- asnamedtuple() StatsFields[source]
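The Stats fields and defaults above can be mirrored with a plain dataclass. This is an illustrative sketch, not the actual implementation: in particular, the `StatsFields` namedtuple built here is a hypothetical equivalent of the documented return type of asnamedtuple().

```python
from collections import namedtuple
from dataclasses import asdict, dataclass, field

@dataclass
class Stats:
    """Mirror of the documented qualia_core.evaluation.Stats fields and defaults."""
    name: str = ''
    i: int = -1
    quantization: str = 'float32'
    params: int = -1
    mem_params: int = -1
    accuracy: float = -1
    avg_time: float = -1
    rom_size: int = -1
    ram_size: int = -1
    metrics: dict = field(default_factory=dict)  # the <factory> default above

    def asnamedtuple(self):
        """Hypothetical StatsFields conversion: one namedtuple field per dataclass field."""
        d = asdict(self)
        StatsFields = namedtuple('StatsFields', d.keys())
        return StatsFields(**d)

s = Stats(name='cnn', accuracy=0.91)
t = s.asnamedtuple()  # immutable view with the same field names
```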