Python tensorflow.python.ops.nn.in_top_k() Examples
The following are 10 code examples of tensorflow.python.ops.nn.in_top_k(). You can go to the original project or source file by following the links above each example. You may also want to check out all available functions/classes of the module tensorflow.python.ops.nn, or try the search function.
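Before the examples, here is a minimal sketch of what nn.in_top_k computes. It assumes a TF 1.x environment (the code on this page is from TF 1.x-era projects), and the literal values are made up for illustration:

# Minimal sketch of nn.in_top_k semantics (TF 1.x assumed; values are illustrative).
import tensorflow as tf
from tensorflow.python.ops import nn

predictions = tf.constant([[0.1, 0.8, 0.1],   # row 0: class 1 is the top prediction
                           [0.6, 0.3, 0.1]])  # row 1: class 0 is the top prediction
targets = tf.constant([1, 2])                 # true class id per row

in_top_1 = nn.in_top_k(predictions, targets, 1)

with tf.Session() as sess:
  print(sess.run(in_top_1))  # -> [ True False]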
Example #1
Source File: backend.py From lambda-packs with MIT License
def in_top_k(predictions, targets, k):
  """Returns whether the `targets` are in the top `k` `predictions`.

  Arguments:
      predictions: A tensor of shape `(batch_size, classes)` and type `float32`.
      targets: A 1D tensor of length `batch_size` and type `int32` or `int64`.
      k: An `int`, number of top elements to consider.

  Returns:
      A 1D tensor of length `batch_size` and type `bool`. `output[i]` is `True` if
      `predictions[i, targets[i]]` is within top-`k` values of `predictions[i]`.
  """
  return nn.in_top_k(predictions, targets, k)


# CONVOLUTIONS
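For context, a hedged usage sketch of this Keras backend wrapper. It assumes a TF 1.x install where tf.keras.backend exposes the same in_top_k wrapper; the tensor values are invented for illustration:

# Hypothetical usage of the backend wrapper above (TF 1.x assumed).
import tensorflow as tf

K = tf.keras.backend
preds = K.constant([[0.2, 0.5, 0.3],
                    [0.7, 0.1, 0.2]])        # (batch_size, classes), float32
labels = K.constant([1, 2], dtype='int32')   # (batch_size,), int32

hits = K.in_top_k(preds, labels, 2)          # bool tensor of shape (batch_size,)
print(K.eval(hits))                          # -> [ True  True]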
Example #2
Source File: backend.py From Serverless-Deep-Learning-with-TensorFlow-and-AWS-Lambda with MIT License
def in_top_k(predictions, targets, k):
  """Returns whether the `targets` are in the top `k` `predictions`.

  Arguments:
      predictions: A tensor of shape `(batch_size, classes)` and type `float32`.
      targets: A 1D tensor of length `batch_size` and type `int32` or `int64`.
      k: An `int`, number of top elements to consider.

  Returns:
      A 1D tensor of length `batch_size` and type `bool`. `output[i]` is `True` if
      `predictions[i, targets[i]]` is within top-`k` values of `predictions[i]`.
  """
  return nn.in_top_k(predictions, targets, k)


# CONVOLUTIONS
Example #3
Source File: eval_metrics.py From lambda-packs with MIT License
def _top_k_generator(k):
  def _top_k(probabilities, targets):
    targets = math_ops.to_int32(targets)
    if targets.get_shape().ndims > 1:
      targets = array_ops.squeeze(targets, squeeze_dims=[1])
    return metric_ops.streaming_mean(nn.in_top_k(probabilities, targets, k))
  return _top_k
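A hedged sketch of how such a generator might be wired into a TF 1.x evaluation graph. The placeholder shapes and the surrounding streaming-metric pattern are assumptions for illustration, and the module-level imports used by eval_metrics.py above (math_ops, array_ops, metric_ops, nn) are presumed to be in scope:

# Hypothetical wiring of _top_k_generator into a TF 1.x evaluation graph.
import tensorflow as tf

probabilities = tf.placeholder(tf.float32, shape=[None, 10])  # per-class model outputs
targets = tf.placeholder(tf.int64, shape=[None])              # ground-truth class ids

top_3_fn = _top_k_generator(3)                 # callable: (probs, targets) -> (value, update_op)
top_3_value, top_3_update = top_3_fn(probabilities, targets)

# Streaming-metric pattern: initialize local variables, run the update op once
# per evaluation batch, then read the accumulated value at the end.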
Example #4
Source File: eval_metrics.py From auto-alt-text-lambda-api with MIT License
def _top_k_generator(k):
  def _top_k(probabilities, targets):
    targets = math_ops.to_int32(targets)
    if targets.get_shape().ndims > 1:
      targets = array_ops.squeeze(targets, squeeze_dims=[1])
    return metric_ops.streaming_mean(nn.in_top_k(probabilities, targets, k))
  return _top_k
Example #5
Source File: eval_metrics.py From keras-lambda with MIT License
def _top_k_generator(k):
  def _top_k(probabilities, targets):
    targets = math_ops.to_int32(targets)
    if targets.get_shape().ndims > 1:
      targets = array_ops.squeeze(targets, squeeze_dims=[1])
    return metric_ops.streaming_mean(nn.in_top_k(probabilities, targets, k))
  return _top_k
Example #6
Source File: metric_ops.py From lambda-packs with MIT License
def streaming_recall_at_k(predictions, labels, k, weights=None,
                          metrics_collections=None, updates_collections=None,
                          name=None):
  """Computes the recall@k of the predictions with respect to dense labels.

  The `streaming_recall_at_k` function creates two local variables, `total` and
  `count`, that are used to compute the recall@k frequency. This frequency is
  ultimately returned as `recall_at_<k>`: an idempotent operation that simply
  divides `total` by `count`.

  For estimation of the metric over a stream of data, the function creates an
  `update_op` operation that updates these variables and returns the
  `recall_at_<k>`. Internally, an `in_top_k` operation computes a `Tensor` with
  shape [batch_size] whose elements indicate whether or not the corresponding
  label is in the top `k` `predictions`. Then `update_op` increments `total`
  with the reduced sum of `weights` where `in_top_k` is `True`, and it
  increments `count` with the reduced sum of `weights`.

  If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

  Args:
    predictions: A float `Tensor` of dimension [batch_size, num_classes].
    labels: A `Tensor` of dimension [batch_size] whose type is in `int32`,
      `int64`.
    k: The number of top elements to look at for computing recall.
    weights: `Tensor` whose rank is either 0, or the same rank as `labels`, and
      must be broadcastable to `labels` (i.e., all dimensions must be either
      `1`, or the same as the corresponding `labels` dimension).
    metrics_collections: An optional list of collections that `recall_at_k`
      should be added to.
    updates_collections: An optional list of collections `update_op` should be
      added to.
    name: An optional variable_scope name.

  Returns:
    recall_at_k: A `Tensor` representing the recall@k, the fraction of labels
      which fall into the top `k` predictions.
    update_op: An operation that increments the `total` and `count` variables
      appropriately and whose value matches `recall_at_k`.

  Raises:
    ValueError: If `predictions` and `labels` have mismatched shapes, or if
      `weights` is not `None` and its shape doesn't match `predictions`, or if
      either `metrics_collections` or `updates_collections` are not a list or
      tuple.
  """
  in_top_k = math_ops.to_float(nn.in_top_k(predictions, labels, k))
  return streaming_mean(in_top_k, weights, metrics_collections,
                        updates_collections, name or _at_k_name('recall', k))


# TODO(ptucker): Validate range of values in labels?
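A hedged sketch of the usual value_op/update_op pattern for this streaming metric. The placeholder shapes, the eval_batches() iterator, and the feed_dict loop are assumptions for illustration:

# Hypothetical TF 1.x evaluation loop around streaming_recall_at_k.
import tensorflow as tf

logits = tf.placeholder(tf.float32, shape=[None, 1000])  # [batch_size, num_classes]
labels = tf.placeholder(tf.int64, shape=[None])          # dense class ids, [batch_size]

recall_at_5, update_op = streaming_recall_at_k(logits, labels, k=5)

with tf.Session() as sess:
  # `total` and `count` are local variables created by the metric.
  sess.run(tf.local_variables_initializer())
  for batch_logits, batch_labels in eval_batches():      # eval_batches() is hypothetical
    sess.run(update_op, feed_dict={logits: batch_logits, labels: batch_labels})
  print('recall@5:', sess.run(recall_at_5))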
Example #7
Source File: metric_ops.py From auto-alt-text-lambda-api with MIT License
def streaming_recall_at_k(predictions, labels, k, weights=None,
                          metrics_collections=None, updates_collections=None,
                          name=None):
  """Computes the recall@k of the predictions with respect to dense labels.

  The `streaming_recall_at_k` function creates two local variables, `total` and
  `count`, that are used to compute the recall@k frequency. This frequency is
  ultimately returned as `recall_at_<k>`: an idempotent operation that simply
  divides `total` by `count`.

  For estimation of the metric over a stream of data, the function creates an
  `update_op` operation that updates these variables and returns the
  `recall_at_<k>`. Internally, an `in_top_k` operation computes a `Tensor` with
  shape [batch_size] whose elements indicate whether or not the corresponding
  label is in the top `k` `predictions`. Then `update_op` increments `total`
  with the reduced sum of `weights` where `in_top_k` is `True`, and it
  increments `count` with the reduced sum of `weights`.

  If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

  Args:
    predictions: A float `Tensor` of dimension [batch_size, num_classes].
    labels: A `Tensor` of dimension [batch_size] whose type is in `int32`,
      `int64`.
    k: The number of top elements to look at for computing recall.
    weights: `Tensor` whose rank is either 0, or the same rank as `labels`, and
      must be broadcastable to `labels` (i.e., all dimensions must be either
      `1`, or the same as the corresponding `labels` dimension).
    metrics_collections: An optional list of collections that `recall_at_k`
      should be added to.
    updates_collections: An optional list of collections `update_op` should be
      added to.
    name: An optional variable_scope name.

  Returns:
    recall_at_k: A `Tensor` representing the recall@k, the fraction of labels
      which fall into the top `k` predictions.
    update_op: An operation that increments the `total` and `count` variables
      appropriately and whose value matches `recall_at_k`.

  Raises:
    ValueError: If `predictions` and `labels` have mismatched shapes, or if
      `weights` is not `None` and its shape doesn't match `predictions`, or if
      either `metrics_collections` or `updates_collections` are not a list or
      tuple.
  """
  in_top_k = math_ops.to_float(nn.in_top_k(predictions, labels, k))
  return streaming_mean(in_top_k, weights, metrics_collections,
                        updates_collections, name or _at_k_name('recall', k))


# TODO(ptucker): Validate range of values in labels?
Example #8
Source File: metric_ops.py From tf-slim with Apache License 2.0
def streaming_recall_at_k(predictions, labels, k, weights=None,
                          metrics_collections=None, updates_collections=None,
                          name=None):
  """Computes the recall@k of the predictions with respect to dense labels.

  The `streaming_recall_at_k` function creates two local variables, `total` and
  `count`, that are used to compute the recall@k frequency. This frequency is
  ultimately returned as `recall_at_<k>`: an idempotent operation that simply
  divides `total` by `count`.

  For estimation of the metric over a stream of data, the function creates an
  `update_op` operation that updates these variables and returns the
  `recall_at_<k>`. Internally, an `in_top_k` operation computes a `Tensor` with
  shape [batch_size] whose elements indicate whether or not the corresponding
  label is in the top `k` `predictions`. Then `update_op` increments `total`
  with the reduced sum of `weights` where `in_top_k` is `True`, and it
  increments `count` with the reduced sum of `weights`.

  If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

  Args:
    predictions: A float `Tensor` of dimension [batch_size, num_classes].
    labels: A `Tensor` of dimension [batch_size] whose type is in `int32`,
      `int64`.
    k: The number of top elements to look at for computing recall.
    weights: `Tensor` whose rank is either 0, or the same rank as `labels`, and
      must be broadcastable to `labels` (i.e., all dimensions must be either
      `1`, or the same as the corresponding `labels` dimension).
    metrics_collections: An optional list of collections that `recall_at_k`
      should be added to.
    updates_collections: An optional list of collections `update_op` should be
      added to.
    name: An optional variable_scope name.

  Returns:
    recall_at_k: A `Tensor` representing the recall@k, the fraction of labels
      which fall into the top `k` predictions.
    update_op: An operation that increments the `total` and `count` variables
      appropriately and whose value matches `recall_at_k`.

  Raises:
    ValueError: If `predictions` and `labels` have mismatched shapes, or if
      `weights` is not `None` and its shape doesn't match `predictions`, or if
      either `metrics_collections` or `updates_collections` are not a list or
      tuple.
  """
  in_top_k = math_ops.cast(nn.in_top_k(predictions, labels, k), dtypes.float32)
  return streaming_mean(in_top_k, weights, metrics_collections,
                        updates_collections, name or _at_k_name('recall', k))
Example #9
Source File: metric_ops.py From deep_image_model with Apache License 2.0
def streaming_recall_at_k(predictions, labels, k, weights=None,
                          metrics_collections=None, updates_collections=None,
                          name=None):
  """Computes the recall@k of the predictions with respect to dense labels.

  The `streaming_recall_at_k` function creates two local variables, `total` and
  `count`, that are used to compute the recall@k frequency. This frequency is
  ultimately returned as `recall_at_<k>`: an idempotent operation that simply
  divides `total` by `count`.

  For estimation of the metric over a stream of data, the function creates an
  `update_op` operation that updates these variables and returns the
  `recall_at_<k>`. Internally, an `in_top_k` operation computes a `Tensor` with
  shape [batch_size] whose elements indicate whether or not the corresponding
  label is in the top `k` `predictions`. Then `update_op` increments `total`
  with the reduced sum of `weights` where `in_top_k` is `True`, and it
  increments `count` with the reduced sum of `weights`.

  If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

  Args:
    predictions: A floating point tensor of dimension [batch_size, num_classes]
    labels: A tensor of dimension [batch_size] whose type is in `int32`,
      `int64`.
    k: The number of top elements to look at for computing recall.
    weights: An optional `Tensor` whose shape is broadcastable to `predictions`.
    metrics_collections: An optional list of collections that `recall_at_k`
      should be added to.
    updates_collections: An optional list of collections `update_op` should be
      added to.
    name: An optional variable_scope name.

  Returns:
    recall_at_k: A tensor representing the recall@k, the fraction of labels
      which fall into the top `k` predictions.
    update_op: An operation that increments the `total` and `count` variables
      appropriately and whose value matches `recall_at_k`.

  Raises:
    ValueError: If `predictions` and `labels` have mismatched shapes, or if
      `weights` is not `None` and its shape doesn't match `predictions`, or if
      either `metrics_collections` or `updates_collections` are not a list or
      tuple.
  """
  in_top_k = math_ops.to_float(nn.in_top_k(predictions, labels, k))
  return streaming_mean(in_top_k, weights, metrics_collections,
                        updates_collections, name or _at_k_name('recall', k))


# TODO(ptucker): Validate range of values in labels?
Example #10
Source File: metric_ops.py From keras-lambda with MIT License
def streaming_recall_at_k(predictions, labels, k, weights=None,
                          metrics_collections=None, updates_collections=None,
                          name=None):
  """Computes the recall@k of the predictions with respect to dense labels.

  The `streaming_recall_at_k` function creates two local variables, `total` and
  `count`, that are used to compute the recall@k frequency. This frequency is
  ultimately returned as `recall_at_<k>`: an idempotent operation that simply
  divides `total` by `count`.

  For estimation of the metric over a stream of data, the function creates an
  `update_op` operation that updates these variables and returns the
  `recall_at_<k>`. Internally, an `in_top_k` operation computes a `Tensor` with
  shape [batch_size] whose elements indicate whether or not the corresponding
  label is in the top `k` `predictions`. Then `update_op` increments `total`
  with the reduced sum of `weights` where `in_top_k` is `True`, and it
  increments `count` with the reduced sum of `weights`.

  If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.

  Args:
    predictions: A float `Tensor` of dimension [batch_size, num_classes].
    labels: A `Tensor` of dimension [batch_size] whose type is in `int32`,
      `int64`.
    k: The number of top elements to look at for computing recall.
    weights: `Tensor` whose rank is either 0, or the same rank as `labels`, and
      must be broadcastable to `labels` (i.e., all dimensions must be either
      `1`, or the same as the corresponding `labels` dimension).
    metrics_collections: An optional list of collections that `recall_at_k`
      should be added to.
    updates_collections: An optional list of collections `update_op` should be
      added to.
    name: An optional variable_scope name.

  Returns:
    recall_at_k: A `Tensor` representing the recall@k, the fraction of labels
      which fall into the top `k` predictions.
    update_op: An operation that increments the `total` and `count` variables
      appropriately and whose value matches `recall_at_k`.

  Raises:
    ValueError: If `predictions` and `labels` have mismatched shapes, or if
      `weights` is not `None` and its shape doesn't match `predictions`, or if
      either `metrics_collections` or `updates_collections` are not a list or
      tuple.
  """
  in_top_k = math_ops.to_float(nn.in_top_k(predictions, labels, k))
  return streaming_mean(in_top_k, weights, metrics_collections,
                        updates_collections, name or _at_k_name('recall', k))


# TODO(ptucker): Validate range of values in labels?