This article is reproduced from stackoverflow.com.
Tags: machine-learning, python, tensorflow, iris-dataset

Error in my TensorFlow input function: "TypeError: List of Tensors when single Tensor expected"

Posted on 2020-03-30 21:14:14

Python: 3.6.9

TensorFlow: 1.15.0

Despite seeing answers to similar questions on SO, I have been unable to detect and resolve the bug in my code. So I have come here to ask you for your help.

I am training a classifier on the Iris dataset and I get the following error:

TypeError: List of Tensors when single Tensor expected

But before this error occurs, I see another error in the stack trace:

ValueError: Tensor("IteratorGetNext:4", shape=(10,), dtype=string)

where 10 is the batch size.


Relevant Code:

def input_data(features, labels, batch_size=1, epochs=None, shuffle=False):
    # Create dictionaries of features and labels
    features = {str(key):np.array(value) for key,value in dict(features).items()}
    labels = {str(labels.name):np.array(labels.values)}

    dataset = tf.data.Dataset.from_tensor_slices((features, labels))

    # `drop_remainder` discards last batch in the epoch if its size is less than `batch_size`
    dataset.batch(batch_size, drop_remainder=True).repeat(epochs)
    if shuffle:
        dataset.shuffle(buffer_size=100)

    features, labels = dataset.make_one_shot_iterator().get_next()
    return features, labels


training_input_fn = lambda: input_data(train_dataset_features, train_dataset_labels,
                                        batch_size=10, epochs=100, shuffle=True)
linear_classifier.train(input_fn=training_input_fn, steps=100)

Stack Trace:

INFO:tensorflow:Calling model_fn.

---------------------------------------------------------------------------

ValueError                                Traceback (most recent call last)

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/tensor_util.py in _AssertCompatible(values, dtype)
    323   try:
--> 324     fn(values)
    325   except ValueError as e:

20 frames

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/tensor_util.py in _check_not_tensor(values)
    275 def _check_not_tensor(values):
--> 276   _ = [_check_failed(v) for v in nest.flatten(values)
    277        if isinstance(v, ops.Tensor)]

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/tensor_util.py in <listcomp>(.0)
    276   _ = [_check_failed(v) for v in nest.flatten(values)
--> 277        if isinstance(v, ops.Tensor)]
    278 # pylint: enable=invalid-name

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/tensor_util.py in _check_failed(v)
    247   # it is safe to use here.
--> 248   raise ValueError(v)
    249 

ValueError: Tensor("IteratorGetNext:4", shape=(10,), dtype=string, device=/device:CPU:0)


During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)

<ipython-input-76-4dd60e9636ae> in <module>()
      1 linear_classifier.train(
      2     input_fn = training_input_fn,
----> 3     steps = 100
      4 )

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in train(self, input_fn, hooks, steps, max_steps, saving_listeners)
    368 
    369       saving_listeners = _check_listeners_type(saving_listeners)
--> 370       loss = self._train_model(input_fn, hooks, saving_listeners)
    371       logging.info('Loss for final step: %s.', loss)
    372       return self

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in _train_model(self, input_fn, hooks, saving_listeners)
   1159       return self._train_model_distributed(input_fn, hooks, saving_listeners)
   1160     else:
-> 1161       return self._train_model_default(input_fn, hooks, saving_listeners)
   1162 
   1163   def _train_model_default(self, input_fn, hooks, saving_listeners):

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in _train_model_default(self, input_fn, hooks, saving_listeners)
   1189       worker_hooks.extend(input_hooks)
   1190       estimator_spec = self._call_model_fn(
-> 1191           features, labels, ModeKeys.TRAIN, self.config)
   1192       global_step_tensor = training_util.get_global_step(g)
   1193       return self._train_with_estimator_spec(estimator_spec, worker_hooks,

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in _call_model_fn(self, features, labels, mode, config)
   1147 
   1148     logging.info('Calling model_fn.')
-> 1149     model_fn_results = self._model_fn(features=features, **kwargs)
   1150     logging.info('Done calling model_fn.')
   1151 

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/linear.py in _model_fn(features, labels, mode, config)
    989           partitioner=partitioner,
    990           config=config,
--> 991           sparse_combiner=sparse_combiner)
    992 
    993     super(LinearClassifier, self).__init__(

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/linear.py in _linear_model_fn(features, labels, mode, head, feature_columns, optimizer, partitioner, config, sparse_combiner)
    753           labels=labels,
    754           optimizer=optimizer,
--> 755           logits=logits)
    756 
    757 

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/head.py in create_estimator_spec(self, features, mode, logits, labels, optimizer, train_op_fn, regularization_losses)
    239           self._create_tpu_estimator_spec(
    240               features, mode, logits, labels, optimizer, train_op_fn,
--> 241               regularization_losses))
    242       return tpu_estimator_spec.as_estimator_spec()
    243     except NotImplementedError:

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/head.py in _create_tpu_estimator_spec(self, features, mode, logits, labels, optimizer, train_op_fn, regularization_losses)
    894 
    895       training_loss, unreduced_loss, weights, label_ids = self.create_loss(
--> 896           features=features, mode=mode, logits=logits, labels=labels)
    897       if regularization_losses:
    898         regularization_loss = math_ops.add_n(regularization_losses)

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/head.py in create_loss(***failed resolving arguments***)
    800     logits = ops.convert_to_tensor(logits)
    801     labels = _check_dense_labels_match_logits_and_reshape(
--> 802         labels=labels, logits=logits, expected_labels_dimension=1)
    803     label_ids = self._label_ids(labels)
    804     if self._loss_fn:

/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/head.py in _check_dense_labels_match_logits_and_reshape(labels, logits, expected_labels_dimension)
    305         'returns labels.')
    306   with ops.name_scope(None, 'labels', (labels, logits)) as scope:
--> 307     labels = sparse_tensor.convert_to_tensor_or_sparse_tensor(labels)
    308     if isinstance(labels, sparse_tensor.SparseTensor):
    309       raise ValueError(

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/sparse_tensor.py in convert_to_tensor_or_sparse_tensor(value, dtype, name)
    412                          (dtype.name, value.dtype.name))
    413     return value
--> 414   return ops.internal_convert_to_tensor(value, dtype=dtype, name=name)
    415 
    416 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accepted_result_types)
   1295 
   1296     if ret is None:
-> 1297       ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
   1298 
   1299     if ret is NotImplemented:

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
    284                                          as_ref=False):
    285   _ = as_ref
--> 286   return constant(v, dtype=dtype, name=name)
    287 
    288 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/constant_op.py in constant(value, dtype, shape, name)
    225   """
    226   return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 227                         allow_broadcast=True)
    228 
    229 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
    263       tensor_util.make_tensor_proto(
    264           value, dtype=dtype, shape=shape, verify_shape=verify_shape,
--> 265           allow_broadcast=allow_broadcast))
    266   dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
    267   const_tensor = g.create_op(

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
    447       nparray = np.empty(shape, dtype=np_dt)
    448     else:
--> 449       _AssertCompatible(values, dtype)
    450       nparray = np.array(values, dtype=np_dt)
    451       # check to them.

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/tensor_util.py in _AssertCompatible(values, dtype)
    326     [mismatch] = e.args
    327     if dtype is None:
--> 328       raise TypeError("List of Tensors when single Tensor expected")
    329     else:
    330       raise TypeError("Expected %s, got %s of type '%s' instead." %

TypeError: List of Tensors when single Tensor expected

Values of the dictionaries features and labels before they are passed to tf.data.Dataset.from_tensor_slices():

# features
{'0': array([7.2, 6. , 4.3, 5.7, 4.7, 7. , 5. , 5.4, 5.8, 5.6, 4.6, 6.4, 4.9,
       6.2, 4.9, 5. , 5. , 6.9, 4.8, 6.1, 5. , 5.3, 6.3, 6.7, 6.1, 5.7,
       4.4, 5.8, 4.6, 7.9, 4.5, 6.2, 7.7, 5.5, 6. , 6.3, 5.1, 7.3, 5.2,
       6.2, 7.2, 4.8, 5.2, 6.6, 6.3, 5.6, 5. , 7.7, 6.7, 6.9, 5.5, 6.5,
       6.7, 6.9, 5.1, 6. , 5.5, 6.1, 5.7, 5.4, 5.1, 6.7, 4.6, 6.8, 5.1,
       4.9, 6.1, 6.3, 6.1, 7.4, 4.8, 5.1, 5.7, 6.7, 6.8, 6. , 6.2, 6.5,
       7.2, 4.7, 5.8, 6.9, 6.7, 6. , 6.1, 7.6, 4.4, 6.4, 5.8, 5.7, 5.6,
       6.3, 5.1, 5.1, 6.5, 6.4, 6.3, 5.8, 6.3, 5.8, 5.2, 6.5, 5.5, 4.9,
       4.4]), '1': array([3.6, 2.7, 3. , 3. , 3.2, 3.2, 3.4, 3.9, 2.7, 3. , 3.6, 3.1, 2.4,
       3.4, 2.5, 3. , 3.2, 3.1, 3.1, 3. , 3.5, 3.7, 2.5, 3.1, 2.8, 3.8,
       2.9, 2.7, 3.4, 3.8, 2.3, 2.8, 3. , 3.5, 2.2, 2.5, 3.8, 2.9, 2.7,
       2.9, 3.2, 3.4, 3.4, 2.9, 2.7, 2.5, 2. , 2.6, 2.5, 3.2, 2.4, 3. ,
       3.3, 3.1, 2.5, 3. , 2.3, 3. , 2.5, 3.4, 3.5, 3.1, 3.2, 3.2, 3.8,
       3.1, 2.9, 2.9, 2.6, 2.8, 3. , 3.8, 2.9, 3. , 3. , 2.2, 2.2, 3.2,
       3. , 3.2, 2.7, 3.1, 3. , 3.4, 2.8, 3. , 3. , 2.9, 2.8, 4.4, 2.7,
       3.4, 3.4, 3.5, 2.8, 3.2, 3.3, 4. , 2.3, 2.7, 4.1, 3. , 2.6, 3.6,
       3.2]), '2': array([6.1, 5.1, 1.1, 4.2, 1.3, 4.7, 1.6, 1.7, 5.1, 4.5, 1. , 5.5, 3.3,
       5.4, 4.5, 1.6, 1.2, 5.4, 1.6, 4.9, 1.3, 1.5, 4.9, 5.6, 4. , 1.7,
       1.4, 5.1, 1.4, 6.4, 1.3, 4.8, 6.1, 1.3, 5. , 5. , 1.9, 6.3, 3.9,
       4.3, 6. , 1.9, 1.4, 4.6, 4.9, 3.9, 3.5, 6.9, 5.8, 5.7, 3.7, 5.8,
       5.7, 5.1, 3. , 4.8, 4. , 4.6, 5. , 1.5, 1.4, 4.4, 1.4, 5.9, 1.5,
       1.5, 4.7, 5.6, 5.6, 6.1, 1.4, 1.6, 4.2, 5. , 5.5, 4. , 4.5, 5.1,
       5.8, 1.6, 4.1, 4.9, 5.2, 4.5, 4.7, 6.6, 1.3, 4.3, 5.1, 1.5, 4.2,
       5.6, 1.5, 1.4, 4.6, 5.3, 6. , 1.2, 4.4, 3.9, 1.5, 5.5, 4.4, 1.4,
       1.3]), '3': array([2.5, 1.6, 0.1, 1.2, 0.2, 1.4, 0.4, 0.4, 1.9, 1.5, 0.2, 1.8, 1. ,
       2.3, 1.7, 0.2, 0.2, 2.1, 0.2, 1.8, 0.3, 0.2, 1.5, 2.4, 1.3, 0.3,
       0.2, 1.9, 0.3, 2. , 0.3, 1.8, 2.3, 0.2, 1.5, 1.9, 0.4, 1.8, 1.4,
       1.3, 1.8, 0.2, 0.2, 1.3, 1.8, 1.1, 1. , 2.3, 1.8, 2.3, 1. , 2.2,
       2.1, 2.3, 1.1, 1.8, 1.3, 1.4, 2. , 0.4, 0.3, 1.4, 0.2, 2.3, 0.3,
       0.2, 1.4, 1.8, 1.4, 1.9, 0.3, 0.2, 1.3, 1.7, 2.1, 1. , 1.5, 2. ,
       1.6, 0.2, 1. , 1.5, 2.3, 1.6, 1.2, 2.1, 0.2, 1.3, 2.4, 0.4, 1.3,
       2.4, 0.2, 0.2, 1.5, 2.3, 2.5, 0.2, 1.3, 1.2, 0.1, 1.8, 1.2, 0.1,
       0.2])}

# labels
{'4': array(['Iris-virginica', 'Iris-versicolor', 'Iris-setosa',
       'Iris-versicolor', 'Iris-setosa', 'Iris-versicolor', 'Iris-setosa',
       'Iris-setosa', 'Iris-virginica', 'Iris-versicolor', 'Iris-setosa',
       'Iris-virginica', 'Iris-versicolor', 'Iris-virginica',
       'Iris-virginica', 'Iris-setosa', 'Iris-setosa', 'Iris-virginica',
       'Iris-setosa', 'Iris-virginica', 'Iris-setosa', 'Iris-setosa',
       'Iris-versicolor', 'Iris-virginica', 'Iris-versicolor',
       'Iris-setosa', 'Iris-setosa', 'Iris-virginica', 'Iris-setosa',
       'Iris-virginica', 'Iris-setosa', 'Iris-virginica',
       'Iris-virginica', 'Iris-setosa', 'Iris-virginica',
       'Iris-virginica', 'Iris-setosa', 'Iris-virginica',
       'Iris-versicolor', 'Iris-versicolor', 'Iris-virginica',
       'Iris-setosa', 'Iris-setosa', 'Iris-versicolor', 'Iris-virginica',
       'Iris-versicolor', 'Iris-versicolor', 'Iris-virginica',
       'Iris-virginica', 'Iris-virginica', 'Iris-versicolor',
       'Iris-virginica', 'Iris-virginica', 'Iris-virginica',
       'Iris-versicolor', 'Iris-virginica', 'Iris-versicolor',
       'Iris-versicolor', 'Iris-virginica', 'Iris-setosa', 'Iris-setosa',
       'Iris-versicolor', 'Iris-setosa', 'Iris-virginica', 'Iris-setosa',
       'Iris-setosa', 'Iris-versicolor', 'Iris-virginica',
       'Iris-virginica', 'Iris-virginica', 'Iris-setosa', 'Iris-setosa',
       'Iris-versicolor', 'Iris-versicolor', 'Iris-virginica',
       'Iris-versicolor', 'Iris-versicolor', 'Iris-virginica',
       'Iris-virginica', 'Iris-setosa', 'Iris-versicolor',
       'Iris-versicolor', 'Iris-virginica', 'Iris-versicolor',
       'Iris-versicolor', 'Iris-virginica', 'Iris-setosa',
       'Iris-versicolor', 'Iris-virginica', 'Iris-setosa',
       'Iris-versicolor', 'Iris-virginica', 'Iris-setosa', 'Iris-setosa',
       'Iris-versicolor', 'Iris-virginica', 'Iris-virginica',
       'Iris-setosa', 'Iris-versicolor', 'Iris-versicolor', 'Iris-setosa',
       'Iris-virginica', 'Iris-versicolor', 'Iris-setosa', 'Iris-setosa'],
      dtype=object)}
Asked by: Uzair Zia
Viewed: 371

Answer by Uzair Zia (2020-02-01 16:42):

As pointed out by @GPhilo in a comment on the question:

String labels cannot be used for training; they must be converted to integers. Therefore, we can map each class to an integer and use the new 'numerical' labels for training.

So in the above case, the classes can be mapped as follows:

'Iris-setosa' -> 0

'Iris-versicolor' -> 1

'Iris-virginica' -> 2

One way to code this in Python using Pandas is:

import numpy as np
import pandas as pd

# 'string_labels' are the labels in string format,
# provided in a list-like structure (e.g. a pd.Series)
numerical_labels = pd.Categorical(string_labels).codes

# the above 'numerical_labels' is an array of dtype 'int8';
# convert it into a pd.Series of dtype 'int64'
numerical_labels = pd.Series(numerical_labels, dtype=np.int64)
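
For completeness, here is a minimal sketch of how the converted labels could be plugged back into the input_data function from the question. It assumes, as in the question, that train_dataset_features is a DataFrame and train_dataset_labels is the pd.Series of string labels; the names label_map, numeric_labels and the series name 'species' are illustrative only, not part of the original code.

import numpy as np
import pandas as pd

# Hypothetical helper: record which integer code corresponds to which class
# (pd.Categorical sorts the categories, so setosa=0, versicolor=1, virginica=2).
label_map = {name: code for code, name in
             enumerate(pd.Categorical(train_dataset_labels).categories)}

# Encode the string labels as int64 and keep them as a named pd.Series,
# since input_data() reads labels.name and labels.values.
numeric_labels = pd.Series(pd.Categorical(train_dataset_labels).codes,
                           dtype=np.int64, name='species')

# Train on the numeric labels instead of the string ones.
training_input_fn = lambda: input_data(train_dataset_features, numeric_labels,
                                       batch_size=10, epochs=100, shuffle=True)
linear_classifier.train(input_fn=training_input_fn, steps=100)

Note that with three classes the LinearClassifier also needs to be constructed with n_classes=3 (the default is 2), otherwise training would fail for a different reason.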