WARNING:tensorflow:max_values is deprecated, use max_tokens instead.
WARNING:tensorflow:vocab_size is deprecated, please use vocabulary_size.
WARNING:tensorflow:max_tokens is deprecated, please use num_tokens instead.

Here is what the above code is doing (a runnable sketch follows the list):
1. We’re creating a new instance of the Tokenizer class.
2. We’re passing the text to the fit_on_texts() method. This method updates the internal vocabulary based on the passed text.
3. We’re calling the texts_to_sequences() method on the text to be encoded. This method converts each word in the text to an integer index based on the internal vocabulary.
4. We’re padding the encoded sequences to the same length.
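Since the snippet itself isn't reproduced here, below is a minimal sketch of those four steps using the Keras preprocessing API. The sample `texts` are a hypothetical stand-in for the post's actual input.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical sample input; the original post's text is not shown here.
texts = ["the cat sat on the mat", "the dog ate my homework"]

# 1. Create a new Tokenizer instance.
tokenizer = Tokenizer()

# 2. Build the internal vocabulary from the texts.
tokenizer.fit_on_texts(texts)

# 3. Encode each word as its integer index in that vocabulary.
sequences = tokenizer.texts_to_sequences(texts)

# 4. Pad the encoded sequences to a common length.
padded = pad_sequences(sequences, padding="post")

print(tokenizer.word_index)  # e.g. {'the': 1, 'cat': 2, ...}
print(padded)
```

With `padding="post"`, shorter sequences are zero-filled at the end; drop that argument to pad at the front, which is the Keras default.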
