Beyond n-grams: word embeddings

Feature Engineering for NLP in Python

Rounak Banik

Data Scientist

The problem with BoW and tf-idf

'I am happy'
'I am joyous'
'I am sad'
BoW and tf-idf treat all three sentences as equally dissimilar: 'happy', 'joyous', and 'sad' are just distinct tokens, so the near-synonymy of the first two sentences is lost.
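A minimal sketch of that problem (an illustration, not code from the course) with a hand-rolled bag-of-words:

```python
# Illustration (not from the course): a minimal bag-of-words shows why
# BoW cannot tell synonyms ('happy'/'joyous') from antonyms ('happy'/'sad').
sents = ["I am happy", "I am joyous", "I am sad"]
vocab = sorted({w for s in sents for w in s.lower().split()})

def bow(sentence):
    words = sentence.lower().split()
    return [words.count(term) for term in vocab]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

vecs = [bow(s) for s in sents]
# Each pair of sentences overlaps only on 'i' and 'am', so BoW scores
# 'happy' vs 'joyous' exactly the same as 'happy' vs 'sad'.
print(dot(vecs[0], vecs[1]), dot(vecs[0], vecs[2]))
```

Both dot products come out identical, so no BoW-based similarity measure can rank 'joyous' closer to 'happy' than 'sad' is.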

Word embeddings

  • Mapping words into an n-dimensional vector space
  • Produced using deep learning and huge amounts of data
  • Discern how similar two words are to each other
  • Used to detect synonyms and antonyms
  • Capture complex relationships
    • King − Queen ≈ Man − Woman
    • France − Paris ≈ Russia − Moscow
  • Dependent on the spaCy model; independent of the dataset you use
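The analogy bullets above can be sketched with hand-picked 3-D toy vectors (an assumption for illustration; real embeddings have hundreds of dimensions and are learned, not chosen):

```python
# Toy illustration of the analogy structure king - man + woman ≈ queen.
# These 3-D vectors are made up for the example, not real embeddings.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.2, 0.7],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(a * a for a in v) ** 0.5
    return dot / (norm_u * norm_v)

# king - man + woman should land closest to queen
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
best = max(emb, key=lambda word: cosine(emb[word], target))
print(best)
```

With real embeddings the offsets are only approximate, but the nearest-neighbor search works the same way.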

Word embeddings using spaCy

import spacy

# Load model and create Doc object
nlp = spacy.load('en_core_web_lg')
doc = nlp('I am happy')
# Generate word vectors for each token
for token in doc:
  print(token.vector)
[-1.0747459e+00  4.8677087e-02  5.6630421e+00  1.6680446e+00
 -1.3194644e+00 -1.5142369e+00  1.1940931e+00 -3.0168812e+00
 ...

Word similarities

doc = nlp("happy joyous sad")
for token1 in doc:
  for token2 in doc:
    print(token1.text, token2.text, token1.similarity(token2))
happy happy 1.0
happy joyous 0.63244456
happy sad 0.37338886
joyous happy 0.63244456
joyous joyous 1.0
joyous sad 0.5340932
...
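The scores above behave like cosine similarity of the word vectors (spaCy's default for `.similarity`): identical tokens score 1.0 and the table is symmetric. A small sketch of that measure on made-up vectors:

```python
# Sketch of cosine similarity, the measure spaCy's .similarity defaults to.
# u and v are arbitrary example vectors, not real word embeddings.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(a * a for a in v) ** 0.5
    return dot / (norm_u * norm_v)

u, v = [1.0, 2.0, 3.0], [2.0, 3.0, 1.0]
print(cosine(u, u))  # identical vectors score 1.0 (up to float rounding)
print(cosine(u, v) == cosine(v, u))  # symmetric, as in the table above
```

This also explains why `happy`/`joyous` (0.63) outscores `happy`/`sad` (0.37): their vectors point in more similar directions.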

Document similarities

# Generate doc objects
sent1 = nlp("I am happy")
sent2 = nlp("I am sad")
sent3 = nlp("I am joyous")
# Compute similarity between sent1 and sent2
sent1.similarity(sent2)
0.9273363837282105
# Compute similarity between sent1 and sent3
sent1.similarity(sent3)
0.9403554938594568
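By default a spaCy `Doc` vector is the average of its tokens' vectors, which is why both scores above are so high: the shared words 'I am' dominate the average. A sketch of that averaging on tiny made-up word vectors:

```python
# Sketch (assumption: spaCy's default Doc vector is the mean of its
# token vectors). The 2-D word vectors here are made up for illustration.
word_vecs = {
    "i":     [0.1, 0.3],
    "am":    [0.2, 0.2],
    "happy": [0.9, 0.1],
}

def doc_vector(tokens):
    """Average the word vectors dimension by dimension."""
    vecs = [word_vecs[t] for t in tokens]
    n = len(vecs)
    return [sum(dim) / n for dim in zip(*vecs)]

print(doc_vector(["i", "am", "happy"]))
```

Because two of the three averaged vectors are identical across 'I am happy' and 'I am sad', the document vectors stay close even though the sentiments are opposite.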

Let's practice!
