live@Manning conferences: Natural Language Processing
live@Manning conferences: Natural Language Processing ran on August 11, 2020.
live@Manning conferences are one-day technology conferences from Manning Publications. Manning authors and other industry experts deliver live coding sessions, deep dives, and tutorials. Watch for free on Twitch.
free one-day conference
All live@Manning conferences are free to attend.
talks from experts
Featuring expert speakers and ten-minute lightning talks.
live on twitch
No travel needed. live@Manning conferences stream globally via Twitch.

00:06 “Deep Transfer Learning for Natural Language Processing” with Dipanjan Sarkar
1:01:44 Lightning Talk | “TopiQAL: Topic Modeling Based Question Answering Model using BioBERT” with Hamsa Shwetha Venkataram
1:12:34 “Getting Started with NLP” with Ekaterina Kochmar, author of “Getting Started with Natural Language Processing”
2:05:35 “Getting Familiar with Transformers in NLP” with Matteus Tanha
3:03:42 Lightning Talk | “Bots That Make Us Smarter” with Hobson Lane, co-author of “Natural Language Processing in Action”
3:18:19 “Understanding Alexa” with Jeff Blankenburg

Missed live@Manning conferences: Natural Language Processing? Rewatch now on Manning’s YouTube channel.
conference speakers
Dipanjan Sarkar
The intent of this session is to journey through the recent advancements in deep transfer learning for NLP by taking a look at various state-of-the-art models and methodologies. These will include: Pre-trained embeddings for Deep Learning Models (FastText with CNNs/Bi-directional LSTMs + Attention), Universal Embeddings (Sentence Encoders, NNLMs), and Transformers.

We will also look at the power of some of these models, especially transformers, to solve diverse problems like summarization, entity recognition, question answering, sentiment analysis, and classification. Plus lots of hands-on examples leveraging Python, TensorFlow, and the popular transformers library from Hugging Face, a small taste of which is sketched below.
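As a rough taste of that hands-on style (not the session’s actual code), here is a minimal sketch using the Hugging Face transformers pipeline API, which loads a pre-trained transformer and applies it to sentiment analysis with no task-specific training; the input sentences are invented for illustration:

# Minimal transfer-learning sketch: a pre-trained transformer applied
# to sentiment analysis via the Hugging Face pipeline API.
from transformers import pipeline

# Downloads a pre-trained model on first use; no task-specific training needed.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "Transfer learning makes NLP prototyping remarkably fast.",
    "This model's predictions were disappointing.",
])
for result in results:
    print(result)  # e.g. {'label': 'POSITIVE', 'score': 0.99...}

The same pattern swaps in other tasks the session covers, such as "summarization", "ner", or "question-answering", by changing the pipeline name.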
Dipanjan (DJ) Sarkar is a Data Science Lead at Applied Materials, leading advanced analytics efforts around computer vision, natural language processing, and deep learning. He is also a Google Developer Expert in Machine Learning. He has consulted and worked with several startups as well as Fortune 500 companies like Intel, and open source organizations like Red Hat/IBM. He primarily works on leveraging data science, machine learning, and deep learning to build large-scale intelligent systems. He holds a master of technology degree with specializations in Data Science and Software Engineering.
Ekaterina Kochmar
This session will teach you how to build your own NLP application for spam detection completely from scratch. No prior experience with NLP is required – just come along to the session and learn some useful skills on the spot! Basic Python programming skills will be a plus.
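The session builds everything from scratch; purely to illustrate the task (and not as the session’s approach), here is a minimal bag-of-words spam classifier using scikit-learn, with a toy corpus standing in for real labeled data:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data; a real spam detector would use a labeled corpus.
messages = [
    "WINNER!! Claim your free prize now",
    "Lowest prices on meds, click here",
    "Are we still meeting for lunch tomorrow?",
    "Here are the notes from today's lecture",
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()          # bag-of-words features
features = vectorizer.fit_transform(messages)

classifier = MultinomialNB()            # classic Naive Bayes spam filter
classifier.fit(features, labels)

test = vectorizer.transform(["Claim your free lunch prize"])
print(classifier.predict(test))         # e.g. ['spam']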
Ekaterina Kochmar is the author of Getting Started with Natural Language Processing. She is an Affiliated Lecturer and a Senior Research Associate at the Natural Language and Information Processing group of the Department of Computer Science and Technology, University of Cambridge. She holds an MA in Computational Linguistics, an MPhil in Advanced Computer Science, and a PhD in Natural Language Processing. Ekaterina has written blog posts on the implementation of algorithms and NLP applications from scratch. She has also lectured for workshops and bootcamps on Machine Learning and Data Science with Python.
Jeff Blankenburg
This session will unpack how Amazon’s Alexa works, show you how to build your own voice user interfaces, and demonstrate the best practices you should consider as you build the future of ambient computing. We will cover topics like natural language understanding, intents, slots, and serverless functions in a fun, easy-to-understand format.
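To make intents and slots concrete, here is a hedged sketch of a custom skill handler using the ask-sdk-core Python SDK; the intent name OrderCoffeeIntent and its size slot are made up for illustration:

from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class CoffeeIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # Fires when Alexa resolves the user's utterance to this intent.
        return is_intent_name("OrderCoffeeIntent")(handler_input)  # hypothetical intent

    def handle(self, handler_input):
        # Slots carry the variable pieces of the utterance, e.g. "large".
        slots = handler_input.request_envelope.request.intent.slots
        size = slots["size"].value  # hypothetical slot name
        return handler_input.response_builder.speak(
            f"Ordering a {size} coffee."
        ).response

sb = SkillBuilder()
sb.add_request_handler(CoffeeIntentHandler())
handler = sb.lambda_handler()  # entry point for an AWS Lambda serverless function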
Jeff Blankenburg spent the early part of his career in digital advertising, building websites for Victoria’s Secret, Abercrombie & Fitch, and Ford Motor Company, among others. He also spent 8 years at Microsoft, primarily as a technology evangelist for any new tech he could get his hands on.

Today, he works on the Amazon Alexa team helping developers make Alexa even smarter. Jeff has also spoken at conferences all over the world, including London, Munich, India, Tokyo, Sydney, and New York, covering topics ranging from software development technologies to soft skill techniques. He also serves as an organizer for the Stir Trek conference.
Matteus Tanha
The current state-of-the-art architectures in NLP are neural networks that use a mechanism called attention. These architectures are called transformers, and they have been outperforming other models on tasks such as summarization, text classification, translation, and question answering. In this presentation, we will get familiar with the transformers package developed by Hugging Face, which provides a developer-friendly Python library for running transformer models on a variety of problems.
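For a sense of how developer-friendly the package is, here is a minimal sketch of extractive question answering with its pipeline API; the question and context text are invented for illustration:

from transformers import pipeline

# The pipeline wraps tokenization, model inference, and answer decoding.
qa = pipeline("question-answering")

answer = qa(
    question="What mechanism do transformers rely on?",
    context="Transformers are neural network architectures built around "
            "an attention mechanism, and they now dominate NLP benchmarks.",
)
print(answer)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'an attention mechanism'}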
Matteus has been developing machine learning systems for the last 10 years and has focused mainly on Natural Language Processing (NLP) for the last 5 years. He earned his Ph.D. in Computational Chemistry from Carnegie Mellon University, where his research specialized in machine learning applied to quantum chemistry. He has taught technical courses at both the high school and university levels, and he is currently finishing a Manning liveProject on NLP focusing on transformers for question answering.
lightning talks
Hobson Lane
The world is full of bots that make us dumber, virtual assistants and search engines that feed us misinformation. But fortunately, those same corporations that manipulate us have released their code and models to help us build better ones. In this talk, I’ll show you how to build an open domain question answering system and search engine that make you smarter by filtering out the noise.
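The talk’s own system isn’t reproduced here; as a rough illustration of the retrieval step such a question answering search engine needs, here is a minimal TF-IDF ranker in scikit-learn over a toy corpus (all documents and the query are invented):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in corpus; a real system would index millions of passages.
docs = [
    "The attention mechanism lets transformers weigh context words.",
    "Click here for one weird trick to get rich fast.",
    "Open domain question answering retrieves passages before reading them.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

query = "How does open domain question answering work?"
scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]

# Rank passages by relevance; a reader model would then extract the answer,
# and low-scoring noise (like the clickbait line) falls to the bottom.
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.2f}  {doc}")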
Hobson Lane is an experienced NLP engineer and co-author of Natural Language Processing in Action.
Hamsa Shwetha Venkataram
In this talk, we will present the hierarchical approach to answer topical questions regarding COVID-19 based on the medical research corpus that the White House and a coalition of leading research groups have prepared as the COVID-19 Open Research Dataset (CORD-19). CORD-19 is a resource of over 29,000 scholarly articles, including over 13,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. This task involves providing a way to model the topics discussed in these papers in an unsupervised setting to ask relevant questions, and summarize the text using open source tools and cloud platforms.

Each of these tasks is independent yet interconnected in the proposed pipeline, helping us reach the goal of answering critical questions in response to this unprecedented pandemic.
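The talk’s pipeline uses BioBERT on CORD-19; as a much smaller illustration of just the unsupervised topic-modeling stage (not the actual TopiQAL code), here is a sketch using scikit-learn’s LDA on a handful of invented stand-in abstracts:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical miniature stand-ins for CORD-19 abstracts.
abstracts = [
    "Viral transmission rates and incubation periods of coronaviruses.",
    "Protein structure of the SARS-CoV-2 spike and vaccine candidates.",
    "Hospital capacity, ventilators, and intensive care outcomes.",
    "Antibody response and immunity after coronavirus infection.",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)

# Discover topics without any labels (unsupervised).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per topic; questions can then be routed to the
# topic (and papers) they best match before a QA model extracts answers.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:]]
    print(f"Topic {idx}: {top}")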
Prior to arriving at the ‘Center of the Universe’, aka NASA Jet Propulsion Laboratory, as a Data Scientist working on Voyager and Mars 2020, Hamsa Shwetha Venkataram started her career as an Applications Developer at Oracle Financial Services, Bangalore in 2013 after earning her Bachelor’s in Computer Science from Visvesvaraya Technological University. Four years later, she went on to pursue an MS in Computer Science at the University of Southern California, graduating in 2018, along with a hobby diploma in tasting coffees of the world. Trained in the performing arts of Indian classical dance and music, she believes storytelling is as important as the content itself, and enjoys the process of creating a solution with a compelling story for the problem at hand. When not getting her hands dirty with data, she loves to hike and think about the cosmos.