YAAKOV BRESSLER

My Takeaways from the 2019 Toronto Machine Learning Society (TMLS) Annual Conference

11/24/2019

 
My month culminated with the TMLS (Toronto Machine Learning Society) Annual Conference, a 2-day event where industry leaders came together to discuss their research, innovations, and challenges.

Here are my takeaways:

Toronto as a Center for AI

The Canadian government is successfully positioning Toronto as a global center for state-of-the-art machine learning + AI innovation and adoption. A few reasons how and why:
  • The University of Toronto has a legacy of producing top scientists, most notably Geoffrey Hinton who is a professor there. (If you want to be the best, learn from the best.)
  • Shifts in immigration policy in the US and UK have made it more difficult for master's and PhD students to get visas after their schooling – Canada is now foreign talent's top choice.
  • The availability of entry-level data science jobs is propelling "4th wave" data scientists toward leadership roles in applied machine learning.
  • College education is not cost-prohibitive – think of the number of smart but financially constrained young people (in the US) whose grades suffer from the stress and limited time of working to pay for their schooling. Canada is letting those students succeed.

AI in Business

  • Many businesses which have relied on consulting firms for their AI until now are building (or acqui-hiring) their own Data Science teams. (Nike and Zodiac in 2018 as an example.)
  • As these businesses scale their AI production, they become more cognizant of data governance and the ethical use of AI – consumer trust & transparency are top of mind.

Applied AI

  • A "fourth wave" of Data Scientists is entering the field from non-software fields, bringing vast domain knowledge plus skills centered on business integration.
    • An informal and totally subjective summary of the history of DS progression:
    • Wave 1 ('90s–'05) were the early pioneers, with backgrounds in systems and engineering.
    • Wave 2 ('05–'15) came from applied mathematics.
    • Wave 3 ('15–'18) came from core sciences and academic research.
  • Many of the tools “1st and 2nd wave generation” Data Scientists have been hardcoding are being automated.
  • AutoML in the role of feature engineering is becoming more commonplace, especially for teams not sufficiently supported by data engineers.
  • Interactive visualization libraries and web dashboards are being simplified and made 100% Python-executable. (Ex: Plotly, Periscope, Tableau)
  • Facebook’s PyTorch and Google’s Tensorflow continue to be the two dominant open source libraries for deep learning.

State of the Art AI

  • Natural Language Processing (NLP), Voice, and Image / Video Recognition are the most effective use cases of massive (computationally demanding) Neural Networks.
  • Computational demands of these massive networks are the rate-limiting step of productionizing AI. Cloud computing is seen as a solution because of its massive parallelizability.
  • Google is trying (way too hard) to position Google Cloud as the #1 tool for deploying neural networks. (We get it… you’re not Amazon and you have a friendly interface. But stop pushing us… remember what happened with Google Plus?)
  • Quantum computing is too distant a possibility to be relevant yet.
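The parallelizability point is worth unpacking: scoring a batch of inputs is embarrassingly parallel, so it shards cleanly across cloud workers. A minimal sketch in plain Python, where local threads stand in for cloud machines and `predict` is an invented stand-in for an expensive forward pass (real compute-bound inference would use processes or separate machines, since the GIL limits threads):

```python
from concurrent.futures import ThreadPoolExecutor

def predict(x):
    # Stand-in for an expensive forward pass through a network.
    return x * x

def parallel_predict(batch, workers=4):
    # Shard the batch across workers; map preserves input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(predict, batch))

print(parallel_predict(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because each prediction is independent, throughput scales roughly with the number of workers you rent, which is exactly the economics cloud vendors are selling.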

What to expect in 2020

  • Data regulation will become an increasingly important aspect of politics.
  • A terrible data breach will occur (perhaps from an overseas attack).
  • When this breach occurs, companies will announce that they’re taking steps to becoming more “privacy focused” (kind of how we see “environmentally friendly”), meaning they’ll track and store less consumer data.
    • The New York Times has taken the lead, announcing that they’re dropping most social trackers from their site. (Always a step ahead…)
  • Less data will make deployment of massive neural networks more difficult and less effective.

What to expect in 2021

Assuming the above...
  • A demand for talent that can achieve results with less data will skyrocket. (People with these skills are in quantitative & applied mathematics.)
  • Subsequently, data science will become more centered on quantitative mathematics and experimental design.
    • Data Scientists using this methodology might choose to call their field something else. (Quantitative Science? Computational Deduction? Applied Mathematics Engineer?)
  • Further, alternative data will become more commonplace, since it may be considered “safer.” (Arguably it is not.)

That's it for now!
Photos from the conference:
  • Kathryn Hume on how AI can be used to determine intent.
  • "How Canada Wins" – a talk on policies encouraging innovation.
  • Chris Wiggins on how data science is used at the New York Times.
  • A joke in one of the presenters' talks – one of very few. Here, the presenter commented that after publishing his research, a slew of articles followed (by other scientists). The basic science is compelling. Its applied use (and thus fancier titles) is not.
  • A talk on latent stochastic differential equations. Essentially, how do you fill in data for missing time?

