Microsoft on the Team Data Science Process (TDSP) lifecycle - "The Team Data Science Process (TDSP) is an agile, iterative data science methodology to deliver predictive analytics solutions and intelligent applications efficiently. TDSP helps improve team collaboration and learning by suggesting how team roles work best together. TDSP includes best practices and structures from Microsoft and other industry leaders to help toward successful implementation of data science initiatives. The goal is to help companies fully realize the benefits of their analytics program.
This article provides an overview of TDSP and its main components. We provide a generic description of the process here that can be implemented with different kinds of tools. A more detailed description of the project tasks and roles involved in the lifecycle of the process is provided in additional linked topics. Guidance on how to implement the TDSP using a specific set of Microsoft tools and infrastructure that we use to implement the TDSP in our teams is also provided."
"When I used to do consulting, I’d always seek to understand an organization’s context for developing data projects, based on these considerations:
- Strategy: What is the organization trying to do (objective) and what can it change to do it better (levers)?
- Data: Is the organization capturing necessary data and making it available?
- Analytics: What kinds of insights would be useful to the organization?
- Implementation: What organizational capabilities does it have?
- Maintenance: What systems are in place to track changes in the operational environment?
- Constraints: What constraints need to be considered in each of the above areas?"
- Advice for a data scientist: business KPIs are not research KPIs, etc.
- Full-stack data science, by Uri Weiss. Wrong credits? Please contact me.
- DS vs. DA vs. MLE (data scientist vs. data analyst vs. ML engineer) - the most intensive diagram post ever. This is the motherlode of figure references.
References:
- Why data science needs generalists, not specialists
- (good advice) Building a data science function (team)
- Netflix culture
- Reed Hastings on Netflix's keeper test - "netflixs-keeper-test-is-the-secret-to-a-successful-workforce"
- How to manage a data science research team using agile methodology (neither Scrum nor Kanban)
- Workflow for data science research projects
- Tips for data science research management
- IMO a really bad implementation of agile for data science projects
- Squads, tribes, guilds: don't be like Spotify
- DEEPNET.TV YouTube (excellent)
- Mitchell's ML lectures (too long)
- Quoc Le (Google) wrote DNN tutorials and a 3-hour video (not intuitive)
- KDnuggets: numpy, pandas, scikit-learn tutorials
- Deep learning online book (too wordy)
- Genetic algorithms for hyperparameter search - better than brute-force grid search, obviously
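The evolutionary-search idea behind that link can be sketched in a few lines. This is a toy illustration, not a real training loop: the `fitness` function below is a made-up stand-in for a validation score, and the two hyperparameters (learning rate, depth) are arbitrary examples.

```python
import random

# Toy "validation score" for a (learning rate, depth) pair. In practice this
# would train and evaluate a real model; here it is a made-up function whose
# optimum sits at lr=0.1, depth=6, purely for illustration.
def fitness(lr, depth):
    return -((lr - 0.1) ** 2) - ((depth - 6) ** 2) * 0.01

def random_individual():
    return (random.uniform(0.001, 1.0), random.randint(1, 12))

def mutate(ind):
    # Small random perturbation of each hyperparameter, clamped to its range.
    lr, depth = ind
    return (min(1.0, max(0.001, lr + random.gauss(0, 0.05))),
            min(12, max(1, depth + random.choice([-1, 0, 1]))))

def evolve(generations=30, pop_size=20, seed=0):
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        survivors = pop[: pop_size // 2]   # selection: keep the fittest half
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(*ind))

best_lr, best_depth = evolve()
```

Unlike grid search, the population concentrates evaluations near promising regions instead of spending them uniformly across the grid.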
- CNN tutorial
- Introduction to programming in scikit-learn
- SVM in scikit-learn (Python)
- Sklearn/scipy PCA tutorial
- RNN
- Matrix Multiplication - linear algebra
- Kadenze - deep learning with TensorFlow - histograms of per-image standardization, (image distribution - mean distribution) / std dev, look quite good
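The standardization mentioned in that course note is just (x - mean) / std computed per pixel across the batch. A minimal sketch on a random toy batch (the shapes and data are made up for illustration):

```python
import numpy as np

# Toy batch of 100 "images", 32x32 RGB, values in [0, 255].
rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(100, 32, 32, 3))

# Per-pixel mean image and std-dev image, computed across the batch axis.
mean_img = images.mean(axis=0)
std_img = images.std(axis=0) + 1e-8   # epsilon avoids division by zero

# (image - mean) / std: the resulting pixel histogram is centered at 0
# with roughly unit spread, which is what the course inspects visually.
normalized = (images - mean_img) / std_img
```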
- deep learning with keras
- Recommended: Udacity includes ML and DL
- Week 1: Introduction; Lesson 4: supervised, unsupervised.
- Lesson 6: model regression, cost function
- Lesson 71: optimization objective, large margin classification
- PCA at coursera #1
- PCA at coursera #2
- PCA #3
- SVM at coursera #1 - simplified
- Week 2: Lesson 29: supervised learning
- Lesson 36: from rules to trees
- Lesson 43: overfitting, then validation, then accuracy
- Lesson 46: bootstrap, bagging, boosting, random forests
- Lesson 59: logistic regression, SVM, regularization, Lasso, ridge regression
- Lesson 64: gradient descent: stochastic, parallel, batch
- Unsupervised: Lesson X: k-means, DBSCAN
- Machine Learning Design Patterns - git notebooks!, Medium
- DP1 - transform: moving an ML model to production is much easier if you keep inputs, features, and transforms separate
- DP2 - checkpoints: saving the intermediate weights of your model during training provides resilience, generalization, and tunability
- DP3 - virtual epochs: base machine learning model training and evaluation on the total number of examples, not on epochs or steps
- DP4 - keyed predictions: export your model so that it passes through client keys
- DP5 - repeatable sampling: use the hash of a well-distributed column to split your data into training, validation, and test sets
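DP5 can be sketched concretely. A minimal illustration, assuming a hypothetical `user_id` column as the well-distributed key; `hashlib` is used instead of Python's built-in `hash()`, which is salted per process and therefore not repeatable:

```python
import hashlib

# Repeatable sampling (DP5): hash the key column and bucket rows by hash
# value, so a given row always lands in the same split, on any machine,
# on any run. Split fractions here are 80/10/10 as an example.
def split_for(key, train=0.8, valid=0.1):
    bucket = int(hashlib.md5(str(key).encode()).hexdigest(), 16) % 100
    if bucket < train * 100:
        return "train"
    if bucket < (train + valid) * 100:
        return "valid"
    return "test"

# Toy key column: 1000 hypothetical user ids.
rows = [f"user_{i}" for i in range(1000)]
splits = [split_for(r) for r in rows]
```

Because membership depends only on the key, adding new rows later never moves an old row between splits, which is the point of the pattern.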
- Gensim notebooks - from word2vec and doc2vec to NMF, LDA, PCA, the sklearn API, cosine similarity, topic modeling, t-SNE, etc.
- Deep Learning with Python - François Chollet; deep learning & vision git notebooks!, official notebooks.
- Yandex school, NLP notebooks
- Machine learning engineering book (i.e., data science)
- Interpretable Machine Learning book
- (really good) Practical advice for analysis of large, complex data sets - distributions, outliers, examples, slices, metric significance, consistency over time, validation, description, evaluation, robustness in measurement, reproducibility, etc.