My team at the Wikimedia Foundation recently switched to a Puppet-based configuration for our R/Shiny-based metrics dashboards. In this post, we share resources and tips for learning Puppet for non-Technical Operations (Ops) people, and—as an educational exercise for newcomers—explain how the new configuration works. I love working at an organization where I'm not limited by my role and am supported in endeavors like this.
2019/08/01 update: things were a little different when I wrote this in 2017. These days I constantly see new and junior data scientists get rejected because they don't have the experience. Even those with an impressive portfolio of projects showing off their technical know-how get a thumbs down. I firmly believe this is a failure of employers, not of the new generation of recently graduated data scientists entering the field. As I tweeted earlier today:
most employers still have no idea why they need a data scientist (just that they do) nor how to support them once hired, which is why nobody wants to hire junior ones and only want to hire experienced ones who will "just know what to do" & find ways to support themselves
The point being that despite the wealth of information out there about the ways in which data science can bring value to an organization (e.g. What Data Scientists Really Do, According to 35 Data Scientists by Hugo Bowne-Anderson) and what information architecture is required to make that happen, employers are hiring senior data scientists (not always at a senior salary) because they feel like that excuses them from providing guidance, direction, and support. Those data scientists then have to find ways to make improvements and impact while also building the data infrastructure themselves (or trying to convince higher-ups to give them money to hire dedicated data engineers).
All of this to say: it's an immensely shitty situation and I'm sorry your (often very impressive!) resumes are being passed on simply because you haven't been doing this for 5+ years. So please ignore everything below the line and instead head over to Vicki Boykis's Data science is different now post where she suggests next steps for you:
- Don't shoot for a data science job
- Be prepared for most of your data scientist work to not be data science. Adjust your skillset for that.
She explains them in depth in the post, so – again – I encourage you to read it yourself.
Getting into a technical field like data science is really difficult when you're fresh out of school. On the off-chance that your potential employer actually gets the hiring process right, most organizations are still going to place a considerable amount of weight on experience over schooling. Like, yeah there are certain schools that make it a lot easier to go from academia to industry, but otherwise you're dealing with the classic catch-22 situation.
- Work with real data: In most academic programs, methods are taught using clean, ready-to-use data. So it's important to show that you can take some data you found somewhere and process it into something that you can glean insights from. It also gives you a chance to work with data about a topic that you personally find interesting. Possible sources of data include:
- Explore it: Once you have a dataset that actually excites you, you should perform some exploratory data analysis (EDA). Produce at least one (thoroughly labeled) visualization that shows some interesting pattern or relationship. I want to see your curiosity. I want to see an understanding that you can't just jump into model-fitting without developing some familiarity with your data first.
- Analyze it: You're going to lose your audience's interest if you just show and talk about how you followed the steps of some tutorial verbatim. If you learn from the tutorial and then apply that methodology to a different dataset, that's basically what "experience" means. And don't try to use an overly complicated algorithm/model if the goal doesn't require it. You might get incredible accuracy classifying with deep learning, but you'll probably have a more interesting story to tell from inference with a logistic regression. Heck, at Wikimedia we use that in our anti-harassment research.
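To make the logistic-regression point concrete, here is a minimal sketch of what "inference over accuracy" can look like: fitting a logistic regression to made-up toy data and reading the coefficient as an odds ratio. It's plain Python with no ML library, and the data and function names are purely illustrative:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Fit y ~ sigmoid(b0 + b1*x) by gradient descent on the log-loss."""
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y          # gradient w.r.t. intercept
            g1 += (p - y) * x    # gradient w.r.t. slope
        b0 -= lr * g0 / len(xs)
        b1 -= lr * g1 / len(xs)
    return b0, b1

# Toy data: the outcome becomes more likely as x grows.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
# The interesting story is in the coefficient: exp(b1) is the
# multiplicative change in the odds of the outcome per unit of x.
print("odds ratio per unit of x:", round(math.exp(b1), 2))
```

A classifier's accuracy number ends the conversation; an odds ratio starts one, which is usually the better interview story.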
- Present your work: It can be a neat report with an executive summary (abstract) or it can be an interactive visualization or a slide deck. Just something better than a zip archive of scripts or Jupyter notebooks.
- Explain your work (however complex) and results in a way that can be understood: This is where the first point is really important. If you're describing your analysis of data from a topic you're familiar with and are interested in, you're going to have a much easier time explaining it to a stranger. Be prepared to talk about it to a non-technical person. Be prepared to talk about it to a technical person who may not be familiar with your particular methodology. Your interviewer may have done a lot of computational linguistics & NLP but no survival analysis, so get ready to give a brief lesson on Kaplan-Meier (K-M) curves (and vice versa).
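For instance, that brief K-M lesson might boil down to the product-limit estimator, which is simple enough to sketch from scratch. A plain-Python illustration on made-up data (not tied to any survival library):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates S(t) at each observed event time.
    events[i] is 1 if subject i had the event at times[i], 0 if censored."""
    s, curve = 1.0, []
    event_times = sorted({t for t, e in zip(times, events) if e})
    for t in event_times:
        at_risk = sum(1 for u in times if u >= t)  # still in the study at t
        deaths = sum(1 for u, e in zip(times, events) if e and u == t)
        s *= 1.0 - deaths / at_risk                # the product-limit step
        curve.append((t, s))
    return curve

# Toy data: 5 subjects; events[i] == 0 means that subject was censored.
times = [2, 3, 3, 5, 8]
events = [1, 1, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))  # -> 2 0.8, 3 0.6, 5 0.3
```

The point of the exercise isn't the code; it's being able to explain why censored subjects leave the risk set without counting as events.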
- Perform an analysis from start to finish: Because that's what we look for when we assign a take-home task to our candidates.
Acknowledgement: I would like to thank Angela Bassa (Director of Data Science at iRobot) for her input on this post. In particular, the last paragraph is based entirely on her suggestions. She also created the Data Helpers website that lists data professionals who are able to answer questions, promote, or mentor newcomers into the field.
The other night I got a TensorFlow™ (TF) and Keras-based text classifier in R to successfully run on my gaming PC that has Windows 10 and an NVIDIA GeForce GTX 980 graphics card, so I figured I'd write up a full walkthrough, since I had to make minor detours and the official instructions assume -- in my opinion -- a certain level of knowledge that might make the process inaccessible to some folks.
Why would you want to install and use the GPU version of TF? "TensorFlow programs typically run significantly faster on a GPU than on a CPU." Graphics processing units (GPUs) are typically used to render 3D graphics for video games. As a result of the race for real-time rendering of more and more realistic-looking scenes, they have gotten really good at performing vector/matrix operations and linear algebra. While CPUs are still better for general-purpose computing and there is some overhead in transferring data to/from the GPU's memory, GPUs are a more powerful resource for performing those particular calculations.
- An NVIDIA GPU with CUDA Compute Capability 3.0 or higher. Check your GPU's compute capability here. For more details, refer to Requirements to run TensorFlow with GPU support.
- A recent version of R -- latest version is 3.4.0 at the time of writing.
- For example, I like using Microsoft R Open (MRO) on my gaming PC with a multi-core CPU because MRO includes and links to the multi-threaded Intel Math Kernel Library (MKL), which parallelizes vector/matrix operations.
- I also recommend installing and using the RStudio IDE.
- You will need devtools:
install.packages("devtools", repos = c(CRAN = "https://cran.rstudio.com"))
- Python 3.5 (required for TF at the time of writing) via Anaconda (recommended):
- Install Anaconda3 (in my case it was Anaconda3 4.4.0), which will install Python 3.6 (at the time of writing) but we'll take care of that.
- Add Anaconda3 and Anaconda3/Scripts to your PATH environment variable so that python.exe and pip.exe can be found, in case you did not check that option during the installation process. (See these instructions for how to do that.)
- Install Python 3.5 by opening up the Anaconda Prompt (look for it in the Anaconda folder in the Start menu) and running
conda install python=3.5
- Verify by running
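To confirm the downgrade took effect, you can check the interpreter version from Python itself. A small sketch of that check -- the helper function is my own illustration, not part of conda or TF:

```python
import sys

def matches_required(version_info, required=(3, 5)):
    """True when the interpreter's major.minor version matches the
    required pair (TF on Windows needed Python 3.5 at the time of writing)."""
    return tuple(version_info[:2]) == required

# Run on the Anaconda Prompt after `conda install python=3.5`;
# this should report True for the Anaconda interpreter.
print(sys.version, matches_required(sys.version_info))
```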
CUDA & cuDNN
- Presumably you've got the latest NVIDIA drivers.
- Install CUDA Toolkit 8.0 (or later).
- Download and extract CUDA Deep Neural Network library (cuDNN) v5.1 (specifically), which requires signing up for a free NVIDIA Developer account.
- Add the path to the bin directory (where the DLL is) to the PATH system environment variable. (See these instructions for how to do that.) For example, mine is
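One way to sanity-check the PATH edit is to scan it for the cuDNN DLL. A hypothetical helper in plain Python -- the function is illustrative, and the filename cudnn64_5.dll is my assumption based on cuDNN v5.1's naming scheme:

```python
import os

def dirs_containing(filename, path=None):
    """Return the directories on PATH that contain `filename`."""
    entries = (path if path is not None
               else os.environ.get("PATH", "")).split(os.pathsep)
    return [d for d in entries
            if d and os.path.isfile(os.path.join(d, filename))]

# If this comes back empty after editing PATH, restart the shell
# (and RStudio) so the new environment variable is picked up.
print(dirs_containing("cudnn64_5.dll"))
```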
TF & Keras in R
Once you've got R, Python 3.5, CUDA, and cuDNN installed and configured:
- You may need to install the dev version of the processx package:
devtools::install_github("r-lib/processx")
Everything installed OK for me originally, but when I ran devtools::update_packages() it gave me an error about processx missing, so I'm including this optional step.
- Install reticulate package for interfacing with Python in R:
- Install tensorflow package:
- Install GPU version of TF (see this page for more details):
library(tensorflow)
install_tensorflow(gpu = TRUE)
- Verify by running:
use_condaenv("r-tensorflow")
sess <- tf$Session()
hello <- tf$constant('Hello, TensorFlow!')
sess$run(hello)
- Install keras package:
You should be able to run RStudio's examples now.
Hope this helps! :D
This weekend I got super into a new videogame called NieR: Automata (available on PS4 and PC). I saw a bunch of folks tweeting nothing but praise about it, so I decided to check out the demo on PSN. I was so blown away by it that I actually got into my car, drove to the nearest GameStop, and picked up a copy. I cannot remember the last time a game demo did that to me, if ever. This game is ⚡️E⚡️X⚡️T⚡️R⚡️E⚡️M⚡️E⚡️L⚡️Y⚡️ 💥 ⚡️G⚡️O⚡️O⚡️D⚡️, and I highly recommend it if you're into games like DmC: Devil May Cry and other PlatinumGames titles.
It borrows so many ideas from so many games and genres, but the outcome doesn't feel like a Frankenstein's monster. It all feels cohesive.
The little touches in this game are really endearing. Like when 2B gets off a ladder and does a flip onto a platform, or when she occasionally slides down the side of a ladder. The animations feel at once both completely superfluous but also absolutely necessary.
NieR: Automata is a game that I'm glad to not be reviewing, because I would be staring at an empty document, thinking, "They should have sent a poet."
I'm really excited to finally share my team's process of finding and interviewing data scientists, from writing an inclusive job description that attracts diverse candidates to rethinking how to assess technical skills. The post is up on Wikimedia Blog, and would not have been possible without the terrific editing expertise and help of Melody Kramer. My hope is that hiring managers use those lessons when structuring their process for other technical positions, not just those in DataSci.