TensorFlow deepens its advantages in the AI modeling wars


TensorFlow remains the dominant AI modeling framework. Most AI (artificial intelligence) developers continue to use it as their primary open source tool, either on its own or alongside PyTorch, to develop most of their ML (machine learning), deep learning, and NLP (natural language processing) models.
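
For readers who haven't worked in either framework, here is a minimal sketch (purely illustrative, with made-up toy data rather than anything from the survey) of the kind of core modeling task those developers are describing, written against TensorFlow's Keras API:

    # Illustrative toy example: a small binary classifier in TensorFlow/Keras.
    import numpy as np
    import tensorflow as tf

    # Synthetic stand-in data: 256 examples with 20 features each.
    x_train = np.random.rand(256, 20).astype("float32")
    y_train = np.random.randint(0, 2, size=(256,))

    # Define a small feed-forward network.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Compile and train for a few epochs.
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, batch_size=32, verbose=0)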

In the most recent O’Reilly survey on AI adoption in the enterprise, more than half of the responding data scientists cited TensorFlow as their primary tool. That finding has made me rethink my speculation, published just last month, that TensorFlow’s dominance among working data scientists may be waning. Nevertheless, PyTorch remains a strong second choice, expanding its share in the O’Reilly study to more than 36 percent of respondents, up from 29 percent in the previous year’s survey.

Boosting the TensorFlow stack’s differentiation vis-à-vis PyTorch

As the decade proceeds, the differences between these frameworks will diminish as data scientists and other users come to value feature parity over strong functional differentiation. Nevertheless, TensorFlow remains by far the top AI modeling framework, not just in adoption and maturity but in the sheer depth and breadth of its stack, which supports every conceivable AI development, training, and deployment scenario. PyTorch, though strong for 80 percent of the core AI, deep learning, and machine learning challenges, has a long way to go before it reaches feature parity.
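
To make "parity on the core tasks" concrete, here is a rough PyTorch counterpart to the Keras sketch above (again an illustrative toy, not a benchmark of either framework); for this kind of bread-and-butter modeling, the two look very similar, and it is in the breadth of the surrounding stack that they diverge:

    # Illustrative toy example: the same small binary classifier in PyTorch.
    import torch
    import torch.nn as nn

    # Synthetic stand-in data: 256 examples with 20 features each.
    x = torch.rand(256, 20)
    y = torch.randint(0, 2, (256, 1)).float()

    # Define an equivalent small feed-forward network.
    model = nn.Sequential(
        nn.Linear(20, 64),
        nn.ReLU(),
        nn.Linear(64, 1),
        nn.Sigmoid(),
    )
    loss_fn = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters())

    # Train for a few epochs with an explicit loop.
    for _ in range(3):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()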

Last week, Google’s TensorFlow team distanced its stack further from PyTorch when it held its annual TensorFlow Dev Summit as a livestream rather than an in-person gathering (for reasons everybody knows by now: the COVID-19 pandemic). Despite the absence of in-person buzz, plenty of important news and analysis came out of this purely online event.


