

In just a few short months, large language models moved from the realm of specialized researchers into the everyday workflows of data and ML teams all over the world. Here at TDS, we've seen how, along with this transition, much of the focus has shifted toward practical applications and hands-on solutions.
Jumping straight into tinkering mode can make a lot of sense for data professionals working in industry—time is precious, after all. Still, it’s always a good idea to establish a solid grasp of the inner workings of the technology we use and work on, and that’s precisely what our weekly highlights address.
Our recommended reads look both at the theoretical foundations of LLMs—specifically, the GPT family—and at the high-level questions their arrival raises. Even if you're just a casual user of these models, we think you'll enjoy these thoughtful explorations.