Computational linguistics is a two-way street: you're either using a computer to do things with human language, such as communicating, translating, or teaching a foreign language, or you're using computational techniques to learn something about human languages. Her work documenting and preserving endangered languages uses a little of both.
When respondents were asked which languages feel the most welcoming, Portuguese emerged on top, selected by 34 percent of participants. Spanish came in a close second, with 33 percent of respondents calling it the friendliest, followed by Italian in third. Together, these languages form a clear cluster associated with warmth and approachability.
By comparing how AI models and humans map these words to numerical percentages, we uncovered significant gaps between humans and large language models. While the models do tend to agree with humans on extremes like 'impossible,' they diverge sharply on hedge words like 'maybe.' For example, a model might use the word 'likely' to represent an 80% probability, while a human reader assumes it means closer to 65%.
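The kind of comparison described above can be sketched in a few lines. Note that apart from the 'likely' example (80% for the model vs. roughly 65% for humans), the word list and percentage values below are invented for illustration; they are not the study's data.

```python
# Hypothetical mean probability (in percent) that each group assigns
# to a hedge word. Only the 'likely' gap comes from the text; the rest
# are made-up placeholder values.
human_estimates = {"impossible": 2, "maybe": 45, "likely": 65, "certain": 98}
model_estimates = {"impossible": 2, "maybe": 60, "likely": 80, "certain": 97}

def divergence(humans: dict, models: dict) -> dict:
    """Absolute gap, in percentage points, between the two mappings."""
    return {word: abs(humans[word] - models[word]) for word in humans}

gaps = divergence(human_estimates, model_estimates)
# Words ordered from sharpest disagreement to closest agreement.
worst_first = sorted(gaps, key=gaps.get, reverse=True)
```

On these toy numbers, the pattern matches the article's finding: the extremes ('impossible', 'certain') show near-zero gaps, while mid-scale hedges like 'maybe' and 'likely' diverge by double digits.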
As explained by Meta: "AI-powered translations for Reels are starting to roll out in more languages, including Bengali, Tamil, Telugu, Marathi, and Kannada, on Instagram. These new additions build on our existing language support for English, Hindi, Portuguese, and Spanish." The addition of more of the languages spoken in India is significant, because India is now the biggest single market for both Facebook and Instagram usage, beating out the U.S. by a significant margin.
Parents often hear the warning: "If your child doesn't learn a second language early, they'll never be fluent." Adults, meanwhile, are told: "It's just too late for you to learn now." These claims are familiar and tidy, but are they actually true? Is it better to learn a second language as a child or as an adult? The short answer is that it depends on what we mean by "better."
The term "conspiracy theory" calls to mind a variety of dubious claims and controversies, like rumors about Area 51, claims that the Earth is flat, and the movement known as QAnon. At first blush, these phenomena would seem to have little in common with bogus word origins. But there are a variety of false etymologies that spread virally and refuse to go away, in much the same way that stories about chemtrails, black helicopters, and UFOs refuse to die.
For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively (deploying claims about the world, explanations, advice, encouragement, apologies, and promises) while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM's words shape our beliefs, decisions, and actions, yet no speaker stands behind them.
The dataset was created by translating non-English content from the FineWeb2 corpus into English using Gemma3 27B, with the full data generation pipeline designed to be reproducible and publicly documented. The dataset is primarily intended to improve machine translation, particularly in the English→X direction, where performance remains weaker for many lower-resource languages. By starting from text originally written in non-English languages and translating it into English, FineTranslations provides large-scale parallel data suitable for fine-tuning existing translation models.
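The shape of that pipeline can be sketched as follows. `translate_to_english` is a stand-in for the Gemma3 27B translation step, not a real API, and the pairing logic reflects the direction noted above: because the non-English side is original human-written text and the English side is machine-generated, the pairs are most naturally used with English as the input and the original language as the output.

```python
def translate_to_english(text: str) -> str:
    # Placeholder: the real pipeline calls Gemma3 27B here.
    return f"<english translation of: {text}>"

def build_parallel_pairs(non_english_docs, lang_code):
    """Yield English->X pairs suitable for fine-tuning a translation model.

    The target side is original human-written text; the machine-translated
    English serves as the source side.
    """
    for doc in non_english_docs:
        yield {
            "src_lang": "en",
            "tgt_lang": lang_code,
            "src": translate_to_english(doc),  # synthetic English
            "tgt": doc,                        # original human text
        }

pairs = list(build_parallel_pairs(["Guten Morgen"], "de"))
```

This is only a sketch of the data shape; the published pipeline additionally handles corpus-scale filtering and batching, per the reproducibility documentation mentioned above.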
Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback). During "refinement," the model gravitates toward the center of the distribution, discarding "tail" data (the rare, precise, and complex tokens) to maximize statistical probability. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction.
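A toy illustration (not the author's analysis) of the mechanism the paragraph points at: greedy decoding emits only the single most probable token at each step, so rarer "tail" tokens never surface, no matter how much probability mass they hold collectively. The vocabulary and probabilities below are invented for illustration.

```python
# Hypothetical next-token distribution over synonyms of varying rarity.
probs = {"said": 0.40, "stated": 0.25, "remarked": 0.15,
         "opined": 0.12, "expostulated": 0.08}

def greedy_pick(p: dict) -> str:
    # Greedy decoding: take the argmax at every step.
    return max(p, key=p.get)

choice = greedy_pick(probs)
# Probability mass held by tokens greedy decoding will never emit.
tail_mass = sum(v for word, v in probs.items() if word != choice)
```

Here greedy decoding always returns "said", even though 60% of the probability mass sits in the rarer alternatives; sampling-based decoding, by contrast, would surface them occasionally.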
On Wednesday, the Paris-based AI lab released two new speech-to-text models: Voxtral Mini Transcribe V2 and Voxtral Realtime. The former is built to transcribe audio files in large batches; the latter handles near-real-time transcription with latency under 200 milliseconds. Both can translate between 13 languages, and Voxtral Realtime is freely available under an open source license.
A major difference between LLMs and LTMs is the type of data they're able to synthesize and use. LLMs use unstructured data: think text, social media posts, emails, and so on. LTMs, on the other hand, can extract information or insights from structured data, which could be contained in tables, for instance. Since many enterprises rely on structured data, often contained in spreadsheets, to run their operations, LTMs could have an immediate use case for many organizations.
OpenAI's GPT-5.2 Pro does better at solving sophisticated math problems than older versions of the company's top large language model, according to a new study by Epoch AI, a non-profit research institute.