IGEL OS can now run AI models locally on endpoints
Briefly

"AI Armor provides dynamic runtime security and relies on a central policy engine in the Universal Management Suite (UMS) to meet compliance requirements."
"Ollama is an open-source platform for downloading and running large language models directly on endpoints, letting employees generate responses to priority emails without cloud services or additional per-query costs."
"IGEL emphasizes that cost drives the push of AI to the edge: cloud-based AI services can become significantly expensive for organizations, making local inference more appealing."
"With AI Armor, IGEL tracks which processes access the device's NPU or GPU, allowing administrators to selectively block usage and ensure that only approved applications run on the system."
IGEL has announced support for running language models locally on IGEL OS through Ollama, as part of the AI Armor feature that provides dynamic runtime security. AI Armor uses a central policy engine in the Universal Management Suite (UMS) to maintain compliance. Ollama enables local AI operations without cloud dependency, for example generating immediate responses to priority emails with models such as Google's Gemma 3. Administrators can control which users have access and how prompts are used, and IGEL highlights the cost savings of reducing reliance on cloud-based AI services.
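For context on what "local AI operations" look like in practice: Ollama serves models over an HTTP API on the endpoint itself, by default at localhost:11434. Below is a minimal sketch of drafting an email reply against that API, assuming a Gemma model has already been pulled locally; the model name, prompt wording, and `draft_reply` helper are illustrative, not part of IGEL's announcement.

```python
import json
import urllib.request

# Ollama's default local endpoint; no traffic leaves the device.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Payload for Ollama's /api/generate endpoint. With stream=False,
    # the server returns the full completion in one JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def draft_reply(email_text: str, model: str = "gemma3") -> str:
    # Ask the locally running model for a short reply draft.
    payload = build_request(
        model,
        f"Draft a short, polite reply to this email:\n\n{email_text}",
    )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the request never leaves localhost, the email content stays on the endpoint, which is the privacy and cost argument the article makes for edge inference.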
Read at Techzine Global