JetBrains launched Central, ARM shipped its first chip ever, and Google cut AI memory usage by 6x. Three events in four days that reveal where software development is heading.

On Monday, March 24th, JetBrains launched a new platform for agentic software development. The same day, ARM shipped its very first in-house chip after 35 years. A day later, Google published a compression algorithm that shrinks the working memory of AI models by a factor of six without any quality loss. Three launches in four days.
Individually, these are impressive product announcements. Together, they tell a bigger story. AI agents are no longer experiments. They are becoming core infrastructure in software development, with dedicated hardware, dedicated platforms, and increasingly efficient models. At MG Software, we noticed this shift in our client projects months ago. Now the industry is confirming it all at once.
If you let an AI agent loose on your codebase today, you have very little visibility into what it actually does. Which files does it modify? Which API calls does it make? How many tokens does it consume? JetBrains Central is built to solve exactly that problem.
The platform functions as a central control room. You connect agents from Claude, Codex, Gemini, or your own custom solution, and Central manages access control, budget enforcement, logging, and execution. Developers trigger workflows from their IDE, from the command line, or through a web interface. Regardless of which agent you choose, Central keeps the overview.
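Conceptually, that governance layer is a wrapper around every agent action: log what happened, count what it cost, and stop when the budget runs out. The sketch below illustrates only the pattern; it is not Central's actual API, and the stub agent stands in for Claude, Codex, or Gemini.

```python
class BudgetExceeded(Exception):
    pass


class GovernedAgent:
    """Toy governance wrapper: logs every action and enforces a token budget.

    Illustrative only -- JetBrains Central's real interface is not shown here.
    """

    def __init__(self, agent, token_budget):
        self.agent = agent            # callable: task -> (output, tokens_used)
        self.token_budget = token_budget
        self.log = []                 # audit trail of (task, tokens_used)

    def run(self, task):
        result, tokens_used = self.agent(task)
        self.token_budget -= tokens_used
        self.log.append((task, tokens_used))
        if self.token_budget < 0:
            raise BudgetExceeded(f"over budget by {-self.token_budget} tokens")
        return result


# A stub agent standing in for a real coding agent.
def stub_agent(task):
    return f"done: {task}", 300

gov = GovernedAgent(stub_agent, token_budget=1000)
print(gov.run("fix lint errors"))   # → done: fix lint errors
gov.run("write tests")
print(len(gov.log), gov.token_budget)  # → 2 400
```

The useful property is that the audit trail and the budget check live in one place, regardless of which vendor's agent sits behind the wrapper.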
The numbers from JetBrains' own AI Pulse survey of 11,000 developers underline why this matters right now. 90% already use AI at work. 22% use coding agents. And 66% of companies plan to adopt agents within 12 months. The question is no longer whether teams will use agents. It is how they keep them manageable. That is precisely what Central targets.
For 35 years, ARM has designed processor architectures that other companies manufacture. From smartphones to servers, ARM draws the blueprint and partners like Apple and Qualcomm build the chips. Until now.
The ARM AGI CPU is the company's first in-house product. 136 Neoverse V3 cores, manufactured on TSMC's 3nm process, delivering double the performance per watt compared to traditional x86 processors. Meta is the first customer and development partner. OpenAI, Cerebras, Cloudflare, and SAP are lined up as launch partners.
What makes this signal so strong: ARM explicitly calls the chip "built for the era of agentic AI." An air-cooled rack delivers 8,160 cores. Liquid-cooled, more than 45,000. That density is not meant for regular web servers. It is built for data centers that run thousands of AI agents in parallel.
"TurboQuant achieves 3-bit, zero-loss compression of the KV cache, reducing memory by 6x and accelerating attention by up to 8x on H100 GPUs."
— Google Research, March 2026
While processing text, every AI model maintains a working memory known as the KV cache. The longer the conversation or document, the larger this cache grows. For long contexts, memory is the bottleneck, not compute.
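A back-of-envelope calculation shows why. The cache stores a key and a value vector per layer, per attention head, per token; the model dimensions below are illustrative, not those of any specific model.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value=2):
    """Rough KV-cache size for one sequence in a transformer decoder.

    The factor 2 at the front accounts for storing both keys and values;
    bytes_per_value=2 assumes fp16/bf16 storage.
    """
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value


# Illustrative: 32 layers, 8 KV heads, head dim 128, 128k-token context.
size = kv_cache_bytes(32, 8, 128, 128_000)
print(f"{size / 1e9:.1f} GB per sequence")  # → 16.8 GB per sequence
```

Tens of gigabytes for a single long-context sequence, before any model weights: that is the memory wall compression techniques are attacking.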
Google's TurboQuant compresses that cache to one-sixth of its original size without measurable loss in output quality. On an H100 GPU, this delivers up to eight times faster attention operations. The technique combines two methods: PolarQuant, which rotates data vectors for more efficient compression, and QJL, which guarantees stability under aggressive quantization. The research was presented at ICLR 2026.
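TurboQuant's actual pipeline is more sophisticated than this, but the core trade-off is easy to see with a toy uniform 3-bit quantizer: each 16-bit value is replaced by one of 8 codes plus a shared scale, and the reconstruction error stays bounded by half a quantization step.

```python
import numpy as np


def quantize_3bit(x):
    """Toy min-max 3-bit quantizer (8 levels), one scale per tensor.

    Illustrates the idea only -- this is NOT the TurboQuant algorithm.
    """
    lo, hi = float(x.min()), float(x.max())
    step = (hi - lo) / 7          # 2**3 - 1 intervals between 8 levels
    q = np.round((x - lo) / step).astype(np.uint8)  # codes in 0..7
    return q, lo, step


def dequantize(q, lo, step):
    return lo + q * step


rng = np.random.default_rng(0)
kv = rng.standard_normal(4096).astype(np.float32)  # fake cache entries
q, lo, step = quantize_3bit(kv)
err = np.abs(kv - dequantize(q, lo, step)).max()
print(f"max abs error: {err:.3f}")  # bounded by step / 2
```

Packing 3-bit codes instead of 16-bit floats already cuts raw storage by more than 5x; the rotations and guarantees in the paper are what push that to 6x without the quality loss a naive quantizer like this one would cause.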
The practical impact for developers is immediate. Longer contexts fit in the same memory. More concurrent users on the same hardware. Lower cost per API call. For businesses building AI features into their products, these are the improvements that make the difference between a viable and unviable business case.
67% of Fortune 500 companies now have at least one AI agent in production. A year ago, that number was 34%. Customer service is the most popular use case at 42% of deployments, followed by data analysis at 28% and coding assistance at 19%.
Walmart optimizes its supply chain with agents. JPMorgan runs more than 200 financial analysis agents. Shopify handles 60% of merchant support tickets fully autonomously. These are not experiments. This is production.
And the investment numbers confirm the pattern. In Q1 2026, $4.2 billion in venture capital went to AI agent startups. The Model Context Protocol (MCP) is establishing itself as the standard for connecting agents to tools. Frameworks are consolidating: LangGraph dominates complex workflows, CrewAI handles multi-agent setups, and Microsoft's AutoGen has been absorbed into Semantic Kernel. The infrastructure is crystallizing.
At MG Software, we see this shift directly in our projects. Last month we integrated GPT 5.4 nano as a classification layer in three client projects, achieving cost savings of 62 to 81 percent. That optimization becomes even stronger once TurboQuant reaches production and memory costs drop by a factor of six.
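The savings come from simple arithmetic: most requests only need classification, so they can run on a far cheaper model. A back-of-envelope sketch, where the per-million-token prices and the traffic split are illustrative numbers, not real list prices or our actual figures:

```python
def blended_cost(share_cheap, price_cheap, price_large, tokens=500):
    """Average cost per request when a share of traffic runs on the cheap model.

    Prices are in dollars per million tokens; tokens is the request size.
    """
    per_token = share_cheap * price_cheap + (1 - share_cheap) * price_large
    return tokens * per_token / 1e6


# Illustrative prices: $0.10/M tokens (nano) vs $2.00/M tokens (large model).
before = blended_cost(0.0, 0.10, 2.00)  # everything on the large model
after = blended_cost(0.8, 0.10, 2.00)   # 80% handled by the classifier
print(f"saving: {1 - after / before:.0%}")  # → saving: 76%
```

Shift the traffic split or the price gap and the saving moves with it, which is why the observed range across projects is a band rather than a single number.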
Our own tool Refront already uses agentic workflows for ticket processing. A client sends a message, AI classifies it, creates a structured ticket, and assigns it to the right team member. The next step is deploying specialized subagents that handle simple code changes autonomously. That is exactly the pattern that JetBrains Central provides governance for.
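That pipeline (message in, classification, structured ticket, assignment) can be sketched in a few lines. The keyword-based classifier and the routing table below are stand-ins for the AI step and for Refront's real assignment logic.

```python
from dataclasses import dataclass


@dataclass
class Ticket:
    summary: str
    category: str
    assignee: str


# Hypothetical routing table; the real assignment logic is more involved.
TEAM = {"bug": "dev-team", "billing": "finance", "other": "support"}


def classify(message: str) -> str:
    """Stand-in for the AI classification step."""
    msg = message.lower()
    if "error" in msg or "crash" in msg:
        return "bug"
    if "invoice" in msg:
        return "billing"
    return "other"


def create_ticket(message: str) -> Ticket:
    category = classify(message)
    return Ticket(summary=message[:80], category=category,
                  assignee=TEAM[category])


t = create_ticket("The export crashes on large files")
print(t.category, t.assignee)  # → bug dev-team
```

The point of the structure is the seam between steps: swapping the keyword classifier for a model call, or adding a subagent that proposes a code change for "bug" tickets, touches one function without disturbing the rest.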
For businesses thinking about AI integration, the advice is straightforward: start small, measure everything, and invest in manageability. The technology is here. The hardware is getting cheaper. The management platforms are appearing. The question is no longer whether AI agents will become part of your software. The question is when you start. Get in touch if you want to discuss the possibilities.
Three launches in four days. New hardware, a new management platform, and a breakthrough in memory efficiency. Each solves a different piece of the puzzle, and together they form a clear picture: AI agents are moving from prototype to infrastructure.
The companies building this infrastructure are betting that within two years, most software teams will have agents as permanent members. The adoption numbers suggest they are right. If your team has not started exploring agent workflows yet, this is the week to begin.

Sidney de Geus
Co-Founder
