AI Platform Is a Job for Infrastructure Teams
There are more AI options at the consumption level than ever, and that sprawl is an opportunity for infrastructure platform teams to solve for and manage.
I spent the week at Smartsheet Engage in Seattle, Washington. One of the remarkable things about attending a SaaS conference is how many end users show up, and what stood out was how many of them were discussing their desired AI outcomes.
These are people who simply want to get work done. Whether the solutions come via Amazon Q Business, Google Cloud Gemini, OpenAI ChatGPT, or from within the Smartsheet platform itself doesn't matter to them. In the aggregate, though, that optionality creates challenges. At some point, the larger organization has to develop guardrails and standards. If this sounds familiar, it’s the same problem Cloud Platform teams look to solve today. However, the business value sits at a much higher level of abstraction, which makes it a much larger challenge.
At this scale, the problem becomes a challenge for infrastructure teams to solve. I’m not saying storage administrators, network engineers, or virtualization admins will tackle it directly. I’m saying that, at this scale, this is an infrastructure problem, and infrastructure teams will need to understand how to adapt known-good patterns and apply them at this higher level of abstraction.
What has been your organization’s approach to enabling AI platforms across clouds, data, and teams? Does this problem sit inside the business teams, or is it a centralized effort?