What Is Generative AI?


May 03, 2023

Generative artificial intelligence (AI) has taken the world by storm, from image generation (e.g., DALL-E, Midjourney, Lensa) to conversational AI (e.g., ChatGPT) to code generation (e.g., Copilot). It seems like we’re finally entering the era where AI breaks out of the sandbox and into the daily lives of everyone from engineers to end users. But generative AI in all its iterations is still a nascent technology, just coming into its own as folks experiment and test its limits. How will this affect application management and DevOps? Only time will tell, but it’s looking promising.

 

What is generative AI?

Generative AI is a subset of artificial intelligence that revolves around algorithms that can generate new content, rather than algorithms that simply analyze existing data or focus on pattern recognition. Essentially, generative AI models are trained on large data sets and use probabilistic techniques to generate content that emulates the data they were trained on. For instance, after being trained on a large data set of images of people and animals, a generative AI model can produce new, realistic-looking images of people or animals that have never been seen before.
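
As a toy illustration of that “train on data, then sample” loop, here is a minimal character-level Markov chain in Python. It is far cruder than any real generative model, and the tiny corpus is purely illustrative, but the core idea of learning a probability distribution from examples and then sampling new content from it is the same.

import random
from collections import defaultdict

# "Training" data: a tiny corpus the model will learn to imitate.
corpus = "the cat sat on the mat. the dog sat on the rug."

# Learn which character tends to follow each character.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(seed="t", length=40):
    # Repeatedly sample the next character from the learned distribution.
    out = [seed]
    for _ in range(length):
        out.append(random.choice(transitions[out[-1]]))
    return "".join(out)

print(generate())  # prints new text that statistically resembles the corpus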

The applications of generative AI are quickly growing in number. It can be used to create realistic images and videos, generate natural language text, compose music, and even design new drug molecules for testing. Though relatively new, generative AI is rapidly evolving and has the potential to enable unprecedented levels of innovation and creativity across a range of industries.

 

What is the difference between generative AI and automation?

While generative AI can come off as fancy automation, the two are quite distinct. Automation is the use of technology to handle repetitive or mundane tasks that would normally require human input. The goal is to optimize processes and increase efficiency by reducing the need for human involvement; think routine maintenance, data organization, or alerting. Generative AI, by contrast, is focused on creating new content and solutions based on existing data and known patterns. It’s not about eliminating human intervention but about helping humans go beyond traditional limits with algorithms that can create something new under their guidance.

Although both may ultimately lead to improved efficiency and productivity, they differ in how they get there. Automation focuses on streamlining existing processes; generative AI streamlines too, but it also creates things that manual processes alone could not.

 

Why is generative AI important?

Generative AI will be important for software development, application management, and operations. It will offer improved efficiency through code optimization, testing, and deployment. It will increase the speed of prototyping, since it can generate new code and designs quickly, helping teams iterate and bring products to market faster. Creativity in development and troubleshooting will also likely benefit, as generative AI lets developers think outside the box and come up with novel solutions to complex problems. Resource allocation can improve as well, since generative AI can be used to analyze and optimize systems and tools. Finally, it can generate synthetic data to help train and improve models.
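
As one concrete example of that last point, a team might ask a model for synthetic records to use in tests or model training. This is only a sketch: call_llm is a hypothetical stand-in for whatever model API you actually use, not a real library call.

import json

def call_llm(prompt):
    # Hypothetical placeholder; wire this to your model provider of choice.
    raise NotImplementedError("connect to a real generative model API")

def synthetic_users(n=5):
    # Ask the model for fictional records and parse them as JSON.
    prompt = (
        f"Generate {n} fictional user records as a JSON array with the fields "
        "name, email, and signup_date. Do not describe real people."
    )
    return json.loads(call_llm(prompt))

# Example (once call_llm is wired up):
# test_data = synthetic_users(10)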

 

What are the challenges of generative AI?

Although the promise of generative AI is significant, there are still some challenges to consider, including:

Data quality: Like any machine learning or AI system, generative AI requires large amounts of high-quality data to train on in order to be effective. Ensuring the quality and integrity of that data can be challenging, especially when it is generated from multiple sources or is incomplete or inconsistent.

Model interpretability: Generative AI models can be hard to understand under the hood, making it challenging for DevOps teams to see how the models make decisions or generate content. If issues arise with a model, engineers may not know how to troubleshoot or optimize it.

Technical complexity: Generative AI models are complex from the very beginning. They are difficult to implement and maintain, and they may require significant expertise and resources to manage properly.

Resource requirements: Generative AI models, especially those built and hosted in-house, can require significant computational resources such as GPUs and cloud computing services. This can make them challenging to implement and scale for organizations with limited resources.

 

What can we expect from generative AI?

Larger transformer architectures, reinforcement learning from human feedback, improved embeddings, and latent diffusion have introduced incredible capabilities across a broad set of use cases. How will this all shake out?

Infrastructure-as-Code (IaC) turns into DevOps-as-Code

Infrastructure-as-Code (IaC) offers obvious benefits today, from better change tracking to replicability to safer scale-outs and more. But IaC doesn’t really tackle application management and operations, which involve many processes that aren’t automated or don’t benefit from IaC. These include service configuration, troubleshooting and root cause analysis, performance management, and auditing and compliance, among others.

Automating these tasks isn’t easy. They involve various systems and interfaces, require more context, contain complex branching logic, and depend on legacy tools that can be clunky. All of this can be addressed with code, which can express the required logic, connect real-time data and actions, and scale complex algorithms through modularity. Until now, this has only been done for the most heavily used workflows because of the difficulty of developing and maintaining that code. Code generation via generative AI will change this and effectively automate nearly all DevOps flows.
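
To make the idea concrete, here is the kind of small operational script that code generation could produce on demand: poll a service’s health endpoint and report whether it needs attention. The URL, timeout, and retry values are illustrative assumptions, not part of any real system.

import time
import urllib.request
from urllib.error import URLError

HEALTH_URL = "http://localhost:8080/healthz"  # hypothetical endpoint

def check_health(retries=3, delay=2.0):
    # Return True if the service answers with HTTP 200 within the retry budget.
    for _ in range(retries):
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except URLError:
            pass
        time.sleep(delay)
    return False

if __name__ == "__main__":
    print("healthy" if check_health() else "unhealthy: escalate or roll back")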

The death of ChatOps 

ChatOps makes sense: we spend a lot of time in team chats. But its first iteration isn’t actually chat; it looks more like terminal commands typed into Slack or Teams. It’s not conversational, and it requires rigid syntax, cognitive overhead, and context-switching. The next iteration of ChatOps, powered by large language models (LLMs), will be able to work with genuinely conversational input. It will also be able to accumulate its conversations over time for greater context and better suggestions. Conversational tools and services that can point folks to the right use case, dataset, API, person, or team are just around the corner.
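
A minimal sketch of what that might look like in code, assuming a hypothetical call_llm function and leaving out the chat-platform plumbing entirely; the message format shown is an assumption, not any specific vendor’s API.

def call_llm(messages):
    # Hypothetical placeholder for a model call that accepts a message history.
    raise NotImplementedError("connect to a real LLM provider")

class OpsAssistant:
    def __init__(self):
        self.history = []  # accumulated conversation gives later answers context

    def ask(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        reply = call_llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

# bot = OpsAssistant()
# bot.ask("Why did checkout latency spike after this morning's deploy?")
# bot.ask("OK, roll back just that service.")  # earlier context carries over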

New generative search

Search engines are excellent at their job, presenting specific destinations in the form of URLs as results. But most queries are informational in nature, and sometimes the information being searched for doesn’t actually exist at any single URL. Generative search, like ChatGPT, is completely changing this: it offers declarative knowledge and presents it in an accessible way. This type of technology will cross over into DevOps. Folks will be able to ask questions about a Kubernetes deployment in natural language and get specific, easily digestible results.
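
One way this could look, sketched under the assumption of a hypothetical call_llm function: pull the live manifest with kubectl, attach the plain-language question, and let the model answer. The deployment name and question shown are made up for illustration.

import subprocess

def call_llm(prompt):
    # Hypothetical placeholder; swap in a real generative model API.
    raise NotImplementedError("connect to a real LLM provider")

def ask_about_deployment(name, question):
    # Fetch the deployment's current state so the model answers from real data.
    manifest = subprocess.run(
        ["kubectl", "get", "deployment", name, "-o", "yaml"],
        capture_output=True, text=True, check=True,
    ).stdout
    prompt = (
        "Given this Kubernetes deployment manifest:\n"
        f"{manifest}\n"
        f"Answer in plain language: {question}"
    )
    return call_llm(prompt)

# ask_about_deployment("checkout", "Why do pods restart after every rollout?")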

Generative troubleshooting at scale

Troubleshooting has proven time and again to be difficult due to data issues, poor change tracking, and a lack of cause-and-effect models. But with the right context, generative AI will separate the signal from the noise. It will help encode manual troubleshooting processes and navigate data to see whether there’s an actual problem and how it started. In short, it will streamline remediation by confirming issues, coordinating next steps, and ensuring the fix works.
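
As a small, self-contained example of encoding one manual step, the sketch below separates signal from noise in a log stream by counting the most frequent error messages before they are handed to a model (or a human) for confirmation. The log format and the ERROR marker are assumptions.

from collections import Counter

def summarize_errors(log_lines, top_n=5):
    # Count distinct error messages so the dominant failure stands out.
    errors = Counter(
        line.split("ERROR", 1)[1].strip()
        for line in log_lines
        if "ERROR" in line
    )
    return errors.most_common(top_n)

sample = [
    "2023-05-03T10:01:02 INFO request ok",
    "2023-05-03T10:01:03 ERROR connection refused: db-primary",
    "2023-05-03T10:01:04 ERROR connection refused: db-primary",
    "2023-05-03T10:01:05 ERROR timeout waiting for cache",
]
print(summarize_errors(sample))
# [('connection refused: db-primary', 2), ('timeout waiting for cache', 1)]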

Smarter chaos engineering

Chaos engineering deliberately injects failures into environments to force teams to respond to issues in real time and assess their processes. But today the scenarios are generic, and remediation remains slow and cumbersome; the exercise doesn’t necessarily improve it. Generative AI will change this by expanding the quantity and quality of the situations it creates for application management and operations teams.