March 6, 2025

[LIVE] Kubernetes Event-Driven Autoscaling (KEDA): Q&A

Tania Duggal
Technical Writer

In January 2025, Ant Weiss, our Chief Cluster Whisperer, hosted a webinar with Zbynek Roubalik, the CTO of Kedify and maintainer of KEDA. The session was packed with insights: an overview of Kubernetes Event-Driven Autoscaling (KEDA), why event-driven autoscaling matters, what benefits it brings to the table, and what challenges the community still needs to solve.

During the webinar, there were a lot of questions from the audience. We felt the answers provided by Zbynek and Ant could benefit a wider audience; therefore, we’re summarizing them in this post.

So, let's dig in!

Question: What are the pros and cons of using KEDA versus HPA based on CPU and memory?

Answer: There are certain workloads where CPU and memory metrics are enough. But once you need to depend on some external metrics—when the application is consuming something from an external system—then it’s much better to use KEDA.

KEDA builds on top of HPA, so you don’t lose the capabilities of HPA when using KEDA. Scaling based on resources is reactive: you only see the increase in resource usage after it happens, and then trigger the scale-out process. With event-driven scaling, events indicate an increase in demand before resource usage spikes, allowing you to scale more proactively.
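To make this concrete, here’s a minimal sketch of a ScaledObject that scales on queue depth instead of CPU. All names below (deployment, queue, connection string) are hypothetical, and RabbitMQ is just one of the many event sources KEDA supports:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    name: order-processor               # hypothetical Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders               # hypothetical queue name
        mode: QueueLength               # scale on queue depth
        value: "50"                     # target messages per replica
        host: amqp://user:pass@rabbitmq.default:5672/  # hypothetical; use TriggerAuthentication in production
```

Here the queue backlog, not CPU, drives the replica count, so scale-out starts as soon as messages pile up.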

Question: Is it possible to use KEDA instead of HPA and VPA?

Answer: Yes, you can use KEDA instead of HPA. With KEDA, you define a ScaledObject, and under the hood KEDA opens the necessary connections and generates an HPA resource. So, you can define the same CPU and memory resource scalers that are used with HPA. It’s essentially a one-to-one replacement for HPA because it still uses HPA under the hood.
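As a rough sketch, a ScaledObject that uses only the built-in cpu and memory triggers behaves like a plain HPA (the deployment name below is hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: web-app-scaler
spec:
  scaleTargetRef:
    name: web-app                # hypothetical Deployment
  minReplicaCount: 2
  maxReplicaCount: 10
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "70"              # target average CPU utilization (%)
    - type: memory
      metricType: Utilization
      metadata:
        value: "80"              # target average memory utilization (%)
```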

For VPA, the answer is no. Mixing VPA and HPA for a single deployment is not a good practice: HPA will scale the workload horizontally based on a specific metric, while VPA will try to scale it vertically based on the same metric, causing a conflict. So, you shouldn’t use them together. If you still want to achieve vertical pod autoscaling for HPA-based workloads, look into PerfectScale by DoiT because we know how to do it.

Question: What's the difference between KEDA vs. HPA with custom metrics using Prometheus?

Answer: First is the ease of setup; it is much easier to integrate KEDA with Prometheus. In addition, once you set up KEDA, you gain access to 65+ additional scalers beyond just Prometheus. The Prometheus adapter uses a single configuration, possibly a ConfigMap, to define all scaling settings for the entire cluster. With KEDA, scaling settings can be defined per workload, offering more flexibility.
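For illustration, here’s roughly what a per-workload Prometheus trigger looks like in KEDA. The deployment name, server address, and query below are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-scaler
spec:
  scaleTargetRef:
    name: api                    # hypothetical Deployment
  maxReplicaCount: 15
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090       # hypothetical Prometheus endpoint
        query: sum(rate(http_requests_total{app="api"}[2m]))   # hypothetical query
        threshold: "100"         # target value per replica
```

Because the trigger lives in the workload’s own ScaledObject, each team can tune its query and threshold without touching a cluster-wide adapter config.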

Question: Can only a single KEDA operator be active at a given time?

Answer: Yes, there is only a single controller handling the requests. The limitation lies in the metrics adapter component, and it ties back to HPA and how it communicates through the Kubernetes API. The adapter serves the external metrics endpoint through the Kubernetes API aggregation layer, and that interface is singular within the cluster.

Question: Is KEDA free to use? How is it different from KEDA Enterprise (Kedify)?

Answer: Yes, KEDA is an open-source project. It's free to use. KEDA Enterprise (Kedify) is basically built on top of KEDA. It has a bunch of enterprise features, including support, dashboards, and security fixes. The goal is to continue supporting the open-source project while also serving customers who have specific needs, larger deployments, or require certain features. When we started the project, the team's aim was to make Kubernetes event-driven autoscaling simple. The team wanted to solve a single problem with KEDA rather than create a tool that tries to do ten different things but doesn’t excel at any of them. The philosophy is similar to Unix utilities—each tool should do one thing well.

So, KEDA remains the core open-source project, and KEDA Enterprise (Kedify) builds on top of it.

>> Take a look at Guide to KEDA (Kubernetes Event-Driven Autoscaler)

Question: When will the HTTPScaledObject be available for production use in KEDA?

Answer: To handle incoming traffic, gather metrics in real time, and scale the application accordingly, there is an HTTP add-on for KEDA in the open-source version. The add-on has been in development for a long time, but it is not currently under very active development, and there is not much traffic or contribution to it.

It is still in beta, so the team does not recommend it for production use. That said, some users have adopted it successfully, and for certain use cases it works well. If you don’t have a high load, it may be fine.

The add-on uses a different resource called HTTPScaledObject, where you define various scaling options. It relies on an interceptor component, which directs all incoming traffic through it before reaching the workload. Based on the metrics gathered by the interceptor component, the workload can scale out or even scale to zero. However, the interceptor component is also a problematic part because it has to handle a lot of complexity. At the moment, it is not performing well enough for reliable production use.
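For reference, a minimal HTTPScaledObject looks roughly like the sketch below. The add-on’s API is still beta and has changed between releases, so field names may differ in your version; all names here are hypothetical:

```yaml
apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: web-frontend
spec:
  hosts:
    - web.example.com            # hypothetical host routed through the interceptor
  scaleTargetRef:
    name: web-frontend           # hypothetical Deployment
    kind: Deployment
    apiVersion: apps/v1
    service: web-frontend        # Service the interceptor forwards traffic to
    port: 8080
  replicas:
    min: 0                       # scale to zero when idle
    max: 10
```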

Therefore, we recommend using Kedify’s HTTP Scaler, which has been tested for production use.

Question: Does KEDA work with Kubernetes Jobs? For example, a CI/CD pipeline requiring a pod to be created dynamically for each CI run?

Answer: It’s not only for Jobs; it also works with other kinds of workloads like Deployments, StatefulSets, etc. For Job-style workloads such as CI runners, KEDA provides a dedicated ScaledJob resource that creates Jobs on demand, as shown in the sketch below.
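As a sketch of that Job-based pattern, here’s a hypothetical ScaledJob that spawns one CI runner Job per queued build. The image, queue, and connection details are assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: ci-runner
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: runner
            image: registry.example.com/ci-runner:latest   # hypothetical runner image
        restartPolicy: Never
  pollingInterval: 10            # seconds between queue checks
  maxReplicaCount: 20            # cap on concurrent Jobs
  triggers:
    - type: rabbitmq
      metadata:
        queueName: ci-builds     # hypothetical build queue
        mode: QueueLength
        value: "1"               # one Job per queued build
        host: amqp://user:pass@rabbitmq.ci:5672/   # hypothetical; use TriggerAuthentication in production
```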

Question: What if our organization doesn’t use a message broker service like Kafka or any other? What other sources or services can KEDA support?

Answer: You can find the list of all available scalers here, and you can also add your own.

Question: How is KEDA different from Karpenter?

Answer: Karpenter is for node scaling; KEDA is for pods. They operate at different layers and complement each other.

Question: How do I configure KEDA for time-based autoscaling with locked resources?

Scenario: I want to lock resources and prevent autoscaling from scaling down pods during specific tax periods, even if resource utilization drops below the 80% threshold, so that scale-down doesn’t happen while the process is running and the application stays available and stable. How can I implement this with KEDA to prevent pod terminations and lock resources during specific periods (e.g., tax months)?

Answer: Time-based scheduling is best achieved using the Cron scaler in KEDA.
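Here’s a rough sketch of that approach: a cron trigger holds a replica floor during a hypothetical tax window (March 1 through April 15), while a regular utilization trigger handles scaling the rest of the year. When a ScaledObject has multiple triggers, the underlying HPA uses the highest replica count any trigger computes, so the cron trigger effectively blocks scale-down below the floor during the window:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: tax-app-scaler
spec:
  scaleTargetRef:
    name: tax-app                # hypothetical Deployment
  minReplicaCount: 2
  maxReplicaCount: 30
  triggers:
    - type: cron
      metadata:
        timezone: America/New_York
        start: 0 0 1 3 *         # 00:00 on March 1
        end: 0 0 16 4 *          # 00:00 on April 16
        desiredReplicas: "10"    # replica floor held during the window
    - type: cpu                  # normal utilization-based scaling outside the window
      metricType: Utilization
      metadata:
        value: "80"
```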

Question: Can KEDA be integrated with Confluent Kafka?

Answer: Yes. There’s a Kafka scaler, and it works with Confluent-managed clusters as well.
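A hypothetical sketch of a Kafka trigger pointed at Confluent Cloud. The bootstrap server, topic, consumer group, and the keda-kafka-creds TriggerAuthentication (which would hold the Confluent API key and secret) are all assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
spec:
  scaleTargetRef:
    name: kafka-consumer                 # hypothetical Deployment
  maxReplicaCount: 12
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: pkc-12345.us-east-1.aws.confluent.cloud:9092   # hypothetical cluster
        consumerGroup: orders-consumer   # hypothetical consumer group
        topic: orders                    # hypothetical topic
        lagThreshold: "50"               # target consumer lag per replica
        sasl: plaintext                  # Confluent Cloud uses SASL/PLAIN over TLS
        tls: enable
      authenticationRef:
        name: keda-kafka-creds           # hypothetical TriggerAuthentication with the API key/secret
```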

Question: How do you see KEDA’s benefits when it comes to node pool types and node pool optimization?

Answer: KEDA can optimize node utilization when combined with a smart dynamic node autoscaler like Karpenter or NAP. KEDA will change the number of pods based on events, while the node scaler will provision nodes to schedule these pods on. This requires, of course, correctly defining the resource requests for the scaled pods—something PerfectScale can do for you. 

Question: Can KEDA be used with all the main cloud hyperscalers’ Kubernetes offerings, or is OCI (Oracle Cloud Infrastructure) an exception?

Answer: While KEDA can definitely be installed and used on Kubernetes clusters on Oracle Cloud, there are currently no community-provided scalers for integrating with Oracle Cloud services. There are scalers for consuming events from all the other cloud hyperscalers.

Question: Is KEDA supported with AWS SQS?

Answer: Yes, it is supported via the aws-sqs-queue scaler.
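A minimal hypothetical sketch: the queue URL and region are assumptions, and in practice credentials would come from a TriggerAuthentication or pod identity (e.g., IRSA):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-worker-scaler
spec:
  scaleTargetRef:
    name: sqs-worker             # hypothetical Deployment
  maxReplicaCount: 25
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/jobs   # hypothetical queue
        queueLength: "5"         # target visible messages per replica
        awsRegion: us-east-1
      authenticationRef:
        name: keda-aws-creds     # hypothetical TriggerAuthentication (or use pod identity)
```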

Question: What questions do you ask customers to better understand their challenges with K8s, so you can position KEDA?

Answer: KEDA is the solution when resource-based horizontal autoscaling doesn’t work well: when your current HPA implementation can’t provide the required QoS with regard to latency, reliability, or throughput. Or maybe performance is fine, but you’re paying too much for the infrastructure. Either way, it’s time to review your autoscaling practices and evaluate scaling based on events rather than resources.

>> Take a look at How to Save Costs by Sleeping Kubernetes Resources During Off-Hour with KEDA?

Question: In the future, is there a plan to move KEDA to an enterprise version?

Answer: KEDA will remain an open-source project, free to use. As covered above, KEDA Enterprise (Kedify) is built on top of KEDA and adds enterprise features, including support, dashboards, and security fixes. The goal is to continue supporting the open-source project while also serving customers who have specific needs, larger deployments, or require certain features.

Question: Can KEDA know how much resource is required to be assigned to a pod when the incoming event has unpredictable workloads and different requirements for requests? Or does it only scale horizontally?

Answer: KEDA only scales horizontally. For vertical autoscaling, check out PerfectScale; we have a free community offering.

Final Thoughts

KEDA makes event-driven autoscaling in Kubernetes easier, more flexible, and more efficient, especially when resource-based scaling falls short. Whether you’re dealing with unpredictable workloads, integrating external services, or optimizing your infrastructure costs, KEDA provides a powerful solution to scale your applications dynamically.

If you’re new to KEDA, start by exploring the KEDA docs to understand how it can fit into your Kubernetes environment. And if you're looking for a way to optimize both horizontal and vertical scaling, give PerfectScale a try. Start your free trial today and see how it can fine-tune your autoscaling strategy!
