How to create Kubernetes Jobs with Python

Posted in: Cloud, Google Cloud Platform, Technical Track

In this blog post I'll give a quick guide, with some code examples, on how to deploy a Kubernetes Job programmatically, using Python as the language of choice.

For this I'm using GKE (Google Kubernetes Engine), logging via Stackdriver, and an image available on Google Container Registry. The architecture should look something like this:

The code that I created:

  • A Dockerfile for my container
  • A Python App that has the code to run (this will be the Job)

How does all this work?

  1. Commit the code to GCP Cloud Source Repositories
  2. A Cloud Build trigger builds the container image
  3. Create a trigger (can be a CronJob) that runs the code that deploys the Job.
    1. For this exercise, I’m going to trigger the Job creation from my own laptop.

Now the code. The difficult part here was dealing a bit with the documentation. For this code I used the official Kubernetes Python client library (`kubernetes`):

It provides the Kubernetes abstraction layer and greatly simplifies the work.

Now, to deploy a Kubernetes Job, our code needs to build the following objects:

  • Job object
    • Contains a metadata object
    • Contains a job spec object
      • Contains a pod template object
        • Contains a pod template spec object
          • Contains a container object

You can walk through the Kubernetes library code and check how it gets and forms the objects. Also, don't forget that all of this is based on the official Kubernetes Job specification.

Without much else to say, you can check the full code here:


About the Author

Carlos Rolo is a DataStax Certified Cassandra Architect, and has deep expertise with distributed architecture technologies. Carlos is driven by challenge, and enjoys the opportunities to discover new things and new ways of learning that come with working at Pythian. He has become known and trusted by customers and colleagues for his ability to understand complex problems, and to work well under pressure. He prides himself on being a tenacious problem solver, while remaining a calm and positive presence on any team. When Carlos isn't working he can be found playing water polo or enjoying his local community. Carlos holds a Bachelor of Electro-technical Engineering, and a Master of Control Systems and Automation.

5 Comments

Dear Carlos,
when we call create_namespaced_job, is there a way to wait for the job to be done?


I got a delete job error.
You should add body=V1DeleteOptions(...)


Mr. Rolo,
Great post. It really helped me get started on my project. Just one note: you may want to update the program to reflect the new API version. I had to take out 'include_uninitialized=False,' in a few places to get the program to run.

Thanks again for such a useful post.


Thank you for the code example.

Actually, after some research and trial, I found that you could enable kube_cleanup_finished_jobs() to clean up all dependent pods without calling kube_delete_empty_pods() directly,
by changing the kube_cleanup_finished_jobs() setting:
body = client.V1DeleteOptions(propagation_policy='Background')

Also, please take out 'include_uninitialized=False,'

* reference


fails for me

{MaxRetryError}HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /apis/batch/v1/ (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 61] Connection refused'))

