In this session, we will learn how to define and run multi-container applications with Docker Compose. Then we will show how to deploy and scale them seamlessly to a cluster with Docker Swarm, and how Amazon EC2 Container Service (ECS) eliminates the need to install, operate, and scale your own cluster-management infrastructure. We will also walk through some best-practice patterns customers use to run their microservices platforms and batch jobs. Sample code and Compose templates will be provided on GitHub afterwards.
This document introduces Docker Compose, a tool for defining and running multi-container Docker applications. Docker Compose uses a YAML file to configure and run multi-service Docker apps. The workflow has three steps: define the app's environment in a Dockerfile, define its services in a Compose file, and start all the containers with a single command. The document also covers topics like networking, environment variables, and installing Docker Compose. Hands-on labs teach Compose through examples such as WordPress.
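The three-step workflow described above can be sketched with a minimal Compose file; the service names and image tags here are illustrative assumptions, not taken from the original deck:

```yaml
# docker-compose.yml — a minimal two-service app (illustrative names)
version: "3"
services:
  web:
    build: .            # image built from the Dockerfile in this directory
    ports:
      - "8000:8000"     # host:container port mapping
  db:
    image: postgres:13  # pulled from a registry; no Dockerfile needed
    environment:
      POSTGRES_PASSWORD: example
```

With this file in place, a single `docker-compose up` starts both containers and puts them on a shared network where `web` can reach `db` by its service name.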
A hands-on workshop that covers 18 best practices in 4 categories, or in other words, ✅️ Dos & Don'ts.
After a general introduction, we will look at the essential practices (the must-dos), then move on to the image practices, the security practices, and finally some general practices.
Please note, this workshop assumes that you have a basic knowledge of Docker.
Hands-on repo:
https://github.com/aabouzaid/docker-best-practices-workshop
The document introduces Docker, a container platform. It discusses how Docker addresses issues with deploying different PHP projects that have varying version requirements by allowing each project to run isolated in its own container with specified dependencies. It then covers key Docker concepts like images, containers, linking, exposing ports, volumes, and Dockerfiles. The document highlights advantages of Docker like enabling applications to run anywhere without compatibility issues and making deployment more efficient.
This document provides an introduction to Docker. It begins by explaining the differences between bare metal servers, virtualization, and containerization. It then discusses how Docker uses containerization to package applications with their dependencies in lightweight containers that can run on any infrastructure. Key Docker concepts covered include images, containers, and the Docker engine. The document also briefly outlines Docker's history and commercial editions.
Jenkins is a tool that supports continuous integration by automatically building, testing, and deploying code changes. It integrates code changes frequently, at least daily, to avoid "big bang" integrations. Jenkins runs builds and tests across multiple platforms using slave nodes. It supports various source control systems and build tools and notifies developers of failed builds or tests through email or other plugins.
User authentication and authorization in Kubernetes, by Neependra Khare
This document discusses user authentication and authorization in Kubernetes. It describes how Kubernetes uses external services like Active Directory and LDAP for user authentication. It also explains the different types of users in Kubernetes including normal users, service accounts, and how kubeconfig files are used. The main authorization mechanism in Kubernetes is Role-Based Access Control (RBAC) which uses roles and role bindings to control access to Kubernetes API resources and operations.
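As a concrete illustration of the RBAC model summarized above, a Role granting read access to Pods can be bound to a normal user; all names and the namespace here are hypothetical, not taken from the document:

```yaml
# role.yaml — read-only access to Pods in the "dev" namespace (illustrative names)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# rolebinding.yaml — grants the Role to a normal user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                 # authenticated externally, e.g. via LDAP
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applying both objects with `kubectl apply -f` lets the user `jane` list and watch Pods in `dev`, but nothing else.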
Docker is a system for running applications in isolated containers. It addresses issues with traditional virtual machines by providing lightweight containers that share resources and allow applications to run consistently across different environments. Docker eliminates inconsistencies in development, testing and production environments. It allows applications and their dependencies to be packaged into a standardized unit called a container that can run on any Linux server. This makes applications highly portable and improves efficiency across the entire development lifecycle.
A basic introductory slide set on Kubernetes: what Kubernetes does, what it does not do, which terms are used (Containers, Pods, Services, Replica Sets, Deployments, etc.), and how basic interaction with a Kubernetes cluster is done.
Slide deck of the presentation given at the Hacktoberfest 2020 Singapore event. The talk and demo showed GitHub Actions in practice, with examples of GitHub Super Linter, SonarCloud integration, and CI/CD to Azure Kubernetes Service.
The recording of the session is available on YouTube
https://youtu.be/sFvCj62wmWU?t=6732&WT.mc_id=AZ-MVP-5003170
Docker is a tool that allows users to package applications into containers to run on Linux servers. Containers provide isolation and resource sharing benefits compared to virtual machines. Docker simplifies deployment of containers by adding images, repositories and version control. Popular components include Dockerfiles to build images, Docker Hub for sharing images, and Docker Compose for defining multi-container apps. Docker has gained widespread adoption due to reducing complexity of managing containers across development and operations teams.
This document describes a GitLab CI/CD workflow using GitLab, Docker, GitLab Runner, and Ansible. Developers push code to a GitLab repo, which triggers a GitLab Runner job to build a Docker image. Ansible is then used to provision servers based on the environment. The benefits of using Ansible include managing everything from one playbook to clone code, build containers, deploy applications, and more. Examples of using shell scripts versus Ansible playbooks are provided.
This document discusses Docker containers and provides an introduction. It begins with an overview of Docker and how it uses containerization technology like Linux containers and namespaces to provide isolation. It describes how Docker images are composed of layers and how containers run from these images. The document then explains benefits of Docker like portability and ease of scaling. It provides details on Docker architecture and components like images, registries and containers. Finally, it demonstrates how to simply run a Docker container with a command.
This presentation about Docker will help you learn what Docker and Docker Compose are, the benefits of Docker Compose, the differences between Docker Compose and Docker Swarm, and the basic commands of Docker Compose, finishing with a demo. Docker is a tool that runs containers, whereas Docker Compose runs multiple containers as a single service. With Compose, containers run in isolation but can interact with each other. After watching this video, you will be able to create a Docker Compose YAML file and run multiple containers at a time. Now, let us get started and understand how Docker Compose works.
Below are the topics covered in this Docker compose presentation:
1. What is Docker?
2. What is a Docker Compose?
3. Benefits of Docker compose
4. Docker Compose vs Docker Swarm
5. Basic commands of Docker Compose
6. Demo
Why learn DevOps?
Simplilearn’s DevOps training course is designed to help you become a DevOps practitioner and apply the latest DevOps methodology to automate your software development lifecycle right out of the class. You will master configuration management; continuous integration, deployment, delivery, and monitoring using DevOps tools such as Git, Docker, Jenkins, Puppet, and Nagios in a practical, hands-on, and interactive approach. The DevOps training course focuses heavily on the use of Docker containers, a technology that is revolutionizing the way apps are deployed in the cloud today and a critical skill set to master in the cloud age.
After completing the DevOps training course you will have hands-on expertise in various aspects of the DevOps delivery model. The practical learning outcomes of this DevOps training course are:
An understanding of DevOps and the modern DevOps toolsets
The ability to automate all aspects of a modern code delivery and deployment pipeline using:
1. Source code management tools
2. Build tools
3. Test automation tools
4. Containerization through Docker
5. Configuration management tools
6. Monitoring tools
Who should take this course?
DevOps career opportunities are thriving worldwide. DevOps was featured as one of the 11 best jobs in America for 2017, according to CBS News, and data from Payscale.com shows that DevOps Managers earn as much as $122,234 per year, with DevOps engineers making as much as $151,461. DevOps jobs are the third-highest tech role ranked by employer demand on Indeed.com but have the second-highest talent deficit.
This DevOps training course will benefit the following professional roles:
1. Software Developers
2. Technical Project Managers
3. Architects
4. Operations Support
5. Deployment Engineers
6. IT Managers
7. Development Managers
Learn more at https://www.simplilearn.com/cloud-computing/devops-practitioner-certification-training
Traditional virtualization technologies have been used by cloud infrastructure providers for many years to provide isolated environments for hosting applications. These technologies use full-blown operating system images to create virtual machines (VMs). In this architecture, each VM needs its own guest operating system to run application processes. More recently, with the introduction of the Docker project, the Linux Containers (LXC) virtualization technology became popular and attracted widespread attention. Unlike VMs, containers do not need a dedicated guest operating system to provide OS-level isolation; rather, they provide the same level of isolation on top of a single operating-system instance.
An enterprise application may need a server cluster to handle high request volumes. Running an entire server cluster in Docker containers on a single Docker host introduces the risk of a single point of failure. Google started a project called Kubernetes to solve this problem. Kubernetes manages Docker containers across a cluster of Docker hosts, providing an API on top of the Docker API for managing containers on multiple hosts with many additional features.
Autoscaling of workloads in the Kubernetes environment: a slide deck about Pod and Node autoscaling and the machinery behind it that makes it happen, plus a few recommendations for implementing Pod and Node autoscaling.
Docker Explained | What Is A Docker Container? | Docker Simplified | Docker T..., by Edureka!
(DevOps Training: https://www.edureka.co/devops)
This Docker Explained PPT will explain to you the fundamentals of Docker with a hands-on. Below are the topics covered in the PPT:
Problems Before Docker
Virtualization vs Containerization
What is Docker?
How does Docker work?
Docker Components
Docker Architecture
Docker Compose & Docker Swarm
Hands-On
A comprehensive walkthrough of how to manage infrastructure-as-code using Terraform. This presentation includes an introduction to Terraform, a discussion of how to manage Terraform state, how to use Terraform modules, an overview of best practices (e.g. isolation, versioning, loops, if-statements), and a list of gotchas to look out for.
For a written and more in-depth version of this presentation, check out the "Comprehensive Guide to Terraform" blog post series: https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca
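To illustrate the module usage the presentation covers, a minimal module call might look like the following; the module path, variables, and outputs are illustrative assumptions, not taken from the talk:

```hcl
# main.tf — calling a reusable module (illustrative names and paths)
provider "aws" {
  region = "us-east-1"
}

module "web_cluster" {
  source        = "./modules/web-cluster"  # local module directory (assumed layout)
  instance_type = "t3.micro"
  min_size      = 2
  max_size      = 5
}

output "cluster_dns" {
  value = module.web_cluster.dns_name      # assumes the module exports this output
}
```

Keeping the cluster logic in a module like this lets separate environments (e.g. staging and production) reuse it with different variable values, which is the isolation-and-versioning pattern the talk advocates.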
Docker allows users to package applications with all their dependencies into standardized units called containers that can run on any Linux server. Containers are more lightweight than virtual machines because they share the host operating system and only require the additional libraries and binaries needed to run the application rather than a full guest operating system. Docker uses containers and an image format to deploy applications in a consistent manner across development, testing, and production. The document provides examples of how to define a Dockerfile to build an image, run containers from images using docker-compose, and common Docker commands.
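A minimal Dockerfile of the kind the document describes might look like the following; the base image and application layout are illustrative assumptions:

```dockerfile
# Dockerfile — package a small Python app with its dependencies (illustrative)
FROM python:3.11-slim          # shared base layer; no full guest OS required
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]       # process started when the container runs
```

`docker build -t myapp .` builds the image from this file, and `docker run myapp` starts a container from it; the same image runs identically in development, testing, and production.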
Kubernetes is an open-source platform for managing containerized applications across multiple hosts. It provides tools for deployment, scaling, and management of containers. Kubernetes handles tasks like scheduling containers on nodes, scaling resources, applying security policies, and monitoring applications. It ensures containers are running and if not, restarts them automatically.
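The self-healing behavior described above is typically expressed declaratively through a Deployment; this is a generic sketch with illustrative names, not taken from the document:

```yaml
# deployment.yaml — Kubernetes keeps the desired replica count running (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired state: three running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image
        ports:
        - containerPort: 80
```

If a container or node fails, the controller notices the actual state has drifted from the declared three replicas and schedules replacements automatically.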
Michel Schildmeijer gave a presentation on Oracle's Enterprise Container Platform Verrazzano. Verrazzano is an open source container platform from Oracle that provides a full stack for managing hybrid and multi-cloud environments using containers and Kubernetes. It includes components for container runtime, orchestration, identity and access management, service routing, logging and tracing. Verrazzano allows organizations to run applications like WebLogic and Helidon microservices on Kubernetes across public and private clouds.
This document provides an overview and agenda for a Docker networking deep-dive presentation. The presentation covers key concepts in Docker networking, including libnetwork, the Container Networking Model (CNM), multi-host networking capabilities, service discovery, load balancing, and new features in Docker 1.12 such as routing mesh and secured control/data planes. It demonstrates Docker networking use cases like default bridge networks, user-defined bridge networks, and overlay networks, and also covers networking drivers, Docker 1.12 swarm mode networking functionality, and how concepts like routing mesh and load balancing work.
Docker Datacenter Overview and Production Setup Slides, by Docker, Inc.
An overview of Docker Datacenter and Universal Control Plane. We will cover how to install it for production and integrate Docker Trusted Registry.
Led by DDC + UCP Champ:
Vivek Saraswat
Experience Level: Attendees need no prior experience with Docker, but should be familiar with the basic Linux command line.
OpenShift, Docker, Kubernetes: The next generation of PaaS, by Graham Dumpleton
The document discusses how platforms like OpenShift, Docker, and Kubernetes have evolved from earlier PaaS technologies to provide next generation platforms that enable automated builds, deployments, orchestration, and security across containers. It notes how these platforms allow applications to be deployed across custom strategies rather than being constrained to a single way of working, and how they integrate with existing CI/CD tools. The document encourages gradually adopting new tooling as it makes sense and provides various resources for trying OpenShift.
Shipping Applications to Production in Containers with Docker, by Jérôme Petazzoni
This document provides an overview and introduction to using Docker in production environments. It discusses how Docker can help with "solved" problems like installing, building, and distributing applications. It also covers important areas for production Docker usage, such as service discovery, orchestration, performance, configuration management, and sysadmin tasks. The document outlines various approaches in each area and notes that there are often multiple valid solutions to consider.
Orchestration, resource scheduling… What does that mean? Is this only relevant for data centers with thousands of nodes? Should I care about Mesos, Kubernetes, or Swarm when all I have is a handful of virtual machines? The motto of public cloud IaaS is "pay for what you use," so in theory, if I deploy my apps there, I'm already getting the best "resource utilization," aka "bang for my buck," right? In this talk, we will answer those questions, and a few more. We will define orchestration, scheduling, and related terms, and show what it's like to use a scheduler to run containerized applications.
Cgroups, namespaces, and beyond: what are containers made from? (DockerCon Eu..., by Jérôme Petazzoni
Linux containers are different from Solaris Zones or BSD Jails: they use discrete kernel features like cgroups, namespaces, SELinux, and more. We will describe those mechanisms in depth, as well as demo how to put them together to produce a container. We will also highlight how different container runtimes compare to each other.
This talk was delivered at DockerCon Europe 2015 in Barcelona.
Anatomy of a Container: Namespaces, cgroups & Some Filesystem Magic - LinuxCon, by Jérôme Petazzoni
Containers are everywhere. But what exactly is a container? What are they made from? What's the difference between LXC, systemd-nspawn, Docker, and the other container systems out there? And why should we care about specific filesystems?
In this talk, Jérôme will show the individual roles and behaviors of the components making up a container: namespaces, control groups, and copy-on-write systems. Then, he will use them to assemble a container from scratch, and highlight the differences (and similarities) with existing container systems.
Visual guide to selling software as a service by @prezly, from Prezly
It took my team years to find an efficient way of getting new customers. First, I’ll show you how we messed up and then how we got on the road to conversion success using the pirate metrics framework.
React is a different way to write JavaScript apps. When it was introduced at JSConf US in May, the audience was shocked by some of its design principles. One sarcastic tweet from an audience member ended up describing React’s philosophy quite accurately: https://twitter.com/cowboy/status/339858717451362304
We’re trying to push the limits of what’s possible on the web with React. My talk will start with a brief introduction to the framework, and then dive into three controversial topics: Throwing out the notion of templates and building views with JavaScript, “re-rendering” your entire application when your data changes, and a lightweight implementation of the DOM and events.
10 commandments for better Android development, by Trey Robinson
This document provides 10 commandments for better Android development. It recommends choosing a minimum SDK of 16 or higher to reach a wider audience. It advises against writing boilerplate code and instead using libraries to handle tasks like view inflation and click handling. It also recommends understanding build configurations, using Intents over permissions when possible, consuming REST APIs with libraries instead of AsyncTasks, and following effective Java practices. The document encourages leveraging open source libraries, attending meetups, and following experts on Twitter to continue improving skills.
Composer has triggered a renaissance in the PHP community: it has changed the way we deal with other people’s code, and it has changed the way we share our own. We are all slowly moving to Composer, from WordPress to Joomla and Drupal and the frameworks in between. But many of us mistreat Composer, follow outdated practices, or simply lack a few tricks. In this session I’ll give you the low-down on how to use Composer the right way.
This document summarizes a presentation by Lukas Fittl about his experience co-founding a tech startup called Efficient Cloud. Some key points:
- Efficient Cloud launched in 2010 but failed to gain any customers or revenue, despite having a team of developers, sales, and marketing staff.
- Fittl analyzes what went wrong, including not validating assumptions about customers, focusing too much on building features rather than iterating based on customer feedback, and not differentiating their product offering enough.
- He discusses lessons learned around the importance of launching minimal viable products quickly through prototyping, measuring customer response, and iterating based on learning. Traction with real customers should come before fundraising.
Performance and testing are just one aspect of code; to really be successful, your code needs to be readable, maintainable, and generally easy to comprehend and work with. This talk draws from my own experience applying the techniques of object calisthenics and code readability within an existing team. It will help you identify trouble areas, learn how to refactor them, and train you to write better code in future projects while avoiding common pitfalls.
The document discusses PHP objects internally. It covers how objects are represented as zvals and stored in an object store. Objects use a unique handle to reference their data in the store. Creating a new object only happens through new or clone, which add it to the store. Objects are not duplicated even if the zval is duplicated. The garbage collector helps free circular references. Object handlers define object behaviors, and can be overridden to customize objects.
Presentation made at the GTA meetup on 2012-02-07.
Object Calisthenics is a set of exercise rules for reaching better code: maintainable, testable, and readable.
The document discusses using Android and Arduino together to program "things". It describes how the UDOO board allows running Android and communicating with an Arduino-compatible board for building smart devices and interactive things. It provides an overview of developing applications using the Android Accessory Development Kit (ADK) to interface Android with Arduino, covering aspects like setting up the development environment, manifest files, accessing I/O streams, and communicating between the two boards.
Kicking the Bukkit: Anatomy of an open source meltdown (Ryan Michela)
On September 3rd, 2014, a disgruntled ex-developer erased from the internet four years of work by over 150 developers. This is the story of the Bukkit Minecraft server project's demise, and how you can protect your project from its fate.
Presented at Silicon Valley Code Camp 2014: http://www.siliconvalley-codecamp.com/Session/2014/kicking-the-bukkit-anatomy-of-an-open-source-meltdown
The document discusses the history and current state of climate change research. It notes that scientific consensus has formed around the occurrence of climate change due to human activity like fossil fuel burning. Recent studies have found that climate change effects are happening faster and more extensively than previous estimates, with impacts including more extreme weather, rising sea levels, and species endangerment.
MySQL users commonly ask: Here's my table, what indexes do I need? Why aren't my indexes helping me? Don't indexes cause overhead? This talk gives you some practical answers, with a step by step method for finding the queries you need to optimize, and choosing the best indexes for them.
Is React.js a library or a framework? In any case, it is a new way of working that represents a revolution in how web projects are built. It has very particular characteristics that allow us, for instance, to render React code from the server side, or to include React components from Twig tags. During this talk we will present React.js, explore how to take advantage of it from PHP projects, and give answers to practical problems such as universal (isomorphic) rendering and the generation of React.js forms from Symfony forms without duplicating effort.
Enterprise PHP: mappers, models and services (Aaron Saray)
One of the greatest failures of PHP is not that it is a bad language, but that it isn't marketed as a good language. Since it's so easy to do things badly, a lot of people do just that. However, there is hope. Businesses ARE using PHP for the Enterprise! You just need to apply a solid foundation to the language to get to that Enterprise level.
With our mind on the Enterprise sphere, it's time to take PHP programming to a more advanced level. To get the most out of this talk, you should already be familiar with object oriented programming concepts in PHP. To begin with, I'll talk about the need for a service layer, data mappers, and business object models. Then I'll demonstrate examples of how to do it right. As an added bonus, I'm not going to just tell you why it's best practice and a "good idea" - I'm going to SHOW you. This might be the first time in your life that you realize that changes in architecture, supporting infrastructure, and business requirements don't have to be as painful as you once thought!
Setting up a cross-compilation environment for projects based on Docker (corehard_by)
How to quickly and easily set up and update cross-compilation environments for projects targeting various platforms (based on Docker), how to switch between them quickly, and how to use these building blocks to organize CI and testing (based on GitLab and Docker).
This document provides instructions for setting up a hack environment using Docker containers. It discusses pros and cons of different options like using a real server, cloud services, or virtual machines. Docker is recommended for its ease of use, templates, and ability to run on multiple platforms. Example Dockerfiles and Docker Compose files are provided to set up environments like IIS, Nginx, LEMP stacks, and vulnerable apps. Specific vulnerabilities like Heartbleed and DHClient RCE are demonstrated using Docker images. Finally, Docker images for security tools like Kali Linux and REMnux are mentioned.
An overview of our experiments at Industrial Light and Magic to create a fully cloud based pipeline, based on Mesos, Docker and automated with Ansible.
GDG-ANDROID-ATHENS Meetup: Build in Docker with Jenkins (Mando Stam)
The document discusses automating an Android application build process using Docker and Jenkins. It describes how previously the build was done manually across multiple machines. The proposed solution is to create Docker images with the Android SDK, NDK and other build tools. These images would be used as build agents in Jenkins. Several challenges are addressed such as setting environment variables and running builds interactively in Docker containers. Defining properties files and caching downloads are techniques used to optimize the build process.
This is the journey of a developer who goes from docker-compose to kompose to opencompose. Which tool can best help her move to Kubernetes? Find out in the slides. The slides also include a demo showing how these tools can help.
This talk was presented at DevConf India on May 12th 2017. DevConf India was a parallel track with rootconf 2017. Visit devconf.in to know more.
1. Docker is a container platform that packages applications and dependencies to run seamlessly in any computing environment. It helps eliminate issues caused by differences in computing environments.
2. Kitematic provides a graphical user interface for Docker that makes it easy to run Docker containers without using the command line. It allows visually managing containers.
3. The Docker CLI can be used to run containers by pulling images from Docker Hub, a registry for Docker images, and using commands like docker run to launch containers from those images.
This document introduces Docker and discusses its benefits for hosting web applications. It explains that Docker provides an abstraction layer between applications and operating systems using containers, allowing applications to run consistently across different computing environments. Key points covered include:
- Docker images contain application code and dependencies to run consistently on any infrastructure.
- Containers are lightweight and decoupled from underlying infrastructure, providing efficient usage of resources.
- Composing systems with Docker Compose and orchestrating containers with Kubernetes allows scaling applications across multiple machines.
- Docker is open source but also a company, and many major companies support its use for development, testing, and production deployments in private data centers and public clouds.
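The bullet points above can be made concrete with a minimal Dockerfile sketch (the Node.js base image, file names, and port are illustrative assumptions, not from the talk): the resulting image bundles application code together with its dependencies, so it runs the same on any infrastructure.

```dockerfile
# Hypothetical Dockerfile for a small Node.js web app:
# the image carries both the code and its dependencies.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev     # install dependencies into the image
COPY . .                  # add application code
EXPOSE 3000
CMD ["node", "server.js"]
```

Built once with `docker build -t myapp .`, the same image can then run unchanged on a laptop, a CI runner, or a production host.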
Developing and Deploying PHP with Docker (Patrick Mizer)
The document discusses using Docker for developing and deploying PHP applications. It begins with an introduction to Docker, explaining that Docker allows applications to be assembled from components and eliminates friction between development, testing and production environments. It then covers some key Docker concepts like containers, images and the Docker daemon. The document demonstrates building a simple PHP application as a Docker container, including creating a Dockerfile and building/running the container. It also discusses some benefits of Docker like portability, separation of concerns between developers and DevOps, and immutable build artifacts.
DCEU 18: Building Your Development Pipeline (Docker, Inc.)
This document discusses building a development pipeline using containers. It outlines using containers for building images, automated testing, security scanning, and deploying to production. Containers make environments consistent and reproducible. The pipeline includes building images, testing, security scanning, and promoting images to production. Methods discussed include using multi-stage builds to optimize images, leveraging Buildkit for faster builds, and parallel testing across containers. Automated tools are available to implement rolling updates and rollbacks during deployments.
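The multi-stage build technique mentioned above can be sketched as follows (the Go toolchain, paths, and image tags are illustrative assumptions): the first stage carries the full build toolchain, while the final image ships only the compiled artifact, keeping it small.

```dockerfile
# Hypothetical multi-stage build: compile in a full toolchain image,
# ship only the static binary in a minimal runtime image.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]
```

With BuildKit enabled (e.g. `DOCKER_BUILDKIT=1 docker build .`), independent stages can build in parallel and stages not needed for the requested target are skipped, which is part of the speedup the talk refers to.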
This document summarizes Dockerizing a Django application. It describes the speakers' experiences moving from a non-Dockerized setup with many issues, like outdated images and long recovery times, to a Dockerized setup with improved scalability, documentation, and development workflows. Key aspects of the new setup include using Docker Compose to run multiple services, Docker Machine to provision environments, and Docker Swarm for production deployments across multiple instances.
ContainerDayVietnam2016: Dockerize a small business (Docker-Hanoi)
This document discusses how Docker can transform development and deployment processes for modern applications. It outlines some of the challenges of developing and deploying applications across different environments, and how Docker addresses these challenges through containerization. The document then provides examples of how to dockerize a Rails and Python application, set up an Nginx reverse proxy with Let's Encrypt, and configure a Docker cluster for continuous integration testing.
The document provides an agenda for a DevOps with Containers training over 4 days. Day 1 covers Docker commands and running containers. Day 2 focuses on Docker images, networks, and storage. Day 3 introduces Docker Compose. Day 4 is about Kubernetes container orchestration. The training covers key Docker and DevOps concepts through presentations, videos, labs, and reading materials.
The document discusses using Docker as a development environment. It explains what Docker is, how it works using images and containers, and its benefits like having the same environment locally as production. It then provides examples of using Docker with Ruby on Rails applications, including creating Dockerfiles, using Docker Compose to run multiple services like the app and database, and caching gems with a Docker volume. Links are also included for additional reading on using Docker for development.
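A docker-compose.yml along the lines described above might look like this (service names, image tags, and paths are assumptions for illustration); the named volume keeps installed gems across container rebuilds, which is the caching trick the summary mentions.

```yaml
# Hypothetical Compose file for a Rails app with a Postgres database
# and a named volume that caches installed gems between rebuilds.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
  app:
    build: .
    command: bundle exec rails server -b 0.0.0.0
    volumes:
      - .:/app                          # live-mount source for development
      - bundle_cache:/usr/local/bundle  # gems survive container rebuilds
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  bundle_cache:
```

A single `docker compose up` then starts both the app and its database with the same environment every developer gets locally.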
Scaling Docker Containers using Kubernetes and Azure Container Service (Ben Hall)
This document discusses scaling Docker containers using Kubernetes and Azure Container Service. It begins with an introduction to containers and Docker, including how containers improve dependency and configuration management. It then demonstrates building and deploying containerized applications using Docker and discusses how to optimize Docker images. Finally, it introduces Kubernetes as a tool for orchestrating containers at scale and provides an example of deploying a containerized application on Kubernetes in Azure.
This document provides an overview of Docker and containers. It begins with a brief introduction to 12 Factor Applications methodology and then defines what Docker is, explaining that containers utilize Linux namespaces and cgroups to isolate processes. It describes the Docker software and ecosystem, including images, registries, Docker CLI, Docker Compose, building images with Dockerfile, and orchestrating with tools like Kubernetes. It concludes with a live demo and links to additional resources.
Accelerate your software development with Docker (Andrey Hristov)
Docker is in all the news, and this talk presents the technology and shows you how to leverage it to build your applications according to the 12-factor application model.
Docker is a tool that allows developers to package applications into containers to ensure consistency across environments. Some key benefits of Docker include lightweight containers, isolation, and portability. The Docker workflow involves building images, pulling pre-built images, pushing images to registries, and running containers from images. Docker uses a layered filesystem to efficiently build and run containers. Running multiple related containers together can be done using Docker Compose or Kubernetes for orchestration.
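The layered filesystem mentioned above is why instruction order in a Dockerfile matters. A common sketch (hypothetical Python app; file names are assumptions) copies the dependency manifest before the source code, so the expensive dependency layer stays cached while you edit application code:

```dockerfile
# Layer-caching sketch: each instruction adds a layer, so copy the
# dependency manifest first — editing app code then only invalidates
# the final COPY layer, not the pip install layer.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt  # re-runs only when requirements.txt changes
COPY . .
CMD ["python", "main.py"]
```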
Velocity NYC 2017: Building Resilient Microservices with Kubernetes, Docker, ... (Ambassador Labs)
1. The presentation introduces Docker, Kubernetes, and Envoy as foundational tools for building microservices. Docker allows packaging applications into portable containers, Kubernetes provides a platform to manage containers across clusters of hosts, and Envoy handles traffic routing and resilience at the application layer.
2. The presenters demonstrate how to build a simple Python web application into a Docker container image. They then deploy the containerized application to a Kubernetes cluster using Kubernetes objects like deployments and services. This allows the application to scale across multiple pods and be accessed via a stable service endpoint.
3. Finally, the presenters note that as applications become distributed across microservices, failures at the application layer (L7) become more common, and tools like Envoy are needed to handle them.
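The Kubernetes objects mentioned in the demo can be sketched as a Deployment plus a Service (the names, image, replica count, and ports here are illustrative assumptions): the Deployment scales the container across pods, and the Service provides the stable endpoint.

```yaml
# Hypothetical manifest: a Deployment running three pods of a web app,
# fronted by a Service that load-balances across them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: example/hello-web:1.0   # assumed image name
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web
  ports:
    - port: 80
      targetPort: 8080
```

Applying this with `kubectl apply -f` gives the stable service endpoint the summary describes, regardless of which pods come and go.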
This two-day training covers Docker concepts including installation, working with containers and images, building images with Dockerfiles, and integrating Docker with OpenStack. Day one focuses on the Docker introduction, installation, containers, images, Dockerfiles, and using Nova to run Docker containers as compute instances. It also covers using Glance as a Docker image registry. Day two covers Docker clustering with Kubernetes, networking, Docker Hub, case studies, and the Docker source code. It concludes with developing platforms and running Hadoop on Docker containers.
Use the Source or Join the Dark Side: differences between Docker Community an... (Jérôme Petazzoni)
The Docker Project delivers a complete open source platform to “build, ship, and run” any application, anywhere, using containers. The Docker Engine and the other main components (Compose, Machine,
and the SwarmKit orchestration system) are free; but Docker Inc. (the company who started the Docker Project) also has a complete commercial offering named “Docker EE” (for Enterprise Edition) that adds an extra set of features geared at larger organizations, as well as an extended support and release cycle.
In this talk, I will explain (and show with demos) what you can do using exclusively Docker CE (community, free edition) and which features are added by Docker EE. This talk is for you if you are in the process of selecting a container platform; or if you’re just curious, and want to know exactly what you can do (and cannot do) with Docker CE and EE.
Docker: what is at stake for storage and networking? Paris Open Source Summit ... (Jérôme Petazzoni)
Presentation given on November 18, 2015 at the Paris Open Source Summit by Hervé Leclerc (Alterway) and Jérôme Petazzoni (Docker), presenting among other things the new Docker storage and networking features that arrived in version 1.9 of the Docker Engine.
Making DevOps Secure with Docker on Solaris (Oracle Open World, with Jesse Bu...) (Jérôme Petazzoni)
Docker, the container Engine and Platform, is coming to Oracle Solaris! This is the talk that Jérôme Petazzoni (Docker) and Jesse Butler (Oracle) gave at Oracle Open World in November 2015.
Containers, docker, and security: state of the union (Bay Area Infracoders Me...) (Jérôme Petazzoni)
Docker is two years old. While security has always been at the core of the questions revolving around Docker, the nature of those questions has changed. Last year, the main concern was "can I safely colocate containers on the same machine?" and it elicited various responses. Dan Walsh, SELinux expert, notoriously said: "containers do not contain!", and at last year's LinuxCon, Jérôme delivered a presentation detailing how to harden Docker and containers to isolate them better. Today, people have new concerns. They include image transport, vulnerability mitigation, and more.
After a recap about the current state of container security, Jérôme will explain why those new questions showed up, and most importantly, how to address them and safely deploy containers in general, and Docker in particular.
How to contribute to large open source projects like Docker (LinuxCon 2015) (Jérôme Petazzoni)
Contributing to a large open source project can seem daunting at first; but fear not! You too can join thousands of successful contributors. First, you don't have to be an expert in Golang, Python, or C, to contribute to Docker, OpenStack, or the Linux Kernel. Many projects also need help with documentation, translation, testing, triaging issues, and more. Very often, just going through bug reports to reproduce them and confirm "this also happens on my setup, with version XYZ" is extremely helpful.
If you decide to take the leap and propose a change (be it code or documentation), each open source project has different contribution guidelines and workflows.
In this talk, Arnaud and Jérôme will explain some of those workflows, how maintainers review your patches, and highlight the details that make your changes more likely to be merged into the project.
Containers, Docker, and Security: State Of The Union (LinuxCon and ContainerC...) (Jérôme Petazzoni)
Containers, Docker, and Security: State of the Union
This document discusses the past, present, and future of container security with Docker. It summarizes that container isolation used to be a major concern but improvements have been made through finer-grained permissions and immutable containers. Image provenance is now a bigger issue but techniques like Docker Content Trust (Notary) help address it. Defense in depth with both containers and VMs is recommended. The security of containers continues to improve through practices like better upgrades, security benchmarks, and policies.
This document discusses microservices and how Docker can help implement them. It begins by defining microservices as breaking large applications into many small independent services. Some benefits include using the best technology for each service and easier deployment. Challenges include efficient communication between services and network configuration. Docker helps by providing a standardized way to build, ship and run services through containers and tools like Docker Compose and Swarm that handle networking and orchestration between containers. The document provides an overview of how to get started with microservices using Docker.
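As a minimal sketch of how Compose handles networking between services (the service names and images are hypothetical): containers on the same Compose network can reach each other by service name, because Compose provides DNS-based service discovery on its default network.

```yaml
# Two hypothetical microservices on a shared Compose network:
# "api" reaches "worker" by its service name via built-in DNS.
services:
  api:
    image: example/api:latest
    ports:
      - "8080:8080"
    environment:
      WORKER_URL: http://worker:9000   # resolved via Compose DNS
  worker:
    image: example/worker:latest
```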
Deploy microservices in containers with Docker and friends - KCDC 2015 (Jérôme Petazzoni)
Docker lets us build, ship, and run any Linux application, on any platform. It found many early adopters in the CI/CD industry, long before it reached the symbolic 1.0 milestone and was considered "production-ready." Since then, its stability and features attracted enterprise users in many different fields, including very demanding ones like finance, banking, or intelligence agencies.
We will see how Docker is particularly suited to the deployment of distributed applications, and why it is an ideal platform for microservice architectures. In particular, we will look into three Docker related projects that have been announced at DockerCon Europe last December: Machine, Swarm, and Compose, and we will explain how they improve the way we build, deploy, and scale distributed applications.
Containers: from development to production at DevNation 2015 (Jérôme Petazzoni)
In Docker, applications are shipped using a lightweight format, managed with a high-level API, and run within software containers which abstract the host environment. Operating details like distributions, versions, and network setup no longer matter to the application developer.
Thanks to this abstraction level, we can use the same container across all steps of the life cycle of an application, from development to production. This eliminates problems stemming from discrepancies between those environments.
Even so, these environments will always have different requirements. If our quality assurance (QA) and production systems use different logging systems, how can we still ship the same container to both? How can we satisfy the backup and security requirements of our production stack without bloating our development stack?
In this session, you will learn about the unique features of containers that allow you to cleanly decouple system administrator tasks from the core of your application. We'll show you how this decoupling results in smaller, simpler containers, and gives you more flexibility when building, managing, and evolving your application stacks.
Immutable infrastructure with Docker and containers (GlueCon 2015) (Jérôme Petazzoni)
"Never upgrade a server again. Never update your code. Instead, create new servers, and throw away the old ones!"
That's the idea of immutable servers, or immutable infrastructure. This makes many things easier: rollbacks (you can always bring back the old servers), A/B testing (put old and new servers side by side), security (use the latest and safest base system at each deploy), and more.
However, throwing in a bunch of new servers at each one-line CSS change is going to be complicated, not to mention costly.
Containers to the rescue! Creating container "golden images" is easy, fast, dare I say painless. Replacing your old containers with new ones is also easy to do; much easier than virtual machines, let alone physical ones.
In this talk, we'll quickly recap the pros (and cons) of immutable servers; then explain how to implement that pattern with containers. We will use Docker as an example, but the technique can easily be adapted to Rocket or even plain LXC containers.
The Docker ecosystem and the future of application deployment (Jérôme Petazzoni)
Ten years ago, virtualization ignited a revolution which gave birth to the Cloud and the DevOps initiative. Today, with containers, we are at the dawn of a similar breakthrough.
How can we capture the value of containers? How can we use their features to implement microservices and immutable infrastructures, while retaining as much as possible of our existing practices? The answer is in the rich ecosystem that developed around Docker, an open-source platform to build, ship, and run applications in containers.
In this keynote we’ll explore what the applications of tomorrow will look like, how they’ll be deployed and distributed – and how to leverage those tools today.
Docker landed almost two years ago, making it possible to build, ship, and run any Linux application, on any platform. It was quickly adopted by developers and ops, like no other tool before. The CI/CD industry even took it to production long before it was stamped "production-ready."
Why does everyone (or almost!) love Docker? Because it puts powerful
automation abilities within the hands of normal developers. Automation
almost always involves building distribution packages, virtual machine
images, or writing configuration management manifests. With Docker,
those tasks are radically transformed: sometimes they're far easier than before,
other times they're no longer needed at all. Either way, the intervention
of a seasoned sysadmin guru is no longer required.
Introduction to Docker, December 2014 "Tour de France" Bordeaux Special Edition (Jérôme Petazzoni)
Docker, the Open Source container Engine, lets you build, ship, and run any app, anywhere.
This is the presentation which was shown in December 2014 for the last stop of the "Tour de France" in Bordeaux. It is slightly different from the presentation which was shown in the other cities (http://www.slideshare.net/jpetazzo/introduction-to-docker-december-2014-tour-de-france-edition), and includes a detailed history of dotCloud and Docker and a few other differences.
Special thanks to https://twitter.com/LilliJane and https://twitter.com/zirkome, who gave me the necessary motivation to put together this slightly different presentation, since they had already seen the other presentation in Paris :-)
Introduction to Docker, December 2014 "Tour de France" Edition (Jérôme Petazzoni)
Docker, the Open Source container Engine, lets you build, ship, and run any app, anywhere.
This is the presentation which was shown in December 2014 for the "Tour de France" in Paris, Lille, Lyon, Nice...
Containers, Docker, and Microservices: the Terrific Trio (Jérôme Petazzoni)
One of the upsides of microservices is the ability to deploy often, at arbitrary schedules, and independently of other services, instead of requiring synchronized deployments happening at a fixed time.
But to really leverage this advantage, we need fast, efficient, and reliable deployment processes. That's one of the value propositions of Containers in general, and Docker in particular.
Docker offers a new, lightweight approach to application portability. It can build applications using easy-to-write, repeatable, efficient recipes; then it can ship them across environments using a common container format; and it can run them within isolated namespaces which abstract the operating environment, independently of the distribution, versions, network setup, and other details of that environment.
But Docker can do way more than deploy your apps. Docker also enables you to generalize microservices principles and apply them to operational tasks like logging, remote access, backups, and troubleshooting. This decoupling results in independent, smaller, simpler moving parts.
Containerization is more than the new Virtualization: enabling separation of ... (Jérôme Petazzoni)
Docker offers a new, lightweight approach to application
portability. Applications are shipped using a common container format,
and managed with a high-level API. Their processes run within isolated
namespaces which abstract the operating environment, independently of
the distribution, versions, network setup, and other details of this
environment.
This "containerization" has often been nicknamed "the new
virtualization". But containers are more than lightweight virtual
machines. Beyond their smaller footprint, shorter boot times, and
higher consolidation factors, they also bring a lot of new features
and use cases which were not possible with classical virtual machines.
We will focus on one of those features: separation of operational
concerns. Specifically, we will demonstrate how some fundamental tasks
like logging, remote access, backups, and troubleshooting can be
entirely decoupled from the deployment of applications and
services. This decoupling results in independent, smaller, simpler
moving parts; just like microservice architectures break down large
monolithic apps into more manageable components.
Pipework: Software-Defined Network for Containers and Docker (Jérôme Petazzoni)
Pipework lets you connect together containers in arbitrarily complex scenarios. Pipework uses cgroups and namespaces and works with "plain" LXC containers (created with lxc-start), and with the awesome Docker.
It's nothing less than Software-Defined Networking for Linux Containers!
This is a short presentation about Pipework, given at the Docker Networking meet-up on November 6th in Mountain View.
More information:
- https://github.com/jpetazzo/pipework
- http://www.meetup.com/Docker-Networking/
Docker Tips And Tricks at the Docker Beijing Meetup (Jérôme Petazzoni)
This talk was presented in October at the Docker Beijing Meetup, in the VMware offices.
It presents some of the latest features of Docker, discusses orchestration possibilities with Docker, then gives a briefing about the performance of containers; and finally shows how to use volumes to decouple components in your applications.
Introduction to Docker at Glidewell Laboratories in Orange County (Jérôme Petazzoni)
In this presentation we will introduce Docker and show how you can use it to build, ship, and run any application, anywhere. The presentation includes short demos, links to further material, and of course Q&As. If you are already a seasoned Docker user, this presentation will probably be redundant; but if you have started to use Docker and are still struggling with some of its facets, you'll learn a few things!
A Guide to Smart Building Open Standards 101 (Memoori)
Are you confused by the open standards landscape in smart building technology? Our presentation slides serve as a non-technical guide to the types of protocols and data frameworks used in commercial buildings and why they matter! Improve your understanding of open standards and their impact on smart buildings!
Jade Malay’s Perspective on AI and Supercomputing Growth in Dallas (Jade Malay)
Jade Malay brings a thoughtful and forward-looking perspective to the growing influence of AI and supercomputing in Dallas. As a leader with deep roots in healthcare and innovation, Jade Malay explores how these powerful technologies are not only transforming local industries but also shaping the future of education and community development. Her insights shed light on the real-world impact of AI—beyond algorithms and servers—highlighting how Dallas is becoming a thriving center for tech-driven progress that benefits everyday lives.
UiPath Automation Developer Associate 2025 Series - Career Office Hours (DianaGray10)
This event is being scheduled to check on your progress with your self-paced study curriculum. We will be here to answer any questions you have about the training and the next steps for your career.
You know you need to invest in a CRM platform, you just need to invest in the right one for your business.
It sounds easy enough but, with the onslaught of information out there, the decision-making process can be quite convoluted.
In a recent webinar we compared two options – HubSpot’s Sales Hub and Salesforce’s Sales Cloud – and explored ways to help you determine which CRM is better for your business.
This paper supports the importance of teaching logic (and logic programming) in computer science degrees and discusses several proposals that can be included in current curricula without the need to adapt the academic guides. In addition, some practical examples are described and the tools used for their subsequent application are related.
📢 UiPath Community Meetup: LLM and UiPath – From AI Center to GenAI Activities & Agents
Join us for an exciting UiPath Community Virtual Meetup where we explore how UiPath is evolving from AI Center towards GenAI, unlocking new possibilities with specialized GenAI activities and AI-powered Agents. Hosted by the Rome Chapter in collaboration with Zurich (and potentially other chapters), this session will provide insights into the latest advancements in AI-driven automation.
📅 17th April 2025 | 🕙 10:30 - 11:30 AM CET
🔥 What’s on the agenda?
From AI Center to LLM-Powered-Automation – Understanding the transition from AI Center to GenAI, DocPath and CommPath.
GenAI Activities in UiPath – Exploring new AI capabilities and how to leverage them effectively.
AI Agents and Agentic Orchestration – A live demo showcasing how LLMs can power intelligent Agents and how they can be effectively orchestrated.
🎤 Speakers:
🔹 Roman Tobler, UiPath MVP, CEO at Routinuum
🔹 Flavio Martinelli, UiPath MVP 2023, Technical Account Manager at UiPath
Whether you’re an automation developer, AI enthusiast, or business leader, this session will help you navigate the next phase of AI-driven automation in UiPath.
Oil seed milling, also known as oilseed crushing, is the process of extracting oil from seeds like soybeans, sunflower seeds, and rapeseed. This process involves several steps, including seed preparation, oil extraction (often using mechanical pressing or solvent extraction), and oil refining.
AI adoption is moving fast, but most organizations are struggling with AI readiness as they jump in before ensuring data, strategy, and governance are in place.
Autopilot for Everyone Series - Session 3: Exploring Real-World Use Cases (UiPathCommunity)
Welcome to 'Autopilot for Everyone Series' - Session 3: Exploring Real-World Use Cases!
Join us for an interactive session where we explore real-world use cases of UiPath Autopilot, the AI-powered automation assistant.
📕 In this engaging event, we will:
- demonstrate how UiPath Autopilot enhances productivity by combining generative AI, machine learning, and automation to streamline business processes
- discover how UiPath Autopilot enables intelligent task automation with natural language inputs and AI-powered decision-making for smarter workflows
Whether you're new to automation or a seasoned professional, don't miss out on this opportunity to transform your approach to business automation.
Register now and step into the future of efficient work processes!
Navigating common mistakes and critical success factors
Is your team considering or starting a database migration? Learn from the frontline experience gained guiding hundreds of high-stakes migration projects – from startups to Google and Twitter. Join us as Miles Ward and Tim Koopmans have a candid chat about what tends to go wrong and how to steer things right.
We will explore:
- What really pushes teams to the database migration tipping point
- How to scope and manage the complexity of a migration
- Proven migration strategies and antipatterns
- Where complications commonly arise and ways to prevent them
Expect plenty of war stories, along with pragmatic ways to make your own migration as “blissfully boring” as possible.
Low-velocity penetration impact behavior of Triply Periodic Minimal Surface s...Javier García Molleja
Authors: Lucía Doyle, Javier García-Molleja, Carlos González
Published in: Advanced Engineering Materials, 2025, 24002999
Because of copyright transfer to Wiley-VCH only the first page is provided. Available at:
https://doi.org/10.1002/adem.202402999
Monitor Kafka Clients Centrally with KIP-714Kumar Keshav
Apache Kafka introduced KIP-714 in 3.7 release, which allows the Kafka brokers to centrally track client metrics on behalf of applications. The broker can subsequently relay these metrics to a remote monitoring system, facilitating the effective monitoring of Kafka client health and the identification of any problems.
KIP-714 is useful to Kafka operators because it introduces a way for Kafka brokers to collect and expose client-side metrics via a plugin-based system. This significantly enhances observability by allowing operators to monitor client behavior (including producers, consumers, and admin clients) directly from the broker side.
Before KIP-714, client metrics were only available within the client applications themselves, making centralized monitoring difficult. With this improvement, operators can now access client performance data, detect anomalies, and troubleshoot issues more effectively. It also simplifies integrating Kafka with external monitoring systems like Prometheus or Grafana.
This talk covers setting up ClientOtlpMetricsReporter that aggregates OpenTelemetry Protocol (OTLP) metrics received from the client, enhances them with additional client labels and forwards them via gRPC client to an external OTLP receiver. The plugin is implemented in Java and requires the JAR to be added to the Kafka broker libs.
Be it a kafka operator or a client application developer, this talk is designed to enhance your knowledge of efficiently tracking the health of client applications.
GDG Cincinnati presentation by Ben Hicks, April 16, 2024.
As AI continues to permeate our industry, it's crucial to consider how it will reshape the way both seasoned and new developers learn, code, and create. This presentation offers a candid look at the evolving landscape – the opportunities, challenges, and the imperative for continuous adaptation. Let's explore the good, the bad, and the ugly of AI's influence on development, and discuss how we can best utilize what it has to offer while avoiding the snake oil.
A Product Information Management (PIM) system helps businesses deliver consistent, accurate, and up-to-date product data across all sales channels—websites, marketplaces, apps, and more—ensuring better customer experience and higher conversion rates.
Introduction to LLM Post-Training - MIT 6.S191 2025Maxime Labonne
In this talk, we will cover the fundamentals of modern LLM post-training at various scales with concrete examples. High-quality data generation is at the core of this process, focusing on the accuracy, diversity, and complexity of the training samples. We will explore key training techniques, including supervised fine-tuning, preference alignment, and model merging. The lecture will delve into evaluation frameworks with their pros and cons for measuring model performance. We will conclude with an overview of emerging trends in post-training methodologies and their implications for the future of LLM development.
Jeremy Millul - A Junior Software DeveloperJeremy Millul
Jeremy Millul is a junior software developer specializing in scalable applications. With expertise in databases like MySQL and MongoDB, Jeremy ensures efficient performance and seamless user experiences. A graduate of NYU, and living in Rochester, NY, with a degree in Computer Science, he also excels in frameworks such as React and Node.js. Jeremy’s commitment to delivering robust, high-quality solutions is matched by his dedication to staying ahead in the ever-evolving tech landscape.
Bay Area Apache Spark ™ Meetup: Upcoming Apache Spark 4.0.0 Releasecarlyakerly1
Covering new features and enhancements in the upcoming Apache Spark™ 4.0 release. This deck has an overview of the following features:
✅ Spark Connect: The future of Spark extensibility
✅ ANSI Mode: For better ANSI SQL compatibility
✅ Variant data types for semi-structured data
✅ String collation support
✅ Python UDTF functions
✅ SQL and UDTF functions
✅ PySpark UDF Unified Profiler
What comes after world domination with Daniel Stenberg, April 2025Daniel Stenberg
Open Source has in many ways already won. It is used in every product by every company, to a very a large degree. But we are not done. We can improve: we can take this further, we can make our projects better, we can enhance our communities and make sure it is done sustainably. The future is ours.
Learn Prompt Engineering: Google’s 10-Step Guide Now AvailableSOFTTECHHUB
Prompt engineering has grown into a subject that touches everyone interested in large language models. What began as a toolkit for computer programmers now shapes interactions for a larger group of users who want reliable and creative outputs. In recent years, the way we interact with language models has changed, as more people see value in crafting questions and statements that lead to well-behaved answers.
2. What to Expect from the Session
We will talk about ...
● Docker Compose for development environments
● taking those environments to production
– Docker cluster provisioning
– container image building and deployment
– service discovery
● Compose, Machine, Swarm, ECS
We expect that you are familiar with Docker fundamentals!
3. Introductions
● Jérôme Petazzoni (@jpetazzo)
● Since 2010: putting things in containers at dotCloud
– polyglot PAAS
– microservices
– provisioning, metrics, scaling ...
– massive deployment of LXC
● Since 2013: putting things in containers at Docker
(reminder: dotCloud became Docker in 2013...)
● 5 years of experience on a 2-year-old technology!
4. Introductions, take 2
● Hi, I'm Jérôme
● I'm a Software Engineer about to start a new gig!
● Tomorrow for my first day I will work on DockerCoins*
● (It's a cryptocurrency-blockchain-something system)
● My coworkers are using Docker all over the place
● My task will be to deploy their stack at scale
*Fictitious project name; you can't buy pizzas or coffee with DockerCoins (yet).
6. Preparing for my first day
● I just received my new laptop!
● The only instructions were:
“Install the Docker Toolbox.”
● ~180 MB download for Windows and OS X
11. How does this work?
● “docker-compose up” tells Compose to start the app
● If needed, the app is built first
● How does Compose know what to do?
● It reads the “Compose file” (docker-compose.yml)
14. How does this work?
● Application is broken down into services
● Each service is mapped to a container
● Each container can come from:
– a pre-built image in a library called a “registry”
– a build recipe called a “Dockerfile”
● The Compose file defines all those services
(and their parameters: storage, network, env vars...)
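To make this concrete, here is what a minimal Compose file could look like for a hypothetical two-service app, in the Compose v1 format used at the time of this talk (service names, ports, and the image are made up for the example):

```yaml
# docker-compose.yml — hypothetical example
web:
  build: .            # built from the Dockerfile in this directory
  ports:
    - "8000:5000"     # host port 8000 → container port 5000
  links:
    - redis
redis:
  image: redis        # pulled pre-built from the registry
```

Running “docker-compose up” in the directory containing this file would build “web”, pull “redis”, and start both containers.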
16. Our sample application
● Microservices architecture
● Different languages and frameworks
– Ruby + Sinatra
– Python + Flask
– Node.js + Express
● Different kinds of services
– background workers
– web services with REST API
– stateful data stores
– web front-ends
17. Mandatory plug on microservices
● Advantages of microservices:
– enables small teams (Jeff Bezos two-pizza rule)
– enables “right tool for the right job”
– services can be deployed/scaled independently
– look for e.g. “Adrian Cockcroft Microservices” talks
● Drawbacks to microservices:
– distributed systems are hard
(cf. aphyr.com if you have doubts)
– load balancing, service discovery become essential
– look for e.g. “Microservices Not A Free Lunch” article
18. Deploying on a Cloud Instance
● Same workflow:
1) ssh into remote Docker Host
2) git clone
3) docker-compose up
4) open app in browser
● Let's see a real demo!
23. Compose take-aways
● Docker abstracts the environment for us
● Any Docker host is a valid deployment target:
– local environment
(with the Docker Toolbox)
– on-demand cloud instances
(with Docker Machine)
– Bring-Your-Own-Server
(for on-prem and hybrid strategies)
● Frictionless on-boarding (and context-switching)
● But how do we deploy to production, at scale?
24. What's missing
● Cluster provisioning
● Building and deploying code
● Service discovery
(Non-exhaustive list.)
Let's see how to address those points.
We will dive into details — and give more live demos!
27. Docker Machine
● Docker Machine comes with the Docker Toolbox
● Can create Docker hosts on:
– EC2 and other clouds
– local environments (VirtualBox, OpenStack…)
● Can create clusters using Docker Swarm
● Current limitations (but expect this to improve):
– one machine at a time
– centralized credentials
28. DEMO
export TOKEN=$(docker run swarm create)
echo $TOKEN
docker-machine create -d amazonec2 --swarm \
    --swarm-master --swarm-discovery token://$TOKEN node00 &
for N in $(seq 1 4); do
    sleep 3
    docker-machine create -d amazonec2 --swarm \
        --swarm-discovery token://$TOKEN node0$N &
done
wait
Video: https://www.youtube.com/watch?v=LFjwusorazs
32. Building and deploying with Docker
● Let's continue to use Compose to build our app images
● And store those images in a Docker Registry
– Docker Hub
(SAAS à la GitHub, free for public images)
– Docker Trusted Registry
(commercial offering; available e.g. through AWS marketplace)
– self-hosted, community version
33. The plan
● Each time we need to deploy:
1) build all containers with Compose
2) tag all images with a unique version number
3) push all images to our Registry
4) generate a new docker-compose.yml file,
referencing the images that we just built and pushed
● This will be done by a script
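As a rough sketch of what such a script does (hypothetical; the real build-tag-push.py in the workshop repo may differ), the tag-and-push part boils down to generating one “docker tag” and one “docker push” command per service. The image and registry names below are made up for the example:

```python
# Hypothetical sketch of the tag/push step of a build-tag-push script.
# It generates the docker commands for each service; a real script
# would run them (e.g. with subprocess) after "docker-compose build".
def build_tag_push_commands(services, registry, version):
    """Return the shell commands to tag and push each service image."""
    commands = []
    for name in services:
        local = "myapp_%s" % name                      # image built by Compose
        remote = "%s/%s:%s" % (registry, name, version)
        commands.append("docker tag %s %s" % (local, remote))
        commands.append("docker push %s" % remote)
    return commands

for cmd in build_tag_push_commands(["webui", "worker"], "jpetazzo", "v42"):
    print(cmd)
```

The last step of the plan would then rewrite docker-compose.yml, replacing each “build:” section with the corresponding “image:” reference.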
34. You get a script!
And you get a script!
Everybody gets a script!
● All the scripts that we will use here are on GitHub
● Feel free to use them, copy them, adapt them, etc.
URL: https://github.com/jpetazzo/orchestration-workshop
(Don't panic, URL will be shown again at the end of the presentation)
35. DEMO
● build-tag-push.py
● inspect the resulting YAML file
Those images are now frozen.
They'll stay around “forever” if we need them again.
(e.g. to do a version rollback)
See: https://hub.docker.com/r/jpetazzo/dockercoins_webui/tags/
37. Why do we need service discovery?
● Service A needs to talk to service B
● How does A know how to talk to B?
– service A needs: address, port, credentials
● What if there are multiple instances of B?
– examples: load balancing, replication
● What if B's location changes over time?
– examples: scaling, fail-over
● Service discovery addresses those concerns
42. Hard-coded service discovery
● Requires many code edits to change environment
● Error-prone
● Big, repetitive configuration files often land in the repo
● Adding a new service requires editing all those configs
● Maintenance is expensive
(S services × E environments)
44. Twelve-factor App
● Cleanly separates code and environment
(the environment is literally defined by environment variables)
● Still requires maintaining configuration files
(containing lists of environment variables)
● Production parameters are easier to keep out of the repo
● Dramatic errors are less likely to happen
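For instance, a twelve-factor style service reads its settings from the environment, with development-friendly defaults (the variable names here are illustrative, not from the sample app):

```python
import os

# Twelve-factor style configuration: values come from environment
# variables, with defaults suitable for local development.
# REDIS_HOST / REDIS_PORT are illustrative names.
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))

print("connecting to redis at %s:%d" % (REDIS_HOST, REDIS_PORT))
```

In production, the deployment system sets REDIS_HOST and REDIS_PORT; in dev, the defaults apply and the code is unchanged.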
46. Configuration database
● If you want the same code in dev and prod,
you need to deploy your config DB in dev too
● Instead of maintaining config files,
you maintain Zookeeper* clusters and fixtures
● … or have different lookup logic for dev and prod
*Or your other favorite config DB, e.g. etcd, Consul...
47. Local load balancing / routing
● Connect to a well-known location
$db = mysql_connect("localhost");
cache = Redis.new(:host => "localhost")
conn, err := net.Dial("tcp", "localhost:8001")
● In dev: all components run locally
● In prod: local load balancer routes the traffic
(example: Airbnb's SmartStack)
48. Local load balancing / routing
● Code can be identical in dev and prod
● Deployment will differ:
– direct connection in dev
– proxies, routers, load balancers in prod
● “Configuration” is merely a static port allocation map
(indicating which service listens on which port)
● Way easier for devs; however ops still have work to do
56. Using ambassadors
● Code remains readable and clean
● Plumbing (service discovery, routing, load balancing, etc.)
is abstracted away (somebody still has to do it, though!)
● Plumbing doesn't encumber our dev environment
● Changes in plumbing won't impact the code base
59. Moving slowly
● Code deployment is infrequent:
– every week, on a regular schedule
– a bit of downtime is OK (a few minutes, maybe one hour)
● Failures are rare (less than 1/year)
and/or don't have critical impact
● Reconfigurations are not urgent:
– we bake them into the deployment process
– it's OK if they disrupt service or cause downtime
60. Strategy for apps moving slowly
● Bake configuration and parameters with the deployment
(reconfiguration = rebuild, repush, redeploy)
● Or configure manually after deployment (!)
● In case of emergency: SSH+vi (!)
61. Results
● Advantages
– zero cost upfront
– easy to understand*
● Drawbacks
– each deployment, each change = risk
– expensive in the long run
*Except for your boss when your app is down and it takes a while to bring it back up
62. Moving mildly
● Code deployment:
– happens every day
– downtime is not OK (except maybe very short glitches)
● Failures happen regularly;
they must be resolved quickly
● Reconfigurations are frequent:
– scaling up/down; moving workloads; changing databases
– altering application parameters for A/B testing
63. Strategy for apps moving mildly
● Inject configuration after the deployment
● When you just want to change a parameter:
reconfigure (without redeploying everything)
● Automate the process with a “push button” script
64. Results
● Advantages
– easy to understand and to implement
– no extra moving part
(just this extra “push button” script/process)
● Drawbacks
– services must allow reconfiguration
– reconfiguration has to be triggered after each change
– risk of meta-failure (bug in the deployment system)
65. Moving wildly*
● Code deployment:
– happens continuously (10, 100, 1000+ times a day)
– downtime is not OK, even if it's just a few sporadic failed requests
● Failures happen all the time;
repair actions must be fully automated
● Reconfigurations are part of the app lifecycle:
– automatic scaling, following planned and unplanned patterns
– generalized blue/green deployment, canary testing, etc.
*a.k.a. “move fast and break things”
66. Strategy for apps moving wildly
● Requirement: detect changes as they happen
● Use a combination of:
– monitoring
– live stream of events that we can subscribe to
– services that register themselves
– fast polling
● After deployment, scaling, outage, metric threshold…:
automatic reconfiguration
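The event-driven part can be sketched as follows (hypothetical: a real implementation would read the live stream from “docker events” or the Docker API; here we parse pre-captured lines whose format is only illustrative):

```python
# Hypothetical sketch: decide, from container lifecycle events,
# when to trigger a reconfiguration of the plumbing (load balancers,
# ambassadors, config DB entries...).
def needs_reconfiguration(event_line):
    """Return True if this event should trigger a plumbing update."""
    return any(status in event_line for status in (" start", " die", " stop"))

# Illustrative event lines, loosely modeled on `docker events` output.
events = [
    "2015-10-08T12:00:01Z abcd1234: (from dockercoins_webui) start",
    "2015-10-08T12:00:05Z abcd1234: (from dockercoins_webui) die",
]
for line in events:
    if needs_reconfiguration(line):
        print("reconfiguring after: %s" % line)
```

Tools mentioned later in this deck (ehazlett/interlock, gliderlabs/registrator) follow exactly this pattern against the real Docker events stream.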
67. Results
● Advantages
– everything happens automatically
– no extra step to run when you deploy
– more modular
(different processes can take care of different service types)
● Drawbacks
– extra moving parts and services to maintain
– meta-failures are even more dangerous
68. Recap table
(Columns: “How fast should we move?” covers Slowly / Mildly / Wildly; “How much work is it for ...” covers Devs / Ops; “How do we handle ...” covers Scaling / Failures.)

                   Slowly  Mildly            Wildly  Devs    Ops          Scaling    Failures
Hard-coded
12-Factor
Config Database
Local LB/routers
Ambassadors

69. Recap table (subtitles)

                   Slowly  Mildly            Wildly  Devs    Ops          Scaling    Failures
Hard-coded         OK      NO                NO      easy    easy         painfully  horribly
12-Factor          OK      OK WITH RESTARTS  NO      easy    easy         meh        meh
Config Database    OK      OK                OK      hard    hard         cool       cool
Local LB/routers   OK      OK                OK      medium  medium/hard  cool       cool
Ambassadors        OK      OK                OK      easy    medium/hard  cool       cool
71. The plan
● Deploy a simple application (trainingwheels)
– on ECS
– on Swarm
● Deploy a complex application (dockercoins)
– on ECS
– on Swarm
72. Our simple application, “trainingwheels”
● Two services:
– web server
– redis data store
● Tells you which web server served your request
● Counts how many requests were served
● Keeps separate counters for each server
73. DEMO
● cd ~
● git clone git://github.com/jpetazzo/trainingwheels
● cd trainingwheels
● docker-compose up
● open app
● ^C
74. Deploying on ECS
● On ECS, a container is created as a member of a task
● Tasks are created from task definitions
● Task definitions are conceptually similar to Compose files
(but in a different format)
● ECS CLI to the rescue!
75. Deploying on ECS
● ECS CLI will:
– create a task definition from our Compose file
– register that task definition with ECS
– run a task instance from that task definition
● ECS CLI will not:
– work if your Compose file has a “build” section
(it only accepts “image” sections)
● Let's use the “build-tag-push” script shown earlier!
77. Scaling “trainingwheels” on ECS
At this point, if we deploy and scale, we will end up with multiple copies of the app, each with its own Redis.
To avoid this, we need to deploy our first ambassador!
Here is the plan:
● Create a new Compose file for our Redis service
● Use ECS CLI to run redis, and note its location
● Update the main Compose file so that the “redis” service is
now an ambassador pointing to the actual Redis
78. Introducing jpetazzo/hamba
● Easy ambassadoring for the masses!
● In a shell:
docker run jpetazzo/hamba <frontend-port>
[backend1-addr] [backend1-port]
[backend2-addr] [backend2-port] …
● In a Compose file:
redis:
  image: jpetazzo/hamba
  command: <front-port> [backend-addr] [backend-port] ...
79. DEMO (1/2)
● mkdir ~/myredis
● cp $COMPOSE_FILE ~/myredis
● cd ~/myredis
● edit $COMPOSE_FILE
– expose port 6379
– remove www service
● ecs-cli compose up
● ecs-cli compose ps
● note host+port
80. DEMO (2/2)
● cd ~/trainingwheels
● edit $COMPOSE_FILE
– replace redis image with jpetazzo/hamba
– add “command: 6379 <redishost> <redisport>”
● ecs-cli compose up
● ecs-cli compose scale 4
● watch ecs-cli compose ps
● open a couple of apps
● open the load balancer
82. Scaling “trainingwheels” on Swarm
● Slightly different idea!
● We keep a single Compose file for our app
● We replace links with ambassadors:
– using a local address (127.X.Y.Z)
– sharing the client container's namespace
● Each container that needs to connect to another service gets
its own private load balancer for this exact service
● That's a lot of load balancers, but don't worry, they're cheap
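This can be sketched as command generation (a hypothetical helper; the workshop's create-ambassadors.py may work differently): for each client container, start a hamba ambassador sharing that container's network namespace, using the hamba argument layout shown earlier.

```python
# Hypothetical sketch: build the command that starts a hamba
# ambassador inside a client container's network namespace.
# hamba usage (from the previous slide):
#   docker run jpetazzo/hamba <frontend-port> [backend-addr] [backend-port] ...
def ambassador_command(client_container, frontend_port, backends):
    """Return the docker command starting an ambassador for one client."""
    backend_args = " ".join("%s %d" % (addr, port) for addr, port in backends)
    return ("docker run -d --net container:%s jpetazzo/hamba %d %s"
            % (client_container, frontend_port, backend_args))

# One "www" instance, one redis backend (addresses are illustrative).
print(ambassador_command("www_1", 6379, [("172.17.2.5", 6379)]))
```

Since the ambassador shares the client's network namespace, the client can reach it on a loopback address (like the 127.127.0.2 entry shown on the next slides) with no extra network hop.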
83. Network namespace ambassadors
“redis” and “www” containers are created by Compose, and placed by Swarm, potentially on different hosts.
In “www”, /etc/hosts has the following entry:
127.127.0.2 redis
(Diagram: the “www” container at 172.17.0.4 reaches the “redis” container at 172.17.2.5 through a local ambassador.)
91. Scaling with ambassadors
Before scaling our app, we have one single “www” instance, coupled with its ambassador.
(In this example, we have placed the first “www” and “redis” together for clarity.)
92. Scaling with ambassadors
“docker-compose scale www=4”
We now have 4 instances of “www” but 3 of them can't communicate with “redis” yet.
93. Scaling with ambassadors
“create-ambassadors.py”
Each “www” instance now has its own ambassador, but 3 of them are still unconfigured.
94. Scaling with ambassadors
“configure-ambassadors.py”
The 3 new ambassadors receive their configuration and can now route traffic to the “redis” service.
96. Scaling “dockercoins” on ECS
● Let's apply the same technique as before
● Separate the Redis service
● Replace “redis” with an ambassador in the Compose file
● Let ECS do the rest!
97. DEMO (1/2)
● Get our Redis host+port again:
– cd ~/myredis
– ecs-cli compose ps
● cd ~/dockercoins
● set COMPOSE_FILE
● edit $COMPOSE_FILE
– change “image: redis” to “image: jpetazzo/hamba”
– add “command: 6379 <redishost> <redisport>”
– add “mem_limit: 100000000” everywhere
– remove volumes
● fixup-yaml.sh
98. DEMO (2/2)
● ecs-cli compose up
● watch ecs-cli compose ps
● open webui
● ecs-cli compose scale 4
● watch ecs-cli compose ps
● open webui
● repeat!
99. Scaling “dockercoins” on ECS
● We started with our “redis” service...
(Diagram: a single “redis” container on its host.)
100. Scaling “dockercoins” on ECS
● Created one instance of the stack with an ambassador...
(Diagram: worker, rng, hasher, and webui containers, with a “redis” ambassador pointing to the actual “redis” container.)
101. Scaling “dockercoins” on ECS
● Added a second instance of the full stack...
(Diagram: two full stacks, each with worker, rng, hasher, webui, and its own “redis” ambassador, sharing the single “redis” container.)
102. Scaling “dockercoins” on ECS
● And another one… etc.
(Diagram: three full stacks, each with its own “redis” ambassador, all sharing the single “redis” container.)
103. Scaling “dockercoins” on Swarm
● Let's apply the same technique as before
● Replace links with ambassadors
● Start containers
● Add ambassadors
● Inject ambassador configuration
104. DEMO (1/2)
● edit $COMPOSE_FILE
– restore “image: redis”
– remove “command:” from the redis section
● link-to-ambassadors.py
● docker-compose up -d
● create-ambassadors.py
● configure-ambassadors.py
● docker-compose ps webui
● open webui
106. Scaling “dockercoins” on Swarm
● Two (for simplicity) empty Docker hosts
107. Scaling “dockercoins” on Swarm
● “docker-compose up” — containers are unwired
(Diagram: worker, webui, redis, hasher, and rng containers spread across the two hosts, not yet connected.)
108. Scaling “dockercoins” on Swarm
● Create ambassadors for all containers needing them
(Diagram: each client container now has local “redis”, “hasher”, and “rng” ambassadors, still unconfigured.)
109. Scaling “dockercoins” on Swarm
● Configure ambassadors: the app is up and running
(Diagram: same layout, with the ambassadors now wired to their backend services.)
113. Remarks
● Yes, that's a lot of ambassadors
● They are very lightweight, though (~1 MB)
docker stats $(docker ps | grep hamba | awk '{print $1}')
● Ambassadors do not add an extra hop
– they are local to their client (virtually zero latency)
– better efficiency than an external load balancer
– if the ambassador is down, the client is probably down as well
116. ECS and Swarm highlights
● Both offer easy provisioning tools
● ECS = AWS ecosystem
– integrates with offerings like IAM, ELB…
– provides health-checks and self-healing
● Swarm = Docker ecosystem
– offers parity with local development environments
– exposes a real-time events stream through the Docker API
● Both require additional tooling for builds
(Swarm has preliminary build support)
● Both require extra work for plumbing / service discovery
117. Future directions, ideas ...
● We would love your feedback!
● App-specific ambassadors
(SQL bouncers, credential injectors...)
● Automatically replace services using official images:
– redis, memcached → ElastiCache
– mysql, postgresql → RDS
– etc.
118. Other improvements
● Listen to the Docker API events stream,
detect container start/stop events
– automatically configure load balancers (ehazlett/interlock)
– insert containers into a config database (gliderlabs/registrator)
● Overlay networks
(offer direct container-to-container communication)
– 3rd party: weave, flannel, pipework
– Docker network plugins (experimental.docker.com)
121. Thank you!
Related sessions
● CMP302 - Amazon EC2 Container Service: Distributed Applications at Scale
● CMP406 - Amazon ECS at Coursera: Powering a general-purpose near-line execution microservice, while defending against untrusted code
● DVO305 - Turbocharge Your Continuous Deployment Pipeline with Containers
● DVO308 - Docker & ECS in Production: How We Migrated Our Infrastructure from Heroku to AWS (Remind)
● DVO313 - Building Next-Generation Applications with Amazon ECS (Meteor)