40 minutes | D2 | Cloud & Hyperscale
Linux Integration Engineering teams (formerly Cyborg) will show you how we embrace hybrid cloud, Red Hat's and IBM's strategic direction. Many projects and services in our department deploy to multiple platforms, datacenters, and public clouds. The panelists are seasoned engineers with years of experience in development, maintenance, service architecture, scalability, and security. If you work on microservices, in DevOps or SRE, or develop a service, you cannot miss this session.
40 minutes | D2 | Cloud & Hyperscale
OpenShift Pipelines, based on Tekton, is a powerful tool for automating tasks in a cluster, such as keeping applications up to date. But what happens when your applications are not yet containerized and still run in VMs? Then you need to apply your OpenShift pipelines to VMs. With kubevirt-tekton-tasks, cluster administrators can automate pipelines that copy a VM template, update the template with new metadata, prepare a new disk with new files, and create a new VM. Automating VM management can save a lot of the time and money that would otherwise be spent on manual tasks.
In this session, we will introduce kubevirt-tekton-tasks and show how to automate the creation of a Windows 10 VM: copying and updating the template, preparing a new disk with new files, creating the VM from the template, and customizing it. We will also use KubeVirt (kubevirt.io), a Kubernetes add-on for managing virtual machines.
Attendees will learn how to:
- Create a complete flow, from creating, copying, or modifying a template to creating a virtual machine from it
- Manage virtual machines with Tekton
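To give a flavor of such a flow, here is a minimal, hypothetical sketch of a Tekton Pipeline chaining tasks in the style of kubevirt-tekton-tasks; the task names and parameters are illustrative and may differ between releases of the task catalog:

```yaml
# Hypothetical sketch only: task and parameter names are simplified
# and may not match the kubevirt-tekton-tasks version in your cluster.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: windows10-installer
spec:
  tasks:
    - name: copy-template
      taskRef:
        name: copy-template            # copy an existing VM template
      params:
        - name: sourceTemplateName
          value: windows10-base
    - name: modify-template
      taskRef:
        name: modify-vm-template       # update the copy with new metadata
      runAfter: [copy-template]
    - name: create-vm
      taskRef:
        name: create-vm-from-template  # instantiate the VM from the template
      runAfter: [modify-template]
```

Each task runs as a pod in the cluster, so the whole VM lifecycle is driven by the same pipeline machinery used for containerized applications.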
Various organizations have started to look into serverless as a way of building business logic that can take advantage of the cloud. Contrary to how it might look at first, relying strictly on functions that represent independent pieces of logic is not an easy task. There is a risk of losing the big picture and thereby not having full control over day-to-day operations.
In this session, Maciej will walk you through an approach that lets you rely on serverless techniques such as functions and cloud events while still working with a higher-level representation. He will introduce the Workflow as a Function Flow concept, which builds on top of state-of-the-art technologies such as MicroProfile (Quarkus), CloudEvents, and Knative Eventing to deliver a highly scalable, business-oriented solution that looks like a single service but runs as a set of functions.
The introduction will be accompanied by a set of demonstrations of function flows that show the ease of use and the visibility of the running solution.
Have you ever tried to create a cluster in OpenShift's multicluster engine? It can be quite a good experience if you have admin-level permissions. But what about typical developer or DevOps engineers? We haven't had a good solution for them until now.
In this session, we will take a look at a new approach using cluster templates, which allows admins to define guardrails for developers and DevOps engineers who want to self-service their new, fully configured clusters.
BugHunting is a challenge for conference attendees who want to test their coding and debugging skills. Participants are given several tasks in various programming languages. Each task consists of broken code and a bug report.
The goal of every task is to find the bug and fix it in the code. Your solution is evaluated immediately after you submit it (you can submit as many times as you want), and you score points if your solution is correct (points, once gained, can never be lost). The attendees with the most points will receive a prize at the end of the BugHunting session.
HOW TO JOIN
- You can just come and we’ll lend you a laptop.
- You can download and run the BugHunting container on your laptop (even any time after the conference - stop by for the credentials). For that, check the environment setup in our HowTo https://howto.bughunting.cz/env_setup.html.
For more information, see https://howto.bughunting.cz.
Cloud financial management, or FinOps in the cloud, encompasses the tools, products, practices, and cultural setup that increase an organization's ability to understand and manage its cloud costs.
Current approaches to implementing FinOps practices rely on teams conforming to predefined rules that should make their technical solutions financially efficient. These are technical rules, based on financial models defined within the company, intended to make sure that certain financial goals are met. For instance, an organization might create a rule that no snapshot should be older than a certain number of days. A more extreme example might be setting an acceptable CPU utilization level for using a certain instance type. This idea is focused on gaining financial benefit by educating teams on how to lower expenses, e.g., by setting a retention policy for snapshots. The approach is assisted by various practices, tools, and products. However it is implemented, it represents an extension of the financial side of an organization. Some recommendations will never be accepted by engineering teams because they do not provide enough technical evidence that applying them will lead to a sustainable engineering solution while saving money for the company.
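As a concrete illustration of such a rule, here is a minimal sketch in Python. The data and function names are hypothetical; a real FinOps tool would fetch snapshot metadata from the cloud provider's API rather than from a literal list.

```python
from datetime import datetime, timedelta

MAX_SNAPSHOT_AGE_DAYS = 30  # the financially motivated retention rule


def stale_snapshots(snapshots, now):
    """Return the ids of snapshots older than the allowed retention window.

    `snapshots` is a list of (id, created_at) pairs; in practice these
    would come from the cloud provider's API, not a hard-coded list.
    """
    cutoff = now - timedelta(days=MAX_SNAPSHOT_AGE_DAYS)
    return [sid for sid, created in snapshots if created < cutoff]


# Example: one snapshot is 45 days old, one is only 5 days old.
now = datetime(2023, 6, 1)
snaps = [
    ("snap-a", datetime(2023, 4, 17)),
    ("snap-b", datetime(2023, 5, 27)),
]
print(stale_snapshots(snaps, now))  # only "snap-a" exceeds 30 days
```

The rule is easy to automate, which is exactly why this style of FinOps is attractive; the criticism below is that such checks carry no technical rationale a team could engage with.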
Because the motivation is purely financial, the approach has no long-lasting technical influence in the organization, making its overall impact limited.
The proposed approach offers a methodology that establishes a framework not only for achieving financial results but also for making a long-lasting technical impact. It is a framework that does not interfere directly with the technical or financial domains but coexists with them, acting as a corrective force. It treats an organization as a distributed system, affecting different subsystems to achieve its goals. The approach is about establishing advanced observability at scale, and it is most suitable for large organizations where it is difficult to discuss strategy with many teams directly.
A simple example might be a subsystem able to detect teams with different strategies for managing their snapshots and to learn about those strategies before imposing a rule that would affect the organization. These strategies might differ between project types or between environments. A more complex example would be a subsystem able to detect different CPU utilization patterns and learn how to evaluate them more accurately. This learning would be supported by the subsystem but would have to be carried out through a dialog with the teams that exhibit different usage patterns. The resulting knowledge would make the subsystem better at performing its recommendation function. The most complex example would be its strategic function: a company with many teams would use the approach to steer them towards desired strategic goals, unifying technical and business goals. For instance, if an organization would like to improve its adoption of serverless technology, the subsystem would detect different patterns of using the serverless approach, e.g., some teams would use containers, some container platforms, some lambdas, and some a combination or something else entirely. Then a dialog between teams using representative patterns would be established, and valuable knowledge would be formed. There is no algorithm that can generate this knowledge: the important part of the process, besides collecting a lot of data, is engaging with teams, learning from that engagement, and feeding this knowledge back into the system.
The difference between the current and the proposed approach lies in the effects they have on a system. The current approach favors short-term, easier-to-implement actions supported by rich reporting capabilities. It is more generic and applicable to a wider range of organizations. The proposed one prefers long-term influence, nurturing a dialog that generates the knowledge used to accomplish long-term organizational goals.
The need for a cultural shift is also discussed. The current approach expresses the cultural shift as a set of organizational rules for lowering expenses. There is no direct correlation between efforts to lower expenses and project goals, so these rules feel imposed upon teams. The proposed approach defines the cultural shift as a contract between business and technology about the strategy that should bring success to the organization, hence the motivation to implement it. The proposed approach is used to guide strategy adoption.
Red Hat OpenShift Data Foundation (ODF) is a software-defined, container-native storage solution that is integrated with the OpenShift Container Platform. It is based on the Rook, Ceph, and NooBaa projects and, as such, provides file, block, and object storage. Each of these storage formats requires user oversight and monitoring.
The audience will learn about the various alerts that can appear in their storage cluster, where to check storage status, and what monitoring is available for users who consume ODF as a Managed Service add-on instead of the standard offering.
Managing a project's performance during its development is a tedious job, and the lack of open-source tooling does not make it any easier. In this talk, we will introduce Perun: a tool for comprehensive management of a project's performance. Perun links profiling results to the corresponding project versions (commits) and leverages these associations to analyze, e.g., performance changes.
We will demonstrate Perun's capabilities on two real-world use cases: (1) we will show how Perun can be used to detect a known CPython performance issue and help pinpoint its root cause; and (2) we will show how Perun's fuzzing module can be used to generate inputs that could help manifest performance issues (e.g., ReDoS attacks) in the future.
Get your VMs into the cloud with MTV!
We will talk about how to migrate your VMs from Red Hat Virtualization or VMware vSphere to Red Hat OpenShift with Container-native Virtualization, remapping your infrastructure from the virtualization environment to OpenShift.
I'll demonstrate the basic usage of the Migration Toolkit for Virtualization (MTV) by migrating virtual machines from Red Hat Virtualization to Red Hat OpenShift with Container-native Virtualization.
The talk is aimed at administrators who want to migrate their VMs to OpenShift.
In 2019, the Zeta Global DevOps team started a long journey into the world of hybrid cloud. Since then, we have gone through a successful migration of business services from bare-metal servers to a containerized setup, employing Kubernetes as the scheduler and the GitOps Toolkit as the entry point to the cluster.
During the talk, I'll share how our cluster deployment and maintenance processes and tooling matured, and how we worked with Engineering on making Zeta applications cloud-ready.
The talk is dedicated to the problem of moving from bare-metal deployments to containers and Kubernetes, and it covers the Zeta Global Kubernetes infrastructure in full.
It has been a long journey, and I hope our experience will be helpful to both DevOps and Engineering organizations in other companies.
20 minutes | D3 | Future Tech and Open Research
Service observability is the capability of software to produce sufficient output (metrics, logs, traces, etc.) to reason about its internal state. In the last couple of years, we've seen tremendous development in this area, especially in connection with distributed architectures and microservices. At the same time, AI/ML has become a commodity, with even very sophisticated tools available as open source, ready to be used on any data.
This talk will not be about applying the most sophisticated machine learning model of the year. Instead, we will share the challenges, needs, and lessons we've learned while working on a real-world project centered around observability data.
There are many configuration management approaches for Kubernetes. From plain YAML to complex Ansible playbooks, everything is possible. We evaluated many solutions and found our perfect match in Tanka and Jsonnet.
In this workshop session, we'll migrate a YAML-based Kubernetes deployment to Tanka/Jsonnet together and refine it for scale. Afterwards, you'll be able to simplify repetitive Kubernetes configurations and reuse common components across multiple projects.
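For a flavor of what such a migration yields, here is a hypothetical Jsonnet sketch of a reusable deployment constructor. The field names follow plain Kubernetes objects; a real Tanka setup would typically use a generated library such as k8s-libsonnet instead of writing these fields by hand.

```jsonnet
// Hypothetical sketch: a parameterized Deployment constructor in Jsonnet.
local deployment(name, image, replicas=2) = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: name },
  spec: {
    replicas: replicas,
    selector: { matchLabels: { app: name } },
    template: {
      metadata: { labels: { app: name } },
      spec: { containers: [{ name: name, image: image }] },
    },
  },
};

// Two deployments that would otherwise be near-duplicate YAML files.
{
  frontend: deployment('frontend', 'example/frontend:1.0'),
  backend: deployment('backend', 'example/backend:1.0', replicas=3),
}
```

The repetitive parts live in one function, and per-service differences shrink to a handful of arguments.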
To follow this workshop, you'll need either a Linux or macOS workstation; WSL2 will also work.
20 minutes | D2 | Cloud & Hyperscale
If you have ever developed an operator for Kubernetes, you have probably had to tweak your service account and assign it to a role. Setting up the RBAC correctly is not that hard, but it's not fun, and it distracts you from the real problem the operator is supposed to solve. This often leads to assigning cluster admin rights to the operator and neglecting security altogether.
Log2rbac is a tool (yet another operator) that aims to solve this issue. It assists you in setting up RBAC roles tailored to your application's needs. Come see this talk to learn more.
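For context, this is the kind of boilerplate in question: a minimal, hypothetical Role and RoleBinding granting an operator's service account access to the resources it manages (the names and resource list here are purely illustrative).

```yaml
# Hypothetical example of the RBAC boilerplate an operator needs;
# names and the resource list are illustrative only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-operator-role
  namespace: my-operator
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-operator-binding
  namespace: my-operator
subjects:
  - kind: ServiceAccount
    name: my-operator
    namespace: my-operator
roleRef:
  kind: Role
  name: my-operator-role
  apiGroup: rbac.authorization.k8s.io
```

Writing and maintaining rule lists like this by hand, for every resource the operator touches, is exactly the tedium that tempts people toward cluster admin.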
QtRVSim (https://github.com/cvut/qtrvsim) is a free and open-source RISC-V-based computer system simulator designed for teaching and learning computer systems principles. The simulator allows students to run assembly programs and observe the instruction execution on single-cycle and pipelined microarchitectures. The simulator graphically displays the major components in the datapath, including the register file, the arithmetic-logic unit, memory caches, peripherals, and the control unit with control signals. The talk will present the current capabilities of the simulator and possible usages for teaching, as well as the design of its implementation and opportunities for future development.
20 minutes | D3 | Future Tech and Open Research
IoT brought many new challenges into software development and especially into the world of testing and Quality Assurance. The physical world of IoT devices creates a new plane of complex issues beginning with data transmission and ending with reliability.
In this talk, we'll dig into the common problems quality engineers face during the development of such tests and how to mitigate them with Patriot-framework.
30 minutes | D2
Share your lightning talks with us! You will be able to submit your lightning talk proposals on the whiteboard throughout the event. You can start thinking now about the 5-minute talk you want to share with the audience (slide-less is fine; keep the topic related to the conference).