By Jamie Duncan and John Osborne
This article, excerpted from chapter 2 of OpenShift in Action, provides an introduction to OpenShift using the command line to deploy an application.
You can interact with OpenShift in three ways: the command line, the web interface, and the RESTful API. This article focuses on deploying applications using the command line, because the command line exposes more of the process used to create containerized applications in OpenShift. In other examples you may use the web interface, or even the API. Our intention is to give you the most real-world examples of using OpenShift, and to show you the best tool to get the various jobs done. We'll also try our best not to make you repeat yourself. Almost every action in OpenShift is possible using all three access methods; if something is limited to one of them, we'll do our best to let you know. With that said, in this article we're going to repeat ourselves. But we have a good reason!
The most common task in OpenShift is to deploy an application. Because this is the most common task, we want to introduce you to it as early as is practical, using both the command line and the web interface. Bear with us.
Your first OpenShift installation is a two-node cluster. There’s one master node (master) and one application node (node). Before we go any further, make sure you have OpenShift installed. After you’ve completed the installation, return here to get started. Like most applications, OpenShift requires a little configuration to get going. This is what the next sections discuss.
In OpenShift, every action requires authentication. This allows every action to be governed by the security and access rules set up for all users in an OpenShift cluster. By default your OpenShift cluster’s initial configuration is set to allow any user and password combination to log in. This is called the Allow All Identity Provider.
The Allow All identity provider creates a user account the first time a user logs in. Each username is unique, and the password can be anything except an empty field. This configuration is only safe and recommended for lab and development OpenShift instances like the one you set up.
The first user that you’ll create is called dev. This user represents any normal developer or end-user in OpenShift. One thing to be aware of is that this authentication method is case-sensitive. Although the passwords can be anything, dev and Dev are different users, and aren’t able to see the same projects and applications when they log in. Be careful when you log in.
Using the oc command line application
You need to have installed oc on your laptop or workstation. This is the tool you'll use to manage OpenShift on the command line. If you're using an OSX or Linux system, you can open your favorite terminal application. On Windows, open your command prompt. From your command line, run the oc login command, using dev for the username and password and the URL for your master server's API server (listing 1). The parameters for oc login are:
-u, the username to log in with
-p, the user’s password
the URL for your OpenShift master’s API server. By default, it’s served over HTTPS on TCP port 8443
Listing 1. Logging into OpenShift with oc with a username and password combination.
$ oc login -u dev -p dev https://ocp-184.108.40.206.100.nip.io:8443 ❶
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>
❶ The syntax for logging into an OpenShift cluster, including the username, the password, and the URL for your OpenShift master’s API server
OpenShift is prompting you to accomplish your next step, which is to create a project.
In OpenShift, projects are the fundamental way applications are organized. Projects let users collect their applications into logical groups. They also serve other useful roles in relation to security. For now, think of a project as a collection of related applications; you need to create your first one in OpenShift. You'll create your first project, and then use it to house a handful of applications that you'll deploy, modify, and redeploy.
To create a project, you'll need to run the oc new-project command, and provide a project name. For your first project, use image-uploader as the project name. After you create a new project using the new-project command (listing 2), the output prompts you to deploy your first application.
You can find documentation for all of the oc command’s features in the OpenShift CLI Reference documentation at https://docs.openshift.org/latest/cli_reference/get_started_cli.html.
In addition to the name for your project, you can optionally provide a display name. The display name is a more human-friendly name for your project. Your project name has a restricted syntax because it becomes part of the URL for all of the applications deployed within the project. We'll discuss how that works later.
Listing 2. Creating a new project, and being prompted to create a new application
$ oc new-project image-uploader --display-name='Image Uploader Project'
Now using project "image-uploader" on server "https://ocp-220.127.116.11.100.nip.io:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
Now that you’ve created your first project, the next section will walk you through deploying your first application, called Image Uploader, into your new project. Image Uploader is a web-based PHP application used to upload and display graphic files from your computer. Before we go further, let’s talk about application components to understand how all the parts fit and work together.
Applications in OpenShift aren't monolithic structures; they consist of a number of different components inside a project that work together to deploy, update, and maintain your application through its lifecycle. These components are:
Custom container images
Image streams
Build configs
Deployment configs
Deployments
Replication controllers
Pods
Services
Routes
These components all work together to serve your applications to your end users. Looking at figure 1, the interactions between the application components can seem a little complex. Let’s walk through what these components do in a little more detail. We’ll start with how OpenShift creates and uses custom container images for each application.
Figure 1. How application components work together to deploy and serve an application
Custom container images
Each application deployment in OpenShift creates a custom container image to serve your application. This image is created using your application’s source code and a custom base image called a builder image. For example, the PHP builder image contains the Apache web server and the core PHP language libraries.
The image build process takes the builder image you choose, integrates your source code, and creates the custom container image that is used for the application deployment. Once created, all of the container images, along with all of the builder images, are stored in OpenShift's integrated container registry, which is noted in figure 1. The component that controls the creation of your application containers is the buildconfig.
A buildconfig contains all of the information needed to build an application using its source code. This includes all of the information required to build the application container image:
URL for the application source code
Name of the builder image to use
Name of the application container image which is created
Events that can trigger a new build to occur
Looking at figure 1, you can see these relationships illustrated. The buildconfig is used to track what it takes to build your application, and to trigger the creation of the application’s container image.
After the buildconfig does its job and the application image is built, it triggers the deploymentconfig that was created for your new application.
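Putting those pieces together, a buildconfig can be sketched roughly as the following YAML. This is an illustrative sketch, not output from a real cluster: the object layout follows the OpenShift 3.x API, and the exact fields on your cluster may differ.

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: app-cli
  namespace: image-uploader
spec:
  source:                        # URL for the application source code
    type: Git
    git:
      uri: https://github.com/OpenShiftInAction/image-uploader.git
  strategy:                      # name of the builder image to use
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: php:latest
        namespace: openshift
  output:                        # application container image that is created
    to:
      kind: ImageStreamTag
      name: app-cli:latest
  triggers:                      # events that can trigger a new build
  - type: ConfigChange
  - type: ImageChange
```

Each of the four bullets above maps to one top-level field: source, strategy, output, and triggers.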
If an application is never deployed, it can never do its job. The job of deploying and upgrading your application is handled by the deploymentconfig. In figure 1, you can see that deploymentconfigs are created as part of the initial application deployment command. Deploymentconfigs track several pieces of information about an application:
Currently deployed version of your application
Number of replicas to maintain for the application
Events that can trigger a redeployment. By default, configuration changes to the deployment or changes to the container image trigger an automatic application redeployment
Upgrade strategy. app-cli uses the default rolling upgrade strategy
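The pieces of information tracked by a deploymentconfig can be sketched as the following YAML. This is a hand-written illustration based on the OpenShift 3.x API, assuming the app-cli application; it isn't output from a real cluster.

```yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: app-cli
  namespace: image-uploader
spec:
  replicas: 1                    # number of replicas to maintain
  strategy:
    type: Rolling                # default rolling upgrade strategy
  triggers:                      # events that trigger a redeployment
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - app-cli
      from:
        kind: ImageStreamTag
        name: app-cli:latest
  selector:
    app: app-cli
    deploymentconfig: app-cli
  template:                      # pod template stamped out for each deployment
    metadata:
      labels:
        app: app-cli
        deploymentconfig: app-cli
    spec:
      containers:
      - name: app-cli
        image: app-cli:latest
        ports:
        - containerPort: 8080
          protocol: TCP
```

The currently deployed version of the application is tracked in the object's status rather than in spec.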
A key feature of applications running in OpenShift is that they’re horizontally scalable. This concept is represented in the deploymentconfig by the number of replicas.
Maintaining application replicas
The number of replicas specified in a deploymentconfig is passed into a kubernetes object called a replication controller. A replication controller is a kubernetes component that keeps a specified number of replicas, copies of the application pod, running at all times. In OpenShift, application pods are managed by replication controllers by default.
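A minimal sketch of a replication controller, assuming the app-cli application, could look like the following YAML. The name suffix and labels are illustrative assumptions; OpenShift generates one replication controller per deployment version.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-cli-1                # hypothetical name: one per deployment version
  namespace: image-uploader
spec:
  replicas: 1                    # pod copies to keep running at all times
  selector:                      # labels used to find the pods it owns
    app: app-cli
    deployment: app-cli-1
  template:                      # pod template copied from the deploymentconfig
    metadata:
      labels:
        app: app-cli
        deployment: app-cli-1
    spec:
      containers:
      - name: app-cli
        image: app-cli:latest
```

If a pod matching the selector dies, the replication controller creates a replacement to get back to the desired replica count.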
Another feature which is managed by a deploymentconfig is how application upgrades can be fully automated.
Managing upgrade methods
The default application upgrade method in OpenShift is to perform a rolling upgrade. Rolling upgrades create new versions of an application, allowing new connections to the application to access only the new version. As traffic increases on the new version and goes down on the old version, the old version of the application is deleted.
The deploymentconfig component also manages how your application is deployed. Each deployment of an application is tracked by, and available to, the deploymentconfig. New application deployments can be triggered automatically by events such as configuration changes to your application, or a new version of a container image becoming available. These sorts of trigger events are monitored by imagestreams in OpenShift.
Each time a new version of an application is created by its buildconfig, a new deployment is created and tracked by the deploymentconfig. A deployment represents a unique version of an application. Each deployment references a version of the application image which was created, and creates the replication controller to create and maintain the pods to serve the application. In figure 1, the deployment is directly linked to the pod that serves an application.
New deployments can also be created automatically when an application is upgraded, and those upgrades are likewise managed by the deploymentconfig.
Imagestreams are used to automate actions inside OpenShift. They consist of links to one or more container images. Using imagestreams, OpenShift can monitor those images and trigger new application deployments when they're updated. In figure 1, you can see how imagestreams are linked to the container image for an application, as well as its deployment.
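A bare-bones imagestream is a very small object. The sketch below, assuming the app-cli application, is illustrative; when the first build completes, OpenShift populates the object's status with tags that point at the image in the integrated registry.

```yaml
apiVersion: v1
kind: ImageStream
metadata:
  name: app-cli                  # one stream per application image
  namespace: image-uploader
```

Because the deploymentconfig's ImageChange trigger references the ImageStreamTag app-cli:latest, pushing a new image to that tag is what kicks off an automatic redeployment.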
Now that we’ve gone through how applications are built and deployed, it’s time for you to deploy your first application. Let’s do that in the next section.
Deploying an application
Applications are deployed using the oc new-app command. When you run this command to deploy the Image Uploader application into the image-uploader project (listing 3), you need to provide three pieces of information:
The type of image stream you want to use. OpenShift ships with multiple container images, called builder images, which you can use as a starting point for applications. In this example you'll use the PHP builder image to create your application.
A name for your application. In this example, use app-cli, because this version of your application is deployed from the command line.
The location of your application's source code. OpenShift takes this source code and combines it with the PHP builder image to create a custom container image for your application deployment.
After you run the oc new-app command, you see a long list of output. This is OpenShift building out all of the components needed to make your application work properly, as we discussed at the beginning of this section.
Listing 3. Deploying a new application in OpenShift on the command line with the oc new-app command (output trimmed for clarity)
$ oc new-app \
>   --image-stream=php \ ❶
>   --code=https://github.com/OpenShiftInAction/image-uploader.git \ ❷
>   --name=app-cli ❸
...
--> Success
    Build scheduled, use 'oc logs -f bc/app-cli' to track its progress.
    Run 'oc status' to view your app.
❶ Image stream to use
❷ Source code for application
❸ Application name
Now that you’ve deployed your first application, you need to be able to access the newly deployed pod. Looking at figure 2, you can see that the pod is associated with a component called a service, which then links up to provide application access for users. Let’s take a look at services next.
Figure 2. Components that deploy an application inside an OpenShift project
Services provide consistent application access
In the course of a normal day's work, OpenShift can be forced to redeploy application pods in multiple ways, for any number of reasons:
Scaling applications up and down
Application pods stop responding correctly
Nodes could be rebooted or have issues
Human error (the most common cause)
The phase of the moon could be out of alignment, along with all of the other things that cause computers to not do what we want
Although pods may come and go, there needs to be a consistent presence for your applications inside OpenShift. This is what a service does. A service uses the labels applied to pods when they're created to keep track of all of the pods associated with a given application. This allows a service to act as an internal proxy for your application. You can see information about the service for app-cli by running the oc describe svc/app-cli command (listing 4). Each service gets an IP address which is only routable from inside the OpenShift cluster. Other information maintained by the service includes:
IP address of the service
TCP ports to connect to in the pod
Listing 4. Information about the app-cli service
$ oc describe svc/app-cli
Name:              app-cli
Namespace:         image-uploader
Labels:            app=app-cli
Selector:          app=app-cli,deploymentconfig=app-cli
Type:              ClusterIP
IP:                172.30.90.167 ❶
Port:              8080-tcp  8080/TCP ❷
Endpoints:
Session Affinity:  None
No events.
❶ IP address for the service
❷ Port to connect to the service on
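The service that oc describe reports on can be sketched as the following YAML. The values mirror what listing 4 shows for app-cli; treat it as an illustration of the object's shape rather than exact cluster output (the clusterIP, for example, is assigned by OpenShift at creation time).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-cli
  namespace: image-uploader
  labels:
    app: app-cli
spec:
  type: ClusterIP                # IP is only routable inside the cluster
  clusterIP: 172.30.90.167       # assigned by OpenShift; shown in listing 4
  selector:                      # pod labels the service proxies to
    app: app-cli
    deploymentconfig: app-cli
  ports:
  - name: 8080-tcp
    protocol: TCP
    port: 8080                   # TCP port to connect to in the pod
    targetPort: 8080
```

The selector is the key field: any pod carrying those labels, regardless of which deployment created it, is picked up by the service automatically.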
Most components in OpenShift have a shorthand that can be used on the command line to save time and avoid misspelled component names. In listing 4, you used svc/app-cli to get information about the service for the app-cli application. Buildconfigs can be accessed with bc/<app-name>, and deploymentconfigs with dc/<app-name>. You can find the rest of the shorthands in the OpenShift CLI documentation.
Services provide a consistent gateway into your application deployment. But the IP address of a service is only available inside your OpenShift cluster. To connect users to your applications and make DNS work properly you need one more application component. Next, you’ll create a route to expose app-cli externally from your OpenShift cluster.
Exposing services to the outside world with routes
When you installed your OpenShift cluster, one of the services created was a haproxy service running inside a container on OpenShift. HAProxy is an open source, software load-balancer application. To create a route for the app-cli application, run the following command:
oc expose svc/app-cli
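Under the hood, oc expose creates a route object like the one sketched below. The host value mirrors the one reported in listing 5; the rest of the layout is an illustration based on the OpenShift 3.x API.

```yaml
apiVersion: v1
kind: Route
metadata:
  name: app-cli
  namespace: image-uploader
  labels:
    app: app-cli
spec:
  host: app-cli-image-uploader.apps.192.168.122.101.nip.io  # generated host
  to:
    kind: Service
    name: app-cli                # service the route forwards to
    weight: 100
  port:
    targetPort: 8080-tcp         # named service port to use
```

HAProxy watches for new routes and adds a matching frontend configuration, so requests for that hostname are proxied to the app-cli service's endpoints.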
Examining route URLs
Like we discussed earlier, OpenShift uses projects to organize applications. An application's project is included in the URL which is generated when creating an application route. Each application's URL takes the following format:
<application-name>-<project-name>.<application-domain>
When you deployed OpenShift, you specified the application domain apps.192.168.122.101.nip.io. By default, all applications in OpenShift are served using the HTTP protocol. When you put all of this together, the URL for app-cli should be:
http://app-cli-image-uploader.apps.192.168.122.101.nip.io
You can get information about the route you created by running the oc describe route/app-cli command (listing 5), including:
Host configurations added to haproxy
Service associated with the route
Endpoints for the service to connect to when handling requests for the route
Listing 5. In depth information about the newly created app-cli route
$ oc describe route/app-cli
Name:             app-cli
Namespace:        image-uploader
Created:          About an hour ago
Labels:           app=app-cli
Annotations:      openshift.io/host.generated=true
Requested Host:   app-cli-image-uploader.apps.192.168.122.101.nip.io ❶
                  exposed on router router about an hour ago
Path:             <none>
TLS Termination:  <none>
Insecure Policy:  <none>
Endpoint Port:    8080-tcp

Service:    app-cli ❷
Weight:     100 (100%)
Endpoints:  10.129.1.112:8080 ❸
❶ URL created in haproxy
❷ Associated service
❸ Endpoints for service
Now that you have the route to your application created, go ahead and verify it's functional in a web browser. You should be able to browse to your app-cli application at this point, using the URL for the route that was created (figure 3).
You should be able to access your app-cli deployment from anywhere that your test cluster is accessible. If you created your cluster on virtual machines on your laptop, it’s most likely only accessible from your laptop. OpenShift is pretty awesome, but it can’t overcome the rules of TCP/IP networking.
Figure 3. The app-cli application web interface should be up and running and available
Focusing on the components that deploy and deliver the app-cli application (figure 2), you can see the relationship between the service, the newly created route, and the end users. The route is tied to the app-cli service, and users access the application pod through the route:
Users ←→ Route ←→ Service ←→ Pod
Relationships in OpenShift are important; multiple application components work in concert to build, deploy, and manage your applications.