DevOps Employed II Flashcards

1
Q

Tell me about yourself and what was your role?

A

I worked on a team of six people, which included team leads, consultants, managers, scrum masters, etc. On the team, I managed the AWS side and the automation around it.

Talk about any infrastructure deployment scripts that you’ve written. It can also include any configuration management you’ve done and any deployment automation you have done.

So it will include setting up a monitoring system, setting up the entire stack on the cloud, and collaborating with teams. (Maybe add what your code review process is.)
(Whatever I do in my team.)

2
Q

How good are you at programming?

A

I have not written a full-fledged application in something like Java or Ruby on Rails, but I am good at Ruby, Python, Shell, and Perl from a scripting perspective. I’m not a full-fledged application programmer, but I know the ins and outs of scripting.

3
Q

Are you from the Dev side or the Ops side?

A

I’m from the Ops side, but I have a good hold on the programming languages which are used for scripting and configuration management, and whatever code is required, I can write it easily.

4
Q

How quickly can you learn? Given a chance, could you architect an application?

A

Yes. I have been working with architects recently, contributing to the architecture from a DevOps perspective and giving my input so that the application can be developed in a more easily deployable manner.

5
Q

Given the chance to lead a team, could you do that?

A

Yes, definitely. I have more than five years of experience. (Give this answer only if you have at least five years of experience, including some time leading a team; with less than that it would not really be convincing.)

6
Q

What is the command to view the crontab?

A

The command is crontab -l.
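
A minimal sketch of the related commands (the deploy user is just an example):

# List the current user's cron jobs
crontab -l
# Edit the current user's crontab
crontab -e
# List another user's crontab (requires root)
crontab -u deploy -l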

7
Q

What is an alias in Linux?

A

An alias in Linux is a shortcut: a short name that expands to a longer command. System-wide aliases are defined in /etc/bashrc, and per-user aliases in ~/.bashrc.
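
A minimal sketch of defining one (the ll alias is just an example):

# Shortcut for the current shell session
alias ll='ls -alF'
# Persist it for the current user
echo "alias ll='ls -alF'" >> ~/.bashrc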

8
Q

What does the chmod command do?

A

The chmod command basically allows you to change the permissions of a file in Linux. Permissions can be switched between read, write, and execute modes, for the owner, the group, and others, so a file can go from read-only to read/write or to read/write/execute depending on the use case.
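
A short sketch with hypothetical file names:

# Owner can read/write, group and others can only read
chmod 644 app.conf
# Owner can read/write/execute, group and others can read/execute
chmod 755 deploy.sh
# Symbolic form: add execute permission for the owner only
chmod u+x deploy.sh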

9
Q

What is SSH port forwarding?

A

It’s a way to forward your ports through the SSH protocol, so it allows you to bypass firewalls and tunnel ports through strictly guarded environments. It’s one of the ways in which you can connect to instances or services in your private subnets in AWS or in your data center.
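
A minimal local-forwarding sketch (hostnames and ports are placeholders):

# Tunnel localhost:5433 through a bastion host to a database in a private subnet
ssh -L 5433:db.internal.example.com:5432 ec2-user@bastion.example.com
# Connections to localhost:5433 now reach the private database on port 5432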

10
Q

What is a Zombie Process?

A

A zombie process is a process which has terminated but whose entry has not yet been released from the process table. Most commonly it’s a child process that has exited but whose parent has not yet read its exit status, so the child’s entry is still there.
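
One way to spot them, as a quick sketch:

# Show processes whose state column contains Z (zombie), along with their parent PIDs
ps -eo pid,ppid,stat,cmd | awk '$3 ~ /Z/'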

11
Q

What is a Blue-Green deployment?

A

A Blue-Green deployment is one where you have X resources running your application; say that number is 10, so you have 10 servers in a web farm. In a Blue-Green deployment, you take half of those out of the actual production rotation and deploy the new code on them. In the meantime, the other half, the remaining 5, keep serving the production traffic and are not hindered or hampered in any way. Once the deployment on the first five is complete, you put them back in and wait for them to come back into service. Then you take the other five out of the production load and start deploying on those. So in a Blue-Green deployment the end user never sees any downtime; it’s an always-up environment.

A web farm is a group of two or more web servers (or nodes) that host multiple instances of an app. When requests from users arrive at the web farm, a load balancer distributes them across the farm’s nodes. Among other things, web farms improve capacity and performance: multiple nodes can process more requests than a single server.
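
A hedged sketch of the switch with the AWS CLI, assuming a classic ELB named web-prod and placeholder instance IDs:

# Take the first half of the fleet out of the production rotation
aws elb deregister-instances-from-load-balancer --load-balancer-name web-prod --instances i-0aaa1111 i-0bbb2222
# ...deploy the new code on those instances...
# Put them back and wait until they pass the ELB health checks
aws elb register-instances-with-load-balancer --load-balancer-name web-prod --instances i-0aaa1111 i-0bbb2222
# Repeat for the other half of the fleet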

12
Q

How do you do a hot deployment?

A

This is really just a rephrasing of the previous question. A hot deployment can be done by having two environments of the same size and redirecting traffic through a load balancer or a proxy service to one of them while you deploy on the other, then redirecting traffic to the freshly deployed environment and applying the deployment to the first one. Just like a Blue-Green deployment, you never show the end user any downtime.
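
One possible sketch with an Application Load Balancer, where the listener and target group ARNs are placeholders: deploy the new version to the idle (green) target group, verify it, then point the listener at it.

aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/abc123/def456 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/green/0123456789abcdef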

13
Q

What is your rollback strategy?

A

This is something you need to be very confident about because every deployment should have a rollback associated with it.

So let’s say your deployment fails: how do you roll back the system? The rollback has to be linked to the Blue-Green deployment. So you say that you have a Jenkins job or a script which does the Blue-Green deployment, and in the middle of that Blue-Green deployment it checks whether or not the services are up and running, and at the very least whether the HTTP endpoints of your application respond. If they don’t, the previous version is put back into service instead of continuing the rollout.
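
A hedged sketch of the health-check gate inside such a script (the URL is a placeholder):

# Check the HTTP endpoint of the newly deployed half before putting it back in service
if curl -fsS --max-time 5 http://app-new.internal.example.com/health > /dev/null; then
  echo "New version healthy, continuing the Blue-Green switch"
else
  echo "Health check failed, rolling back to the previous version"
  # e.g. re-register the old instances or redeploy the previous artifact here
  exit 1
fi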

14
Q

Have you used Jenkins for deployment?

A

Yes, I’ve used it with a couple of strategies: I’ve used it with plugins and I’ve used it with my own scripts. The plugins used to deploy code to the environment by building and publishing over SSH, and I have also had my Ansible, Chef, or Puppet code do the deployment for me. So we used Jenkins as an orchestrator and Puppet, Chef, and Ansible for the actual deployment to the target servers.
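
A minimal sketch of what an Execute Shell build step in such a Jenkins job might run, assuming the playbook and inventory live in the checked-out repository:

# Jenkins orchestrates; Ansible does the actual deployment
ansible-playbook -i inventory/production deploy.yml --extra-vars "app_version=${BUILD_NUMBER}"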

15
Q

What Jenkins plugins have you used?

A

I have used the Maven and Gradle plugins. I’ve used Cobertura for code coverage, PMD for programmatic mistake detection, RSpec for Ruby on Rails testing, Karma for AngularJS testing, and integration with S3. I’ve also used the Git plugin for checking out code, the SVN plugin for checking out SVN repositories, and the upstream/downstream plugin for connecting builds. In addition to this, I’ve also used the Archive Artifacts plugin and the Publish HTML Reports plugin to publish test reports.

16
Q

Have you ever used user-data for deployment?

A

Yes, I’ve used it in AWS. When you have instances behind load balancers, we used to use it to define the deployment script, so that every time a new instance comes up in the Auto Scaling group it has the latest code on it; that latest code is checked out by the shell commands specified inside the user data.
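
A hedged sketch of such a user-data script (the repository URL and start script are hypothetical):

#!/bin/bash
# Runs once at instance launch, so every new Auto Scaling instance starts with the latest code
yum -y install git
git clone https://github.com/example/app.git /opt/app
/opt/app/bin/start.sh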

17
Q

What is a VPC?

A

A VPC (Virtual Private Cloud) is a logically isolated slice of the Amazon cloud which they give you to run your resources in.
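
For example, carving one out with the AWS CLI (the CIDR block is just an example):

aws ec2 create-vpc --cidr-block 10.0.0.0/16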

18
Q

What is the difference between a Public and Private subnet?

A

A Public Subnet is a Subnet which is directly accessible from the internet.

A Private Subnet is a Subnet that is not accessible from the internet; it’s only accessible from within the VPC.
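
In practice the difference is the routing: a public subnet’s route table sends 0.0.0.0/0 to an Internet Gateway, while a private subnet typically sends it to a NAT gateway. A sketch with placeholder IDs:

aws ec2 create-route --route-table-id rtb-public111 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-aaa111
aws ec2 create-route --route-table-id rtb-private222 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-bbb222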

19
Q

What is a reserved instance?

A

A Reserved Instance is an instance which Amazon reserves for you for a one- or three-year term, and they give you significant price reductions on it. You can buy it with no-upfront, partial-upfront, or all-upfront payment, and you get discounts of roughly 20 to 60% based on the payment type and term.

20
Q

What is the difference between spot instance and reserved instance?

A

A Spot Instance is a bid-based instance: you specify the price you are willing to pay, and based on that you are assigned instances. The moment the spot price rises above your bid, your instance is terminated and the capacity goes to a higher bidder. A Reserved Instance, on the other hand, is not a biddable instance; you have to buy it for a specific term.

21
Q

Have you used Route 53?

A

Yes, I’ve used Route 53; I used it for managing DNS. We delegated our registrar’s DNS to Route 53 using the Amazon name servers, and then in Route 53 we used to create CNAME entries, A records, MX records, and TXT records, and we managed the entire DNS from there.
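
A hedged sketch of creating one such record with the AWS CLI (the zone ID, record name, and IP are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"www.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'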

22
Q

Which AWS feature do you like best?

A

I like the Auto Scaling group and the Elastic Load Balancer, because together they allow you to scale your application to practically any level.

23
Q

Which configuration management tool have you used?

A

I have used all three, Chef, Ansible, and Puppet, in one project or another. I have good hands-on experience with each of them, and I’ve written Chef cookbooks, Puppet modules, and Ansible playbooks.

24
Q

What is the difference between Chef and Ansible?

A

Chef is a Ruby-based tool; it uses Ruby as its main language, and it’s open source just like Ansible. Chef supports both a client-server architecture and a client-only (standalone) mode. One difference is that Ansible is simply agentless: there is no agent in Ansible whatsoever, while Chef normally runs an agent (the Chef client) on the managed nodes. The other difference is that Ansible uses YAML to define the state of systems in its playbooks, whereas Chef uses Ruby to define the state of systems in its cookbooks; Chef’s DSL is more of a programming language, while YAML in Ansible is not really a programming language, it’s more of a declarative description of state.
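
The agentless point is easy to demonstrate: Ansible only needs SSH access to the targets, so ad-hoc commands like the ones below work without installing anything on the nodes (the inventory path and group name are placeholders), whereas Chef’s client-server mode needs the chef-client agent on each node.

ansible all -i inventory/hosts -m ping
ansible webservers -i inventory/hosts -m shell -a "uptime"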

25
Q

Have you written any Cookbooks, modules, or Playbooks?

A

Yes, I’ve used both the community content from the repositories which each of these configuration management tools provides and my own. For Chef, I’ve used Chef Supermarket cookbooks, and I’ve also written custom cookbooks. For Puppet, I’ve used modules from Puppet Forge, and I’ve written my own modules for custom applications. For Ansible, I’ve used community-provided playbooks, and I’ve written my own custom playbooks for provisioning instances on the cloud as well as for ongoing configuration management.

26
Q

What is the biggest issue you faced in a production environment?

A

There’s an application we used to run on the cloud, and there was an application deployment error, a logical error in the application, which caused errors behind the Elastic Load Balancer. We had an Elastic Load Balancer which was tied to an Auto Scaling group, so the problem snowballed and we were not able to control it: the application scaled out uncontrollably, snowballing new instances. We had to go in and manually freeze the size of the Auto Scaling group. Once the size was frozen, we were able to go into the instances, check the logs, fix the issue, rebuild the AMI, and restart the auto scaling process. That was one of the biggest issues in production I faced recently.
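
A hedged sketch of the freeze step with the AWS CLI (the group name and sizes are placeholders):

# Pin the group so it stops adding and replacing instances while you investigate
aws autoscaling update-auto-scaling-group --auto-scaling-group-name web-asg --min-size 5 --max-size 5 --desired-capacity 5
# Or suspend the scaling processes entirely
aws autoscaling suspend-processes --auto-scaling-group-name web-asg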

27
Q

What is your DR strategy for a live website?

A

One DR strategy is a DNS-based failover using Route 53 health checks, in which you have got an exact replica of your environment: your web servers, your database, your cache. The database may not be synced in real time with the DR site, so it could be up to a day behind. Whenever there is downtime in production, traffic is switched to the DR site automatically through a DNS switch. The other DR strategy is to have instances in two Availability Zones and let your load balancer switch traffic based on the availability of those zones, but that is a same-region DR. For multi-region DR, the best option is a DNS-based switch. You can also have a proxy in front which routes traffic based on the health of your endpoints.
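
A hedged sketch of the health check that would drive such a DNS failover (the domain and path are placeholders):

aws route53 create-health-check --caller-reference dr-check-001 \
  --health-check-config Type=HTTP,FullyQualifiedDomainName=www.example.com,Port=80,ResourcePath=/health,RequestInterval=30,FailureThreshold=3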

28
Q

How do you scale a production web service?

A

If you’re on the cloud, you can use an Auto Scaling group, which will scale automatically based on demand. If you don’t want to go into instances and manage them yourself, you can use Elastic Beanstalk for a managed stack, which takes care of the load balancing itself. If you have an application which is already containerized, it’s better practice to run it on ECS on the cloud, which allows you to scale your applications based on demand, and you can create an Application Load Balancer there. If you’re in house, you can scale your application with a container management system: you have some base hosts on which you run multiple services, and you can keep adding hosts to that cluster without worrying about the scaling part, because hosts can be added on the fly and your Docker cluster, such as Swarm or Kubernetes, takes care of bringing those new machines into the cluster and running the existing containers on them.
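
For the Auto Scaling case, a hedged sketch of a target-tracking policy that scales on average CPU (the group name and target value are placeholders):

aws autoscaling put-scaling-policy --auto-scaling-group-name web-asg --policy-name cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'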