interview fix Flashcards
(104 cards)
Amazon Elastic Block Store
Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput-intensive and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows, are widely deployed on Amazon EBS.
Application-Layer Attacks
The application layer is the topmost layer of the OSI network model and the one closest to the user's interaction with the system. Attacks that make use of the application layer focus primarily on direct Web traffic. Potential avenues include HTTP, HTTPS, DNS, and SMTP.
Containerization
Containerization is defined as a form of operating system virtualization, through which applications are run in isolated user spaces called containers, all using the same shared operating system (OS).
Data Availability vs. Durability
Availability and durability are two very different aspects of data accessibility. Availability refers to system uptime, i.e. the storage system is operational and can deliver data upon request. Historically, this has been achieved through hardware redundancy so that if any component fails, access to data will prevail. Durability, on the other hand, refers to long-term data protection, i.e. the stored data does not suffer from bit rot, degradation or other corruption. Rather than focusing on hardware redundancy, it is concerned with data redundancy so that data is never lost or compromised.
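As a back-of-the-envelope sketch (the formula is a standard reliability estimate; the numbers below are hypothetical, not from the source), availability can be estimated from mean time between failures (MTBF) and mean time to repair (MTTR):

```python
# Availability = MTBF / (MTBF + MTTR): the fraction of time the system
# is up and able to deliver data upon request. Figures are hypothetical.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component that fails on average every 1000 hours and takes
# 1 hour to repair:
print(f"{availability(1000.0, 1.0):.4%}")  # 99.9001% uptime
```

Durability is measured differently: it is about the probability of ever losing the data, which is why storage systems quote figures like "eleven nines" of durability separately from their uptime numbers.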
DIFFERENCE BETWEEN STORAGE TYPES
File storage: Economical and easily structured, data is saved in files and folders. It is usually kept on hard drives, and the hierarchy the user sees mirrors how the data is organized on the drive.
Block storage: Data is stored in blocks of uniform size. Although more expensive, more complex, and less scalable, block storage is ideal for data that needs to be accessed and modified frequently.
Object storage: Data is stored as objects with unique metadata and identifiers. Although this type of storage is generally less expensive, object storage is only ideal for data that does not require modification.
Encryption at rest vs in transit
At rest: This kind of data is typically in a stable state: it is not traveling within the system or network, and it is not being acted upon by any application or third party. It's something that has reached a destination, at least temporarily.
In transit: Data that is moving through a system or network. This data can be encrypted in transit, for example with HTTPS.
IDS
An intrusion detection system (IDS) is a device or software application that monitors a network or systems for malicious activity or policy violations. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs. With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with Amazon CloudWatch Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.
If you need to construct a 3-tier layer of storage, how can you divide where you store each file?
The answer is: you would use lifecycle management. The most frequently accessed files go in S3 Standard, less frequently accessed files in S3 Standard-Infrequent Access, rarely accessed files in Amazon S3 Glacier, and extremely rarely accessed files in S3 Glacier Deep Archive. Non-AWS answer: SSD for fast access, SSHD for less frequent access, 7200 RPM HDD for rare access, and 5400 RPM HDD for extremely rare access.

A traditional IDS typically relies on monitoring network traffic at specific network traffic control points, like firewalls and host network interfaces. This allows the IDS to use a set of preconfigured rules to examine incoming data packet information and identify patterns that closely align with network attack types. Traditional IDS face several challenges in the cloud:
Networks are virtualized. Data traffic control points are decentralized and traffic flow management is a shared responsibility with the cloud provider. This makes it difficult or impossible to monitor all network traffic for analysis.
Cloud applications are dynamic. Features like auto-scaling and load balancing continuously change how a network environment is configured as demand fluctuates.
Most traditional IDS require experienced technicians to maintain their effective operation and avoid the common issue of receiving an overwhelming number of false positive findings. As a compliance assessor, I have often seen IDS intentionally de-tuned to address the false positive reporting issue when expert, continuous support isn't available.
GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon Virtual Private Cloud (Amazon VPC) flow logs, and Amazon Route 53 DNS logs. This gives GuardDuty the ability to analyze event data, from AWS API calls to AWS Identity and Access Management (IAM) login events, which is beyond the capabilities of traditional IDS solutions. Monitoring AWS API calls from CloudTrail also enables threat detection for AWS serverless services, which sets it apart from traditional IDS solutions. However, without inspection of packet contents, the question remained: "Is GuardDuty truly effective in detecting network-level attacks that more traditional IDS solutions were specifically designed to detect?"
AWS asked Foregenix to conduct a test that would compare GuardDuty to market-leading IDS to help answer this question for us. AWS didn't specify any particular attacks or architecture to be implemented within the test. It was left up to the independent tester to determine both the threat space covered by market-leading IDS and how to construct a test for measuring the effectiveness of the threat detection capabilities of GuardDuty and of traditional IDS solutions, including open-source and commercial IDS.
Foregenix configured a lab environment to support tests that used extensive and complex attack playbooks. The lab environment simulated a real-world deployment composed of a web server, a bastion host, and an internal server used for centralized event logging. The environment was left running under normal operating conditions for more than 45 days. This allowed all tested solutions to build up a baseline of normal data traffic patterns prior to the anomaly detection testing exercises that followed this activity.
Foregenix determined that GuardDuty is at least as effective at detecting network level attacks as other market-leading IDS. They found GuardDuty to be simple to deploy and required no specialized skills to configure the service to function effectively. Also, with its inherent capability of analyzing DNS requests, VPC flow logs, and CloudTrail events, they concluded that GuardDuty was able to effectively identify threats that other IDS could not natively detect and required extensive manual customization to detect in the test environment. Foregenix recommended that adding a host-based IDS agent on Amazon Elastic Compute Cloud (Amazon EC2) instances would provide an enhanced level of threat defense when coupled with Amazon GuardDuty.
As a PCI Qualified Security Assessor (QSA) company, Foregenix states that they consider GuardDuty as a qualifying network intrusion technique for meeting PCI DSS requirement 11.4. This is important for AWS customers whose applications must maintain PCI DSS compliance. Customers should be aware that individual PCI QSAs might have different interpretations of the requirement, and should discuss this with their assessor before a PCI assessment.
Customer PCI QSAs can also speak with AWS Security Assurance Services, an AWS Professional Services team of PCI QSAs, to obtain more information on how customers can leverage AWS services to help them maintain PCI DSS Compliance. Customers can request Security Assurance Services support through their AWS Account Manager, Solutions Architect, or other AWS support.
We invite you to download the Foregenix Amazon GuardDuty Security Review whitepaper to see the details of the testing and the conclusions provided by Foregenix.
IOPS
Input/output operations per second (IOPS, pronounced eye-ops) is an input/output performance measurement used to characterize computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage area networks (SAN)
NAS vs SAN
NAS: attached to the local area network, typical for small and medium businesses; a single point of failure and subject to bottlenecks because it shares the network with other traffic.
SAN: mounts like a local drive; more available, with multiple arrays shared in a cluster of servers; uses a dedicated high-speed network at the enterprise level, so it is not affected by general network traffic.
Object storage vs file system
File storage organizes and represents data as a hierarchy of files in folders; block storage chunks data into arbitrarily organized, evenly sized volumes; and object storage manages data and links it to associated metadata.
OSI MODEL
https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/
Protocol Attacks
A protocol attack focuses on damaging connection tables in network areas that deal directly with verifying connections. By sending successively slow pings, deliberately malformed pings, and partial packets, the attacking computer can cause memory buffers in the target to overload and potentially crash the system. A protocol attack can also target firewalls. This is why a firewall alone will not stop denial of service attacks.
RAID level 0 - Striping
In a RAID 0 system data are split up into blocks that get written across all the drives in the array. By using multiple disks (at least 2) at the same time, this offers superior I/O performance.
RAID level 1 - Mirroring
Data are stored twice by writing them to both the data drive (or set of data drives) and a mirror drive (or set of drives). If a drive fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation
RAID level 10 - Combining RAID 1 and RAID 0
It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID 1 in one single system. This is a nested or hybrid RAID configuration. It provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers.
RAID level 5
RAID 5 is the most common secure RAID level. It requires at least 3 drives but can work with up to 16. Data blocks are striped across the drives, and on one drive a parity checksum of all the block data is written. The parity data are not written to a fixed drive; they are spread across all drives. Using the parity data, the computer can recalculate the data of one of the other data blocks, should those data no longer be available. That means a RAID 5 array can withstand a single drive failure without losing data or access to data.
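The parity mechanism can be sketched with XOR (a toy illustration; real RAID controllers do this at the block-device level, but the arithmetic is the same):

```python
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three data blocks striped across three drives; parity on a fourth.
blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor_blocks, blocks)

# Drive 1 fails: its block is recomputed from parity plus the survivors.
survivors = [blocks[0], blocks[2]]
recovered = reduce(xor_blocks, survivors, parity)
assert recovered == blocks[1]  # b"BBBB" comes back intact
```

Because XOR is its own inverse, XOR-ing the parity with all the surviving blocks yields exactly the missing block, which is why a single drive failure is survivable.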
RAID level 6 - Striping with double parity
RAID 6 is like RAID 5, but the parity data are written to two drives. That means it requires at least 4 drives and can withstand 2 drives dying simultaneously. The chances that two drives break down at exactly the same moment are of course very small. However, if a drive in a RAID 5 system dies and is replaced by a new drive, it takes hours or even more than a day to rebuild the swapped drive. If another drive dies during that time, you still lose all of your data. With RAID 6, the RAID array will even survive that second failure.
Symmetric and Asymmetric encryption
Symmetric encryption uses a single key that needs to be shared among the people who need to read the message, while asymmetric encryption uses a key pair, a public key and a private key, to encrypt and decrypt messages when communicating.
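The contrast can be sketched with two deliberately toy examples (NOT secure cryptography; real systems use vetted algorithms such as AES for symmetric and RSA or ECC for asymmetric encryption; the key values below are illustrative):

```python
# Symmetric: the SAME key both encrypts and decrypts (toy XOR cipher).
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"
ciphertext = xor_cipher(b"hello", shared_key)
assert xor_cipher(ciphertext, shared_key) == b"hello"

# Asymmetric: DIFFERENT keys -- encrypt with the public key (e, n),
# decrypt with the private key (d, n). Textbook RSA with tiny primes.
p, q = 61, 53
n = p * q                  # 3233
e, d = 17, 2753            # e * d == 1 (mod (p-1)*(q-1))
message = 65
cipher = pow(message, e, n)
assert pow(cipher, d, n) == message
```

The practical consequence: symmetric schemes need a safe way to share the one key, while asymmetric schemes let anyone encrypt with the published public key, with only the private-key holder able to decrypt.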
Throughput vs Latency
Latency is the time required to perform some action or to produce some result. Latency is measured in units of time – hours, minutes, seconds, nanoseconds or clock periods.
Throughput is the number of such actions executed or results produced per unit of time. This is measured in units of whatever is being produced (cars, motorcycles, I/O samples, memory words, iterations) per unit of time. The term “memory bandwidth” is sometimes used to specify the throughput of memory systems.
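The two measures are related but not interchangeable. As a rough sketch (the queue-depth relationship is standard; the numbers are illustrative), a storage device serving one request at a time has its throughput bounded by its latency:

```python
# With queue depth 1, throughput is capped at 1 / latency; keeping more
# requests in flight raises throughput without lowering per-op latency.
def max_iops(latency_seconds: float, queue_depth: int = 1) -> float:
    """Upper bound on I/O operations per second for a given latency."""
    return queue_depth / latency_seconds

# 1 ms average latency, one request in flight at a time:
print(max_iops(0.001))      # 1000.0 IOPS
# Same latency, 32 requests kept in flight:
print(max_iops(0.001, 32))  # 32000.0 IOPS
```

This is why a high-latency link can still deliver high throughput if many operations are pipelined, and why low latency alone does not guarantee high throughput.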
Volumetric Attacks DDOS
The most common DDoS attack overwhelms a machine's network bandwidth by flooding it with false data requests on every open port the device has available. Because the bot floods ports with data, the machine continually has to deal with checking the malicious data requests and has no room to accept legitimate traffic. UDP floods and ICMP floods comprise the two primary forms of volumetric attacks.
Web Application Firewall Vs Firewall
In a technical sense, the difference between application-level firewalls and network-level firewalls is the layers of security they operate on. While web application firewalls operate on layer 7 (applications), network firewalls operate on layers 3 and 4 (data transfer and network). WAFs are focused on protecting applications, while network firewalls are more concerned with traffic into and out of your broader network
What are DDoS Attacks?
A distributed denial-of-service (DDoS) attack is a malicious attempt to disrupt normal traffic of a targeted server, service or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic.
What is Database Clustering
Database clustering is the process of connecting more than one server or instance to a single database. Sometimes one server may not be adequate to manage the amount of data or the number of requests; that is when a database cluster is needed. The terms database clustering, SQL Server clustering, and SQL clustering are closely associated, since SQL is the language used to manage the database information.
The main reasons for database clustering are the advantages it provides: data redundancy, load balancing, high availability, and monitoring and automation.