Exam Flashcards

(100 cards)

1
Q

Which of the following is a recommended pre-installation step?
A) Disable the default search app.
B) Configure search head forwarding.
C) Download the latest version of KV Store from MongoDB.com.
D) Install the latest Python distribution on the search head.

A

Answer: B

According to the Splunk Enterprise Security documentation, one of the recommended pre-installation steps is to configure search head forwarding. Search head forwarding is a feature that allows the search head to forward its internal logs and metrics to an indexer or a heavy forwarder for indexing and analysis. This feature
helps you monitor the health and performance of the search head and troubleshoot any issues that may arise. You can configure search head forwarding by editing the outputs.conf file on the search head and specifying the destination indexer or forwarder. See Configure search head forwarding for more details. The other options are not recommended, because they are either unnecessary or harmful for the installation of ES. Disabling the default search app is not a good option, because it may cause some features of ES to not work properly, such as the Content Management page and the navigation editor. Downloading the latest version of KV Store from MongoDB.com is not a good option, because ES uses the built-in KV Store service that comes with Splunk Enterprise and does not require any external installation or configuration. Installing the latest Python distribution on the search head is not a good option, because it may cause compatibility issues with ES, which uses the Python version that comes with Splunk Enterprise. Therefore, the correct answer is B. Configure search head forwarding. References = Configure search head forwarding.
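
As a rough illustration (the indexer host names and receiving port below are placeholder assumptions, not values from this question), search head forwarding is typically set up with an outputs.conf on the search head similar to:

    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997

    [indexAndForward]
    # do not keep a local copy; send the search head's internal logs to the indexing tier
    index = false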

2
Q

The Add-On Builder creates Splunk Apps that start with what?
A) DA-
B) SA-
C) TA-
D) App-

A

Answer: C

The Splunk Add-on Builder helps you create technology add-ons, which are specialized add-ons that help to collect, transform, and normalize data feeds from specific sources in your environment. Technology add-ons
are often referred to as TAs, and they start with the prefix TA-.

3
Q

How should an administrator add a new lookup through the ES app?
A) Upload the lookup file in Settings -> Lookups -> Lookup Definitions
B) Upload the lookup file in Settings -> Lookups -> Lookup table files
C) Add the lookup file to /etc/apps/SplunkEnterpriseSecuritySuite/lookups
D) Upload the lookup file using Configure -> Content Management -> Create New Content -> Managed
Lookup

A

Answer: D

The correct way to add a new lookup through the ES app is to upload the lookup file using Configure > Content Management > Create New Content > Managed Lookup. This allows the user to create or select an existing lookup file and definition, specify the lookup type, label, and description, and enable editing of the lookup file. This also stores the lookup file at the application level, which makes it easier to edit and share. The other options are either incorrect or not recommended for ES. Uploading the lookup file in Settings > Lookups > Lookup table files does not create a lookup definition or a label and description for the lookup. Uploading the lookup file in Settings > Lookups > Lookup Definitions does not upload the lookup file itself,
but only creates a definition for an existing file. Adding the lookup file to
/etc/apps/SplunkEnterpriseSecuritySuite/lookups requires manual editing of the file system and is not recommended for ES.

4
Q

Which of the following is an adaptive action that is configured by default for ES?
A) Create notable event
B) Create new correlation search
C) Create investigation
D) Create new asset

A

Answer: A

According to the Splunk Enterprise Security documentation, the Create Notable Event adaptive response action is one of the included adaptive response actions that is configured by default for ES. This action allows you to create a notable event from the results of a correlation search or from the details of another notable
event. You can customize the title, description, urgency, owner, and other fields of the notable event. The Create Notable Event action is useful for creating alerts or tasks based on specific conditions or criteria. Therefore, the correct answer is A. Create notable event. References = Create Notable Event.

5
Q

What can be exported from ES using the Content Management page?
A) Only correlation searches, managed lookups, and glass tables.
B) Only correlation searches.
C) Any content type listed in the Content Management page.
D) Only correlation searches, glass tables, and workbench panels.

A

Answer = C

The Content Management page in Splunk Enterprise Security allows you to export any content type that is listed on the page as an app. The content types include correlation searches, glass tables, dashboards, reports,
saved searches, key indicators, workbench panels, and managed lookups. You can use the export option to share custom content with other ES instances, such as migrating customized searches from a development or
testing environment into production. You can also import content from other ES instances or from Splunkbase using the Content Management page.

6
Q

ES apps and add-ons from $SPLUNK_HOME/etc/apps should be copied from the staging instance to what
location on the cluster deployer instance?
A) $SPLUNK_HOME/etc/master-apps/
B) $SPLUNK_HOME/etc/system/local/
C) $SPLUNK_HOME/etc/shcluster/apps
D) $SPLUNK_HOME/var/run/searchpeers/

A

Answer = C

ES apps and add-ons from $SPLUNK_HOME/etc/apps should be copied from the staging instance to the
$SPLUNK_HOME/etc/shcluster/apps location on the cluster deployer instance. This is the directory where the deployer stores the configuration bundle that it distributes to the search head cluster members. The configuration bundle consists of apps and other configuration files that are not replicated by the cluster. The deployer does not use the $SPLUNK_HOME/etc/master-apps/ directory, which is used by the master node in
an indexer cluster. The deployer does not use the $SPLUNK_HOME/etc/system/local/ directory, which is used to store local configuration files for the deployer instance itself. The deployer does not use the
$SPLUNK_HOME/var/run/searchpeers/ directory, which is used by the search head to store information about
the indexer cluster peers.
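
As a sketch of the workflow (the staging path, host name, and credentials are placeholders for illustration only), the apps are copied into the deployer's shcluster directory and then pushed to the cluster members:

    # on the deployer
    cp -r /tmp/es_staging/etc/apps/SplunkEnterpriseSecuritySuite $SPLUNK_HOME/etc/shcluster/apps/
    cp -r /tmp/es_staging/etc/apps/Splunk_SA_CIM $SPLUNK_HOME/etc/shcluster/apps/
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme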

7
Q

Which of these is a benefit of data normalization?
A) Reports run faster because normalized data models can be optimized for better performance.
B) Dashboards take longer to build.
C) Searches can be built no matter the specific source technology for a normalized data type.
D) Forwarder-based inputs are more efficient.

A

Answer: C

According to the Splunk Enterprise Security documentation, one of the benefits of data normalization is that searches can be built no matter the specific source technology for a normalized data type. Data normalization
is a way to ingest and store data in the Splunk platform using a common format for consistency and efficiency. When data is normalized, it follows the same field names and event tags for equivalent events from different
sources or vendors. This allows you to perform cross-source analysis and correlation of security events without worrying about the differences in data formats. For example, if you have data from Windows, Linux, and Mac OS systems, you can normalize them using the Endpoint data model and use the same normalized fields to search for endpoint events across all systems. Therefore, the correct answer is C. Searches can be built no matter the specific source technology for a normalized data type.

8
Q

A site has a single existing search head which hosts a mix of both CIM and non-CIM compliant applications.
All of the applications are mission-critical. The customer wants to carefully control cost, but wants good ES
performance. What is the best practice for installing ES?
A) Install ES on the existing search head.
B) Add a new search head and install ES on it.
C) Increase the number of CPUs and amount of memory on the search head, then install ES.
D) Delete the non-CIM-compliant apps from the search head, then install ES.

A

Answer: B

This is because ES is a resource-intensive application that requires a dedicated search head with sufficient CPU and memory. Installing ES on the existing search head may cause performance issues and conflicts with
other applications. Deleting the non-CIM-compliant apps from the search head is not recommended, as they are mission-critical for the site. Increasing the number of CPUs and amount of memory on the search head
may not be enough to handle the load of ES and other applications. Therefore, option B is the most suitable answer. See the Splunk Enterprise Security installation documentation for more information about installing ES.

9
Q

Which column in the Asset or Identity list is combined with event severity to make a notable event’s urgency?
A) VIP
B) Priority
C) Importance
D) Criticality

A

Answer: B

Explanation
The priority column in the asset or identity list is combined with the event severity to make a notable event’s
urgency in Splunk Enterprise Security. The urgency is a measure of how important it is to address a notable
event, and it is calculated based on a matrix that maps the priority of the asset or identity involved in the event
and the severity of the event. The urgency can be one of the following values: low, medium, high, or critical. For example, by default, medium, high, and critical priority, combined with critical severity, will generate a critical urgency ranking.

10
Q

Which argument to the | tstats command restricts the search to summarized data only?
A) summaries=t
B) summaries=all
C) summariesonly=t
D) summariesonly=all

A

Answer: C

The argument to the | tstats command that restricts the search to summarized data only is summariesonly=t. Summarized data is the data that is generated by the data model acceleration process, which creates summary indexes (TSIDX files) for the data models. By using summariesonly=t, the tstats command will only search the summary indexes, which can improve the performance and efficiency of the search. However, this also means
that the search will not return any events that are not covered by the data model acceleration, such as events outside the acceleration time range or events that do not match the data model constraints.
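
For example, a typical accelerated data model search (the data model, constraint, and field names here are illustrative assumptions, not part of the question) might look like:

    | tstats summariesonly=t count from datamodel=Authentication.Authentication
        where Authentication.action="failure" by Authentication.src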

11
Q

When installing Enterprise Security, what should be done after installing the add-ons necessary for normalizing data?
A) Configure the add-ons according to their README or documentation.
B) Disable the add-ons until they are ready to be used, then enable the add-ons.
C) Nothing, there are no additional steps for add-ons.
D) Configure the add-ons via the Content Management dashboard.

A

Answer: A

After installing the add-ons necessary for normalizing data, you should configure the add-ons according to their README or documentation. The add-ons that are included in the Splunk Enterprise Security package are preconfigured and do not require additional steps. However, the add-ons that are downloaded separately from Splunkbase may require additional configuration steps, such as enabling inputs, setting up credentials, or
modifying props and transforms. You should review the README or documentation for each add-on to determine the specific configuration requirements and follow the instructions accordingly.

12
Q

What feature of Enterprise Security downloads threat intelligence data from a web server?
A) Threat Service Manager
B) Threat Download Manager
C) Threat Intelligence Parser
D) Threat Intelligence Enforcement

A

Answer: B

The Threat Download Manager is a feature of Splunk Enterprise Security that downloads threat intelligence data from a web server. The Threat Download Manager is a modular input that runs on a schedule and fetches threat intelligence data from various sources, such as STIX/TAXII servers, RSS feeds, or custom URLs. The Threat Download Manager then passes the downloaded data to the Threat Intelligence Parser for further
processing.

13
Q

Where should an ES search head be installed?
A) On a Splunk server with top level visibility.
B) On any Splunk server.
C) On a server with a new install of Splunk.
D) On a Splunk server running Splunk DB Connect.

A

Answer: C

According to the Splunk Enterprise Security documentation, the recommended way to install ES is on a server with a new install of Splunk. This is because ES requires a dedicated search head that is not shared with other
apps or users. Installing ES on a server with a new install of Splunk ensures that there are no conflicts or performance issues with other apps or configurations. If you want to install ES on an existing search head, you need to follow some additional steps, such as redirecting distributed search connections, purging KV Store,
and backing up existing data.

14
Q

Which of the following is a key feature of a glass table?
A) Rigidity.
B) Customization.
C) Interactive investigations.
D) Strong data for later retrieval.

A

Answer: B

A key feature of a glass table is customization. A glass table is a dashboard that allows you to create dynamic and interactive visualizations of your security data. You can customize a glass table by adding static images and text, the results of ad-hoc searches, and security metrics that show the values of KPIs, service health
scores, or notable events. You can also configure the appearance, behavior, and drilldown options of the glass table elements. A glass table is not rigid, but flexible and adaptable to your security needs. A glass table is not designed for interactive investigations, but for high-level monitoring and analysis. A glass table does not store data for later retrieval, but shows real-time data generated by KPIs and services.

15
Q

After data is ingested, which data management step is essential to ensure raw data can be accelerated by a Data
Model and used by ES?
A) Applying Tags.
B) Normalization to Customer Standard.
C) Normalization to the Splunk Common Information Model.
D) Extracting Fields.

A

Answer: C

After data is ingested, the data management step that is essential to ensure raw data can be accelerated by a data model and used by ES is normalization to the Splunk Common Information Model (CIM). The CIM is a
standard and consistent way of naming and structuring the fields and tags for different types of data, such as network, web, email, authentication, and malware. The CIM allows you to use the same search queries and
dashboards across different data sources, even if they have different formats or schemas. Normalizing data to the CIM involves mapping the raw data fields and tags to the CIM fields and tags using technology add-ons.
Technology add-ons are Splunk apps that provide the necessary configurations and extractions for specific data sources. By normalizing data to the CIM, you can enable data model acceleration for the data models that use the CIM fields and tags. Data model acceleration is a feature that speeds up searches and reports that use data models by pre-computing and storing the results of the data model queries. Data model acceleration is
required for most of the dashboards and correlation searches in Splunk Enterprise Security.

16
Q

What kind of value is in the red box in this picture?
[Screenshot: the Additional Fields panel of an event, showing HTTP Method: GET and Source: 10.98.27.195, with the value 500 highlighted in a red box next to the Source field.]

A) A risk score.
B) A source ranking.
C) An event priority.
D) An IP address rating.

A

Answer: D

The value in the red box is an IP address rating. This is a numerical value that represents the risk associated with an IP address. The higher the value, the higher the risk. This value is calculated based on the number of security events associated with the IP address, the severity of those events, and the time since the last event.

17
Q

Which of the following is a Web Intelligence dashboard?
A) Network Center
B) Endpoint Center
C) HTTP Category Analysis
D) stream: http Protocol dashboard

A

Answer: C

According to the Splunk Enterprise Security documentation, the HTTP Category Analysis dashboard is one of the Web Intelligence dashboards that help you analyze web traffic in your network and identify notable HTTP
categories, user agents, new domains, and long URLs. The dashboard shows the top HTTP categories by bytes, requests, and users, and allows you to filter the data by time range, category, user, and domain. The dashboard also provides drilldown links to other dashboards, such as the Web User Agent Analysis dashboard and the Web Domain Analysis dashboard, for further analysis.

18
Q

Which indexes are searched by default for CIM data models?
A) notable and default
B) summary and notable
C) _internal and summary
D) All indexes

A

Answer: D

By default, the CIM data models search all indexes in Splunk Enterprise Security. This means that any event that matches the tags and fields of a data model can be included in the data model, regardless of the index where it is stored. However, this can also affect the performance and efficiency of the data model searches, especially if there are many indexes that do not contain relevant data for the data model. Therefore, it is recommended to use the indexes allow list setting in the CIM add-on to constrain the indexes that each data model searches. The indexes allow list is a comma-separated list of indexes that you want to include in the data model search. You can specify index names or index macros. For example, you can set the indexes allow
list for the Authentication data model to index=main, index=security, index=auth to limit the search to only
those three indexes.
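
In recent versions of the CIM add-on the per-data-model index constraint is stored as a macro. A minimal sketch, assuming the macro name used by your CIM version is cim_Authentication_indexes, would be a local macros.conf entry in Splunk_SA_CIM such as:

    [cim_Authentication_indexes]
    definition = (index=main OR index=security OR index=auth)

In practice this is normally edited through the CIM Setup page rather than by hand.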

19
Q

To observe what network services are in use in a network’s activity overall, which of the following dashboards in Enterprise Security will contain the most relevant data?
A) Intrusion Center
B) Protocol Analysis
C) User Intelligence
D) Threat Intelligence

A

Answer = B

To observe what network services are in use in a network’s activity overall, the Protocol Analysis dashboard in Enterprise Security will contain the most relevant data. The Protocol Analysis dashboard shows the network traffic data by protocol, such as TCP, UDP, ICMP, and others. You can use this dashboard to identify the most active protocols, the most active hosts, the most active ports, and the most active connections in your network. You can also filter the dashboard by protocol, host, port, or connection to narrow down your analysis. The Protocol Analysis dashboard uses the data from the Network Resolution (stream) data model, which requires the Splunk Stream app to collect network packet data1.

20
Q

How is it possible to navigate to the ES graphical Navigation Bar editor?
A) Configure -> Navigation Menu
B) Configure -> General -> Navigation
C) Settings -> User Interface -> Navigation -> Click on “Enterprise Security”
D) Settings -> User Interface -> Navigation Menus -> Click on “default” next to
SplunkEnterpriseSecuritySuite

A

Answer = B

To navigate to the ES graphical Navigation Bar editor, you need to click the Configure menu in the ES app bar, then select General, and then select Navigation. The Navigation page allows you to customize the navigation bar of the ES app by adding, removing, or reordering the menu items. You can also edit the labels,
icons, and links of the menu items. You can use the graphical editor to drag and drop the menu items, or you can edit the navigation XML directly. For more information, see Customize the navigation bar in Splunk Enterprise Security1. The other options, A, C, and D, are not correct. There is no Navigation Menu option under the Configure menu. The Settings menu does not allow you to edit the navigation bar of the ES app. The Settings menu only allows you to edit the navigation menus of the Splunk platform, such as the app launcher
and the user menu.

21
Q

What does the risk framework add to an object (user, server or other type) to indicate increased risk?
A) An urgency.
B) A risk profile.
C) An aggregation.
D) A numeric score.

A

Answer: D

The risk framework in Splunk Enterprise Security adds a numeric score to an object (user, server or other type) to indicate increased risk. The numeric score is calculated by summing up the risk scores of all the risk
modifiers that are associated with the object. A risk modifier is an event that modifies the risk of an object, such as a malware infection, a failed login, or a suspicious activity. The risk score of a risk modifier is determined by the correlation search that triggers the risk analysis response action, which can be customized or
created by the user. The numeric score of an object reflects its overall risk level and can be used to prioritize investigation and response actions.

22
Q

The option to create a Short ID for a notable event is located where?
A) The Additional Fields.
B) The Event Details.
C) The Contributing Events.
D) The Description.

A

Answer: B

According to the Splunk Enterprise Security documentation, the option to create a Short ID for a notable event is located in the Event Details section of the notable event. The Event Details section shows the basic information about the notable event, such as title, description, urgency, owner, status, and others. It also provides a link to Create Short ID, which generates a 6-digit alphanumeric code that can be used to identify and share the notable event. The Short ID is appended to the URL of the Incident Review dashboard and can
be used to filter the notable events by the Short ID field. See Manually create a notable event in Splunk Enterprise Security for more details. Therefore, the correct answer is B. The Event Details.

23
Q

What should be used to map a non-standard field name to a CIM field name?
A) Field alias.
B) Search time extraction.
C) Tag.
D) Eventtype.

A

Answer: A

A field alias is a knowledge object that maps a non-standard field name to a CIM field name. A field alias allows you to use the same search string to retrieve data from different data sources, even if the data sources use different field names for the same type of data. For example, if you have data sources that use different field names for the source IP address, such as src_ip, source_ip, or sip, you can create a field alias that maps these field names to the CIM field name src. This way, you can use src as a common field name in your searches and reports, and Splunk will automatically replace it with the appropriate field name for each data
source. Field aliases are applied at search time, so they do not affect the original data or the index time field
extractions.
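
A minimal props.conf sketch (the sourcetype name and original field names are assumptions for illustration):

    [acme:firewall]
    FIELDALIAS-cim_src  = src_ip AS src
    FIELDALIAS-cim_dest = dst_ip AS dest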

24
Q

Which feature contains scenarios that are useful during ES Implementation?
A) Use Case Library
B) Correlation Searches
C) Predictive Analytics
D) Adaptive Responses

A

Answer: A

According to the Splunk Enterprise Security documentation, the Use Case Library is a feature that contains scenarios that are useful during ES implementation. The Use Case Library provides a collection of Analytic Stories that provide actionable guidance for detecting, analyzing, and addressing security threats. An Analytic Story contains the searches, data sources, and explanations that you need to implement the scenario in your own ES environment. The Use Case Library also allows you to explore, activate, bookmark, and configure the searches that are related to each Analytic Story. You can filter the Analytic Stories by industry use cases, frameworks, or data sources. The Use Case Library helps you to quickly and easily deploy the most relevant
security content for your organization. Therefore, the correct answer is A. Use Case Library.

25
Q

Accelerated data requires approximately how many times the daily data volume of additional storage space per year?
A) 3.4
B) 5.7
C) 1.0
D) 2.5

A

Answer = A According to the Splunk Lantern article on Managing data models in Enterprise Security, accelerated data requires approximately 3.4 times the daily data volume of additional storage space per year. This means that if the daily input volume is 100 GB, the accelerated data model storage per year would be 100 GB x 3.4 = 340 GB. This estimate may vary depending on the data model configuration, the data retention policy, and the indexer cluster replication factor.

26
Q

Which of the following are the default ports that must be configured for Splunk Enterprise Security to function?
A) SplunkWeb (8068), Splunk Management (8089), KV Store (8000)
B) SplunkWeb (8390), Splunk Management (8323), KV Store (8672)
C) SplunkWeb (8000), Splunk Management (8089), KV Store (8191)
D) SplunkWeb (8043), Splunk Management (8088), KV Store (8191)

A

Answer = C According to the Splunk Enterprise Security documentation, the default ports that must be configured for Splunk Enterprise Security to function are the following: SplunkWeb (8000): This port provides the socket for Splunk Web, the web interface for Splunk Enterprise Security. It allows you to access the dashboards, reports, alerts, and other features of Splunk Enterprise Security from your browser. You can change this port in the web.conf file or by using the splunk set web-port command. Splunk Management (8089): This port is used to communicate with the splunkd daemon, the main process that runs Splunk Enterprise Security. Splunk Web talks to splunkd on this port, as does the command line interface, and any distributed connections from other servers. This port also provides the REST API endpoint for Splunk Enterprise Security. You can change this port in the server.conf file or by using the splunk set splunkd-port command. KV Store (8191): This port is used by the KV Store, a MongoDB-based service that stores key-value pairs of data for Splunk Enterprise Security. The KV Store is used to store and manage data for various features of Splunk Enterprise Security, such as asset and identity correlation, threat intelligence, adaptive response, and investigations. You can change this port in the server.conf file.
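
As a hedged illustration of where these defaults live (exact stanzas can vary by version and deployment), the ports map to settings such as:

    # web.conf
    [settings]
    httpport = 8000
    mgmtHostPort = 127.0.0.1:8089

    # server.conf
    [kvstore]
    port = 8191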

27
Q

When creating custom correlation searches, what format is used to embed field values in the title, description, and drill-down fields of a notable event?
A) $fieldname$
B) “fieldname”
C) %fieldname%
D) _fieldname_

A

Answer: A When creating custom correlation searches, you can use the $fieldname$ format to embed field values in the title, description, and drill-down fields of a notable event. This allows you to customize the notable event with dynamic information from the search results. For example, you can use $src$ to include the source IP address of the event, or $user$ to include the user name of the event.
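
For instance, in a correlation search's notable action (the field names user, src, and count are assumptions for illustration), savedsearches.conf might contain:

    action.notable = 1
    action.notable.param.rule_title = Excessive failed logins for $user$ from $src$
    action.notable.param.rule_description = $count$ failed logins were observed for $user$ from $src$.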

28
Q

How does ES know local customer domain names so it can detect internal vs. external emails?
A) Web and email domain names are set in General -> General Configuration.
B) ES uses the User Activity index and applies machine learning to determine internal and external domains.
C) The Corporate Web and Email Domain Lookups are edited during initial configuration.
D) ES extracts local email and web domains automatically from SMTP and HTTP logs.

A

Answer: C Splunk Enterprise Security knows the local customer domain names so it can detect internal vs. external emails by using the Corporate Web and Email Domain Lookups. These are lookup files that contain the list of domains that are considered internal or corporate for the organization. The Corporate Web and Email Domain Lookups are edited during the initial configuration of Splunk Enterprise Security, and they are used to enrich events with the tag=internal_web or tag=internal_email fields. These fields indicate whether the web or email activity is internal or external, and they are used by dashboards and correlation searches in Splunk Enterprise Security to monitor and analyze the web and email traffic.

29
Q

Which lookup table does the Default Account Activity Detected correlation search use to flag known default accounts?
A) Administrative Identities
B) Local User Intel
C) Identities
D) Privileged Accounts

A

Answer: B According to the Splunk Enterprise Security documentation, the Default Account Activity Detected correlation search uses the Local User Intel lookup table to flag known default accounts. The Local User Intel lookup table contains a list of default usernames and passwords for various systems and applications, such as admin, root, guest, and others. The correlation search compares the authentication events from the Authentication data model with the usernames in the lookup table and generates a notable event if there is a match. The notable event indicates that a default account was used to access a system or application, which could be a sign of a brute force attack or a misconfiguration.

30
Q

Which setting indicates that the correlation search will be executed as new events are indexed?
A) Always-On
B) Real-Time
C) Scheduled
D) Continuous

A

Answer: B A correlation search that is set to run in real-time mode will be executed as new events are indexed. Real-time mode means that the search continuously runs over a rolling window of time, such as the last 15 minutes or the last hour. Real-time searches can detect patterns and anomalies in near real-time and trigger adaptive response actions accordingly. However, real-time searches are more resource-intensive than scheduled searches and may impact the overall performance of the system. Therefore, Splunk Enterprise Security uses indexed real-time searches by default for some correlation searches, which are more efficient than non-indexed real-time searches. You can change the search mode of a correlation search from real-time to scheduled or vice versa from the Content Management page. See Configure correlation searches in Splunk Enterprise Security1 for more details. The other options, A, C, and D, are not correct. Always-On is not a valid search mode for correlation searches. Scheduled mode means that the search runs at a specified interval, such as every 5 minutes or every hour. Continuous mode is a deprecated search mode that is no longer supported by Splunk Enterprise Security.

31
Q

Which two fields combine to create the Urgency of a notable event?
A) Priority and Severity.
B) Priority and Criticality.
C) Criticality and Severity.
D) Precedence and Time.

A

Answer: A The urgency of a notable event is a value that indicates how important or urgent the event is for investigation and response. The urgency of a notable event is determined by two fields: the priority and the severity. The priority is a value that is assigned to an asset or an identity based on how critical or valuable it is for the organization. The priority can be unknown, low, medium, high, or critical. The severity is a value that is assigned to a notable event based on how serious or harmful the event is for the security posture. The severity can be unknown, informational, low, medium, high, or critical. The urgency of a notable event is calculated by combining the priority and the severity values using a lookup table called urgency_lookup. The urgency can be informational, low, medium, high, or critical. You can use the urgency field to prioritize the investigation of notable events in Splunk Enterprise Security.

32
Q

Which of the following actions may be necessary before installing ES?
A) Redirect distributed search connections.
B) Purge KV Store.
C) Add additional indexers.
D) Add additional forwarders.

A

Answer = A According to the Splunk Enterprise Security documentation, one of the actions that may be necessary before installing ES is to redirect distributed search connections. This action is required if you are installing ES on a search head that is already connected to a distributed search environment, such as a search head cluster or a search head pool. You need to redirect the distributed search connections from the existing search head to a new search head that will run ES. This is because ES requires a dedicated search head that is not shared with other apps or users. You can use the Distributed Configuration Management tool to redirect the distributed search connections and create a Splunk Enterprise Security app for indexers. See Redirect distributed search connections for more details. The other actions are not necessary before installing ES, but they may be helpful for optimizing the performance and scalability of ES. Purging KV Store can free up some disk space and remove stale data, but it is not required before installing ES. See Purge the KV Store for more information. Adding additional indexers can improve the indexing and searching capacity of ES, but it is not required before installing ES. See Deployment planning for more information. Adding additional forwarders can increase the data ingestion and forwarding capability of ES, but it is not required before installing ES.

33
Q

Which setting is used in indexes.conf to specify alternate locations for accelerated storage?
A) thawedPath
B) tstatsHomePath
C) summaryHomePath
D) warmToColdScript

A

Answer: B The setting that is used in indexes.conf to specify alternate locations for accelerated storage is tstatsHomePath. Accelerated storage is the location where Splunk Enterprise stores the summary data for accelerated data models and reports. By default, acceleration storage is allocated in the same location as the index containing the raw events being accelerated. However, if you need to specify alternate locations for your accelerated storage, you can use the tstatsHomePath setting in indexes.conf. This setting allows you to define a different path for the summary data, which can improve the performance and efficiency of the data model acceleration. For example, you can set the tstatsHomePath to a faster disk or a different volume than the index homePath.
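
A minimal indexes.conf sketch (the volume name and path are assumptions, not defaults):

    [volume:dm_summaries]
    path = /fast_storage/splunk_summaries

    [security]
    homePath   = $SPLUNK_DB/security/db
    coldPath   = $SPLUNK_DB/security/colddb
    thawedPath = $SPLUNK_DB/security/thaweddb
    tstatsHomePath = volume:dm_summaries/security/datamodel_summary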

34
Q

Which of the following is a risk of using the Auto Deployment feature of Distributed Configuration Management to distribute indexes.conf?
A) Indexes might crash.
B) Indexes might be processing.
C) Indexes might not be reachable.
D) Indexes have different settings.

A

Answer: D The Auto Deployment feature of Distributed Configuration Management is a tool that allows you to automatically distribute configuration files, such as indexes.conf, to your Splunk platform instances. However, using this feature to distribute indexes.conf can pose a risk of having indexes with different settings across your instances. This can happen if you have manually edited the indexes.conf file on some of your instances, or if you have different versions of Splunk Enterprise Security installed on different instances. If the indexes have different settings, such as retention policies, storage paths, or bucket sizes, this can cause data inconsistency, search inefficiency, or data loss. Therefore, it is recommended to use the Manual Deployment feature of Distributed Configuration Management to review and validate the indexes.conf file before deploying it to your instances12.

35
Q

Where is detailed information about identities stored?
A) The Identity Investigator index.
B) The Access Anomalies collection.
C) The User Activity index.
D) The Identity Lookup CSV file.

A

Answer: D Detailed information about identities, such as user names, email addresses, phone numbers, and roles, is stored in the Identity Lookup CSV file in Splunk Enterprise Security. The Identity Lookup CSV file is a lookup file that contains the identity data that is collected and extracted from various data sources, such as Active Directory, LDAP, or custom identity lists. The Identity Lookup CSV file is used to enrich events with identity information and generate notable events based on identity correlation searches.
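
An abbreviated sketch of what such an identity lookup can look like (columns vary by ES version, and the sample row is fictitious):

    identity,prefix,nick,first,last,suffix,email,phone,phone2,managedBy,priority,bunit,category,watchlist,startDate,endDate
    jdoe,,jd,Jane,Doe,,jane.doe@example.com,555-0100,,,high,IT,privileged,false,,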

36
Q

Which data model populates the panels on the Risk Analysis dashboard?
A) Risk
B) Audit
C) Domain analysis
D) Threat intelligence

A

Answer: A The Risk Analysis dashboard uses the Risk data model to populate the panels. The Risk data model is a data model that contains information about the risk scores and risk modifiers of various objects, such as systems, users, hashes, and network artifacts. The Risk data model accelerates these fields for the Risk Analysis and Incident Review dashboards. The Risk data model also handles case insensitive asset and identity correlation, allowing risk modifiers that are applied to system or user name variants to be correctly attributed to the same risk_object1. The other options, B, C, and D, are not correct. The Audit data model contains information about audit events, such as user logins, password changes, and system access. The Domain Analysis data model contains information about the domains that are visited by the systems in the network. The Threat Intelligence data model contains information about the threat intelligence sources, indicators, and matches.

37
Q

In order to include an event type in a data model node, what is the next step after extracting the correct fields?
A) Save the settings.
B) Apply the correct tags.
C) Run the correct search.
D) Visit the CIM dashboard.

A

Answer: B In order to include an eventtype in a data model node, you need to apply the correct tags to the eventtype. Tags are labels that you can assign to event types to identify them as belonging to a specific category or domain. Tags are used by data models to map event types to data model nodes. For example, if you have an eventtype named windows_performance that contains events related to Windows performance metrics, you can tag it with performance and os. Then, you can include the eventtype in a data model node that matches those tags, such as the Performance node in the Operating System data model. To apply tags to an eventtype, you can use the Settings > Event types page in Splunk Web, or the eventtypes.conf and tags.conf configuration files.
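
A short sketch of the two configuration files involved (the eventtype search string is an assumption for illustration):

    # eventtypes.conf
    [windows_performance]
    search = sourcetype=Perfmon*

    # tags.conf
    [eventtype=windows_performance]
    performance = enabled
    os = enabled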

38
Q

What is the default schedule for accelerating ES Datamodels?
A) 1 minute
B) 5 minutes
C) 15 minutes
D) 1 hour

A

Answer: B According to the Splunk Enterprise Security documentation, the default schedule for accelerating ES data models is every 5 minutes. This means that the data model acceleration searches run every 5 minutes to summarize the newly indexed data and store the results in the tsidx files. The 5-minute schedule is recommended for most use cases, as it provides a balance between search performance and resource consumption. However, you can change the schedule of a data model acceleration search in the Content Management page of Splunk Enterprise Security, if needed.

39
Q

Which correlation search feature is used to throttle the creation of notable events?
A) Schedule priority.
B) Window interval.
C) Window duration.
D) Schedule windows.

A

Answer: C The correlation search feature that is used to throttle the creation of notable events is the window duration. The window duration is the time period during which a correlation search will not create a new notable event for the same issue. For example, if the window duration is set to 1 day, and a correlation search triggers a notable event for a certain condition, such as a brute force attack from a source IP address, the correlation search will not create another notable event for the same condition within the next 24 hours. This prevents the correlation search from generating too many alerts for the same issue, which can reduce the alert fatigue and noise. The window duration can be configured in the correlation search settings, under the Throttling section12.
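
In savedsearches.conf, throttling for a correlation search is expressed with the alert.suppress settings; a minimal sketch (the grouping field and period are example values, not from this question):

    alert.suppress = 1
    alert.suppress.fields = src
    alert.suppress.period = 86400s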

40
Q

What is the bar across the bottom of any ES window?
A) The Investigator Workbench.
B) The Investigation Bar.
C) The Analyst Bar.
D) The Compliance Bar.

A

Answer: B According to the Splunk Enterprise Security documentation, the bar across the bottom of any ES window is called the Investigation Bar. The Investigation Bar is a tool that helps you create and manage investigations in ES. An investigation is a collection of related notable events, comments, and artifacts that document a security incident or a threat hunting activity. You can use the Investigation Bar to do the following tasks: Create a new investigation or open an existing one. Add notable events, comments, and artifacts to an investigation. Assign an owner and a status to an investigation. Share an investigation with other users or roles. Export an investigation as a PDF report. The Investigation Bar also provides a link to the Investigation Workbench, which is a dashboard that shows a timeline and a summary of an investigation. Therefore, the correct answer is B.

41
Q

What is the first step when preparing to install ES?
A) Install ES.
B) Determine the data sources used.
C) Determine the hardware required.
D) Determine the size and scope of installation.

A

Answer = D According to the Splunk Enterprise Security documentation, the first step when preparing to install ES is to determine the size and scope of installation. This involves estimating the amount of data that you plan to ingest, the number of users that will access ES, the number of search heads and indexers that you need, and the hardware requirements for each component. This step helps you plan your deployment architecture and ensure optimal performance and scalability of ES. Therefore, the correct answer is D.

42
Q

Which of the following is part of tuning correlation searches for a new ES installation?
A) Configuring correlation notable event index.
B) Configuring correlation permissions.
C) Configuring correlation adaptive responses.
D) Configuring correlation result storage.

A

Answer: C Correlation searches can perform adaptive response actions when they find a pattern in the data. Adaptive response actions are automated or manual responses that you can use to modify your environment based on notable events. For example, you can block an IP address, add a user to a watchlist, or send an email notification. Configuring correlation adaptive responses is part of tuning correlation searches for a new ES installation, as it allows you to customize the actions that are triggered by the correlation searches. You can enable, disable, or modify the adaptive response actions for each correlation search, or create your own custom actions.

43
Q

Which columns in the Assets lookup are used to identify an asset in an event?
A) src, dvc, dest
B) cidr, port, netbios, saml
C) ip, mac, dns, nt_host
D) host, hostname, url, address

A

Answer: C The columns in the Assets lookup that are used to identify an asset in an event are ip, mac, dns, and nt_host. These columns contain the network identifiers of the assets, such as IP address, MAC address, DNS name, and NetBIOS name. Splunk Enterprise Security uses these columns to match the asset fields with the event fields, such as src, dest, dvc, host, and hostname. When a match is found, Splunk Enterprise Security enriches the event with the asset information, such as category, priority, business unit, and location. This allows you to search and analyze events based on the asset attributes and context.
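
An abbreviated sketch of the assets lookup header (columns vary by ES version, and the sample row is fictitious):

    ip,mac,nt_host,dns,owner,priority,lat,long,city,country,bunit,category,pci_domain,is_expected,should_timesync,should_update,requires_av
    10.10.1.50,00:25:96:FF:FE:12,WEB01,web01.example.com,itops,high,,,Chicago,USA,IT,web,,true,true,true,true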

44
Q

Which of the following are data models used by ES? (Choose all that apply)
A) Web
B) Anomalies
C) Authentication
D) Network Traffic

A

Answers: A C D The data models that are used by Splunk Enterprise Security are the ones that are defined and provided by the Common Information Model add-on (Splunk_SA_CIM) and the Enterprise Security-specific data models. The Common Information Model add-on contains 12 data models that cover various domains of security data, such as Web, Authentication, Network Traffic, Change, DLP, Email, Endpoint, Intrusion Detection, Malware, Performance, Ticket Management, and Vulnerabilities. The Enterprise Security-specific data models are Anomalies, Audit, Business Context, Data Loss Prevention, Identity Management, Risk, Threat Intelligence, and Web Proxy. Therefore, the data models that are used by ES are Web, Authentication, Network Traffic, and Anomalies, among others.

45
Q

After installing Enterprise Security, the distributed configuration management tool can be used to create which app to configure indexers?
A) Splunk_DS_ForIndexers.spl
B) Splunk_ES_ForIndexers.spl
C) Splunk_SA_ForIndexers.spl
D) Splunk_TA_ForIndexers.spl

A

Answer: D

46
Q

To which of the following should the ES application be uploaded?
A) The indexer.
B) The KV Store.
C) The search head.
D) The dedicated forwarder.

A

Answer: C The ES application should be uploaded to the search head, which is the component that runs the ES user interface and executes the searches, alerts, and reports. The search head should be dedicated to ES and not run any other applications. The indexer is the component that indexes the data and stores it in buckets. The KV Store is a feature that stores and manages data as key-value pairs. The dedicated forwarder is a component that collects data from various sources and forwards it to the indexer. None of these components can run the ES application.

47
Q

Which of the following threat intelligence types can ES download? (Choose all that apply)
A) Text
B) STIX/TAXII
C) VulnScanSPL
D) Splunk Enterprise Threat Generator

A

Answer: B Splunk Enterprise Security supports downloading threat intelligence from STIX/TAXII servers. STIX is a structured language for describing cyber threat information, and TAXII is a protocol for exchanging STIX data. Splunk Enterprise Security can download STIX/TAXII feeds from any server that supports the TAXII 1.1 specification and the STIX 1.1.1 or 1.2 specification. Splunk Enterprise Security does not support downloading threat intelligence from text, VulnScanSPL, or Splunk Enterprise Threat Generator sources. References = Add threat intelligence to Splunk Enterprise Security, Upload a STIX or OpenIOC structured threat intelligence file

48
Q

What tools does the Risk Analysis dashboard provide?
A) High risk threats.
B) Notable event domains displayed by risk score.
C) A display of the highest risk assets and identities.
D) Key indicators showing the highest probability correlation searches in the environment.

A

Answer: C The Risk Analysis dashboard provides tools to analyze the risk scores and risk modifiers of various objects, such as systems, users, hashes, and network artifacts. The dashboard shows the risk score by object, the most active sources of risk, the risk score by category, the risk score over time, and the risk modifiers by object. The dashboard also allows you to create ad hoc risk entries, view the risk details of an object, and export the risk data as a CSV file. The other options, A, B, and D, are not correct. The Risk Analysis dashboard does not provide tools to show high risk threats, notable event domains, or key indicators of correlation searches. These are features of other dashboards in Splunk Enterprise Security, such as the Threat Activity dashboard, the Domain Analysis dashboard, and the Correlation Search Audit dashboard.

49
Q

Following the installation of ES, an admin granted users with the ess_user role the ability to close notable events. How would the admin restrict these users from being able to change the status of Resolved notable events to Closed?
A) From the Status Configuration window select the Resolved status. Remove ess_user from the status transitions for the Closed status.
B) From the Status Configuration window select the Closed status. Remove ess_user from the status transitions for the Resolved status.
C) In Enterprise Security, give the ess_user role the Own Notable Events permission.
D) From Splunk Access Controls, select the ess_user role and remove the edit_notable_events capability.

A

Answer: A According to the Splunk Enterprise Security documentation, the Status Configuration window allows you to customize the status values and transitions for notable events. You can define which roles can change the status of a notable event from one value to another, and which roles can view the notable events with a specific status. To restrict the users with the ess_user role from being able to change the status of Resolved notable events to Closed, you need to do the following steps: On the Enterprise Security menu bar, select Configure > Incident Management > Status Configuration. In the Status Configuration window, select the Resolved status from the list of values. In the Status Transitions section, find the row for the Closed status and click the Edit icon. In the Edit Status Transition dialog box, remove the ess_user role from the Roles field and click Save. Click Save Changes to apply the changes to the Status Configuration window. This will prevent the users with the ess_user role from changing the status of any notable event from Resolved to Closed. They will still be able to change the status of other notable events to Closed, if they have the permission to do so. Therefore, the correct answer is A. From the Status Configuration window select the Resolved status. Remove ess_user from the status transitions for the Closed status.

50
Q

Enterprise Security’s dashboards primarily pull data from what type of knowledge object?
A) Tstats
B) KV Store
C) Data models
D) Dynamic lookups

A

Answer: C Data models are the primary source of data for Enterprise Security dashboards. Data models provide a structured and consistent way of defining and retrieving data from indexes. Data models accelerate searches by using prebuilt summaries of the data. Data models also enable the use of the tstats command, which can perform statistical analysis on the data model summaries. Data models are mapped to the Common Information Model (CIM), which provides a common language for describing data across domains and technologies.

51
Q

What role should be assigned to a security team member who will be taking ownership of notable events in the incident review dashboard?
A) ess_user
B) ess_admin
C) ess_analyst
D) ess_reviewer

A

Answer: C The role that should be assigned to a security team member who will be taking ownership of notable events in the incident review dashboard is the ess_analyst role. The ess_analyst role is a predefined role in Splunk Enterprise Security that grants the user the ability to view, edit, comment, and change the status and owner of notable events. The ess_analyst role also allows the user to access the dashboards, reports, and searches related to security analysis and investigation.

52
Q

Which of the following are examples of sources for events in the endpoint security domain dashboards?
A) REST API invocations.
B) Investigation final results status.
C) Workstations, notebooks, and point-of-sale systems.
D) Lifecycle auditing of incidents, from assignment to resolution.

A

Answer: C The endpoint security domain dashboards in Splunk Enterprise Security display endpoint data relating to malware infections, patch history, system configurations, and time synchronization information. The sources for events in the endpoint security domain dashboards are the devices that are considered endpoints in your network, such as workstations, notebooks, and point-of-sale systems.

53
Q

How is it possible to navigate to the list of currently-enabled ES correlation searches?
A) Configure -> Correlation Searches -> Select Status “Enabled”
B) Settings -> Searches, Reports, and Alerts -> Filter by Name of “Correlation”
C) Configure -> Content Management -> Select Type “Correlation” and Status “Enabled”
D) Settings -> Searches, Reports, and Alerts -> Select App of “SplunkEnterpriseSecuritySuite” and filter by “- Rule”

A

Answer: C The way to navigate to the list of currently-enabled ES correlation searches is to use the Content Management page in Splunk Enterprise Security. The Content Management page allows you to view, enable, disable, and edit the content items that are included in Splunk Enterprise Security, such as correlation searches, dashboards, reports, and lookups. To access the Content Management page, you need to select Configure > Content > Content Management from the Splunk ES menu bar. Then, you can filter the content items by Type and Status to view only the correlation searches that are enabled. You can also use other filters, such as App, Domain, or Owner, to further refine your view12.

54
Q

Which tool is used to update indexers in ES?
A) Index Updater
B) Distributed Configuration Management
C) indexes.conf
D) Splunk_TA_ForIndexers.spl

A

Answer: B According to the Splunk Enterprise Security documentation, the Distributed Configuration Management tool is used to update indexers in ES. This tool allows you to create and distribute a Splunk Enterprise Security app for indexers, which contains the necessary configurations for indexers to work with ES, such as index-time field extractions, tags, and event types. The app is named Splunk_TA_ForIndexers.spl and is generated from the Distributed Configuration Management page on the ES search head. You can then deploy the app to the indexers using the deployment server or the cluster master.

55
Q

What is the maximum recommended volume of indexing per day, per indexer, for a non-cloud (on-prem) ES deployment?
A) 50 GB
B) 100 GB
C) 300 GB
D) 500 MB

A

Answer: B According to the Splunk Reference Architecture document, for ES, Splunk recommends sizing based on 80 to 100 GB ingest per indexer per day. This means an ES deployment with 2 TB daily ingest will require up to 20 indexers. This recommendation is for a non-cloud (on-prem) ES deployment. For a cloud-based ES deployment, the recommended volume of indexing per day, per indexer, is 50 GB. The other options, 300 GB and 500 MB, are not recommended by Splunk for ES deployments.

56
Q

A set of correlation searches are enabled at a new ES installation, and results are being monitored. One of the correlation searches is generating many notable events which, when evaluated, are determined to be false positives. What is a solution for this issue?
A) Suppress notable events from that correlation search.
B) Disable acceleration for the correlation search to reduce storage requirements.
C) Modify the correlation schedule and sensitivity for your site.
D) Change the correlation search's default status and severity.

A

Answer: C A correlation search is a scheduled search that runs periodically to detect patterns of interest in the data and generate notable events or other actions when the search conditions are met. A correlation search can generate false positives, which are notable events that do not represent a real security incident or threat. False positives can create noise and reduce the efficiency and accuracy of the security analysis. To reduce false positives from a correlation search, you can modify the correlation schedule and sensitivity for your site. The correlation schedule determines how often the correlation search runs and over what time range. The sensitivity determines the threshold or limit for the search conditions to trigger a notable event. By adjusting the correlation schedule and sensitivity, you can fine-tune the correlation search to match your environment and data sources, and avoid generating notable events for normal or benign activities. You can modify the correlation schedule and sensitivity for a correlation search using the Content Management page in Splunk Enterprise Security.

57
Q

Analysts have requested the ability to capture and analyze network traffic data. The administrator has researched the documentation and, based on this research, has decided to integrate the Splunk App for Stream with ES. Which dashboards will now be supported so analysts can view and analyze network Stream data?
A) Endpoint dashboards.
B) User Intelligence dashboards.
C) Protocol Intelligence dashboards.
D) Web Intelligence dashboards.

A

Answer: C According to the Splunk Enterprise Security documentation, the Protocol Intelligence dashboards are the dashboards that support the ability to view and analyze network Stream data. The Protocol Intelligence dashboards provide a summary of network traffic by protocol, such as TCP, UDP, ICMP, and others. They also show the top sources, destinations, ports, and applications for each protocol. The dashboards allow you to filter the data by time range, protocol, source, destination, port, and application. The dashboards also provide drilldown links to other dashboards, such as the Network Resolution dashboard and the Traffic Size Analysis dashboard, for further analysis. The Protocol Intelligence dashboards require the Splunk App for Stream and the Splunk Add-on for Stream to capture and parse network traffic data.
58
Which of the following actions can improve overall search performance? A) Disable indexed real-time search. B) Increase priority of all correlation searches. C) Reduce the frequency (schedule) of lower-priority correlation searches. D) Add notable event suppressions for correlation searches with high numbers of false positives.
Answers: C D Correlation searches are scheduled searches that run in Splunk Enterprise Security to detect security incidents or other notable events. They can consume a lot of resources and affect the overall search performance. To improve the search performance, you can do the following actions: Reduce the frequency (schedule) of lower-priority correlation searches. This will reduce the number of searches that run concurrently and free up some resources for other searches. You can edit the schedule of a correlation search in the Content Management page of Splunk Enterprise Security. See Edit a correlation search in Splunk Enterprise Security for more details. Add notable event suppressions for correlation searches with high numbers of false positives. This will prevent the correlation search from generating notable events that are not relevant or actionable, and reduce the load on the Notable Event Framework. You can add suppression rules for a correlation search in the Content Management page of Splunk Enterprise Security. See Suppress notable events in Splunk Enterprise Security for more details. The other two actions are not recommended, because they can have negative effects on the search performance or the security posture. Disabling indexed real-time search can cause some dashboards and panels to not display data correctly, and increasing the priority of all correlation searches can cause resource contention and degrade the performance of other searches.
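As a rough sketch of what a suppression looks like under the hood, ES stores each notable event suppression as an eventtype whose search describes the notable events to hide. The suppression name and filter below are hypothetical; suppressions are normally created from the UI rather than by editing this file.

    # eventtypes.conf -- a notable event suppression as stored by ES (hypothetical values)
    [notable_suppression-scanner_false_positives]
    # hide notables from this rule when the source is the internal vulnerability scanner
    search = source="Hypothetical - Excessive Failed Logins - Rule" src="10.1.2.15"
    disabled = 0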
59
Both “Recommended Actions” and “Adaptive Response Actions” use adaptive response. How do they differ? A) Recommended Actions show a textual description to an analyst, Adaptive Response Actions show them encoded. B) Recommended Actions show a list of Adaptive Responses to an analyst, Adaptive Response Actions run them automatically. C) Recommended Actions show a list of Adaptive Responses that have already been run, Adaptive Response Actions run them automatically. D) Recommended Actions show a list of Adaptive Responses to an analyst, Adaptive Response Actions run manually with analyst intervention.
Answer: B Recommended Actions show a list of Adaptive Responses to an analyst, which are possible actions that can be taken in response to a notable event. Adaptive Response Actions run automatically when a correlation search triggers a notable event, and can perform actions such as sending an email, adding a comment, or modifying a risk score. Recommended Actions are configured in the correlation search editor, while Adaptive Response Actions are configured in the alert actions manager.
60
Following the installation of ES, an admin gave users with the ess_user role the ability to close notable events. How would the admin restrict these users from being able to change the status of Resolved notable events to Closed? A) In Enterprise Security, give the ess_user role the Own Notable Events permission. B) From the Status Configuration window select the Closed status. Remove ess_user from the status transitions for the Resolved status. C) From the Status Configuration window select the Resolved status. Remove ess_user from the status transitions for the Closed status. D) From Splunk Access Controls, select the ess_user role and remove the edit_notable_events capability.
Answer: B The Status Configuration window in Splunk Enterprise Security allows you to manage and customize the investigation statuses and the status transitions for notable events. You can specify which roles can change the status of a notable event from one status to another. For example, you can restrict the ess_user role from changing the status of Resolved notable events to Closed by removing the ess_user role from the status transitions for the Closed status. This way, only the roles that have the permission to change the status to Closed can close the Resolved notable events.
61
What are the steps to add a new column to the Notable Event table in the Incident Review dashboard? A) Configure -> Incident Management -> Notable Event Statuses B) Configure -> Content Management -> Type: Correlation Search C) Configure -> Incident Management -> Incident Review Settings -> Event Management D) Configure -> Incident Management -> Incident Review Settings -> Table Attributes
Answer: D To add a new column to the Notable Event table in the Incident Review dashboard, you need to follow these steps: On the Splunk Enterprise Security menu bar, click Configure > Incident Management > Incident Review Settings. On the Incident Review Settings page, click the Table Attributes tab. On the Table Attributes tab, click Add New Attribute. Enter the name of the attribute that you want to add as a column, such as src or dest. The name must match the field name in the notable event data model. Enter a label for the attribute that will appear as the column header, such as Source or Destination. Enter a description for the attribute that will appear as a tooltip when you hover over the column header. Select the data type for the attribute, such as string or number. Select the visibility for the attribute, such as visible or hidden. Click Save to save the new attribute. Refresh the Incident Review dashboard to see the new column in the Notable Event table.
62
Which of the following steps will make the Threat Activity dashboard the default landing page in ES? A) From the Edit Navigation page, drag and drop the Threat Activity view to the top of the page. B) From the Preferences menu for the user, select Enterprise Security as the default application. C) From the Edit Navigation page, click the 'Set this as the default view' checkmark for Threat Activity. D) Edit the Threat Activity view settings and checkmark the Default View option.
Answer: C According to the Splunk Enterprise Security documentation, the way to make the Threat Activity dashboard the default landing page in ES is to use the Edit Navigation page and click the 'Set this as the default view' checkmark for Threat Activity. The Edit Navigation page allows you to customize the menu bar of ES and add links to custom dashboards, reports, or other views. You can also set the default view, which determines the landing page when you open the app. To set the Threat Activity dashboard as the default view, do the following: on the Enterprise Security menu bar, select Configure > General > Navigation; in the Edit Navigation page, find the Threat Activity view and click the 'Set this as the default view' checkmark next to it; then click Save to apply the changes. Under the hood, this adds the attribute default="true" to the threat_activity view in the navigation XML and removes it from any other view. See Customize the navigation bar for more details. The other options are not the correct ways to make the Threat Activity dashboard the default landing page in ES. Dragging and dropping the Threat Activity view to the top of the page only changes the order of the menu items, not the default view. Selecting Enterprise Security as the default application only changes which app opens when you log in to Splunk, not the default view within the app. Editing the Threat Activity view settings only changes the title, description, permissions, and schedule of the dashboard, not the default view.
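For reference, a trimmed sketch of the navigation XML that the 'Set this as the default view' checkmark writes for you; view names other than threat_activity are illustrative.

    <!-- data/ui/nav/default.xml for the Enterprise Security app (excerpt) -->
    <nav>
      <view name="ess_home" />
      <!-- default="true" makes Threat Activity the landing page for the app -->
      <view name="threat_activity" default="true" />
    </nav>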
63
A newly built custom dashboard needs to be available to a team of security analysts in ES. How is it possible to integrate the new dashboard? A) Add links on the ES home page to the new dashboard. B) Create a new role inherited from es_analyst, make the dashboard permissions read-only, and make this dashboard the default view for the new role. C) Set the dashboard permissions to allow access by es_analysts and use the navigation editor to add it to the menu. D) Add the dashboard to a custom add-on app and install it to ES using the Content Manager.
Answer: C According to the Splunk Enterprise Security documentation, the best way to integrate a newly built custom dashboard for a team of security analysts in ES is to set the dashboard permissions to allow access by es_analysts and use the navigation editor to add it to the menu. This will ensure that the dashboard is visible and accessible to the users with the es_analyst role, which is the default role for security analysts in ES. The navigation editor allows you to customize the menu bar of ES and add links to custom dashboards, reports, or other views. See Customize Splunk Enterprise Security dashboards to fit your use case and Customize the navigation bar for more details. The other options are not recommended, because they either do not integrate the dashboard properly or they create unnecessary complexity. Adding links on the ES home page to the new dashboard is not a good option, because it does not integrate the dashboard into the menu bar and it may clutter the home page. Creating a new role inherited from es_analyst, making the dashboard permissions read-only, and making this dashboard the default view for the new role is not a good option, because it creates a redundant role and it may confuse the users who expect to see the Security Posture dashboard as the default view. Adding the dashboard to a custom add-on app and installing it to ES using the Content Manager is not a good option, because it requires creating and maintaining a separate app and it may cause conflicts or performance issues with ES. Therefore, the correct answer is C. Set the dashboard permissions to allow access by es_analysts and use the navigation editor to add it to the menu.
64
An administrator wants to ensure that none of the ES indexed data could be compromised through tampering. What feature would satisfy this requirement? A) Index consistency. B) Data integrity control. C) Indexer acknowledgement. D) Index access permissions.
Answer: B Data integrity control is a feature of Splunk Enterprise that helps you verify the integrity of data that it indexes. When you enable data integrity control for an index, Splunk Enterprise computes hashes on every slice of data using the SHA-256 algorithm. It then stores those hashes so that you can verify the integrity of your data later. This feature prevents data tampering and ensures that the data is trustworthy and reliable.
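A minimal sketch, assuming a hypothetical index name, of how data integrity control is turned on and later verified:

    # indexes.conf -- enable SHA-256 hashing of data slices for one index
    [es_network_data]
    enableDataIntegrityControl = true

    # verify the stored hashes later from the command line:
    #   $SPLUNK_HOME/bin/splunk check-integrity -index es_network_data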
65
Which of the following would allow an add-on to be automatically imported into Splunk Enterprise Security? A) A prefix of CIM_ B) A suffix of .spl C) A prefix of TECH_ D) A prefix of Splunk_TA_
Answer: D A prefix of Splunk_TA_ would allow an add-on to be automatically imported into Splunk Enterprise Security. Splunk Enterprise Security uses a naming convention to identify and import add-ons that are compatible with the Common Information Model (CIM). Add-ons that start with Splunk_TA_ are automatically imported into Splunk Enterprise Security and mapped to the appropriate data models. Add-ons that do not follow this naming convention must be manually imported and configured in Splunk Enterprise Security. A prefix of CIM_ or TECH_ does not indicate an add-on that can be automatically imported. A suffix of .spl is the file extension for Splunk apps and add-ons, but it does not guarantee that they are compatible with Splunk Enterprise Security.
66
When investigating, what is the best way to store a newly-found IOC? A) Paste it into Notepad. B) Click the “Add IOC” button. C) Click the “Add Artifact” button. D) Add it in a text note to the investigation.
Answer: C When investigating an incident in Splunk Enterprise Security, the best way to store a newly-found IOC (indicator of compromise) is to click the “Add Artifact” button. This button allows you to add an artifact to the current investigation from any dashboard or search result. An artifact is a piece of machine data that indicates risk, such as an IP address, a domain name, a file hash, or a user name. By adding an artifact to the investigation, you can enrich the context of the incident, track the artifact across multiple data sources, and share the artifact with other analysts. You can also use the artifact to create a threat intelligence indicator, which can be used to detect and alert on future threats.
67
Who can delete an investigation? A) ess_admin users only. B) The investigation owner only. C) The investigation owner and ess-admin. D) The investigation owner and collaborators.
Answer: A According to the Splunk Enterprise Security documentation, only users with the ess_admin role or the Manage All Investigations capability can delete an investigation. The investigation owner and collaborators can edit the investigation, but not delete it. Therefore, the correct answer is A. ess_admin users only. References = Manage investigations in Splunk Enterprise Security
68
The Brute Force Access Behavior Detected correlation search is enabled, and is generating many false positives. Assuming the input data has already been validated, how can the correlation search be made less sensitive? A) Edit the search and modify the notable event status field to make the notable events less urgent. B) Edit the search, look for where or xswhere statements, and alter the threshold value being compared to make it a less common match. C) Edit the search, look for where or xswhere statements, and alter the threshold value being compared to make it a more common match. D) Modify the urgency table for this correlation search and add a new severity level to make notable events from this search less urgent.
Answer: B If the number of failed logins is greater than or equal to the threshold value, the search triggers a notable event. To make the search less sensitive, the threshold value can be increased, so that only more frequent failed logins will trigger a notable event. For example, the default threshold value is 4, which means that 4 or more failed logins within a 1-minute window will trigger a notable event. If the threshold value is changed to 10, then only 10 or more failed logins within a 1-minute window will trigger a notable event.
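A simplified sketch of the pattern the explanation describes (not the shipped Brute Force Access Behavior Detected search); raising the value in the final where clause makes the match less common and therefore less sensitive.

    | tstats summariesonly=true count AS failures
        from datamodel=Authentication
        where Authentication.action="failure"
        by Authentication.src, Authentication.dest
    | where failures >= 10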
69
Which of the following features can the Add-on Builder configure in a new add-on? A) Expire data. B) Normalize data. C) Summarize data. D) Translate data.
Answer: B The Add-on Builder can configure a new add-on to normalize data by mapping the data fields to the Common Information Model (CIM). The CIM provides a common language for describing data across domains and technologies. Normalizing data enables the data to be used by other Splunk apps, such as Splunk Enterprise Security and Splunk IT Service Intelligence. The Add-on Builder can also configure other features in a new add-on, such as collecting data from various sources, extracting fields from the data, creating alert actions and adaptive response actions, and testing and validating the add-on. However, the Add-on Builder cannot configure an add-on to expire data, summarize data, or translate data. These are not features of the Add-on Builder.
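For illustration, the sort of search-time normalization an add-on applies; the source type and vendor field names below are hypothetical.

    # props.conf -- map vendor fields to CIM field names at search time
    [hypothetical:vendor:firewall]
    FIELDALIAS-cim_src  = source_address AS src
    FIELDALIAS-cim_dest = destination_address AS dest
    # normalize the vendor's action values to the CIM action values
    EVAL-action = case(fw_action=="allow", "allowed", fw_action=="deny", "blocked", true(), "unknown")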
70
How is it possible to specify an alternate location for accelerated storage? A) Configure storage optimization settings for the index. B) Update the Home Path setting in indexes.conf C) Use the tstatsHomePath setting in props.conf D) Use the tstatsHomePath setting in indexes.conf
Answer: D The tstatsHomePath setting in indexes.conf allows you to specify an alternate location for accelerated storage. Accelerated storage is where Splunk Enterprise stores the summary data for data models that are accelerated. The summary data is used to speed up searches and reports that use the data models. By default, the accelerated storage is located in the same volume as the index that contains the events referenced by the data model. However, you can use the tstatsHomePath setting to change the location of the accelerated storage to a different volume or path. This can help you optimize the performance and disk space usage of your Splunk Enterprise deployment.
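A minimal indexes.conf sketch, assuming a hypothetical volume path, that moves the data model acceleration summaries to faster storage:

    # indexes.conf -- relocate accelerated (tstats) summaries to a dedicated volume
    [volume:dm_summaries]
    path = /fast_storage/splunk_summaries

    [default]
    # $_index_name expands to each index's own name
    tstatsHomePath = volume:dm_summaries/$_index_name/datamodel_summary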
71
What does the Security Posture dashboard display? A) Active investigations and their status. B) A high-level overview of notable events. C) Current threats being tracked by the SOC. D) A display of the status of security tools.
Answer: B The Security Posture dashboard displays a high-level overview of notable events across all domains of your deployment, suitable for display in a Security Operations Center (SOC). This dashboard shows all events from the past 24 hours, along with the trends over the past 24 hours, and provides real-time event information and updates. The dashboard consists of several panels that show key indicators, notable events by urgency, notable events over time, top notable events, and top notable event sources.
72
What does the summariesonly=true option do for a correlation search? A) Searches only accelerated data. B) Forwards summary indexes to the indexing tier. C) Uses a default summary time range. D) Searches summary indexes only.
Answer: A The summariesonly=true option is a macro that modifies a correlation search to search only accelerated data. Accelerated data is the summary data that is generated by the data model acceleration process. Data model acceleration is a feature that speeds up searches and reports that use data models by pre-computing and storing the results of the data model queries. By using the summariesonly=true option, a correlation search can run faster and more efficiently, as it does not need to scan the raw events or the index time field extractions. However, the summariesonly=true option also requires that the data model acceleration is enabled and complete for the data model that the correlation search uses. Otherwise, the correlation search may not return any results or may miss some events that are not accelerated.
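For example, a correlation-style tstats search that reads only the accelerated summaries of the Network_Traffic data model; the port filter is illustrative, and ES correlation searches typically wrap this flag in the summariesonly macro.

    | tstats summariesonly=true count
        from datamodel=Network_Traffic.All_Traffic
        where All_Traffic.dest_port=445
        by All_Traffic.src, All_Traffic.dest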
73
An administrator is asked to configure an “Nslookup” adaptive response action, so that it appears as a selectable option in the notable event’s action menu when an analyst is working in the Incident Review dashboard. What steps would the administrator take to configure this option? A) Configure -> Content Management -> Type: Correlation Search -> Notable -> Nslookup B) Configure -> Type: Correlation Search -> Notable -> Recommended Actions -> Nslookup C) Configure -> Content Management -> Type: Correlation Search -> Notable -> Next Steps -> Nslookup D) Configure -> Content Management -> Type: Correlation Search -> Notable -> Recommended Actions -> Nslookup
Answer: D To configure an “Nslookup” adaptive response action, so that it appears as a selectable option in the notable event’s action menu when an analyst is working in the Incident Review dashboard, the administrator would take the following steps: On the Splunk Enterprise Security menu bar, click Configure > Content > Content Management. Filter the content by Type: Correlation Search and select the correlation search that you want to add the Nslookup action to. Click Edit and go to the Notable tab. Under Recommended Actions, click Add New Action and select Nslookup from the drop-down menu. Enter the required fields for the Nslookup action, such as the host field, the DNS server, and the output index. Click Save to save the changes to the correlation search. The Nslookup action will now appear as an option in the notable event’s action menu on the Incident Review dashboard. References = Set up Adaptive Response actions in Splunk Enterprise Security Included adaptive response actions with Splunk Enterprise Security
74
Where is the Add-On Builder available from? A) GitHub B) SplunkBase C) www.splunk.com D) The ES installation package
Answer: B The Add-On Builder is available from SplunkBase, which is the official source of apps and add-ons for the Splunk platform. SplunkBase allows you to browse, download, and install apps and add-ons that are compatible with your Splunk deployment. You can also upload and share your own apps and add-ons with the Splunk community. The Add-On Builder is a Splunk app that helps you build and validate technology add-ons for your Splunk Enterprise deployment. Technology add-ons are specialized add-ons that help to collect, transform, and normalize data feeds from specific sources in your environment. The Add-On Builder guides you through the process of creating an add-on, following best practices and naming conventions, maintaining CIM compliance, and testing and validating the add-on. The Add-On Builder is not available from GitHub, www.splunk.com, or the ES installation package.
75
When ES content is exported, an app with a .spl extension is automatically created. What is the best practice when exporting and importing updates to ES content? A) Use new app names each time content is exported. B) Do not use the .spl extension when naming an export. C) Always include existing and new content for each export. D) Either use new app names or always include both existing and new content.
Answer: D When exporting and importing updates to ES content, you should follow the best practices described in the Splunk Enterprise Security Admin documentation. One of the best practices is to avoid overwriting existing content on the destination system. To do this, you have two options: either use new app names each time you export content, or always include both existing and new content in each export. This way, you can preserve the original content and avoid conflicts or data loss. The other options, A, B, and C, are not correct. Using new app names each time content is exported is only one of the options, not the only one. Using the .spl extension when naming an export is not a problem, as it is the default extension for Splunk apps. Including only new content for each export is not a good practice, as it may overwrite existing content on the destination system.
76
Which component normalizes events? A) SA-CIM. B) SA-Notable. C) ES application. D) Technology add-on.
Answer: D A technology add-on (TA) is a Splunk app that contains the configurations for ingesting and normalizing data from a specific data source or vendor. A TA can include sourcetype definitions, index-time and search-time field extractions, event types, tags, lookups, and other settings that help to map the data to the Splunk Common Information Model (CIM). The CIM is a set of predefined data models that provide a common standard for organizing and naming data fields across different data sources. Splunk Enterprise Security uses the CIM to enable cross-source analysis and correlation of security events.
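As a sketch of the other half of normalization, a TA also tags events so the CIM data models pick them up; the sourcetype and eventtype names below are hypothetical.

    # eventtypes.conf -- group the vendor's authentication events
    [hypothetical_vendor_authentication]
    search = sourcetype="hypothetical:vendor:auth"

    # tags.conf -- the authentication tag routes these events into the CIM Authentication data model
    [eventtype=hypothetical_vendor_authentication]
    authentication = enabled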
77
A customer site is experiencing poor performance. The UI response time is high and searches take a very long time to run. Some operations time out and there are errors in the scheduler logs, indicating too many concurrent searches are being started. 6 total correlation searches are scheduled and they have already been tuned to weed out false positives. Which of the following options is most likely to help performance? A) Change the search heads to do local indexing of summary searches. B) Add heavy forwarders between the universal forwarders and indexers so inputs can be parsed before indexing. C) Increase memory and CPUs on the search head(s) and add additional indexers. D) If indexed realtime search is enabled, disable it for the notable index.
Answer: C Scheduler errors about too many concurrent searches indicate that the search tier is resource constrained: search concurrency limits scale with the number of CPU cores on the search head, and adding indexers spreads the search workload across more peers. Since the correlation searches have already been tuned, increasing memory and CPUs on the search head(s) and adding indexers is the option most likely to improve UI response time and search run time.
78
Which of the following lookup types in Enterprise Security contains information about known hostile IP addresses? A) Security domains. B) Threat intel. C) Assets. D) Domains.
Answer: B Threat intel is the lookup type in Enterprise Security that contains information about known hostile IP addresses, as well as other indicators of compromise (IOCs) such as domains, URLs, hashes, and email addresses. Threat intel is collected from various sources, such as Splunk Enterprise Security, Splunk Add-on for Enterprise Security, Splunk Enterprise Security Content Update, and third-party threat intelligence providers. Threat intel is used to enrich events and generate notable events when a match is found between an IOC and an event field. You can view and manage the threat intel sources and lookups in Enterprise Security using the Threat Intelligence framework.
79
At what point in the ES installation process should Splunk_TA_ForIndexers.spl be deployed to the indexers? A) When adding apps to the deployment server. B) Splunk_TA_ForIndexers.spl is installed first. C) After installing ES on the search head(s) and running the distributed configuration management tool. D) Splunk_TA_ForIndexers.spl is only installed on indexer cluster sites using the cluster master and the splunk apply cluster-bundle command.
Answer: C The point in the ES installation process when Splunk_TA_ForIndexers.spl should be deployed to the indexers is after installing ES on the search head(s) and running the distributed configuration management tool. Splunk_TA_ForIndexers.spl is a Splunk add-on that contains the index-time configurations for the data models used by ES. It is required to be installed on all indexers that receive data from ES data sources, such as network devices, endpoints, threat intelligence feeds, and so on. The recommended way to deploy Splunk_TA_ForIndexers.spl to the indexers is to use the distributed configuration management tool in ES, which is a feature that allows you to automatically distribute configuration files, such as indexes.conf, props.conf, and transforms.conf, to your Splunk platform instances. To use the distributed configuration management tool, you need to first install ES on the search head(s) and then run the tool from the ES menu bar. The tool will prompt you to select the configuration files that you want to deploy, including Splunk_TA_ForIndexers.spl, and the instances that you want to deploy them to, such as indexers, forwarders, or other search heads. The tool will also validate the configuration files and restart the instances as needed.
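As an illustrative sketch for a clustered environment (paths and app directory names vary by Splunk version and deployment), the generated package can be unpacked on the cluster master and pushed to the peers:

    # on the cluster master, after copying the package from the ES search head
    tar -xzf Splunk_TA_ForIndexers.spl -C $SPLUNK_HOME/etc/master-apps/
    # distribute the configuration bundle to all indexer peers
    $SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes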
80
What do threat gen searches produce? A) Threat Intel in KV Store collections. B) Threat correlation searches. C) Threat notables in the notable index. D) Events in the threat activity index.
Answer: D Threat gen searches are part of the threat intelligence framework in Splunk Enterprise Security. They compare events from your data sources against the threat intelligence lookups and write any matches to the threat activity index as threat match events. You can use the Threat Activity dashboard to view and analyze the events in the threat activity index. The other options are not what threat gen searches produce. Threat intel in KV Store collections is populated by the threat intelligence inputs that download and parse threat feeds, not by threat gen searches. Threat gen searches are not correlation searches themselves, although correlation searches such as Threat Activity Detected use their output. They also do not write notables to the notable index directly; notable events are created by correlation searches that run against the threat activity index. Therefore, the correct answer is D. Events in the threat activity index.
81
When using distributed configuration management to create the Splunk_TA_ForIndexers package, which three files can be included? A) indexes.conf, props.conf, transforms.conf B) web.conf, props.conf, transforms.conf C) inputs.conf, props.conf, transforms.conf D) eventtypes.conf, indexes.conf, tags.conf
Answer: A According to the Splunk Enterprise Security documentation, when using the Distributed Configuration Management tool to create the Splunk_TA_ForIndexers package, you can include the following three files: indexes.conf: This file defines the indexes that are used by Splunk Enterprise Security, such as main, summary, and notable. It also specifies the index settings, such as retention policy, replication factor, and search factor. See indexes.conf for more details. props.conf: This file defines the properties of the data sources that are ingested by Splunk Enterprise Security, such as sourcetype, timestamp, line breaking, and field extraction. It also specifies the data model mappings, tags, and event types for the data sources. See props.conf for more details. transforms.conf: This file defines the transformations that are applied to the data sources that are ingested by Splunk Enterprise Security, such as lookup definitions, field aliases, field formats, and calculated fields. It also specifies the regex patterns, delimiters, and formats for the transformations. See transforms.conf for more details.
82
Which of the following is a way to test for a properly normalized data model? A) Use Audit -> Normalization Audit and check the Errors panel. B) Run a | datamodel search, compare results to the CIM documentation for the datamodel. C) Run a | loadjob search, look at tag values and compare them to known tags based on the encoding. D) Run a | datamodel search and compare the results to the list of data models in the ES normalization guide.
Answer: B One way to test for a properly normalized data model is to run a | datamodel search against the data model or a dataset within the data model and compare the results to the CIM documentation for the datamodel. The CIM documentation provides the expected fields, tags, and constraints for each data model and dataset, as well as examples of normalized events. By running a | datamodel search, you can examine the JSON output of the data model or dataset and verify that it matches the CIM specifications. You can also use the search mode option of the | datamodel command to return either results or a search string that you can further inspect or modify.
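For example, to inspect normalized Authentication events and compare the fields and tags with the CIM documentation (the field list shown is illustrative):

    | datamodel Authentication Authentication search
    | head 5
    | table _time sourcetype Authentication.action Authentication.src Authentication.dest Authentication.user tag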
83
Adaptive response action history is stored in which index? A) cim_modactions B) modular_history C) cim_adaptiveactions D) modular_action_history
Answer: A Adaptive response action history is stored in the cim_modactions index. This index contains the events generated by the adaptive response actions that are triggered by correlation searches or run manually from Incident Review. You can use this index to search, audit, and report on the adaptive response actions that have been executed in your environment. You can also view the adaptive response action history on the Adaptive Response dashboard in Enterprise Security.
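For example, a quick check of which adaptive response action logs have landed in that index; field names beyond the defaults vary by action, so this sketch sticks to default fields.

    index=cim_modactions
    | stats count by sourcetype, source
    | sort - count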
84
What is the main purpose of the Dashboard Requirements Matrix document? A) Identifies on which data model(s) each dashboard depends. B) Provides instructions for customizing each dashboard for local data models. C) Identifies the searches used by the dashboards. D) Identifies which data model(s) depend on each dashboard.
Answer: A The main purpose of the Dashboard Requirements Matrix document is to identify on which data model(s) each dashboard in Splunk Enterprise Security depends. The Dashboard Requirements Matrix document is a web page that lists all the dashboards in Splunk Enterprise Security and the data model datasets that populate them. The data model datasets are linked to the Common Information Model (CIM) documentation, which describes the tags, field names, and field values that the events must use to be CIM-compliant. The Dashboard Requirements Matrix document helps you to determine which data models you need to enable and accelerate for your Splunk Enterprise Security deployment, based on the dashboards you plan to use.
85
Where is it possible to export content, such as correlation searches, from ES? A) Content exporter B) Configure -> Content Management C) Export content dashboard D) Settings Menu -> ES -> Export
Answer: B You can export content from Splunk Enterprise Security as an app from the Content Management page. Use the export option to share custom content with other ES instances, such as migrating customized searches from a development or testing environment into production. The Content Management page allows you to view, edit, enable, disable, and export content in Splunk Enterprise Security. You can also import content from other ES instances or from the Splunk Security Essentials app.
86
Where are attachments to investigations stored? A) KV Store B) notable index C) attachments.csv lookup D) /etc/apps/SA-Investigations/default/ui/views/attachments
Answer: A Attachments to investigations are stored in a KV Store collection named investigation_attachment. KV Store is a feature that stores and manages data as key-value pairs. Splunk Enterprise Security uses KV Store to store investigation information in several collections, such as investigation, investigation_event, investigation_lead, and investigation_attachment. You can view or modify the KV Store collections using the KV Store API endpoint.
87
The Remote Access panel within the User Activity dashboard is not populating with the most recent hour of data. What data model should be checked for potential errors such as skipped searches? A) Web B) Risk C) Performance D) Authentication
Answer: D The Remote Access panel within the User Activity dashboard is based on the Authentication data model, which contains information about authentication events from various sources, such as VPN, SSH, RDP, and others. The Authentication data model is accelerated by default, which means that it generates summary data to speed up searches. However, if the summary data is not up to date, the dashboard panel may not show the most recent data. This can happen if the data model acceleration search is skipped, disabled, or encountering errors. To check the status of the data model acceleration, you can use the Data Model Audit dashboard, which shows the acceleration status, summary lag, and any skipped or failing acceleration searches for each data model.
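A quick sketch to measure how far behind the Authentication summaries are; a lag much larger than the acceleration schedule suggests skipped or failing acceleration searches.

    | tstats summariesonly=true max(_time) AS latest_summarized
        from datamodel=Authentication
    | eval lag_minutes = round((now() - latest_summarized) / 60, 1)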
88
What is an example of an ES asset? A) MAC address B) User name C) Server D) People
Answer: C According to the Splunk Enterprise Security documentation, an asset is a physical or logical device that is part of your network infrastructure, such as a server, a workstation, a router, or a firewall. An asset can have various attributes, such as IP address, MAC address, DNS name, NT host name, priority, business unit, owner, and others. Splunk Enterprise Security uses asset data to enrich and correlate security events and provide context for analysis. You can manage asset data using the Asset and Identity Management page in Splunk Enterprise Security. See Manage assets and identities in Splunk Enterprise Security for more details. The other options are not examples of ES assets, but they may be related to other types of data. A MAC address is an attribute of an asset, not an asset itself. A user name is an example of an identity, which is a person or group that is associated with an asset or an event. Splunk Enterprise Security uses identity data to enrich and correlate security events and provide context for analysis. You can manage identity data using the Asset and Identity Management page in Splunk Enterprise Security. People are not assets either; in Splunk Enterprise Security, people are represented as identities, with attributes such as user names, email addresses, and phone numbers, in the identity lookup rather than in the asset lookup.
89
If a username does not match the ‘identity’ column in the identities list, which column is checked next? A) Email. B) Nickname C) IP address. D) Combination of Last Name, First Name.
Answer: A If a username does not match the ‘identity’ column in the identities list, Splunk Enterprise Security checks the ‘email’ column next. The ‘email’ column contains the email address associated with the identity. If the email address matches the username, Splunk Enterprise Security assigns the identity to the user. If the email address does not match, Splunk Enterprise Security checks the ‘nickname’ column next, followed by the ‘ip’ column, and finally the ‘last_name’ and ‘first_name’ columns.
90
Which of the following ES features would a security analyst use while investigating a network anomaly notable? A) Correlation editor. B) Key indicator search. C) Threat download dashboard. D) Protocol intelligence dashboard.
Answer: D A network anomaly notable is a type of notable event that indicates a possible network attack or misconfiguration. It is generated by the Network - Anomaly Detection - Rule correlation search. While investigating this kind of notable, a security analyst would use the Protocol Intelligence dashboards, which present captured network traffic by protocol, source, destination, port, and application, to examine the traffic associated with the anomaly in more detail. The correlation editor is an administrative tool for building searches, key indicator searches drive high-level dashboard metrics, and the threat download dashboard relates to threat intelligence downloads, so none of these is used to drill into the network traffic behind a specific notable.
91
An administrator is provisioning one search head prior to installing ES. What are the reference minimum requirements for OS, CPU, and RAM for that machine? A) OS: 32 bit, RAM: 16 GB, CPU: 12 cores B) OS: 64 bit, RAM: 32 GB, CPU: 12 cores C) OS: 64 bit, RAM: 12 GB, CPU: 16 cores D) OS: 64 bit, RAM: 32 GB, CPU: 16 cores
Answer: D According to the Splunk Enterprise Security Admin documentation, the minimum hardware requirements for a dedicated search head running ES are as follows: OS: 64 bit, RAM: 32 GB, CPU: 16 cores. These requirements are based on the assumption that the search head is not performing any other tasks besides running ES. The documentation also recommends having at least 500 GB of disk space for the search head.
92
A security manager has been working with the executive team on long-range security goals. A primary goal for the team is to improve managing user risk in the organization. Which of the following ES features can help identify users accessing inappropriate web sites? A) Configuring the identities lookup with user details to enrich notable event information for forensic analysis. B) Make sure the Authentication data model contains up-to-date events and is properly accelerated. C) Configuring user and website watchlists so the User Activity dashboard will highlight unwanted user actions. D) Use the Access Anomalies dashboard to identify unusual protocols being used to access corporate sites.
Answer: C User and website watchlists are lists of users or websites that you want to monitor for suspicious or unwanted activity. You can configure user and website watchlists in Splunk Enterprise Security to generate notable events when a user on the watchlist accesses a website on the watchlist. The User Activity dashboard displays the notable events generated by the watchlists, as well as other user activity information such as top users, top websites, and top categories. Configuring user and website watchlists can therefore help identify users accessing inappropriate web sites and highlight that activity for review.
93
After managing source types and extracting fields, which key step comes next in the Add-On Builder? A) Validate and package B) Configure data collection. C) Create alert actions. D) Map to data models.
Answer: D According to the Splunk Add-on Builder documentation, after managing source types and extracting fields, the key step that comes next in the Add-on Builder is to map to data models. Data models are predefined schemas that provide a common standard for organizing and naming data fields across different data sources. Splunk Enterprise Security uses the Splunk Common Information Model (CIM) to enable cross-source analysis and correlation of security events. The Add-on Builder helps you to map your data fields to the CIM data models, such as Authentication, Change, Endpoint, and others.
94
ES needs to be installed on a search head with which of the following options? A) No other apps. B) Any other apps installed. C) All apps removed except for TA-*. D) Only default built-in and CIM-compliant apps.
Answer: A Splunk Enterprise Security requires a dedicated search head with no other apps installed. This is because ES is a resource-intensive application that may cause performance issues and conflicts with other apps. Installing ES on a search head with other apps may also result in data loss or corruption. Therefore, it is recommended to install ES on a clean search head with only the default built-in apps and the Common Information Model (CIM) app. The CIM app is a prerequisite for ES
95
Glass tables can display static images and text, the results of ad-hoc searches, and which of the following objects? A) Lookup searches. B) Summarized data. C) Security metrics. D) Metrics store searches.
Answer: C Glass tables can display static images and text, the results of ad-hoc searches, and security metrics. Security metrics are visualizations that show the values of KPIs, service health scores, or notable events. You can add security metrics to a glass table by using the Security Metrics menu in the glass table editor. You can also configure the appearance, behavior, and drilldown options of the security metrics. Glass tables cannot display lookup searches, summarized data, or metrics store searches directly, although the ad-hoc searches behind a glass table widget can draw on lookups or summary data.
96
Which of the following actions would not reduce the number of false positives from a correlation search? A) Reducing the severity. B) Removing throttling fields. C) Increasing the throttling window. D) Increasing threshold sensitivity.
Answer: B Removing throttling fields would not reduce the number of false positives from a correlation search. Throttling fields are the fields that are used to group events and suppress duplicate alerts. For example, if you use src and dest as throttling fields, then the correlation search will only generate one alert per unique pair of src and dest values within the throttling window. This can help reduce the number of false positives by avoiding repeated alerts for the same issue.
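For reference, a hypothetical excerpt of the throttling settings as they appear in a correlation search's savedsearches.conf stanza; these values are normally set in the correlation search editor as the window duration and fields to group by.

    # savedsearches.conf -- throttle duplicate alerts from one correlation search
    alert.suppress = 1
    # one alert per unique src/dest pair ...
    alert.suppress.fields = src,dest
    # ... within a 24-hour window
    alert.suppress.period = 86400s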
97
What are adaptive responses triggered by? A) By correlation searches and users on the incident review dashboard. B) By correlation searches and custom tech add-ons. C) By correlation searches and users on the threat analysis dashboard. D) By custom tech add-ons and users on the risk analysis dashboard.
Answer: A Adaptive responses are actions that can be performed in response to notable events or other security incidents. Adaptive responses can be triggered by correlation searches and users on the incident review dashboard. Correlation searches are scheduled searches that run periodically to detect patterns of interest in the data and generate notable events or other actions when the search conditions are met. Users can configure correlation searches to trigger adaptive responses automatically when a notable event is created.
98
How is notable event urgency calculated? A) Asset priority and threat weight. B) Alert severity found by the correlation search. C) Asset or identity risk and severity found by the correlation search. D) Severity set by the correlation search and priority assigned to the associated asset or identity.
Answer: D Notable event urgency is calculated by combining the severity set by the correlation search and the priority assigned to the associated asset or identity. The severity is a value that indicates the impact or importance of the event, such as low, medium, high, or critical. The priority is a value that indicates the significance or sensitivity of the asset or identity involved in the event, such as unknown, low, medium, high, or critical. The urgency is a value that indicates the level of attention or action required for the event, such as informational, low, medium, high, or critical.
99