Hi, I want to display time on my dashboard, but all I see is two fields with data. Any help with the search to populate the rest of the fields would be appreciated. I have attached my dashboard. My search looks like this:

index=a sourcetype=b earliest=-1d
    [| inputlookup M003_siem_ass_list where FMA_id=*OS-001*
     | stats values(ass) as search
     | eval search=mvjoin(search, " OR ")]
| fields ip FMA_id _time d_role
| stats latest(_time) as _time values(*) by ip
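For reference, the subsearch above is meant to turn the lookup's multivalue field into an OR clause. What mvjoin does can be sketched in Python (the asset values below are made up, purely for illustration):

```python
# Sketch of mvjoin(search, " OR "): join a multivalue field's entries
# into a single OR clause. The IPs here are hypothetical placeholders.
assets = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

or_clause = " OR ".join(assets)
print(or_clause)  # 10.0.0.1 OR 10.0.0.2 OR 10.0.0.3
```

Note that the subsearch result still has to be returned under the right field name for the outer search to use it, which is why the eval must write back to the field the subsearch emits.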
Hi Splunkers, I have a question about users that run scheduled searches. I know very well that if a user owns a knowledge object, such as a correlation search, and that user is deleted/disabled, we can run into problems like orphaned objects. So the best practice is to create a service user and assign the knowledge objects to it. Fine. My question is: suppose we have many scheduled correlation searches, say between 100 and 200. Is assigning all of those searches to a single service user fine, or is it better to create multiple service users to avoid performance issues? I ask because of a case some colleagues shared with me once: due to problems with search lag/skipped searches, in addition to fixing the search scheduler, the people involved split ownership across multiple users. Is that useful or not?
Hi Experts, someone installed the ESCU app directly on the search head cluster members, and now I am upgrading this app to a newer release.

Question: since this app was not installed from the deployer, but I want to upgrade it via the deployer, what is the best practice and method to achieve this? Here is my plan; please correct me if I am thinking about this wrong.

Step 1) Copy the installed app folder from one of the SHC members to the deployer under /etc/apps, so that it is installed on the deployer, and then manually upgrade it there using the deployer GUI.
Step 2) Once upgraded, copy the upgraded app from the /etc/apps folder to the /etc/shcluster/apps folder.
Step 3) Run "apply shcluster-bundle" on the deployer to push the upgraded app to the SHC members.

Do you think the above is the right approach? If not, what else can I do?
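The staging step (Step 2) can be sketched with throwaway directories; this is only an illustration, assuming the ESCU app folder is named DA-ESS-ContentUpdate and using a temp directory in place of the real $SPLUNK_HOME on the deployer:

```python
# Sketch of Step 2 (copy upgraded app into the shcluster staging area).
# Paths and the app folder name are assumptions for illustration.
import shutil
import tempfile
from pathlib import Path

splunk_home = Path(tempfile.mkdtemp())  # stands in for $SPLUNK_HOME
app = splunk_home / "etc" / "apps" / "DA-ESS-ContentUpdate"
staging = splunk_home / "etc" / "shcluster" / "apps"
app.mkdir(parents=True)
staging.mkdir(parents=True)
(app / "app.conf").write_text("[launcher]\nversion = 0.0.0\n")  # dummy content

# Copy the upgraded app from etc/apps into etc/shcluster/apps
shutil.copytree(app, staging / app.name)

# Step 3 would then be run on the deployer (not runnable here):
#   splunk apply shcluster-bundle -target https://<member>:8089
print((staging / "DA-ESS-ContentUpdate" / "app.conf").exists())  # True
```

The key point the sketch illustrates: the deployer pushes whatever sits under etc/shcluster/apps, so the upgraded copy in etc/apps must be staged there before the bundle push.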
Hello Splunk Community, I'm encountering an issue with configuration replication in Splunk Cloud Victoria Experience when using search head clusters behind a load balancer. Here's the scenario: I have developed a private custom search command app that requires some user configuration. For this purpose, I've added a custom config file in the /etc/apps/<appname>/default directory. Additionally, I've configured app.conf as follows:

[triggers]
reload.<custom_conf> = simple

[shclustering]
deployer_push_mode = full

I've also included a server.conf inside etc/apps/<appname>/default with the following configuration:

[shclustering]
conf_replication_include.<custom_conf_name> = true

When attempting to install this private app using the install_app_from_file option in a Splunk Cloud Victoria Experience with search head clusters behind a load balancer, it appears that the app configuration is not being replicated across the search heads. Could someone please help me identify whether there's anything I'm missing or doing incorrectly? Thank you. Avnish
Hello. In Splunk dashboards, visualization charts that display data have their background color set to white (#FFFFFF) by default, and it turns black if I change the theme from Light to Dark. I want to find a way to get the same color behavior for the Rectangle element. Currently its default fill color is grey (#c3cbd4), and it turns dark grey when I change the theme from light to dark. If I change the rectangle's background color in settings, it stops changing color when I switch themes. How can I make the Rectangle element in a Splunk dashboard white for the Light theme and black for the Dark theme? Thanks!
Guide to Monitoring AWS Athena Performance with AppDynamics

Amazon Athena is a serverless, interactive analytics service that provides a simplified and flexible way to analyze petabytes of data where it lives, and it is an important tool for analyzing existing data. AppDynamics is one of the best monitoring solutions, giving you insight into applications and infrastructure. Let's dive into our setup.

Prerequisites

- Machine Agent installed on any Linux box
- The Linux box should have permission to fetch CloudWatch metrics

Setting up Amazon Athena

If you already have Athena set up, scroll down. If not, follow the steps below:

Create an S3 bucket where Athena query results will be saved. I created one called "athena-query-result-abhi".

Set up the query result location: click "Edit settings" in the Athena console, enter s3://athena-query-result-abhi as the query result location, and save the settings.

Enable Amazon Athena to publish query metrics to AWS CloudWatch: edit the workgroup your Amazon Athena is part of and select "Publish query metrics to AWS CloudWatch".

Running the Sample Queries

In the Athena console, run the following queries to create a sample database and table.

Create the database:

CREATE DATABASE sampledb;

Create a sample table with some inline data:

CREATE TABLE sampledb.sampletable AS
SELECT 'value1' AS col1, 'value2' AS col2, 'value3' AS col3
UNION ALL
SELECT 'value4' AS col1, 'value5' AS col2, 'value6' AS col3;

Run a sample query to generate activity:

SELECT * FROM sampledb.sampletable LIMIT 10;

Then execute the following queries to generate sufficient activity and metrics:

SELECT * FROM sampledb.sampletable LIMIT 10;
SELECT col1, COUNT(*) FROM sampledb.sampletable GROUP BY col1;
SELECT COUNT(*) FROM sampledb.sampletable WHERE col2 = 'value2';
SELECT col1, col2 FROM sampledb.sampletable WHERE col3 = 'value3';

Great work, your Athena is all set up.

Machine Agent

Now, let's work
on the machine agent side.

SSH into the box where your Machine Agent is running. In the Machine Agent home folder, go to the monitors folder and create a directory called Athena. In my case, MA_HOME = /opt/appdynamics/ma:

cd /opt/appdynamics/ma/monitors
mkdir Athena

Inside the Athena folder, create a file called script.sh with the content below. NOTE: please edit REGION, and START_TIME/END_TIME, if required.

#!/bin/bash

# List of all metrics you want to fetch for Athena
declare -a METRICS=("DPUAllocated" "DPUConsumed" "DPUCount" "EngineExecutionTime" "ProcessedBytes" "QueryPlanningTime" "QueryQueueTime" "ServicePreProcessingTime" "ServiceProcessingTime" "TotalExecutionTime")

# Define the time period (in ISO8601 format)
START_TIME=$(date --date='48 hours ago' --utc +%Y-%m-%dT%H:%M:%SZ)
END_TIME=$(date --utc +%Y-%m-%dT%H:%M:%SZ)

# AWS region
REGION="us-east-1"

# Fetch all workgroups
WORKGROUPS=$(aws athena list-work-groups --region $REGION --query 'WorkGroups[*].Name' --output text)

# Loop through each workgroup and fetch the metrics
for WORKGROUP in $WORKGROUPS; do
  # Loop through each metric and fetch the data
  for METRIC_NAME in "${METRICS[@]}"; do
    # Fetch the metric data using the AWS CLI
    METRIC_VALUE=$(aws cloudwatch get-metric-statistics --region $REGION --namespace AWS/Athena \
      --metric-name $METRIC_NAME \
      --dimensions Name=QueryState,Value=SUCCEEDED Name=QueryType,Value=DML Name=WorkGroup,Value=$WORKGROUP \
      --start-time $START_TIME \
      --end-time $END_TIME \
      --period 300 \
      --statistics Sum \
      --query 'Datapoints | sort_by(@, &Timestamp)[-1].Sum' \
      --output text)

    # If the metric value is empty, set it to 0; otherwise format it as an integer
    if [ -z "$METRIC_VALUE" ]; then
      METRIC_VALUE="0"
    else
      METRIC_VALUE=$(echo $METRIC_VALUE | awk '{if($1+0==$1){print int($1)}else{print "0"}}')
    fi

    # Echo the metric in the format the Machine Agent expects
    echo "name=Custom Metrics|Athena|$WORKGROUP|$METRIC_NAME,value=$METRIC_VALUE"
  done
done

Create another file called monitor.xml with the below content:
<monitor>
    <name>Athena monitoring</name>
    <type>managed</type>
    <description>Athena monitoring</description>
    <monitor-configuration>
    </monitor-configuration>
    <monitor-run-task>
        <execution-style>periodic</execution-style>
        <name>Run</name>
        <type>executable</type>
        <task-arguments>
        </task-arguments>
        <executable-task>
            <type>file</type>
            <file>script.sh</file>
        </executable-task>
    </monitor-run-task>
</monitor>

Restart your Machine Agent. Once that is done, you will be able to see your Athena metrics in the AppDynamics Machine Agent's metric browser, as seen below.
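As an aside, the value normalization at the end of script.sh (an empty CloudWatch response becomes 0; numeric values are truncated to integers, anything else falls back to 0) can be sketched in Python for clarity. The workgroup and metric names below are just placeholders:

```python
def format_metric_line(workgroup: str, metric_name: str, raw_value: str) -> str:
    """Mirror the script's normalization: empty or non-numeric -> 0,
    numeric -> truncated integer, then the Machine Agent line format."""
    try:
        value = int(float(raw_value)) if raw_value else 0
    except ValueError:
        value = 0
    return f"name=Custom Metrics|Athena|{workgroup}|{metric_name},value={value}"

print(format_metric_line("primary", "ProcessedBytes", "1234.56"))
# name=Custom Metrics|Athena|primary|ProcessedBytes,value=1234
print(format_metric_line("primary", "DPUConsumed", ""))
# name=Custom Metrics|Athena|primary|DPUConsumed,value=0
```

This matters because the Machine Agent expects one `name=...,value=<integer>` line per metric on stdout; a blank or non-numeric value would otherwise break the metric path.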
I have a dashboard that doesn't exist on the internet. It shows user session activity for Windows, gateway, and Linux, and it also shows activity via interactive actions, displaying privilege escalation and running processes. This was created to answer external audit asks based on NIST 800-53. Would this dashboard qualify for the contest, or for any of the super sessions at .conf on the main conference floor? (I had to mask environment information.) Here is the Windows audit GPO required to monitor the sessions correctly.
Hello! I am having an issue getting annotations to work within a Dashboard Studio column chart. I have tried a bunch of different ways, but it isn't cooperating. The chart is just System_Name on the X axis and Risk_Score on the Y axis. I'd like to be able to highlight where the System_Name in question shows up on the chart, as the annotation examples in the documentation demonstrate. My current code for the chart is as follows. Does anyone have any suggestions as to what I'm doing wrong here?

Chart itself:

{
    "type": "splunk.column",
    "options": {
        "seriesColorsByField": {},
        "annotationColor": "> annotation | seriesByIndex('2')",
        "annotationLabel": "> annotation | seriesByIndex('1')",
        "annotationX": "> annotation | seriesByIndex('0')",
        "legendDisplay": "off"
    },
    "dataSources": {
        "primary": "ds_abUJLKDj",
        "annotation": "ds_YPQ3EYqR"
    },
    "showProgressBar": false,
    "showLastUpdated": false,
    "context": {}
}

Searches:

"ds_abUJLKDj": {
    "type": "ds.search",
    "options": {
        "query": "`index` \n| stats latest(Risk_Score) AS Risk_Score by System_Name\n| eval Risk_Score=round(Risk_Score, 2)\n| sort Risk_Score"
    },
    "name": "risk_score_chart"
},
"ds_YPQ3EYqR": {
    "type": "ds.search",
    "options": {
        "query": "`index` \n| stats latest(Risk_Score) AS Risk_Score by System_Name\n| eval Risk_Score=round(Risk_Score, 2), color=\"#f44336\", Annotation_Label= (\"The risk score for $system_name$ is \" + Risk_Score) \n| sort Risk_Score\n| where System_Name = \"$system_name$\"\n| table System_Name, Annotation_Label, color"
    },
    "name": "risk_score_chart_annotation"
}
Hi all, in the past I used a CLI command to disable the indicators feature. Do you know how I can enable it again?
Morning, Splunkers. I've got a dashboard that gets some of its input from an external link. The input that comes in determines which system is displayed by the dashboard, with different settings applied through a <change> block for each, and then the necessary information is shown in a line graph. That part is working perfectly, but what I'm trying to do is set the color of the line graph based on the system chosen, and I'm trying to keep it simple for future edits. I've set the colors I'm currently using in the <init> section as follows:

<init>
  <set token="red">0xFF3333</set>
  <set token="purple">0x8833FF</set>
  <set token="green">0x00FF00</set>
</init>

The system selection looks like this:

<input token="system" depends="$NotDisplayed$">
  <change>
    <condition value="System-A">
      <set token="index_filter">index_A</set>
      <set token="display_name">System-A</set>
      <set token="color">$purple$</set>
    </condition>
    <condition value="System-B">
      <set token="index_filter">index_B</set>
      <set token="display_name">System-B</set>
      <set token="color">$green$</set>
    </condition>
    <condition value="System-C">
      <set token="index_filter">index_C</set>
      <set token="display_name">System-C</set>
      <set token="color">$red$</set>
    </condition>
  </change>
</input>

I now have a single query window putting up a line graph with the necessary information brought in from the external link. Like I said above, that part works perfectly, but what DOESN'T work is the color. Here's what my option field currently looks like:

<option name="charting.fieldColors">{"MyField":$color$}</option>

The idea here is that if I add future systems, I don't have to keep punching in hex codes for colors; I just enter a color-name token. Unfortunately, what ends up happening is that the line graph color is black, no matter what color I use. If I take the $color$ token out of the code and put in the hex code directly, it works fine. It also works if I put the hex code directly in the system selection instead of the color-name token.

Is there a trick to having a token reference another token in a dashboard? Or is this one of those "quit being fancy and do it the hard way" type of things? Any help will be appreciated. Running Splunk 8.2.4, in case it matters.
Hello, after upgrading from Classic to Victoria Experience on our Splunk Cloud stack, we have encountered issues retrieving data from AWS SQS-based S3 inputs. The inputs remained after the migration, but for some of them the SQS queue name seems to be missing. When we try to configure these inputs, we immediately receive a 404 error in python.log. Please see the screenshot below for reference. Furthermore, the error message indicates that the SQS queue may not be present in the given region; however, we have confirmed that the queue does exist in the specified region. Has anyone else experienced this issue and can offer assistance? Thank you.
Has anyone noticed that push notifications through the Splunk Mobile app have stopped working recently? We are using Splunk on-prem, with Splunk Secure Gateway set up and prod.spacebridge.spl.mobi set as the gateway, but I noticed the notifications stopped appearing on my home screen and when my iPhone was locked. Other colleagues using different devices are complaining of the same issue.

I can't remember the exact date, but it may have been around the 3rd of May. No changes to our config have been made, but I'd be interested to know if anyone else is having this issue.
Hi, we have Splunk (v9.2) in a clustered environment that manages tons of different logs from a complex and varied network. A few departments each have a Sophos firewall that sends logs through syslog (we would have used a UF, but we couldn't, because IT security can't touch those servers). In order to split the inputs based on source type, we set those Sophos logs to be sent to port 513 of one of our HFs and created an app to parse them using regexes. The goal was to reduce the logs and save license usage. So far, so good; everything was working as intended. Until...

As it turns out, every night, exactly at midnight, the Heavy Forwarder stops collecting from those sources (only those) and nothing is indexed until someone restarts the splunkd service (which could potentially be never), which gives new life to the collector. Here's the odd part: during the no-collection time, tcpdump shows syslog data arriving on port 513, so the firewall never stops sending data to the HF, yet no logs are indexed. Only after a restart do we see logs being indexed again. The Heavy Forwarder in question sits on top of an Ubuntu 22 LTS minimized server edition. Here are the app configuration files:

inputs.conf:

[udp:513]
sourcetype = syslog
no_appending_timestamp = true
index = generic_fw

props.conf:

[source::udp:513]
TRANSFORMS-null = nullQ
TRANSFORMS-soph = sophos_q_fw, sophos_w_fw, null_ip

transforms.conf:

[sophos_q_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = queue
FORMAT = indexQueue

[sophos_w_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = _MetaData:Index
FORMAT = custom_sophos

[null_ip]
REGEX = dstip=\"192\.168\.1\.122\"
DEST_KEY = queue
FORMAT = nullQueue

We didn't see anything out of the ordinary in the processes that start at midnight on the HF. At this point we have no clue about what's happening.
How can we troubleshoot this situation? Thanks
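As a side note, the routing regex from transforms.conf can be sanity-checked in isolation; a quick Python sketch (the sample event below is made up — the real hostname, rule IDs, and addresses will differ):

```python
import re

# Pattern copied from transforms.conf ([sophos_q_fw]); the sample Sophos
# ulogd event below is hypothetical, for illustration only.
pattern = r'hostname\sulogd\[\d+\]\:.*action="accept".*initf="eth0".*'

sample = ('May 20 00:00:01 hostname ulogd[1234]: id="2001" severity="info" '
          'sub="packetfilter" name="Packet accepted" action="accept" '
          'fwrule="6" initf="eth0" srcip="10.0.0.5" dstip="192.168.1.10"')

match = re.search(pattern, sample)
print(bool(match))  # True
```

This only verifies the regex logic itself; it does not reproduce the midnight stall, which looks more like an input/pipeline issue than a parsing one.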
I'm trying to run personal scripts in Splunk from a dashboard. I want the dashboard to call a script by user input and then output the script to a table. I'm testing the ability with a Python script that calls a PowerShell script, returns the data to the Python script, and then returns the data to the Splunk dashboard. This is what I have so far:  Test_PowerShell.py Python Script:    import splunk.Intersplunk import sys import subprocess results,unused1,unused2 = splunk.Intersplunk.getOrganizedResults() # Define the path to the PowerShell script ps_script_path = "./Test.ps1" # Define the argument to pass to the PowerShell script argument = sys.argv[1] # Execute the PowerShell script with the argument results = subprocess.run(['powershell.exe', '-File', ps_script_path, argument], capture_output=True, text=True) splunk.Intersplunk.outputResults(results)   Page XML:    <form version="1.1" theme="dark"> <label>Compliance TEST</label> <description>TESTING</description> <fieldset submitButton="false" autoRun="false"></fieldset> <row> <panel> <title>Input Panel</title> <input type="text" token="user_input"> <label>User Input:</label> <default>*</default> </input> </panel> </row> <row> <panel> <title>Script Output</title> <table> <search> <query>| script python testps $user_input$ | table field1</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>   Test.ps1 PowerShell Script:    Write-Host $args[0]   commands.conf:   [testps] filename = Test_PowerShell.py streaming=true python.version = python3   default.meta   [commands/testps] access = read : [ * ], write : [ admin ] export = system [scripts/Test_PowerShell.py] access = read : [ * ], write : [ admin ] export = system   The error I'm getting is the following: External search command 'testps' returned error code 1. 
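One thing worth checking (a hedged observation, since the full traceback isn't shown): splunk.Intersplunk.outputResults expects a list of dictionaries, one per result row, but the script above passes the CompletedProcess object returned by subprocess.run directly. A minimal sketch of converting captured stdout into that shape — with the splunk import omitted and the PowerShell call simulated by a plain string, since neither is available outside Splunk:

```python
# Sketch: build the list-of-dicts shape that outputResults expects.
# simulated_stdout stands in for results.stdout from subprocess.run.
simulated_stdout = "line one\nline two\n"

rows = [{"field1": line} for line in simulated_stdout.splitlines() if line]
print(rows)
# [{'field1': 'line one'}, {'field1': 'line two'}]
```

In the real script, `simulated_stdout` would be `results.stdout` (which is populated because capture_output=True and text=True are set), and `rows` would be passed to splunk.Intersplunk.outputResults instead of the CompletedProcess object.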
Hello, I wanted to ask whether there is a way in Splunk to collect failed login data from users on a virtual machine hosted with VMware, so that I can see if a user tried to log in to the VM, say, 5 times and failed each time. It would be nice to use this to find out whether some kind of brute-force attack, or something else, is going on.
Hello Team, we are getting the below error while deploying the Java agent. For a few minutes it comes up, and after some time the agent crashes along with the application. It seems to be some issue while instrumenting the class. Below are the logs for your reference.

[main] 17 May 2024 11:25:46,397 WARN LightweightThrowable - java.lang.NoSuchMethodException: java.lang.Throwable.getStackTraceElement(int) caught trying to reflect Throwable methods
[AD Thread Pool-Global0] 17 May 2024 11:25:48,998 INFO ErrorProcessor - Sending ADDs to register [ApplicationDiagnosticData{key='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:', name=SQLSyntaxErrorException : OracleDatabaseException, diagnosticType=ERROR, configEntities=null, summary='java.sql.SQLSyntaxErrorException caused by oracle.jdbc.OracleDatabaseException'}]
[AD Thread Pool-Global0] 17 May 2024 11:25:48,998 INFO ErrorProcessor - To enable reverse proxy, use the node property or set env/system variables
[AD Thread Pool-Global0] 17 May 2024 11:25:49,094 INFO ErrorProcessor - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-Global0] 17 May 2024 11:25:49,194 INFO ErrorProcessor - Restoring Context ClassLoader to com.singularity.ee.agent.appagent.kernel.classloader.Post19AgentClassLoader@14bf9759
[AD Thread Pool-Global0] 17 May 2024 11:25:49,194 INFO ErrorProcessor - Error Objects registered with controller :{java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:=1873198}
[AD Thread Pool-Global0] 17 May 2024 11:25:49,194 INFO ErrorProcessor - Adding entry to errorKeyToUniqueKeyMap [1873198], ErrorKey[cause=[java.sql.SQLSyntaxErrorException, oracle.jdbc.OracleDatabaseException]], java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:
[AD Thread Pool-Global0] 17 May 2024 11:25:49,294 INFO ErrorProcessor - Sending ADDs to register [ApplicationDiagnosticData{key='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1465592621', name=java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:, diagnosticType=STACK_TRACE, configEntities=[Type:ERROR, id:1873198], summary='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:'}]
[AD Thread Pool-Global0] 17 May 2024 11:25:49,294 INFO ErrorProcessor - To enable reverse proxy, use the node property or set env/system variables
[AD Thread Pool-Global0] 17 May 2024 11:25:49,336 INFO ErrorProcessor - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-Global0] 17 May 2024 11:25:49,396 INFO ErrorProcessor - Restoring Context ClassLoader to com.singularity.ee.agent.appagent.kernel.classloader.Post19AgentClassLoader@14bf9759
[AD Thread Pool-Global0] 17 May 2024 11:25:49,396 INFO ErrorProcessor - Error Objects registered with controller :{java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1465592621=2272870}
[AD Thread Pool-Global0] 17 May 2024 11:25:49,396 INFO ErrorProcessor - Adding entry to errorKeyToUniqueKeyMap [2272870], StackTraceErrorKey{hashCode=-1465592621}, java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1465592621
[AD Thread Pool-Global0] 17 May 2024 11:25:56,893 INFO DynamicRulesManager - The config directory /opt/appdyn/javaagent/23.12.0.35361/ver23.12.0.35361/conf/namicggtd52d-onboarding-25-ll7bx--1 is not initialized, not writing /opt/appdyn/javaagent/23.12.0.35361/ver23.12.0.35361/conf/namicggtd52d-onboarding-25-ll7bx--1/bcirules.xml
[AD Thread-Metric Reporter0] 17 May 2024 11:26:07,293 INFO MetricSender - To enable reverse proxy, use the node property or set env/system variables
[AD Thread Pool-Global1] 17 May 2024 11:26:38,997 INFO ErrorProcessor - Sending ADDs to register [ApplicationDiagnosticData{key='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1702013436', name=java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:, diagnosticType=STACK_TRACE, configEntities=[Type:ERROR, id:1873198], summary='java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:'}]
[AD Thread Pool-Global1] 17 May 2024 11:26:38,997 INFO ErrorProcessor - To enable reverse proxy, use the node property or set env/system variables
[AD Thread Pool-Global1] 17 May 2024 11:26:39,093 INFO ErrorProcessor - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-Global1] 17 May 2024 11:26:39,094 INFO ErrorProcessor - Restoring Context ClassLoader to com.singularity.ee.agent.appagent.kernel.classloader.Post19AgentClassLoader@14bf9759
[AD Thread Pool-Global1] 17 May 2024 11:26:39,094 INFO ErrorProcessor - Error Objects registered with controller :{java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1702013436=2272876}
[AD Thread Pool-Global1] 17 May 2024 11:26:39,094 INFO ErrorProcessor - Adding entry to errorKeyToUniqueKeyMap [2272876], StackTraceErrorKey{hashCode=-1702013436}, java.sql.SQLSyntaxErrorException:oracle.jdbc.OracleDatabaseException:-1702013436

Kindly assist.

Regards,
Amit Singh Bisht
Has anyone attempted to enable all the correlation searches in the "Use Case Library" for Enterprise Security? There are over 1,000 correlation searches. Will this impact the performance of the search head (SH) and indexers? If I have 1,000 EPS, what hardware resources would be required? Alternatively, what minimum hardware resources are needed to enable all the correlation searches in the Use Case Library? Thank you.
Hi, we recently changed the tsidxWritingLevel from 1 to 4 for performance and space savings. Is there any way to check whether this modification has actually improved performance and reduced disk usage in our environment? Thanks
Hi, I'm looking for my next role and wanted to reach out to the community for guidance on where to look for roles that use AppDynamics, as I would love to continue working with this amazing technology and helping improve online experiences. Thanks, Sunil
In Dashboard Studio: I applied a drilldown to one of the standard icons and linked it to another dashboard. The goal is to view the linked dashboard upon clicking the icon, and it works. However, people get distracted when they hover the mouse over the icon and the Export and Full Screen icons pop up. Is there a way to disable this default, unneeded functionality so that nothing pops up on mouse hover over an icon? @elizabethl_splu