All Topics
Hi,

We are integrating Phantom with Splunk using this doc: https://docs.splunk.com/Documentation/PhantomRemoteSearch/1.0.14/PhantomRemoteSearch/Connecttosinglesplunk

As per the doc, we created two users: phantomsearchuser and phantomdeleteuser. May I know why these users are created and what they do?

Also, as per https://splunkbase.splunk.com/app/4399/#/details, a user ID is created on the Phantom end and added to a lookup in Splunk. I created it as phantom_test on the Phantom end. May I know why this user is required?

Step 4 of the doc says: Edit the phantomusers.csv file under <Splunk>/etc/apps/splunk_app_phantom/lookups and add new entries. Each entry should map the Phantom user ID to the Phantom username. You can get the user ID/username mapping from your Phantom instance under Administration -> User Management -> Users; click each individual user to get the user ID.

In total I have three users (two created in Splunk and one created in Phantom).
I am about to upgrade the Security Essentials app (installed on ES) to its most current version, 3.4.0. I read that Security Essentials depends on the ES Content Update app. The question is: do I need to update that to its latest version as well? Thank you very much in advance for your response.
I'm trying to configure a CloudWatch Logs input, but I keep receiving invalid-key errors when restarting Splunk on the HF. I've followed the doc and opened a support case, but haven't had success. The samples I've seen and the keys support suggested are all different from the documentation. I'm starting to think this input type is just bugged and doesn't work at all. Has anyone been able to configure this type of input? If so, do you mind sharing what you did?

Errors:

Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 34: account (value: REDACTED).
Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 35: groups (value: /blah/blah/redacted, /blah/more/redacted).
Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 38: region (value: us-east-1).

Details:
Splunk Enterprise 8.1.4
AWS add-on 5.0.3
Installed on Amazon Linux
Doc: https://docs.splunk.com/Documentation/AddOns/released/AWS/CloudWatchLogs

My current stanza:

[aws_cloudwatch_logs://REDACTED]
account = REDACTED
groups = /blah/blah/redacted, /blah/more/redacted
index = staging
sourcetype = aws:cloudwatchlogs:log
interval = 1800
region = us-east-1
disabled = 0
Hello,

Can anyone tell me how to exclude subsearch results from the main search? I want to exclude purchases that failed on the first attempt but later succeeded; I only want to capture PURCHASEID(s) that failed and have not been able to purchase yet. Here's the pseudo code I am trying to get working:

index=main sourcetype="access_combined_wcookie" AND ("*TIME_OUT*")
| rex field=_raw "\[(?<PURCHASEID>\d{12}\-\d{3})\]"
| search NOT [ search index=main sourcetype="access_combined_wcookie" AND ("*Successfully Ordered*")
    | rex field=_raw "\[(?<PURCHASEID>\d{12}\-\d{3})\]"
    | table PURCHASEID ]
| table PURCHASEID, _raw
| dedup PURCHASEID
| sort +PURCHASEID
I am able to collect object data using the Splunk Add-on for Salesforce, but I have had no luck collecting events. I have checked permissions, and the EventLog files are listed in my org when using curl or the ELF Browser (an app on Heroku), but I am not able to collect the events in Splunk. My only guess is that I need the Event Monitoring license in SF to use this add-on. Can you confirm?
I need to check whether each user found in the index is already in this file. If not, the user should be added to the file; if the user is already in the file, nothing happens.

file.csv:

username  info1  info2
john      abcd   qwer

index = IndexName:

username  info1  info2
Aram      ghjk   qweiq

Condition: if the user is not found in the file, write it to the file.

Expected output:

username  info1  info2
john      abcd   qwer
Aram      ghjk   qweiq
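One common SPL pattern for this (a sketch, assuming file.csv is a lookup file and username is the key; all field and index names are taken from the question) is to load the lookup first, append the index results, and keep only the first occurrence of each username, so existing file rows win:

```spl
| inputlookup file.csv
| append
    [ search index=IndexName
      | dedup username
      | table username info1 info2 ]
| dedup username
| outputlookup file.csv
```

Because dedup keeps the first row it sees, users already in file.csv are left untouched and only new usernames from the index are appended. Note that the appended subsearch is subject to subsearch result limits, so this sketch fits small-to-moderate result sets.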
With the AWS Add-on for Splunk (version 5.0.3), we can pull logs from a CloudFront S3 bucket via the "Generic S3" input type, or from an Application Load Balancer with the "ELB, Generic S3" input type. The problem is that the timestamps in the CloudFront logs have no timezone specified in the S3 objects, and our Splunk instance is incorrectly defaulting to local time. The ELB logs are correctly converted from UTC to local time in searches. How might we force the timezone to UTC for these events at ingest?

I tried creating a /opt/splunk/etc/apps/Splunk_TA_aws/local/props.conf file (yes, $SPLUNK_HOME is /opt/splunk) on our heavy forwarder and restarted, with this content:

[aws:cloudfront:accesslogs]
TZ = UTC

Alas, no dice yet. Suggestions?
I would like to create a pie chart showing how many calls took less than 100ms, 200ms, and 300ms.

index=star env=prod | search time > 100 | stats count by time

How can I add > 200 and > 300 to the same query?
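A sketch of one approach, assuming the time field holds the call duration in milliseconds and is already extracted: bucket each call with case() and count by bucket, which a pie chart can then render directly:

```spl
index=star env=prod
| eval bucket=case(time<100, "< 100ms",
                   time<200, "100-200ms",
                   time<300, "200-300ms",
                   true(),   ">= 300ms")
| stats count by bucket
```

The case() branches are evaluated in order, so each event falls into exactly one bucket; the labels are illustrative and can be renamed to taste.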
I have the MITRE App for Splunk installed in my Enterprise Security, and I have the MITRE dashboard up. I need help creating a repeatable process for custom content: I want to bring the "Detection Catalog", which was created by one of our engineers, into the MITRE dashboard. How can I automate, or at least make repeatable, the process of putting this custom content into the MITRE dashboard?
We see some hosts not currently reporting into Splunk in Oct 2021. When we analyzed the previous month, i.e. Sept 2021, those hosts were reporting to Splunk.

Is there any query/method to find the exact time when these hosts last reported?

Thanks
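One sketch for this (the index name is a placeholder): the metadata command keeps a lastTime value per host, so it can report when each host was last seen without scanning raw events:

```spl
| metadata type=hosts index=your_index
| eval lastSeen=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| sort lastTime
| table host lastSeen totalCount
```

metadata honors the time range picker and is fast, though its times come from index metadata; if you need exact event-level timestamps, a tstats search such as | tstats latest(_time) as lastSeen where index=your_index by host is an alternative.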
I am trying to determine the length of a spike to see whether it goes beyond our requirements.

Here is a test of my search:

index="database" source=IIQDB:*
| fields _time, FileGrpName, source, sourcetype, database, Spaced_Used_Per, AvailSpaceMB, Value, SQL_Server_Process_CPU_Utilization, System_Idle_Process, Other_Process_CPU_Utilization, free_log_space_Perc, lag_seconds, Requests, host, server, Task_Name, job, recent_failures, last_run, Target
| rex field=host "^(?P<hostname>[^\.]+)"
| rex field=Value "(?P<pctValue>.*)\%"
| eval TasksPaused = if(sourcetype="mssql:AGS:TaskSchP", Task_Name, null())
| search TasksPaused="*" TasksPaused="Intel-TaskSchedule-FullTextIndexRefresh" host="agsprdb1.ed.cps.intel.com"
| eval ptime=strptime(last_run,"%Y-%m-%d %H:%M:%S")
| eval TimeDiff=(now()-ptime)/60
| sort _time
| streamstats reset_on_change=true earliest(_time) as earlyTime latest(_time) as lastTime by TasksPaused
| eval duration = (lastTime - earlyTime)/60

Some of it is extra from the whole search; I am trying to narrow down the problem with this section.

I wish we could post a picture of our timeline, but I will simulate it here:

        /\              ---/\                   /--------\
--------/  \------------/    \------------------/          \--------------------
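One way to measure spike length, sketched here under the assumption that pctValue is the metric of interest and 90 is a placeholder threshold: flag above-threshold samples, and let streamstats reset whenever the flag flips, mirroring the reset_on_change approach already in the search above:

```spl
<base search from the question>
| eval above=if(pctValue > 90, 1, 0)
| streamstats reset_on_change=true earliest(_time) as spikeStart by above
| eval spike_minutes=round((_time - spikeStart)/60, 1)
| where above=1
```

Each row then carries the elapsed minutes since the current spike began, so the maximum spike_minutes per run is the spike's duration; events must be in ascending time order (sort _time, as in the original search) for the reset logic to work.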
Hi,

I'm trying to use a lookup file inside an if statement, and it doesn't return any data. I would appreciate it if anyone could help me. Thanks!

The lookup file has 4 columns (TenantName, tenantId, Region, DB), and my base search returns 5 columns (_time, TenantName, tenantId, Region, Status). I need to find the database name (DB) for each record, matching on tenantId from the base search wherever tenantId is not "Unknown", and on TenantName otherwise.

<base search>
| table _time TenantName tenantId Region Status
| eval Database=if(tenantId!="Unknown",
    [| inputlookup myLookup | where tenantId=tenantId | return $DB],
    [| inputlookup myLookup | where TenantName=TenantName | return $DB])
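A likely reason this returns no data: a subsearch is expanded once, before the outer search runs, so it cannot see per-row values like tenantId, and where tenantId=tenantId compares the lookup's own field to itself. The usual per-row tool is the lookup command; a sketch using the field names from the question:

```spl
<base search>
| table _time TenantName tenantId Region Status
| lookup myLookup tenantId OUTPUT DB as DatabaseById
| lookup myLookup TenantName OUTPUT DB as DatabaseByName
| eval Database=if(tenantId!="Unknown", DatabaseById, DatabaseByName)
| fields - DatabaseById DatabaseByName
```

Both lookups run per event, and the final eval picks whichever match the tenantId condition calls for.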
Hi everyone. I was watching events in the internal logs and saw many events like "ERROR AdminManagerDispatch - Admin handler 'alert_manager' not found.". I recently upgraded the Alert Manager app from v2 to v3, but I do not know whether that upgrade is related.

Does anyone know what could be happening?

Thank you so much
In my organization we are planning to install heavy forwarders for some domains. What are the hardware requirements for heavy forwarders? What is the ratio of servers to heavy forwarders?
Hello all! I hope someone can help. After we upgraded our Splunk Enterprise to 8.2, we are getting these error messages on our search head clusters regarding our indexers:

Auto Load Balanced TCP Output
Root Cause(s): More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.
Hello,

Is it possible to run the search of a dashboard by using its ID?

Also, can I add fields to that search? I.e., if a dashboard runs this search:

(index="mysource" earliest=-264h latest=now()) | eval metric=case(index="mysource", '_time') ...

can I do something like:

search $dashboard_id$ = 'my_dashboard' | eval Timestamp=strftime(now(),"%d/%m/%Y %H:%M:00") | table A1 A2 Timestamp

i.e., append additional code?

Thanks!
Hello, I recently upgraded to 3.7.0, and in doing so I've encountered a few issues:

1. The bulk edit features don't show up in the UI, and there aren't any checkboxes as shown in the docs.
2. The "Edit Incident" button doesn't do anything.
3. The doexternalworkflowaction field shows when it usually does not, and the other field names aren't the clean field names.

Any ideas on how I can resolve these issues? Thanks for the help.
I am trying to search for a number of events over a select period of time (4 hours) and then expand that to see how much of this traffic occurs over a 30-day period. I can use the time range picker for the initial 4 hours, but when I expand it, I get too much data.

Search I am using:

index="Firewalls" action=blocked | stats count by client_ip | search count > 3500 | sort -count

Is there a way to limit the results to something like "count > 3500 over 4 hours" while the time range is 30 days?
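A sketch of one way to do this, keeping the field names from the question: bin events into 4-hour buckets over the 30-day range, count per client per bucket, and keep only the buckets where a client exceeds the threshold:

```spl
index="Firewalls" action=blocked earliest=-30d
| bin _time span=4h
| stats count by _time, client_ip
| where count > 3500
| sort - count
```

Each result row is then one (4-hour window, client_ip) pair that crossed 3500 events, so the 30-day picture is built from the same 4-hour unit the original search used.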
Has anyone ever installed the Netwrix addon in Splunk? Having a bit of trouble with how to do so. 
The "file a bug" link under the Help menu goes here: http://www.splunk.com/r/bugs

If you go there, it asks you to log in and then dumps you to the homepage. If you click the link again, it takes you here: https://splunkcommunities.force.com/customers/apex/CP_CaseSubmissionPage?caseID=NewCase