
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello, we are using ES and we have a downloaded lookup file that contains a mix of standalone IPs and CIDRs/subnets. The CSV file has three columns: Description, ip, time. I want to match dest_ip from my search results against the IPs in the lookup table (column "ip") and, if any match, display the results in a table. I am using this search:

| makeresults count=2
| streamstats count
| eval src_ip = case(count=1, "1.2.3.4", count=2, "2.3.4.5")
| eval dst_ip = case(count=1 OR count=2, "1.234.65.61")
| lookup ip_reputation_list ip as dst_ip OUTPUT ip
| table _time ip

Is the above correct? The problem I am facing is that even when no IPs match, the results still show the _time column and an empty ip column. How do I get the search to show no results when no IPs match the lookup? And secondly, can I compare an IP against a CIDR in the lookup?
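A sketch of one approach, under two assumptions: the lookup is defined in transforms.conf (stanza name ip_reputation_list) so it can be given a CIDR match type, and the OUTPUT field is renamed so a where clause can drop non-matching rows.

[ip_reputation_list]
filename = ip_reputation_list.csv
# Treat values in the "ip" column as CIDR ranges when matching;
# plain IPs like 1.2.3.4 still match as /32.
match_type = CIDR(ip)

| makeresults count=2
| streamstats count
| eval dst_ip = case(count=1, "1.2.3.4", count=2, "2.3.4.5")
| lookup ip_reputation_list ip AS dst_ip OUTPUT ip AS matched_ip, Description
| where isnotnull(matched_ip)
| table _time dst_ip matched_ip Description

The where isnotnull() step is what removes the empty rows: lookup always passes events through and only adds fields on a match.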
Hi friends, I have integrated ServiceNow with Splunk through the Splunk Add-on for ServiceNow. I want to ingest "Cases", but that is not listed in the Input tab. I can see the incident input, but I don't need that; I want to ingest the cases/tickets created by customers. Please help at the earliest. Thank you. @splunk
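Not a confirmed answer, but a hedged sketch: the add-on's inputs are ordinary [snow://...] stanzas, so if your add-on version supports arbitrary tables, a local inputs.conf stanza pointing at the case table may work. Both the key names (mirrored from the generated incident input) and the table name sn_customerservice_case are assumptions; verify them against the add-on's README/inputs.conf.spec and your ServiceNow schema.

# Splunk_TA_snow/local/inputs.conf (key names and table are assumptions)
[snow://case]
table = sn_customerservice_case
timefield = sys_updated_on
duration = 60
disabled = 0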
My lookup is a KV Store lookup. It has three columns: 'is_active', 'user', 'robot'. I have an SPL query that gives me more information about the user, and I want to enrich the lookup with additional columns from that SPL output. The SPL is:

index=population sourcetype=bsassioan | table age, gender, email, user_name

user_name is the same field as 'user' in the lookup. I want to update my KV Store lookup so that it contains the columns is_active, user, robot, age, gender, and email, by matching user_name against the user field. If no match is found, leave the fields empty; I don't want to override anything in the lookup, just add new columns, with null/empty fields (no overwrite) where no user_name/user match exists.
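A minimal sketch of one way to do this, assuming the lookup definition is named robot_lookup (substitute your real lookup name): start from the existing KV Store contents, left-join the enrichment fields, and write the result back. The left join keeps every existing row, and users with no match simply get empty age/gender/email fields.

| inputlookup robot_lookup
| join type=left user
    [ search index=population sourcetype=bsassioan
      | rename user_name AS user
      | stats latest(age) AS age latest(gender) AS gender latest(email) AS email BY user ]
| outputlookup robot_lookup

The stats in the subsearch collapses duplicate user_name rows. Note that join subsearches are subject to the usual subsearch row limits, so for very large populations a lookup-based enrichment would be safer.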
Hello *, I am looking for an SPL query that reads the first part of a string via regex and replaces all occurrences of a certain character with this first part. This is to be used for summary indexing. Example:

1;A__B
2;B__C__D__A__E
3;G

is to be transformed into (each line becoming a value in a multivalue field):

1;A
1;B
2;B
2;C
2;D
2;A
2;E
3;G

Neither replace nor rex seems able to perform multiple replacements of this kind. I also tried foreach with some field extractions, but failed. Before I write a custom search command for it, I am hoping for your ideas on solving this with some clever standard SPL. Thank you in advance for your thoughts!
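One standard-SPL sketch, assuming Splunk 8.0+ for mvmap(): extract the prefix before the semicolon, split the remainder on "__", then prepend the prefix to every element. Run-anywhere example:

| makeresults
| eval _raw="2;B__C__D__A__E"
| rex field=_raw "^(?<id>[^;]+);(?<rest>.*)$"
| eval items=split(rest, "__")
| eval lines=mvmap(items, id . ";" . items)
| table lines

Inside mvmap() the field name items refers to the current element, so lines ends up as the multivalue 2;B 2;C 2;D 2;A 2;E. Single-element inputs like 3;G fall through unchanged.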
I currently have a Splunk cluster that looks like this:

Splunk        CentOS Version   Splunk Version
Master        7.5              7.0.0
Forwarder     7.5              Universal Forwarder 6.6.3
Search Head   6.5              7.0.0
Indexer 1     6.5              7.0.0
Indexer 2     6.5              7.0.0
Indexer 3     6.5              7.0.0
Indexer 4     6.5              7.0.0

I have 4 new, better servers that I want to build as indexers to replace the 4 indexers I currently have (while moving from Splunk 7.0.0 to Splunk 8.x in the process). It seems to me that I have two options:

Option 1: Create a new indexer cluster with the 4 new indexers, so that I have an old indexer cluster and a new indexer cluster. Send new data to the new indexer cluster only; in time the data on the old cluster will age off and I can retire it. During this period I will configure the search head to search across both indexer clusters so that old and new data remain searchable.

Option 2: Add the new indexers to the existing indexer cluster, so that I have 4 old and 4 new indexers in the same cluster. Allow the data to replicate across the 8-node cluster, then retire the old indexers one by one, leaving only the 4 new indexers.

In both scenarios the old indexers will be CentOS 6.x and the new indexers CentOS 7.x, but as I understand it this won't be a problem. Would anyone be able to advise which method is best/easiest, or point out any pitfalls or anything obvious that I may not have considered?
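For what it's worth, Option 2 has first-class CLI support for the retirement step. A hedged sketch of the per-peer sequence, to be run only after the new peers have joined and the cluster is healthy:

# On the cluster master: confirm replication and search factors are met
splunk show cluster-status

# On each old indexer, one at a time: take the peer offline.
# --enforce-counts makes the peer wait until its buckets have been
# re-replicated to the remaining peers before shutting down.
splunk offline --enforce-counts

Repeat show cluster-status between peers and proceed only when the cluster reports both factors met again.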
We are receiving the same event in multiple notables. We would like a way to stop the duplicate events, or to keep the same event from being generated in two different notables. Any suggestions?
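In case it helps, ES correlation searches support throttling (alert suppression), which prevents re-firing on the same field values within a window; it can be set in the correlation search editor or directly in savedsearches.conf. A sketch, with the search name and fields as placeholders:

# savedsearches.conf for the correlation search (name and fields are placeholders)
[My Correlation Search]
alert.suppress = 1
# Suppress repeat notables that share these field values
alert.suppress.fields = dest,signature
# Suppression window
alert.suppress.period = 86400s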
Hi all, I am getting the below error in our SHC:

Unable to initialize modular input "checkpoint_opseclea" defined in the app "Splunk_TA_checkpoint-opseclea": Introspecting scheme=checkpoint_opseclea: script running failed (exited with code 1).

I am also unable to load the Inputs and Configuration pages of the app.

Splunk_TA_checkpoint-opseclea version is 4.3.1; Splunk Enterprise version is 8.1.2. In the link below, Splunk Enterprise 8.1.2 is not listed among the compatible versions for Splunk_TA_checkpoint-opseclea:
https://docs.splunk.com/Documentation/AddOns/released/OPSEC-LEA/Releasehistory

This issue is not in the known issues. Some of the HFs also have version 4.3.1 of the app, but there the Inputs and Configuration pages load fine. Is this because of version compatibility, or is there another reason?
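One way to narrow this down is to run the same scheme introspection that splunkd performs at startup by hand on an SHC member; any Python traceback then lands in your terminal instead of being reduced to "exited with code 1". The input name below is a placeholder:

# Prints the scheme/config splunkd would get; errors surface as tracebacks
$SPLUNK_HOME/bin/splunk cmd splunkd print-modinput-config checkpoint_opseclea checkpoint_opseclea://<your_input_name>

If the traceback points at an import error, comparing the Python version and bundled libraries between the working HFs and the SHC members would be the next step.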
Hello everyone, I have looked a lot for the right solution to this, but I am not able to get it right. I am very new to Splunk, and I am trying to edit the font size and color of the initial label (at the very top) of my dashboard. I was able to change the font size and color of the individual panels I built, but I have had no luck editing the label. I have pasted the code below to give a better understanding of what I am working with.

<form theme="dark">
  <label>Trial Dashboard</label>   ---> (I am trying to format the font size and color here)
  <description>Select the time limit</description>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="time_tok" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-30d@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel id="trial1">
      <title>Sample Pannel</title>
      <single>
        <search>
          <query>sample_query_in_here</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>300s</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="drilldown">none</option>
        <option name="height">249</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="refresh.display">progressbar</option>
        <option name="useColors">1</option>
      </single>
      <html>
        <style>
          #trial1 .dashboard-panel h2 {
            font-size: 28px !important;
            color: #9000ff !important;
            font-weight: bold !important;
          }
        </style>
      </html>
    </panel>
  </row>
</form>

The second line from the top, "Trial Dashboard", is the label I am trying to change. I hope to find a solution similar to the way I added the formats to the panels, as it was easy to keep it all in the same XML source. Thank you in advance, Rookie99
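A sketch of one approach: the dashboard <label> renders as the page-level title outside any panel, so a panel-scoped selector like #trial1 ... h2 cannot reach it, but a page-level CSS rule can. The selectors below are assumptions that vary across Splunk versions, so confirm the real one with your browser's dev tools; the depends token is never set, which keeps the style-only row invisible.

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* Selector is version-dependent; inspect the rendered page to confirm */
        .dashboard-header h1,
        .dashboard-title {
          font-size: 36px !important;
          color: #9000ff !important;
          font-weight: bold !important;
        }
      </style>
    </html>
  </panel>
</row>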
Hi, here is the log:

2021-10-26 08:17:19,117 WARN AbCD-App2-0000 [SqlExceptionHelper] SQL Error: -268, SQLState: 23000
2021-10-26 08:01:47,674 WARN CUS.AbCD-APP2-0000000 [SqlExceptionHelper] SQL Error: -100, SQLState: IX000

Expected output:

Error   Status
268     23000
100     IX000

Thanks
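A run-anywhere sketch of the extraction; the field names match the expected output, and the regex assumes the code always appears as a negative number after "SQL Error:", as in both samples. Index and sourcetype are placeholders.

index=your_index sourcetype=your_sourcetype "SqlExceptionHelper"
| rex "SQL Error: -(?<Error>\d+), SQLState: (?<Status>\S+)"
| table Error Status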
Hi, we are integrating Phantom with Splunk using the doc below:
https://docs.splunk.com/Documentation/PhantomRemoteSearch/1.0.14/PhantomRemoteSearch/Connecttosinglesplunk

As per the doc, we create two users, phantomsearchuser and phantomdeleteuser. May I know why these users are created? What do they do?

Also, as per https://splunkbase.splunk.com/app/4399/#/details, a userid is created at the Phantom end and added to the lookup below in Splunk. I have created it as phantom_test at the Phantom end. May I know why this user is required?

"4- Edit phantomusers.csv file under <Splunk>/etc/apps/splunk_app_phantom/lookups and add new entries. Each entry should map the phantom userid to the phantom username. You can get the userids/username mapping from your Phantom instance under Administration -> User Management -> Users and click on each individual user to get the userid."

In total I have 3 users (2 users created in Splunk and 1 user created in Phantom).
I am about to upgrade the Security Essentials app (installed on ES) to its most current version, 3.4.0. I read that Security Essentials depends on the ES Content Update app. The question is: do I need to update that to its latest version as well? Thank you very much in advance for your response.
I'm trying to configure a CloudWatch Logs input, but I keep receiving invalid key errors when restarting Splunk on the HF. I've followed the doc and opened a support case, but haven't had success. The samples I've seen and the keys support suggested are all different from the documentation. I'm starting to think this input type is just bugged and doesn't work at all. Has anyone been able to configure this type of input? If so, do you mind sharing what you did?

Errors:

Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 34: account (value: REDACTED).
Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 35: groups (value: /blah/blah/redacted, /blah/more/redacted).
Invalid key in stanza [aws_cloudwatch_logs://REDACTED] in /opt/splunk/etc/apps/Splunk_TA_aws/local/inputs.conf, line 38: region (value: us-east-1).

Details:
Splunk Enterprise 8.1.4
AWS add-on 5.0.3
Installed on Amazon Linux
Doc: https://docs.splunk.com/Documentation/AddOns/released/AWS/CloudWatchLogs

My current stanza:

[aws_cloudwatch_logs://REDACTED]
account = REDACTED
groups = /blah/blah/redacted, /blah/more/redacted
index = staging
sourcetype = aws:cloudwatchlogs:log
interval = 1800
region = us-east-1
disabled = 0
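"Invalid key" usually means the inputs.conf.spec shipped with the installed add-on version does not declare those keys, so one quick check before concluding the input type is broken is to compare your stanza against the spec the HF actually validates with. A sketch:

# Show the effective stanza and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool inputs list aws_cloudwatch_logs --debug

# Compare against the spec bundled with the installed add-on version
grep -A 20 "aws_cloudwatch_logs" $SPLUNK_HOME/etc/apps/Splunk_TA_aws/README/inputs.conf.spec

If the keys from the online doc are missing from the bundled spec, the doc and the installed add-on version have diverged, which would match what you are seeing.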
Hello, can anyone tell me how to exclude the subsearch results from the main search? I want to exclude results that failed on the first attempt where the person later purchased successfully. I only want to capture PURCHASEID(s) that failed and have not been able to purchase yet. Here's the pseudo code I am trying to get working:

index=main sourcetype="access_combined_wcookie" AND ("*TIME_OUT*")
| rex field=_raw "\[(?<PURCHASEID>\d{12}\-\d{3})\]"
| search NOT
    [ search index=main sourcetype="access_combined_wcookie" AND ("*Successfully Ordered*")
      | rex field=_raw "\[(?<PURCHASEID>\d{12}\-\d{3})\]"
      | table PURCHASEID ]
| table PURCHASEID, _raw
| dedup PURCHASEID
| sort +PURCHASEID
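As an alternative to the NOT-subsearch (which silently truncates at subsearch result limits), a single-pass sketch using stats: pull both outcomes in one search, flag each event, and keep only purchase IDs that timed out and never succeeded. searchmatch() here assumes the literal strings appear in _raw as in the pseudo code.

index=main sourcetype="access_combined_wcookie" ("*TIME_OUT*" OR "*Successfully Ordered*")
| rex field=_raw "\[(?<PURCHASEID>\d{12}-\d{3})\]"
| eval outcome=if(searchmatch("Successfully Ordered"), "success", "timeout")
| stats count(eval(outcome="success")) AS successes BY PURCHASEID
| where successes=0
| sort 0 +PURCHASEID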
I am able to collect object data using the Splunk Add-on for Salesforce, but I have had no luck collecting events. I have checked permissions, and EventLogFiles are listed in my org when using curl or the ELF Browser (an app on Heroku), but I am not able to collect the events in Splunk. My only guess is that I need an Event Monitoring license in Salesforce to use this add-on. Can you confirm?
I need to check whether each user in the index is present in this file. If not, the user should be added to the file; if the user is already in the file, nothing happens.

file.csv:

username  info1  info2
john      abcd   qwer

index=IndexName:

username  info1  info2
Aram      ghjk   qweiq

Condition: if the user is not found in the file, write it to the file.

Output:

username  info1  info2
john      abcd   qwer
Aram      ghjk   qweiq
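A minimal sketch: start from the file, append the index results, and dedup on username. dedup keeps the first occurrence, so existing file rows win and only genuinely new users get appended.

| inputlookup file.csv
| append
    [ search index=IndexName
      | stats latest(info1) AS info1 latest(info2) AS info2 BY username ]
| dedup username
| outputlookup file.csv

The stats collapses repeated events per username before the merge; scheduling this as a saved search keeps the file current.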
With the AWS Add-on for Splunk (version 5.0.3) we can pull logs from a CloudFront S3 bucket via the "Generic S3" input type, or from an Application Load Balancer with the ELB Generic S3 input. The problem is that the time format in the CloudFront logs has no timezone specified in the S3 objects, and our Splunk instance incorrectly defaults to localtime. The ELB logs are correctly converted from UTC to localtime in searches. How might we force the timezone to UTC for these events at ingest?

I tried creating a /opt/splunk/etc/apps/Splunk_TA_aws/local/props.conf file (yes, $SPLUNK_HOME is /opt/splunk) on our heavy forwarder and restarted, with this content:

[aws:cloudfront:accesslogs]
TZ = UTC

Alas, no dice yet. Suggestions?
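A hedged sketch of a fuller stanza: TZ is only honored on the instance that parses timestamps (your HF, since it runs the input) and only affects newly indexed events, so already-indexed data keeps its old timestamps. Pinning TIME_PREFIX/TIME_FORMAT as well rules out Splunk latching onto a different timestamp in the line; the format below assumes the standard CloudFront access-log layout, with date and time as the first two tab-separated fields (whitespace in the format matches the tab).

# Splunk_TA_aws/local/props.conf on the HF running the S3 input
[aws:cloudfront:accesslogs]
TZ = UTC
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20

If events still come in wrong after a restart, it is worth confirming with btool that this stanza wins and that the events really carry the aws:cloudfront:accesslogs sourcetype at parse time.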
I would like to create a pie chart to show how many calls took less than 100 ms, 200 ms, and 300 ms.

index=star env=prod | search time > 100 | stats count by time

How can I append > 200 and > 300 in the same query?
I have the MITRE App for Splunk installed in my Enterprise Security, and I have the MITRE dashboard up. I need help creating a repeatable process for custom content: I want to bring into the MITRE dashboard a "Detection Catalog" that was created by one of our engineers. How can I automate, or at least make repeatable, the process of putting this custom content into the MITRE dashboard?
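For what it's worth, the usual repeatable mechanism here is ES annotations: the MITRE-mapping apps build their matrices from the annotations attached to correlation searches, so tagging each detection in the catalog with its technique IDs surfaces it without per-dashboard edits. A hedged savedsearches.conf sketch; the search name and technique ID are placeholders, and the exact key should be confirmed against your ES version.

# One detection from the catalog, mapped to an ATT&CK technique (placeholders)
[Detection Catalog - Suspicious PowerShell]
action.correlationsearch.annotations = {"mitre_attack": ["T1059.001"]}

A small script or deployment app that stamps this annotation onto every search in the catalog would make the process repeatable.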
We see some hosts not currently reporting into Splunk in Oct 2021. When analyzed for the previous month, i.e. Sept 2021, those hosts were reporting to Splunk. Do we have any query/method to find the exact time when these hosts last reported? Thanks
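A sketch using tstats, which reads index metadata and is fast even over long ranges; the host names and the 90-day window are placeholders.

| tstats latest(_time) AS last_seen WHERE index=* earliest=-90d BY host
| search host IN (host1, host2)
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")

The | metadata type=hosts command returns similar lastTime/recentTime values per index and can be even quicker for a single index.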
I am trying to determine the length of a spike to see whether it goes beyond our requirements. Here is a test of my search:

index="database" source=IIQDB:*
| fields _time, FileGrpName, source, sourcetype, database, Spaced_Used_Per, AvailSpaceMB, Value, SQL_Server_Process_CPU_Utilization, System_Idle_Process, Other_Process_CPU_Utilization, free_log_space_Perc, lag_seconds, Requests, host, server, Task_Name, job, recent_failures, last_run, Target
| rex field=host "^(?P<hostname>[^\.]+)"
| rex field=Value "(?P<pctValue>.*)\%"
| eval TasksPaused = if(sourcetype="mssql:AGS:TaskSchP", Task_Name, null())
| search TasksPaused="*" TasksPaused="Intel-TaskSchedule-FullTextIndexRefresh" host="agsprdb1.ed.cps.intel.com"
| eval ptime=strptime(last_run, "%Y-%m-%d %H:%M:%S")
| eval TimeDiff=(now()-ptime)/60
| sort _time
| streamstats reset_on_change=true earliest(_time) as earlyTime latest(_time) as lastTime by TasksPaused
| eval duration = (lastTime - earlyTime)/60

Some of it is extra from the whole search; I am trying to narrow down the problem with this section. I wish we could post a picture of our timeline, but I will simulate it here:

                 /\                                     ---/\                                                       /--------\
--------/       \--------------------/           \-------------------------------/                   \--------------------------------------------
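A sketch of the usual pattern for measuring how long a metric stays above a threshold: flag each sample as above/below, assign a run ID that increments whenever the flag changes, then take the span of each run. The threshold (80) and the pctValue field are assumptions based on the search above.

... base search ...
| sort 0 _time
| eval above=if(pctValue > 80, 1, 0)
| streamstats current=f last(above) AS prev_above
| eval new_run=if(isnull(prev_above) OR above!=prev_above, 1, 0)
| streamstats sum(new_run) AS run_id
| stats min(_time) AS span_start max(_time) AS span_end first(above) AS above BY run_id
| eval duration_min=round((span_end - span_start)/60, 1)
| where above=1

Each surviving row is one spike with its start, end, and duration in minutes, which can then be compared against the requirement.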