All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Friends, I'm importing new services through ad-hoc search SPL. After completing all the steps, when I add my services (parent and child) to an existing parent service, I don't get the pop-up window that enables the services, so the KPI searches never start running and entities never get mapped. Because of this, no entities are mapped to my services. Could you please assist with this? Thanks in advance.
Hello, I am trying to create a new clustered search head, and I was planning to build it using the Docker images. I have set the management URL to our index manager node, but I am receiving this error:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Login failed

Is this planned architecture possible? I cannot see anything in the documentation that follows this path. This is the original move; later down the line the indexers will be moved to containers as well, but that is out of scope currently.
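On the certificate warning specifically, here is a minimal server.conf sketch of the setting the message points at, assuming the container trusts the CA that signed the manager node's certificate (the path below is illustrative). The warning itself is benign; the separate "Login failed" line more often points at credentials or a pass4SymmKey mismatch than at TLS.

[sslConfig]
# validate that the peer certificate's name matches the host being dialed;
# requires the CA bundle below to include the manager certificate's issuer
cliVerifyServerName = true
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem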
Hello Splunk Ninjas! I need your assistance with designing my regex. I need to extract the value of Message from this sample log line:

2022-09-23T13:20:25.765+01:00 [29] WARN Core.ErrorResponse - {} - Error message being sent to user with Http Status code: BadRequest: {"Details":{"field1":value,"field2":"value2"},"Message":"This is the message.","UserMessage":null,"Code":86,"Explanation":null,"Resolution":null,"Category":4}

I am also interested in extracting the values of field1 and field2 (inside Details), Message, and Code. Any help much appreciated! Thanks again.
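A minimal SPL sketch: capture the object after the status code with rex, then let spath pull the individual keys. This assumes the payload is well-formed JSON; in the sample, field1 has an unquoted value, so a per-field rex fallback is included.

| rex field=_raw "Http Status code:\s+\w+:\s+(?<payload>\{.+\})$"
| spath input=payload path=Message output=Message
| spath input=payload path=Code output=Code
| spath input=payload path=Details.field1 output=field1
| spath input=payload path=Details.field2 output=field2
```fallback if the payload is not valid JSON```
| rex field=payload "\"Message\":\"(?<Message2>[^\"]*)\""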
We recently upgraded to Splunk 9.0.0 on our platform, and the Splunk Add-on for Active Directory stopped working. We connect to our Active Directory instance using SSL ... and we're now getting errors like this one:

2022-10-20 08:05:03,580, Level=ERROR, Pid=5668, File=search_command.py, Line=390, Abnormal exit: (LDAPSocketOpenError('socket ssl wrapping error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)'),)

What needs to be changed to make this work with Splunk 9? (We still have an instance running 8.2.6, and the add-on works perfectly fine there with the exact same configuration. We're using the most recent version, 3.0.5, of SA-ldapsearch.) Thank you.
I'm trying to find outside IP addresses hitting our firewall and see whether they have been blocked or not. Is there a search I can run to find them? I'm trying this one:

index="firewall" src_ip=( IP address here )

and I get nothing. Any help will be appreciated.
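A minimal SPL sketch, assuming the firewall events carry src_ip and an action field (field names vary by firewall add-on; the address below is an example): quote the IP instead of wrapping it in parentheses, and split the counts by action to see whether it was blocked.

index="firewall" src_ip="203.0.113.45"
| stats count by src_ip, action

```or survey all source IPs at once```
index="firewall"
| stats count by src_ip, action
| sort - count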
500 internal server error Oops. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage.

I have installed a Splunk instance on Ubuntu (WSL) and had some issues while starting Splunk, so I followed the steps suggested by @naisanza (https://community.splunk.com/t5/Getting-Data-In/Why-am-I-getting-quot-homePath-opt-splunk-var-lib-splunk-audit/m-p/220231/highlight/false#M43245), and that was fixed. Splunk Web was still not up and running, so I followed the steps suggested by @fearofcasanova in this post (https://community.splunk.com/t5/Installation/splunk-installation-is-failing-to-generate-cert-pem/m-p/405406/highlight/false#M5445), and that issue was fixed too. Then I tried to log in to Splunk Web at "https://<ip.o.o.o>:8000", and when I entered my credentials it gave me the 500 error above. Can somebody help me fix this issue? Thanks in advance.
Hello, I'm trying to set up SNMP monitoring for my Zscaler NSS in Splunk. Does anyone know where I can find documentation, or does anyone have ideas on how I can do this? Thanks!
Hi All, I need help plotting backlog data on a timechart. We have a set of tickets in backlog on specific dates with workgroups, and we want to show them in a timechart. Here is the situation. For example, ticket123 is in backlog on 1st Oct with group A; the same ticket123 moves to group B on 3rd Oct and stays with them until 5th Oct; finally the ticket moves to group C on 6th Oct. Below is the table as it shows in Splunk:

Date     Ticket   Workgroup   Status
01-Oct   123      A           Pending
03-Oct   123      B           Pending
06-Oct   123      C           Pending

If we run a timechart on the above table, it shows ticket123 in backlog only on the 1st, 3rd, and 6th. However, ticket123 was actually in backlog on the 1st and 2nd in group A, on the 3rd, 4th, and 5th in group B, and on the 6th in group C. How do we get the missing dates (2nd, 4th, 5th) into the table so the timechart shows the ticket in backlog on every applicable date?
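A minimal SPL sketch, assuming each row above is an event whose _time is the change date: sort each ticket newest-first, use streamstats to find when the next change happened, then expand every row into one event per day with mvrange and mvexpand so the in-between days appear.

| sort 0 Ticket -_time
```in this ordering, the previous row is the ticket's next change```
| streamstats current=f window=1 first(_time) as next_change by Ticket
```the ticket's latest row runs up to now (cap as appropriate)```
| eval next_change=coalesce(next_change, now())
| eval day=mvrange(_time, next_change, 86400)
| mvexpand day
| eval _time=day
| timechart span=1d dc(Ticket) by Workgroup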
I installed DB Connect on a heavy forwarder, but I get a message that the task server cannot start. Earlier I installed DB Connect on a search head (the same way) and it worked fine. What is the problem with the heavy forwarder? Can anyone please give me a clear answer?
Good afternoon! I have a search on which I base an alert:

index="main" sourcetype="testsystem-script707"
| eval srcMsgId_Исх_Сообщения=if(len('Correlation_srcMsgId')==0 OR isnull('Correlation_srcMsgId'),'srcMsgId','Correlation_srcMsgId')
| eventstats count as Message_Number by srcMsgId_Исх_Сообщения
| rex field=routepointID "^(?<Routepoint_ID_num>\d+)\."
| sort routepointID
| eventstats first(Routepoint_ID_num) as RouteID by srcMsgId_Исх_Сообщения
| sort _time
| eventstats first(_time) as MessageTime by srcMsgId_Исх_Сообщения
| eval Time_Now=now()
| eval diff_time=Time_Now-MessageTime
| where RouteID!=1 AND diff_time>15

But the alert constantly triggers on old message threads that match the conditions of the query above. I would like to get rid of triggers on old chains by adding a field to the messages, e.g. alert=true or alert=0, and then adding an extra condition to my search: only process events where alert=0. Can you tell me how to do that?
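Indexed events cannot be modified after the fact, so a flag cannot be added to the old messages themselves. A minimal sketch of a common workaround instead, assuming a lookup file alerted_msgids.csv (a name invented for this example, created once beforehand) that remembers which chains have already fired:

```base search from above, ending with:```
| where RouteID!=1 AND diff_time>15
```drop chains that already triggered an alert```
| search NOT [| inputlookup alerted_msgids.csv | fields srcMsgId_Исх_Сообщения]
```remember this run's chains for next time```
| fields srcMsgId_Исх_Сообщения
| outputlookup append=true alerted_msgids.csv

In practice you would keep the alert payload fields as well and write only the ID column to the lookup; this sketch just shows the exclude-then-append pattern.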
We have alerts for high Windows Server CPU usage, and we have automated vulnerability scanners that can trip these alerts, which we'd like to ignore. The existing alert has just one nested search to check CPU usage, and the main search uses the resulting host names to find the processes with high CPU, roughly like this:

```analyze processes on high-cpu hosts```
index=perf source=process cpu>25
    ```find high-CPU hosts```
    [search index=perf source=cpudata cpu>95
    | stats count by host
    | search count > 2
    | fields host]
```omitted: other macros, formatting, etc.```

To ignore the vulnerability scans, we check the IIS index for requests from servers whose names start with "scanner" -- this works fine:

```analyze processes on high-cpu hosts```
index=perf source=process cpu>25
    ```disregard hosts taking scanner traffic```
    [search index=iis
    | lookup dnslookup clientip OUTPUT clienthost
    | eval scanner=if(like(clienthost,"scanner%"),1,0)
    | stats sum(scanner) as count by host
    | search count<10
    | fields host
        ```find high-CPU hosts```
        | search [search index=perf source=cpudata cpu>95
        | stats count by host
        | search count > 2
        | fields host]]
```omitted: other macros, formatting, etc.```

The problem is that not all servers use IIS; a SQL Server, for example, will never appear in the IIS index. So I'm trying to find a way to pass the host value to the main search even when the IIS sub-search has no hits at all. I suspect I should combine the IIS search into the high-CPU subsearch (referencing both indexes), but I'm having a hard time wrapping my head around how that would work.

As a side note, performance seems pretty bad. Each subsearch runs sub-second stand-alone (and our indexes like IIS have a million-plus events per minute), but the multi-subsearch version takes a little over a minute -- this surprises me, since only a few host values are passed out of the innermost search. Performance suggestions are very welcome, but we can live with the processing time if I can solve the other issue.
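A minimal sketch of one way to invert the logic so that hosts with no IIS traffic pass through by default: keep the high-CPU subsearch as the allow-list, and turn the IIS check into a deny-list with NOT, so a host is excluded only when the IIS index actually shows it taking scanner traffic.

```analyze processes on high-cpu hosts```
index=perf source=process cpu>25
    ```find high-CPU hosts```
    [search index=perf source=cpudata cpu>95
    | stats count by host
    | where count > 2
    | fields host]
    ```exclude hosts that ARE taking scanner traffic; hosts absent from IIS are kept```
    NOT [search index=iis
    | lookup dnslookup clientip OUTPUT clienthost
    | where like(clienthost,"scanner%")
    | stats count by host
    | where count>=10
    | fields host]

Filtering on like(clienthost,"scanner%") before the stats should also help the runtime, since the IIS subsearch no longer aggregates every client for every host.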
I am trying to create a search that alerts when the sum of a few field values changes by more than 30 percent within 5 minutes. The search should take a sum of the field values whose names look like traffic_in#abc and traffic_out#abc, then use the delta command to find the difference between the current and previous values. The issue is that I have 11 suffixes like abc (e.g. abc, cde, efg, etc.), and I want a delta of the total of traffic_in#*** + traffic_out#*** per suffix, and then to find when the traffic has changed by over 30%. The search I have works when there is only one suffix like abc, but I want to change it so that it works with multiple suffixes. The search is:

eventtype=cacti:mirage host="onl-cacti-02" host_id=193 ldi IN("8835","8836","8837","8839","8840","8841","8846","8847","8848","8843","8844")
| reverse
| eval combination=rrdn+"#"+name_cache
| streamstats current=t window=2 global=f range(_time) as deltaTime range(rrdv) AS rrd_value_delta by combination
| eval isTraffic = if(like(rrdn,"%traffic%"),1,0)
| eval kpi = if(isTraffic==1,rrd_value_delta*8/deltaTime,rrd_value_delta/deltaTime)
| timechart limit=0 useother=f span=5min last(kpi) by combination
| addtotals fieldname=total
| delta total as change
| eval change_percent=change/(total-change)*100
| timechart span=5min first(total) AS total_traffic, first(change_percent) AS traffic_change
| where abs(traffic_change) > 30
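A minimal sketch of one way to make the change detection per-suffix instead of one grand total, replacing everything after the first timechart: untable the result, pull the suffix out of the combination name, total in+out per suffix per bucket, then compute each suffix's percent change with streamstats. Field semantics follow the search above; the rex assumes the suffix is the last #-delimited token.

| untable _time combination kpi
```optionally keep only traffic series: | search combination="*traffic*"```
| rex field=combination "#(?<entity>[^#]+)$"
| stats sum(kpi) as total by _time entity
| streamstats current=t window=2 first(total) as prev_total by entity
| eval traffic_change=(total-prev_total)/prev_total*100
| where abs(traffic_change) > 30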
Good afternoon! I figured out how to set up alerts and understood the Cron Expression parameter. Currently I am using */1 * * * * (run every minute). Can you tell me how to run a search every N seconds? I tried a lot of options, but Splunk rejects all of them with an error. How, for example, do I run it every 30 or 40 seconds? Thanks in advance!
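Standard cron syntax has no seconds field, which is why every sub-minute expression is rejected; one minute is the smallest interval a cron schedule can express. A minimal savedsearches.conf sketch of the scheduling keys involved (the stanza name is illustrative):

[My Alert]
# cron has five fields (minute hour day month weekday); there is no sixth
# "seconds" field, so something like */30 in a seconds position is invalid
cron_schedule = */1 * * * *
enableSched = 1

When sub-minute reaction is genuinely needed, a real-time alert with per-result triggering is the usual alternative to a scheduled search.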
The goal is to take all eventIds with "operation failed" and exclude events with "Duplicate key" and "Event processed successfully":

index="idx" "Transaction failed"
| table eventId
| dedup eventId
| search NOT [search index="idx" "Duplicate key" | table eventId]
| search NOT [search index="idx" "Event processed successfully" | table eventId]

But for some reason the last NOT subsearch doesn't exclude the events that were processed successfully:

| search NOT [search index="idx" "Event processed successfully" | table eventId]
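Subsearches are capped (by default around 10,000 results plus a runtime limit), so if the "Event processed successfully" search returns more IDs than fit, the exclusion silently becomes incomplete. A minimal sketch of a stats-based rewrite, assuming all three message types can be matched in one pass, that avoids subsearch limits entirely:

index="idx" ("Transaction failed" OR "Duplicate key" OR "Event processed successfully")
| eval failed=if(searchmatch("Transaction failed"),1,0)
| eval excluded=if(searchmatch("Duplicate key") OR searchmatch("Event processed successfully"),1,0)
| stats max(failed) as failed, max(excluded) as excluded by eventId
| where failed=1 AND excluded=0
| fields eventId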
Hi, here is an example. Using the query below I can see when the last log arrived in Splunk, but when I search for the events themselves, nothing shows up. With this SPL I can see when we received the latest events for this combination, over a 30-day time range:

| tstats latest(_time) as _time where index=abc sourcetype="Cisco devises" host=1234 by index source sourcetype host

But if I search for the same events over the same time range, no results show up:

index=abc sourcetype="Cisco devises" host=1234
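One thing worth ruling out: tstats matches the indexed sourcetype, while an event search matches the search-time sourcetype, so the two can disagree when a sourcetype is renamed or rewritten in props.conf. A minimal diagnostic sketch (values taken from the search above) to list what actually exists for that index and host:

| tstats count where index=abc host=1234 by sourcetype source

If the listed sourcetype differs from "Cisco devises", use the listed value in the event search.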
Add "A" field from another index if "B" and ""C" are equal across indexes I have search that returns events with fields  Index=1 "B" "C"  "D" index=2 "A" "B"  "C"  On a search in ind... See more...
Add "A" field from another index if "B" and ""C" are equal across indexes I have search that returns events with fields  Index=1 "B" "C"  "D" index=2 "A" "B"  "C"  On a search in index 1, I would like to kind of 'lookup'  dynamicaly field from index=2. I know "B" "C" will most likelly be equal, so I would like to use them to find event, and pull value of field "A". Same activity is similarly recorded within two indexes but one has "A" Within 5min timeframe of an event (search & map?) if I find event where fields B and C are equal accross both indexes, (if) then pull value from index=2 field "A" and map it to field "D" on index=1 in that event.  else do a lookup from lookuptable (if false | lookup table x as B OUTPUTNEW z as D) I hope it's clear, please let me know in case of any quesions.  
Hi, I have created a dashboard with column charts and tables, and it works fine, but when I export the dashboard to PDF I cannot see the charts -- only the statistics tables are visible in the PDF. Any help will be appreciated. Thanks.
Hello Splunkers!!

Last week:     "enableEnhancedCheckout"
Current week:  "enableEnhancedCheckout", "error_in_python_script"
New Error:     "error_in_python_script"

Above is my use case: I want to compare the errors from two weeks, and if any new error is introduced, I want to highlight it. Below is the SPL I have used so far. Please let me know what I need to correct in the query, and how else I could achieve this if you have another approach.

Index="ABC" source="/abc.log" ("ERROR" OR "EXCEPTION") earliest=-14d latest=now()
| rex field=_raw "Error\s(?<Message>.+)MulesoftAdyenNotification"
| rex field=_raw "fetchSeoContent\(\)\s(?<Exception>.+)"
| rex field=_raw "Error:(?<Error2>.+)"
| rex field=_raw "(?<ErrorM>Error in template script)+"
| rex field=_raw "(?ms)^(?:[^\\|\\n]*\\|){3}(?P<Component>[^\\|]+)"
| rex "service=(?<Service>[A-Za-z._]+)"
| rex "Sites-(?<Country>[A-Z]{2})"
| eval Error_Exception= coalesce(Message,Error2,Exception,ErrorM)
| eval Week=case(now()-_time<604800,"Current_Week",_time>604800, "Last_Week")
| stats dc(Week) AS Week_count values(Week) AS Week by Error_Exception
| eval Error_Status=if(Week_count=2,"Both Weeks",Week)
| eval Difference1= abs(tonumber(Last_Week) - tonumber(Current_Week))
| stats count by Difference1
| fields - count
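One observation: the second case() branch compares _time itself to 604800, which is always true for epoch timestamps, so it only classifies Last_Week by accident; a plain if() on the event's age states the intent. A minimal sketch of a corrected, leaner tail for the comparison, reusing the rex extractions and the Error_Exception eval from the search above:

index="ABC" source="/abc.log" ("ERROR" OR "EXCEPTION") earliest=-14d latest=now()
```...rex extractions and Error_Exception eval as above...```
| eval Week=if(now()-_time<604800,"Current_Week","Last_Week")
| stats dc(Week) as Week_count, values(Week) as Weeks by Error_Exception
| eval Error_Status=case(Week_count==2,"Both Weeks", Weeks=="Current_Week","New Error", true(),"Last Week Only")
| where Error_Status=="New Error"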
Hi all, I have a server that collects logs from Cisco ISE and pipes them to Splunk, which then generates a Splunk report on a weekly basis. The logs stopped coming into Splunk about two weeks ago. I did some basic troubleshooting and found no issues at the network level, as other directories are still sending logs to Splunk as normal. I would like to ask for some direction in my troubleshooting process for this issue. If more information is required, please feel free to let me know. Thank you all in advance.
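A minimal diagnostic sketch, assuming a forwarder monitors the directory; your_ise_index and your_forwarder are placeholders for the real names. First check when each source last produced events, then check the forwarder's internal logs for errors from the file-monitoring components (component names can vary by Splunk version):

| tstats latest(_time) as last_seen where index=your_ise_index by host source
| eval last_seen=strftime(last_seen, "%F %T")

```forwarder side: errors from the tailing components```
index=_internal host=your_forwarder source=*splunkd.log* (component=TailReader OR component=TailingProcessor) (log_level=ERROR OR log_level=WARN)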
Hi, we have close to 1000 indexers in our Splunk cluster on AWS. Each indexer has 15 TB of local SSD storage. Our retention is 30 days, and we have SmartStore enabled with AWS S3. The total S3 bucket size for our cluster is around 9 PB, yet the disk usage on almost all of our indexers is around 95%, which works out to 1000 * 0.95 * 15 TB = 14.2 PB. What is taking up the additional ~5 PB of disk space on the indexers? I'm sure the hot data (which isn't on S3) is definitely not 2.5 PB (RF=2) in size. Can someone please shed some light here? Thanks.
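Worth noting that with SmartStore the local disk holds not just hot buckets but also the cache manager's downloaded copies of warm buckets, and the cache is sized by eviction settings rather than by retention, so steadily high disk usage is expected. A minimal sketch to break local usage down by bucket state and see how much is hot versus cached warm:

| dbinspect index=*
| stats sum(sizeOnDiskMB) as size_mb by state
| eval size_tb=round(size_mb/1024/1024, 2)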