All Posts



For anyone using the Hurricane Labs "Broken Hosts" app (https://splunkbase.splunk.com/app/3247): note that the latest version, 4.2.2, appears to have a very minor but breaking bug. The file /default/savedsearches.conf has a stanza for the "Broken Hosts Alert - by contact" alert. Depending on how you use the app, that alert potentially drives your entire alerting mechanism. Two lines in that file (121 & 130) wrap a built-in search macro in double quotes that should not be there:   | fillnull value="`default_expected_time`" lateSecs   should be:   | fillnull value=`default_expected_time` lateSecs   As a result, the literal string "`default_expected_time`" is assigned to the lateSecs field, rather than expanding to whatever default integer you configured in the macro. Removing those double quotes from both lines seems to fix the issue. I've also raised an issue on the Hurricane Labs GitHub page below... though activity there is pretty stale and I'm not sure if anyone is watching it: https://github.com/HurricaneLabs/brokenhosts/issues/3
@SmeetsS, if you have a moment could you clarify this? I'm unfamiliar with using custom JavaScript in Splunk. I have a bunch of dashboards with this issue in the default launcher app. I created a nopopup.js in etc/apps/launcher/appserver/static. I then modified the dashboard statement in the source to: <dashboard version="1.1" theme="dark" script="nopopup.js"> But this doesn't seem to work. Am I missing something?
You are re-using the field name severity - you also already seem to have values extracted to fields. What do you have for this? | table tool host object_class object severity parameter value message support_group
Perhaps this will help. ((index="wss_desktop_os") (sourcetype="support_remedy")) ASSIGNED_GROUP="DESKTOP_SUPPORT" STATUS_TXT IN ("ASSIGNED", "IN PROGRESS", "PENDING") earliest=-1d@d ``` Convert REPORTED_DATE to epoch form ``` | eval REPORTED_DATE2=strptime(REPORTED_DATE, "%Y-%m-%d %H:%M:%S") ``` Keep events reported more than 12 hours ago, so they are due in < 12 hours ``` | where REPORTED_DATE2 <= relative_time(now(), "-12h") | eval MTTRSET = round((now()-REPORTED_DATE2)/3600) | dedup ENTRY_ID | stats last(REPORTED_DATE) AS Reported, values(ASSIGNEE) AS Assignee, values(STATUS_TXT) as Status, values(MTTRSET) as MTTR by ENTRY_ID
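If it helps to sanity-check the time arithmetic outside Splunk, here is a rough Python analogue of the strptime/relative_time logic above (the 24-hour SLA framing is an assumed example for illustration, not something stated in the search):

```python
from datetime import datetime, timedelta

def is_due_soon(reported_date: str, now: datetime) -> bool:
    """Parse REPORTED_DATE and keep tickets reported more than 12 hours ago,
    mirroring `where REPORTED_DATE2 <= relative_time(now(), "-12h")`."""
    reported = datetime.strptime(reported_date, "%Y-%m-%d %H:%M:%S")
    return reported <= now - timedelta(hours=12)

def mttr_hours(reported_date: str, now: datetime) -> int:
    """Same rounding as `round((now()-REPORTED_DATE2)/3600)`."""
    reported = datetime.strptime(reported_date, "%Y-%m-%d %H:%M:%S")
    return round((now - reported).total_seconds() / 3600)
```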
Hi @Kira.Huang  This feature is only available in the SaaS controller. It's not available in the on-prem controller that you seem to be using, unfortunately. https://docs.appdynamics.com/appd/21.x/latest/en/application-monitoring/business-transactions/monitor-the-performance-of-business-transactions/automated-transaction-diagnostics It seems it's enabled in your environment. Please set the microservice.snapshot.analysis.enabled flag to false from the admin.jsp page to disable this feature so that you stop seeing this message. Thanks, Satbir Singh
Hi @ITWhisperer, Thanks for your reply. I have taken your code and modified it with the correct columns: | table tool host object_class object severity parameter value message support_group | rex field=object "^([^:]*:){3}(?<severity>\w*)" | eventstats values(severity) as AllSeverities by host "OEM_ISSUE" value | eval AllSeverities=if(severity="Clear",AllSeverities,severity) | mvexpand AllSeverities | eval object=host.":OEM_ISSUE:".value.":".AllSeverities | fields object severity | dedup object severity I am getting 2 records for the first clear; however, instead of the 2 rows showing as serverA:zabbix:123456:Warning Clear serverA:zabbix:123456:Critical Clear I am getting serverA:zabbix:123456:Clear Clear serverA:zabbix:123456:Critical Critical After the first clear severity, I am getting only one record, such as (different incident id and server for example) serverB:zabbix:123457:Clear Clear Any help is greatly appreciated!
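As a quick sanity check, the rex pattern used here can be tried outside Splunk. The same expression in Python (which spells named groups `?P<name>` rather than `?<name>`) skips three colon-delimited segments and captures the fourth:

```python
import re

# The rex from the search: skip three "segment:" groups, capture the fourth as severity.
pattern = re.compile(r"^([^:]*:){3}(?P<severity>\w*)")

def extract_severity(obj: str):
    """Return the severity segment of an object string, or None if the
    string does not have at least three colons."""
    m = pattern.match(obj)
    return m.group("severity") if m else None
```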
Hi All, Is there any way to enable and disable Splunk alerts automatically based on the log source? e.g. We have Site 1 and Site 2 in an active-passive setup. Case 1: Site 1 is active and Site 2 is passive - all Site 1 alerts should get enabled automatically. We can search for the Site 1 host as the condition to enable alerts. Case 2: Site 2 is active and Site 1 is passive - all Site 2 alerts should get enabled automatically. We can search for the Site 2 host as the condition to enable alerts.
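One possible approach (a sketch, not a tested solution): Splunk's REST API exposes enable/disable actions on saved searches, so a script triggered by a monitoring search could toggle the per-site alerts. The hostname, app, owner, and alert name below are hypothetical:

```python
from urllib.parse import quote

def saved_search_action_url(base: str, owner: str, app: str,
                            name: str, action: str) -> str:
    """Build the Splunk REST endpoint for enabling or disabling a saved search.
    POST to this URL (authenticated) to toggle the alert."""
    assert action in ("enable", "disable")
    return f"{base}/servicesNS/{owner}/{app}/saved/searches/{quote(name, safe='')}/{action}"

# A script could then POST to these endpoints once it decides which site is
# active, e.g. (untested sketch, credentials/names are placeholders):
#   import requests
#   requests.post(saved_search_action_url("https://splunk:8089", "nobody",
#                 "search", "Site1 - Disk Alert", "enable"),
#                 auth=("admin", "changeme"), verify=False)
```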
Hi @Pavan.Jadhav, Since this post is over 3 years old, I highly recommend re-asking this question on the community as its own post so it gets more visibility from the community.
We don't know your data. Ideally, your site has a data dictionary with this information, but that's rare. Consult your Splunk admin about that. You can use the metadata command to get a list of sourcetypes, or use this query: | tstats count where index=* by index,sourcetype Then take educated guesses about which sourcetype is most likely to contain the data you seek. Search that sourcetype to verify your guess.
Splunk queries are not returning anything in the table. I see events matching these queries but nothing under the 'Statistics' section. 1. index=address-validation RESP_MARKER | rex field=log "\"operationPath\"\:\"(?<path>\w+).*\"operationType\"\:\"(?<type>\w+).*\"region\"\:\"(?<reg>\w+).*" | table path, type, reg 2.  index=club-finder RESP_MARKER | rex field=log "\"operationPath\"\:\"\/(?<path>\w+).*\"operationType\"\:\"(?<type>\w+).*\"region\"\:\"(?<reg>\w+).*\"totalTime\"\:(?<timeTaken>\w+)" | table type, path, timeTaken, reg
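One hypothesis worth checking: in the first query, `\w+` must match immediately after the opening quote of operationPath, so a value starting with "/" (which the second query's `\"\/` suggests is the case) makes the whole rex fail and the table comes out empty. A small Python reproduction, using an assumed event shape; when the `log` field is valid JSON, parsing it directly is also more robust than regex:

```python
import json
import re

# Hypothetical event in the `log` field (structure assumed from the rex patterns):
log = '{"operationPath":"/clubs","operationType":"query","region":"useast","totalTime":128}'

# Query 1 style: \w+ right after the quote fails on a leading "/".
q1 = re.compile(r'"operationPath":"(?P<path>\w+)')
# Query 2 style: the explicit "/" lets the capture succeed.
q2 = re.compile(r'"operationPath":"/(?P<path>\w+)')

def extract_path(event: str):
    m = q2.search(event)
    return m.group("path") if m else None

def extract_fields(event: str):
    """Parse the JSON event instead of regexing it."""
    data = json.loads(event)
    return data["operationPath"], data["operationType"], data["region"]
```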
Hello Experts, I am looking at an alert that uses a join to match a work_center with a work order. I am wondering which records, in a stream of records, the join is looking at to get that result. Is there a way to get the latest result? To explain further: the work center in some cases will change based on where work is being completed, so I would like to grab the latest result when the alert runs. The current code I am using gives us a way to compare the work center in source="punch" vs the current stream of data. I am wondering if I can further manipulate that subsearch to look at the last result in source="punch". I tried a couple of things but didn't have any luck. I'm not super familiar with joins in my normal work. | join cwo type left [search source=punch | rename work_center as position]
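For reference, SPL's join takes the first matching subsearch row by default (max=1), so one common fix is to make the subsearch itself return only the latest punch record per cwo, e.g. `[search source=punch | stats latest(work_center) as position by cwo]`. The intended "keep only the latest record per key, then left-join" behaviour looks like this in plain Python (field names assumed from the post):

```python
def latest_by_key(records, key="cwo", ts="_time"):
    """Keep only the most recent record per key - the effect of collapsing
    the subsearch to one latest row per cwo."""
    latest = {}
    for r in records:
        if r[key] not in latest or r[ts] > latest[r[key]][ts]:
            latest[r[key]] = r
    return latest

def left_join(events, punch):
    """Left-join each event with the latest punch record for its cwo."""
    lookup = latest_by_key(punch)
    return [{**e, **lookup.get(e["cwo"], {})} for e in events]
```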
Additional info: we searched the error and found: "The maximum number of concurrent running jobs for a historical scheduled search has been reached." We have an export Python script running, and the error suggests that this Python export script is what is causing problems, possibly with concurrent jobs.
Hi All, I have many indexes and sourcetypes but I don't know which one I have to use to search for a specific IP address's traffic with a port. Please guide me on how I can identify and use the existing indexes and sourcetypes to analyze particular traffic.
Hello @gcusello, Firewalld is running, and I do not see anything disabling the web interface in server.conf. The "trustedIP" setting is commented out, but I do not know if that matters.
Splunk Core and Splunk SOAR both have concepts of multivalue fields but treat them differently. Splunk SOAR expects the multivalue fields to be split out into individual artifacts. It would not be unusual to have hundreds of artifacts in a single container, each artifact being relatively small. We also have the artifact labeling system to help differentiate artifacts. My recommendation would be to embrace the option of sending over multivalue fields as individual artifacts. If there is no mechanism to split the multivalue fields before ingestion, then you can use a preprocess playbook to grab the multivalue field using a utility like "list_demux" and split the output, and then create individual artifacts using the "artifact_create" utility. This will make it easier for all future playbooks to grab the artifact values from that container. You could use "list_demux" to split a multivalue field without creating new artifacts, but then you would need to use that utility in every playbook, and that would not be ideal. I hope that helps! Let me know if you need additional clarification.
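To make the demux idea concrete, here is a rough Python sketch of splitting one artifact's multivalue CEF field into per-value artifacts (the artifact shape here is a simplified assumption, not the exact SOAR data model):

```python
def demux_artifacts(artifact, field):
    """Split one artifact with a multivalue CEF field into one artifact per
    value - the idea behind the list_demux / artifact_create utilities above."""
    values = artifact["cef"].get(field, [])
    if not isinstance(values, list):
        values = [values]  # single value: treat as a one-element list
    out = []
    for v in values:
        out.append({
            "label": artifact.get("label", "event"),
            "cef": {**artifact["cef"], field: v},  # copy CEF, single value only
        })
    return out
```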
We have a standalone environment and are getting the error "the percentage of non-high priority searches skipped (61%) over the last 24 hours is very high and exceeded the red threshold (20%) on this splunk instance." The environment: the customer has a standalone instance where we created an app with a savedsearch script that pulls all indexed events every hour and bundles them into a .json file; the customer then compresses it into a .gz file for transfer into our production environment. What we are seeing is this skipped-searches message, and when we check the specific job, we see that every time it runs there are 2 jobs: the export app started by Python calling the script, and then the actual search job with our SPL search. Both jobs are 1 second apart and stay in the jobs page for 10 minutes each; the customer states that it takes ~2.5 minutes for this job to complete. The Python script seems to stay longer for some reason, even after its job finishes. Not sure how to proceed: we had it scheduled every 4 hours and it was doing the same thing, so we lowered it to 1 hour, with no difference. Our search looks at the last completed .json file's epoch time and the current epoch time to grab the events in that range, so I'm not sure if that message is a false positive caused by the way we are catching events (timestamps). How can I remove the skipped-searches error message? Tips?
Hi, Thanks for the reply. Yes, I have some indexes and sourcetypes but I don't know how to choose the index and sourcetype for this IP address. Thanks,
Hello, I have a search as shown below which gives me the start time (start_run), end time (end_run) and duration when the value of (ValueE) is greater than 20 for the Instrument (my_inst_226). I need to get the values (ValueE) from 11 other Instruments for the duration of my_inst_226 while ValueE is greater than 20. I would like to use "start_run" and "end_run" to find the value of (ValueE). I'm thinking that "start_run" and "end_run" would be variables that I can use when searching ValueE for my 11 other Instruments, but I am stuck on how I can use "start_run" and "end_run" for the next stage of my search.   index=my_index_plant sourcetype=my_sourcetype_plant Instrument="my_inst_226" | sort 0 Instrument _time | streamstats global=false window=1 current=false last(ValueE) as previous by Instrument | eval current_over=if(ValueE > 20, 1, 0) | eval previous_over=if(previous > 20, 1, 0) | eval start=if(current_over=1 and previous_over=0,1,0) | eval end=if(current_over=0 and previous_over=1,1,0) | where start=1 OR end=1 | eval start_run=if(start=1, _time, null()) | eval end_run=if(end=1, _time, null()) | filldown start_run end_run | eval run_duration=end_run-start_run | eval check=_time | where end=1 | streamstats count as run_id | eval earliest=strftime(start_run, "%F %T") | eval latest=strftime(end_run, "%F %T") | eval run_duration=tostring(run_duration, "duration") | table run_id earliest latest start_run end_run run_duration current_over previous_over end Instrument ValueE   Any and all tips, help and advice will be gratefully received.
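The start/end run detection can be prototyped outside SPL to check the logic. A minimal Python sketch (times as epoch numbers, threshold of 20 as in the search), plus a helper showing how each (start_run, end_run) pair can window another instrument's samples:

```python
def find_runs(samples, threshold=20):
    """Mirror the streamstats transition logic: return (start, end, duration)
    for each interval where the value crosses above `threshold` and later
    drops back. `samples` is a time-ordered list of (time, value) pairs."""
    runs, start = [], None
    for t, v in samples:
        if v > threshold and start is None:
            start = t                           # first sample over threshold
        elif v <= threshold and start is not None:
            runs.append((start, t, t - start))  # dropped back: close the run
            start = None
    return runs

def in_window(samples, start, end):
    """Filter another instrument's samples down to one run's time window."""
    return [(t, v) for t, v in samples if start <= t <= end]
```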
Hi @jmrubio, did you disable the local firewall on this server? Check whether you disabled the web interface in server.conf. Ciao. Giuseppe
Hi @DaClyde, the requested reference hardware is 12 CPUs and 12 GB RAM for both servers if you don't have a Premium App (ES or ITSI), and it depends on the number of users and scheduled searches; these resources must be dedicated, not shared. In addition, the bottleneck of every Splunk infrastructure is storage performance: Splunk requires at least 800 IOPS. You can analyze indexing and search performance using the Monitoring Console app. Then you could transform any real-time searches into scheduled searches. Ciao. Giuseppe