All Posts

You might also want to give logcraft.io a try. You can version knowledge objects individually and deploy them wherever you want; it's a repository for Splunk.

Hi, if I select "MM", how do I get the entities associated with that particular domain (i.e. material, supplied material)? My second drop-down has static values which then fetch the results for 4 different queries in parallel when I select from the data entity. Then how do I provide a multiselect option for (material and supplied material)?

Hi @ITWhisperer Not sure what you mean by re-using the field name severity, as Column_1 is the object and Column_2 is the severity. Here is what some output looks like for

| table tool host object_class object severity parameter value message support_group

Ignore the first column (I put that in for explanation purposes):

row | tool | host | object_class | object | severity | parameter | value | message | support_group
1 | Tool | ServerA | OS_INCIDENT | ServerA.zabbix:1380217:Warning | WARNING | OS_ISSUE_NUM | 1380217 | ServerA - Disk space is at 80% | OS Support
2 | Tool | ServerA | OS_INCIDENT | ServerA.zabbix:1380217:Critical | CRITICAL | OS_ISSUE_NUM | 1380217 | CALL OUT - ServerA - Disk Space is at 90% | OS Support
3 | Tool | ServerA | OS_INCIDENT | ServerA.zabbix:1380217:Clear | CLEAR | OS_ISSUE_NUM | 1380217 | ServerA - Disk Space Clear | OS Support
4 | Tool | ServerA | OS_INCIDENT | ServerA.zabbix:1380217:Warning | CLEAR | OS_ISSUE_NUM | 1380217 | ServerA - Disk Space Clear | OS Support
5 | Tool | ServerA | OS_INCIDENT | ServerA.zabbix:1380217:Critical | CLEAR | OS_ISSUE_NUM | 1380217 | CALL OUT - ServerA - Disk Space Clear | OS Support

What I am currently getting is rows 1, 2 and 3; however, what I need is rows 1 and 2, and when I get a result like row 3, change it to be rows 4 and 5. Hope that makes sense.

Hello Splunk community, I'm in the process of installing Splunk for the first time on a Windows server. I've followed the official installation guide, but I've encountered an issue during the installation process. After running the installer, I received an error message that says 'Error 123: The filename, directory name, or volume label syntax is incorrect.' I've double-checked the installation path and made sure there are no special characters, but I still can't seem to get past this error. Has anyone else experienced this issue during installation? What steps can I take to resolve it and successfully install Splunk on my Windows server? Any help would be greatly appreciated. Thank you!
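
Error 123 generally means some path or filename handed to the installer is malformed, so one thing worth trying is running the MSI from an elevated command prompt with an explicit, simple install path and a verbose log. INSTALLDIR and AGREETOLICENSE are documented Splunk installer flags; the MSI filename below is a placeholder:

msiexec.exe /i splunk-x64-release.msi INSTALLDIR="C:\Splunk" AGREETOLICENSE=Yes /L*v C:\splunk_install.log /quiet

The verbose log should show exactly which filename or directory the installer is rejecting.
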
For anyone using Hurricane Labs' "Broken Hosts" app (https://splunkbase.splunk.com/app/3247), note that the latest version, 4.2.2, appears to have a very minor but breaking bug. The file /default/savedsearches.conf has a stanza for the "Broken Hosts Alert - by contact" alert. Depending on how you use the app, that potentially drives your entire alerting mechanism. Two lines in that file (121 & 130) wrap a built-in search macro in double quotes that should not be there:

| fillnull value="`default_expected_time`" lateSecs

should be:

| fillnull value=`default_expected_time` lateSecs

The effect of the quotes is to assign the literal string "`default_expected_time`" to the lateSecs variable, rather than expanding the macro to whatever default integer you configured. Removing those double quotes from both lines seems to fix the issue. I've also raised an issue on the Hurricane Labs GitHub page below, though activity there is pretty stale and I'm not sure if anyone is looking at it: https://github.com/HurricaneLabs/brokenhosts/issues/3

@SmeetsS, if you have a moment could you clarify this? I'm unfamiliar with using custom JavaScript in Splunk. I have a bunch of dashboards with this issue in the default launcher app. I created a nopopup.js in etc/apps/launcher/appserver/static. I then modified the dashboard statement in the source to:

<dashboard version="1.1" theme="dark" script="nopopup.js">

But this doesn't seem to work. Am I missing something?

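For context, a script referenced that way is loaded through RequireJS once the dashboard renders, so the file itself has to do the work. A minimal sketch of what a nopopup.js could contain, assuming the goal is to remove popup/modal messages after the dashboard is ready (the selectors below are assumptions and will vary by Splunk version):

// nopopup.js - minimal sketch; the selectors are assumptions
require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // Runs once the dashboard has finished rendering.
    $('.modal, .modal-backdrop, .popdown-dialog').remove();
});

Also note that Splunk caches static app assets, so after adding or changing the file you typically need to restart Splunk or refresh the asset cache (e.g. via the /_bump endpoint) before the script is picked up.
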
You are re-using field name severity - you also already seem to have values extracted to fields. What do you have for this?

| table tool host object_class object severity parameter value message support_group

Perhaps this will help.

((index="wss_desktop_os") (sourcetype="support_remedy")) ASSIGNED_GROUP="DESKTOP_SUPPORT" STATUS_TXT IN ("ASSIGNED", "IN PROGRESS", "PENDING") earliest=-1d@d
``` Convert REPORTED_DATE to epoch form ```
| eval REPORTED_DATE2=strptime(REPORTED_DATE, "%Y-%m-%d %H:%M:%S")
``` Keep events reported more than 12 hours ago, so they are due in < 12 hours ```
| where REPORTED_DATE2 <= relative_time(now(), "-12h")
| eval MTTRSET = round((now()-REPORTED_DATE2)/3600)
| dedup ENTRY_ID
| stats last(REPORTED_DATE) AS Reported, values(ASSIGNEE) AS Assignee, values(STATUS_TXT) AS Status, values(MTTRSET) AS MTTR by ENTRY_ID

Hi @Kira.Huang This feature is only available in the SaaS controller. It's not available in the on-prem controller that you seem to be using, unfortunately. https://docs.appdynamics.com/appd/21.x/latest/en/application-monitoring/business-transactions/monitor-the-performance-of-business-transactions/automated-transaction-diagnostics It seems it is enabled in your environment. Please set the microservice.snapshot.analysis.enabled flag to false from the admin.jsp page to disable this feature, so that you stop seeing this message. Thanks, Satbir Singh

Hi @ITWhisperer, Thanks for your reply. I have taken your code and modified it with the correct columns:

| table tool host object_class object severity parameter value message support_group
| rex field=object "^([^:]*:){3}(?<severity>\w*)"
| eventstats values(severity) as AllSeverities by host "OEM_ISSUE" value
| eval AllSeverities=if(severity="Clear",AllSeverities,severity)
| mvexpand AllSeverities
| eval object=host.":OEM_ISSUE:".value.":".AllSeverities
| fields object severity
| dedup object severity

I am getting 2 records for the first clear; however, instead of the 2 rows showing as

serverA:zabbix:123456:Warning Clear
serverA:zabbix:123456:Critical Clear

I am getting

serverA:zabbix:123456:Clear Clear
serverA:zabbix:123456:Critical Critical

After the first clear severity, I am getting only one record, for example (different incident id and server):

serverB:zabbix:123457:Clear Clear

Any help is greatly appreciated!

Hi All, Is there any way to enable and disable Splunk alerts automatically based on the log source? E.g. we have Site 1 and Site 2 in an active-passive setup.

Case 1: Site 1 is active and Site 2 is passive; all Site 1 alerts should get enabled automatically. We can search for the Site 1 host as the condition to enable the alerts.

Case 2: Site 2 is active and Site 1 is passive; all Site 2 alerts should get enabled automatically. We can search for the Site 2 host as the condition to enable the alerts.

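One approach: saved searches can be enabled and disabled through Splunk's REST API (the saved/searches endpoint and its disabled attribute are standard). A minimal sketch of a scripted toggle, assuming the alerts are named with a "Site1 - "/"Site2 - " prefix and that the active site is determined elsewhere; the host, credentials, and naming convention below are assumptions:

import requests

SPLUNK = "https://splunk.example.com:8089"  # assumption: your management host/port
AUTH = ("admin", "changeme")                # assumption: replace with real credentials or a token

def set_alerts(prefix, disabled):
    # List saved searches whose names start with the site prefix.
    resp = requests.get(
        SPLUNK + "/services/saved/searches",
        params={"search": 'name="%s*"' % prefix, "output_mode": "json", "count": 0},
        auth=AUTH,
        verify=False,  # assumption: self-signed certificate
    )
    resp.raise_for_status()
    for entry in resp.json()["entry"]:
        # Flip the 'disabled' attribute on each matching alert.
        requests.post(
            SPLUNK + entry["links"]["edit"],
            data={"disabled": int(disabled)},
            auth=AUTH,
            verify=False,
        ).raise_for_status()

# assumption: active_site is derived from a heartbeat search against the site hosts
active_site = "Site1"
set_alerts("Site1 - ", disabled=(active_site != "Site1"))
set_alerts("Site2 - ", disabled=(active_site != "Site2"))

This could run on a schedule (cron or a scripted input); the same toggle is also possible with the splunk-sdk Python client.
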
Hi @Pavan.Jadhav, Since this post is over 3 years old, I highly recommend re-asking this question as its own post so it gets more visibility from the community.

We don't know your data. Ideally, your site has a data dictionary with this information, but that's rare. Consult your Splunk admin about that. You can use the metadata command to get a list of sourcetypes, or use this query:

| tstats count where index=* by index, sourcetype

Then take educated guesses about which sourcetype is more likely to contain the data you seek. Search that sourcetype to verify your guess.

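For reference, a minimal version of the metadata approach mentioned above, sorted so the busiest sourcetypes surface first:

| metadata type=sourcetypes index=*
| sort -totalCount
| table sourcetype totalCount firstTime lastTime

The firstTime/lastTime columns also help show which sourcetypes are actively receiving data, which narrows down where live traffic is landing.
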
Splunk queries not returning anything in table. I see events matching for these queries but nothing under the 'Statistics' section.

1. index=address-validation RESP_MARKER
| rex field=log "\"operationPath\"\:\"(?<path>\w+).*\"operationType\"\:\"(?<type>\w+).*\"region\"\:\"(?<reg>\w+).*"
| table path, type, reg

2. index=club-finder RESP_MARKER
| rex field=log "\"operationPath\"\:\"\/(?<path>\w+).*\"operationType\"\:\"(?<type>\w+).*\"region\"\:\"(?<reg>\w+).*\"totalTime\"\:(?<timeTaken>\w+)"
| table type, path, timeTaken, reg

Hello Experts, I am looking at an alert that uses a join to match a work_center with a work order. I am wondering which records, in a stream of records, the join is looking at to get that result. Is there a way to get the latest result? To explain further, the work center will in some cases change based on where the work is being completed, so I would like to grab the latest result when the alert runs. The current code I am looking at gives us a way to compare the work center in source="punch" against the current stream of data. I am wondering if I can further manipulate that subsearch to look at the last result in source="punch". I tried a couple of things but didn't have any luck; I'm not super familiar with joins in my normal work.

| join type=left cwo [search source=punch | rename work_center as position]

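By default, join keeps only one matching subsearch row per key, and which row that is depends on the subsearch's output order, so one way to get the most recent work center is to collapse the subsearch to the latest value per work order before joining. A sketch, assuming cwo is the join key and _time is the punch event time:

| join type=left cwo
    [ search source=punch
      | stats latest(work_center) as position by cwo ]

stats latest() takes the field value from the most recent event by _time, so each cwo joins against its newest work center.
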
Additional info: we searched the error and found this: "The maximum number of concurrent running jobs for a historical scheduled search has been reached." We have an export Python script running, and the error suggests that this Python export script is what is causing the problems, possibly with concurrent jobs.

Hi All, I have many indexes and sourcetypes, but I don't know which ones I have to use to search for traffic for a specific IP address and port. Please guide me on how I can identify and use the existing indexes and sourcetypes to analyze particular traffic.

Hello @gcusello, Firewalld is running, and I do not see anything disabling the web interface in server.conf. The "trustedIP" setting is commented out, but I do not know if that matters.

Splunk Core and Splunk SOAR both have concepts of multivalue fields but treat them differently. Splunk SOAR expects multivalue fields to be split out into individual artifacts. It would not be unusual to have hundreds of artifacts in a single container, each artifact being relatively small. We also have the artifact labeling system to help differentiate artifacts. My recommendation would be to embrace the option of sending over multivalue fields as individual artifacts. If there is no mechanism to split the multivalue fields before ingestion, then you can use a preprocess playbook to grab the multivalue field using a utility like "list_demux", split the output, and then create individual artifacts using the "artifact_create" utility, as sketched below. This will make it easier for all future playbooks to grab the artifact values from that container. You could use "list_demux" to split a multivalue field without creating new artifacts, but then you would need to use that utility in every playbook, and that would not be ideal. I hope that helps! Let me know if you need additional clarification.

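As a rough illustration of that preprocess step, here is a sketch in SOAR playbook Python, assuming the multivalue field arrives as a comma-separated string in a CEF field; the field name, label, and delimiter are assumptions, while phantom.collect2 and phantom.add_artifact are standard playbook APIs:

import phantom.rules as phantom

def split_multivalue(container, field_name="destinationAddress"):
    # Collect the multivalue CEF field from the container's artifacts.
    results = phantom.collect2(
        container=container,
        datapath=["artifact:*.cef.{}".format(field_name)],
    )
    for (value,) in results:
        if not value:
            continue
        # One new artifact per individual value - the same shape that a
        # list_demux + artifact_create pipeline would produce.
        for item in str(value).split(","):
            phantom.add_artifact(
                container=container,
                raw_data={},
                cef_data={field_name: item.strip()},
                label="split",
                name="{} value".format(field_name),
                severity="medium",
                artifact_type="network",
            )

Downstream playbooks can then address each value as an ordinary single-value artifact datapath instead of re-splitting the field every time.
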
We have a standalone environment and are getting the error "the percentage of non-high priority searches skipped (61%) over the last 24 hours is very high and exceeded the red threshold (20%) on this splunk instance."

The environment: the customer has a standalone instance where we created an app with a saved-search script that pulls all indexed events every 1 hour and bundles them into a .json file; the customer then compresses it into a .gz file for transfer into our production environment.

What we are seeing is this skipped-searches message, and when we check the specific job, we see that every time it runs there are 2 things that come up as jobs: the export app started by Python calling the script, and then the actual search job activity with our SPL search. Both jobs are 1 second apart and stay on the jobs page for 10 minutes each; the customer states that it takes ~2.5 minutes for this job to complete. The Python script seems to stay longer for some reason, even after its job is done.

Not sure how to proceed, since we had it scheduled every 4 hours and it was doing the same thing, so we lowered it to 1 hour; no difference. Our search looks at the last completed .json file's epoch time and the current epoch time to grab the events in that range, so I'm not sure if that message is something like a false positive caused by the way we are catching events (timestamps). How can I remove the skipped-searches error message? Tips?

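If the instance genuinely has CPU headroom, one knob worth checking is search concurrency, since skipped scheduled searches usually mean the scheduler is hitting its concurrency cap. A sketch of the relevant limits.conf settings; these are real settings, but the values below are illustrative, not recommendations:

# limits.conf on the standalone instance
[search]
# overall concurrent-search ceiling = base_max_searches + max_searches_per_cpu * CPUs
base_max_searches = 6
max_searches_per_cpu = 1

[scheduler]
# share of the overall ceiling the scheduler may use (default is 50)
max_searches_perc = 75

Note that jobs lingering on the Jobs page for ~10 minutes is normally just the artifact TTL and does not consume concurrency slots after the search finishes; what counts against the cap is how long the searches, including the one dispatched by the Python export script, actually run.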