All Posts

Hi, I am trying to implement a dashboard in Splunk that presents data based on Jenkins events. I use the Splunk App for Jenkins and the Splunk Jenkins plugin to send the event data. The idea of the dashboard is to display data about the checks that are actively running for Pull Requests in the associated GitHub repository. The checks are designed in Jenkins so that a trigger job calls downstream jobs. In the dashboard, I'd like to present basic info about the pull request and the test results coming from the downstream jobs. Unfortunately, the event for the trigger job does not provide info about its downstream jobs, so I collect it like this:

index="jenkins_statistics" event_tag=job_event build_url="job/test_trigger/10316*"
| eventstats latest(type) as latest_type by build_url, host
| where latest_type="started"
| eval full_build_url="https://"+host+"/"+build_url
| eval started=tostring(round((now() - strptime(job_started_at, "%Y-%m-%dT%H:%M:%S.%N"))/60,0)) + " mins ago"
| append [ search index="jenkins_statistics" event_tag=job_event upstream="job/test_trigger/10316*" trigger_by="*test_trigger*"]

where the subsearch is appended to provide information about the downstream jobs. I checked https://plugins.jenkins.io/splunk-devops/#plugin-content-i-am-using-upstreamdownstream-jobs-how-can-i-consolidate-the-test-results-to-root-trigger-job but it does not fit my case, as the tests are represented by downstream jobs and I'd like to have their actual data so that only the failed ones are displayed in the dashboard.

My plan was to create a custom Python command (https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/createcustomsearchcmd/) which will:
- parse the data from the downstream job events
- create new fields in the trigger job event based on the above
- finally, return only the trigger job event

Having that, I would have all the interesting data in one event per Pull Request and could format the table at the end. Unfortunately, it does not work as I wanted: it makes Splunk hang even for a single Pull Request (1 trigger event + 20 downstream events). The Python script iterates over the events twice (first to process the downstream jobs, and second to find the trigger event, add the new fields to it, and return it). I am afraid this is not the best approach. An example script, based on https://github.com/splunk/splunk-app-examples/blob/master/custom_search_commands/python/customsearchcommands_app/bin/filter.py, is presented below:

#!/usr/bin/env python
# coding=utf-8
#
# Copyright 2011-2015 Splunk, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"): you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import sys

sys.path.append(os.path.join('opt', 'splunk', 'etc', 'apps', 'splunk_app_jenkins', 'bin', 'libs'))

# splunklib
from splunklib.searchcommands import dispatch, EventingCommand, Configuration, Option


@Configuration()
class ProcessTestsCommand(EventingCommand):
    """ Filters and updates records on the events stream.

    ##Example :code:`index="*" | processtests`
    """

    @staticmethod
    def get_win(tests_list, pr_num):
        for test in tests_list:
            if test.get('job_name') == 'build-win' and test.get('upstream') == f"job/test_trigger/{pr_num}":
                return test.get('job_result') if test.get('job_result') else ''

    @staticmethod
    def get_ubuntu(tests_list, pr_num):
        for test in tests_list:
            if test.get('job_name') == 'build-ubuntu' and test.get('upstream') == f"job/test_trigger/{pr_num}":
                return test.get('job_result') if test.get('job_result') else ''

    @staticmethod
    def get_failed_tests(tests_list, pr_num):
        failed_tests = []
        failed_string = ''
        for test in tests_list:
            if test.get('upstream') == f"job/test_trigger/{pr_num}" and test.get('job_result') != 'SUCCESS':
                failed_tests.append(test)
        for failed_test in failed_tests:
            name = failed_test.get('job_name').split('/')[-1]
            status = failed_test.get('job_result')
            failed_string += f"{name} {status}\n"
        return failed_string

    def transform(self, records):
        # first pass: collect the downstream job events
        tests = []
        for record in records:
            if record.get('job_name') != 'test_trigger':
                tests.append(record)
        # second pass: enrich the trigger job event and return only that
        for record in records:
            if record.get('job_name') == 'test_trigger':
                pr_num = record.get('build_number')
                build_win = self.get_win(tests, pr_num)
                build_ubuntu = self.get_ubuntu(tests, pr_num)
                failed_tests = self.get_failed_tests(tests, pr_num)
                record['win'] = build_win
                record['ubuntu'] = build_ubuntu
                record['failed_tests'] = failed_tests
                yield record


dispatch(ProcessTestsCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The goal is to have all the data (from the trigger and downstream jobs) in a single event representing the trigger job. Do you have any ideas what could be a better way to achieve that? I also thought about dropping the subsearch and collecting the downstream job data via the GitHub or Jenkins API inside the Python script, but this is not preferred (the API may return malformed data, and I could run into API rate limits). Appreciate any help.
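One possible reason for the hang is that transform() may receive records as a stream/generator, in which case the second for loop over records could see no events at all. Below is a minimal single-pass sketch of the same buffering idea, not a tested implementation: it assumes the same field names as above (job_name, upstream, job_result, build_number), fills only the hypothetical failed_tests field (win and ubuntu could be filled the same way), and assumes the events for the searched Pull Requests fit in memory.

#!/usr/bin/env python
import os
import sys

sys.path.append(os.path.join('opt', 'splunk', 'etc', 'apps', 'splunk_app_jenkins', 'bin', 'libs'))

from splunklib.searchcommands import dispatch, EventingCommand, Configuration


@Configuration()
class ProcessTestsSketchCommand(EventingCommand):
    """Sketch: buffer the stream once, then emit only enriched trigger events."""

    def transform(self, records):
        triggers, tests = [], []
        for record in records:  # consume the stream exactly once
            if record.get('job_name') == 'test_trigger':
                triggers.append(record)
            else:
                tests.append(record)

        for trigger in triggers:
            pr_num = trigger.get('build_number')
            downstream = [t for t in tests
                          if t.get('upstream') == f"job/test_trigger/{pr_num}"]
            # keep only the failed downstream jobs, formatted as "name status" lines
            trigger['failed_tests'] = "\n".join(
                f"{t.get('job_name', '').split('/')[-1]} {t.get('job_result')}"
                for t in downstream if t.get('job_result') != 'SUCCESS')
            yield trigger


dispatch(ProcessTestsSketchCommand, sys.argv, sys.stdin, sys.stdout, __name__)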
The results you shared earlier show NB as 0, which is odd, since that doesn't appear to be what you should be getting given the events you have shared and the search you are apparently using. Do any of your events actually have 0 as the trade_id? Have you examined the events present in the results to see why they might be parsed in a way that gives NB as 0? At the end of the day, the searches being suggested work with the data you have shared, so if they are not working as expected, it is likely to be because of the actual data you are using. For us to help you further, please share more of the data that is not working.
Sure. Also, all the results are in a single row; ideally I would want them in separate rows. What's important to note is that I did get all the columns in the fields command, but they are empty (the ones I stated, e.g. NB onwards).

Time: 03/01/2025 16:05:37.609
Event: 2025-01-03 16:05:37.609, system="murex", id="646523556", sky_id="646523556", trade_id="32248978", event_id="100120362", mx_status="live", operation="nooperation", action="modification", tradebooking_sgp="2025/01/02 02:01:23.0000", eventtime_sgp="2025/01/02 02:01:21.3800", sky_to_mq_latency="-1.-620", portfolio_name="test_oprtoflio", portfolio_entity="test_entity", trade_type="VanillaSwap"

Time: 03/01/2025 11:05:39.000
Event: 32248978;LIVE;0.00001000;AUD;IRD;CD;;test_prtoflio;CAMBOOYAPTSYDAU
The by clause will only group events with exactly the same src_mac value - this includes any trailing or leading spaces, punctuation, etc. Since MAC addresses are potentially sensitive information, which you might not wish to share, are there any differences in the way the MAC addresses are stored in the different events (apart from the upper/lower case you already mentioned)?
Please share some examples of the events which contain these fields that are not being returned.
These fields are missing: TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
With the assistance of this forum, I managed to combine the events of two sourcetypes and run stats to correlate the fields on a single shared field between the two sourcetypes. The problem is, when running stats, it creates a table with mostly blank spots and only a few rows with all columns filled. The search is meant to look at switch logs and pull connection data including the MAC, the IP, the switch name, and the port id. Secondly, the search pulls from a sourcetype containing all devices that have been active on the network, where it pulls the hostname and MAC for each device. I then use stats to match those results up on the shared MAC address field; the only difference between them is that the mac field from one of the sourcetypes is in lowercase vs upper. The end goal is to have a table showing me a device's name, its IP, its MAC, and which switch and port it connected to. As it is, the search does appear to work; however, because of how it's written, the resulting table is filled with blank spots where the events from each source don't have the fields from the other source. How can I change things so it only shows rows where it has an entry for each column? Right now, based on other posts I've seen on this forum, I'm considering whether I may be able to use eval and create fields like src_mac-{index} or something like that, maybe with the inclusion of the coalesce command. Is this the right course of action, or is there a better way? The only other consideration is speed; unfortunately, there's a very good chance I may end up searching millions of events. I'm trying to find ways to restrict the search, but even if I manage to, it's still going to be a lot. I'm not trying to get an instant search, but getting it to complete in less than thirty seconds as opposed to 3+ minutes would be ideal. Thank you

(index="routerswitch" action_type IN(Failed_Attempts, Passed_Attempts) src_mac=* SwitchName=switch1 Port_Id=GigabitEthernet1/0/21 earliest=-30d) OR (index=connections source="/var/devices.log" src_ip=172.* earliest=-30d src_mac=*)
| fields src_mac dhcp_host_name src_ip IP_Address SwitchName Port_Id
| eval src_mac=upper(src_mac)
| stats values(dhcp_host_name) as hostname values(src_ip) as IP values(IP_Address) as net_IP values(SwitchName) as switch values(Port_Id) as portID by src_mac
After upgrading from 9.3.1 to 9.4.0 on the Windows platform, there is a warning showing 41 files that did not match. Running the file validation produces the following output. How can I resolve the warning?

C:\Program Files\Splunk\bin>splunk.exe validate files
Validating installed files against hashes from 'C:\Program Files\Splunk\splunk-9.4.0-6b4ebe426ca6-windows-x64-manifest'
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-console-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-datetime-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-debug-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-errorhandling-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l1-2-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l2-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-handle-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-heap-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-interlocked-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-libraryloader-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-localization-l1-2-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-memory-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-namedpipe-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processenvironment-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processthreads-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processthreads-l1-1-1.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-profile-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-rtlsupport-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-string-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-synch-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-synch-l1-2-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-sysinfo-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-timezone-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-util-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-conio-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-convert-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-environment-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-filesystem-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-heap-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-locale-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-math-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-multibyte-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-private-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-process-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-runtime-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-stdio-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-string-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-time-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-utility-l1-1-0.dll': The system cannot find the file specified.
Could not open 'C:\Program Files\Splunk\bin/ucrtbase.dll': The system cannot find the file specified.
You can also do it more elegantly by hiding the Populating message only while the search is in progress, using <progress> and <done> clauses in the search, e.g.

<form version="1.1" theme="light">
  <label>populating</label>
  <init>
    <set token="input_message_display">none</set>
  </init>
  <fieldset submitButton="false">
    <input type="time" token="time_range" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
      <change>
        <set token="input_message_display">none</set>
      </change>
    </input>
    <input id="user_input" type="dropdown" token="tok_user" searchWhenChanged="true">
      <label>User</label>
      <search>
        <query>index=_audit | stats count by user</query>
        <earliest>$time_range.earliest$</earliest>
        <latest>$time_range.latest$</latest>
        <progress>
          <set token="input_message_display">none</set>
        </progress>
        <done>
          <set token="input_message_display"></set>
        </done>
      </search>
      <fieldForLabel>user</fieldForLabel>
      <fieldForValue>user</fieldForValue>
    </input>
  </fieldset>
  <row depends="$AlwaysHideCSS$">
    <panel>
      <html>
        <style>
          #user_input .splunk-choice-input-message{
            display: $input_message_display$;
          }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_audit user=$tok_user$ | stats count by user</query>
          <earliest>$time_range.earliest$</earliest>
          <latest>$time_range.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

This will hide the populating message every time you change the time range that the dropdown depends on, then re-enable it when the search is finished - so that if you get no results, it will say 'Search produced no results'.
That is the splunk-choice-input-message class - so you can do it like this - note that this will hide it always.

<form version="1.1" theme="light">
  <label>populating</label>
  <fieldset submitButton="false">
    <input type="time" token="time_range" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input id="user_input" type="dropdown" token="tok_user" searchWhenChanged="true">
      <label>User</label>
      <search>
        <query>index=_audit | stats count by user</query>
        <earliest>$time_range.earliest$</earliest>
        <latest>$time_range.latest$</latest>
      </search>
      <fieldForLabel>user</fieldForLabel>
      <fieldForValue>user</fieldForValue>
    </input>
  </fieldset>
  <row depends="$AlwaysHideCSS$">
    <panel>
      <html>
        <style>
          #user_input .splunk-choice-input-message{
            display: none !important;
          }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_audit user=$tok_user$ | stats count by user</query>
          <earliest>$time_range.earliest$</earliest>
          <latest>$time_range.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Hi! I was wondering if anybody knows if there's a way to hide the "populating..." text under this drillthrough on my dashboard? 
Thanks for clarifying. I understand you want to mark your ingested data at the time of ingest, so that it remains constant regardless of any changes made to the lookup afterwards. As @richgalloway has said, it should be possible - I am unsure of the sequence of INGEST_EVAL statements when there is more than one. Have you tried putting the json_extract AND the lookup in a single statement, as in Rich's linked example, to see if that works, at least for one of the fields?
Hi All, I am rather hoping someone can assist me in creating a search that can be used for an alert to detect when a connection to MQ fails to re-connect for a system I am supporting. I am new to Splunk and although I have found posts related to this topic, I have so far not been able to adapt them to my particular scenario. I was hopeful the search below would suffice, but then realised it only works as I want it to if the MQ connection actually drops; otherwise, the count evaluates as 0 and I end up with false alerts.

index="sepa_instant" source="D:\\Apps\\Instant_Sepa_01\\log\\ContinuityRequester*"
| transaction startswith="connection_down_error raised" maxspan=4m
| search "-INFOS- {3} QM reconnected"
| stats count
| where count="0"

Any assistance provided would be very much appreciated.
Yes, one can use a lookup, but it needs INGEST_EVAL, not the "normal" lookup definitions, which work only at search time.
I've had this issue before with a custom app and tried recreating it from scratch to no avail. However, changing the locale in the URL (e.g. en-US to en-GB) somehow got the inputs page to load. Perhaps it will work for you.
Is there any reason why you aren't creating a token from the interface under Settings->Users and Authentication->Tokens, and then using it to call the API? That would be much more reliable than using a single session key.
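For reference, a rough sketch of calling the search endpoint with such a token from Python (not tested against your environment; the host, token, and query below are placeholders you would replace with your own):

import requests

# Placeholder values - substitute your own search head and token.
HOST = "https://splunk.example.com:8089"
TOKEN = "<token created under Settings -> Users and Authentication -> Tokens>"

# Authentication tokens are sent as a Bearer header, rather than the
# 'Splunk <sessionKey>' header used with session keys.
response = requests.post(
    HOST + "/services/search/jobs",
    headers={"Authorization": "Bearer " + TOKEN},
    data={"search": "search index=_internal | head 5", "output_mode": "json"},
    verify=False,  # only if certificate validation is not set up
)
response.raise_for_status()
print(response.json()["sid"])  # the job id to poll for results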
Apologies if this is in the wrong place. I'm using the Splunk REST API to connect and run search requests through a Python script. I sadly don't have access to the SDK, so I have to use the REST API. The issue I'm running into is that after the initial authentication and login, I get back the session key to use for subsequent API calls. The subsequent API calls run into a 401 error more often than not, and my current working solution is to use a while loop to keep sending the request until it works. The code looks like the below. I set a delay so that an API call happens every few seconds, but I can't figure out why it will usually fail and then randomly choose to work.

import time
import requests

done = False
while not done:
    r = requests.post(host + '/services/search/jobs/',
                      headers={'Authorization': 'Splunk %s' % Session_key},
                      data={'search': query},
                      verify=False)
    if r.status_code == 201:
        done = True
    time.sleep(5)  # delay between attempts
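A rough sketch of one way to make that loop more defensive - re-authenticating against /services/auth/login whenever a 401 comes back, instead of resending the same session key. This is only an illustration; the host, credentials, query, retry count, and delay are placeholder assumptions:

import time
import xml.etree.ElementTree as ET

import requests

HOST = "https://splunk.example.com:8089"   # placeholder search head
USERNAME = "api_user"                      # placeholder credentials
PASSWORD = "changeme"


def login():
    """Fetch a fresh session key from /services/auth/login."""
    r = requests.post(HOST + "/services/auth/login",
                      data={"username": USERNAME, "password": PASSWORD},
                      verify=False)
    r.raise_for_status()
    return ET.fromstring(r.text).findtext("sessionKey")


def create_job(query, retries=3):
    """Create a search job, getting a new session key whenever a 401 is returned."""
    session_key = login()
    for _ in range(retries):
        r = requests.post(HOST + "/services/search/jobs",
                          headers={"Authorization": "Splunk %s" % session_key},
                          data={"search": query, "output_mode": "json"},
                          verify=False)
        if r.status_code == 201:
            return r.json()["sid"]
        if r.status_code == 401:  # session key rejected - re-authenticate
            session_key = login()
        time.sleep(5)
    raise RuntimeError("search job was not accepted after %d attempts" % retries)


print(create_job("search index=_internal | head 5"))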
This was my issue. Thanks for this!
Lookup tables *can* be used at index-time as explicitly stated in the docs page linked in my reply.
You must have your own transform stanzas for those INGEST_EVAL definitions, or at least one stanza which has them all on one line. Lookup tables are used only at search time, not at index time. You should test your INGEST_EVAL settings at search time, and once they work as a single eval (e.g. eval xx=yy, zz=xyz), you can copy this into your transforms.conf. See more: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf#transforms.conf.example