All Posts

@danielbb  Splunk supports Linux distributions running 4.x+ or 5.4.x kernels.
That's why I suggested looking into the DMC, which has many pre-built searches. Writing those searches yourself would take a lot of time; the DMC gives them to you out of the box. Now, if you don't have access to the DMC in your environment, you can simply install Splunk on your local laptop and use that instance to get the searches. To get a search, open any panel of any DMC dashboard and click "Open in Search" at the bottom left of the panel. I hope this helps!
@mostafadehghad6   The Keycloak integration process seems straightforward. You can follow these steps:
1. Open the add-on, navigate to the Configuration tab, click "Add," and provide the necessary details, such as the client ID and secret key.
2. Create an input based on your specific requirements.
3. Ensure that the firewall rules allow communication between Splunk and Keycloak.
I hope this helps! If any reply helps you, please add your upvote/karma points to that reply. Thanks.
We are about to create new VMs with the Ubuntu OS. Which version of Ubuntu is supported and recommended? 
The following instructions seem to remedy 99% of the issues: docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Shareperformancedata#How_to_opt_out Apologies for the noise.
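For anyone else hitting this: the opt-out described there ultimately corresponds to settings in telemetry.conf under the splunk_instrumentation app. The stanza below is only a sketch - the setting names are my assumption and should be verified against telemetry.conf.spec for your version; the supported path is Settings > Instrumentation or the documentation linked above:

# $SPLUNK_HOME/etc/apps/splunk_instrumentation/local/telemetry.conf
# Assumed setting names - verify against your version's telemetry.conf.spec
[general]
sendAnonymizedUsage = false
sendSupportUsage = false
sendLicenseUsage = false
sendAnonymizedWebAnalytics = false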
Splunk installation in a secure facility. I see the following blocked attempts to phone home in our logs, and infosec is unhappy. How do I prevent Splunk from phoning home every 15 seconds?

TCP_DENIED/403 3836 CONNECT beam.scs.splunk.com:443 - HIER_NONE/- text/html
TCP_DENIED/403 3906 CONNECT quickdraw.splunk.com:443 - HIER_NONE/- text/html

Splunk Enterprise Version: 9.3.1  Build: 0b8d769cb912
Hello @VatsalJagani  Thanks for the info. Yes, we have the DMC enabled, but the problem is that, as we are new to Splunk, we have only given limited access to the search head for now. So we wanted to create some dashboards over the internal logs to detect issues. I would like to start with the Universal Forwarders first.
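As a starting point for the Universal Forwarder side, here is a minimal sketch of the kind of search this involves - run it against the indexers' _internal data; the group and field names are the ones typically found in metrics.log, so verify them in your environment:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname, sourceIp
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")

This lists the forwarders that have recently connected to your indexers; forwarders missing from the list (or with an old last_seen) are the ones to investigate.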
Hello @gcusello  Thanks for the reply. Would it be possible to share the app info, or the source code of the dashboards?
We need to connect a FortiWeb Cloud instance to a Splunk Heavy Forwarder. The traffic goes over the internet, so SSL must be used. We receive the test event correctly using plain TCP (without SSL), but it is not being decrypted when SSL is used. Reviewing the documentation, we do not understand how to configure the ssl-tcp input, nor which certificates should be configured in FortiWeb. We have seen some solutions centered on SSL between Splunk components, but none of them explain which certificates should be configured on the source. Does anyone know how to make this work, with FortiWeb or any other third-party input?
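In case it helps, here is a minimal sketch of the receiving side on the Heavy Forwarder, using the standard tcp-ssl input from inputs.conf. The port, sourcetype and certificate path are placeholders; the FortiWeb side then needs to trust the CA that signed this server certificate (check the FortiWeb documentation for where to upload it):

# inputs.conf on the Heavy Forwarder (e.g. in an app's local/ directory)
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/hf_server_cert.pem   # PEM with server cert, private key and CA chain
sslPassword = <private key password, if the key is encrypted>
requireClientCert = false

[tcp-ssl:6514]
sourcetype = fortiweb
connection_host = dns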
@gcusello  Thanks for your response. Yes, the log events arrive in one block. But the query below is showing incorrect results: it is showing historical data as well, not just the latest block of events. Can I handle this in the "inputs.conf" file so that only the latest log file is shown? I am not looking for any historical data.
Hi @shashankk , if your logs arrive in a block (more or less the same timestamp), you could use a solution like this:

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
    [ search index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
      | head 1
      | eval earliest=_time-60, latest=_time+60
      | fields earliest latest ]
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

It works if your logs all arrive in blocks of around 60 seconds. Ciao. Giuseppe
My requirement is simple: I have created a certificate monitoring script and am presenting its log file through a Splunk dashboard. I want Splunk to only check the latest log file and not keep any historical data in the search events. Below is a sample of the log file output (it is a "|"-separated log):

ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_hcm.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_hcm.jks|Expiring Soon|2025-06-14
INFO|appu2.de.com|rootca13|/applications/hs_cert/cert/live/h_core.jks|Valid|2026-10-18
ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_core.jks|Expired|2020-10-18
WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_core.jks|Expiring Soon|2025-03-22
ALERT|appu2.de.com|key|/applications/hs_cert/cert/live/h_mq.p12|Expired|2025-01-03

I am looking for two things here:
1. How do I handle only the latest log file content (no history) in "inputs.conf" - what changes need to be made?
2. Below is the sample SPL query; kindly check and suggest any changes.

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| multikv forceheader=1
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

@ITWhisperer - Kindly help
I have tested the EVAL statement provided in transforms.conf at search time and it is working fine. But the new fields that I want to add from the CSV file are not being appended to the logs being ingested when the dst_ip field of the log matches the dst_ip field of the CSV. From the documentation I learned that I also have to configure fields.conf. I have configured it with INDEXED=true for the new field that I want to append to the logs, but the logs are still not enriched with the new fields. I followed the link https://docs.splunk.com/Documentation/Splunk/7.2.3/Data/Configureindex-timefieldextraction#Define_additional_indexed_fields which shows how to append new fields to the logs based on extractions from the actual log event. What I actually need is for the logs to be enriched with fields from my CSV file. Can you please guide us in configuring props.conf and transforms.conf so that the logs are enriched with fields from the CSV file on a match? Thanks and regards
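In case it helps while you work out the index-time route: CSV-based enrichment like this is often done with a search-time automatic lookup instead, which needs no fields.conf or indexed fields at all. A minimal sketch, with hypothetical stanza, file and column names (your CSV would need a dst_ip column plus the columns to add):

# transforms.conf
[dst_ip_enrichment]
filename = dst_ip_enrichment.csv

# props.conf
[your_sourcetype]
LOOKUP-dst_ip_enrichment = dst_ip_enrichment dst_ip OUTPUT owner zone

Here owner and zone stand in for whatever extra columns your CSV provides; the lookup file goes in the app's lookups/ directory.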
Hi, I have a pretty long search that I want to be able to use as a saved search, so that others can benefit from one shared search and, if need be, we can edit it together. There is a part in the search using a structure like this:

search index=ix2 eventStatus="Successful"
| localize timeafter=0m timebefore=1m
| map search="search index=ix1 starttimeu=$starttime$ endtimeu=$endtime$
    ( [ search index=ix2 eventStatus="Successful" | return 1000 eventID ] )
  | stats values(client) values(port) values(target) by eventID"

This is a simplified extraction of what I am really doing, but the search works fine when run as a plain direct search from the GUI. If I save it and try using it with

|savedsearch "my-savedsearch"

I get the error: Error in 'savedsearch' command: Encountered the following error while building a search for saved search 'my-savedsearch': Error while replacing variable name='starttime'. Could not find variable in the argument map.

It looks like the $starttime$ and $endtime$ cause trouble, but what can I do to get around this? I want to have this in a saved search to avoid operating with a long search in the browser all the time. Also, it is essential to use the localize - map construction, because otherwise I am not able to run this search for long time windows, and I would really like to be able to do that. There was a ticket by @neerajs_81 about pretty much the same issue, but there were no details about the saved search and, above all, there seemed to be no solution.
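For context, the saved search itself lives in savedsearches.conf, roughly as sketched below, and the savedsearch command treats every $name$ in that definition as a variable to be filled from its arguments - which is exactly the error above. A workaround that is sometimes suggested is to escape the dollar signs as $$starttime$$ / $$endtime$$ so they reach the map command as literals; I can't confirm that from the documentation, so treat it as something to test rather than a known fix:

# savedsearches.conf (sketch; the "..." stands for the rest of the map search)
[my-savedsearch]
search = search index=ix2 eventStatus="Successful" \
| localize timeafter=0m timebefore=1m \
| map search="search index=ix1 starttimeu=$$starttime$$ endtimeu=$$endtime$$ ..."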
I installed the Keycloak extension, but I don't know how to configure it. Can you help me?
Hi, I am trying to implement a dashboard in Splunk that presents data based on Jenkins events. I use the Splunk App for Jenkins and the Splunk Jenkins plugin to send the event data. The idea of the dashboard is to display data about currently running checks for pull requests in the associated GitHub repository. The checks are designed in Jenkins so that a trigger job calls downstream jobs. In the dashboard, I'd like to present basic info about the pull request and the results of the tests coming from the downstream jobs. Unfortunately, the event for the trigger job does not provide info about its downstream jobs. I collect it this way:

index="jenkins_statistics" event_tag=job_event build_url="job/test_trigger/10316*"
| eventstats latest(type) as latest_type by build_url, host
| where latest_type="started"
| eval full_build_url="https://"+host+"/"+build_url
| eval started=tostring(round((now() - strptime(job_started_at, "%Y-%m-%dT%H:%M:%S.%N"))/60,0)) + " mins ago"
| append [ search index="jenkins_statistics" event_tag=job_event upstream="job/test_trigger/10316*" trigger_by="*test_trigger*"]

where the subsearch is appended to provide information about the downstream jobs. I checked https://plugins.jenkins.io/splunk-devops/#plugin-content-i-am-using-upstreamdownstream-jobs-how-can-i-consolidate-the-test-results-to-root-trigger-job but this does not fit my case, as the tests are represented by downstream jobs and I'd like to have their actual data so I can display only the failed ones in the dashboard. I had a plan to create a custom Python command (https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/createcustomsearchcmd/) which will: parse data from the downstream job events, create new fields in the trigger job event based on the above, and finally return only the trigger job event. Having that, I would have all the interesting data in one event per pull request and could format the table at the end. Unfortunately, it does not work as I wanted. It makes Splunk hang even for a single pull request (1 trigger event + 20 downstream events). The Python script iterates over the events twice (first to process the downstream jobs, and then to find the trigger event, add the new fields there and return it). I am afraid that it is not the best approach. The example script, based on https://github.com/splunk/splunk-app-examples/blob/master/custom_search_commands/python/customsearchcommands_app/bin/filter.py, is presented below:

#!/usr/bin/env python
# coding=utf-8
#
# Copyright 2011-2015 Splunk, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"): you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import sys

sys.path.append(os.path.join('opt', 'splunk', 'etc', 'apps', 'splunk_app_jenkins', 'bin', 'libs'))

# splunklib
from splunklib.searchcommands import dispatch, EventingCommand, Configuration, Option


@Configuration()
class ProcessTestsCommand(EventingCommand):
    """Filters and updates records on the events stream.

    ##Example :code:`index="*" | processtests`
    """

    @staticmethod
    def get_win(tests_list, pr_num):
        for test in tests_list:
            if test.get('job_name') == 'build-win' and test.get('upstream') == f"job/test_trigger/{pr_num}":
                return test.get('job_result') if test.get('job_result') else ''

    @staticmethod
    def get_ubuntu(tests_list, pr_num):
        for test in tests_list:
            if test.get('job_name') == 'build-ubuntu' and test.get('upstream') == f"job/test_trigger/{pr_num}":
                return test.get('job_result') if test.get('job_result') else ''

    @staticmethod
    def get_failed_tests(tests_list, pr_num):
        failed_tests = []
        failed_string = ''
        for test in tests_list:
            if test.get('upstream') == f"job/test_trigger/{pr_num}" and test.get('job_result') != 'SUCCESS':
                failed_tests.append(test)
        for failed_test in failed_tests:
            name = failed_test.get('job_name').split('/')[-1]
            status = failed_test.get('job_result')
            failed_string += f"{name} {status}\n"
        return failed_string

    def transform(self, records):
        # First pass: collect the downstream job events.
        tests = []
        for record in records:
            if record.get('job_name') != 'test_trigger':
                tests.append(record)
        # Second pass: annotate the trigger job event with the downstream results.
        for record in records:
            if record.get('job_name') == 'test_trigger':
                pr_num = record.get('build_number')
                build_win = self.get_win(tests, pr_num)
                build_ubuntu = self.get_ubuntu(tests, pr_num)
                failed_tests = self.get_failed_tests(tests, pr_num)
                record['win'] = build_win
                record['ubuntu'] = build_ubuntu
                record['failed_tests'] = failed_tests
                yield record
        return


dispatch(ProcessTestsCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The goal is to have all the data (from the trigger and the downstream jobs) in a single event representing the trigger job. Do you have any ideas what a better way to achieve that could be? I also thought about dropping the subsearch and collecting the downstream job data via the GitHub or Jenkins API inside the Python script, but this is not preferred (the API may return malformed data, and I could run into API rate limits). Appreciate any help.
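One possible direction, sketched under the assumption (taken from your append subsearch) that downstream events carry the trigger's URL in their upstream field and that job_name/job_result are populated as in your script: let stats do the consolidation instead of a custom command:

index="jenkins_statistics" event_tag=job_event (build_url="job/test_trigger/10316*" OR upstream="job/test_trigger/10316*")
| eval trigger=coalesce(upstream, build_url)
| stats latest(eval(if(job_name=="test_trigger", job_started_at, null()))) as job_started_at,
        values(eval(if(job_name!="test_trigger" AND job_result!="SUCCESS", job_name." ".job_result, null()))) as failed_tests
        by trigger

This keeps one row per trigger job with the failed downstream jobs attached, without iterating events in Python; treat it as a sketch to adapt, not a drop-in replacement.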
The results you shared earlier show NB as 0, which is odd, since this doesn't appear to be what you should be getting given the events you have shared and the search you are apparently using. Do any of your events actually have 0 as the trade_id? Have you examined the events present in the results to see why they might be parsed in a way that gives NB as 0? At the end of the day, the searches being suggested work with the data you have shared, so if they are not working as expected, it is likely because of the actual data you are using; for us to be able to help you more, you should share some more of the data which is not working.
Sure. Also, all the results are in a single row; ideally I would want them in separate rows. What's important to note is that I did get all the columns in the fields command, but they are empty (the ones I stated, e.g. NB onwards).

Time: 03/01/2025 16:05:37.609
Event: 2025-01-03 16:05:37.609, system="murex", id="646523556", sky_id="646523556", trade_id="32248978", event_id="100120362", mx_status="live", operation="nooperation", action="modification", tradebooking_sgp="2025/01/02 02:01:23.0000", eventtime_sgp="2025/01/02 02:01:21.3800", sky_to_mq_latency="-1.-620", portfolio_name="test_oprtoflio", portfolio_entity="test_entity", trade_type="VanillaSwap"

Time: 03/01/2025 11:05:39.000
Event: 32248978;LIVE;0.00001000;AUD;IRD;CD;;test_prtoflio;CAMBOOYAPTSYDAU
The by clause will only match events with exactly the same src_mac - this includes any trailing or leading spaces, punctuation, etc. Since MAC addresses are potentially sensitive information which you might not wish to share, are there any differences in the way the MAC addresses are stored in the different events (apart from the upper/lower case you already mentioned)?
Please share some examples of the events that have the fields which are not being returned.