All Posts
Here's a straightforward hack that uses a zero width space as a padded value prefix to determine a cell's status. For example, a status of Unknown is one zero width space. The SPL uses the urldecode() eval function to convert URL-encoded UTF-8 characters to strings.

<table id="table2">
  <search>
    <query>| makeresults format=csv data="
_time,HOSTNAME,PROJECTNAME,JOBNAME,INVOCATIONID,RUNSTARTTIMESTAMP,RUNENDTIMESTAMP,RUNMAJORSTATUS,RUNMINORSTATUS,RUNTYPENAME
2025-01-20 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-19 20:18:25.0,,STA,RUN,Run
2025-01-19 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-18 20:18:25.0,2025-01-18 20:18:29.0,FIN,FWF,Run
2025-01-18 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-17 20:18:25.0,2025-01-17 20:18:29.0,FIN,FOK,Run
2025-01-17 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-16 20:18:25.0,2025-01-16 20:18:29.0,FIN,FWW,Run
2025-01-16 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-15 20:18:25.0,2025-01-15 20:18:29.0,FIN,HUH,Run
"
``` use zero width space as pad ```
| eval status_unknown=urldecode("%E2%80%8B")
| eval status_success=urldecode("%E2%80%8B%E2%80%8B")
| eval status_failure=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B")
| eval status_warning=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B")
| eval status_running=urldecode("%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B%E2%80%8B")
| eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S.%Q")
| search PROJECTNAME="*" INVOCATIONID="*" RUNMAJORSTATUS="*" RUNMINORSTATUS="*"
| eval status=case(RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW", status_warning,
    RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK", status_success,
    RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF", status_failure,
    RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN", status_running,
    1=1, status_unknown)
| eval tmp=JOBNAME."|".INVOCATIONID
| eval date=strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%Y-%m-%d")
| eval value=status.if(status==status_unknown, "Unknown",
    "start time: ".coalesce(strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), "")
    .urldecode("%0a")
    .if(status==status_running, "Running",
        "end time: ".coalesce(strftime(strptime(RUNENDTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), "")))
| xyseries tmp date value
| eval tmp=split(tmp, "|"), Job=mvindex(tmp, 0), Country=mvindex(tmp, 1)
| fields - tmp
| table Job Country *</query>
  </search>
  <option name="drilldown">none</option>
  <option name="wrap">true</option>
  <format type="color">
    <colorPalette type="expression">case(match(value, "^\\u200b{1}[^\\u200b]"), "#D3D3D3", match(value, "^\\u200b{2}[^\\u200b]"), "#90EE90", match(value, "^\\u200b{3}[^\\u200b]"), "#F0807F", match(value, "^\\u200b{4}[^\\u200b]"), "#FEEB3C", match(value, "^\\u200b{5}[^\\u200b]"), "#ADD9E6")</colorPalette>
  </format>
</table>
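If you want to play with the trick in isolation, here is a minimal standalone sketch (runnable on its own, outside the dashboard) that encodes a status as a zero width space prefix and decodes it again with like(); the longest prefix must be tested first in the case(), because every shorter prefix also matches:

| makeresults
| eval zwsp=urldecode("%E2%80%8B")
``` two zero width spaces = success, per the scheme above ```
| eval value=zwsp.zwsp."start time: 20:18"
| eval decoded=case(like(value, zwsp.zwsp.zwsp."%"), "failure",
    like(value, zwsp.zwsp."%"), "success",
    like(value, zwsp."%"), "unknown")

The dashboard's colorPalette sidesteps the ordering concern differently, by requiring a non-ZWSP character right after the counted prefix ([^\u200b]).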
Hi. I am new to Splunk and SentinelOne. Here is what I've done so far: I need to forward logs from SentinelOne to a single Splunk instance. Since it is a single instance, I installed the Splunk CIM Add-on and the SentinelOne App (as mentioned in the app's installation instructions: https://splunkbase.splunk.com/app/5433 ). In the SentinelOne App on the Splunk instance, I changed the search index to sentinelone in Application Configuration; I had already created that index for testing purposes. In the API configuration, I added the URL, which is xxx-xxx-xxx.sentinelone.net, and the API token. The token was generated by adding a new service user in SentinelOne and clicking Generate API token; the scope is Global. I am not sure if it's the correct API token. Moreover, I am not sure which channels I need to pick under SentinelOne inputs in Application Configuration (SentinelOne App), such as Agents/Activities/Applications, etc. How do I know which channels to forward, or should I just add all of them? Clicking the application health overview shows no data ingest for any items. This SPL: index=_internal sourcetype="sentinelone*" sourcetype="sentinelone:modularinput" does not show any action=saving_checkpoint events, which means no data. Any help/documentation for the setup would be appreciated. I would like to know the reason for no data and how to fix it. Thank you.
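For reference, these are the kinds of checks I have been running (the sentinelone index is the one I created for testing; the ERROR filter is just an illustrative way to surface any errors from the add-on's modular input):

``` is anything reaching my test index at all? ```
index=sentinelone | head 5

``` any errors logged by the add-on itself? ```
index=_internal sourcetype="sentinelone:modularinput" ERROR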
Gcusello, this is exactly what's going on. That log file is updated frequently, but it's written by a script which 99% of the time produces identical output (when it doesn't detect any problems). That means Windows shows the file with a new update timestamp, but the file hash doesn't actually change. I'll edit my script to put a dynamic timestamp in the file, or something similar, to make the content change so the Splunk Forwarder sends the updates. Thank you so much!
Since you apparently did a local connectivity test and it succeeded, there must be something external to Splunk itself preventing you from connecting. Your iptables rules don't seem to be interfering (you don't have port 8000 explicitly open, but the general policy is ACCEPT), so it points to something network-related. Routing? Filtering on some intermediate device? This is best solved with your local admin staff, since it doesn't seem to be related to Splunk as such.
When using this package in Jupyter Notebook, I'm using Python to apply different models to the data based on whether it's during working hours or not. Although I'm using an autoencoder as the main architectural framework, I'm taking this approach because the data follows different distributions under these two scenarios. Are there any other, better approaches?
It looks like port 8000 is already open on the host firewall (I believe "irdmi" is the service name for port 8000 on RHEL), so the host itself should be allowing connectivity. Nevertheless, you could try explicitly allowing port 8000 and checking the logs.

Open port 8000 in the firewall:

sudo firewall-cmd --zone=public --add-port=8000/tcp --permanent
sudo firewall-cmd --reload

Verify with:

sudo firewall-cmd --list-all

Check Splunk logs for any errors:

$SPLUNK_HOME/var/log/splunk/web_service.log
$SPLUNK_HOME/var/log/splunk/splunkd.log

Have you been able to confirm that no network changes were made around the time?
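Those two files are also indexed in _internal, so once you have any access to the instance (e.g. via the CLI with splunk search) you can query them directly. A sketch, assuming the standard internal sourcetypes splunk_web_service and splunkd:

``` recent errors from Splunk Web and splunkd ```
index=_internal (sourcetype=splunk_web_service OR sourcetype=splunkd) ERROR earliest=-1h
| stats count by sourcetype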
Hi @wdhaar

Once you have downloaded the Cisco Security Cloud Splunk app (cisco-security-cloud_301.tgz), you need to install the app onto your existing Splunk instance. The method for doing this depends on your setup:

Single server instance: https://docs.splunk.com/Documentation/AddOns/released/Overview/Singleserverinstall
Distributed environment: https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall

If you are using Splunk Cloud then you do not actually need to download from Splunkbase - instead you can install it via the App Browser in Splunk Cloud.

Please let me know how you get on and consider upvoting/karma this answer if it has helped.

Regards
Will
Here’s a sample curl request to create a muting rule in the Splunk Observability Suite using the provided API reference:

curl -X POST "https://api.us0.signalfx.com/v2/incidents/muting-rules" \
  -H "Content-Type: application/json" \
  -H "X-SF-TOKEN: YOUR_ACCESS_TOKEN" \
  -d '{
    "filter": {
      "severity": "Warning",
      "incidentType": "SIGNAL",
      "tags": {
        "environment": ["prod"],
        "team": ["infra"]
      }
    },
    "reason": "Scheduled maintenance",
    "startTime": 1672531200000,
    "endTime": 1672617600000,
    "enabled": true
  }'

Explanation:

URL: the API endpoint to create muting rules.
Headers:
  Content-Type: application/json: specifies a JSON payload.
  X-SF-TOKEN: your Splunk Observability API token.
Payload:
  filter: defines which incidents to mute, based on severity, type, and tags.
  reason: explanation for the muting rule (e.g., scheduled maintenance).
  startTime and endTime: Unix epoch times (in milliseconds) specifying when the rule is active.
  enabled: Boolean to activate the muting rule immediately.

Replace YOUR_ACCESS_TOKEN and customize the payload as needed for your setup. Refer to the Splunk Observability API docs for further customization options.

Please let me know how you get on and consider upvoting/karma this answer if it has helped.

Regards
Will
Hi @cdavidsonbp

The content packs might be helpful if you're running ITSI/ITE Work, but you will still need to look at collecting the data. The Windows TA you referenced is a great starting point, as it can collect AD events and Windows event logs that should help create the info you need.

Have a look at these docs on AD audit policy configuration; the docs are for the older Exchange app, but this functionality is now in the Add-on for Windows.

https://docs.splunk.com/Documentation/MSExchange/4.0.4/DeployMSX/ConfigureActiveDirectoryauditpolicy

Please let me know how you get on and consider upvoting/karma this answer if it has helped.

Regards
Will
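As a rough illustration of what becomes possible once the TA is collecting the Security event log, a search along these lines surfaces AD account lifecycle changes. The index, source, and EventCodes here are assumptions for the sketch - adjust them to match your actual inputs:

``` AD account lifecycle: 4720=created, 4726=deleted, 4738=changed ```
index=wineventlog source="XmlWinEventLog:Security" EventCode IN (4720, 4726, 4738)
| stats count by EventCode, user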
"But i have issues with ".url.com" since it don't exactly matches the hostname. I have tried to replace them with "*.url.com" but splunk lookup don't match wildcard."

This is not correct. As @andrew_nelson points out, the problem is that you are trying to use inputlookup when lookup is the logical solution. Once you define a lookup with WILDCARD(url), you do not need to add an additional field, however. (You may want to use a case-insensitive match, too.) This is how you do it in Splunk Web: here, I name the lookup definition without .csv.

This is the search to count matches per url as defined in the lookup:

index=my-proxy [inputlookup all_urls | rename url as hostname ]
| lookup all_urls url as hostname output url as url
| stats count by url

This does effectively the same as Andrew's, except you don't need to add a second column. You also do not need a where command, because the inputlookup subsearch already does that.

I understand that your reason for using inputlookup is to print 0 if there is no match. So you add one more step:

| append [inputlookup all_urls]
| stats values(count) as count by url
| fillnull count

Given the following events in index my-proxy (assuming field hostname is already extracted at search time and represents the destination in your proxy log):

_time                hostname
1969-12-31 16:00:01  abc.url2.com
1969-12-31 16:00:02  def.url1.com
1969-12-31 16:00:03  ghi.url2.com
1969-12-31 16:00:04  www.url1.com
1969-12-31 16:00:05  site.url2.com
1969-12-31 16:00:06  abc.url1.com
1969-12-31 16:00:07  def.url2.com
1969-12-31 16:00:08  ghi.url1.com
1969-12-31 16:00:09  www.url2.com
1969-12-31 16:00:10  site.url1.com
1969-12-31 16:00:11  abc.url2.com
1969-12-31 16:00:12  def.url1.com
1969-12-31 16:00:13  ghi.url2.com
1969-12-31 16:00:14  www.url1.com
1969-12-31 16:00:15  site.url2.com

the above search should give you

url            count
*.url2.com     8
site.url3.com  0
www.url1.com   2

Here is an emulation for you to play with and compare with real data:

| makeresults count=15
| streamstats count as _time
| eval _domain = json_object(1, "abc", 2, "def", 3, "ghi", 4, "www", 0, "site")
| eval hostname = json_extract(_domain, tostring(_time % 5)) . ".url" . (_time % 2 + 1) . ".com"
``` the above emulates index=my-proxy [inputlookup all_urls | rename url as hostname ] ```
Configured as below. Now the error is resolved, but I'm not getting the Jenkins logs into Splunk; I'm only seeing the below response in Splunk.

Configuration:

[http://jenkins_build_logs]
description = Jenkins build Logs
disabled = 0
sourcetype = jenkins:build
token =
useACK = 0

Logs in Splunk:

ping from jenkins plugin
raw event ping
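Since the plugin's ping arrives but build events don't, it may help to watch the HEC handler for rejected events. This is a sketch against the same component that logged the earlier "Incorrect index" errors; status_message and parsing_err are the key=value pairs from those log lines:

``` HEC rejections logged by splunkd ```
index=_internal sourcetype=splunkd component=HttpInputDataHandler
| stats count by status_message, parsing_err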
@Maries NOTE: You can keep the index at the default (main, in general) or 'jenkins' or whatever you prefer while setting up the token, as the Splunk App for Jenkins is capable of filtering the events and redirecting them to the correct pre-configured indexes (this app ships with four indexes: jenkins, jenkins_statistics, jenkins_console, jenkins_artifact).
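Those invalid_index errors from earlier suggest the app's indexes don't exist on the indexer yet. A quick way to confirm which jenkins* indexes are actually visible (eventcount with summarize=false returns one row per index per indexer):

``` list jenkins* indexes visible to this search head ```
| eventcount summarize=false index=jenkins*
| dedup index
| table index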
@Maries Check this  https://plugins.jenkins.io/splunk-devops/  https://medium.com/cloud-native-daily/monitoring-made-easy-enhancing-ci-cd-with-splunk-and-jenkins-integration-576eab0bff9 
@Maries Did you create the index on the indexer?
@ashutoshh The Splunk sales team will respond; if not, you can call directly and explain your requirement. The other option is that many companies resell Splunk ITSI licenses - you can check for Splunk Partners or Authorized Resellers in your region. Purchasing or transferring a Splunk ITSI license from an individual or third party who is not an authorized reseller violates Splunk's licensing agreement. Splunk licenses are non-transferable and bound to the original purchasing entity. Unauthorized sharing or selling of licenses can lead to compliance issues, termination of the license, or legal action from Splunk.
Hi, I did this already.
@ashutoshh The same goes, for example, for the Enterprise Security app. Reach out to Splunk Sales to discuss your requirements and get a quote for the ITSI app.
Team, I'm trying to push Jenkins build logs to Splunk.

Installed the Splunk Plugin (1.10.1) in my CloudBees Jenkins. Configured HTTP host, port & token; tested the connection and it looks good.

In Splunk, created an HEC input in the below file with the following content.

File name: /opt/app/splunk/etc/apps/splunk_httpinput/local/inputs.conf

[http://jenkins_build_logs]
description = Jenkins build Logs
disabled = 0
index = infra
indexes = infra
sourcetype = jenkins:build
token =
useACK = 0

Getting the below errors in the Splunk logs (/opt/app/splunk/var/log/splunk):

02-08-2025 04:52:07.704 +0000 ERROR HttpInputDataHandler [17467 HttpDedicatedIoThread-1] - Failed processing http input, token name=jenkins_build_logs, channel=n/a, source_IP=10.212.102.217, reply=7, status_message="Incorrect index", status=400, events_processed=1, http_input_body_size=381, parsing_err="invalid_index='jenkins_console'"
02-08-2025 04:54:14.617 +0000 ERROR HttpInputDataHandler [17467 HttpDedicatedIoThread-1] - Failed processing http input, token name=jenkins_build_logs, channel=n/a, source_IP=10.212.100.150, reply=7, status_message="Incorrect index", status=400, events_processed=1, http_input_body_size=317, parsing_err="invalid_index='jenkins_statistics'"
Hi, I tried contacting sales as per the options and submitted an email, but got no response. Do you know someone who has that license? I can pay for it.
@ashutoshh Splunk ITSI is a premium app, so it requires an additional license beyond the standard Splunk Enterprise license. If you purchased it, you will need to make sure that your name and email are listed on the entitlement. Otherwise, whoever is listed on the entitlement can download it for you.