All Topics

Hello,

I have a dashboard with a multiselect + text input field.

<form version="1.1" theme="light">
  <label>Multiselect Text</label>
  <init>
    <set token="toktext">*</set>
  </init>
  <fieldset submitButton="false">
    <input type="multiselect" token="tokselect">
      <label>Field</label>
      <choice value="category">Group</choice>
      <choice value="severity">Severity</choice>
      <default>category</default>
      <valueSuffix>=REPLACE</valueSuffix>
      <delimiter> OR </delimiter>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <change>
        <eval token="tokfilter">replace($tokselect$,"REPLACE","\"".$toktext$."\"")</eval>
      </change>
    </input>
    <input type="text" token="toktext">
      <label>Value</label>
      <default>*</default>
      <change>
        <eval token="tokfilter">replace($tokselect$,"REPLACE","\"".$toktext$."\"")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>$tokfilter$</title>
        <search>
          <query>| makeresults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </event>
    </panel>
  </row>
</form>

Everything works properly: if I type something into the 'Value' text field and then select an option from the 'Field' multiselect, the search looks for e.g. category="something" OR severity="something". I need help adding an extra multiselect option that searches for the string from the text field anywhere in the event. I imagine it like this: if I select 'Group' and type 'something' into the input field, the search looks for category="something", but if I select 'Any Field' and type 'something' into the input field, the search looks for just "something".

Could you please help me modify this dashboard in that direction?

Thank you so much in advance!
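A minimal sketch of one way to do this, assuming a placeholder choice value ANYFIELD (a name invented here for illustration). The new choice flows through the same =REPLACE substitution as the real fields, and a second replace() then strips the ANYFIELD= prefix so only the quoted string remains:

<!-- add a third choice inside the multiselect input -->
<choice value="ANYFIELD">Any Field</choice>

<!-- and in both <change> handlers, strip the placeholder after the substitution -->
<eval token="tokfilter">replace(replace($tokselect$,"REPLACE","\"".$toktext$."\""),"ANYFIELD=","")</eval>

Selecting 'Any Field' alone with 'something' then yields ("something"), which matches the string anywhere in the raw event, while 'Group' still yields (category="something").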
I am facing an issue with this warning:

[otel.javaagent] [signalfx-metrics-publisher] WARN com.splunk.javaagent.shaded.io.micrometer.signalfx.SignalFxMeterRegistry - failed to send metrics: Unable to send datapoints
Hi All, I have created a few tags in Splunk which are getting disabled automatically. I want to find, using a Splunk query, the time they were disabled. Can anyone suggest a query for this? I tried using REST but am not getting exact details. I also tried the below but am not seeing any related logs:

index=_internal sourcetype=splunk_audit action=edit status=disabled info=tags

Thanks in advance, PNV
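If this is Splunk 9.0 or later, one hedged avenue (a sketch, and the nested field names may vary slightly by version) is the configuration change tracker, which records edits to .conf files, including tags.conf, in the _configtracker index:

index=_configtracker sourcetype=splunk_configuration_change data.path=*tags.conf*
| table _time data.path data.action data.changes{}.properties{}.name data.changes{}.properties{}.new_value

A tag being disabled should show up there as a change to the relevant tags.conf stanza, timestamped at the moment of the edit.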
I am writing a query which will give the total time taken by a log/event for execution in milliseconds:

index=xyz cluster_id = [cluster_id] "logs_statistics" | rex field=_raw "Total Time taken in milliseconds: (?<totalTime>.*\d+) \n*" | table time totalTime

This executes but totalTime is null, as shown below:

time                             totalTime
2024-06-23T03:00:45.038422703Z
2024-06-23T03:00:15.453872121Z
2024-06-23T03:00:23.33625642Z

Expected:

time                             totalTime
2024-06-23T03:00:45.038422703Z   544
2024-06-23T03:00:15.453872121Z   528

What am I missing?
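One likely culprit (a guess from the pattern alone): the regex requires a literal space after the final digit, so it fails whenever the number is followed directly by a newline or ends the event. A tighter sketch that anchors on the digits only:

index=xyz cluster_id = [cluster_id] "logs_statistics"
| rex field=_raw "Total Time taken in milliseconds:\s*(?<totalTime>\d+)"
| table time totalTime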
Eventgen is not showing in Splunk data inputs. What should I do? Please suggest.
Hi, Requirement: to fetch the count of events between the start and end of a particular event.

Example: I have to find the count of events (RPWARDA, SPWARAA, SPWARRA) between events IDJO20P and PIDZJEA.

The query below was created to find the events between IDJO20P and PIDZJEA, but I am not able to fetch the data for the current date. Can you please help me add the current date's data too?

Query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| stats sum(eventcount) AS eventcount BY _time file
| append [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
    | rex field=TEXT "NIDF=(?<file>[^\\s]+)"
    | transaction startswith="PIDZJEA" endswith="IDJO20P" keeporphans=True
    | bin span=1d _time
    | stats sum(eventcount) AS eventcount BY _time
    | eval file="count after PIDZJEA"
    | table file eventcount _time]
| chart sum(eventcount) AS eventcount OVER _time BY file

Also, is it possible to have a visual graph showing these details:
IN_per_24h = count of RPWARDA between IDJO20P and PIDZJEA for the day.
Out_per_24h = count of SPWARAA + SPWARRA between IDJO20P and PIDZJEA for the day.
Backlog = count after PIDZJEA for the day.
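A hedged guess at why the current date is missing: a day whose closing PIDZJEA has not arrived yet forms an open transaction, and transaction silently evicts and drops incomplete transactions by default. The keepevicted option retains them (they are marked with closed_txn=0, so they can still be filtered out if needed):

| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True keepevicted=true

The same option would apply to the transaction inside the append subsearch.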
I'm using a map visualization with markers and would like to use different colors based on the value of a categorical field (e.g. the field is category, and its values are either "open" or "closed"). I tried altering the code so that the color is based on the value of a certain field, and tried splitting the code to create multiple layers, but all to no avail... Even when ignoring the color based on a field and just trying to change the standard purple color of the marker, I'm out of luck... Any ideas?
Below is one of my fields. Quite complex, I know it could be divided into more atomic values... but it is not:

[AuditingPipelinePair, AuditingPipelinePair_response, AuditResponse, RESPONSE] [[
Tag = AUDIT-SUCCESS
Subject = "TAR_ID":"72503", "YEAR":"2106", "EQ_TY":"STD" BXB ServiceTus
TransactionId = sb-W10nXQte_ORf6PjJ4wQ#000000004
Message ID = afa9613.62eeaf42.N6b.1405404bdw7.N7e14
Service Ref = KlmSpsDictanaryS1/proxy/KlmSpsDictanary
Operation = getShareEquip
Protocol = KTTP
Client Address = 11.232.189.10
TransportDevel User = <anonymous>
MessageDevel User = dkd
Message Pode = 0
Payload = Dipis sb-W10wXDte_ORf6PjJde34wQ0004
]]

Anyway, Splunk automatically separated some of the single-string values, like Protocol or Operation. But how do I extract (or even eval in a query) a parameter whose name contains a space, like "MessageDevel User" or "Client Address"?
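A minimal sketch with rex (assuming the blob lives in a field called payload; substitute the real field name): the space only exists in the source text, while the extracted field name itself uses an underscore:

| rex field=payload "MessageDevel User = (?<MessageDevel_User>\S+)"
| rex field=payload "Client Address = (?<Client_Address>\S+)"

The same patterns work in props.conf EXTRACT- stanzas if the extraction should happen automatically at search time.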
Hello, For a few months we have been facing an issue with stopping Splunk on Red Hat Linux rel8. We do "systemctl stop Splunkd" to stop the Splunk process. In most cases Splunk stops and the systemctl prompt comes back. But sometimes (say 1 out of 10) Splunk stops, but the systemctl prompt does not come back. Then, after 6 minutes (the timeout in Splunkd.service), systemctl returns. In /var/log/messages I see this after 6 minutes:

Splunkd.service: Failed with result 'timeout'.
Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.

In splunkd.log I can see that Splunk has stopped. No Splunk process is running: with "ps -ef | grep splunk" I can see there are no Splunk processes running, and with "ps -ef | grep systemctl" I can see that systemctl is still running. It happens on search clusters, index clusters, Heavy Forwarders etc. Splunk support says it is a Red Hat Linux issue and Red Hat points to Splunk. I wonder if we are the only ones having this issue. Any remarks are appreciated. Regards, Harry
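A diagnostic sketch for the next time a stop hangs (standard systemd tooling only, nothing Splunk-specific), to establish whether systemd still sees processes in the unit's cgroup after splunkd has exited:

# run these while "systemctl stop Splunkd" is hanging
systemctl list-jobs                    # confirms the stop job is still queued
systemd-cgls --unit Splunkd.service    # shows any processes left in the unit's cgroup
journalctl -u Splunkd.service -e       # systemd's own view of the stop sequence

If the cgroup is already empty while the stop job keeps waiting, that evidence points at systemd rather than Splunk, which may help move the vendor discussion along.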
Please guide me on integrating Jamf Pro with Splunk step by step. Jamf Pro Add-on for Splunk | Splunkbase is the add-on I need to install. Please guide me on which instance (HF, Syslog servers, Search Heads, Indexers, Cluster master, License manager, Deployment server) I should install this add-on. And the custom index: should it be created on the cluster master and the bundle pushed to all indexers? Should I create it on all 3 search heads and the 1 ad-hoc search head that we have? And please guide me on how the HF forwards the required events to this newly created index; how does the HF know that there is a custom index?
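A sketch of the index piece under common conventions (the index name jamf and the app directory name are assumptions): define the index once on the cluster master and push it to the indexers via the configuration bundle; the search heads and the HF do not need the index defined locally. The HF learns which index to use from the add-on's input configuration, where you set index=<name> on each input; events are then forwarded and land in that index on the indexers.

# on the cluster master: $SPLUNK_HOME/etc/master-apps/<your_app>/local/indexes.conf
# (manager-apps on newer versions)
[jamf]
homePath   = $SPLUNK_DB/jamf/db
coldPath   = $SPLUNK_DB/jamf/colddb
thawedPath = $SPLUNK_DB/jamf/thaweddb

# then push the bundle to the peers:
splunk apply cluster-bundle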
Hi Splunkers, I am currently working on creating an alert that sends an email with a table of inline results when triggered. I need to include a link to a dashboard's tab (e.g., "View Results") in the alert email; when the user clicks the link it must go to the particular tab in the dashboard. I've checked some community posts but didn't find any replies. Could you please guide me on how to achieve this?

Thanks in advance
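A sketch under assumptions (splunk.example.com, my_dashboard, and the tab token name are all placeholders): the email action's Message field substitutes $result.<field>$ tokens from the first result row, so a static dashboard URL with token-driven query parameters can be embedded directly, and Simple XML dashboards pre-set input tokens from the URL via form.<token>=<value>:

View Results: https://splunk.example.com/en-US/app/search/my_dashboard?form.tab=results&form.host=$result.host$

For Dashboard Studio, URL-driven token defaults work differently, so confirm against your dashboard type; the email action also has a built-in "Link to Results" include option if linking to the raw search results is enough.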
Check Point Skyline - Splunk Configuration Issue: Unable to get Data In

Issue Summary: The Splunk Enterprise Indexer will not accept the HTTP Event Collector HEC token from the Check Point Gateway, resulting in no Skyline (OpenTelemetry) data being ingested into Splunk. I need help getting the Splunk indexer to recognise the token and allow data to be ingested. Please note this error was also replicated on a different Splunk instance to determine the potential root cause. It could potentially be attributed to the payload-no-tls.json file not being formatted or compiled correctly on the Gateway.

Documentation used to configure the setup:
Check Point Skyline Deployment: https://support.checkpoint.com/results/sk/sk178566
Official Check Point Skyline Guide PDF: https://sc1.checkpoint.com/documents/Appliances/Skyline/CP_Skyline_AdminGuide.pdf
Skyline Troubleshooting and FAQ: https://support.checkpoint.com/results/sk/sk179870
HTTP Event Collector in Splunk: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector

Environment Details:
Splunk Version: Splunk Enterprise 9.2 (Trial License)
Operating System: Ubuntu 22.04
Gateways (both virtual): CheckPoint_FW4 and CheckPoint_FW3 [Cluster2]
Firewall Rules: Cleanup rule to allow any communication for testing purposes.

Potential Root Cause - Log Analysis:
Ran on CheckPoint_FW4: tail -20 /opt/CPotelcol/otelcol.log
Response:

go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/internal/bounded_memory_queue.go:47
2024-06-26T14:20:34.609+1000    error   exporterhelper/queued_retry.go:391      Exporting failed. The error is not retryable. Dropping data.    {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: remote write returned HTTP status 401 Unauthorized; err = %!w(<nil>): Bearer token not recognized. Please contact your Splunk admin.\n", "dropped_items": 284}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
        go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/queued_retry.go:391
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
        go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/metrics.go:125
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
        go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/queued_retry.go:195
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1

Completed Installation Steps (all completed):
1. Installed the third-party monitoring tool.
2. Installed the OpenTelemetry Agent and OpenTelemetry Collector on the Check Point Server.
3. Configured the OpenTelemetry Collector on the Check Point Server to work with the third-party monitoring tool: Splunk.

Configure HTTP Event Collector on Splunk Enterprise. Before you can use Event Collector to receive events through HTTP, you must enable it. For Splunk Enterprise, enable HEC through the Global Settings dialog box:
1. Click Settings > Data Inputs.
2. Click HTTP Event Collector.
3. Click Global Settings.
4. In the All Tokens toggle button, select Enabled.
5. (Optional) Choose a Default Source Type for all HEC tokens. You can also type the name of the source type in the text field above the drop-down list box before choosing the source type.
6. (Optional) Choose a Default Index for all HEC tokens.
7. (Optional) Choose a Default Output Group for all HEC tokens.
8. (Optional) To use a deployment server to handle configurations for HEC tokens, click the Use Deployment Server check box.
9. (Optional) To have HEC listen and communicate over HTTPS rather than HTTP, click the Enable SSL checkbox.
10. (Optional) Enter a number in the HTTP Port Number field for HEC to listen on.

Create an Event Collector token on Splunk Enterprise. To use HEC, you must configure at least one token:
1. Click Settings > Add Data.
2. Click monitor.
3. Click HTTP Event Collector.
4. In the Name field, enter a name for the token.
5. (Optional) In the Source name override field, enter a source name for events that this input generates.
6. (Optional) In the Description field, enter a description for the input.
7. (Optional) In the Output Group field, select an existing forwarder output group.
8. (Optional) If you want to enable indexer acknowledgment for this token, click the Enable indexer acknowledgment checkbox.
9. Click Next.
10. (Optional) Confirm the source type and the index for HEC events.
11. Click Review. Confirm that all settings for the endpoint are what you want. If so, click Submit; otherwise, click < to make changes.
12. (Optional) Copy the token value that Splunk Web displays and paste it into another document for reference later.

Confirmed the token is Status: Enabled. Configured payload-no-tls.json in /home/admin/payload-no-tls.json.

Step: Run the configuration command to apply the payload - either the CLI command or the Gaia REST API command. Method 1 - run the CLI command "sklnctl":
a. Save the JSON payload in a file (for example, /home/admin/payload.json).
b. Run this command: sklnctl export --set "$(cat /home/admin/payload.json)"

Repeated the steps for FW4. Rebooted Gateways FW3 and FW4. Rebooted the Splunk server. Restarted all Check Point Firewall Skyline components. Result: data failed to be ingested.

Other troubleshooting completed:
- Created a completely new token and repeated the configuration steps.
- Updated the url within the payload.json file to end with /services/collector/raw and /services/collector/events.
- Updated "url" to http://10... instead of https.
- Checked the Skyline component log files: OpenTelemetry Collector /opt/CPotelcol/otelcol.log, CPView Exporter /opt/CPviewExporter/otlp_cpview.log, CPView API Service $CPDIR/log/cpview_api_service.elg. The CPView API Service and CPView logs showed nothing indicating the cause.
- Confirmed that the bearer token works (token accepted when tested directly).
- Confirmed the Collector was healthy.
- Attempted alternative payload-no-tls.json formats.

Gateway Log Analysis (returned every time): SSH into CheckPoint_FW4 xx.xx.xx.xx via Remote Desktop and ran: tail /opt/CPotelcol/otelcol.log. Result: the same "401 Unauthorized ... Bearer token not recognized" error shown above.

Finding: There appears to be an issue in which the HTTP Event Collector will not accept the token value, even when the token matches identically. It could potentially be attributed to the payload-no-tls.json file not being formatted or compiled correctly on the Gateway.
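One detail worth checking (a hedged hypothesis, not a confirmed diagnosis): the "Bearer token not recognized" message suggests Splunk tried to validate the credential as a JWT authentication token, i.e. the collector sent "Authorization: Bearer <token>", whereas HEC on port 8088 normally expects the "Splunk <token>" scheme. A quick way to verify the token itself is a valid HEC token (placeholder host and token values):

curl -k "https://<splunk_host>:8088/services/collector/event" \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event": "hec smoke test"}'

A {"text":"Success","code":0} response means the token is fine and the mismatch is in the authorization scheme the prometheusremotewrite exporter sends; whether the Skyline payload JSON can change that header is a question for the Check Point side.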
In setting up Azure Storage Blob modular inputs for the Splunk Add-on for Microsoft Cloud Services, the prerequisites at the following URL apply: https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Configureinputs5/#prerequisites

"All Splunk instances should use the centralized KVStore. In Victoria stack, there is a centralized KVStore so this feature can be used there. If Splunk instances use a different KVStore, there will be data duplication. If one Heavy Forwarder uses its own KVStore and another Heavy Forwarder uses a different KVStore, and both Heavy Forwarders have their inputs collecting data from the same Storage Container, then there will be data duplication."

In this case, is there anything that needs to be pre-configured for Splunk heavy forwarders?
Is there a limit to the number of servers on which the Forwarder or Enterprise can be installed under the same license? For example, is there a limit such as "up to 15 servers" per license? If a limit exists, I would appreciate it if you could tell me its structure and classification.
When I set the timeframe to 7 days and run my Splunk query in Grafana, it returns values. But if I increase the timeframe to 14 days or more, it returns NoData in Grafana. Yet a dashboard I created in Splunk with the same query returns values. Can anyone give some suggestions?
Hey All, I have downloaded the app SSL Certificate lookup. I am using this search to see information about the certificate, but it gives me no information.

| makeresults
| eval dest="example.com"
| mvexpand dest
| lookup sslcert_lookup dest OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
| eval ssl_subject_alt_name = split(ssl_subject_alt_name,"|")
| eval days_left = round(ssl_validity_window/86400)

The domain is using port 8441. When I add, for example, splunk.com it works, but not the one I want to see. What is wrong in the search, or what should I add? Thanks in advance
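A guess worth testing (assuming the scripted lookup defaults to port 443 and accepts a host:port value in dest; the app's documentation should confirm whether that syntax is supported):

| makeresults
| eval dest="example.com:8441"
| lookup sslcert_lookup dest OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
| eval days_left = round(ssl_validity_window/86400)

If the lookup only ever queries 443, that would explain why splunk.com works while a host serving its certificate on 8441 returns nothing.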
I'm trying to pass 3 tokens from panel 1 into panel 2: earliest time, latest time, and a basic field value. I can get the earliest time and field value to work, but latest time always defaults to "now" no matter what I try. Panel 1 is a stacked timechart over a three-week period; each stack is one week. The values in the stack are different closure statuses from my SIEM. I want to be able to click on a closure status in a single week and see the details of just the statuses from that week in panel 2 (e.g. Mon Jun 17 - Sun Jun 23).

Panel 1 looks like:

index=siem sourcetype=triage
| eval _time=relative_time(_time,"@w1") ```so my stacks start on monday```
| timechart span=1w@w1 count by status WHERE max in top10 useother=false
| eval last=_time+604800 ```manually creating a latest time to use as token```

Note: panel 1 is using a time input shared across most panels in the dashboard (defaulting to 3 Mondays ago). In Configuration > Interaction, I'm setting 3 tokens: status=name, earliest=row._time.value, and latest=row.last.value.

Panel 2 looks like:

index=siem sourcetype=triage earliest=$earliest$ latest=$latest$ | rest of search

When I click a status in week 1 (2 weeks ago) I get statuses for weeks 1, 2, and 3 (earliest and status tokens are working). When I click a status in week 2 (1 week ago) I get statuses for weeks 2 and 3 (earliest and status tokens are working). When I click a status in week 3 (current week) I get the current week (earliest and status tokens are working). Latest always defaults to now.

I've done something similar in the old dashboards: I eval'd the time modifiers while setting the token, but I am much less familiar with JSON and am not sure if this is a possibility. What I had previously done:

<eval token="earliest">$click.value$-3600</eval>
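Two hedged ideas, since Dashboard Studio interactions can generally only read values the visualization actually renders: first, last is not one of the charted series after timechart, so row.last.value may simply be empty, letting latest fall back to now; second, because $earliest$ arrives as an epoch number, the one-week upper bound can be computed inside panel 2's search instead of relying on the latest token at all:

index=siem sourcetype=triage earliest=$earliest$ ```$earliest$ is the epoch from row._time.value```
| where _time < ($earliest$ + 604800) ```one week past the clicked bucket, replacing the latest token```
| rest of search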
Hi, We have been continuously in violation for the past 3 or 4 months, as we are ingesting 600 to 800 GB extra on top of the daily limit. We have received multiple hard warnings. My question is: what will happen if we continue to be in violation or exceed the daily indexing volume limit?

I appreciate your answer in advance. Thanks.
Hi All! First post, super new user to Splunk. I have a search that I modified from one a team member previously created. I'm trying to take the output of ClientVersion, compare the 6wkAvg count to the Today count for the same timespan, and see what the percentage -/+ is. Ultimately building towards alerting when below a certain threshold.

| fields _time ClientVersion
| eval DoW=strftime(_time, "%A")
| eval TodayDoW=strftime(now(), "%A")
| where DoW=TodayDoW
| search ClientVersion=FAPI*
| eval ClientVersion=if((like("ClientVersion=FAPI*","%OR%") OR false()) AND false(), "Combined", ClientVersion)
| bin _time span=5m
| eval tempTime=strftime(_time,"%m/%d")
| where (tempTime!="null")
| eval tempTime=if(true() AND _time < relative_time(now(), "@d"), "6wkAvg", "Today")
| stats count by ClientVersion _time tempTime
| eval _time=round(strptime(strftime(now(),"%Y-%m-%d").strftime(_time,"%H:%M:%S"),"%Y-%m-%d%H:%M:%S"),0)
| stats avg(count) as count by ClientVersion _time tempTime
| eval ClientVersion=ClientVersion."-".tempTime
| eval count=round(count,0)
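A sketch of the comparison step (meant to replace the final two evals; the -20 threshold is an invented example): collapse the time axis, pivot the 6wkAvg and Today series into columns with xyseries, then compute the signed percentage change. Note the single quotes required in eval around a field name that starts with a digit:

| stats avg(count) as count by ClientVersion tempTime ```collapse the 5-minute buckets into one number per series```
| xyseries ClientVersion tempTime count ```columns: 6wkAvg and Today```
| eval pct_change=round(('Today' - '6wkAvg') / '6wkAvg' * 100, 1) ```negative = below the 6-week average```
| where pct_change < -20 ```example alerting threshold; tune as needed```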
Hi, I need help in extracting the time gaps in a multi-value field represented as Date. My data output looks like this:

index=myindex
| stats values(_time) as _time values(recs) as recs count by Token
| eval Date= strftime (_time,"%F %H:%M:%S.%2q ")
| where count > 1
| table Token Date

Token     Date
363311    2024-06-25 17:20:08.26
          2024-06-25 17:23:51.12
231321    2024-06-25 18:10:58.86
          2024-06-25 18:11:28.12
          2024-06-25 18:12:19.38
          2024-06-25 18:13:21.90
827341    2024-06-25 15:17:18.06
          2024-06-25 15:37:47.93
          2024-06-25 15:41:03.21

I would like to display the differences between time stamps in a new column called "time_gaps", listing the time in seconds between each time and the previous one. Some Tokens have only 2 time stamps, so there should be only 1 value in the time_gaps field; however, others that have 4 should have 3 values, representing the differences between the 1st and 2nd, 2nd and 3rd, and 3rd and 4th. I tried streamstats but it seems I may be doing something wrong. Any clean and effective SPL would be appreciated. Thanks
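A sketch using streamstats before the aggregation, while each event still carries its own _time (sort is made explicit because streamstats is order-sensitive; the first event per Token has no predecessor, so its gap is simply absent from the list):

index=myindex
| sort 0 Token _time ```ascending time within each Token```
| streamstats current=f window=1 last(_time) as prev_time by Token
| eval time_gap=round(_time - prev_time, 2) ```seconds since the previous event for this Token```
| eval Date=strftime(_time,"%F %H:%M:%S.%2q")
| stats list(Date) as Date list(time_gap) as time_gaps count by Token
| where count > 1
| table Token Date time_gaps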