All Posts

Hi Giuseppe, thank you for your answer. Anyway, let me make sure I understand: you have one indexer in Site1 and two in Site2; indexes on Site2 must be replicated only to the indexers in Site2, whereas indexes in Site1 must also be replicated to Site2. Yes, that is correct. Greetings, Roger
Would you take a look at the Payload parameter? The result has many strings with spaces.
OK, got it! Works perfectly.
I feel it could be a good solution, but how do I use it? Should I extract a new field with this regex?
I'm using a map visualization with markers and would like to use different colors based on the value of a categorical field (e.g. field = category, and its values are either "open" or "closed"). I tried altering the code so that the color is based on the value of a certain field, and tried splitting the code to create multiple layers, but all to no avail... Even when ignoring the color based on a field and just trying to change the standard purple color of the marker, I'm out of luck... Any ideas?
Not quite sure what you're asking, but there are several things you can do there. If fields like "Client Address" are not extracted, you can run a rex command and then use the extracted fields in evals etc.:

| rex "Client Address = (?<address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| eval address = ...

If they are already extracted but the field name has a space, you can do either:

| rename "Client Address" as ClientAddress
| eval ClientAddress = ...

or

| eval "Client Address" = ...
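To tie those options together, here is a minimal, self-contained sketch you can paste into a search bar. It fakes a single event with makeresults in the shape of the sample event from this thread; the regexes and the cidrmatch test are only illustrative assumptions you would adapt to your own data:

| makeresults
| eval _raw="Client Address = 11.232.189.10 TransportDevel User = <anonymous> MessageDevel User = dkd Message Pode = 0"
``` pull the space-containing values out into space-free field names ```
| rex field=_raw "Client Address = (?<ClientAddress>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "MessageDevel User = (?<MessageDevelUser>\S+)"
``` once extracted, the fields are easy to use in eval, stats, where, etc. ```
| eval is_internal=if(cidrmatch("10.0.0.0/8", ClientAddress), "yes", "no")
| table ClientAddress MessageDevelUser is_internal

If you keep the spaces in the field names instead, remember to reference them as 'Client Address' (single quotes) inside eval expressions.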
You could use rex, something like this | rex "MessageDevel User = (?<MessageDevelUser>\S+)"
Below is one of my fields. Quite complex, I know it could be divided into more atomic values, but it is not: [AuditingPipelinePair, AuditingPipelinePair_response, AuditResponse, RESPONSE] [[ Tag = AUDIT-SUCCESS Subject = "TAR_ID":"72503", "YEAR":"2106", "EQ_TY":"STD" BXB ServiceTus TransactionId = sb-W10nXQte_ORf6PjJ4wQ#000000004 Message ID = afa9613.62eeaf42.N6b.1405404bdw7.N7e14 Service Ref = KlmSpsDictanaryS1/proxy/KlmSpsDictanary Operation = getShareEquip Protocol = KTTP Client Address = 11.232.189.10 TransportDevel User = <anonymous> MessageDevel User = dkd Message Pode = 0 Payload = Dipis sb-W10wXDte_ORf6PjJde34wQ0004 ]] Anyway, some of the single-string values Splunk separated automatically, like Protocol or Operation. But how do I extract (or even eval in a query) a parameter whose name contains a space, like "MessageDevel User" or "Client Address"?
Hello, For a few months we have been facing an issue with stopping Splunk on Red Hat Linux rel8. We do "systemctl stop Splunkd" to stop the Splunk process. In most cases Splunk stops and the systemctl prompt comes back, but sometimes (let's say 1 out of 10) Splunk stops and the systemctl prompt does not come back. Then, after 6 minutes (the timeout in the Splunkd.service), systemctl comes back. In /var/log/messages I see this after 6 minutes: Splunkd.service: Failed with result 'timeout'. Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'. In splunkd.log I can see that Splunk has stopped, and no Splunk process is running: with "ps -ef | grep splunk" I can see that there are no Splunk processes running, while with "ps -ef | grep systemctl" I can see that systemctl is still running. It happens on the search cluster, index cluster, heavy forwarders etc. Splunk support says it is a Red Hat Linux issue and Red Hat points to Splunk. I wonder if we are the only ones having this issue. Any remarks are appreciated. Regards, Harry
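One thing that might help while Splunk support and Red Hat point at each other: splunkd writes its own shutdown messages to _internal, so you can check how long splunkd itself took to stop on an affected host and compare that with the 6-minute systemd timeout. A rough, hedged sketch - the host filter is a placeholder, it assumes the affected instance forwards its _internal logs somewhere searchable, and that your version logs shutdown activity under the ShutdownHandler component:

index=_internal sourcetype=splunkd component=ShutdownHandler host=my_affected_host
``` run this over a time window that covers a single stop of the instance ```
| stats min(_time) as shutdown_start max(_time) as shutdown_end
| eval shutdown_seconds=round(shutdown_end - shutdown_start, 0)
| fieldformat shutdown_start=strftime(shutdown_start, "%F %T")
| fieldformat shutdown_end=strftime(shutdown_end, "%F %T")

If splunkd consistently finishes its own shutdown in seconds while systemctl still hangs for the full TimeoutStopSec, that supports the case that the hang is on the systemd/OS side rather than inside Splunk.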
Please guide me on integrating Jamf Pro with Splunk step by step. Jamf Pro Add-on for Splunk | Splunkbase - this is the add-on I need to install. Please guide me on which instance (HF, Syslog servers, Search Heads, Indexers, Cluster master, License manager, Deployment server) I should install this add-on on. And the custom index: should it be created on the cluster master and the bundle pushed to all indexers? Should I create it on all 3 search heads and the 1 ad hoc search head that we have? And please explain how the HF forwards the required events to this newly created index. How do I let the HF know that there is a custom index?
Does this help? https://docs.splunk.com/Documentation/Splunk/9.2.1/Alert/Emailnotification#Send_email_to_different_recipients_based_on_search_results  
Hi Splunkers, I am currently working on creating an alert that sends an email with a table of inline results when triggered. I need to include a link to a dashboard's tab (e.g., "View Results") in the alert email; when the user clicks the link, it must go to the particular tab in the dashboard. I've checked some community posts but didn't find any replies. Could you please guide me on how to achieve this? Thanks in advance
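While waiting for other answers, one pattern that may work (a sketch, not a confirmed solution): build the dashboard URL as a field inside the alert search itself, then reference it in the email message as $result.dashboard_link$. Everything below - the Splunk Web host, app, dashboard ID, and the form.* tokens the target tab would read - is a hypothetical placeholder to adapt:

index=my_index sourcetype=my_sourcetype   ``` hypothetical base search for the alert ```
| stats count by host
``` assemble a deep link; the target dashboard must actually read these form.* tokens ```
| eval dashboard_link="https://splunk.example.com:8000/en-US/app/search/my_dashboard?form.selected_tab=results&form.host=" . host
| table host count dashboard_link

In the email action's message body you could then write something like "View Results: $result.dashboard_link$", keeping in mind that $result.fieldname$ tokens are taken from the first result row, so if you need one link per row, include dashboard_link as a column in the inline results table instead.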
Check Point Skyline - Splunk Configuration Issue: Unable to get Data In

Issue summary: the Splunk Enterprise indexer will not accept the HTTP Event Collector HEC_Token from the Check Point Gateway, resulting in no Skyline (OpenTelemetry) data being ingested into Splunk. I need help getting the Splunk indexer to recognise the token and allow data to be ingested. Please note this error was also replicated on a different Splunk instance to determine the potential root cause. It could potentially be attributed to the payload-no-tls.json file not being formatted or compiled correctly on the Gateway.

Documentation used to configure the setup:
- Check Point Skyline Deployment: https://support.checkpoint.com/results/sk/sk178566
- Official Check Point Skyline Guide PDF: https://sc1.checkpoint.com/documents/Appliances/Skyline/CP_Skyline_AdminGuide.pdf
- Skyline Troubleshooting and FAQ: https://support.checkpoint.com/results/sk/sk179870
- HTTP Event Collector in Splunk: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector

Environment details:
- Splunk version: Splunk Enterprise 9.2 (Trial License)
- Operating system: Ubuntu 22.04
- Gateways (both virtual): CheckPoint_FW4 and CheckPoint_FW3 [Cluster2]
- Firewall rules: cleanup rule to allow any communication for testing purposes.

Potential root cause - log analysis:
Ran on CheckPoint_FW4: tail -20 /opt/CPotelcol/otelcol.log
Response:
go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/internal/bounded_memory_queue.go:47
2024-06-26T14:20:34.609+1000    error   exporterhelper/queued_retry.go:391      Exporting failed. The error is not retryable. Dropping data.    {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: remote write returned HTTP status 401 Unauthorized; err = %!w(<nil>): Bearer token not recognized. Please contact your Splunk admin.\n", "dropped_items": 284}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
        go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/queued_retry.go:391
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
        go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/metrics.go:125
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
        go.opentelemetry.io/collector/exporter@v0.82.0/exporterhelper/queued_retry.go:195
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1

Completed installation steps:
1. Installed the third-party monitoring tool.
2. Installed the OpenTelemetry Agent and OpenTelemetry Collector on the Check Point server.
3. Configured the OpenTelemetry Collector on the Check Point server to work with the third-party monitoring tool: Splunk.

Configure HTTP Event Collector on Splunk Enterprise

Enable HTTP Event Collector on Splunk Enterprise. Before you can use Event Collector to receive events through HTTP, you must enable it. For Splunk Enterprise, enable HEC through the Global Settings dialog box:
1. Click Settings > Data Inputs.
2. Click HTTP Event Collector.
3. Click Global Settings.
4. In the All Tokens toggle button, select Enabled.
5. (Optional) Choose a Default Source Type for all HEC tokens. You can also type the name of the source type in the text field above the drop-down list box before choosing the source type.
6. (Optional) Choose a Default Index for all HEC tokens.
7. (Optional) Choose a Default Output Group for all HEC tokens.
8. (Optional) To use a deployment server to handle configurations for HEC tokens, click the Use Deployment Server check box.
9. (Optional) To have HEC listen and communicate over HTTPS rather than HTTP, click the Enable SSL checkbox.
10. (Optional) Enter a number in the HTTP Port Number field for HEC to listen on.

Create an Event Collector token on Splunk Enterprise. To use HEC, you must configure at least one token:
1. Click Settings > Add Data.
2. Click Monitor.
3. Click HTTP Event Collector.
4. In the Name field, enter a name for the token.
5. (Optional) In the Source name override field, enter a source name for events that this input generates.
6. (Optional) In the Description field, enter a description for the input.
7. (Optional) In the Output Group field, select an existing forwarder output group.
8. (Optional) If you want to enable indexer acknowledgment for this token, click the Enable indexer acknowledgment checkbox.
9. Click Next.
10. (Optional) Confirm the source type and the index for HEC events.
11. Click Review.
12. Confirm that all settings for the endpoint are what you want. If they are, click Submit. Otherwise, click < to make changes.
13. (Optional) Copy the token value that Splunk Web displays and paste it into another document for reference later.

Confirmed the token is Status: Enabled. Configured payload-no-tls.json in /home/admin/payload-no-tls.json.

Step: run the configuration command to apply the payload - either the CLI command or the Gaia REST API command.
Method 1 - run the CLI command "sklnctl":
a. Save the JSON payload in a file (for example, /home/admin/payload.json).
b. Run this command: sklnctl export --set "$(cat /home/admin/payload.json)"

Repeated the steps for FW4, rebooted Gateways FW3 and FW4, rebooted the Splunk server, and restarted all Check Point Firewall Skyline components. Result: data failed to be ingested.

Other troubleshooting completed:
- Created a completely new token and repeated the configuration steps.
- Updated the URL within the payload.json file to end with /services/collector/raw and /services/collector/events.
- Updated "url": http://10... instead of https.
- Checked the Skyline component log files for troubleshooting. The relevant Check Point Skyline log files are: OpenTelemetry Collector: /opt/CPotelcol/otelcol.log; CPView Exporter: /opt/CPviewExporter/otlp_cpview.log; CPView API Service: $CPDIR/log/cpview_api_service.elg. The CPView API Service and CPView logs showed nothing indicating the cause of the issue.
- Confirmed that the bearer token works. Result: bearer token accepted.
- Confirmed the Collector was healthy.
- Attempted alternative payload-no-tls.json formats.

Gateway log analysis (returned every time):
SSH into CheckPoint_FW4 xx.xx.xx.xx via Remote Desktop and ran: tail /opt/CPotelcol/otelcol.log
Result: the same "HTTP status 401 Unauthorized ... Bearer token not recognized" error and stack trace shown above, with "dropped_items": 284.

Finding: there appears to be an issue in which the HTTP Event Collector will not accept the token value, even when the token matches identically. It could potentially be attributed to the payload-no-tls.json file not being formatted or compiled correctly on the Gateway.
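Since the 401 is being returned by Splunk rather than by the collector, the Splunk-side HEC logs may say more than otelcol.log does. As a hedged starting point (assuming you can search _internal on the instance hosting the HEC endpoint, and that your version logs token problems under this component), something like this can show whether the requests are arriving at HEC and why the token is rejected:

index=_internal sourcetype=splunkd component=HttpInputDataHandler
``` token-validation failures for HEC requests typically surface here ```
| table _time host log_level _raw
| sort - _time

If nothing shows up there while the collector keeps reporting 401s, it may be worth double-checking that the URL in payload-no-tls.json really points at the HEC port (8088 by default) on the indexer rather than at Splunk Web or the management port.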
https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/TypesofSplunklicenses
In setting up Azure Storage Blob modular inputs for the Splunk Add-on for Microsoft Cloud Services, the prerequisites at the following URL apply: https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Configureinputs5/#prerequisites "All Splunk instances should use the centralized KVStore. In Victoria stack, there is a centralized KVStore so this feature can be used there. If Splunk instances use a different KVStore, there will be data duplication. If one Heavy Forwarder uses its own KVStore and another Heavy Forwarder uses a different KVStore, and both Heavy Forwarders have their inputs collecting data from the same Storage Container, then there will be data duplication." In this case, is there anything that needs to be pre-configured for the Splunk heavy forwarders?
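Whatever the KV store setup ends up being, one practical check after enabling the inputs is to verify that the duplication the prerequisite warns about is not happening. A hedged sketch - the index and sourcetype are placeholders, and it assumes the host field reflects the heavy forwarder that collected each event:

index=my_azure_index sourcetype=mscs:storage:blob   ``` placeholder index/sourcetype for the blob input ```
| stats count dc(host) as collecting_hosts by source
``` more than one collecting host per blob source suggests two HFs are reading the same container ```
| where collecting_hosts > 1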
Is there a limit to the number of servers on which the Forwarder or Enterprise can be installed under the same license? For example, is there a limit such as "up to 15 servers" for this license? If a limit exists, I would appreciate it if you could tell me its structure and classification.
Hi @trha_ , Would it be possible to somehow see a copy of this batch file? Cheers,    - Jo.
You could do something like this

index = auth0 (data.type IN ("fu", "fp", "s"))
| bucket span=5m _time
| stats dc(eval(if('data.type'="s", null(), 'data.user_name'))) AS unique_failed_accounts
    dc(eval(if('data.type'="s", 'data.user_name', null()))) AS unique_successful_accounts
    values(eval(if('data.type'="s", null(), 'data.user_name'))) as tried_accounts
    values(eval(if('data.type'="s", 'data.user_name', null()))) as successful_accounts
    values(data.client_name) as clientName
    values(eval(if('data.type'="s", null(), 'data.type'))) as failure_reason
    latest(eval(if('data.type'="s", 'data.user_name', null()))) as last_successful_account
    max(eval(if('data.type'="s", _time, null()))) as last_successful_time
    max(eval(if('data.type'="s", null(), _time))) as last_failed_time
    by data.ip
| where unique_failed_accounts > 10

Then you can see the latest failed time, successful time, and the failed and successful accounts, and make any decisions needed. What you're essentially after is the eval() test inside the stats to test what to collect. Make sure you wrap the field names in single quotes in that test, as it's an eval statement and the field names contain . characters.
There are some small improvements you could make in case there are 0 results in any given bin - if there is a missing range, then the appendcols may fail to align the data correctly, so this will ensure there are the correct number of events before the transpose:

| makecontinuous aString1Count start=0 end=8
| fillnull count

Do that for each search. Also, the initial age calculation is wrong in that it's using now() - _time, which should actually be the search's latest time, so I show a fix for that below. In addition, you want to strip out the -1 to -2 minute section, which is NOT in one of the ranges (your first range is +1 to +241).

You can make it faster by making a single search rather than appendcols, which is not efficient. There are two ways, which simply depend on how you do the chart, but I include them for a learning exercise.

index=anIndex sourcetype=aSourceType (aString1 OR aString2) earliest=-481m@m latest=-1m@m
``` Calculate the age of the event - this is latest time - event time ```
| addinfo
| eval age=info_max_time - _time
``` Calculate the range bands we want ```
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
``` Not strictly necessary, but ensures clean data ```
| eval range=null()
``` This is where you set a condition "A" or "B" depending on whether the event is a result from aString1 or aString2 ```
| eval type=if(event_is_aString1, "A", "B")
``` Band calculation ```
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
           zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
           range=mvappend(range, zone) ]
``` This removes the events in the pre-1 minute band ```
| where isnotnull(range)
``` Now this chart gives you 8 rows and 3 columns: the first column is range, the 2nd is counts for aString1 and the 3rd for aString2 ```
| chart count over range by type
``` This ensures you have values for each range ```
| makecontinuous range start=0 end=8
| fillnull A B
``` And now create your fields on a single row ```
| eval range=range+1
| eval string1Window{range}=A, string2Window{range}=B
| stats values(string*) as string*
Thank you for updating to text as @gcusello suggested. It would be better if you can illustrate the mock data in text tables as well. It is hard to see how ClientVersion in 6wkAvg could be useful, but I'll just ignore this point. Because the only numeric field is Count, I assume that you want the percentage change on this field. Splunk provides a convenient command, xyseries, to swap fields into row values. You can do something like this:

| xyseries _time tempTime ClientVersion Count
| eval percentChange = round(('Count: Today' - 'Count: 6wkAvg') / 'Count: 6wkAvg' * 100, 2)

Your mock data will give

_time               | ClientVersion: 6wkAvg | ClientVersion: Today | Count: 6wkAvg | Count: Today | percentChange
2024-06-26 00:00:00 | FAPI-6wkAvg           | FAPI-today           | 1582          | 2123         | 34.20
2024-06-26 00:05:00 | FAPI-6wkAvg           | FAPI-today           | 1491          | 1925         | 29.11
2024-06-26 00:10:00 | FAPI-6wkAvg           | FAPI-today           | 1888          | 2867         | 51.85
2024-06-26 00:15:00 | FAPI-6wkAvg           | FAPI-today           | 1983          | 2593         | 30.76
2024-06-26 00:20:00 | FAPI-6wkAvg           | FAPI-today           | 2882          | 3291         | 14.19

Is this something you are looking for? Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="ClientVersion, _time, tempTime, Count
FAPI-6wkAvg, 2024-06-26 00:00:00, 6wkAvg, 1582
FAPI-today, 2024-06-26 00:00:00, Today, 2123
FAPI-6wkAvg, 2024-06-26 00:05:00, 6wkAvg, 1491
FAPI-today, 2024-06-26 00:05:00, Today, 1925
FAPI-6wkAvg, 2024-06-26 00:10:00, 6wkAvg, 1888
FAPI-today, 2024-06-26 00:10:00, Today, 2867
FAPI-6wkAvg, 2024-06-26 00:15:00, 6wkAvg, 1983
FAPI-today, 2024-06-26 00:15:00, Today, 2593
FAPI-6wkAvg, 2024-06-26 00:20:00, 6wkAvg, 2485
FAPI-today, 2024-06-26 00:20:00, Today, 2939
FAPI-6wkAvg, 2024-06-26 00:20:00, 6wkAvg, 2882
FAPI-today, 2024-06-26 00:20:00, Today, 3291"
``` the above emulates
... | stats avg(count) as count by ClientVersion _time tempTime
| eval ClientVersion=ClientVersion."-".tempTime
| eval count=round(count,0) ```