All Posts



I should add that this confusion is probably caused by the Splunk advisory not being as accurate as it could be (as I understand it). Section 1b is not a vulnerability by itself, so the label for "1." should really say "both" of the following conditions, not "one."
This started out as a question, but is now just an FYI. Similar to this post, this week I received an old vulnerability notice from Tenable about my Splunk instance. We'd previously remediated this issue, so it was strange that it suddenly showed up again.

Vulnerability details:
https://packetstormsecurity.com/files/144879/Splunk-6.6.x-Local-Privilege-Escalation.html
https://advisory.splunk.com/advisories/SP-CAAAP3M?301=/view/SP-CAAAP3M
https://www.tenable.com/plugins/nessus/104498

The details in the articles are light, beyond pointing to the directions here for running Splunk as non-root: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/RunSplunkasadifferentornon-rootuser

Tenable also doesn't give details about exactly what it saw; it just says, "The current configuration of the host running Splunk was found to be vulnerable to a local privilege escalation vulnerability."

My OS is RHEL 7.x. I'm launching Splunk using systemd with a non-root user, and I have no init.d-related files for Splunk. My understanding is that launching with systemd eliminates the issue, since Splunk never starts with root credentials that way.

Per Splunk's own advisory, any Splunk system is vulnerable if it satisfies one of the following conditions:
a. A Splunk init script created via $SPLUNK_HOME/bin/splunk enable boot-start –user on Splunk 6.1.x or later.
b. A line with SPLUNK_OS_USER= exists in $SPLUNK_HOME/etc/splunk-launch.conf

In my case, this is an old server, and at one point we did run the boot-start command, which set the SPLUNK_OS_USER line in $SPLUNK_HOME/etc/splunk-launch.conf. Although we had commented that line out, the Tenable regex is apparently broken and doesn't realize the line was disabled with a hash. Removing the line entirely made Tenable stop reporting the vulnerability. I assume their regex was only looking for "SPLUNK_OS_USER=<something>", so it missed the hash.

Anyway, hope this helps someone.
Hi, for the examples mentioned, I might suggest taking a look at the built-in hostmetrics receiver, which you can use to monitor processes as you would with "ps -ef": https://docs.splunk.com/observability/en/gdi/opentelemetry/components/host-metrics-receiver.html

There are also receivers available for MQ products like ActiveMQ that can provide an MQ query count: https://docs.splunk.com/observability/en/gdi/monitors-messaging/apache-activemq.html#activemq

I can't personally think of an option to invoke a custom command from a receiver, but another way to approach that goal would be a custom command that runs independently of the collector and directs its output to an existing receiver. For example, if your command can generate output in a format that a receiver is listening for, that would be a good way to ingest that metric. Here is an article that discusses that idea: https://opentelemetry.io/blog/2023/any-metric-receiver/
Have you found a resolution? I'm having the same issue.
I need to migrate our cluster master to a new machine. It currently has these roles:
Cluster Master
Deployment Server
Indexer
License Master
Search Head
SHC Deployer

I already migrated the License Master role to the new server and it's working fine. I've been trying to follow the documentation here: https://docs.splunk.com/Documentation/Splunk/8.2.2/Indexer/Handlemanagernodefailure

From what I gather, I need to copy all the files in /opt/splunk/etc/deployment-apps, /opt/splunk/etc/shcluster and /opt/splunk/etc/master-apps, plus anything that's in /opt/splunk/etc/system/local. Then add the passwords in plain text to the server.conf in the local folder, restart Splunk on the new host, and point all peers and search heads to the new master in their respective local server.conf files.

Is there anything else that needs to be done, or would this take care of switching the cluster master entirely? And is there a specific order in which to do things?
HEC ACKs require the client to specifically ask for the status.  Does your HEC client do that?  It can't just throw events at Splunk and hope to get an ACK.  The client has to ask, "did you index it yet?"  See https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/AboutHECIDXAck#Query_for_indexing_status
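Per the linked docs, each event request on an ACK-enabled token returns an ackId, and the client must POST those ackIds back to the /services/collector/ack endpoint on the same channel to learn whether they were indexed. A sketch of the request shapes involved (host, token, and channel GUIDs below are invented placeholders):

```python
import json

# Hypothetical values -- substitute your own host, token, and channel GUID.
host = "https://splunk.example.com:8088"
token = "00000000-0000-0000-0000-000000000000"
channel = "11111111-1111-1111-1111-111111111111"

# Events are sent with the channel attached; on an ACK-enabled token,
# Splunk replies with an ackId per request,
# e.g. {"text": "Success", "code": 0, "ackId": 7}.
event_url = f"{host}/services/collector/event?channel={channel}"
headers = {"Authorization": f"Splunk {token}"}

# The client must then explicitly ask whether those ackIds were indexed:
ack_url = f"{host}/services/collector/ack?channel={channel}"
ack_query = json.dumps({"acks": [7]})

print(ack_url)
print(ack_query)  # body of the POST that polls indexing status
```

The response maps each ackId to true or false, and the client is expected to retry events whose ackIds stay false.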
Hello,

We set up HEC HTTP inputs for several flows of data with related tokens, and we added the ACK feature to this configuration (following https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/AboutHECIDXAck). We work with a distributed infra: 1 search head, two indexers (no cluster).

All was OK with HEC, but after some time we got our first error event:

ERROR HttpInputDataHandler [2576842 HttpDedicatedIoThread-0] - Failed processing http input, token name=XXXX [...] reply=9, events_processed=0
INFO HttpInputDataHandler [2576844 HttpDedicatedIoThread-2] - HttpInputAckService not in healthy state. The maximum number of ACKed requests pending query has been reached.

The server busy error (reply=9) leads to unavailability of HEC, but only for the token(s) where the maximum number of ACKed requests pending query has been reached. Restarting the indexer is enough to get rid of the problem, but by then many logs have been lost.

We did some searching and tried to customize some settings, but we only succeeded in delaying the 'server busy' problem (from 1 week to 1 month). Has anyone experienced the same problem? How can we keep those pending-query counters from growing? Thanks a lot for any help.

etc/system/local/limits.conf

[http_input]
# The max number of ACK channels.
max_number_of_ack_channel = 1000000
# The max number of acked requests pending query.
max_number_of_acked_requests_pending_query = 10000000
# The max number of acked requests pending query per ACK channel.
max_number_of_acked_requests_pending_query_per_ack_channel = 4000000

etc/system/local/server.conf

[queue=parsingQueue]
maxSize=10MB
maxEventSize = 20MB
maxIdleTime = 400
channel_cookie = AppGwAffinity

(channel_cookie is set because we are using a load balancer, so the cookie is also set on the LB)
You must be running version 3.0.0 or later to upgrade to version 5.3.0.  See the docs at https://docs.splunk.com/Documentation/PCI/5.3.0/Install/Upgradetonewerversion
This ERROR will happen when there are a lot of files being monitored and `parallelIngestionPipelines` is set to a high value. Multiple threads try to update the fishbucket at the same time. The first thread creates the temp file `snapshot.tmp`, and if it is still in the process of updating the fishbucket, the other threads will log the above ERROR.
The usage of sort is fine if the number of items is not too large. Sorting a large number of items is time-consuming, and there is a limit in Splunk. Because of that limit, the attempt to sort the items and then select the first 10 might end in a wrong result. To avoid this, I filter all items above/below a limit that is specific to the problem. For instance, 50,000 records are processed; more than 49,000 of them are processed within 2 seconds, but there are a few records for which the processing takes more time. So I set the limit to 2 seconds. However, if there are just a few records, e.g., 10, the list of Top 10 results might be empty because all of them are below the limit of 2 seconds.
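The failure mode described above can be simulated outside Splunk. The sketch below (in Python, with an invented 10,000-row cap standing in for a sort result limit) shows how "sort everything, then take the top 10" silently loses outliers once the limit truncates the input, while "filter by threshold, then sort the survivors" does not:

```python
import random

random.seed(1)
# 50,000 simulated processing durations: most finish under 2 s,
# plus a handful of slow outliers appended at the end.
durations = [random.random() * 2.0 for _ in range(50_000)]
outliers = [10.0, 25.0, 40.0]
records = durations + outliers

def capped_sort_top10(values, cap=10_000):
    # Mimics a sort with a result limit: only the first `cap` records
    # survive to be sorted, so later outliers can vanish entirely.
    return sorted(values[:cap], reverse=True)[:10]

def filter_then_top10(values, threshold=2.0):
    # Filter down to the few slow records first; the sort then operates
    # on a tiny set and cannot lose anything to a limit.
    return sorted([v for v in values if v > threshold], reverse=True)[:10]

print(max(capped_sort_top10(records)))  # outliers beyond the cap are lost
print(filter_then_top10(records))       # [40.0, 25.0, 10.0]
```

As the post notes, the filter-first approach trades completeness for correctness: with few records, the result can legitimately be empty.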
Thanks @PickleRick @isoutamo 
Hi Team, We are running Splunk v9.1.1 and need to upgrade the PCI app from v4.0.0 to v5.3.0. I am trying to find out the upgrade path, i.e., which version it has to be on before it can be upgraded to 5.3.0.
It looks like you have a placeholder comment where you want to set a field called splunk url. Which parts of the URL you listed are static and which are dynamic, and how do the dynamic parts relate to the fields present in your events at the point where the eval is done?
Thank you for sharing your inputs.
Hi @shashankk , please try this: <your_search> | rex "instance(?<key1>\d*)\.R(?<key2>[^:]+)" | rex "\[Priority\=(?<Priority>\w+)" | eval TestMQ="TEST.SEP".key1.".".key2 | stats count(eval(Priority="Low")) as Low, count(eval(Priority="Medium")) as Medium, count(eval(Priority="High")) as High BY TestMQ | fillnull value=0 | addtotals Ciao. Giuseppe
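For readers less familiar with the `count(eval(...))` pattern in the search above: it is a conditional count per group, incrementing only when the predicate holds. The same bookkeeping can be sketched in Python (the event values below are invented for illustration):

```python
from collections import defaultdict

# Toy events mirroring the SPL above: each has a queue key and a Priority.
events = [
    {"TestMQ": "TEST.SEP1.A", "Priority": "Low"},
    {"TestMQ": "TEST.SEP1.A", "Priority": "High"},
    {"TestMQ": "TEST.SEP2.B", "Priority": "Medium"},
    {"TestMQ": "TEST.SEP1.A", "Priority": "Low"},
]

# stats count(eval(Priority="Low")) as Low ... BY TestMQ: one row per
# TestMQ, with a counter per priority that only counts matching events.
table = defaultdict(lambda: {"Low": 0, "Medium": 0, "High": 0})
for e in events:
    table[e["TestMQ"]][e["Priority"]] += 1

# addtotals appends a per-row sum across the counted columns.
for row in table.values():
    row["Total"] = row["Low"] + row["Medium"] + row["High"]

print(dict(table))
```

Each output row corresponds to one TestMQ value, which is what the BY clause produces.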
prestats=t is an option which tells tstats to produce results in a format appropriate for further processing (most typically by timechart). So the main thing here is the timechart command: it is responsible for creating the timeseries with "empty" days counted as 0.
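A sketch of that zero-filling behavior (not how Splunk implements timechart internally; the dates and counts are invented): the command materializes every time bucket in the search range, so a day with no events shows up as 0 rather than being absent from the results.

```python
from datetime import date, timedelta

# Toy daily counts with a gap: no events on Jan 3 or Jan 4.
counts = {date(2024, 1, 1): 5, date(2024, 1, 2): 3, date(2024, 1, 5): 7}

# Enumerate every bucket in the range; missing days default to 0,
# which is what makes "empty" days visible in the chart.
start, end = date(2024, 1, 1), date(2024, 1, 5)
series = []
d = start
while d <= end:
    series.append((d.isoformat(), counts.get(d, 0)))
    d += timedelta(days=1)

print(series)
```

A plain stats-by-day, in contrast, would simply omit the rows for Jan 3 and Jan 4.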
Hi Team,

Hope this finds all well. I am trying to create an alert search query and need to create the Splunk URL as a dynamic value. Here is my search query:

index=idx-cloud-azure "c899b9d3-bf20-4fd6-8b31-60aa05a14caa" metricName="CpuPercentage"
| eval CPU_Percent=round((average/maximum)*100,2)
| where CPU_Percent > 85
| stats earliest(_time) AS early_time latest(_time) AS late_time latest(CPU_Percent) AS CPU_Percent by amdl_ResourceName
| eval InstanceName="GSASMonitoring.High.CPU.Percentage"
| lookup Stores_IncidentAssignmentGroup_Reference InstanceName
| eval Minutes=(threshold/60)
| where Enabled=1
| eval short_description="GSAS App Service Plan High CPU", comments="GSAS Monitoring: High CPU Percentage ".CPU_Percent." has been recorded"
```splunk url=""```
| eval key=InstanceName."-".amdl_ResourceName
| lookup Stores_SNOWIntegration_IncidentTracker _key as key OUTPUT _time as last_incident_time
| eval last_incident_time=coalesce(last_incident_time,0)
| where (late_time > last_incident_time + threshold)
| join type=left key [| inputlookup Stores_OpenIncidents | rex field=correlation_id "(?<key>(.*))(?=\_\w+\-?\w+\_?)"]
| where ISNULL(dv_state)
| eval correlation_id=coalesce(correlation_id,key."_".late_time)
| rename key as _key
| table short_description comments InstanceName category subcategory contact_type assignment_group impact urgency correlation_id account _key location

And here is the URL of the entire search that I am trying to make dynamic (for line no. 11):
https://tjxprod.splunkcloud.com/en-US/app/stores/search?dispatch.sample_ratio=1&display.page.search.mode=verbose&q=search%20index%3Didx-cloud-azure%20%22c899b9d3-bf20-4fd6-8b31-60aa05a14caa%22%20metricName%3D%22CpuPercentage%22%0A%7C%20eval%20CPU_Percent%3Dround((average%2Fmaximum)*100%2C2)%0A%7C%20where%20CPU_Percent%20%3E%2085%0A%7C%20stats%20earliest(_time)%20AS%20early_time%20latest(_time)%20AS%20late_time%20latest(CPU_Percent)%20AS%20CPU_Percent%20by%20amdl_ResourceName%0A%7C%20eval%20InstanceName%3D%22GSASMonitoring.High.CPU.Percentage%22%0A%7C%20lookup%20Stores_IncidentAssignmentGroup_Reference%20InstanceName%0A%7C%20eval%20Minutes%3D(threshold%2F60)%0A%7C%20where%20Enabled%3D1%0A%7C%20eval%20short_description%3D%22GSAS%20App%20Service%20Plan%20High%20CPU%22%2C%0A%20%20%20%20%20%20%20comments%3D%22GSAS%20Monitoring%3A%20High%20CPU%20Percentage%20%22.CPU_Percent.%20%22%20has%20been%20recorded%22%0A%7C%20eval%20key%3DInstanceName.%22-%22.amdl_ResourceName%0A%7C%20lookup%20Stores_SNOWIntegration_IncidentTracker%20_key%20as%20key%20OUTPUT%20_time%20as%20last_incident_time%0A%7C%20eval%20last_incident_time%3Dcoalesce(last_incident_time%2C0)%0A%7C%20where%20(late_time%20%3E%20last_incident_time%20%2B%20threshold)%0A%7C%20join%20type%3Dleft%20key%20%0A%20%20%20%20%5B%7C%20inputlookup%20Stores_OpenIncidents%20%0A%20%20%20%20%7C%20rex%20field%3Dcorrelation_id%20%22(%3F%3Ckey%3E(.*))(%3F%3D%5C_%5Cw%2B%5C-%3F%5Cw%2B%5C_%3F)%22%5D%20%0A%7C%20where%20ISNULL(dv_state)%0A%7C%20eval%20correlation_id%3Dcoalesce(correlation_id%2Ckey.%22_%22.late_time)%20%0A%7C%20rename%20key%20as%20_key%0A%7C%20table%20short_description%20comments%20InstanceName%20category%20subcategory%20contact_type%20assignment_group%20impact%20urgency%20correlation_id%20account%20_key%20location&earliest=-60m%40m&latest=now&display.page.search.tab=statistics&display.general.type=statistics&sid=1704721919.326369_52C57BD9-5296-4397-B370-BF36A375A0A5
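The long q= parameter in the URL above is just the SPL query percent-encoded. For illustration only, here is how that encoding can be reproduced in Python; the host/app path is copied from the shared URL, and the query is shortened to a fragment (how the dynamic parts get their values is still the question being asked):

```python
from urllib.parse import quote

# Static prefix taken from the shared URL; the SPL fragment is shortened
# for illustration.
base = "https://tjxprod.splunkcloud.com/en-US/app/stores/search"
spl = 'search index=idx-cloud-azure metricName="CpuPercentage" | where CPU_Percent > 85'

# quote() with safe="" percent-encodes spaces, quotes, pipes, etc.,
# producing the same style of q= value seen in the shared URL.
url = f"{base}?q={quote(spl, safe='')}&earliest=-60m%40m&latest=now"
print(url)
```

Spaces become %20, pipes become %7C, and so on, which matches the shape of the pasted URL.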
Hi, my employer uses Splunk Enterprise v9.1.2 running on-prem. We recently enabled SSO with Azure. After enabling SSO, we noticed that authentication to the REST API no longer worked with PAT tokens or with username/password.

I created an authentication extension script using the example SAML_script_azure.py script. I implemented the getUserInfo() function, which has allowed users to authenticate to the REST API and CLI commands with PAT tokens. However, I have been unable to make username/password authentication work with the REST API or CLI since enabling SSO. I tried adding a login() function to my authentication extension script, but it does not work. The option "Allow Token Based Authentication Only" is set to false. The login() function is not called when a user sends a request to the API with username/password, like this example:

curl --location 'https://mysplunkserver.company.com:8089/services/search/jobs?output_mode=json' --header 'Content-Type: text/plain' --data search="search index=main | head 1 " -u me

These are the documentation pages I have been referencing:
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/ConfigureauthextensionsforSAMLtokens
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Createtheauthenticationscript

Is it possible to use username/password for API and CLI authentication with SSO enabled?
Hello @PickleRick, Thank you for your inputs. It helped to resolve the issue. It would be very helpful if you could share how the use of prestats helped in this case so that its usage becomes more ... See more...
Hello @PickleRick, Thank you for your inputs. It helped to resolve the issue. It would be very helpful if you could share how the use of prestats helped in this case, so that its usage becomes clearer. Thank you Taruchit
Since you have provided more sample data and stated what the common field across the events is, I think a search like this may work.

<base_search>
| rex field=_raw "Priority\=(?<Priority>[^\,]+)"
| rex "(?:\={3}\>|\<\-{3})\s+TRN[^\:]*\:\s+(?<trn>[^\s]+)"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| stats count(eval(Priority=="Low")) as Low, count(eval(Priority=="Medium")) as Medium, count(eval(Priority=="High")) as High, values(TestMQ) as TestMQ by trn
| stats sum(Low) as Low, sum(Medium) as Medium, sum(High) as High by TestMQ
| addtotals fieldname="TotalCount"

This is what the final result looks like when run against the sample data you provided.