All Posts

Hello @Eduardo.Rosa, You will need to have your Account Admin give you the proper permissions to make those changes. Please check out this TKB - https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-manage-Accounts-Management-Portal-users-as-an-Admin/ta-p/23286
Rather than 2 events, I see 11.  Eight of them contain a single value and the remainder contain different numbers of values.  A well-formed CSV file will have the same number of values in each event.  Any values with embedded CRLFs must be enclosed in quotation marks.
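For illustration, a minimal well-formed CSV (the field names here are made up) where a value with an embedded line break is quoted:

id,message,status
1,"first line
second line",ok
2,plain value,ok

Every record has the same three values, and the multi-line value in record 1 is enclosed in quotation marks so the CRLF inside it is not treated as a record separator.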
Hi @Joseph.McNellage, I would suggest searching the community for existing content. If you don't find anything, you can always reach out to AppD Support. See: How do I submit a Support ticket? An FAQ
I tried the way you suggested, but I got the same error as above. These are the last two values of my CSV file in Notepad++. Should every line have a CRLF?
Hi @scottbrion - I'm a Community Moderator in the Splunk Community. This question was posted 3 years ago, so it might not get the attention it needs to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi, see my old post about migrating the whole environment. Based on the Splunk documentation, you cannot have all those roles on one server! r. Ismo
I should add that this confusion is probably caused by the Splunk advisory not being as accurate as it could be (as I understand it). Section 1b is not a vulnerability by itself, so the label for "1." should really say "both" of the following conditions, not "one."
This started out as a question, but is now just an FYI. Similar to this post, this week I received an old vulnerability notice from Tenable about my Splunk instance. We'd previously remediated this issue, so it was odd that it suddenly showed up again.

Vulnerability details:
https://packetstormsecurity.com/files/144879/Splunk-6.6.x-Local-Privilege-Escalation.html
https://advisory.splunk.com/advisories/SP-CAAAP3M?301=/view/SP-CAAAP3M
https://www.tenable.com/plugins/nessus/104498

The details in the articles are light, except for saying to review the directions here for running Splunk as non-root: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/RunSplunkasadifferentornon-rootuser

Tenable also doesn't give details about exactly what it saw... it just says, "The current configuration of the host running Splunk was found to be vulnerable to a local privilege escalation vulnerability."

My OS is RHEL 7.x. I'm launching Splunk using systemd with a non-root user, and I have no init.d-related files for Splunk. My understanding is that launching with systemd eliminates the issue, since that way Splunk never starts with root credentials.

Per Splunk's own advisory, any Splunk system is vulnerable if it satisfies one of the following conditions:
a. A Splunk init script created via $SPLUNK_HOME/bin/splunk enable boot-start -user on Splunk 6.1.x or later.
b. A line with SPLUNK_OS_USER= exists in $SPLUNK_HOME/etc/splunk-launch.conf

In my case, this is an old server, and at one point we did run the boot-start command, which changed the line in $SPLUNK_HOME/etc/splunk-launch.conf that sets SPLUNK_OS_USER. Although we had commented out the line, the Tenable regex is apparently broken and doesn't realize the line was disabled with a hash. Removing the line entirely made Tenable stop reporting the vulnerability. I assume their regex was only looking for "SPLUNK_OS_USER=<something>", so it missed the hash.

Anyway, hope this helps someone.
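As a rough illustration of the regex problem (the actual Tenable check isn't public, so this is an assumption about its behavior):

# Matches the line even when it is commented out with a hash:
grep 'SPLUNK_OS_USER=' $SPLUNK_HOME/etc/splunk-launch.conf
# Only matches an active, uncommented setting:
grep -E '^[[:space:]]*SPLUNK_OS_USER=' $SPLUNK_HOME/etc/splunk-launch.conf

The first pattern happily matches "#SPLUNK_OS_USER=splunk", which would explain the false positive.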
Hi, for the examples mentioned, I might suggest taking a look at the built-in hostmetrics receiver, which you can use to monitor processes like you would with "ps -ef": https://docs.splunk.com/observability/en/gdi/opentelemetry/components/host-metrics-receiver.html

There are also some available receivers for MQ products like ActiveMQ that can provide an MQ query count: https://docs.splunk.com/observability/en/gdi/monitors-messaging/apache-activemq.html#activemq

I can't personally think of an option to invoke a custom command from a receiver, but perhaps another way to approach that goal would be to have a custom command that runs independently of the collector and directs its output to an existing receiver. For example, if your command can generate output in a format that a receiver is listening for, that would be a good way to ingest that metric. Here is an article that discusses that idea: https://opentelemetry.io/blog/2023/any-metric-receiver/
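As a minimal sketch of the first suggestion (the interval and exporter name are assumptions; adjust them to your collector config), enabling the process scraper of the hostmetrics receiver might look like:

receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      process:       # per-process metrics, similar to what "ps -ef" shows

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [signalfx]   # assumed exporter in the Splunk distribution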
Have you found a resolution? I'm having the same issue.
I need to migrate our cluster master to a new machine. It currently has these roles:
Cluster Master
Deployment Server
Indexer
License Master
Search Head
SHC Deployer

I already migrated the License Master role to the new server and it's working fine. I've been trying to follow the documentation here: https://docs.splunk.com/Documentation/Splunk/8.2.2/Indexer/Handlemanagernodefailure

From what I gather, I need to copy all the files in /opt/splunk/etc/deployment-apps, /opt/splunk/etc/shcluster, and /opt/splunk/etc/master-apps, plus anything that's in /opt/splunk/etc/system/local. Then add the passwords in plain text to server.conf in the local folder, restart Splunk on the new host, and point all peers and search heads to the new master in their respective local server.conf files.

Is there anything else that needs to be done, or would this take care of switching the cluster master entirely? And is there a specific order in which to do things?
HEC ACKs require the client to specifically ask for the status. Does your HEC client do that? It can't just throw events at Splunk and hope to get an ACK. The client has to ask, "did you index it yet?" See https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/AboutHECIDXAck#Query_for_indexing_status
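Per the linked docs, the client polls the ack endpoint with the ackId values it received from its event posts. A sketch with curl (the host, channel GUID, and token here are placeholders):

curl -k "https://splunk.example.com:8088/services/collector/ack?channel=FE0ECFAD-13D5-401B-847D-77833BD77131" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"acks": [0, 1, 2]}'

The response marks each ackId true once the corresponding events are indexed, e.g. {"acks": {"0": true, "1": false, "2": true}}. Once an ackId is queried and acknowledged, the server can drop it from its pending list.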
Hello,

We set up HEC HTTP input for several flows of data with their related tokens, and we added the ACK feature to this configuration (following https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/AboutHECIDXAck). We work with a distributed infrastructure: 1 search head, two indexers (no cluster).

All was OK with HEC, but after some time we got our first error event:

ERROR HttpInputDataHandler [2576842 HttpDedicatedIoThread-0] - Failed processing http input, token name=XXXX [...] reply=9, events_processed=0
INFO HttpInputDataHandler [2576844 HttpDedicatedIoThread-2] - HttpInputAckService not in healthy state. The maximum number of ACKed requests pending query has been reached.

The server busy error (reply=9) leads to unavailability of HEC, but only for the token(s) where the maximum number of ACKed requests pending query has been reached. Restarting the indexer is enough to get rid of the problem, but by then many logs have been lost.

We did some searching and tried to customize some settings, but we only succeeded in delaying the 'server busy' problem (from 1 week to 1 month). Has anyone experienced the same problem? How can we keep those pending query counters from growing? Thanks a lot for any help.

etc/system/local/limits.conf

[http_input]
# The max number of ACK channels.
max_number_of_ack_channel = 1000000
# The max number of acked requests pending query.
max_number_of_acked_requests_pending_query = 10000000
# The max number of acked requests pending query per ACK channel.
max_number_of_acked_requests_pending_query_per_ack_channel = 4000000

etc/system/local/server.conf

[queue=parsingQueue]
maxSize=10MB
maxEventSize = 20MB
maxIdleTime = 400
channel_cookie = AppGwAffinity

(channel_cookie is set because we are using a load balancer, so the cookie is also set on the LB)
You must be running version 3.0.0 or later to upgrade to version 5.3.0.  See the docs at https://docs.splunk.com/Documentation/PCI/5.3.0/Install/Upgradetonewerversion
This ERROR happens when there are a lot of files being monitored and `parallelIngestionPipelines` is set to a high value. Multiple threads then try to update the fishbucket at the same time: the first thread creates the temp file `snapshot.tmp`, and if it is still in the middle of updating the fishbucket, the other threads log the ERROR above.
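If that is the cause, one mitigation to consider (a sketch, not a blanket recommendation) is lowering the pipeline count in server.conf on the affected forwarder, so fewer threads contend for the fishbucket:

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
# Default is 1; higher values mean more ingestion threads competing
# to update the fishbucket snapshot at the same time.
parallelIngestionPipelines = 1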
Using sort is fine if the number of items is not too large. Sorting a large number of items is time-consuming, and there is a limit in Splunk. Because of that limit, sorting the items and then selecting the first 10 might produce a wrong result. To avoid this, I filter out all items above/below a threshold that is specific to the problem. For instance: 50,000 records are processed, and more than 49,000 of them complete within 2 seconds, but a few take longer, so I set the threshold to 2 seconds. However, if there are only a few records, e.g., 10, the Top 10 list might be empty because all of them fall below the 2-second threshold.
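A sketch of that approach in SPL (the index, sourcetype, and duration field are examples matching the scenario above):

index=example sourcetype=example
| where duration > 2
| sort - duration
| head 10

Because the where clause discards the ~49,000 fast records first, the sort only ever sees the few slow ones and stays well under the sort limit.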
Thanks @PickleRick @isoutamo 
Hi Team, we are running Splunk v9.1.1 and need to upgrade the PCI app from v4.0.0 to v5.3.0. I am trying to find out the upgrade path, i.e., which version it has to be on before it can be upgraded to 5.3.0.
It looks like you have a placeholder comment where you want to set a field called splunk_url? Which parts of the URL you listed are static and which are dynamic, and how do the dynamic parts relate to the fields present in your events at the point where the eval is done?
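For example (the base URL and field names here are hypothetical, since the actual mapping isn't shown), a concatenating eval could look like:

| eval splunk_url = "https://splunk.example.com/en-US/app/search/my_dashboard?form.host=" . host . "&form.id=" . event_id

The static parts go in quoted strings and the dynamic parts come from event fields, joined with the "." concatenation operator.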
Thank you for sharing your inputs.