All Posts

This is different from what you originally asked for. Worse than that, the expected output is subtly different from your input events. Can you please explain precisely how the input events are to be processed to give the expected output?
The server team applied patches, which stopped nginx from running.
@diogofgm thanks, I keep forgetting to use btool. When I run the command you suggested, I see the [default] section earlier than my specific index stanzas like [ubuntu] and [rhel]. So I assume that whatever comes first under [default] (in my case, frozenTimePeriodInSecs) is what applies, and not what I have under [ubuntu] or [rhel], correct? Thanks for your help.
I have a timechart that shows a calculated value split by hostname, e.g.:

[[search]] |  | eval overhead=(totaltime - routingtime) | timechart span=1s eval(round(avg(overhead),1)) by hostname

What I am trying to do is also show the calculated overhead value not split by hostname:

[[search]] |  | eval overhead=(totaltime - routingtime) | timechart span=1s eval(round(avg(overhead),1))

How do I show the split-out overhead values and the combined overhead value in the same timechart?
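One possible approach is a sketch like the following (untested against your data; it assumes both searches run over the same time range so the 1-second buckets line up row-for-row, and keeps your [[search]] placeholder for the base search): run the split-by-hostname timechart, then append the overall average as one extra column with appendcols.

[[search]] |  | eval overhead=(totaltime - routingtime)
| timechart span=1s eval(round(avg(overhead),1)) by hostname
| appendcols
    [ [[search]] |  | eval overhead=(totaltime - routingtime)
      | timechart span=1s eval(round(avg(overhead),1)) as overall ]

The "overall" column name is an arbitrary choice; the chart will then draw one series per hostname plus a combined series.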
This is an example of the structure of my data and the query I am currently using. I have tried around ten different solutions based on various examples from stackoverflow.com and community.splunk.com, but I have not figured out how to change this query so that eval Tag = "Tag1" can become a multivalue list, e.g. eval Tags = ["Tag1", "Tag4"], and I get entries for all tags that exist in that list. Could someone guide me in the right direction?

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| eval Tag = "Tag1"
| spath
| foreach *ReportTags{} [| eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower(Tag), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)
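One possible direction (a sketch, not a definitive answer; the tag list is hardcoded in the in() call, and mvmap requires Splunk 8.0+): first collect all ReportTags values into one multivalue field, then keep only the values that appear in your wanted list.

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| spath
| foreach *ReportTags{} [| eval all_report_tags=mvappend(all_report_tags, '<<FIELD>>')]
| eval matched=mvmap(all_report_tags, if(in(all_report_tags, "Tag1", "Tag4"), all_report_tags, null()))
| eval matched=mvdedup(matched)
| stats values(matched)

Inside mvmap, the field reference takes each single value in turn, so in() tests one tag at a time against the list.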
Hi team, is there a way to connect the Splunk Cloud Platform with Splunk on-prem, in order to send a specific index to Splunk on-prem? The client does not allow modifications to the universal forwarder agents. Regards
@danielbb Please have a look.   
Hi @dude49 -- I'm seeing this exact error message. Any memory of what the issue was?
Thank you for your insight. I do see it via https://<indexer>:8089
Five years in the future, I have this exact problem. @siddharthfultar long shot, but did you ever find an answer?
There are some limitations on which licenses you can stack to count toward a combined license, and I don't know how it behaves if there are violations of those rules within one stack. Could that be your issue? You could check your current license stack and, if needed, remove the old licenses and add just this developer license locally. As already said, this license has no limit on the number of users, but the free license, for example, does.
There are two ids, ABC00000000001 and ABC00000000002.

ABC00000000001 has the event types TRANSFER and MESSAGES:

[21.12.2024 00:31.37] [] [] [INFO ] [Application_name] - Updating DB record with displayId=ABC0000001; type=TRANSFER
[21.12.2024 00:32.37] [] [] [INFO ] [Application_name] - Updating DB record with displayId=ABC0000001; type=MESSAGES

ABC00000000002 has these events:

[21.12.2024 00:33.37] [] [] [INFO ] [Application_name] - Updating DB record with displayId=ABC0000002; type=TRANSFER
[21.12.2024 00:34.37] [] [] [INFO ] [Application_name] - Updating DB record with displayId=ABC0000002; type=MESSAGES
[21.12.2024 00:35.37] [] [] [INFO ] [Application_name] - Updating DB record with displayId=ABC0000002; type=POSTING
[21.12.2024 00:35.37] [] [] [INFO ] [Application_name] - Sending message to  Booked topic ver. 1.0 with displayId=ABC0000002
[21.12.2024 00:35.37] [] [] [INFO ] [Application_name] - Sending message to  Booked topic ver. 2.0 with displayId=ABC0000002

My search:

index=ABC source=XYZ | fillnull value="SENDING" type | stats values(type) as types by displayId

Expected output:

ABC0000001 - TRANSFER
             MESSAGES

ABC0000002 - TRANSFER
             MESSAGES
             POSTING
             Sending message to Common Booked topic ver. 1.0
             Sending message to Common Booked topic ver. 2.3

But the actual output is:

ABC0000001 - TRANSFER
             MESSAGES
             Sending

ABC0000002 - TRANSFER
             MESSAGES
             POSTING
             Sending
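One possible sketch (the rex patterns are assumptions about the log format shown above, not extractions taken from the actual sourcetype): instead of filling missing type values with a fixed string, extract the full "Sending message ..." text and use it as the type.

index=ABC source=XYZ
| rex "type=(?<type>\w+)"
| rex "(?<sending>Sending message to\s+.+?ver\. \d+\.\d+)"
| eval type=coalesce(type, sending)
| stats values(type) as types by displayId

With this, events that lack a type= field get the extracted sending-message text instead of the constant "SENDING".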
Also, the Splunk, DBX, OS, Java, and JDBC driver versions could help us.
One option is to use SC4S: https://splunk.github.io/splunk-connect-for-syslog/main/
Splunk forwarders don't support a network load balancer (NLB) between forwarders and indexers. The only place where you could use one is with HEC.
You should read this as a starting point to understand Splunk configuration file precedence: https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles Then you should also understand that precedence depends on whether you are indexing or searching. But as @richgalloway said, the best way to check it is btool with the --debug option.
Port 8089 is for Splunk's internal management communication between nodes. For example, all traffic from a search head to its indexer peers goes to this port. You can also use REST calls against it to manage nodes, get information, or even run saved searches. Port 8000 is normally for GUI access. Here is one diagram of the ports and how they are connected: https://community.splunk.com/t5/Deployment-Architecture/Diagram-of-Splunk-Common-Network-Ports/m-p/116657
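As a quick illustration of the management port in action (assuming you have permission to use the rest command), this search queries the local management REST API on 8089 from within Splunk:

| rest /services/server/info splunk_server=local
| fields serverName version

The same endpoint can be reached externally with an HTTPS client against https://<host>:8089, which is why plain http:// requests to that port fail.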
Hello, Can someone please provide the eksctl command line or command line in combination with a cluster config file that will provide an EKS cluster (control plane and worker node(s)) that is resourced for installation of the splunk-operator and then experimentation with standalone Splunk Enterprise configurations? Thanks, Mark
We see the following on the server via ss -tulpn:

tcp LISTEN 0 128 0.0.0.0:8089 0.0.0.0:* users:(("splunkd",pid=392724,fd=4))

However, the browser at http://<Indexer>:8089 returns ERR_CONNECTION_RESET, while http://<Indexer>:8000 works as expected. What can it be?
Use the btool command to see which settings will take effect the next time Splunk restarts. splunk btool --debug indexes list