All Posts

Hi @hazem , first of all, the last row isn't mandatory; it's an old configuration style, and if you use it you should add one row for each server. Anyway, if you configure more than one Indexer, logs are forwarded to all the Indexers, changing destination every 30 seconds using a round-robin algorithm for load balancing. Then, if an Indexer isn't available, the Forwarder tries another one; if no Indexers are available, it saves logs in a local cache and forwards them when the connection is established again. Ciao. Giuseppe
We are planning to on-board Akamai platform logs to Splunk. We are following this link to implement it - SIEM Splunk connector.   In the process we have installed this Akamai add-on - Akamai SIEM Integration | Splunkbase.   When we go to Settings > Data Inputs as mentioned here - SIEM Splunk connector - we are unable to find this data input: Akamai Security Incident Event Manager API.   And we are getting the following error in Splunk after installing the add-on.   Deployer       Search head   Can you help us with this challenge? We are stuck at "data inputs". I think we need to perform these prerequisites to get this Akamai add-on (Modular Input) working.   Please help us install Java on our Splunk instance, and confirm whether KVStore is installed and working correctly.
Good day, unfortunately this did not prompt a triggered alert even after changing the usage value to a lower number when testing it. Thank you though.
@cpetterborg can you please help me with how to install Java on our Splunk instance?
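Not an official procedure - just a rough sketch of how you might check for Java on a Linux Splunk host before installing a JRE for the modular input (the package names below are assumptions; adjust for your distro):

```shell
# Check whether Java is already on the PATH for the splunk user.
# If not, print the (assumed) package-manager commands to install a JRE.
if command -v java >/dev/null 2>&1; then
  echo "java present: $(java -version 2>&1 | head -n 1)"
else
  echo "java missing - try one of:"
  echo "  sudo apt-get install -y default-jre     # Debian/Ubuntu (assumed package name)"
  echo "  sudo yum install -y java-11-openjdk     # RHEL/CentOS (assumed package name)"
fi
```

After installing, restart Splunk so the modular input picks up the new PATH.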
Morning, Splunkers! I've been running a dashboard that monitors the performance of the various systems my customer uses, and I recently switched all of my timechart line graphs over to the Downsampled Line Chart because it allows a user to zoom in on a specific time/date range that is already displayed (most of my customer's users aren't Splunk-savvy in the slightest). My customer has users literally all over the country, so our Splunk is set to show all times as UTC by default for every account. The problem is the Downsampled Line Chart insists on showing everything in local time, regardless of what our account configurations are set to, and I can't find any documentation on how to change that (I'm not an admin, so I can't just go into settings and start editing configuration files). Does anybody have any idea how to fix this? I'd hate to have to give up the functionality of the chart because it won't show the same times for people on opposite sides of the country, but I'm out of options, here.
In terms of further breakdown to the previous answer:

- Automatic Failover: If mysplunk_indexer1 goes down, the UF will detect the failure and automatically stop sending data to that indexer.
- Continued Forwarding to Available Indexers: The UF will continue forwarding data to mysplunk_indexer2:9997. The forwarder does not stop forwarding entirely but rather distributes the load among the remaining available indexers.
- Retry Logic: The UF will periodically attempt to reconnect to mysplunk_indexer1. Once it becomes available again, data will resume being sent to it.
- Load Balancing (if applicable): If both indexers were previously receiving traffic in a load-balanced manner (e.g., using autoLBFrequency), the UF will shift all the load to the remaining functional indexer.

Also, you might want to consider the following:

- If no indexers are available, events will be queued locally in memory (or on disk if a persistent queue is configured).
- Ensure you configure appropriate connectionTimeout and autoLBFrequency settings to optimize failover behavior.
- If useACK=true (for reliable delivery), the UF will hold events in its wait queue until an indexer acknowledges them.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
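As a sketch only (server names and values are illustrative, not a recommendation for your environment), the settings mentioned above would sit in outputs.conf roughly like this:

```
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = mysplunk_indexer1:9997,mysplunk_indexer2:9997
# switch destination every 30s when load balancing (default is 30)
autoLBFrequency = 30
# wait for indexer acknowledgement before discarding from the wait queue
useACK = true
# give up on an unresponsive indexer after 20s and try the next one
connectionTimeout = 20
```

Check the outputs.conf spec for your Splunk version before copying any of these values.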
Hi @hazem

Will the UF continue sending data to both indexers? No, it will only send data to the available indexer (mysplunk_indexer2).

Will the UF detect that mysplunk_indexer1 is unreachable? Yes, the UF will detect the unreachability and automatically adjust its forwarding strategy.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Dear all,

I have the following outputs.conf configuration:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997

[tcpout-server://mysplunk_indexer1:9997]

Could you please clarify the Universal Forwarder (UF) behavior in the event that mysplunk_indexer1 goes down? Will the UF continue sending data to both indexers despite mysplunk_indexer1 being down? Or will the UF detect that mysplunk_indexer1 is unreachable and stop forwarding traffic to it?
Hi @yeahnah @gcusello  I used it the way below, but the unique user count doesn't match. Why do I need to provide a specific JSON? I want to fetch all events from the Splunk log and get the unique user list for each group. A group can be represented as [group 1, group 2] or [group1]; I then want the unique user list of [App.Au1, App.Au2] in one row and the unique user list of [App.Au1] in a second row.
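It's hard to be specific without the actual events, but as a sketch (the index, sourcetype, and the user/group field names are assumptions), something like this would give one row per distinct group combination with its unique user list:

```
index=app_logs sourcetype=app_json
| eval group_key=mvjoin(group, ",")
| stats values(user) AS unique_users dc(user) AS unique_user_count BY group_key
```

The mvjoin keeps [App.Au1,App.Au2] and [App.Au1] as separate group_key values, so they land in separate rows as described above.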
Hi @sufs2000  Have a look at the image below, does this help you work out the settings required to have colour dependent on the value?   Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @shabamichae

What do your monitor stanzas currently look like for monitoring these files? Do the logs roll to a "logName.log.1" format (.1 being yesterday)? If so, you may be able to update your existing monitor stanzas to add a whitelist (see https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/Monitorfilesanddirectorieswithinputs.conf#:~:text=whitelist%20%3D%20%3Cregular%20expression%3E):

whitelist = <regular expression>
If set, the Splunk platform monitors files whose names match the specified regular expression.

## inputs.conf ##
[monitor:///var/log/*]
index = syslog
sourcetype = example
..etc..
whitelist = .*\.1$

Also check out https://docs.splunk.com/Documentation/Splunk/latest/Data/Specifyinputpathswithwildcards

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @AL3Z  Okay, so that tells us that the inputs on the UF should be working. However, the single hostname in the _internal log is inconclusive, since if the UF is on the same server as the main instance it would have the same hostname unless you have specifically modified the serverName on one of the instances. As @gcusello mentioned, having both on the same server/machine makes things more complicated. Essentially, what we're trying to establish here is whether the data isn't leaving the UF, or whether the input isn't working. I'm starting to suspect that the data isn't leaving the UF, so I think it would be good to establish some proof either way.  If you search "index=_internal source=*splunkd.log" - how many sources do you see in the interesting fields on the left? If the UF is sending, you should see 2. How have you configured the forwarding of data from the UF to the main instance, and how have you configured the main instance to listen (presumably on port 9997)? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @livehybrid , I'm seeing only 1 hostname in the internal logs. Yes, the windows_logs index exists on the main Splunk instance. When I ran the btool cmd I could see the windows inputs list. Thanks.
That's another way to tackle this problem. The difference is that my solution is (or at least can be) synchronous - the search gets run when your user opens the dashboard - while @gcusello's needs to be run on a schedule, and you display its results asynchronously.
Since your post was a reply to a very old thread, I moved it into its own thread for greater visibility (old thread for reference - https://community.splunk.com/t5/Getting-Data-In/What-happens-when-the-forwarder-is-configured-to-send-data-to-a/m-p/312596#M58584 ) And to your question - the error is the result of a search running the mcollect command. From the host naming, I suppose you have a distributed architecture. Are you sure you have properly configured data routing? Events generated on your SHs should be properly routed to the indexers. Otherwise you might get into a situation like this: you have the indexes on your indexers, but the events are generated using collect or mcollect on the SHs, and since they are not forwarded to the indexers, your SHs try to index them locally, where the destination indexes may not exist and no last-chance index is configured.
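For reference, a minimal sketch of the usual SH-to-indexer forwarding setup (host names are placeholders - check the Splunk docs on forwarding search head data before copying this):

```
# outputs.conf on the search heads
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997
```

With this in place, events written by collect/mcollect on the SH are forwarded to the indexers instead of being indexed locally.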
I think I'd try to simply use logrotate or a custom script to move yesterday's logs to another directory, from which they would normally be ingested with a monitor input.
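A minimal logrotate sketch of that idea (paths are assumptions, and the olddir directory must already exist):

```
# /etc/logrotate.d/myapp (sketch only)
/var/log/myapp/*.log {
    daily
    rotate 7
    missingok
    notifempty
    dateext
    olddir /var/log/myapp/archive
}
```

Splunk would then monitor /var/log/myapp/archive with a plain [monitor://...] stanza, picking up only the rotated previous-day files.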
With the level of vagueness in your question, the only possible response is "something is wrong". We don't know what your code looks like, we don't know your infrastructure, and we don't know what results your script yields - either in terms of general return codes or errors from the whole script, or any intermediate results. It's hard to say anything with so little information.
Is it possible to modify the owner?

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode description=baz \
  --data-urlencode 'search=`get_notable_index`' \
  --data-urlencode owner="test"
There are two things to tackle here.

One is general memory usage. It can be caused by many different things depending on the component and its activity, but typically the more searching you do, the more memory you use.

The other is swap. I'm not a big fan of swap use in modern scenarios. OK, a small amount of swap to let the system move some "running but not quite" daemons out of the way might be useful, but nothing more. If your main task (in your case, splunkd) starts swapping out, you get into a loop where the system cannot keep up with requests for memory, so it starts swapping, so it cannot allocate any more memory, so it wants to swap some more...

I prefer my systems with little or no swap at all. It's very often better for the user to simply kill the process due to memory exhaustion and restart it than to wait for it to crash badly for the same reason, but after a long period of heavy I/O use, possibly affecting other components if you are using shared storage infrastructure.
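If you do want to keep splunkd from swapping on a Linux host, a common tuning (sketch only - the value is a judgment call, not a Splunk requirement) is to cap swappiness via sysctl:

```
# /etc/sysctl.d/99-low-swap.conf (sketch; apply with: sysctl --system)
vm.swappiness = 1
```

A low vm.swappiness tells the kernel to strongly prefer reclaiming page cache over swapping out anonymous memory such as splunkd's heap.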