All Posts

Hi @hazem

Will the UF continue sending data to both indexers? No, it will only send data to the available indexer (mysplunk_indexer2).

Will the UF detect that mysplunk_indexer1 is unreachable? Yes, the UF will detect that it is unreachable and automatically adjust its forwarding strategy.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
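For reference, a minimal outputs.conf sketch of this behaviour, reusing the hostnames from the question. With two servers in one target group, the UF load-balances across them and automatically drops an unreachable peer until it comes back; autoLBFrequency shown with its default value for illustration:

```
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997
# How often (in seconds) the UF switches between indexers when load balancing
autoLBFrequency = 30
```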
Dear all,

I have the following outputs.conf configuration:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997

[tcpout-server://mysplunk_indexer1:9997]

Could you please clarify the Universal Forwarder (UF) behavior in the event that mysplunk_indexer1 goes down? Will the UF continue sending data to both indexers despite mysplunk_indexer1 being down? Or will the UF detect that mysplunk_indexer1 is unreachable and stop forwarding traffic to it?
Hi @yeahnah @gcusello

I tried it the way shown below, but the unique user count does not match. Why do I need to provide a specific JSON? I want to fetch all events from the Splunk log and get the unique user list for each specific group. A group can be represented as [group 1, group 2] or [group1]. I then want the unique user list for [App.Au1, App.Au2] in one row and the unique user list for [App.Au1] in a second row.
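A sketch of the kind of search that might produce one row per group with its unique user list. This is only an illustration: the index, sourcetype, and the field names group and user are assumptions and would need to match your actual extracted fields:

```
index=your_index sourcetype=your_sourcetype
| stats values(user) AS unique_users dc(user) AS unique_user_count BY group
```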
Hi @sufs2000

Have a look at the image below - does this help you work out the settings required to make the colour depend on the value?

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @shabamichae

What do your monitor stanzas currently look like for monitoring these files? Do the logs roll to a "logName.log.1" format (.1 being yesterday)? If so, you may be able to update your existing monitor stanzas to add a whitelist (see https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/Monitorfilesanddirectorieswithinputs.conf#:~:text=whitelist%20%3D%20%3Cregular%20expression%3E):

whitelist = <regular expression>
If set, the Splunk platform monitors files whose names match the specified regular expression.

## inputs.conf ##
[monitor:///var/log/*]
index = syslog
sourcetype = example
..etc..
whitelist = .*\.1$

Also check out https://docs.splunk.com/Documentation/Splunk/latest/Data/Specifyinputpathswithwildcards

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @AL3Z

Okay, so that tells us that the inputs on the UF should be working. However, the single hostname in the _internal log is inconclusive: if the UF is on the same server as the main instance, it would have the same hostname unless you have specifically modified the serverName on one of the instances. As @gcusello mentioned, having both on the same server/machine makes things more complicated.

Essentially, what we're trying to establish here is whether the data isn't flowing from the UF, or the input isn't working. I'm starting to suspect that the data isn't leaving the UF, so I think it would be good to establish proof either way.

If you search "index=_internal source=*splunkd.log", how many sources do you see in the interesting fields on the left? If the UF is sending, you should see 2.

How have you configured the forwarding of the data from the UF to the main instance, and how have you configured the main instance to listen (presumably on port 9997)?

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
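One way to make that check explicit might be a search like the following, which counts recent internal events per host and source so you can see at a glance whether both instances are reporting (a sketch; the time range is arbitrary):

```
index=_internal source=*splunkd.log earliest=-15m
| stats count BY host, source
```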
Hi @livehybrid,

I'm seeing only one hostname in the internal logs. Yes, the windows_logs index exists on the main Splunk instance. When I ran the btool command I could see the Windows inputs list.

Thanks.
That's another way to tackle this problem. The difference is that my solution is (or at least can be) synchronous - the search runs when your user opens the dashboard - while @gcusello's needs to run on a schedule, and you display its results asynchronously.
Since your post was a reply to a very old thread, I moved it into its own thread for greater visibility (old thread for reference: https://community.splunk.com/t5/Getting-Data-In/What-happens-when-the-forwarder-is-configured-to-send-data-to-a/m-p/312596#M58584 ).

As for your question: the error is the result of a search running the mcollect command. Judging by the host naming, I suppose you have a distributed architecture. Are you sure you have properly configured data routing? Events generated on your SHs should be properly routed to the indexers. Otherwise you can end up in a situation like this: the indexes exist on your indexers, but the events are generated by collect or mcollect on the SHs, and since they are not forwarded to the indexers, your SHs try to index them locally, where the destination indexes may not exist and no last-chance index is configured.
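For reference, a minimal sketch of what forwarding SH-generated data to the indexers might look like in outputs.conf on the search head. The group name and hostnames are placeholders; adjust for your environment:

```
# outputs.conf on the search head (placeholders - adjust to your environment)
[tcpout]
defaultGroup = primary_indexers
# Do not index locally; forward locally generated events
# (collect/mcollect output, _internal, etc.) to the indexer tier
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```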
I think I'd try to simply use logrotate or a custom script to move yesterday's logs to another directory, from which they would be ingested normally with a monitor input.
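As a sketch of that custom-script idea - assuming the logs roll to a *.log.1 name; move_rolled_logs is a hypothetical helper and both paths are placeholders:

```shell
# Sketch: move yesterday's rolled logs (e.g. app.log.1) out of the live
# log directory into a staging directory that Splunk monitors.
# move_rolled_logs is a hypothetical helper; paths are placeholders.
move_rolled_logs() {
  log_dir="$1"
  stage_dir="$2"
  mkdir -p "$stage_dir"
  for f in "$log_dir"/*.log.1; do
    [ -e "$f" ] || continue   # glob matched nothing; skip
    mv "$f" "$stage_dir/"
  done
}

# Example (placeholder paths): run daily from cron, then point the
# Splunk monitor input at the staging directory instead of the live one.
# move_rolled_logs /var/log/myapp /var/log/myapp/ingest
```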
With the level of vagueness in your question, the only possible response is "something is wrong". We don't know what your code looks like, we don't know your infrastructure, and we don't know what results your script yields - both in terms of general return codes or errors from the whole script and any intermediate results. It's hard to say anything with so little information.
Is it possible to modify the owner?

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode description=baz \
  --data-urlencode 'search=`get_notable_index`' \
  --data-urlencode owner="test"
There are two things to tackle here.

One is general memory usage. It can be caused by many different things depending on the component and its activity, but most typically, the more searching you do, the more memory you use.

Another thing is swap. I'm not a big fan of swap use in modern scenarios. OK, some small amount of swap to let the system move some "running but not quite" daemons out of the way might be useful, but nothing more. If your main task (in your case, splunkd) starts swapping out, you get into a loop where the system cannot keep up with requests for memory, so it starts swapping, so it cannot allocate any more memory, so it wants to swap some more... I prefer my systems with little or no swap at all. It's very often better for the user to simply kill the process due to memory exhaustion and restart it than to wait for it to crash badly for the same reason, but after a long period of heavy I/O use, possibly affecting other components should you be using shared storage infrastructure.
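If you do keep some swap around, one common Linux knob for discouraging the kernel from swapping out process memory is vm.swappiness. A sketch of a persistent setting - the file name and value are illustrative, not a recommendation for every workload:

```
# /etc/sysctl.d/99-splunk-swap.conf (sketch; value is illustrative)
# Lower values make the kernel prefer reclaiming page cache
# over swapping out anonymous (process) memory.
vm.swappiness = 10
```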
Hi @AL3Z,

Good for you, see you next time!

Let us know if we can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking
Giuseppe

P.S.: Karma Points are appreciated by all the contributors.
Hi @L_Petch,

Schedule an alert that writes the indexes' status to a lookup or a summary index.

Ciao.
Giuseppe
You can define the underlying search as a report and use it to power the dashboard panel. Then set the report to be run as owner instead of the calling user.
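If you save the underlying search as a report, the relevant knob in savedsearches.conf is dispatchAs. A sketch - the stanza name and search are placeholders, not your actual report:

```
# savedsearches.conf (sketch; "index_status_report" is a placeholder name)
[index_status_report]
search = | tstats count WHERE index=* BY index
# Run the report as its owner rather than the invoking user,
# so panel viewers see results across all indexes
dispatchAs = owner
```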
Hello,

I have a dashboard that checks all indexes and displays the event count for today and the last write time. This allows users of the dashboard to raise an alert if an index has not been written to within a certain amount of time.

My issue is that the dashboard runs when the user clicks into it and runs the searches using their permissions, as expected. However, they do not have access to all indexes, so they cannot see the stats for all of them. What is the easiest way to change this so that they can see an event count for all indexes without having to give them access to the indexes?
Is it possible to change the owner?

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions \
  --data-urlencode name=notable_suppression-foo \
  --data-urlencode description=bar \
  --data-urlencode 'search=`get_notable_index` _time>1737349200 _time<1737522000' \
  --data-urlencode disabled=false \
  --data-urlencode owner="new_user"
We have a scenario where we roll logs every day. We want Splunk to index the log file for yesterday only; we don't want to ingest today's log files. What specific setting do I require in my inputs.conf file to only ingest yesterday's data?

ignoreOlderThan = 1d also ingests today's log files, which I do not want.
I know it's been a long time, but I'm getting the same problem now. Did you manage to solve it?