Hi @AL3Z
Okay, so that tells us that the inputs on the UF should be working. However, the single hostname in the _internal log is inconclusive: if the UF is on the same server as the main instance it would have the same hostname, unless you have specifically modified the serverName on one of the instances. As @gcusello mentioned, having both on the same server/machine makes things more complicated. Essentially what we're trying to establish here is whether the data isn't leaving the UF, or whether the input isn't working. I'm starting to suspect that the data isn't being sent from the UF, so I think it would be good to establish some proof either way.
If you search "index=_internal source=*splunkd.log", how many sources do you see in the Interesting Fields on the left? If the UF is sending, you should see 2. How have you configured the forwarding of data from the UF to the main instance, and how have you configured the main instance to listen (presumably on port 9997)?
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
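For anyone following along, a minimal forwarding setup usually looks something like the sketch below. The receiver hostname is an assumption for illustration; adjust it (and the port, if you are not using 9997) to your environment.

# On the UF: etc\system\local\outputs.conf
[tcpout]
defaultGroup = main_instance

[tcpout:main_instance]
server = your-splunk-host:9997

# On the main instance: etc\system\local\inputs.conf
[splunktcp://9997]
disabled = 0

After restarting both instances, the UF's own splunkd.log should show up as a second source under index=_internal on the main instance.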
Hi @livehybrid ,
I'm seeing only one hostname in the internal logs. Yes, the windows_logs index exists on the main Splunk instance. When I ran the btool command I can see the Windows inputs listed.
Thanks.
That's another way to tackle this problem. The difference is that my solution is (or at least can be) synchronous - the search gets run when your user opens the dashboard - while @gcusello 's needs to be run on a schedule and you display its results asynchronously.
Since your post was a reply to a very old thread I moved it into its own thread for greater visibility (old thread for reference - https://community.splunk.com/t5/Getting-Data-In/What-happens-when-the-forwarder-is-configured-to-send-data-to-a/m-p/312596#M58584 ).
And to your question - the error is the result of a search running the mcollect command. Judging by the host naming, I suppose you have a distributed architecture. Are you sure you have properly configured data routing? Events generated on your SHs should be routed to the indexers. Otherwise you can end up in a situation like this: you have the indexes on your indexers, but the events are generated with collect or mcollect on the SHs, and since they are not forwarded to the indexers, your SHs try to index them locally, where the destination indexes might not exist and no last-chance index is configured.
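As a rough sketch of what that routing usually looks like on a search head (the output group name and indexer hosts below are placeholders), outputs.conf would contain something like:

# outputs.conf on each SH - forward everything, index nothing locally
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

With this in place, events produced by collect/mcollect on the SH are shipped to the indexers, where the destination index actually exists.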
I think I'd try to simply use logrotate or some custom script to move the logs from yesterday to another directory, from which they would then be ingested with a normal monitor input.
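A minimal sketch of that approach, assuming the nightly rotation job moves yesterday's completed file into an archive directory such as /var/log/myapp/archive (the path, index, and sourcetype here are made-up examples):

# inputs.conf on the forwarder - monitor only the archive directory
[monitor:///var/log/myapp/archive/*.log]
index = my_app_index
sourcetype = my_app_logs
disabled = 0

Because only finished files from the previous day ever land in that directory, today's active log is never picked up, which sidesteps the ignoreOlderThan behaviour entirely.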
With the level of vagueness in your question the only response is "something is wrong". We don't know what your code looks like, we don't know your infrastructure, and we don't know what results your script yields - either in terms of return codes and errors from the whole script, or any intermediate results. It's hard to say anything with so little information.
Is it possible to modify the owner?

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions/notable_suppression-foo \
  --data-urlencode description=baz \
  --data-urlencode 'search=`get_notable_index`' \
  --data-urlencode owner="test"
There are two things to tackle here.
One is general memory usage. It can be caused by many different things depending on the component and its activity, but most typically the more searching you do, the more memory you use.
The other thing is swap. I'm not a big fan of swap in modern scenarios. OK, a small amount of swap to let the system move some "running but not quite" daemons out of the way might be useful, but nothing more. If your main task (in your case - splunkd) starts swapping out, you get into a loop where the system cannot keep up with requests for memory, so it starts swapping, which means it still cannot allocate more memory, so it swaps some more... I prefer my systems with little or no swap at all. It's very often better to simply have the process killed due to memory exhaustion and restart it than to wait for it to crash badly for the same reason, but only after a long period of heavy I/O that may affect other components if you are using shared storage infrastructure.
Hi @AL3Z ,
good for you, see you next time!
Let us know if we can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Hi @L_Petch ,
schedule an alert that writes the indexes' status to a lookup or to a summary index.
Ciao.
Giuseppe
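A minimal sketch of such a scheduled search, writing each index's event count for today and its last write time to a lookup (the lookup file name index_status.csv is just an example):

| tstats count AS events_today latest(_time) AS last_write WHERE index=* earliest=@d BY index
| eval last_write=strftime(last_write, "%Y-%m-%d %H:%M:%S")
| outputlookup index_status.csv

The dashboard panel then only needs | inputlookup index_status.csv, which users can read regardless of their index-level access.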
You can define the underlying search as a report and use it to power the dashboard panel, then set the report to run as its owner instead of the calling user.
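Roughly, the savedsearches.conf equivalent looks like the sketch below (the report name and search string are placeholders). The key setting, which as far as I recall corresponds to the "Run As: Owner" option in the report's permissions dialog, is dispatchAs = owner, so the report runs with the owner's index access rather than the viewer's; the dashboard panel then references the report by name instead of embedding an inline search.

[All Index Status]
search = | tstats count AS events_today latest(_time) AS last_write WHERE index=* earliest=@d BY index
dispatchAs = owner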
Hello,
I have a dashboard that checks all indexes and displays the event count for today and the last write time. This allows users of the dashboard to spot when an index has not been written to within a certain amount of time.
My issue is that the dashboard runs when the user clicks into it and, as expected, runs the searches with their permissions. However, they do not have access to all indexes, so they cannot see the stats for every index. What is the easiest way to change this so that they can see an event count for all indexes without having to give them access to the indexes?
Is it possible to change the owner?

curl -k -u admin:pass -X POST https://splunk:8089/servicesNS/nobody/SA-ThreatIntelligence/alerts/suppressions \
  --data-urlencode name=notable_suppression-foo \
  --data-urlencode description=bar \
  --data-urlencode 'search=`get_notable_index` _time>1737349200 _time<1737522000' \
  --data-urlencode disabled=false \
  --data-urlencode owner="new_user"
We have a scenario where we roll logs every day. We want Splunk to index only yesterday's log file; we don't want to ingest today's log files. What specific setting do I need in my inputs.conf to ingest only yesterday's data?
ignoreOlderThan = 1d also ingests today's log files, which I do not want.
I know it's been a long time, but I'm getting the same problem now. Did you manage to solve it?
Hello,
I need some help adding colour to my dashboard. I've got the below block sitting on my high-level dashboard view, but I want it to change colour (red or green) depending on the values of the underlying dashboard that it clicks through to, which I will share below.
This is the dashboard it displays when you click on the above. Is there some way that, if any of these 5 boxes do not display "OK", the top-level block (EazyBI) changes to red? Can anyone help me with that?
Please can you share the Python code you used to execute the test query so that we can help diagnose? Thanks
In that case @muhammadfahimma I think it is best to get this raised with Splunk Support; they should give you the reference number once it has been logged, and you can track it on the Release Notes page (https://docs.splunk.com/Documentation/ES/latest/RN/NewFeatures).
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
@AL3Z So you are seeing 2 hostnames in your internal logs? And/or sources from both:
C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log
and
C:\Program Files\Splunk\var\log\splunk\splunkd.log
Does the windows_logs index exist on your main Splunk instance?
In the context of the SplunkUniversalForwarder, can you run:
C:\Program Files\SplunkUniversalForwarder\bin\splunk cmd btool inputs list
Do your expected Windows inputs get listed?
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
Hi @livehybrid , @gcusello ,
After adding the inputs to C:\Program Files\SplunkUniversalForwarder\etc\system\local I am now able to see the logs in Splunk.
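For anyone finding this later, a rough example of what such an inputs.conf under etc\system\local might contain (the event log channels and the windows_logs index are just the ones discussed in this thread; adjust to your needs):

# C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf
[WinEventLog://Security]
index = windows_logs
disabled = 0

[WinEventLog://System]
index = windows_logs
disabled = 0

Remember to restart the UF after editing the file.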