All Posts


I would like some help creating a report that shows the difference in seconds between my event timestamp and the Splunk landing (indexing) timestamp. The query below gives me the difference between _indextime and _time, but I would also like the difference in seconds between GenerationTime (e.g. 2024-04-23 12:49:52) and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| table Node EventNumber GenerationTime Index_Time, _time, secondsDifference
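One possible way to get that extra column (a sketch, assuming GenerationTime is already extracted as a field in exactly the format shown; note that strptime will interpret it in the search-time timezone unless the value carries an offset, and generationLagSeconds is just an illustrative field name):

index=splunk_index sourcetype=splunk_sourcetype
| eval gen_epoch = strptime(GenerationTime, "%Y-%m-%d %H:%M:%S")
| eval generationLagSeconds = _indextime - gen_epoch
| eval secondsDifference = _indextime - _time
| convert ctime(_indextime) as Index_Time
| table Node EventNumber GenerationTime Index_Time _time secondsDifference generationLagSeconds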
In short, there are a number of installs and configurations you will have to perform. Follow this for the Splunk Stream components: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/AboutSplunkStream

High-level steps:
1. Design your architecture and the data flow - ensure all required ports can communicate.
2. Ensure your device can send netflow data.
3. Configure a Splunk index for the netflow data in Splunk.
4. Configure Splunk HEC in Splunk and point it to the netflow index.
5. Install and configure the Splunk netflow components (Independent Stream Forwarder, ISF) - this is where you point the device to send its netflow data, which the ISF then sends to Splunk HEC.

Follow this for Splunk HEC configuration: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
Follow this for your architecture - there is a good diagram covering all the components you need to consider: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/DeploymentArchitecture
Example of Stream in action: https://lantern.splunk.com/Data_Descriptors/DNS_data/Installing_and_configuring_Splunk_Stream
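For step 5, a rough sketch of what the Independent Stream Forwarder configuration for netflow can look like (the setting names are taken from the Stream documentation but should be verified against your version; the token, addresses and port are placeholders):

# streamfwd.conf on the Independent Stream Forwarder host
[streamfwd]
httpEventCollectorToken = <your HEC token>
indexer.0.uri = https://<your-hec-endpoint>:8088
netflowReceiver.0.ip = <address the ISF listens on>
netflowReceiver.0.port = 9995
netflowReceiver.0.protocol = udp
netflowReceiver.0.decoder = netflow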
Yes, all the indexers are up and running fine. So what would be a possible solution to mitigate the above issue?
@ITWhisperer Thank you for your assistance. I've successfully tested the query you provided and managed to retrieve the expected success and failure counts. However, in addition to this, I also need to retrieve the step count for each step from Query 1.

For instance, when I ran Query 1 individually for the last 30 days, I observed that 20 steps were extracted as a field along with the count of each step. Similarly, when I ran Query 2 for the same period, approximately 10 successes were extracted as a field with their respective counts. Likewise, when I ran Query 3, around 18 failures were extracted as a field with their counts.

So, with the combined search you provided, I'm able to obtain a total of 18 fields comprising both successes and failures. This is because if any of the step fields have either a success or a failure, it reflects in the query output. However, the other two step fields don't have successes or failures, and their step information is not present in the output of the provided query.

Therefore, we need to include the output of the first query, which includes the step field along with its count, in addition to the success and failure counts obtained from Queries 2 and 3. Since Query 1 captures all events, Query 2 captures successes, and Query 3 captures failures, we need to ensure that the first query's output is also included in the combined search. Could you please review and update accordingly? Thank you for your attention to this matter.
In the done handler, you only have access to the first row of the results, so you would only be able to set a token based on the first result. Is this what you are actually trying to do?
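For illustration, a token set from a done handler is usually driven either by $result.<field>$ from that first row or by the job metadata - a minimal sketch based on the documented 'job.resultCount' pattern (the token name is just an example):

<done>
  <condition match="'job.resultCount' == 0">
    <set token="show_no_results">true</set>
  </condition>
  <condition>
    <unset token="show_no_results"></unset>
  </condition>
</done>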
And each of your indexers is in a different site? And all your indexers are online and OK? (BTW, this seems a bit overly complicated; unless you're planning on some further expansion of your environment, you could perfectly well get away with a single-site setup.)
yes...
We have a total of 3 indexers and below is the site configuration:
Replication Factor: origin:1, site1:1, site2:1, site3:1, total:3
Search Factor: origin:1, total:2
Can you please help me with that? Thanks
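Expressed as server.conf settings on the cluster manager, that configuration would look roughly like this (a sketch using the multisite clustering attribute names; verify against your actual server.conf):

[clustering]
multisite = true
available_sites = site1,site2,site3
site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
site_search_factor = origin:1,total:2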
This appears to be a bug - I would log a support call for this. (You may need to upgrade.)
Thank you @deepakc for your reply. In our deployment we are using a UF on our rsyslog box; every data source is sent to the rsyslog server into a specific file, and then we use "Monitor Files & Directories" as the data input.

As you mentioned, "F5 is not switching it might be due to the continuous stream of syslog data being sent" - I believe the solution to the LB issue is to increase the size of these files on the UF itself, so in this scenario the second UF would only take over if the first one goes down under the continuous stream of syslog data.

As another suggestion, can we configure our LB to achieve the following: if the first Universal Forwarder becomes overwhelmed by the continuous stream of syslog data, another UF can take over and handle the load?

Please advise on the best practice in this scenario.
It depends on your architecture and your config. How many indexers do you have and in which sites are they? What is your siteRF/siteSF configuration?
As @gcusello already pointed out, the Universal Forwarder by default has a limit on data throughput, so if you have too many events coming in, the UF might not keep up with sending them out sufficiently quickly (the same can happen if your network bandwidth is too low).

The first question, though, is where the latency appears - look into the Forwarded Events log on your WEC machine and verify whether the events you see there are current or delayed; that's the first hint as to where to troubleshoot.

There are also two different modes in which WEF operates - in push mode the source machines send the events to the WEC machine, while in pull mode the WEC machine actively pulls the events from the source machines on a given schedule (I'm not sure if push mode is continuous only or whether it works with scheduled periods as well). That's something you should discuss with your Windows admins. (I suppose there can also be other factors causing WEF delays.)

Another thing that shows up when you exceed a certain performance level is that WinEventLog inputs seem to get capped at some point, and you can't go beyond that level using a single input (even though the machine itself is perfectly capable of handling additional load). In that case the solution is to create additional event log channels beside the "normal" Forwarded Events channel, split the events from your subscriptions across multiple channels, and of course ingest them with the UF from those channels. But that's a relatively advanced topic (on the Windows side).
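If the forwarder's default throughput cap does turn out to be the bottleneck, it is raised in limits.conf on the UF - a minimal sketch (0 removes the cap entirely; a finite value sized for your network is usually safer):

# limits.conf on the Universal Forwarder
[thruput]
maxKBps = 0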
Hi, could you please let me know how to resolve the issue? Thanks
The message says what your cluster is missing - an indexer located in site3 to which the bucket could be replicated.
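If you want to confirm which peers and sites the manager can currently see, a quick check (a sketch; run from the cluster manager) is:

splunk show cluster-status --verbose

which should list each peer with its site, and whether the replication and search factors are met.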
OK, the further we go down this thread, the more confusing it gets. You contradict yourself: in one place you say that it's a standalone search head, then in another you say that it's part of a cluster. So there are two possible scenarios:

1) It is indeed one of the search heads in a cluster, managed by a deployer, but you manually installed an app on just one of those search heads. That still doesn't make the server a standalone search head.

2) It is a standalone search head (not part of a search head cluster). It is _not_ managed by a deployer. It _might_ be managed by a deployment server. But it might as well be managed by something external.

So which one is it? Also, I would expect ThreatQ support to tell you it's not their problem, because it has nothing to do with the app itself - it's about your Splunk environment.
@ITWhisperer, I have used stats and I was able to match the data. I want to do one more implementation: I want to set a token based on the availability of Info.runtime_data{}. Not every event will have Info.runtime_data{}. I want to set a token if Info.runtime_data{} is present in the event for Info.Title, and unset that token if it is not present. I have tried it in a search query using an if condition, but I am not able to implement it in the dashboard.

<search>
  <query>index="abc" sourcetype="abc" Info.Title="$Title$"
| spath output=Runtime_data path=Info.runtime_data
| eval has_runtime = if(isnotnull(Runtime_data), "Yes", "No")
| table _time, has_runtime</query>
  <done>
    <condition match="has_runtime=Yes">
      <set token="tok_runtime">true</set>
    </condition>
    <condition match="has_runtime=No">
      <unset token="tok_runtime"></unset>
    </condition>
  </done>
</search>

This is my code; I am not sure whether the condition match is correct or not, but I'm not able to set or unset the token. Please suggest something.
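For reference, a sketch of how this is commonly handled (assuming the field paths shown in the post; the 'job.resultCount' pattern comes from the Simple XML token documentation, and the query is narrowed so the result count itself indicates whether runtime data exists):

<search>
  <query>index="abc" sourcetype="abc" Info.Title="$Title$"
| spath output=Runtime_data path=Info.runtime_data{}
| where isnotnull(Runtime_data)
| head 1</query>
  <done>
    <condition match="'job.resultCount' &gt; 0">
      <set token="tok_runtime">true</set>
    </condition>
    <condition>
      <unset token="tok_runtime"></unset>
    </condition>
  </done>
</search>

Since the done handler only sees the first result row (as noted above), collapsing the answer into the result count avoids depending on any particular row's field value. Note the {} suffix on Info.runtime_data{} - drop it if spath returns nothing with it.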
Hi, 1 bucket is stuck in the "fixup task pending" state with the below error. I tried restarting Splunk, re-syncing, and rolling the bucket, but it's not working. Can anyone suggest a possible solution to troubleshoot the issue?

Missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={ site3:1 }
Hi Phillip, you should be able to find log4j XML files under conf/log. You can then adjust the logging level there.
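For example, if the file turns out to be a log4j2-style XML, the level is typically adjusted on a Logger element like this (a sketch; the logger name is a placeholder for the component you want more detail from, and log4j 1.x XML uses a slightly different <logger>/<level value="..."/> layout):

<Loggers>
  <!-- placeholder logger name; point it at the package or class you want to change -->
  <Logger name="com.example.component" level="debug"/>
  <Root level="info">
    <!-- "console" must match an appender already defined in the same file -->
    <AppenderRef ref="console"/>
  </Root>
</Loggers>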
@karthi2809 Are you looking for this?

<form version="1.1" theme="dark">
  <label>Application</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="BankApp" searchWhenChanged="true">
      <label>ApplicationName</label>
      <choice value="*">All</choice>
      <search>
        <query> | makeresults | eval applicationName="Test1,Test2,Test3" | eval applicationName=split(applicationName,",") | stats count by applicationName | table applicationName </query>
      </search>
      <fieldForLabel>applicationName</fieldForLabel>
      <fieldForValue>applicationName</fieldForValue>
      <default>*</default>
      <prefix>applicationName="</prefix>
      <suffix>"</suffix>
      <change>
        <condition match="$value$==&quot;*&quot;">
          <set token="new_value">applicationName IN ("Test1" , "TEST2" , "Test3")</set>
        </condition>
        <condition>
          <set token="new_value">$BankApp$</set>
        </condition>
      </change>
    </input>
    <input type="dropdown" token="interface" searchWhenChanged="true">
      <label>InterfaceName</label>
      <choice value="*">All</choice>
      <search>
        <query> | inputlookup BankIntegration.csv | search $new_value$ | eval InterfaceName=split(InterfaceName,",") | stats count by InterfaceName | table InterfaceName </query>
      </search>
      <fieldForLabel>InterfaceName</fieldForLabel>
      <fieldForValue>InterfaceName</fieldForValue>
      <default>*</default>
      <prefix>InterfaceName="</prefix>
      <suffix>"</suffix>
      <change>
        <condition match="$value$==&quot;*&quot;">
          <set token="new_interface">InterfaceName IN ( "USBANK_KYRIBA_ORACLE_CE_BANKSTMTS_INOUT", "USBANK_AP_POSITIVE_PAY", "HSBC_NA_AP_ACH", "USBANK_AP_ACH", "HSBC_EU_KYRIBA_CE_BANKSTMTS_TWIST_INOUT")</set>
        </condition>
        <condition>
          <set token="new_interface">$interface$</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        Dropdown Value = $BankApp$ <br/>
        new_value= $new_value$ <br/>
        new_interface = $new_interface$ <br/>
        | inputlookup BankIntegration.csv | search $new_value$ | eval InterfaceName=split(InterfaceName,",") | stats count by InterfaceName | table InterfaceName
      </html>
    </panel>
  </row>
</form>
Sounds like, in your custom Python script, you need to create a function / code that reads the inputs file, picks up the key=value as a variable, and uses that. This is a simple test showing that you can get the value you want - but you will have to develop the code in your Python script:

python3 -c 'print(open("/Splunk/etc/apps/$app/local/inputs.conf").read())' | grep 'value-name = value'
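If you would rather read the file natively in the Python script than shell out, a minimal sketch using configparser (the path, stanza and key names below are placeholders for whatever your inputs.conf actually contains, and this reads only the one file - it does not apply Splunk's default/local layering):

import configparser

def get_input_setting(conf_path, stanza, key, default=None):
    # inputs.conf is INI-like; disable interpolation so literal % characters don't break parsing
    parser = configparser.ConfigParser(strict=False, interpolation=None)
    parser.read(conf_path)
    if parser.has_option(stanza, key):
        return parser.get(stanza, key)
    return default

# placeholder path, stanza and key - substitute your own app and input names
value = get_input_setting(
    "/Splunk/etc/apps/my_app/local/inputs.conf",
    "script://./bin/my_script.py",
    "interval",
)
print(value)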