All Posts

Hi, could you please let me know how to resolve this issue? Thanks.
The message says what your cluster is missing - an indexer located in site3 to which the bucket could be replicated.
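For reference, the policy behind that message lives in server.conf on the cluster manager; a minimal sketch, assuming a multisite cluster whose policy demands one copy in site3 (values are illustrative, not taken from the poster's environment):

    # server.conf on the cluster manager (illustrative values)
    [clustering]
    mode = manager
    multisite = true
    available_sites = site1,site2,site3
    site_replication_factor = origin:1, site3:1, total:2
    site_search_factor = origin:1, total:1

If no peer in site3 is up (or site3 was dropped from available_sites), the manager cannot satisfy site_replication_factor and the fixup stays pending.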
OK, the further we go down this thread, the more confusing it gets. You contradict yourself: in one place you say it's a standalone search head, in another that it's part of a cluster. So there are two possible scenarios:

1) It is indeed one of the search heads in a cluster, managed by a deployer, but you manually installed an app on just one of those search heads. That still doesn't make the server a standalone search head.

2) It is a standalone search head (not part of a search head cluster). It is _not_ managed by a deployer. It _might_ be managed by a deployment server, but it might as well be managed by something external.

So which one is it? Also, I'd expect ThreatQ support to tell you it's not their problem, because it has nothing to do with the app itself - it's about your Splunk environment.
@ITWhisperer, I have used stats and I was able to match the data. I want to do one more implementation: I want to set a token based on the availability of Info.runtime_data{}. Not every event will have Info.runtime_data{}. I want to set a token if Info.runtime_data{} is present in the event for Info.Title, and unset that token if it is not present. I have tried it in the search query using an if condition, but I am not able to implement it in the dashboard.

<search>
  <query>
    index="abc" sourcetype="abc" Info.Title="$Title$"
    | spath output=Runtime_data path=Info.runtime_data
    | eval has_runtime = if(isnotnull(Runtime_data), "Yes", "No")
    | table _time, has_runtime
  </query>
  <done>
    <condition match="has_runtime=Yes">
      <set token="tok_runtime">true</set>
    </condition>
    <condition match="has_runtime=No">
      <unset token="tok_runtime"></unset>
    </condition>
  </done>
</search>

This is my code; I am not sure whether the condition match is correct, but I am not able to set or unset the token. Please suggest anything.
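A likely direction for a fix (a sketch, untested against the poster's data): in Simple XML the <done> handler's match attribute takes an eval-style expression over $job.*$ / $result.*$ tokens, where $result.fieldname$ is the value from the first row of the final results, so the search should be reduced to a single row and the comparison written with the token form rather than a bare field name:

<search>
  <query>
    index="abc" sourcetype="abc" Info.Title="$Title$"
    | spath output=Runtime_data path=Info.runtime_data
    | eval has_runtime = if(isnotnull(Runtime_data), "Yes", "No")
    ``` max() over "Yes"/"No" yields "Yes" if any event had runtime data ```
    | stats max(has_runtime) as has_runtime
  </query>
  <done>
    <condition match="'$result.has_runtime$' == &quot;Yes&quot;">
      <set token="tok_runtime">true</set>
    </condition>
    <condition>
      <unset token="tok_runtime"></unset>
    </condition>
  </done>
</search>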
Hi, one bucket is stuck in the “fixup task pending” state with the error below. I tried restarting Splunk, re-syncing, and rolling the bucket, but it's not working. Can anyone suggest a possible way to troubleshoot this?

Missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={ site3:1 }
Hi Phillip, you should be able to find log4j XML files under conf/log; you can then adjust the logging level there.
@karthi2809

Are you looking for this?

<form version="1.1" theme="dark">
  <label>Application</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="BankApp" searchWhenChanged="true">
      <label>ApplicationName</label>
      <choice value="*">All</choice>
      <search>
        <query>
          | makeresults
          | eval applicationName="Test1,Test2,Test3"
          | eval applicationName=split(applicationName,",")
          | stats count by applicationName
          | table applicationName
        </query>
      </search>
      <fieldForLabel>applicationName</fieldForLabel>
      <fieldForValue>applicationName</fieldForValue>
      <default>*</default>
      <prefix>applicationName="</prefix>
      <suffix>"</suffix>
      <change>
        <condition match="$value$==&quot;*&quot;">
          <set token="new_value">applicationName IN ("Test1" , "TEST2" , "Test3")</set>
        </condition>
        <condition>
          <set token="new_value">$BankApp$</set>
        </condition>
      </change>
    </input>
    <input type="dropdown" token="interface" searchWhenChanged="true">
      <label>InterfaceName</label>
      <choice value="*">All</choice>
      <search>
        <query>
          | inputlookup BankIntegration.csv
          | search $new_value$
          | eval InterfaceName=split(InterfaceName,",")
          | stats count by InterfaceName
          | table InterfaceName
        </query>
      </search>
      <fieldForLabel>InterfaceName</fieldForLabel>
      <fieldForValue>InterfaceName</fieldForValue>
      <default>*</default>
      <prefix>InterfaceName="</prefix>
      <suffix>"</suffix>
      <change>
        <condition match="$value$==&quot;*&quot;">
          <set token="new_interface">InterfaceName IN ( "USBANK_KYRIBA_ORACLE_CE_BANKSTMTS_INOUT", "USBANK_AP_POSITIVE_PAY", "HSBC_NA_AP_ACH", "USBANK_AP_ACH", "HSBC_EU_KYRIBA_CE_BANKSTMTS_TWIST_INOUT")</set>
        </condition>
        <condition>
          <set token="new_interface">$interface$</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        Dropdown Value = $BankApp$ <br/>
        new_value = $new_value$ <br/>
        new_interface = $new_interface$ <br/>
        | inputlookup BankIntegration.csv | search $new_value$ | eval InterfaceName=split(InterfaceName,",") | stats count by InterfaceName | table InterfaceName
      </html>
    </panel>
  </row>
</form>
Sounds like you need, in your custom Python script, to create a function that reads the inputs file for the key=value pair as a variable and uses that. This is a simple test showing you can get the value you want, but you will have to develop the code in your Python script:

python3 -c 'print(open("/Splunk/etc/apps/$app/local/inputs.conf").read())' | grep 'value-name = value'
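A minimal sketch of the script side, assuming the app, stanza, and key names from the question (inputs.conf is INI-style, so the standard-library configparser can parse it; a production lookup script might instead use Splunk's bundled splunk.clilib.cli_common helpers, and note this reads local/ only, not the merged default+local view):

    import configparser
    import os

    # path to the app's local inputs.conf ("my_app" is a placeholder)
    conf_path = os.path.join(
        os.environ.get("SPLUNK_HOME", "/opt/splunk"),
        "etc", "apps", "my_app", "local", "inputs.conf",
    )

    # strict=False tolerates duplicate stanzas/keys; interpolation=None
    # keeps literal % characters in values from breaking the parse
    parser = configparser.ConfigParser(strict=False, interpolation=None)
    parser.read(conf_path)

    # [stanza-name] / value-name are the names from the question
    value = parser.get("stanza-name", "value-name")
    print(value)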
Hello, @isoutamo. Your assumption is correct, and I've tried your solution (which @gcusello also mentioned earlier) multiple times, but it didn't help. I think I'll wait for ThreatQ support to step in.
Also, see answers in this duplicate post https://community.splunk.com/t5/Dashboards-Visualizations/Passing-Token-values-to-Overlay-field-in-the-line-chart/m-p/684940#M56075  
A bit more information would be useful, but this is a start and is the general technique for combining two data types on a common field:

index=bla
| stats values(*) as * by Info.Title
There are a number of possibilities but probably the best way would be to use stats values() by Info.Title.
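A sketch of what that could look like for the two event shapes described in the question (field names assumed from the poster's description, untested):

index="abc" sourcetype="abc"
| spath
| stats values(Info.metadata{}) as metadata values(Info.runtime_data{}) as runtime_data by Info.Title

Each Info.Title then ends up on one row carrying the values from both event types.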
Hi @Cccvvveee0235, UFs send logs to indexers in near real time, not in real time: events are grouped and sent in batches at a frequency that depends on the available network bandwidth. There is also a configurable throughput limit (256 KBps by default) that you can lift entirely by adding maxKBps = 0 to limits.conf (in the [thruput] stanza) on the UF. The update frequency is 30 seconds, which you can also modify, even if I'd prefer to avoid this; in any case, when the UF reconnects it sends all pending logs, so you don't lose any data. Ciao. Giuseppe
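For reference, a minimal sketch of that setting (0 removes the cap; a specific KBps value raises it instead):

    # $SPLUNK_HOME/etc/system/local/limits.conf on the UF
    [thruput]
    # default is 256 KB per second; 0 = unlimited
    maxKBps = 0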
Hi all, I have created a dashboard for JSON data. There are two sets of data in the same index: one is Info.metadata{} and the other is Info.runtime_data{}, arriving in the same index as different events. Both event types have one common field, "Info.Title". How can I combine these two events?
You say "I have a single server w/ Splunk Enterprise installed on it, as well as SplunkForwarder".

Why did you install a forwarder onto the Splunk server instance as well? There is no need to do this, and it's not a normal thing to do, which is most likely why you are getting those conflict failures. I suspect you wanted to collect logs etc. from the Splunk instance, but the full Splunk instance has this functionality built in.

Keep the Splunk instance clean (only install apps/TAs etc.). Install the forwarder on the target hosts that you want to monitor for your logs etc.

If you now uninstall the forwarder from the Splunk server, you may get all sorts of errors and then need to re-install the Splunk server, as you may have overwritten various Splunk server files etc. ... messy.
I get the impression that your requirement can be reinterpreted as listing the last two installed versions and their install times. Is this accurate?

As @bowesmana suggested, this problem would be best solved by maintaining a lookup table, then working from there. Any search that does not use a static dataset like a lookup is bound to be inefficient because your lookback period cannot be predetermined.

As a proof of concept, here is a literal implementation of my interpretation of your requirement. The premise is that you make a search with sufficient coverage for the last two versions of the packages of interest. Assume that the search returns something like the following:

_time       host   package    version
2024-01-21  host1  somesoft1  1.2.1
2024-01-21  host2  somesoft2  2.2.3
2024-03-02  host1  somesoft1  1.2.5
2024-03-03  host2  somesoft2  2.3.0
2024-04-10  host1  somesoft1  1.2.10

You then apply the following:

<some search with sufficient history>
| stats max(_time) as _time by package version
| eval version = json_object("version", version, "install_time", _time)
| stats list(version) as version_installed by package
| eval version = json_extract(mvindex(version_installed, -1), "version"), "installed date" = json_extract(mvindex(version_installed, -1), "install_time")
| eval last_version = json_extract(mvindex(version_installed, -2), "version"), "last installed date" = json_extract(mvindex(version_installed, -2), "install_time")
| fieldformat "installed date" = strftime('installed date', "%F")
| fieldformat "last installed date" = strftime('last installed date', "%F")
| fields - version_installed

This should give a table like this:

package    installed date  last installed date  last_version  version
somesoft1  2024-04-10      2024-03-02           1.2.5         1.2.10
somesoft2  2024-03-03      2024-01-21           2.2.3         2.3.0

What the code really illustrates is the general approach of a semantic "join" without using the join command. stats is a lot more efficient in SPL. lookup, using binary search, is another very efficient method.

Here is an emulation that produces the mock search output above. Play with it and compare with real data.

| makeresults format=csv data="_time,host,package,version
2024-01-21,host1,somesoft1,1.2.1
2024-01-21,host2,somesoft2,2.2.3
2024-03-02,host1,somesoft1,1.2.5
2024-03-03,host2,somesoft2,2.3.0
2024-04-10,host1,somesoft1,1.2.10"
| eval _time = strptime(_time, "%F")
``` data emulation above ```
Hello, I want to fetch a value present in the inputs.conf file (/Splunk/etc/apps/$app/local), i.e.:

[stanza-name]
value-name = value

How can I retrieve this value and use it inside a Python lookup script (stored in /Splunk/etc/apps/$app/bin)? Thanks,
Hello everyone, please help me with fetching events from a Windows Event Collector. I installed a Universal Forwarder on a Windows Server 2022 host where all events from other computers are kept. I am trying to fetch all forwarded events from this Windows Server 2022 host to my Splunk indexer via the Splunk agent, but the agent only sends the events sometimes, not in real time. I can't see any errors in the SplunkForwarder events or on the Splunk indexer. I also used Splunk_TA_Windows to fetch events.
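For context, a WEC host like this is usually read with a stanza along these lines in the UF's inputs.conf (a sketch; ForwardedEvents is the channel where WEC subscriptions land):

    [WinEventLog://ForwardedEvents]
    disabled = 0
    renderXml = true

If events only arrive in bursts, the UF's thruput cap (maxKBps, discussed above) and the sheer volume of the ForwardedEvents channel are common suspects.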
Hi @obuobu, let me understand: you have Splunk Enterprise installed on Ubuntu, you have a Splunk Universal Forwarder installed on a Windows machine, and you want to see the logs from the Windows machine in Splunk; is that correct?

First, did you configure your Splunk Enterprise server to receive logs [Settings > Forwarding and Receiving > Receiving]?

Then, did you configure your UF (which I suppose is installed) to send logs to the Splunk Enterprise server?

Then, did you configure the inputs locally or using a Deployment Server?

For more info, see the ingestion process at https://docs.splunk.com/Documentation/Splunk/latest/Data/Usingforwardingagents

Ciao. Giuseppe
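To make the second step concrete, a minimal sketch of the UF side (hostname and group name are placeholders; 9997 is the conventional receiving port you would enable on the Splunk Enterprise server):

    # outputs.conf on the Universal Forwarder
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = splunk-enterprise.example.com:9997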
I guess I'm only seeing half the picture here. I understand you're trying to make a lookup into an index, so the idea of rolling forward data is to make 'yesterday' have the entire dataset you care about regardless of any update dates.

collect is just a Splunk command that you add to the end of your SPL. Manual or automatic is about whether a search is scheduled or not; it has nothing to do with what the SPL does. https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/collect

If you have a scheduled saved search, collect will just collect to the summary index. It is the same as enabling summary indexing on a scheduled saved search, but you have direct control of the parameters.

The KV store uses a database in Splunk - it used to be MongoDB, though I'm not sure if that's still the case. You don't need to care: for all intents and purposes, it's just a lookup, backed by a database rather than a CSV. https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/DefineaKVStorelookupinSplunkWeb

As for the append: I don't know what you're actually trying to merge together from the vulnerabilities index and what comes from the dbxquery. It's a perfectly valid technique for combining data, but what will you do with it once you have it? As I said, I've only got half the picture of your whole journey...
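For illustration, a minimal sketch of the collect pattern described above (index and field names are placeholders; the target summary index must already exist):

index=vulnerabilities earliest=-1d@d latest=@d
| stats latest(status) as status latest(severity) as severity by host vuln_id
| collect index=summary_vulns

Scheduled daily, each run appends yesterday's snapshot to summary_vulns, which you can then search like any other index.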