All Posts

I get the impression that your requirement can be reinterpreted as listing the last two installed versions and their install times. Is this accurate? As @bowesmana suggested, this problem is best solved by maintaining a lookup table and working from there. Any search that does not use a static dataset such as a lookup is bound to be inefficient, because your lookback period cannot be predetermined.

As a proof of concept, here is a literal implementation of my interpretation of your requirement. The premise is that you run a search with sufficient coverage for the last two versions of the packages of interest. Assume that the search returns something like the following:

_time       host   package    version
2024-01-21  host1  somesoft1  1.2.1
2024-01-21  host2  somesoft2  2.2.3
2024-03-02  host1  somesoft1  1.2.5
2024-03-03  host2  somesoft2  2.3.0
2024-04-10  host1  somesoft1  1.2.10

You then apply the following:

<some search with sufficient history>
| stats max(_time) as _time by package version
| eval version = json_object("version", version, "install_time", _time)
| stats list(version) as version_installed by package
| eval version = json_extract(mvindex(version_installed, -1), "version"), "installed date" = json_extract(mvindex(version_installed, -1), "install_time")
| eval last_version = json_extract(mvindex(version_installed, -2), "version"), "last installed date" = json_extract(mvindex(version_installed, -2), "install_time")
| fieldformat "installed date" = strftime('installed date', "%F")
| fieldformat "last installed date" = strftime('last installed date', "%F")
| fields - version_installed

This should give a table like this:

package    installed date  last installed date  last_version  version
somesoft1  2024-03-02      2024-04-10           1.2.10        1.2.5
somesoft2  2024-03-03      2024-01-21           2.2.3         2.3.0

What the code really illustrates is the general approach of a semantic "join" without using the join command; stats is a lot more efficient in SPL. lookup, which uses binary search, is another very efficient method.

Here is an emulation that produces the mock search output above. Play with it and compare with real data.

| makeresults format=csv data="_time,host,package,version
2024-01-21,host1,somesoft1,1.2.1
2024-01-21,host2,somesoft2,2.2.3
2024-03-02,host1,somesoft1,1.2.5
2024-03-03,host2,somesoft2,2.3.0
2024-04-10,host1,somesoft1,1.2.10"
| eval _time = strptime(_time, "%F")
``` data emulation above ```
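For the lookup route @bowesmana suggested, here is a minimal sketch of the maintenance search; the lookup name package_install_history.csv and the key fields host package version are illustrative placeholders, not taken from your data. Scheduled over a short recent window, it merges new installs into the lookup:

<recent search, e.g. over the last 24 hours>
| stats max(_time) as install_time by host package version
``` merge with rows already in the lookup; seed the file once with outputlookup before the first run ```
| inputlookup append=true package_install_history.csv
| stats max(install_time) as install_time by host package version
| outputlookup package_install_history.csv

The reporting search then reads the lookup with inputlookup and applies the same json_object/mvindex logic as above, so the lookback window of the live search no longer matters.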
Hello, I want to fetch a value from the inputs.conf file (/Splunk/etc/apps/$app/local), i.e.:

[stanza-name]
value-name = value

How can I retrieve this value and use it inside a Python lookup script (stored in /Splunk/etc/apps/$app/bin)? Thanks,
Hello everyone, please help me with fetching events from a Windows Event Collector. I installed the Universal Forwarder on a Windows Server 2022 machine that collects the forwarded events from the other computers. I am trying to fetch all forwarded events from this Windows Server 2022 machine to my Splunk indexer via the forwarder, but the agent only sends the events sometimes, not in real time. I can't see any errors in the SplunkForwarder logs or on the Splunk indexer. I am also using Splunk_TA_Windows to fetch the events.
Hi @obuobu , let me understand: you have Splunk Enterprise installed on Ubuntu, you have a Splunk Universal Forwarder installed on a Windows machine, and you want to see the logs from the Windows machine in Splunk - is that correct?
First of all, did you configure your Splunk Enterprise server to receive logs [Settings > Forwarding and Receiving > Receiving]?
Then, did you configure your UF (which I suppose is installed) to send logs to the Splunk Enterprise server?
Then, did you configure the inputs locally or using a Deployment Server?
For more info, see the ingestion process at https://docs.splunk.com/Documentation/Splunk/latest/Data/Usingforwardingagents
Ciao. Giuseppe
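If all of the above is configured and the events still arrive only sporadically, a quick way to see whether (and how much) the UF is actually sending is to check the indexer's internal metrics. A hedged sketch, where <uf_hostname> is a placeholder for your Windows server's host name:

index=_internal source=*metrics.log* group=tcpin_connections hostname=<uf_hostname>
| eval MB = kb/1024
``` total volume received from this forwarder and when it last reported in ```
| stats sum(MB) as MB_received latest(_time) as last_seen by hostname
| eval last_seen = strftime(last_seen, "%F %T")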
I guess I'm only seeing half the picture here. I understand you're trying to turn a lookup into an index, so the idea of rolling forward data is to make 'yesterday' have the entire dataset you care about regardless of any update dates.

collect is just a Splunk command that you add to the end of your SPL. Manual or automatic is about whether a search is scheduled or not, nothing to do with what the SPL does. https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/collect If you have a scheduled saved search, collect will just collect to the summary index. It is the same as enabling summary indexing on a scheduled saved search, but you have direct control of the parameters.

KV store uses a database in Splunk - it used to be MongoDB, not sure if that's still the case. You don't need to care - for all intents and purposes it's just a lookup, just backed by a database instead of a CSV. https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/DefineaKVStorelookupinSplunkWeb

As for the append - I don't know what you're actually trying to merge together from the vulnerabilities index and what comes from the dbxquery. That's a perfectly valid technique for combining data - but what will you do with it when you have it? As I said, I've only got half the picture of your whole journey...
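As a hedged sketch of that KV store route - assuming a KV store lookup definition (here called company_assets, keyed on ip_address; both names are placeholders) has already been created as per the docs above - a scheduled search could refresh it from the database:

| dbxquery query="select * from tableCompany"
| outputlookup company_assets

and the reporting search could then enrich the index data directly, without the append:

index=vulnerability_index
| lookup company_assets ip_address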
I don't see why it needs to be split - the events not coming back in subsecond order does not matter to you, so why not just add the 200k in one go - is that causing a problem?
Hi Team, I am looking for an option to monitor the page load performance of a Salesforce Community Cloud application (built using Lightning Web Components) that runs in authenticated mode. We want to capture network timings, resource loading and transaction times, to name a few. Is this possible with AppDynamics? If so, please point me to the relevant documentation. Thanks.
Hi, I am not sure of the purpose of rolling forward the existing index. How do I use "collect" to write data in a scheduled report? My understanding is that collect is a manual push; I am looking for an automatic daily update. Where does a KV store lookup save the data? How do I move the DBXquery data to a KV store? Does it require "admin"? What do you think about the "append" command in the previous post? Thank you so much for your help.
Hi, What I meant by splitting the data is to split the number of rows. So, if my query has 200k rows, splitting it into 2 gives 100k rows each. Or, as @marnall suggested, split it into smaller queries - I am not sure that is possible, since I have one large query involving multiple DBs. I don't know how to do this in a scheduled report and write it into the same summary index with the same _time setting. Please suggest. Thanks
Sorry, I don't understand what you are talking about re splitting your data - what is being split with the dbxquery?
If the data is in an index, it must be placed there with a timestamp, so if an app was updated 45 days ago, that info was ingested into Splunk with a _time timestamp of 45 days ago, and the only way you can find that data is to search with a time range that encompasses that time. But of course you don't know when it was updated.

I have used a technique in the past where I roll forward existing index data by running a search at, say, 1am that searches for data from yesterday (earliest=-d@d and latest=@d) and does a stats latest(*) as * by X Y Z. Then use collect to write that data to the index with the current timestamp (1am), so you effectively get all rolled-forward items from the previous day PLUS any new items that are added in the same day and collected to the same index. Naturally you would need to massage the data so that any updates shift previous->discard, current->previous, new->current. That means your previous day's data is always the latest view of all versions. Not sure if this helps.

Have you tried using KV store for the lookup? That's another story, and you can use accelerated keys for that data, which may make it perform faster than a standard lookup.
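A minimal sketch of that roll-forward search, with placeholder names (summary index app_versions_summary, and host/package standing in for your X Y Z fields):

index=app_versions_summary earliest=-d@d latest=@d
| stats latest(*) as * by host package
``` restamp so the rolled-forward copy lands at the time this search runs (e.g. 1am) ```
| eval _time = now()
| collect index=app_versions_summary

Scheduled daily, yesterday's data is then always the complete, latest view, so the reporting search only ever needs to look back one day.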
I'm currently working on optimizing our Splunk deployment and would like to gather some insights on the performance metrics of Splunk forwarders.

Transfer time for data transmission: I'm interested in understanding the typical time it takes for a Splunk forwarder to send a significant volume of data, say 10 GB, to the indexer. Are there any benchmarks or best practices for estimating this transfer time? Are there any factors or configurations that can significantly affect it?

Expected EPS (events per second): Additionally, I'm curious about the achievable events-per-second rates with Splunk forwarders. What are the typical EPS rates that organizations achieve in real-world scenarios? Are there any strategies or optimizations that can help improve EPS rates while maintaining stability and reliability?

Any insights, experiences, or recommendations regarding these performance metrics would be greatly appreciated. Thank you!
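Real-world numbers vary so much with event size, pipeline load and network that the most useful benchmark is usually your own environment. A hedged sketch for measuring what each forwarder actually delivers, using the receiving indexer's internal metrics (the 1-hour window is arbitrary):

index=_internal source=*metrics.log* group=tcpin_connections earliest=-1h
| eval MB = kb/1024
``` per-forwarder volume, average event rate and peak throughput as reported by the receiving indexer ```
| stats sum(MB) as MB_received avg(tcp_eps) as avg_eps max(tcp_KBps) as peak_KBps by sourceHost

Also note that a universal forwarder throttles itself by default (maxKBps = 256 in limits.conf [thruput]), which on its own stretches a 10 GB transfer to roughly 11 hours unless you raise it.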
Hi, Thanks for your help. No, I do not care about the order. I am afraid that if I split the data and re-combine it, it will return duplicate/missing data, as it doesn't have a unique identifier. Also, I don't know how to split the data while keeping the same _time. Please help answer this. Thanks. How do I split my query from DBXquery (e.g. 200k rows) and push it into a summary index at the same time?
Hello, Thank you again for your help. Just to clarify, I cannot set _time to the exact same time every time I query the data, correct? So I need to filter on the data's last update if I want to get the most recent copy.

I am currently using a CSV as a lookup, but the limitation is the size, as you mentioned. I am trying to replace the CSV lookup by doing the following - please let me know what you think. https://community.splunk.com/t5/Splunk-Search/How-to-perform-lookup-from-index-search-with-dbxquery/m-p/650654

index=vulnerability_index
| table ip_address, vulnerability, score
| append [| dbxquery query="select * from tableCompany"]
| stats values(*) as * by ip_address
Hello @bowesmana  Your solution hit the spot! Thank you so much
Without knowing a bit more about your data and extracted fields, you could do something like this:

| eval BROWSER=if(BROWSER="Chrome" AND match(_raw, " Edg\/"), "Edge", BROWSER)
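If the USER_AGENT field shown in your table command is already extracted, a variant of the same idea (assuming that field name) is to test it instead of _raw:

| eval BROWSER=if(BROWSER="Chrome" AND match(USER_AGENT, " Edg\/"), "Edge", BROWSER)

Edge keeps the Chrome token in its user agent and appends Edg/<version>, which is why matching on "Edg" is the reliable discriminator.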
Hi dear Malaysian Splunkers, as part of the SplunkTrust tasks, I have created a Splunk User Group for Kuala Lumpur, Malaysia: https://usergroups.splunk.com/kuala-lumpur-splunk-user-group/ Please join, and let's meet monthly to discuss Splunk and getting more value from the data. See you there. Thanks. Best Regards Sekar
Hello, I have this search for tabular format:

index="webbff" "SUCCESS: REQUEST"
| table _time verificationId code BROWSER BROWSER_VERSION OS OS_VERSION USER_AGENT status
| rename verificationId as "Verification ID", code as "HRC"
| sort -_time

The issue is the BROWSER column: even when a user accesses our app via Edge, it still shows as Chrome. I found a dissimilarity between the two logs - the one for access via Edge contains "Edg" in the user agent.

Edge logs:

metadata={BROWSER=Chrome, LOCALE=, OS=Windows, USER_AGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx Edg/124.0.0.0, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

Chrome logs:

metadata={BROWSER=Chrome, LOCALE=, OS=Mac OS X, USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

My question is, how do I create a conditional search for BROWSER, like: if it contains Edg then Edge, else BROWSER?
Hey guys, with data retention set, is there a way to whitelist a specific container to prevent it from being deleted?
Thank you @bowesmana,   I was looking for  | where (Check_Feature_Availability="false") AND ("a" IN ("a"))   Thank you.