All Posts



I don't see why it needs to be split - the events not coming back in subsecond order does not matter to you, so why not just add the 200k in one go - is that causing a problem?
Hi Team, I am looking for an option to monitor the page load performance of a Salesforce Community Cloud application (built using Lightning Web Components) that runs in authenticated mode. We want to capture the network timings, resource loading, and transaction times, to name a few. Is this possible with AppDynamics? If so, please point me to the relevant documentation. Thanks.
Hi, I am not sure of the purpose of rolling forward an existing index. How do I use "collect" to write data in a scheduled report? My understanding is that collect is a manual push; I am looking for an automatic daily update. Where does a KV store lookup save the data? How do I move the dbxquery results to a KV store? Does it require "admin"? What do you think about the "append" command in the previous post? Thank you so much for your help.
Hi, What I meant by splitting the data is to split the number of rows. So, if my query returns 200k rows, splitting into two gives 100k rows each. Or, as @marnall suggested, split it into smaller queries, though I am not sure that is possible since I have a large query involving multiple DBs. I don't know how to do this in a scheduled report and write the results into the same summary index with the same _time setting. Please suggest. Thanks
Sorry, I don't understand what you are talking about re splitting your data - what is being split with the dbxquery?
If the data is in an index, it must be placed there with a timestamp. So if an app was updated 45 days ago, that info was ingested to Splunk with a _time of 45 days ago, and the only way you can find that data is to search with a time range that encompasses that time. But of course you don't know when it was updated.

I have used a technique in the past where I roll forward existing index data by running a search at, say, 1am that searches yesterday's data (earliest=-d@d latest=@d) and does a stats latest(*) as * by X Y Z, then uses collect to write that data back to the index with the current timestamp (1am). Effectively all items from the previous day are rolled forward, PLUS any new items added during the same day are collected to the same index. Naturally you would need to massage the data so that on any update the versions shift: previous->discard, current->previous, new->current. That means your previous day's data is always the latest view of all versions. Not sure if this helps.

Have you tried using the KV store for the lookup? That's another story, and you can define accelerated keys for that data, which may make it perform faster than a standard lookup.
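The roll-forward idea above could be sketched as a scheduled search along these lines (a sketch only; summary_idx and the grouping fields X Y Z are placeholder names to replace with your own):

```
index=summary_idx earliest=-d@d latest=@d
| stats latest(*) as * by X Y Z
| eval _time=now()
| collect index=summary_idx
```

Scheduled daily, each run re-writes yesterday's latest state under today's timestamp, so searching only the most recent day always returns the current view of every item.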
I'm currently working on optimizing our Splunk deployment and would like to gather some insights on the performance metrics of Splunk forwarders.

Transfer Time for Data Transmission: I'm interested in understanding the typical time it takes for a Splunk forwarder to send a significant volume of data, say 10 GB, to the indexer. Are there any benchmarks or best practices for estimating this transfer time? Are there any factors or configurations that can significantly affect it?

Expected EPS (Events Per Second): Additionally, I'm curious about the achievable events-per-second rates with Splunk forwarders. What EPS rates do organizations typically achieve in real-world scenarios? Are there strategies or optimizations that can help improve EPS while maintaining stability and reliability?

Any insights, experiences, or recommendations regarding these performance metrics would be greatly appreciated. Thank you!
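One configuration worth checking before benchmarking anything else: a universal forwarder caps its own output throughput by default (maxKBps in limits.conf, 256 KB/s out of the box), and that cap alone largely determines how long 10 GB takes to transfer. A hedged example of raising it (0 means uncapped; raise gradually and watch indexer load rather than jumping straight to unlimited):

```
# limits.conf on the forwarder
[thruput]
# universal forwarder default is 256 (KB/s); 0 removes the cap
maxKBps = 0
```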
Hi, Thanks for your help. No, I do not care about the order. I am afraid that if I split the data and re-combine it, it will return duplicate/missing data, as it doesn't have a unique identifier. Also, I don't know how to split the data while keeping the same _time. Please help answer this: how do I split my query from dbxquery (e.g. 200k rows) and push it into a summary index with the same _time? Thanks
Hello, Thank you again for your help. Just to clarify, I cannot set _time to the exact same time every time I query the data, correct? So I need to filter the data by last update if I want to get the most recent copy. I am currently using a CSV as a lookup, but the limitation is the size, like you mentioned. I am trying to replace the CSV lookup by doing the following. Please let me know what you think. https://community.splunk.com/t5/Splunk-Search/How-to-perform-lookup-from-index-search-with-dbxquery/m-p/650654

index=vulnerability_index
| table ip_address, vulnerability, score
| append [| dbxquery query="select * from tableCompany"]
| stats values(*) as * by ip_address
Hello @bowesmana  Your solution hit the spot! Thank you so much
Without knowing a bit more about your data and extracted fields, you could do something like this   | eval BROWSER=if(BROWSER="Chrome" AND match(_raw, " Edg\/"), "Edge", BROWSER)  
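If BROWSER is not already extracted as its own field, an alternative sketch is to derive the browser from the USER_AGENT field directly (this assumes USER_AGENT is extracted as shown in the metadata; the match patterns are illustrative):

```
| eval BROWSER=case(match(USER_AGENT, "Edg/"), "Edge",
    match(USER_AGENT, "Chrome/"), "Chrome",
    true(), BROWSER)
```

Note the "Edg/" check must come first, since an Edge user-agent string also contains "Chrome/".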
Hi Dear Malaysian Splunkers, As part of the SplunkTrust tasks, I have created a Splunk User Group for Kuala Lumpur, Malaysia: https://usergroups.splunk.com/kuala-lumpur-splunk-user-group/ Please join, and let's discuss monthly about Splunk and getting more value from the data. See you there. Thanks. Best Regards, Sekar
Hello, I have this search for tabular format.

index="webbff" "SUCCESS: REQUEST"
| table _time verificationId code BROWSER BROWSER_VERSION OS OS_VERSION USER_AGENT status
| rename verificationId as "Verification ID", code as "HRC"
| sort -_time

The issue is in the BROWSER column: even when a user accesses our app via Edge, it still shows as Chrome. I found a dissimilarity between the two logs. The one for access via Edge contains "Edg" in the USER_AGENT.

Edge logs

metadata={BROWSER=Chrome, LOCALE=, OS=Windows, USER_AGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx Edg/124.0.0.0, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

Chrome logs

metadata={BROWSER=Chrome, LOCALE=, OS=Mac OS X, USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

My question is: how do I create a conditional search for BROWSER, like if it contains Edg then Edge, else BROWSER?
hey guys, with data retention being set, is there a way to whitelist a specific container to prevent it from being deleted?
Thank you @bowesmana,   I was looking for  | where (Check_Feature_Availability="false") AND ("a" IN ("a"))   Thank you.
I'm looking for a particular string in a list of strings. The "a" in the first part is not from a field; it's just a string that I'm trying to compare against. I'm trying to implement the following logic from Python: "word1" in ['word1', 'word2', ..., 'word_x']
OK, so I assume the list of strings is a, b, and c, but what is the FIRST "a" in your statement "a" IN ("a","b","c")? As I said in my first reply, the search statement is comparing the FIELD "a" with the string values in the IN list. Is your field "a" something that has those values? For you to get all results simply by adding that AND statement would imply field "a" has the value "a" in all your events.
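A quick way to see the field-vs-literal behaviour described above is with makeresults (the field name a and its value are made up for illustration):

```
| makeresults
| eval a="b"
| search a IN ("a", "b", "c")
```

This returns the event because the field a has the value "b", which is in the list; the left-hand side of IN in a search clause is a field name, not a string literal. To compare a literal string (such as a dashboard token value) against a list, put the comparison in a where clause instead, e.g. | where "Choice1" IN ("Choice1", "Choice2").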
| search (Check_Feature_Availability=false) AND NOT ("Choice1" IN ("Choice1", "Choice2", "Choice3")) OR (Check_Feature_Availability=true) AND ("Choice1" IN ("Choice1", "Choice2", "Choice3"))

The list is really all the options from a multi-valued dropdown menu. The values are all different.
I am trying to dynamically alter my searched data by combining a value from my data source (Check_Feature_Availability, a boolean field) with a selected value from a multi-select dropdown in the dashboard (a list of strings).
So what is "a" in the first part of the statement? Your statement is asking: does field "a" (NOT) have a value of a, b, or c? What is field "a" in your context, and do all your events have the value "a" in it?