All Posts


I want to know the total volume of indexes used for Splunk searches, i.e. a query showing how much index volume is consumed by Splunk searches.
Hi @whitecat001, sorry, it isn't so clear: do you want the amount of data of each search or to know which indexes are used in each search or what else? Ciao. Giuseppe
I want to know how much index volume is used by Splunk searches.
Hi @whitecat001, do you want the list of savedsearches, the list of run searches, or something else? Could you give more detail about your request? Ciao. Giuseppe
Hi, I have a similar use case, but to get the second and last Saturdays. I tried to use your search but couldn't get there. Can you please help me with that? TIA
Can I get a query that will find the searches that users are running in Splunk?
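As a starting point, a sketch built on the _audit index (viewing it normally requires an admin-level role; adjust the time range and fields as needed) that lists searches run by users:

index=_audit action=search info=granted search=*
| table _time user search
| sort - _time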
Unfortunately the dataset is old and can't be changed. I'd be happy if the fields had a decent naming pattern...
You didn't mention whether you are using the Sysmon add-on; if not, download it and check its inputs.conf and start with those settings, there must be something in there. (Your inputs look ok, but I think the other setting is source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational.) https://splunkbase.splunk.com/app/5709
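For reference, a minimal inputs.conf sketch along the lines of what the add-on expects (check the exact attributes against the add-on's own default inputs.conf before relying on it):

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational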
I have a search that returns the following table (after transpose):

column        row 1       row 2
search_name   UC-315      UC-231
ID            7zAt/7      5Dfxdf
Time          13:27:17    09:17:09

And I need it to look like this:

column        new_row
search_name   UC-315
ID            7zAt/7
Time          13:27:17
search_name   UC-231
ID            5Dfxdf
Time          09:17:09

This should work independently of the number of rows. I've tried using mvexpand and streamstats, but without any luck.
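A possible sketch, assuming the transposed result really has the fields column, "row 1", "row 2", and so on: untable turns every "row N" column back into its own result, and sorting on the row label then groups the fields per original row (row_label is just an illustrative name):

<your search> | transpose
| untable column row_label new_row
| sort 0 row_label
| table column new_row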
Hello, in my case I have a list of products with producttype and weight. For products of the same type, weight might be different, although always within some range. As an example:

productid   type   weight   anomaly?
1           a      100kg
2           a      102kg
3           b      500kg
4           b      550kg
6           a      15kg     yes
7           b      2500kg   yes

One option would be solving this by calculating average and standard deviation:

index=products
| stats list("productweight") as weights by "producttype"
| mvexpand weights
| eval weight=tonumber(weights)
| eventstats avg(weight) as avg stdev(weight) as stdev by "producttype"
| eval lowerBound=(avg-stdev*10), upperBound=(avg+stdev*10)
| where weight < lowerBound OR weight > upperBound

But I was wondering whether there is a way to solve this with the anomalydetection function. The function should search for anomalies within the products of the same producttype, not across all available weights. Something like

| anomalydetection by "producttype"

but this option doesn't seem to be available. Does somebody know how to do this? Many thanks in advance for your help.
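As far as I know, anomalydetection has no by clause. One hedged workaround (a sketch only, untested, using the field names from the example above) is to run it once per product type via map:

index=products
| stats count by producttype
| map maxsearches=100 search="search index=products producttype=\"$producttype$\" | anomalydetection productweight"

Note that map launches one subsearch per product type, so the eventstats/stdev approach is usually cheaper when there are many types.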
Hi @Ismail_BSA, ok, you could use the first eval (in one calculated field) to have the first field "url_primaire_apache" and then nest the first one in the second to calculate the second field "url_primaire_apache_sans_ports", something like this:

EVAL-url_primaire_apache_sans_ports=if(match(if(match(url, "/"), mvindex(split(url, "/"), 0), url), ":"), mvindex(split(url_primaire_apache, ":"), 0), url_primaire_apache)

Please adapt my approach to your evals. Ciao. Giuseppe
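For reference, a props.conf sketch of the two calculated fields (the stanza name is a placeholder for your sourcetype). Because a calculated field cannot reference another calculated field, the second EVAL repeats the first expression inline instead of referring to url_primaire_apache:

[your_sourcetype]
EVAL-url_primaire_apache = if(match(url, "/"), mvindex(split(url, "/"), 0), url)
EVAL-url_primaire_apache_sans_ports = if(match(if(match(url, "/"), mvindex(split(url, "/"), 0), url), ":"), mvindex(split(if(match(url, "/"), mvindex(split(url, "/"), 0), url), ":"), 0), if(match(url, "/"), mvindex(split(url, "/"), 0), url))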
Hello, Thanks for the reply. It works, sort of... The "dark" theme turns everything dark. It was my hope to have just the Apps panel be dark. I guess the new version of Splunk overwrites a css file for the web interface. I have an older version of Splunk running, so I will see if I can find the css file. Thanks again for the reply. Eric W.
Hi @gcusello, Thank you for your reply. That's exactly what we are doing now for searches. However, we were wondering if these fields could be directly calculated in the sourcetype. The final goal is to have those new fields in a data model and later call them in the correlation searches and notable events. Regards,
If you're hitting csv size limits, a kv-store based lookup might indeed be the solution (an additional bonus: you can update the kv-store in place, instead of deleting and re-creating it from scratch as with a csv-based lookup). As for performance, it really depends on the use case. External lookups and external commands will always be slower than Splunk's internal mechanisms because you have to spawn an external process, interface with it, and so on. But with a sufficiently small data sample (you just have a small set of results to enrich with something external) it might be "bearable".
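As a rough illustration (the collection, lookup, and field names here are made up), a KV store lookup is defined with a collections.conf stanza plus a transforms.conf stanza using external_type = kvstore:

collections.conf:
[product_enrichment]
field.productid = string
field.weight = number

transforms.conf:
[product_enrichment_lookup]
external_type = kvstore
collection = product_enrichment
fields_list = _key, productid, weight

It can then be used like any other lookup, e.g. | lookup product_enrichment_lookup productid OUTPUT weight, and populated or updated with outputlookup.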
Ah, disregard. Setting as a system environment variable did the trick.
This is nuts. Go figure. I ended up fixing this by removing the "Deployment Server" role from the system, saving it, then adding it back and restarting the service, bam! Fixed. I'd rather be lucky than good...
Is there a way to provide the OPENSSL_FIPS=1 variable to OpenSSL on Windows? I've tried using Powershell and piping the variable in, as well as adding to the openssl.cnf file.
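In case it helps: the system-wide variable mentioned above can be set, for example, from an elevated PowerShell session (whether OpenSSL honours it depends on the build you are running):

[Environment]::SetEnvironmentVariable("OPENSSL_FIPS", "1", "Machine")

Processes started after that will see the variable; already-running ones won't.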
Another fix to try is as follows: Find your distsearch.conf and then find the stanza that has default = true in it. In that stanza, make sure localhost:localhost is listed in the setting below:

servers =

For example, it was like this before:

distributedSearch:testgroup1
default = true
servers = somehostname.company.com

Once you find that stanza, add localhost to make it look like this (and it's literal in that it's simply localhost:localhost):

distributedSearch:testgroup1
default = true
servers = somehostname.company.com, localhost:localhost

Restart the DS and from the internal thread within a few minutes/hours the clients should start to populate again.
Honestly, you're digging deeper and deeper into something that seems like it should be much better solved just by preparing the data correctly. You should either make your developers log the same fields consistently and distinguish the origin of the events by the source field (or maybe some additional field, if all events are aggregated into a single point of origin), or, since they apparently have different structures, give them different sourcetypes so that each is parsed differently (and all can be normalized to a common set of fields). IMHO you're unnecessarily making life harder for yourself than it has to be.
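To illustrate the normalization idea (the sourcetype and field names below are hypothetical), each sourcetype can alias its own field onto a common name in props.conf:

[app_one:events]
FIELDALIAS-normalize_user = userName AS user

[app_two:events]
FIELDALIAS-normalize_user = usr_id AS user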
Hello, My CSV data is coming from a DB and gets pulled via DBXquery: DB ==> DBXquery ==> CSV. Since CSV has a size limitation, I am thinking of just using the DB via DBXLookup. Thanks