All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Howdy! I have a search like this: Everything is great! Would it be possible to add a column that contains the timestamp for max(seconds)? I've googled and even tried out some solutions I found here, but can't quite get it (e.g. if I try to add "by host, _time" I get ALL the results). Thanks!
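A sketch of one common approach, assuming a hypothetical base search and field names (`seconds`, `host`), since the original search is not shown: keep only events whose `seconds` equals the per-host maximum, so the matching `_time` comes along with it.

```
index=myindex sourcetype=mysourcetype
| eventstats max(seconds) as max_seconds by host
| where seconds = max_seconds
| stats max(seconds) as max_seconds, latest(_time) as time_of_max by host
| eval time_of_max = strftime(time_of_max, "%Y-%m-%d %H:%M:%S")
```

The `eventstats`/`where` pair avoids the "by host, _time" problem, because the maximum is computed per host across all timestamps before any grouping on time happens.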
I'm trying to identify inactive hosts that crashed (through an alert).

Inactive host: a host that hasn't logged in the past 1 hr.
Host that didn't crash: logs a message like ".* Gracefully Exited".
Host that did crash: never logs a message like the one above and eventually becomes inactive.

For inactive hosts, I've found this search to be useful. It searches the past 2 hours for hosts that haven't logged within the last hour:

| tstats latest(_time) as latest where index=a sourcetype=b source=c earliest=-2h by host
| eval logged_within_past_hour = if(latest > relative_time(now(),"-1h"),1,0), time_of_host_last_log = strftime(latest,"%c")
| where logged_within_past_hour=0

I'm able to use this Splunk search to find logs where the host terminated:

index=a sourcetype=b Gracefully Exited

Is there a way to find hosts that crashed and have become inactive? I don't want to include the hosts that terminated successfully and didn't crash.
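One possible way to combine the two searches, as a sketch: exclude any host that logged the graceful-exit message in the same 2-hour window. This assumes the list of gracefully exited hosts stays within Splunk's default subsearch result limits.

```
| tstats latest(_time) as latest where index=a sourcetype=b source=c earliest=-2h by host
| where latest < relative_time(now(), "-1h")
| search NOT [ search index=a sourcetype=b "Gracefully Exited" earliest=-2h | stats count by host | fields host ]
| eval time_of_host_last_log = strftime(latest, "%c")
```

Hosts left over are inactive and never logged a graceful exit, which matches the "crashed and became inactive" definition above.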
Hi All, one basic thought (issue) on the Splunk Search Bar UXD (User Experience Design):

1. On the Splunk Search Bar, enter a basic search "index=test1", then set the time picker to 15 mins, then choose Fast Mode. If you press the Enter key, the search still won't run; you have to manually hover over the search button and click Run. OK, this is at least fine.

2. After running the first search, leave the search as it is (index=test1) and then choose 60 mins on the time picker. The search will run without clicking the Run button.

3. On this second search, after choosing the time picker (the search will be running now), if you want to switch between search modes, you have to stop/kill the search and then choose a search mode. Once you select a search mode, it runs the search without clicking the Run button.

Hope you understand this user experience issue. Please provide your views, thanks.
We have been using the technique of having a setup.xml file in our app's default directory since Splunk version 6.2.X. With the latest update to Splunk 8.2.2.1, our app does not seem to be able to complete configuration. When the <Save> button is pressed, the "Go to the Setup page" prompt is simply redisplayed. Did something change between Splunk 7.X and Splunk 8.X that would cause this to occur? Thanks.
The closest question I came to is this one, but it's not quite there (and it's old). I have a saved search - actually an alert, with actions - that I want to pass dynamic SPL into. You can do this with dashboards and tokens, of course, but I'm specifically looking for an alert that I'm executing over the API. So I may request something like this over the API:

https://splunk.mycompany.com:8089/en-US/app/myApp/search?s=%2FservicesNS%2Fnobody%2FmyApp%2Fsaved%2Fsearches%2FmySearch&ExecID=12345

where the saved search has something like "Execution_ID=$ExecID$" in it - just like you would when requesting a dashboard. The value for $ExecID$ is unique, and populating a lookup table for this simple need seems like serious overkill - and it probably doesn't even accomplish what I need. I hope this is written clearly enough. I'm 99% sure it can't be done, but it's been a few years since that last question, and, as noted, it's not really a match anyway. Thanks.
Greetings, I am in the preliminary stages of upgrading my Splunk Heavy Forwarder (HF); however, I wanted to confirm which file to install. I know that the HF requires a Splunk Enterprise license, as opposed to the Universal Forwarder (UF), which doesn't. Therefore, when it comes to installing and upgrading a Heavy Forwarder, do I install the Splunk Universal Forwarder package, the Splunk Enterprise package, or both? Thank you in advance for your time. -KB
I have an issue with a URL field being extracted improperly and failing when an ampersand is present in the URL field. Transforms indicates the following delims:

DELIMS = "\t", "="

Btool run on the SH member also shows that no other extract commands or delims are identified. All fields are extracted properly except URL fields that contain an ampersand, where everything beyond the ampersand is excluded from the field value.
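As a possible workaround sketch (the stanza name is hypothetical, and this assumes the events really are tab-delimited key=value pairs), a REGEX-based transform can capture everything up to the next tab as the value, so an embedded '&' no longer terminates it:

```
[my_kv_extraction]
# Hypothetical stanza: capture key=value pairs where the value runs
# to the next tab, so '&' inside a URL value is preserved
REGEX = (?:^|\t)([^\t=]+)=([^\t]*)
FORMAT = $1::$2
MV_ADD = true
```

Referencing this transform from a REPORT- setting in props.conf would replace the DELIMS-based extraction for that sourcetype.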
I posted on this subject a few days ago, and a couple of champs stated that it was not advisable because it would overload ES, and that it was best to create the reports in ES to use the ES use cases. OK, I have a ton of reports that I'd like to use in ES as well. So what other options are there in order not to put a burden on ES? I appreciate your response in advance. Happy 2022.
On a dashboard, I have a pie chart that shows status codes. If I hover over each code, Splunk shows me the percentage of the total for that code, but it is not included when I export the pie chart to CSV (only the codes and the count are included). Is it possible to export the percentage as well? Thanks.
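CSV export only includes the fields present in the search results, so one sketch (assuming a hypothetical `status_code` field; the leading `...` stands for the existing base search) is to compute the percentage in the search itself:

```
... | stats count by status_code
| eventstats sum(count) as total
| eval percent = round(count / total * 100, 2)
| fields - total
```

The pie chart still renders from `count` by `status_code`, but the exported CSV now carries the `percent` column too.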
In Splunk Enterprise, when running the following log4j scanner, it picks up the following files as vulnerable. Can somebody please provide steps on how I can remediate this? Is it a case of upgrading all Splunk servers to the latest version from https://logback.qos.ch/download.html? If not, please advise the steps, and will it require me to reboot all related Splunk servers?

log4j/logback scanner: https://github.com/logpresso/CVE-2021-44228-Scanner

Files found to be vulnerable:
C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\jars\command.jar Logback 1.2.3 CVE-2021-42550
C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\jars\dbxquery.jar Logback 1.2.3 CVE-2021-42550
C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\jars\server.jar Logback 1.2.3 CVE-2021-42550

Many thanks.
That's the XML:

<SearchCustomer>
<Transaction Name=\"Naviline\" Time=\"02/11/1982 01:25:07:223\" Duration=\"9.034\" />
<Transaction Name=\"SePipeline\" Time=\"02/11/1982 01:25:07:899\" Duration=\"0.662\" />
<Transaction Name=\"NdwIncuse\" Time=\"02/11/1982 01:25:09:553\" Duration=\"1.614\" />
<Transaction Name=\"EnterDetails\" Time=\"02/11/1982 01:25:11:532\" Duration=\"1.916\" />
<Transaction Name=\"SIline\" Time=\"02/11/1982 01:25:12:703\" Duration=\"1.132\" />
<Transaction Name=\"GetWindowIn\" Time=\"02/11/1982 01:25:20:748\" Duration=\"7.957\" />
<Transaction Name=\"PrimaryAddress\" Time=\"02/11/1982 01:25:22:154\" Duration=\"1.375\" />
<Transaction Name=\"WindowingTouch\" Time=\"02/11/1982 01:25:51:674\" Duration=\"1.365\" />
<Transaction Name=\"dailysearch\" Time=\"02/11/1982 01:26:01:908\" Duration=\"10.141\" />
<Transaction Name=\"SearchInServicing\" Time=\"02/11/1982 01:26:03:115\" Duration=\"1.149\" />
<Transaction Name=\"NavExistingAddresses\" Time=\"02/11/1982 01:26:05:060\" Duration=\"1.885\" />
</PerformanceReport>
</SearchCustomer>

I'm looking for a dashboard like the one below.
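A sketch of one way to pull the transaction names and durations out of events like these, assuming a hypothetical index/sourcetype and that the escaped quotes (\") appear literally in the raw event; the regex is deliberately loose about the quoting:

```
index=myindex sourcetype=mysourcetype
| rex max_match=0 "Name=\W+(?<t_name>\w+)\W+Time=\W+(?<t_time>[\d/: ]+)\W+Duration=\W+(?<t_duration>[\d.]+)"
| eval pairs = mvzip(t_name, t_duration)
| mvexpand pairs
| eval t_name = mvindex(split(pairs, ","), 0), t_duration = tonumber(mvindex(split(pairs, ","), 1))
| chart avg(t_duration) by t_name
```

A bar or column chart panel over this search would then show average duration per transaction, which is the usual shape for a dashboard like the one described.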
Does anyone know of an add-on or other script that would allow one to analyze network traffic to detect beaconing using a Fourier transform (FFT)?
Hello, we have a bunch of files stored (or uploaded) in our clients' SharePoint sites. How would I ingest them into Splunk? Any help will be highly appreciated. Thank you.
Hi,

index=toto sourcetype=tutu type=*
| fields host _time runq type
| join host
    [ search index=toto sourcetype=tutu type=*
    | fields host core
    | stats max(core) as nbcore by host ]
| eval Vel = (runq / nbcore)
| eval _time = strftime(_time, "%d-%m-%y %H:%M:%S")
| sort - _time
| rename host as Host, _time as Heure
| table Heure Host Vel

I use the search above. For one host, an event is indexed every 40 seconds. Now I need to group these events in a span of 30m, so I have added a bin span like this:

| bin _time span=30m

So for one host there are many events with the same span value. What I need is just the last event indexed for the host in the span, so I need to display something like this: "host" "time span" "last event generated". I think it's not very difficult, but I have a bug. Could you help please?
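A sketch of one way to keep just the last event per host per 30-minute span: copy the raw timestamp before `bin` overwrites `_time` with the span start, then take `latest()` per group (any field names beyond those in the question are hypothetical).

```
index=toto sourcetype=tutu type=*
| eval raw_time = _time
| bin _time span=30m
| stats latest(raw_time) as last_event_time, latest(runq) as last_runq by host, _time
| eval last_event_time = strftime(last_event_time, "%d-%m-%y %H:%M:%S")
| table _time host last_event_time last_runq
```

The common bug here is running `stats latest(_time) ... by _time` after `bin`, which can only ever return the span start; the `raw_time` copy sidesteps that.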
Is it possible to put time modifiers like "earliest" into a search and essentially disregard the time range drop-down in the Splunk UI? I have data that is logged once every 24 hours, so I'd like to embed "WHERE earliest=-24h" into a rather large, complicated query so I can cut-and-paste from my notes without having to mess around with the drop-down (or, more importantly, so I don't need to make additional notes to remind myself to set the drop-down). I tried something like this:

index=iis sourcetype=xxxx host=xxxx | WHERE earliest=-24h | eval... | table...

But the UI shows "Error in 'where' command: the operator at 'h' is invalid."
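For what it's worth, `earliest` is a search-time modifier that belongs in the base search itself rather than in a `where` clause, so a sketch like this (with the rest of the pipeline following as before) should override the time picker:

```
index=iis sourcetype=xxxx host=xxxx earliest=-24h latest=now
| table _time host
```

`where` evaluates boolean expressions on result fields, which is why `-24h` produces the "operator at 'h' is invalid" error there.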
Hi there: I'm trying to return the list of access_users with 0 web hits from the web_hits table. How can I adjust this query to return the list of users with no hits from the web_hits table? Thanks in advance!

| inputlookup web_hits.csv
| lookup local=t access_users.csv user OUTPUT user as access_user
| search access_user="*"
| stats count as num_webhits by access_user
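A sketch of one inversion, assuming both lookup files key on a `user` column: start from the full user list and keep only the rows with no match in web_hits. Starting from web_hits can never surface users who are absent from it.

```
| inputlookup access_users.csv
| lookup local=t web_hits.csv user OUTPUT user as hit_user
| where isnull(hit_user)
| table user
```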
Dear Team, could you please send me the controller sizing calculator, as I need to install an on-premise controller? I have 40 applications, both Java and .NET, and the number of nodes is 30. Could you please help me with the calculator link for controller sizing? Thanks, Kamal
Hi All, I have a dropdown box, a few text boxes, and a Submit button on my dashboard. I need to choose one value from the dropdown, enter values in the text boxes, and finally click the SUBMIT button.

Use case: I have to pass all the above-mentioned inputs to call my Python script on clicking the SUBMIT button on my dashboard (attached the dashboard image for reference). What is the way to achieve this use case in Splunk? Any help on this is appreciated. Thanks!
Hello, I am wondering when my index will roll from warm to cold with the settings below; the rest of the settings are default:

[XXXX]
frozenTimePeriodInSecs = 46656000
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 4294967295
maxWarmDBCount = 4294967295
repFactor = auto

Will the default setting maxHotSpanSecs = 7776000 roll the buckets from warm to cold, or is it just the span within the bucket? When will the buckets roll from warm to cold in this case? Thanks in advance!
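As a sketch of the knobs that actually drive the warm-to-cold roll (the values here are hypothetical, not recommendations): `maxHotSpanSecs` only limits the time span of a hot bucket before it rolls to warm; warm buckets roll to cold when the warm bucket count or the home path size cap is exceeded.

```
[XXXX]
# warm -> cold once more than this many warm buckets exist...
maxWarmDBCount = 300
# ...or once the hot/warm path exceeds this size (MB), whichever comes first
homePath.maxDataSizeMB = 512000
```

With `maxWarmDBCount` left at 4294967295 and no home path cap, buckets would effectively stay warm until overall volume limits intervene.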
https://www.appdynamics.com/partners/technology-partners/google-cloud-platform
Hi, from the link above it is not clear to me whether this is general information or whether I can really monitor cloud-native applications, especially end-user monitoring. Can you please point me to how to configure this, as well as the related documentation, if any? Good day.