All Topics


I want to set up an alert that looks at the same hour across the past 6 weeks and makes some comparisons. So for 2 PM I would have 6 results, one for each of weeks 1, 2, 3, and so on. I know I can use date_hour, but I need something that adjusts to whenever the alert runs, i.e. the last 60 minutes, somehow.

index=text source=text date_hour=14 | timechart span=1h count

Is there a way I can do this more easily?
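A sketch of one way to make the hour track the alert's run time instead of hard-coding date_hour=14 (index and source are the placeholders from the question): filter the last 6 weeks to events whose hour of day matches the hour the alert fires, then count per week.

index=text source=text earliest=-6w@h latest=@h
| eval run_hour=strftime(now(), "%H")
| where strftime(_time, "%H")=run_hour
| timechart span=1w count

Scheduled on the hour, this returns one weekly count per week for the hour in which the alert runs.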
Hello Splunk community, I'm currently trying to use the Splunk Enterprise free trial with my Firepower appliance, following the most recent guide: https://www.cisco.com/c/en/us/td/docs/security/firepower/70/api/eNcore/eNcore_Operations_Guide_v08.html#_Toc76556475. The splencore test reports a successful connection, but when I access Splunk the Firepower dashboard doesn't get populated with data. I have tried reinstalling multiple times and following other guides, with no success. I'm running Debian 10. Any suggestion will be truly appreciated.
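As a first check, it may help to confirm whether any eStreamer events are reaching Splunk at all, independent of the dashboards. A sketch, assuming the eNcore add-on's usual sourcetype naming:

index=* sourcetype=cisco:estreamer:data earliest=-24h
| stats count by index, sourcetype

If this returns nothing, the problem is ingestion (inputs, index permissions) rather than the dashboard searches themselves.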
Hello all, I am struggling to find a solution for this. I have two different searches. One shows log entries where system errors have occurred:

User Id    Error code    Error Time
121        E3002189      2021-08-27 12:01:34
249        E1000874      2021-08-27 12:05:21
121        E2000178      2021-08-27 12:27:09

The other search shows where the users were/are located throughout the day:

User Id    Location    Login Time             Logout Time
121        P155        2021-08-27 11:54:56    2021-08-27 12:14:19
121        U432        2021-08-27 12:22:16    2021-08-27 12:34:52
249        M127        2021-08-27 12:01:32    2021-08-27 12:35:45
249        J362        2021-08-27 12:38:25    2021-08-27 12:50:11

I am trying to join the two searches and then compare the times to find the location of a user at the time of an error. So I tried joining on the user id and then comparing the Unix time of the timestamps above to find Error Time >= Login Time AND Error Time <= Logout Time, but that didn't work. How can I set up the search to accomplish this? Thanks in advance!
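A sketch of one approach, with hypothetical field names (UserId, ErrorCode, ErrorTime, Location, LoginTime, LogoutTime; adjust to the real extractions). The key details are join max=0, so all location rows per user are kept rather than just the first, and converting the timestamps with strptime before comparing:

index=error_index
| fields UserId ErrorCode ErrorTime
| join type=inner max=0 UserId
    [ search index=location_index | fields UserId Location LoginTime LogoutTime ]
| eval err_t=strptime(ErrorTime, "%Y-%m-%d %H:%M:%S"),
       in_t=strptime(LoginTime, "%Y-%m-%d %H:%M:%S"),
       out_t=strptime(LogoutTime, "%Y-%m-%d %H:%M:%S")
| where err_t>=in_t AND err_t<=out_t
| table UserId ErrorCode ErrorTime Location

join's default of one match per key is a common reason the login/logout window comparison appears not to work.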
Hi All, I will be getting MD5 hash values in my logs and need a regex to extract them whenever they appear, for example:

"md5":"b78269ef4034474766cb1351e94edf5c",
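Since an MD5 is exactly 32 hex characters, a rex like the following should capture it from the JSON-style snippet above (a minimal sketch; the capture-group name md5 is just a choice):

... | rex "\"md5\":\"(?<md5>[0-9a-fA-F]{32})\""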
Hello everybody, we are trying to set up a search that diffs two files from two different days. This is the working search:

| set diff [| search index=myindex source="*2021-08-27*.csv" | stats count by idx | table idx] [ search index=myindex source="*2021-08-26*.csv" | stats count by idx | table idx]
| join idx [ search index=myindex source="*2021-08-27*.csv"]
| table "SITE ID",idx,"Title",FQDN,"Asset Primary Identifier","IP Address",Hostname,"Operating System", Port

However, we'd like to make it parametric, so that the dates contained in the source names are calculated automatically. We tried this:

| set diff [ | eval todayFile=strftime(now(),"*%Y-%m-%d*.csv") | search index=myindex source=todayFile | stats count by idx | table idx] [ search index=myindex source="*2021-08-25*.csv" | stats count by idx | table idx]
| join idx [ search index=myindex source=todayFile]
| table "SITE ID",idx,"Title",FQDN,"Asset Primary Identifier","IP Address",Hostname,"Operating System", Port

but it's not working: it doesn't return errors, but it doesn't return correct results either. How can we replace source="*2021-08-25*.csv" with an instruction that dynamically inserts today's date into the source filename, so the search can run every day?
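One way to make the date dynamic (a sketch) is to build the source filter in a subsearch that returns a field literally named search; such a subsearch is substituted into the outer search as a string, which plain eval cannot do:

| set diff
    [ search index=myindex [| makeresults | eval search="source=\"*" . strftime(now(), "%Y-%m-%d") . "*.csv\"" | fields search] | stats count by idx | table idx]
    [ search index=myindex [| makeresults | eval search="source=\"*" . strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d") . "*.csv\"" | fields search] | stats count by idx | table idx]
| join idx
    [ search index=myindex [| makeresults | eval search="source=\"*" . strftime(now(), "%Y-%m-%d") . "*.csv\"" | fields search]]
| table "SITE ID",idx,"Title",FQDN,"Asset Primary Identifier","IP Address",Hostname,"Operating System", Port

The original attempt returned nothing useful because eval only creates a field on results, while source=todayFile compares source against the literal string "todayFile".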
Suddenly, transforming commands stopped working unless I search in verbose mode. What could cause this issue? It only happens with newly indexed events, but the events seem identical to the older ones. Even a simple | stats count fails. Regards, G.
I have RabbitMQ message queue logs to be monitored. Is there an app or add-on from Splunk that I can use to monitor those logs? Please let me know.
I have a drilldown-enabled dashboard with base searches powering the main panels and also some of the drilldown panels. Let's say my main panel is A; upon clicking on A, panels A1, A2, A3 open up. Before I implemented drilldowns, my older dashboard showed all 4 panels (A, A1, A2, A3) the moment the dashboard loaded. My question is: does the SPL responsible for the drilldown panels A1, A2, A3 only begin to execute after I click in panel A? Or is the SPL for all the panels (A, A1, A2, A3) executed at once when I open the dashboard, with the drilldown click merely revealing output from searches that have already run in the background? In short, can drilldowns help in any way to improve performance?
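For context, in Simple XML a search that references an unset token is not dispatched, so a drilldown layout along these lines (token name and query are illustrative) really does defer the A1/A2/A3 searches until panel A is clicked:

<panel depends="$selected$">
  <table>
    <search>
      <query>index=main value="$selected$" | stats count</query>
    </search>
  </table>
</panel>

with panel A's drilldown doing <set token="selected">$click.value$</set>. Structured this way, drilldowns can improve load-time performance, because only panel A's search runs when the dashboard opens.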
Hi, how do I get a subtotal count for each Host, a total across all counts, and additionally a count for each different Status?

Host     Status             Count
HostA    Disconnected       1
HostA    Running            19
HostA    RunningWithErrors  2
HostA    BadConnectivity    2
HostB    Disabled           2
HostB    Disconnected       1
HostB    Running            17
HostB    RunningWithErrors  5
HostC    BadConnectivity    1
HostC    Running            7
HostC    RunningWithErrors  5
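A sketch using appendpipe to add per-host subtotal rows and a grand-total row to a table like the one above (the where clause keeps the grand total from double-counting the subtotal rows):

... | stats sum(Count) AS Count by Host, Status
| appendpipe [ stats sum(Count) AS Count by Host | eval Status="TOTAL" ]
| appendpipe [ where Status!="TOTAL" | stats sum(Count) AS Count | eval Host="ALL", Status="TOTAL" ]
| sort Host

For counts per Status across all hosts, a separate ... | stats sum(Count) AS Count by Status gives that breakdown.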
Hi, is there a step-by-step procedure for setting up Ubiquiti routers, switches, and the controller to send logs to Splunk? I am new to this and lack the knowledge to set it up. I am using the trial version of the Splunk Cloud Platform. What is your recommended approach if there are no guidelines? Thanks. Best, Borjales
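For what it's worth, the common pattern (a sketch, not official Ubiquiti guidance) is to enable remote syslog on the controller and devices, point it at a host running a Splunk forwarder, and give that forwarder a matching input before it forwards on to Splunk Cloud:

# inputs.conf on the forwarder; port and index are examples
[udp://514]
sourcetype = syslog
index = network

The Ubiquiti side is then just the controller's remote-syslog setting aimed at that host and port.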
Hey Splunk community, there's another problem that needs solving. The following query...

index=machinedata_w05_sum app=StörungenPulveranlagen "Linie 1" earliest=@d latest=now
| where Arbeitsplatz="Arbeitsplatz Pulveranlagen" OR Arbeitsplatz="Arbeitsplatz Einhängen" OR Arbeitsplatz="Arbeitsplatz Aushängen"
| transaction startswith="kommend" endswith="gehend"
| eval Arbeitsbeginn="5:30:00"
| eval Arbeitsbeginn_unix=strptime(Arbeitsbeginn, "%H:%M:%S")
| eval Störzahl=mvcount(Störung)
| search Störzahl!=2
| multireport
    [ stats first(Arbeitsbeginn_unix) AS Arbeitsbeginn_unix]
    [ stats sum(duration) AS "Stördauer_gesamt"]
    [ search Störung="Schichtende" OR Störung="Pause"
    | stats sum(duration) AS "Pausendauer"]
| eval Stördauer=Stördauer_gesamt-Pausendauer
| eval Arbeitszeit=now()-Arbeitsbeginn_unix-Pausendauer
| eval Verfügbarkeit=round((Arbeitszeit-Stördauer)/(Arbeitszeit)*100 , 1)
| table Stördauer_gesamt Pausendauer Arbeitsbeginn_unix Arbeitszeit Verfügbarkeit Stördauer

...doesn't calculate any fields after "| multireport ... ]". I tested Stördauer="Stördauer_gesamt"-"Pausendauer", Stördauer='Stördauer_gesamt'-'Pausendauer', and Stördauer=tonumber(Stördauer_gesamt)-tonumber(Pausendauer). Nothing works. This puzzles me, because all the fields used before that point have values. Does somebody have an idea where the problem is? Thanks in advance and kind regards, Felix
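A likely explanation: multireport emits each branch as a separate set of result rows, so Stördauer_gesamt and Pausendauer never land in the same row and the later eval has nothing to subtract. A sketch of an alternative that computes all three values in a single stats call so they share one row (may need adjusting, since Störung is multivalue after transaction):

... | stats first(Arbeitsbeginn_unix) AS Arbeitsbeginn_unix
        sum(duration) AS Stördauer_gesamt
        sum(eval(if(Störung="Schichtende" OR Störung="Pause", duration, 0))) AS Pausendauer
| eval Stördauer=Stördauer_gesamt-Pausendauer
| eval Arbeitszeit=now()-Arbeitsbeginn_unix-Pausendauer
| eval Verfügbarkeit=round((Arbeitszeit-Stördauer)/(Arbeitszeit)*100, 1)
| table Stördauer_gesamt Pausendauer Arbeitsbeginn_unix Arbeitszeit Verfügbarkeit Stördauer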
Hi Team,

Current table:

Application  Failure  Success
A            2        6
B            4        7
C            5        8

Expected:

Application  Failure  Success
D            11       21

How do I add up the application rows and present them as a new application, summing all the Failure and Success values? Can anyone help with this? Regards, Madhu R
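A minimal sketch, appended to whatever search produces the current table: sum the columns across all rows, then label the result as the new application (the name "D" comes from the expected table):

... | stats sum(Failure) AS Failure, sum(Success) AS Success
| eval Application="D"
| table Application Failure Success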
Is there a way to create a Submit button with different functionality per panel? Let's say one panel is for modifying and one panel is for searching. I followed the guide at https://blog.avotrix.com/add-submit-button-in-splunk-dashboard-panel/; while I can indeed create a Submit button per panel, each behaves like the global Submit button, just placed at the panel visually instead of at the top of the dashboard. Pressing a Submit button at one panel also submits the inputs at the other panel.
I have two times, A and B, in HH:MM:SS format. How do I get the difference between A and B in the same format?
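A minimal sketch, assuming A and B are string fields on the same result: parse both with strptime, subtract to get seconds, and render the difference back as HH:MM:SS with tostring:

... | eval diff_sec=strptime(A, "%H:%M:%S") - strptime(B, "%H:%M:%S")
| eval diff=tostring(diff_sec, "duration")

tostring(X, "duration") formats a number of seconds as HH:MM:SS.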
Hi all, I was previously tracking a new add-on that Splunk was developing for ingesting Google Workspace "audit" data into Splunk. We're already using the excellent existing add-on created by Kyle Smith, but were interested to see what the Splunk version would be like too. It looked like it was in beta; I could previously access it here: https://splunkbase.splunk.com/app/5556/#/details. But all trace of it seems to have disappeared now. Any ideas what happened to it? Thanks in advance, Stu
Hi All, I am having some trouble extracting the following fields:
1. username
2. DefaultMsg
3. Date
4. Time
This is what I have tried; it gives me the username, but I am stuck on how to extract the date, time, and DefaultMsg:

index=xxx-xxx | rex "(?<username>\w+@\w+\.\w+)" | table username DefaultMsg Date Time

Can someone please help me? Thank you so much. Regards, Alex
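Without seeing the raw events, a sketch of one approach (the DefaultMsg pattern is an assumption about the event layout): take Date and Time from _time rather than extracting them, and add a second rex for DefaultMsg:

index=xxx-xxx
| rex "(?<username>\w+@\w+\.\w+)"
| rex "DefaultMsg[:=]\s*(?<DefaultMsg>[^,\"]+)"
| eval Date=strftime(_time, "%Y-%m-%d"), Time=strftime(_time, "%H:%M:%S")
| table username DefaultMsg Date Time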
Hi All, I have just copied working props and transforms stanzas for SQS logs from one HF to another. However, something is wrong with the props and transforms on the new HF: the logs have stopped flowing, and I am getting a message like:

"start writing events to STDOUT" host=" " index="<index>main</index>" stanza=" "

I am using the transforms to extract the host name, index name, source, and sourcetype. Any help appreciated! Thanks
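That message generally suggests events ended up with no valid forwarding destination, so one thing worth checking is that any _TCP_ROUTING referenced in the copied transforms matches a target group actually defined in outputs.conf on the new HF. For reference, a sketch of the usual shape of metadata-rewriting props/transforms (stanza names and regexes are placeholders; the DEST_KEY values are the standard ones):

# props.conf
[sqs_sourcetype]
TRANSFORMS-meta = set_index, set_host

# transforms.conf
[set_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_index

[set_host]
REGEX = host=(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

The index="<index>main</index>" fragment in the message also looks as if the index transform's FORMAT may contain literal <index> tags rather than a plain index name, which would be worth ruling out first.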
Hi, I have an issue where DB Connect, installed on a heavy forwarder, is not able to forward logs to the indexers for a particular sourcetype configured on the indexer. The heavy forwarder obtains the logs from the MSSQL database using the HTTP Event Collector. From splunkd.log and the metrics logs I can see the non-DB-Connect logs being forwarded on the standard port 9997. We are able to run the SQL query in DB Connect and see results. I am not quite sure where I should troubleshoot and am hoping for some leads. The DB Connect version is 3.1.4 and the Splunk Enterprise version is 7.2.6. I do not see error 400 on the DB Connect server, or in the command and audit logs, though.
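One place to start (a sketch; the source filter is just DB Connect's conventional log location, and <your_hf> is a placeholder) is the HF's internal logs for the DB Connect server and HEC:

index=_internal host=<your_hf> (source=*splunk_app_db_connect* OR component=HttpInputDataHandler) (ERROR OR WARN)

If HEC is accepting the events locally but they never leave the HF for that sourcetype, the outputs/routing configuration for that sourcetype is the next thing to check.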
Hi, a lot of Splunkers know how to measure common latency/timeskew in Splunk using _time and _indextime, but who knows how to measure the latency at every step from a UF on its way to the indexer, when there may be more forwarders along the way (heavy, intermediate, etc.) at which latency can arise? The question was really asked here: Indexing latency arising at forwarders?, but never answered. Does anyone know how to nail down this information? My idea was to enrich the data at every level by adding each forwarder tier's hostname and a timestamp to every event, so that you would always know the exact source of any latency, if you can follow my approach. I.e., would it be possible to use INGEST_EVAL to add new fields at every tier the event passes, like t<no>_host=<host> and t<no>_time=<timestamp>? This approach will likely also touch on cooked data and to what extent it's possible to enrich it along the way. Let me hear your thoughts and ideas.
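For the first parsing tier this is straightforward. A sketch (sourcetype and hostname are placeholders; time() is evaluated as the event passes the pipeline, and INGEST_EVAL writes indexed fields):

# props.conf on the first parsing tier
[my_sourcetype]
TRANSFORMS-tier1 = tier1_stamp

# transforms.conf
[tier1_stamp]
INGEST_EVAL = t1_time=time(), t1_host="hf-01.example.com"

The caveat matching the question's own concern about cooked data: INGEST_EVAL runs in the parsing pipeline, and a downstream forwarder that receives already-parsed (cooked) data will not parse it again, so it will not add its own tier stamp unless it is configured to re-parse.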
I want a report for when the total number of events from a sourcetype is less than 9,500,000 in a day. I tried the query below, but it gives me a count of 0:

| tstats count where index="cb_protect" sourcetype="carbonblack:protect" subtype=* | search count<9500000

I need some help with this scenario.
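A sketch of an alternative: group by day and then filter, and drop subtype=* unless subtype is an indexed field, since tstats can only filter on indexed fields (a non-indexed subtype would explain the count of 0):

| tstats count where index="cb_protect" sourcetype="carbonblack:protect" by _time span=1d
| where count < 9500000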