All Topics

Dear AppD Team, I have a few services written in .NET Core running on Linux containers. I'm using the latest .NET agent for Linux. Among these services are a Kafka producer and a Kafka consumer. We are finding that AppD does not see the Kafka traffic between these services. Do the .NET agents currently support Kafka? Thanks.
Hi y’all. I recently installed a Splunk Enterprise AMI instance in EC2. Unfortunately, I am unable to log in with the default credentials; I keep getting ‘Server Error’. I am using the latest version, so I am assuming the credentials are: Username: admin, Password: SPLUNK-<instance-id>. I tried with just the instance ID too, but I always see ‘Server Error’. I also tried terminating the instance and launching a new one; that didn’t work. I also tried SSHing into the EC2 instance to change the password, but I don’t have access as the ec2-user. Any help with this is hugely appreciated. Thanks in advance.
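For whenever shell access is regained: a minimal sketch of Splunk's documented user-seed.conf password reset, assuming a default /opt/splunk install path (adjust to the AMI's actual install location):

    # Stop Splunk, move the old credential store aside, and seed a new admin password.
    sudo /opt/splunk/bin/splunk stop
    sudo mv /opt/splunk/etc/passwd /opt/splunk/etc/passwd.bak
    sudo tee /opt/splunk/etc/system/local/user-seed.conf <<'EOF'
    [user_info]
    USERNAME = admin
    PASSWORD = ChangeMe123!
    EOF
    sudo /opt/splunk/bin/splunk start

On start, Splunk consumes user-seed.conf and recreates the admin account with the seeded password.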
I have data something like this.

Input:

    firstname=value1,lastname=value2,email=value3,address=value4 ... etc
    firstname=value11,lastname=value12,email=value13,address=value14 ... etc
    firstname=value12,lastname=value13,email=value14,address=value15 ... etc

Desired output:

    firstname    lastname    email      address
    value1       value2      value3     value4
    value11      value12     value13    value14
    value12      value13     value14    value15

I want to extract this data into a table with the keys as column headers. Please note these keys are dynamic and can have any names. I tried

    search | extract pairdelim="," kvdelim="="

but I'm not sure how to put the results into a table format. Any inputs?
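A sketch of one way to finish this, assuming each event is one key=value record (the wildcard in table picks up every extracted field but not internal underscore fields):

    <your search>
    | extract pairdelim="," kvdelim="="
    | table *

If default fields such as punct or linecount show up in the table, drop them first with | fields - punct linecount.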
Our deployer instance is getting the following error: Snapshots are supposed to be created every 60 seconds, but at least 94142 seconds have passed since last successful snapshot creation. And the time since the last successful snapshot creation keeps increasing.
I would like to use indexRouting to move some log lines to a given index and have other log lines go to the HEC's default index. The log lines that I want to route are single-line JSON formatted as a HEC event. Below is a pretty-printed example:

    {
      "event": {
        "device": {
          "id": "dcef6f000bc7a6baffc0f0b5f000"
        },
        "logMessage": {
          "description": "Publishing to web socket",
          "domain": "WebSocketChannel",
          "severity": "debug"
        },
        "topic": "com.juneoven.dev.analytics"
      },
      "index": "analytics_logs_dev",
      "level": "INFO",
      "source": "dev.analytics",
      "sourcetype": "analytics-logs",
      "time": 1630091106.076237
    }

Other log lines are normal text logs (non-JSON formatted):

    2021-08-27 19:09:14,295 INFO [tornado.access] 202 POST /1/analytics/log (10.110.4.224) 35.62ms

I see that there is a customFilter feature. I am hoping that I can key off of the 'index' field in the HEC event to route these JSON log lines to their index and allow all other lines to go to the default index for the HEC. Is that possible? Is there some documentation that would help me? Thanks.
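For background, independent of the customFilter option: HEC itself honors a top-level index field on the event endpoint, provided the token's allowed-indexes list includes that index, so if these pre-formatted events reach HEC unmodified the routing may come for free. A sketch (host, port and token are placeholders):

    curl -k "https://splunk.example.com:8088/services/collector/event" \
        -H "Authorization: Splunk <hec-token>" \
        -d '{"event": {"topic": "com.juneoven.dev.analytics"},
             "index": "analytics_logs_dev",
             "sourcetype": "analytics-logs",
             "time": 1630091106.076237}'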
After upgrading Splunk Enterprise from 8.0.x to version 8.2.2, Splunk will not start on my indexer/search head. When I start it I get the following error:

Any ideas on what could be causing this, or places to check? Thanks!
Hi, I am trying to find the min, max and avg for the 99th, 90th and 75th percentiles with the below:

    index="main" source="C:\\inetpub\\logs\\LogFiles\\*" host="WIN-699VGN4SK4U"
    | eval responseTime=round(time_taken/1000)
    | timechart span=1mon perc99(time_taken) as 99thPercentile perc90(time_taken) as 90thPercentile perc75(time_taken) as 75thPercentile
    | stats min(99thPercentile) max(99thPercentile) avg(99thPercentile) min(90thPercentile) max(90thPercentile) avg(90thPercentile) min(75thPercentile) max(75thPercentile) avg(75thPercentile) by _time

It returns:

    min(99thPercentile)  max(99thPercentile)  avg(99thPercentile)  min(90thPercentile)  max(90thPercentile)  avg(90thPercentile)  min(75thPercentile)  max(75thPercentile)  avg(75thPercentile)
    66.50                66.50                66.50                12.5                 12.5                 12.5                 5.984375             5.984375             5.984375

However, all the numbers are coming back the same. Any ideas?

Thanks

Joe
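A likely cause: the final stats runs per _time bucket, and with a single monthly timechart bucket in range the min, max and avg of one value are identical. A sketch of one fix is to compute the percentiles per bucket first and then aggregate across buckets without by _time (the daily span is an assumption, chosen so there are multiple buckets to aggregate; note the original eval creates responseTime but never uses it, so time_taken is kept here for consistency):

    index="main" source="C:\\inetpub\\logs\\LogFiles\\*" host="WIN-699VGN4SK4U"
    | bin _time span=1d
    | stats perc99(time_taken) as p99 perc90(time_taken) as p90 perc75(time_taken) as p75 by _time
    | stats min(p99) max(p99) avg(p99) min(p90) max(p90) avg(p90) min(p75) max(p75) avg(p75)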
I have two log files, logfile1.log and logfile2.log, and I have created field extractions for both of them. Here is an example line from each log:

    logfile1.log: file1time, epoch, file1ID, name, flag, stat1, stat2, stat3
    logfile2.log: lastruntime, file2ID, epoch

What I need to do is compare the IDs between the two log files, ensure that they're the same, and output the "name" field in the search that has logfile2.log as its source. There's probably a very easy way to do this, but I can't think of it. Any help would be greatly appreciated. Thanks!
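A minimal join-free sketch, assuming the extractions produce fields named file1ID, file2ID and name as above; it surfaces the name for every ID that appears in both files:

    (source="*logfile1.log" OR source="*logfile2.log")
    | eval id=coalesce(file1ID, file2ID)
    | stats values(name) as name dc(source) as source_count by id
    | where source_count=2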
What OOTB (fresh install) features of Splunk Enterprise or ES should be kept on or turned off, in your expert opinion, to get the best out of Splunk Core / ES? Thank you in advance.
I want to establish an alert that looks at the past hour for each of the past 6 weeks and makes some comparisons. So for 2 PM I will have 6 results, one for week 1, 2, 3, and so on. I know I can use date_hour, but I need something that adjusts to when the alert runs, so effectively the last 60 minutes of each week:

    index=text source=text date_hour=14 | timechart span=1h count

Is there a way I can do this more easily?
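A sketch of one way to make the hour track the alert's run time instead of hard-coding 14 (this assumes events and the search head share a timezone, since date_hour reflects the event's raw timestamp):

    index=text source=text earliest=-6w@h latest=@h
    | where date_hour=tonumber(strftime(now(), "%H"))
    | timechart span=1h count
    | where count>0

The final where simply drops the empty buckets for all the non-matching hours, leaving one row per week.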
Hello Splunk community, I'm currently trying to use the Splunk Enterprise free trial with my Firepower, following the most recent guide: https://www.cisco.com/c/en/us/td/docs/security/firepower/70/api/eNcore/eNcore_Operations_Guide_v08.html#_Toc76556475 The splencore test reports a successful connection, but when I access Splunk the Firepower dashboard doesn't get populated with info. I have tried reinstalling multiple times and following guides, with no success. Running Debian 10. Any suggestion will be truly appreciated.
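A quick sanity check is to confirm whether any eStreamer data is being indexed at all, independent of the dashboards. A sketch (the sourcetype name is an assumption based on the eNcore add-on; adjust it to whatever the add-on actually writes):

    index=* sourcetype=cisco:estreamer:data earliest=-24h
    | stats count by index, sourcetype

If this returns nothing, the problem is ingestion rather than the dashboards themselves.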
Hello all, I am struggling to find a solution for this. I have two different searches. One shows log entries where system errors have occurred:

    User Id:    Error code:    Error Time:
    121         E3002189       2021-08-27 12:01:34
    249         E1000874       2021-08-27 12:05:21
    121         E2000178       2021-08-27 12:27:09

The other search shows where the users were/are located throughout the day:

    User Id:    Location:    Login Time:            Logout Time:
    121         P155         2021-08-27 11:54:56    2021-08-27 12:14:19
    121         U432         2021-08-27 12:22:16    2021-08-27 12:34:52
    249         M127         2021-08-27 12:01:32    2021-08-27 12:35:45
    249         J362         2021-08-27 12:38:25    2021-08-27 12:50:11

I am trying to join the two searches and then compare the times to find the location of a user at the time of an error. So I tried joining on the user ID and then comparing the Unix times of the timestamps above to find Error Time >= Login Time and Error Time <= Logout Time, but that didn't work. How can I set up the search to accomplish this? Thanks in advance!
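One likely reason the join attempt failed: join keeps only one subsearch row per join key by default (max=1), so all but one location row per user is dropped before the time comparison ever runs. A sketch that joins all rows and then filters, assuming simplified field names user_id, error_code, error_time, login_time, logout_time and location:

    <your error search>
    | eval error_epoch=strptime(error_time, "%Y-%m-%d %H:%M:%S")
    | join type=inner max=0 user_id
        [ search <your location search>
          | eval login_epoch=strptime(login_time, "%Y-%m-%d %H:%M:%S")
          | eval logout_epoch=strptime(logout_time, "%Y-%m-%d %H:%M:%S") ]
    | where error_epoch>=login_epoch AND error_epoch<=logout_epoch
    | table user_id, error_code, error_time, location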
Hi All, I will be getting a list of MD5 hash values in my logs and need a regex expression for the below, to extract the hash whenever an MD5 value appears:

    "md5":"b78269ef4034474766cb1351e94edf5c",
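A minimal sketch with rex (an MD5 is 32 hex characters; the extracted field name md5 is taken from the sample):

    <your search>
    | rex field=_raw "\"md5\":\"(?<md5>[0-9a-fA-F]{32})\""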
Hello everybody, we are trying to set up a search that diffs two files from two different days. This is the working search:

    | set diff
        [ search index=myindex source="*2021-08-27*.csv" | stats count by idx | table idx ]
        [ search index=myindex source="*2021-08-26*.csv" | stats count by idx | table idx ]
    | join idx [ search index=myindex source="*2021-08-27*.csv" ]
    | table "SITE ID", idx, "Title", FQDN, "Asset Primary Identifier", "IP Address", Hostname, "Operating System", Port

However, we'd like to make it parametric: we'd like the dates contained in the source names to be calculated automatically, so we tried this:

    | set diff
        [ | eval todayFile=strftime(now(), "*%Y-%m-%d*.csv")
          | search index=myindex source=todayFile
          | stats count by idx | table idx ]
        [ search index=myindex source="*2021-08-25*.csv" | stats count by idx | table idx ]
    | join idx [ search index=myindex source=todayFile ]
    | table "SITE ID", idx, "Title", FQDN, "Asset Primary Identifier", "IP Address", Hostname, "Operating System", Port

but it's not working; or rather, it doesn't return errors, but it doesn't return correct results either. How can we substitute source="*2021-08-25*.csv" with an instruction that dynamically inserts today's date into the source filename, so we can run the search every day?
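A likely cause: source=todayFile compares source against the literal string "todayFile"; an eval'd field can't be used as a search term that way. A sketch of one workaround is to build the source filter in a subsearch and hand it back with return (strftime and relative_time compute today and yesterday at run time):

    | set diff
        [ search index=myindex
            [ | makeresults
              | eval search="source=\"*" . strftime(now(), "%Y-%m-%d") . "*.csv\""
              | return $search ]
          | stats count by idx | table idx ]
        [ search index=myindex
            [ | makeresults
              | eval search="source=\"*" . strftime(relative_time(now(), "-1d"), "%Y-%m-%d") . "*.csv\""
              | return $search ]
          | stats count by idx | table idx ]

The same makeresults/return subsearch can replace the literal source filter in the later join as well.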
Suddenly, transforming commands stopped working unless I search in verbose mode. What could cause this issue? It only happens with newly indexed events, but the events seem identical. Even a simple | stats count fails. Regards, G.
I have RabbitMQ message queue logs to be monitored. Is there an app or add-on from Splunk which I can use to monitor those logs? Please let me know.
I have a drilldown-enabled dashboard with base searches powering the main panels and also some of the drilldown panels.

Let's say my main panel is A. Upon clicking on A, panels A1, A2 and A3 open up. Before implementing drilldowns, my older dashboard showed all 4 panels (A, A1, A2, A3) the moment the dashboard loaded.

My question is: does this mean that the SPL responsible for displaying results in the drilldown panels A1, A2, A3 only begins to execute after I click in panel A? Or is the SPL for all the panels (A, A1, A2, A3) executed at once when I open the dashboard, with the drilldown click merely stopping me from seeing the drilldown panels' output while their result-generating SPL has already run in the background? I'm wondering whether the drilldown can help in any way in improving performance.
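In Simple XML, a search whose query references an unset token is not dispatched until the token is set, so a drilldown that sets a token which the A1/A2/A3 queries reference does defer their execution (and can improve initial load time). A minimal sketch (the token name and query are placeholders):

    <panel depends="$selected_value$">
      <table>
        <search>
          <!-- not dispatched until the drilldown sets $selected_value$ -->
          <query>index=main value="$selected_value$" | stats count by field</query>
        </search>
      </table>
    </panel>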
Hi, how do I get a subtotal count for each Host and a total across all counts, plus a count for each different status?

    Host     Status             Count
    HostA    Disconnected       1
    HostA    Running            19
    HostA    RunningWithErrors  2
    HostA    BadConnectivity    2
    HostB    Disabled           2
    HostB    Disconnected       1
    HostB    Running            17
    HostB    RunningWithErrors  5
    HostC    BadConnectivity    1
    HostC    Running            7
    HostC    RunningWithErrors  5
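A sketch using appendpipe to add per-host subtotal rows and a grand-total row, assuming the results are shaped like the table above with fields Host, Status and Count (e.g. from stats count as Count by Host Status):

    <your search producing Host/Status/Count>
    | appendpipe [ stats sum(Count) as Count by Host | eval Status="SUBTOTAL" ]
    | appendpipe [ where Status!="SUBTOTAL" | stats sum(Count) as Count | eval Host="ALL", Status="TOTAL" ]
    | sort Host

The where in the second appendpipe keeps the subtotal rows out of the grand total so nothing is double-counted. A count per status is a separate rollup: | stats sum(Count) as Count by Status.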
Hi, is there a step-by-step procedure for setting up Ubiquiti routers, switches and the controller to send logs to Splunk? I am new and lack knowledge in how to set it up. I am using the trial version of Splunk Cloud Platform. What is your recommended approach if there are no guidelines? Thanks. Best, Borjales
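Splunk Cloud cannot receive syslog directly, so the usual pattern is to point the Ubiquiti devices' remote syslog at an on-prem heavy forwarder (or Splunk Connect for Syslog) that forwards to the cloud stack. A sketch of the forwarder side (the port, index and sourcetype are assumptions):

    # inputs.conf on the on-prem forwarder
    [udp://9514]
    sourcetype = syslog
    index = network
    connection_host = ip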
Hey Splunk community, there's another problem which must be solved. The following query...

    index=machinedata_w05_sum app=StörungenPulveranlagen "Linie 1" earliest=@d latest=now
    | where Arbeitsplatz="Arbeitsplatz Pulveranlagen" OR Arbeitsplatz="Arbeitsplatz Einhängen" OR Arbeitsplatz="Arbeitsplatz Aushängen"
    | transaction startswith="kommend" endswith="gehend"
    | eval Arbeitsbeginn="5:30:00"
    | eval Arbeitsbeginn_unix=strptime(Arbeitsbeginn, "%H:%M:%S")
    | eval Störzahl=mvcount(Störung)
    | search Störzahl!=2
    | multireport
        [ stats first(Arbeitsbeginn_unix) AS Arbeitsbeginn_unix ]
        [ stats sum(duration) AS "Stördauer_gesamt" ]
        [ search Störung="Schichtende" OR Störung="Pause"
          | stats sum(duration) AS "Pausendauer" ]
    | eval Stördauer=Stördauer_gesamt-Pausendauer
    | eval Arbeitszeit=now()-Arbeitsbeginn_unix-Pausendauer
    | eval Verfügbarkeit=round((Arbeitszeit-Stördauer)/(Arbeitszeit)*100 , 1)
    | table Stördauer_gesamt Pausendauer Arbeitsbeginn_unix Arbeitszeit Verfügbarkeit Stördauer

...doesn't want to calculate any fields after "| multireport ... ]". I tested Stördauer="Stördauer_gesamt"-"Pausendauer"; Stördauer='Stördauer_gesamt'-'Pausendauer'; Stördauer=tonumber(Stördauer_gesamt)-tonumber(Pausendauer). Nothing works. I am puzzled because all the previously used fields have values. Does somebody have an idea where the problem is? Thanks in advance and kind regards, Felix
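A likely explanation: multireport emits each branch as separate result rows, so Arbeitsbeginn_unix, Stördauer_gesamt and Pausendauer never sit on the same row, and the later evals see nulls. A sketch that collapses the three branches into a single stats row instead, using sum(eval(...)) to accumulate the pause durations conditionally (if Störung is multivalue after transaction, the if() condition may need a multivalue-aware match instead):

    index=machinedata_w05_sum app=StörungenPulveranlagen "Linie 1" earliest=@d latest=now
    | where Arbeitsplatz="Arbeitsplatz Pulveranlagen" OR Arbeitsplatz="Arbeitsplatz Einhängen" OR Arbeitsplatz="Arbeitsplatz Aushängen"
    | transaction startswith="kommend" endswith="gehend"
    | eval Arbeitsbeginn_unix=strptime("5:30:00", "%H:%M:%S")
    | eval Störzahl=mvcount(Störung)
    | search Störzahl!=2
    | stats first(Arbeitsbeginn_unix) as Arbeitsbeginn_unix
            sum(duration) as Stördauer_gesamt
            sum(eval(if(Störung="Schichtende" OR Störung="Pause", duration, 0))) as Pausendauer
    | eval Stördauer=Stördauer_gesamt-Pausendauer
    | eval Arbeitszeit=now()-Arbeitsbeginn_unix-Pausendauer
    | eval Verfügbarkeit=round((Arbeitszeit-Stördauer)/Arbeitszeit*100, 1)
    | table Stördauer_gesamt Pausendauer Arbeitsbeginn_unix Arbeitszeit Verfügbarkeit Stördauer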