All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Monitoring & Alerting for noise in an audio file? Hi, I currently have an audio recorder in my daughter's kindergarten, since there has been an increase in reports of violence in the news lately. The recorder generates a .wav file that can cover more than 24 hours. My question is: does Splunk have the ability to ingest such a file and identify "events" based on a condition that could then trigger alerts? When you view an audio file, there's that "line" that moves up and down depending on the sound volume, right? I know that detecting a specific word might not be possible, but maybe when a staff member screams and that volume line spikes to the very top, an alert could be generated? Or a continuous "high" line caused by sustained volume could hint at crying. This is very important to me; I would appreciate any ideas. Thank you!
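Splunk cannot decode audio natively, so alerting like this would need an external step that turns the .wav into numeric events first. As a rough sketch, assume some script (run as a scripted input, for example) writes one line per second like `2023-01-26T09:00:01 amplitude=0.92` into an index — the index, sourcetype, field name, and thresholds below are all hypothetical:

```spl
index=audio_metrics sourcetype=wav_amplitude
| bin _time span=10s
| stats max(amplitude) AS peak avg(amplitude) AS avg_level BY _time
| where peak > 0.9 OR avg_level > 0.7
```

Saved as an alert running every few minutes, `peak` would catch a scream-like spike and `avg_level` sustained loudness such as crying; the thresholds are placeholders to tune against real recordings.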
Hello guys, I want a simple thing: install an ODBC driver on my Windows PC and pull data from Splunk into Power BI. However, I don't know how to authenticate. Our company uses Azure AD SSO authentication, so I don't have any admin/password credentials. Does anyone have any experience with this? I'm going against: company.splunkcloud.com, Auth: Azure AD
My network has 34 hosts with a universal forwarder set up on each of them, but only 5 of them are forwarding their logs to the indexer. What should I do to overcome this problem? PS: The firewall on the indexer has already been turned off, to rule out any packet drops.
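One way to narrow this down: the forwarders log their connections into Splunk's internal index, so a search like the sketch below (run from the search head, assuming `_internal` is accessible) lists which forwarders are actually reaching the indexer, for comparison against the 34 expected hosts:

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY hostname
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
```

Hosts missing from this list never connect at all (worth checking outputs.conf and splunkd.log on those UFs); hosts that appear here but send no application data more likely have an inputs.conf or permissions problem.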
Apologies if this belongs in a different location, but it seemed like the best fit among the choices available. I am looking to host a .txt file within my custom app that can be retrieved as an EDL (external dynamic list) by my firewall. I have been successful at serving an .html document from the app's ./appserver/static/ directory. This works fine, but I am concerned it will not work as an EDL; I really need it to be a .txt file (unless someone has had success with .html files and would like to share). To be clear, this works just fine: https://splunk.example.com:8000/en-US/static/app/example_app/edl.html But this does not: https://splunk.example.com:8000/en-US/static/app/example_app/edl.txt If anybody knows how to make this work, or could explain why it is not possible, I would appreciate it. Thank you,
Hello Splunk experts, I would like to simplify some complex SPL queries that search for certain events and apply tags to them according to various business rules based on both keyword searching and pattern matching. The events come from a ticketing system with many attributes, but I will simplify it thus:

ID        DATE                     GROUP           NOTES              SERVER
Ticket #  Ticket submit date/time  Assigned group  Details of ticket  Affected server

I need to add a new field, called BUCKET, to calculate and store the type of ticket, based on a matrix like this:

BUCKET  GROUPS  KEYWORDS  SERVERS
AAA     g1, g2  f1, f2    abc*
BBB     g3, g4  f3        server123, xyz*
CCC     g5, g6  f9        *dmz*

For example, BUCKET should be set to AAA when a ticket event arrives whose group is g1 or g2, whose notes contain keyword f1 or f2, and whose server name begins with abc. I currently have some nasty queries with a bunch of where/if/case operations to do the tagging, and for each bucket the query has to scan through the same set of ticket events. I would really like to move this business logic to a simple lookup-type CSV file so it can easily be updated without modifying any saved searches or dashboards, and so the tagging can be done in a single pass for all buckets by my current scheduled saved search, which processes the raw ticket data ingested from the ticketing system via DBX. In reality, I have a dozen different buckets, ~50 different groups, and a similar number of keywords. Only one bucket actually needs pattern matching on the server name, but it would be nice to support full pattern matching. Our operators ultimately have a trellis-type scorecard dashboard with a box for each bucket that shows the current number of tickets and is colored when the number goes above certain levels.
When the operator clicks on the bucket number, they are sent to a drilldown that shows a table of the ticket details, and that drilldown has dynamic hyperlinks directly into the ticketing system. I am imagining this tagging could be done with a fancy lookup somehow. I already have the bucket matrix in a lookup CSV file. I have played with using the format command to generate the appropriate nested boolean AND/OR search logic with a foreach loop, but foreach doesn't seem to know how to iterate down a column in a CSV. Does my challenge seem doable? Can anyone share, or point me to, some example code that uses multiple patterns stored in a lookup CSV file? FYI, I am using Splunk Enterprise 8.1.9 and I DO NOT have any CLI access to either the SHC or the indexers. Thanks for any tips.
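One hedged approach (file and stanza names are illustrative): a lookup with wildcard matching, so the whole matrix is applied in one pass. Because a lookup row matches all of its columns (AND), the matrix is expanded to one row per group/keyword combination, and the keyword and server columns carry explicit wildcards. The match types correspond to the "Match type" field under the lookup definition's advanced options in Splunk Web, so no CLI access is required:

```
# buckets.csv — one row per (group, keyword) combination
BUCKET,GROUP,NOTES,SERVER
AAA,g1,*f1*,abc*
AAA,g1,*f2*,abc*
AAA,g2,*f1*,abc*
AAA,g2,*f2*,abc*
BBB,g3,*f3*,server123

# transforms.conf equivalent of the lookup definition
[bucket_lookup]
filename = buckets.csv
match_type = WILDCARD(NOTES), WILDCARD(SERVER)
max_matches = 1
```

The scheduled saved search then tags every ticket in a single pass:

```spl
... | lookup bucket_lookup GROUP NOTES SERVER OUTPUT BUCKET
```

With max_matches = 1 the first matching row wins, so row order in the CSV doubles as bucket precedence.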
I am trying to create an alert for when the field toState changes to OPEN and stays in that OPEN state for 5 minutes. I have tried the following, but it is not working. I would appreciate some pointers.

... CB_STATE_TRANSITION
| timechart span=5m count(toState="OPEN") as state
| stats count
| where count > 1

I have the alert run every 5 minutes, triggering when the number of results is > 0.
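One likely issue: count(toState="OPEN") is not valid aggregation syntax (that form needs count(eval(toState="OPEN"))), so the timechart never produces what is expected. An alternative sketch, assuming the transition events carry a field identifying each breaker (breaker_name below is a placeholder): run every 5 minutes over roughly the last 15 minutes, and fire when the most recent transition is to OPEN and happened at least 5 minutes ago with no later change:

```spl
... CB_STATE_TRANSITION
| stats latest(toState) AS current_state latest(_time) AS last_change BY breaker_name
| where current_state="OPEN" AND now() - last_change >= 300
```

Any transition after the OPEN one updates latest(toState), so a breaker that reopened and then closed within the window does not alert.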
Hi, my strptime function is not working for the format below. Date format: 1/13/23 11:44:11.543 AM

eval time_epoc = strptime(_time, "%m/%d/%Y %I:%M:%S.%3N %p")
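Two things stand out, so a hedged fix: strptime() takes a string, but _time is already an epoch number, so it should run against _raw or an extracted text field (date_field below is a placeholder for whatever field holds the date string); and a two-digit year needs %y, not %Y:

```spl
| eval time_epoc=strptime(date_field, "%m/%d/%y %I:%M:%S.%3N %p")
```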
I have the following event:

2023-01-25T20:20:45.429989-08:00 abc log-inventory.sh[20519]: Boot timestamp: 2023-01-25 20:15:56

I am trying to extract the boot timestamp and then calculate the difference between the current time and the boot timestamp. I used the following search:

index=abc
| rex field=_raw "Boot\s*timestamp\:\s*(?<Boot_Time>[^.*]+)"
| stats Latest(Boot_Time) as Boot_Time latest(_time) as time by host
| eval diff = now() - Boot_Time

but it shows no results.
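A few likely issues: Latest() is not a valid stats function (function names are lowercase, latest()), which alone can make the search error out with no results; the [^.*]+ character class is fragile (it stops at any '.' or '*'); and Boot_Time is a string, which must be converted to an epoch before subtracting. A sketch along those lines:

```spl
index=abc "Boot timestamp"
| rex "Boot\s+timestamp:\s+(?<Boot_Time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| eval boot_epoch=strptime(Boot_Time, "%Y-%m-%d %H:%M:%S")
| stats latest(boot_epoch) AS boot_epoch BY host
| eval diff=now() - boot_epoch
```

diff comes out in seconds; tostring(diff, "duration") would render it as HH:MM:SS if that reads better.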
We are using custom Docker containers deployed as Azure Functions. The underlying code is all in Python. I'd like to use the Splunk logging driver within my container, but I am unsure how to configure Azure Functions to pass the logging driver and log-opt parameters. Thanks, Steve
Hello, I am using a UF to ingest a CSV file that has the timestamp in preamble data. I would like to extract the timestamp, remove the preamble data, and then ingest the CSV. The file looks like this:

Time stamp,2023-01-26T11:15:00-05:00
info,obj
datainfo,blahblah
datadata,blahblah

field1,field1,field2
value1,1,info1
value2,2,info2
value3,3,info3
value4,4,info4
value5,5,info5
value6,6,info6
value7,7,info7
value8,8,info8
value9,9,info9

My props.conf:

DATETIME_CONFIG =
TIME_PREFIX = Time\sstamp,
MAX_TIMESTAMP_LOOKAHEAD = 22
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
INDEXED_EXTRACTIONS = CSV
FIELD_HEADER_REGEX = (field1.*)
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true

The issue is that I am able to read the CSV and the field names, but the timestamp of each event is the current time, not the one from the file. How do I fix this?
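A hedged sketch of a revised props.conf, with the stanza name assumed. Two points: with INDEXED_EXTRACTIONS the structured parsing happens on the universal forwarder, so these settings must live in the UF's props.conf, not only on the indexer; and the empty `DATETIME_CONFIG =` line overrides normal timestamp extraction, which is one likely reason the file timestamp is ignored. PREAMBLE_REGEX can discard the non-data header rows while leaving the timestamp line for TIME_PREFIX to find:

```
[my_preamble_csv]
INDEXED_EXTRACTIONS = CSV
FIELD_HEADER_REGEX = (field1.*)
PREAMBLE_REGEX = ^(info|datainfo|datadata),.*
TIME_PREFIX = Time\sstamp,
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

One caveat: the timestamp exists only once, in the preamble, while each data row becomes its own event with no timestamp of its own, so the rows may still fall back to index time. If every row must inherit the file-level timestamp, the most robust fix is usually to have the producer repeat the timestamp as a column in each row.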
Currently I have an inputlookup CSV that contains a list of IP addresses and a lookup CSV that has a list of subnets. I would like to create a query that shows the IPs in the inputlookup table that are not part of the subnets specified in the lookup. I am stumped on how to do this; any assistance would be greatly appreciated.
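One sketch, assuming the files are named ip_list.csv (column ip) and subnets.csv (column subnet), and that a lookup definition subnet_lookup is created over subnets.csv with CIDR matching (Match type `CIDR(subnet)` under the definition's advanced options in Splunk Web, or the transforms.conf equivalent below):

```
[subnet_lookup]
filename = subnets.csv
match_type = CIDR(subnet)
```

```spl
| inputlookup ip_list.csv
| lookup subnet_lookup subnet AS ip OUTPUTNEW subnet AS matched_subnet
| where isnull(matched_subnet)
```

The lookup matches each ip against the CIDR ranges; rows where matched_subnet stays null are the IPs outside every listed subnet.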
The sender and recipient information I need from Unix/Linux "sendmail" logs is contained in separate lines of the sendmail log. I am able to correlate all the entries for a given email using a nested search, dedup, and transaction with the following search:

index="sendmail_logs" host=relay* [search index="sendmail_logs" host=relay* from=\<*@example.com\> | dedup qid | fields qid ] | transaction fields=qid maxspan=1m

which produces the following (simplified and obfuscated):

2023-01-26T23:37:25+00:00 relay1 sendmail[115877]: 30QNbOpD115877: milter=ourmiltname, action=mail, continue
2023-01-26T23:37:25+00:00 relay1 sendmail[115877]: 30QNbOpD115877: milter=ourmiltname, action=rcpt, continue
2023-01-26T23:37:25+00:00 relay1 sendmail[115877]: 30QNbOpD115877: milter=ourmiltname, action=data, continue
2023-01-26T23:37:25+00:00 relay1 sendmail[115877]: 30QNbOpD115877: from=<bounce+e1165d.ef30-username=ourdomain.com@example.com>, size=25677, class=0, nrcpts=1, msgid=<20230126233721.b60dfcd8b6c1249b@example.com>, bodytype=8BITMIME, proto=ESMTPS, daemon=MTA, tls_verify=NO, auth=NONE, relay=m194-164.mailgun.net [161.38.194.164]
2023-01-26T23:37:26+00:00 relay1 sendmail[115877]: 30QNbOpD115877: milter add: header: X-NUNYA-SPF-Record: v=spf1 include:mailgun.org include:_spf.smtp.com ~all
2023-01-26T23:37:26+00:00 relay1 sendmail[115877]: 30QNbOpD115877: milter change: header Subject: from Sample Subject Line to EXTERNAL: Sample Subject Line
2023-01-26T23:37:26+00:00 relay1 sendmail[115877]: 30QNbOpD115877: milter=ourmiltname, action=eoh, continue
2023-01-26T23:37:27+00:00 relay1 sendmail[115887]: 30QNbOpD115877: to=<username@ourdomain.com>, delay=00:00:02, xdelay=00:00:01, mailer=smtp, tls_verify=OK, pri=145677, relay=nexthop.ourdomain.com. [192.168.0.7], dsn=2.0.0, stat=Sent (30QNbQau230876 Message accepted for delivery)
2023-01-26T23:37:27+00:00 relay1 sendmail[115887]: 30QNbOpD115877: done; delay=00:00:02, ntries=

Now, what I want to do is reduce the output to only the lines that contain the strings "from=" OR "to=". I am new to Splunk, so I tried adding

| regex _raw="from\=\<|to\=\<"

but all the lines are still displayed. Any suggestions on how to correct my query?
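After transaction, each result's _raw contains all the merged lines, so regex keeps or drops whole events, never individual lines within one, which is why everything still shows. One way to pull out just the matching lines from each transaction is a multiline rex with max_match, sketched here:

```spl
... | transaction fields=qid maxspan=1m
| rex max_match=0 "(?m)^(?<msg_line>.*(?:from=<|to=<).*)$"
| table _time qid msg_line
```

msg_line becomes a multivalue field holding only the from=/to= lines of each transaction; the rest of the event stays intact if other fields are ever needed.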
I have two dropdowns on a dashboard. When I select a value from the first dropdown, the second dropdown should show a filtered list based on that selection. Please help.
Hey everyone. I am trying to do a stats count by items{}.description, items{}.price, but I'd like to filter out some of the items. In this example, I'd like to include "description one" and "description two", but not "description three". In the real situation there are many more that I'd like to include and to exclude. Is this possible?

"time": "01:01:01",
"items": [
    { "description": "description one", "price": "$220" },
    { "description": "description two", "price": "$10" },
    { "description": "description three", "price": "$50" }
]
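One way, sketched with spath and mvexpand so each array element becomes its own result row before filtering (the inclusion list could also be driven from a lookup, but a plain search is shown here):

```spl
| spath path=items{} output=item
| mvexpand item
| spath input=item
| search description IN ("description one", "description two")
| stats count BY description, price
```

Expanding first matters: filtering on the multivalue items{}.description directly would keep or drop whole events, not individual array entries.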
Hi all, I'm developing a POC using the OpenTelemetry Collector (running on Docker) to collect logs, traces, and metrics from a .NET application and then send the observability data to AppDynamics. I'm following this section of the documentation: https://docs.appdynamics.com/appd/22.x/22.6/en/application-monitoring/appdynamics-for-opentelemetry Currently I only see traces in the example configuration. Does anyone know if logs and metrics are also supported? And do I need to install the .NET agent to do this (given that I just want to see data successfully received in AppDynamics)? Thank you for the help! Regards,
We have been having a constant stream of log output related to the tier 3 "splunk" plugin, and we are looking to see how to remove it, as it is making our bundles difficult to analyze and troubleshoot. There is a constant stream of jenkins_console events coming from most of our backend microservices; is there any way to remove this from our logs? Thanks! Fernando, DevOps, SquareTrade
For Cisco I used the filter below; I will need to add filters for whatever view I am looking for. I want to look up the total number of users for a specific day of the month on a host. What do I need to add to my filter?

index="its_sslvpn" host=*SIRA* user=*@*

Thank you. Anthony
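Assuming each login produces one event carrying the user field, a sketch that counts distinct users per day per host (the specific day can be picked with the time-range picker, or with earliest/latest in the search):

```spl
index="its_sslvpn" host=*SIRA* user=*@*
| bin _time span=1d
| stats dc(user) AS distinct_users BY _time host
```

dc(user) counts each user once per day; swapping it for count would instead give total login events.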
What is best practice when utilizing a search from the apps below? What pros/cons should I consider for each? I appreciate any guidance here.
1) Enable the search stored in the app?
2) Clone the search into a custom company-specific app, and enable it there?
DA-ESS-MitreContent DA-ESS-ContentUpdate
Hi, I would like to add values across two fields based on their names. I want the output to be the sum of traffic_in#fw1 + traffic_out#fw1, and so on, by _time.
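Assuming the events carry numeric fields named like traffic_in#fw1 and traffic_out#fw1, a sketch: because the names contain #, they need single quotes when referenced in eval:

```spl
| eval total_fw1 = coalesce('traffic_in#fw1', 0) + coalesce('traffic_out#fw1', 0)
| timechart span=1h sum(total_fw1) AS total_fw1
```

To cover every firewall without naming each one, foreach with <<MATCHSTR>> can generate one total per suffix: `| foreach traffic_in#* [ eval total_<<MATCHSTR>> = coalesce('traffic_in#<<MATCHSTR>>', 0) + coalesce('traffic_out#<<MATCHSTR>>', 0) ]`.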
I'm trying to filter out events like the ones below using the regex expression

| regex _raw!="^[A-Za-z0-9]{4}:.*$"

but it's not working. Can someone help me with this?

Events:

0000: 00 025e 28:0a000c5f call 0a000c5f
0000: 04 025d 14 ldnull
007a: 00 021d de:2a leave.s :0000=>01 0249 11:07 ldloc.s 07
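A likely cause: ^ and $ anchor only at the very start and end of _raw unless the multiline flag is set, so if these events carry any prefix before the hex lines (a timestamp or header, say), the pattern never matches and `regex _raw!=` keeps everything. A hedged fix with (?m), tightened to hex digits since the leading offsets are hex (the original [A-Za-z0-9]{4} would also match ordinary four-letter words at a line start):

```spl
| regex _raw!="(?m)^[0-9a-fA-F]{4}:"
```

This drops any event in which at least one line begins with a four-hex-digit offset and a colon.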