Cloud native applications are growing — and so are attack surfaces. To keep up, IT teams need to prioritize security vulnerabilities based on business impact. But how? Join our webinar on October 25 to hear security experts share why business risk observability is essential to effectively securing cloud native apps — and how to get started.

Leverage business risk observability for cloud environments to protect what matters most
AMER | October 25 at 11am PST / 2pm EST
EMEA | October 26 at 10am GMT / 11am CET
APAC | October 26 at 8:30am IST / 11am SGT / 2pm AEST

You'll learn:
- The biggest pain points when it comes to cloud security — and how to address them.
- How to correlate performance data, business context, and security intelligence.
- Ways to use Cisco Secure Application to prioritize revenue-impacting security risks.

Register now! Don't miss this timely security webinar — save your spot now for a deep dive into business risk observability for the cloud.

Speakers:
Kashyap Merchant - Director, Product Management
Audrey Nahrvar - Product Marketing Manager
Roy Hardgrove - Director, Product GTM Strategy and Execution
Hey Splunk Community, OK, I've got a tale of woe, intrigue, revenge, index=_*, and Python 3.7. My tale begins a few weeks ago, when the other Splunk admin and I were just like, "OK, I know searches can be slow, but like EVERYTHING is just dragging." We opened a support ticket, talked about it with AOD, let our Splunk team know, got told we might be under-provisioned for SVCs and indexers... no wait, over-provisioned... no wait, do better searches... no wait again, Skynet is like "why is your instance doing that?" We also got a Splunk engineer assigned to our case and were told our instance is fine. Le sigh. When I tell you I rabbled rabbled rabbled racka facka Mr. Krabs... I was definitely salty. So I took it upon myself to dive deeper than I have ever, EEEEEVER, dived before:

index=_* error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) )

I know, I know, it was a rough one, BUT down the rabbit hole I went. I ran this search as far back as my instance would go, October 2022, and counted from there. I was trying to find any sort of 'spike' or anomaly, something to show that our instance is not fine.

October 2022 - 2
November 2022 - 0
December 2022 - 0
January - 25
February - 0
March - 29
April - 15
May - 44
June - 1,843
July - 40,081
August - 569,004
September - 119,696,269
October - don't ask... OK fine, so far in October there are 21,604,091

The climb is real, and now I had to find what was causing it. From August and back it was a lot of connection/timeout errors from the UF on some endpoints, so nothing super weird, just a lot of them. SEPTEMBER, though, specifically 9/2/23 11:49:25.331 AM, is when this girl blew up! The first event_message was:

09-02-2023 16:49:25.331 +0000 ERROR PersistentScript [3873892 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-Zscaler_CIM/bin/TA_Zscaler_CIM_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):

The rest of the event messages that followed were these ...
See the three attached screenshots. I did a "last 15 min" search, but as September's numbers show, this hits the millions. Also, I see it's not just one app; it's several of the apps we use APIs with to get logs into Splunk, though not all the apps we use show up on the list (weird), and it's not limited to 3rd-party apps either: the Splunk Cloud admin app is on there among others (see attached VSC doc). I also checked whether any of these apps might be out of date, and they are all on their current versions. I did find one post on Community (https://community.splunk.com/t5/All-Apps-and-Add-ons/ERROR-PersistentScript-23354-PersistentScriptIo-From-opt-splunk/m-p/631008) but there was no reply. I also first posted on the Slack channel to see if anyone else was experiencing, or had experienced, this: https://splunk-usergroups.slack.com/archives/C23PUUYAF/p1696351395640639. And last but not least, I did open another support ticket, so hopefully I can give an update if I get some good deets! Appreciate you -Kelly
I'm working with these events:

Oct 3 17:11:23 hostname Tetration Alert[1485]: [ERR] {"keyId":"keyId","eventTime":"1696266370000","alertTime":"1696266682583","alertText":"Missing Syslog heartbeats, it might be down","severity":"HIGH","tenantId":"0","type":"CONNECTOR","alertDetails":"{\"Appliance ID\":\"applianceId\",\"Connector ID\":\"connectorId\",\"Connector IP\":\"1.1.1.1/24\",\"Name\":\"SYSLOG\",\"Type\":\"SYSLOG\",\"Deep Link\":\"host.tetrationanalytics.com/#/connectors/details/SYSLOG?id=syslog_id\",\"Last checkin at\":\"Oct 02 2023 16.55.25 PM UTC\"}","rootScopeId":"rootScopeId"}
Oct 3 17:11:23 hostname Tetration Alert[1485]: [ERR] {"keyId":"keyId","eventTime":"1696266370000","alertTime":"1696266682583","alertText":"Missing Email heartbeats, it might be down","severity":"HIGH","tenantId":"0","type":"CONNECTOR","alertDetails":"{\"Appliance ID\":\"applianceId\",\"Connector ID\":\"connectorId\",\"Connector IP\":\"1.1.1.1/24\",\"Name\":\"EMAIL\",\"Type\":\"EMAIL\",\"Deep Link\":\"host.tetrationanalytics.com/#/connectors/details/EMAIL?id=6467c9b6379aa00e64072f57\",\"Last checkin at\":\"Oct 02 2023 16.55.25 PM UTC\"}","rootScopeId":"rootScopeId"}
Oct 3 09:57:52 hostname Tetration Alert[1393]: [DEBUG] {"keyId":"Test_Key_ID_2023-09-29 09:57:52.73850357 +0000 UTC m=+13322248.433593601","alertText":"Tetration Test Alert","alertNotes":"TestAlert","severity":"LOW","alertDetails":"This is a test of your Tetration Alerts Notifier (TAN) configuration. If you received this then you are ready to start receiving notifications via TAN."}

I set my_json to all the JSON. I then use fromjson to pull out the name/value pairs, and then use fromjson again on alertDetails since it is nested in the JSON. I can do this from the search bar using:

index=main sourcetype="my_sourcetype" | fromjson csw_json | fromjson alertDetails

I need to be able to do that in a props or transforms conf file. Are these commands able to do that?
I tried this in the transforms.conf after extracting myAlertDetail   [stanza_name] REGEX = "(?<_KEY_1>[^"]*)":"(?<_VAL_1>.*)" SOURCE_KEY = myAlertDetail   I get {\ and the test message.  According to regex101.com the regex should pull everything, but it doesn't in Splunk.  Thus the question about fromjson. Splunk 9.0.4 on Linux TIA, Joe
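As far as I know, fromjson is a search-time SPL command and cannot be invoked from props.conf or transforms.conf, which only drive extraction rules. A sketch of one alternative, using the sourcetype name from the post as a placeholder; note that KV_MODE=json only auto-extracts when _raw is pure JSON, so with the syslog prefix shown above it likely won't fire on its own:

```
# props.conf -- a sketch, not a drop-in config
[my_sourcetype]
KV_MODE = json
```

The more reliable route for these events may be spath at search time, which handles the nested alertDetails string with no .conf changes at all:

```
index=main sourcetype="my_sourcetype"
| spath input=_raw
| spath input=alertDetails
```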
I'm trying to set a token where system_id values are ABC1, ABC1-a, ABC10, ABC10-a, and so on. When I set the token for that system_id to ABC1* to return ABC1, ABC1-a, and so on, it also returns ABC10 and ABC10-a. But obviously if I just do ABC10* it returns the right result; the first portion is the problem. Hope my question makes sense.
system_id = AA-1, AA-1-a, AA-1-b, AA-10, AA-10-a, AA-10-b, AA-12, AA-12-a, AA-12-b, and so on. Notice that all the system_id values start with the common prefix 'AA-1'. However, when I use it as a token, you can probably already see the problem: AA-10* returns all the IDs that start with AA-10 and nothing else, so far so good. But if I choose AA-1*, it returns not only the values that start with AA-1 but also AA-10 and AA-12, which I do not want. I'm trying to make a dashboard with a dropdown and a token, where the user picks AA-1 and it returns only the values under AA-1: AA-1, AA-1-a, AA-1-b, and so on. I need your help, search gurus: I want a search for AA-1 that does not show AA-10 or AA-12, yet I also need them all in one token.
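One way around the prefix-swallowing wildcard, sketched with match() instead of *, so AA-1 matches only itself and its hyphen-suffixed children (index and sourcetype are placeholders here):

```
index=main sourcetype=my_sourcetype
| where match(system_id, "^AA-1(-|$)")
```

A usage note: inside a Simple XML dashboard a literal $ must be doubled, so with a dropdown token named sys the clause would look something like match(system_id, "^$sys$(-|$$)"); worth verifying the escaping behaves on your Splunk version.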
If you use Splunk Observability Cloud, we invite you to share your valuable insights with us through a brief online survey. Your honest feedback is important to us and will be used to enhance our product. Click here to participate in the survey and help us make your experience even better.
Hi guys, I'm playing around with Splunk SOAR on-prem. No matter what I do, I can't get this add-on working. I've followed the readme to a T: https://github.com/splunk-soar-connectors/azuread/tree/next But I'm still not having any luck. Any ideas?
Hopefully this will set the issue out clearly. I have two sources, Transaction and Request. Transaction holds the transaction ID, date and time, and user details of a user transaction. Request holds the request ID, transaction ID, and an XML string with details of a user's search. I have a query that searches Request and returns those searches which contain specific strings; however, I need to show the user details in the results table.

index="PreProdIndex" source="Request" "<stringCriterion fieldName=\"Product\" operator=\"equals\" value=\"Soup\"/>" OR "<stringCriterion fieldName=\"Product\" operator=\"equals\" value=\"Biscuits\"/>" | table REQUEST_DATE_TIME REQUEST

So I need to add USER_DETAILS from the Transaction source to the table above, based on the common key of the transaction ID. In SQL I would simply join on Transaction.ID=Request.Transaction_ID and all would be good, but I have not yet found anything that gives a Splunk solution.
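A sketch of the usual stats-as-join Splunk idiom, assuming the shared key field is named TRANSACTION_ID in both sources (adjust to your actual field names):

```
index="PreProdIndex" (source="Request"
    ("<stringCriterion fieldName=\"Product\" operator=\"equals\" value=\"Soup\"/>"
     OR "<stringCriterion fieldName=\"Product\" operator=\"equals\" value=\"Biscuits\"/>"))
  OR source="Transaction"
| stats values(REQUEST_DATE_TIME) as REQUEST_DATE_TIME
        values(REQUEST) as REQUEST
        values(USER_DETAILS) as USER_DETAILS
        by TRANSACTION_ID
| where isnotnull(REQUEST)
```

SPL also has a join command that mirrors the SQL syntax, but stats-by-key is generally cheaper and avoids join's subsearch row limits.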
Hi all, how can we implement wait logic in a Splunk query? We primarily monitor the service-down traps and create Splunk alerts. We now have a requirement to wait for a time interval and check whether the service-UP trap was received; if yes, don't create an alert, else create one. How can we implement this in a single query? Any suggestions please. Example:

If ServiceDown trap received:
    Wait for 5 minutes.
    If Good trap received:
        Return
    Else:
        Create alarm.

Thanks!
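One common pattern for this, sketched under assumed index and trap strings: schedule the alert every 5 minutes, look only at down traps from 5-10 minutes ago, and suppress any host that has since sent an up trap:

```
index=traps ("ServiceDown" OR "ServiceUp") earliest=-10m@m latest=now
| eval kind = if(searchmatch("ServiceUp"), "up", "down")
| stats count(eval(kind="down" AND _time < relative_time(now(), "-5m"))) as old_downs
        count(eval(kind="up")) as ups
        by host
| where old_downs > 0 AND ups = 0
```

Alerting on "number of results > 0" then fires only for hosts that went down at least 5 minutes ago and have not recovered since.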
I'm pretty sure the forwarder can pass the event log as either XML or JSON from a host. If that's not incorrect, could anyone consider sharing a bit of event log in "Splunk native" JSON format as raw? I have some log samples in JSON format, though in a non-standard layout with some added metadata. What I'm looking for is a sample of an event log in JSON format that might be accepted by TA_windows and other apps, to compare against. Hopefully someone has a sample log they could share and spare me the need to generate samples. Best regards
Looking to create a search/report showing ingest by source ingestion method in the last 24 hours. I am looking for the amount of data in GB being ingested per source method. So for example, how much data in GB is being ingested for each of the following source ingest methods:
- UFs
- Syslog
- API
- HEC
- DBX
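The license usage log in _internal records bytes per sourcetype/source/host but not the input type, so one sketch is to sum usage and then map sourcetypes to ingest methods with a case(); the mapping below is illustrative only and would need to reflect your actual sourcetypes:

```
index=_internal source=*license_usage.log* type=Usage earliest=-24h@h
| eval method = case(
    match(st, "(?i)syslog"), "Syslog",
    match(st, "(?i)http|hec"), "HEC",
    match(st, "(?i)^db"), "DBX",
    true(), "Other/UF")
| stats sum(b) as bytes by method
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
| fields method GB
```

Here b, st, and s are the byte-count, sourcetype, and source fields that license_usage.log emits.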
The Cisco app shows no data from the syslog, but if I run a search, my network devices are sending syslogs to the correct indexer. UDP:514 - cisco:ios. My Splunk infrastructure is just a single server performing all functions. Please give me some suggestions to troubleshoot! I have tried deleting the data inputs and re-adding them, but with no luck.
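A couple of checks worth running, sketched below: first confirm which index and sourcetype the events actually land in, since many apps scope their dashboards to a specific index via a macro or eventtype and will show nothing if the data went elsewhere:

```
| tstats count where index=* sourcetype=cisco:ios by index sourcetype host
```

If the data is present but the app's dashboards stay empty, the next place to look is the app's index macro or setup page, and whether the events were sourcetyped as cisco:ios at all rather than a generic syslog sourcetype.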
I am trying to extract the time difference (duration) between 2 events, in days. I have 2 separate events for the same ID: one is the starting event and the second is the ending event. They look as follows.

Event 1 (start): [2023-05-24 12:02:24.674 CEST_] ID:1234
Event 2 (end): [2023-05-30 6:13:04:954 CEST_] ID:1234

The following query is what I tried:

Gebeurtenis(=id) =000057927_018448922 |stats min(_time) as start, max(_time) as end, range(_time) as diff by Gebeurtenis |eval start=strftime(Aanmelden, "%d/%m/%Y") |eval end=strftime(Afmelden, "%d/%m/%Y") |eval diff=strftime(diff, "%d/%m/%Y")

The result I get: diff is calculated from the Splunk epoch start time, not the 6 days of difference. Any help is welcome.
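The likely catch in the query above is that range(_time) is a duration in seconds, not a timestamp, so formatting it with strftime renders it as a date near the Unix epoch. A sketch of the fix, with a hypothetical rex for the ID since the sample events carry it inline:

```
index=main "ID:"
| rex "ID:(?<ID>\d+)"
| stats min(_time) as start max(_time) as end by ID
| eval diff_days = round((end - start) / 86400, 1)
| eval start = strftime(start, "%d/%m/%Y"), end = strftime(end, "%d/%m/%Y")
```

Dividing the raw second difference by 86400 yields days directly; strftime is only used on the true timestamps.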
Happy CX Day, Splunk Community! CX stands for Customer Experience, and today, October 3rd, is CX Day — a global celebration of the organizations, professionals, and customers themselves that are at the heart of creating better customer experiences. For us here at Splunk, our community of customers (aka YOU all!) are of the utmost importance and at the forefront of all that we do. We are so excited to recognize our incredible customers and community’s achievements today — and every day!

Join us today for a LinkedIn Live at 11 AM PT / 2 PM ET, as our Chief Customer Officer, Toni Pavlovich, speaks with CX Futurist, Blake Morgan, about how building resilience helps transform the customer experience to drive better business outcomes.

Want to learn more? Peep the following Splunk resources we’ve curated to unpack the importance of CX and see how companies can be more resilient by prioritizing CX companywide, and be sure to read Toni Pavlovich’s CX Day Announcement. The Splunk Community also offers a myriad of ways to engage with fellow members within our vibrant, passionate community, enhancing your overall customer experience. Check out a few of them below!

- Splunk Answers: Troubleshoot, ask questions, and dig into conversations with fellow community members over on Splunk Answers. Not sure where to start? Share your examples of great customer experiences in honor of CX Day!
- User Groups: Find a local chapter to join and meet local Splunk users IRL, or connect with one another across the globe virtually!
- Community Slack: Troubleshoot your issues and connect with fellow Splunk users in real time
- SplunkTrust: Meet some of our Splunk superusers and most active community contributors
- Splunk Ideas: Have an idea on how to make Splunk and your overall customer experience even better? Great! Contribute them here.

Lastly, we’d like to say a hearty THANK YOU to all of the community members who, throughout the years, have given us invaluable feedback and support.
You drive us to want to create a better experience for each and every one of you, and we couldn’t do any of this without you! — Splunk Community Team
Hi, is it somehow possible to find the difference between two or more amounts from different events when the events are grouped into 20-second time bins?

index=Prod123 methodType='WITHDRAW' currency='GBP' jurisdiction=UK transactionAmount>=3 | fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S") | bin span=20s _time | search transactionAmount=* | stats list(transactionAmount) as Total, list(currency) as currency, list(_time) as Time, dc(customerId) as Users by _time | fieldformat Time = strftime(Time, "%Y-%m-%d %H:%M:%S") | fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S") | search Users>=2 | sort - Time

I would like it to show the difference between the Totals, like the first one, Total 3.8 and 11.2. Is it possible to make this work somehow, or would it be better with streamstats and a window? I was also thinking about using sum or avg as an option. Thank you,
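streamstats does seem like the natural fit here: after the per-bin stats, a two-row sliding window gives the difference between consecutive Totals. A sketch, which assumes Total becomes a single summed number per bin rather than a list() (range() over a multivalue list would not give a bin-to-bin difference):

```
index=Prod123 methodType="WITHDRAW" currency="GBP" jurisdiction=UK transactionAmount>=3
| bin span=20s _time
| stats sum(transactionAmount) as Total dc(customerId) as Users by _time
| where Users >= 2
| sort 0 _time
| streamstats current=t window=2 range(Total) as diff
| fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S")
```

With window=2, range(Total) on each row is the absolute difference between that bin's Total and the previous one.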
Hi all, I am looking for some dashboards showing the usage of apps and their dashboards by user, so that I can decommission unused apps. The solution below only extracts very partial information, perhaps 15%: Solved: See User Activity by App and View - Splunk Community. Can someone please help? Regards, Devang @tnesavich_splun
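One sketch built on the UI access logs in _internal; note that coverage is limited by _internal retention (often around 30 days), which may itself explain why usage reports look partial:

```
index=_internal sourcetype=splunk_web_access user=* uri_path="*/app/*"
| rex field=uri_path "/app/(?<app>[^/]+)/(?<view>[^/?]+)"
| stats dc(user) as users count as views latest(_time) as last_used by app view
| eval last_used = strftime(last_used, "%F")
| sort - views
```

Apps and views that never appear here over a long window are candidates for decommissioning, though REST- or API-only usage would not show in web access logs.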
Hi Fellow Splunkers, Have a hopefully quick question: Want to pull out the source and host from the Windows _internal splunk logs, but my rex (cribbed from a post on here) isn't working.   index=_internal host IN (spfrd1, spfrd2) source="*\\Splunk\\var\\log\\splunk\\splunkd.log" component=DateParserVerbose | rex "Context: source=(?P<sourcetypeissue>\w+)\Shost=(?P<sourcehost>\w+)" | stats list(sourcetypeissue) as file_name list(sourcehost)   But I get no stats, my events look like this:   08-24-2022 07:50:20.383 -0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Sun Aug 24 07:49:58 2022). Context: source::WMI:WinEventLog:Security|host::SPFRD1|WMI:WinEventLog:Security|1    
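Two mismatches stand out against the sample event: it uses source:: and host:: with double colons and pipe separators, while the rex looks for source=, and \w+ cannot match values like WMI:WinEventLog:Security that contain colons. A sketch of a rex aligned with the event, plus a stats with an explicit by clause:

```
index=_internal host IN (spfrd1, spfrd2) source="*\\Splunk\\var\\log\\splunk\\splunkd.log" component=DateParserVerbose
| rex "Context: source::(?<problem_source>[^|]+)\|host::(?<source_host>[^|]+)"
| stats count by problem_source source_host
```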
Hello Splunk Community, I hope this message finds you well. I'm currently working on enhancing my workflow in the Search and Reporting app, specifically when using the datamodel command. I'm looking to streamline the process of adding fields to my search through simple clicks within the app. e.g.: | datamodel summariesonly=t allow_old_summaries=t Windows search | search All_WinEvents.src_user="windows_user" All_WinEvents.EventCode="5140" and I'd like to extend it with All_WinEvents.action="success" but without typing it in but using the search and reporting app itself. I've noticed that when I interactively add fields, the query tends to extend based on indexed fields rather than the datamodel fields. My goal is to understand if there's a way to make this process more datamodel-centric. Is there a way to configure or adjust settings so that when I click to add fields in the Search and Reporting app, it extends the query based on the datamodel command rather than defaulting to indexed fields? e.g result.: | datamodel summariesonly=t allow_old_summaries=t Windows search | search All_WinEvents.src_user="windows_user" All_WinEvents.EventCode="5140"  All_WinEvents.action="success" Any insights, tips, or guidance on achieving this would be highly appreciated. Thank you in advance for your assistance! Best regards,