All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

For example, index=test | search control_id=AC-2* would give me AC-2, AC-2a, AC-20a, AC-22b, and so on. I just want AC-2 and AC-2a, not the two-digit IDs like AC-20 and AC-22.
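A minimal sketch of one way to do that, assuming control_id holds values like AC-2, AC-2a, AC-20a: keep the wildcard for the coarse filter, then require that AC-2 is followed by a letter or by nothing, so AC-20 and AC-22 no longer match.

index=test control_id=AC-2*
``` keep AC-2 itself and AC-2a, AC-2b, ... but drop AC-20*, AC-22*, etc. ```
| regex control_id="^AC-2([a-zA-Z]|$)"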
Hey Splunk Community, OK, I've got a tale of woe, intrigue, revenge, index=_*, and Python 3.7.

My tale begins a few weeks ago when the other Splunk admin and I were just like, "OK, I know searches can be slow, but EVERYTHING is just dragging." We opened a support ticket, talked about it with AOD, let our Splunk team know, got told we might be under-provisioned for SVCs and indexers, no wait, over-provisioned, no wait, do better searches, no wait again, Skynet is like "why is your instance doing that?". We also got a Splunk engineer assigned to our case and were told our instance is fine. Le sigh, when I tell you I rabbled rabbled rabbled racka facka Mr. Krabs... I was definitely salty. So I took it upon myself to dive deeper than I have ever EEEEEVER dived before...

index=_* error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) )

I know, I know, it was a rough one, BUT down the rabbit hole I went. I ran this search back as far as my instance would go, October 2022, and counted from there. I was trying to find any sort of spike or anomaly, something to explain that our instance is not fine.

October 2022 - 2
November 2022 - 0
December 2022 - 0
January - 25
February - 0
March - 29
April - 15
May - 44
June - 1,843
July - 40,081
August - 569,004
September - 119,696,269
October - don't ask... OK, fine, so far in October there are 21,604,091

The climb is real, and now I had to find what was causing it. From August and back it was a lot of connection/timeout errors from the UF on some endpoints, so nothing super weird, just a lot of them. SEPTEMBER, specifically 9/2/23 11:49:25.331 AM, is when this girl blew up! The first event_message was...

09-02-2023 16:49:25.331 +0000 ERROR PersistentScript [3873892 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-Zscaler_CIM/bin/TA_Zscaler_CIM_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):

The rest of the event messages that followed were these... see the 3 attached screenshots. I did a "last 15 min" search, but as September's numbers show, this hits the millions. Also, I see it's not just one app; it's several of the apps we use APIs with to get logs into Splunk, but not all of the apps we use show up on the list (weird), and it's not just limited to 3rd-party apps, the Splunk Cloud admin app is on there among others (see the attached VSC doc). I also checked whether any of these apps might be out of date, and they are all on their current version. I did see one post on the community (https://community.splunk.com/t5/All-Apps-and-Add-ons/ERROR-PersistentScript-23354-PersistentScriptIo-From-opt-splunk/m-p/631008) but there was no reply. I also first posted on the Slack channel to see if anyone else was experiencing or had experienced this: https://splunk-usergroups.slack.com/archives/C23PUUYAF/p1696351395640639 And last but not least, I did open another support ticket, so hopefully I can give an update if I get some good deets! Appreciate you -Kelly
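A side note on the month-by-month counting in the post above: a minimal sketch of how the same error search could be bucketed by calendar month in one pass, instead of re-running it per month (same base search, just with a timechart added).

index=_* error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) )
``` count matching events per calendar month to make the spike visible ```
| timechart span=1mon count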
I'm working with these events:

Oct 3 17:11:23 hostname Tetration Alert[1485]: [ERR] {"keyId":"keyId","eventTime":"1696266370000","alertTime":"1696266682583","alertText":"Missing Syslog heartbeats, it might be down","severity":"HIGH","tenantId":"0","type":"CONNECTOR","alertDetails":"{\"Appliance ID\":\"applianceId\",\"Connector ID\":\"connectorId\",\"Connector IP\":\"1.1.1.1/24\",\"Name\":\"SYSLOG\",\"Type\":\"SYSLOG\",\"Deep Link\":\"host.tetrationanalytics.com/#/connectors/details/SYSLOG?id=syslog_id\",\"Last checkin at\":\"Oct 02 2023 16.55.25 PM UTC\"}","rootScopeId":"rootScopeId"}

Oct 3 17:11:23 hostname Tetration Alert[1485]: [ERR] {"keyId":"keyId","eventTime":"1696266370000","alertTime":"1696266682583","alertText":"Missing Email heartbeats, it might be down","severity":"HIGH","tenantId":"0","type":"CONNECTOR","alertDetails":"{\"Appliance ID\":\"applianceId\",\"Connector ID\":\"connectorId\",\"Connector IP\":\"1.1.1.1/24\",\"Name\":\"EMAIL\",\"Type\":\"EMAIL\",\"Deep Link\":\"host.tetrationanalytics.com/#/connectors/details/EMAIL?id=6467c9b6379aa00e64072f57\",\"Last checkin at\":\"Oct 02 2023 16.55.25 PM UTC\"}","rootScopeId":"rootScopeId"}

Oct 3 09:57:52 hostname Tetration Alert[1393]: [DEBUG] {"keyId":"Test_Key_ID_2023-09-29 09:57:52.73850357 +0000 UTC m=+13322248.433593601","alertText":"Tetration Test Alert","alertNotes":"TestAlert","severity":"LOW","alertDetails":"This is a test of your Tetration Alerts Notifier (TAN) configuration. If you received this then you are ready to start receiving notifications via TAN."}

I set my_json to all of the JSON. I then use fromjson to pull out the name/value pairs. I then use fromjson on alertDetails, since it is nested in the JSON. I can do this from the CLI using

index=main sourcetype="my_sourcetype" | fromjson csw_json | fromjson alertDetails

I need to be able to do the same thing in a props or transforms conf file. Are these commands able to do that? I tried this in transforms.conf after extracting myAlertDetail:

[stanza_name]
REGEX = "(?<_KEY_1>[^"]*)":"(?<_VAL_1>.*)"
SOURCE_KEY = myAlertDetail

I get {\ and the test message. According to regex101.com the regex should pull everything, but it doesn't in Splunk. Thus the question about fromjson.

Splunk 9.0.4 on Linux

TIA, Joe
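Not an answer to the props/transforms question, just a sketch of a search-time way to prototype the extraction, assuming the JSON payload sits at the end of the syslog line (the scratch field json_part is made up for illustration). When spath parses the outer object it un-escapes the nested alertDetails string, so a second spath pass can then unpack it; that may help confirm what a transforms-based extraction would need to produce.

index=main sourcetype="my_sourcetype"
``` pull the JSON payload off the end of the syslog line into a scratch field ```
| rex "(?<json_part>\{.+\})$"
``` parse the outer object; this also un-escapes the nested alertDetails string ```
| spath input=json_part
``` parse the nested object now held in alertDetails ```
| spath input=alertDetails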
Instead of asking volunteers to speculate about what you mean by reverse-engineering it from complex SPL and a screenshot, please illustrate some data in text (anonymize as necessary), explain the key characteristics of the dataset, illustrate the desired results in text, and explain the logic connecting the data and the desired results.
Maybe you can illustrate your sample data in text (anonymize as needed) and explain why the result is "weird"? (I.e., explain the logic connecting your data and the desired output.) What is the desired result? (I.e., manually chart the result yourself, as I suggested above.) Also, do UserAgent and LoginType always appear in the same event?
The deployment server's app push to UFs works a bit strangely. It can take a long time, even years, to fully understand the DS and its app structure. It's good that you were able to work out the "how" part. Thanks for updating your own question. Maybe you can "Accept as Solution" on the post, thanks.
Hi @fatsug ... please update us:
1) Are you able to receive logs from the Windows host on the indexer?
index=* host=<win-hostname> | stats count by sourcetype
2) If yes, are you able to receive Windows event logs or not? If yes, you can try to view a single event and update us on your other queries... (see the sketch below)
Splunk newbie learning videos, for absolute beginners: https://www.youtube.com/@SiemNewbies101/playlists
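Following up on step 2 above, a minimal sketch of the event log check, assuming the usual sourcetypes from the Splunk Add-on for Microsoft Windows (adjust the patterns to whatever step 1 actually shows):

index=* host=<win-hostname> (sourcetype=WinEventLog* OR sourcetype=XmlWinEventLog*)
``` count Windows event log events per sourcetype and channel ```
| stats count by sourcetype source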
Given that some lookup entries contain wildcards, it is reasonable to assume that your lookup is defined with match_type WILDCARD(host). In the following I will make some simplifying assumptions, because I do not know the significance of comparing eventTime or LastEventTime: all you want to compare against is events in your search window. If your search window is the past 7 days, I assume that you want to keep entries that one or more events match in the past week, and that you want to drop any table entries with zero matches during this same period. But if eventTime is important, I'm sure you can adapt the solution to meet your needs.

The key here is to utilize lookup; specifically, allow lookup to perform wildcard matches.

| metasearch (index=os_* OR index=perfmon_*)
| dedup host
| eval host=lower(host)
```| eval eventTime=_time
| convert timeformat="%Y/%m/%d %H:%M:%S" ctime(eventTime) AS LastEventTime
| fields host eventTime LastEventTime index
^^^ the above is not calculated or used ```
| lookup host_lookup output host AS matchhost
| append
    [inputlookup host_lookup
    | rename host AS tablehost]
| eventstats values(matchhost) as matchhost
| eval Action = if(tablehost IN matchhost, "Keep Host", "Remove from Lookup")
| fields Action tablehost

(Obviously you do not need to rename tablehost. It just makes the intent obvious.)

Hope this helps.
Hi @scout29 ... on your search head, you can go to Settings ---> Monitoring Console ---> Indexing ---> License Usage ---> Today or Historic. This will give you dashboards of today's or the last weeks'/months' license usage consumption. Please let us know if that's what you are looking for, thanks.
Splunk newbie learning videos, for absolute beginners: https://www.youtube.com/@SiemNewbies101/playlists
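If a raw search is handier than the dashboards, a minimal sketch of the usual license-usage query against the internal logs (assumes index=_internal from the license manager is searchable from your search head; daily ingested GB per index):

index=_internal source=*license_usage.log* type=Usage
``` convert the per-event byte counts to GB and total them per index per day ```
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) by idx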
It returns very weird results:
I'm trying to set a token where system_id shows ABC1, ABC1-a, ABC10, ABC10-a and so on. When I set the token for that system_id to ABC1* to return ABC1, ABC1-a and so on, it also returns ABC10, ABC10-a and so on. But obviously if I just use ABC10* it returns the right result; the first case is the problem. Hope my question makes sense.
system_id = AA-1, AA-1-a, AA-1-b, AA-10, AA-10-a, AA-10-b, AA-12, AA-12-a, AA-12-b, and so on. Notice all the system_id values start with the common prefix 'AA-1' with something after it. However, when I use it as a token, you can already feel the problem: AA-10* returns all the IDs that start with AA-10 and nothing else, so that's good. But if I choose AA-1*, it returns not only the values that start with AA-1 but also AA-10 and AA-12, which I do not want. I'm trying to make this a dashboard dropdown with a token, where the user picks AA-1 and it returns only the values that belong to AA-1: AA-1, AA-1-a, AA-1-b and so on. I need your help, search gurus: I want a search whose results cover all of AA-1 without showing AA-10 or AA-12, yet I also need them all in one token.
system_id = AA-1, AA-1-a, AA-1-b, AA-10, AA-10-a, AA-10-b, AA-12, AA-12-a, AA-12-b, and so on. Notice all the system_id values start with the common prefix 'AA-1' with something after it. However, when I use it as a token, you can already feel the problem: AA-10* returns all the IDs that start with AA-10 and nothing else, so that's good. But if I choose AA-1*, it returns not only the values that start with AA-1 but also AA-10 and AA-12, which I do not want. I'm trying to make this a dashboard dropdown with a token, where the user picks AA-1 and it returns only the values that belong to AA-1: AA-1, AA-1-a, AA-1-b and so on.
Ah OK, another add-on was more verbose. It sounds like it's a problem with either the base URL in my environment or the redirect URI. The other add-on is now working, but this app is failing with the error:

1 action failed
Error from server. Status Code: 400
Data from server: invalid_request
Splunk has a join command that may do the job, but it's inefficient. The typical solution is to fetch both sources and then group them together by a common field using the stats command.

index="PreProdIndex" ((source="Request" ("<stringCriterion fieldName=\"Product\" operator=\"equals\" value=\"Soup\"/>" OR "<stringCriterion fieldName=\"Product\" operator=\"equals\" value=\"Biscuits\"/>")) OR source="Transaction")
| stats values(*) as * by transaction_id
| table REQUEST_DATE_TIME REQUEST *
Hi guys, I'm playing around with Splunk SOAR on-prem. No matter what I do, I can't get this add-on working. I've followed the readme to a T: https://github.com/splunk-soar-connectors/azuread/tree/next but I'm still not having any luck. Any ideas?
Any updates on this? I have the same situation.
Look at it another way. Search the last few minutes. If a ServiceDown trap was received without a matching Good trap, then create an alarm.

index=foo (trap=ServiceDown OR trap=Good) earliest=-6m
| dedup ```add a field that contains the device name```
| where trap="ServiceDown" AND _time <= relative_time(now(), "-5m")

Trigger the alert if the search returns results.
Consider putting that into Feedback on the docs page and submitting it at https://ideas.splunk.com
For example, system_id = AA-1, AA-1-a, AA-1-b, AA-10, AA-10-a, AA-10-b, AA-12, AA-12-a, AA-12-b, and so on.

Notice all the system_id values start with the common prefix 'AA-1' with something after it. However, when I use it as a token, you can already feel the problem: AA-10* returns all the IDs that start with AA-10 and nothing else, so that's good. But if I choose AA-1*, it returns not only the values that start with AA-1 but also AA-10 and AA-12, which I do not want. I'm trying to make this a dashboard dropdown with a token, where the user picks AA-1 and it returns only the values that belong to AA-1: AA-1, AA-1-a, AA-1-b and so on.

Hope this question makes sense. Has anyone fixed such an issue before?
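Not a definitive fix, just a sketch of one approach, assuming a dashboard dropdown that sets a token named system_id_tok to values like AA-1 or AA-10 (index=main and the token name are made up for illustration): keep the wildcard for the coarse filter, then accept only an exact match or a dash-suffixed child, so picking AA-1 no longer pulls in AA-10 and AA-12.

index=main system_id=$system_id_tok$*
``` keep the exact pick plus its dash-suffixed children; AA-10 and AA-12 no longer match when the token is AA-1 ```
| where system_id="$system_id_tok$" OR like(system_id, "$system_id_tok$-%")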