All Posts



No. Natively, eventlog inputs (because that's what we're talking about) generate either "plain text" or XML events depending on the renderXml parameter. There is no built-in functionality to ingest eventlog data as JSON, at least not natively with the UF's eventlog input. You could of course try a third-party solution like nxlog or Kiwi to generate JSON events from the eventlog (I'm not sure whether those particular examples can do that), but that's a different story, and it's a bit pointless really since you have perfectly well (ok, almost perfectly) working inputs and an accompanying TA for Windows eventlogs.
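For reference, the rendering format is chosen per input stanza in inputs.conf on the UF; a minimal sketch (the channel and index names here are just examples):

```ini
# Collect the Security event log and render events as XML
# instead of the default multiline "plain text" format.
[WinEventLog://Security]
disabled = 0
renderXml = true
index = wineventlog
```

With renderXml = false (the default), the same stanza produces the traditional rendered-text events that the Windows TA's extractions expect.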
@PickleRick Thanks. I've upvoted the idea.
I have a bunch of alerts. I received the email alert, but the incident was not automatically cut to ServiceNow. How do I troubleshoot this issue?
https://ideas.splunk.com/ideas/EID-I-208 It turns out it was not my idea; I just upvoted and commented on it from my old account, because it was already there when I wanted to create it.
Hello, I'm working with a Splunk cluster which has two slave peers and I need to disable an index on the Cluster Master using the REST API. I've tried the usual endpoint (/servicesNS/nobody/{app}/configs/conf-indexes/{index}) as this doc says (https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTconf#configs.2Fconf-.7Bfile.7D.2F.7Bs... ), but it doesn't seem to work on the Cluster Master. Can someone please provide me with the specific REST API endpoint I should use to disable an index on the Cluster Master? I have read the documentation https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTcluster but there is no reference to what I need. Thank you in advance for your assistance
Well, it's all a bit of magic, isn't it? In this case it was the search head deployer pushing the CSV files to the search head cluster, though I've seen similar issues with the deployment server trying to push changes to the heavy forwarder layer. Sure, I guess even if the cause of the issue remains clouded in mystery, the actual problem is solved and I should accept this as the solution.
Hmm, so there is no option for the forwarder to send the log in TA_windows/CIM-compliant JSON format? I know XML is compatible, because that's what we normally index, but there is no JSON compliance? In that case, the "easy solution" has an even smaller chance of making it to the next family therapy session than a pickled Rick... I'll hold off on marking this as a solution until the last bit of hope is gone. But if I understand you correctly, even if the eventlog can be forwarded in JSON format (big if), this is not compliant with the TA for Windows in the SH/IX cluster. Best regards
I am trying to host Prometheus metrics on a Splunk app so that the metrics are available at the `.../my_app/v1/metrics` endpoint. I am able to create a handler of type PersistentServerConnectionApplication and have it return Prometheus metrics. The response, however, has status code `500` and content `Unexpected character while looking for value: '#'`. Prometheus metrics do not conform to any of the supported `output_modes` (atom | csv | json | json_cols | json_rows | raw | xml), so I get the same error irrespective of the output mode chosen. Is there a way to bypass the output check? Is there any other alternative for hosting output in a non-conforming format via a Splunk REST API?
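For context, a persistent REST handler of this kind is registered in the app's restmap.conf; a minimal sketch along these lines, where the stanza, script, and class names are placeholders for this scenario (check the restmap.conf spec for the exact settings your Splunk version supports):

```ini
# Hypothetical registration of a persistent handler serving /my_app/v1/metrics
[script:prometheus_metrics]
match = /my_app/v1/metrics
scripttype = persist
script = metrics_handler.py
handler = metrics_handler.MetricsHandler
requireAuthentication = true
```

The 500 described above comes from Splunk trying to render the handler's payload in one of the supported output modes, which Prometheus's `#`-prefixed comment lines do not fit.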
There's a fine line I'm trying not to cross. Yes, we are collecting logs indexed in JSON format. No, they are not "standard" with regard to field names, order, and content, as we are not collecting/indexing the eventlog in a conventional or forwarder-based manner. The logs we're indexing are not CIM compliant or compatible with the Splunk TA for Windows. We'd like to "adjust" incoming events in the HF layer to become "compliant". To evaluate whether this is reasonable, I'd need a couple of reference events of Splunk/CIM/TA-compliant eventlog in JSON format. I seem to remember there being a setting in the Windows TA for the UF to send the eventlog in JSON format; hence the ask for a sample of Windows eventlog in JSON format. If I'm mistaken and XML is the only forwarded format, or if there is such a setting but no one is willing to share a sample, we'll just have to deploy a test environment and try to generate the JSON data we need. I just thought this could be a faster route if someone could share some (masked is fine) events. If I'm mistaken and the UF cannot forward the eventlog in JSON format, then case closed and we're done here. Best regards
Firstly, your question is a bit inconsistent, since those "methods" are not mutually exclusive. For example, a syslog event can be ingested on a network input on a UF or pushed via SC4S to a HEC endpoint. Secondly, unless explicitly configured, Splunk on its own doesn't retain metadata about the transport it got the data from (it can, however, be reflected to some extent in the source field value). Thirdly, apart from the metrics which Splunk gathers anyway, you'd have to scan through all of your events to calculate the sum of their lengths, which would be highly inefficient (that's why Splunk accumulates the license usage count as it ingests each event, so it doesn't have to do it retroactively). So it's not that easy. What you already have in the license usage metrics, you have; what you don't have will be hard to compute.
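The metrics mentioned above live in license_usage.log in the _internal index; a typical sketch for breaking down ingested volume (the breakdown by sourcetype and index is just one example):

```spl
index=_internal source=*license_usage.log type="Usage"
| stats sum(b) AS bytes BY st, idx
| eval GB = round(bytes / 1024 / 1024 / 1024, 3)
| sort - GB
```

Here `b` is the byte count per usage event, `st` the sourcetype, and `idx` the index; note this reflects licensed ingest volume per day, not the transport the data arrived over.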
@PickleRickThanks.  I was afraid of that when I couldn't find anything in the documentation.  What is your idea so I can upvote it?
Good afternoon, Background: I found a configuration issue in one of our firewalls which I'm trying to remediate: an admin created a very broad access rule that has permitted traffic over a wide array of TCP/UDP ports. I started working to identify valid traffic which has used the rule, but a co-worker mentioned an easy win would be creating an ACL to block any ports which had not already been allowed through this very promiscuous rule. My problem is that I know how to use the data model to identify TCP/UDP traffic which has been logged egressing through the rule, but how could I modify the search provided below so that I get a result that displays which ports have NOT been logged? (Also, bonus points if you can help me view the returned numbers as ranges rather than individual numbers, e.g. "5000-42000".) Here is my current search:   | tstats values(All_Traffic.dest_port) AS dest_port values(All_Traffic.dest_ip) AS dest_ip dc(All_Traffic.dest_ip) AS num_dest_ip dc(All_Traffic.dest_port) AS num_dest_port FROM datamodel=Network_Traffic WHERE index="firewall" AND sourcetype="traffic" AND fw_rule="horrible_rule" BY All_Traffic.dest_port | rename All_Traffic.* AS *   Thank you in advance for any help that you may be able to provide!
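One possible approach, sketched under the assumption that the ports of interest span 1-65535 and that you are on Splunk 8.0+ (for mvmap): collect the ports that were seen, generate the full range with mvrange, and keep the difference:

```spl
| tstats count FROM datamodel=Network_Traffic
    WHERE index="firewall" AND sourcetype="traffic" AND fw_rule="horrible_rule"
    BY All_Traffic.dest_port
| rename All_Traffic.dest_port AS dest_port
| stats values(dest_port) AS seen_ports
| eval all_ports = mvrange(1, 65536)
| eval unseen_ports = mvmap(all_ports,
    if(isnull(mvfind(seen_ports, "^".all_ports."$")), all_ports, null()))
| fields unseen_ports
```

mvfind returns null when no value in seen_ports matches the exact port number, so mvmap keeps only the ports that never appeared; collapsing the result into ranges like "5000-42000" would take a further streamstats pass over the sorted list.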
TA_windows expects data in either "traditional" rendered text format (key=value multiline event) or an xml structure. If you want to send them another way you'll have to write your own extractions and make it CIM-conformant.
No, you can't use search commands in field definitions. You can create calculated fields, but they are limited to what you could normally put in an eval statement. With key-value extraction done using a regex (as you tried with the _KEY_1 and _VAL_1 groups), it's tricky to properly capture the data: you lose the structure of the JSON object, and you might hit the limit on the number of key-value pairs extracted (100 by default, if I remember correctly). Unfortunately, Splunk has no way of being told to start kv extraction from a given point within an event - it always tries to "consume" the whole event. So it works well if the _raw field as a whole is just one big JSON object, but it can't handle cases like "JSON sent with a syslog header". It's a shame really, and I think I even posted an idea about that on ideas.splunk.com. Worth upvoting.
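One common workaround for the "JSON with a syslog header" case, sketched as a props.conf tweak (the sourcetype name is a placeholder): strip everything before the first opening brace at index time, so that the remaining _raw is pure JSON and automatic extraction can handle it.

```ini
[my:syslog:json]
# Index-time: drop the syslog header up to the first "{"
SEDCMD-strip_syslog_header = s/^[^{]+//
# Search-time: treat the (now pure JSON) _raw as a JSON object
KV_MODE = json
```

The trade-off is that the syslog header (timestamp, host) is discarded from _raw, so timestamp and host should be configured to be taken from the header (e.g. via TIME_PREFIX and a host transform) before it is stripped.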
Hello! I'm trying to figure out a way to display a single value that calculates users who have disconnected divided by the time range from the time picker. The original number comes from the average of total disconnects divided by the distinct user count. I need to divide that number by the number of days selected in the time picker. The goal is the average user disconnects per day for the time frame selected. For example, if there are 100 disconnects and 10 distinct users, that gives 10, and divided by the number of days selected in the picker (7) it should equal about 1.43 disconnects per day. I hope that makes sense. Here is my search: index=... host=HostName earliest=$time_tok.earliest$ latest=$time_tok.latest$ | stats count by "User ID" | search "User ID"=* | stats avg(count) That will only give me the total disconnects divided by distinct users, but I need that number divided by the number of days in the time picker, and I can't get it to work. Thank you!!!
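A common way to bring the picker's range into the calculation is addinfo, which attaches the search time bounds (info_min_time/info_max_time) to each result; a sketch along those lines, building on the search above:

```spl
index=... host=HostName earliest=$time_tok.earliest$ latest=$time_tok.latest$
| stats count BY "User ID"
| stats avg(count) AS avg_disconnects
| addinfo
| eval days = (info_max_time - info_min_time) / 86400
| eval per_day = round(avg_disconnects / days, 2)
| fields per_day
```

One caveat: with an "All time" picker, info_min_time is 0, so the day count becomes meaningless; guard for that if the panel allows it.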
That's the nature of wildcards - they're *wild* and sometimes match more than is desired. The workaround is to tell Splunk what not to match, using the NOT operator and some other pattern, or use the regex command to filter using a more precise regular expression. index=test control_id=AC-2* | regex control_id="AC-2[a-z]?" This query first reads all events where the control_id field starts with "AC-2".  This is similar to the existing behavior.  The regex command keeps only the events where the control_id field contains "AC-2" followed by an optional single letter.
For example: index=test | search control_id=AC-2* This gives me AC-2, AC-2a, AC-20a, AC-22b, and so on. I just want AC-2 and AC-2a, not the two-digit controls like AC-20 and AC-22.
Hey Splunk Community, Ok, I've got a tale of woe, intrigue, revenge, index=_*, and python 3.7. My tale begins a few weeks ago when I and the other Splunk admin were just like "Ok, I know searches can be slow, but like EVERYTHING is just dragging". We opened a support ticket, talked about it with AOD, let our Splunk team know, got told we might be under-provisioned for SVCs and indexers, no wait over-provisioned, no wait do better searches, no wait again skynet is like "why is your instance doing that?". We also got a Splunk engineer assigned to our case and were told our instance is fine. Le sigh; when I tell you I rabbled rabbled rabbled racka facka Mr. Krabs... I was definitely salty. So I took it upon myself to dive deeper than I have ever EEEEEVER dived before... index=_* error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) ) I know, I know, it was a rough one, BUT down the rabbit hole I went. I ran this search back as far as my instance would go, October 2022, and counted from there. I was trying to find any sort of 'spike' or anomaly, something to show that our instance is not fine. October 2022 - 2, November 2022 - 0, December 2022 - 0, January - 25, February - 0, March - 29, April - 15, May - 44, June - 1,843, July - 40,081, August - 569,004, September - 119,696,269, October - don't ask, ok fine, so far in October there are 21,604,091. The climb is real, and now I had to find what was causing it. From August and back it was a lot of connection/timeout errors from the UF on some endpoints, so nothing super weird, just a lot of them. SEPTEMBER, specifically 9/2/23 11:49:25.331 AM, this girl blew up! The 1st event_message was... 09-02-2023 16:49:25.331 +0000 ERROR PersistentScript [3873892 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-Zscaler_CIM/bin/TA_Zscaler_CIM_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last): The rest of the event messages that followed were these...
See the 3 attached screenshots. I did a 'last 15 min' search, but, like September's, this hits the millions. Also, I see it's not just one app; it's several of the apps we use APIs with to get logs into Splunk, but not all the apps we use show on the list (weird), and it's not limited to 3rd-party apps; the Splunk cloud admin app is on there among others (see attached VSC doc). I also checked whether any of these apps might be out of date, and they are all on their current versions. I did see one post on community (https://community.splunk.com/t5/All-Apps-and-Add-ons/ERROR-PersistentScript-23354-PersistentScriptIo-From-opt-splunk/m-p/631008) but there was no reply. I also first posted on the Slack channel to see if anyone else was or had experienced this: https://splunk-usergroups.slack.com/archives/C23PUUYAF/p1696351395640639 And last but not least, I opened another support ticket, so hopefully I can give an update if I get some good deets! Appreciate you -Kelly
I'm working with these events   Oct 3 17:11:23 hostname Tetration Alert[1485]: [ERR] {"keyId":"keyId","eventTime":"1696266370000","alertTime":"1696266682583","alertText":"Missing Syslog heartbeats, it might be down","severity":"HIGH","tenantId":"0","type":"CONNECTOR","alertDetails":"{\"Appliance ID\":\"applianceId\",\"Connector ID\":\"connectorId\",\"Connector IP\":\"1.1.1.1/24\",\"Name\":\"SYSLOG\",\"Type\":\"SYSLOG\",\"Deep Link\":\"host.tetrationanalytics.com/#/connectors/details/SYSLOG?id=syslog_id\",\"Last checkin at\":\"Oct 02 2023 16.55.25 PM UTC\"}","rootScopeId":"rootScopeId"} Oct 3 17:11:23 hostname Tetration Alert[1485]: [ERR] {"keyId":"keyId","eventTime":"1696266370000","alertTime":"1696266682583","alertText":"Missing Email heartbeats, it might be down","severity":"HIGH","tenantId":"0","type":"CONNECTOR","alertDetails":"{\"Appliance ID\":\"applianceId\",\"Connector ID\":\"connectorId\",\"Connector IP\":\"1.1.1.1/24\",\"Name\":\"EMAIL\",\"Type\":\"EMAIL\",\"Deep Link\":\"host.tetrationanalytics.com/#/connectors/details/EMAIL?id=6467c9b6379aa00e64072f57\",\"Last checkin at\":\"Oct 02 2023 16.55.25 PM UTC\"}","rootScopeId":"rootScopeId"} Oct 3 09:57:52 hostname Tetration Alert[1393]: [DEBUG] {"keyId":"Test_Key_ID_2023-09-29 09:57:52.73850357 +0000 UTC m=+13322248.433593601","alertText":"Tetration Test Alert","alertNotes":"TestAlert","severity":"LOW","alertDetails":"This is a test of your Tetration Alerts Notifier (TAN) configuration. If you received this then you are ready to start receiving notifications via TAN."}   I set my_json to all the json.  I then use fromjson to pull out the nvps.  I then use fromjson on alertDetails since it is nested in the json.  I can do this from the CLI using   index=main sourcetype="my_sourcetype" | fromjson csw_json | fromjson alertDetails   I need to be able to use that in a props or transforms conf file.  Are these commands able to do that? 
I tried this in the transforms.conf after extracting myAlertDetail   [stanza_name] REGEX = "(?<_KEY_1>[^"]*)":"(?<_VAL_1>.*)" SOURCE_KEY = myAlertDetail   I get {\ and the test message. According to regex101.com, the regex should pull everything, but it doesn't in Splunk. Hence the question about fromjson. Splunk 9.0.4 on Linux. TIA, Joe
Instead of asking volunteers to speculate about what you mean by reverse engineering from complex SPL and a screenshot, please illustrate some data in text (anonymized as necessary), explain the key characteristics of the dataset, illustrate the desired results in text, and explain the logic connecting the data and the desired results.