All Posts


Good morning fellow Splunkers. I have a challenge and was wondering if anyone could help me. In some logs with multiple fields that share the same label, we use eval mvindex to assign a different label to each of those fields. For example, in one log we have two fields labelled "Account Name", the first corresponding to the computer account and the second to the user account. We use mvindex to assign labels appropriately. This works well for a known number of fields. Now we also have logs with groups of fields: action, module and rule:

    action: quarantine
    module: access
    rule: verified

    action: execute
    module: access
    rule: verified

    action: continue
    module: access
    rule: verified

    action: reject
    isFinal: true
    module: pdr
    rule: reject

I would like to use mvindex to label these so I can work with the fields more easily. In the example above we have four groups of those fields, therefore I would have action1, action2, etc. (and the same for module and rule). However, the number of groups changes; it could be one, two, three or more. Is there any way to use mvindex dynamically somehow? I imagine we would have to first evaluate the number of those fields (or groups of fields) and then use mvindex to assign the different labels? Unless there is a different way to achieve our goal. Many thanks in advance for any advice. Kind Regards, Mike.
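For context, the static pattern described above looks roughly like this (a minimal sketch; it assumes "Account Name" and the action/module/rule fields are already extracted as multivalue fields):

    | eval computer_account = mvindex('Account Name', 0)
    | eval user_account = mvindex('Account Name', 1)
    | eval group_count = mvcount(action)

Here mvcount(action) returns how many values the multivalue action field carries, which would be the "evaluate the number of those fields" step mentioned above before assigning the numbered labels.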
I apologize for the inconvenience here. Unfortunately, Splunk now requires that apps no longer expose, via the UI, the ability to accept insecure connections. To keep our app certified and available on Splunkbase, we had to remove this option from the UI and move it out of inputs.conf into a separate file.
Assuming that, as in the OP, index A still carries a Hostname field that you want to compare with Reporting_Host in index B, and that index A also has an "IP address" field, this should get your desired result:

    index=A sourcetype="Any"
    | stats values("IP address") as "IP address" by Hostname OS
    | append [search index=B sourcetype="foo" | stats values(Reporting_Host) as Reporting_Host]
    | eventstats values(eval(lower(Reporting_Host))) as Reporting_Host
    | where index != "B"
    | mvexpand "IP address"
    | eval match = if(lower(Hostname) IN (Reporting_Host) OR 'IP address' IN (Reporting_Host), "ok", null())
    | stats values("IP address") as "IP address" values(match) as match by Hostname OS
    | fillnull value="missing" match

Use the following emulation:

    | makeresults format=csv data="Hostname, IP address, OS
    xyz, 190.1.1.1:101.2.2.2:102.3.3.3:4.3.2.1, Windows
    zbc, 100.0.1.0, Linux
    alb, 190.1.0.2, Windows
    cgf, 20.4.2.1, Windows
    bcn, 20.5.3.4:30.4.6.1, Solaris"
    | eval "IP address" = split('IP address', ":")
    | eval index = "A"
    | append [makeresults format=csv data="Reporting_Host
    zbc
    30.4.6.1
    alb
    101.2.2.2"
    | eval index = "B"]
    ``` the above emulates
    index=A sourcetype="Any"
    | stats values("IP address") as "IP address" by Hostname OS
    | append [search index=B sourcetype="foo" | stats values(Reporting_Host) as Reporting_Host] ```

The result is:

    Hostname  OS       IP address                              match
    alb       Windows  190.1.0.2                               ok
    bcn       Solaris  20.5.3.4 30.4.6.1                       ok
    cgf       Windows  20.4.2.1                                missing
    xyz       Windows  101.2.2.2 102.3.3.3 190.1.1.1 4.3.2.1   ok
    zbc       Linux    100.0.1.0                               ok
Hello, it should be port 8088 in your script; however, the UI won't work for the HEC. Try sending the data to HEC via Postman or curl; if that works, then the issue is likely with the payload data source. For troubleshooting, use the searches below for your HEC logs:

    index=_introspection component=HttpEventCollector sourcetype=http_event_collector_metrics

    index=_internal host=yourhechost ERROR

Last thing: try using the services/collector/raw endpoint to test, but keep in mind to use services/collector/event for your JSON data. Hope this helps.
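A minimal curl test against each endpoint could look like this (hostname and token are placeholders; -k skips certificate verification for self-signed certs):

    # JSON event endpoint
    curl -k https://yourhechost:8088/services/collector/event \
      -H "Authorization: Splunk <your-hec-token>" \
      -d '{"event": {"message": "hello from curl"}, "sourcetype": "_json"}'

    # raw endpoint
    curl -k https://yourhechost:8088/services/collector/raw \
      -H "Authorization: Splunk <your-hec-token>" \
      -d 'hello raw event'

A response of {"text":"Success","code":0} means the token and network path are fine, and any remaining problem is in the payload.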
Hi, Is there a way to check the Splunk Cloud timezone? I know from the documentation it's at GMT+0 and displays the data based on your configured timezone. My user account is configured at GMT+8; however, when I check the Triggered Alerts page, the alerts have a CST timezone. Also, in our ES Incident Review, checking the time difference between the triggering event and the triggered alert, it's almost 2 hrs. For reference, see below.

Triggered alert in Incident Review
(Highlighted refers to the timestamp in the triggering event from the drill-down search)
Hi @sainag_splunk , I am trying to open the endpoint in a browser but am getting the below error. Regards, Eshwar
Splunk has a warning log:

    WARN AggregatorMiningProcessor [10530 merging] - Breaking event because limit of 256 has been exceeded ... data_sourcetype="my_json"

The "my_json" sourcetype for the UF is:

    [my_json]
    DATETIME_CONFIG =
    KV_MODE = json
    LINE_BREAKER = (?:,)([\r\n]+)
    NO_BINARY_CHECK = true
    TIMESTAMP_FIELDS = _time
    TIME_FORMAT = %2Y%m%d%H%M%S
    TRUNCATE = 0
    category = Structured
    description = my json type without truncate
    disabled = false
    pulldown_type = 1
    MAX_EVENTS = 2500
    BREAK_ONLY_BEFORE_DATE = true

The data has about 5000 lines; a sample is below:

    {
      "Versions" : {
        "sample_version" : "version.json",
        "name" : "my_json",
        "revision" : "rev2.0"},
      "Domains" : [{
        "reset_domain_name" : "RESET_DOMAIN",
        "domain_number" : 2,
        "data_fields" : ["Namespaces/data1", "Namespaces/data2"] }
      ],
      "log" : ["1 ERROR No such directory and file",
        "2 ERROR No such directory and file",
        "3 ERROR No such directory and file",
        "4 ERROR No such directory and file" ],
      "address" : [{ "index": 1, "addr": "0xFFFFFF"} ],
      "fail_reason" : [{ "reason" : "SystemError", "count" : 5},
        { "reason" : "RuntimeError", "count" : 0},
        { "reason" : "ValueError", "count" : 1} ],
      ... blahblah ...
      "comment" : "None"}

How can we fix this warning? We added the MAX_EVENTS setting in props.conf, but it is not working.
Hi @sainag_splunk , Yes, I had configured it with the token and index. Below is my configuration for the HEC and the OTel exporter. Please suggest where I went wrong. Regards, Eshwar
If you already have HEC set up with the token and index, you should be good on the Splunk indexing side. You will need to use the HEC exporter.

HEC exporter: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/splunkhecexporter/README.md

Refer: https://github.com/signalfx/splunk-otel-collector/tree/main/examples/otel-logs-splunk

Hope all these links help.
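A minimal collector config using that exporter might look like the sketch below (not a tested config; the log path, hostname, token, and index are placeholders you would replace with your own values):

    receivers:
      filelog:
        include: [/var/log/myapp/*.log]   # placeholder path

    exporters:
      splunk_hec:
        token: "YOUR-HEC-TOKEN"                                   # placeholder
        endpoint: "https://your-splunk-host:8088/services/collector"
        index: "main"                       # must be an index the token can write to
        tls:
          insecure_skip_verify: true        # only for testing with self-signed certs

    service:
      pipelines:
        logs:
          receivers: [filelog]
          exporters: [splunk_hec]

The key points are that the endpoint points at port 8088 with the /services/collector path, and that the token and index here match the HEC token configuration on the Splunk side.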
Hi @sainag_splunk , Thank you for your response. Just for your info, I installed HEC on-prem, not on Kubernetes. I think the command you shared is for a Kubernetes environment. My goal is to send log data through the OTel collector to the HEC endpoint.
Why is it that neither of the Splunk.com site dashboard examples returns data for the following query?

    index=main sourcetype=access_combined* status=200 action=purchase | timechart count by productid

Here's what the videos say we should get:

But here's what the query returns:

It groups by date successfully, but doesn't yield results by product. Both of the online dashboard creation videos at the URLs below yield the desired results shown in the first screenshot above.

Note: the source is "tutorialdata.zip:*". The two video training sites are here:
https://www.splunk.com/en_us/training/videos/all-videos.html
https://www.splunk.com/en_us/blog/learn/splunk-tutorials.html#education
Is there a way to create a detector to alert if a particular user (based on a part of the URL) is experiencing a higher number of errors? For example, if I have a /user/{customerId}/do-something URL, then I want to be alerted when a particular {customerId} has a high number of errors within a specific time period. If there's a higher number of errors but they're mostly for different {customerId} values, then I don't want a notification. Thanks.
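If the {customerId} path segment is captured as a dimension on your error metric, a SignalFlow sketch along these lines groups errors per customer so the detector fires on any single customer's count (the metric name demo.errors.count and the threshold 100 are placeholders, not real names from this question):

    # group error counts per customer, so the threshold applies to each customerId separately
    errors = data('demo.errors.count').sum(by=['customerId'])
    detect(when(errors > 100)).publish('High error count for a single customer')

Because the sum is grouped by customerId, errors spread thinly across many different customers won't trip the threshold, which matches the behavior described above.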
Assuming you already have a token $City_tok$ from the input, mvexpand is the most traditional way to do it:

    | spath path=DataCenters{}
    | mvexpand DataCenters{}
    | spath input=DataCenters{}
    | where City == "$City_tok$"

If mvexpand is a problem in your environment, there are other ways.
I think there is no sensitive data, so here is the full error message:

    2024-09-23 21:06:30,760 INFO [66f175e6c17fc8d479ab10] error:337 - 500 Internal Server Error
    The server encountered an unexpected condition which prevented it from fulfilling the request.
    2024-09-23 21:06:30,760 ERROR [66f175e6c17fc8d479ab10] error:338 - Traceback (most recent call last):
      File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
        self._do_respond(path_info)
      File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 687, in _do_respond
        response.body = self.handler()
      File "/opt/splunk/lib/python3.7/site-packages/cherrypy/lib/encoding.py", line 219, in __call__
        self.body = self.oldhandler(*args, **kwargs)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/htmlinjectiontoolfactory.py", line 75, in wrapper
        resp = handler(*args, **kwargs)
      File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
        return self.callable(*self.args, **self.kwargs)
      File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-2042>", line 2, in help
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 41, in rundecs
        return fn(*a, **kw)
      File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-2040>", line 2, in help
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 102, in check
        if verify_session and not util.checkRequestForValidFormKey(requireValidFormKey=must_login):
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/util.py", line 1481, in checkRequestForValidFormKey
        if not isValidFormKey(form_key) or not doesFormKeyMatchCookie(form_key):
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/util.py", line 1452, in isValidFormKey
        logger.warn('CSRF form_key mismatch received=%s expected[redacted]=%s' % (key, getFormKey()[-4:]))
    TypeError: 'NoneType' object is not subscriptable

Before that, I would like to say thank you for replying to this post and wanting to help me.
Hi, Based on the following JSON document, I want to find the value of "Geography" where City is the input. Here is the JSON:

    {
      "Company" : "Microsoft",
      "Cloud" : "Azure",
      "DataCenters" : [
        { "Geography" : "USA", "Region" : "East", "City" : "New York" },
        { "Geography" : "India", "Region" : "West", "City" : "Ahmedabad" },
        { "Geography" : "USA", "Region" : "West", "City" : "San Fransisco" },
        { "Geography" : "South Africa", "Region" : "West", "City" : "Capetown" }
      ]
    }

Can somebody please help me fetch this information? Thanks.
Hey gang,

I'm using the Splunk Add-on for Microsoft Azure to ingest AAD sign-in logs to Splunk under the azure:aad:signin sourcetype; however, there seems to be a gap between the number of events visible in Entra ID versus what is visible from Splunk. There are always slightly more events in Entra ID, and the gap seems to worsen as the volume of events grows. See this table:

    Time         Splunk    Entra ID    Difference
    1st hour     3265      3305        40
    2nd hour     3085      4804        1719
    3rd hour     3264      6309        3045
    4th hour     2274      3841        1567
    5th hour     1659      2632        973
    6th hour     2168      3442        1274
    7th hour     6236      8923        2687
    8th hour     22716     35901       13185
    9th hour     63186     101602      38416
    10th hour    88607     145503      56896
    11th hour    68407     140095      71688
    12th hour    76866     124423      47557
    13th hour    68717     122355      53638
    14th hour    81310     144880      63570
    15th hour    50849     140876      90027
    16th hour    42972     124040      81068
    17th hour    33693     91792       58099
    18th hour    13683     50408       36725
    19th hour    13973     38695       24722
    20th hour    12182     29645       17463
    21st hour    9734      24187       14453
    22nd hour    8037      16935       8898
    23rd hour    5869      11994       6125
    24th hour    5631      8837        3206
    Total        688383    1385424     697041
    Percentage difference              50.31%

- This gap appears even when searching historical logs, i.e. time slots over the last two weeks.
- The retention period of the index is 90 days, so the events should not have expired yet.
- There are no line breaking, event breaking, aggregation, timestamp, or other parsing errors for the sourcetype.
- The gap is still present when searching over all time.
- The internal logs from the Splunk Add-on for Microsoft Azure only show the following two error messages, which don't seem relevant and only appeared a few times over the last month or so:

    "File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/binding.py", line 1337, in request raise HTTPError(response) splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- KV Store is in maintenance mode."

    "File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/modularinput/event.py", line 111, in write_to stream.flush() BrokenPipeError: [Errno 32] Broken pipe"

I have updated the polling interval of the Microsoft Entra ID Interactive Sign-ins input to 900 seconds, but still the issue persists. What other explanation could there be for the gap?

Thanks,
K
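For anyone wanting to reproduce the Splunk-side hourly counts, a simple timechart over the sourcetype should do it (the index name here is a placeholder for wherever the add-on writes):

    index=your_azure_index sourcetype="azure:aad:signin"
    | timechart span=1h count

Its output should correspond to the Splunk column in the table above, so re-running it over the same window as the Entra ID export makes the gap directly comparable hour by hour.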
Hello @Cheng2Ready have you already tried this?

https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/inputTimeRange

    "query": "index=web \n| chart count over product_name by host",
    "queryParameters": {
        "earliest": "$global_time.earliest$",
        "latest": "$global_time.latest$"
    }
    },
@MohammedKhanIUK I believe that's the requirement.

NOTE: To collect the audit logs, the user should have admin access to the organization/enterprise and the read:audit_log scope for the Personal Access Token.

https://docs.splunk.com/Documentation/AddOns/released/Github/Configureinputs

If this reply helps you, an upvote and "Accept as Solution" is appreciated.
I am currently using the new Dashboard Studio interface; my dashboards make calls to saved reports in Splunk. Is there a way to have a time range input work for the dashboard, but also allow it to work with the reports? The issue we face is that we are able to set the reports in the Studio dashboard, but by default they are stuck as static reports. How can we add a time range input that will work with both the dashboard and the reports?
Can the permissions be limited to specific capabilities aside from admin:org for audit events? Or is that a fundamental requirement to pull in audit logs?