All Posts


Hi @sainag_splunk, thank you for your response. Just for your information, I installed HEX on-prem, not on Kubernetes, so I think the command you shared is for a Kubernetes environment. My goal is to send log data through the OTel Collector to the HEC endpoint.
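For reference, here is a minimal sketch of the kind of collector config I mean, assuming the splunk_hec exporter from opentelemetry-collector-contrib and a filelog receiver; the token, endpoint, index, and log path are all placeholders:

    # hypothetical OTel Collector config sketch; all values are placeholders
    receivers:
      filelog:
        include: [ /var/log/myapp/*.log ]
    exporters:
      splunk_hec:
        token: "<your-HEC-token>"
        endpoint: "https://<your-splunk-host>:8088/services/collector"
        index: "main"
    service:
      pipelines:
        logs:
          receivers: [filelog]
          exporters: [splunk_hec]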
Why do neither of the Splunk.com site dashboard examples return data for the following query: index=main sourcetype=access_combined* status=200 action=purchase | timechart count by productid ? Here's what the videos say we should get: [screenshot] But here's what the query returns: [screenshot] It groups by date successfully, but doesn't break the results out by product. Both of the online dashboard creation videos at the URLs below yield the desired results shown in the first screenshot above. Note: the source="tutorialdata.zip:*". The two video training sites are here:
https://www.splunk.com/en_us/training/videos/all-videos.html
https://www.splunk.com/en_us/blog/learn/splunk-tutorials.html#education
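One diagnostic sketch that may help narrow this down (a hedged suggestion, since SPL field names are case-sensitive and the tutorial data's product field may be capitalized differently than productid):

    index=main sourcetype=access_combined* status=200 action=purchase
    | fieldsummary
    | search field=product*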
Is there a way to create a detector to alert if a particular user (based on a part of the URL) is experiencing a higher number of errors? For example, if I have a /user/{customerId}/do-something URL, then I want to be alerted when a particular {customerId} has a high number of errors within a specific time period. If there's a higher number of errors but they're mostly for different {customerId} values, then I don't want a notification. Thanks.
Assuming you already have a token $City_tok$ from the input, mvexpand is the most traditional way to do it:

| spath path=DataCenters{}
| mvexpand DataCenters{}
| spath input=DataCenters{}
| where City == "$City_tok$"

If mvexpand is a problem in your environment, there are other ways.
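For example, one mvexpand-free sketch (assuming the same $City_tok$ token, and that each event contains one such JSON document) extracts the parallel arrays and lines them up by index:

| spath path=DataCenters{}.City output=cities
| spath path=DataCenters{}.Geography output=geos
| eval idx=mvfind(cities, "$City_tok$")
| eval Geography=mvindex(geos, idx)

Note that mvfind treats its second argument as a regular expression, so a city name containing regex metacharacters would need escaping.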
I think there is no sensitive data, so here is the full error message:

2024-09-23 21:06:30,760 INFO [66f175e6c17fc8d479ab10] error:337 - 500 Internal Server Error The server encountered an unexpected condition which prevented it from fulfilling the request.
2024-09-23 21:06:30,760 ERROR [66f175e6c17fc8d479ab10] error:338 - Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 687, in _do_respond
    response.body = self.handler()
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/lib/encoding.py", line 219, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/htmlinjectiontoolfactory.py", line 75, in wrapper
    resp = handler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-2042>", line 2, in help
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 41, in rundecs
    return fn(*a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-2040>", line 2, in help
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 102, in check
    if verify_session and not util.checkRequestForValidFormKey(requireValidFormKey=must_login):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/util.py", line 1481, in checkRequestForValidFormKey
    if not isValidFormKey(form_key) or not doesFormKeyMatchCookie(form_key):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/util.py", line 1452, in isValidFormKey
    logger.warn('CSRF form_key mismatch received=%s expected[redacted]=%s' % (key, getFormKey()[-4:]))
TypeError: 'NoneType' object is not subscriptable

Also, thank you for replying to this post and wanting to help me.
Hi, based on the following JSON document, I want to find the value of "Geography" where City is the input. Here is the JSON:

{
  "Company": "Microsoft",
  "Cloud": "Azure",
  "DataCenters": [
    { "Geography": "USA", "Region": "East", "City": "New York" },
    { "Geography": "India", "Region": "West", "City": "Ahmedabad" },
    { "Geography": "USA", "Region": "West", "City": "San Fransisco" },
    { "Geography": "South Africa", "Region": "West", "City": "Capetown" }
  ]
}

Can somebody please help me fetch this information? Thanks.
Hey gang, I'm using the Splunk Add-on for Microsoft Azure to ingest AAD sign-in logs into Splunk under the azure:aad:signin sourcetype. However, there seems to be a gap between the number of events visible in Entra ID versus what is visible from Splunk. There are always somewhat more events in Entra ID, and the gap seems to worsen as the volume of events grows. See this table:

Time         Splunk   Entra ID   Difference
1st hour       3265       3305           40
2nd hour       3085       4804         1719
3rd hour       3264       6309         3045
4th hour       2274       3841         1567
5th hour       1659       2632          973
6th hour       2168       3442         1274
7th hour       6236       8923         2687
8th hour      22716      35901        13185
9th hour      63186     101602        38416
10th hour     88607     145503        56896
11th hour     68407     140095        71688
12th hour     76866     124423        47557
13th hour     68717     122355        53638
14th hour     81310     144880        63570
15th hour     50849     140876        90027
16th hour     42972     124040        81068
17th hour     33693      91792        58099
18th hour     13683      50408        36725
19th hour     13973      38695        24722
20th hour     12182      29645        17463
21st hour      9734      24187        14453
22nd hour      8037      16935         8898
23rd hour      5869      11994         6125
24th hour      5631       8837         3206
Total        688383    1385424       697041
Percentage difference                 50.31%

- This gap appears even when searching historical logs, i.e. time slots over the last two weeks.
- The retention period of the index is 90 days, so the events should not have expired yet.
- There are no line breaking, event breaking, aggregation, timestamp, or other parsing errors for the sourcetype.
- The gap is still present when searching over all time.
- The internal logs from the Splunk Add-on for Microsoft Azure only show the following two error messages, which don't seem relevant and only appeared a few times over the last month or so:

File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/binding.py", line 1337, in request
  raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- KV Store is in maintenance mode.

File "/opt/splunk/etc/apps/TA-MS-AAD/lib/splunklib/modularinput/event.py", line 111, in write_to
  stream.flush()
BrokenPipeError: [Errno 32] Broken pipe

I have updated the polling interval of the Microsoft Entra ID Interactive Sign-ins input to 900 seconds, but the issue still persists. What other explanation could there be for the gap?

Thanks, K
Hello @Cheng2Ready, have you already tried this? https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/inputTimeRange

"query": "index=web \n| chart count over product_name by host",
"queryParameters": {
    "earliest": "$global_time.earliest$",
    "latest": "$global_time.latest$"
}
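For context, a hedged sketch of where that fragment sits in a full Dashboard Studio definition, assuming a search data source named ds_web and a time range input whose id is global_time:

"dataSources": {
    "ds_web": {
        "type": "ds.search",
        "options": {
            "query": "index=web | chart count over product_name by host",
            "queryParameters": {
                "earliest": "$global_time.earliest$",
                "latest": "$global_time.latest$"
            }
        }
    }
}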
@MohammedKhanIUK I believe that's the requirement.

NOTE: To collect the audit logs, the user should have admin access to the organization/enterprise and the read:audit_log scope for the Personal Access Token.

https://docs.splunk.com/Documentation/AddOns/released/Github/Configureinputs

If this reply helps you, an upvote and "Accept as Solution" is appreciated.
I am currently using the new Dashboard Studio interface; the dashboards make calls to saved reports in Splunk. Is there a way to have a time range input work for the dashboard, but also have it apply to the reports? The issue we face is that we are able to add the reports to the Studio dashboard, but by default they are stuck as static reports. How can we add a time range input that will work with both the dashboard and the reports?
Can the permissions be limited to specific capabilities aside from admin:org for audit events? Or is that a fundamental requirement to pull in audit logs?
Ahhh... So you don't want to just join two indexes; you want to join an index onto itself using an external "bracketing" event. Ugh. With small datasets you can try to stats once, then append the index again and stats another time. Well, you could even try to use the cursed join command. But the real question is how to do this operation effectively. I have a rough idea, but I have to test it first.
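As a rough, untested sketch of what I mean by "stats once, then append" (the index and field names are hypothetical placeholders, and bracketing by a per-field2 time range is just one reading of the requirement):

index=ind1 earliest=-1d "A" field1=xyz
| stats min(_time) as bracket_start max(_time) as bracket_end by field2
| append
    [ search index=ind1 earliest=-1d field1=abc ]
| eventstats min(bracket_start) as bracket_start max(bracket_end) as bracket_end by field2
| where _time>=bracket_start AND _time<=bracket_end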
So the best performing solution I could come up with was something like this:

index=ind1 earliest=-1d field1=abc field2 IN (
    [ search index=ind1 earliest=-1d "A" field1=xyz
      | stats count by field2
      | fields field2
      | rename field2 as query
      | format mvsep="" "" "" "" "" "" ""
      | replace "NOT ()" WITH "" ])
| append
    [ search index=ind1 earliest=-1d "A" field1=xyz ]

This way, the parent query runs with the additional filtering provided by the subquery. One thing I was wondering is whether the results of search index=ind1 earliest=-1d "A" field1=xyz could be stored so that it doesn't have to run twice. Is that possible?
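One avenue I'm considering, as a hedged sketch: if the inner search were saved as a scheduled report (here a hypothetical report name my_inner_search in the search app, owned by admin), loadjob could reuse its cached results instead of re-running it:

| loadjob savedsearch="admin:search:my_inner_search"
| stats count by field2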
So it is now flagged to include "z" versions of OpenSSL, meaning that all prior and current versions were and are indeed affected? Could you provide a link to the supporting information?
The curl command works from the command line only if I specify "-k" (ignore the SSL cert). How do I get Splunk to accept the cert?
| makeresults format=csv data="_time, username, computer, printer, source_dir, status
2024-09-24 15:32, auser, cmp_auser, print01_main1, \\\\cpn-fs.local\data\program\..., Printed
2024-09-24 13:57, buser, cmp_buser, print01_offic1, c:\program files\documents\..., Printed
2024-09-24 12:13, cuser, cmp_cuser, print01_offic2, \\\\cpn-fs.local\data\transfer\..., In queue
2024-09-24 09:26, buser, cmp_buser, print01_offic1, F:\transfers\program\..., Printed
2024-09-24 09:26, buser, cmp_buser, print01_front1, \\\\cpn-fs.local\transfer\program\..., Printed
2024-09-24 07:19, auser, cmp_auser, print01_main1, \\\\cpn-fs.local\data\program\...., In queue"
| rex field=source_dir "(?P<FolderPath>(\\\\\\\\[^\\\\]+|\w:)\\\\[^\\\\]+\\\\)"
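In case the intent isn't obvious from the regex alone: the rex captures the share root (either a \\host UNC prefix or a drive letter) plus the first folder level into FolderPath. Appending a table command, as a hedged usage example, shows what it extracts for the sample rows:

| table _time username printer source_dir FolderPath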
The problem here may be that Splunk is not releasing the updated libssl and libcrypto libraries up through the 9.3.1 release, now that the vulnerabilities are being flagged to "zk".
Hi @Andrew.Bray, Have you seen this AppD Docs page? https://docs.appdynamics.com/appd/23.x/23.11/en/infrastructure-visibility/machine-agent/install-the-machine-agent
Glad it was helpful. If this reply helps you, an upvote and "Accept as Solution" is appreciated.
It appears to be failing to pull the Docker image. This guide for setting up SC4S suggests using a different value for SC4S_IMAGE: https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/podman-systemd-general/#unit-file Is "ghcr.io/splunk/splunk-connect-for-syslog/container3:latest" the SC4S_IMAGE value you tried?
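For reference, a sketch of how that variable is typically set in the systemd unit file the guide describes (the unit path is an assumption; check the linked page for the current image reference):

# e.g. /lib/systemd/system/sc4s.service
[Service]
Environment="SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest"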