All Posts

https://docs.splunk.com/Documentation/Splunk/9.1.3/Search/Aboutsubsearches Watch out, however, because subsearches have their limitations: if your subsearch is long-running or returns many events, it may get silently finalized and you may not get proper results (wrong results, or no results at all). The question is whether you need this Report 1 on its own, or whether it is just part of the functionality you want to achieve, because that could probably be done with a single search instead.
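For reference, the limits that trigger that silent finalization live in limits.conf under the [subsearch] stanza; below is a minimal sketch with typical defaults (exact values depend on your Splunk version, so verify against limits.conf.spec rather than copying these):

[subsearch]
# maximum number of results a subsearch may return before it is finalized
maxout = 10000
# maximum runtime in seconds before the subsearch is finalized
maxtime = 60
# how long, in seconds, the subsearch results are cached
ttl = 300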
Hi @nlloyd, see how to store encrypted credentials in Splunk at https://www.splunk.com/en_us/blog/security/storing-encrypted-credentials.html. In other words, you have to run the script through Splunk so that the credentials are stored in encrypted form in the Splunk conf files. Then see https://www.splunk.com/en_us/blog/tips-and-tricks/enable-first-run-app-configuration-with-setup-pages.html#:~:text=A%20setup%20page%20is%20a,the%20Splunk%20Web%20user%20interface. for how to configure your add-on to show a setup page where the password is entered and then stored in a conf file in encrypted form. Ciao. Giuseppe
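As a rough sketch of what the script-based approach from the first blog post boils down to, here is a minimal Python example using splunklib's storage/passwords endpoint; the host, credentials, app name and realm are placeholders, not values from the post:

import splunklib.client as client

# Connect to splunkd's management port (placeholder credentials).
service = client.connect(
    host="localhost",
    port=8089,
    username="admin",
    password="changeme",
    app="my_addon",            # hypothetical app context
)

# Store the third-party secret encrypted in passwords.conf.
service.storage_passwords.create(
    password="my-3rd-party-api-token",   # the secret to protect
    username="api_user",                 # identifier for the credential
    realm="my_addon_realm",              # optional namespace
)

# Later, read it back (only roles with the right capability can do this).
for cred in service.storage_passwords:
    if cred.realm == "my_addon_realm" and cred.username == "api_user":
        secret = cred.clear_password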
Hello, how do I pass data/tokens from one report to another report? Thank you for your help. I am trying to run a weekly report that produces the top 4 students (out of 100); once I know the top 4 students, I will run another report that provides detailed grade information for those 4 students. For example:

Report 1
StudentID  Name      GPA  Percentile  Email
101        Student1  4    100%        Student1@email.com
102        Student2  3    90%         Student2@email.com
103        Student3  2    70%         Student3@email.com
104        Student4  1    40%         Student4@email.com

Report 2
StudentID  Course   Grade
101        Math     100
101        English  95
102        Math     90
102        English  90
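One common pattern for this is to let a subsearch feed the top 4 StudentIDs into the detail search; a minimal sketch, assuming hypothetical index names grades_summary and grades_detail in place of the real searches behind Report 1 and Report 2:

index=grades_detail
    [ search index=grades_summary
      | sort - GPA
      | head 4
      | fields StudentID ]
| table StudentID Course Grade

The subsearch expands into a filter like (StudentID=101 OR StudentID=102 OR ...). Alternatively, Report 1 can write its results with | outputlookup top_students.csv and Report 2 can filter with [| inputlookup top_students.csv | fields StudentID], which avoids the subsearch time and size limits.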
Hello, if I put your suggested search into a plain search in Splunk it didn't work, but I was able to create a dashboard using your search. I was also able to export it to PDF manually by clicking Export => Download PDF. 1) How do I schedule a dashboard as a PDF? Should I create the dashboard first and then put it into a report? My goal is to send an email once a week with a report covering a specific time frame (e.g. 30 days) to determine a ranking. 2) What is the purpose of token=sid and the <done> element? Thanks
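For background on question 2: in Simple XML, a <done> block runs when a search job finishes, and token=sid usually means the job's search ID is being captured via $job.sid$ so other panels or a drilldown can reuse that job. A minimal sketch (the label, search and time range are placeholders, not the search from this thread):

<dashboard>
  <label>Weekly ranking</label>
  <search id="base">
    <query>index=grades_summary | sort - GPA | head 4</query>
    <earliest>-30d@d</earliest>
    <latest>now</latest>
    <done>
      <!-- capture the finished job's search ID into a token named sid -->
      <set token="sid">$job.sid$</set>
    </done>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <query>| table StudentID Name GPA</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

For question 1, a classic (Simple XML) dashboard can usually be scheduled from its Export menu ("Schedule PDF Delivery"), or you can schedule the underlying report and enable the email action's option to attach a PDF; availability depends on your version and on whether integrated PDF generation is enabled on the deployment.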
Hi all, I'm very new to Splunk, so apologies if this is a very basic question. I've looked around and haven't found a conclusive answer so far. I'm building an app that will require an API token from a 3rd-party system during the setup step. What I don't understand is how I can store that API token via a call to storage/passwords without also requiring the user to enter their Splunk credentials or a Splunk API token. I would really appreciate it if someone could point out how I can do this! Ideally, I'm looking to use the JS SDK, so I'd need some way to create an instance of the Service object without admin user credentials having to be entered manually. Thanks in advance!
@phanTom We ended up using a custom function with phantom.merge() as this fit our needs and was very simple (a few lines of code). We also found out how to use phantom.collect() to read everything in, after some trial and error. Thank you for pointing out the add-on app.
Hello, we've encountered a problem with the TA-crowdstrike-falcon-event-streams TA, which was functional in the past. Splunk Enterprise onPrem VERSION=9.1.2 BUILD=b6b9c8185839 PRODUCT=splunk PLATFORM=Linux-x86_64 When opening the UI to configure the crowdstrike Auth we'll end up with Err 500. Same for the other views. I've tried to reinstall it, but it didn't change anything. Splunkd logs the following:     01-26-2024 16:13:29.817 +0100 ERROR AdminManagerExternal [3102377 TcpChannelThread] - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 706, in urlopen\n chunked=chunked,\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 382, in _make_request\n self._validate_conn(conn)\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 1010, in _validate_conn\n conn.connect()\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connection.py", line 421, in connect\n tls_in_tls=tls_in_tls,\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 453, in ssl_wrap_socket\n ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 495, in _ssl_wrap_socket_impl\n return ssl_context.wrap_socket(sock)\n File "/opt/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket\n session=session\n File "/opt/splunk/lib/python3.7/ssl.py", line 878, in _create\n self.do_handshake()\n File "/opt/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake\n self._sslobj.do_handshake()\nssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/requests/adapters.py", line 449, in send\n timeout=timeout\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 756, in urlopen\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/util/retry.py", line 574, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/TA-crowdstrike-falcon-event-streams/configs/conf-ta_crowdstrike_falcon_event_streams_settings/_reload (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)')))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunktaucclib/rest_handler/handler.py", line 124, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunktaucclib/rest_handler/handler.py", line 162, in get\n self.reload()\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunktaucclib/rest_handler/handler.py", line 259, in reload\n action="_reload",\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 320, in wrapper\n return request_fun(self, *args, **kwargs)\n File 
"/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 79, in new_f\n val = f(*args, **kwargs)\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 727, in get\n response = self.http.get(path, all_headers, **query)\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 1254, in get\n return self.request(url, { 'method': "GET", 'headers': headers })\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/splunklib/binding.py", line 1316, in request\n response = self.handler(url, message, **kwargs)\n File "/opt/splunk/etc/apps/TA-crowdstrike-falcon-event-streams/lib/solnlib/splunk_rest_client.py", line 147, in request\n **kwargs,\n File "/opt/splunk/lib/python3.7/site-packages/requests/api.py", line 61, in request\n return session.request(method=method, url=url, **kwargs)\n File "/opt/splunk/lib/python3.7/site-packages/requests/sessions.py", line 542, in request\n resp = self.send(prep, **send_kwargs)\n File "/opt/splunk/lib/python3.7/site-packages/requests/sessions.py", line 655, in send\n r = adapter.send(request, **kwargs)\n File "/opt/splunk/lib/python3.7/site-packages/requests/adapters.py", line 514, in send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/TA-crowdstrike-falcon-event-streams/configs/conf-ta_crowdstrike_falcon_event_streams_settings/_reload (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1106)')))\n". See splunkd.log/python.log for more details.     inputs.conf   [splunktcp-ssl:8089] disabled = 0 requireClientCert = false sslVersions = * [...] [SSL] serverCert = <path> requireClientCert = true allowSslRenegotiation = true sslCommonNameToCheck = <others> 127.0.0.1,SplunkServerDefaultCert   server.conf   [sslConfig] enableSplunkdSSL = true sslVersions = tls1.2 serverCert = /opt/splunk/etc/auth/<path>.pem sslRootCAPath = /opt/splunk/etc/auth/<path>.pem requireClientCert = true sslVerifyServerName = true sslVerifyServerCert = true sslCommonNameToCheck = <FQDNs> cliVerifyServerName = false sslPassword = <pw>     We're looking forward for your help! Thank you!
I have a couple of scheduled reports that I SCP off of our Splunk Enterprise instance. Both reports are in /opt/splunk/etc/apps/search/lookups. One of the reports I set up a while ago; its permissions look right and I can SCP it (file1.csv). The new report gives me a permission denied when I try to copy it (file2.csv).

File 1:
-rw-r-----. 1 splunk splunk 306519 Jan 26 05:00 file1.csv
File 2:
-rw-------. 1 splunk splunk 1177070 Jan 26 03:00 file2.csv

Not sure how to get file2.csv group readable so I can copy it off.
Hello everyone, I'm currently trying to optimize Splunk disk space and index usage. I have read about:
- changing the parameter "Pause indexing if free disk space (in MB) falls below"
- advice to never modify the indexes.conf parameters
- some other posts from the community
But I'm not quite sure about the solution for my problem: the coldToFrozenDir/Script parameters are empty. Kind regards, Tybe
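For what it's worth, per-index disk usage is normally controlled in indexes.conf rather than by the pause-indexing threshold; below is a minimal sketch with illustrative values (the index name and limits are placeholders, and if coldToFrozenDir and coldToFrozenScript are both empty, frozen buckets are simply deleted):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# cap the total size of hot/warm/cold buckets for this index (MB)
maxTotalDataSizeMB = 100000
# roll buckets to frozen after ~90 days (seconds)
frozenTimePeriodInSecs = 7776000
# optional: archive frozen buckets instead of deleting them
# coldToFrozenDir = /archive/my_index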
Quoting the docs, "How Splunk software determines time zones": To determine the time zone to assign to a timestamp, Splunk software uses the following logic in order of precedence:
1. Use the time zone specified in raw event data (for example, PST, -0800), if present.
2. Use the TZ attribute set in props.conf, if the event matches the host, source, or source type that the stanza specifies.
3. If the forwarder and the receiving indexer are version 6.0 or higher, use the time zone that the forwarder provides.
4. Use the time zone of the host that indexes the event.
From my experience with Windows (I see you have the Windows event format), the most common error is when someone forgets to set the system time zone on install; as a result the whole server really is in the wrong time zone and effectively uses the wrong time. Otherwise Windows events are properly ingested and parsed (I assume you have TA_windows on your receiving indexers or HF).
Hello everybody. I'm asking this question because I'm trying to generate detections for a Google Workspace invader, similar to this post about M365: https://www.splunk.com/en_us/blog/security/hunting-m365-invaders-blue-team-s-guide-to-initial-access-vectors.html. But I cannot find Google Workspace login logs in the current ingest. We installed the add-on and the newest apps available on Splunkbase and could not find them. Browsing in Splunk Web, we couldn't find searches equivalent to those in the linked post. Has anybody had the same problem? How can I solve it?
Review how the data is ingested.  By default, Splunk Cloud presumes all event times are UTC.  That means all non-UTC timestamps must be identified as such.  The TIME_FORMAT setting in props.conf should include the time zone if the event timestamp does (your sample event does not).  Other events should use the TZ setting in props.conf to specify the time zone. Every sourcetype onboarded should have props.conf settings to avoid having Splunk make incorrect assumptions about the data.
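As an illustration, a props.conf stanza for a source type whose timestamps look like 01/24/2023 09:42:07 AM with no time zone indicator might look roughly like the sketch below; the sourcetype name is a placeholder, and TZ should be the zone the source actually writes its timestamps in (e.g. Europe/Rome for Italian time). These settings need to be applied where the events are parsed (a heavy forwarder, or on Splunk Cloud via a supported app deployment):

[my:windows:sourcetype]
# strptime pattern for "01/24/2023 09:42:07 AM" (no zone in the text itself)
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
# tell Splunk which zone those timestamps are actually written in
TZ = Europe/Rome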
Hi @ITWhisperer, thanks for your answer. It gives me the root cause, and that is fine; now the question is: how should I fix this?
_indextime is not represented in your screenshot. It looks like your event, which contains the text "01/24/2023 09:42:07 AM" (without any timezone information), is being interpreted as UTC, i.e. GMT+0. This is converted to the UTC epoch time (number of seconds) and stored in _time. _time is then displayed in the event view in local time, i.e. GMT+1, so 09:... becomes 08:..., hence your "hour difference".
It looks to me like the dry run completed successfully.  There were no buckets that could not be merged and no failed peers.  Have you tried running with dryrun=false?
Hi Splunkers, I have a problem with timestamps on our platform. Here are some assumptions and acquired knowledge.

Knowledge:
_time = the event time, i.e. the time present in the event; in other words, the time when the event was generated.
_indextime = the index time or, if you prefer, the time when the event was indexed.
The timezone shown can be related to user settings, which can be changed under username -> Preferences -> Timezone.

Environment: a Splunk Cloud SaaS platform with logs ingested in different ways: forwarder (both UF and HF), API, syslog, file monitoring.

Issue: if I expand the event and examine the _time field, why are the event time and the time shown different in my case?

Important additional info:
Our users' timezone setting is GMT+1 (since we are in Italy) for all users.
You see a Windows event as a sample, but the problem is present on all logs: it doesn't matter which log source I consider or how it sends events to Splunk. Every log shows the time difference.
The difference between _time and the time shown is always 1 hour, for every event on every log source.
I searched here on the community and found other topics about this issue; some of them were very useful for gaining basic knowledge, like "Difference Between Event Time and _time", but since we are on cloud (with limited ability to set some of the files and parameters involved) and the issue affects all events, I'm still stuck on this problem.
Just set host and port and let splunklib handle the rest.
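To illustrate: splunklib assembles the management base URL from the pieces, so the Python side only needs host/port/scheme rather than the full URL you pass to curl. A minimal sketch with placeholder host and credentials (not values from this thread):

import splunklib.client as client

# splunklib builds https://<host>:<port> for you; no need to hard-code the base URL.
service = client.connect(
    host="splunk.example.com",   # placeholder
    port=8089,
    scheme="https",
    username="api_user",         # placeholder credentials
    password="secret",
)

# Example call that hits the same REST endpoints curl would: list saved searches.
for saved in service.saved_searches:
    print(saved.name)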
Hi Team, we are trying to onboard AWS CloudWatch metrics and events data into Splunk, and we decided to go with the Splunk Add-on for AWS pull mechanism. I am trying to configure a custom namespace and custom metrics created in AWS, but I am unable to see the metrics in Splunk. I edited the default AWS namespaces and added my custom namespace. Is this the right method to add my custom metrics? Can someone guide me here?
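For comparison, a CloudWatch input created in the Splunk Add-on for AWS ends up as an inputs.conf stanza roughly like the sketch below. The parameter names here are an assumption from memory and can differ between add-on versions, so check the add-on's inputs.conf.spec or UI rather than copying this verbatim; the account, region, namespace and index are placeholders:

[aws_cloudwatch://my_custom_metrics]
# verify every key against your installed add-on version's inputs.conf.spec
aws_account = my_aws_account
aws_region = us-east-1
metric_namespace = MyCompany/AppMetrics
metric_names = .*
metric_dimensions = .*
statistics = Average
period = 300
sourcetype = aws:cloudwatch
index = aws_metrics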
Hi @PickleRick, we already use the base API URL for curl commands and would like to use it in Python.
Hi @VatsalJagani, no, we need to use the base API URL. Thanks.