All Topics

I want to get the `search` string of the requests made to the search head via the API (not the UI), e.g. via the Splunk Python SDK. My idea is to parse the _internal log to get the search_id, then join it with the _audit log to get the search. Here is my SPL:

index=_internal sourcetype=splunkd_access source=*splunkd_access.log method=POST useragent IN (axios*, curl*, python-requests*, splunk-sdk-python*, node-fetch*) NOT user IN (splunk-system-user, "-")
| rex field=uri_path ".*/search/jobs/(?<search_id>[^/]+)"
| eval search_id = "'" . search_id . "'"
| where isnotnull(search_id) AND !like(search_id, "'export'")
| join search_id
    [ search index=_audit action=search info=granted
      | fields search_id search ]
| table _time host clientip user useragent search_id search

However, the `search` column returned by this query is empty, even though the search_id column has the correct value in the form `'<search_id>'`. If I take one `'<search_id>'` and run a query like:

index=_audit action=search info=granted search_id="'<search_id>'"
| table _time search

I get the corresponding search. Somehow my `join` command is not working.
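
A hedged side note for anyone reading along: join runs its subsearch under the default subsearch limits (50,000 results and a 60-second runtime), and rows whose match falls outside those limits come back with empty joined fields. A minimal join-free sketch, assuming those limits are the culprit, searches both indexes at once and correlates with stats:

(index=_internal sourcetype=splunkd_access source=*splunkd_access.log method=POST)
    OR (index=_audit action=search info=granted)
| rex field=uri_path ".*/search/jobs/(?<raw_id>[^/]+)"
| eval search_id = coalesce("'" . raw_id . "'", search_id)
| stats values(host) as host values(clientip) as clientip values(user) as user
        values(useragent) as useragent values(search) as search by search_id
| where isnotnull(useragent) AND isnotnull(search)

The coalesce works because concatenating with a null raw_id yields null, so the audit events keep their own (already single-quoted) search_id.
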
Hi Splunk Community, I recently installed and configured the SentinelOne app on a Splunk 10 Beta environment. The setup went smoothly, but when I try to run any SPL queries, I’m encountering a license expiration message. I'm unsure whether this is related to the Splunk 10 Beta version itself or a misconfiguration. Has anyone else faced this issue in the beta environment, or is there any known limitation regarding licensing in this version? Appreciate any insights or guidance.

I am not able to see the schedule of my saved searches, even though they are cron scheduled. When I save a saved search again, the schedule time can be seen, but after some time it simply no longer shows.
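
For anyone who wants to see what the scheduler actually has on file, independent of what the UI shows, a quick sketch against the saved-searches REST endpoint (assuming permission to run | rest):

| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1
| table title cron_schedule next_scheduled_time disabled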

We are getting the following error when trying to ingest EXO mail logs into Splunk using the add-on:

  line 151, in __call__
    raise ValueError("{} endpoint for found".format(name))
ValueError: MessageTrace endpoint for found

The code raising it is:

if not endpoint:
    raise ValueError("{} endpoint for found".format(name))

Does the Splunk add-on for M365 work when reaching out to GCC High endpoints, or is the add-on not configured for such connections?

All, for both Java and .NET agents in Kubernetes, how is the CPU % calculated? I'm looking at some Java test results and the percentage appears to simply be CPU milliseconds divided by elapsed time, with no account taken of the number of CPUs, CPU requests, or CPU limits. Does that sound right? With Cloud Foundry, the percentage was additionally divided by the number of CPUs: 120k CPU ms per minute (i.e. 120,000 ms of CPU per 60,000 ms of wall-clock time) was 200%, which was then divided by the number of CPUs. For .NET I don't have a milliseconds number, so I can't do the same calculation to verify. Thanks.

Hello, I am building a dashboard in Splunk Enterprise. I included a map with the Choropleth layer type and that worked, but I also have a table that runs a query based on the region clicked on the map, and that part does not work in Splunk Dashboard Studio. I have already defined the token on the map and adjusted the token in the table's query, but it seems the clicked area is not captured. I did the same thing in Splunk Classic and it worked as expected.

Below is the source code of the map:

{
  "dataSources": { "primary": "ds_4lhwtNWq" },
  "eventHandlers": [
    {
      "type": "drilldown.setToken",
      "options": {
        "tokens": [
          { "key": "row.UF.value", "token": "clicked_uf" }
        ]
      }
    }
  ],
  "options": {
    "backgroundColor": "#294e70",
    "center": [ -13.79021870397439, -52.07072204233867 ],
    "layers": [
      {
        "additionalTooltipFields": [ "Quantidade de erros" ],
        "areaIds": "> primary | seriesByName('UF')",
        "areaValues": "> primary | seriesByName('Quantidade de erros')",
        "bubbleSize": "> primary | frameBySeriesNames('Quantidade de erros')",
        "choroplethOpacity": 0.5,
        "choroplethStrokeColor": "transparent",
        "latitude": "> primary | seriesByName('LATITUDE')",
        "longitude": "> primary | seriesByName('LONGITUDE')",
        "resultLimit": 50000,
        "type": "choropleth"
      }
    ],
    "scaleUnit": "imperial",
    "zoom": 5.38493379665208
  },
  "title": "mapa",
  "type": "splunk.map",
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

Below is the SPL query of the table:

index=<index> coderropc="0332"
| eval PC = replace(codpcredeprop, "^0+", "")
| stats count as "Erros por PC" by PC
| join type=left PC
    [| inputlookup PcFabricante.csv
     | eval CODPC = replace(CODPC, "^0+", "")
     | rename CODPC as PC
     | fields PC NOMEFABR MODELO]
| join type=left PC
    [| search index=ars source=GO earliest=-30d@d latest=now
     | eval CODPC = replace(CODPC, "^0+", "")
     | rename CODPC as PC
     | fields PC UF]
| search UF="$token_mapa$"
| table PC, NOMEFABR, MODELO, UF, "Erros por PC"

Is there any configuration that differs between Splunk Classic and Splunk Dashboard Studio? When I give the token a default value on the map, the table receives that value, but clicks are not registered.
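
One detail worth flagging as a possible cause, taken purely from the snippets above: the map's eventHandler sets a token named clicked_uf, while the table's SPL reads $token_mapa$. Assuming the two are meant to be the same token, the table's filter would need to reference the name the map actually sets:

| search UF="$clicked_uf$"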

Hello - I created a field extraction to look for a file extension. The raw log looks like this:

"FileName": "John Test File.docx"

The regex I used was:

"FileName":\s".+\.(?P<Domain>.[a-zA-Z0-9]*)

This tests out in every regex tester I use. When I first created it, I ran a search and some of the fields populated, but some were blank. I checked the records that weren't being extracted and found the regex matched the raw log pattern, so I was unsure why it hadn't extracted. Then, about 30 minutes after I created the field extraction, it stopped extracting anything. In the state I'm in now, I can see that each raw log record matches my extraction regex, but the fields are still empty and nothing is extracted. Why would that be, when every raw log matches the regex in the extraction?
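
One way to separate the regex itself from the field-extraction plumbing is to run the same pattern through rex at search time; a sketch (the index name is a placeholder, and the quotes are escaped for SPL):

index=<your_index> "FileName"
| rex "\"FileName\":\s\".+\.(?P<Domain>.[a-zA-Z0-9]*)"
| table _raw Domain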

What is the correct format for domain users, please? If I curl from a HF, I get the desired 200 response using:

curl -v http://mywebsite.com --ntlm -u username@mydomain.ad.ltd.com.au

If I use this format in the TA, I see an error message in the logs asking for the domain\\username format. I have tried several combinations of mydomain\\username but have not been successful. What should the format be for this domain? Or is the issue with --ntlm? If we use the --negotiate flag or remove --ntlm, we get a 401. Cheers.
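
For reference, the domain\username form usually needs shell quoting so the backslash survives; a hedged example, where MYDOMAIN stands in for the NetBIOS-style short domain name (often not the same as the DNS suffix mydomain.ad.ltd.com.au):

curl -v http://mywebsite.com --ntlm -u 'MYDOMAIN\username'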

Hi everyone, we are using Splunk Enterprise in our company and want to ingest logs from applications hosted in the cloud. However, when we connect, we get a lot of logs unrelated to our application, which causes high license utilization. Is there a method to filter down to only the logs we want (for example, logs from a specific application or log source) before they are ingested into Splunk, so that we reduce license utilization while still getting the required security logs for the application?
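
For completeness, the classic way to drop events before they count against the license is a nullQueue transform on the first heavy forwarder or indexer the data passes through. A minimal sketch, where the sourcetype and the keep-pattern are placeholders to adapt:

props.conf:
[my:cloud:sourcetype]
TRANSFORMS-filter = drop_everything, keep_my_app

transforms.conf:
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_my_app]
REGEX = my-application-name
DEST_KEY = queue
FORMAT = indexQueue

Order matters: the transforms run left to right, so everything is first routed to the null queue and only events matching the keep-pattern are routed back to the index queue.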

Hi everyone, I want to ingest logs from applications hosted in the cloud (such as AWS and Azure). In our company we are using Splunk Enterprise. Can Data Manager be used to ingest, and filter down to, only the logs pertaining to that application's security in Splunk Enterprise?

Onboarding Cisco FTD firewalls presents the choice of which add-on to use. Cisco FTD firewalls apparently run both the ASA core and the FTD core, which means they send different types of events: the ASA events are best handled with the cisco:asa sourcetype, whereas the FTD events are handled by cisco:ftd:syslog. However, all events in our environment use %FTD to tag themselves, which makes them harder to differentiate. Which add-on is preferred (I'd expect Cisco Security Cloud, but it still has some flaws)? And how should we get these events in with the correct sourcetype? My suggestion would be to send all events in with the cisco:asa sourcetype and include a transform that checks whether the FTD message code is in the 43k range, e.g. REGEX=%FTD-\d-43\d+, as sketched below.
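
A sketch of what that sourcetype rewrite could look like, using the message-ID range suggested above (treat the regex as a starting point rather than a tested rule):

props.conf:
[cisco:asa]
TRANSFORMS-ftd = force_ftd_sourcetype

transforms.conf:
[force_ftd_sourcetype]
REGEX = %FTD-\d-43\d+
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:ftd:syslog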

Hello, I am trying to use a different Python version for my external lookup. The global version is 3.7 and my custom one is 3.10; /opt/splunk/bin contains both 3.7 and 3.10.

In transforms.conf I changed the Python version:

[externallookup]
python.version = python3.10

However, I am getting an error (screenshot not reproduced here). When I use:

[externallookup]
python.version = python3.7

it does not give the error. I am also able to use the new Python version when I change the symlink from 3.7 to my 3.10 (for debugging). But why doesn't it work when I set python.version to python3.10? Thanks in advance!
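
One thing worth checking, since the accepted values vary by Splunk version: python.version takes a fixed set of values defined in the spec file, not an arbitrary interpreter name, so listing what the installed version accepts may explain the error:

grep -B2 -A8 "python.version" $SPLUNK_HOME/etc/system/README/transforms.conf.spec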

Hello everyone, I need to export search results to a folder outside of Splunk. For this job we have the exportresults command, which works fine. In my scenario it is a saved search that runs every week, and the data is exported to the folder, but it creates a new folder each time. I need to either append the search results to the existing file or replace the file with the new data. If I can get either of those to work, I'm good. Thanks.
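
If writing inside Splunk's own dispatch area is acceptable, outputcsv supports appending to an existing file (the file lands under $SPLUNK_HOME/var/run/splunk/csv); a sketch with a hypothetical file name:

... your saved search ...
| outputcsv append=true weekly_results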

I have a custom validator class in which, based on the input selected by the customer, I update inputs.conf during configuration. But I found that, while the account name is sent in the name field during configuration, it is not present in the dictionary during data validation. Basically, what I want is to have the account name available at configuration time.
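
For readers unfamiliar with the shape of such validators, a minimal sketch of a UCC-style custom validator, assuming splunktaucclib is in use (the class name and message are hypothetical); whether "name" actually appears in the data dictionary at validation time is exactly the open question here:

from splunktaucclib.rest_handler.endpoint.validator import Validator

class AccountNameValidator(Validator):
    def validate(self, value, data):
        # "value" is the field under validation; "data" holds the other
        # submitted form fields from the configuration page.
        account_name = data.get("name")
        if not account_name:
            self.put_msg("Account name missing from the validation payload")
            return False
        return True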

Splunk is in GMT and the server is in EST. When displayed in Dashboard Studio, dates are formatted based on the server's time, e.g. 2025-06-30T20:00:00-04:00, but the same values displayed in Classic dashboards show up as received in the events. I want Dashboard Studio to show the exact date, without the 'T'.
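
A common workaround, assuming the value can be shaped in SPL before the table renders it, is to convert the timestamp into an explicit string so Dashboard Studio has nothing left to reformat:

| eval display_time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| table display_time ...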

Hello everyone, I use a Dell Windows laptop, and after downloading the Splunk Enterprise 9.4.3 installer for Windows, I'm unable to install it because of an error prompt. Could I please get a step-by-step approach to fixing this?

I am trying to find the Dataset Credits for mltk_ai_commander.csv, which comes with MLTK 5.6.0 and higher, according to the user guide. I checked the MLTK Dataset Credits page, but it looks like it hasn't been updated for this version yet. Does anyone know if there is somewhere else I can find authorship or attribution information?
Can I get a PDF of the Splunk Enterprise 9.4.3 Release Notes?

Hi all, I've got a dashboard that uses a JS script to dynamically set the $row_count_tok$ token based on screen orientation:

24 for landscape (2 pages of 12 rows)
40 for portrait (2 pages of 20 rows)

I pass this token into my search to determine how many rows to return, and then paginate them like so:

......
| head $row_count_tok$
| streamstats count as Page
| eval Page = case(
    $row_count_tok$=24 AND Page<=12, 0,
    $row_count_tok$=24 AND Page>12, 1,
    $row_count_tok$=40 AND Page<=20, 0,
    $row_count_tok$=40 AND Page>20, 1)
| eval display = floor(tonumber(strftime(now(), "%S")) / 10) % 2
| where Page = display
| fields - display

The token and logic work (tested manually), but on page load I get this message, indicating the token was not ready when the search ran:

Search is waiting for input...

How do I force the query to wait for the token to load? Many thanks.
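
One hedged approach, assuming this is a Classic (SimpleXML) dashboard: give the token a default in an <init> block so the search can run immediately, and let the JS overwrite it once the orientation is known:

<init>
  <set token="row_count_tok">24</set>
</init>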

Hello family, does anyone know of, or have sources that explain, how to use or build custom functions in Splunk SOAR?
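
As a starting point, this is roughly the skeleton the SOAR UI generates when you create a custom function (a sketch from memory; the function name and input_1 are placeholders, and details vary by SOAR version):

def my_custom_function(input_1=None, **kwargs):
    """
    Custom functions return a JSON-serializable object that downstream
    playbook blocks can consume.
    """
    import json
    import phantom.rules as phantom

    outputs = {}

    # ... custom logic goes here ...

    # Raises if outputs is not JSON-serializable
    assert json.dumps(outputs)
    return outputs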