All Posts

@SN1  Does the user who created/modified the savedsearch have enough permissions? Also, what's the value of enableSched in savedsearches.conf? Make sure your search has enableSched = 1 in savedsearches.conf.
#https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Savedsearchesconf
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
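For example, a minimal savedsearches.conf stanza with scheduling enabled might look like this (the stanza name, cron schedule, and search here are placeholders, not taken from the original post):

[My Scheduled Search]
search = index=main error | stats count
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now

The key point is that without enableSched = 1 the cron_schedule setting is ignored and the search will not show up as scheduled.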
We'd have to see your config to see why your httpout didn't work. In general it does work. And SC4S is something completely different. You shouldn't receive syslog directly on an HF anyway.
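For comparison, a minimal outputs.conf [httpout] sketch of the kind that does work in general (the URI and token are placeholders):

[httpout]
httpEventCollectorToken = 00000000-0000-0000-0000-000000000000
uri = https://hec.example.com:8088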
When you edit the search in this state, is it still initially enabled or disabled? Did you check the config with btool?
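For example (the search name is a placeholder), btool shows the effective on-disk configuration and which file each setting comes from:

splunk btool savedsearches list "My Scheduled Search" --debug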
I am not able to see the schedule of the saved searches although they are cron scheduled. When I save the saved search again, the schedule can be seen, but after some time it just does not show.
@Jayanthan  You have multiple options depending on your architecture. The best approach is always to filter at the source itself, but if that's not possible:
Use props.conf and transforms.conf on Splunk Enterprise to drop events before indexing.
If you're on Splunk 9+, you can use Ingest Actions.
Ref
#https://docs.splunk.com/Documentation/Splunk/9.4.2/Data/DataIngest
#https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Using_ingest_actions_to_filter_Windows_event_logs
You can also consider using Splunk Edge Processor.
#https://help.splunk.com/en/splunk-cloud-platform/process-data-at-the-edge/use-edge-processors/9.3.2411/getting-started/about-the-edge-processor-solution
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
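As an illustration of the props.conf/transforms.conf option, a minimal sketch that drops matching events before indexing (the sourcetype name and regex are placeholders, not from the original question):

props.conf:
[your_sourcetype]
TRANSFORMS-drop_noise = drop_noise_events

transforms.conf:
[drop_noise_events]
REGEX = EventCode=4662
DEST_KEY = queue
FORMAT = nullQueue

This must live on the first full Splunk Enterprise instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.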
@TestAdminHorst  This add-on is primarily designed for standard Microsoft 365 environments. GCC High and DoD tenants operate in a different cloud environment with restricted endpoints, so the standard add-on endpoints may not work with GCC High.
#https://learn.microsoft.com/en-us/office/dev/add-ins/publish/government-cloud-guidance
But you can consider writing a custom script for the GCC High endpoints.
#https://learn.microsoft.com/en-us/microsoft-365/enterprise/microsoft-365-u-s-government-gcc-high-endpoints?view=o365-worldwide
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Couldn't you just use 1601 or some other port?
It appears this configuration will not work. I have transitioned it to SC4S, which appears to be the only option.
I very much doubt there is any kind of documentation - I know this issue has been discussed here and on Splunk Slack channels and there is more than one way to achieve this. This site is a good resource for this type of question.
Retried with curl and the domain\\username format and got curl to work - but the response is initially a 401, then it retries and is successful. The request goes through a load balancer first, en route to the webserver.

> curl http://mywebsite/healthcheck.aspx -v --ntlm -u DOMAIN\\username
Enter host password for user 'DOMAIN\username':
*   Trying 1.1.1.1 ...
* TCP_NODELAY set
* Connected to myhost (1.1.1.1) port 80 (#0)
* Server auth using NTLM with user 'DOMAIN\username'
> GET /healthcheck.aspx HTTP/1.1
> Host: myhost
> Authorization: NTLM XXX
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Content-Type: text/html; charset=us-ascii
< Server: Microsoft-HTTPAPI/2.0
< WWW-Authenticate: NTLM XXX
< Date: Thu, 03 Jul 2025 01:07:05 GMT
< Content-Length: 341
<
* Ignoring the response-body
* Connection #0 to host myhost left intact
* Issue another request to this URL: 'http://myhost/healthcheck.aspx'
* Found bundle for host myhost: 0x55a8787a6a60 [can pipeline]
* Re-using existing connection! (#0) with host myhost
* Connected to myhost (1.1.1.1) port 80 (#0)
* Server auth using NTLM with user 'DOMAIN\username'
> GET /healthcheck.aspx HTTP/1.1
> Host: myhost
> Authorization: NTLM XXX
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Cache-Control: private
< Content-Type: text/html; charset=utf-8
< Server: Microsoft-IIS/10.0
< X-AspNet-Version: 4.0.30319
< Persistent-Auth: true
< X-Powered-By: ASP.NET
< Date: Thu, 03 Jul 2025 01:07:05 GMT
< Content-Length: 557
<
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title>Health Check</title>
</head>
We are getting the following error when trying to ingest EXO mail logs into Splunk using the add-on.

line 151, in __call__
    raise ValueError("{} endpoint for found".format(name))
ValueError: MessageTrace endpoint for found

if not endpoint:
    raise ValueError("{} endpoint for found".format(name))

Does the Splunk add-on for M365 work when reaching out to GCC HIGH endpoints? Or is the add-on not configured for such connections?
Hi @arlissilva  It looks like your table SPL uses "token_mapa" but the token you are setting is "clicked_uf". If you update these so they match, do you get the same issue? Assuming that UF is a valid field in the map results, the clicked_uf value should be updated - do you see this update in your browser URL?
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
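For example, one way to make the two sides agree (a sketch assuming you keep the name the table already uses, token_mapa) is to change the drilldown token name in the map's source and leave the table SPL as it is:

"eventHandlers": [
  {
    "type": "drilldown.setToken",
    "options": {
      "tokens": [
        { "key": "row.UF.value", "token": "token_mapa" }
      ]
    }
  }
]

The table's | search UF="$token_mapa$" then picks up the value set by each click.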
Thanks for your reply. But if you look at some example log entries, it's fairly obvious that these are 2 different sourcetypes. The structure is completely different. I'd like to split these events at the source. The field extractions, aliases and CIM-ing of the data are just completely different for the ASA-formatted logs and the FTD-formatted logs. Hence, I'm wondering why this is not addressed in the Cisco Security Cloud Add-On. It has an out-of-the-box "change sourcetype" transform for cisco:asa events to change to cisco:ftd:syslog when the event has a %FTD code, and for cisco:ftd:syslog events a transform to change to cisco:asa when the event has a %ASA code. However, all events arrive with a %FTD code here, so the default behaviour doesn't work. You can see the big difference from these 2 examples (FTD events with key value pairs separated by ':', ASA events with a more sentence-like structure).

313004
<164>2025-07-02T11:13:26Z CF1 : %FTD-4-313004: Denied ICMP type=0, from laddr 172.143.19.36 on interface IT-1 to 10.40.72.24: no matching session
ASA

430004
<13>2025-07-02T11:29:03Z CF2 : %FTD-1-430004: DeviceUUID: 104cb27c-227a-11ee-b7ae-880bf955e0c1, InstanceID: 5, FirstPacketSecond: 2025-07-02T11:29:00Z, ConnectionID: 14812, SrcIP: 172.19.47.25, DstIP: 10.30.71.65, SrcPort: 64523, DstPort: 445, Protocol: tcp, FileDirection: Download, FileAction: Malware Cloud Lookup, FileSHA256: c885df893496d5c28ad16a1ecd12e259e191f54ad76428857742af843b846c53, SHA_Disposition: Unavailable, SperoDisposition: Spero detection not performed on file, FileName: DAC\BGinfo\Bginfo.exe, FileType: MSEXE, FileSize: 2198952, ApplicationProtocol: NetBIOS-ssn (SMB), Client: NetBIOS-ssn (SMB) client, WebApplication: SMBv3-unencrypted, FilePolicy: Malware Detect, FileStorageStatus: Not Stored (Disposition Was Pending), FileSandboxStatus: File Size Is Too Large, IngressVRF: Global, EgressVRF: Global
FTD
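For what it's worth, a minimal sketch of a custom override (this is an assumption, not something shipped in the Cisco Security Cloud Add-On) that keys the sourcetype on the body structure rather than on the %FTD/%ASA prefix. It assumes events arrive as cisco:asa and that the FTD-style events can be recognised by the DeviceUUID: key taken from the sample above, which may need a broader pattern:

props.conf:
[cisco:asa]
TRANSFORMS-force_ftd_format = force_ftd_when_kv_style

transforms.conf:
[force_ftd_when_kv_style]
REGEX = %FTD-\d-\d+:\s+DeviceUUID:
FORMAT = sourcetype::cisco:ftd:syslog
DEST_KEY = MetaData:Sourcetype

You would also need to make sure the add-on's own %FTD-based override no longer applies (its transform class name can be found with btool), otherwise it will still re-type the sentence-style events.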
All, For both Java and .NET agents in Kubernetes, how is the CPU % calculated? I'm looking at some Java test results and the % appears to simply be CPU millis divided by time, with no account for the number of CPUs, CPU requests, or CPU limits. Does that sound right? With CloudFoundry, the % was additionally divided by the number of CPUs, so 120k ms/min was 200% divided by the number of CPUs. For .NET, I don't have a millis number so I can't make the same calculation to verify. Thanks.
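For concreteness, the reading described above (an assumption based on the observed numbers, not documented agent behavior) works out as:

Kubernetes agent:   CPU % = CPU millis / wall-clock millis
                    120,000 ms / 60,000 ms = 200%
CloudFoundry agent: CPU % = (CPU millis / wall-clock millis) / number of CPUs
                    e.g. with 4 CPUs: 200% / 4 = 50%

The 4-CPU figure is only illustrative; the open question is whether the Kubernetes agents skip that final division.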
This is awesome and just solved my custom panel width issue after banging my head for the last hour. Do you know where this attribute is documented, either for the behavior or for which CSS items are supported per version of Splunk?
Hello, I am building a dashboard in Splunk Enterprise. I included the map with the Choropleth layer type and that worked for me, but I have a table that performs a query based on the region clicked on the map, and that part does not work in Splunk Dashboard Studio. I have already defined the token on the map and adjusted the token in the table's query, but it seems that it does not capture the clicked area. I did the same process in Splunk Classic and it worked as expected.

Below is the source code of the map:

{
  "dataSources": {
    "primary": "ds_4lhwtNWq"
  },
  "eventHandlers": [
    {
      "type": "drilldown.setToken",
      "options": {
        "tokens": [
          {
            "key": "row.UF.value",
            "token": "clicked_uf"
          }
        ]
      }
    }
  ],
  "options": {
    "backgroundColor": "#294e70",
    "center": [
      -13.79021870397439,
      -52.07072204233867
    ],
    "layers": [
      {
        "additionalTooltipFields": [
          "Quantidade de erros"
        ],
        "areaIds": "> primary | seriesByName('UF')",
        "areaValues": "> primary | seriesByName('Quantidade de erros')",
        "bubbleSize": "> primary | frameBySeriesNames('Quantidade de erros')",
        "choroplethOpacity": 0.5,
        "choroplethStrokeColor": "transparent",
        "latitude": "> primary | seriesByName('LATITUDE')",
        "longitude": "> primary | seriesByName('LONGITUDE')",
        "resultLimit": 50000,
        "type": "choropleth"
      }
    ],
    "scaleUnit": "imperial",
    "zoom": 5.38493379665208
  },
  "title": "mapa",
  "type": "splunk.map",
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

Below is the SPL query of the table:

index=<index> coderropc="0332"
| eval PC = replace(codpcredeprop, "^0+", "")
| stats count as "Erros por PC" by PC
| join type=left PC
    [| inputlookup PcFabricante.csv
     | eval CODPC=replace(CODPC, "^0+", "")
     | rename CODPC as PC
     | fields PC NOMEFABR MODELO]
| join type=left PC
    [| search index=ars source=GO earliest=-30d@d latest=now
     | eval CODPC=replace(CODPC, "^0+", "")
     | rename CODPC as PC
     | fields PC UF]
| search UF="$token_mapa$"
| table PC, NOMEFABR, MODELO, UF, "Erros por PC"

Is there any configuration that is different between Splunk Classic and Splunk Dashboard Studio? When I add the default value in the map, the table receives the value, but it does not register the clicks.
Thank you, that helped!
Hi @RowdyRodney  How are you doing this extraction? Is it a search-time extraction in Splunk Enterprise/Cloud? These use PCRE-based regex, whereas you have provided a Python-style named capturing group (?P...).
Please can you update this to a PCRE-based regex and see if this resolves the issue?
"FileName":\s".+\.(?<Domain>.[a-zA-Z0-9]*)
Can I also check, is the intention that it matches the file extension (docx) in your sample data?
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
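For reference, a minimal sketch of how the PCRE version could be applied as a search-time extraction in props.conf (the sourcetype name is a placeholder; the regex is the one from the post above):

[your_sourcetype]
EXTRACT-Domain = "FileName":\s".+\.(?<Domain>.[a-zA-Z0-9]*)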
Hi @Esky73  The app uses the HttpNtlmAuth/requests-ntlm library which, as you've found, does require the username in 'domain\\username' format. There doesn't look to be a way around this. It should be possible to authenticate using domain\\username, but the domain isn't always the first part after the @ symbol in the full domain, e.g. it could be "mydomain", "mydomain.ad" or something completely different. Are you able to check with your AD team to see what this value should be?
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hello - I created a Field Extraction to look for a file extension. The raw log looks like this:
"FileName": "John Test File.docx"
The regex I used was:
"FileName":\s".+\.(?P<Domain>.[a-zA-Z0-9]*)
This tests out in any regex tester I use. When I first created this, I ran a search query and some of the fields populated, but some were blank. I then checked which records weren't being extracted correctly, and found the regex matched the raw log pattern, so I was unsure why it wouldn't have extracted. However, ~30 minutes after creating this field extraction, it stopped extracting anything. In the state I'm in now, I can see that each raw log record matches my extraction regex, but the fields are still empty and nothing is being extracted. Why would that be? Each raw log matches the regex in the extraction...