All Posts

Hi @arlissilva

It looks like your table SPL uses "token_mapa" but the token you are setting is "clicked_uf". If you update these so they match, do you still get the same issue? Assuming that UF is a valid field in the map results, the clicked_uf value should be updated when you click; do you see this update in your browser URL? I've put a quick sketch below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
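For reference, a minimal sketch of the map's drilldown handler with the token renamed to match the table's $token_mapa$ (either name works, as long as the map and the table agree):

"eventHandlers": [
  {
    "type": "drilldown.setToken",
    "options": {
      "tokens": [
        { "key": "row.UF.value", "token": "token_mapa" }
      ]
    }
  }
]
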
Thanks for your reply. But if you look at some example log entries, it's fairly obvious that these are 2 different sourcetypes. The structure is completely different. I'd like to split these events at the source. The field extractions, aliases and CIM-ing of the data are just completely different for the ASA-formatted logs and the FTD-formatted logs. Hence, I'm wondering why this is not addressed in the Cisco Security Cloud Add-On. It has an out-of-the-box "change sourcetype" transform that switches cisco:asa events to cisco:ftd:syslog when they carry an %FTD code, and switches cisco:ftd:syslog events to cisco:asa when they carry an %ASA code. However, all events arrive here with an %FTD code, so the default behaviour doesn't work.

You can see the big difference from these 2 examples (FTD events have key-value pairs separated by ":", ASA events have a more sentence-like structure):

313004 (ASA-style):
<164>2025-07-02T11:13:26Z CF1 : %FTD-4-313004: Denied ICMP type=0, from laddr 172.143.19.36 on interface IT-1 to 10.40.72.24: no matching session

430004 (FTD-style):
<13>2025-07-02T11:29:03Z CF2 : %FTD-1-430004: DeviceUUID: 104cb27c-227a-11ee-b7ae-880bf955e0c1, InstanceID: 5, FirstPacketSecond: 2025-07-02T11:29:00Z, ConnectionID: 14812, SrcIP: 172.19.47.25, DstIP: 10.30.71.65, SrcPort: 64523, DstPort: 445, Protocol: tcp, FileDirection: Download, FileAction: Malware Cloud Lookup, FileSHA256: c885df893496d5c28ad16a1ecd12e259e191f54ad76428857742af843b846c53, SHA_Disposition: Unavailable, SperoDisposition: Spero detection not performed on file, FileName: DAC\BGinfo\Bginfo.exe, FileType: MSEXE, FileSize: 2198952, ApplicationProtocol: NetBIOS-ssn (SMB), Client: NetBIOS-ssn (SMB) client, WebApplication: SMBv3-unencrypted, FilePolicy: Malware Detect, FileStorageStatus: Not Stored (Disposition Was Pending), FileSandboxStatus: File Size Is Too Large, IngressVRF: Global, EgressVRF: Global
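To make it concrete, here's a sketch of the kind of index-time override I'm after, assuming the FTD-formatted events can be recognised by their 43xxxx message IDs (regex and stanza names are mine, not from the add-on):

props.conf:
[cisco:asa]
TRANSFORMS-force_ftd = force_ftd_sourcetype

transforms.conf:
[force_ftd_sourcetype]
REGEX = %FTD-\d+-43\d{4}:
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:ftd:syslog
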
All, for both Java and .NET agents in Kubernetes, how is the CPU % calculated? I'm looking at some Java test results and the % appears to simply be CPU millis divided by time, with no account taken of the number of CPUs, CPU requests, or CPU limits. Does that sound right? With CloudFoundry, the % was additionally divided by the number of CPUs, so 120k ms/min was 200% divided by the number of CPUs. For .NET, I don't have a millis number so I can't make the same calculation to verify.

Thanks
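To spell out the arithmetic I'm describing (my own back-of-the-envelope, not from any agent documentation; the CPU count here is hypothetical):

# cpu_pct.py
cpu_millis_per_min = 120_000   # CPU time consumed in one minute
wall_millis_per_min = 60_000   # wall-clock millis in one minute

raw_pct = cpu_millis_per_min / wall_millis_per_min * 100  # 200.0, what the K8s agents seem to report
num_cpus = 2                                              # hypothetical CPU count
cf_pct = raw_pct / num_cpus                               # 100.0, CloudFoundry-style accounting
print(raw_pct, cf_pct)
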
This is awesome and just solved my custom panel width issue after banging my head for the last hour. Do you know where this attribute is documented, for either its behaviour or which CSS items are supported per version of Splunk?
Hello, I am building a dashboard in Splunk Enterprise. I included a map with the Choropleth layer type and that worked for me, but I have a table that performs a query based on the region clicked on the map, and that part does not work in Splunk Dashboard Studio. I have already defined the token on the map and adjusted the token in the table's query, but it seems that it does not capture the clicked area. I did the same process in Splunk Classic and it worked as expected.

Below is the source code of the map:

{
  "dataSources": { "primary": "ds_4lhwtNWq" },
  "eventHandlers": [
    {
      "type": "drilldown.setToken",
      "options": {
        "tokens": [
          { "key": "row.UF.value", "token": "clicked_uf" }
        ]
      }
    }
  ],
  "options": {
    "backgroundColor": "#294e70",
    "center": [ -13.79021870397439, -52.07072204233867 ],
    "layers": [
      {
        "additionalTooltipFields": [ "Quantidade de erros" ],
        "areaIds": "> primary | seriesByName('UF')",
        "areaValues": "> primary | seriesByName('Quantidade de erros')",
        "bubbleSize": "> primary | frameBySeriesNames('Quantidade de erros')",
        "choroplethOpacity": 0.5,
        "choroplethStrokeColor": "transparent",
        "latitude": "> primary | seriesByName('LATITUDE')",
        "longitude": "> primary | seriesByName('LONGITUDE')",
        "resultLimit": 50000,
        "type": "choropleth"
      }
    ],
    "scaleUnit": "imperial",
    "zoom": 5.38493379665208
  },
  "title": "mapa",
  "type": "splunk.map",
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

Below is the SPL query of the table:

index=<index> coderropc="0332"
| eval PC = replace(codpcredeprop, "^0+", "")
| stats count as "Erros por PC" by PC
| join type=left PC
    [| inputlookup PcFabricante.csv
     | eval CODPC=replace(CODPC, "^0+", "")
     | rename CODPC as PC
     | fields PC NOMEFABR MODELO]
| join type=left PC
    [| search index=ars source=GO earliest=-30d@d latest=now
     | eval CODPC=replace(CODPC, "^0+", "")
     | rename CODPC as PC
     | fields PC UF]
| search UF="$token_mapa$"
| table PC, NOMEFABR, MODELO, UF, "Erros por PC"

Is there any configuration that is different between Splunk Classic and Splunk Dashboard Studio? When I add a default value on the map token, the table receives the value, but it does not register the clicks.
Thank you, that helped!
Hi @RowdyRodney

How are you doing this extraction? Is it a search-time extraction in Splunk Enterprise/Cloud? These use PCRE-based regex, whereas you have provided a Python-style named capturing group (?P<Domain>...).

Please can you update this to a PCRE-based regex and see if this resolves the issue?

"FileName":\s".+\.(?<Domain>.[a-zA-Z0-9]*)

Can I also check: is the intention that it matches the file extension (docx) in your sample data? There's a quick test search below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
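As a quick sanity check, something like this should show whether the PCRE version extracts (index and sourcetype are placeholders for your own):

index=your_index sourcetype=your_sourcetype
| rex "\"FileName\":\s\".+\.(?<Domain>.[a-zA-Z0-9]*)"
| table _raw Domain
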
Hi @Esky73

The app uses the HttpNtlmAuth/requests-ntlm library which, as you've found, does require the username in 'domain\\username' format. There doesn't appear to be a way around this. It should be possible to authenticate using domain\\username, but the domain isn't always the first bit after the @ symbol in the full domain; e.g. it could be "mydomain", "mydomain.ad" or something completely different. Are you able to check with your AD team to see what this value should be? I've added a quick test sketch below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
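If it helps to test outside the add-on, here's a minimal sketch using the same library (credentials and URL are placeholders; assumes requests-ntlm is installed):

import requests
from requests_ntlm import HttpNtlmAuth

# NTLM usually wants the short (NetBIOS) domain name, not the full DNS suffix
session = requests.Session()
session.auth = HttpNtlmAuth("MYDOMAIN\\username", "password")

resp = session.get("http://mywebsite.com")
print(resp.status_code)  # expect 200 once the domain\username pair is right
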
Hello - I created a field extraction to look for a file extension. The raw log looks like this:

"FileName": "John Test File.docx"

The regex I used was:

"FileName":\s".+\.(?P<Domain>.[a-zA-Z0-9]*)

This tests out in any regex tester I use. When I first created it, I ran a search query and some of the fields populated, but some were blank. I then checked which records weren't being extracted correctly, and found the regex matched the raw log pattern, so I was unsure why it wouldn't have extracted. However, ~30 minutes after creating this field extraction, it stopped extracting anything. In the state I'm in now, I can see that each raw log record matches my extraction regex, but the fields are still empty and nothing is being extracted. Why would that be? Each raw log matches the regex in the extraction...
Hi @Jayanthan

There are a number of approaches you could take to do this, such as Edge Processor, Ingest Actions, props/transforms, or segregating at source. What tools/apps/processes are you currently using to bring the data in to Splunk? The most efficient way to reduce the amount of data ingested into Splunk is to omit it at source (i.e. not send/pull it)! Please let us know and we can hopefully drill further into options for you; there's a props/transforms sketch below as one illustration.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
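For instance, a heavy forwarder or indexer can drop unwanted events before they count against your license. A minimal props/transforms sketch (sourcetype name and filter string are placeholders):

props.conf:
[your:sourcetype]
TRANSFORMS-filter = drop_all, keep_app

transforms.conf:
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_app]
REGEX = my-application-name
DEST_KEY = queue
FORMAT = indexQueue
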
What is the correct format for domain users please? If I curl from a HF, I get the desired 200 response using:

curl -v http://mywebsite.com --ntlm -u username@mydomain.ad.ltd.com.au

If I use this format in the TA, I see an error message in the logs asking for the format domain\\username. I have tried several variations of mydomain\\username but have not been successful. What should be the format for this domain? Or is the issue with --ntlm? If we use the --negotiate flag or remove --ntlm, we get a 401.

Cheers
Thank you @livehybrid. But I am using Splunk Enterprise. Is there any way to add and filter logs from applications hosted in the cloud in Splunk Enterprise?
Hi everyone, we are using Splunk Enterprise in our company. We want to ingest logs from applications hosted on the cloud, but when we try to connect we get a lot of logs that are unrelated to our application, which in turn causes high license utilization. Is there any method by which we can filter down to only the logs that we want (such as logs of a specific application or log source) before ingesting into Splunk, so as to reduce license utilization while still getting the required security logs for the application?
Hi @Jayanthan

Data Manager is only available in Splunk Cloud. Please see https://help.splunk.com/en/splunk-cloud-platform/ingest-data-from-cloud-services/data-manager-user-manual/1.11/introduction/about-data-manager for more information. There is a useful page at https://lantern.splunk.com/Splunk_Success_Framework/Data_Management/GDI_-_Getting_data_in which links out to various methods for onboarding different data sources.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi everyone, I want to ingest logs from applications hosted in the cloud (such as AWS, Azure). In our company we are using Splunk Enterprise. Can Data Manager be used to ingest, and filter for only those logs pertaining to that application's security, in Splunk Enterprise?

Splunk Enterprise Security
Yes, I'm using the UCC framework to build the app. I'm using a custom validator class to validate the configuration field.

This is the data value sent to the validate function. As I debug the code, only the field data is set in data when the validate method is invoked.

When I look at the account validator Python file, name is set to None.

Basically, my need is that during configuration validation I want to know the account name. How do I fetch this account name?
Hi @Vasavi29

Please can you share a screenshot of where/how you are seeing this? Can you confirm the timezone you have set in the user preferences?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @PoojaDevi

Are you using UCC Framework to build this app? Can you provide a little more of your code? Have you tried adding a log method to print out what is passed to the function? Can you share what is sent? There's a small logging sketch below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
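For example, a rough sketch of a custom validator that logs its inputs (this assumes you're subclassing splunktaucclib's Validator; the class and logger names are just examples):

import logging

from splunktaucclib.rest_handler.endpoint.validator import Validator

logger = logging.getLogger("my_ta_validator")  # hypothetical logger name


class DebugValidator(Validator):
    """Logs whatever UCC passes in, so you can see which fields arrive."""

    def validate(self, value, data):
        # 'value' is the field under validation; 'data' holds the other
        # submitted form fields
        logger.info("validate called with value=%r data=%r", value, data)
        return True
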
Hi @kfsplunk

What distinction do you need to make between the logs? You mention that they become hard to differentiate, but I think you could probably create an eventtype or use a field extraction to determine whether the FTD code is in the 43k range like you mentioned; there's a sketch below.

I would avoid onboarding it as one sourcetype and then using props/transforms to overwrite the sourcetype, because you risk breaking the built-in field extractions and CIM mappings you get from the app's configuration. However, if you want to segregate into a separate index, or change the source to tell them apart, then you could do this with props/transforms.

The Cisco Security Cloud app does look a lot richer in terms of functionality and dashboards (if that helps you) and also gets much more frequent updates than the ASA app. Not that this should necessarily sway your decision, but it might help!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
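As an illustration of the eventtype route (field name and code range are assumptions based on your examples, not from the add-on):

props.conf:
# extract the numeric message ID from the %FTD-<severity>-<id> header
[cisco:asa]
EXTRACT-ftd_msg_id = %FTD-\d+-(?<ftd_msg_id>\d+):

eventtypes.conf:
# FTD-formatted events carry 43xxxx message IDs
[cisco_ftd_style]
search = sourcetype=cisco:asa ftd_msg_id=43*
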
Managed to get this resolved by ensuring the submitted token model was updated, adding submittedTokenModel.set() and submittedTokenModel.trigger() calls to the code. The title displaying the token value was a bit of a red herring: it showed that the default model was being updated, but it didn't reflect the state of the submitted token model.
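For anyone finding this later, a rough sketch of what that looked like (token name and value are placeholders):

require([
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function (mvc) {
    var defaultTokenModel = mvc.Components.get('default');
    var submittedTokenModel = mvc.Components.get('submitted');

    // update both token models so panels bound to the submitted model react
    defaultTokenModel.set('my_token', 'some_value');
    submittedTokenModel.set('my_token', 'some_value');
    submittedTokenModel.trigger('change:my_token');
});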