All Topics

Dear Splunk community,

After successfully implementing the input from @afx, "How to Splunk the SAP Security Audit Log", I was encouraged to implement the SAP system log (SM21) on my own. So far, I have managed to send the log to Splunk, but given the log's encoding, I am unable to process it correctly. Most likely, my error lies in transforms.conf or props.conf.

props.conf:

[sap:systemlog]
category = Custom
REPORT-SYS = REPORT-SYS
EXTRACT-fields = ^(?<Prefix>.{3})(?<Date>.{8})(?<Time>.{6})(?<Code>\w\w)(?<Field1>.{5})(?<Field2>.{2})(?<Field3>.{3})(?<Field4>.)(?<Field5>.)(?<Field6>.{8})(?<Field7>.{12})(?<Field8>.{20})(?<Field9>.{40})(?<Field10>.{3})(?<Field11>.)(?<Field12>.{64})(?<Field13>.{20})
LOOKUP-auto_sm21 = sm21 message_id AS message_id OUTPUTNEW area AS area subid AS subid ps_posid AS ps_posid

transforms.conf:

[REPORT-SYS]
DELIMS = "|"
FIELDS = "message_id","date","time","term1","os_process_id","term2","work_process_number","type_process","term3","term4","user","term5","program","client","session","variable","term6","term7","term8","term9","id_tran","id_cont","id_cone"

[sm21]
batch_index_query = 0
case_sensitive_match = 1
filename = sm21.csv

Has anyone experienced a similar issue?

Best regards.
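Two things stand out as possible culprits (hedged guesses, not a confirmed diagnosis): the sourcetype applies both a pipe-delimiter REPORT and a fixed-width EXTRACT to the same events (SM21 output is typically fixed-width, so the DELIMS transform may never match anything), and no CHARSET is set even though the encoding is suspected. A minimal sketch, with the charset value a placeholder to be matched to the actual SAP export:

```
# props.conf -- hedged sketch, not a confirmed fix
[sap:systemlog]
category = Custom
# set the encoding the SAP export actually uses; ISO-8859-1 here is a guess
CHARSET = ISO-8859-1
# keep either the fixed-width EXTRACT-fields or the delimiter-based
# REPORT-SYS, not both -- they compete on the same events
EXTRACT-fields = ^(?<Prefix>.{3})(?<Date>.{8})(?<Time>.{6})(?<Code>\w\w)
```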
Hi, I have created a playbook to capture inputs from the user (short description, description, and priority) and create a SIR in ServiceNow. But when I try to capture the State, it does not show. Can someone help me understand where I am going wrong? I need urgent help.
Hi,

I am unable to search the BOTS v3 dataset on my local Splunk machine; it throws an error like:

Configuration initialization for C:\Program Files\Splunk\etc took longer than expected (16532ms) when dispatching a search with search ID 1751901306.48. This might indicate an issue with underlying storage performance or the knowledge bundle size. If you want this message displayed more or less often, change the value of the 'search_startup_config_timeout_ms' setting in "limits.conf" to a lower or higher number.

How can I resolve this issue? Thanks.
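The error message itself names the knob to tune; a hedged sketch of raising it (the stanza name and value here are assumptions, and slow storage is the underlying thing worth checking first):

```
# $SPLUNK_HOME/etc/system/local/limits.conf -- sketch
[search]
# milliseconds; raise above the observed ~16.5s initialization time
search_startup_config_timeout_ms = 30000
```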
Hi everyone,

I’m currently working with a Splunk distributed clustered environment (v9.4.1) on RHEL, with 3 indexers, 3 search heads, and 1 cluster master.

I recently added a second 500GB disk to each indexer in order to separate hot/warm and cold bucket storage. I set up and mounted the 500GB disks, expecting that to separate /indexes from /coldstore. I also edited indexes.conf on the cluster master; an example is shown below:

[bmc]
homePath = /indexes/bmc/db
coldPath = /coldstore/bmc/colddb
thawedPath = $SPLUNK_DB/bmc/thaweddb
repFactor = auto
maxDataSize = auto_high_volume

I then applied the cluster bundle and also performed a rolling restart, just in case.

Even though I think I have configured everything correctly, when I navigate in the cluster master GUI to Settings → Indexer Clustering → Indexes, the Indexes tab is empty, with none of the default indexes or the custom indexes I had made. Has anyone encountered this behaviour, where indexes do not appear in the Clustering UI despite a valid indexes.conf and bundle deployment?
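For what it's worth, a common way to express hot/cold separation is via volumes; a hedged sketch based on the paths above (volume names are made up here). Also worth checking: the splunk user must own the new mount points on every peer, and the Clustering UI only populates once the bundle validates on all peers.

```
# indexes.conf pushed from the cluster manager -- sketch
[volume:hot]
path = /indexes

[volume:cold]
path = /coldstore

[bmc]
homePath = volume:hot/bmc/db
coldPath = volume:cold/bmc/colddb
thawedPath = $SPLUNK_DB/bmc/thaweddb
repFactor = auto
maxDataSize = auto_high_volume
```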
SPL-268481 is a bug we encountered in Enterprise 9.1, and it is also present in 9.2.

We have a very large SHC deployment with 6 indexer clusters and a total of more than 1500 indexers across these 6 clusters. The issue:

- we would add an indexer back to an indexer cluster (e.g. after its hardware was fixed)
- the indexer would join the cluster again
- the search heads would briefly REMOVE all (or almost all) indexers, not just the ones in the same indexer cluster being added back
- then each SHC member would add the indexers back
- most or all of the SHC members would repeat this process, so over a period of many minutes you could have searches that were not searching all possible indexers

For each search head, the period where all indexers were removed was less than a minute, BUT it meant that searches would run and find no (or fewer) indexers to search.

The solution provided by Splunk that worked is to add a setting to distsearch.conf (note that the setting is not documented and not in distsearch.conf.spec, so I am told you would get a btool warning):

[distributedSearch]
useIPAddrAsHost = false

I am sharing this solution in case you have encountered the issue.
This may not be the best place to ask, given my issue isn't technically Splunk-related, but hopefully I can get some help from people smarter than me anyway.

(?i)(?P<scheme>(?:http|ftp|hxxp)s?(?:://|-3A__|%3A%2F%2F))?(?:%[\da-f][\da-f])?(?P<domain>(?:[\p{L}\d\-–]+(?:\.|\[\.\]))+[\p{L}]{2,})(@|%40)?(?:\b| |[[:punct:]]|$)

The above regex is a template I'm working from (lol, I'm not nearly good enough to write this). While it's not too hard to read and see how it works, in a nutshell, it matches the domain of a URL and nothing else. It does this by first looking for the optional beginning 'https://' and storing that in the 'scheme' group. Following that, it parses the domain. For example, the URL 'https://community.splunk.com/t5/forums/postpage/board-id/splunk-search' would match 'community.splunk.com'.

My issue is that the way it looks for domains following the 'scheme' group requires the use of a TLD (.com, .net, etc.). Unfortunately, internal services used by my company don't use a TLD, and this causes the regex not to catch them.

I want to modify the regex above to detect URLs like 'https://mysite/resources/rules/123456', wherein the domain would be 'mysite'. I've attempted to do so, but with my limited understanding of how regex really works, my attempts lead to too many matches, as shown below.

(?i)(?P<scheme>(?:http|ftp|hxxp)s?(?::\/\/|-3A__|%3A%2F%2F))?(?:%[\da-f][\da-f])?(?P<domain>((?:[\p{L}\d\-–]+(?:\.|\[\.\]))+)?[\p{L}]{2,})(@|%40)?(?:\b| |[[:punct:]]|$)

I tried to throw in an extra non-capturing group within the named 'domain' group and make the entire first half of the 'domain' group optional, but it leads to matches beyond the domain. Thank you to whomever may be able to assist. This doesn't feel like it should be such a difficult thing, but it's been vexing me for hours.
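One way to allow single-label internal hosts without matching every bare word is to gate the dotless alternative on a scheme having been matched, via a conditional group. A sketch in Python of this idea (a deliberately simplified variant of the pattern above, not a drop-in replacement — the punycode escapes and trailing boundary checks are trimmed for brevity):

```python
import re

# Simplified sketch of the poster's pattern. The bare-host alternative is
# gated on the (?(scheme)...) conditional: a dotless name only counts as a
# domain when an explicit scheme preceded it, so plain words don't match.
pattern = re.compile(
    r"(?i)"
    r"(?P<scheme>(?:http|ftp|hxxp)s?(?:://|-3A__|%3A%2F%2F))?"
    r"(?P<domain>"
    r"(?:[\w\-]+(?:\.|\[\.\]))+[a-z]{2,}"  # dotted domain ending in a TLD
    r"|(?(scheme)[\w\-]+|(?!))"            # bare host, only after a scheme
    r")"
)

for text in (
    "https://community.splunk.com/t5/forums/postpage",
    "https://mysite/resources/rules/123456",
    "no urls here",
):
    m = pattern.search(text)
    # prints: community.splunk.com / mysite / None
    print(m.group("domain") if m else None)
```

The `(?!)` in the no-branch of the conditional is an always-failing assertion, so an unprefixed dotless word cannot sneak in as an empty or partial match.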
****update**** I did a new install on Windows and everything is now working with the same test files. I am going to blow away the Ubuntu server, reimage it, and try the install again, so I am thinking it has something to do with how the install was done.
_______________________________________________________________________________________

I am working with Eventgen. I have my eventgen.conf file and some sample files, and I am working with the token and regex commands in eventgen.conf. I can get all commands to work except mvfile. I tried several ways to create the sample file, but Eventgen will not read the file and throws errors such as "file doesn't exist" or "0 columns". I created a file with a single line of items separated by commas, and still no go. If I create a file with a single item in it, whether a word or a number, Eventgen will find it and add it to the search results. If I change the type to mvfile and use :1, it will not read the same file and throws an error. Can anyone please give me some guidance on why mvfile doesn't work? Any help would be greatly appreciated. Search will pull results from the random, file, and timestamp commands, just not mvfile.

Snippet from eventgen.conf:

token.4.token = nodeIP=(\w+)
token.4.replacementType = mvfile
token.4.replacement = $SPLUNK_HOME/etc/apps/SA-Eventgen/samples/nodename.sample:2

Snippet from nodename.sample:

host01,10.11.0.1
host02,10.12.0.2
host03,10.13.0.3

Infrastructure: Ubuntu Server 24.04, Splunk 9.4.3, Eventgen 8.2.0

I have tried to create the file from scratch with Notepad++, Notepad, Excel, and directly on the Linux server in the samples folder. I have validated the file as a CSV with the "goteleport" and "csvlint" sites.
I have log events that look like this...

"name|fname|desc|group|cat|exp|set|in
abc|abc||Administrators;Users|S||1|1
bbb|bbb|Internal||N||2|2
ccc|ccc|MFT Service ID|Administrators;Users|S||3|3"

The log event's text is delimited by 6 spaces. What Splunk query do I use to create a Splunk table like this?

name | fname | desc           | group                | cat | exp | set | in
abc  | abc   |                | Administrators;Users | S   |     | 1   | 1
bbb  | bbb   | Internal       |                      | N   |     | 2   | 2
ccc  | ccc   | MFT Service ID | Administrators;Users | S   |     | 3   | 3
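A hedged SPL sketch, assuming each event contains the header line plus pipe-delimited rows separated by line breaks (field names are taken from the header row; adjust the first rex if the rows really are separated by runs of spaces rather than newlines):

```
... | rex max_match=0 field=_raw "(?<row>[^\r\n]+)"
| mvexpand row
| rex field=row "^(?<name>[^|]*)\|(?<fname>[^|]*)\|(?<desc>[^|]*)\|(?<group>[^|]*)\|(?<cat>[^|]*)\|(?<exp>[^|]*)\|(?<set>[^|]*)\|(?<in>[^|]*)$"
| where name!="name"
| table name fname desc group cat exp set in
```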
Hi all,

I want to create a table in which row colours change based on row value. Source code below:

{
  "type": "splunk.table",
  "options": {
    "fontWeight": "bold",
    "headerVisibility": "none",
    "rowColors": {
      "mode": "categorical",
      "categoricalColors": {
        "ce": "#4E79A7",
        "edit": "#F28E2B",
        "service_overview": "#E15759",
        "e2e_ritm": "#76B7B2",
        "e2e_task": "#59A14F",
        "monitor": "#EDC948",
        "sla__time_to_first_response": "#B07AA1",
        "sla__time_to_resolution": "#FF9DA7"
      },
      "field": "file"
    },
    "columnFormat": {
      "placeholder": {
        "data": "> table | seriesByName(\"placeholder\") | formatByType(placeholderColumnFormatEditorConfig)"
      },
      "file": {
        "data": "> table | seriesByName(\"file\") | formatByType(fileColumnFormatEditorConfig)"
      }
    }
  },
  "dataSources": {
    "primary": "ds_b4QqXqtO"
  },
  "title": "Legend",
  "context": {
    "placeholderColumnFormatEditorConfig": {
      "string": {
        "unitPosition": "after"
      }
    },
    "fileColumnFormatEditorConfig": {
      "string": {
        "unitPosition": "after"
      }
    }
  },
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

The code seems to be correct, but it doesn't work. I want to know what is wrong, and especially whether the function I want is supported. Thanks in advance.
Hi everyone,

We are encountering a problem with the Automated Introspection feature for Data Inventory in Splunk Security Essentials. Although the introspection process seems to run just fine, it fails to save the data. On the UI, there are no error messages displayed; however, the introspection process does not map any data as expected. We analyzed the situation using the development console in the browser, as Splunk does not seem to provide error messages at this point in the UI. The following are the specifics of the request and the response we received.

Request details:
Request URL: https://our-splunk-instance.com/servicesNS/nobody/Splunk_Security_Essentials/storage/collections/data/data_inventory_products/batch_save
Request Method: POST
Status Code: 403 Forbidden

Response message:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="ERROR">User '[username]' with roles { [role1], [role2], ... } cannot write to the collection: /nobody/Splunk_Security_Essentials/collections/data_inventory_products { read : [ * ], write : [ admin, power ] }, export: global, owner: nobody, removable: no, modtime: [timestamp]</msg>
</messages>
</response>

The error message suggests that the user [username] does not have the necessary write permissions for the specified collection. The roles assigned to this user include [role1], [role2], ..., which appear to lack the required write access. Steps we have taken so far:

- We have reviewed the permissions settings and suspect that the issue is related to insufficient write permissions.
- We consulted the documentation on editing permissions to provide write access: Edit permissions to provide write access to Splunk Security Essentials - Splunk Documentation.

Can anyone provide guidance on troubleshooting steps that might resolve this issue? We are particularly interested in understanding how to grant the necessary write access to the user or roles involved.

Thank you in advance for your support!
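If adjusting the user's roles isn't an option, the collection's ACL can also be widened over the REST API; a hedged sketch (hostname, credentials, and the extra role name are placeholders, and the management port 8089 is assumed):

```shell
# Sketch: add a custom role to the collection's write ACL -- placeholders throughout
curl -k -u admin:changeme \
  "https://our-splunk-instance.com:8089/servicesNS/nobody/Splunk_Security_Essentials/storage/collections/config/data_inventory_products/acl" \
  -d sharing=global -d owner=nobody \
  -d "perms.read=*" -d "perms.write=admin,power,your_sse_role"
```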
Best regards
All,

I'm ingesting data from Azure that contains (as part of it) a syslog message, and I have the vendor-specific application for this syslog message format. Simplified structure below:

{
CollectorHostName: xxxxx
Computer: xxxxx
EventTime: 2025-06-27T07:19:45Z
Facility: local7
HostIP: xx.xx.xx.xx
SyslogMessage: logver=704072731 timestamp=1750983585 devname="xxx" devid="xxx" vd="root" date=2025-06-27 time=00:19:45 eventtime=1751008785866135964 tz="-0700" logid="0100032002" type="event" subtype="system" level="alert" .........
}

What I would ideally like to do is extract (via a search) the "SyslogMessage" field and then re-index it into a new index (and appropriate sourcetype) so that the syslog message can be processed in "the normal way" by the vendor-specific application. Does anyone know how I can achieve this? Many thanks in advance.
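One commonly used approach for this kind of re-ingestion is summary indexing with the collect command: rewrite _raw to just the embedded syslog payload and write it to a target index and sourcetype that the vendor add-on parses. A hedged sketch (the index and sourcetype names below are placeholders; note also that events collected with an explicit non-stash sourcetype are generally counted against license again):

```
index=azure_syslog SyslogMessage=*
| eval _raw=SyslogMessage
| fields _time _raw
| collect index=fortigate_syslog sourcetype=fortigate_event
```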
Until I installed ES on the Enterprise platform, I could connect to 127.0.0.1:8000 via a secure HTTPS connection. However, after the ES installation, HTTPS stopped connecting and I have to connect through a non-secure connection. Changing the EnableWebSSL parameter to Yes or No does not have any impact. How can I connect securely to my NFR Enterprise environment? Thanks. Ugur
I received the new license today. I tried both methods: uploading the Splunk.License file and copy-pasting the XML content. Both failed with this error message:

Bad Request — web_1751612044.5449162.lic: failed to add because: cannot add lic w/ subgroupId=DevTest:<my.email@mycompany.com> to stack w/ subgroupId=Production

I have renewed the license previously, and this is not a lab/test system. I would appreciate any advice.
Hi Splunk Community,

I'm trying to reduce disk space usage on my Splunk Universal Forwarder by filtering out unnecessary SharePoint logs and only forwarding those with a severity of High, Error, or Warning in the message. I created a deployment app named SharePoint; the contents of that folder are shown in the attachment. I attempted to create props.conf and transforms.conf files to filter out the data that was unnecessary: I only need to see the log files in the directory that contain certain keywords, not all of the logs. What I wrote in the files is also shown in the attachment (I didn't write the regex myself; I found something similar online and tried to make it work for my environment). After deploying this, I now do not see any of my SharePoint logs indexed at all for this specific server, even the ones with High. As you can see from the logs, I even pointed them at a test index that I made, so I should be seeing them. I'm not sure what's going on.
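For reference, keyword filtering with props/transforms is usually done by routing everything to nullQueue first and then overriding matching events back to indexQueue; stanza order matters because the last matching transform wins. Importantly, these index-time transforms run only on an indexer or heavy forwarder, not on a universal forwarder (a UF ships data onward unparsed), which alone can explain "no logs at all" when the filter lands in the wrong place. A sketch, with the sourcetype name as a placeholder:

```
# props.conf -- sketch; sourcetype name is a placeholder
[sharepoint:uls]
TRANSFORMS-filter = sp_drop_all, sp_keep_severe

# transforms.conf
[sp_drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[sp_keep_severe]
REGEX = (?i)\b(High|Error|Warning)\b
DEST_KEY = queue
FORMAT = indexQueue
```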
We recently switched from an ingest-based license to a resource-based (vCPU) license model in our deployment. The license was successfully installed in the (dedicated) license manager; however, after the old license expired I noticed a bunch of warnings that the allowed volume had been exceeded. Our manually specified pools have not exceeded their allocation, yet when checking the "Usage report" the total available pool license is now the "free" 500 MB/day. This is not very surprising, as we no longer have a "max per day", but should the available amount not be "infinite" now, rather than drop down to the default?

I deleted all expired licenses and restarted the license manager, and the warnings seem to have disappeared, at least for now. But the "total available license" still pushes a warning at 500 MB and up, with the gauge screaming red in the usage report.

My first question: how can I change the "total available license" from the ingest-based GB per day to "infinite", or to any number higher than 500 MB per day? Did I miss some step when switching from an ingest-based to a resource-based license in the configuration of the license manager?

My second, related question: how can I now monitor the available license? There is no resource-based license usage report available on the license manager.

All the best
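For ongoing visibility, daily ingest can still be charted from the license manager's own internal logs even under a vCPU license; a hedged sketch (run on, or against data forwarded from, the license manager):

```
index=_internal source=*license_usage.log* type=RolloverSummary
| eval GB=round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) AS daily_ingest_GB
```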
I am looking for Splunk ITSI training and certification. Can anyone offer guidance and share resource materials?
I have one dropdown with 5 values (value 1, value 2, value 3, value 4, value 5) and have assigned a token to it.

For values 1 through 4, I have a table (table 1) below the dropdown, where the results change according to the value selected. It uses one query underneath to get the details.

What I want is another table (table 2) that should display in place of table 1 (hide table 1 and display table 2 instead) when value 5 is selected from the dropdown. It needs to use a different query. How do I achieve this?
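Assuming a Simple XML dashboard (an assumption; Dashboard Studio works differently), the usual pattern is a <change> handler on the dropdown that sets/unsets show tokens, plus a depends attribute on each table. A sketch with placeholder names:

```
<input type="dropdown" token="sel">
  <!-- choices for value1..value5 go here -->
  <change>
    <condition value="value5">
      <set token="show_t2">true</set>
      <unset token="show_t1"></unset>
    </condition>
    <condition>
      <set token="show_t1">true</set>
      <unset token="show_t2"></unset>
    </condition>
  </change>
</input>

<table depends="$show_t1$">
  <search><query>... original query using $sel$ ...</query></search>
</table>
<table depends="$show_t2$">
  <search><query>... the different query for value 5 ...</query></search>
</table>
```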
Hello Splunk people,

I want to build a search in Splunk. The index is wineventlogs, and I want to return all the EventCodes within each eventtype. Meaning, as examples:

Eventtype A includes EventCodes 5144, 5145, 5146
Eventtype B includes EventCodes 5144, 5166, 5167

Thanks to all.
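A hedged SPL sketch of the grouping described above (eventtype is a multivalue field, so it is expanded first; EventCode is the usual Windows field name, adjust if yours differs):

```
index=wineventlogs
| mvexpand eventtype
| stats values(EventCode) AS event_codes BY eventtype
```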
Hi,

I have a simple chart visualization with the base SPL:

.... | chart sum(cost) AS total_cost BY bill_date

I'm trying to add a "$" format to total_cost to show on the columns. I've tried every Google and ChatGPT trick and I can't get it working. Any ideas? Thanks!
I want to send alerts to an MS Teams channel. How do I do that?
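One common route is a Teams incoming webhook: create the webhook in the Teams channel, then point a Splunk alert action at its URL (the built-in webhook alert action sends a fixed JSON payload that Teams may not accept as-is, so a dedicated Teams alert-action app from Splunkbase is often used instead). The webhook itself can be exercised by hand first; a sketch with a placeholder URL:

```shell
# Hand-test the Teams incoming webhook -- URL is a placeholder
curl -H "Content-Type: application/json" \
     -d '{"text": "Hello from Splunk"}' \
     "https://example.webhook.office.com/webhookb2/..."
```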