All Topics




I am having difficulty converting event logs to metric data points: https://docs.splunk.com/Documentation/Splunk/9.4.0/Metrics/L2MOverview

According to the documentation, I think I need index-time extraction to modify the fields in the event. Raw event examples:

server_request_bytes{kafka_id="lkc-j2km8w",principal_id="u-j69zjw",type="Fetch",} 3.14 1736873280000
server_response_bytes{kafka_id="lkc-j2km8w",principal_id="u-j69zjw",type="ApiVersions",} 4.2 1736873280000

My goal is to parse the event so that it has the fields necessary for log to metric conversion. I think that means these are required (in addition to the timestamp):

metric_name: server_request_bytes
numeric_value: 3.14
measurement: server_request_bytes=3.14

I have 2 stanzas in transforms.conf which parse the metric name and the numeric value:

[metric_name]
REGEX = ^"(?P<metric_name>[a-z_-]+_[a-z_-]+\w+)
FORMAT = metric_name::$metric_name

[numeric_value]
REGEX = ^[^ \n]* (?P<metric_value>\d+\.\d+)
FORMAT = numeric_value::$metric_value

props.conf looks like this:

[my_log_to_metrics]
# extract metric fields
TRANSFORMS-metric_name = metric_name
TRANSFORMS-numeric_value = numeric_value
category = Log to Metrics
# parse timestamp
TIME_PREFIX = \}\s.*\s
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

Currently, when I try using this sourcetype I see this error message in splunkd.log (and no metric data in the metrics index):

Metric event data without a metric name and properly formated numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g..."metric_name:cpu.idle") with corresponding floating point values.

I have a couple of questions:
1. Are the fields metric_name, numeric_value, and measurement required to be extracted at index time with transforms.conf for the log to metric conversion?
2. How can I combine the extracted name and value fields to create the measurement field without writing another regex statement to parse the same thing?
3. How can I parse all of the fields between the curly braces (kafka_id, principal_id, type) as dimensions for the metric, in a generic way?
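
A sketch of the direction the Log to Metrics documentation points at, assuming the index-time extractions above already populate the fields (the schema name below is made up, untested): declare a metric schema in transforms.conf and reference it from the sourcetype with METRIC-SCHEMA-TRANSFORMS. Fields listed as measures become the metric data points, and the other index-time extracted fields (e.g. kafka_id, principal_id, type) are kept as dimensions, which, if this mechanism applies here, would sidestep the need for a separate measurement field.

# transforms.conf -- sketch, verify the setting names against the L2M docs
[metric-schema:my_l2m_schema]
METRIC-SCHEMA-MEASURES = numeric_value

# props.conf -- added to the existing stanza
[my_log_to_metrics]
METRIC-SCHEMA-TRANSFORMS = metric-schema:my_l2m_schema

One caveat with this model: the measure field name becomes the metric name, so a single numeric_value field would produce a metric literally called "numeric_value". Getting server_request_bytes / server_response_bytes as metric names would need the extraction to emit a field named after the metric itself, and the kafka_id/principal_id/type labels would each need their own index-time extraction to show up as dimensions; both points are worth confirming in the documentation before relying on this.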
Splunk Cloud had an update this past Sunday, 3 Mar 2025. Since then, admins are unable to change a user's role. Is this a bug? We use the Chargeback App, and have it configured to use user roles to delineate charges per team.  

Hello,

Using Splunk 9.3.2, I want to deploy an app to all Windows UFs only. This config doesn't work:

[serverClass:scallufwin:app:btoolufwin]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:scallufwin]
machineTypesFilter = windows-x64
packageTypesFilter = universal_forwarder
whitelist.0 = srv*

Moving the packageTypesFilter parameter to the serverClass:app stanza fixed the issue:

[serverClass:scallufwin:app:btoolufwin]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled
packageTypesFilter = universal_forwarder

[serverClass:scallufwin]
machineTypesFilter = windows-x64
whitelist.0 = srv*

Does that mean packageTypesFilter and machineTypesFilter can't be used in the same stanza?

Thanks

Dear Members,

I have a use case where I need to update or insert configuration in transforms.conf, props.conf and outputs.conf. I was told that it is possible to do this by creating an app, which would make it easier for users to make the necessary changes instead of going through the error-prone manual procedure. Nevertheless, I haven't come across any documentation that illustrates and explains how to do it. Does someone have any experience with that? Or perhaps can someone point me to the relevant documentation? Thanks in advance!
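
For reference, a minimal sketch of the kind of configuration-only app usually meant here (the app name and contents are made up for illustration): a directory under $SPLUNK_HOME/etc/apps/ containing a default/ folder with the .conf files, which use exactly the same syntax as the files under etc/system/local/.

my_config_app/
    default/
        app.conf
        props.conf
        transforms.conf
        outputs.conf

# default/app.conf -- minimal contents
[install]
state = enabled

[ui]
is_visible = false
label = My Config App

[launcher]
author = you
description = Delivers props/transforms/outputs settings
version = 1.0.0

The app can then be copied by hand, pushed from a deployment server, or (for Splunk Cloud) packaged and uploaded, rather than editing the system-level files directly.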

How can I embed "https://docs.splunk.com/Documentation" in a Splunk dashboard? Is it possible to do? I am getting: Refused to display 'https://docs.splunk.com/' in a frame because it set 'X-Frame-Options' to 'sameorigin'.

<row>
  <panel>
    <html>
      <iframe src="https://docs.splunk.com/Documentation" width="100%" height="300"></iframe>
    </html>
  </panel>
</row>

Need some guidance on Splunk Cloud Kiteworks integration. We are utilizing the built-in UF of Kiteworks found on the admin console and sending it directly to the cloud. Did you use the forwarder app package, and how did you do it? I don't have access to the client's Kiteworks console. All I know is that currently it is asking us to upload 4 certificate files for TLS and not the forwarder package app. The Splunk Cloud / Splunk Enterprise toggle button is also disabled, which is weird. I believe on a lower version there was no option for that, but we have it.

Hi,

We have configured a data input on a HF, and there is an option to select an index there. I created a new index on the cluster manager and pushed it to the indexers, but that index is not showing on the HF. I believe the HF is not part of the cluster, which is why it is not showing. What should I do in this case? I tried to create the same index on the HF, but our hot and cold paths use volumes, which makes the index creation fail on the HF. If I keep the default index on the HF, will the data still end up in the right index on the indexers? How should I configure this? Please clarify my confusion here.
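
One common way around this, sketched with hypothetical names: the index dropdown on a forwarder only lists indexes defined locally, but the input itself only needs the index name as a string, so it can be set directly in inputs.conf while the clustered indexers own the actual index.

# inputs.conf on the heavy forwarder -- hypothetical path and names
[monitor:///var/log/myapp/app.log]
index = my_new_index
sourcetype = myapp:log

As long as outputs.conf forwards everything to the indexer cluster, the HF never writes to that index itself, so the volume-based homePath/coldPath only needs to exist on the indexers. Keeping the default index instead would send the data to the default index on the indexers, not to the new one.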

Hi Splunkers :-),

We have a nice feature in Dashboard Studio - "Select all matches" in the multiselect filter. Unfortunately, it is not in classic dashboards. Can we build similar logic in a classic dashboard?
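
Not the same behaviour, but a common classic-dashboard approximation, sketched with made-up token and field names: add a static "All" choice whose value is a wildcard, so selecting it matches everything the dynamic choices would.

<input type="multiselect" token="host_tok">
  <label>Host</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>index=_internal | stats count by host</query>
  </search>
  <valuePrefix>host="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>

Truly mirroring "Select all matches" (ticking every current choice rather than substituting a wildcard) would need a JavaScript extension that reads the populating search and sets the token, which is exactly the part classic Simple XML does not provide out of the box.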

Dear fellas,

I have an issue with the Monitoring Console showing wrong instance information after upgrading from 9.2.2 to 9.4.1 (the latest version). Is this a bug, or do I need to change some configuration?

Thanks & best regards.

Last week this worked fine, but since 7.0.3 of @splunk/create came out two days ago, linting doesn't work anymore.

npx @splunk/create
New app with component
yarn run setup

That still completes, but setup shows several warnings about things having unmet peer dependencies or incorrect peer dependencies. But yarn run lint now throws an error and doesn't work:

"Error: Failed to load parser '@babel/eslint-parser' declared in '.eslintrc.js >> @splunk/eslint-config/browser-prettier >> ./browser.js >> ./base.js': Cannot find module '@babel/eslint-parser'\n"

The release notes simply say this: splunk_create.spec.conf is now correctly named splunk_create.conf.spec (SUI-5385). But when I compare the package.json from a component created last week to one created today, I see several changes:

dependencies:
- @splunk/react-ui changed from "^4.30.0" to "^4.43.0"
- @splunk/themes changed from "^0.18.0" to "^0.23.0"

devDependencies:
- @splunk/eslint-config changed from "^4.0.0" to "^5.0.0"
- @splunk/splunk-utils changed from "^3.0.1" to "^3.2.0"
- @splunk/stylelint-config changed from "^4.0.0" to "^5.0.0"
- stylelint changed from "^13.0.0" to "^15.11.0"

There may be other things that changed as well, those are just the ones that jumped out at me. Anybody know how to fix this? You can still do yarn run start:demo on the component and it runs, but the lint is broken. Thanks!
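
One thing worth trying, on the assumption that @splunk/eslint-config@5 now expects the Babel parser as a peer dependency that the generator no longer installs for you (untested against 7.0.3):

# add the missing parser (and its core) as dev dependencies, then re-run lint
yarn add --dev @babel/eslint-parser @babel/core
yarn run lint

If that clears the "Cannot find module '@babel/eslint-parser'" error, the remaining peer-dependency warnings from setup may have the same root cause and can be chased down the same way, package by package.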

Below is the search. I need to extract the IDs shown in the event below, and there are also many other IDs. Please help me write a query to extract the IDs from events that contain "Duplicate Id's that needs to be displayed ::::::[6523409, 6529865]" in the log file.

index="*" source="*" "Duplicate Id's that needs to be displayed ::::::[6523409, 6529865]"
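
A sketch of one way to pull the IDs out with rex, assuming the bracketed list always follows that literal marker text (the field name dup_ids is made up):

index="*" source="*" "Duplicate Id's that needs to be displayed"
| rex field=_raw "Duplicate Id's that needs to be displayed ::::::\[(?<dup_ids>[^\]]+)\]"
| eval dup_ids=split(dup_ids, ", ")
| mvexpand dup_ids
| table _time dup_ids

The rex captures everything between the square brackets, split turns "6523409, 6529865" into a multivalue field, and mvexpand gives one row per ID.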

I’m working on a Splunk search that needs to perform a lookup against a CSV file. The challenge is that some of the fields in the lookup table contain empty values, meaning an exact match doesn’t work. Here’s a simplified version of my search:

index="main" eventType="departure"
| table _time commonField fieldA fieldB fieldC fieldD fieldE
| lookup reference_data.csv commonField fieldA fieldB fieldC fieldD fieldE OUTPUTNEW offset

The lookup file reference_data.csv contains the fields: commonField, fieldA, fieldB, fieldC, fieldD, fieldE, lookupValue. Sometimes fieldB, fieldC, or other fields in the lookup table are empty. fieldA always has a value, sometimes the same one, but the value of the offset field changes based on the values of the other fields. If a lookup row has an empty value for fieldB, I still want it to match based on the available fields.

What I've tried:
- Using lookup normally, but it requires all fields to match exactly, which fails when lookup fields are empty.
- Creating multiple lookup commands for different field combinations, but this isn’t scalable.

Desired outcome:
- If commonField matches, but fieldB is empty in the lookup file, I still want the lookup to return lookupValue.
- The lookup should prioritize rows with the most matching fields but still work even if some fields are missing.

Is there a way to perform a lookup in Splunk that allows matches even when some lookup fields are empty?
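
One pattern that sometimes fits this, sketched with an assumed lookup definition name: define the CSV as a lookup in transforms.conf with WILDCARD match_type on the optional fields and put * in the cells that should match anything. Whether "most matching fields wins" then holds depends on row order and max_matches, so this is a direction to test rather than a guaranteed fit.

# transforms.conf -- sketch, assumes a lookup definition named reference_data
[reference_data]
filename = reference_data.csv
match_type = WILDCARD(fieldB), WILDCARD(fieldC), WILDCARD(fieldD), WILDCARD(fieldE)
max_matches = 1

The search then references the definition instead of the raw file name:

| lookup reference_data commonField fieldA fieldB fieldC fieldD fieldE OUTPUTNEW offset

Ordering the CSV from the most specific rows to the most generic (wildcards last) should let max_matches = 1 pick the most specific match first, but that behaviour is worth verifying against your data.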

Is anyone familiar with any guidance on fulfilling the logging requirements of CTO 24-003 with Splunk queries and dashboards?

Hello, I'm trying to change the sourcetype at the indexer level based on the source. First question: is that possible on an indexer? Second: would it work with props.conf referencing the transforms?

transforms.conf:

[testchange]
REGEX = .+
FORMAT = Sourcetype::testsourcetype
WRITE_META = true

Thanks
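
For comparison, a sketch of the shape a sourcetype override usually takes (the source pattern is made up, untested): the rewrite goes through DEST_KEY = MetaData:Sourcetype rather than WRITE_META, and props.conf ties the transform to the incoming source. This runs wherever the data is parsed, so it should apply on an indexer as long as the events arrive unparsed (from a UF rather than an intermediate heavy forwarder).

# props.conf -- hypothetical source pattern
[source::/var/log/myapp/*.log]
TRANSFORMS-changesourcetype = testchange

# transforms.conf
[testchange]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::testsourcetype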

Hey all, I am new to Splunk Enterprise and I would like to understand more about metrics and the use of metric indexes. So far, I have created my own metric index by going to Settings > Indexes. I have a bunch of Splunk rules I have created, and so far I have used the mcollect command like this:

host=(ip address) source=(source name)
| mcollect index=(my_metric_index)

I am able to get a list of event logs showing on the Splunk dashboard, but I am not sure whether the results showing in Search & Reporting are being stored in my metric index. When I check under the Indexes page, my metric index is still at "0 MB", indicating no data.

Can someone help? Is it my index that needs work? Is it my search query?
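
A sketch of the usual mcollect pattern, with made-up field and index names (untested): mcollect expects each result to carry a metric_name and a numeric value, so piping raw events straight into it typically stores nothing useful. Aggregate first, name the metric, then collect:

host="10.0.0.1" source="my_source"
| bin _time span=1m
| stats count AS event_count BY _time, host
| eval metric_name="events.per_minute", _value=event_count
| mcollect index=my_metric_index

To check what actually landed in the metric index, something like:

| mcatalog values(metric_name) WHERE index=my_metric_index

| mstats avg(events.per_minute) WHERE index=my_metric_index span=1m

The size shown under Settings > Indexes can lag behind, so mcatalog/mstats are the more direct test of whether data points arrived.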

Hello, and I have another weird issue. When I execute a search on a SHC in the Search and Reporting app, getting data from 2025-02-27:

index=test earliest=-7d@d latest=-6d@d

I get zero events.

When I execute the search WITHOUT the earliest and latest time modifiers and use the Time Picker in the UI, which results in "during Thu, Feb 27, 2025", I get around 167,153 results.

Specifying the time range with earliest and latest time modifiers is NOT giving me the "Your timerange was substituted based on your search string" message.

If I use tstats, I get the correct number of events, the correct date, and the message "Your timerange was substituted based on your search string" is present:

| tstats count where index=test earliest=-7d@d latest=-6d@d by _time span=d

I also made index=test earliest=-7d@d latest=-6d@d a saved search which executes every 10 minutes - zero events.

Another bit of weirdness: if I run that search and specify "All time", it will pull events ONLY for 2025-02-27. Nothing for other dates, and the index has 12 months of events, populated for every day. So it looks at both the time qualifiers and the time picker under that scenario.

Any ideas what might be causing this? (I have several standalone search heads that are working fine.)
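
Not an answer, just a diagnostic sketch that sometimes narrows this kind of mismatch down: run the variant that does return events (time picker set to Feb 27) and check whether the _time values really fall inside the window the modifiers describe, and how far they sit from index time (timezone or clock skew between SHC members and indexers would show up here):

index=test
| eval lag_seconds=_indextime-_time
| stats count, min(_time) AS first_time, max(_time) AS last_time, min(lag_seconds), max(lag_seconds)
| fieldformat first_time=strftime(first_time, "%F %T %z")
| fieldformat last_time=strftime(last_time, "%F %T %z")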

I have a survey that has a date field deletion_date. How can I filter this field by the Time range?

sourcetype=access_* status=200 action=purchase
| top categoryId
| where deletion_date > ?
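
A sketch of one way to compare a string date field against the time range picker, assuming deletion_date looks like 2025-03-01 (adjust the strptime format to the real data, and note the filter has to happen before top so the field is still available):

sourcetype=access_* status=200 action=purchase
| addinfo
| eval deletion_epoch=strptime(deletion_date, "%Y-%m-%d")
| where deletion_epoch>=info_min_time AND deletion_epoch<=info_max_time
| top categoryId

addinfo adds the picker's boundaries as info_min_time/info_max_time; with an "All time" range, info_max_time comes back as "+Infinity" and would need special handling.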

Hi,

Here is a scenario:

Step 1
9h30 TradeNumber 13400101 gets created in system
9h32 TradeNumber 13400101 gets sent to market

Step 2
9h45 TradeNumber 13400101 gets modified in system
9h50 TradeNumber 13400101 gets sent to market with modification

Step 3
9h55 TradeNumber 13400101 gets cancelled in system
9h56 TradeNumber 13400101 gets sent to market as cancelled

I need to monitor the delay for sending the order to market. In the above scenario we have 3 steps for the same TradeNumber and each needs to be calculated separately:

- Delay for sending a new trade
- Delay for modifying
- Delay for cancelling

The log does not allow me to differentiate the steps, but the sequence is always in the right order. If I use

| stats range(_time) as Delay by TradeNumber
| stats max(Delay)

for TradeNumber 13400101 it will return 26 mins. I am looking to get a result of 5 mins (gets modified, 9h45 to 9h50).

Can Splunk match by sequence (or something else) and TradeNumber to calculate 3 values for the same TradeNumber?
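
A sketch of one way to pair each "sent to market" event with the system event just before it, assuming the events really do alternate in time order per trade and that the market events can be matched on a keyword (index and keyword below are made up):

index=trades TradeNumber=*
| sort 0 TradeNumber _time
| streamstats current=f window=1 last(_time) AS prev_time BY TradeNumber
| eval Delay=_time-prev_time
| search "sent to market"
| streamstats count AS step BY TradeNumber
| eval StepName=case(step=1, "new", step=2, "modify", step=3, "cancel")
| table TradeNumber step StepName Delay

streamstats with window=1 and current=f carries the previous event's _time per TradeNumber, so the Delay on each "sent to market" row is the gap to the create/modify/cancel event right before it; with the sample data that gives 2, 5 and 1 minutes. The sort 0 over everything is the expensive part if volumes are large.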

I want to send all the events to nullQueue except those matching "EventType": 5000.

{"EventID": 2154635, "EventType": 5000, "NetObjectValue": null, "EngineID": null}

props.conf:

[solarwinds:alerts]
TRANSFORMS-t = eliminate-except-5000

transforms.conf:

[eliminate-except-5000]
REGEX = [\w\W]+[^("EventType": 500)]
DEST_KEY = queue
FORMAT = nullQueue
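
The character class at the end of that regex can't express "does not contain", so a common pattern for this kind of filtering (sketched, untested) is two transforms: the first sends everything to nullQueue, and the second, listed after it, puts events containing "EventType": 5000 back on the indexing queue.

props.conf:

[solarwinds:alerts]
TRANSFORMS-route = drop_all, keep_5000

transforms.conf:

[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_5000]
REGEX = "EventType":\s*5000\b
DEST_KEY = queue
FORMAT = indexQueue

The order in TRANSFORMS-route matters: the keep transform has to run after the drop so it can override the queue assignment for the matching events.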

I believe I have managed to get myself confused and would like to request assistance about field extraction.

I have a new heavy forwarder which is going to connect to Splunk Cloud. First, the heavy forwarder acted as a simple Splunk Enterprise instance, before connecting to Splunk Cloud. The HF has apps installed such as Fortinet Fortigate Add-on for Splunk, Splunk Add-on for Palo Alto Networks, Splunk Add-on for Microsoft Windows, and Splunk Add-on for Checkpoint Log Exporter. I simply installed them and created inputs in the local folder, and they were good to go on the HF. On the Splunk Enterprise instance, all inputs work fine and all fields are parsed properly: Checkpoint logs, PA logs, Windows XML logs, Fortigate logs.

However, after connecting to Splunk Cloud, the universal forwarder credentials package was downloaded from Splunk Cloud and the app was installed on the HF. The connection is fine and logs are being received. The weird issue is that ONLY the Checkpoint and Fortigate logs have all their fields extracted successfully when I search in Splunk Cloud. For some reason, the Windows logs show a surprisingly small number of extracted fields when I search in Splunk Cloud. When I search the Windows logs (old data in a test index) on the HF, it shows a LOT of interesting fields (>300), which is great. The PA logs only have host, index, source, sourcetype, and _time extracted (plus default ones like linecount, punct, splunk_server) when I search in Splunk Cloud.

I am confused because the Checkpoint and Fortigate logs are all extracted successfully, but the others are not. I understand that the apps are recommended to be installed across the deployment (https://docs.splunk.com/Documentation/AddOns/released/Overview/Wheretoinstall), but I would like to know why some apps work and some do not. They are only installed on the HF - shouldn't the fields all be extracted in the forwarder layer? Is it possible that the field extraction is not finished, since there is just too much data coming in or too much data in total (PA logs >10,000 events in the last 30 mins, Windows logs >2,000 events in the last 30 mins)?

Thanks. I appreciate your help.