All Topics


I need help building a proper rex expression to extract the hw_state temperature value (500) from the following raw data:

{"bootcount":8,"device_id":"XXXX","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC","local_time":"2025-02-20T00:34:58.406-06:00", "location":{"city":"XXXX","country":"XXXX","latitude":XXXX,"longitude":XXXX,"state":"XXXX"},"log_level":"info", "message":"martini::hardware_controller: Unit state update from cook client target: Elements(temp: 500°F, [D, D, D, D, D, F: 0]), hw_state: Elements(temp: 500°F, [D, D, D, D, D, F: 115])\u0000", "model_number":"XXXX","sequence":372246,"serial":"XXXX","software_version":"2.3.0.276","ticks":0,"timestamp":1740033298,"timestamp_ms":1740033298406}

I have tried:

rex field=message "(?=[^h]*(?:hw_state:|h.*hw_state:))^(?:[^\(\n]*\(){2}\w+:\s+(?P<set_temp>\d+)"
rex field=message ".*hw_state: Elements\(temp:(?<set_temp>\d+),.*"

but neither yields any results. What is the proper rex expression to extract 500 from the message field?
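One possible rex that should pull the 500 out of the hw_state portion (a sketch, assuming the temperature is always a whole number of degrees; set_temp is just an example field name):

| rex field=message "hw_state: Elements\(temp:\s*(?<set_temp>\d+)"

The second attempt above fails because there is a space after "temp:" and the digits are followed by "°F" rather than directly by a comma, so "temp:(?<set_temp>\d+)," never matches.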
Hello, Does anyone know when this app will become cloud compliant?
Hi Everyone, can someone please help me with a Splunk query to show the difference between two values in a timechart over a period of time? The sample result I get from the query has Column A with the time range and the other columns with the respective host values:

Column A      Column B   Column C   Column D
02/22/2025    10         12         14
02/23/2025    11         13         15
02/24/2025    12         15         17
02/25/2025    16         20         21

I need the difference of the values in Columns B, C and D from the previous time period, shown in one timechart. Let me know if any other details are required.
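A minimal sketch of one way to do this with streamstats; the index, the latest(...) aggregations and the field names valueB/valueC/valueD are placeholders for whatever your existing search produces as Columns B, C and D:

index=your_index
| timechart span=1d latest(valueB) as ColumnB latest(valueC) as ColumnC latest(valueD) as ColumnD
| streamstats current=f window=1 last(ColumnB) as prevB last(ColumnC) as prevC last(ColumnD) as prevD
| eval diffB=ColumnB-prevB, diffC=ColumnC-prevC, diffD=ColumnD-prevD
| table _time diffB diffC diffD

Each diff* column then holds the change from the previous time bucket and can be charted directly (the first bucket has no previous value, so its diffs are empty).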
How can I repair a data input's index setting back to its normal state? I created a data input as per my technical add-on, then for some reason changed the index in inputs.conf to a new index, which apparently doesn't work in Splunk 9.3 even though I created the new index from the UI. Later I changed the index back to the original, but somehow that data input is stuck and never executes at all. I tried reinstalling my TA app and restarting Splunk multiple times, but no luck, and there are no errors in splunkd.log. The same scenario happened at the client's end. Can anybody please guide me on this repair, or on what the RCA might be, given that we reverted all inputs to normal?
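A couple of checks that sometimes narrow this down (a sketch; "your_input_stanza" and "your_input_name" are placeholders, not your actual configuration):

splunk btool inputs list --debug | grep -A 10 "your_input_stanza"
splunk btool check

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) "your_input_name"

btool shows which copy of inputs.conf actually wins after the TA reinstall (a stale local/ copy overriding default/ is a common culprit), and the _internal search surfaces input warnings or errors that are easy to miss when tailing splunkd.log directly.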
Hello, I am having trouble onboarding JSON array data. I have read many contributions, but I am still having trouble. This is the JSON array input:

[{"Type":"SUGUpdates","SiteCode":"DS","SUGName":"Microsoft-W2KX-2025 2025-10-14 23:05:36","ArticleID":"5049994"},{"Type":"SUGUpdates","SiteCode":"CSA","SUGName":"Microsoft-W2KX-2025 2025-01-14 23:05:36","ArticleID":"5050008"},{"Type":"SUGUpdates","SiteCode":"CSA","SUGName":"Microsoft-W2KX-2025 2025-01-14 23:05:36","ArticleID":"5002674"},{"Type":"SUGUpdates","SiteCode":"CSA","SUGName":"Microsoft-W2KX-2025 2025-01-14 23:05:36","ArticleID":"5050525"},{"Type":"SUGUpdates","SiteCode":"CSA","SUGName":"Microsoft-W2KX-2025 2025-01-14 23:05:36","ArticleID":"5050525"}]

My first attempt: I put a props.conf on the UF:

DATETIME_CONFIG=CURRENT
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
INDEXED_EXTRACTIONS=json
KV_MODE=none
AUTO_KV_JSON = false
category=Structured

The data was nicely split into separate JSON events, but the table command doubled the data, like in these issues:
https://community.splunk.com/t5/Splunk-Cloud-Platform/Why-does-json-data-get-duplicated-after-tabling-the-events/m-p/587724
https://community.splunk.com/t5/Getting-Data-In/Why-is-my-sourcetype-configuration-for-JSON-events-with-INDEXED/td-p/188551?_ga=2.153916656.937356172.1646061092-893813366.1631658459

Then I moved the props.conf into the index cluster. Now the _raw event is the same as the input array and is not split into separate JSON events. I have to use the spath command during search as a workaround. So I can work around the issue, but I'd rather import the data correctly. Where am I going wrong? Any help is appreciated. Regards, Harry
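One pattern that has worked for this kind of duplication (a sketch; my_json_array is a placeholder sourcetype name): keep INDEXED_EXTRACTIONS=json on the UF, where the structured data is actually read, and put a props.conf stanza for the same sourcetype on the search head(s) to switch off search-time JSON extraction, so the fields are not created twice:

# props.conf on the UF
[my_json_array]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
DATETIME_CONFIG = CURRENT

# props.conf on the search head(s) for the same sourcetype
[my_json_array]
KV_MODE = none
AUTO_KV_JSON = false

The doubling usually comes from the same fields being extracted both at index time (INDEXED_EXTRACTIONS) and again at search time; KV_MODE and AUTO_KV_JSON are search-time settings, so placing them only on the UF has no effect.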
We have a dashboard a user is having a problem with, which I have been able to replicate some of the time. They have a link to a dashboard with selected values in the URL, but they get a 'Cannot create search' error on one of the dropdowns because it doesn't populate with the value assigned to the field in the URL. When using debug I can see that instead of the selected value, the field name is there, e.g. form.product_label. Of course, if you select a value from the product dropdown, the dashboard populates properly. In Chrome the dashboard seems to auto-populate correctly from the URL most of the time.

Since I have admin access and the issue happens for me about 60% of the time, I don't think it's a permissions problem. The user has read-only access to the dashboard. A colleague of theirs supposedly doesn't have any issues, but I haven't been able to confirm what browser they're using. The user with the issue states they're using Chrome and apparently it's not loading for them at all. I've asked them to try refreshing the page, which they say hasn't worked either. Since no actual search is run, I can't inspect the job, because there isn't one.

When checking the console I can see this deprecation log; I haven't found anything online to confirm whether it could be the cause of the problem:

[Deprecation] Listener added for a 'DOMSubtreeModified' mutation event. Support for this event type has been removed, and this event will no longer be fired. See URL for more information.

My questions are whether the deprecation warning could explain why the dashboard isn't loading properly, and if not, what else I might be missing. Thanks.
Hi there, I'm trying to create a new role that gives some users (who have the basic inherited role) permission to edit other users' dashboards and to make their own dashboards public. I have tried various ways to solve this, such as adding specific capabilities, without any success. Any help?
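One thing that may be worth checking (a sketch; dashboard_editor is a hypothetical role name): sharing and editing dashboards in an app is governed by the app's object metadata as well as by role capabilities, so the role usually needs write access granted in metadata/local.meta of the app that holds the dashboards:

# metadata/local.meta in the app containing the dashboards
# app-level write access is what lets users share (make public) their own objects
[]
access = read : [ * ], write : [ admin, dashboard_editor ]

# write access on views is what lets the role edit other users' shared dashboards
[views]
access = read : [ * ], write : [ admin, dashboard_editor ]

Individual dashboards that were already shared keep their own per-object permissions, which may also need to grant the new role write access.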
Hi Team, can you please let me know if it is possible to display different CSV files based on the drilldown value selected in a parent table?

Example: I have a search panel with the below drilldown that sets the value of the application clicked in the parent dashboard:

<drilldown>
<condition match="isnotnull($raw_hide$)">
<unset token="raw_hide"></unset>
<unset token="raw_APPLICATION"></unset>
</condition>
<condition>
<set token="raw_hide">true</set>
<set token="raw_APPLICATION">$row.APPLICATION$</set>
</condition>
</drilldown>

Based on the value of APPLICATION clicked in the parent dashboard, I want to display the corresponding CSV:
If Application = "X", I want to use the command | inputlookup append=t X.csv
If Application = "Y", I want to use the command | inputlookup append=t Y.csv
If Application = "Z", I want to use the command | inputlookup append=t Z.csv

OR, is it possible to display three different panels based on the APPLICATION selected in the parent dashboard? I.e., based on the value of the token set in the <drilldown> of the parent dashboard, can we display different panels using <panel depends="$tokenset$">?
Panel 1 using X.csv: <panel depends="$tokensetX$">
Panel 2 using Y.csv: <panel depends="$tokensetY$">
Panel 3 using Z.csv: <panel depends="$tokensetZ$">
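Rather than three separate panels, a single panel whose lookup name is driven directly by the token may be enough (a sketch, assuming the lookup files are named exactly X.csv, Y.csv and Z.csv to match the APPLICATION values):

<panel depends="$raw_APPLICATION$">
  <table>
    <title>Details for $raw_APPLICATION$</title>
    <search>
      <query>| inputlookup append=t $raw_APPLICATION$.csv</query>
    </search>
  </table>
</panel>

Because the panel depends on $raw_APPLICATION$, it only appears once a row has been clicked, and the token substitution selects the matching CSV. The three-panel variant with <panel depends="$tokensetX$"> etc. also works if the drilldown sets a separate token per application.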
Hi everyone, does anybody know if there is a possibility to set the dropdown width of an input in Dashboard Studio? This wasn't a big deal with Simple XML and CSS in a classic dashboard. Thanks to all in advance.
I am trying to extract a field at index time. I have put the following on my cluster master and pushed it to the indexers, but the field is not getting extracted.

transforms.conf
[idname_extract]
SOURCE_KEY = _raw
REGEX = rsaid="\/[^\/]+\/([^\/]+)\/
FORMAT = idname::$1
WRITE_META = true

props.conf
TRANSFORMS-1_extract_idname = idname_extract

The field is not extracted once indexing is done. But when I run the extraction at search time it works, which means my regex is correct, yet at index time it fails:

|rex "rsaid=\"\/[^\/]+\/(?<idname>[^\/]+)\/"

Raw field:
rsaid="/saturn-X-01/SATURN-CK-GSPE/v-saturn.linux.com-44"

I need to extract idname=SATURN-CK-GSPE at index time. Am I missing something?
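For what it's worth, the two things that most often stop an otherwise correct index-time transform from working are (a) the TRANSFORMS line not sitting under the right props.conf stanza and (b) a missing fields.conf entry, without which searches do not treat idname as an indexed field. A sketch, with your_sourcetype as a placeholder:

# props.conf (on the indexers, or on a heavy forwarder if one parses the data first)
[your_sourcetype]
TRANSFORMS-1_extract_idname = idname_extract

# transforms.conf (unchanged from your version)
[idname_extract]
SOURCE_KEY = _raw
REGEX = rsaid="\/[^\/]+\/([^\/]+)\/
FORMAT = idname::$1
WRITE_META = true

# fields.conf (on the search head(s))
[idname]
INDEXED = true

Also note that index-time settings only apply to data indexed after the change; events that were already indexed will never show the new field.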
Hello, I have XML messages in search, with rows like this:

<log><local_time>2025-02-25T15:02:59:955059+05:00</local_time><bik>ATYNKZKA</bik><fileName>stmt_3110449968.pdf</fileName><size>555468</size><iin>800716350670</iin><agrementNumber>3110449968</agrementNumber><agrementDate>08.11.2011</agrementDate><referenceId>HKBRZA0000388473</referenceId><bankCode>ALTYNDBZ</bankCode><result>OK</result></log>
<log><local_time>2025-02-25T15:02:59:885557+05:00</local_time><bik>ATYNKZKA</bik><fileName>stmt_dbz.pdf</fileName><size>152868</size><iin>840625302683</iin><agrementNumber>4301961740</agrementNumber><agrementDate>21.06.2023</agrementDate><referenceId>HKBRZA0000388476</referenceId><bankCode>ALTYNDBZ</bankCode><result>OK</result></log>

After searching, the fields '_time' and 'log.local_time' show the date and time with seconds and fractional parts, which seems to be OK. But when I build a timechart, it seems that timechart only knows the hours and ignores the minutes and seconds; my span=5m is ignored. For me it is fine to use either _time or log.local_time. I have tried various ways to parse with strptime, but without success. Thanks.
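If _time is actually being truncated to the hour at index time, one hedged workaround is to rebuild _time at search time from log.local_time before the timechart (a sketch; the index/sourcetype names are placeholders, and note the unusual colon between the seconds and the microseconds in your timestamps, which the format string below assumes):

index=your_index sourcetype=your_xml_sourcetype
| eval _time=strptime('log.local_time', "%Y-%m-%dT%H:%M:%S:%6N%:z")
| timechart span=5m count

If that fixes the chart, the underlying cause is timestamp parsing at index time, and the same format string could be set as TIME_FORMAT (with TIME_PREFIX = <local_time>) in props.conf for the sourcetype.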
Hello Splunkers! I am writing to bring to your attention a critical issue we are experiencing following our recent migration of Splunk from version 8.1.1 to 9.1.1. During routine operations, specifically while attempting to schedule reports from the dashboard using the noop command, we encounter a FATAL error with a message indicating a "bad allocation":

Server reported HTTP status=400 while getting mode=results: bad allocation

Please help me get this fixed.
Hello, recently I moved the ES app from one SH to another, non-clustered SH. Since then this error keeps appearing: Error in 'DataModelCache': Invalid or unaccelerable root object for datamodel
We have an index named ABC with a searchable retention period of 180 days and an archival period of 3 years. I would like to transfer all logs to AWS S3, as they are currently stored in Splunk Archive storage. Could you please advise on how to accomplish this? Additionally, will this process include moving both searchable logs and archived logs to S3? I would appreciate a step-by-step guide. If anyone has knowledge of this process, I would be grateful for your assistance. Thank you.
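There is no single built-in "move to S3" switch for archived data, but one commonly used pattern (a sketch; the paths, bucket name and the 180-day value are assumptions/placeholders) is to let Splunk copy buckets to a frozen archive directory when they age out of the searchable window, and then sync that directory to S3:

# indexes.conf for the ABC index
[ABC]
frozenTimePeriodInSecs = 15552000          # 180 days of searchable data
coldToFrozenDir = /opt/splunk/frozen/ABC   # aged-out (archived) buckets are copied here

# cron job or script on the indexer(s), using the AWS CLI (hypothetical bucket name)
aws s3 sync /opt/splunk/frozen/ABC s3://my-splunk-archive/ABC/

With this approach, searchable data stays in Splunk until it crosses the 180-day boundary; only the frozen/archived copies end up in S3, and buckets already sitting in your current archive location can be synced to S3 the same way.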
Recently I migrated ES from one SH to another, non-clustered SH. This error kept appearing in panels of the ES app: Error in 'DispatchManager': The user 'admin' does not have sufficient search privileges. To resolve this I searched for the error and found a suggestion to remove owner=admin from the default.meta file. That worked for some panels, but other panels still show this error.
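If the remaining panels still reference objects owned by 'admin', one hedged alternative to hand-editing default.meta is to reassign those knowledge objects via the Reassign Knowledge Objects page under Settings, or to override the owner in local.meta instead, for example:

# metadata/local.meta in the relevant ES app (the stanza name below is only an example)
[savedsearches/Some_ES_Correlation_Search]
owner = nobody

Owner 'nobody' is the usual convention for app-shared objects and avoids the search running as a user that no longer has the needed privileges on the new search head.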
Hi, I have this Splunk SPL:

index=EventViewer source="WinEventLog:Application" SourceName=sample | table host Name, Description, Location

Name, Description, and Location are all multivalue fields that correspond directly to each other. Here is a sample for one of the hosts:

Name    Description     Location
name1   description1    location1
name2   description2    location2
name3   description3    location3
name4   description4    location4

What I am trying to do is show each record for each host in a separate row. I cannot use mvexpand because there are millions of events and it causes the results to be truncated, with the following warning:

command.mvexpand: output will be truncated at 35500 results due to excessive memory usage.

I cannot adjust this memory limit in limits.conf, so I need an alternative way to display each record in an individual row.
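One common mitigation (a sketch, not guaranteed to stay under the limit with millions of events): drop every field you do not need, zip the three multivalue fields into a single field, expand only that one field, and split it back apart afterwards. Expanding a single slim field uses far less memory than expanding full events:

index=EventViewer source="WinEventLog:Application" SourceName=sample
| fields host Name Description Location
| eval zipped=mvzip(mvzip(Name, Description, "|||"), Location, "|||")
| fields host zipped
| mvexpand zipped
| eval parts=split(zipped, "|||"), Name=mvindex(parts,0), Description=mvindex(parts,1), Location=mvindex(parts,2)
| table host Name Description Location

If even that truncates, running the search over smaller time windows (or as a scheduled search that appends to a summary index or lookup) is the usual fallback.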
I have a field message in _raw that looks something like this:

"message":"test::hardware_controller: Unit state update from cook client target: Elements(temp: -, [F: 255, F: 255, F: 255, F: 255, F: 255, F: 255]), hw_state: Elements(temp: -, [F: 255, F: 255, F: 255, F: 255, F: 255, F: 255])"

I am looking to search for messages containing the phrase "Unit state update from cook client target", but when I search:

index="sample_idx" $serialnumber$ log_level=info message=*Unit state update from cook client target*

this returns no results, even though I know events containing the wildcard phrase are present within the queried index and timeframe.
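A likely cause is that the wildcard value contains spaces but is not quoted, so everything after the first space is treated as separate search terms rather than as part of the message filter. Quoting the value (a sketch, keeping your token and index as-is) may be all that is needed:

index="sample_idx" $serialnumber$ log_level=info message="*Unit state update from cook client target*"

If message is extracted from _raw at search time, this should match; if it still returns nothing, it is worth checking whether the field is actually named message or is nested (for example, extracted from a JSON payload under a different name).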
Hello, I have a Dashboard Studio dashboard (Splunk 9.2.3) with a pair of dropdown inputs ("Environment" and "Dependent Dropdown"). The first dropdown, "Environment", has a static list of items ("Q1", "Q2", "Q3", "Q4"). The items for the second dropdown, "Dependent Dropdown", come from a datasource that dynamically sets the items based on the token set by "Environment". For example, when "Environment" is set to "Q2", the items for "Dependent Dropdown" are ("DIRECT", "20", "21", "22", "23"). For each selection of "Environment", the list of items of "Dependent Dropdown" begins with the value "DIRECT".

The behavior I am trying to achieve is that when a selection is made in "Environment", the selection in "Dependent Dropdown" is set to the first item (i.e., "DIRECT") of the newly set item list determined by the selection for "Environment".

I have tried using the configuration user interface for "Dependent Dropdown" to set "Default selected values" to "First value". However, when I follow these steps, the resulting value in "Dependent Dropdown" is "Select a value":

1. Select "Q2" for "Environment"
2. Select "21" for "Dependent Dropdown"
3. Select "Q1" for "Environment"

The result is that "Dependent Dropdown" shows "Select a value". I would like "Dependent Dropdown" to show the intended default value (the first value of the item list) of "DIRECT". How can this be achieved?

Thank you in advance for responses, Erik

(Source of the example dashboard included)

{
  "visualizations": {},
  "dataSources": {
    "ds_ouyeecdW": {
      "type": "ds.search",
      "options": {
        "enableSmartSources": true,
        "query": "| makeresults\n| eval env = \"$env$\"\n| eval dependent=case(env=\"Q1\", \"DIRECT\", env=\"Q2\", \"DIRECT;20;21;22;23\", env=\"Q3\", \"DIRECT;30;31\", env=\"Q4\", \"DIRECT;40;41;42;43;44\")\n| makemv dependent delim=\";\"\n| mvexpand dependent\n| table dependent"
      },
      "name": "Search_Dependent_Dropdown"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_ZIhOcc3q": {
      "options": {
        "items": [
          { "label": "Q1", "value": "Q1" },
          { "label": "Q2", "value": "Q2" },
          { "label": "Q3", "value": "Q3" },
          { "label": "Q4", "value": "Q4" }
        ],
        "token": "env",
        "selectFirstSearchResult": true
      },
      "title": "Environment",
      "type": "input.dropdown"
    },
    "input_1gjNEk0A": {
      "options": {
        "items": [],
        "token": "dependent",
        "selectFirstSearchResult": true
      },
      "title": "Dependent Dropdown",
      "type": "input.dropdown",
      "dataSources": {
        "primary": "ds_ouyeecdW"
      }
    },
    "input_Ih820ou2": {
      "options": {
        "defaultValue": "-24h@h,now",
        "token": "global_time"
      },
      "title": "Time Range Input Title",
      "type": "input.timerange"
    }
  },
  "layout": {
    "type": "grid",
    "structure": [],
    "globalInputs": [
      "input_Ih820ou2",
      "input_ZIhOcc3q",
      "input_1gjNEk0A"
    ]
  },
  "description": "",
  "title": "Dependent Dropdown Example"
}
I am running AppDynamics on-premises 24.4.2. I am able to import custom dashboards on the fly, but I am unable to export the dashboard share URL once it has been created and shared manually in the console. I assumed the configuration would be part of the exported JSON file from a dashboard that had already been shared and was working, but I do not see it anywhere. Is there an API to:
Share a dashboard programmatically
Export/GET the URL once it has been shared
Hello, I am facing the error below on a Splunk instance:

The percentage of non high priority searches delayed (27%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=18. Total delayed Searches=5
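To see which scheduled searches are being delayed, something like this against the scheduler logs is a reasonable starting point (a sketch; the status and reason field values can differ slightly between Splunk versions):

index=_internal sourcetype=scheduler status=delayed
| stats count by app, savedsearch_name, reason
| sort - count

Common causes are many searches sharing the same cron schedule, searches that run longer than their scheduling interval, or the instance hitting its concurrent search limits; staggering schedules or raising the priority of the critical searches usually brings the percentage back under the threshold.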