All Posts

Yes, there is a write-up here: https://splunk.my.site.com/customer/s/article/CMC
Hi Cievo, Is it possible to have a different threshold for each field value? "PERCENT_FREE" in the query is actually a static value/threshold which is not calculated from "GB_USED" and "GB_FREE"; "CALCULATED_PERCENT_FREE" is the calculated value. So I would like each cell in the "Free Space (%)" column to change color according to:

If CALCULATED_PERCENT_FREE >= PERCENT_FREE -> cell goes green
If CALCULATED_PERCENT_FREE = PERCENT_FREE - 1 -> cell goes amber
If CALCULATED_PERCENT_FREE < PERCENT_FREE - 1 -> cell goes red

(The original query did not contain the amber clause, but I plan to add it once I get the functionality working.)
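For reference, a minimal sketch of that banding in SPL, assuming both fields are numeric (the field name "free_space_status" is hypothetical, and the >= band behaves the same as the exact-equality amber rule for whole-number percentages):

| eval free_space_status = case(
    CALCULATED_PERCENT_FREE >= PERCENT_FREE, "green",
    CALCULATED_PERCENT_FREE >= PERCENT_FREE - 1, "amber",
    true(), "red")

Because case() evaluates its clauses in order, the amber branch only catches rows that already failed the green test.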
Hello,

I have a Dashboard Studio dashboard (Splunk 9.2.3) with a pair of dropdown inputs ("Environment" and "Dependent Dropdown"). The first dropdown, "Environment", has a static list of items ("Q1", "Q2", "Q3", "Q4"). The second dropdown, "Dependent Dropdown", has a datasource which dynamically sets its items based on the token set by "Environment". For example, when "Environment" is set to "Q2", the items for "Dependent Dropdown" are ("DIRECT", "20", "21", "22", "23"). For every selection of "Environment", the item list of "Dependent Dropdown" begins with the value "DIRECT".

The behavior I am trying to achieve is that when a selection is made in "Environment", the selection in "Dependent Dropdown" is set to the first item (i.e., "DIRECT") of the newly set item list determined by the "Environment" selection.

I have tried using the configuration user interface for "Dependent Dropdown" to set "Default selected values" to "First value". However, when I follow these steps, the resulting value in "Dependent Dropdown" is "Select a value":

1. Select "Q2" for "Environment"
2. Select "21" for "Dependent Dropdown"
3. Select "Q1" for "Environment"

The result is that "Dependent Dropdown" shows "Select a value". I would like "Dependent Dropdown" to show the intended default value (the first value of the item list), "DIRECT".

How can this be achieved?

Thank you in advance for responses,
Erik

(Source of example dashboard included)

{
  "visualizations": {},
  "dataSources": {
    "ds_ouyeecdW": {
      "type": "ds.search",
      "options": {
        "enableSmartSources": true,
        "query": "| makeresults\n| eval env = \"$env$\"\n| eval dependent=case(env=\"Q1\", \"DIRECT\", env=\"Q2\", \"DIRECT;20;21;22;23\", env=\"Q3\", \"DIRECT;30;31\", env=\"Q4\", \"DIRECT;40;41;42;43;44\")\n| makemv dependent delim=\";\"\n| mvexpand dependent\n| table dependent"
      },
      "name": "Search_Dependent_Dropdown"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_ZIhOcc3q": {
      "options": {
        "items": [
          { "label": "Q1", "value": "Q1" },
          { "label": "Q2", "value": "Q2" },
          { "label": "Q3", "value": "Q3" },
          { "label": "Q4", "value": "Q4" }
        ],
        "token": "env",
        "selectFirstSearchResult": true
      },
      "title": "Environment",
      "type": "input.dropdown"
    },
    "input_1gjNEk0A": {
      "options": {
        "items": [],
        "token": "dependent",
        "selectFirstSearchResult": true
      },
      "title": "Dependent Dropdown",
      "type": "input.dropdown",
      "dataSources": {
        "primary": "ds_ouyeecdW"
      }
    },
    "input_Ih820ou2": {
      "options": {
        "defaultValue": "-24h@h,now",
        "token": "global_time"
      },
      "title": "Time Range Input Title",
      "type": "input.timerange"
    }
  },
  "layout": {
    "type": "grid",
    "structure": [],
    "globalInputs": [
      "input_Ih820ou2",
      "input_ZIhOcc3q",
      "input_1gjNEk0A"
    ]
  },
  "description": "",
  "title": "Dependent Dropdown Example"
}
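One thing that may be worth trying (an untested sketch, not a confirmed fix): Dashboard Studio dropdown inputs also accept a "defaultValue" option, so giving "Dependent Dropdown" an explicit defaultValue of "DIRECT" may let the token fall back to "DIRECT" whenever a change to "Environment" resets the selection:

"input_1gjNEk0A": {
  "options": {
    "items": [],
    "token": "dependent",
    "defaultValue": "DIRECT",
    "selectFirstSearchResult": true
  },
  "title": "Dependent Dropdown",
  "type": "input.dropdown",
  "dataSources": {
    "primary": "ds_ouyeecdW"
  }
}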
Hi ITWhisperer, It's still not coloring the cells unfortunately. I've tried your suggestion and also modified it to try to get it to work. I noticed that you combined the last two fields to create a single column:

| table ... "Free Space (%) _color"

Was that intentional? I tried making them separate fields in the table command, but that didn't work either:

| table ... "Free Space (%)" "_color"

Also, is the method by which I'm selecting the "color/_color" variable in the done handler and then referencing it in the <colorPalette> tag correct? When I hardcode a value into the <colorPalette> tag it works fine, but I need it to use the value of "color/_color".
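For comparison, a minimal Simple XML sketch that sidesteps the done handler entirely, assuming the band is first computed in the search as a status string (the "status" field name and hex colors are illustrative, not from this thread; the eval would mirror the case() sketch earlier in this feed):

| table ... "Free Space (%)" status

<format type="color" field="status">
  <colorPalette type="map">{"OK":#53A051,"WARN":#F8BE34,"CRIT":#DC4E41}</colorPalette>
</format>

The trade-off is that the map palette colors the status column by its own value rather than coloring the raw "Free Space (%)" cell.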
I am running AppDynamics OnPrem 24.4.2. I am able to import custom dashboards on the fly, but I am unable to export the dashboard share URL once it is created and shared manually in the console. I was assuming that the configuration would be part of the exported JSON file from a dashboard that had already been shared and was working, but I do not see it anywhere. Is there an API to:

1. Share a dashboard programmatically
2. Export/GET the URL once it has been shared
This worked like a charm! Thank you!
Hi @jcorcorans I haven't discovered any great way to parse the chef-client.log, but a few things can help.

1) Look for the log_level when it isn't INFO/WARN:

[2025-02-24T19:06:07+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report

2) For log rotation, I see we have directives in /etc/logrotate.d/chef-client:

"/var/log/chef/client.log" {
  weekly
  rotate 12
  compress
  postrotate
    systemctl reload chef-client.service >/dev/null || :
  endscript
}

3) If you have a number of servers, run chef a lot, and want to know when to truly spend time debugging (since a chef operation can fail due to timeout or load), check over a time period whether things end up running okay. So we have something like this: if after 3 tries the chef run is still not good, then investigate.

index=your_index sourcetype=chef:client ("FATAL: Chef::Exceptions::ChildConvergeError:" OR "FATAL: Chef::Exceptions::ValidationFailed" OR "Chef run process exited unsuccessfully" OR "INFO: Chef Run complete" OR "INFO: Report handlers complete")
| eval chef_status=if(searchmatch("ERROR") OR searchmatch("FATAL"), "failed", "succeeded")
| stats count(eval(chef_status="failed")) AS num_failed, count(eval(chef_status="succeeded")) AS num_succeeded, latest(chef_status) as latest_chef_status by host
| search num_failed > 3 AND latest_chef_status!="succeeded"

To monitor the logs, a simple monitor stanza in your inputs:

[monitor:///var/log/chef/client.log]
sourcetype = yourchefsourcetype
index = your_index
Hello, I faced the below ERROR:

The percentage of non high priority searches delayed (27%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=18. Total delayed Searches=5

Search for the result:
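As a starting point for investigating (a hedged sketch, assuming the default internal logs are searchable; the exact status values vary by Splunk version), the scheduler's own log records how each scheduled search run ended:

index=_internal sourcetype=scheduler earliest=-24h
| stats count by status

Comparing the counts by status over the same 24-hour window should show whether the delayed runs are a sustained pattern (typically scheduler saturation, or too many searches landing on the same cron schedule) or a one-off spike.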
I have a chef automate logger script in Python. It's using Python libraries. The log rotation is not working; what logging modules or classes should I be looking at? Data is coming in, however the log is not rotating. The logic is in the Chef script. Any ideas?
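For what it's worth, a minimal sketch of rotation using Python's standard library (the path and size limits here are hypothetical, assuming the script controls its own log file):

import logging
from logging.handlers import RotatingFileHandler

# Rotate when the file reaches ~5 MB, keeping 3 old copies
# (automate_logger.log.1, .2, .3).
handler = RotatingFileHandler(
    "/var/log/chef/automate_logger.log",
    maxBytes=5 * 1024 * 1024,
    backupCount=3,
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("chef_automate")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("rotation handler attached")

One common gotcha: rotation only happens when every write goes through the rotating handler. If the script opens the file directly with open(), or another logger uses a plain FileHandler on the same path, the file keeps growing. logging.handlers.TimedRotatingFileHandler is the time-based alternative.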
The field extractor and erex commands tend to create overly complicated expressions.  This one should work. | rex field=message "percent: (?<gts_percent>\d+)"  
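Applied to the original search, that would look something like this (an untested sketch reusing the same base search; since \d+ stops before the trailing "%", the convert rmunit step becomes unnecessary):

index="june_analytics_logs_prod" $serial$ log_level=info message=*hardware_controller*
| rex field=message "percent: (?<gts_percent>\d+)"
| chart values(gts_percent) by _time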
Bulletin messages are found in the "Messages" dropdown. If the messages are no longer available, you can use the host, source and sourcetype values in lastchanceindex to find where the data is coming from and make the necessary corrections.
Thank you, where would that bulletin be found? We get a number of events into our lastchanceindex.
@ayomotukoya As @richgalloway said, lastchanceindex is pre-defined in Splunk Cloud: it accepts events sent to a non-existent index. So please create the index first before onboarding data to Splunk Cloud. For example, lastchanceindex catches the events from an input stanza whose index does not exist:

[Input Y]
index = $%^&*
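A sketch of the corrected counterpart (the stanza name and index name are placeholders): once the target index has been created in Splunk Cloud, the same input delivers its events there instead of falling through to lastchanceindex:

[Input Y]
index = my_existing_index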
I have a reliable base query to find events containing the information I want. I built a rex using the field extractor, but applying the rex expression in a search does not yield any results; the values(gts_percent) column is always blank.

Sample query:

index="june_analytics_logs_prod" $serial$ log_level=info message=*hardware_controller*
| rex field=message "(?=[^G]*(?:GTS weight:|G.*GTS weight:))^(?:[^\.\n]*\.){7}\d+\w+,\s+\w+:\s+(?P<gts_percent>\d+)"
| convert rmunit(gts_percent)
| chart values(gts_percent) by _time

Sample raw result:

{"bootcount":8,"device_id":"XXX","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC","local_time":"2025-02-20T00:47:48.124-06:00", "location":{"city":"XX","country":"XX","latitude":XXX,"longitude":XXX,"state":"XXX"}, "log_level":"info","message":"martini::hardware_controller: GTS weight: 17.05kg, tare weight: 8.1kg, net weight: 8.95kg, fill weight: 6.8kg, percent: 100%\u0000", "model_number":"XXX","sequence":403659,"serial":"XXX","software_version":"2.3.0.276","ticks":0,"timestamp":1740034068,"timestamp_ms":1740034068124}

I am trying to extract the percent value (the "100" in "percent: 100%") from the raw event. Where is my rex messing up?
There should be a bulletin message saying an event was put in lastchanceindex because the intended index doesn't exist.  Look for and correct the intended index name on the syslog server.
Hi @livehybrid thanks for the links. I'll add more details about the batches. They can be 1 minute in length up to several hours. They are not regular in length unfortunately; it depends on the process and numbers etc. There may be several batches in a day too, up to 50 on some days.

Looking at: https://docs.splunk.com/Documentation/Splunk/9.4.0/Knowledge/Defineatime-basedlookupinSplunkWeb
If we pre-set a lookahead time, this could be too short and give no ID, or too big and give multiple IDs?

Looking at: https://community.splunk.com/t5/Splunk-Search/How-to-configure-a-time-based-lookup-Temporal-lookup/m-p/367273
I can do a search for a single result using | inputlookup and | addinfo; that works fine. It's doing this in a FOR loop for each result that I'm stuck with.

I've tried this, but it feels very inefficient (a time-based lookup sketch follows below):

1. Add a column with just the date
2. Look up all IDs for the date
3. Use mvexpand to split multiple IDs into single events
4. Look up the start and finish times for each ID
5. Use where to filter on _time between start and finish
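For comparison, a hedged sketch of what the time-based lookup definition could look like in transforms.conf (the lookup name, file, and field names are placeholders, not from this thread):

[batch_id_lookup]
filename = batch_ids.csv
time_field = batch_start_time
time_format = %Y-%m-%d %H:%M:%S
min_offset_secs = 0
max_offset_secs = 86400

As I understand temporal lookups, each event matches the latest lookup row whose batch_start_time falls within the offset window before _time, so max_offset_secs has to cover the longest possible batch (a day here), and a single ID comes back per event rather than multiple.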
Try removing the back ticks when you are searching for a macro.
Check the permissions on the macro.
Use Ctrl+Shift+E while your cursor is in the search box to expand the macro and check it is doing what you expect.
My earlier reply was "marked as spam" by the message board. Let me try again. Thank you for the reply. That is the one thing I checked and double-checked in my attempt to fix my problem (the event data do not reach my Splunk Cloud instance). But that may not be the problem, as the message below (I removed the data payload and hostnames to sanitize it) indicates that an index is provided:

02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey. [_path] = /app/splunk/etc/apps/TA-json-modinput/bin/nix_input.py\n[python.version] = python3\n[_raw] = </data><done /></event><event stanza="nix_input://ni" unbroken="1"><source>nix_input://ni</source><sourcetype>hits:unix:hosts</sourcetype><index>test</index><data>{"hostname":"......",......}\n[_meta] = timestamp::none punct::"</><_/></><_=\"://\"_=\"\"><>://</><>::</><></><>{\"\":\""\n[_stmid] = GUsvaYoWsFPrNDD.H\n[MetaData:Source] = source::nix_input\n[MetaData:Host] = host::......\n[MetaData:Sourcetype] = sourcetype::nix_input\n[_linebreaker] = _linebreaker\n[_nfd] = _nfd\n[_charSet] = UTF-8\n[_time] = 1740117963\n[_conf] = source::nix_input|host::......|nix_input|28\n[_channel] = 28\n

Much appreciated!
Thank you much for the help. That is one thing I checked and double-checked in my attempts to fix the problem. But it may not be the case. In fact, the message shows that an index is provided in the event forwarding (I removed the payload and hostname to sanitize it):

02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey. [_path] = /app/splunk/etc/apps/TA-json-modinput/bin/nix_input.py\n[python.version] = python3\n[_raw] = </data><done /></event><event stanza="nix_input://ni" unbroken="1"><source>nix_input://ni</source><sourcetype>hits:unix:hosts</sourcetype><index>test</index><data>{"hostname":"......",......}\n[_meta] = timestamp::none punct::"</><_/></><_=\"://\"_=\"\"><>://</><>::</><></><>{\"\":\""\n[_stmid] = GUsvaYoWsFPrNDD.H\n[MetaData:Source] = source::nix_input\n[MetaData:Host] = host::......\n[MetaData:Sourcetype] = sourcetype::nix_input\n[_linebreaker] = _linebreaker\n[_nfd] = _nfd\n[_charSet] = UTF-8\n[_time] = 1740117963\n[_conf] = source::nix_input|host::splunkhf-prod02|nix_input|28\n[_channel] = 28\n

Since this is actually a WARN message, I now suspect this may be the reason my event data does not get into my Splunk Cloud instance (the TA runs on a heavy forwarder). Much appreciated!
Unfortunately, older versions are not Cloud compatible.