All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Archived data must be restored before it can be searched.
Although it is not documented, in 9.3.x/9.2.x/9.1.x you will find the following in etc/system/default/server.conf: [prometheus] disabled = true. It was added to server.conf to prevent unwanted memory growth caused by prometheus. The stanza was unintentionally removed in 9.4.0, so you should restore it.
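To restore it, a minimal sketch of the stanza in a local override (the path shown is the conventional local config location, not taken from the thread):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[prometheus]
disabled = true
```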
A real-time search runs continuously so matching events are returned as soon as they reach the indexer (before writing to disk). Ad-hoc searches can be real-time, but they are not equivalent.  "ad-hoc" refers to any non-scheduled search. Historical searches look back in time for matching events. Summarization searches aggregate results into a summary index for later processing. The screenshot shows the system to be nearly idle and must have been taken when the health indicator was green (or is about to turn green - it can take up to 24 hours for the health indicator to reset).
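For illustration, a search becomes real-time simply by using rt time modifiers (the index here is just an example):

```
index=_internal earliest=rt-1m latest=rt
```

The same search with earliest=-1m latest=now would be a historical search, returning only events already indexed.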
I noticed that https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Serverconf does not mention prometheus. Is this an undocumented feature that is getting disabled to prevent a memory leak issue?
Linear memory growth occurs on any Splunk instance configured to receive data on splunktcpin, tcpin, or udpin ports. The following config in server.conf will fix the memory growth: [prometheus] disabled = true
Hi guys! I think this screenshot describes my problem pretty well. I just tried to play around with ChatGPT and Splunk but I didn't succeed. Does someone know what to do with this error message? Please help me out here. Best regards
| makeresults | eval value = 36 | eval display = "the total percentage is ".value." %" | fields - value
Does this give you what you want? | spath properties | spath input=properties attributes | spath input=attributes
Right now, I am just looking to drop/discard data > 512k. If I can get this working, we may refine. Now, when you refer to "sourcetype", is that "httpevent" (referring to all defined HECs), or is that the name of the defined event collector (in my example, "event collector 1")?
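As an aside, if truncating (rather than fully discarding) oversized events would be acceptable, a props.conf sketch along these lines caps events at 512KB - the stanza name is a placeholder for whatever sourcetype your HEC token assigns:

```
# props.conf - sketch; [your_hec_sourcetype] is a hypothetical placeholder
[your_hec_sourcetype]
# TRUNCATE is in bytes; 524288 = 512KB (0 would disable truncation)
TRUNCATE = 524288
```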
I have a field coming from a calculation, like a percentage field. The request from the user is to display it in text format: | makeresults | eval value = 36 | eval display = "the total percentage is $value$ %" | fields - value How can I display "the total percentage is 36 %"?
Yes, this works. Do you know how to link this table dynamically to reports? I am trying to convert this to Dashboard Studio and I do not see an option to link reports dynamically. I know you can set a token and then set a custom link, but the problem here is that the values have spaces in them, which the URL does not recognize by default. $rn$ in the example below has spaces in it, which the URL doesn't recognize in Dashboard Studio.     <table> <title>test</title> <search> <query>index=_internal | stats count by savedsearch_name status </query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> </search> <option name="drilldown">row</option> <option name="link.inspectSearch.visible">0</option> <option name="link.openSearch.visible">0</option> <drilldown> <set token="rn">$row.savedsearch_name$</set> <link> <![CDATA[/app/search/report?s=$rn$ ]]> </link> </drilldown> </table>
After running the existing version of the app through AppInspect, you can see it's running an old version of the Splunk SDK (1.6.18), which (from personal experience) can prevent you from getting cloud compatibility!   Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @_joe  It looks like the latest version of the TA is not cloud compatible; however, it's unclear if the app owner is trying to resolve these issues. There is often a few weeks' delay getting new apps approved for Splunkbase with cloud compatibility, but I see the last version was in August last year. It might be worth contacting the app devs (splunk-support@zscaler.com) to see if they can share some more information on when this might be cloud-compatible so that you can get it installed. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @nkavouris  Is it just "500" you are expecting to extract? Your second regex looks pretty close, but you are missing the \s for the space after : and before 500 - plus a couple of other tweaks. Does this work for you? | rex field=message "Elements\(temp:\s(?<set_temp>\d+)" Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @mayurr98  Unfortunately, what you are trying to achieve isn't supported. You cannot reference row.<fieldName>.value in a splunk.singlevalue viz (see https://docs.splunk.com/Documentation/Splunk/9.4.0/DashStudio/tokens#:~:text=the%20location%20clicked-,splunk.singlevalue,-Field%20name%20of). You can only use "name" or "value". *HOWEVER* - you could do something clever with "name", such as below, setting the name to the method value (in this example) - so the single value becomes GET=593,453, and clicking on it sets $method$=GET:   { "title": "Set Tokens on Click - Example", "description": "", "inputs": { "input_global_trp": { "options": { "defaultValue": "-24h@h,now", "token": "global_time" }, "title": "Global Time Range", "type": "input.timerange" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "earliest": "$global_time.earliest$", "latest": "$global_time.latest$" } } } } }, "visualizations": { "viz_column_chart": { "containerOptions": {}, "dataSources": { "primary": "ds_qBGlESX2" }, "eventHandlers": [ { "type": "drilldown.setToken", "options": { "tokens": [ { "key": "name", "token": "method" } ] } } ], "showLastUpdated": false, "showProgressBar": false, "title": "HTTP Request Method", "type": "splunk.singlevalue", "options": { "majorValue": "> sparklineValues | lastPoint()", "trendValue": "> sparklineValues | delta(-2)" } }, "viz_pie_chart": { "dataSources": { "primary": "ds_c8AfQapt" }, "title": "Response Codes for Method $method$", "type": "splunk.pie" } }, "dataSources": { "ds_c8AfQapt": { "name": "Search_2", "options": { "query": "index=_internal method=$method$ | stats count by status" }, "type": "ds.search" }, "ds_qBGlESX2": { "name": "Search_1", "options": { "enableSmartSources": true, "query": "index=_internal method=GET | stats count by method\n| eval {method}=count \n| fields - method count" }, "type": "ds.search" } }, "layout": { "type": "absolute", "options": { "width": 1440, "height": 960, "display": 
"auto" }, "structure": [ { "item": "viz_column_chart", "type": "block", "position": { "x": 0, "y": 0, "w": 250, "h": 250 } }, { "item": "viz_pie_chart", "type": "block", "position": { "x": 390, "y": 0, "w": 250, "h": 250 } } ], "globalInputs": [ "input_global_trp" ] } }  Is this what you are looking for? Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hello, I am trying to drill down on a single value panel but it's not working. It looks simple, but I am not sure what I am doing wrong. Here is my source code:         { "title": "Set Tokens on Click - Example", "description": "", "inputs": { "input_global_trp": { "options": { "defaultValue": "-24h@h,now", "token": "global_time" }, "title": "Global Time Range", "type": "input.timerange" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "earliest": "$global_time.earliest$", "latest": "$global_time.latest$" } } } } }, "visualizations": { "viz_column_chart": { "containerOptions": {}, "context": {}, "dataSources": { "primary": "ds_qBGlESX2" }, "eventHandlers": [ { "options": { "tokens": [ { "key": "row.method.value", "token": "method" } ] }, "type": "drilldown.setToken" } ], "options": {}, "showLastUpdated": false, "showProgressBar": false, "title": "HTTP Request Method", "type": "splunk.singlevalue" }, "viz_pie_chart": { "dataSources": { "primary": "ds_c8AfQapt" }, "title": "Response Codes for Method $method$", "type": "splunk.pie" } }, "dataSources": { "ds_c8AfQapt": { "name": "Search_2", "options": { "query": "index=_internal method=$method$ | stats count by status" }, "type": "ds.search" }, "ds_qBGlESX2": { "name": "Search_1", "options": { "enableSmartSources": true, "query": "index=_internal method=GET | stats count by method" }, "type": "ds.search" } }, "layout": { "globalInputs": [ "input_global_trp" ], "layoutDefinitions": { "layout_1": { "structure": [ { "item": "viz_column_chart", "position": { "h": 400, "w": 600, "x": 0, "y": 0 }, "type": "block" }, { "item": "viz_pie_chart", "position": { "h": 400, "w": 600, "x": 600, "y": 0 }, "type": "block" } ], "type": "grid" } }, "tabs": { "items": [ { "label": "New tab", "layoutId": "layout_1" } ] } } }
Hi @danielbb  The app you're looking for is "Admins Little Helper" (https://splunkbase.splunk.com/app/6368). You can run things like "| btool props list" etc. in the SPL search bar on Splunk Cloud - it's very useful! Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @jtran9373  You are using "SOURCE_KEY = MetaData:Sourcetype" to match the regex string against, but your sourcetype is "rsa:syslog"? It looks like you meant to use SOURCE_KEY = _raw (which is the default) to match your REGEX string against the sample event you provided. Try removing the SOURCE_KEY key/value pair from your props.conf and see if that resolves your issue. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
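A minimal sketch of how the props/transforms pair might look once SOURCE_KEY is left at its default (the transform name and regex below are hypothetical placeholders, not taken from the thread):

```
# props.conf - sketch
[rsa:syslog]
TRANSFORMS-example = my_hypothetical_transform

# transforms.conf - sketch
[my_hypothetical_transform]
# SOURCE_KEY = _raw is the default, so it can simply be omitted
REGEX = <your pattern, matched against the raw event text>
```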
Hi @Vin  It's worth checking the expiry on your SSL certificates. I have seen cases like this before where something that was running stops working during an upgrade, when in fact it was simply the Splunk restart that broke it - basically, if a certificate expires, Splunk can fail to initiate a new connection but will hang on to an existing, established connection. Use "openssl x509 -in <pathToYourSSL.crt> -noout -text" to validate that the client certificate on your forwarder is still valid. If that looks fine, then it's worth having a deeper dive into splunkd.log ($SPLUNK_HOME/var/log/splunk/splunkd.log) to check for errors when the blocking starts - is there anything there relating to SSL or port inaccessibility? Were any other changes made around this time, e.g. a host-level firewall change as part of the upgrade? Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
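As a sketch of the expiry check, using a throwaway self-signed certificate generated purely for the demonstration (substitute your forwarder's actual certificate path):

```shell
# Generate a throwaway self-signed cert just to have something to inspect
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 30 -subj "/CN=demo" 2>/dev/null

# Show the certificate's validity window (notBefore / notAfter)
openssl x509 -in /tmp/demo.crt -noout -dates

# -checkend 0 exits 0 only if the cert has not yet expired
openssl x509 -in /tmp/demo.crt -noout -checkend 0 && echo "still valid"
```

On an expired certificate the -checkend test exits non-zero, which is exactly the condition that can leave Splunk unable to establish new TLS connections after a restart.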
Hi @richgalloway As I’ve tried to explain right from the beginning, it has been metric data all along, which is why the default (event) index name had to be changed to a metric index name. It now works like a charm on the HF, so it is all durable and works perfectly. Thanks for all your input - it helped me focus on the details here. All the best