Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

@pacifiquen  Since your license expired 5 months ago, it's likely that Splunk entered a state where search functionality was disabled due to license violations or expiration enforcement. Even with a new license, prior violations (e.g., exceeding the daily indexing limit multiple times before the license expired) could still block search functionality until resolved.

In the Splunk Web UI, go to Settings > Licensing > Usage Report and review the last 30 days (or more if available) for violations. For Splunk Enterprise (versions 8.1.0+), if you exceeded your license capacity 45+ times in a 60-day period with a stack volume <100 GB, search is disabled until the violations clear or a reset license is applied.

If violations are still active (from before the new license), you may need to wait 30 days without violations (for free licenses) or request a reset license from Splunk Support (for Enterprise licenses). Contact Splunk Support via the Splunk Support Portal or call 866.GET.SPLUNK to request a reset license, then apply it via Settings > Licensing > Add License.

Confirm Data Ingestion
Why: If logs aren't appearing, the issue might not be the license but rather data not reaching the Search Head.
Action: Verify that data is being ingested and indexed, for example:
index=* earliest=-24h

https://www.splunk.com/en_us/resources/splunk-enterprise-license-enforcement-faq.html
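If it helps, here is a rough sketch for reviewing daily indexed volume over the last 30 days from the license usage logs (an assumption: you can search the _internal index on the license manager and the default license_usage.log format applies):

index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
| eval GB=round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) AS daily_indexed_GB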
Hi @mikefg  Please can you run the below SPL and check whether it returns an empty string?
| inputlookup ipapikey | sort - savetime | head 1 | table apikey
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @dolj  Is this what you are after?   I have included the dashboard content below for you to work with and update { "title": "colorpalette", "description": "", "inputs": { "input_global_trp": { "options": { "defaultValue": "-24h@h,now", "token": "global_time" }, "title": "Global Time Range", "type": "input.timerange" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "earliest": "$global_time.earliest$", "latest": "$global_time.latest$" } } } } }, "visualizations": { "viz_Nh4wq49A": { "context": { "backgroundColorEditorConfig": [ { "to": -15, "value": "#D41F1F" }, { "from": -15, "to": -10, "value": "#ff8c00" }, { "from": -10, "to": 10, "value": "#669922" }, { "from": 10, "to": 15, "value": "#ff8c00" }, { "from": 15, "value": "#d41f1f" } ] }, "dataSources": { "primary": "ds_l00kHfuB_ds_2tFZF9uM" }, "options": { "backgroundColor": "> majorValue | rangeValue(backgroundColorEditorConfig)" }, "type": "splunk.singlevalue" }, "viz_OqQGMe3n": { "context": { "backgroundColorEditorConfig": [ { "to": -15, "value": "#D41F1F" }, { "from": -15, "to": -10, "value": "#ff8c00" }, { "from": -10, "to": 10, "value": "#669922" }, { "from": 10, "to": 15, "value": "#ff8c00" }, { "from": 15, "value": "#d41f1f" } ] }, "dataSources": { "primary": "ds_2tFZF9uM" }, "options": { "backgroundColor": "> majorValue | rangeValue(backgroundColorEditorConfig)" }, "type": "splunk.singlevalue" }, "viz_P7eqPIQ1": { "context": { "backgroundColorEditorConfig": [ { "to": -15, "value": "#D41F1F" }, { "from": -15, "to": -10, "value": "#ff8c00" }, { "from": -10, "to": 10, "value": "#669922" }, { "from": 10, "to": 15, "value": "#ff8c00" }, { "from": 15, "value": "#d41f1f" } ] }, "dataSources": { "primary": "ds_Poalkk2N_ds_xTGfykmr_ds_l00kHfuB_ds_2tFZF9uM" }, "options": { "backgroundColor": "> majorValue | rangeValue(backgroundColorEditorConfig)" }, "type": "splunk.singlevalue" }, "viz_qielOoKy": { "context": { "backgroundColorEditorConfig": [ { "to": -15, "value": "#D41F1F" }, { "from": -15, "to": -10, "value": "#ff8c00" }, { "from": -10, "to": 10, "value": "#669922" }, { "from": 10, "to": 15, "value": "#ff8c00" }, { "from": 15, "value": "#d41f1f" } ] }, "dataSources": { "primary": "ds_xTGfykmr_ds_l00kHfuB_ds_2tFZF9uM" }, "options": { "backgroundColor": "> majorValue | rangeValue(backgroundColorEditorConfig)" }, "type": "splunk.singlevalue" }, "viz_s1mEJROK": { "context": { "backgroundColorEditorConfig": [ { "to": -15, "value": "#D41F1F" }, { "from": -15, "to": -10, "value": "#ff8c00" }, { "from": -10, "to": 10, "value": "#669922" }, { "from": 10, "to": 15, "value": "#ff8c00" }, { "from": 15, "value": "#d41f1f" } ] }, "dataSources": { "primary": "ds_kBIEwZOo_ds_Poalkk2N_ds_xTGfykmr_ds_l00kHfuB_ds_2tFZF9uM" }, "options": { "backgroundColor": "> majorValue | rangeValue(backgroundColorEditorConfig)" }, "type": "splunk.singlevalue" } }, "dataSources": { "ds_2tFZF9uM": { "name": "Search_1", "options": { "query": "| makeresults \n| eval num=-16", "queryParameters": { "earliest": "-24h@h", "latest": "now" } }, "type": "ds.search" }, "ds_Poalkk2N_ds_xTGfykmr_ds_l00kHfuB_ds_2tFZF9uM": { "name": "Search_1 copy 4", "options": { "query": "| makeresults \n| eval num=11", "queryParameters": { "earliest": "-24h@h", "latest": "now" } }, "type": "ds.search" }, "ds_bjOjvTVV_ds_2tFZF9uM": { "name": "Search_1 copy 1", "options": { "query": "| makeresults \n| eval num=-14", "queryParameters": { "earliest": "-24h@h", "latest": "now" } }, "type": "ds.search" }, 
"ds_kBIEwZOo_ds_Poalkk2N_ds_xTGfykmr_ds_l00kHfuB_ds_2tFZF9uM": { "name": "Search_1 copy 5", "options": { "query": "| makeresults \n| eval num=16", "queryParameters": { "earliest": "-24h@h", "latest": "now" } }, "type": "ds.search" }, "ds_l00kHfuB_ds_2tFZF9uM": { "name": "Search_1 copy 2", "options": { "query": "| makeresults \n| eval num=-14", "queryParameters": { "earliest": "-24h@h", "latest": "now" } }, "type": "ds.search" }, "ds_xTGfykmr_ds_l00kHfuB_ds_2tFZF9uM": { "name": "Search_1 copy 3", "options": { "query": "| makeresults \n| eval num=4", "queryParameters": { "earliest": "-24h@h", "latest": "now" } }, "type": "ds.search" } }, "layout": { "globalInputs": [ "input_global_trp" ], "layoutDefinitions": { "layout_1": { "options": { "display": "auto", "height": 960, "width": 1440 }, "structure": [ { "item": "viz_OqQGMe3n", "position": { "h": 250, "w": 250, "x": 0, "y": 0 }, "type": "block" }, { "item": "viz_Nh4wq49A", "position": { "h": 250, "w": 250, "x": 260, "y": 0 }, "type": "block" }, { "item": "viz_qielOoKy", "position": { "h": 250, "w": 250, "x": 520, "y": 0 }, "type": "block" }, { "item": "viz_P7eqPIQ1", "position": { "h": 250, "w": 250, "x": 780, "y": 0 }, "type": "block" }, { "item": "viz_s1mEJROK", "position": { "h": 250, "w": 250, "x": 1040, "y": 0 }, "type": "block" } ], "type": "absolute" } }, "options": {}, "tabs": { "items": [ { "label": "New tab", "layoutId": "layout_1" } ] } } } Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
@livehybrid @gcusello  Thanks for your help, but I created a query which shows me the required results:
| rest /services/server/introspection/indexer
| where match(splunk_server, "indexer")
| eval status = if(status == "normal", "up", "down")
| table splunk_server, status
Hi @blanky tl;dr: If you are sending from the source to both HFs, then upgrading one at a time would be fine.
Do your client servers all send to both of your HFs? If so, they should automatically load balance between the two of them, and therefore you will not lose data if you gracefully shut down one, upgrade it, and then ensure it has started successfully before doing the other. If you are unsure, check the outputs.conf on the servers sending to the HFs, which should have a comma-delimited list under the server key in your tcpout group stanza, similar to the below:
[tcpout]
defaultGroup = My_Cluster_1
[tcpout:My_Cluster_1]
disabled=false
server = 10.1.4.32:9997,10.1.4.33:9997
If you are outputting to a single HF, then consider adding the secondary if possible; this will give redundancy for when one of the HFs is offline. Either way, if you are sending data from a Splunk UF/HF to the HF and the HF goes offline, the client server should queue the data so that it sends when the HF connection is restored. The size of the queue will depend on your configuration, and whether the queue would withstand the downtime depends on the amount of data the client is sending. For more about queues see https://docs.splunk.com/Documentation/Splunk/latest/Data/Usepersistentqueues
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
@blanky  Use Persistent Queues
Configure persistent queues on your HFs to store data on disk while the Splunk service is stopped. Edit inputs.conf on each HF to enable persistent queues for your inputs (e.g., set persistentQueueSize to an appropriate value like 1GB or more, depending on your data volume). Stop the Splunk service, perform the upgrade, and restart; the HF will process the queued data after restarting.
Data is preserved on disk during the outage and forwarded once the HF is back online. This requires sufficient disk space and pre-configuration, and not all input types support persistent queues (e.g., HTTP Event Collector doesn't).
https://docs.splunk.com/Documentation/Splunk/9.4.1/Data/Usepersistentqueues
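As a rough illustration, a persistent queue on a network input is enabled per input stanza in inputs.conf; the port, index, and sizes below are assumptions to adapt to your environment:

# inputs.conf on the heavy forwarder (illustrative values)
[tcp://:5514]
index = network
sourcetype = syslog
# in-memory queue for this input
queueSize = 10MB
# spill to disk once the in-memory queue is full
persistentQueueSize = 5GB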
@blanky  Options to Upgrade HFs Without Data Loss
If you have two HFs, configure them as a redundant pair with a load balancer, or configure your data sources to send data to both HFs (e.g., syslog can send to multiple destinations).
Steps:
1. Ensure both HFs are forwarding identical data to the indexers.
2. Stop Splunk on HF1, upgrade it, and restart it.
3. Validate HF1 is working, then repeat the process for HF2.
HF2 continues processing data while HF1 is down, and vice versa, ensuring no data loss. Your data sources must support sending to multiple endpoints, or you need a load balancer in front of the HFs.
@blanky  When you stop the Splunk service on an HF for an upgrade, it stops accepting new data from inputs and stops forwarding data to indexers. Any data generated by your sources during this downtime could be lost unless mitigated.
Are your HFs collecting data from files (e.g., log files), network inputs (e.g., syslog, HTTP Event Collector), or scripts? HFs have in-memory queues and can use persistent queues (if configured) to buffer data during brief interruptions.
A typical Splunk HF upgrade is relatively quick (minutes), but preparation and validation can extend the outage window.
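If it is useful, here is a rough sketch for gauging how full the HF's queues typically run, assuming the HF forwards its own _internal logs to the indexers (the host filter is a placeholder to replace):

index=_internal host=<your_hf> source=*metrics.log* group=queue
| timechart span=5m max(current_size_kb) by name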
@mpk_24  To test it, I generated a "Sessions.csv" file using makeresults and outputlookup:
| makeresults
| eval SID = "SID12345;SID67890;SID99999;SID00000"
| makemv delim=";" SID
| mvexpand SID
| table SID
| outputlookup Sessions.csv
This creates a CSV file named Sessions.csv with a single column SID containing the session IDs.
Final query: mvindex(split(data, ","), 0) extracts the first part (SID), mvindex(split(data, ","), 1) extracts the second part (FName), and the lookup match is checked before filtering.
| makeresults
| eval data = "SID12345,John Doe;SID67890,Jane Smith;SID99999,Bob Johnson;SID00000,Alice Brown"
| makemv delim=";" data
| mvexpand data
| eval SID = mvindex(split(data, ","), 0), FName = mvindex(split(data, ","), 1)
| eval _raw = "2025-03-13T10:00:00 INFO hostname=prod* /api/update CUSTOMER:\"" . FName . "\", Session:\"" . SID . "\""
| rex field=_raw "CUSTOMER:\"(?<FName>[^\"]+)\""
| rex field=_raw "Session:\"(?<SID>[^\"]+)\""
| lookup local=t Sessions.csv SID OUTPUT SID as matching_sid
| table SID, FName, matching_sid
| where isnotnull(matching_sid)
@mpk_24  The subsearch syntax using lookup is incorrect; the search command with a subsearch needs proper formatting, and using lookup inside the subsearch won't work as expected in this context.
Test a small sample first and verify the SID extraction is working:
index=testing_car hostname=*prod* "/api/update"
| rex field=_raw "Session\":\"(?<SID>[^\"]+)"
| head 10
Here are some makeresults examples to create dummy data for testing your Splunk query with session IDs and customer names. These can help you simulate your use case without needing real log data.
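As a rough correction (a sketch only, assuming Sessions.csv contains a column named SID), the subsearch can return the session IDs with inputlookup so they become filters for the outer search. Note that subsearch output is capped (10,000 results by default), so with 200K session IDs the lookup-and-filter approach shown in the other reply will scale better:

index=testing_car hostname=*prod* "/api/update"
| rex field=_raw "CUSTOMER\":(?<FName>[^\,]+)"
| rex field=_raw "Session\":\"(?<SID>[^\"]+)"
| search [ | inputlookup Sessions.csv | fields SID ]
| table SID, FName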
Hi Bowesmana, I tried that too but the editor won't even let me save it that way. Also note that while this simple example illustrates the problem, the real data is extracted by a previous rex command, so my ability to manipulate it is limited. Thanks
Splunk search uses PCRE and the search parser also handles unescaping, whereas Ingest Processor uses RE2 (although that appears to be changing). So you probably need to use 2 \\ characters, not 3, as the Splunk search parser will take away one. But validate this in your environment.
Hello @Splunkers, Can someone please help me with this? I am trying to use the "lookup/inputlookup" command in a search.
Use case: I am trying to extract some specific values from logs for given session IDs, but there are more than 200K session IDs to check. So I created a lookup table which includes the 200K sessions and then used the below query.
Problem: nothing is returned, but there should be results; when I checked some session IDs manually, values were there.
index=testing_car hostname=*prod* "/api/update"
| rex field=_raw "CUSTOMER\":(?<FName>[^\,]+)"
| rex field=_raw "Session\":\"(?<SID>[^\"]+)"
| search [ | lookup Sessions.csv SID | fields SID]
| table SID, FName
P.S. The SID field is available in the Session.csv file.
I'm planning to upgrade my Splunk environment now: a 3-member search head cluster, a 3-member indexer cluster, 2 heavy forwarders, and 1 master.
I want to upgrade the HFs without data loss, but I have to stop the Splunk service during the upgrade.
Is there any other way to upgrade the HFs without data loss?
Oh, for some reason the image of the SPL2 result didn't post, so here it is:
Hi, I am having trouble getting replace to work correctly in Ingest Processor and have this example. In SPL I can run this search:
| makeresults
| eval test = "AAABBBCCC"
| eval text = "\\\"test\\\":\\\""
| eval output = replace(test, "BBB", text)
and I will get this output. But if I run this in an Ingest Processor pipeline:
| eval test = "AAABBBCCC"
| eval text = "\\\"test\\\":\\\""
| eval output = replace(test, "BBB", text)
The result is:
Note the slashes before the double quotes have gone. Why have they gone? How do I ensure they are retained by Ingest Processor? This is a simplified example of what I am trying to do, but this is the core of the problem I am having. Thanks
Have you tried the search I showed - does it give you something approximating what you are after?    
Yes, for example, try something like this:
"visualizations": {
    "viz_242vtDn7": {
        "context": {
            "countColumnFormatEditorConfig": {
                "number": {
                    "thousandSeparated": false,
                    "unitPosition": "after"
                }
            },
            "countRowColorsEditorConfig": [
                { "to": -15, "value": "#ff0000" },
                { "from": -15, "to": -10, "value": "#ff8c00" },
                { "from": -10, "to": 15, "value": "#ff0000" },
                { "from": 15, "to": 16, "value": "#ff8c00" },
                { "from": 16, "value": "#ff0000" }
            ]
        },
        "dataSources": {
            "primary": "ds_zVjhlJmS"
        },
        "options": {
            "columnFormat": {
                "count": {
                    "align": "auto",
                    "data": "> table | seriesByName(\"count\") | formatByType(countColumnFormatEditorConfig)",
                    "headerAlign": "auto",
                    "rowColors": "> table | seriesByName(\"count\") | rangeValue(countRowColorsEditorConfig)",
                    "textOverflow": "break-word"
                }
            }
        },
        "type": "splunk.table"
    }
},
Thanks for the reply. The rex statement was supposed to be ahead, like you mentioned. I also missed the table row in my query; I am basically querying the same data source. I have a common field, documentId, which I am using to join the two logs so I can get additional fields in my result from the right-side query.
index=provisioning_index sourcetype=PCF:log source_type=APP/PROC/WEB message_type=OUT cf_org_name=org1 cf_app_name=app1 LOG_LEVEL="ERROR" service=service1 errorCd="DOC-MGMT*"
| rex field=_raw "errorDetails=(?<errorDetails>.*?)\s*:"
| fields _time errorCd errorDetails stateCode letterId documentId
| join left=lerr right=rlkp type=left where lerr.documentId = rlkp.documentId max=0
    [search index=provisioning_index sourcetype=PCF:log source_type=APP/PROC/WEB message_type=OUT cf_org_name=org1 cf_app_name=app1 NOT letterId=null operation=generateInstantDocument
    | fields _time errorCd errorDetails stateCode letterId documentId]
| table _time lerr.errorCd lerr.errorDetails rlkp.stateCode rlkp.letterId lerr.documentId
The result I'm seeing is that rlkp.letterId is only populated for a few rows, not the whole set. And the volume it searches is very large.
I find that match() is generally more useful for most problems than searchmatch(). As @livehybrid says, searchmatch() effectively gives you the ability to match using search command syntax, so matching is case-insensitive, wildcards can be used as needed, and AND and OR can be used. By contrast, match() is much more specific, regex-based matching, and you can match any regex against any field.
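As a quick, self-contained illustration of the difference (a sketch using made-up data, so the field name and values are assumptions): searchmatch() matches case-insensitively with wildcards like the search command, while match() needs an explicit (?i) flag for case-insensitive regex matching.

| makeresults
| eval message="Disk FULL on host web01"
| eval via_searchmatch=if(searchmatch("message=\"*disk full*\""), "match", "no match")
| eval via_match=if(match(message, "(?i)disk\s+full"), "match", "no match")
| table message via_searchmatch via_match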