All Posts


Was able to get it working this way.

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR!=INFO _raw=*
| eval error_msg = case(match(_raw, "Disconnected"), "disconected", match(_raw, "restart failed"), "restart failed", match(_raw, "Failed to start connector"), "failed to start connector")
| search error_msg=*
| dedup connName
| table host connName error_msg ERROR
@Ciccius You need to configure a Data Input, similar to how you would set up File Monitors, Performance Monitors, etc. Splunk needs to know what to read, from where, how frequently to read it, where to index it, and the source/sourcetype settings. You configure these in inputs.conf, either through Splunk Web or the CLI. Refer to the documentation: Get data from APIs and other remote data interfaces through scripted inputs - Splunk Documentation. Also read the Writing Reliable Scripts documentation, since scripted inputs usually involve a wrapper script and need to maintain their own checkpoint of last indexed data, recovery, parallel-execution handling, etc.: https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/ScriptSetup  Once you have completely tested and hardened your scripted input for your scenario, you may be able to build an Add-on using the Splunk Add-on Builder, or move towards creating your own Modular Input for Splunk. https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/
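As a minimal sketch of what such a Data Input looks like on disk (the script name, interval, index, and sourcetype below are placeholders, not taken from this thread), a scripted input stanza in inputs.conf is typically along these lines:

[script://$SPLUNK_HOME/etc/apps/my_app/bin/my_api_poll.py]
interval = 3600
index = my_index
sourcetype = my_api:json
disabled = 0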
I have a drop-down named "Program" and a table with the static data source "ds_EHYzbg0g". How can I define the dataSource for the table dynamically, based on the value selected in the "Program" drop-down?

{
  "options": {
    "items": [
      { "label": "All", "value": "*" }
    ],
    "defaultValue": "*",
    "token": "select_program"
  },
  "dataSources": { "primary": "ds_8xyubP1c" },
  "title": "Program",
  "type": "input.dropdown"
}

{
  "type": "splunk.table",
  "options": {
    "tableFormat": {
      "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByTheme)"
    },
    "columnFormat": {
      "_raw": {
        "data": "> table | seriesByName(\"_raw\") | formatByType(_rawColumnFormatEditorConfig)"
      }
    },
    "count": 50
  },
  "dataSources": { "primary": "ds_EHYzbg0g" },
  "context": {
    "_rawColumnFormatEditorConfig": {
      "string": { "unitPosition": "after" }
    }
  },
  "showProgressBar": true,
  "containerOptions": {},
  "showLastUpdated": false
}
Hi all, I have configured a new script in 'Data inputs' to feed my index with data from a REST API. The script is written in Python 3; it makes a simple request to the endpoint, gathers the data, does a little manipulation of it, and writes it to stdout with the print() function. The script is placed in the 'bin' folder of my app, and using the web UI I configured it without any issue to run every hour. I tested it by running it manually from the command line and the output is what I expected. In splunkd.log I can see that Splunk ran it, as follows: 02-19-2025 10:49:00.001 +0100 INFO ExecProcessor [3193396 ExecProcessor] - setting reschedule_ms=86399999, for command=/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/adsmart_summary/bin/getCampaignData.py ... and nothing more is logged, neither errors nor anything else. But in the index I chose in the web UI there is no data coming from the script. Where can I start to check what is going on? Thanks!
Hey, we have come across the same requirement to duplicate a Grafana dashboard in Splunk Observability. Currently we have our Kubernetes (k8s) dashboards in Grafana, but now we need to replicate them in Splunk Observability Cloud. How can this be done? Thanks. #splunkcloud #grafana
Hi @narenpg, you're probably using a base search in your dashboard; in this case, CSV export is disabled. To export to CSV, you have to open the panel in Search and then export the results as CSV. Ciao. Giuseppe
@shashank9
1. Then, when I tried to only grep for 9997 (netstat -tulnp | grep 9997) I did not see any output.
--> It means the indexers are NOT listening for incoming data. This could mean the HF is not configured to listen on port 9997, or network issues are preventing the HF from binding to port 9997.

Verify that outputs.conf on the HF is correctly configured. Ensure there are no typos in the IP addresses or port numbers.
--> Your outputs.conf looks correct:
[tcpout:errorGroup]
server=indexr_1_ip_addr:9997
[tcpout:successGroup]
server=indexer_2_ip_addr:9997

On the HF, in the file /opt/splunk/var/log/splunk/test.log I changed the user and group to ec2-user:
--> The file permissions for /opt/splunk/var/log/splunk/test.log seem correct. However, ensure that the Splunk process has the necessary permissions to read the file. You can check the Splunk user running the HF and adjust permissions accordingly.

Check splunkd.log on the heavy forwarder:
tail -n 100 /opt/splunk/var/log/splunk/splunkd.log | grep -i "ERROR"
tail -n 100 /opt/splunk/var/log/splunk/splunkd.log | grep -i "WARN"

Verify that the Splunk process is running on the HF:
ps -ef | grep splunkd

Finally, I would recommend you add this on the heavy forwarder:
cd /opt/splunk/etc/system/local
vi inputs.conf
[splunktcp://9997]
disabled = 0
Restart Splunk.
It is not complete JSON when it arrives. It's raw data from which I removed the unwanted lines via SEDCMD in props.conf on the cluster master, and then set KV_MODE in props.conf on the SH.
Hi @Praz_123 Try the following SPL query, which you can then export / save the results of.

| tstats count where index=_dsappevent data.serverClassName=100_IngestAction_AutoGenerated data.action=Install by data.clientId, data.serverClassName
| rename data.* as *
| table serverClassName clientId
| append
    [| tstats count where index=_dsclient by data.build data.clientId data.connectionId data.dns data.guid data.hostname data.instanceId data.instanceName data.ip data.mgmt data.name data.package data.packageType data.splunkVersion data.utsname datetime
    | dedup data.clientId sortby -datetime
    | rename data.* as *]
| stats values(*) AS * by clientId
| table serverClassName clientId hostname

Replace "100_IngestAction_AutoGenerated" with your chosen serverclass.

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @Karthikeya Is the sample data you provided after you have modified it with any SPL, or is that as it arrives into Splunk? It looks like it's already a JSON string when it arrives; if so, then the json functions should work. I will test this further.
Hi @splunklearner Try using the following regex:

\"compression\":\"([^\"]*)\"\,

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
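As a purely illustrative sketch (the field name compression_value is a placeholder I've introduced, not something from this thread), that pattern could be applied at search time with rex to pull out the value:

| rex "\"compression\":\"(?<compression_value>[^\"]*)\""
| table compression_value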
Thanks for responding. The time range is exactly the same. We ended up opening a support case for this. The cause was found to be duplicate events in index=notable for a particular correlation search. What is causing these duplicates is under investigation.
Please help me in extracting only the compression values from this raw event:

"response_time_last_byte":5,"compression_percentage":0,"compression":"NO_COMPRESSION_CAN_BE_COMPRESSED","client_insights":"","request_headers":577,"response_headers":13,"request_state":"AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT",
"response_time_last_byte":1,"compression_percentage":0,"compression":"","client_insights":"","request_headers":3,"response_headers":12,"request_state":"AVI_HTTP_REQUEST_STATE_READ_CLIENT_REQ_HDR",

I tried this, but it is extracting client_insights as well. I need to exclude all compression string values by writing a SEDCMD.
@kiran_panchavat Thank you for those steps and suggestions. I tried them and below are the details:

Can you check this on the heavy forwarder? netstat -tulnp | grep 9997 OR ss -tulnp | grep 9997
I ran the above command on my HF:
1. First it said:
grep: invalid option -- 't'
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
2. Then, when I tried to only grep for 9997 (netstat -tulnp | grep 9997), I did not see any output.

Check the metrics.log if any queues are getting blocked. tail -f /opt/splunk/var/log/splunk/metrics.log | grep -i "blocked=true"

Verify that outputs.conf on the HF is correctly configured. Ensure there are no typos in the IP addresses or port numbers.
I verified that both my indexers' IPs mentioned in the HF's outputs.conf file are correct. Can you please confirm if the below tcpout stanza names are correct or if there is any typo in them?
[tcpout:errorGroup]
server = indexr_1_ip_addr:9997
[tcpout:successGroup]
server = indexer_2_ip_addr:9997

File permission issues could be a possible reason why the Splunk HF is not reading test.log. If the Heavy Forwarder (HF) process does not have the required permissions to read the file, it won't be able to forward logs to the indexers.
On the HF I changed the user and group of the file /opt/splunk/var/log/splunk/test.log to ec2-user:
-rw-r--r-- 1 ec2-user ec2-user 1133 Feb 19 00:53 test.log

I restarted the HF and checked my indexers for logs/events from the HF under the main index. But no luck.
We have a requirement to remove a few strings from the events while indexing the data. Here is a sample raw event:

{"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-fe4a30d8-ce53-4427-b920-ec81381cb1f4","report_timestamp":"2025-02-19T06:31:56.065370Z","service_engine":"GB-DRN-AB-Tier2-se-vxeuz","vcpu_id":0,"log_id":20138,"client_ip":"128.12.73.92","client_src_port":39688,"client_dest_port":443,"client_rtt":1,"http_version":"1.1","method":"HEAD","uri_path":"/path/to/monitor/page/","host":"udg1704n01.hc.cloud.uk.sony","response_content_type":"text/html","request_length":93,"response_length":94,"response_code":400,"response_time_first_byte":1,"response_time_last_byte":1,"compression_percentage":0,"compression":"","client_insights":"","request_headers":3,"response_headers":12,"request_state":"AVI_HTTP_REQUEST_STATE_READ_CLIENT_REQ_HDR","significant_log":["ADF_HTTP_BAD_REQUEST_PLAIN_HTTP_REQUEST_SENT_ON_HTTPS_PORT","ADF_RESPONSE_CODE_4XX"],"vs_ip":"128.160.71.14","request_id":"jjc-HmSo-8zb3","max_ingress_latency_fe":0,"avg_ingress_latency_fe":0,"conn_est_time_fe":0,"source_ip":"128.12.73.92","vs_name":"v-atcptest-wdc.hc.cloud.uk.sony-443","tenant_name":"admin"}

I need to remove strings like avg_ingress_latency_fe, conn_est_time_fe, client_insights etc. I searched around and found that SEDCMD would help. So I put the following in props.conf on my cluster manager, and it works well:

SEDCMD-removeavglatency=s/\"avg_ingress_latency_fe\"\:[\d+]\,//g
SEDCMD-removeclientinsights=s/\"client_insights\"\:\"\.*"\,//g

But my problem is that we will need to keep adding lines like this, which will not be readable in the future. I want to keep it to fewer lines. I tried the following, but it is not working and it disturbs the JSON format:

== props.conf ==
[yourSourceType]
TRANSFORMS-removeJsonKeys = removeJsonKeys1

== transforms.conf ==
[removeJsonKeys1]
INGEST_EVAL = _raw=json_delete(_raw, "avg_ingress_latency_be", "avg_ingress_latency_fe", "max_ingress_latency_fe", "client_insights" )

This is on top of a line we already have in props.conf to strip the header from this event so that the JSON fields auto-extract:

SEDCMD-removeheader=s/^[^\{]*//g

And here is the SH props.conf:

[mysourcetype]
KV_MODE = json
AUTO_KV_JSON = true

Please suggest what I can do instead to keep props.conf neat.
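One possibility, sketched here purely as an untested illustration of collapsing several numeric-key rules into a single SEDCMD using regex alternation (the class name removenumerickeys is my own placeholder, and the exact pattern would need verifying against your events):

SEDCMD-removenumerickeys = s/\"(avg_ingress_latency_fe|avg_ingress_latency_be|max_ingress_latency_fe|conn_est_time_fe)\":\d+,//g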
How can I export the host values to Excel for a particular serverclass? Is there any query for that? That would be helpful. The path is: Deployment server -> Forwarder management -> serverclass -> Action (edit clients) -> need to export the hostnames from the list.
@vksplunk1 The KV store isn't very reliable, so it's best to back it up regularly.
1. Some apps store their lookups in the KV store (collections.conf).
2. Some apps store all their configuration in the KV store (ITSI, but it also does daily backups).
For Splunk itself:
1. It sometimes uses the KV store to track which summary indexing time range has been done.
It's wise to back up your KV store regularly since it's vulnerable to data loss. If it gets corrupted, deleted, or runs into issues during an upgrade or restart, you could lose valuable data. Keeping backups helps you recover your data quickly if anything goes wrong. https://community.splunk.com/t5/Knowledge-Management/Is-there-any-way-to-retrieve-kv-store-that-was-accidentally/m-p/408788
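As a minimal sketch (the archive name below is just a placeholder), on reasonably recent Splunk versions the KV store can be backed up and restored from the CLI:

$SPLUNK_HOME/bin/splunk backup kvstore -archiveName kvstore_backup
$SPLUNK_HOME/bin/splunk restore kvstore -archiveName kvstore_backup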
@spy_jr Check this community link for more details: https://community.splunk.com/t5/Alerting/Error-Code-3/m-p/689100/highlight/true
@spy_jr This usually happens when there are 0 results from the preceding search. If the results are more than 0, you will not see this error. So it's safe to ignore.
Get rid of that dedup host.  You will see some events with error_msg, some without.  I cannot decipher what that dedup is supposed to accomplish, or what real problem you are trying to solve.  So, I cannot suggest an alternative.  But if you have that dedup and if for each host the last event is NOT a failure or disconnect, you will get no error_msg.  Maybe you mean this?

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN
| eval error_msg = case(match(_raw, "Disconnected"), "disconected", match(_raw, "restart failed"), "restart failed", match(_raw, "Failed to start connector"), "failed to start connector")
| search error_msg = *
| dedup host
| table host connName error_msg