All Posts



Has anyone managed to set up source control for workbooks? Pulling the information down via the API to upload to GitLab is straightforward: you can run a GET request against [base_url]/rest/workbook_template (REST Workbook). The problem is with pushing information. As far as I've been able to find, you can only create new phases or tasks; you can't specify via name or ID that you want to update an existing object. There's also no way I've found to delete a phase or task, which would make creating a new one more reasonable.
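For the read side, a minimal sketch of pulling the workbook templates for commit to GitLab might look like the following (the host name, token variable, and output filename are placeholders, not values from your environment):

```
# Sketch only -- substitute your own SOAR base URL and automation token
curl -sk -H "ph-auth-token: $SOAR_TOKEN" \
  "https://your-soar-host/rest/workbook_template?page_size=0" \
  -o workbook_templates.json
```

The push limitation described above (create-only, no update/delete by ID) would still apply; this only covers getting a versionable JSON copy into the repo.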
Hi. I have the raw event(s) below. Highlighted syntax: { [-]    body: {"isolation": "isolation","device_classification": "Network Access Control","ip": "1.2.3.4", "mac": "Unknown","dns_hn": "XYZ","policy": "TEST_BLOCK","network_fn": "CounterACT Device","os_fingerprint": "CounterACT Appliance","nic_vendor": "Unknown Vendor","ipv6": "Unknown",}    ctupdate: notif    eventTimestamp: 1739913406    ip: 1.2.3.4    tenant_id: CounterACT__sample } Raw text: {"tenant_id":"CounterACT__sample","body":"{\"isolation\": \"isolation\",\"device_classification\": \"Network Access Control\",\"ip\": \"1.2.3.4\", \"mac\": \"Unknown\",\"dns_hn\": \"XYZ\",\"policy\": \"TEST_BLOCK\",\"network_fn\": \"CounterACT Device\",\"os_fingerprint\": \"CounterACT Appliance\",\"nic_vendor\": \"Unknown Vendor\",\"ipv6\": \"Unknown\",}","ctupdate":"notif","ip":"1.2.3.4","eventTimestamp":"1739913406"} I need the field=value pairs below extracted from each event at search time. It is a very small dataset: isolation=isolation policy=TEST_BLOCK ctupdate=notif ip=1.2.3.4 ipv6=Unknown mac=Unknown dns_hn=XYZ eventTimestamp=1739913406 Thank you in advance!
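One search-time approach (a sketch; the index and sourcetype names are placeholders) is to let spath unpack the outer JSON, then pull the inner fields out of the nested body string. Note the trailing comma inside body ("ipv6": "Unknown",}) makes it technically invalid JSON, so a second spath on body may fail; per-field rex extractions are the safer fallback:

```
index=your_index sourcetype=your_sourcetype
| spath
| rex field=body "\"isolation\":\s*\"(?<isolation>[^\"]+)\""
| rex field=body "\"policy\":\s*\"(?<policy>[^\"]+)\""
| rex field=body "\"mac\":\s*\"(?<mac>[^\"]+)\""
| rex field=body "\"dns_hn\":\s*\"(?<dns_hn>[^\"]+)\""
| rex field=body "\"ipv6\":\s*\"(?<ipv6>[^\"]+)\""
| table isolation policy ctupdate ip ipv6 mac dns_hn eventTimestamp
```

ctupdate, ip, and eventTimestamp are top-level keys, so the first spath alone should produce them.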
I am trying to export the dashboard into a CSV file, but I am not seeing CSV under Export. How do I enable the CSV export? My data is in table format.
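If the dashboard's export menu doesn't offer CSV (availability differs between Classic dashboards and Dashboard Studio), a common workaround is to open the panel's underlying search ("Open in Search") and export from there, or write the table out with outputcsv. A sketch with placeholder search terms:

```
index=your_index sourcetype=your_sourcetype
| stats count by host
| outputcsv my_table.csv
```

outputcsv writes the file to $SPLUNK_HOME/var/run/splunk/csv on the search head.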
Hello, I want to get the ML Toolkit; however, how will it affect the hard rules we write? Can we use the toolkit as a verification method on the same index data? I mean: for the same index and the same Splunk account, can we keep writing hard rule sets as we do now and also use the ML Toolkit at the same time? Thanks a lot
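In general, yes: MLTK searches and conventional correlation rules can run side by side against the same index, since the toolkit only adds SPL commands (fit, apply, etc.) and does not change how your existing searches behave. As a rough sketch of using it to cross-check a hard rule (the field name, model name, and window are made-up examples, and these are two separate searches, run training first):

```
index=your_index earliest=-30d@d
| fit DensityFunction response_time into response_time_model

index=your_index
| apply response_time_model
| where 'IsOutlier(response_time)'=1
```

You could then compare the outliers MLTK flags against the events your hard rules catch.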
It appears that the latest version of this app has this issue. Uninstall it and install an older version.
As of the time of writing, it is only available on single value, single value icon, and single radial visualizations. I would very much like to see them add it for line-graph visualizations.
Hello! I hope you can help! I have installed Splunk Enterprise 8.12 on my macOS 14.6.1 to study for an exam. Splunk installed fine. However, the lab asked me to create an app called "destinations", which I did, and I set the proper permissions. But when I go to the app in the search head and type "index=main", it sees the index yet doesn't display any records. I copied eventgen down to the samples folder in the Destinations folder and copied eventgen.conf to the local folder as directed, but it still does not display anything. I also see that the main index is enabled in Indexes, using the $SPLUNK_DB/defaultdb/db path, and it shows that it has indexed 1 MB out of 500 GB. I have a feeling it's something obvious that I'm not seeing. I really need this lab to work; can you assist? I used the SPLK-10012.PDF instructions (not sure if you have access to that) and pulled the eventgen files down from GitHub. Maybe this is an easy fix? Thank you
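Two quick checks often explain "index has data but nothing displays" in this lab (a sketch, assuming a default setup): eventgen frequently generates events with timestamps outside the time picker's window, so search over All time, and confirm what actually landed in the index:

```
index=main earliest=0

| tstats count where index=main by source, sourcetype
```

If the tstats search shows counts but the first search shows nothing in the last 24 hours, it's a timestamp issue in the generated sample data rather than a permissions problem.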
I think I'm close, but the error_msg does not display: index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN | eval error_msg = case(match(_raw, "Disconnected"), "disconected", match(_raw, "restart failed"), "restart failed", match(_raw, "Failed to start connector"), "failed to start connector") | dedup host | table host connName error_msg
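One likely culprit, judging only from the search as posted: case() returns NULL when no branch matches, and dedup host keeps just the first event per host, which may well be one where none of the patterns matched. Grouping the OR terms and adding a catch-all default makes that visible; also note that match() is case-sensitive unless you add (?i). A hedged rewrite:

```
index=kafka-np sourcetype="KCON" connName="CCNGBU_*" (ERROR=ERROR OR ERROR=WARN)
| eval error_msg = case(
    match(_raw, "(?i)disconnected"), "disconnected",
    match(_raw, "(?i)restart failed"), "restart failed",
    match(_raw, "(?i)failed to start connector"), "failed to start connector",
    true(), "other")
| dedup host
| table host connName error_msg
```

If most rows come back as "other", the match patterns (not the table) are what need adjusting.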
I put in a support case and got a response. It seems this is a known issue with 9.4.0. I followed the resolution steps here and it seems to have worked for me. It had to do with the /etc/hosts file on the host machine. https://splunk.my.site.com/customer/s/article/After-upgrading-Splunk-from-v9-2-to-v9-4-the-Forwarder-Manager-Web-UI-is-unavailable
Given this in props.conf:
SEDCMD-removeevents = s/\"avg_ingress_latency_fe\":.*//g
It matches the raw data, but instead of removing just that field it is disturbing the JSON format. On the SH we have this props.conf:
[mysourcetype]
KV_MODE = json
AUTO_KV_JSON = true
Please help me with this case.
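A likely reason the JSON gets mangled: .* is greedy, so the sed expression deletes everything from "avg_ingress_latency_fe" to the end of the event, including the closing braces. A more surgical pattern stops at the next comma or closing brace (a sketch; test against your data, and remember SEDCMD only takes effect where parsing happens, i.e. on the indexers or heavy forwarder, not the search head):

```
[mysourcetype]
SEDCMD-removeevents = s/"avg_ingress_latency_fe":\s*[^,}]+,?//g
```

This removes the key, its value, and the trailing comma while leaving the surrounding structure intact.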
Can someone help with this ticket? https://community.splunk.com/t5/Getting-Data-In/Exclude-or-Remove-few-fields-while-on-boarding-data/m-p/711926/highlight/false#M117571
Hi, I want to use a common OTel Collector gateway to collect traces and metrics from different sources. One of the sources I want to collect from is Azure API Management. How can I configure Azure API Management to send traces and metrics to an existing OTel Collector that is integrated with Splunk Observability? The Splunk documentation describes how to create a separate integration from Splunk Observability to Azure cloud; however, I don't want to create a separate integration but rather use an existing collector gateway. Regards, Sukesh
  Here is the raw data sample--  {"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-fe4a30d8-ce53-4427-b920-ec81381cb1f4","report_timestamp":"2025-02-18T17:21:53.173205Z","service_engine":"GB-DRN-AB-Tier2-se-vxeuz","vcpu_id":0,"log_id":18544,"client_ip":"128.12.73.92","client_src_port":42996,"client_dest_port":443,"client_rtt":1,"http_version":"1.1","method":"HEAD","uri_path":"/path/to/monitor/page/","host":"udg1704n01.hc.cloud.uk.sony","response_content_type":"text/html","request_length":93,"response_length":94,"response_code":400,"response_time_first_byte":1,"response_time_last_byte":1,"compression_percentage":0,"compression":"","client_insights":"","request_headers":3,"response_headers":12,"request_state":"AVI_HTTP_REQUEST_STATE_READ_CLIENT_REQ_HDR","significant_log":["ADF_HTTP_BAD_REQUEST_PLAIN_HTTP_REQUEST_SENT_ON_HTTPS_PORT","ADF_RESPONSE_CODE_4XX"],"vs_ip":"128.160.71.14","request_id":"2OP-U2vt-pre1","max_ingress_latency_fe":0,"avg_ingress_latency_fe":0,"conn_est_time_fe":1,"source_ip":"128.12.73.92","vs_name":"v-atcptest-wdc.hc.cloud.uk.hc-443","tenant_name":"admin"}
Hello Amory, the NetFlow Analytics for Splunk App and TA-netflow are designed to work with NetFlow Optimizer. For details, please visit: https://docs.netflowlogic.com/integrations-and-apps/integrations-with-splunk/ If you have any questions or would like to see a demo, please contact us at team_splunk@netflowlogic.com
Hello all, new poster here. I have a CSV file with a column full of Splunk queries. I am trying to enrich my Splunk instance with the data from the CSV file via the following command: index="index1" [ inputlookup rules.csv | eval search = if(boolean=="FALSE","\""+rule+"\"",rule) | return 10000 $search] | fields _time index | eval time_token = "_time=" + _time | eval index_token = "index=" + index | stats values(time_token) AS time_token values(index_token) AS index_token | eval time_token=mvjoin(time_token," OR ") | eval index_token=mvjoin(index_token," OR ") | append [ inputlookup rules.csv | eval rule = if(boolean=="FALSE","\""+rule+"\"",rule) | return 10000 $rule] | eventstats first(time_token) AS time_token first(index_token) AS index_token | search rule=* | map maxsearches=100 search="search [| makeresults | eval search= \"$time_token$ $index_token$ $rule$\" | return $search] | eval rule_found=\"$rule$\", rule_id=\"$id$\"" The problem I am having is with the "map" command: everything after the second "search" is greyed out and not being included in the search. I have been able to get the following portion of the code working: index="index1" [ inputlookup rules.csv | eval search = if(boolean=="FALSE","\""+rule+"\"",rule) | return 10000 $search] | fields _time index | eval time_token = "_time=" + _time | eval index_token = "index=" + index | stats values(time_token) AS time_token values(index_token) AS index_token | eval time_token=mvjoin(time_token," OR ") | eval index_token=mvjoin(index_token," OR ") | append [ inputlookup rules.csv | eval rule = if(boolean=="FALSE","\""+rule+"\"",rule) | return 10000 $rule] | eventstats first(time_token) AS time_token first(index_token) AS index_token | search rule=* Thank you for any suggestions you have to get this search working.
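One thing worth checking (a guess based purely on the quoting): inside map's search="..." string, every nested double quote must be escaped, and the editor greying text out usually means Splunk believes the quoted string ended early. Flattening the map template so the tokens are substituted directly, without the inner makeresults/return subsearch, removes one level of quoting (a sketch using the same token names as above):

```
| map maxsearches=100 search="search $time_token$ $index_token$ \"$rule$\" | eval rule_found=\"$rule$\", rule_id=\"$id$\""
```

map substitutes $time_token$, $index_token$, $rule$, and $id$ from each incoming result row, so the inner subsearch that only rebuilt the same string may not be needed at all.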
Thanks @livehybrid for the reply. However, I applied the same config and it is not working as expected. As I said earlier, this is not JSON-extracted data by default; we convert it using KV_MODE = json on the SH. I think the JSON is extracted at search time, but I applied this at index time. That might be the reason json_delete is not working. Can you please help me with any other alternative?
Hello @user487596 You should file a support case for such issues.
Hi, a delimiter doesn't work here. The only possible option is: index=_internal sourcetype IN ($ms2$) https://docs.splunk.com/Documentation/Splunk/9.0.3/DashStudio/inputMulti
@livehybrid As I said in my question, when I perform Edit > Source > Save, the images load perfectly. That means it is not an issue with permissions or a required change in web.conf. I think the issue is with the cache or the drilldown.
So you would use:
== props.conf ==
[yourSourceType]
TRANSFORMS-removeJsonKeys = removeJsonKeys1
== transforms.conf ==
[removeJsonKeys1]
INGEST_EVAL = _raw=json_delete(_raw, "avg_ingress_latency_be", "avg_ingress_latency_fe", "request_state", "server_response_code")
json_delete takes an object (_raw) and a list of keys to delete. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards, Will
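If helpful, you can sanity-check json_delete at search time before shipping the index-time config (the key names here are just sample stand-ins mirroring the ones above):

```
| makeresults
| eval _raw="{\"keep_me\": 1, \"avg_ingress_latency_fe\": 0}"
| eval _raw=json_delete(_raw, "avg_ingress_latency_fe")
| table _raw
```

The resulting _raw should retain keep_me and drop avg_ingress_latency_fe, which is the same behavior INGEST_EVAL will apply at ingest.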