All Posts

You can download the results in the panels to CSV by clicking the download button. Is that what you mean?
Thank you @livehybrid !!!!! I knew I was dozing off at the end of the day.... LOL
Good morning @livehybrid  Just wanted to wrap my head around the logic: 2025-02-13 Yes, 2025-02-14 Yes, 2025-02-15 Yes. Does the "Yes" mean it will alert on those dates, hence returning a result? Also, let's say an alert fired on the 15th and the lookup table has the date 2025-02-15. Does that mute the next day, so the 16th won't get alerted (if it falls within Mon~Thursday), whereas on a Friday it would jump to Monday to mute? So it would look like this: 2025-02-15 No, and instead of displaying that in an event it will not actually return any results? If I want to only add 1 day, would I change it like this?   | eval mute_date = if(day_of_week == Date + 86400)   All the best!
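To illustrate just the "add one day" step, here is a minimal SPL sketch; the alert_date field name and the YYYY-MM-DD format are assumptions for illustration, not taken from the original alert:

| eval mute_date = strftime(relative_time(strptime(alert_date, "%Y-%m-%d"), "+1d"), "%Y-%m-%d")

This parses the lookup date to epoch time, shifts it forward by one day (+1d), and formats it back to YYYY-MM-DD for comparison against the lookup.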
Hi @mbasharat  Add "| spath input=body" to your SPL - this will then extract the fields within the body JSON key as key=value fields in your results. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
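As a minimal sketch of how that might look end to end (the index and sourcetype names here are placeholders, not taken from the original search):

index=main sourcetype=counteract_notif
| spath input=body
| table isolation policy ctupdate ip ipv6 mac dns_hn eventTimestamp

After spath extracts the nested JSON in body, the table command simply displays the requested fields; top-level fields such as ctupdate and eventTimestamp should already be extracted by the automatic JSON handling.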
Hi @vksplunk1  By default your KV store files will be stored in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo - so if you have a backup of this directory you may be able to get the data back as of the time it was backed up. However, I would look at recovering this to a different/test server rather than your production instance, as it isn't possible to pick and choose which files to restore. Therefore you might need to recover the whole backup and then take a backup from the recovered data before restoring. Do you have other lookups as well? Those will also be affected if you overwrite from an old backup. You could try this approach: depending on the size of your lost KV store lookup, you could export it from the restored backup, then load it back into the KV store on your production instance using a combination of |inputlookup <restoredData.csv> | outputlookup <OriginalLookupName> Do you think this might work for your situation? Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
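To make the export/re-import step concrete, a minimal sketch (the CSV file name and lookup definition name below are placeholders; the outputlookup assumes a KV store lookup definition already exists for the collection):

| inputlookup restored_kvstore_export.csv
| outputlookup OriginalLookupName

Run inputlookup against the CSV exported from the recovered test server, check the row count looks right, then write it back into the production KV store collection with outputlookup.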
Hi - We have accidentally deleted a KV store with the outputlookup command, and we do not have a backup from Splunk. How can we restore the KV store from a backup of the Splunk home directory ( /opt/splunk )?
Has anyone managed to set up source control for workbooks?  Pulling the information down via the API to upload to GitLab is straightforward: you can run a GET request against [base_url]/rest/workbook_template (REST Workbook). The problem is with pushing information. As far as I've been able to find, you can only create new phases or tasks; you're not able to specify via name or ID that you want to update an object. There's also no way I've found to delete a phase or task, which would make creating a new one more reasonable.
Hi. I have the below raw event(s).

Highlighted syntax:
{ [-]    body: {"isolation": "isolation","device_classification": "Network Access Control","ip": "1.2.3.4", "mac": "Unknown","dns_hn": "XYZ","policy": "TEST_BLOCK","network_fn": "CounterACT Device","os_fingerprint": "CounterACT Appliance","nic_vendor": "Unknown Vendor","ipv6": "Unknown",}    ctupdate: notif    eventTimestamp: 1739913406    ip: 1.2.3.4    tenant_id: CounterACT__sample }

Raw text:
{"tenant_id":"CounterACT__sample","body":"{\"isolation\": \"isolation\",\"device_classification\": \"Network Access Control\",\"ip\": \"1.2.3.4\", \"mac\": \"Unknown\",\"dns_hn\": \"XYZ\",\"policy\": \"TEST_BLOCK\",\"network_fn\": \"CounterACT Device\",\"os_fingerprint\": \"CounterACT Appliance\",\"nic_vendor\": \"Unknown Vendor\",\"ipv6\": \"Unknown\",}","ctupdate":"notif","ip":"1.2.3.4","eventTimestamp":"1739913406"}

I need the below field=value pairs extracted from each event at search time. It is a very small dataset:
isolation=isolation
policy=TEST_BLOCK
ctupdate=notif
ip=1.2.3.4
ipv6=Unknown
mac=Unknown
dns_hn=XYZ
eventTimestamp=1739913406

Thank you in advance!!!
I am trying to export the dashboard into a CSV file, but I am not seeing CSV under Export. How do I enable the CSV export? My data is in table format.
Hello, I want to get the ML Toolkit; however, how will it affect the hard rules we write? Can we use the toolkit as a verification method on the same index data? I mean, for the same index and the same Splunk account, can we write hard rule sets as we do now and also use the ML Toolkit at the same time? Thanks a lot.
It appears that the latest version of this app is having this issue. Uninstall it and install an older version.
As of the time of writing, it is only available on the single value, single value icon, and single radial visualizations. I would very much like to see them add it for line graph visualizations.
Hello! I hope you can help! I have installed Splunk Enterprise 8.12 on my macOS 14.6.1 machine to study for an exam. Splunk installed fine. However, the lab asked me to create an app called "destinations", which I did, and I set the proper permissions. However, when I go to the app in the search head and type "index=main", it sees the index but doesn't display any records. I have copied eventgen down to the samples folder in the Destinations app folder and copied the eventgen.conf to the local folder as directed, but it still does not display anything. I also see that the main index is enabled in Indexes, using the $SPLUNK_DB/defaultdb/db path; it also shows that it indexed 1 MB out of 500 GB. I have a feeling that it's something obvious but I'm not seeing it. I really need this lab to work - can you assist? I used the SPLK-10012.PDF instructions; not sure if you have access to that. I pulled the files down from GitHub (eventgen). Maybe this is an easy fix? Thank you
I think I'm close, but the error_msg does not display: index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN | eval error_msg = case(match(_raw, "Disconnected"), "disconected", match(_raw, "restart failed"), "restart failed", match(_raw, "Failed to start connector"), "failed to start connector") | dedup host | table host connName error_msg
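One thing that may be worth checking, offered only as a sketch: match() uses case-sensitive regular expressions, so if the log text is not cased exactly as in the patterns, case() never matches and error_msg stays null. A variant with case-insensitive patterns and a catch-all default (the (?i) flags and the "other" bucket are suggestions, not part of the original search):

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" (ERROR=ERROR OR ERROR=WARN)
| eval error_msg = case(match(_raw, "(?i)disconnected"), "disconnected", match(_raw, "(?i)restart failed"), "restart failed", match(_raw, "(?i)failed to start connector"), "failed to start connector", true(), "other")
| dedup host
| table host connName error_msg

The true(), "other" branch makes unmatched events visible in the table instead of silently leaving the field empty.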
I put in a support case and got a response. Seems like this is a known issue with 9.4.0. I followed the resolution steps here and it seems to have worked for me. Had to do with the /etc/hosts file on the host machine. https://splunk.my.site.com/customer/s/article/After-upgrading-Splunk-from-v9-2-to-v9-4-the-Forwarder-Manager-Web-UI-is-unavailable
Given this in props.conf:
SEDCMD-removeevents = s/\"avg_ingress_latency_fe\":.*//g
Based on the raw data it is not doing anything; instead it is disturbing the JSON format. This is what we have on the SH.
SH props.conf:
[mysourcetype]
KV_MODE = json
AUTO_KV_JSON = true
Please help me with this case.
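One possible reason the JSON gets mangled, offered only as a sketch: the .* in the substitution consumes everything to the end of the raw event, so every key after avg_ingress_latency_fe is stripped along with it. Limiting the match to the single key/value pair and its trailing comma may keep the rest of the JSON intact (the stanza name is the one shown above; the regex is an untested suggestion):

[mysourcetype]
SEDCMD-removeevents = s/\"avg_ingress_latency_fe\":[^,}]*,?//g

Note that SEDCMD is applied at parsing time on the indexer or heavy forwarder that first handles the data, so it needs to be deployed there and will only affect newly indexed events, while the KV_MODE/AUTO_KV_JSON settings on the SH control search-time extraction.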
Can someone help on this ticket - https://community.splunk.com/t5/Getting-Data-In/Exclude-or-Remove-few-fields-while-on-boarding-data/m-p/711926/highlight/false#M117571
Hi, I want to use a common OTel Collector gateway to collect traces and metrics from different sources. One of the sources I want to collect traces and metrics from is Azure API Management. How can I configure Azure API Management to send traces and metrics to an existing OTel Collector integrated with Splunk Observability? The Splunk documentation talks about how to create a separate integration from Splunk Observability to Azure cloud. However, I don't want to create a separate integration, but rather use an existing collector gateway. Regards, Sukesh
  Here is the raw data sample--  {"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-fe4a30d8-ce53-4427-b920-ec81381cb1f4","report_timestamp":"2025-02-18T17:21:53.173205Z","service_engine":"GB-DRN-AB-Tier2-se-vxeuz","vcpu_id":0,"log_id":18544,"client_ip":"128.12.73.92","client_src_port":42996,"client_dest_port":443,"client_rtt":1,"http_version":"1.1","method":"HEAD","uri_path":"/path/to/monitor/page/","host":"udg1704n01.hc.cloud.uk.sony","response_content_type":"text/html","request_length":93,"response_length":94,"response_code":400,"response_time_first_byte":1,"response_time_last_byte":1,"compression_percentage":0,"compression":"","client_insights":"","request_headers":3,"response_headers":12,"request_state":"AVI_HTTP_REQUEST_STATE_READ_CLIENT_REQ_HDR","significant_log":["ADF_HTTP_BAD_REQUEST_PLAIN_HTTP_REQUEST_SENT_ON_HTTPS_PORT","ADF_RESPONSE_CODE_4XX"],"vs_ip":"128.160.71.14","request_id":"2OP-U2vt-pre1","max_ingress_latency_fe":0,"avg_ingress_latency_fe":0,"conn_est_time_fe":1,"source_ip":"128.12.73.92","vs_name":"v-atcptest-wdc.hc.cloud.uk.hc-443","tenant_name":"admin"}
Hello Amory, The NetFlow Analytics for Splunk App and TA-netflow are designed to work with NetFlow Optimizer. For details, please visit: https://docs.netflowlogic.com/integrations-and-apps/integrations-with-splunk/ If you have any questions or would like to see a demo, please contact us at team_splunk@netflowlogic.com