All Topics

Hi, I have data in two columns and am using a third column to display the matches:

| makeresults | eval GroupA = 353649273, GroupB=353648649
| append [ | makeresults | eval GroupA = 353649184, GroupB=353648566]
| append [ | makeresults | eval GroupA = 353649091, GroupB=353616829]
| append [ | makeresults | eval GroupA = 353649033, GroupB=353638941]
| append [ | makeresults | eval GroupA = 353648797]
| append [ | makeresults | eval GroupA = 353648680]
| append [ | makeresults | eval GroupA = 353648745]
| append [ | makeresults | eval GroupA = 353648730]
| append [ | makeresults | eval GroupA = 353638941]
| fields - _time
| foreach GroupA [eval match=if(GroupA=GroupB,GroupA ,NULL)]
| stats values(GroupA) values(GroupB) values(match)

However, nothing shows up in values(match). Is there something wrong with the logic, or is there an alternate way to do it?
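For what it's worth, the comparison this query attempts is cross-row: each GroupA value needs to be checked against every GroupB value, while a row-wise if(GroupA=GroupB, ...) only sees the two values on the same row. A minimal Python sketch of the intended logic, using the sample values from the question:

```python
# Sketch (not SPL): the match the question describes is a set
# intersection across all rows, not a per-row comparison.
group_a = [353649273, 353649184, 353649091, 353649033, 353648797,
           353648680, 353648745, 353648730, 353638941]
group_b = [353648649, 353648566, 353616829, 353638941]

# Values of GroupA that appear anywhere in GroupB.
matches = sorted(set(group_a) & set(group_b))
print(matches)  # only 353638941 appears in both columns
```

With the sample data, 353638941 is the only value present in both columns, so that is the single match the third column should show.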
I am looking to extract this section of an event and have it as a field that I can manipulate. I am unfamiliar with regex and I am getting the wrong results.

Events:

<28>1 2025-02-19T15:14:00.968210+00:00 aleoweul0169x falcon-sensor-bpf 1152 - - CrowdStrike(4): SSLSocket Disconnected from Cloud.
<30>1 2025-02-19T15:14:16.104202+00:00 aleoweul0169x falcon-sensor-bpf 1152 - - CrowdStrike(4): SSLSocket connected successfully to ts01-lanner-lion.cloudsink.net:443

I am looking to have a field called Disconnect based on "SSLSocket Disconnected from Cloud".
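One way to sanity-check a pattern before putting it into a field extraction is to run it against the sample events in Python. A sketch, assuming the goal is simply to flag events that contain the disconnect message (the field name "Disconnect" mirrors the question):

```python
import re

# The two sample events from the question.
events = [
    '<28>1 2025-02-19T15:14:00.968210+00:00 aleoweul0169x falcon-sensor-bpf '
    '1152 - - CrowdStrike(4): SSLSocket Disconnected from Cloud.',
    '<30>1 2025-02-19T15:14:16.104202+00:00 aleoweul0169x falcon-sensor-bpf '
    '1152 - - CrowdStrike(4): SSLSocket connected successfully to '
    'ts01-lanner-lion.cloudsink.net:443',
]

# A literal match is enough here; a named group keeps the field name explicit.
pattern = re.compile(r'(?P<Disconnect>SSLSocket Disconnected from Cloud)')
flags = [bool(pattern.search(e)) for e in events]
print(flags)  # first event matches, second does not
```

The same named-group pattern shape carries over to a search-time extraction; only the surrounding configuration differs.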
I want to extract a value from the following field at index time and use it to map the event to an index:

vs_name=v-jupiter-prd-cbc-us.sony-443-ipv6

I want to extract everything after v- up to and including sony, i.e. jupiter-prd-cbc-us.sony, as fqdn, so that this fqdn can be checked against a lookup to map the event to the correct index. Please help me with the props and transforms to extract fqdn correctly.
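A quick way to verify the regex itself before wiring it into props/transforms is to test it in Python. A sketch with a non-greedy match from "v-" up to the first "sony" (the exact stanza layout will depend on the deployment, but the same pattern would go into the transform's REGEX):

```python
import re

vs_name = "v-jupiter-prd-cbc-us.sony-443-ipv6"

# Non-greedy ".+?" stops at the first occurrence of "sony",
# capturing "jupiter-prd-cbc-us.sony".
m = re.search(r'v-(?P<fqdn>.+?sony)', vs_name)
print(m.group('fqdn'))
```

If some vs_name values contain "sony" more than once, the non-greedy quantifier is what keeps the capture anchored to the first occurrence.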
Hello everyone! I installed Splunk and Alert Manager Enterprise in VirtualBox for learning purposes (4 CPU / 8 GB RAM). I configured AME per the documentation, and Health Check is green. I can send test alerts, and they appear in the ame_default index. However, the alerts don't appear under Events; the page hangs forever. I have some broken-pipe errors, but those also appear in another, working environment. Thank you for your help. A
I have a drop-down named "Program" and a table with the static datasource "ds_EHYzbg0g". How can I define the table's dataSource dynamically, based on the value selected in the "Program" drop-down?

{
  "options": {
    "items": [ { "label": "All", "value": "*" } ],
    "defaultValue": "*",
    "token": "select_program"
  },
  "dataSources": { "primary": "ds_8xyubP1c" },
  "title": "Program",
  "type": "input.dropdown"
}
{
  "type": "splunk.table",
  "options": {
    "tableFormat": {
      "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByTheme)"
    },
    "columnFormat": {
      "_raw": {
        "data": "> table | seriesByName(\"_raw\") | formatByType(_rawColumnFormatEditorConfig)"
      }
    },
    "count": 50
  },
  "dataSources": { "primary": "ds_EHYzbg0g" },
  "context": {
    "_rawColumnFormatEditorConfig": { "string": { "unitPosition": "after" } }
  },
  "showProgressBar": true,
  "containerOptions": {},
  "showLastUpdated": false
}
Hi all, I have configured a new script under 'Data inputs' to feed my index with data from a REST API. The script is written in Python 3; it makes a simple request to the endpoint, gathers the data, does some small manipulation of it, and writes it to stdout with the print() function. The script is placed in the 'bin' folder of my app, and using the web UI I configured it without any issue to run every hour. I tested it by running it manually from the command line, and the output is what I expected. In splunkd.log I can see that Splunk ran it:

02-19-2025 10:49:00.001 +0100 INFO ExecProcessor [3193396 ExecProcessor] - setting reschedule_ms=86399999, for command=/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/adsmart_summary/bin/getCampaignData.py ...

and nothing more is logged, neither errors nor anything else. But there is no data from the script in the index I chose in the web UI. Where can I start checking what is going on? Thanks!
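When a scripted input runs but no data lands in the index, one thing worth ruling out is output buffering: the indexer only sees what actually reaches stdout. A minimal, hypothetical sketch of the output side of such a script, writing one event per line and flushing explicitly (names here are illustrative, not the script from the question):

```python
#!/usr/bin/env python3
"""Minimal scripted-input output sketch: Splunk indexes whatever the
script writes to stdout, so one event per line with an explicit flush
helps rule out buffering as the reason nothing arrives."""
import json
import sys
import time

def emit(event: dict) -> None:
    # One JSON event per line on stdout, flushed immediately.
    sys.stdout.write(json.dumps(event) + "\n")
    sys.stdout.flush()

if __name__ == "__main__":
    emit({"time": int(time.time()), "message": "scripted input heartbeat"})
```

Running the script under the same interpreter Splunk uses (the one shown in the splunkd.log line) is also worth trying, since a manually invoked system Python can behave differently from Splunk's bundled one.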
Please help me extract only the compression values from this raw event:

"response_time_last_byte":5,"compression_percentage":0,"compression":"NO_COMPRESSION_CAN_BE_COMPRESSED","client_insights":"","request_headers":577,"response_headers":13,"request_state":"AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT",
"response_time_last_byte":1,"compression_percentage":0,"compression":"","client_insights":"","request_headers":3,"response_headers":12,"request_state":"AVI_HTTP_REQUEST_STATE_READ_CLIENT_REQ_HDR",

I tried a pattern, but it extracts client_insights as well. I need to exclude all compression string values by writing a SEDCMD.
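An over-match that swallows the neighbouring client_insights key usually comes from a greedy ".*" crossing the closing quote. A character class like [^"]* cannot cross it, which a Python sketch makes easy to verify (the same pattern idea would then go into a SEDCMD s/// expression):

```python
import re

# Fragment of the raw event from the question.
raw = ('"response_time_last_byte":5,"compression_percentage":0,'
       '"compression":"NO_COMPRESSION_CAN_BE_COMPRESSED",'
       '"client_insights":"","request_headers":577')

# [^"]* stops at the first closing quote, so only the compression
# value is emptied and client_insights is untouched.
cleaned = re.sub(r'"compression":"[^"]*"', '"compression":""', raw)
print(cleaned)
```

Testing the regex this way first makes it much cheaper to iterate than redeploying props.conf each time.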
We have a requirement to remove a few strings from events while indexing the data. Here is my raw event sample:

{"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-fe4a30d8-ce53-4427-b920-ec81381cb1f4","report_timestamp":"2025-02-19T06:31:56.065370Z","service_engine":"GB-DRN-AB-Tier2-se-vxeuz","vcpu_id":0,"log_id":20138,"client_ip":"128.12.73.92","client_src_port":39688,"client_dest_port":443,"client_rtt":1,"http_version":"1.1","method":"HEAD","uri_path":"/path/to/monitor/page/","host":"udg1704n01.hc.cloud.uk.sony","response_content_type":"text/html","request_length":93,"response_length":94,"response_code":400,"response_time_first_byte":1,"response_time_last_byte":1,"compression_percentage":0,"compression":"","client_insights":"","request_headers":3,"response_headers":12,"request_state":"AVI_HTTP_REQUEST_STATE_READ_CLIENT_REQ_HDR","significant_log":["ADF_HTTP_BAD_REQUEST_PLAIN_HTTP_REQUEST_SENT_ON_HTTPS_PORT","ADF_RESPONSE_CODE_4XX"],"vs_ip":"128.160.71.14","request_id":"jjc-HmSo-8zb3","max_ingress_latency_fe":0,"avg_ingress_latency_fe":0,"conn_est_time_fe":0,"source_ip":"128.12.73.92","vs_name":"v-atcptest-wdc.hc.cloud.uk.sony-443","tenant_name":"admin"}

I need to remove strings like avg_ingress_latency_fe, conn_est_time_fe, client_insights, etc. I searched around and found that SEDCMD can do this, so I added the following to props.conf on my cluster manager, and it works well:

SEDCMD-removeavglatency=s/\"avg_ingress_latency_fe\"\:[\d+]\,//g
SEDCMD-removeclientinsights=s/\"client_insights\"\:\"\.*"\,//g

My problem is that we would need to keep adding lines like this, which will not be readable in the future. I want to keep it to fewer lines.
I tried this, but it is not working, and it disturbs the JSON format:

== props.conf ==
[yourSourceType]
TRANSFORMS-removeJsonKeys = removeJsonKeys1

== transforms.conf ==
[removeJsonKeys1]
INGEST_EVAL = _raw=json_delete(_raw, "avg_ingress_latency_be", "avg_ingress_latency_fe", "max_ingress_latency_fe", "client_insights" )

We had already removed a few lines from this event via props.conf to enable auto extraction of the JSON fields:

SEDCMD-removeheader=s/^[^\{]*//g

and here is the search head props.conf:

[mysourcetype]
KV_MODE = json
AUTO_KV_JSON = true

Please suggest what I can do instead to keep props.conf neat.
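The "list the unwanted keys once" shape the json_delete approach aims for can be sanity-checked outside Splunk: parse the event as JSON, drop the keys, and re-serialize, which keeps the structure valid by construction. A Python sketch with a trimmed-down event:

```python
import json

# Trimmed-down version of the sample event.
event = {"avg_ingress_latency_fe": 0, "conn_est_time_fe": 0,
         "client_insights": "", "response_code": 400}

# One list of unwanted keys instead of one SEDCMD line per key.
drop = ["avg_ingress_latency_fe", "conn_est_time_fe", "client_insights"]
slim = {k: v for k, v in event.items() if k not in drop}
print(json.dumps(slim))
```

The design point is that deleting keys from a parsed object, rather than regex-editing the raw string, is what prevents the JSON from being disturbed; if the stripped event parses cleanly here, the remaining problem is on the Splunk configuration side (for example, the SEDCMD header strip and the transform interacting at index time).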
How can I export the host values to Excel for a particular serverclass? Is there a query for that? That would be helpful. The path is: Deployment server -> Forwarder Management -> serverclass -> Action (Edit Clients) -> export the hostnames from the list.
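If UI export is not available, one workable fallback is reading the deployment server's serverclass.conf directly, since the client whitelist entries live there. A hypothetical sketch (the stanza name, hostnames, and file layout here are made up for illustration; real entries may use patterns rather than full hostnames):

```python
import csv
import io
import re

# Hypothetical serverclass.conf fragment; a real file would be read
# from $SPLUNK_HOME/etc/system/local/serverclass.conf or similar.
conf = """\
[serverClass:my_class]
whitelist.0 = host-a.example.com
whitelist.1 = host-b.example.com
"""

# Collect the whitelist values line by line.
hosts = re.findall(r'^whitelist\.\d+\s*=\s*(\S+)', conf, re.MULTILINE)

# Write them as CSV, which Excel opens directly.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["host"])
for h in hosts:
    writer.writerow([h])
print(out.getvalue())
```

This only covers whitelist-style entries; if the serverclass uses wildcard patterns, the actual matching clients would need to come from the deployment server itself rather than the conf file.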
Hi everyone, I've installed and configured a Splunk Heavy Forwarder on an EC2 instance in AWS, and configured two Splunk indexers on EC2 instances in AWS. I created a test.log file on my HF with sample log events to forward to my Splunk indexers. I'm trying to forward the logs/events containing the keyword "success" to indexer_1 and those containing the keyword "error" to indexer_2. But for some reason, the logs/events from the HF are not visible on either indexer. For context, I have also installed and configured a UF on another EC2 instance in AWS sending data to indexer_1, and I can see that data forwarded successfully with no issues. Below are the .conf files and setup on my HF and the two indexers.

HF:

inputs.conf:
[monitor:///opt/splunk/var/log/splunk/test.log]
disabled = false
sourcetype = test

outputs.conf:
[tcpout:errorGroup]
server = indexr_1_ip_addr:9997
[tcpout:successGroup]
server = indexer_2_ip_addr:9997

props.conf:
[test]
TRANSFORMS-routing=errorRouting,successRouting

transforms.conf:
[errorRouting]
REGEX=error
DEST_KEY=_TCP_ROUTING
FORMAT=errorGroup
[successRouting]
REGEX=success
DEST_KEY=_TCP_ROUTING
FORMAT=successGroup

Indexer_1 & Indexer_2: configured port 9997 on both indexers.

Note: I tried the steps below to troubleshoot or identify the issue, but no luck so far:
1. Checked whether the forwarder has any inactive forwards or receivers via the CLI:
Active forwards: indexr_1_ip_addr:9997, indexr_2_ip_addr:9997
Configured but inactive forwards: None
2. Checked splunkd.log on the forwarder for errors related to data forwarding: no errors.
3. Checked the Security Group rules (inbound and outbound) in the AWS console: port 9997 is enabled for both inbound and outbound traffic.
4. All EC2 instances running Splunk are in the same Security Group in AWS.
5. Tried to ping both indexers from the HF, but got no response.

Can someone please help me with this issue? I'm stuck and unable to figure out the root cause.
Also, I'm using the same security group for both the HF and the UF, with the same inbound and outbound rules, but I can only see the logs sent from the UF, not the logs/events from my HF. I'm not sure what I'm missing to get the HF's logs/events into my indexers. Thank you!
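One note on step 5: ping uses ICMP, which AWS security groups block unless explicitly allowed, so a failed ping does not prove the data path is broken. A more telling check is a plain TCP connection attempt to port 9997, which exercises the same path the forwarder uses. A sketch (the address is a placeholder):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage from the HF, substituting the real indexer address:
# print(port_open("indexer_1_ip_addr", 9997))
```

If this returns False from the HF while the UF's host returns True, the difference is in the network path; if it returns True, the problem is more likely in the routing configuration (for example, events matching neither REGEX have no _TCP_ROUTING destination).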
Hi, I am trying to update IT Essentials Work (ITEW) from v4.13.0 to v4.15.0. There is not much documentation on ITEW, so I am using the documentation for IT Service Intelligence (ITSI); my understanding is that ITEW is the free version of ITSI without the premium features. I checked the prerequisites and updated per the documentation:

1. Stopped the service (it is a single instance - SH)
2. Extracted the new version into $SPLUNK_HOME/etc/apps
3. Started the service

Then I opened the app on the search head to proceed with the update. It passed the prechecks and got to:

2025-02-19 14:30:56,637+1100 process:654449 thread:MainThread INFO [itsi.migration] [itsi_migration_log:43] [info] UI: Running prechecker: EAPrechecks

I left it for 30 minutes or so, then checked the status by running:

curl -k -u admin:changeme -X GET https://localhost:8089/servicesNS/nobody/SA-ITOA/migration/info

and it showed is_running: false. I cannot see anything alarming when I check the status. I tried several times, and every time it is the same. I checked the permissions and the troubleshooting documentation and restarted the service, but still could not update. Please advise.
Has anyone been able to use the "| sendalert risk ..." command from the correlation search query, even when the search returns no results? I currently need to do this, but when there are no results I get the message "Error in 'sendalert' command: Alert script returned error code 3." Is there a way to truncate (abort) the sendalert command when there are no results?
Hello, I have this search query:

index=app iNumber IN (72061271737983, 72061271737983, 72061274477906, 72061277215167)
| stats count by notificationId, iNumber

This results in multiple notificationIds for each iNumber in the list. What I'm trying to find out is the max notificationId value per iNumber, and to output that list. Is there a way to do that? Something like:

iNumber (Max)NotificationId
72061271737983 12345
72061271737983 78787

Thank you!
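The shape being asked for is a per-group maximum. A Python sketch of that aggregation, using made-up notificationId values (not the real data), just to pin down the logic the search needs to express:

```python
from collections import defaultdict

# (iNumber, notificationId) pairs; the notificationId values are invented.
rows = [("72061271737983", 12345), ("72061271737983", 9999),
        ("72061274477906", 78787)]

# Keep the largest notificationId seen for each iNumber.
max_by_inumber = defaultdict(int)
for inumber, notification_id in rows:
    max_by_inumber[inumber] = max(max_by_inumber[inumber], notification_id)
print(dict(max_by_inumber))
```

In SPL terms this corresponds to aggregating with max(notificationId) grouped by iNumber instead of count; if notificationId is indexed as a string, it would need converting to a number first so the maximum is numeric rather than lexicographic.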
Hi, we have accidentally deleted a KV store collection with the outputlookup command, and we do not have a KV store backup from Splunk. How can we restore the KV store from a backup of the Splunk home directory (/opt/splunk)?
Has anyone managed to set up source control for workbooks?  Pulling the information down via API to upload to gitlab is straightforward. You can run a get request against [base_url]/rest/workbook_template (REST Workbook). The problem is with pushing information. As far as I've been able to find, you can only create new phases or tasks. You're not able to specify via name or ID that you want to update an object. There's also no way I've found to delete a phase or task which would make creating a new one more reasonable.
Hi, I have the raw event(s) below.

Highlighted syntax:

{ [-]
   body: {"isolation": "isolation","device_classification": "Network Access Control","ip": "1.2.3.4", "mac": "Unknown","dns_hn": "XYZ","policy": "TEST_BLOCK","network_fn": "CounterACT Device","os_fingerprint": "CounterACT Appliance","nic_vendor": "Unknown Vendor","ipv6": "Unknown",}
   ctupdate: notif
   eventTimestamp: 1739913406
   ip: 1.2.3.4
   tenant_id: CounterACT__sample
}

Raw text:

{"tenant_id":"CounterACT__sample","body":"{\"isolation\": \"isolation\",\"device_classification\": \"Network Access Control\",\"ip\": \"1.2.3.4\", \"mac\": \"Unknown\",\"dns_hn\": \"XYZ\",\"policy\": \"TEST_BLOCK\",\"network_fn\": \"CounterACT Device\",\"os_fingerprint\": \"CounterACT Appliance\",\"nic_vendor\": \"Unknown Vendor\",\"ipv6\": \"Unknown\",}","ctupdate":"notif","ip":"1.2.3.4","eventTimestamp":"1739913406"}

I need the field=value pairs below extracted from each event at search time. It is a very small dataset:

isolation=isolation
policy=TEST_BLOCK
ctupdate=notif
ip=1.2.3.4
ipv6=Unknown
mac=Unknown
dns_hn=XYZ
eventTimestamp=1739913406

Thank you in advance!
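One complication worth noting: the embedded body string ends with a trailing comma before its closing brace, which is not valid JSON, so any parser that tries the inner object as-is will fail. A Python sketch of the two-step parse (outer object first, then the body after dropping the trailing comma), using a trimmed-down version of the event:

```python
import json
import re

# Trimmed-down version of the raw event; the body is an escaped JSON
# string with an invalid trailing comma, as in the question.
raw = ('{"tenant_id":"CounterACT__sample","body":"{\\"isolation\\": '
       '\\"isolation\\",\\"policy\\": \\"TEST_BLOCK\\",\\"ipv6\\": '
       '\\"Unknown\\",}","ctupdate":"notif","ip":"1.2.3.4",'
       '"eventTimestamp":"1739913406"}')

outer = json.loads(raw)
# Remove the trailing comma before "}" so the inner object parses.
body = re.sub(r',\s*}', '}', outer.pop('body'))
fields = {**outer, **json.loads(body)}
print(fields)
```

The same two steps translate to search time: extract the outer fields, repair the trailing comma in body, then parse body as JSON; for a very small dataset, even a handful of plain regex extractions per field would also do.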
I am trying to export a dashboard to a CSV file, but I am not seeing CSV under Export. How do I enable CSV export? My data is in table format.
Hello, I want to get the ML Toolkit; however, how will it affect the hard rules we write? Can we use the toolkit as a verification method on the same index data? I mean, for the same index and the same Splunk account, can we keep writing hard rule sets as we do now and also use the ML Toolkit at the same time? Thanks a lot.
Hello! I hope you can help! I have installed Splunk Enterprise 8.12 on my macOS 14.6.1 machine to study for an exam. Splunk installed fine. However, the lab asked me to create an app called "destinations", which I did, and I set the proper permissions. But when I go to the app in the search head and type "index=main", it sees the index but doesn't display any records. I copied eventgen down to the samples folder in the Destinations folder and copied eventgen.conf to the local folder as directed, but it still does not display anything. I also see that the main index is enabled in Indexes using $SPLUNK_DB/defaultdb/db, and that it has indexed 1 MB out of 500 GB. I have a feeling it's something obvious that I'm not seeing. I really need this lab to work; can you assist? I used the SPLK-10012.PDF instructions (not sure if you have access to that) and pulled the eventgen files down from GitHub. Maybe this is an easy fix? Thank you.
Hi, I want to use a common OTel Collector gateway to collect traces and metrics from different sources. One of the sources I want to collect traces and metrics from is Azure API Management. How can I configure Azure API Management to send traces and metrics to an existing OTel Collector integrated with Splunk Observability? The Splunk documentation describes how to create a separate integration from Splunk Observability to Azure cloud; however, I don't want to create a separate integration, but rather use an existing collector gateway.

Regards, Sukesh