All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We would like to produce statistics about Splunk usage and categorize searches by the range they cover (last day, past week, or past month), and I wonder which fields in _audit provide the beginning and end of each search's time range.
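A minimal sketch of one possible approach, assuming the audittrail fields search_et and search_lt hold the epoch start and end of each search's time range (they can be "N/A" for all-time searches, so those rows fall out of the case statement):

index=_audit action=search info=completed search_et=* search_lt=*
| eval range_sec = search_lt - search_et
| eval range_bucket = case(range_sec <= 86400, "last day",
                           range_sec <= 604800, "past week",
                           range_sec <= 2678400, "past month",
                           true(), "longer")
| stats count BY range_bucket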
I think I know how to do this, but I thought it would be best to check with some of the experts here first. I am upgrading the hardware (storage expansion) on our indexers, and this will require turning off and unplugging each device. The indexers are clustered with a replication factor of 2. From what I have read, the procedure is:
1. Issue the 'splunk offline' command on the indexer I am working on.
2. Wait for the indexer to wrap up any tasks.
3. Shut down and unplug the machine to perform the upgrade.
4. Once complete, plug it back in and power it on (and make sure Splunk starts running again).
Am I missing anything important? Thanks!
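For reference, a rough sketch of the command sequence, assuming SPLUNK_HOME is /opt/splunk (adjust paths as needed); enabling maintenance mode on the cluster manager first is optional but avoids unnecessary bucket fix-up activity while the peer is down:

# On the cluster manager (optional)
/opt/splunk/bin/splunk enable maintenance-mode

# On the indexer being upgraded
/opt/splunk/bin/splunk offline
# ...power off, perform the hardware work, power on; Splunk should restart via boot-start...

# On the cluster manager, once the peer has rejoined
/opt/splunk/bin/splunk disable maintenance-mode
/opt/splunk/bin/splunk show cluster-status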
Hello, I am having issues getting data into Splunk Cloud with two new Universal Forwarders. I have two existing Universal Forwarders that are working just fine, but I am migrating them to new servers. The Universal Forwarder version is the same on both the old and new servers (9.4.3). I have the Universal Forwarder software installed on both new Linux servers, I copied the inputs.conf and outputs.conf files from the old servers, and I also installed the splunkclouduf.spl credentials package that I downloaded from my Splunk Cloud instance.

The usage for these forwarders is limited to syslog messages only. I receive syslog messages from other devices on port 514 of the Universal Forwarders (UDP and TCP allowed) and those messages forward to Splunk Cloud. Pretty simple setup. I have confirmed with tcpdump that traffic is being received on the servers on port 514. However, none of that traffic is reaching Splunk Cloud. I can see the new forwarders in the Splunk Cloud Monitoring Console under Forwarders->Versions and Forwarders->Instance, but no data is being received from them.

Below are my inputs.conf and outputs.conf files from one of the new servers. As you can see, it is a very simple setup and outputs.conf is doing nothing. Again, these were copied from my old working servers exactly, except for the hostname on the new forwarders.

----------------------------------------
inputs.conf

[default]
host = NHC-NETSplunkForwarder

[tcp://514]
acceptFrom = *
connection_host = ip
index = nhcnetwork
sourcetype = NETWORK
disabled = 0

[udp://514]
acceptFrom = *
connection_host = ip
index = nhcnetwork
sourcetype = NETWORK

----------------------------------------
outputs.conf (sanitized)

#This breaks stuff. The credentials package provides what is needed here. Leave commented out.
#[tcpout]
#defaultGroup = splunkcloud,default-autolb-group
#[tcpout:default-autolb-group]
#server = XXXXXXX.splunkcloud.com:9997
#disabled = false
#[tcpout-server://XXXXXXX.splunkcloud.com:9997]

Do I need to do something in Splunk Cloud to allow these new forwarders to send data? I don't know how splunkclouduf.spl works, so I don't know a way to monitor output traffic from the Universal Forwarder. Any suggestions or tips are appreciated. Thanks, -Pete
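A few hedged checks that are sometimes useful on the new forwarders, assuming a default SPLUNK_HOME of /opt/splunkforwarder; these only inspect the existing configuration and logs rather than changing anything:

/opt/splunkforwarder/bin/splunk list forward-server          # shows active vs. configured forward-servers
/opt/splunkforwarder/bin/splunk btool outputs list --debug   # shows the merged outputs.conf, including the splunkclouduf.spl app
grep -E "TcpOutputProc|blocked|SSL" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -50

One common cause when migrating to new Linux hosts is that the forwarder no longer runs as root and therefore cannot bind to port 514; bind errors for the UDP/TCP inputs in splunkd.log would confirm or rule that out.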
Jun 26 13:46:12 128.23.84.166 [local0.err] <131>Jun 26 13:46:12 GBSDFA1AD011HMA.systems.uk.fed ASM:f5_asm=PROD vs_name="/f5-tenant-01/XXXXXXXX" violations="HTTP protocol compliance failed" sub_violations="HTTP protocol compliance failed:Header name with no header value" attack_type="HTTP Parser Attack" violation_rating="3/5" severity="Error" support_id="XXXXXXXXX" policy_name="/Common/waf-fed-transparent" enforcement_action="none" dest_ip_port="128.155.6.2:443" ip_client="128.163.192.44" x_forwarded_for_header_value="N/A" method="POST" uri="/auth-service/api/v2/token/refreshAccessToken" microservice="N/A" query_string="N/A" response_code="500" sig_cves="N/A" sig_ids="N/A" sig_names={N/A} sig_set_names="N/A" staged_sig_cves="N/A" staged_sig_ids="N/A" staged_sig_names="N/A" staged_sig_set_names="N/A" <?xml version='1.0' encoding='UTF-8'?> <BAD_MSG> <violation_masks> <block>0-0-0-0</block> <alarm>2400500004500-106200000003e-0-0</alarm> <learn>0-0-0-0</learn> <staging>0-0-0-0</staging> </violation_masks> <request-violations> <violation> <viol_index>14</viol_index> <viol_name>VIOL_HTTP_PROTOCOL</viol_name> <http_sanity_checks_status>2</http_sanity_checks_status> <http_sub_violation_status>2</http_sub_violation_status> <http_sub_violation>SGVhZGVyICdBdXRob3JpemF0aW9uJyBoYXMgbm8gdmFsdWU=</http_sub_violation> </violation> </request-violations> </BAD_MSG>​ Jul 3 11:12:48 128.168.189.4 [local0.err] <131>2025-07-03T11:12:48+00:00 nginxplus-nginx-ingress-controller-6947cb4744-hxwf5 ASM:Log_details\x0a\x0avs_name="14-cyberwasp-sv-busybox.ikp3001ynp.cloud.uk.fed:10-/"\x0aviolations="Attack signature detected"\x0asub_violations="N/A"\x0aattack_type="Cross Site Scripting (XSS)"\x0aviolation_rating="5/5"\x0aseverity="N/A"\x0a\x0asupport_id="14096019979554169061"\x0apolicy_name="waf-fed-enforced"\x0aenforcement_action="block"\x0a\x0adest_ip_port="0.0.0.0:443"\x0aip_client="128.175.220.223"\x0ax_forwarded_for_header_value="N/A"\x0a\x0amethod="GET"\x0auri="/"\x0amicroservice="N/A"\x0aquery_string="svanga=%3Cscript%3Ealert(1)%3C/script%3E%22"\x0aresponse_code="0"\x0a\x0asig_cves="N/A,N/A,N/A,N/A"\x0asig_ids="200001475,200000098,200001088,200101609"\x0asig_names={XSS script tag end (Parameter) (2),XSS script tag (Parameter),alert() (Parameter)...}\x0asig_set_names="{High Accuracy Signatures;Cross Site Scripting Signatures;Generic Detection Signatures (High Accuracy)},{High Accuracy Signatures;Cross Site Scripting Signatures;Generic Detection Signatures (High Accuracy)},{Cross Site Scripting Signatures}..."\x0astaged_sig_cves="N/A,N/A,N/A,N/A"\x0astaged_sig_ids="N/A"\x0astaged_sig_names="N/A"\x0astaged_sig_set_names="N/A"\x0a\x0a<?xml version='1.0' 
encoding='UTF-8'?><BAD_MSG><violation_masks><block>400500200500-1a01030000000032-0-0</block><alarm>20400500200500-1ef903400000003e-7400000000000000-0</alarm><learn>0-0-0-0</learn><staging>0-0-0-0</staging></violation_masks><request-violations><violation><viol_index>42</viol_index><viol_name>VIOL_ATTACK_SIGNATURE</viol_name><context>parameter</context><parameter_data><value_error/><enforcement_level>global</enforcement_level><name>c3Zhbmdh</name><value>PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0PiI=</value><location>query</location><expected_location></expected_location><is_base64_decoded>false</is_base64_decoded><param_name_pattern>*</param_name_pattern><staging>0</staging></parameter_data><staging>0</staging><sig_data><sig_id>200001475</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>8</offset><length>7</length></kw_data></sig_data><sig_data><sig_id>200000098</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>7</offset><length>7</length></kw_data></sig_data><sig_data><sig_id>200001088</sig_id><blocking_mask>2</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>15</offset><length>6</length></kw_data></sig_data><sig_data><sig_id>200101609</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>7</offset><length>25</length></kw_data></sig_data></violation></request-violations></BAD_MSG>

We have already implemented some platform logs in Splunk and this is the format we have for them (the 1st XML sample above). Here is the props.conf we have written for it on the indexer:

[abcd]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
SEDCMD-formatxml = s/></>\n</g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000
# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false

props.conf on the search head:

[abcd]
REPORT-xml_kv_extract = bad_msg_xml, bad_msg_xml_kv

transforms.conf:

[bad_msg_xml]
REGEX = (?ms)<BAD_MSG>(.*?)<\/BAD_MSG>
FORMAT = Bad_Msg_Xml::$1

[bad_msg_xml_kv]
SOURCE_KEY = Bad_Msg_Xml
REGEX = (?ms)<(\w*)>([^<]*)<\/\1>
FORMAT = $1::$2
MV_ADD = true

Now we are applying the same logic to the raw data in the 2nd XML format (attached above), and it is not coming out in a readable format at all. Sometimes a single event is split into multiple events; for example, the response code arrives as one event and the method as another, which should not happen. Please help me with props and transforms modifications. We need the data to be in the same format as the first sample.
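A hedged observation and sketch, not a tested fix: the second sample carries an ISO 8601 timestamp ("2025-07-03T11:12:48+00:00") after the syslog header and uses literal \x0a escapes instead of real newlines, so the existing LINE_BREAKER/TIME_FORMAT (built around "%b %d %H:%M:%S") and the SEDCMD rules may explain both the event splitting and the unreadable layout. A possible props.conf starting point, with the regexes treated as assumptions to validate against real data:

[abcd]
SHOULD_LINEMERGE = false
# Break only on a syslog header followed by a source IP: "Mon DD HH:MM:SS <ip>"
LINE_BREAKER = ([\r\n]+)(?=[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s+\d{1,3}(?:\.\d{1,3}){3}\s)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
TRUNCATE = 10000
# Turn the literal \x0a escapes in the nginx/NAP variant into real newlines,
# then put each XML element on its own line as before
SEDCMD-unescape_newline = s/\\x0a/\n/g
SEDCMD-newline_remove = s/\\r\\n/\n/g
SEDCMD-formatxml = s/></>\n</g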
Hi Splunkers, I have a Splunk cluster with 1 SH, 1 CM and HF, and 3 indexers. The CM is configured so that forwarders and the SH connect using indexer discovery. All of this works well when we don't have any issues. However, when the indexers are not accepting connections (sometimes, when we are overusing the license, we flip the input port on the indexers to xxxx so no data is accepted from forwarders), the network activity (read/write) on the Search Head takes a hit and the Search Head becomes completely unusable. Has anyone faced a similar issue, or am I missing a setting in the indexer discovery setup? Thanks, Pravin
I installed the UF 9.1.7 ARM package on Rocky Linux 9, and I get the error "tcp_conn_open_afux ossocket_connect failed with No such file or directory" when I set deploy-poll. Is this a compatibility problem?
I am using the Java SignalFlow client to send the same query each minute.  Only the start and end times change.  I actually set the start and end time to the same value, which seems to reliably give me a single data point, which is what I want. "persistent" is false and "immediate" is true. I'm reusing the SignalFlowClient object but closing the computation after reading the results. If I run the client in a loop with a 60 second delay between iterations, I get frequent but unpredictable http 400 bad request responses.  It appears the first request always succeeds.  There is no further info about what's bad.  Output looks like this: com.signalfx.signalflow.client.SignalFlowException: 400: failed post [ POST https://stream.us0.signalfx.com:443/v2/signalflow/execute?start=1750889822602&stop=1750889822602&persistent=false&immediate=true&timezone=America%2FChicago HTTP/1.1 ] reason: Bad Request at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportConnection.post(ServerSentEventsTransport.java:338) at com.signalfx.signalflow.client.ServerSentEventsTransport.execute(ServerSentEventsTransport.java:106) at com.signalfx.signalflow.client.Computation.execute(Computation.java:185) at com.signalfx.signalflow.client.Computation.<init>(Computation.java:67) at com.signalfx.signalflow.client.SignalFlowClient.execute(SignalFlowClient.java:145)  How can I troubleshoot this further?  I can't find much useful info about how the client is supposed to work. thanks  
I don't mean SharePoint activity, admin or audit logs. I mean actual data files (that will be converted later to lookup files in Splunk Cloud). Basically, do I need to extract the CSV files from SharePoint first (eg to a traditional on-prem file share by way of Power Automate) and use a UF to forward the files to Splunk Cloud, or is there some other nifty way to forward CSV data files directly from SharePoint Online to Splunk Cloud, or some other intermediary method? Thank you.
Hello everyone, I have a network monitoring system that exports data via IPFIX using Forwarding Targets. I am trying to receive this data in Splunk using the Splunk Stream app. The add-on is installed and Stream is enabled, but I am facing the following issues: Templates are not being received properly. The data arrives, but it's unreadable or incomplete. I need full flow data, including summaries or headers from Layer 7 (e.g., HTTP, DNS). My question: Has anyone successfully received and parsed IPFIX data in Splunk? If so, could you share the steps or configurations you used (like streamfwd.conf, input settings, etc.)? Any guidance would be greatly appreciated! Thanks in advance!
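As a very rough starting point, a streamfwd.conf sketch for a Stream forwarder listening for IPFIX; the IP, port, and decoder name here are assumptions to check against the Splunk Stream documentation for your version:

[streamfwd]
ipAddr = 0.0.0.0
netflowReceiver.0.ip = 0.0.0.0
netflowReceiver.0.port = 4739
netflowReceiver.0.protocol = udp
netflowReceiver.0.decoder = netflow

Note that the netflow protocol also has to be enabled in the Stream app's configuration UI, and that Layer 7 detail (HTTP, DNS) normally comes from Stream's own wire-data capture rather than from the IPFIX records themselves, unless the exporter sends enterprise elements that Stream understands.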
Hello! I have logs from an Active Directory Domain Controller in Splunk and am trying to configure monitoring of user logons (EventCode=4624). Unfortunately, there are two fields with the name "Account Name". Example of a log:

06/25/2025 02:54:32 PM
LogName=Security
EventCode=4624
EventType=0
ComputerName=num-dc1.boston.loc
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=881265691
Keywords=Audit Success
TaskCategory=Logon
OpCode=Info
Message=An account was successfully logged on.
Subject:
    Security ID: NULL SID
    Account Name: -
    Account Domain: -
    Logon ID: 0x0
Logon Type: 3
Impersonation Level: Impersonation
New Logon:
    Security ID: BOSTON\***
    Account Name: ***
    Account Domain: BOSTON
    Logon ID: 0x135F601B51
    Logon GUID: {12C0DD76-F92B-07E1-88A5-914C43F7D3D5}

Could you please advise if it's possible to modify the fields before indexing, i.e., at the "input" stage? Specifically, I'd like to change the first field (Subject: Account Name) to Source Account Name and the second field (New Logon: Account Name) to Destination Account Name. From what I understand, this would require modifications in props.conf and transforms.conf. If anyone has ideas on how to achieve this, please share!
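If the goal really is to rewrite the raw text at index time (rather than renaming search-time fields), one hedged sketch is a pair of SEDCMD rules in props.conf on the indexer or heavy forwarder; the sourcetype name and regexes below are assumptions that would need testing against real events:

[WinEventLog:Security]
# Rename the Account Name that appears inside the Subject: block
SEDCMD-subject_account = s/(Subject:[\s\S]*?)Account Name:/\1Source Account Name:/
# Rename the Account Name that appears inside the New Logon: block
SEDCMD-newlogon_account = s/(New Logon:[\s\S]*?)Account Name:/\1Destination Account Name:/

Be aware that changing the raw text will likely break the field extractions shipped with the Splunk Add-on for Microsoft Windows; leaving _raw alone and adding search-time FIELDALIAS or EVAL statements (or using the add-on's existing src_user/user mappings) is usually the safer route.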
Hi there, In Mission Control in our properly working Splunk environment, we see the following: exactly what we want. The finding-based correlation search "Threat - Findings Risk Threshold Exceeded for Entity Over 24 Hour Period - Rule" fired because multiple findings occurred for one specific entity, and if you expand it, it shows all of the findings. (Please ignore the weird names of the findings.) In our other environment, it looks different. When you click expand, it has to think for a while, and then it only shows the number of intermediate findings, not the actual findings themselves. You also can't click on this grey label. I suspect it has something to do with the fact that our working environment is a fairly fresh install, whereas the environment where it doesn't work properly was upgraded from an old ES version to the newest version. There might be some index problems or something, I don't know. Does anyone know?
Hi, I have a requirement for high JVM thread wait time monitoring for BTs. The only JVM thread metric available is thread count (Application Infrastructure Performance -> Tier -> JVM -> Threads), so I would appreciate your expert suggestions on enabling/configuring such a metric.
Summary index or any alternative

Hi, I have created a dashboard with 8 panels and a time frame of the last 5 minutes. I kept the time frame short because this platform receives large chunks of data. The app team wants the dashboard to run over longer time frames, maybe the last 7 days. If we run it for the last 7 days, the search takes a very long time and a lot of resources are wasted. They asked for a solution that supports a longer time frame with faster results.

I explored and found a summary index as an option, but I have never worked with one. Can this help me? We have nearly 100+ indexes on that platform and the sourcetype is the same for all of them. We have RBAC implemented per index (restricting app A users from viewing app B logs and vice versa).

If I implement a summary index here, will that RBAC still take effect? A summary index would hold data from all indexes, so if it is used in the dashboard, could an app A user see app B logs, or does the existing RBAC still apply to the summary index? Please suggest other alternatives as well; in the end, it should align with the RBAC we have created.
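A minimal sketch of the usual pattern, with hypothetical index and field names: a scheduled search runs every 5 minutes over one app's raw index and writes pre-aggregated rows into a summary index with collect; the dashboard then searches the summary index over 7 days. RBAC is the catch: a summary index is just another index, so the straightforward way to preserve the existing restrictions is one summary index per app team, included in that team's role, rather than a single shared summary index.

index=app_a_index sourcetype=the_shared_sourcetype
| bucket _time span=5m
| stats count AS event_count BY _time, host
| collect index=summary_app_a

The dashboard panel would then search something like: index=summary_app_a earliest=-7d | timechart span=1h sum(event_count). Data written by collect uses the stash sourcetype and, as far as I know, does not count against the ingest license, but it does consume storage.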
I have a lookup table with daily records which includes: area, alarm description, date, and the number of bags per area for that specific day (a repeated number). There is a timestamp for each alarm, and a bag column repeating the total bags for that day (the same number appears multiple times because the same day has multiple alarm rows). I want to:
1) compute the total number of bags for the whole 3-month period;
2) compute the total number of alarm events (counted as total occurrences across the 3 months).
What is the best approach in Splunk Enterprise to get both in the same final stats result? Example scenario:

AREA | ALARM DESCRIPTION | TOTAL DAILY BAGS | TIME
1111 | TRIGGER           | 18600            | 01/03/2024
1111 | TRIGGER           | 18600            | 01/03/2024
1222 | FAILURE           | 18600            | 01/03/2024
1323 | FAILURE           | 18600            | 01/03/2024
1323 | HAC               | 18600            | 01/03/2024
1222 | FAILURE           | 33444            | 01/02/2024
1111 | FAILURE           | 33444            | 01/02/2024
1323 | TRIGGER           | 33444            | 01/02/2024
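One hedged way to get both numbers in a single result, assuming the lookup is named alarm_history.csv and the columns match the example (AREA, ALARM_DESCRIPTION, TOTAL_DAILY_BAGS, TIME): first collapse to one row per day so the repeated daily bag count is summed only once, carrying the per-day alarm count along, then total both.

| inputlookup alarm_history.csv
| stats count AS alarms_per_day first(TOTAL_DAILY_BAGS) AS daily_bags BY TIME
| stats sum(alarms_per_day) AS total_alarms sum(daily_bags) AS total_bags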
Hi Team, I am new to this community. I am working on Golang integration with AppDynamics. The Go SDK is not available in the AppDynamics downloads. Can anybody help me figure out how to get it? Also, if anyone can share the documentation for integrating AppDynamics with Golang, that would be really helpful. Thanks in advance. #AppDynamics #AppD #Golang #integration
Hi Team,

I am currently working on monitoring a C++ application in AppDynamics. I have instrumented other applications (Java, PHP, .NET), but this looks totally different. As per the documentation, these are the 3 steps, but I am a bit confused.

1. Add the AppDynamics header file to the application. This is fine; I will add the line #include <path_to_SDK>/sdk_lib/appdynamics.h in the application source code. But is there any specific file name or instructions for this?

2. Initialize the controller configuration. Do the values mentioned below need to be set in the C++ application source code before calling the AppDynamics SDK? If so, do both of these code blocks need to be included, and how do I call the SDK?

const char APP_NAME[] = "SampleC";
const char TIER_NAME[] = "SampleCTier1";
const char NODE_NAME[] = "SampleCNode1";
const char CONTROLLER_HOST[] = "controller.somehost.com";
const int CONTROLLER_PORT = 8080;
const char CONTROLLER_ACCOUNT[] = "customer1";
const char CONTROLLER_ACCESS_KEY[] = "MyAccessKey";
const int CONTROLLER_USE_SSL = 0;

struct appd_config* cfg = appd_config_init(); // appd_config_init() resets the configuration object and passes back a handle/pointer
appd_config_set_app_name(cfg, APP_NAME);
appd_config_set_tier_name(cfg, TIER_NAME);
appd_config_set_node_name(cfg, NODE_NAME);
appd_config_set_controller_host(cfg, CONTROLLER_HOST);
appd_config_set_controller_port(cfg, CONTROLLER_PORT);
appd_config_set_controller_account(cfg, CONTROLLER_ACCOUNT);
appd_config_set_controller_access_key(cfg, CONTROLLER_ACCESS_KEY);
appd_config_set_controller_use_ssl(cfg, CONTROLLER_USE_SSL);

3. Initialize the SDK. I understand this is called from the source code to start the SDK; correct me if I am wrong. If my understanding is correct, are there any specific instructions for adding these lines to the code?

int initRC = appd_sdk_init(cfg);
if (initRC) {
    std::cerr << "Error: sdk init: " << initRC << std::endl;
    return -1;
}
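To illustrate how these pieces are usually wired together, a hedged C++ sketch of an application startup path; the business-transaction name is illustrative, and appd_sdk_term() at shutdown plus appd_bt_begin/appd_bt_end around units of work are the typical companions to appd_sdk_init:

#include <iostream>
#include "appdynamics.h"   // from <path_to_SDK>/sdk_lib

int main() {
    struct appd_config* cfg = appd_config_init();
    appd_config_set_app_name(cfg, "SampleC");
    appd_config_set_tier_name(cfg, "SampleCTier1");
    appd_config_set_node_name(cfg, "SampleCNode1");
    appd_config_set_controller_host(cfg, "controller.somehost.com");
    appd_config_set_controller_port(cfg, 8080);
    appd_config_set_controller_account(cfg, "customer1");
    appd_config_set_controller_access_key(cfg, "MyAccessKey");
    appd_config_set_controller_use_ssl(cfg, 0);

    int initRC = appd_sdk_init(cfg);           // non-zero return means the agent failed to initialize
    if (initRC) {
        std::cerr << "Error: sdk init: " << initRC << std::endl;
        return -1;
    }

    // Wrap a unit of work in a business transaction (name is illustrative)
    appd_bt_handle bt = appd_bt_begin("checkout", NULL);
    // ... application work ...
    appd_bt_end(bt);

    appd_sdk_term();                            // flush and shut down the agent before exit
    return 0;
}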
Hello everyone, I’m trying to integrate AppDynamics with a Golang application, and I came across mentions of an AppDynamics Go SDK. However, after checking the AppDynamics Downloads page, it doesn’t seem to be listed under the available Agents for the “Go SDK” category. Is the AppDynamics Go SDK still available? If so: Where can I download it? Any guidance or official confirmation would be greatly appreciated! Thanks in advance.    
I have a dashboard and I want to send alerts to a Microsoft Teams channel. How can I do that?
Hello, is it possible to use a multiselect input in a classic dashboard so that the selected objects produce key=value AND key=value1? If I use IN, it acts like OR. Thanks
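A minimal Simple XML sketch of the usual trick, using the multiselect's prefix/suffix/valuePrefix/valueSuffix/delimiter attributes so each selected value becomes key="value" and the values are joined with AND; the field name "key", the token name, and the choices are placeholders:

<input type="multiselect" token="key_filter">
  <label>Key</label>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>key="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> AND </delimiter>
  <choice value="value">value</choice>
  <choice value="value1">value1</choice>
</input>

The panel search then just references the token, e.g. index=main $key_filter$ | stats count BY key.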
raw data -

"attackData":{"rules":[{"data":"SCANTL=10","action":"alert","selector":"","tag":"REPUTATION","id":"REP_6021037","message":"Scanning Tools (High Threat) - Shared IPs","version":""},{"data":"SCANTL=10","action":"alert","selector":"","tag":"REPUTATION","id":"REP_6021039","message":"Scanning Tools (Low Threat) - Shared IPs","version":""},{"data":"WEBATCK=10","action":"alert","selector":"","tag":"REPUTATION","id":"REP_6021041","message":"Web Attackers (High Threat) - Shared IPs","version":""},{"data":"WEBATCK=10","action":"alert","selector":"","tag":"REPUTATION","id":"REP_6021043","message":"Web Attackers (Low Threat) - Shared IPs","version":""}],

converted to JSON, here is the result -

attackData:
  rules:
    - action: alert
      data: SCANTL=10
      id: REP_6021037
      message: Scanning Tools (High Threat) - Shared IPs
      selector:
      tag: REPUTATION
      version:
    - action: alert
      data: SCANTL=10
      id: REP_6021039
      message: Scanning Tools (Low Threat) - Shared IPs
      selector:
      tag: REPUTATION
      version:
    - action: alert
      data: WEBATCK=10
      id: REP_6021041
      message: Web Attackers (High Threat) - Shared IPs
      selector:
      tag: REPUTATION
      version:
    - action: alert
      data: WEBATCK=10
      id: REP_6021043
      message: Web Attackers (Low Threat) - Shared IPs
      selector:
      tag: REPUTATION

The issue is that whenever we create an alert or dashboard for a single message, such as "Scanning Tools (High Threat) - Shared IPs", we get the correct values, but all of the other rules also come along in the event, which the client is not accepting. I know they are there because that is how the log is. Can we do anything to return only the given message or value rather than all of them? This is happening for all events.
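One hedged search-time approach, assuming the JSON is in _raw and the goal is to keep only the rule whose message matches: expand attackData.rules{} into one row per rule, re-extract the fields from each rule object, then filter. The index and sourcetype names are placeholders, and mvexpand can be memory-hungry on very large events:

index=your_index sourcetype=your_sourcetype
| spath path=attackData.rules{} output=rule
| mvexpand rule
| spath input=rule
| search message="Scanning Tools (High Threat) - Shared IPs"
| table _time id message action tag data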