All Topics

Apply the following workaround in default-mode.conf. You can also push this change via a deployment server (DS) push across thousands of universal forwarders. Add index_thruput to the list of disabled processors by adding the following lines, as-is, to default-mode.conf:

#Turn off a processor
[pipeline:indexerPipe]
disabled_processors = index_thruput, indexer, indexandforward, latencytracker, diskusage, signing, tcp-output-generic-processor, syslog-output-generic-processor, http-output-generic-processor, stream-output-processor, s2soverhttpoutput, destination-key-processor

NOTE: Please don't apply this on HF/SH/IDX/CM/DS instances. Use a different app (not the SplunkUniversalForwarder app) to push the change.
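For reference, a minimal sketch of the deployment server side, assuming a hypothetical app named uf_disable_thruput that carries the default-mode.conf above in its local directory. In serverclass.conf on the DS:

[serverClass:uf_disable_thruput]
whitelist.0 = *

[serverClass:uf_disable_thruput:app:uf_disable_thruput]
restartSplunkd = true

Place the app under $SPLUNK_HOME/etc/deployment-apps/ and run splunk reload deploy-server. whitelist.0 = * is only a placeholder; per the note above, scope it so the app reaches universal forwarders only.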
I have an appliance that can only forward syslog via UDP. Is there a way for me to forward the UDP syslog to a machine that has a heavy forwarder or UF on it, and have the forwarder relay the syslog via TLS to the server running my Splunk Enterprise instance?
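Yes, that relay pattern is common. A minimal sketch on the intermediate forwarder, assuming hypothetical hostnames, ports, and certificate paths. inputs.conf:

[udp://514]
sourcetype = syslog
connection_host = ip

outputs.conf:

[tcpout:tls_indexers]
server = splunk-ent.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslPassword = <cert_password>

The receiving Splunk Enterprise instance would need a matching SSL-enabled receiving input, e.g. a [splunktcp-ssl:9997] stanza with its server certificate configured.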
I have a hostname.csv file that contains these attributes:

hostname.csv
ip        mac        hostname
x.x.x.x              abc_01
          00:00:00   def_02
x.x.x.y   00:00:11   ghi_03
                     jkl_04

I would like to search in Splunk with index=* host=* ip=* mac=*, compare my host field to the hostname column from the lookup file hostname.csv, and, where it matches, write the ip and mac values back to hostname.csv. The result would look like this:

new hostname.csv file
ip          mac          hostname
x.x.x.x     00:new:mac   abc_01
x.x.y.new   00:00:00     def_02
x.x.x.y     00:00:11     ghi_03
new.ip      new:mac      jkl_04

Thank you for your help!
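One hedged way to sketch this, assuming your event field host matches the lookup's hostname column and that the latest values per host should win: enrich the lookup with a join, then write it back with outputlookup.

| inputlookup hostname.csv
| join type=left hostname
    [ search index=* host=* ip=* mac=*
      | stats latest(ip) as ip, latest(mac) as mac by host
      | rename host as hostname ]
| table ip, mac, hostname
| outputlookup hostname.csv

Fields returned by the join subsearch overwrite the lookup's ip and mac where a hostname matches; unmatched rows keep their original (possibly empty) values. Note that outputlookup overwrites the file, so test against a copy first.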
Hello all, I found a weird problem/defect. Not sure whether you are seeing the same.

Issue: I am unable to bind an IP and get the error 'Oops, the server encountered an unexpected condition'. I followed https://docs.splunk.com/Documentation/Splunk/8.0.3/Admin/BindSplunktoanIP. It comes down to editing the correct web.conf file, but the document doesn't say which web.conf to edit; there are eight web.conf files in total. If you try copying web.conf from the default folder into the local folder and editing mgmtHostPort (with the correct IP, port, etc.), it still does not work.

Resolution: if you edit mgmtHostPort in the web.conf file at the location below, it works perfectly, i.e. you can launch Splunk at 'IP address:8000':

C:/Program Files/Splunk/var/run/splunk/merged/web.conf

However: if you restart Splunk Enterprise via the web console (the 'Restart' button within the application), the setting is lost and you need to do it again. If you restart via Windows Services > Splunkd service, there is no problem.

Environment used: Splunk Enterprise 9.3.1 (on-prem), OS: Windows Server 2022. Also tried on Splunk Enterprise 9.2.1 (on-prem) - same problem.
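For comparison, the file the docs intend is usually the one under etc/system/local (created if missing); anything under var/run/splunk/merged is runtime-generated, which would explain why edits there are lost on a web-console restart. A minimal sketch with a placeholder IP:

# C:\Program Files\Splunk\etc\system\local\web.conf
[settings]
mgmtHostPort = 10.0.0.5:8089
server.socket_host = 10.0.0.5

10.0.0.5 is hypothetical; substitute the address you want Splunk Web and the management port bound to.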
Right now I have an issue with duplicate notables. I want a notable to re-generate only if new events have added to its risk score, not if no new events have happened and its risk score has remained the same. I have tried adjusting our base correlation search's throttling to throttle by risk object over every 7 days, because our correlation search goes back over the last 7 days' worth of alerts to determine whether or not to trigger a notable. Which brings me to this question: do the underlying alerts (i.e., the alerts that contribute to generating a risk score, which ultimately determines if a risk object is generated or not) also need to be throttled for the past 7 days? Right now the throttling settings for those alerts are set to throttle by username over the past 1 day.
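For reference, throttling on a correlation search translates to savedsearches.conf roughly as below; this is only a sketch, assuming risk_object is the field you throttle on:

alert.suppress = 1
alert.suppress.fields = risk_object
alert.suppress.period = 7d

The same three settings would apply to the contributing alerts if you decide they need a matching 7-day window instead of the current 1-day throttle by username.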
To enhance SOC efficiency, analysts must be equipped with a streamlined workflow experience that boosts productivity. Ensuring security analysts have a SIEM solution that provides the foundation to unify detection, investigation, and response to threats will bolster their confidence and efficacy in managing security risks. In our latest release of Splunk Enterprise Security, we have revolutionized the SOC workflow experience, enabling security analysts to seamlessly detect what matters, investigate holistically, and respond rapidly. Learn about: Complete unified TDIR workflows with new, native integration with Splunk SOAR New modern aggregation and triage capabilities Enhanced detections to find and remediate threats, faster Simplified terminology across TDIR workflows, which aligns to Open Cybersecurity Schema Framework (OCSF), making it easy for your security team to understand exactly what they are working on Watch full Tech Talk here:
Watch an insightful talk where we dive into the world of threat hunting, exploring the key differences between indicator-based and behavior-based approaches. We'll break down the fundamental concepts behind each method, highlighting their strengths and use cases. Additionally, we'll showcase how you can leverage the power of Recorded Future's threat intelligence within Splunk to execute both indicator and behavior-based threat hunts. Whether you're refining your threat detection strategies or just starting your journey, this session will equip you with practical insights and hands-on techniques to enhance your security operations. Watch this Tech Talk to learn… Approaches to Threat Detection and Threat Hunting How to identify potentially malicious activity in your own logs that you may have otherwise missed How to mature your SOC practices Watch Full Tech Talk here:
Hello, I have these two events that are part of a transaction. They have the same s and qid. I need to match the s and qid of these two events and insert a field equal to hdr_mid from the second event into the first event. Is this possible? In the final stats I group events by hdr_mid and qid, so I need the hdr_mid value present in the first event if I want to extract all recipient email addresses. To do so I need to pull rcpts from the first event and not the second. How would I do that?

Oct 24 13:46:56 hostname.company.com 2024-10-24T18:46:56.426217+00:00 hostname filter_instance1[31332]: rprt s=42cu1tr3wx m=1 x=42cu1tr3wx-1 cmd=send profile=mail qid=49O9Yi2a005119 rcpts=1@company.com,2@company.com,3@company.com...52@company.com

Oct 24 13:46:56 hostname.company.com 2024-10-24T18:46:56.426568+00:00 hostname filter_instance1[31332]: rprt s=42cu1tr3wx m=1 x=42cu1tr3wx-1 mod=mail cmd=msg module= rule= action=continue attachments=0 rcpts=52 routes=allow_relay,default_inbound,internalnet size=4416 guid=Rze4pxSO_BZ4kUYS0OtXqLZjW3uHSx8d hdr_mid=<103502694.595.1729795616099.JavaMail.psoft@xyz123> qid=49O9Yi2a005119 hops-ip=x.x.x.x subject="Message subject" duration=0.271 elapsed=0.325
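One hedged approach, assuming the pair always shares s and qid as shown: use eventstats to copy hdr_mid across both events, then keep rcpts from the cmd=send event. The index and sourcetype below are placeholders for your actual search:

index=mail sourcetype=filter_logs
| eventstats values(hdr_mid) as hdr_mid by s, qid
| where cmd="send"
| stats values(rcpts) as rcpts by hdr_mid, qid

eventstats writes the second event's hdr_mid onto the first, so the final stats can group by hdr_mid and qid while reading the recipient list from the cmd=send event.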
Hi guys, I have a set of data in the following format. This is a manually exported list, and my requirements are as follows:

- Objective: I need to identify hosts that haven't connected to the server for a long time and track the daily changes in these numbers.
- Method: Since I need daily statistics, I must perform the import daily. However, without any configuration changes, Splunk defaults to using "Last Communication" as _time, which is not what I want. I need _time to reflect the date of the import. This way, I can track changes in the count of "Last Communication" records within each day's imported data. I can't use folder or file monitoring for this because it only adds new data, so my only options are to use oneshot or to perform the import via the web interface.

Is my approach correct? If not, what other methods could be used to handle this?

I could use splunk oneshot to upload the file to the Splunk indexer, but I couldn't adjust the date to the import day or a specific day. For example, I used the command:

splunk add oneshot D:\upload.csv -index indexdemo

I want the job to run automatically, so I don't want to change any content in the file. How can I do this?
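One hedged option: give the import its own sourcetype with DATETIME_CONFIG = CURRENT, so _time is assigned at index time and always reflects the moment of import, without touching the file. The sourcetype name below is hypothetical. props.conf on the indexer:

[hostname_import_csv]
INDEXED_EXTRACTIONS = csv
DATETIME_CONFIG = CURRENT

Then schedule the unchanged command with the extra flag:

splunk add oneshot D:\upload.csv -index indexdemo -sourcetype hostname_import_csv

DATETIME_CONFIG = CURRENT tells Splunk to ignore timestamps in the data and stamp events with the current system time.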
Hello, I have configured an index on an indexer, and when I try to fetch data from that index on the search head I am not getting any data. When I search the same index on the indexer itself I can get the data, but not from the search head. Could you please advise what configuration needs to be checked on my search head and indexer?

Note - it's not a clustered setup.

Thanks
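In a non-clustered setup, the usual first check is whether the indexer is registered as a search peer on the search head. A sketch with placeholder hostname and credentials, run on the search head:

splunk list search-server -auth admin:changeme
splunk add search-server https://idx01.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

The same peer list is visible under Settings > Distributed search > Search peers in the UI.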
Hi all, we are using earliest and latest in our Splunk test environment searches and they work fine, but in the production environment earliest and latest are not working in the SPL query for some reason. Can you please suggest alternatives to them, and help us figure out why earliest and latest are not working in the production environment?

Thanks,
Srinivasulu S
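For reference, earliest and latest are time modifiers rather than commands, so a quick sanity check in production would be something like the following (the index name is a placeholder):

index=main earliest=-24h@h latest=now
| stats count

If this returns nothing while the same search without the modifiers returns events, comparing role-based time range restrictions and server time zones between test and production would be a reasonable next step.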
How do I extract fields from the source below?

/audit/logs/QTEST/qtestw-core_server4-core_server4.log

I need to extract:
QTEST as environment
qtestw as hostname
core_server4 as component
core_server4.log as filename
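One hedged way, assuming the path always follows /audit/logs/<environment>/<hostname>-<component>-<filename> and that hostname and component never contain hyphens:

| rex field=source "/audit/logs/(?<environment>[^/]+)/(?<hostname>[^-]+)-(?<component>[^-]+)-(?<filename>[^/]+)$"

Against the example above this yields environment=QTEST, hostname=qtestw, component=core_server4, filename=core_server4.log. The same pattern could be made permanent as an EXTRACT in props.conf.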
Hello, we would like to filter ES Incident Review and hide notables containing the keyword TEST, for example. How can we do this? Thanks for your help.
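As a hedged starting point, assuming the keyword appears in rule_title, a filter like the one below works in a search against notables; the Incident Review page's own search/filter bar accepts similar criteria:

`notable` | search NOT rule_title="*TEST*"

Swap rule_title for rule_name or whatever field actually carries the TEST marker in your environment.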
Hi. I do not understand the SHC config well:

[raft_statemachine]
disabled = <boolean>
* Set to true to disable the raft statemachine.
* This feature requires search head clustering to be enabled.
* Any consensus replication among search heads uses this feature.
* Default: true

replicate_search_peers = <boolean>
* Add/remove search-server request is applied on all members of a search head cluster, when this value is set to true.
* Requires a healthy search head cluster with a captain.

What changes in an SHC by setting disabled = true or false? By default it is true. replicate_search_peers = true works only if disabled is false. What does setting this to true or false do to the cluster?
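As a concrete reading of the spec text above (a sketch, not authoritative): with the raft statemachine enabled, peer changes can replicate across members, e.g. in server.conf on each member:

[raft_statemachine]
disabled = false
replicate_search_peers = true

With this in place, an add/remove search-server request issued against one member is applied to all members; with disabled = true (the default), the consensus replication is off and each member's peer list is managed individually.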
I’m experiencing slow performance with my Splunk queries, especially when working with large datasets. What are some best practices or techniques I can use to optimize my searches and improve response times? Are there specific commands or settings I should focus on?
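As one illustrative sketch (index, sourcetype, and field names are placeholders): filter as early and as narrowly as possible, and prefer tstats when you only need indexed fields. Instead of:

index=* "error" | stats count by host

try:

index=web sourcetype=access_combined status=500 earliest=-4h
| stats count by host

or, over indexed fields only:

| tstats count where index=web sourcetype=access_combined by host

Beyond that, common levers include accelerated data models, summary indexing, avoiding leading wildcards, and replacing transaction with stats where possible.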
I need to replace the variables in the rule_title field that is generated when using the `notable` macro. I was able to get this search to work, but only when I table the specific variable fields. Is there a way I can do that for all titles, regardless of the title and variable fields?
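A hedged sketch of one generic approach, assuming the variables in rule_title look like $field$ tokens and that you can enumerate candidate fields (user, src, and dest below are placeholders):

`notable`
| foreach user src dest
    [ eval rule_title=replace(rule_title, "\$<<FIELD>>\$", coalesce('<<FIELD>>', "unknown")) ]

foreach substitutes <<FIELD>> once per listed field, so each token is replaced with the event's value without having to table the fields first; extend the list to cover whatever variables your titles use.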
Register here. This thread is for the Community Office Hours session on Splunk App Development on Thurs, Jan 16, 2025 at 1pm PT / 4pm ET.    Ask the experts at Community Office Hours! An ongoing series where technical Splunk experts answer questions and provide how-to guidance on various Splunk product and use case topics.   What can I ask in this AMA? How do we work with REST APIs? What SDKs are available for app development? How should we get started with Splunk UI development? What are some best practices to maintain & evolve Splunk Apps? Anything else you'd like to learn!   Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).    Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.   Look forward to connecting!
Register here. Ask the experts at Community Office Hours! An ongoing series where technical Splunk experts answer questions and provide how-to guidance on various Splunk product and use case topics.   This thread is for the Community Office Hours session on Awesome Admins: Running a Healthy Splunk Platform Environment on Thurs, Dec 12, 2024 at 1pm PT / 4pm ET.    What can I ask in this AMA? What should I be looking at as a Splunk Cloud or Splunk Enterprise Admin, and why? What are some best practices for using workload management? How can I set up a scalable architecture? What are some best practices for monitoring system health with the Cloud Monitoring Console? What are some tips for managing and balancing disaster recovery? Any best practices for managing large numbers of users? Which admin tasks should I be streamlining with ACS? Anything else you'd like to learn!   Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).    Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.   Look forward to connecting!
Hello, how can I display only one of these 3 "maxCapacitMachine" values (which are the same in all 3 cases) in a timechart with a BY clause?
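A hedged sketch, assuming the three identical values come from three series of the BY field: drop the BY clause and aggregate, e.g.

... | timechart span=1h max(maxCapacitMachine) as maxCapacitMachine

Since the values are identical, max() (or latest()) collapses the three series into one; span=1h is a placeholder for your bucket size.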
Hello, I'm having a hard time trying to find which data input the events from a search are originating from. The search is:

source="/var/www/html/PIM/var/log/webservices/*"

I've looked through "Files & Directories" (where I thought I would find it) and the rest of the Data Inputs, but can't seem to locate it anywhere.

A side question: I tried creating a new Files & Directories data input by putting the full Linux path like below:

//HostName/var/www/html/PIM/var/log/webservices/*

But it says "Path can't be empty". I'm sure this is probably not how you format a Linux path; I just couldn't find what I'm doing wrong.

Thanks for any help at all, Newb
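Two hedged ways to trace the origin. First, from the search side, see which hosts actually send that source:

| tstats count where index=* source="/var/www/html/PIM/var/log/webservices/*" by host, index, sourcetype

Then, on the suspected Linux host, dump the effective input configuration:

splunk btool inputs list --debug | grep -i webservices

On the side question: monitor paths are local to the machine running the input, so a //HostName/... prefix isn't valid. The input would be defined on a forwarder installed on that Linux host, e.g. [monitor:///var/www/html/PIM/var/log/webservices] in its inputs.conf, which is also why it doesn't show up under Files & Directories on your search head.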