All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, can someone please assist me with the steps/commands needed to point existing Splunk components (Deployment Server, Search Heads, Heavy Forwarder, Indexer) to a different existing License Server? We currently have two separate environments, each with its own License Server, and they are now being merged under one license. One of the environments therefore needs to point to the other environment's existing License Server, where the new license will be installed. Thanks
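For anyone looking for the concrete change: the license client is usually repointed in server.conf on each component and the component is then restarted. A minimal sketch, assuming the new License Server is reachable at license-mgr.example.com on the default management port (the hostname is a placeholder):

```
# $SPLUNK_HOME/etc/system/local/server.conf on each component
# (Deployment Server, Search Heads, Heavy Forwarder, Indexer)
[license]
master_uri = https://license-mgr.example.com:8089
```

The same change can be made from the CLI with "splunk edit licenser-localslave -master_uri https://license-mgr.example.com:8089" followed by a restart; note that on recent Splunk versions the "master" naming has been replaced by "manager" (e.g. manager_uri), so check the spec file for your version.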
Hi, thanks. I finally decided to create a dashboard for each tab until Splunk adds tab support to Dashboard Studio.
Hi @sinhashubham014, which ESCU use cases are triggered? Why isn't that the result you want? Ciao. Giuseppe
Hi, I am trying to find tickets that were not assigned to an analyst within 15 minutes of arrival. The only timestamp I have is sys_updated_on, meaning whenever there is any change to the INC, the timestamp on that event is updated.

index="servicenow" INC* sourcetype="snow:incident"
| where assigned_to = ""
| rename sys_updated_on as earliest
| eval date = strptime(earliest, "%Y-%m-%d %H:%M:%S.%3N")
| eval start=strftime(strptime(earliest, "%Y-%m-%d %H:%M:%S.%2N") + 1, "%Y-%m-%d %H:%M:%S.%2N")
| eval end=strftime(strptime(earliest, "%Y-%m-%d %H:%M:%S.%2N") + 900, "%Y-%m-%d %H:%M:%S.%2N")
| table ticket_number start end

Here I take the time when the assigned_to field was empty, which is also the INC creation time. From the next second up to the 15-minute mark, I need to see the series of events with the help of the start and end values. When I do so, I am not able to see any events. Please help.
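One thing worth checking in the search above: the strptime format strings mix %3N and %2N, and if the subsecond format doesn't match the data, strptime returns null and start/end come out empty. The window can also be built once from a single parsed epoch. A sketch (untested, assuming sys_updated_on really is formatted %Y-%m-%d %H:%M:%S.%3N):

```
index="servicenow" INC* sourcetype="snow:incident"
| where assigned_to=""
| eval created=strptime(sys_updated_on, "%Y-%m-%d %H:%M:%S.%3N")
| eval start=strftime(created + 1, "%Y-%m-%d %H:%M:%S.%3N")
| eval end=strftime(created + 900, "%Y-%m-%d %H:%M:%S.%3N")
| table ticket_number start end
```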
Hi, I'm new to Splunk and getting the same error message after upgrading Splunk and the Security Essentials app. Could you please help me understand how to perform these steps? First and foremost:
- you need to track down the automatic lookup definition
- record the lookup definition name being referenced
- find the lookup definition and record the lookup table name
Hello, two quick questions regarding the Splunk Add-on for JBoss and the Splunk Add-on for JMX. The documentation says that both TAs require Oracle JDK or OpenJDK. Given the documentation, I'm assuming there is no way to run the TAs with another JDK distribution such as Azul's variants (Zulu Prime aka Zing, or Zulu) - could someone here confirm this? If it's indeed not possible, is it sufficient to have only OpenJDK on the Heavy Forwarder? I would assume yes, since the doc says: "Install Java Runtime 1.7 or later on the same machine as the Splunk Add-on for JBoss. Note: You need to use the OpenJDK Java Runtime or Oracle Java Runtime."
Hello, I am getting the message below on the Linux server and can't find the nmon performance data on the server. Can someone please help? How do I rectify it? 12-11-2023 08:39:48.203 +0100 INFO  loader [8843 MainThread] - SPLUNK_MODULE_PATH environment variable not found - defaulting to /splunk/splunkforwarder/etc/modules
Hello experts, is there any option in AppDynamics (SaaS) where I can configure a daily/weekly/monthly license usage report by email? Thanks
I have configured the Endpoint data model with windows:security, system, and registry logs. However, when the ESCU use cases are triggered, it's showing the events.
Hi @sinhashubham014, I suppose you are using Splunk_TA_Windows on the Search Head for parsing. In any case, log ingestion isn't managed by ESCU: ESCU contains many correlation searches to use in ES or simply in Splunk Enterprise, not parsing or ingestion rules. Check that you are parsing your logs correctly and that the eventtypes exist to assign the correct tags to your logs, so the data models are populated correctly. Ciao. Giuseppe
Hi @madhav_dholakia, good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @tscroggins, could you describe your solution in more detail? On two load-balanced HFs I have:
one source: tcp://10514
one sourcetype: syslog
one index: network
These values are assigned to logs in one inputs.conf:

[tcp://10514]
sourcetype = syslog
index = network
disabled = 0

Then I have the props.conf and transforms.conf I already shared (three tries!) that should transform e.g. the syslog sourcetype to fgt_log and then into fortigate_traffic, fortigate_log, fortigate_utm, fortigate_event based on regex. The first transformation (fgt_log) works correctly, but not the second one. Ciao. Giuseppe
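In case it helps the diagnosis: index-time TRANSFORMS are selected by the sourcetype an event arrives with, so a transform keyed in props.conf on the rewritten sourcetype (fgt_log) never fires for events that came in as syslog. One common workaround is to chain all the rewrites under the original sourcetype, in order. A sketch, assuming transforms.conf stanza names like force_fgt_log and force_fortigate_traffic (the stanza names are placeholders for whatever yours are called):

```
# props.conf
[syslog]
TRANSFORMS-0_fgt = force_fgt_log
TRANSFORMS-1_fortinet = force_fortigate_traffic, force_fortigate_utm, force_fortigate_event
```

The numeric prefixes just force lexicographic ordering, so the fgt_log rewrite runs before the regex-based splits.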
Hello, I am deploying the ESCU searches in our environment. However, the endpoint logs are not ingested in Splunk. For deploying the use cases, I ingested the Windows Security logs with Windows events 4688/4689 to monitor the use cases. Sysmon logs are not ingested properly. The Windows logs, mapped to the Endpoint data model, are triggering the notables. Are the triggered notables relevant for incident triage?
Thanks @PickleRick, that was helpful - for now I have settled on the approach below and will monitor whether it causes any more issues: I have created a saved search (report) that runs every minute, and the dashboard panels use | loadjob and refresh every 15 seconds.
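For reference, the pattern described (a scheduled report plus panels that reuse its most recent results) looks roughly like this in each panel's search, assuming the report is named my_report in app my_app and owned by admin (all three names are placeholders):

```
| loadjob savedsearch="admin:my_app:my_report"
```

The 15-second refresh is then set on the panel itself; loadjob only reads the last completed run, so it doesn't re-execute the search.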
Thanks @gcusello - the only reason to refresh panels every 15 seconds is to get the results of the last executed report run within 15 seconds, rather than waiting up to a minute to see the latest results on the dashboard once the report run completes. Thank you for your help on this.
Sending logs from Universal Forwarders to Heavy Forwarders is like passing important notes along a relay. Here's a simple way to understand it:
- Universal Forwarders (note holders): each Universal Forwarder is like a person holding notes (logs). It collects logs from different sources on a computer.
- Heavy Forwarders (note collectors): Heavy Forwarders are the ones waiting to collect these notes (logs) from the Universal Forwarders.
- Setting up the relay: you set up a system where each Universal Forwarder passes its notes (logs) along until they reach a Heavy Forwarder.
- Configuring Universal Forwarders: on each computer with a Universal Forwarder, you configure where the Heavy Forwarder is. This is like telling each note holder where to pass their notes.
- Logs move down the line: as logs are generated, they flow from each Universal Forwarder to the Heavy Forwarder.
- Heavy Forwarder collects and manages: the Heavy Forwarder collects the notes (logs) from all the Universal Forwarders, like the person at the end of the line gathering everything to make sense of it.
- Centralized log management: all the important information is now centralized on the Heavy Forwarder, making it easier to analyze and keep track of everything in one place.
In technical terms, configuring Universal Forwarders to send logs to Heavy Forwarders means setting these systems up to efficiently collect and manage logs from different sources across a network.
It's like orchestrating a relay of information to ensure that important data reaches its destination for centralized management and analysis.
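In concrete terms, the "tell each note holder where to pass their note" step is an outputs.conf on every Universal Forwarder. A minimal sketch, assuming two Heavy Forwarders at hf1/hf2 listening on the conventional receiving port 9997 (the hostnames are placeholders):

```
# $SPLUNK_HOME/etc/system/local/outputs.conf on each Universal Forwarder
[tcpout]
defaultGroup = heavy_forwarders

[tcpout:heavy_forwarders]
server = hf1.example.com:9997, hf2.example.com:9997
```

With two servers listed, the forwarder load-balances between them; the Heavy Forwarders in turn need a receiving input ([splunktcp://9997] in inputs.conf) to accept the traffic.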
Thank you for your quick response! I gave my account these roles (admin and user) and restarted Splunk Enterprise, but I still get the same error when installing the Python for Scientific Computing app... and there is no such file C:\Program Files\Splunk\var\run\7514f26da673bbe6.tar.gz in C:\Program Files\Splunk\var\run
Hi, I'm using the Splunk App for Lookup File Editing.
When you import, do you import through a zip file extract? If so, it will give you a warning message that the file already exists. Or are you importing via the outputlookup command? Please give us more details, thanks.