All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

There are a few ways to onboard data into Splunk.
- Install a universal forwarder on the server to send log files to Splunk.
- Have the server send syslog data to Splunk via a syslog server or Splunk Connect for Syslog.
- Use the server's API to extract data for indexing.
- Use Splunk DB Connect to pull data from the server's SQL database.
Thanks for the response, @Giuseppe. "Where do you locate the conf files?" The conf files are located on the first full Splunk instance that the data passes through. Regarding the regex: what I am trying to achieve is for data to be routed to the index specified in transforms.conf based on a field name and its value. In this case, whenever <namespace="drnt0-retail-sabbnetservices"> appears in the data, I want the routing to apply. Regards, Yaseen.
Hi @syaseensplunk, first: where do you locate the conf files? They must be located on the Heavy Forwarder that you're using to take logs from Kubernetes, or on the first full Splunk instance that the data passes through. Second question: are you sure that the regex you inserted in transforms.conf matches the events whose index you want to override? Ciao. Giuseppe
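One way to sanity-check the transforms.conf regex outside Splunk is to run it against a sample raw event in Python. A minimal sketch — the sample event string below is hypothetical; paste in a real raw event from your Kubernetes logs instead:

```python
import re

# The pattern from the poster's transforms.conf (a plain substring match).
pattern = re.compile(r"drnt0-retail-sabbnetservices")

# Hypothetical raw event; replace with an actual _raw value from your data.
sample_event = 'time="..." namespace="drnt0-retail-sabbnetservices" pod="checkout-7f9c" msg="started"'

if pattern.search(sample_event):
    print("regex matches: the transform would fire")
else:
    print("no match: the transform would be skipped")
```

If the pattern fails against a real raw event here, it will also fail inside the transform, since index-time transforms match against the raw event text.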
Ok so I suppose HEC is out of the question then? Is there an alternative solution?
Hi, I'm running a test setup with some live Kubernetes data and I want to do the following on my indexer: route all data matching a certain field to a specific index called "gsp". I have already been playing around with the _MetaData:Index key, which seems to work just fine when applied as a single transform for a certain sourcetype. However, I have multiple sourcetypes. This is my props.conf:

[kube:container*]
TRANSFORMS-routing = AnthosGSP

This is my transforms.conf:

[AnthosGSP]
REGEX = drnt0-retail-sabbnetservices
DEST_KEY = _MetaData:Index
FORMAT = gsp

However, the routing isn't happening as it should. Please help! PS: I am a newbie to splunking, so pardon my ignorance. Regards, Yaseen.
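For reference, a common pitfall here is that index-time transforms run against the raw event text, not against search-time fields. A sketch that anchors the match to the key/value pair as it would appear in _raw (the stanza name and namespace value are taken from the post; the quoting around the value is an assumption about how the raw events look):

```
[AnthosGSP]
REGEX = namespace="drnt0-retail-sabbnetservices"
DEST_KEY = _MetaData:Index
FORMAT = gsp
```

Note also that the target index ("gsp" here) must already exist on the indexer, and the transform must live on the first full Splunk instance the data passes through.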
Hi, when I type this command, the following error message is displayed: | inputintelligence mitre_attack Error in 'inputintelligence' command: Inputintelligence does not support threat intel at this time. Can you help me? How can I solve my problem?
I tried to upload a .py file for Data Inputs - Scripts in Splunk for searching, but I can't get results. My sourcetype is CSV. How can I fix this? @richgalloway
I am working on Linux-based use cases that are available in Splunk ESCU. Most of the use cases use the Endpoint.process data model. When I checked the official Splunk Linux add-on, only 3 source types are mapped to Endpoint, i.e. (fs_notification, netstat, Unix:Service), whereas the "process" sourcetype is not mapped to any data model. Will adding the "process" sourcetype help in executing the Splunk ESCU queries?
In your first Splunk instance, there should be one or more workflow_actions.conf files containing the configurations of your workflows. You should copy them to your second Splunk machine. Ref: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Workflow_actionsconf
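To see every copy that needs migrating before doing the copy, something like the sketch below can walk the Splunk etc tree. The `/opt/splunk/etc` path is an assumption about the install location; adjust `$SPLUNK_HOME` to your own:

```python
from pathlib import Path

def find_workflow_actions(splunk_etc: str) -> list:
    """Return every workflow_actions.conf found under the given etc directory."""
    return sorted(Path(splunk_etc).rglob("workflow_actions.conf"))

# Example: list the files to copy to the second instance.
for conf in find_workflow_actions("/opt/splunk/etc"):
    print(conf)
```

Copying the files found under `etc/apps/<app>/local` and `etc/users` to the matching locations on the second instance preserves both app-level and per-user workflow actions.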
Hi, in my current Splunk instance I have built a lot of workflow actions. I want to upload all my workflow actions into my new second Splunk instance. How do I do that? Thank you a lot.
Hi @Mahendra.Shetty,
Not sure if this is what you're looking for:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.cookiebot.com/en/uk-ie-cookie-compliance/')

# Accept cookies
try:
    cookie_button = driver.find_element(By.ID, 'CybotCookiebotDialogBodyLevelButtonLevelOptinAllowAll')
    print("button located")
    cookie_button.click()
except Exception:
    print("cannot find button")

Take the given website, for instance: when loaded, it prompts for cookie acceptance ("Deny", "Customize", or "Allow all"). In this example I simply accept, i.e. "Allow all". To do that, we can inspect the ID of that button using the browser dev tools.
Regards, Terence
You would probably find the Splunk Add-on for ServiceNow useful: https://splunkbase.splunk.com/app/1928
As for the query, you could compare the list of Splunk server names active now versus the servers active a few days ago, e.g.:

index=_internal host="*splunknamescheme*" OR host IN (splunkserver1, splunkserver2) earliest=-3d latest=-2d
| dedup host
| table host
| search NOT [search index=_internal host="*splunknamescheme*" OR host IN (splunkserver1, splunkserver2) earliest=-1d | dedup host | table host]

Then you can add an Alert Action to the alert and make it create an incident: https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Usecustomalertactions
Welcome! Is the whitelist a regex?

[monitor:///xxxx/]
whitelist = xxxx_list_\d{8}\.csv

If that doesn't work, try searching for errors:

index=_internal /xxxx/
Try using Splunk Cloud to solve a problem at your organization.  For example: Are all hosts up and running as expected? Are any hosts running short on resources (free memory or disk space, CPU)? Are all applications up and running as expected? Has someone tried to brute-force their way into an account? I'm sure you can think of others.  Check out the Splunk Security Essentials app for security-related suggestions.
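As one concrete sketch of the brute-force question, a search like the following counts failed logins per source and user. The index, sourcetype, and threshold are assumptions that depend entirely on your environment:

```
index=security sourcetype=linux_secure "Failed password"
| stats count BY src_ip, user
| where count > 20
```

Any of the other questions above can be turned into a similar scheduled search and then into an alert.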
Merry Christmas, everyone! I wanted to thank everyone in this community first, because within 3 months as a Jr. Splunk Admin this page has brought me a lot of value. In my end-of-year review, my org asked me to focus on the innovation side of Splunk Cloud in 2024. While I'm thinking of:
- digging through apps on Splunkbase
- reading articles
I would love suggestions from you all on how I can add value beyond doing maintenance tasks. What does it really take? Any prerequisites? I know it's a generic question, but any answer would help! Thank you.
Hello richgalloway,   It worked, Thanks !!
First, search for both SERVICE_STOP and SERVICE_START events. Then use the dedup command to get the most recent event for each host. Filter out the SERVICE_START events and anything that happened in the last 10 minutes. Whatever is left will be a SERVICE_STOP event at least 10 minutes old without a matching SERVICE_START.

index=foo sourcetype=XYZ type IN (SERVICE_START SERVICE_STOP)
| dedup host
| where type="SERVICE_STOP" AND _time < relative_time(now(), "-10m")
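The keep-latest-per-host-then-filter logic can be sketched in plain Python over a list of events, which may help in checking the expected result against sample data. The events below are hypothetical:

```python
def stale_stops(events, now, window=600):
    """events: dicts with 'time' (epoch seconds), 'host', and 'type'.
    Return hosts whose most recent event is a SERVICE_STOP older than `window` seconds."""
    latest = {}
    # Keep only the most recent event per host (the dedup step).
    for ev in sorted(events, key=lambda e: e["time"]):
        latest[ev["host"]] = ev
    # The where step: latest event is a stop, and it is older than the window.
    return [h for h, ev in latest.items()
            if ev["type"] == "SERVICE_STOP" and ev["time"] < now - window]

# Hypothetical data: web01 stopped 20 min ago and never restarted; web02 restarted.
now = 1_700_000_000
events = [
    {"time": now - 1200, "host": "web01", "type": "SERVICE_STOP"},
    {"time": now - 1200, "host": "web02", "type": "SERVICE_STOP"},
    {"time": now - 300,  "host": "web02", "type": "SERVICE_START"},
]
print(stale_stops(events, now))  # → ['web01']
```

web02 is excluded because its most recent event is a SERVICE_START, which mirrors what the dedup-then-where search does.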
There are a few ways to onboard data into Splunk.
- Install a universal forwarder on the server to send log files to Splunk.
- Have the server send syslog data to Splunk via a syslog server or Splunk Connect for Syslog.
- Use the server's API to extract data for indexing.
- Use Splunk DB Connect to pull data from the server's SQL database.
- Have the application send data directly to Splunk using HTTP Event Collector (HEC).
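As an illustration of the HEC option: the application posts a JSON body to the collector endpoint with a token header. A sketch — the hostname, port, token, sourcetype, and index below are all placeholders to replace with your own values:

```python
import json

def hec_payload(event, sourcetype="my_app:json", index="main"):
    """Build the JSON body Splunk HEC expects on /services/collector/event.

    The sourcetype and index defaults here are illustrative, not Splunk's.
    """
    return {"event": event, "sourcetype": sourcetype, "index": index}

payload = hec_payload({"action": "login", "user": "alice"})
body = json.dumps(payload)
print(body)

# Sending it would look roughly like this (needs the `requests` package
# and a real HEC token; left commented out so the sketch runs standalone):
# import requests
# requests.post("https://splunk.example.com:8088/services/collector/event",
#               headers={"Authorization": "Splunk <your-hec-token>"},
#               data=body)
```

HEC avoids installing a forwarder, at the cost of the application needing network access to the collector port.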
(index=123) sourcetype=XYZ type IN ("SERVICE_STOP")
| table _time host type _raw

is the main query, where we search for hosts where a service stop has been observed. In this scenario we need to exclude a host if a SERVICE_START event is seen for the same host within 10 minutes. Kindly help me with the query. Thanks in advance!
Hi @vijreddy30, let me understand: which kind of HA do you want to implement?
If HA on data, you need an Indexer Cluster: at least two Indexers and a Cluster Master. You can find more information at https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Basicclusterarchitecture
If HA in the presentation layer, you need a Search Head Cluster: at least three Search Heads and a Deployer, as described at https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/SHCarchitecture
If at the ingestion level, you need at least two Heavy Forwarders and a Load Balancer.
Ciao. Giuseppe
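At the forwarding layer, HA toward the indexers is usually expressed as load-balanced outputs. A minimal outputs.conf sketch for a forwarder — the hostnames and group name are placeholders:

```
[tcpout]
defaultGroup = indexer_cluster

[tcpout:indexer_cluster]
server = idx1.example.com:9997, idx2.example.com:9997
autoLBFrequency = 30
```

The forwarder rotates between the listed indexers automatically, so losing one indexer does not stop ingestion; with an Indexer Cluster and a replication factor of at least 2, the data already indexed also survives the loss.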