All Topics


I have a lookup file that contains columns for hostname, IP address, and location. I need a query that will check the lookup file and determine if the element is up or down and whether it has or used "radius".

|inputlookup filename | search (MESSAGE_TEXT="Radius")
Hello Splunker,

I have two volumes with the following specs:

Hot/Warm Volume: 5.25 TB
Cold Volume: 4.75 TB

================================
[volume:hot]
path = /opt/splunk-hwdata
maxVolumeDataSizeMB = 7602176

[volume:cold]
path = /opt/splunk-Colddata
maxVolumeDataSizeMB = 4980736
==================================
[Win]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/splunk-Colddata/$_index_name/thaweddb
homePath.maxDataSizeMB = 7602176
coldPath.maxDataSizeMB = 4980736
maxWarmDBCount = 720
frozenTimePeriodInSecs = 5184000
maxDataSize = auto_high_volume

[FW]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/splunk-Colddata/$_index_name/thaweddb
homePath.maxDataSizeMB = 7602176
coldPath.maxDataSizeMB = 4980736
maxWarmDBCount = 720
frozenTimePeriodInSecs = 5184000
maxDataSize = auto_high_volume
====================================
Notice we have also reconfigured the following:

[diskUsage]
minFreeSpace = 20000

Finally, we have reached the bottom of the question. I doubt whether this configuration can maintain the requirements below:

The data retention period for the online data is 2 months.
- Hot/Warm – 1 month
- Cold – 1 month
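As a quick sanity check on the numbers quoted above (assuming binary units, 1 TB = 1024 × 1024 MB, which is how Splunk's *MB settings are counted), the cold volume setting matches the stated 4.75 TB, but the hot/warm setting appears to correspond to 7.25 TB rather than 5.25 TB:

```python
# Sanity-check the volume sizes quoted above, assuming binary units
# (1 TB = 1024 * 1024 MB), which is how Splunk's *MB settings count.
MB_PER_TB = 1024 * 1024

hot_mb = 7602176    # maxVolumeDataSizeMB for [volume:hot]
cold_mb = 4980736   # maxVolumeDataSizeMB for [volume:cold]

print(hot_mb / MB_PER_TB)    # 7.25 TB, not the 5.25 TB quoted for Hot/Warm
print(cold_mb / MB_PER_TB)   # 4.75 TB, matches the Cold volume spec
print(5.25 * MB_PER_TB)      # 5505024 MB would match a 5.25 TB volume

# frozenTimePeriodInSecs = 5184000 does work out to the stated 2 months:
print(5184000 // 86400)      # 60 days of total retention
```

Note that frozenTimePeriodInSecs controls only total retention; the hot-to-cold split is driven by the volume/size limits, not by a per-stage time setting.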
Hello, I'm trying to extract fields from an event, but am not up to par on my regex, and I can't seem to get this to work.  So these work in regex101, but not within the Splunk Field Extraction for some reason.  Within the event there is the following: "alias":"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777," I need to create 3 fields from this: Host = FL-NS-VPX-INT-1 ServiceGroup = mobileapist Server = vnetapis003 When trying for Host with:  (?<="alias":")[^|]* It never finds it in Splunk.  Can't figure out why.   Extra credit:   Just kidding.  The last field I need, I can't get either with:   (?<="team","name":")[^"]* "team","name":"Monitoring_Admin"}], Here's the full event as well. INFO[2024-11-13T13:37:23.9114215-05:00] Message body: {"actionType":"custom","customerId":"3a1f4387-b87b-4a3a-a568-cc372a86d8e4","ownerDomain":"integration","ownerId":"8b500163-8476-4b0e-9ef7-2cfdaa272adf","discardScriptResponse":true,"sendCallbackToStreamHub":false,"requestId":"18dcdb1b-14d6-4b10-ad62-3f73acaaef2a","action":"Close","productSource":"Opsgenie","customerDomain":"siteone","integrationName":"Opsgenie Edge Connector","integrationId":"8b500163-8476-4b0e-9ef7-2cfdaa272adf","customerTransitioningOrConsolidated":false,"source":{"name":"","type":"system"},"type":"oec","receivedAt":1731523037863,"ownerId":"8b500163-8476-4b0e-9ef7-2cfdaa272adf","params":{"type":"oec","alertId":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","customerId":"3a1f4387-b87b-4a3a-a568-cc372a86d8e4","action":"Close","integrationId":"8b500163-8476-4b0e-9ef7-2cfdaa272adf","integrationName":"Opsgenie Edge Connector","integrationType":"OEC","customerDomain":"siteone","alertDetails":{"Raw":"","Results 
Link":"https://hostname:8000/app/search/search?q=%7Cloadjob%20scheduler__td26605__search__RMD5e461b39d4ff19795_at_1731522600_38116%20%7C%20head%204%20%7C%20tail%201&earliest=0&latest=now","SuppressClosed":"True","TeamsDescription":"True"},"alertAlias":"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,","receivedAt":1731523037863,"customerConsolidated":false,"customerTransitioningOrConsolidated":false,"productSource":"Opsgenie","source":{"name":"","type":"system"},"alert":{"alertId":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","id":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","type":"alert","message":"[Splunk] Load Balancer Member Status","tags":[],"tinyId":"14585","entity":"","alias":"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,","createdAt":1731522737697,"updatedAt":1731523038582000000,"username":"System","responders":[{"id":"f8c9079d-c7bb-4e58-ac83-359cb217a3b5","type":"team","name":"Monitoring_Admin"}],"teams":["f8c9079d-c7bb-4e58-ac83-359cb217a3b5"],"actions":[],"priority":"P3","oldPriority":"P3","source":"Splunk"},"entity":{"alertId":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","id":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","type":"alert","message":"[Splunk] Load Balancer Member Status","tags":[],"tinyId":"14585","entity":"","alias":"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,","createdAt":1731522737697,"updatedAt":1731523038582000000,"username":"System","responders":[{"id":"f8c9079d-c7bb-4e58-ac83-359cb217a3b5","type":"team","name":"Monitoring_Admin"}],"teams":["f8c9079d-c7bb-4e58-ac83-359cb217a3b5"],"actions":[],"priority":"P3","oldPriority":"P3","source":"Splunk"},"mappedActionDto":{"mappedAction":"postActionToOEC","extraField":""},"ownerId":"8b500163-8476-4b0e-9ef7-2cfdaa272adf"},"integrationType":"OEC","alert":{"alertId":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","id":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","type":"alert","message":"[Splunk] Load Balancer Member 
Status","tags":[],"tinyId":"14585","entity":"","alias":"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,","createdAt":1731522737697,"updatedAt":1731523038582000000,"username":"System","responders":[{"id":"f8c9079d-c7bb-4e58-ac83-359cb217a3b5","type":"team","name":"Monitoring_Admin"}],"teams":["f8c9079d-c7bb-4e58-ac83-359cb217a3b5"],"actions":[],"priority":"P3","oldPriority":"P3","source":"Splunk"},"customerConsolidated":false,"customerId":"3a1f4387-b87b-4a3a-a568-cc372a86d8e4","action":"Close","mappedActionDto":{"mappedAction":"postActionToOEC","extraField":""},"alertId":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","alertAlias":"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,","alertDetails":{"Raw":"","Results Link":"https://hostname:8000/app/search/search?q=%7Cloadjob%20scheduler__td26605__search__RMD5e461b39d4ff19795_at_1731522600_38116%20%7C%20head%204%20%7C%20tail%201&earliest=0&latest=now","SuppressClosed":"True","TeamsDescription":"True"},"entity":{"alertId":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","id":"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697","type":"alert","message":"[Splunk] Load Balancer Member Status","tags":[],"tinyId":"14585","entity":"","alias":"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,","createdAt":1731522737697,"updatedAt":1731523038582000000,"username":"System","responders":[{"id":"f8c9079d-c7bb-4e58-ac83-359cb217a3b5","type":"team","name":"Monitoring_Admin"}],"teams":["f8c9079d-c7bb-4e58-ac83-359cb217a3b5"],"actions":[],"priority":"P3","oldPriority":"P3","source":"Splunk"}} messageId=7546739e-2bab-414d-94b5-b0f205208932   Thank you for all the help on this one, Thanks, Tom    
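For what it's worth, the patterns in question can be exercised outside Splunk. A minimal Python sketch against strings taken from the event above — note that Splunk's rex and inline field extractions generally require a named capture group such as (?&lt;Host&gt;...) to produce a field, which may be why the bare-lookbehind versions return nothing even though they match in regex101:

```python
import re

# Sample strings taken from the event above.
alias = '"alias":"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,"'
team = '"type":"team","name":"Monitoring_Admin"}],'

# Named-group equivalents of the lookbehind patterns; in Splunk,
# only named capture groups become fields.
host = re.search(r'"alias":"(?P<Host>[^|]*)', alias).group('Host')
service_group = re.search(r'\|(?P<ServiceGroup>[^?]*)', alias).group('ServiceGroup')
server = re.search(r'\|[^?]*\?(?P<Server>[^?]*)', alias).group('Server')
team_name = re.search(r'"type":"team","name":"(?P<TeamName>[^"]*)', team).group('TeamName')

print(host, service_group, server, team_name)
# FL-NS-VPX-INT-1 mobileapist vnetapis003 Monitoring_Admin
```

The same patterns should drop into rex (which uses (?&lt;Name&gt;...) syntax) largely unchanged.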
This is a request to add the steps for adding Splunk Enterprise Security to my enterprise account. Thanks.
Currently running Splunk 9.3.0, IT Essentials Work 4.18.1, and the VMware Dashboards and Reports content pack 1.2.0.

In all dashboards in the VMware Dashboards and Reports app that have a "Quick Search" free-text option, the search is not working. It used to provide a list of matching hosts/VMs as you typed, depending on the dashboard. Now I can't get it to do anything.

Can anyone tell me what the data source for this input is? I think I am probably missing a lookup file but cannot find which one. For example, this shows the radio button that gets you to the text input. The radio button works, but the text input does nothing.
index=replicate category=* action=* Message=* [search index=replicate | eval Msg=substr(Message,1,30)] | stats count by action category Msg | dedup action

This is what I'm trying to do. The Message field is very large and I only need the first sentence of the Message. How can I do this? We want it in a sub-search to demonstrate the sub-search function for our users. This is a Splunk Cloud implementation.
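Outside SPL, the trimming being attempted (a fixed 30-character prefix versus the first sentence) is just string slicing or splitting; a small Python sketch with a hypothetical message:

```python
# Hypothetical Message value, for illustration only.
message = "Replication task completed successfully. 4,312 rows copied in 12.8 seconds."

# First 30 characters, mirroring substr(Message, 1, 30) in the query above.
print(message[:30])            # "Replication task completed suc"

# First sentence: split on the first ". " boundary instead of a fixed width.
print(message.split(". ")[0])  # "Replication task completed successfully"
```

The equivalent of the split approach in SPL would be an eval with split() on ". " and taking the first element, rather than a fixed-width substr.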
The API reference mentions how to install an app that is already local to the Splunk instance with apps/local. We can already upload an app manually in the Web console by going to Apps -> Manage Apps -> Install App from File. However, for detection-as-code purposes, I need to be able to do that programmatically, using an API, for CI/CD. I have seen no documented way to do that, which can't be true. Surely if we can do it from the web console, there is a way to do it programmatically using an API. How do I install an app from outside the Splunk instance using the REST API? Thanks
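If memory serves, the documented route for this is a POST to the same apps/local endpoint with filename=true and name pointing at the app package path on disk (see the Splunk REST API reference for the exact semantics). A hedged Python sketch that only builds the request — the hostname, token, and package path are placeholders, and actually sending it requires a reachable management port:

```python
from urllib import parse, request

# Placeholder values; substitute your own instance, auth token, and package.
base = "https://splunk.example.com:8089"
token = "YOUR_AUTH_TOKEN"
package = "/tmp/my_app.tar.gz"

# POSTing services/apps/local with filename=true asks splunkd to install
# the package found at the given path; update=true overwrites an existing app.
data = parse.urlencode({"name": package, "filename": "true", "update": "true"})
req = request.Request(
    base + "/services/apps/local",
    data=data.encode(),
    headers={"Authorization": "Bearer " + token},
    method="POST",
)
print(req.full_url)
print(data)  # name=%2Ftmp%2Fmy_app.tar.gz&filename=true&update=true
```

Against a live instance, request.urlopen(req) would perform the install; the package must already be on the target host (or reachable via a URL in name), which usually means an scp/upload step first in a CI/CD pipeline.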
Hello, sorry, still trying to get the hang of search queries. I am tasked with creating a table that displays a server name from one search alongside a team name from another search that corresponds with the server name. For example:

1st search:
index="netscaler" | table servername
Results in a table like:
servername1
servername2

2nd search:
index="main" | table teamname
Results in a table like:
teamname1
teamname2

I need to make one table that displays the corresponding teamname next to the servername, i.e. if servername = servername2, display teamname2 in the same table row. Does that make sense? Let me know if any details are needed. Not sure how to do this one. Thanks for any help, Tom
Trying to find out how to show the error message (hourly) when we hover over a Splunk sparkline graph in a Splunk dashboard. Do we have such an option for sparklines?
Hey guys, I sometimes have the task of reassigning ownership to certain teams, and at times it can be multiple dashboards/alerts at once. I have the option to select multiple dashboards/alerts, but when I try to reassign all at once, it doesn't work. I remember someone mentioning that it can be done, so I wanted to check with my favorite community. Thanks again.
Hello, if you are using _TCP_ROUTING and an index rename on the target platform, logs may go to the "last chance index".
I found that I had an error in one of my correlation searches because I saw it in the cloud monitoring console. When I fixed the error, I suddenly saw that the latency of this specific correlation search was >4 million seconds. Looking into the actual events that the cloud monitoring console is examining, I see scheduled_time is more than a month ago. Did I do something dumb, or is Splunk actually just trying to run all those failed scheduled tasks now and I just need to wait it out? Or is there a way to stop them from running? I have already disabled the correlation search and did a restart from the server controls.
Hello, is there a possibility of obtaining a Splunk Cloud license for development and integration purposes? Our company is actively working with Splunk APIs, and I'm trying to determine if there's a license or partnership program we could leverage to support this work. Many thanks in advance!
Hello All, I have a timeline chart and I would like to add zooming to it: when we drag and select some lines, it needs to zoom in. Can anyone help me find this? Thanks in advance!
I am trying to integrate Splunk into my project. Currently, I have the following .properties file:

mySplunk.level = INFO
mySplunk.handlers = com.splunk.logging.HttpEventCollectorLoggingHandler

# Configure the com.splunk.logging.HttpEventCollectorLoggingHandler
com.splunk.logging.HttpEventCollectorLoggingHandler.url = myUrl
com.splunk.logging.HttpEventCollectorLoggingHandler.level = INFO
com.splunk.logging.HttpEventCollectorLoggingHandler.token = myToken
com.splunk.logging.HttpEventCollectorLoggingHandler.source = mySource
com.splunk.logging.HttpEventCollectorLoggingHandler.disableCertificateValidation = true

Note: the real url and token are not put into this file here, but they are available and access is granted. My SplunkTestLogger.java:

import java.io.FileInputStream;
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class Main {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("mySplunk");
        try {
            FileInputStream fis = new FileInputStream("C:\\User\\myUser\\logging.properties");
            LogManager.getLogManager().readConfiguration(fis);
            logger.setLevel(Level.INFO);
            logger.addHandler(new ConsoleHandler());
            logger.setUseParentHandlers(false);
            logger.info("starting myApp");
            fis.close();
        } catch (Exception e) {
            logger.log(Level.SEVERE, "Exception occurred", e);
        }
    }
}

This class is not able to send any log messages to Splunk. Why? I already tried to connect and send events manually with

URL url = new URL(SPLUNK_HEC_URL + "/services/collector/event");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
connection.setRequestProperty("Authorization", "Splunk " + SPLUNK_HEC_TOKEN);
connection.setDoOutput(true);
//....

and it was successful, but I want to make it work with the .properties approach.
I have a heavy forwarder (HF) to which all security device logs are pointed, and from the HF the logs are forwarded to the indexer, but we don't have access to the indexer & search head. I want to validate that the log types configured for forwarding on the HF are actually arriving at the indexer. How can I verify that all logs are being forwarded to the indexer? In splunkd.log, "TcpOutEloop" shows the HF is connected to the indexer. Is there any other way to validate that my security device logs, which are pointed at the HF, are being forwarded to the indexer?
In the outer query I am trying to pull the ORDERS which are "Not available". I need to match the ORDERS which are "Not available" with the ORDERS in the subquery. The result to be displayed is ORDERS & UNIQUEID. The common field in the two queries is ORDERS. My requirement is to combine the two log statements on "ORDERS" and pull ORDERS and UNIQUEID into a table.

Below is the query I am using, but the result is pulling all ORDERS. I want only the ORDERS and UNIQUEID from the subquery to be displayed that match the ORDERS which are "Not available" in the first query:

index=source "status for : *" AND "Not available" | rex field=_raw "status for : (?<ORDERS>.*?)" | join ORDERS [search Message=Request for : * | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)" | rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""] | table ORDERS UNIQUEID
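Stripped of SPL, the matching logic being described is an inner join on the ORDERS key: keep only the subquery's ORDERS/UNIQUEID pairs whose ORDERS appears in the "Not available" set. A small Python sketch with made-up order IDs:

```python
# Orders flagged "Not available" by the outer search (made-up IDs).
not_available = {"ORD-1001", "ORD-1003"}

# ORDERS -> UNIQUEID pairs extracted by the subsearch (made-up data).
subquery = {"ORD-1001": "AB12", "ORD-1002": "CD34", "ORD-1003": "EF56"}

# Inner join on ORDERS: keep only subsearch rows whose order is "Not available".
result = {o: uid for o, uid in subquery.items() if o in not_available}
print(result)  # {'ORD-1001': 'AB12', 'ORD-1003': 'EF56'}
```

If the SPL join returns every order instead, a common culprit is the outer rex not actually capturing anything (a lazy .*? with nothing after it matches the empty string), so the join key never populates.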
Hello team, we want to run some custom code inside Splunk SOAR that utilizes the pandas Python package. We can already install pandas and use it with the commands below:

sudo su - phantom
/opt/phantom/bin/phenv pip install pandas

After installing, we can use pandas in custom functions just fine. I want to ask if this is fine, or can it lead to any compatibility issues in the future (e.g. with SOAR upgrades)? Thanks in advance!
I want to monitor AWS log sources across various accounts. Whenever logs stop coming in for a particular sourcetype, I need an alert for specific accounts. I have tried something like the following, but it's not picking it up right away, so any suggested SPL will be appreciated (not sure if we can use tstats so it will be much faster):

index=aws sourcetype="aws:cloudtrail" aws_account_id IN(991650019 55140 5557 39495836 157634 xxxx9015763)
| eval now=now()
| eval time_since_last=round(((now-Latest)/60)/60,2)
| stats latest(_time) as last_event_time, earliest(_time) as first_event_time count by sourcetype aws_account_id
| eval time_gap = last_event_time - first_event_time
| where time_gap > 4000
| table aws_account_id first_event_time last_event_time time_gap
| convert ctime(last_event_time)
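One thing worth noting about the query above: last_event_time - first_event_time measures the span of events seen inside the search window, not how stale the feed is. The "logs stopped coming" condition is now() minus the latest event time exceeding a threshold. A minimal Python sketch of that check with made-up timestamps:

```python
# Made-up last-event timestamps per AWS account (epoch seconds).
now = 1_700_000_000
last_event = {
    "991650019": now - 120,     # still flowing
    "55140":     now - 20_000,  # quiet for ~5.5 hours
}

threshold = 4000  # seconds of silence before alerting, as in the query above

# Staleness check: alert on accounts whose latest event is older than threshold.
stale = [acct for acct, t in last_event.items() if now - t > threshold]
print(stale)  # ['55140']
```

In SPL terms that would mean comparing now() - last_event_time (rather than time_gap) against the threshold after the stats step.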
Hi, We've installed this app and tried configuring it to send Splunk alerts to Jira. After entering the API URL and key and hitting 'Complete Setup,' it keeps redirecting back to the configuration page. Is anyone else experiencing this issue on Splunk Cloud?