
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi @gcusell, I have 2 doubts. 1. How can I drop events from the source IP subnet 10.0.0.0/24 at the indexer? I am aware of dropping a host at the indexer level, but not a whole subnet. 2. I'm getting duplicate data, i.e. duplicate data is being indexed. My question is how I can tell which hosts the duplicate data is coming from, so that I can offboard those devices. Kindly guide me on the above 2 issues. Thanks, Debjit
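For (1), assuming the subnet shows up as the event's host, a minimal sketch is nullQueue routing on the indexer (the stanza and transform names here are illustrative, not from the post):

props.conf
[host::10.0.0.*]
TRANSFORMS-dropsubnet = drop_10_0_0_0_24

transforms.conf
[drop_10_0_0_0_24]
# match every event from these hosts and discard it before indexing
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

For (2), a hedged starting point for spotting which hosts send identical events (the index name is a placeholder):

index=your_index
| eval raw_hash=md5(_raw)
| stats count values(host) as hosts by raw_hash
| where count > 1
| sort - count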
How much data can a heavy forwarder handle? I can't find a reference for this.
Hi Splunkers, I'm having an issue with my Splunk instance here. I'm running Splunk as a search head and an indexer on the same machine. The VM is Red Hat with 32 GB of RAM and 32 cores. I have noticed that the Splunk service is running very slowly; I checked the server and saw that the Splunk process (splunk -p 8089 restart) is taking all the load! Can someone please tell me what to do about such an issue? What is Splunk trying to do here, and why so much load on the CPU? Thanks in advance.
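A hedged first diagnostic step is Splunk's own introspection data, which breaks CPU down per process class (field names follow the _introspection schema as I understand it):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| timechart avg(data.pct_cpu) by data.process_type

If search processes dominate the chart, the load is likely scheduled or ad hoc searches rather than splunkd itself.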
Hi All, I need your valuable help here... I am just practicing with AppDynamics. I created a sample Spring Boot application and configured it in my SaaS free account. I have configured a custom service endpoint for one of my Spring bean methods. The issue is that AppDynamics is auto-detecting my REST endpoint transaction and capturing the metrics under a business transaction, but none of the service endpoint metrics are displayed. So I suspect the business transaction is masking my service endpoint, since the configured entry points are within the business transaction. Is that true? If not, why are my custom service endpoints not displayed?
Hello, how would I use a monitor path in my inputs.conf? All files are on the Windows machine at the location MLTS(\\VPWSENTSHMS\CFT\TEST)(L:). Should it be [monitor://L:\MLTS\*]? Any recommendations would be highly appreciated. Thank you!
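A minimal sketch, assuming the L: drive maps to \\VPWSENTSHMS\CFT\TEST, so MLTS is the drive's label rather than a folder (adjust the path if MLTS is actually a subfolder; index and sourcetype are placeholders). Mapped drive letters are often not visible to the account the Splunk service runs as, so the UNC form is usually the safer bet:

inputs.conf
[monitor://\\VPWSENTSHMS\CFT\TEST]
disabled = false
index = your_index
sourcetype = your_sourcetype

Monitor inputs recurse into subdirectories by default, so the trailing \* is generally unnecessary.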
How do I use an eval reference in the rex command? Here is what I have tried so far.

MyMacro: myrextest(1)

| eval test= "Hello"
| eval myinput = $myinput$
| eval rexString = "'$myinput$':'(?<$myinput$>[^*']+)"
| rex field=payload "'$myinput$':'(?<$myinput$>[^*']+)"

Search string without eval, which works fine:

| eval payload = "{'description':'snapshot created from test','snapShotName':'instance1-disk-2-cio-1564744963','sourceDisk':'instance1-disk-2','status':'READY'}"
`myrextest("snapShotName")`

Output from the search string:

rexString: 'snapShotName':'(?<snapShotName>[^*']+)

Search string with eval:

| makeresults
| eval payload = "{'description':'snapshot created from test','snapShotName':'instance1-disk-2-cio-1564744963','sourceDisk':'instance1-disk-2','status':'READY'}"
| eval myMacroInput = "snapShotName"
`myrextest(myMacroInput)`

Output from the search string:

'myMacroInput':'(?<myMacroInput>[^*']+)

Based on my observation, when I pass an eval reference to my macro and use it in rex, it is not replaced with the field's value; it is replaced with the eval reference itself. Can someone please help me with this? I have tried a lot but unfortunately haven't found a solution.
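For what it's worth, macro arguments are substituted as literal text before the search runs, so passing myMacroInput inserts the string "myMacroInput", never the field's value; rex also requires a literal regex, so it cannot be built from a field at runtime. One hedged workaround, since the payload is JSON apart from its single quotes, is the JSON eval functions (Splunk 8.1+), where the path argument can come from a field:

| makeresults
| eval payload = "{'description':'snapshot created from test','snapShotName':'instance1-disk-2-cio-1564744963','sourceDisk':'instance1-disk-2','status':'READY'}"
| eval myMacroInput = "snapShotName"
| eval json = replace(payload, "'", "\"")
| eval extracted = json_extract(json, myMacroInput)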
First time posting here, and I'm a new user to Splunk. I'd love to get some advice on setting up an alert. I want it to trigger at 8am, 12pm, 4pm, and 8pm, so I've set my cron schedule to "* 8,12,16,20 * * *". For the search's time scope, I'd like the following: the 8am trigger should have a search range of -12 hours to the current time; the 12pm, 4pm, and 8pm triggers should have a search range of -4 hours to the current time. I've set my time range to -12 hours (earliest) to the current time (now), but the 12pm, 4pm, and 8pm triggers are returning results that were already part of the result set from the 8am trigger. Does Splunk know when a result has been previously reported, or is there a way I can filter those out in the search query? How does the expire parameter work? Can I leverage it so that I won't get previously reported results? Would I have to set up a separate alert for the 8am trigger, even though (aside from the 12-hour lookback) it does the same thing and serves the same purpose as the alert covering the other times? Here's what the search and time range look like on my alert. Thanks in advance for the guidance!

index="slPlatform" environment="Development" Application="AP_post_EmployeePayload_To_EmployeeProfile"
| where eventLogLevel like "CRITICAL%"
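Two hedged observations on the setup above: a * in the minute field fires the alert every minute of hours 8, 12, 16, and 20, so the minute position probably wants to be 0; and a single saved search cannot vary its earliest time per trigger, so the usual pattern is two alerts sharing the same search (the cron and ranges below are a sketch):

Alert 1: cron 0 8 * * *         earliest=-12h  latest=now
Alert 2: cron 0 12,16,20 * * *  earliest=-4h   latest=now

index="slPlatform" environment="Development" Application="AP_post_EmployeePayload_To_EmployeeProfile"
| where eventLogLevel like "CRITICAL%"

Splunk does not remember which results a previous run already reported; non-overlapping time ranges like the above are the simplest way to avoid duplicates.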
Is it possible to configure heavy forwarders to send data to two tcpout groups (A, B) in outputs.conf and not block on group B failure? We want to send all data to group A, and a subset of the data (specific sourcetypes) to group B. But group B is in a remote location, our link to that location is not fully stable, and we don't want event loss in group A on link failures or group B failures.

[tcpout]

[tcpout:groupA]
server=indexerA1_ip:9997,indexerA2_ip:9997

[tcpout:groupB]
server=indexerB_ip:9997
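A hedged sketch: route the chosen sourcetypes to both groups via _TCP_ROUTING and let the cloned copies be discarded when group B's queue blocks, so group A keeps flowing (setting names are from outputs.conf/transforms.conf as I understand them; verify against the spec files for your version):

outputs.conf
[tcpout]
defaultGroup = groupA
# allow cloned events to be dropped after N seconds of a blocked queue
dropClonedEventsOnQueueFull = 30

[tcpout:groupA]
server = indexerA1_ip:9997,indexerA2_ip:9997

[tcpout:groupB]
server = indexerB_ip:9997

props.conf (sourcetype name is a placeholder)
[your_sourcetype]
TRANSFORMS-routeB = clone_to_groupB

transforms.conf
[clone_to_groupB]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = groupA,groupB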
I have a dbquery output that looks like the below; unfortunately I can't update the actual database query to make it more readable.

2022-12-16 21:30:17.689, TO_CHAR(schema.function(MAX(columnA)),'MM-DD-YYHH24:MI')="12-16-22 16:29"

I am trying to determine whether the two times at the beginning and end of the result are within 15 minutes of each other. I have tried renaming the column from the long unwieldy string, but I can't get that working using the rename function. Does anyone have any ideas on how to rename it (or whether I even need to), and then evaluate whether the times are within 15 minutes of each other? The query I ran to get the above is just index="abc".
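A hedged sketch: rename does handle awkward column names if the name is quoted exactly, and the comparison can then be done on epoch seconds. The first column's field name (start_time below) and both time formats are assumptions based on the sample row:

index="abc"
| rename "TO_CHAR(schema.function(MAX(columnA)),'MM-DD-YYHH24:MI')" as max_time
| eval t_start = strptime(start_time, "%Y-%m-%d %H:%M:%S.%3N")
| eval t_end = strptime(max_time, "%m-%d-%y %H:%M")
| eval within_15_min = if(abs(t_end - t_start) <= 900, "yes", "no")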
Are .p12 and .pfx files required to use Splunk after initial install?
Community, I am attempting to retrieve events in Splunk regarding Tenable vulnerability data. The goals are as follows:

1. Obtain the most recent information for a given vulnerability ID and device pair.
2. Filter out any vulnerabilities that have a severity equal to "informational", and/or
3. Filter out any vulnerabilities that have a state of "fixed".

The issue I have encountered is that "fixed" may be the most recent status for a vulnerability. Simply filtering that value out for a specific vulnerability ID and device combination will still result in that pair showing up in the result set (even though the vulnerability has been fixed in this case), and I don't want IT chasing fixed vulnerabilities. In reality, what I want to see is the most recent vulnerability for a given device if its state is not "fixed" and its severity is not "informational" (the reason behind this is that some vulnerability severities are reduced over time due to various conditions: where they may have started out as "high" they are now "informational", or vice versa); otherwise, that device and vulnerability ID pair should not appear in my result set at all. Here is how far I have gotten to date:

`get_tenable_index` sourcetype="tenable:io:vuln" [ search index="tenable" sourcetype="tenable:io:assets" deleted_at="null" | rename uuid AS asset_uuid | stats count by asset_uuid | fields asset_uuid ]
| rename plugin.id AS Plugin_ID asset_uuid AS Asset_ID
| strcat Asset_ID : Plugin_ID Custom_ID
| stats latest(*) as * by Custom_ID   << The problem here is that the latest might be "fixed" or "informational", which in this case I want to ignore (if either of those is true).
| rename plugin.cvss_base_score AS CVSS plugin.synopsis AS Description plugin.name AS Name plugin.cve{} AS CVE output AS Output severity AS Risk plugin.see_also{} AS See_Also plugin.solution AS Solution state AS State plugin.has_patch AS Patchable plugin.exploit_available AS Exploitable plugin.exploited_by_malware AS Exploited_By_Malware plugin.publication_date AS Plugin_Publish_Date
| table Custom_ID, CVSS, Description, Name, CVE, Plugin_ID, Output, Risk, See_Also, Solution, State, Asset_ID, Patchable, Exploitable, Exploited_By_Malware, Plugin_Publish_Date tags{}.value
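One hedged refinement: keep the stats latest(*) by Custom_ID exactly as above so the newest record per pair wins, and only then drop pairs whose newest record is fixed or informational (field names are as in the search before the rename):

... | stats latest(*) as * by Custom_ID
| where NOT (state="fixed" OR severity="informational")
| rename ...

Because the filter runs after the dedup, a device whose latest record is fixed disappears entirely instead of surfacing an older, still-open-looking record.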
I am using rex to extract field names and then inject the data so I can get only the desired fields, but I am not able to do so. My access logs:

server - - [date & time] "GET /google/page1/page1a/633243463476/googlep1 HTTP/1.1" 200 350 85

My search query:

<query> | rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)"

My search query with the lookup:

<query> | rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)"
| search serviceName="/google"
| lookup abc.csv uri_path OUTPUT serviceName apiName
| search serviceName=* apiName=*

I am using the above query to look up from the CSV file, but every API gets the same count and I am not able to get the stats or logs for only a particular one. Is there a way to match this and produce results with both uri_path and apiName? Can anyone please help me with this? The CSV file looks like the below, and I am trying to match apiName and uri_path so the logs line up properly:

serviceName  uri_path                        http_method  apiName
/google      /page1/page1a/*/googlep1        post         postusingRRR
/google      /page1/page1a/sada/*/googlep1   get          getusingep2
/google      /pag5/ggg/*/ooopp/ggplr         delete       deleteusing
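A hedged sketch: because the uri_path values in the CSV contain * segments, looking up the raw file by exact match will not line rows up; a lookup definition with a wildcard match_type should (the lookup name abc_lookup is illustrative):

transforms.conf (or Settings > Lookups > Lookup definitions > Advanced options)
[abc_lookup]
filename = abc.csv
match_type = WILDCARD(uri_path)
max_matches = 1

... | lookup abc_lookup uri_path OUTPUT serviceName apiName
| search serviceName=* apiName=*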
I am using the following query to get the results:

index=abc node=*
| chart latest(state) as state by node
| stats count by state
| sort - state

Below is the column chart display of it. I want to display each state with a custom color. I tried using the line below in the XML, but it's not changing anything:

<option name="charting.fieldColors">{"Allocated":0x333333,"DOWN":0xd93f3c,"IDLE":0xf58f39,"Minor":0xf7bc38,"Notice":0xeeeeee,"Healthy":0x65a637}</option>
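One hedged explanation: charting.fieldColors keys on series names, and after stats count by state the chart has a single series named count, so none of the state names ever match. Transposing so each state becomes its own series should let the option apply (a sketch):

index=abc node=*
| chart latest(state) as state by node
| stats count by state
| transpose 0 header_field=state
| fields - column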
I have a question regarding KPI thresholds in Splunk ITSI. When using the clone action, all KPI thresholds created inherit the Timezone attribute of the user that cloned them. Could anyone give me a working example? Thanks in advance.
How do we relate the query

index=_audit action=search search=* user!=splunk-system-user provenance!=scheduler | table _time user search host total_run_time result_count | sort - _time

to the SVC usage of its results?
We're sending logs to Splunk Cloud over port 514 using the following stanza in inputs.conf:

[udp://514]
index=syslog
disabled=false
sourcetype=syslog

This works great; however, we are now sending more than one type of log this way. Can we declare multiple sourcetypes depending on the origin of the logs? For example, if they are from IP address A, give them the "firewall" sourcetype, and if from IP address B, the "crontab" sourcetype?
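A hedged sketch using host-based sourcetype overrides on the instance that owns the UDP input (for Splunk Cloud this would sit on the forwarder receiving the syslog; the IP addresses are placeholders). UDP inputs set host to the sender's IP by default, which is what these transforms key on:

props.conf
[source::udp:514]
TRANSFORMS-set_st = st_firewall, st_crontab

transforms.conf
[st_firewall]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.1\.2\.3$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::firewall

[st_crontab]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.1\.2\.4$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::crontab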
Hi, I have the first table below, and I need to group on a field and add up (eval +) the values so it becomes the second table below. Help please..
Hello Splunkers, I need your help to get the desired result. Below is the sample query for reference:

| makeresults | eval week_year="2022-48",group="ABC",old=64,new=78
| append [| makeresults | eval week_year="2022-48",group="XYZ",old=35,new=15]
| append [| makeresults | eval week_year="2022-49",group="XYZ",old=33,new=17]
| append [| makeresults | eval week_year="2022-49",group="ABC",old=215,new=158]
| fields - _time
| eval target1=round((old/new)*0.17,3)*100,target2=round((old/new)*0.26,3)*100,final=round(old/new,3)*100
| table week_year group final target1 target2
| chart last(final) as final values(target1) as target1 values(target2) as target2 over group by week_year

But since values() is used, we get target fields for each week, while the expected outcome is a single line each for target1 and target2. Please help me get the visualization into the correct format. Thanks in advance!!
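One hedged angle: since target1 and target2 are fixed fractions of final (0.17 and 0.26 of old/new), single overlay columns can be derived after the chart instead of being carried through it. The foreach below walks the week columns in order, so the newest non-null value wins (a sketch; it assumes the week_year columns sort oldest to newest):

... | chart last(final) as final over group by week_year
| foreach 2* [ eval latest_final = coalesce('<<FIELD>>', latest_final) ]
| eval target1 = round(latest_final * 0.17, 1), target2 = round(latest_final * 0.26, 1)
| fields - latest_final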
I am looking at building a home lab for Splunk. Any suggestions for minimum hardware? I can't really do 8 cores / 64 GiB; 6 cores / 32 GiB would be feasible.
My count field is right-justified and so far from the description. Is it possible to either left-justify the content or right-justify the count field in a table?