All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I was going through my Splunk setup and came across this warning when clicking on LDAP Groups under "Authentication Methods > LDAP strategies": "LDAP server warning: Size limit exceeded". I was just wondering what the cause could be and how to resolve this warning. Thank you in advance for any assistance. Mikhael
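If the strategy returns more group entries than the per-query cap, one setting worth checking is the LDAP strategy's sizelimit in authentication.conf; a minimal sketch, assuming a strategy named corp_ldap (the stanza name and value are placeholders, and the LDAP server's own server-side size limit may also need raising):

# authentication.conf on the instance handling authentication
[corp_ldap]
# Maximum number of entries requested per LDAP search (verify against the authentication.conf spec for your version)
sizelimit = 5000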
Hello, I am very new to Splunk. I want to trigger an alert when a second event does not occur within 20 minutes of the first event. The index and sourcetype are the same for both events. Both events have the same value for the requestId field but different values for the message field. For example:

First Event:

Event A
level: INFO
logger_name: filename1
message: First event
requestId: 12345
thread_name: http-t2
timestamp: 2022-12-19T05:44:51.757Z

Event B
level: INFO
logger_name: filename1
message: First event
requestId: 67890
thread_name: http-t2
timestamp: 2022-12-19T05:44:51.757Z

Second Event:

Event C
level: INFO
logger_name: filename2
message: Second Event
requestId: 12345
thread_name: http-t1
timestamp: 2022-12-19T05:44:51.926Z

Since Event B with requestId 67890 does not have a second event with the same request ID, I want Event B as my output:

level: INFO
logger_name: filename1
message: First event
requestId: 67890
thread_name: http-t2
timestamp: 2022-12-19T05:44:51.757Z

Any help is appreciated.
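A minimal SPL sketch of one way to surface first events that have no matching second event; the index and sourcetype are placeholders, and the 20-minute cutoff is applied with relative_time so events newer than 20 minutes are not flagged prematurely:

index=your_index sourcetype=your_sourcetype earliest=-60m
| stats earliest(_time) as first_time dc(message) as message_count values(message) as messages by requestId
| where message_count = 1 AND first_time < relative_time(now(), "-20m")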
Hi everyone! I plan to ingest some OpenTelemetry spans. If I understood the docs correctly (Analyze services with span tags and MetricSets in Splunk APM), when I want to search/filter my ingested spans on certain resource/span attributes, I won't be able to do so until I explicitly start indexing those attributes, correct? Or is my understanding wrong? Is there a way to automatically tell Splunk that I want to index any OpenTelemetry resource/span attribute found on a span, or do I always need to add them manually first?
We have a distributed Splunk (8.x) environment on-prem, with a CM and 3 peers, 2 SHs, 1 deployment server, and many clients. On one of my Windows 10 clients, I have a csv file that gets new data appended to it every 5 minutes via a separate python script that runs locally. I have verified the appending is working as intended. On the UF, I have set up an app with a monitor input to watch the file. A custom props.conf is located on the indexer peers. I am experiencing some unexpected behavior and am struggling to find a solution.

The first odd thing I have noticed is that when data is appended to the csv file, either via the python script or by adding it manually, I sometimes get an event ingested and sometimes I do not. If I manually add lines 4 or 5 times over a 15-minute period, I might get 1 event. Sometimes I won't get any events at all, but if I do get one event, it's never more than 1.

The second weird thing I noticed is that the event is always the top line of the csv file, never the line that I added manually or via python to the bottom of the file. The file is over 2500 lines. I have verified that the lines are actually appended to the bottom and persist.

I suspect that there might be an issue with the timestamp or possibly the LINE_BREAKER, but I cannot say definitively that is the issue. (Maybe the LINE_BREAKER regex is not actually there?) I can take the csv file and add it without issue using the "Add Data" process in Splunk Web. It breaks the events exactly how I would expect (not just the top line), with correct timestamps, and looks perfect. Using the Add Data method, I copied the props from the Web UI and tried to add them to my app's custom props.conf that is pushed to the peers. I am still left with the same weird behavior as I experienced with my original custom props.conf.

I am reaching a point where googling is becoming too specific to get me a solid lead on my next troubleshooting steps. Does anyone know what might be causing either of these issues?

Here is a snippet of the csv file (the top 3 lines):

profile_val,start_time,scheduler_tree,bot_name,tree_name,query_val,kill_option,stopped_on,remarks,status_val,ts
Fit,11/03/2022 19:34:00.277,lb_prod_scheduler,spotbot,,%20redacted%20auto,false,11/03/2022 19:40:00.107,BOT did not pinged for more than 300 seconds,failed,2022-11-04 00:40:05
Fit,11/03/2022 19:49:00.143,lb_prod_scheduler,spotbot,,%20redacted%20auto,false,11/03/2022 19:55:00.091,BOT did not pinged for more than 300 seconds,failed,2022-11-04 00:55:05

Here is my custom props.conf that I initially tried:

[bot:logs]
description = BOT Application Status Logs
category = Custom
disabled = false
CHARSET=UTF-8
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %m/%d/%Y%t%H:%M:%S.%f
TIME_PREFIX = Fit,
INDEXED_EXTRACTIONS=CSV
FIELD_DELIMITER=,
FIELD_NAMES=profile_val,start_time,scheduler_tree,bot_name,tree_name,query_val,kill_option,stopped_on,remarks,status_val,ts

Here is the "Add Data" props.conf that works in the Web UI but not on the indexers:

[bot:logs]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
EXTRACT-Ex_account=^(?:[^:\n]*:){4}(?P<Ex_account>\d+)
INDEXED_EXTRACTIONS=csv
KV_MODE=none
category=Structured
description=Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled=false
pulldown_type=true
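Not a root-cause answer, but for reference, a sketch of what the forwarder-side props might look like, on the basis that INDEXED_EXTRACTIONS parsing for monitored structured files generally happens on the universal forwarder rather than on the indexer peers; the field and format values are copied from the post (the TIME_FORMAT below uses a literal space rather than %t, matching the sample rows) and would need verifying:

# props.conf deployed to the UF app that holds the monitor input
[bot:logs]
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = start_time
TIME_FORMAT = %m/%d/%Y %H:%M:%S.%f
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)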
[monitor://C:\*_IPCATMDetailLog.txt]
disable=0
index=test
sourcetype=IPCATMDetailLog

That is what I need to monitor, because each day the log file has the date in its name, for example 20221219_IPCATMDetailLog.txt or 20221218_IPCATMDetailLog.txt, etc. I don't know why it only picks up the log from 20221215. Before that I used the default sourcetype assigned by Splunk; I switched to my own sourcetype on the afternoon of 12/15/2022, and the files from the days after that are not picked up anymore. I want to ingest all the logs, day by day, under sourcetype=IPCATMDetailLog. Thanks for your help.
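For comparison, a minimal inputs.conf sketch for monitoring the date-stamped files; note that the documented attribute name is "disabled" rather than "disable", and the index/sourcetype values are taken from the post:

# inputs.conf on the forwarder
[monitor://C:\*_IPCATMDetailLog.txt]
disabled = 0
index = test
sourcetype = IPCATMDetailLog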
Hi, I was about to create a summary index for log sizes/counts by host and by sourcetype. I require this for alerting when log volumes change. I can create the indexes/searches, but I thought this might be a common thing - does anyone know of an app/add-on that does this already? Thanks, Bevan
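For reference, a tstats sketch of the kind of scheduled search that could feed such a summary index; it covers event counts only (sizes would need a different source such as the license usage logs), and the summary index name is a placeholder:

| tstats count where index=* by host sourcetype _time span=1h
| collect index=summary_log_volume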
I was wondering:

1. We have search-time and index-time field extractions. For search-time field extractions, can I push the same props/transforms to both the SH and the indexer cluster, or is there no need to push them to the indexers since the search head will replicate the knowledge bundle to its search peers? And do I need to restart both the search head and the indexers for these conf files to take effect?

2. For index-time field extractions we usually push the props/transforms to the HF. Do I need to push the same conf to the indexers as well? And do I need to restart both the HF and the indexers for these conf files to take effect?
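To make the distinction concrete, a hedged sketch of what each kind of extraction typically looks like; the sourcetype, field names, and stanza names below are made up for illustration:

# Search-time extraction - props.conf, needed where the search is parsed (e.g. the SH)
[my:sourcetype]
EXTRACT-user = user=(?<user>\S+)

# Index-time extraction - props.conf plus transforms.conf, needed where parsing happens (e.g. HF, or indexers receiving unparsed data)
[my:sourcetype]
TRANSFORMS-set_user = set_user_indexed

# transforms.conf
[set_user_indexed]
REGEX = user=(\S+)
FORMAT = user_indexed::$1
WRITE_META = true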
Hi Splunkers, I have a timechart whose values for count by VQ are less than 10, but the default y-axis scale is 100, which means I can't see any columns in my timechart. How do I change the y-axis scale so it adjusts automatically to my maximum value? For example, if the max value is less than 10, then the scale should be 10. Thanks in advance. Kevin
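If this is a Simple XML dashboard panel, the y-axis charting options are one place to look; a minimal sketch where the search is a placeholder and the explicit maximum is simply left unset so the axis scales to the data:

<chart>
  <search>
    <query>index=your_index | timechart count by VQ</query>
  </search>
  <option name="charting.axisY.minimumNumber">0</option>
  <!-- leave charting.axisY.maximumNumber unset so the y-axis auto-scales -->
</chart>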
I need a query to group similar stack traces across requests (CR = Correlation Id) in a specific format.

Query:

index="myIndex" source="/mySource" "*exception*"
| rex field=_raw "(?P<firstFewLinesOfStackTrace>(.*\n){1,5})"
| eval date=strftime(_time, "%d-%m-%Y")
| head 3
| reverse
| table date, CR, count, firstFewLinesOfStackTrace

Desired format:

Date     | CR                | Count | Log
01/12/22 | CR_1 CR-2         | 2     | StackTrace1 StackTrace2 StackTrace3
02/12/22 | CR_1 CR-2 CR-3    | 3     | DiffStackTrace1 DiffStackTrace2 DiffStackTrace3

I am not sure how to group these logs, as each stack trace has _date as a unique identifier, and I don't know how to get the result in the above format (whether to use stats, eventstats, table, etc.). Please help, thanks in advance.
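One hedged sketch of a stats-based grouping that produces roughly that shape; it assumes the CR field is already extracted and reuses the rex from the post:

index="myIndex" source="/mySource" "*exception*"
| rex field=_raw "(?P<firstFewLinesOfStackTrace>(.*\n){1,5})"
| eval date=strftime(_time, "%d-%m-%Y")
| stats dc(CR) as count values(CR) as CR values(firstFewLinesOfStackTrace) as Log by date
| table date, CR, count, Log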
Hi, I am going to create a DC list lookup daily using nslookup. How can I define the lookup without a header, or should I create the header manually?
Hi @gcusell, I have 2 doubts.

1. How can I drop a source IP range (the 10.0.0.0/24 subnet) at the indexer? I am aware of dropping a host at the indexer level, but not this.

2. I'm getting duplicate data, i.e. duplicate data is being indexed. My question is: how can I know from which host the data is duplicated, so that I can offboard those devices?

Kindly guide me on the above 2 items.

Thanks, Debjit
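For the first point, a hedged props/transforms sketch of the usual nullQueue pattern; the sourcetype name is a placeholder and the regex assumes the 10.0.0.x address appears in the raw event text:

# props.conf on the indexers (or on a heavy forwarder, if one parses the data first)
[your:sourcetype]
TRANSFORMS-drop_subnet = drop_10_0_0

# transforms.conf
[drop_10_0_0]
REGEX = \b10\.0\.0\.\d{1,3}\b
DEST_KEY = queue
FORMAT = nullQueue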
How much data can a heavy forwarder handle? I can't find a reference.
Hi Splunkers, I'm having an issue with my Splunk instance. I'm running Splunk as a search head and an indexer on the same machine. The VM is Red Hat with 32 GB of RAM and 32 cores. I have noticed that the Splunk service is running very slowly; I checked the server and saw that the Splunk process (splunk -p 8089 restart) is taking all the load! Can someone please tell me what to do about this issue? What is Splunk trying to do here, and why so much load on the CPU? Thanks in advance.
Hi All, I need your valuable help here... I am just practicing AppDynamics. I created a sample spring-boot application and configured it in my SaaS free account. I have configured a custom service endpoint for one of my spring bean methods. The issue is that AppDynamics is auto-detecting my REST endpoint transaction and capturing the metrics under a business transaction, but none of the service endpoint metrics are displayed. So I suspect the business transaction is masking my service endpoint, since the configured entry point is within the business transaction. Is that true? If not, why are my custom service endpoints not displayed?
Hello, how would I set the monitor path in my inputs.conf? All the files are on the Windows machine at this location: MLTS (\\VPWSENTSHMS\CFT\TEST) (L:). Should it be [monitor://L:\MLTS\*]? Any recommendations will be highly appreciated. Thank you!
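A hedged inputs.conf sketch using the UNC path from the post instead of the mapped drive letter, since mapped drives are often not visible to the account the forwarder service runs as; the index and sourcetype values are placeholders:

[monitor://\\VPWSENTSHMS\CFT\TEST\MLTS\*]
disabled = 0
index = your_index
sourcetype = your_sourcetype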
How do I use an eval reference in the rex command? Here is what I have tried so far.

My macro, myrextest(1):

| eval test= "Hello"
| eval myinput = $myinput$
| eval rexString = "'$myinput$':'(?<$myinput$>[^*']+)"
| rex field=payload "'$myinput$':'(?<$myinput$>[^*']+)"

Search string without eval, which works fine:

| eval payload = "{'description':'snapshot created from test','snapShotName':'instance1-disk-2-cio-1564744963','sourceDisk':'instance1-disk-2','status':'READY'}"
`myrextest("snapShotName")`

Output from that search string:

rexString: 'snapShotName':'(?<snapShotName>[^*']+)

Search string with eval:

| makeresults
| eval payload = "{'description':'snapshot created from test','snapShotName':'instance1-disk-2-cio-1564744963','sourceDisk':'instance1-disk-2','status':'READY'}"
| eval myMacroInput = "snapShotName"
`myrextest(myMacroInput)`

Output from that search string:

'myMacroInput':'(?<myMacroInput>[^*']+)

Based on my observation, when I pass an eval reference to the macro and use it in rex, it is not replaced with the field's value; it is replaced with the eval reference itself. Can someone please help me with this? I have tried a lot but unfortunately haven't found a solution.
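Macros are expanded as text before the search runs, so the macro only ever sees the literal string passed to it, never a field's runtime value. A hedged alternative sketch that extracts every key/value pair generically and then picks the one named in a field; the payload and key names are taken from the post:

| makeresults
| eval payload = "{'description':'snapshot created from test','snapShotName':'instance1-disk-2-cio-1564744963','sourceDisk':'instance1-disk-2','status':'READY'}"
| eval myMacroInput = "snapShotName"
| rex field=payload max_match=0 "'(?<key>[^']+)':'(?<value>[^']+)'"
| eval wanted = mvindex(value, mvfind(key, "^".myMacroInput."$"))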
First time posting here, and I'm a new user to Splunk. I'd love to get some advice on setting up an alert. I want it to trigger at 8am, 12pm, 4pm, and 8pm, so I've set my Cron schedule to "* 8,12,16,20 * * *".

For the search's time scope, I'd like the following:

The 8am trigger should have a search range of -12 hours to the current time.
The 12pm, 4pm, and 8pm triggers should have a search range of -4 hours to the current time.

I've set my time range to be -12h (earliest) to the current time (now), but the 12pm, 4pm, and 8pm triggers are getting results that were already part of the result set from the 8am trigger.

Does Splunk know when a result has been previously reported, or is there a way I can filter those out using the search query? How does the expire parameter work - can I leverage it in a way that I won't get previously reported results? Would I have to set up a separate alert for the 8am trigger, even though (aside from the 12-hour lookback) it does the same thing and serves the same purpose as an alert that would encompass the other times?

Here's what the search and the time range look like on my alert. Thanks in advance for the guidance!

index="slPlatform" environment="Development" Application="AP_post_EmployeePayload_To_EmployeeProfile"
| where eventLogLevel like "CRITICAL%"
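Two side notes, offered as a sketch rather than a definitive fix: with "*" in the minute field the cron expression fires every minute of those hours, so a fixed minute such as 0 is usually intended, and the per-run lookback can be made conditional inside the search itself; the field names are taken from the post:

0 8,12,16,20 * * *

index="slPlatform" environment="Development" Application="AP_post_EmployeePayload_To_EmployeeProfile" earliest=-12h
| where eventLogLevel like "CRITICAL%"
| eval lookback = if(tonumber(strftime(now(), "%H")) == 8, 12*3600, 4*3600)
| where _time >= now() - lookback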
Is it possible to configure heavy forwarders to send data to two tcpout groups (A, B) in outputs.conf and not block on a group B failure? We want to send all data to group A, and a subset of data (specific sourcetypes) to group B, but group B is in a remote location, our link to that location is not fully stable, and we don't want event loss in group A on link failures or group B failures.

[tcpout]

[tcpout:groupA]
server=indexerA1_ip:9997,indexerA2_ip:9997

[tcpout:groupB]
server=indexerB_ip:9997
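A hedged sketch of one approach, assuming the dropEventsOnQueueFull setting is acceptable for the less critical group: events destined for group B are discarded once its output queue has been full for the configured number of seconds, instead of blocking the pipeline (which is what would otherwise stall group A as well):

[tcpout:groupA]
server = indexerA1_ip:9997,indexerA2_ip:9997

[tcpout:groupB]
server = indexerB_ip:9997
# drop events for this group after the queue has been full for 30 seconds (the value is an assumption)
dropEventsOnQueueFull = 30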
I have a dbquery output that looks like the below; unfortunately I can't update the actual database query to make it more readable...

2022-12-16 21:30:17.689, TO_CHAR(schema.function(MAX(columnA)),'MM-DD-YYHH24:MI')="12-16-22 16:29"

I am trying to determine whether the two times at the beginning and end of the results are within 15 minutes of each other. I have tried renaming the column from the long string, but I can't get that working with the rename function. Does anyone have any ideas how to rename it (or whether I even need to) and then evaluate whether the times are within 15 minutes of each other? The query I ran to get the above is just index="abc".
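A hedged sketch of one way to approach it, assuming the long expression really is the literal field name (quoting it in rename is the part worth testing) and that the leading timestamp is already parsed as _time; the 15-minute check is just a difference in seconds:

index="abc"
| rename "TO_CHAR(schema.function(MAX(columnA)),'MM-DD-YYHH24:MI')" as end_time_raw
| eval end_time = strptime(end_time_raw, "%m-%d-%y %H:%M")
| eval within_15min = if(abs(end_time - _time) <= 15*60, "yes", "no")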
Are .p12 and .pfx files required to use Splunk after initial install?