All Topics

All, when running a metric-data query for one application, I'm getting this response: "Invalid application id xxx is specified". Here is the query: https://mycompany.saas.appdynamics.com/controller/rest/applications/xxx/metric-data?time-range-type=BEFORE_NOW&metric-path=Application Infrastructure Performance|*|JVM|Process CPU Burnt (ms/min)&duration-in-mins=1440&rollup=true Of course the app name is not actually xxx; it is fairly long. Perhaps the API has a length limit on the app name? I don't know. Any ideas about the real problem? Thanks.
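One thing worth checking, though only a guess and not a confirmed cause: when this request is sent from a script or curl rather than a browser, the spaces, pipes, and slash in metric-path usually need to be URL-encoded, and the application segment must be the numeric id (or the exact, encoded name). A hedged example of the same query with the path encoded:

  https://mycompany.saas.appdynamics.com/controller/rest/applications/xxx/metric-data?time-range-type=BEFORE_NOW&metric-path=Application%20Infrastructure%20Performance%7C*%7CJVM%7CProcess%20CPU%20Burnt%20(ms%2Fmin)&duration-in-mins=1440&rollup=true

If the encoded form still returns "Invalid application id", comparing the id against the list returned by GET /controller/rest/applications should confirm whether the id itself is the problem.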
Is there a way of limiting the search load on an indexer cluster using configuration in the indexer cluster itself, e.g. setting a maximum for how many concurrent searches are allowed to run simultaneously? Lowering the parameters in the Concurrency limits section of limits.conf and in savedsearches.conf does not have any effect on the indexers; these only seem to have an effect on the search head: https://docs.splunk.com/Documentation/Splunk/9.0.2/Admin/Limitsconf One environment I use has multiple standalone search heads that all execute searches against the same indexer cluster. The measured median number of concurrent searches on an indexer peer goes well above the max concurrent searches setting (twice the limit). That indexer cluster has a default limits.conf.
I am getting logs into Splunk, but the logs are in an improper format, so I want to make changes so that all my logs are indexed in a proper format. Below is the format of the logs. Please help me with the regex for props.conf & transforms.conf.

  2022-12-15T16:02:11+05:30 gd9017 msgtra.imss[26879]: NormalTransac#0112022 Dec 15 16:01:30 +05:30#0112022/12/15 16:01:31 +05:30#0112022 Dec 15 16:01:31 +05:30#01136082476.4647.1671100216806.JavaMail.jwsuser@communication-api-9-xrc8m#0118B3D3323-EFDB-5B05-A5EA-9077D10C03DD#011288C06408D#0111#011donotreply@test.com#011uat08@test.org.in#011Invoices not transmitted to ICEGATE because of Negative ledger balance.#011103.83.79.99#011[172.18.201.13]:25#011250 2.0.0 Ok: queued as 619AE341807#011sent#01100100000000000000#0110#011#0112022 Dec 15 16:01:31 +05:30#0112022 Dec 15 16:01:31 +05:30#011#0113#011

The fields in the logs are time, computer, from, to, subjectline, and attachment name.
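A minimal sketch of one possible starting point, assuming the sourcetype is called mail:imss (a made-up name), that each line is one event, and that the "#011" sequences appear literally in the indexed text as field separators. The field positions are inferred from the sample above, so verify them against real data:

  props.conf
  [mail:imss]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)
  TIME_PREFIX = ^
  TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
  MAX_TIMESTAMP_LOOKAHEAD = 30
  REPORT-imss_fields = imss_fields

  transforms.conf
  [imss_fields]
  # Skip the first 7 "#011"-separated columns after "NormalTransac", then capture
  # from, to, and subjectline. Extend the pattern the same way for the remaining fields.
  REGEX = NormalTransac(?:#011[^#]*){7}#011(?<from>[^#]*)#011(?<to>[^#]*)#011(?<subjectline>[^#]*)

If the separators are real tab characters rather than the literal text "#011", the regex would need "\t" in place of "#011".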
I've got 3 single values and I'd like to put them into one row within a panel. The problem is that the last single value jumps to the row below. Is there any way to reduce the size or width so that all 3 single values always stay in the row, no matter what size the browser window is?
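One approach that may help (a Simple XML sketch under assumptions, not a definitive fix): keep the three single values as separate panels in one row and shrink the dashboard cell's minimum width with a small stylesheet, since that minimum width is often what pushes the third panel onto a new row on narrow screens. The searches here are placeholders:

  <row>
    <panel>
      <html depends="$alwaysHideCSS$">
        <style>
          /* Assumption: the default min-width on dashboard cells is what wraps the third
             single value; this applies dashboard-wide. */
          .dashboard-cell { min-width: 150px !important; }
        </style>
      </html>
      <single>
        <search><query>| makeresults | eval count=1 | table count</query></search>
      </single>
    </panel>
    <panel>
      <single>
        <search><query>| makeresults | eval count=2 | table count</query></search>
      </single>
    </panel>
    <panel>
      <single>
        <search><query>| makeresults | eval count=3 | table count</query></search>
      </single>
    </panel>
  </row>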
Hi All, I'm facing an issue while appending the results of 2 searches using the append command. I have 2 searches that I'm using to get results, and both queries use a lookup command to get ip_address details from a lookup.

Search query 1: index=abc filter1=A | eval ..| table * | lookup def host as host OUTPUT ipaddress | stats ..| eval ..
Query 2: index=abc filter2=B | eval ..| table * | lookup def host as host OUTPUT ipaddress | stats ..| eval ..

Both searches are almost the same except for the "filter" field and the eval commands. I'm using the append command to append the results as below:

index=abc filter1=A | eval ..| table * | lookup def host as host OUTPUT ipaddress | stats ..| eval ..| append [search index=abc filter2=B | eval ..| table * | lookup def host as host OUTPUT ipaddress | stats ..| eval ..]

I'm getting an error ([subsearch]: Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup) when running the above query, but each search runs fine when I run it separately. Please let me know what I am doing wrong.
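That error usually points at the lookup not being usable inside the subsearch (missing file, app/permission scope, or resource limits), though I can't say which applies here. Since both branches hit the same index and the same lookup, one hedged way to sidestep append entirely is to run a single base search and tag which branch each event belongs to; the eval/stats parts are placeholders from the post and filter1/filter2 are assumed to be extracted fields:

  index=abc (filter1=A OR filter2=B)
  | eval branch=if(filter1=="A", "first", "second")
  | lookup def host AS host OUTPUT ipaddress
  | stats ... by branch

This also runs the lookup only once over the combined result set, which tends to be cheaper than the append pattern.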
Subnets in the lookup table:
3.124.56/32
64.37.99.0/24
55.63.24.7/16
How can I edit my search to exclude IPs from outside that match a subnet in the lookup file?
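One common pattern, sketched under assumptions (the lookup file is called subnets.csv with a column named subnet, and the event field is src_ip): define a lookup with CIDR matching in transforms.conf (or via a lookup definition in the UI), then drop events that match:

  transforms.conf
  [subnet_lookup]
  filename = subnets.csv
  # Match src_ip values against the CIDR ranges stored in the "subnet" column.
  match_type = CIDR(subnet)

  SPL
  index=...
  | lookup subnet_lookup subnet AS src_ip OUTPUT subnet AS matched_subnet
  | where isnull(matched_subnet)

Note that pointing | lookup directly at the .csv file skips the definition, so the CIDR match_type would not apply; the search has to reference the lookup definition.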
I am facing a socket issue in Splunk Cloud. How can we set these parameters on Splunk Cloud?
DefaultLimitFSIZE=-1
DefaultLimitNOFILE=64000
DefaultLimitNPROC=8192
Hello, I have a csv file that has some summary stats from an index, but the requirement is to show a sample event with all the info from that index. The CSV file has a hash number (derived from an account number) and some calculated stats. For example:

  PAN             total_trans  total_amount  HASHPAN
  1234******5678  15           15000         ABC123

The index has all the transactions and their details. We need to take a sample from the index for each row of the csv and output a new csv that includes the sample detail. Something like this:

  PAN             total_trans  total_amount  HASHPAN  _time    TRACE   TRANSACTIONID
  1234******5678  15           15000         ABC123   xxxxxxx  xxxxxx  xxxxxxxxx

I don't really want to use join, because the index has a lot of events (around 32 million). Is there an elegant way to get the data?
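A hedged sketch, assuming the lookup file is called pan_summary.csv, the index is called transactions, and the indexed events carry the same HASHPAN field: keep one sample event per HASHPAN, enrich it from the lookup, and write the result out, avoiding join entirely:

  index=transactions
  | dedup HASHPAN
  | table HASHPAN _time TRACE TRANSACTIONID
  | lookup pan_summary.csv HASHPAN OUTPUT PAN total_trans total_amount
  | where isnotnull(PAN)
  | table PAN total_trans total_amount HASHPAN _time TRACE TRANSACTIONID
  | outputlookup pan_summary_with_sample.csv

If HASHPAN is searchable in the raw events, a subsearch filter such as [| inputlookup pan_summary.csv | fields HASHPAN] at the start can cut down how much of the 32 million events the search has to scan.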
Hi. I'm looking to make a table/stats of all fields in a search that displays all the values in each field. Similar to stats count, but instead of counting the number of values, I want to display the values themselves.
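Two hedged options, depending on the shape you want (index is a placeholder): stats values(*) collapses every field to its distinct values in a single row, while fieldsummary gives one row per field with its distinct values and counts:

  index=...
  | stats values(*) AS *

  index=...
  | fieldsummary maxvals=100
  | table field values

With fieldsummary, the values column is a JSON list of value/count pairs rather than a plain multivalue field, so the first form is usually closer to a "stats count but with values" table.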
Hi, I am facing an strange issue on a SIEM Installation (Splunk 9.0.2 / ES 7.0.1) in regards to multisearch which is used inside Threat Intel Framework for threatmatch src. The customer complains he does not get notables for Threat Activity that is coming from IP Intel Matches. I was able to find out that the threat match is not running properly anymore... Running the search manually I get: Error in 'multisearch' command: Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command) Expanding all the Macros, I dont get whats up with the multisearch... This looks all proper to me. | multisearch [| tstats prestats=true summariesonly=true values("sourcetype"),values("DNS.dest") from datamodel="Network_Resolution"."DNS" by "DNS.query" | eval SEGMENTS=split(ltrim('DNS.query', "."), "."),SEG1=mvindex(SEGMENTS, -1),SEG2=mvjoin(mvindex(SEGMENTS, -2, -1), "."),SEG3=mvjoin(mvindex(SEGMENTS, -3, -1), "."),SEG4=mvjoin(mvindex(SEGMENTS, -4, -1), ".") | lookup mozilla_public_suffix_lookup domain AS SEG1 OUTPUTNEW length AS SEG1_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG2 OUTPUTNEW length AS SEG2_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG3 OUTPUTNEW length AS SEG3_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG4 OUTPUTNEW length AS SEG4_LENGTH | lookup cim_http_tld_lookup tld AS SEG1 OUTPUT tld AS TLD | eval TGT_LENGTH=coalesce(SEG4_LENGTH,SEG3_LENGTH,SEG2_LENGTH,SEG1_LENGTH, if(cidrmatch("0.0.0.0/0", 'DNS.query'), if(1==0, 4, 0), if(isnull(TLD), 2, 0))),SRC_LENGTH=mvcount(SEGMENTS),"DNS.query_truncated"=if(TGT_LENGTH==0, null, if(SRC_LENGTH>=TGT_LENGTH, mvjoin(mvindex(SEGMENTS, -TGT_LENGTH, -1), "."), null)) | fields - TLD SEG1 SEG2 SEG3 SEG4 SEG1_LENGTH SEG2_LENGTH SEG3_LENGTH SEG4_LENGTH TGT_LENGTH SRC_LENGTH SEGMENTS | eval "DNS.query_truncated"=if("DNS.query_truncated"="DNS.query" OR 'DNS.query_truncated'='DNS.query', null(), 'DNS.query_truncated') | lookup "threatintel_by_cidr" value as "DNS.query" OUTPUT threat_collection as tc0,threat_collection_key as tck0 | lookup "threatintel_by_domain" value as "DNS.query" OUTPUT threat_collection as tc1,threat_collection_key as tck1 | lookup "threatintel_by_domain" value as "DNS.query_truncated" OUTPUT threat_collection as tc2,threat_collection_key as tck2 | lookup "threatintel_by_system" value as "DNS.query" OUTPUT threat_collection as tc3,threat_collection_key as tck3 | where isnotnull('tck0') OR isnotnull('tck1') OR isnotnull('tck2') OR isnotnull('tck3') | eval intelzip0=mvzip('tc0','tck0',"@@") | eval intelzip1=mvzip('tc1','tck1',"@@") | eval intelzip2=mvzip('tc2','tck2',"@@") | eval intelzip3=mvzip('tc3','tck3',"@@") | eval threat_collection_key=mvappend(intelzip0,intelzip1,intelzip2,intelzip3) | eval "psrsvd_ct_sourcetype"=if(isnull('psrsvd_ct_sourcetype'),'psrsvd_ct_sourcetype','psrsvd_ct_sourcetype') | eval "psrsvd_nc_sourcetype"=if(isnull('psrsvd_nc_sourcetype'),'psrsvd_nc_sourcetype','psrsvd_nc_sourcetype') | eval "psrsvd_vm_sourcetype"=if(isnull('psrsvd_vm_sourcetype'),'psrsvd_vm_sourcetype','psrsvd_vm_sourcetype') | eval "psrsvd_ct_dest"=if(isnull('psrsvd_ct_dest'),'psrsvd_ct_DNS.dest','psrsvd_ct_dest') | eval "psrsvd_nc_dest"=if(isnull('psrsvd_nc_dest'),'psrsvd_nc_DNS.dest','psrsvd_nc_dest') | eval "psrsvd_vm_dest"=if(isnull('psrsvd_vm_dest'),'psrsvd_vm_DNS.dest','psrsvd_vm_dest') | eval "threat_match_field"=if(isnull('threat_match_field'),"src",'threat_match_field') | eval 
"threat_match_value"=if(isnull('threat_match_value'),'DNS.query','threat_match_value') ] [| tstats prestats=true summariesonly=true values("sourcetype"),values("All_Traffic.dest"),values("All_Traffic.user") from datamodel="Network_Traffic"."All_Traffic" where "All_Traffic.action"="allowed" by "All_Traffic.src" | eval SEGMENTS=split(ltrim('All_Traffic.src', "."), "."),SEG1=mvindex(SEGMENTS, -1),SEG2=mvjoin(mvindex(SEGMENTS, -2, -1), "."),SEG3=mvjoin(mvindex(SEGMENTS, -3, -1), "."),SEG4=mvjoin(mvindex(SEGMENTS, -4, -1), ".") | lookup mozilla_public_suffix_lookup domain AS SEG1 OUTPUTNEW length AS SEG1_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG2 OUTPUTNEW length AS SEG2_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG3 OUTPUTNEW length AS SEG3_LENGTH | lookup mozilla_public_suffix_lookup domain AS SEG4 OUTPUTNEW length AS SEG4_LENGTH | lookup cim_http_tld_lookup tld AS SEG1 OUTPUT tld AS TLD | eval TGT_LENGTH=coalesce(SEG4_LENGTH,SEG3_LENGTH,SEG2_LENGTH,SEG1_LENGTH, if(cidrmatch("0.0.0.0/0", 'All_Traffic.src'), if(1==0, 4, 0), if(isnull(TLD), 2, 0))),SRC_LENGTH=mvcount(SEGMENTS),"All_Traffic.src_truncated"=if(TGT_LENGTH==0, null, if(SRC_LENGTH>=TGT_LENGTH, mvjoin(mvindex(SEGMENTS, -TGT_LENGTH, -1), "."), null)) | fields - TLD SEG1 SEG2 SEG3 SEG4 SEG1_LENGTH SEG2_LENGTH SEG3_LENGTH SEG4_LENGTH TGT_LENGTH SRC_LENGTH SEGMENTS | eval "All_Traffic.src_truncated"=if("All_Traffic.src_truncated"="All_Traffic.src" OR 'All_Traffic.src_truncated'='All_Traffic.src', null(), 'All_Traffic.src_truncated') | lookup "threatintel_by_cidr" value as "All_Traffic.src" OUTPUT threat_collection as tc0,threat_collection_key as tck0 | lookup "threatintel_by_domain" value as "All_Traffic.src" OUTPUT threat_collection as tc1,threat_collection_key as tck1 | lookup "threatintel_by_domain" value as "All_Traffic.src_truncated" OUTPUT threat_collection as tc2,threat_collection_key as tck2 | lookup "threatintel_by_system" value as "All_Traffic.src" OUTPUT threat_collection as tc3,threat_collection_key as tck3 | where isnotnull('tck0') OR isnotnull('tck1') OR isnotnull('tck2') OR isnotnull('tck3') | eval intelzip0=mvzip('tc0','tck0',"@@") | eval intelzip1=mvzip('tc1','tck1',"@@") | eval intelzip2=mvzip('tc2','tck2',"@@") | eval intelzip3=mvzip('tc3','tck3',"@@") | eval threat_collection_key=mvappend(intelzip0,intelzip1,intelzip2,intelzip3) | eval "psrsvd_ct_sourcetype"=if(isnull('psrsvd_ct_sourcetype'),'psrsvd_ct_sourcetype','psrsvd_ct_sourcetype') | eval "psrsvd_nc_sourcetype"=if(isnull('psrsvd_nc_sourcetype'),'psrsvd_nc_sourcetype','psrsvd_nc_sourcetype') | eval "psrsvd_vm_sourcetype"=if(isnull('psrsvd_vm_sourcetype'),'psrsvd_vm_sourcetype','psrsvd_vm_sourcetype') | eval "psrsvd_ct_dest"=if(isnull('psrsvd_ct_dest'),'psrsvd_ct_All_Traffic.dest','psrsvd_ct_dest') | eval "psrsvd_nc_dest"=if(isnull('psrsvd_nc_dest'),'psrsvd_nc_All_Traffic.dest','psrsvd_nc_dest') | eval "psrsvd_vm_dest"=if(isnull('psrsvd_vm_dest'),'psrsvd_vm_All_Traffic.dest','psrsvd_vm_dest') | eval "psrsvd_ct_user"=if(isnull('psrsvd_ct_user'),'psrsvd_ct_All_Traffic.user','psrsvd_ct_user') | eval "psrsvd_nc_user"=if(isnull('psrsvd_nc_user'),'psrsvd_nc_All_Traffic.user','psrsvd_nc_user') | eval "psrsvd_vm_user"=if(isnull('psrsvd_vm_user'),'psrsvd_vm_All_Traffic.user','psrsvd_vm_user') | eval "threat_match_field"=if(isnull('threat_match_field'),"src",'threat_match_field') | eval "threat_match_value"=if(isnull('threat_match_value'),'All_Traffic.src','threat_match_value') ] | mvexpand threat_collection_key | stats values("dest") as 
"dest",values("sourcetype") as "sourcetype",values("user") as "user" by threat_match_field,threat_match_value,threat_collection_key | rex field=threat_collection_key "^(?<threat_collection>.*)@@(?<threat_collection_key>.*)$" | eval "dest"=mvindex('dest',0,10-1) | eval "sourcetype"=mvindex('sourcetype',0,10-1) | eval "user"=mvindex('user',0,10-1) | eval certificate_intel_key=if(threat_collection="certificate_intel",'threat_collection_key',null()) | eval email_intel_key=if(threat_collection="email_intel",'threat_collection_key',null()) | eval file_intel_key=if(threat_collection="file_intel",'threat_collection_key',null()) | eval http_intel_key=if(threat_collection="http_intel",'threat_collection_key',null()) | eval ip_intel_key=if(threat_collection="ip_intel",'threat_collection_key',null()) | eval process_intel_key=if(threat_collection="process_intel",'threat_collection_key',null()) | eval registry_intel_key=if(threat_collection="registry_intel",'threat_collection_key',null()) | eval service_intel_key=if(threat_collection="service_intel",'threat_collection_key',null()) | eval user_intel_key=if(threat_collection="user_intel",'threat_collection_key',null()) | lookup "certificate_intel" _key as "certificate_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "email_intel" _key as "email_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "file_intel" _key as "file_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "http_intel" _key as "http_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "ip_intel" _key as "ip_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "process_intel" _key as "process_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "registry_intel" _key as "registry_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "service_intel" _key as "service_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup "user_intel" _key as "user_intel_key" OUTPUTNEW "description","threat_key","weight","disabled" | lookup threat_group_intel _key as threat_key OUTPUTNEW description,weight | eval weight=if(isnum(weight),weight,60) | fields - intelzip*,"certificate_intel_key","email_intel_key","file_intel_key","http_intel_key","ip_intel_key","process_intel_key","registry_intel_key","service_intel_key","user_intel_key" | where NOT match(disabled, "1|[Tt]|[Tt][Rr][Uu][Ee]") | dedup threat_match_field,threat_match_value,threat_key Has anyone faced similar problems? BR, Markus
Hi, I was going through my Splunk setup and came across this warning when clicking on LDAP Groups under "Authentication Methods > LDAP strategies": "LDAP server warning: Size limit exceeded". I was just wondering what could be the cause and how to resolve this warning? Thank you in advance for any assistance. Mikhael
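My understanding (worth verifying against the docs for your version) is that this warning means the LDAP query returned more entries than either the strategy's client-side limit or the LDAP server's own limit allows, so the group/user list shown may be incomplete. A hedged sketch of the client-side knob, with the strategy name as a placeholder:

  authentication.conf
  [<your_ldap_strategy_name>]
  # Assumption: raise the number of entries Splunk requests per LDAP search.
  # The LDAP server's own size limit (often 1000 on Active Directory) may still cap
  # the result, in which case narrowing groupBaseDN/userBaseDN or the filters so
  # fewer entries match is the more reliable fix.
  sizelimit = 10000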
Hello, I am very new to Splunk. I want to trigger an alert when a second event does not occur within 20 minutes of the first event. The index and sourcetype are the same for both events. Both events have the same value for the requestId field but different values for the message field. For example,

First Event:
Event A
  level: INFO
  logger_name: filename1
  message: First event
  requestId: 12345
  thread_name: http-t2
  timestamp: 2022-12-19T05:44:51.757Z

Event B
  level: INFO
  logger_name: filename1
  message: First event
  requestId: 67890
  thread_name: http-t2
  timestamp: 2022-12-19T05:44:51.757Z

Second Event:
Event C
  level: INFO
  logger_name: filename2
  message: Second Event
  requestId: 12345
  thread_name: http-t1
  timestamp: 2022-12-19T05:44:51.926Z

Since Event B with requestId 67890 does not have a second event with the same request ID, I want Event B as my output:

  level: INFO
  logger_name: filename1
  message: First event
  requestId: 67890
  thread_name: http-t2
  timestamp: 2022-12-19T05:44:51.757Z

Any help is appreciated.
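A hedged sketch of one way to do it, assuming the message field reliably distinguishes the two event types, the index/sourcetype names below are placeholders, and the alert runs over a window long enough to contain both events (e.g. the last 40 minutes on a short schedule):

  index=myindex sourcetype=mysourcetype (message="First event" OR message="Second Event")
  | stats earliest(_time) AS first_time
          values(message) AS messages
          values(logger_name) AS logger_name
          values(level) AS level
        BY requestId
  | where mvcount(messages)=1 AND messages="First event" AND first_time < relative_time(now(), "-20m")

Each surviving row is a requestId that has only a first event and where more than 20 minutes have already passed, which matches the Event B case; alert when the result count is greater than zero.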
Hi everyone! I plan to ingest some OpenTelemetry spans. When I want to search/filter my ingested spans for certain resource/span attributes, if I understood correctly from the docs here: Analyze services with span tags and MetricSets in Splunk APM, I won't be able to search or filter until I explicitly start to index the resource/span attributes I want to search, correct? Or is my understanding wrong? Is there a way to automatically tell Splunk that I want to add to the index any OpenTelemetry resource/span attribute that is identified in a span? Or do I always need to add them first manually?
We have a distributed Splunk (8.x) environment on-prem, with a CM and 3 peers, 2 SHs, 1 deployment server, and many clients. On one of my Windows 10 clients, I have a csv file that gets new data appended to it every 5 minutes via a separate Python script that runs locally. I have verified the appending is working as intended. On the UF, I have set up an app with a monitor input to watch the file. A custom props.conf is located on the indexer peers. I am experiencing some unexpected behavior and am struggling to find a solution.

The first odd thing I have noticed is that when data is appended to the csv file, either via the Python script or by adding it manually, I sometimes get an event ingested but sometimes do not. If I manually add lines 4 or 5 times over a 15 minute period, I might get 1 event. Sometimes I won't get any events at all, but if I do get an event, it's never more than 1.

The second weird thing I noticed is that the event is always the top line of the csv file, never the line that I added manually or via Python to the bottom of the file. The file is over 2500 lines. I have verified that the lines are actually appended to the bottom, and persist.

I suspect that there might be an issue with the timestamp or possibly the LINE_BREAKER, but I cannot say definitively that is the issue. (Maybe the LINE_BREAKER regex is not actually there?) I can take the csv file and add it without issue using the "Add Data" process in Splunk Web. It breaks the events exactly how I would expect (not just the top line) with correct timestamps, and looks perfect. Using the Add Data method, I copied the props from the Web UI and tried to add them to my app's custom props.conf that is pushed to the peers. I am still left with the same weird behavior as with my original custom props.conf. I am reaching a point where googling is becoming too specific to get me a solid lead on my next troubleshooting steps. Does anyone know what might be causing either of these issues?

Here is a snippet of the csv file (the top 3 lines):

  profile_val,start_time,scheduler_tree,bot_name,tree_name,query_val,kill_option,stopped_on,remarks,status_val,ts
  Fit,11/03/2022 19:34:00.277,lb_prod_scheduler,spotbot,,%20redacted%20auto,false,11/03/2022 19:40:00.107,BOT did not pinged for more than 300 seconds,failed,2022-11-04 00:40:05
  Fit,11/03/2022 19:49:00.143,lb_prod_scheduler,spotbot,,%20redacted%20auto,false,11/03/2022 19:55:00.091,BOT did not pinged for more than 300 seconds,failed,2022-11-04 00:55:05

Here is my custom props.conf that I initially tried:

  [bot:logs]
  description = BOT Application Status Logs
  category = Custom
  disabled = false
  CHARSET=UTF-8
  KV_MODE = none
  LINE_BREAKER = ([\r\n]+)
  NO_BINARY_CHECK = true
  SHOULD_LINEMERGE = false
  TIME_FORMAT = %m/%d/%Y%t%H:%M:%S.%f
  TIME_PREFIX = Fit,
  INDEXED_EXTRACTIONS=CSV
  FIELD_DELIMITER=,
  FIELD_NAMES=profile_val,start_time,scheduler_tree,bot_name,tree_name,query_val,kill_option,stopped_on,remarks,status_val,ts

Here is the "Add Data" props.conf that works in the Web UI but not on the indexers:

  [bot:logs]
  SHOULD_LINEMERGE=false
  LINE_BREAKER=([\r\n]+)
  NO_BINARY_CHECK=true
  CHARSET=UTF-8
  EXTRACT-Ex_account=^(?:[^:\n]*:){4}(?P<Ex_account>\d+)
  INDEXED_EXTRACTIONS=csv
  KV_MODE=none
  category=Structured
  description=Comma-separated value format. Set header and other settings in "Delimited Settings"
  disabled=false
  pulldown_type=true
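One detail that may matter here, hedged because I can only infer it from how structured inputs generally work: settings driven by INDEXED_EXTRACTIONS are applied by the universal forwarder itself, not by the indexers, so a sourcetype that relies on INDEXED_EXTRACTIONS=CSV needs its props.conf deployed in the app on the UF; with the props only on the indexer peers, the structured parsing (and its TIMESTAMP handling) never runs. A sketch of what the forwarder-side app could contain, with the monitor path and index as placeholders:

  inputs.conf (on the UF)
  [monitor://C:\path\to\bot_status.csv]
  index = main
  sourcetype = bot:logs
  disabled = false

  props.conf (also on the UF, because INDEXED_EXTRACTIONS is applied at the forwarder)
  [bot:logs]
  INDEXED_EXTRACTIONS = csv
  HEADER_FIELD_LINE_NUMBER = 1
  TIMESTAMP_FIELDS = start_time
  # Assumption: %3N for milliseconds; adjust if the start_time format differs.
  TIME_FORMAT = %m/%d/%Y %H:%M:%S.%3N
  SHOULD_LINEMERGE = false

It is also worth confirming that the Python script truly appends in place rather than rewriting the file, since a rewritten file can make the tailing processor re-evaluate it from the top, which would fit the "only the first line shows up" symptom.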
[monitor://C:\*_IPCATMDetailLog.txt]
disable=0
index=test
sourcetype=IPCATMDetailLog

That is what I need to monitor, because day by day the log file name includes the date, for example 20221219_IPCATMDetailLog.txt, 20221218_IPCATMDetailLog.txt, etc. I don't know why it only picks up the log from 20221215. Before that I used the default sourcetype assigned by Splunk; I switched to my own sourcetype on the afternoon of 12/15/2022, and the files from the days after that are not ingested any more. I want to get all the logs, day by day, with sourcetype=IPCATMDetailLog. Thanks for your help.
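A couple of hedged things to rule out: the stanza key is normally spelled "disabled" rather than "disable" (the input still defaults to enabled, so this is probably not the cause, but it is easy to fix), and if every day's file starts with the same header bytes, the forwarder's CRC check can mistake a new file for one it has already read. A sketch of the stanza with a salted CRC, only worth adding if the files really do share identical leading bytes:

  [monitor://C:\*_IPCATMDetailLog.txt]
  disabled = 0
  index = test
  sourcetype = IPCATMDetailLog
  # Assumption: daily files begin with the same bytes; salting the CRC with the full
  # path makes each day's file look unique to the tailing processor.
  crcSalt = <SOURCE>

After changing inputs.conf on the forwarder, a restart of the UF (or a reload of the deployed app) is needed for the new settings to take effect.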
Hi, I was about to create a summary index for log sizes/counts by host and by sourcetype. I need this for alerting when log volumes change. I can create the indexes/searches, but I thought this might be a common requirement - does anyone know of an app/add-on that does this already? Thanks, Bevan
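In case it helps while looking for an app, a hedged sketch of the roll-your-own version: a scheduled search that writes hourly event counts per index/sourcetype/host into a summary index (si_volume is a made-up index name that would need to exist first):

  | tstats count WHERE index=* earliest=-1h@h latest=@h BY _time span=1h index sourcetype host
  | collect index=si_volume

This only captures event counts; for ingested bytes, the license usage data in index=_internal source=*license_usage.log type=Usage (fields h, st, b) covers host and sourcetype volume, and the Monitoring Console exposes much of the same information.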
I was wondering:
1. We have search-time and index-time field extractions. For search-time field extractions, can I push the same props/transforms to both the SH and the indexer cluster, or is there no need to push them to the indexers since the search head replicates the knowledge bundle to its search peers? And do I need to restart both the search head and the indexers for these conf files to take effect?
2. For index-time field extractions we usually push the props/transforms to the HF; do I need to push the same conf to the indexers as well? And do I need to restart both the HF and the indexers for these conf files to take effect?
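As I understand it (verify against the docs for your version), placement follows from the stanza type: search-time classes (EXTRACT-, REPORT-, KV_MODE, ...) only need to live on the search head, are picked up without a restart, and reach the peers via the knowledge bundle; index-time classes (TRANSFORMS-, LINE_BREAKER, TIME_*, ...) must live wherever parsing first happens - the HF if one sits in front of the indexers, otherwise the indexers - and that tier needs a restart (or config reload) for changes to apply. A sketch with made-up sourcetype and field names:

  # Search-time (search head only; no restart needed)
  props.conf
  [my:sourcetype]
  EXTRACT-user = user=(?<user>\S+)

  # Index-time (HF if present, otherwise indexers; restart/reload that tier)
  props.conf
  [my:sourcetype]
  TRANSFORMS-idxuser = add_idx_user

  transforms.conf
  [add_idx_user]
  REGEX = user=(\S+)
  FORMAT = idx_user::$1
  WRITE_META = true

For the index-time case, a fields.conf stanza with INDEXED = true for idx_user on the search head is also usually needed so the field is searched as an indexed field.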
Hi Splunkers, I have a timechart whose count-by-VQ values are less than 10, but the default y-axis scale is 100, so I can't see any columns in my timechart. How do I change the y-axis scale so it adjusts automatically to my maximum value? For example, if the max value is less than 10, the scale should default to 10. Thanks in advance, Kevin
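For what it's worth, in Simple XML the y-axis follows the data automatically unless a maximum has been pinned, so it may be worth checking whether the panel carries charting.axisY.maximumNumber and removing it. A hedged sketch (the search is a placeholder):

  <chart>
    <search><query>index=... | timechart count BY VQ</query></search>
    <option name="charting.chart">column</option>
    <!-- Remove or omit a fixed maximum so the axis scales to the data: -->
    <!-- <option name="charting.axisY.maximumNumber">100</option> -->
    <option name="charting.axisY.minimumNumber">0</option>
  </chart>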
I need a query that groups similar stack traces across requests (CR = correlation id) in a specific format.

Query: index="myIndex" source="/mySource" "*exception*" | rex field=_raw "(?P<firstFewLinesOfStackTrace>(.*\n){1,5})" | eval date=strftime(_time, "%d-%m-%Y") | head 3 | reverse | table date, CR, count, firstFewLinesOfStackTrace

Desired format:

  Date      CR                Count  Log
  01/12/22  CR_1 CR-2         2      StackTrace1 StackTrace2 StackTrace3
  02/12/22  CR_1 CR-2 CR-3    3      DiffStackTrace1 DiffStackTrace2 DiffStackTrace3

I am not sure how to group these logs, since each stack trace has _date as a unique identifier, and I also don't know how to get the result into the above format (whether to use stats, eventstats, table, etc.). Please help, thanks in advance.
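A hedged sketch of one way to reach that shape, assuming CR is already an extracted field and that Count means the number of distinct correlation ids per day; stats with multivalue aggregations does the grouping:

  index="myIndex" source="/mySource" "*exception*"
  | rex field=_raw "(?P<firstFewLinesOfStackTrace>(.*\n){1,5})"
  | eval date=strftime(_time, "%d-%m-%Y")
  | stats values(CR) AS CR, dc(CR) AS count, values(firstFewLinesOfStackTrace) AS Log BY date
  | table date CR count Log

CR and Log come out as multivalue fields (one value per line in the table cell), which matches the stacked layout in the example; if the grouping should instead be per stack trace, move firstFewLinesOfStackTrace into the BY clause.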
Hi, I am going to create a DC list lookup daily using nslookup. How can I define the lookup without a header, or should I create a header manually?
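As far as I know, a CSV lookup file does need a header row as its first line (transforms.conf cannot supply column names for a plain file-based lookup), so the simplest approach is to have the daily job write the header before the nslookup output. A hedged sketch with made-up file, field, and stanza names:

  dc_list.csv (the generation script writes the header line first, then one line per DC)
  dc_host,dc_ip
  dc01.example.local,10.1.2.3
  dc02.example.local,10.1.2.4

  transforms.conf
  [dc_list]
  filename = dc_list.csv

With that in place, | inputlookup dc_list.csv or | lookup dc_list dc_host OUTPUT dc_ip works as usual, and the header only has to be emitted once per daily regeneration of the file.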