All Topics


TL;DR: I'm trying to break a large 25-day search up into 25 separate one-day searches and run them automatically. I'm updating a lookup table that tracks which indexes are affected by the new log4j exploit, so that subsequent searches only have to look through the affected indexes. The lookup table takes hours each time it is updated for a single day. The problem is that I need to know all of the affected indexes across every day since the log4j disclosure around December 10th.

Query that updates the lookup table:

NOT [| inputlookup log4j_indexes.csv | fields index] | regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)" | table index | inputlookup append=true log4j_indexes.csv | dedup index | outputlookup log4j_indexes.csv

Each time this query finishes, it appends the indexes affected by the log4j exploit to the lookup table. I need to automate the scanning over a large timeframe (December 10th 2021 - January 5th 2022), but I want the lookup table to update as the search works through each day. Breaking the large 25-day search into 25 separate one-day searches also means that if a search fails, I don't lose all progress, and I can then apply the same methodology to other searches.

Lookup table (Log4J_affected_indexes):

Index
index_1
index_2

How I've tried to solve the problem

Commands I've tried while attempting to solve this:
- foreach
- map
- gentimes
- subsearch
- saved searches

Gentimes (smaller timeframes) -> map

Explanation of the query below: the gentimes part creates a table based on the selected time range:

Earliest               Latest
01/02/2022:00:00:00    01/03/2022:00:00:00
01/03/2022:00:00:00    01/04/2022:00:00:00
01/04/2022:00:00:00    01/05/2022:00:00:00

I try to pass those values to a subsearch as the earliest and latest parameters using map. I understand now that map doesn't seem to work for this, and I get no results when the search runs.

(gentimes and map) Query:

|gentimes start=-1
|addinfo
|eval datetime=strftime(mvrange(info_min_time,info_max_time,"1d"),"%m/%d/%Y:%H:%M:%S")
|mvexpand datetime
|fields datetime
|eval latest=datetime
|eval input_earliest=strptime(datetime, "%m/%d/%Y:%H:%M:%S") - 86400
|eval earliest=strftime(input_earliest, "%m/%d/%Y:%H:%M:%S")
|fields earliest, latest
| map search="search NOT [| inputlookup log4j_indexes.csv | fields index] earliest=$earliest$ latest=$latest$ | regex _raw=\"(\$|\%24)(\{|\%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|\%3A|\$|\%24|}|\%7D)\" | table index | inputlookup append=true log4j_indexes.csv | dedup index | outputlookup log4j_indexes.csv"

Gentimes subsearch -> main search

Explanation of the query below: I use gentimes in a subsearch to produce smaller timeframes from the larger selected timeframe:

Earliest               Latest
01/02/2022:00:00:00    01/03/2022:00:00:00
01/03/2022:00:00:00    01/04/2022:00:00:00
01/04/2022:00:00:00    01/05/2022:00:00:00

This doesn't give me errors, but I get no matches. I can almost guarantee this isn't running separate searches per row of the table above, and I'm not sure how that can be done.

(gentimes subsearch) Query:

NOT [| inputlookup log4j_indexes.csv | fields index] [|gentimes start=-1 |addinfo |eval datetime=strftime(mvrange(info_min_time,info_max_time,"1d"), "%m/%d/%Y:%H:%M:%S") |mvexpand datetime |fields datetime |eval latest=datetime |eval input_earliest=strptime(datetime,"%m/%d/%Y:%H:%M:%S") - 86400 |eval earliest=strftime(input_earliest,"%m/%d/%Y:%H:%M:%S") |fields earliest, latest] | regex _raw="(\$|\%24)(\{|\%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|\%3A|\$|\%24|}|\%7D)" | table index | inputlookup append=true log4j_indexes.csv | dedup index | outputlookup log4j_indexes.csv

Conclusion

Other failed attempts:
- foreach (can't run non-streaming commands)
- passing earliest and latest parameters to a saved search; savedsearch doesn't work this way

Other solutions I've thought of:
- Running a subsearch that updates a smaller_timeframe.csv file to keep track of the smaller timeframes, then somehow passing those timeframe parameters (earliest / latest) into a search.
- Some kind of recursive search where each search triggers another, so each run could kick off the next with earliest and latest incremented forward one day (or any amount of time).
- Maybe Splunk has a feature (not on the search head) that can automate the same search over small timeframes across a large period of time, possibly with scheduling built in.

If there is any other information I can give to help others solve this with me, just ask. I can edit this post...
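For reference, a hedged sketch of how the map approach might be made to work: generate one row per day with epoch start/end values (the field names day_start and day_end are of my choosing), and raise map's default maxsearches limit of 10 so all 25 days actually run. The 25-day window, lookup name, and regex are taken from the question; treat this as a sketch, not a verified fix.

| makeresults
| eval day_start=mvrange(relative_time(now(), "-25d@d"), relative_time(now(), "@d"), 86400)
| mvexpand day_start
| eval day_end=day_start+86400
| fields day_start day_end
| map maxsearches=30 search="search NOT [| inputlookup log4j_indexes.csv | fields index] earliest=$day_start$ latest=$day_end$ | regex _raw=\"(\$|\%24)(\{|\%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|\%3A|\$|\%24|}|\%7D)\" | table index | inputlookup append=true log4j_indexes.csv | dedup index | outputlookup log4j_indexes.csv"

Passing epoch values avoids any ambiguity in how the earliest/latest tokens are parsed, and without maxsearches raised above map's default only the first ten one-day searches would execute.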
I need to customize the alert message (sent via email) with information that is not intrinsic to the alert itself. For example, if the number of users logging in over a 5-minute period exceeds a threshold, send the alert email with the number of IP addresses that have logged in during that period. We have been trying Custom Alert Actions, but we suspect there may be an easier way. Is there a way to have an alert trigger a report, then email its contents to a select group? We have an alert X, set up to trigger on custom machine learning parameters: it only fires when the actual number of events is much higher than the mathematical prediction. When X is triggered, we need to do two things. First, run a report compiling all the information needed to triage; a lot of this is in a dashboard, but it can be produced by any number of reports and/or Splunk queries. Second, the information from that report or those queries needs to go into an email, either as an attached file or by using Splunk tokens to convey the report results. My approach in my head is alert > run report > email data from that report. Thanks in advance!
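One hedged way to approximate "alert > run report > email" without a custom alert action is to fold the triage fields into the alert's own search, then let the stock email action include the results inline. A minimal sketch, assuming a hypothetical auth index, field names, and a fixed threshold standing in for the ML-based condition:

index=auth action=login earliest=-5m@m latest=@m
| stats count AS logins dc(src_ip) AS distinct_ips values(src_ip) AS src_ips
| where logins > 100

With the trigger condition set to "number of results > 0" and the email action configured to include results inline, the row of triage fields (logins, distinct_ips, src_ips) arrives in the message body.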
We have alerts set for 65 client sites and a handful of internal sites. If we want to disable ALL the alerts with one action, is this function available in AppDynamics?
Similar to https://community.splunk.com/t5/Splunk-Search/Word-Cloud-Not-Showing/td-p/544413. When I select "Tag Cloud" as the visualization for

stats count by campaign

nothing renders. It seems to do this for all of my "stats count by xxxx" searches, as well as my other searches.
Can someone help me build a Splunk search with the criteria below? My application contains several fields, one of which is "Request Number". I want the search query to fetch the records which have "Request Number" equal to "0". I have the source name, host name, etc. I'm getting other results, but none with Request Number 0. Can someone help me out here?
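A hedged sketch of one way to filter on that field, assuming it really is named "Request Number" (with a space) and is already extracted; the string comparison covers the case where the zero is stored as text, and the index/source/host values are placeholders:

index=<your_index> source=<your_source> host=<your_host>
| rename "Request Number" AS request_number
| where request_number=="0" OR request_number==0
| table _time host source request_number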
We have a commercial appliance that requires an HEC configuration in Splunk to ingest data. I have configured the TA, the app, and the HEC configuration on the search head, but no data is being ingested. I was told that this requires a valid certificate on the search head in order to work. Is this true? In the HEC configuration there is a check box for not using SSL. I have also run the curl -k command successfully using the generated token.
I have two searches where I need to run a stats count to do some calculations. The first search is:

index=xxx wf_id=xxx wf_env=xxx xxx | stats count

The second search is:

index=xxx wf_id=xxx wf_env=xxx sourcetype=xxx usecase=xxx | stats count by request_id

The first search uses a simple stats to get its count, but the second uses stats count by request_id, so I am having trouble getting the counts for both. Ideally I would like to get both counts and divide them. I've used appendcols, but it returns empty fields for both searches. Any guidance on how to get the counts for these searches would be helpful!

Working example:

_time    Search 1 counts    Search 2 counts    Search 1 / Search 2
00:30    50                 25                 2
00:35    100                25                 4
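A hedged sketch of one way to line the two counts up over time and divide them: use timechart in both legs so appendcols pairs rows on _time. The 5-minute span and the use of dc(request_id) for the second count are assumptions; adjust both to the real fields behind the redacted xxx values.

index=xxx wf_id=xxx wf_env=xxx xxx
| timechart span=5m count AS search1_count
| appendcols
    [ search index=xxx wf_id=xxx wf_env=xxx sourcetype=xxx usecase=xxx
      | timechart span=5m dc(request_id) AS search2_count ]
| eval ratio=round(search1_count/search2_count, 2)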
Hi! I have a field (docsReturned) summarized by customer id that I would like to turn into a top-X pie chart, while rolling the values not displayed into an OTHER row the way the timechart and top commands do. Base command example:

<search here> | stats sum(docsReturned) by customerId

I assumed it would work the same way as those commands (that I could simply set a limit on the stats transforming command), as I can with timechart, but that does not seem to be supported. I also tried chaining the above search with the top command, but top appears to only count rows; at least I cannot figure out how to make it work on an already summarized field. Last but not least, I tried chaining it with the sort command. "| sort 3 -docsReturned" is the closest I have gotten to what I want, but then I am missing the OTHER row, which is quite important in this scenario. Sample output that I would like (in a scenario where the dynamic limit is set to 3):

1  Customer 1  14079
2  Customer 2  7015
3  Customer 3  5302
4  OTHER       6407

It seems like this should be an easy thing (since it is available in the timechart and top commands), and hopefully I have simply overlooked something. Fingers crossed that someone here can point me in the right direction?
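A hedged sketch of the usual workaround: rank the summarized rows, fold everything past the limit into an OTHER bucket, and re-aggregate. The limit of 3 mirrors the example above; raise it as needed.

<search here>
| stats sum(docsReturned) AS docsReturned by customerId
| sort - docsReturned
| streamstats count AS rank
| eval customerId=if(rank<=3, customerId, "OTHER")
| stats sum(docsReturned) AS docsReturned by customerId
| sort - docsReturned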
I am using the Splunk Slack webhook to send alert results to Slack channels, but at present it only displays the first result of the alert. I want to display all the values rather than just the first one. I tried iterating over each result, but that creates multiple messages, which is not the solution I am looking for.

<$results_url$|$result.sum$ transaction failed - tr_id=$result.tr_id$>

tr_id has multiple values, but only the first value is displayed in Slack at the moment.
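A hedged sketch of one workaround: collapse the results into a single row before the alert triggers, so the $result.*$ tokens carry every value in one Slack message. The field names come from the message template above; how the sum field is aggregated is an assumption, so replace that clause with whatever actually produces it.

... base alert search ...
| stats values(tr_id) AS tr_id sum(sum) AS sum
| eval tr_id=mvjoin(tr_id, ", ")

The Slack message template can then stay as written, since $result.tr_id$ now holds the joined list of transaction ids.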
Hi, how can I write the name of a field into its value? I have:

test_1    test_2    test_3
warn      error     critical

I want:

test
test_1 - warn
test_2 - error
test_3 - critical

I must do this for an unknown set of fields (for now I have 3 test fields, but there can be more, so it must be variable). I thought of the foreach command, but I don't know how to do it. Can you help me work out whether this use case is possible?
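foreach can handle this when the fields share a prefix; a hedged sketch assuming they are all named test_*. Inside the foreach template, "<<FIELD>>" in double quotes is the field's name as a string and '<<FIELD>>' in single quotes is its value, so each pair is appended to a multivalue field called test.

... base search ...
| foreach test_*
    [ eval test=mvappend(test, "<<FIELD>>" . " - " . '<<FIELD>>') ]
| fields test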
I'm pretty new to Splunk and have been tasked with standing up an app and building a dashboard for my team. I'm currently researching ways to integrate an external reverse DNS lookup at an enterprise level. The goal is to match/identify the business partner's name with their external connection's IP within our database. As of now, we just have a large set of outbound/inbound IPs, and it would benefit us to match a name to them. Is this a possible task, and if it is, what are the best practices or known solutions for this request? Thank you in advance!
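For the reverse-DNS part, Splunk ships an external lookup called dnslookup that resolves an IP to a hostname; mapping that hostname (or the raw IP) to a business-partner name would still need a CSV maintained in-house. A hedged sketch, where src_ip and the partners.csv lookup with its ip and partner_name columns are assumptions for illustration:

... base search ...
| lookup dnslookup clientip AS src_ip OUTPUT clienthost AS src_host
| lookup partners.csv ip AS src_ip OUTPUTNEW partner_name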
Hello, I have a log file which is capturing processed files. The file text always has the same string; it's just the date prefix which changes. I would like to read in the files processed today, compare them to yesterday, and show the difference. I have used the answers to other questions to read the file names in by day, but the diff command does not work; is it only for integers rather than strings?

Successfully processed file 20211105-zone-Foo Bar1.txt
Successfully processed file 20211105-zone-Bar 1.txt
Successfully processed file 20211106_zone-Foo Bar1.txt
Successfully processed file 20211106-zone-Bar Foo1.txt

index=foo source=bar earliest=-1d@d latest=now "Successfully processed file"
| rex "\-zone\-(?<File>.+)"
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart values(File) by Day
| eval Diff=Yesterday-Today
| where Yesterday!=Today

I would like to report that Bar 1.txt and Bar Foo1.txt are the differences.
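A hedged sketch of a set-difference style comparison: group by file name instead of charting by day, then keep the files seen on only one of the two days. The rex is widened to accept both the "-zone-" and "_zone-" forms that appear in the sample lines.

index=foo source=bar earliest=-1d@d latest=now "Successfully processed file"
| rex "[-_]zone-(?<File>.+)"
| eval Day=if(_time<relative_time(now(),"@d"), "Yesterday", "Today")
| stats values(Day) AS Day by File
| where mvcount(Day)=1

Each remaining row is a file processed on one day but not the other, which would surface Bar 1.txt and Bar Foo1.txt in the example above.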
I have deployed Splunk on a CentOS machine and forwarder on Windows Server 2012 R2. After installing the universal forwarder, the host is not showing up in 
Hello, I have a table like this:

customer      prod_1    prod_2    prod_3
customer_1              green
customer_2    red                 orange

and I would like to count customers by product to get a table like this:

product    count    customer
prod_1     1        customer_2
prod_2     1        customer_1
prod_3     1        customer_2

Is it possible?
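A hedged sketch using untable to pivot the product columns back into rows before counting. The column names customer and prod_1..prod_3 are taken from the example; rows with empty cells are dropped.

... base search producing the table above ...
| untable customer product value
| where value!=""
| stats count AS count values(customer) AS customer by product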
Hi,

2022-01-04 23:10:43,224 INFO [APP] sessionDestroyed, Session Count: 0
2022-01-04 23:12:34,238 INFO [APP] sessionCreated, Session Count: 1
2022-01-04 23:13:43,224 INFO [APP] sessionDestroyed, Session Count: 10
2022-01-04 23:14:34,238 INFO [APP] sessionCreated, Session Count: 7

Desired extracted output:

                       sessionCreated    sessionDestroyed
2022-01-04 23:10:43                      0
2022-01-04 23:12:34    1
2022-01-04 23:13:43                      10
2022-01-04 23:14:34    7
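A hedged sketch of the extraction, assuming the events look exactly like the samples above: pull out the event type and the count with rex, then split the count into the two columns.

<base search> "Session Count"
| rex "INFO \[APP\] (?<session_event>sessionCreated|sessionDestroyed), Session Count: (?<session_count>\d+)"
| eval sessionCreated=if(session_event=="sessionCreated", session_count, null())
| eval sessionDestroyed=if(session_event=="sessionDestroyed", session_count, null())
| table _time sessionCreated sessionDestroyed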
Hello team, I need help with a Splunk query. I am trying to get the AWS instance ID via a lookup table, but I am not able to get the instance name for the IP. Please find the query below and share your suggestions.

index=c3d_security host=ip-10-10* rule=corp_deny_all_to_untrust NOT dest_port=4431 | table src_ip dest_ip transport dest_port application | lookup Blocked_Non-httptraffic.csv src_ip as src_ip outputnew dest_ip

Note: I created the CSV file "Blocked_Non-httptraffic.csv" with the lookup editor, with two fields, src_ip and dest_ip. When I search with the above query, I am unable to get the instance name (i.e. the host name) for the IP. Please help.
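As written, the CSV only holds src_ip and dest_ip, so there is no instance name for the lookup to return; it needs a column carrying that name. A hedged sketch, assuming a hypothetical instance_name column is added to the lookup:

index=c3d_security host=ip-10-10* rule=corp_deny_all_to_untrust NOT dest_port=4431
| lookup Blocked_Non-httptraffic.csv src_ip OUTPUTNEW instance_name
| table src_ip dest_ip transport dest_port application instance_name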
Hello, I have a question regarding replication of lookups on a search head cluster containing 3 search heads. The issue is that there is a lookup that is roughly 6 GB, which is larger than the replication max bundle size we allow. This lookup will not change in the future. My question is: how do you suggest we get the lookup onto every search head? One way is to increase the max size that we allow, but I'm wondering whether we could just manually upload the lookup to every search head instead. I believe that as long as it has the same properties on all search heads, it will be functionally the same as letting the search heads replicate it automatically. Do you believe this would work?
Hi all, I have a query that returns the list of filesystems and their respective disk usage details, as below:

File_System    Total in GB    Used in GB    Available in GB    Disk_Usage in %
/var           10             9.2           0.8                92
/opt           10             8.1           1.9                81
/logs          10             8.7           1.3                87
/apps          10             8.4           1.6                84
/pcvs          10             9.4           0.6                94

I need to create a multiselect option with disk usage values so the table above can be filtered to a range of values. For example, if I select 80 in the multiselect, it should show the rows with disk usage in the range 76-80; if I select 80 & 90, it should show the rows with disk usage in the ranges 76-80 and 86-90, and so on. I created the multiselect with the token "DU" and created the search query for the table as:

.... | where ((Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5)) OR (Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5))) | table File_System,Total,Used,Available,Disk_Usage | rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

With the above query I get results when I run the search manually with two different values (e.g. 100 & 65) substituted for $DU$ in (Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5)). But with this query I am not able to get the table in the dashboard when multiple values are selected. Please help me with the delimiter to be added, or help create a query so that selecting multiple options in the multiselect gives the table for the corresponding ranges of disk usage values.
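A hedged sketch of one way to sidestep the per-value range clauses: compute which 5-point band each row falls into and match that band against the multiselect values, with the multiselect's delimiter set to ", " so that $DU$ expands to something like 80, 90. The band arithmetic mirrors the 76-80 ranges described above; treat this as a sketch, not a verified dashboard.

....
| eval DU_band=ceiling(Disk_Usage/5)*5
| search DU_band IN ($DU$)
| table File_System, Total, Used, Available, Disk_Usage
| rename Total AS "Total in GB" Used AS "Used in GB" Available AS "Available in GB" Disk_Usage AS "Disk_Usage in %"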
I need a Splunk service for my client, who is buying Bitdefender cyber security but wants an add-on solution to capture HTTP data and JSON. Thank you.
First query:

index = pcf_logs cf_org_name = creorg OR cf_org_name = SvcITDnFAppsOrg cf_app_name=VerifyReviewConsumerService host="*"
| eval _raw = msg
| rex "Request\#\:\s*(?<ID1>\d+) with (?<Status>\w+.\w+)"
| rex "CRERequestId\"\:\"(?<ID2>[^\"]+)"
| eval ID=coalesce(ID1,ID2)
| stats latest(Status) as Status by ID
| eval Status=trim(Status, "status ")
| stats count by Status

Second query:

index = pcf_logs cf_org_name = creorg OR cf_org_name = SvcITDnFAppsOrg cf_app_name=VerifyReviewConsumerService host="*"
| search msg="*Rejected*"
| eval _raw = msg
| rex "(?<CRE_Creation_Date>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}\s..)"
| rex "Request\#\:\s*(?<Rejected_CRE_ID>\d+)"
| rex "status(?<Rejected>\s\w+)"
| rex "(?<Failed_Reason>Rule.*)$"
| eval Failed_Reason=trim(Failed_Reason, "Rule ")
| stats count by CRE_Creation_Date Rejected_CRE_ID Rejected Failed_Reason