This is a search string I inherited and, for the most part, it has worked fine. There is a desire to modify it, and I thought I would seek help.

index=firewall host=10.214.0.11 NOT src_ip=172.26.22.192/26
| stats count by src_ip, dest_ip
| appendpipe [| stats sum(count) as count by src_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| appendpipe [| stats sum(count) as count by dest_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| where keep=1
| sort -count
| head 20
| where total_log_count > 1000000

Below are example outputs received, from separate instances:

src_ip            dest_ip           count  keep  total_log_count
                  192.168.14.11     39164  1     1008943
192.168.14.11                       32239  1     1008943
10.80.0.243                         31880  1     1008943
                  143.251.111.100   30773  1     1008943
                  156.33.250.10     15544  1     1008943
192.242.214.186                     13793  1     1008943
172.253.63.188                      12359  1     1008943
                  192.168.5.46      12346  1     1008943
192.168.10.146                      10987  1     1008943
                  192.168.3.19      9079   1     1008943
192.168.3.195                       8970   1     1008943
192.168.3.18                        8074   1     1008943
172.18.3.42                         7709   1     1008943
                  192.168.14.23     7647   1     1008943
192.168.5.46                        7583   1     1008943
                  172.253.63.188    6549   1     1008943
172.33.250.10                       5806   1     1008943
                  192.168.24.65     5654   1     1008943
                  172.253.115.188   5494   1     1008943
                  192.168.24.134    4388   1     1008943

src_ip            dest_ip           count  keep  total_log_count
87.114.132.220                      45441  1     1005417
                  192.168.35.6      39597  1     1005417
192.168.14.15                       31629  1     1005417
                  172.30.5.9        16348  1     1005417
10.80.0.243                         15444  1     1005417
196.199.95.18                       13883  1     1005417
                  172.253.62.139    12703  1     1005417
                  192.168.12.45     11957  1     1005417
                  172.253.115.188   10010  1     1005417
192.168.3.19                        9676   1     1005417
                  192.168.35.16     9641   1     1005417
192.168.5.146                       9290   1     1005417
192.168.25.46                       7440   1     1005417
172.253.115.188                     7292   1     1005417
                  192.168.3.18      6163   1     1005417
192.168.39.18                       6063   1     1005417
176.155.19.207                      5818   1     1005417
                  4.188.95.188      4947   1     1005417
                  5.201.73.253      4942   1     1005417
                  45.225.238.30     4938   1     1005417

Is there a way to modify the query so that it only triggers if a single entity is causing more than a certain number of logs (e.g. 50,000), in combination with the total logs also being over a certain threshold? There is still a desire to see an output reporting the top 20 IPs. Your time, consideration, and helpful suggestions are appreciated. Thank you.
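One possible modification (an untested sketch reusing the fields from the search above, with 50000 as an assumed per-entity threshold): compute the largest single contributor with eventstats before the final filter, then require both conditions to hold.

```
index=firewall host=10.214.0.11 NOT src_ip=172.26.22.192/26
| stats count by src_ip, dest_ip
| appendpipe [| stats sum(count) as count by src_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| appendpipe [| stats sum(count) as count by dest_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| where keep=1
| eventstats max(count) as max_single_count
| sort -count
| head 20
| where total_log_count > 1000000 AND max_single_count > 50000
```

Because max_single_count and total_log_count are the same on every row, the alert either returns the full top-20 list or nothing, which should preserve the top-20 report while only triggering when both thresholds are exceeded.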
Need regex & null queue help for events in /var/log/messages. Here is the regex101 link: regex101: build, test, and debug regex (IP & hostname randomized).

props.conf:

[source::/var/log/messages]
TRANSFORMS-set = setnull,setparsing

transforms.conf:

[setnull]
REGEX = \w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\w+\n
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\w{5}\d{4}\S\i.ab2.jone.com\s.+\n
DEST_KEY = queue
FORMAT = indexQueue

The regex is not dropping the unwanted events in /var/log/messages. I am doing this on the HF, before the UF.
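One common pattern from Splunk's "route and filter data" documentation (a sketch, not tested against this data): transforms listed in TRANSFORMS-set are applied in order, so you can send everything to the nullQueue first and then re-route only the wanted host's events back to the indexQueue. Note that \i in the posted regex is not a valid regex escape; the hostname below is taken from the question and may need adjusting.

```
[setnull]
# Match every event and discard it by default.
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
# Re-route only events mentioning the wanted host back to the index queue.
REGEX = i\.ab2\.jone\.com
DEST_KEY = queue
FORMAT = indexQueue
```

Also, \n rarely matches in these transforms because the regex is applied per-event after line breaking, so trailing newlines are usually not part of the event text.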
So I have a query which returns a value over a period of 7 days. The below is like the query, but with a few items taken out:

index=xxxx search xxxxx
| rex field=_raw "projects/\\s*(?<ProjectID>\d+)"
| rex field=_raw "HTTP\/1\.1\ (?P<Status_Code>[^\ ]*)\s*(?P<Size>\d+)\s*(?P<Speed>\d+)"
| eval MB=Size/1024/1024
| eval SecTM=Speed/1000
| eval Examplefield=case(SecTM<=1.00, "90%")
| stats count by Examplefield
| table count

I can get the single value over 7 days. I want to be able to do a comparison with the previous 7 days. So let's say this week's number is 100,000 and the previous week's was 90,000; then it shows up 10,000, or vice versa, if that makes sense. I have seen the sample dashboard with a single value and an arrow going up or down, but I just have no clue how to write the time part of the syntax.
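One way to sketch the week-over-week delta (untested; the rex and eval lines are elided here exactly as in the question): search the last 14 days, label each event's week, then subtract the two counts.

```
index=xxxx xxxxx earliest=-14d@d latest=@d
| eval period=if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
| stats count(eval(period="current")) as current count(eval(period="previous")) as previous
| eval delta=current-previous
```

delta is positive when the current week is higher and negative when it is lower. For the Single Value trend arrow, the visualization generally expects a time series (e.g. feeding it a timechart with a long span) rather than a single computed row, so the delta approach above is best shown as its own value.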
I have a query that does a group by, which allows the sum(diff) column to be calculated:

[search] | stats sum(diff) by X_Request_ID as FinalDiff

From here, how can I list out only the entries that have a sum(diff) > 1? My attempt looks like:

[search] | stats sum(diff) by X_Request_ID as FinalDiff | where FinalDiff>1

My issue is that after the group by happens, the query seems to forget about the grouped sum, and so I cannot compare it to 1.
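The likely issue is the placement of the as clause: it renames the aggregate, so it belongs directly after sum(diff) and before the by clause, not after the by field. A minimal corrected sketch:

```
[search]
| stats sum(diff) as FinalDiff by X_Request_ID
| where FinalDiff > 1
```

With the rename in that position, stats emits one row per X_Request_ID with a real FinalDiff field that the subsequent where can test.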
I am running a query where I'm trying to calculate the difference between the start and end times as a request travels through a service (i.e. latency). To achieve this I search for two logs, one for the start and one for the end, subtract the start and end times, and finally group by X_Request_ID, which is unique per request. What I want to do now is to display only the count of all requests that took over 1 second. My attempt at this looks like:

index=prod component="card-notification-service" eventCategory=transactions eventType=auth AND ("is going to process" OR ("to POST https://apay-partner-api.apple.com/ccs/v1/users/eventNotification/transactions/auth" AND status=204))
| eval diff=if(searchmatch("is going to process"), _time*-1, 0)
| eval Start=if(searchmatch("is going to process"), _time, NULL)
| eval diff=if(searchmatch("to POST https://app.transactions/auth"), diff+_time, diff)
| eval End=if(searchmatch("to POST https://app.transactions/auth"), _time, NULL)
| eval seriesName="Baxter<->Saturn"
| streamstats sum(diff) by X_Request_ID as FinalDiff
| where FinalDiff > 1.0
| timechart span=5m partial=f count by seriesName

Everything compiled fine before the where clause above. I suspect it's because in the streamstats command prior, the "as" is only naming the query and not persisting the grouping of the query. Regardless, this leads to the question I am trying to solve: how can I persist sum(diff) after grouping it by X_Request_ID, so that in the next pipe I can perform a comparison in the where operation?
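A hedged sketch of one fix: attach the as clause to the aggregate itself, and use eventstats so the per-request total is written onto every event as a field that where can test. Replacing the streamstats line and everything after it in the search above:

```
| eventstats sum(diff) as FinalDiff by X_Request_ID
| where FinalDiff > 1.0
| timechart span=5m partial=f count by seriesName
```

Unlike stats, eventstats keeps all the original events while adding FinalDiff to each, so the subsequent timechart still has events to count per 5-minute bucket.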
How do you show the annotation label on a chart without having to hover over the value? Is there a way to make a label to show this?
Getting the error "This XML file does not appear to have any style information associated with it." while trying to export search results. Getting this error within dashboards as well as from the search (.../search/search) page. This is stopping our ability to export/download search results in all available formats (csv/xml/json). Any possible solutions? Splunk Enterprise version 9.0.0.1
In a nutshell: AZ Trade received a request to delete some personal data for a former contractor. We have to delete data and logs linked to ****(employee name) that date back more than a year. How can we delete this old personal data and the logs older than a year for an ex-employee?
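For reference, Splunk's delete command can mask matching events from search results. A sketch (the index and field names are placeholders, and the command requires a role with the can_delete capability):

```
index=<your_index> user="<employee>" earliest=0 latest=-1y
| delete
```

Note that delete only makes events unsearchable; it does not reclaim disk space. Physically removing the data generally means adjusting index retention so the old buckets age out, or cleaning the affected buckets directly.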
Hello, How do I combine two searches in an eval command? In the example below, I'm trying to create a value for "followup_live_agent" and "caller_silence" values. Splunk is telling me this query is invalid.        index=conversation sourcetype=cui-orchestration-log botId=123456 | eval AgentRequests=if(match(intent, "followup_live_agent" OR "caller_silence"), 1, 0)       Any help is much appreciated! 
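The match() function takes a single regex string, so the eval above fails because OR cannot appear inside the function's argument. The two intents can be combined with regex alternation (or, equivalently, with two match() calls joined by OR):

```
index=conversation sourcetype=cui-orchestration-log botId=123456
| eval AgentRequests=if(match(intent, "followup_live_agent|caller_silence"), 1, 0)
```

The alternation form is shorter; match(intent, "followup_live_agent") OR match(intent, "caller_silence") is equivalent and may read more clearly.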
We are planning to upgrade ES from 6.6.2 to 7.0.1, one of the new features will have a pop up window indicating that a new Content Update version is available and allows for the option to upgrade to the new version.  We'd like to suppress this pop up and/or prevent the update through the UI.  Would either of the below two settings prevent the pop up?  If we can't suppress the pop up will either of the below two settings help prevent the update from occurring? web.conf: Setting 'updateCheckerBaseURL' to 0 stops Splunk Web from pinging  Splunk.com for new versions of Splunk software. app.conf: Setting 'check_for_updates' to 0, this setting determines whether Splunk Enterprise checks Splunkbase for updates to this app. https://docs.splunk.com/Documentation/ES/7.0.0/RN/Enhancements Automated updates for the Splunk ES Content Update (ESCU) app When new security content is available, the update process is built into Splunk Enterprise Security so that ES admins always have the latest security content from the Splunk Security Research Team.
Hello all! Newbie here, so please forgive the ignorance in advance! I have a search:

index="zscaler" reason="Reputation block outbound request: malicious URL"
| dedup _time
| stats count as siteCount by url,user
| where siteCount > 3
| search earliest=-24h

When running this search in the search bar, the time picker is overriding the 24-hour search criterion, which from what I read in the documentation shouldn't occur (unless it's a subsearch). I'm using this for alerting purposes, so I want to be sure to specify the time frame I'd like to search. Any suggestions?
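A likely fix: time modifiers override the picker when they appear in the initial search clause, but after a transforming command like stats the events (and their _time) are gone, so a trailing | search earliest=-24h has nothing to restrict. Moving the time bounds to the front should behave as intended:

```
index="zscaler" reason="Reputation block outbound request: malicious URL" earliest=-24h
| dedup _time
| stats count as siteCount by url,user
| where siteCount > 3
```

For a saved alert, setting the alert's own time range to the last 24 hours accomplishes the same thing without hard-coding it in the search string.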
Hi there after much searching and testing i feel i'm stuck. Or even unsure what i want is possible.  What i want I have _json data indexed. Each event is a long array. I want Splunk to automatically make key:value pairs per value. Until now, Splunk gives me all the values instead of 1 single value. Also it seems Splunk can't make correlations between fields.  I want to use fields so i can do simple searches, like making a table for "internal" "website_url"s and their  status ("up" or "down").    Example event {"data":[{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234562","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456","123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234563","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234564","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":[],"status":"up","tags":["internal"],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_ty
pe":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234562","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234560","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234562","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234568","paused":false,"name":"adyen","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234567","paused":false,"name":"paynl","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234562","paused":false,"name":"trustpay","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234563","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234566","paused":false,"name":"spryng","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],
"status":"up","tags":["external","sms gateway"],"uptime":100},{"id":"1234568","paused":false,"name":"messagebird","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234563","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234564","paused":false,"name":"mitek","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":["external"],"uptime":100},{"id":"1234566","paused":false,"name":"bitstamp","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":[external"],"uptime":100},{"id":"1234560","paused":false,"name":"kraken","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":3600,"contact_groups":[],"status":"up","tags":["external"],"uptime":100},{"id":"1234569","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":[],"uptime":100},{"id
":"1234567","paused":false,"name":"Blox login","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":300,"contact_groups":["123456"],"status":"up","tags":[],"uptime":100},{"id":"1234567","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":[],"uptime":100},{"id":"1234564","paused":false,"name":"https:\/\/some.random.url\/with\/random\/string","website_url":"https:\/\/some.random.url\/with\/random\/string","test_type":"HTTP","check_rate":60,"contact_groups":["123456"],"status":"up","tags":["internal"],"uptime":100}],"metadata":{"page":1,"per_page":25,"page_count":2,"total_count":26}}

How far I got:

source="/opt/splunk/etc/apps/randomname/bin/statuscake_api.sh"
| spath output=id path=data{}.id
| spath output=url path=data{}.website_url
| spath output=status path=data{}.status
| search id=8179640
| table id, url, status

However, it shows a table of all array fields, not just the one specific 'id' I specified in the search part: | search id=<idnumber>
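One way to get one row per array element (a sketch along the lines of the search above): extract the data{} array as a multivalue field, expand it into one event per element, and then run spath on each element individually.

```
source="/opt/splunk/etc/apps/randomname/bin/statuscake_api.sh"
| spath path=data{} output=entry
| mvexpand entry
| spath input=entry
| search id=8179640
| table id, website_url, status
```

After mvexpand, each result carries the fields of a single array element, so | search id=... keeps only that element's row, and correlations between fields (e.g. tags and status) hold within each row.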
Hi All, I have the following saved search:

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Change where NOT [|`change_whitelist_generic`] nodename="All_Changes.Account_Management.Accounts_Updated" AND All_Changes.log_region=* AND All_Changes.log_country=* AND (All_Changes.command=passwd OR All_Changes.result_id IN (4723, 4724)) by All_Changes.log_region, All_Changes.log_country, index, host, All_Changes.Account_Management.src_user, All_Changes.user, _time
| `drop_dm_object_name("All_Changes")`
| rename Account_Management.src_user as src_user

My customer asked me to exclude results when Account_Management.src_user=user1 and All_Changes.Account_Management.src_nt_domain=All_Changes.Account_Management.dest_nt_domain. So I tried something like the following, but it doesn't seem to work:

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Change where NOT [| `change_whitelist_generic`] nodename="All_Changes.Account_Management.Accounts_Updated" AND All_Changes.log_region=* AND All_Changes.log_country=* AND (All_Changes.command=passwd OR All_Changes.result_id IN (4723, 4724)) by All_Changes.log_region, All_Changes.log_country, index, host, All_Changes.Account_Management.src_user, All_Changes.user, All_Changes.Account_Management.dest_nt_domain, All_Changes.Account_Management.src_nt_domain, _time
| `drop_dm_object_name("All_Changes")`
| search NOT (Account_Management.src_user=user1 AND Account_Management.src_nt_domain=Account_Management.dest_nt_domain)
| rename Account_Management.src_user as src_user

Do you have any advice? Thank you!
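One thing to check (a sketch, not verified against this data model): the search command does not compare two fields to each other; it treats the right-hand side of = as a literal value, so src_nt_domain is being compared to the string "Account_Management.dest_nt_domain". Field-to-field comparison needs where, and field names containing dots should be wrapped in single quotes:

```
| `drop_dm_object_name("All_Changes")`
| where NOT ('Account_Management.src_user'="user1" AND 'Account_Management.src_nt_domain'='Account_Management.dest_nt_domain')
```

In eval/where expressions, single quotes mean "the value of this field" while double quotes mean a string literal, which is exactly the distinction this exclusion needs.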
Scenario: I have a log file. I am able to extract fields from the log events and can see the data in the extracted fields. But when I filter the data using an extracted field, I am unable to see any results, even though the extracted field has data. I have referred to this link, "http://blogs.splunk.com/2011/10/07/cannot-search-based-on-an-extracted-field/", and it doesn't help. Please help us with this.
Hello, I am currently working on a use case which has complex ingested data with nested JSON. The data I am trying to capture is non-compliant. I am looking for guidance on how to categorize the nested JSON objects within the array into fields. Here is the redacted information I currently have, thank you!

Search I am using:

index=fsctcenter sourcetype=fsctcenter_json
| regex "Non Compliant[^\:]+\:\"\d+\"\,\"status\":\"Match"
| rex field=_raw "policy_name\":\"(?<policy_name>[a-zA-z1-9\.\s+]+Non\sCompliant[^\"]+)"
| rex field=_raw "rule_name\":\"(?<rule_name>[a-zA-z1-9\.\s+]+Non\sCompliant[^\"]+)"

Raw:

{"ctupdate":"policyinfo","ip":"X.X.X.X","policies":[{"rule_name":"XXXX","policy_name":"XXXX","since":"XXXX","status":"XXXX"},{"rule_name":"XXXX","policy_name":"XXXX","since":"XXXX","status":"XXXX"},{"rule_name":"XXXX","policy_name":"XXXX","since":"XXXX","status":"XXXX"},{"rule_name":"XXXX","policy_name":"XXXX","since":"XXXX","status":"XXXX"},...etc

As rendered in the event list:

policies: [
  {
    policy_name: XXXX
    rule_name: XXXX
    since: XXXX
    status: XXXX
  }
  {
    policy_name: XXXX
    rule_name: XXXX
    since: XXXX
    status: XXXX
  }
  Etc...
]

Currently Splunk ES is not itemizing the fields correctly for the nested JSON above. Any help or guidance would be greatly appreciated, thanks!
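A sketch of the usual pattern for nested arrays like this (field names taken from the sample event; the filter values are assumptions): expand policies{} into one result per object, then filter the exploded rows instead of regexing _raw.

```
index=fsctcenter sourcetype=fsctcenter_json
| spath path=policies{} output=policy
| mvexpand policy
| spath input=policy
| search policy_name="*Non Compliant*" status="Match"
| table ip, policy_name, rule_name, since, status
```

This assumes the top-level ip field is already extracted (as it normally is for JSON sourcetypes); if not, add | spath output=ip path=ip before the mvexpand. After the expansion, policy_name, rule_name, since, and status all refer to the same array element, which is the per-object correlation the regex approach loses.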
Hello folks, is there a tool that helps in sizing a server that will work with accelerated data models? Or what is the best way to achieve that goal? It seems that the Splunk base configuration of 12 CPUs / 12 GB of RAM is not enough. Thank you all.
I have a stats table with output in the below format:

Device        Timestamp      Action
some value    some value     1
some value    some value     2
..            ..             ..
some value    some value     10
some value    some value     1
some value    some value     2
..            ..             ..
some value    some value     10

So, the Action column repeats the pattern after a certain number of iterations. How can these be grouped into single fields, that is, each full iteration stored as a multivalue field?
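A sketch of one approach (assuming a new iteration starts whenever Action returns to 1): number the iterations with streamstats, then roll each one up into multivalue fields with stats list().

```
| streamstats count(eval(Action=1)) as iteration
| stats list(Device) as Device list(Timestamp) as Timestamp list(Action) as Action by iteration
```

The running count increments only on rows where Action=1, so every row between one reset and the next shares an iteration number, and the final stats collapses each iteration into one row of multivalue fields.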
Hi Team, is it possible to stop an alert for a particular time window? Suppose I have an alert already created and running, and I want to stop it on a coming Saturday from 1 PM to 4 PM. Is it possible without doing it manually and without using the cron scheduler? Please help. Thank you.
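One workaround sometimes used (a sketch; the window values are the ones from the question) is to let the alert run but suppress its results during the window by appending a guard clause to the alert's search:

```
| where NOT (strftime(now(), "%A")=="Saturday" AND tonumber(strftime(now(), "%H"))>=13 AND tonumber(strftime(now(), "%H"))<16)
```

With no results, a "trigger when results > 0" alert stays silent for that window. The guard suppresses every Saturday 1-4 PM, so it would need to be removed after the one-off maintenance window if the skip should happen only once.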
Hello Splunkers!! As per the below screenshot, I want to capitalise the first letter of every field column. For this I have tried the above workarounds, which are commented out. Please suggest how I can capitalise the first letter of every field name.
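One generic trick (a sketch; it assumes the result set is a small stats table, since transpose materializes all rows): transpose the table so field names become values, rewrite them with eval, then transpose back.

```
| transpose 0 column_name=fname
| eval fname=upper(substr(fname,1,1)).substr(fname,2)
| transpose 0 header_field=fname column_name=dropme
| fields - dropme
```

For a handful of known fields, an explicit | rename status as Status, action as Action is simpler and cheaper than the double transpose.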