Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

reference:

| bucket _time span=1d
| stats sum(bytes*) as bytes* by user _time src_ip
| eventstats max(_time) as maxtime avg(bytes_out) as avg_bytes_out stdev(bytes_out) as stdev_bytes_out
| eventstats count as num_data_samples avg(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_avg_bytes_out stdev(eval(if(_time < relative_time(maxtime, "@h"),bytes_out,null))) as per_source_stdev_bytes_out by src_ip
| where num_data_samples >= 4 AND bytes_out > avg_bytes_out + 3 * stdev_bytes_out AND bytes_out > per_source_avg_bytes_out + 3 * per_source_stdev_bytes_out AND _time >= relative_time(maxtime, "@h")
| eval num_standard_deviations_away_from_org_average = round(abs(bytes_out - avg_bytes_out) / stdev_bytes_out,2), num_standard_deviations_away_from_per_source_average = round(abs(bytes_out - per_source_avg_bytes_out) / per_source_stdev_bytes_out,2)
| fields - maxtime per_source* avg* stdev*
I see the errors below in the forwarder log and ingestion is not happening:

ERROR TcpOutputFd - Read error. Connection reset by peer
09-16-2022 06:13:35.552 +0000 INFO TcpOutputProc - Connection to 111.11.11.111:9997 closed. Read error. Connection reset by peer

We are using Splunk version 8.0.2. I modified outputs.conf, but I still get the same error.
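For reference, a "Connection reset by peer" usually means the receiving side (the indexer at 111.11.11.111:9997) closed the connection, so it is worth confirming that the indexer has a listening splunktcp input on 9997 and that nothing in between is dropping the session. A minimal outputs.conf sketch for the forwarder, assuming a single indexer at that address (the group name "primary_indexers" is just an example):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 111.11.11.111:9997
useACK = true

On the indexer side the matching inputs.conf stanza would be [splunktcp://9997]. Restart the forwarder after changing outputs.conf.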
What are the various techniques for onboarding data?
It should run automatically after downloading, right? But the login page did not appear. How do I get to it?
How will we be able to determine which of our 10,000 forwarders is down?
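One common approach (a sketch, assuming the forwarders send their internal logs to the _internal index; the 15-minute threshold is only an example) is to look for hosts whose internal logs have stopped arriving:

| metadata type=hosts index=_internal
| eval minutes_since_last_event = round((now() - lastTime) / 60)
| where minutes_since_last_event > 15
| convert ctime(lastTime) as last_seen
| table host last_seen minutes_since_last_event
| sort - minutes_since_last_event

The Monitoring Console's forwarder dashboards (Forwarders: Deployment / Forwarders: Instance) provide a similar missing-forwarders view out of the box.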
Hi, I would like to display the values of variables from an event as a table. My data format is as follows:

Time: 9/16/22 10:10:10.000 AM
Event: index=* sourcetype=* type=* "Name1" : "A", "Name2" : "B", "Name3" : "C", ... "Name10" : "J", "Var1" : 10, "Var2" : 10, "Var3" : 25, ... "Var10" : 50

I would like the search results to be transformed into a table formatted like this, dropping the original field names (Name*, Var*) and replacing the column headers with new names as shown below.

Station   Value
A         10
B         10
C         25
...       ...
J         50

How can I do this? Thanks
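One possible approach (a minimal sketch, assuming the fields Name1..Name10 and Var1..Var10 are already extracted from the event) is to pair each NameN value with its VarN value using foreach, then expand the pairs into rows:

index=* sourcetype=* type=*
| foreach Name* [ eval pairs = mvappend(pairs, '<<FIELD>>' . "|" . 'Var<<MATCHSTR>>') ]
| mvexpand pairs
| eval Station = mvindex(split(pairs, "|"), 0), Value = mvindex(split(pairs, "|"), 1)
| table Station Value

Here <<FIELD>> expands to each NameN field and <<MATCHSTR>> to the matched number, so 'Var<<MATCHSTR>>' picks up the corresponding VarN value.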
Hello All, In Windows Server, the URL Monitoring Extension v2.2.0 on Machine Agent v21 is crashing intermittently. The extension fails to report its metrics to the Controller during the crash, but the Machine Agent keeps sending all the infrastructure metrics to the Controller. I tried increasing the heap (Xmx and Xms values) and raising the metric registration limit to its maximum, but these options did not resolve the issue. However, once I restart the Machine Agent service, the URL Monitoring Extension starts reporting its metrics again. This happens 5 to 6 times per day. Can someone please help me? Thanks in advance! Avinash
Hi, fundamentals question but one of those brain teasers. How do I get a total count of the distinct values of a field? For example, Splunk shows my "aws_account_id" field has 100+ unique values. What is that exact 100+ number? If I hover my mouse on the field, it shows the Top 10 values etc., but not the total count. Things I have tried as per other posts in the forum:

index=aws sourcetype="aws:cloudtrail" | fields aws_account_id | stats dc(count) by aws_account_id

This does show me the total count (which is 156), but not in the layout I want. Instead I want the data in this tabular format:

Fieldname        Count
aws_account_id   156

Thanks in advance
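dc() can count the distinct values directly, so there is no need to group by the field first. A minimal sketch:

index=aws sourcetype="aws:cloudtrail"
| stats dc(aws_account_id) as Count
| eval Fieldname = "aws_account_id"
| table Fieldname Count

This returns a single row in the Fieldname / Count layout shown above (e.g. aws_account_id, 156).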
I have a dashboard for all SSL certificates. I'd like to set up a few alerts for renewal reminders from Splunk. My current query is shown below:

Index=epic_ehr source=C:\\logs\certs\\results.json |Search validdays<60 |table hostname,validddays,issuer,commonName

My custom trigger condition is: search validdays="*" AND count<273

When I run this I see results, but no alert is triggered, nor do I receive any email. Please assist.
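A possible simplification (a sketch, assuming the extracted field really is named validdays; note that field names are case-sensitive and that the table command above contains the typo "validddays") is to let the base search do all the filtering and use the built-in "Number of Results is greater than 0" trigger instead of a custom condition:

index=epic_ehr source=C:\\logs\certs\\results.json validdays<60
| table hostname, validdays, issuer, commonName

A custom trigger condition such as search validdays="*" AND count<273 runs against the alert's result rows, and since those rows contain no count field (and the validdays column is misspelled in the table), the condition may never match, which could explain why no alert fires and no email is sent.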
Hi folks, I'm trying to list all users from my Splunk Cloud stack using this reference: https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTaccess#authentication.2Fusers:~:text=s%3Adict%3E%0A%20%20%20%3C/content%3E%0A%20%3C/entry%3E-,authentication/users,-https%3A//%3Chost%3E%3A%3CmPort

However, I'm using a custom role that has only the following capabilities:

* admin_all_objects
* rest_access_server_endpoints
* rest_apps_management
* rest_apps_view
* rest_properties_get
* edit_user
* search

The user is unable to pull all users. My assumption is that because this user does not inherit any other role, it is not able to list all users, as per the grantableRoles. If I'm right, is there any way for this user to pull all users with the REST API, or what capabilities am I missing? Thanks in advance.
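For reference, the endpoint itself can be called like this (a sketch; 8089 is the default management port, and on Splunk Cloud access to the management port may need to be requested or enabled first):

curl -k -u <user>:<password> "https://<host>:8089/services/authentication/users?count=0"

If the credentials and port are fine but only the calling user's own entry comes back, the limitation is on the role side; the stock admin role can list all users, so comparing your custom role's capabilities and inheritance against admin is a reasonable next step.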
Howdy Splunk Community, I'm curious if anyone here has experience with, or is currently utilizing, Splunk's "Azure Functions for Splunk", specifically the "event-hubs-hec" solution, to successfully push events from their Azure tenant to their Splunk deployment. If so, I'm ultimately curious what designs / architecture patterns you used when deploying and segmenting your Azure Event Hub Namespaces and Event Hubs.

Reading over the README in the repo leads me to believe that you can get away with dumping all of the events generated within your tenant into a single Event Hub Namespace / Event Hub, assuming you stay within the performance limitations imposed by the Event Hub. I don't particularly like this model, as I believe it makes troubleshooting ingestion / data issues a pain since all of your data, regardless of source or event type, is in a single centralized location, so I would like a bit more organization than that.

I'm slowly working on a rough draft of how I think I want to break out my Event Hub Namespaces / Event Hubs, but right now I'm not sure whether I'm going to make my life, or my development team's lives, harder, as they will have to interface with this design via Terraform as we continue implementing infrastructure as code in our platform.

My initial breakout looks something like:

- A unique subscription per Azure region we are deployed in, dedicated to logging infrastructure, containing the Event Hub Namespaces and the corresponding function applications that push events out to Splunk, etc. All infrastructure within a given region sends its Diagnostic Logging Events (platform logs / resource logs) into the logging subscription.
- An EH Namespace for SQL Servers, with EHs broken out per event type generated by the SQL Servers
- An EH Namespace for Key Vaults, with EHs broken out per event type generated by Key Vaults
- An EH Namespace for Storage Accounts, with EHs broken out per event type generated by the Storage Accounts
- An EH Namespace for global Microsoft services (Azure Active Directory, Microsoft Defender, Sentinel, etc.)
- An EH Namespace for Azure PaaS / IaaS offerings (Databricks, Azure Data Factory, Cognitive Search, etc.)
- An EH Namespace for networking events (NAT Gateways, Firewalls, Public IPs, APIM, Front Door, WAF, etc.)

and so on and so forth.

Anyone willing to lend their insight?
This is a search string I inherited, and for the most part it has worked fine. There is a desire to modify it, and I thought I would seek help.

index=firewall host=10.214.0.11 NOT src_ip=172.26.22.192/26
| stats count by src_ip, dest_ip
| appendpipe [| stats sum(count) as count by src_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| appendpipe [| stats sum(count) as count by dest_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| where keep=1
| sort -count
| head 20
| where total_log_count > 1000000

Below are example outputs received, from separate instances:

src_ip            dest_ip           count  keep  total_log_count
                  192.168.14.11     39164  1     1008943
192.168.14.11                       32239  1     1008943
10.80.0.243                         31880  1     1008943
                  143.251.111.100   30773  1     1008943
                  156.33.250.10     15544  1     1008943
192.242.214.186                     13793  1     1008943
172.253.63.188                      12359  1     1008943
                  192.168.5.46      12346  1     1008943
192.168.10.146                      10987  1     1008943
                  192.168.3.19      9079   1     1008943
192.168.3.195                       8970   1     1008943
192.168.3.18                        8074   1     1008943
172.18.3.42                         7709   1     1008943
                  192.168.14.23     7647   1     1008943
192.168.5.46                        7583   1     1008943
                  172.253.63.188    6549   1     1008943
172.33.250.10                       5806   1     1008943
                  192.168.24.65     5654   1     1008943
                  172.253.115.188   5494   1     1008943
                  192.168.24.134    4388   1     1008943

src_ip            dest_ip           count  keep  total_log_count
87.114.132.220                      45441  1     1005417
                  192.168.35.6      39597  1     1005417
192.168.14.15                       31629  1     1005417
                  172.30.5.9        16348  1     1005417
10.80.0.243                         15444  1     1005417
196.199.95.18                       13883  1     1005417
                  172.253.62.139    12703  1     1005417
                  192.168.12.45     11957  1     1005417
                  172.253.115.188   10010  1     1005417
192.168.3.19                        9676   1     1005417
                  192.168.35.16     9641   1     1005417
192.168.5.146                       9290   1     1005417
192.168.25.46                       7440   1     1005417
172.253.115.188                     7292   1     1005417
                  192.168.3.18      6163   1     1005417
192.168.39.18                       6063   1     1005417
176.155.19.207                      5818   1     1005417
                  4.188.95.188      4947   1     1005417
                  5.201.73.253      4942   1     1005417
                  45.225.238.30     4938   1     1005417

Is there a way to modify the query so that it only triggers when a single entity is responsible for more than a certain number of logs (e.g. 50000), in combination with the total log count also being over a certain threshold? There is still a desire to see an output reporting the top 20 IPs. Your time, consideration and helpful suggestions are appreciated. Thank you.
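One way to add the "single entity" condition (a sketch that keeps the inherited structure; the 50000 threshold is just an example) is to capture the largest per-entity count with eventstats and test it together with the total:

index=firewall host=10.214.0.11 NOT src_ip=172.26.22.192/26
| stats count by src_ip, dest_ip
| appendpipe [| stats sum(count) as count by src_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| appendpipe [| stats sum(count) as count by dest_ip | eval keep=1 | eventstats sum(count) as total_log_count ]
| where keep=1
| eventstats max(count) as max_entity_count
| sort -count
| head 20
| where total_log_count > 1000000 AND max_entity_count > 50000

Because max_entity_count is computed before head, the search only returns rows (and so only triggers an alert) when at least one single source or destination exceeds 50000 events and the overall total is still above 1,000,000, while the output still lists the top 20 IPs.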
Need regex & null-queue help to filter events in /var/log/messages. Here is the regex101 link: regex101: build, test, and debug regex (IP & hostname randomized)

props.conf

[source::/var/log/messages]
TRANSFORMS-set = setnull,setparsing

transforms.conf

[setnull]
REGEX = \w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\w+\n
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\w{5}\d{4}\S\i.ab2.jone.com\s.+\n
DEST_KEY = queue
FORMAT = indexQueue

The regex is not dropping the unwanted events in /var/log/messages. I am doing this on the HF, before the UF.
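A commonly used pattern for this (a sketch; the hostname regex below is only a guess at the intent and would need adjusting, and the stanzas must live on the first parsing instance, i.e. the HF in this topology, followed by a restart) is to null-queue everything and then re-route only the wanted events, since transforms are applied in the order listed and the last matching one sets the queue:

props.conf

[source::/var/log/messages]
TRANSFORMS-set = setnull, setparsing

transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \w{5}\d{4}i\.ab2\.jone\.com
DEST_KEY = queue
FORMAT = indexQueue

Two things in the original regexes are worth checking: _raw normally has no trailing newline, so a \n at the end of REGEX will prevent matches, and the \S\i portion is probably not matching the hostname as intended.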
So I have a query which returns a value over a period of 7 days. The below is like the query, but with a few items taken out:

index=xxxx search xxxxx
| rex field=_raw "projects/\\s*(?<ProjectID>\d+)"
| rex field=_raw "HTTP\/1\.1\ (?P<Status_Code>[^\ ]*)\s*(?P<Size>\d+)\s*(?P<Speed>\d+)"
| eval MB=Size/1024/1024
| eval SecTM=Speed/1000
| eval Examplefield=case(SecTM<=1.00, "90%")
| stats count by Examplefield
| table count

I can get the single value over 7 days. I want to be able to do a comparison with the previous 7 days. So let's say this week's number is 100,000 and the previous week's was 90,000; then it shows up 10,000, or vice versa, if that makes sense. I have seen the sample dashboard with a Single Value and an arrow going up or down, but I have no clue how to write the time part of the syntax.
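One way to get the arrow (a sketch built on a trimmed version of the query above, assuming a 14-day time range such as earliest=-14d@d latest=@d) is to let timechart produce exactly two 7-day buckets and feed that to a Single Value visualization, which can then display the latest value plus the change versus the previous bucket:

index=xxxx xxxxx earliest=-14d@d latest=@d
| rex field=_raw "projects/\\s*(?<ProjectID>\d+)"
| rex field=_raw "HTTP\/1\.1\ (?P<Status_Code>[^\ ]*)\s*(?P<Size>\d+)\s*(?P<Speed>\d+)"
| eval SecTM=Speed/1000
| where SecTM<=1.00
| timechart span=7d count

In the Single Value format options, enable the trend display so it compares the most recent bucket against the previous one; with exactly two buckets that gives this week's count and the +/- difference from last week.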
I have a query that does a group by, which allows the sum(diff) column to be calculated:

[search] | stats sum(diff) by X_Request_ID as FinalDiff

From here, how can I list out only the entries that have a sum(diff) > 1? My attempt looks like:

[search] | stats sum(diff) by X_Request_ID as FinalDiff | where FinalDiff>1

My issue is that after the group by happens, the query seems to forget about the grouped sum, so I cannot compare it to 1.
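In SPL the rename belongs inside the aggregation rather than after the by clause, so the filter can then see FinalDiff. A minimal sketch:

[search]
| stats sum(diff) as FinalDiff by X_Request_ID
| where FinalDiff > 1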
I am running a query where I'm trying to calculate the difference between the start and end times as a request travels through a service (i.e. latency). To achieve this, I search for two logs: one for the start and one for the end. I then subtract the start and end times, and finally group by X_Request_ID, which is unique per request. What I want to do now is display only the count of all requests that took over 1 second. My attempt at this looks like:

index=prod component="card-notification-service" eventCategory=transactions eventType=auth AND ("is going to process" OR ("to POST https://apay-partner-api.apple.com/ccs/v1/users/eventNotification/transactions/auth" AND status=204))
| eval diff=if(searchmatch("is going to process"), _time*-1, 0)
| eval Start=if(searchmatch("is going to process"), _time, NULL)
| eval diff=if(searchmatch("to POST https://app.transactions/auth"), diff+_time, diff)
| eval End=if(searchmatch("to POST https://app.transactions/auth"), _time, NULL)
| eval seriesName="Baxter<->Saturn
| streamstats sum(diff) by X_Request_ID as FinalDiff
| where FinalDiff > 1.0
| timechart span=5m partial=f count by seriesName

I've gotten everything to compile fine before the where clause above. I suspect it's because in the preceding streamstats command the "as" only names the query and does not persist the grouping. Regardless, this leads to the question I am trying to solve: how can I persist sum(diff) after grouping it by X_Request_ID so that in the next pipe I can perform a comparison in the where operation?
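One way to make the grouped sum usable downstream (a sketch based on a trimmed version of the search above, assuming each X_Request_ID has exactly one start and one end event) is to replace streamstats with stats, carry a representative _time through, and only then build the timechart; note that the original eval seriesName="Baxter<->Saturn is also missing its closing quote:

index=prod component="card-notification-service" eventCategory=transactions eventType=auth AND ("is going to process" OR ("to POST https://apay-partner-api.apple.com/ccs/v1/users/eventNotification/transactions/auth" AND status=204))
| eval diff=if(searchmatch("is going to process"), _time*-1, 0)
| eval diff=if(searchmatch("to POST https://app.transactions/auth"), diff+_time, diff)
| stats sum(diff) as FinalDiff min(_time) as _time by X_Request_ID
| where FinalDiff > 1.0
| eval seriesName="Baxter<->Saturn"
| timechart span=5m partial=f count by seriesName

The general form is stats <function>(field) as <newname> by <groupfield>, so the renamed FinalDiff is available to the following where.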
How do you show the annotation label on a chart without having to hover over the value? Is there a way to make the label always visible?
Getting the error "This XML file does not appear to have any style information associated with it." while trying to export search results. Getting this error within dashboards as well as from the search (.../search/search) page. This is blocking our ability to export/download search results in all available formats (csv/xml/json). Any possible solutions? Splunk Enterprise version 9.0.0.1
In a nutshell, AZ Trade received a request to delete some personal data from a former contractor. We have to delete data linked to **** (employee name) that is older than a year, i.e. delete all data and logs dating back more than a year concerning **** (employee name). How can we delete the old personal data and logs, dating back more than a year, for an ex-employee?
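For the Splunk side, events cannot be selectively edited in place; the usual approach is the delete command, which only hides matching events from search rather than freeing disk space (a sketch with purely hypothetical placeholders for the index and the person's identifier; it has to be run over All Time by a user holding the can_delete role):

index=<your_index> "<employee identifier>" latest=-1y
| delete

If the data genuinely has to be removed from disk, that generally means adjusting retention so the old buckets age out, or cleaning/rebuilding the affected buckets, since | delete does not reclaim storage.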
Hello, how do I combine two searches in an eval command? In the example below, I'm trying to create a value for the "followup_live_agent" and "caller_silence" values. Splunk is telling me this query is invalid.

index=conversation sourcetype=cui-orchestration-log botId=123456
| eval AgentRequests=if(match(intent, "followup_live_agent" OR "caller_silence"), 1, 0)

Any help is much appreciated!
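match() takes a regular expression as its second argument, so the two intents can be combined with alternation inside a single pattern (a minimal sketch), or with OR between two separate match() calls:

index=conversation sourcetype=cui-orchestration-log botId=123456
| eval AgentRequests=if(match(intent, "followup_live_agent|caller_silence"), 1, 0)

The equivalent long form would be if(match(intent, "followup_live_agent") OR match(intent, "caller_silence"), 1, 0).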