All Topics


Today, we're unveiling a revamped integration between Splunk Answers and Splunkbase, designed to elevate your experience with Splunkbase apps. Each Splunkbase app will now feature its own dedicated 'product page' on Splunk Answers. This new layout simplifies app-specific conversations, making it easier than ever for customers, developers, partners, and Splunkers to collaborate and solve challenges.

About Splunkbase: Splunkbase is a marketplace where Splunk customers can download apps for Splunk Cloud Platform, Splunk Enterprise, or Splunk SOAR. Developers can also upload their own Splunk Enterprise and Splunk Cloud Platform apps to share them with the Splunk community.

About Splunk Answers: Splunk Answers is a discussion forum where the Splunk community engages in dialogue regarding Splunk. It serves as a knowledge base that helps customers engage with each other, Splunk employees, and app developers.

This integration improves the experience across two key tools used by the community, and it offers three key benefits. First, customers can easily find a knowledge base of previously asked and answered questions regarding specific apps, allowing faster self-service issue resolution. Second, developers can communicate directly with users of their apps, enabling better asynchronous troubleshooting of issues. Third, developers can draw on feedback from these discussions to inform future enhancements to their apps.

The best part? Developers do not need to do anything to take advantage of this integration. App listings will automatically be updated to point to the new Splunk Answers app discussions. (Note: when creating a new listing, it may take up to 24 hours for the corresponding Splunk Answers app discussion to be created. Upon initial creation, the app listing will temporarily point to the generic All Apps and Add-Ons page.) We hope this new functionality makes it easier to use and extend Splunk. For questions and comments, reach out to splunkbase-admin@splunk.com or community@splunk.com.
How do I use a lookup table to filter events based on a list of known malicious IP addresses (in CIDR format), or to exclude events from known internal IP ranges?
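One possible approach (a minimal sketch, not the only option): define the lookup with CIDR matching, then use it to flag matches, and use cidrmatch() to exclude internal ranges. The lookup name malicious_ips, its ip_cidr column, the event field src_ip, the base search index=web, and the 10.0.0.0/8 range are all assumptions to adapt.

transforms.conf (assumed lookup definition with CIDR matching):

[malicious_ips]
filename = malicious_ips.csv
match_type = CIDR(ip_cidr)

Search-time usage:

index=web src_ip=*
| lookup malicious_ips ip_cidr AS src_ip OUTPUT ip_cidr AS matched_range
| where isnotnull(matched_range)
| where NOT cidrmatch("10.0.0.0/8", src_ip)

The last line shows the exclusion side of the question; it can also be used on its own without the lookup if all you need is to drop internal ranges.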
Below is our requirement. The lookup file has just one column, DatabaseName; this is the left dataset:

DatabaseName
A
B
C

My search is for metrics on databases and has multiple rows; this is the right dataset:

DatabaseName  Instance  CPUUtilization
A             A1        10
A             A2        20
C             C1        40
C             C2        50
D             D         60

The expected result after a left join is:

DatabaseName  Instance  CPUUtilization
A             A1        10
A             A2        20
B             NULL      NULL
C             C1        40
C             C2        50

But when I join using DatabaseName, I get only three records: one for A, one for B with NULL, and one for C. My background is SQL, and for me a left join returns all rows from the left dataset plus all matching rows from the right dataset. Please suggest how I can achieve this.
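The most likely cause of the missing rows is that Splunk's join keeps only one matching row from the subsearch per key by default (max=1). A minimal sketch of one way around this, assuming the lookup file is named db_lookup.csv and the metrics come from a hypothetical index=db_metrics search:

| inputlookup db_lookup.csv
| join type=left max=0 DatabaseName
    [ search index=db_metrics | table DatabaseName Instance CPUUtilization ]

Setting max=0 removes the one-match-per-key limit, so every matching Instance row is kept while unmatched lookup rows (B) remain with empty fields.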
Hi All, I have got logs like the following:

</tr> <tr>
<td ><b><font color=blue>Asia</font></b></td>
<td >Samsung_AA</td>
<td ><b><font color=green>Singapore</font></b></td>
<td ><b><font color="green">UP</font></b></td>
<td >1100</td> <td >311-1000</td> <td >311-1000</td> <td >0-200000</td> <td >3172-3</td> <td >55663</td> <td >NC</td> <td >3.983-20000</td> <td >11112-20000</td> <td >6521-10000</td>

I used the below query to get the table that follows:

...
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>(?P<Region>[^\<]+)\<\/\w+\>\<\/b\>\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>(?P<VPN_Name>[^\<]+)\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>(?P<Country>[^\<]+)\<\/\w+\>\<\/b\>\<\/td\>"
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\"\w+\"\>(?P<VPN_Status>[^\<]+)\<\/\w+\>\<\/b\>\<\/td>"
| rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\"\w+\"\>[^\<]+\<\/\w+\>\<\/b\>\<\/td>\s+\<td\s\>(?P<Spooled>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Conn_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Conn_SMF_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Conn_Rest_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Queue_Topic>[^\<]+)\<\/td\>\s+\<td\s\>(?P<SMF_SSL>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Rest_SSL>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Spool_Usage_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Ingress_Usage_Max>[^\<]+)\<\/td\>\s+\<td\s\>(?P<Egress_Usage_Max>[^\<]+)\<\/td\>"
| eval Time_Stamp=strftime(_time, "%b %d, %Y %I:%M:%S %p")
| replace "UAT2-L2" with "NGC" in Region
| replace "UAT2-L1" with "GC" in Region
| search Region="Asia"
| search VPN_Status="UP"
| table Time_Stamp,VPN_Name,Spooled,Conn_Max,Conn_SMF_Max,Conn_Rest_Max,Queue_Topic,Spool_Usage_Max,Ingress_Usage_Max,Egress_Usage_Max
| dedup VPN_Name

Time_Stamp                VPN_Name    Spooled  Conn_Max  Conn_SMF_Max  Conn_Rest_Max  Queue_Topic  Spool_Usage_Max  Ingress_Usage_Max  Egress_Usage_Max
Oct 16, 2023 03:51:08 AM  Samsung_AB  0        1-500     1-500         0-200000       3-2          0.000-5000       0-10000            0-10000
Oct 16, 2023 03:51:08 AM  Samsung_AA  1100     311-1000  311-1000      0-200000       3172-3       3.983-20000      11112-20000        6521-10000

In this table, I want to color-code the cells of the columns Conn_Max, Conn_SMF_Max, Conn_Rest_Max, Spool_Usage_Max, Ingress_Usage_Max, and Egress_Usage_Max based on whether the first part of the field value is greater than or equal to 50% or 80% of the second part. For example, if Conn_Max is 6500-10000 the cell should be yellow, and if it is 8500-10000 it should be red. Please help me modify the query or source code so that the required cells are color-coded as per my requirements. Your kind inputs are highly appreciated. Thank you!!
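One way to drive that kind of coloring is to first compute, in SPL, the percentage the first number represents of the second, and then let the dashboard's cell formatting (or a table cell-highlighting JavaScript extension) key off that percentage. A minimal sketch for one column; Conn_Max_pct is a new, hypothetical field name, and the same eval would be repeated for the other five columns:

| eval Conn_Max_pct = round(100 * tonumber(mvindex(split(Conn_Max, "-"), 0)) / tonumber(mvindex(split(Conn_Max, "-"), 1)), 1)

A value of 50 or more could then map to yellow and 80 or more to red in the formatting layer.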
Hello everyone, I am concerned about single-event-match (e.g. observable-based) searches and the indexing delay events may have. Would using an accelerated DM allow me to simply ignore a situation like the one below, while still ensuring that such an event is taken into account? If so, how? I read that Data Models "faithfully deal with late-arriving events with no upkeep or mitigation required", however I am still concerned about what would happen in a case such as the one depicted in the image I'm uploading, where:
- T0 is the moment the event happened / was logged (_time)
- T1 is the first moment taken into account by the search (earliest)
- T2 is the moment the event was indexed (_indextime)
- T3 is the last moment taken into account by the search (latest)

What about, instead, taking a "larger" time frame for earliest/latest and then focusing on the events indexed between _index_earliest and _index_latest? Would this ensure that every single event is taken into account by such a search? (Splunk suggests "When using index-time based modifiers such as _index_earliest and _index_latest, [...] you must run your search using All Time", and although I'm not entirely sure about the performance impact of doing so while still filtering by _indextime, I think it would still be a good idea to account for an ideal maximum event lag, large but not too large, e.g. 24h, similar to the one mentioned here https://docs.splunk.com/Documentation/Splunk/9.1.1/Report/Durablesearch#Set_time_lag_for_late-arriving_events , whose exceedance could generate an alert of its own.)

Are there different and simpler ways to achieve such mathematical certainty, regardless of the indexing delay? (Of course, given that the search isn't skipped.) Thank you all.

Ps. Same question, asked in the generic forum: https://community.splunk.com/t5/Splunk-Enterprise-Security/Developing-reliable-searches-dealing-with-events-indexing-delay/m-p/664104
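One commonly used pattern for this problem (a sketch only, and not specific to accelerated data models; the index, sourcetype, lag window, and schedule are assumptions): keep the event-time window wide enough to cover the assumed maximum indexing lag, and constrain each scheduled run by index time instead, so late-indexed events are picked up by a later run.

index=security sourcetype=auth earliest=-48h latest=now _index_earliest=-70m _index_latest=-10m

With an hourly schedule, each run examines only the events indexed during its one-hour index-time slice (with a 10-minute safety margin), so an event is processed exactly once no matter how late it arrives, as long as the lag stays within the wide earliest/latest window.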
When I use timechart, if some trailing buckets have zero count, they are displayed as zero on the time axis, which extends to the end of the search window.  But in the same time window, if I use chart over _time, trailing zero-count buckets are removed.  For example,

index = _internal earliest=-3h@h latest=+3h@h ``` simulate trailing zero-count buckets```
| timechart span=1h count

gives

_time             count
2023-10-19 05:00  33798
2023-10-19 06:00  33798
2023-10-19 07:00  33949
2023-10-19 08:00  27416
2023-10-19 09:00  0
2023-10-19 10:00  0

Note the last two buckets are zero-count.  Whereas this

index = _internal earliest=-3h@h latest=+3h@h ``` simulate zero-count buckets ```
| bucket _time span=1h
| chart count over _time

gives

_time             count
2023-10-19 05:00  33798
2023-10-19 06:00  33798
2023-10-19 07:00  33949
2023-10-19 08:00  27438

The two trailing buckets are not listed, even though info_max_time is exactly the same. Is there a way to force chart to list all _time buckets between info_min_time and info_max_time?
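A sketch of one workaround (not necessarily the only one): make the _time axis continuous after the chart and fill the resulting gaps with zeros.

index = _internal earliest=-3h@h latest=+3h@h
| bucket _time span=1h
| chart count over _time
| makecontinuous _time span=1h
| fillnull value=0 count

Note that makecontinuous only fills gaps between the first and last buckets that already exist, so buckets trailing after the last non-empty one (up to info_max_time) are still not added; for that case timechart remains the simpler choice.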
Hello everyone! I am creating a role for Splunk users who will be able to edit alerts. Which capabilities should I choose for such users, so that the set is minimal but sufficient?
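For reference, a rough sketch of the kind of role definition involved (authorize.conf). The exact minimal set depends on your Splunk version and on which alert actions the users need, so treat the capability list below as an assumption to test rather than a definitive answer; the users also need write permission on the app and on the saved searches themselves.

[role_alert_editor]
importRoles = user
schedule_search = enabled
# depending on version and on the alert actions in use, further capabilities
# may be required, e.g. list_storage_passwords for actions that read credentials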
Hi all, I was trying to get details on some detected backends, for example the call type (async, RMI, etc.) and the class:method used for the exit call, which are typically detailed in transaction snapshots. Although these backends are visible in the application dashboard, there is no way to view only the snapshots that include a specific backend. Instead, one has to go through all the snapshots randomly until finding the one of interest. The only filtering currently available on transaction snapshots is by:
- Business Transaction
- Error
- Execution Time
- HTTP request Details
- Data Collector
- GUIDs

I think filtering on the backend would help with faster investigation and troubleshooting.
We have multiple HFs and one DS in our environment. We want to monitor the underlying Linux operating system that our HFs and DS run on by forwarding its OS events to the Splunk indexers. Is the process for doing this the same as for any other server: install a UF and enter the usual config?
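For reference, a minimal sketch of the kind of inputs.conf stanzas involved; the paths, index name, and sourcetypes are assumptions, and many environments use the Splunk Add-on for Unix and Linux instead of hand-written stanzas. Since HFs and the DS are already full Splunk Enterprise instances, the same inputs could also be added to those instances directly rather than installing a separate UF; verify which approach fits your deployment standards.

[monitor:///var/log/messages]
index = os_linux
sourcetype = syslog

[monitor:///var/log/secure]
index = os_linux
sourcetype = linux_secure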
I have configured OAuth in a custom account in the Splunk Salesforce Add-on app. After configuring the account and saving the configuration, it reaches out to Salesforce. I log in to Salesforce and it asks me to grant access. Once I click submit, the app comes back with the error "Error occurred while trying to authenticate. Please try again." I am not sure what the issue is, or whether something needs to be configured on the Salesforce side.
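When troubleshooting this kind of OAuth callback failure, the add-on's own logs usually carry the underlying error. A hedged sketch of where to look; the source pattern is an assumption based on the add-on's usual log file naming under $SPLUNK_HOME/var/log/splunk on the instance running the add-on, so adjust it to the splunk_ta_salesforce* files actually present there:

index=_internal source=*splunk_ta_salesforce* ERROR
| table _time, source, _raw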
We are utilizing the Log Event trigger action for an alert, and we'd essentially like to duplicate the event that's found into another index. There is some renaming that happens in the alert, so pulling _raw wouldn't include the renamed fields, correct? If _raw is the way to go, what is the token for this? $result._raw$?
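For reference, a sketch of how the Log Event action is typically parameterized in savedsearches.conf. $result.fieldname$ pulls the named field from the first result row, so $result._raw$ resolves to the original raw text and would not reflect fields renamed in the search; the target index name here is an assumption:

action.logevent = 1
action.logevent.param.index = duplicate_index
action.logevent.param.event = $result._raw$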
I have been running snmp_ta 1.8.0 for a few years and have many SNMP polls, both with and without MIBs. I am doing a walk, probably for the first time. I get successes, but then I get a bunch of these:

ERROR Exception resolving MIBs for table: 'MibTable' object has no attribute 'getIndicesFromInstId' stanza:snmp

and then I get two of these:

ERROR Exception resolving MIBs for table: list modified during sort stanza:snmp

and no more polling of this config. There are several IPs in this config. All works well for 2-4 polling cycles, then it stops. I can run snmpwalk -v 2c -c $PUBLIC -m $MIB and I get good results. I did recently install a new MIB for this device; the old MIB has the same issue, and the other configs work fine regardless. I am thinking it is related to snmpwalk, but I am having little success finding a solution. -- Frank
Is there a way to detect subsearch limits being exceeded in scheduled searches? I notice that you can get this info from REST:

| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$
| where isnotnull('messages.error')
| fields id savedsearch_name, app, user, executed_at, search, messages.*

And you can kinda join this to the _audit query:

index=_audit action=search (has_error_warn=true OR fully_completed_search=false OR info="bad_request")
| eval savedsearch_name = if(savedsearch_name="", "Ad-hoc", savedsearch_name)
| eval search_id = trim(search_id, "'")
| eval search = mvindex(search, 0)
| map search="| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$ | where isnotnull('messages.error') | fields id savedsearch_name, app, user, executed_at, search, messages.*"

But it doesn't really work: I get lots of REST failures reported and the output is bad. You also need to run it while the search artifacts are still present, although my plan was to run this frequently and push the result to a summary index. Has anyone had better success with this? One thought would be to ingest the data that is returned by the REST call (I presume var/run/dispatch). Or might debug-level logging help?
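A sketch of a map-free variant that lists all currently visible jobs in one REST call and keeps only those carrying warnings or errors; the field names are the ones already used above, and the summary index name search_job_errors is hypothetical. Running it on a short schedule keeps it inside the artifact retention window:

| rest splunk_server=local /services/search/jobs count=0
| where isnotnull('messages.error') OR isnotnull('messages.warn')
| fields id, savedsearch_name, app, user, search, messages.*
| collect index=search_job_errors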
I use a PowerShell script in a Splunk forwarder that sends data with Write-Output $line. Splunk receives this data in the _raw field. How should a PowerShell script write key-value pairs so that Splunk sees separate keys and values instead of just _raw?
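If each output line is written as space-separated key=value pairs, for example a line like status=OK duration_ms=42 user="alice" (a hypothetical format), Splunk's default search-time key-value extraction turns them into fields without any extra configuration. A minimal sketch of the search-time side, assuming a sourcetype named powershell_kv for the scripted input; KV_MODE defaults to auto, so the props.conf stanza is usually not even required:

props.conf (search head):
[powershell_kv]
KV_MODE = auto

Quick check in SPL:
index=main sourcetype=powershell_kv | table _time, status, duration_ms, user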
Hi, I have the following issue: I have many events with different document_number + datetime_type combinations, each carrying a field (started_on). There are always 4 different types per document_number. Four new timestamp fields are then evaluated from the type and the timestamp, so each event ends up with exactly one of the new fields filled. Now I need to fill the empty ones from the evaluated ones for the same document_number. With streamstats I was able to fill them forward (after the value is found), but not backwards. Is this possible somehow? Or only if I do | reverse and apply streamstats again?
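If each document_number carries exactly one value per evaluated timestamp field, an order-independent way to spread those values in both directions is eventstats (a sketch; ts_a through ts_d stand in for your four evaluated field names, which are assumptions):

| eventstats values(ts_a) as ts_a, values(ts_b) as ts_b, values(ts_c) as ts_c, values(ts_d) as ts_d by document_number

The reverse / streamstats / reverse pattern you mention also works; eventstats simply avoids the double pass over the data.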
Hi there! In an inputs.conf whitelist, how do I create a regex expression for whitelisting files whose names contain certain numbers (101-109, 201, 205, 301-303)?
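A sketch of one possible stanza; the monitor path is an example, and an inputs.conf whitelist is a regex matched against the full file path:

[monitor:///var/log/myapp]
whitelist = (10[1-9]|201|205|30[1-3])

Here 10[1-9] covers 101-109 and 30[1-3] covers 301-303. If other digits in the path could produce false matches, anchor it more tightly, e.g. file_(10[1-9]|201|205|30[1-3])\.log$ (the surrounding file name pattern being an assumption).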
Hi Team, I'm using a summary index for the below requirement:
1. Store daily counts of HTTP_STATUS_CODE per hour for each application (app_name) in a daily summary index.
2. Once a week, calculate the average for each app_name by hour and HTTP_STATUS_CODE from the values stored in the daily summary index.
3. Show these average values in a dashboard widget.

But when I try to calculate the average over the stored values, it isn't working. Below are the steps I'm following.

1. Pushing HTTP_STATUS_CODE, _time, hour, day, app_name, and count, along with value="Summary_Test" (for ease of filtering), to a daily index named "summary_index_1d". Note: app_name is an extracted field with 25+ different values.

index="index"
| fields HTTP_STATUS_CODE,app_name
| eval HTTP_STATUS_CODE=case(like(HTTP_STATUS_CODE, "2__"),"2xx",like(HTTP_STATUS_CODE, "4__"),"4xx",like(HTTP_STATUS_CODE, "5__"),"5xx")
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%A")
| bin _time span=1d
| stats count by HTTP_STATUS_CODE,_time,hour,day,app_name
| eval value="Summary_Test"
| collect index=summary_index_1d

2. Retrieving data from the summary index shows the pushed data:

index=summary_index_1d "value=Summary_Test"

3. Now I want to calculate the average over the previous 2 or 4 weekdays of data stored in the summary index. I'm using this as a reference: https://community.splunk.com/t5/Splunk-Enterprise/How-to-Build-Average-of-Last-4-Monday-Current-day-vs-Today-in-a/m-p/657868/highlight/true#M17385

This attempt to average the stored values fails:

index=summary_index_1d "value=Summary_Test" app_name=abc HTTP_STATUS_CODE=2xx
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%d")
| eval dayOfWeek = strftime(_time, "%u")
| where dayOfWeek >=1 AND dayOfWeek <= 5
| stats count as value by hour log_day day
| sort log_day, hour
| stats avg(value) as average by log_day,hour

I guess the "hour" in the query is creating a conflict. I tried without it and also by changing the values, but it does not return the expected result. When the same query is run against the main index, it works perfectly for my requirement; against the summary index it cannot calculate the average.

This works fine for the requirement, but fails when applied to the summary index:

index=index app_name=abc
| eval HTTP_STATUS_CODE=case(like(status, "2__"),"2xx")
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%d")
| eval dayOfWeek = strftime(_time, "%u")
| where dayOfWeek >=1 AND dayOfWeek <= 5
| stats count as value by hour log_day day
| sort log_day, hour
| stats avg(value) as average by log_day,hour

Can you please help me understand what's wrong with the query used on the summary index? @ITWhisperer @yuanliu @smurf
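One thing that may explain the difference (a sketch, not a confirmed diagnosis): the summary events were collected with _time binned to the day, so re-deriving hour and log_day from _time on the summary index no longer reflects the original event times, and stats count as value counts summary rows instead of using the stored count field. Averaging the fields that were stored at collect time avoids both issues:

index=summary_index_1d value=Summary_Test app_name=abc HTTP_STATUS_CODE=2xx
| eval dayOfWeek = strftime(_time, "%u")
| where dayOfWeek >= 1 AND dayOfWeek <= 5
| stats avg(count) as average by day, hour

Here day and hour are the fields written into the summary events, rather than values recomputed from the summary event's _time.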
Task failed: Store encrypted MySQL credentials on disk on host: ESS-LT-68Q9FS3 as user: appd-team-9 with message: Command failed with exit code 1 and stdout Checking if db credential is valid... Mysq... See more...
Task failed: Store encrypted MySQL credentials on disk on host: ESS-LT-68Q9FS3 as user: appd-team-9 with message: Command failed with exit code 1 and stdout Checking if db credential is valid... Mysql returned with error code [127]: /home/appd-team-9/appdynamics/platform/product/controller/db/bin/mysql: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory and stderr .
Hey everyone, I have this format that I'm pulling: cn=<name>,ou=<>,ou=people,dc=<>,dc=<>,dc=<>. I need to use only the cn= part. How can I do it with the regex command? Is that possible? Thanks!!
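Note that in SPL the regex command only filters events, while rex extracts new fields, so rex is probably what's needed here. A minimal sketch, assuming the DN string sits in a field called user_dn (adjust the field name, or use field=_raw):

| rex field=user_dn "^cn=(?<cn>[^,]+)"
| table user_dn, cn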