All Topics

The Microsoft Azure App Template for Splunk has a dashboard called Portal Notification Overview, which should show the Service Health events from the Activity Logs. Which input collects this data? We have Service Health events in the logs, but they are not showing up in Splunk, while other events from that log are. The dashboard's search is:

```
`azure-audit` (subscription_id="*") eventSource.value=ServiceHealth
| stats latest(_time) AS l latest(status.value) AS Status values(eventName.value) AS Event values(properties.Region) AS Region values(properties.Service) AS Service by properties.IncidentId
| rename properties.IncidentId AS IncidentId
| eval "Last Activity"=strftime(l, "%Y-%m-%d %H:%M:%S %z")
| table IncidentId "Last Activity" Status Service Event Region
```
I have 2 data inputs going to 2 separate indexes, and 2 different regular expressions to extract IPAddress and Hostname. How can I make it so only regex #1 is executed for index #1 and regex #2 is executed for index #2? I have used EXTRACT and REPORT entries in props.conf to perform search-time field extractions, but both regular expressions get executed and I get erroneous data in the IPAddress and Hostname fields. What's the best way to handle this situation?

Sample data from data input #1:

```
Feb 1 14:15:17 10.106.198.22 1 2021-02-01T19:07:01.490Z w2k19_02.xxxBD.local
```

Sample data from data input #2:

```
Feb 1 14:46:43 10.106.5.72 Feb 1 19:53:23 vCM_01.xxxbd.local appliance: <134>1 2021-02-01T19:53:18.318369Z vCM_01.xxxbd.local
```
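Search-time extractions in props.conf are keyed by sourcetype (or source/host), not by index, so one common approach is to give each input its own sourcetype and scope each regex to it. A minimal sketch, where the sourcetype names are placeholders and the regexes are illustrative only, loosely shaped around the two samples above:

```ini
# props.conf on the search head -- EXTRACT is scoped to its sourcetype,
# so each regex only runs against its own input's events.
# Sourcetype names and regexes below are illustrative placeholders.
[input1_sourcetype]
EXTRACT-ip_host = (?<IPAddress>\d{1,3}(?:\.\d{1,3}){3})\s+\d+\s+\S+\s+(?<Hostname>\S+)$

[input2_sourcetype]
EXTRACT-ip_host = (?<IPAddress>\d{1,3}(?:\.\d{1,3}){3}).*?\s(?<Hostname>\S+)\s+appliance:
```

With each input assigned its own sourcetype in inputs.conf, only the matching EXTRACT fires, so the cross-contamination of the two fields goes away.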
Hi, Splunk noob here. I cannot get a deployment client to show up in the deployment server. I turned DEBUG on in splunkd.log and can see that it communicates with the deployment server:

```
DEBUG DC:DeploymentClient - channel=deploymentServer/phoneHome/default Success sending phonehome to DS.
```

I have run tcpdump on the client: it makes a TCP connection to the DS, goes through the TLS handshake fine, and then 30 seconds later the client sends a FIN and then a RST to the deployment server. I get this freaking thing all the time, which I have googled of course, and the provided answer is worthless: https://community.splunk.com/t5/Monitoring-Splunk/What-does-this-error-message-mean-quot-something-needs-splunkd/m-p/230714

```
./splunk display deploy-client
Deployment Client is enabled.
This command [GET /services/messages/restart_required/] needs splunkd to be up, and splunkd is down.
```

Are these the normal splunk processes?

```
splunk 7682 1 0 12:47 ? 00:00:08 splunkd -p 8089 restart
splunk 7683 7682 0 12:47 ? 00:00:00 [splunkd pid=7682] splunkd -p 8089 restart [process-runner]
```

Any ideas? @gcusello
Each value of a multi-value field (FieldName: R_time), which holds times in epoch format, should be compared to the previous event's time and the next event's time. The epoch time should be less than the previous event's and greater than the next event's.
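One way to sketch this, assuming events in Splunk's default newest-first order (so "previous" in the stream is the more recent event): pull the neighboring events' times onto each event with streamstats (using reverse to get the neighbor on the other side), expand R_time, and test each value. Index name and the field name in_range are placeholders:

```
index=my_index
| streamstats current=f window=1 last(_time) AS prev_time
| reverse
| streamstats current=f window=1 last(_time) AS next_time
| reverse
| mvexpand R_time
| eval in_range=if(R_time < prev_time AND R_time > next_time, "yes", "no")
```

mvexpand turns each R_time value into its own row, so the eval comparison runs once per value rather than once per event.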
Hello everyone, how can I bold certain text elements in the message's body, please? For example:

Result: 4526 errors with Call function.

where 4526 = $result.nb$ and Call = $result.ft$

All the answers I've seen about this subject suggest modifying the sendemail.py file, but I can't modify this file. I also tried putting <b></b> around the text I wanted bold, with no result; in the mail I got this: <b>4526</b>

After this, I wanted to find a way to test the mail alert. I tried:

```
sendemail to=mymail@mail.com subject="" message="" sendresults=true
```

But I always get the same error:

```
command="sendemail", 'rootCAPath' while sending mail to:
```

How can I test a mail alert, please? Thanks.
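I can't say for certain what causes the 'rootCAPath' error, but a minimal ad-hoc test from the search bar looks like the sketch below (address and server are placeholders); note that sendemail takes its mail server and credentials from Settings > Server settings > Email settings unless overridden. As for bold text: if I understand the email action correctly, <b> tags are only rendered when the message content type is HTML (content_type = html in alert_actions.conf, or the equivalent option in the alert's email settings); otherwise they arrive escaped, which would match the literal <b>4526</b> you saw.

```
| makeresults
| sendemail to="mymail@mail.com" server="smtp.example.com" subject="Splunk mail test" message="<b>bold?</b> plain?" sendresults=false
```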
hi! I have a case where I need to onboard data from different hosts and paths, but under the same index. As an example, I need to onboard logfile /foo/bar1.log from server1, and /foo/bar2.log from server2. If I create one app, place [monitor:///foo/bar*.log] in inputs.conf, and add server1 and server2 to the serverclass, it will start to gather data from both files on both servers (I assume that they both exist on both servers). Now, the only workaround that comes to my mind is to separate them into 2 different apps, like:

app1: inputs.conf - [monitor:///foo/bar1.log], serverclass: server1
app2: inputs.conf - [monitor:///foo/bar2.log], serverclass: server2

The question is: is it possible to do it within one app?
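If each file only exists on its own server, a single app with two explicit stanzas may be enough, since a monitor input for a path that doesn't exist on a given host simply stays idle. A minimal sketch (the index name is a placeholder):

```ini
# inputs.conf in one app, deployed to both servers.
# A monitor stanza whose path is absent on a host collects nothing there.
[monitor:///foo/bar1.log]
index = my_index
disabled = false

[monitor:///foo/bar2.log]
index = my_index
disabled = false
```

If both files really do exist on both servers and you only want one per host, inputs.conf has no per-host conditional, so the two-app approach (or a per-host local override) would still be needed.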
I have configured email settings as below:

smtp.gmail.com:587 with TLS selected
username - email
password

Search queries I have tried:

```
1) index=_internal | head 5 | sendemail to="<myemail>" server=smtp.gmail.com subject="Here is an email from Splunk" message="This is an example message" sendresults=true
2) index=_internal | head 5 | sendemail to="<myemail>" server=smtp.gmail.com:587 subject="Here is an email from Splunk" message="This is an example message" sendresults=true
```

ERROR: command="sendemail", (530, b'5.7.0 Authentication Required. Learn more at\n5.7.0 https://support.google.com/mail/?p=WantAuthError - gsmtp',

```
3) index=_internal | head 5 | sendemail to="<myemail>" server=smtp.gmail.com:587 use_tls=1 subject="Here is an email from Splunk" message="This is an example message" sendresults=true
4) index=_internal | head 5 | sendemail to="<myemail>" subject="Here is an email from Splunk" message="This is an example message" sendresults=true
```

ERROR: command="sendemail", (534, b'5.7.9 Application-specific password required. Learn more at\n5.7.9 https://support.google.com/mail/?p=InvalidSecondFactor - gsmtp')

I have also tried after disabling less secure app access. I think the search query is not even trying to log in. Any help is appreciated, thanks.
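The 534 response is Gmail itself saying an app-specific password is required, which usually means the account has 2-Step Verification enabled. In that case the usual fix is to generate an app password in the Google account and enter it in Settings > Server settings > Email settings — sendemail takes its credentials from there, not from the search string. A test sketch after updating the stored credentials (mirroring the use_tls option from query 3 above):

```
| makeresults
| sendemail to="<myemail>" server="smtp.gmail.com:587" use_tls=1 subject="Splunk SMTP test" message="Test after switching to an app password" sendresults=false
```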
Hi, I have one index for Palo Alto, and other Palo Alto devices are already integrated and indexed to this index. I want to integrate and add one more Palo Alto to this index. How can I do it?
Hi community. Just preparing for my ARCH practical lab. I heard that it's mandatory to add the non-clustered SH to the MC as a search peer. However, I already configured the SH to send its internal data to the IDX cluster I have deployed. My question is: do I need to also configure the SH as a search peer on the MC in order to be able to monitor it, or will just the cluster master as a search peer (which automatically adds all the clustered indexers to the MC) do? In theory, if all the SH _internal data is at the IDX layer, the MC would look at the IDX cluster that already contains the forwarded _internal data from the SH, right? Please provide an explanation so I can beat the practical lab. Thanks!
Hi All, in our Splunk health dashboard panel we can see a list of sourcetypes with truncation issues; when digging into the _internal logs, we see the warning message below.

```
02-02-2021 18:23:11.436 +1100 WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 11639 - data_source="/var/icf/logs/xxx.xxx.0/xxx_0.log", data_host="xxxxx", data_sourcetype="xxx.wps.xxx"
```

I followed the steps below to analyse the issue further.

1) Checked the actual configuration on the HF instances where the parsing takes place by executing btool:
./splunk btool --app=appname props list --debug | grep TRUNCATE (to find the path where the app is configured and its TRUNCATE value)
2) ./splunk btool --app=appname props list --debug | grep sourcetype (to find the TRUNCATE value specific to the app and sourcetype)
3) Validated the props.conf details using cat /opt/splunk/etc/apps/appname/local/props.conf and found the actual configuration below:

```
[sourceytpename]
TRUNCATE = 800000
TIME_FORMAT =
TIME_PREFIX=\[
DATETIME_CONFIG=/etc/apps/appname/local/datetime.xml
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\[(?:\d{1,2}\/\d{1,2}\/\d{2}\s\d{1,2}:\d{2}:\d{2}\:\d{3}\s|\d{4}-\d{2}-\d{2}T\d{1,2}:\d{2}:\d{2}\.\d{3}(?:Z|[+-]\d\d?:?(?:\d\d)?))
MAX_TIMESTAMP_LOOKAHEAD=30
```

4) Using the configured TRUNCATE value as a reference, identified the maximum event length and its frequency of occurrence over the last 7 days:

```
sourcetype="xx.xx.xx" | eval length=len(_raw) | stats max(length) as length by sourcetype
```

The maximum length was more than 512273, but well below the configured TRUNCATE value of 800000. As for frequency of occurrence, only once did it exceed 500000:

```
sourcetype="xx.xx.xxt" | eval length=len(_raw) | where length>=500000 | stats count by _time length
```

Questions:
1) When the configured TRUNCATE value (800000) is greater than the maximum observed length (512273), we should not be getting any warning alert.
2) Increasing the TRUNCATE value will not solve this issue, since the configured value is already greater than the maximum observed length. Kindly guide me on how to fix this issue.
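One detail worth noting: the warning quotes "limit of 10000 bytes", which is Splunk's default TRUNCATE, so it may be that the stanza above is not actually being applied to the sourcetype named in the warning (a stanza-name mismatch, or props living on a different instance than the one doing the line breaking). A hedged check is to btool the exact sourcetype string from the warning on the parsing instance and see which TRUNCATE wins:

```
./splunk btool props list "xxx.wps.xxx" --debug | grep TRUNCATE
```

If this shows 10000 coming from a default or system stanza rather than 800000 from the app, the fix is to make the stanza name match data_sourcetype exactly on that instance.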
Hi everyone, I am currently receiving data/logs from my buckets. These logs have been categorized under the sourcetype aws:s3. I would like to create a condition, or otherwise make sure, that certain files in my S3 bucket are stored under another sourcetype, with line-level parsing applied. For example, from:

index=main sourcetype=aws:s3

to:

index=main sourcetype=s3_logs_customer

I wrote this in inputs.conf, but it does not work:

```
[source::s3://mypath/*_Report_ProdValid_*.csv]
REPORT-s3-logs-customer = s3-logs-customer

[ s3-logs-customer ]
DATETIME_CONFIG=CURRENT
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
KV_MODE=none
category=Structured
description=Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled=false
pulldown_type=true
FIELD_NAMES=id1,id2,id3,id4
FIELD_QUOTE='
FIELD_DELIMITER=,
```

Could you give me an example of code to insert in the inputs.conf and transforms.conf files to achieve my purpose? Thanks a lot.
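A sourcetype override is normally done in props.conf plus transforms.conf (applied on the instance that parses the data, e.g. a heavy forwarder or indexer), not in inputs.conf. A minimal sketch, where the source pattern and sourcetype name mirror the question and everything else is the standard override mechanism — whether the source:: pattern matches depends on how your S3 input sets the source field:

```ini
# props.conf
[source::s3://mypath/*_Report_ProdValid_*.csv]
TRANSFORMS-set_st = set_s3_logs_customer

# transforms.conf
[set_s3_logs_customer]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::s3_logs_customer
```

The CSV parsing settings (LINE_BREAKER, FIELD_NAMES, etc.) then go in props.conf under a [s3_logs_customer] stanza, so they apply to the events after the override.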
Hello, our goal is to define some alerts based on custom searches over our indexed data. We wrote the search query and are able to validate it against data: it matches correctly when using preset time intervals, and we can see, for instance, "14 events matched" in the summary, as well as the contents of those events. When we move to All time (real-time) and inject events into the system, we see something like "0 of 2 events matched" in the summary (for the 2 events that we injected), but nothing is displayed in the search results (and as a consequence the alert that uses this search isn't triggered). When we search again with a preset time interval (e.g. Last 15 minutes) and the same query, we can see those newly injected events. Our search query normally looks something like this:

```
index="snmp_data" SNMPv2-SMI::enterprises.3317.1.2.2.0.1
| rex field=_raw "UDP:\s\[(?<source_ip_address>[^\]]+)\].*SNMPv2-MIB::snmpTrapOID\.0\s=\sOID:\sSNMPv2-SMI::enterprises\.3317\.1\.2\.2\.0\.1.*RFC1269-MIB::bgpPeerState\.(?<peer_ip_address>[^\s]+)\s=\s[^:]+:\s"
| lookup dnslookup clientip as peer_ip_address outputnew clienthost as "peer_hostname"
| rex field="peer_hostname" mode=sed "s/.my.domain.net//"
| lookup dnslookup clientip as source_ip_address outputnew clienthost as "source_hostname"
| rex field="source_hostname" mode=sed "s/.my.domain.net//"
| eval time=strftime(_time, "%H:%M:%S %d-%m-%y")
```

But we've simplified it to just `index="snmp_data" SNMPv2-SMI::enterprises.3317.1.2.2.0.1`, to rule out regex parsing time and DNS lookup time, and we still get the same behavior. What's more puzzling is that for other searches of similar complexity, real-time alerting and searching works just fine. How can we further troubleshoot what's going on? Thanks
My front end uses jsnlog.logger to catch all unhandled exceptions in JS code and calls a handler in the .NET backend to post the exception details. I have verified this and it is working fine. However, in AppD business transactions this is shown as an error, and I'm not sure what to do with it; the error is just polluting the whole reporting. I found the link below, which talks about something related to overwriting the window.onerror event; I'm not sure if this is the cause: https://docs.appdynamics.com/display/PRO21/Handle+the+window.onerror+Event The screenshots below show what I see in my AppD reporting. So I am not clear on why it is being considered an error here. Any help would be highly appreciated. ^ Edited by @Ryan.Paredez for an improved title and minor readability.
I have a warning on the Monitoring Console of my cluster master that my index disk utilisation is critically high. I have a 30-day retention policy set on all my indexes, yet disk utilisation continues to rise. In Monitoring Console > Indexing > Indexes and Volumes > Instance, the Indexes table shows, under Data Age vs Frozen Age, that I have data much older than my retention policy. That suggests to me that I have hot buckets that have not rolled over, perhaps due to data ingest latency or incorrect timestamps preventing the roll-over. Could this be the source of my increasing disk utilisation? My understanding is that if a rolling restart of my indexers occurred then all hot buckets would roll, yet I know there has been a restart within the time frame indicated by the data age.
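One way to check whether stale hot/warm buckets are holding old data is dbinspect, which reports each bucket's state and the time span of the events it contains. A sketch (the index name is a placeholder):

```
| dbinspect index=my_index
| eval earliest=strftime(startEpoch, "%F"), latest=strftime(endEpoch, "%F")
| table bucketId state earliest latest sizeOnDiskMB
| sort startEpoch
```

If hot buckets show a startEpoch far older than the retention window, events with bad timestamps are likely keeping those buckets open, which would also explain the Data Age vs Frozen Age discrepancy.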
Hello Team, Where can I get the license for this app? https://splunkbase.splunk.com/app/1660/#/details
All, I need some help with the following topic. I have defined a few HTML tables and I am populating the values of a column from the result token of my base search. I am trying to write a script which will enable users to click on the first-column values of those HTML tables and set a couple of other tokens based on the clicked value:
1) a tier_token based on the clicked text, like Tier1, Tier2, etc.
2) a client_group token from the id of the table.
If I populate the table values with static values like 10, 20, etc., I am able to perform the clicks and set my tokens as I want. But when the values get populated from the result tokens ($result.fieldName$) of the base search, or when any token value gets set in the table, the clicks don't work at all. Could anyone please guide me on what I am doing wrong?

TABLE:

<row>
  <panel>
    <title>PRD</title>
    <html>
      <table id="PRD" class="table">
        <tr> <th>Tier</th> <th>Points</th> </tr>
        <tr> <td>Tier1</td> <td>$client_group$</td> </tr>
        <tr> <td>Tier2</td> <td>10</td> </tr>
        <tr> <td>Tier3</td> <td>$tier4_points_PRD$</td> </tr>
        <tr> <td>Tier4</td> <td>$tier5_points_PRD$</td> </tr>
        <tr> <td>Tier5</td> <td>$tier6_points_PRD$</td> </tr>
        <tr> <td>Tier Undefined</td> <td>$tier1_points_PRD$</td> </tr>
      </table>
    </html>
  </panel>
  <panel>
    <html>
      <table id="HPS" class="table">
        <tr> <th>Tier</th> <th>Points</th> </tr>
        <tr> <td>Tier1</td> <td>$tier2_points_HPS$</td> </tr>
        <tr> <td>Tier2</td> <td>$tier3_points_HPS$</td> </tr>
        <tr> <td>Tier3</td> <td>$tier4_points_HPS$</td> </tr>
        <tr> <td>Tier4</td> <td>$tier5_points_HPS$</td> </tr>
        <tr> <td>Tier5</td> <td>$tier6_points_HPS$</td> </tr>
        <tr> <td>Tier Undefined</td> <td>$tier1_points_HPS$</td> </tr>
      </table>
    </html>
  </panel>

and so on....
JS:   var components = [ "splunkjs/ready!", "underscore", "splunkjs/mvc/simplexml/ready!", "jquery" ]; // Require the components require(components, function( mvc, _, ignored, $ ) { console.log("Inside Custom Html Table Tokens JS"); setTimeout(function(){ //click only on first child(1st columns tds) $("table td:first-child").on("click", function() { console.log("Click Performed"); var submitted_tokens = mvc.Components.get('submitted'); var default_tokens = mvc.Components.get('default'); console.log("Tier Token initial:" + default_tokens.get('tier_token') ); console.log("ID Token initial:" + default_tokens.get('client_group') ); var texts = $(this).text(); //get text of td which is clicked var ids = $(this).closest("table").attr('id'); //get closest table where click event has taken place using the attribute `id` of table if ( ids == "PRD" || ids == "CMT" || ids == "HPS" || ids == "RES" || ids == "FS") { if (ids =="HPS") { ids = "H&PS"; } console.log("TEXT " + texts + " ID " + ids); submitted_tokens.set('tier_token',texts); submitted_tokens.set('client_group',ids); default_tokens.set('tier_token',texts); default_tokens.set('client_group',ids); submitted_tokens.set('form.tier_token',texts); submitted_tokens.set('form.client_group',ids); default_tokens.set('form.tier_token',texts); default_tokens.set('form.client_group',ids); console.log("Tier Token newly set:" + default_tokens.get('tier_token') ); console.log("ID Token newly set:" + default_tokens.get('client_group') ); } }) },100); });    
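One thing worth checking: when token values change, Simple XML re-renders the <html> panel and replaces the table's DOM nodes, so handlers bound directly to the <td> elements at load time are lost — which would match clicks working for static values but not token-populated ones. A delegated handler bound to the document survives re-renders; a minimal sketch of how the binding above could be rewritten (same token logic as in the question, abbreviated to the two main tokens):

```javascript
require(["splunkjs/mvc", "jquery", "splunkjs/mvc/simplexml/ready!"], function (mvc, $) {
    // Delegate from document so the handler survives panel re-renders
    // triggered by token updates.
    $(document).on("click", "table td:first-child", function () {
        var submitted_tokens = mvc.Components.get("submitted");
        var texts = $(this).text();                    // clicked tier label
        var ids = $(this).closest("table").attr("id"); // table id -> client group
        if (ids === "HPS") { ids = "H&PS"; }
        submitted_tokens.set("tier_token", texts);
        submitted_tokens.set("client_group", ids);
    });
});
```

This also removes the need for the setTimeout workaround, since delegation does not require the table to exist at bind time.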
Hello Community, I have to build a tamper-proof archive solution for data ingested in Splunk. I have thought about it over the last couple of days, and I would appreciate your ideas, or at best a known/experienced best-practice recommendation. The idea is to forward or store Splunk-indexed data tamper-proof (and non-deletable), so that I can be sure the data CANNOT be altered anymore. Recently I built this with an indexer forwarding to a syslog server (syslog format); the data is then copied to WORM storage. But I am not convinced that this solution is the ideal one. It works, but there are a few too many error sources in the chain. The other idea is to use the data integrity function to ensure that the data is not altered and still valid. If I am right, the indexed data can then only be deleted, not altered? I am not fully convinced of this idea either, because I would have to handle the checksum files, and that could be a lot with 250 GB of indexed data per day.

In sum, there are two ideas. Target: tamper-proof/non-deletable data from indexed events; a goodie would be fully secured transport of the data.

1. IDX forward (syslog format) -> syslog server -> copy to WORM storage
2. Use the data integrity function -> store checksums in WORM storage, because the data itself can then only be deleted.

I hope some of you have built such an archive solution in the past and can help me out. BR, Tom
I need to count the number of objects grouped by a transaction command. The command is:

```
index=* sourcetype="pan:*" | transaction src_ip maxspan=2min | table src_ip, app
```

I need to provide a count for "app" and then limit the results to only those groups with more than 5 apps returned within the time frame. Thank you, Mike
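After transaction, app becomes a multi-value field per group, so mvcount can give the per-group count and where can filter on it. A sketch building on the search above (the mvdedup, which counts distinct apps rather than occurrences, is an assumption you may or may not want):

```
index=* sourcetype="pan:*"
| transaction src_ip maxspan=2min
| eval app_count=mvcount(mvdedup(app))
| where app_count > 5
| table src_ip app app_count
```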
Hi, my servers (clients) are running Splunk Stream. I believe the deployment server contains the configuration that tells the client what to stream (DNS, DHCP, HTTP, etc.). Where can I find this on the deployment server, or is it available in a conf file somewhere on the clients?
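In a default Splunk Stream install (app names here are assumptions and may differ in your deployment), the client side is usually the Stream add-on, whose input stanza points the forwarder at the Stream app on the search head; the actual protocol selections (DNS, DHCP, HTTP, etc.) are managed in splunk_app_stream there rather than in a conf file on the client. A sketch of what to look for on the client, or under etc/deployment-apps on the deployment server:

```ini
# $SPLUNK_HOME/etc/apps/Splunk_TA_stream/local/inputs.conf on the client
[streamfwd://streamfwd]
splunk_stream_app_location = https://your-search-head:8000/en-us/custom/splunk_app_stream/
disabled = 0
```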
Hi Splunk, newbie here, and I want to ask about alerts. For example, we have data like:

```
Name  Time              StatusCode
AAA   2021-02-02 08:00  404
AAA   2021-02-02 08:01  200
BBB   2021-02-02 09:00  503
CCC   2021-02-02 09:01  404
BBB   2021-02-02 09:30  200
CCC   2021-02-02 09:30  200
```

How do we create an alert based on this table, on a cron schedule of every 5 minutes? If StatusCode != 200, send a "start down" alert notification, and if StatusCode = 200, send a "Solved" alert notification.

Example of the alert based on the table: "Hi AAA, you are down on 2021-02-02 08:00", then email again when the AAA StatusCode changes to 200: "Hi AAA, you are now SOLVED on 2021-02-02 08:01" — done, until the StatusCode changes to != 200 again and the alert sends the email again.

Another example: "Hi BBB, you are down on 2021-02-02 09:00", then the StatusCode changes to 200: "Hi BBB, you are now SOLVED on 2021-02-02 09:30".

In the Splunk alert menu, we didn't find an option to reset an alert when the trigger condition is no longer true. So we need help and advice. Thank you.
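One way to alert only on transitions, rather than resetting an alert, is to compare each Name's status with its previous status and keep only the rows where it changed; the scheduled search then returns results (and triggers) only when something goes down or recovers. A sketch, where index/sourcetype are placeholders and the message wording follows the examples above:

```
index=my_index sourcetype=my_status
| sort 0 Name _time
| streamstats current=f window=1 last(StatusCode) AS prev_code by Name
| where isnotnull(prev_code) AND prev_code != StatusCode
| eval state=if(StatusCode==200, "SOLVED", "DOWN")
| eval message="Hi ".Name.", you are ".if(state=="SOLVED", "now SOLVED", "down")." on ".strftime(_time, "%Y-%m-%d %H:%M")
| table Name _time StatusCode state message
```

With this as a scheduled alert (cron */5 * * * *, trigger when number of results > 0, with $result.message$ in the email body), each state change produces exactly one notification.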