All Topics



Hello, we have a problem with long JSON events that are over 5000 characters (under 5000 works fine). Automatic field extraction stops around the 5000-character mark, so fields beyond that point are not recognized at all. The JSON structure of these long events also isn't formatted in the result list after a search. We set extraction_cutoff in limits.conf to 10000 and restarted, but it didn't help. Is there another limit we can set? Thanks in advance!
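For reference, auto JSON extraction at search time is governed by the [spath] stanza of limits.conf (extraction_cutoff defaults to 5000 characters), and general key/value extraction by [kv]. A sketch follows — the values are examples, the setting names should be checked against your version's limits.conf spec, and the file must be on the search head(s), not only the indexers:

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf (on the search head)
[spath]
# automatic spath/JSON extraction stops after this many characters (default 5000)
extraction_cutoff = 20000

[kv]
# overall character limit for search-time key/value extraction (default 10240)
maxchars = 40960
```

A restart of the search head (or a debug/refresh) is needed after changing limits.conf.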
I'm trying to set up some HTTP origins for which to return Access-Control-Allow-* (CORS) headers. According to the Splunk documentation, I need to add these to the [httpServer] stanza of server.conf, under the crossOriginSharingPolicy setting. The problem occurs when the URL contains a slash, e.g. https://address.test.frontend.aws.cloud/network — Splunk raises an error during server start with the message: ValueError: invalid literal for int() with base 10. I think the issue is the slash in the URL; when I tested with the /network part removed, the server started fine.
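For comparison, here is a sketch of the stanza with the entry reduced to scheme://host only. Browsers send just the origin (scheme, host, and optional port) in the Origin header — never a path — so the /network suffix should not be needed for CORS matching:

```ini
# server.conf
[httpServer]
crossOriginSharingPolicy = https://address.test.frontend.aws.cloud
```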
We made a clean on-prem installation of Splunk Enterprise 8.0.9 and Enterprise Security 6.4.0. When a correlation search returns results, we would like to append those results to an email via the adaptive response action "Send Email". We selected the option to include an inline table, but regardless of this setting, the table of results is not added to the email. We made two additional findings: appending the results of a standard alert search (non-correlation search) to an email works, and setting sendresults = 1 in $SPLUNK_HOME/etc/system/local/alert_actions.conf also works, but not for all correlation searches. Has anybody encountered this problem, and how did you solve it?
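As a hedged alternative to changing alert_actions.conf globally in system/local, per-search overrides in savedsearches.conf may be worth testing — the stanza name below is a placeholder for the actual correlation search name:

```ini
# savedsearches.conf in the app that owns the correlation search
[My Correlation Search - Rule]
action.email.sendresults = 1
action.email.inline = 1
action.email.format = table
```

Per-search settings take precedence over the [email] defaults, which can help narrow down whether the problem is configuration layering or something specific to certain correlation searches.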
Hi everyone, I want to know the hardware requirements (CPU, disk, RAM) for an intermediate forwarder server. Thanks!
I have a timechart with more than 1 time series and would like to run the fit command on each of the time series separately. Is this possible with foreach or is another approach needed?
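Because fit (from the Machine Learning Toolkit) is a transforming command rather than an eval-style streaming expression, it generally cannot run inside a foreach template. One possible workaround — a sketch only, with a hypothetical index, field, and algorithm, assuming MLTK is installed — is to drive one fit per series with map:

```spl
index=my_metrics
| stats values(host) as host
| mvexpand host
| map maxsearches=20 search="search index=my_metrics host=$host$
    | timechart span=1h avg(value) as value
    | fit LinearRegression value from _time as prediction"
```

Note that map runs one search per host, so it is best limited to a modest number of series.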
Hello Splunk team! I'm having an issue setting the level parameter on the configuration object for SplunkLogger constructor on splunk-logging JS package version "^0.10.1". This is my code: const c... See more...
Hello Splunk team! I'm having an issue setting the level parameter on the configuration object for SplunkLogger constructor on splunk-logging JS package version "^0.10.1". This is my code: const config = { token: `${process.env.SPLUNK_TOKEN}`, url: `${process.env.SPLUNK_HOST}`, level: "error" }; const splunkLogger = new SplunkLogger(config); It doesn’t matter which level I set, it always sending all the logs severities.  Could you please help me on this? Thanks in advance!
Hi, I was trying to get the difference of the 2nd and 3rd rows and display it as a 4th status value. Below is my search (note that SPL boolean operators must be uppercase):

index=prod_e2 sourcetype=prod_csv type="n" | dedup order | stats count | eval status = "first" | append [search index=prod_e2 sourcetype=prod_csv type="n" AND desc="2" | dedup order | stats count | eval status="submit" | table status count] | append [search (index=prod_e2 sourcetype=prod_csv type="n" AND stat_desc="2" AND order_num!="0") OR (index=prod_e2 sourcetype=prod_csv) | dedup order | stats count | eval status="created" | table status count]

I was able to get:

status   count
first    20
submit   10
created  50

but I want:

status      count
first       20
submit      10
created     50
difference  40
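One way to append the difference row without a fourth subsearch — a sketch assuming the status values shown above — is appendpipe, which runs a subpipeline over the results accumulated so far:

```spl
... existing search with the three appended rows ...
| appendpipe
    [ stats sum(eval(if(status="created", count, 0))) as c
            sum(eval(if(status="submit", count, 0))) as s
      | eval status="difference", count=c-s
      | fields status count ]
```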
I get the above message when I try to save. This is happening despite refreshing the page. Looks like a bug?  
Hi team, how are you? I have a little question. I have an index:

index="epo" source="endpoint"

This search returns a column "JustificationText", which contains a ticket number, and with that number I need to search another index to get some information. Today I'm doing it this way:

index="epo" source="endpoint" | rex field=JustificationText "(?<number>REQ\d{7}|INC\d{7}|TRE\d{7}|CHG\d{7})" | eval TicketNumber = number | dedup ViolationLocalTime IncidentId | join type=left [search index=servicenow sourcetype="snow:service_task" dv_number = TicketNumber] | table Status ViolationLocalTime IncidentId UserName Name JustificationText TotalContentSize RulesToDisplay contact_type dv_u_requested_by dv_location

All the data from the first search comes back fine, but the second search using the variable TicketNumber returns nothing. If, for example, I hard-code a ticket:

| join type=left [search index=servicenow sourcetype="snow:service_task" dv_number = "REQ0000197"]

data is returned, but it is the same for all events. My question is: how can I do this second search using a variable? Thanks in advance!
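In the subsearch, dv_number = TicketNumber compares against the literal string "TicketNumber" rather than the field from the outer search. One common fix — a sketch, keeping your field names — is to join on a shared field name and rename inside the subsearch:

```spl
index="epo" source="endpoint"
| rex field=JustificationText "(?<TicketNumber>(REQ|INC|TRE|CHG)\d{7})"
| dedup ViolationLocalTime IncidentId
| join type=left TicketNumber
    [ search index=servicenow sourcetype="snow:service_task"
      | rename dv_number as TicketNumber ]
| table Status ViolationLocalTime IncidentId UserName Name JustificationText contact_type dv_u_requested_by dv_location
```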
Hi all, I have a basic dashboard with two panels, a pie chart and a table. When you click a slice of the pie, the table shows the values for the selected slice, but how can I revert back to the original table — like a clear-filter for the tokens, or a home button that restores the original table? Can this be achieved without JavaScript?

<dashboard>
  <label>DrillDown</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=_* | stats count by sourcetype</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">pie</option>
        <drilldown>
          <set token="tok_sourcetype">$click.value$</set>
        </drilldown>
      </chart>
    </panel>
    <panel depends="$tok_sourcetype$">
      <table>
        <search>
          <query>index=_* sourcetype=* | stats count by sourcetype | search sourcetype="$tok_sourcetype$" OR sourcetype="*"</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <unset token="tok_sourcetype"></unset>
        </drilldown>
      </table>
    </panel>
  </row>
</dashboard>
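One JavaScript-free pattern worth sketching here — the input token name and label are made up — is a link input that unsets the pie-selection token, so clicking it restores the unfiltered view:

```xml
<fieldset submitButton="false">
  <input type="link" token="reset_view" searchWhenChanged="true">
    <label></label>
    <choice value="clear">Clear selection</choice>
    <change>
      <condition value="clear">
        <unset token="tok_sourcetype"></unset>
        <unset token="form.reset_view"></unset>
      </condition>
    </change>
  </input>
</fieldset>
```

Unsetting form.reset_view as well lets the link be clicked again after the next drilldown.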
I want to filter out transactions (where status="InProgress") that started in the previous slot or completed in the next slot. Basically, if earliest=04/27/2021:12:0:0 and latest=04/28/2021:12:0:0, I want to display only those transactions that started in the specified period and are in completed status. I have a previous query like this:

stats earliest(_time) as InTime, latest(_time) as OutTime, values(eval(if(code="E100" OR code="E101" OR code="E102", code, null()))) as error, count(eval(code="E010")) as messageReceived, count(eval(code="E030" OR code="E031")) as messageCompleted | sort - InTime
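If the goal is to keep only transactions that both started inside the search window and reached completed status, one sketch — using addinfo, which exposes the search time bounds as info_min_time and info_max_time — is:

```spl
... existing stats from the query above ...
| addinfo
| where InTime >= info_min_time AND OutTime <= info_max_time
  AND messageCompleted > 0
| fields - info_min_time info_max_time info_search_time info_sid
```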
When creating an alert that emails a .csv file, the .csv contains only 9,000 rows, with an error that just the first 9,000 of the 40,000 results are included. Please advise.
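The cap on results attached to an email alert is configurable in alert_actions.conf. A sketch — the value is an example, and the exact default should be checked against your version's alert_actions.conf spec:

```ini
# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
# maximum number of search results included with an email alert
maxresults = 50000
```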
Hi guys. I have a problem with certificate revocation on a Splunk forwarder.

Description: There are 3 Red Hat VMs:
Certification Authority (CA) - Easy RSA installed, plus an Apache server to publish certificates
Indexer (IDX) - full Splunk server installation
Forwarder (FW) - Splunk forwarder

I managed to create certificates for both IDX and FW, then signed them using EasyRSA on the CA. The system is able to establish an SSL connection between IDX and FW. So far I am HAPPY. But when I use the CA to revoke the FW certificate, Splunk does not detect this change and still treats the FW certificate as valid. After revoking the FW certificate I published the new CRL in /var/www/pki/crl.pem. Using a browser I am able to download it and check that the certificate was revoked. From /var/log/httpd/access_log I can tell that neither IDX nor FW has ever accessed the CRL. I tried setting sslCommonNameToCheck. This works fine, but it is unsuitable for me because the final solution has hundreds of forwarders and maintaining the list in sslCommonNameToCheck is too cumbersome. I also tried splunk reload crl with no success.

File settings:

IDX (server.conf)
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/myauth/ca.pem

IDX (inputs.conf)
[splunktcp-ssl:9997]
disabled = 0
[SSL]
serverCert = /opt/splunk/etc/auth/myauth/myNewServerCertificate.pem
sslPassword = $7$qV7bjcVNcqRlm70Y1cpaazqeGFmH6nyfnNN1TSCDu82ZPhnqMw==
requireClientCert = true

FW (server.conf)
[sslConfig]
sslRootCAPath = /opt/splunkforwarder/etc/auth/myauth/ca.pem

FW (outputs.conf)
[tcpout]
defaultGroup = indexer2
[tcpout:indexer2]
server = xx.xx.xx.xx:9997
clientCert = /opt/splunkforwarder/etc/auth/myauth/myNewClientCertificate.pem
sslPassword = $7$hibYhkL2wOexhWDmyBqMEk358HGFaLe4jQ8RT6ruDsEeQmS6Ww==

Thank you for your time and much-needed advice.
Hello all, I am pretty new to Splunk and still learning day by day. I have a question: in my organisation we have a ton of JBoss servers and we plan to do centralized logging for all of them, with each application name as an index name. I want to manage all of this from the deployment server. Instead of creating one app per application, I want to create a single app for JBoss logs, with inputs and outputs deployed from the deployment server. Now, how can I handle the multiple-index issue? Does Splunk allow the use of two different inputs.conf files? I was planning to create an inputs.conf under etc/system/local with just the value index=app1 and let the deployment server take care of the JBoss log monitoring. Will this work? Please suggest ideas.
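Splunk does merge inputs.conf stanzas of the same name across apps, so one layered sketch — all app names and log paths below are hypothetical — is a shared base app plus a tiny per-application app deployed to each server class, rather than editing etc/system/local (which overrides all deployed apps and cannot be managed centrally):

```ini
# App deployed from the deployment server to every JBoss host:
# jboss_base/local/inputs.conf
[monitor:///opt/jboss/standalone/log/server.log]
sourcetype = jboss:server

# Small app deployed only to app1's server class:
# jboss_index_app1/local/inputs.conf
[monitor:///opt/jboss/standalone/log/server.log]
index = app1
```

The two stanzas merge at runtime, so app1's hosts monitor the same path but route events to index app1.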
Hi there - I tried a few things already, but I'm looking for guidance on this one. I am using the LDAP query module in Splunk to dump directory information and present it in a simple table, and I'm running into a challenge extracting just the date from the AD account-creation field:

| ldapsearch basedn="XXXXXXXXXXX" search="(&(objectCategory=user)(objectClass=user)(distinguishedName=*))" attrs="displayName,distinguishedName,mail,lastLogonTimestamp,whenCreated"

I want to simplify the presentation of the two date/time fields, lastLogonTimestamp and whenCreated. What I get today when I output these fields to a table (example): 2019-05-06 16:53:24+00:00. What I want to see: 2019-05-06. What I have tried: adding | eval Created=strftime(whenCreated,"%Y%m%d") | before my table command. This results in nothing being populated in the new field (I expected just a date value). I am not sure the strftime command is correct for this format of data. Thoughts welcomed, as always.
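The likely cause is that whenCreated arrives as a string, while strftime expects epoch time, so it returns null. Two options to sketch — the first parses then reformats, the second simply takes the leading date characters and avoids format guessing entirely (useful if the trailing +00:00 offset trips up strptime):

```spl
| eval CreatedParsed=strftime(strptime(whenCreated, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| eval CreatedSimple=substr(whenCreated, 1, 10)
```

Since the value is already in ISO order, substr alone gives 2019-05-06 directly.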
Hi all, I am just learning Splunk because we need our add-on to be cloud vetted (so I really don't know much Splunk development yet, but I need to get this done soon). It is an add-on that connects to our server and pulls in logs. Each pull has to authenticate first, so we save the token locally and only need to authenticate once per interval (the lifetime of the token). But that won't do for a cloud-based add-on ("Storing authentication token in checkpoint which is not allowed in Splunk cloud. Please don't store such sensitive information"). Is there another way I can save the token for the add-on? I heard mention of the storage/passwords endpoint — is that only for user credentials, or can it be used for saving an application token as well? Thanks in advance! I searched all day and am still not clear on the direction.
I've got logs that contain a timestamp in 24-hour YYYY-MM-DD HH:MM:SS.SSS format (example: 2021-04-29 18:43:07.557). The timestamp in this log message is 5 hours ahead of the _time of the event. So far I've got this much, which extracts the timestamp from the message, but I don't know how to show the difference between the two, especially with the five-hour offset. Ideally I would just like to show a third column with the difference in the table. I appreciate any instruction.

sourcetype="PCF:log" cf_app_name=app1 (msg="*message query here*") | rex field=msg "created on\s+(?<lockTime>\S+\s+\S+)" | table _time,lockTime

_time                      lockTime                   Expected
2021-04-28 12:46:37.381    2021-04-28 17:46:33.961    00:00:03.420

I should mention too that only the time portion, not the date, needs the difference calculated. The YYYY-MM-DD will always be the same between _time and lockTime.
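One way to sketch this — converting lockTime to epoch with strptime, subtracting the fixed five-hour offset, and formatting the result as a duration:

```spl
sourcetype="PCF:log" cf_app_name=app1 (msg="*message query here*")
| rex field=msg "created on\s+(?<lockTime>\S+\s+\S+)"
| eval lockEpoch=strptime(lockTime, "%Y-%m-%d %H:%M:%S.%3N")
| eval diffSecs=(lockEpoch - 5*3600) - _time
| eval difference=tostring(round(diffSecs, 3), "duration")
| table _time lockTime difference
```

If the five-hour gap is actually a timezone mismatch in indexing, fixing TZ in props.conf for the sourcetype would remove the need for the hard-coded offset.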
Hello everyone, I'm new here and to Splunk in general. I have completed Fundamentals 1 and am now on the labs of Module 3 in Fundamentals 2. I am running into an issue with the drilldown editor on the edit-dashboard page. The lab instructions say to click the 3 dots on the right of the screen (the More actions button), but when I do, the "More actions" bubble pops up with nothing in it, and it then stays frozen there until I leave or refresh the page. The other 3 buttons work fine. I am using Microsoft Edge, and I tried Chrome but got the same result. Is there another way to access the Drilldown Editor? I can't progress in the course without actually completing the lab.
Hello, I have events that look like this (for a user with id 123): 2021-04-29 14:30:45 Notification Received [User Id:123, location:null, location id:null] 2021-04-29 14:30:22 Response Sent for user id:123 2021-04-29 14:30:15 Notification Received [User Id:123, location:null, location id:null] 2021-04-29 14:29:56 Notification Received [User Id:123, location:null, location id:null] 2021-04-29 14:29:43 Notification Received [User Id:123, location:null, location id:null] 2021-04-29 14:29:35 Response Sent for user id:123 2021-04-29 14:28:59 Notification Received [User Id:123, location:null, location id:null]   I have a query where I am getting transactions that start with a Notification Received message and end with a Response Sent. My query works, but it does not start a transaction with the earliest instance of a starting point. So currently my query would return transactions with the timestamps of: 1)  2021-04-29 14:28:59  and  2021-04-29 14:29:35 2)  2021-04-29 14:30:15  and  2021-04-29 14:30:22 But I want the second transaction to retrieve the below instead: 2021-04-29 14:29:43  and  2021-04-29 14:30:22   Is there a way to do this in a transaction? Or a way to rewrite the query to get this? My query looks like this: index=INDEX host=HOSTNAME sourcetype=SOURCETYPE | rex field=_raw "Notification\sReceived\s\[User\sId:(?<user_id>\d+),\slocation:\w+,\slocation id:\w+\]" | rex field=_raw "Response\sSent\sfor\suser\sid:(?<user_id>\d+)" | where user_id<2000 | sort 0 user_id, -_time | transaction user_id startswith="Notification" endswith="Response" maxopenevents=2 | where duration>0 | rename _time as message_arrival | eval message_arrived_at=strftime(message_arrival, "%Y-%m-%d %H:%M:%S") | eval response_sent_at=strftime(message_arrival + duration, "%Y-%m-%d %H:%M:%S") | table user_id, message_arrived_at, response_sent_at, duration  
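One alternative to transaction worth sketching — using streamstats to number sessions so that every Notification since the previous Response groups with the next Response, which makes earliest(_time) the first notification of the run. The kind/session field names are made up for illustration:

```spl
index=INDEX host=HOSTNAME sourcetype=SOURCETYPE
| rex field=_raw "(User\sId|user\sid):(?<user_id>\d+)"
| where user_id<2000
| sort 0 user_id _time
| eval kind=if(searchmatch("Response Sent"), "response", "notification")
| streamstats count(eval(kind="response")) as responses_so_far by user_id
| eval session=if(kind="response", responses_so_far - 1, responses_so_far)
| stats earliest(_time) as message_arrival latest(_time) as response_time by user_id session
| eval duration=response_time - message_arrival
| where duration>0
| eval message_arrived_at=strftime(message_arrival, "%Y-%m-%d %H:%M:%S")
| eval response_sent_at=strftime(response_time, "%Y-%m-%d %H:%M:%S")
| table user_id message_arrived_at response_sent_at duration
```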
Where do I find the data being collected for CPU and RAM in Splunk Enterprise data inputs for my Windows and Unix hosts? I need this as part of trimming the fat from our license usage. Is there a best-practices document for trimming such data that is either redundant or useless? Thank you.
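To see which sourcetypes and hosts are consuming license, one common sketch is to query the license usage log on the license master (fields: b = bytes, st = sourcetype, h = host):

```spl
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by st h
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - bytes
```

Filtering on st values like Perfmon* or cpu/vmstat sourcetypes then shows how much of the license the CPU/RAM inputs account for before you decide what to trim.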