Hello everyone,

Here is my search:

my_severity=error my_app="name" earliest=-48h latest=-24h
| stats count as nb_yesterday by my_method limit=0
| appendcols [search my_severity=error my_app="name" earliest=-24h latest=now | stats count as nb_today by my_method]
| eval increase=round(nb_today*100/nb_yesterday)
| eval status=if(increase>100 OR nb_today>10, "CRITICAL", "GOOD")
| table my_method, nb_yesterday, increase, status, nb_today
| sort nb_today desc

my_severity, my_app and my_method are fields that I created myself in my search. I get multiple results (and multiple rows), and I want to send one mail with the list of CRITICAL statuses, like:

"Hello, we noticed some errors:
[name of method(1)] [status] [increase] [nb_today]
[name of method(2)] [status] [increase] [nb_today]
[name of method(3)] [status] [increase] [nb_today]
..."

How can I send one mail with all of the "CRITICAL" rows, for example?

When I configure the mail alert with this body message: "The method "$result.my_method$" was $result.status$ with $result.nb_today$ errors in the last 24 hours. (That's a $result.increase$% increase)", I only get a mail with the information from the first row.

Thanks.
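In an alert email body, $result.field$ tokens expand only from the first result row, which matches the behavior described above. One common workaround (a sketch using the field names from the question, not a confirmed answer) is to filter to the CRITICAL rows and collapse them into a single field on a single row, so the "first row" already carries the whole list:

```
... | where status="CRITICAL"
| eval line = my_method." ".status." ".increase."% ".nb_today
| stats list(line) as report
| eval report = mvjoin(report, " ; ")
```

With this appended to the search, $result.report$ in the message body expands to the full list of CRITICAL methods. Alternatively, the email alert action has an option to include the search results inline as a table, which avoids building the list by hand.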
Hello, I just encountered a problem fitting and applying the StateSpaceForecast algorithm in MLTK. I can fit and save a model, but when I apply the saved model it doesn't work: it doesn't return an error, but it doesn't show the predicted values. This only happens with StateSpaceForecast and not with other algorithms, and I didn't find anything in the documentation. Could it be a problem with the new version of MLTK, or an internal problem on our side?

Thanks
Maryam
In it I have a multiselect checkbox view with dynamically created checkboxes for 1000 items. How can I include a search text box so users can type text, filter the checkboxes, and select them? Please help out.
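One way to sketch this in Simple XML is to feed a text input's token into the search that populates the multiselect, so typing narrows the choices. This is a minimal sketch under assumptions: the lookup name items.csv and field name item are placeholders for however the 1000 items are actually produced.

```xml
<input type="text" token="filter" searchWhenChanged="true">
  <label>Type to filter</label>
  <default>*</default>
</input>
<input type="multiselect" token="selected_items">
  <label>Items</label>
  <search>
    <query>| inputlookup items.csv | search item="*$filter$*" | fields item</query>
  </search>
  <fieldForLabel>item</fieldForLabel>
  <fieldForValue>item</fieldForValue>
</input>
```

As the user types into the text box, the multiselect's populating search re-runs with the new $filter$ value and only the matching checkboxes remain to select.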
Hi everyone, I have a specific question for all of you.

In Splunk ES I created a correlation search and a notable for the Incident Review monitoring section. I set up the notable with a drilldown, to which I pass a field from the correlation search (CS) so that it can perform a specific search and display results via the Statistics tab.

Correlation search:

index=* (statusCode=4* OR statusCode=5*)
| rename "requestTime" as Time, "statusCode" as Status, "sourceIp" as SourceIp, "httpMethod" as HttpMethod, "endpointRequestId" as "EndpointReqID"
| stats values(Status) as Status, values(HttpMethod) as HttpMethod, count by index, SourceIp, EndpointReqID

Notable drilldown:

index=* (statusCode=4* OR statusCode=5*)
| search sourceIp="$sourceIp$"
| rename "requestTime" as Time, "statusCode" as Status, "sourceIp" as SourceIp, "httpMethod" as HttpMethod, "endpointRequestId" as "EndpointReqID"
| stats values(Status) as Status, values(HttpMethod) as HttpMethod, count by index, SourceIp, EndpointReqID

When I open the drilldown from the notable screen, the following query is returned:

index=* (statusCode=4* OR statusCode=5*)
| search sourceIp="$sourceIp$"
| rename "requestTime" as Time, "statusCode" as Status, "sourceIp" as SourceIp, "httpMethod" as HttpMethod, "endpointRequestId" as "EndpointReqID"
| stats values(Status) as Status, values(HttpMethod) as HttpMethod, count by index, SourceIp, EndpointReqID

Instead of:

index=* (statusCode=4* OR statusCode=5*)
| search sourceIp="129.12.x.x"
| rename "requestTime" as Time, "statusCode" as Status, "sourceIp" as SourceIp, "httpMethod" as HttpMethod, "endpointRequestId" as "EndpointReqID"
| stats values(Status) as Status, values(HttpMethod) as HttpMethod, count by index, SourceIp, EndpointReqID

Why is the $sourceIp$ token not recognized and replaced with the IP address from the CS so that it can perform a specific search? What is the error? Thank you all!
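Drilldown tokens are only substituted from fields that actually exist on the notable event. A plausible cause here (an assumption, not confirmed by the post): the correlation search renames sourceIp to SourceIp before the stats, so the notable carries a SourceIp field and no sourceIp field, leaving $sourceIp$ unexpanded. A sketch of the drilldown using the field the CS actually emits:

```
index=* (statusCode=4* OR statusCode=5*) sourceIp="$SourceIp$"
| rename "requestTime" as Time, "statusCode" as Status, "sourceIp" as SourceIp, "httpMethod" as HttpMethod, "endpointRequestId" as "EndpointReqID"
| stats values(Status) as Status, values(HttpMethod) as HttpMethod, count by index, SourceIp, EndpointReqID
```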
I have a bunch of alerts and reports scheduled - unfortunately most of them are scheduled for the same time. Is it possible to automatically review and correct those times? Does Splunk Enterprise Security have any "collision avoidance" mechanism to prevent this from happening when creating new correlation searches?
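Splunk's scheduler can smooth out concurrent start times on its own if a scheduled search is given a schedule window, which allows the scheduler to delay the start within that window when the system is busy. A minimal savedsearches.conf sketch (the stanza name is a placeholder for one of the colliding searches):

```
# savedsearches.conf -- let the scheduler shift the start time to balance load
[My correlation search]
schedule_window = auto
```

This does not retroactively re-stagger existing cron schedules, but applying it to non-time-critical searches relieves the contention when many of them fire at the same minute.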
Hi,

Is Splunk Enterprise shipped with a JRE? It contains a lot of JARs, but I did not find a typical JRE. If yes, where do I find the exact version, and how often does Splunk update it? If no, why all the JARs? When looking under var/run/searchpeers I see references from the Splunk archiver to OpenJDK8U, but only directories, no binaries.

thx
afx
Hi Splunkers,

I have two questions regarding the universal forwarder and a csv file.
1. Is it possible to configure the universal forwarder to forward a file at 11 PM every night, irrespective of whether the file has changed or not? (I understand that the whole file will be forwarded each night.)
2. How can I force the universal forwarder to resend the whole file? Can changing the timestamp do the trick?

Thanks,
Termcap
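A monitor input only picks up content it considers new (it tracks content checksums, not modification times, so changing only the timestamp does not force a re-read). One approach to a fixed nightly re-send, sketched under assumptions (the paths, sourcetype, and schedule are placeholders), is to have a cron job copy the file into a directory watched by a batch input, which indexes and then deletes whatever lands there regardless of whether the content changed:

```
# inputs.conf on the universal forwarder
[batch:///opt/splunk_drop/nightly/*.csv]
move_policy = sinkhole
sourcetype = nightly_csv

# crontab entry on the host -- drop a fresh copy in at 23:00 each night:
# 0 23 * * * cp /data/export/report.csv /opt/splunk_drop/nightly/report_$(date +\%F).csv
```

Giving each copy a dated filename avoids the batch input deduplicating identical content across nights via its CRC check.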
Hi,

I have a log4j file where the lines are not parsed correctly. Can anyone help me create a sourcetype for Splunk? When there is a date in the XML, the line is broken (red) because of this props.conf on the splunkforwarder client:

[log4j]
BREAK_ONLY_BEFORE = \d\d?:\d\d:\d\d
pulldown_type = true
maxDist = 75
category = Application

What can I do to make this one line, and only break when the line begins with a date?

21 Jan 2021 15:11:25.832 [publicerenExecutorService18] INFO nl.anwb.flow.connector.publicerenhvstatus.PublicerenHvStatusConnectorStub - <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Header/><SOAP-ENV:Body><PublicerenHulpverleningStatusRequest xmlns="http://anwb.nl/webservices/publiceren_hulpverlening_status_request/3"> <Header> <BusinessProcesNaam>DistributieHulpverleningStatus</BusinessProcesNaam> <BusinessTransactieReferentie>xxxxx</BusinessTransactieReferentie> <BronSysteem> <BronSysteemNaam>xx</BronSysteemNaam> </BronSysteem> <Volgorde> <Indicator>2021-01-21T14:11:25.825Z</Indicator> </Volgorde> <AfleverKanaal> <Code>xxxx</Code> <AfleverPartij> <Naam>xxxx.</Naam> </AfleverPartij> </AfleverKanaal> <AfleverKanaal> <Code>HV_ONLINE_INZAGE</Code> <AfleverPartij> <Naam>xxxx</Naam> </AfleverPartij> </AfleverKanaal> <AfleverKanaal> <Code>xx</Code> <AfleverPartij> <Naam>xxx</Naam> </AfleverPartij> </AfleverKanaal> </Header> </PublicerenHulpverleningStatusRequest></SOAP-ENV:Body></SOAP-ENV:Envelope>

Kind regards
Ido Dijkstra
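The BREAK_ONLY_BEFORE above matches any hh:mm:ss, including timestamps inside the XML payload, which is why events split mid-message. A sketch that breaks only before a leading "21 Jan 2021 15:11:25.832"-style timestamp, assuming every event starts with one; note that for data arriving via a universal forwarder, these parsing settings must live where parsing happens (the indexer or a heavy forwarder), not on the UF itself:

```
[log4j]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{1,2} \w{3} \d{4} \d{2}:\d{2}:\d{2}\.\d{3})
TIME_FORMAT = %d %b %Y %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```

The lookahead in LINE_BREAKER consumes only the newline, so the timestamp stays at the front of each event for TIME_FORMAT to parse.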
Hello everyone,

I need help because I have issues with the collect command and with data from LDAP (collected with the ldapsearch command). My goal is to collect data from LDAP with the command

| ldapsearch domain=default search="(&(objectClass=user))" attrs="<attribute_list>"

and index it in the "ldapdata" index. For this purpose I wanted to use the collect command:

| collect index=ldapdata sourcetype=ldap

From ldapsearch I get events:

_raw1 = {JSON 1}
_raw2 = {JSON 2}
_raw3 = {JSON 3}
...
_rawN = {JSON N}

After the collect command I get these events as one big event in the ldapdata index ($ is end of line):

_raw1 = {JSON 1}${JSON 2}${JSON 3}$...{JSON N}$

Can somebody advise on how to index this data as separate JSON events? Thanks for your help!
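When collect writes with a custom sourcetype, the collected data is parsed again on the indexing tier, and without line-breaking rules for that sourcetype adjacent JSON objects can be merged into one event. A sketch of a fix, assuming each JSON object starts on its own line: define line breaking for the ldap sourcetype in props.conf on the indexers, so each `{...}` becomes its own event.

```
# props.conf on the indexing tier
[ldap]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{)
KV_MODE = json
```

The lookahead keeps the opening brace as part of the event, so the JSON stays intact for KV_MODE = json field extraction at search time.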
Hi All,

I'm trying to apply several models in one query, where the model names themselves are evaluated from a subsearch, so nothing is known about the model names ahead of the query. Building the list of model names has been solved, but passing them into the apply API fails to be interpreted correctly: a literal interpretation of the field holding the model name dereferences the field name itself.

I can properly populate the model name if I use a dashboard token in the search query, where the token is simply replaced with the token's value before the query is sent to search. But this solution won't work outside of dashboards.

Is there a trick I can use to force the replacement of an eval field to what the apply API sees? The general attempt below gives the error "Error in 'apply' command: Invalid model name "(matchstr)"":

| eval models = "model1,model2"
| eval model=split(models,",")
| mvexpand model
| eval predicted_{model}="\"" + model + "\""
| stats values(*) as * by _time
| foreach predicted_* [ apply(<<MATCHSTR>>) ]

Likewise, setting a model name via eval below gives "Error in 'apply' command: Invalid model name "(singleModel)"":

| eval singleModel = "model1"
| apply(singleModel)
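Since apply accepts only a literal model name at parse time, eval fields and foreach templates are dereferenced too late. One trick that behaves like the dashboard-token substitution described above (a sketch; the base search inside the map string is a placeholder) is the map command, which substitutes $field$ values into its search string as text before that search is parsed:

```
| makeresults
| eval model = split("model1,model2", ",")
| mvexpand model
| fields model
| map maxsearches=10 search="search index=mydata | apply $model$"
```

Each map iteration runs as a separate search with the model name already inlined, so the results of the different models come back appended one after another rather than joined on a row.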
Follow-on question to https://community.splunk.com/t5/Getting-Data-In/Can-batch-read-a-partial-file-such-that-the-of-events-indexed-is/m-p/536557#M89931

[Q] For inputs.conf, can the time_before_close setting be used with [batch]? The inputs.conf spec specifically lists which [monitor] settings are compatible with [batch], and this setting is not included. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf

From inputs.conf:

The following settings work identically as for [monitor::] stanzas, documented above:
host_regex = <regular expression>
host_segment = <integer>
crcSalt = <string>
recursive = <boolean>
whitelist = <regular expression>
blacklist = <regular expression>
initCrcLength = <integer>
I have an add-on installed on the deployer which has been configured with credentials and is working fine. But when I push that app to the SHC, the add-on stops working. Does anyone have a solution to this problem?

Thanks and regards,
Ravi
Hey there Splunkers,

We are running Splunk 8.0.1. I have a pretty basic dashboard that contains a few panels with some KPIs on it, nothing earth-shattering. I can export the report and see the contents of the PDF with no issue; however, when I schedule the report to be sent as a PDF, the report is empty. Any clues on what might be causing this?

As always, input is most appreciated. Thanks.
I am looking to compare the count of transactions processed in a 3-hour window to the count of transactions made in that same timeframe 3 days prior. I would like to set the count of the first search as a variable such as Count1, and likewise Count2 for the second search. Then I could do a comparison to alert when the difference in transactions is outside 20% (where Count1 <= Count2*0.8 OR Count1 >= Count2*1.2).

My search currently looks like this (it is not functional, so I would love to know how to fix it):

index=sales messageType=AuthPaymentReply earliest=-246h latest=-243h
| dedup OrderId
| search Status="Success"
| stats count by Status as Count1
| search [search index=sales messageType=AuthPaymentReply earliest=3h latest=now | dedup OrderId | search Status="Success" | stats count by Status as Count2]
| where Count1 <= Count2*0.8 OR Count1 >= Count2*1.2
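A sketch of one way to make this comparison work. Assumptions worth flagging: "3 days prior" is read as a -75h to -72h window; `stats count by Status as Count1` is not valid syntax (the rename belongs in `count as Count1`); `earliest=3h` is missing its minus sign; and appendcols is a more natural way than a subsearch filter to put both counts on one row:

```
index=sales messageType=AuthPaymentReply Status="Success" earliest=-75h latest=-72h
| dedup OrderId
| stats count as Count1
| appendcols
    [ search index=sales messageType=AuthPaymentReply Status="Success" earliest=-3h latest=now
      | dedup OrderId
      | stats count as Count2 ]
| where Count1 <= Count2*0.8 OR Count1 >= Count2*1.2
```

The where clause emits a row only when the counts diverge by more than 20%, so the alert can trigger on "number of results > 0".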
Splunk is populating with all of the logs from AWS, but GET events like GetBucketPolicy, GetBucketAcls, etc. aren't populating with the information they are "GETting". Here's an example of what my queries look like:

requestParameters: {
    Host: Blah.us-east-1.amazonaws.com
    acl:
    bucketName: Blah
}

The SET events seem to have those fields filled out, but all the GET ones have a blank in requestParameters. I wasn't able to find anything on this in the docs for the add-on.
Hi all,

Can someone guide me on how to calculate a total from two fields? I have two fields in my lookup file, OPEN and Closed, and I need their total. How can we do that?
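If "total" means a per-row sum of the two columns, eval is enough; a sketch (the lookup name mylookup.csv is a placeholder, and the values are assumed to be numeric):

```
| inputlookup mylookup.csv
| eval Total = OPEN + Closed
```

For a grand total across all rows instead, `| stats sum(OPEN) as OPEN sum(Closed) as Closed | eval Total = OPEN + Closed` produces a single summary row.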
Hello,

Is there a way to get an alert at the time a UF is considered missing? I don't mean a report of all missing UFs over all time, but when one of them has gone offline recently.

In the Cloud Monitoring Console app, I see there is a screen for Forwarders: Deployment, so I copied the query for the Status & Configuration table in the hope that it might be a good jumping-off point. Here is my query:

| inputlookup sim_forwarder_assets
| makemv delim=" " avg_tcp_kbps_sparkline
| `sim_rename_forwarder_type(forwarder_type)`
| search NOT [| inputlookup sim_assets | dedup serverName | rename serverName as hostname | fields hostname]
| `sim_time_format(last_connected)`
| fields hostname, forwarder_type, version, os, arch, status, last_connected
| search hostname="***"
| search status="*"
| search last_connected < -20m@s
| rename hostname as host, forwarder_type as Type, version as Version, os as OS, arch as Architecture, status as Status, last_connected as "Last Connected to Indexers"

As I understand it, the UF status is set to 'missing' after 15 minutes of inactivity. The above search is run in a short window of, say, the last 30 minutes.

Is there perhaps a more direct way to get what I need? Or is there a way to get the above to work?

Thanks for any advice!
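A more direct approach (a sketch; the 15-minute threshold mirrors the 'missing' definition described above) is to alert off the forwarder connection metrics in _internal, flagging any host whose last connection to the indexers is older than the threshold:

```
index=_internal source=*metrics.log group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval minutes_quiet = round((now() - last_seen) / 60)
| where minutes_quiet > 15
```

Run it over a window longer than the threshold (say, the last 24 hours) so recently quiet forwarders still appear in the stats and can cross the where clause; schedule it every few minutes and trigger when results > 0.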
Hi All,

I have one requirement. I have one lookup in which there are two columns, RunDateTime and ClosedTime:

RunDateTime - 2021-09-21 03:58:07
ClosedTime - 2019-01-08T14:50:36.000+0000

I need both RunDateTime and ClosedTime in the format below:

RunDateTime - 2021-09-21
ClosedTime - 2019-01-08

Query:

|inputlookup SDC.csv| table RunDateTime ClosedTime
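A sketch that parses each column with its own timestamp format and re-emits just the date part (the strptime format strings are inferred from the two sample values in the question):

```
| inputlookup SDC.csv
| eval RunDateTime = strftime(strptime(RunDateTime, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| eval ClosedTime = strftime(strptime(ClosedTime, "%Y-%m-%dT%H:%M:%S.%3N%z"), "%Y-%m-%d")
| table RunDateTime ClosedTime
```

Since both desired outputs are just the leading 10 characters, `eval ClosedTime = substr(ClosedTime, 1, 10)` would also work, but parsing is safer if the input formats ever vary.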
Hi All,

I have one requirement. I have one lookup with one column, Case_Status, which has multiple values:

Resolved
Closed- Resolved
Resolved - UpdateCase
Submitted
Pending
Escalated

My requirement is that I need only two values, Open and Closed. I need to include Resolved, Submitted and Pending in Open, and Escalated, Resolved and Resolved - UpdateCase in Closed. How can I achieve this?

My current query:

|inputlookup Sdf.csv| table CaseStatus | dedup CaseStatus
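A sketch using eval with case to bucket the statuses. The grouping below follows one reading of the question ("Resolved" is listed in both groups there, so the exact membership is an assumption to double-check):

```
| inputlookup Sdf.csv
| eval Bucket = case(
    CaseStatus=="Resolved" OR CaseStatus=="Submitted" OR CaseStatus=="Pending", "Open",
    CaseStatus=="Escalated" OR CaseStatus=="Closed- Resolved" OR CaseStatus=="Resolved - UpdateCase", "Closed",
    true(), "Other")
| stats count by Bucket
```

The trailing true() branch catches any status value the two lists miss, which makes unexpected values visible instead of silently dropping them.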
Hello,

For your awareness, my architecture consists of 1 SH, 1 Enterprise Security SH, a cluster of 3 indexers, and a deployment server with a cluster master, license master, and MC.

I noticed there are no notable events being populated into my notable index. I created events that matched the correlation searches I turned on, and also ran those CS searches separately in search to ensure they picked up the events I created. I validated the data models with pivot to ensure data was populating. I also tried to create a manual notable event, and nothing showed up in Incident Review.

Looking at the indexes in the settings menu, I see a notable index but nothing is getting populated, likely because I am searching off my indexer cluster. My deployment server is only managing my core Splunk search head, and I read somewhere that the Splunk_SA_CIM app needs to have an indexes.conf for notable events to be placed locally on ES. Can someone please provide some thoughts or suggestions? Thanks in advance.
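In a clustered deployment, the ES indexes (including notable) have to exist on the indexer peers themselves, and for a cluster that configuration is normally pushed from the cluster master, not the deployment server. A minimal sketch of the notable stanza (the paths shown are the conventional defaults, not taken from the post):

```
# indexes.conf distributed to the peers via the cluster master (master-apps)
[notable]
homePath   = $SPLUNK_DB/notable/db
coldPath   = $SPLUNK_DB/notable/colddb
thawedPath = $SPLUNK_DB/notable/thaweddb
```

If the notable index only exists locally on the ES search head while searches run against the cluster, correlation search results written with the notable alert action have nowhere to land on the peers, which matches the symptom described.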