All Topics

Hello. I'm trying to run a report that will show me multiple authentication failures within a certain time frame, for example 10 authentication failures within the space of 1 minute. I'm trying to get the visualization right, to show a table view per user that has failed 10 times within the space of a minute, and also to show day/time stamps. Does anyone know how to do this? Thank you.
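A minimal SPL sketch, assuming the events carry a user field; the index and failure filter below are placeholders to swap for your own:

    index=your_auth_index "authentication failure"
    | bin _time span=1m
    | stats count AS failures BY _time, user
    | where failures >= 10
    | eval day_time=strftime(_time, "%a %Y-%m-%d %H:%M")
    | table user, day_time, failures

The bin command buckets events into fixed one-minute windows; if you need a sliding one-minute window instead, streamstats with time_window=1m is the usual alternative.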
Hi, I got an error message like this, and I'm not able to generate event samples. Can anyone help me solve it? Unable to initialize modular input "eventgen_modinput" defined in the app "datagen": Introspecting scheme=eventgen_modinput: script running failed (exited with code 1).
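When a modular input script exits non-zero, splunkd usually logs the script's stderr in its own log; a sketch of where to look (the component names are the ones splunkd uses for script execution):

    index=_internal source=*splunkd.log* (component=ModularInputs OR component=ExecProcessor) (log_level=ERROR OR log_level=WARN) eventgen*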
Hi, I have two tables; the first table contains 13 columns, and I need to add a single column from the second table to table 1. I am getting the data from a database; can you please help me out? The query below is what I use to get the first table's data:

| dbxquery query="SELECT BEX_DTE_BSNS, BEX_NUM_COMPL, BEX_NUM_COMPL_TYPE, BEX_TSP_END, BEX_CNT_EXEC, BEX_NUM_GROUP, BEX_NUM_JOB, BEX_NME_NET, BEX_NUM_RESTART, BEX_TSP_START, BEX_NME_BPR, BEX_NME_LOCATION, BEX_IND_SKIPPED FROM \"LoanIQDB\".\"LS2USER\".\"VLS_BATCH_EXEC\"" connection="loaniq_10_10_2_10"

The query below gets that particular column from the second table:

| dbxquery query="SELECT BNT_DSC_JOB FROM \"LoanIQDB\".\"LS2USER\".\"VLS_BATCH_NET\"" connection="loaniq_10_10_2_10"
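Since both results come from the same database, one sketch is to do the join on the SQL side in a single dbxquery; this assumes the two tables share a join key, and BNT_NUM_JOB below is a hypothetical column name to replace with your actual key:

    | dbxquery query="SELECT e.*, n.BNT_DSC_JOB FROM \"LoanIQDB\".\"LS2USER\".\"VLS_BATCH_EXEC\" e LEFT JOIN \"LoanIQDB\".\"LS2USER\".\"VLS_BATCH_NET\" n ON e.BEX_NUM_JOB = n.BNT_NUM_JOB" connection="loaniq_10_10_2_10"

If the join key really can't be expressed in SQL, SPL's | join command over the two dbxquery results works too, but the SQL-side join is usually faster.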
I have the following data in a table. I need to get the duration of the batch run; I have the start and end time for each date, and need to calculate Batch_End - Batch_Start. A plain eval is not giving me any output, so I was thinking of converting the timestamps to epoch and then doing eval Duration=(End_epoch-Start_epoch). For this I need to convert a timestamp like 2021-12-09 11:46:50.000069 to epoch time. Please help.
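A sketch using strptime, assuming the fields are named Batch_Start and Batch_End and both carry the 6-digit subsecond format shown above:

    | eval Start_epoch=strptime(Batch_Start, "%Y-%m-%d %H:%M:%S.%6N")
    | eval End_epoch=strptime(Batch_End, "%Y-%m-%d %H:%M:%S.%6N")
    | eval Duration=End_epoch-Start_epoch
    | eval Duration_readable=tostring(Duration, "duration")

strptime returns null when the format string doesn't match the value exactly, which is also why a plain eval subtraction on string timestamps produces no output.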
I'm not sure what the purpose of this question is. My license page shows the following: is it 5 or 1?
Our deployment server is not downloading apps, and we're getting the error below. 12-13-2021 08:38:53.140 +0300 WARN ClientSessionsManager - ip=x.x.x.x name=xxxxxx Updating record for sc=xxxxxx app=xxxxxx : action=Download result=Fail checksum=xxxxxxxxxxx
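A couple of hedged first steps, on the assumption that the checksum failure comes from a stale or partially rewritten app bundle on the deployment server: reload the deployment server so it re-creates its bundles, then watch the failing downloads in _internal:

    $SPLUNK_HOME/bin/splunk reload deploy-server

    index=_internal sourcetype=splunkd component=ClientSessionsManager action=Download result=Fail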
Hi, we have upgraded our Splunk core version to 8.2.2, which is compatible with Python 3. Since the upgrade, data has stopped coming in from the Log Analytics add-on. Can someone please check?
Hi, we have MCAS integrated with Splunk, and MCAS logs are ingested into Splunk. If we need to ingest the Salesforce logs that are within MCAS into Splunk, does the MCAS team need to do any setup on their end? Or is there a Splunk add-on to ingest the Salesforce logs that are inside MCAS?
Hi team, I am trying to find the recent CVE-2021-44228 (Log4j) vulnerability. I tried "index=aws *log4j*", but I'm not sure how to detect it and create an alert based on this vulnerability. Can anyone help me with the correct search and explain how to create an alert based on this vulnerability? Thanks.
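A rough starting point rather than a complete detection: exploit attempts typically leave JNDI lookup strings in request logs, so a sketch like the one below can seed an alert (the index is a placeholder, and attackers use many obfuscations a literal match won't catch):

    index=aws ("${jndi:ldap:" OR "${jndi:rmi:" OR "${jndi:dns:")
    | stats count BY host, source

To alert on it, save the search via Save As > Alert, pick a schedule (for example every 15 minutes over the last 15 minutes), and trigger when the number of results is greater than 0.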
Hi there, I've got a basic search to provide the most recent timestamp for a successful backup using wineventlog data:

index="wineventlog" source="WinEventLog:Application" SourceName="Symantec System Recovery" host=*grp* | search Message=*6C8F1F7E* OR Message=*6C8F1F7D* OR Message=*6C8F1F7A* | dedup host | table host, _time

However, I'm really struggling to come up with a search that shows me all the *grp* hosts, whether or not they have the successful backup strings (*6C8F1F7E*, *6C8F1F7D* or *6C8F1F7A*) in the Message field. My closest attempt seems to be this:

index="wineventlog" source="WinEventLog:Application" SourceName="Symantec System Recovery" host=*pgrp* | eval success = case(Message like "%6C8F1F7E%",1,Message like "%6C8F1F7D%",1,Message like "%6C8F1F7A%",1,Message like "%",0) | stats sum(success) as Successes by host | where Successes < 1

My hope is for a table with the following columns:
Host
Last successful backup date/time, or "N/A" if there was no successful backup in the selected time range
Days since last backup
Any help or advice would be greatly appreciated! Cheers
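A sketch that keeps every matching host and derives all three columns in one pass, assuming the same base filters as above:

    index="wineventlog" source="WinEventLog:Application" SourceName="Symantec System Recovery" host=*grp*
    | eval success_time=if(match(Message, "6C8F1F7E|6C8F1F7D|6C8F1F7A"), _time, null())
    | stats max(success_time) AS last_success BY host
    | eval "Last successful backup"=if(isnull(last_success), "N/A", strftime(last_success, "%Y-%m-%d %H:%M:%S"))
    | eval "Days since last backup"=if(isnull(last_success), "N/A", round((now()-last_success)/86400, 1))
    | table host, "Last successful backup", "Days since last backup"

Because stats ignores null values, hosts with no successful backup still appear (they have events from the base search) but end up with a null last_success, which the if() turns into "N/A".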
Hello fellow Splunkers, I'm trying to connect a new DB input, but I'm facing a small problem. I've configured a rising column (time) and run my query from the "create new input" screen to make sure that I get no SQL errors (which I don't) and that I'm getting the data I wanted (which I am), but for some reason Splunk won't let me continue with the creation of the input. The final part of my query looks like this: "WHERE time>? AND field1='1' ORDER BY time ASC". When I try to create this input, Splunk shows an error saying that my query doesn't accept a checkpoint. My guess is that the "AND" messes up the expected syntax. Does anyone have an idea how to work around the problem? Thanks
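One workaround worth trying, on the assumption that DB Connect's checkpoint validation is strict about where the ? placeholder sits in the WHERE clause, is to put the fixed condition first and the rising-column comparison last (the table and column names below are placeholders):

    SELECT *
    FROM my_table
    WHERE field1 = '1' AND time > ?
    ORDER BY time ASC

If that still fails validation, pushing the fixed filter into a subquery or a database view so the outer WHERE contains only "time > ?" is another commonly tried variant.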
Hello Splunkers, how do I write a stanza to monitor only 2 services? If I use the stanza below, it gives me data for 100+ services, but I need only 2 or 3:

[WinHostMon://Service]
type = service
interval = 300
disabled = 0
index = myindex
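WinHostMon doesn't take a per-service filter in inputs.conf, so one common sketch is to drop everything except the wanted services at parse time. This assumes the default WinHostMon sourcetype and that the events carry a Name=<service> line, as WinHostMon service events normally do; the two service names are placeholders:

    props.conf
    [WinHostMon]
    TRANSFORMS-filter_services = service_drop_all, service_keep_wanted

    transforms.conf
    [service_drop_all]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [service_keep_wanted]
    REGEX = (?m)^Name="?(W3SVC|wuauserv)"?$
    DEST_KEY = queue
    FORMAT = indexQueue

The transforms run in order: the first routes every service event to the nullQueue, and the second pulls the named ones back to the indexQueue.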
Hello team, we need to integrate Puppet with Splunk so that security-related events are pushed to our Splunk SIEM. Is it good to use HEC tokens for this, or is syslog fine?
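If you go the HEC route, a minimal sketch of what the sender would post; the host, port, token, and sourcetype below are placeholders:

    curl -k "https://splunk.example.com:8088/services/collector/event" \
      -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
      -d '{"sourcetype": "puppet:security", "event": {"source": "puppet", "message": "example security event"}}'

HEC gives you token-level access control and TLS out of the box, while syslog typically needs a syslog server (and often a forwarder) in front of Splunk, so HEC is usually the simpler option for application-generated events.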
Splunk DB Connect: Why am I getting "The value is not set for the parameter number 1" when updating the SQL query in the Edit Input panel?

ERROR: com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1.
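This is what the JDBC driver reports when a query containing a ? placeholder is executed without a value bound to it, which is what happens when a rising-column query is run directly from the editor: the checkpoint value is only bound when the input actually runs. A hedged way to test inside the editor is to substitute a literal temporarily (table and column names are placeholders):

    -- for editor testing only; restore the ? before saving the input
    SELECT * FROM my_table WHERE rising_col > '2021-12-01 00:00:00' ORDER BY rising_col ASC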
I have a JSON payload that's ingested through a REST API input on a heavy forwarder, with the following configuration in props.conf (on the heavy forwarder, not on the indexer):

[json_result]
INDEXED_EXTRACTIONS = json
KV_MODE = none
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
TRUNCATE = 200000

The resulting event in Splunk looks like this (minified):

{"totalCount":3,"nextPageKey":null,"result":[{"metricId":"builtin:synthetic.http.resultStatus","data":[{"dimensions":["HTTP_CHECK-02B087D58EC18C33","SUCCESS","SYNTHETIC_LOCATION-2CD023FA5F455E28"],"dimensionMap":{"Result status":"SUCCESS","dt.entity.synthetic_location":"SYNTHETIC_LOCATION-2CD023FA5F455E28","dt.entity.http_check":"HTTP_CHECK-02B087D58EC18C33"},"timestamps":[1639254360000],"values":[1]},{"dimensions":["HTTP_CHECK-02B087D58EC18C33","SUCCESS","SYNTHETIC_LOCATION-833A207E28766E49"],"dimensionMap":{"Result status":"SUCCESS","dt.entity.synthetic_location":"SYNTHETIC_LOCATION-833A207E28766E49","dt.entity.http_check":"HTTP_CHECK-02B087D58EC18C33"},"timestamps":[1639254360000],"values":[1]},{"dimensions":["HTTP_CHECK-02B087D58EC18C33","SUCCESS","SYNTHETIC_LOCATION-1D85D445F05E239A"],"dimensionMap":{"Result status":"SUCCESS","dt.entity.synthetic_location":"SYNTHETIC_LOCATION-1D85D445F05E239A","dt.entity.http_check":"HTTP_CHECK-02B087D58EC18C33"},"timestamps":[1639254360000],"values":[1]}]}]}

What I'm trying to extract from the payload is three fields ("Result status", "dt.entity.synthetic_location" and "dt.entity.http_check") and their associated values. I'd like to have three events created from the payload, one event for each occurrence of the three fields, with the fields searchable in Splunk. I've tried this approach in props.conf to get what I'm looking for:

[json_result]
SHOULD_LINEMERGE = false
LINE_BREAKER = },
DATETIME_CONFIG = CURRENT
TRUNCATE = 0
SEDCMD-remove_prefix = s/{"totalCount":.*"nextPageKey":.*"result":\[{"metricId" :.*"data":\[//g
SEDCMD-remove_dimensions = s/{"dimensions":.*"dimensionMap"://g
SEDCMD-remove_timevalues = s/,"timestamps":.*"values":.*}//g
SEDCMD-remove_suffix = s/\]}\]}//g

...but I'm only getting one set of fields to show up as an event in Splunk, and the fields aren't showing up as "interesting fields" in the left navbar (possibly because the props.conf is not on the indexer?). Any assistance would be greatly appreciated.

UPDATE: I referenced this post, which is pretty close to what I'm trying to accomplish: https://community.splunk.com/t5/Getting-Data-In/How-to-split-a-json-array-into-multiple-events-with-separate/m-p/139851 The format of the JSON payload cited in that post is different from the format of my payload, though, so I'm guessing that some additional logic would be necessary to accommodate my format.
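A sketch of a heavy-forwarder props.conf along the lines of the linked post, adapted to this payload; it assumes the boundary between array elements is always },{"dimensions" and relies on LINE_BREAKER discarding its capture group between events:

    [json_result]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = \}(,)\{"dimensions"
    DATETIME_CONFIG = CURRENT
    TRUNCATE = 0
    SEDCMD-strip_head = s/^{"totalCount":.*?"data":\[//
    SEDCMD-strip_tail = s/\]}\]}$//

With this, each event should be one {"dimensions":...} object; the head/tail SEDCMDs clean the envelope off the first and last elements. Since INDEXED_EXTRACTIONS is dropped here, search-time extraction needs KV_MODE = json under [json_result] in props.conf on the search head, which would also explain the fields not appearing as interesting fields today.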
Hi, I get a log in the below JSON-object format:

message{
    Dashboard{
        status: SUCCESS
        operationName: gettingResult
    }
}

In the above logs, status has a value of SUCCESS or FAILURE. My requirement is to calculate total, totalSuccess, and totalFailure by operationName. I tried the query below, but it is not working out:

......message.Dashboard.status=* | stats count as total, count(eval(message.Dashboard.status=SUCCESS)) as totalSuccess, count(eval(message.Dashboard.status=FAILURE)) as totalFailure by message.Dashboard.operationName

I get a value for total but not for totalSuccess/totalFailure.
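A sketch of the usual fix: inside eval, field names that contain dots must be wrapped in single quotes, and string literals in double quotes (field names are taken from the log snippet above):

    ... message.Dashboard.status=*
    | stats count AS total,
            count(eval('message.Dashboard.status'="SUCCESS")) AS totalSuccess,
            count(eval('message.Dashboard.status'="FAILURE")) AS totalFailure
      BY message.Dashboard.operationName

Without the quoting, eval parses message.Dashboard.status as an expression and SUCCESS as an undefined field, so the inner comparison never evaluates to true, which matches the symptom of total being populated while totalSuccess/totalFailure stay empty.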
Hi, this question is related to CVE-2021-44228. As far as we could see/scan, Splunk binaries, including the Universal Forwarder ones, do not rely on or use the Log4j library, but we wanted to get some sort of "official confirmation" of this. Thanks if you can point to any public document regarding this and regarding Splunk's potential exposure to this particular CVE. Best regards.
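While waiting for an official statement, a local double-check is a filesystem scan for bundled Log4j jars; the path assumes a default Linux install location:

    find "${SPLUNK_HOME:-/opt/splunk}" -name "*log4j*.jar" 2>/dev/null

Note that a jar merely being present does not by itself mean it is loaded by a vulnerable code path, so treat any hits as a starting point for investigation rather than confirmation.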
Hello! Could somebody please suggest whether it is possible to make a map search more effective? What I am trying to do:
1. There are events with client transactions; a huge list (thousands every second).
2. I search for transaction chains that are suspicious by some conditions over the last hour.
3. If a transaction chain is suspicious, I make a longer search (last 3 weeks), because some operations do not fit into the last hour. I basically do the same calculations, but over a longer time interval and with stricter conditions.

The following search works, but it takes several minutes and is sometimes cancelled due to timeout:

<MY_SEARCH>
| stats first(orgCode) AS orgCode first(accountId) AS accountId sum(amount) AS totalAmount sum(controlAmount) AS totalControlAmount by transactionChainRef
| where totalControlAmount>0 and totalControlAmount<totalAmount
| map search="search <MY_SEARCH> AND message=\"*transactionChainRef\\\":$transactionChainRef$*\" earliest=-3w | eval orgCode=$orgCode$ | eval accountId=$accountId$ | eval totalControlAmount=$totalControlAmount$ | stats first(orgCode) AS orgCode first(accountId) AS accountId sum(amount) AS totalAmount first(totalControlAmount) AS totalControlAmount by transactionChainRef | where totalControlAmount<totalAmount " maxsearches=9999

Unfortunately I cannot query the last 3 weeks right away, because there will still be transaction chains that go beyond the 3 weeks (a chain that finished, say, 2.5 weeks ago may have started 5.5 weeks ago). My current idea is to run the map search in chunks, for example by 100 transactionChainRefs. Thanks in advance!
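One sketch that avoids map entirely: run the one-hour detection as a subsearch that returns just the suspicious transactionChainRef values, then run a single 3-week search filtered to those refs. It assumes transactionChainRef is available as a search-time field, and it is bounded by subsearch limits (10,000 results and a runtime cap by default), so chunking may still be needed for very large result sets:

    <MY_SEARCH> earliest=-3w
        [ search <MY_SEARCH> earliest=-1h
          | stats sum(amount) AS totalAmount sum(controlAmount) AS totalControlAmount by transactionChainRef
          | where totalControlAmount>0 AND totalControlAmount<totalAmount
          | fields transactionChainRef ]
    | stats first(orgCode) AS orgCode first(accountId) AS accountId sum(amount) AS totalAmount sum(controlAmount) AS totalControlAmount by transactionChainRef
    | where totalControlAmount<totalAmount

This replaces N sequential map searches with one indexed search plus one cheap subsearch, which is usually the biggest win. Note the outer search recomputes totalControlAmount over 3 weeks rather than pinning the one-hour value as the original map does, so adjust if those semantics must be preserved exactly.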
Hi everyone, I installed Splunk 8.2.2.1 and then installed the Splunk Stream 8.0.1 add-on, but I can't find the streamfwd.conf file or a Splunk_TA_stream directory anywhere. Has anybody faced this problem? Do I need to do anything else to receive NetFlow?
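A quick sanity check for where the pieces normally live; streamfwd.conf is not shipped by default and is typically created by hand under local/ when you need to override settings such as NetFlow ports (paths assume a default install):

    ls "$SPLUNK_HOME/etc/apps" | grep -i stream
    ls "$SPLUNK_HOME/etc/apps/Splunk_TA_stream/local" 2>/dev/null

If Splunk_TA_stream is missing entirely, it usually has to be deployed to the collecting instance from the Splunk App for Stream's distributed forwarder management rather than appearing automatically.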
MHN Server GCP Firewall rule