All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a text file that has 8824 lines. I have configured MAX_EVENT= 1000, but Splunk only read the file up to line 7599. Why did Splunk not read my file to the end? Is there anything else that I need to configure? It is a health check file. This is my BREAK_ONLY_BEFORE = CIC: Node IP Address|Service Status:|\-{10}|\={12}|Summary:|Memory Details on
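For reference, a minimal props.conf sketch of the line-breaking settings being described (the sourcetype name is a placeholder, the setting is spelled MAX_EVENTS in props.conf, and the values are only illustrative):

[healthcheck:placeholder]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = CIC: Node IP Address|Service Status:|\-{10}|\={12}|Summary:|Memory Details on
MAX_EVENTS = 1000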
Hi Splunkers, we have a behavior that we are not able to understand. The problem is the following: we are performing some searches using a data model and, when we need to use stats, we would like to select both raw-data fields and data model ones. However, when we try this, we are not able to see the raw fields. Let me show an example to better explain. If we try this simple search: | from datamodel:"Authentication" | search is_Failed_Authentication=1 | stats count by log_region log_country user we expect stats to show in the output the 2 fields we manually added to the data, log_region and log_country, and the one owned by the data model, which is user. Unfortunately, when the results appear we can see only user in the returned table; log_region and log_country are empty. We know that those fields are present and populated because, if we replicate the search with the same time range but without the data model, using the specific index, sourcetype and source for Windows events, stats returns the output with all 3 fields (in this case, user is of course the specific field of the Windows events). Is this normal? Is there a way to use both raw/manually added fields and data model ones?
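For comparison, a minimal sketch of the raw-index version of the same comparison being described (the index, sourcetype and failure filter are placeholders for whatever the Windows data actually uses):

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625
| stats count by log_region log_country user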
Hi, We are using ServiceNow, which has been integrated with Splunk to generate incidents. The current query works fine for a single failure, but I am trying to generate an incident only when a log backup has failed repeatedly, 3 times in a row. My current eval looks like:

| eval itsi_entity=objectName,
    itsi_event_key=objectId,
    itsi_correlation_key=objectId,
    itsi_summary="Backup "+eventStatus+" for "+objectName,
    message=message,
    itsi_message="Alerting time: "+human_readable_time+"~~"+field1+"~~"+field2+"~~"+field3+"~~"+field4+"~~"+field5+"~~"+field6+"~~"+field7+"~~"+field8,
    itsi_impact=case(
        message like("%Failed log backup of Oracle Database%"), "High",
        message like("%Failed backup of Oracle Database%"), "High",
        true(), "Medium"),
    itsi_urgency=case(
        message like("%Failed log backup of Oracle Database%"), "High",
        message like("%Failed backup of Oracle Database%"), "High",
        true(), "Medium")

I need something in the itsi_impact case statements so that when a log backup has failed 3 times in a row, a High incident is generated. I tried using eval and count fields inside the case statement, but it is not working.
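One possible approach, as a rough sketch only (it assumes the events arrive in time order per object and that the search window covers enough history to see 3 consecutive failures):

| streamstats window=3 count(eval(like(message, "%Failed log backup of Oracle Database%"))) as consecutive_log_failures by objectName
| eval itsi_impact=case(
    consecutive_log_failures>=3, "High",
    like(message, "%Failed backup of Oracle Database%"), "High",
    true(), "Medium")

The streamstats would run before the existing eval, and the same consecutive_log_failures field could be reused in itsi_urgency.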
Hello, I am trying to migrate my dashboard to the new Dashboard Studio version. Splunk did it automatically, but the conversion from XML to JSON is not correct at all. I have this piece of code in my XML source:

<search>
    <query>| makeresults</query>
    <earliest>$tk_range_date.earliest$</earliest>
    <latest>$tk_range_date.latest$</latest>
    <progress>
        <eval token="toearliest">strptime($job.earliestTime$, "%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
        <eval token="tolatest">strptime($job.latestTime$, "%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
        <set token="jobearliest">$job.earliestTime$</set>
        <set token="joblatest">$job.latestTime$</set>
    </progress>
</search>

Where and how do I include it in my JSON source? Thank you!
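For reference, a rough sketch of how the search itself might be declared as a data source in Dashboard Studio JSON (only the query and time range; the <progress>/<eval> token logic has no direct one-to-one equivalent in Studio, so this is an assumption about the structure, not a full conversion):

"dataSources": {
    "ds_makeresults": {
        "type": "ds.search",
        "options": {
            "query": "| makeresults",
            "queryParameters": {
                "earliest": "$tk_range_date.earliest$",
                "latest": "$tk_range_date.latest$"
            }
        }
    }
}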
This is the home page for all other dashboards, hence 4 panels are shown in a row. Each panel drills down to its specific dashboard. On loading the page, the choropleth map shows the map of Africa by default, but the business wants the North and South American continents shown by default. Is it possible to set the default view of the map, like this?
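If this is a Simple XML choropleth panel, a minimal sketch of the kind of options involved (the latitude/longitude and zoom values are only guesses to frame the Americas; adjust as needed):

<option name="mapping.map.center">(10,-80)</option>
<option name="mapping.map.zoom">3</option>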
A new Ship To contact was confirmed after the license was sent. Can the new Ship To contact access the license that was forwarded to them, or is there a way to send the license to the new Ship To contact?
Hi All, I am new to Splunk so I am not sure if I am doing this right. For one of the use cases, I am trying to use a condition match in the dropdown input to make a column visible when the condition matches and omit the column when it doesn't. Below is the code I am using, where I create a new token called cache_type_token and set its value from the input token cache_type. I am unable to work out whether the issue is with the token being set or with the match condition.

<input type="dropdown" token="cache_type" searchWhenChanged="true">
  <label>Choose which type of cache to view</label>
  <choice value="*UTable-*">Utilisation Table</choice>
  <choice value="ContextualLineCache*">Contextual Line Cache</choice>
  <choice value="GlobalRuleCache*">Global Rule Cache</choice>
  <choice value="RTLIM*">RTLIM</choice>
  <choice value="BATCH*">BATCH</choice>
  <choice value="COB*">COB</choice>
  <choice value="EOM*">EOM</choice>
  <choice value="*">ALL</choice>
  <default>Utilisation Table</default>
  <change>
    <set token='cache_type_token'>$cache_type$</set>
    <condition match=" $cache_type_token$ == &quot;ContextualLineCache*&quot; ">
      <set token="cache_type_token">0</set>
    </condition>
    <condition match=" $cache_type_token$ != &quot;ContextualLineCache*&quot; ">
      <set token="cache_type_token">cacheExpired</set>
    </condition>
  </change>
</input>
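For comparison, a minimal sketch of the simpler pattern where each <condition> matches the selected choice value directly inside <change> (the token values 0 and cacheExpired are taken from the post; whether they achieve the intended column behaviour is a separate question):

<change>
  <condition value="ContextualLineCache*">
    <set token="cache_type_token">0</set>
  </condition>
  <condition>
    <set token="cache_type_token">cacheExpired</set>
  </condition>
</change>

Here the first condition fires when the "Contextual Line Cache" choice (value ContextualLineCache*) is selected, and the bare <condition> acts as the fallback for every other choice.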
Hi all! I know ES ships with a TAXII client to ingest threat intel over TAXII. Does anything exist for users who do not have ES? I am trying to ingest intel (in STIX 2.1) being distributed via a... See more...
Hi all! I know ES ships with a TAXII client to ingest threat intel over TAXII. Does anything exist for users who do not have ES? I am trying to ingest intel (in STIX 2.1) being distributed via a TAXII 2.1 server to Splunk. Thanks!
I have a few error messages in my ES about searches being delayed. I used the solution below, but it only shows some of them, not all. Under Messages on the main ES page I see numerous errors about delayed searches that do not show up in the dashboard from https://github.com/dpaper-splunk/public/blob/master/dashboards/extended_search_reporting.xml. Please advise. Thank you & Happy Holidays.
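As a rough starting point for digging into scheduler behaviour directly (a sketch only; which status values matter, such as skipped or deferred, depends on the version):

index=_internal sourcetype=scheduler
| stats count by status, savedsearch_name, app
| sort - count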
I'm looking to convert the values of these fields to the PST time zone, so that I can fetch events based on these fields rather than on the event timestamp. The two fields below are in GMT format. Please help. FIRST_FOUND_DATETIME="2021-12-12T08:30:29Z", LAST_FOUND_DATETIME="2021-12-20T16:12:28Z"
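A minimal sketch of one way to parse these into epoch time and re-render or filter on them (the %Z handling of the trailing Z and the output format are assumptions to verify against the data; strftime renders in the time zone set in the user's Splunk preferences, which would need to be US/Pacific for PST output):

| eval first_found_epoch = strptime(FIRST_FOUND_DATETIME, "%Y-%m-%dT%H:%M:%S%Z")
| eval last_found_epoch  = strptime(LAST_FOUND_DATETIME,  "%Y-%m-%dT%H:%M:%S%Z")
| eval FIRST_FOUND_PST   = strftime(first_found_epoch, "%Y-%m-%d %H:%M:%S %Z")
| where last_found_epoch >= relative_time(now(), "-7d@d")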
Hi Team,  I have 10 events - start event time is at 10AM ,next event time  at 10.08AM ,10.15AM,10.18AM and so on.. End event time is 10.56AM and I am able to find the start event time and end event time using min(_time) and max(_time) but I need to find the first modified time  i.e the event that occurred at 10.08AM. Please assist
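One possible sketch (assuming all 10 events are already isolated by the base search; add a by clause or prior grouping if they need to be split per transaction):

| sort 0 _time
| streamstats count as event_order
| where event_order = 2
| eval first_modified_time = strftime(_time, "%Y-%m-%d %H:%M:%S")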
We followed the steps provided at https://www.splunk.com/en_us/blog/bulletins/splunk-security-advisory-for-apache-log4j-cve-2021-44228.html, but it seems that the files are being recreated. Can anyone please help with that? Also, I wanted to know whether replacing just the Apache Log4j version, rather than upgrading Splunk, could help mitigate the issue, and what the steps would be if I replace it.
How do I view the details & find the causes? Thanks & Happy Holidays!
We have the below CEF logs coming in from a device where a few fields don't have any value, like cs2 below:

CEF:0|vendor|product|1.1.0.15361|6099|DirectoryAssetSyncSucceeded|1|cn1label=EventUserId cn1=-3 cs1label=EventUserDisplayName cs1=Automated System cs2label=EventUserDomainName cs2= cn2label=AssetId cn2=16699 cs3label=AssetName cs3=ABC.LOCAL AD cn3label=AssetPartitionId cn3=7 cs4label=AssetPartitionName cs4=XYZ.LOCAL partition cs5label=TaskId cs5=9ec9aa87-61b9-11ec-926f-3123456edt

How can we assign a 'NULL' value to such fields using SEDCMD or any other possible way?
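A minimal props.conf sketch of the SEDCMD idea (the sourcetype name is a placeholder, and the pattern only covers an empty value that is followed by a space, as in the cs2= sample above; it would need testing before indexing real data):

[cef:placeholder]
SEDCMD-fill_empty_cef_values = s/= /=NULL /g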
We need to capture field values for the below CEF log pattern:

CEF:0|vendor|product|1.1.0.15361|6099|DirectoryAssetSyncSucceeded|1|cn1label=EventUserId cn1=-3 cs1label=EventUserDisplayName cs1=Automated System cs2label=EventUserDomainName cs2= cn2label=AssetId cn2=16699 cs3label=AssetName cs3=ABC.LOCAL AD cn3label=AssetPartitionId cn3=7 cs4label=AssetPartitionName cs4=XYZ.LOCAL partition cs5label=TaskId cs5=9ec9aa87-61b9-11ec-926f-3123456edt

I am using the below regex:

(?:([\d\w]+)label=(?<_KEY_1>\S+))(?=.*\1=(?<_VAL_1>[^=]+)(?=$|\s+[\w\d]+=))

Unfortunately, it does not handle the blank one, like cs2=, which doesn't contain anything, so EventUserDomainName should be blank. Kindly suggest.
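One possible adjustment, as an untested sketch: allow the value group to match an empty string by changing the + quantifier to * in the _VAL_1 group, so a label whose value is immediately followed by the next key still produces a (blank) value:

(?:([\d\w]+)label=(?<_KEY_1>\S+))(?=.*\1=(?<_VAL_1>[^=]*)(?=$|\s+[\w\d]+=))

Whether empty extracted values are kept may also depend on where the extraction is applied (search time vs. a transforms.conf stanza).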
Hello, during these crazy Log4j times I would like to ask you for advice. I am new to Splunk/security, but I managed to create a dashboard for Log4j attempts. I can see some Log4j scanning activity, and the response codes are 404, 400 etc., which I am not really worried about. But sometimes I see code 200, which as I understand it means: "The HTTP 200 OK success status response code indicates that the request has succeeded. ... GET: The resource has been fetched and is transmitted in the message body." I added a screenshot. I am wondering how to investigate it. Should I check for outbound traffic? What is the best query? So far I have just one: index=firewall 170.210.45.163 AND 31.131.16.127. This is my uni lab environment, so I just want to develop myself and learn. What would you do if you saw such a string? Many thanks.
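One hedged starting point for the outbound-traffic angle (a sketch only; src_ip, dest_ip, dest_port, action and bytes_out are placeholders for whatever field names the firewall sourcetype actually uses):

index=firewall src_ip="170.210.45.163"
| stats count sum(bytes_out) as total_bytes_out by dest_ip, dest_port, action
| sort - total_bytes_out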
I set up an alert that runs every 15 minutes and triggers if the count > 0, but I want the email to be sent only on the 2nd consecutive trigger. Example: the alert runs every 15 minutes, but the email action should only fire on the 2nd consecutive 15-minute window with results. Is it possible? Please do help me out. Thanks, Veeru
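One possible sketch of doing this inside the search itself, so the alert only returns results when the last two 15-minute windows both had matches (the index and base conditions are placeholders; the alert would still be scheduled every 15 minutes with trigger condition "number of results > 0"):

index=placeholder_index <your conditions> earliest=-30m@m latest=@m
| bin _time span=15m
| stats count by _time
| where count > 0
| stats count as windows_with_results
| where windows_with_results >= 2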
While the add-on is trying to fetch data from Salesforce, it fails with the below error:

reason=HTTP Error The read operation timed out
Traceback: cloudconnectlib.core.exceptions.HTTPError: HTTP Error The read operation timed out
Hi, I have events which contain 3 fields: "StartDate", "Value_per_month" and "Nr_of_Month". They basically describe a monthly financial flow which begins at "StartDate" and ends after "Nr_of_Month" months. The goal is to show a sum of "Value_per_month" for each month across all events. In most cases the dates are in the future, so it will be a bit tricky to get this to work. However, at least a table view would be great, plus some basic visualization on top. I thought I could create fields for each month, for example "value_yyyy-mm", assign the value to each, and then sum up the values in each field across all events. However, I have not found a way to do this dynamically in a loop X times, based on the variable "Nr_of_Month". I have checked combinations of eval, makeresults, foreach, gentimes, etc. Any basic idea how to approach this would be welcome. Many thanks in advance.
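One hedged sketch of an expansion-based approach instead of a loop: turn each event into one row per month with mvrange/mvexpand and then sum (the StartDate format string is an assumption to adjust to the actual data):

| eval month_offset = mvrange(0, Nr_of_Month)
| mvexpand month_offset
| eval month_epoch = relative_time(strptime(StartDate, "%Y-%m-%d"), "+".tostring(month_offset)."mon")
| eval month = strftime(month_epoch, "%Y-%m")
| stats sum(Value_per_month) as total_value by month
| sort month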
Hello all, can I generate a scheduled PDF delivery from the command line? I want to export a dashboard as a PDF file and send it via email. Is that possible, and if so, how?