All Topics


Please see this search - I'm trying to add missing field values from another index to this search.

index=1 earliest=-9d latest=now ExternalApiType=Event_DetectionSummaryEvent
| fillnull
| stats values(ComputerName) AS ComputerName values(DetectName) AS DetectName values(UserName) AS User values(event_platform) AS Platform values(FileVersion) AS SensorVersion values(MachineDn) AS OU values(SiteName) AS SiteName count(_time) AS count BY _time EventUUID
| sort 0 - _time
| eval Time=strftime(_time, "%m/%d/%Y %H:%M:%S")
| appendcols [ search earliest=-9d latest=now index=json "AuditKeyValues{}.Key"=new_state "AuditKeyValues{}.ValueString"=* | spath | spath AuditKeyValue{} ]

index=1 has the fields ComputerName, DetectName, UserName, _time, EventUUID; index=main has the fields event_platform, FileVersion, MachineDn, SiteName.

I want to pull the fields from index=main into the stats command of the index=1 search. I thought it would be as simple as adding index=main at the beginning of the search with an OR: (index=json ExternalApiType=Event_DetectionSummaryEvent) OR (index=main FileVersion=*). But it's not working. I have to have the ExternalApiType value, and it's only in the first index. I also tried join with a subsearch, but it didn't work. The original search is for 90 days, so I shouldn't use a subsearch anyway. Thank you.
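A minimal sketch of one possible approach, assuming ComputerName exists in both indexes and can act as the join key (that assumption, and the eventstats step, are not stated in the post):

(index=1 ExternalApiType=Event_DetectionSummaryEvent) OR (index=main FileVersion=*)
| eventstats values(event_platform) AS Platform values(FileVersion) AS SensorVersion values(MachineDn) AS OU values(SiteName) AS SiteName BY ComputerName
| search ExternalApiType=Event_DetectionSummaryEvent
| stats values(ComputerName) AS ComputerName values(DetectName) AS DetectName values(UserName) AS User values(Platform) AS Platform values(SensorVersion) AS SensorVersion values(OU) AS OU values(SiteName) AS SiteName count BY _time EventUUID

eventstats copies the enrichment fields onto every event sharing the same ComputerName, and the subsequent search keeps only the Event_DetectionSummaryEvent rows, so the ExternalApiType filter is still enforced without a subsearch.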
My requirement involves a lookup file containing a list of hosts - it holds the saved results of an alert, so the list is the list of servers that are down. Using that lookup file, I want an alert that runs every minute and notifies when a host in the lookup is back to normal. The problem I'm having is that once a host is back to normal, the same host should not be considered any further; only the remaining hosts should be checked.

Lookup file that stores the list of down servers: hostdown.csv

Query to find the list of down servers:

| search index=linux sourcetype=df | where ((PercentUsedSpace >= 80) AND (PercentUsedSpace<=90))
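A minimal sketch of one possible approach, assuming each event carries a host field and that a host counts as recovered when PercentUsedSpace drops below 80 (the field and file names follow the post; the recovery threshold is an assumption). The alert search, run every minute, reports hosts from the lookup that have recovered:

index=linux sourcetype=df
| stats latest(PercentUsedSpace) AS PercentUsedSpace BY host
| where PercentUsedSpace < 80
| lookup hostdown.csv host OUTPUT host AS down_host
| where isnotnull(down_host)

A second step then rewrites the lookup so recovered hosts are dropped and not considered again:

| inputlookup hostdown.csv
| search NOT [ search index=linux sourcetype=df | stats latest(PercentUsedSpace) AS PercentUsedSpace BY host | where PercentUsedSpace < 80 | fields host ]
| outputlookup hostdown.csv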
We are running Splunk Enterprise 8.2.4 and it had been working fine with SSO authentication until I updated the SSL certificate. The certificate that was updated is the one referenced in my web.conf, and my web browser shows the new certificate; however, it broke SSO. Please note the updated certificate is also used in authentication.conf by SAML (ClientCert). The error messages below are seen in the Splunk _internal logs:

ERROR UiSAML [66314 webui] - IDP failed to authenticate request. Status Message="" Status Code="Responder"
ERROR Saml [66314 webui] - Failed to parse issuer. Could not evaluate xpath expression /samlp:Response/samlp:Status/samlp:StatusMessage or no matching nodes found. No value found in SamlResponse for key=/samlp:Response/samlp:Status/samlp:StatusMessage
Could not evaluate xpath expression /samlp:Response/samlp:Status/samlp:StatusDetail/Cause or no matching nodes found. No value found in SamlResponse for key=/samlp:Response/samlp:Status/samlp:StatusDetail/Cause
Could not evaluate xpath expression //saml:Assertion/saml:Issuer or no matching nodes found. No value found in SamlResponse for key=//saml:Assertion/saml:Issuer

How can I fix the problem, please?
I would like to know how to write a single field's values to an outputlookup. Currently there are several fields - id, condition, value - but the need is to write only the condition field. Can anyone provide the query for this?
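A minimal sketch, assuming the source is a search (or an existing lookup) carrying the fields id, condition, and value, and a hypothetical output file name conditions.csv:

... your base search ...
| table condition
| outputlookup conditions.csv

table condition (or fields condition) keeps only that one field, so only the condition column is written to the lookup file.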
Hello,

I upgraded Splunk Enterprise to 9.0.0 today - it went OK. I then upgraded the Splunk Universal Forwarders on Windows Server 2019 to 9.0.0 - the upgrade says all went OK. I opened cmd and executed splunk restart. The SplunkForwarder restarts OK, but I get the following error:

Invalid key in stanza [webhook] in D:\Program Files\SplunkUniversalForwarder\etc\system\default\alert_actions.conf, line 229: enable_allowlist (value: false)

In the file alert_actions.conf on line 229:

[webhook]
enable_allowlist = false

Does anyone know why I'm seeing this after the upgrade? Thanks
I want a Splunk SOAR playbook to iterate through a list of hosts every hour, check whether they are online in our EDR tool, and, if they are online, display a message to the user via the EDR API. Although the playbook is already complete, I can't think of a good way to have it execute every hour. I thought about using a Splunk app ingestion to query our Splunk instance every 60 minutes to create a dummy label and container that the playbook could be set to run on as "active", but that seems like an awkward workaround.

Is there some other app or setting I'm missing that could achieve this goal?
...
Hi all, I'm working on a deployment with a Universal Forwarder, a Heavy Forwarder, an Indexer Cluster, and a Search Head Cluster. The problem is this: I have been indexing data from different CSVs for a long time. Yesterday, for the first time, I realized that not all the rows of my CSV files are being indexed. For example: in one CSV I count 24k rows, but when I perform a stats count on the index I see only 16-17k rows. Each file rotates every minute. There is nothing in the logs that points to an error.

On the UNIVERSAL FORWARDER I have this in inputs.conf:

[batch:///var/opt/OV/shared/perfSpi/datafiles/metric/final/F5_ResurcesGroup*]
disabled = 0
index = f5_metrics
sourcetype = f5_metrics
initCrcLength = 100000
move_policy = sinkhole

On the HEAVY FORWARDER, props.conf:

[f5_metrics]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 0
SEDCMD-dropheader = s/^"Node.+//g
SEDCMD-select_fields = s/([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*)/\1,\2,\4,\5,\9,\17,\18/g
#SEDCMD-select_fields = s/([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*)/\1,\4,\5,\9,\17,\18/g
TRANSFORMS-f5_fields_name_extract = f5_fields_name_extract

And in transforms.conf:

[f5_fields_name_extract]
REGEX = ([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*)
FORMAT = NodeID::$1 TimeStamp::$2 period_length::$3 ltmVirtualServStatClientCurConns::$4 ltmVirtualServStatVsUsageRatio1m::$5 DisplayAttribute::$6 PollingInterval::$7
#FORMAT = NodeID::$1 period_length::$2 ltmVirtualServStatClientCurConns::$3 ltmVirtualServStatVsUsageRatio1m::$4 DisplayAttribute::$5 PollingInterval::$6
WRITE_META = true

Any suggestions? Thanks
Fabrizio
Hi everyone, I need help understanding where I'm going wrong and how to fix the problem. I have a lookup table that stores the last four years of data. The data has a monthly seasonality, and I want to predict the next year. I use the predict command with the LLP algorithm to estimate the values; I showed my query and the output in the screenshots. As you can see, the prediction doesn't work: it simply repeats the last two values cyclically (see the associated table). Could you help me understand where I'm going wrong with the query or the data? I did the same work months ago and got a different, more realistic output. Thanks a lot!
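Since the query itself is only in a screenshot, a minimal sketch of how predict with LLP is typically set up for monthly seasonal data, assuming a hypothetical lookup mydata.csv with _time and value fields (the field and file names are placeholders):

| inputlookup mydata.csv
| timechart span=1mon sum(value) AS value
| predict value algorithm=LLP period=12 future_timespan=12

LLP models a seasonal component, so a period that does not match the actual seasonality, or gaps in the series passed to predict, are common causes of flat or short-cycle predictions like the one described.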
I have a SHC consisting of 4 SHs (Splunk on-prem on AWS). One or another of the SHs keeps going into a down state. The only info I can find is a "Failed system status check" on the EC2 instance, with splunkd in a stopped state. The splunkd.log and health.log seem to be fine too. Any suggestions that could solve the issue?
I am investigating a customer's concern that this particular search is not writing its summary to the 'stash' sourcetype. This is the SPL we have. I am relatively new to summary indexing. Any pointers on where I should be looking?

| tstats summariesonly=true count sum(Web.bytes_in) AS total_bytes_in sum(Web.bytes_out) AS total_bytes_out from datamodel=Web where sourcetype="qnet:proxysg:access*" groupby _time span=1d Web.category
| rename Web.category AS category
| eval total_bytes=total_bytes_in + total_bytes_out
| collect index=security_summary source="SavedSearch.Qnet_Daily_Category_Stats" sourcetype="SavedSearch.Qnet_Daily_Category_Stats"
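One quick check worth noting: collect only applies the default stash sourcetype when no sourcetype argument is given, so the explicit sourcetype="SavedSearch.Qnet_Daily_Category_Stats" in the search above is what the written events will carry. A minimal verification search against the summary index, a sketch using the names from the SPL above:

index=security_summary source="SavedSearch.Qnet_Daily_Category_Stats"
| stats count BY sourcetype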
Hi All,

I have multiple inputs in my Dashboard Studio dashboard, and their tokens are used across multiple searches in the same dashboard. The issue I am seeing is that I have to click the Submit button multiple times. My multiselect inputs are interdependent on each other, for example:

Values of B depend on A
Values of C depend on A, B
Values of D depend on A, B, C
Values of E depend on A, B, C, D

and so on. Now, say I have selected A, B, C - D will not update its values until I click the Submit button; similarly, E will also not update its values until I press Submit again after selecting D. Is there any option for a refresh button? If I remove the Submit button, the whole dashboard starts refreshing with each input selection, and this eats up a lot of time.
I have fields called URN, ControlFlowID, RequestID and SpanID. The requirement is to show, for each URN, its ControlFlowIDs; for each ControlFlowID, its RequestIDs; and for each RequestID, its SpanIDs - populating the data in a table view by merging multivalues into a single row. Can anyone help me with this?

Example data:

URN     ControlFlowID   RequestID   SpanID
URN1    CTRLFLOW1       REQ1        SpanID1
URN1    CTRLFLOW1       REQ2        SpanID2
URN1    CTRLFLOW1       REQ3        SpanID3

Required output:

URN     ControlFlowID   RequestID   SpanID
        CTRLFLOW1       REQ1        SpanID1
URN1                    REQ2        SpanID2
        CTRLFLOW2       REQ3        SpanID3
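A minimal sketch of one way to merge the rows, assuming the four fields are already extracted (list() preserves row order and the pairing between RequestID and SpanID within each group):

... your base search ...
| stats list(RequestID) AS RequestID list(SpanID) AS SpanID BY URN ControlFlowID

This yields one row per URN and ControlFlowID, with RequestID and SpanID as multivalue columns; swap list() for values() if duplicates should be collapsed.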
I am trying to set up an alert for one event; I am running a query at a specific time. If there are 8 records, the email should be sent as Success; otherwise it should be sent as Fail. I have currently set up a cron schedule successfully and am receiving the proper alert. Now, in case there are fewer than 8 rows, I need to get a failure (i.e., missing records) email, but I am unable to find the settings for this.
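A minimal sketch of one approach, assuming the base search returns one row per record: compute the status inside the search, then let a single alert always fire and reference the result token in the email:

... your base search ...
| stats count
| eval status=if(count=8, "Success", "Fail")

Since stats count always returns exactly one row (even when count is 0), setting the trigger condition to "Number of results is greater than 0" makes the alert fire every run, and the email action can use $result.status$ (and $result.count$) in the subject or message, so one alert covers both the success and the failure case.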
I would like to know the main difference between the lantern.splunk use case library and the research.splunk detections/analytic stories. I am quite new to Enterprise Security and not sure which one I should start with.
Hello everyone,

I'm fairly familiar with routing data based on the logs themselves; however, I was wondering if there is a way to call an external mapping table from the transforms.conf file.

Each log contains one identifiable serial number:

Firewall 1 with serial number xxxxxxxxxxxx
Firewall 2 with serial number yyyyyyyyyyyy
Firewall 3 with serial number zzzzzzzzzzzz

We would like to send each log to a different indexer depending on that serial number. Serial numbers are included in the logs, and we have a mapping table that looks like this:

serial number    indexer
xxxxxxxxxxxx     indexer 1
yyyyyyyyyyyy     indexer 2
zzzzzzzzzzzz     indexer 3

and so on...

The only way I can see right now is to create one manual entry in the props and transforms files, as sketched below. I was wondering if there is a way to call an external mapping table instead; that way, whenever a new firewall comes into play, we would only need to update the table and not the props and transforms files.

Thank you
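For reference, a hedged sketch of the manual per-serial routing approach the post describes (the stanza names, group names, server addresses, and sourcetype are placeholders):

# props.conf
[your_firewall_sourcetype]
TRANSFORMS-route_by_serial = route_fw1, route_fw2

# transforms.conf
[route_fw1]
REGEX = xxxxxxxxxxxx
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1_group

[route_fw2]
REGEX = yyyyyyyyyyyy
DEST_KEY = _TCP_ROUTING
FORMAT = indexer2_group

# outputs.conf
[tcpout:indexer1_group]
server = indexer1.example.com:9997

[tcpout:indexer2_group]
server = indexer2.example.com:9997

Each new firewall indeed requires a new transforms stanza under this scheme, which is exactly the maintenance burden the question is trying to avoid.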
Hi everyone. I am a new user to Splunk. Recently, I have run into some trouble trying to extract a certain value from a field. I have a field called Message, which logs the messages sent to a web server. However, I only want to retrieve a specific value when the message contains the content that I want.

Example: I want to retrieve the user's name when a service is invoked.

Time                        Message
2021-05-15T01:51:52.321Z    Session ID 1234 has been created
2021-05-15T01:51:52.321Z    Invoked by user David from IP 127.256.25.16
2021-05-15T01:51:52.321Z    Configuration Reading - Start

Hence, I only want to extract the name David when the specific message log containing the name appears. Does anyone have a clue how I can extract that value specifically when it appears? Thanks in advance.

EDITED: Hey Splunk users, if you run into the same problem I did, where the message logs change constantly, make sure to search for the message you are looking for first, before drilling down to the specific field. In my case:

| search Message="Invoked by user *"
| rex field=Message "Invoked by user (?<user>\w+)"
Hello everyone! I want to combine two searches, or find another solution. Here is my problem: I need a timechart where I can show the occurrences of some IDs (example ID: 345FsdEE344FED-354235werfDF2) and put an average line over it.

Graph idea:
Orange: timechart with a distinct count of the IDs
Green: stats with the average of the count of the IDs

index=example_dev
| bin span=1m _time
| stats dc(TEST_ID) as count_of_testid by _time

I want to stay flexible with the time frame, but for the span, 15 minutes is OK. Thank you all a lot and have a nice day.
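A minimal sketch of one way to get both series from a single search, building on the query above (the overall average is computed with eventstats so it becomes a flat overlay line):

index=example_dev
| timechart span=15m dc(TEST_ID) AS count_of_testid
| eventstats avg(count_of_testid) AS average_count

Charted as a line chart, count_of_testid gives the per-interval distinct count and average_count draws a constant average across the whole time range; putting average_count on a chart overlay reproduces the orange/green picture described.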
After following the JBoss setup tutorial at https://docs.splunk.com/Documentation/AddOns/released/JBoss/Setup, I am able to search the WildFly 23 server JMX logs with index="main" sourcetype="jboss:jmx", but when I search for the server log with index="main" sourcetype="jboss:server:log", it does not show any results. Below is the inputs.conf file:

[jboss://dumpAllThreads]
disabled = 0
account = wildfly
duration = 10
index = main
sourcetype = jboss:jmx

[monitor://D:/Wildfly_9/wildfly-23.0.2.Final/standalone/log/server.log*]
disabled = false
followTail = false
index = main
sourcetype = jboss:server:log

[monitor://D:/Wildfly_9/wildfly-23.0.2.Final/standalone/log/*gc.log*]
disabled = false
followTail = false
index = main
sourcetype = jboss:gc:log

[monitor://D:/Wildfly_9/wildfly-23.0.2.Final/standalone/log/access.log*]
disabled = false
followTail = false
index = main
sourcetype = jboss:access:log