Since a notable event is generated from a correlation search match, is there a way to output the notable event "event_id" from the correlation search event? I have a use case where I need to update notable event fields that are associated with a specific correlation search event.
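If your ES version provides the standard `notable` macro, one way to recover the event_id after the fact is to search the notable index filtered by the correlation search's name. A minimal sketch, assuming the default notable index and a correlation search called "My Correlation Search" (both placeholders):

```spl
`notable`
| search search_name="My Correlation Search"
| table _time event_id rule_name status owner
```

The returned event_id values can then be fed to the Notable Event REST endpoint or the ES UI to update the associated fields.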
The input that I have created to ingest data from my database using DB Connect is not indexing any events. I looked in splunk_app_db_connect.server.log and found the following message:

2021-03-02 09:45:55.885 -0600 [dw-2298 - PUT /api/inputs/DailyGivenNotificationSuccess] ERROR c.s.d.m.repository.DefaultConfigurationRepository - action=failed_to_get_the_conf reason=HTTP 401 -- call not properly authenticated com.splunk.HttpException: HTTP 401 -- call not properly authenticated

My DB input is using the default user "admin". How do I get this call to be properly authenticated?
I have events that often contain large JSON data; however, I need to send additional data along with them. Typically my events look something like this:

timestamp="2021-03-02 11:46:48,745" correlationKey="30C05D96A7544BF3948034BE0C" level=INFO message="{'json': 'test'}"

What I would like to do is convert it so that the JSON gets formatted but I can still keep the rest of the data in the event, so that I can use items like correlationKey. Is there some way I could do this? Perhaps with a custom source type? Right now, if the JSON is too large, it bogs down my search UI really badly when it has hundreds of lines of JSON. I still want to be able to log the raw JSON response, though.
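One approach worth trying, sketched under the assumption that the payload always sits in a message="..." key-value pair: extract the payload into its own field at search time and feed it to spath. Note that the single quotes in the sample are not valid JSON, so they are swapped first (which would corrupt any legitimate apostrophes inside values — a logging-side fix is cleaner if possible). The sourcetype is a placeholder:

```spl
sourcetype=my_app_logs
| rex field=_raw "message=\"(?<json_payload>\{.*\})\""
| eval json_payload=replace(json_payload, "'", "\"")
| spath input=json_payload
```

This leaves _raw (and fields like correlationKey) intact while making the JSON keys available as extracted fields.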
Can someone assist with extracting fields from the string below? The first line is header info: date, protocol, response_status, response_type. Each line following (one to many) is a website and an error code. I can't figure out a regex to capture the header line AND the successive lines of websites and error codes.

02-Mar-2021 UDP Response Found Response Type: ABC
www.site1.com 404
www.site10.com 100
www.site4.com 400
.....

Thanks in advance.
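One sketch for search-time extraction is two rex passes over the multi-line event: one anchored to the header, and one with max_match=0 that collects every website/code pair into multivalue fields. The field names and the exact header layout are assumptions based on the sample above:

```spl
| rex "^(?<date>\d{2}-\w{3}-\d{4})\s+(?<protocol>\w+)\s+(?<response_status>Response \w+)\s+Response Type:\s+(?<response_type>\w+)"
| rex max_match=0 "(?<website>www\.\S+)\s+(?<error_code>\d+)"
```

With max_match=0, website and error_code become parallel multivalue fields, which mvexpand can split into one row per site if needed.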
I haven't been able to pull in my Cortex logs for some time now, and I think the issue is that the dashboard searches look for the field "actions" while the field from Cortex is called "act". Since "actions" doesn't exist in the logs, the events don't show up in the dashboard. Has anyone else noticed this issue? The logs are in Splunk; if I search index=pan_logs sourcetype=pan:traps4, I see them.
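If the add-on really never maps act, a search-time field alias on the search head is one possible workaround; a props.conf sketch, with the stanza name taken from the sourcetype in the question (the alias class name is arbitrary):

```
[pan:traps4]
FIELDALIAS-cortex_act = act AS actions
```

After a restart or a debug/refresh, the dashboard searches should see "actions" populated from "act" — though checking for an updated version of the add-on first may solve it upstream.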
We set up a webhook in Splunk Enterprise to send search results to a webhook receiver periodically. Our questions are:

1. When the receiver crashes and cannot receive data, can Splunk detect the disconnection and save unsent data locally until the receiver is recovered?
2. If yes, when the webhook receiver is recovered, can Splunk resume data transport automatically, or does it need a manual trigger?
Hi guys,

I'm going crazy and I'm completely lost. I'm trying to create a query that displays concurrent connections. I understand that this has been asked before, but what seemed to be the solution or "most popular" answer didn't work for me and seemed too complicated. I'm trying to create a search that finds concurrent connections and then creates a table with the time, the user, and when the concurrency occurred. I know I am missing something because even though I get no errors, I see a message saying "572781 events were ignored due to missing or invalid start or duration fields." This is my search:

index=fw tag=vpn
| eval "start"=cisco_vpn_start
| eval "start"=ftnt_fgt_vpn_start
| eval "stop"=cisco_vpn_end
| eval "stop"=ftnt_fgt_vpn_end
| eval "total_time"=start-stop
| concurrency duration=total_time
| timechart span=5m max(concurrency) as concurrency
| where concurrency > 0
| table concurrency, user, _time

To explain: we basically have two VPNs. Duration: the total time from when one session starts until it ends. Concurrency: measures the number of events whose spans overlap with the start of each event. I'm using 5 minutes as a time span and I want it to display the events where there is at least one concurrent connection. My brain is fried and I can't figure out what I'm doing wrong. I've been biting my nails and I think I peeled off all my nail polish. Any help would be greatly appreciated.
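Two likely culprits in the search above: each eval unconditionally overwrites the previous one (so start always ends up as ftnt_fgt_vpn_start, which is null for Cisco events), and start-stop produces a negative duration. A sketch of a repaired version, assuming each event carries either the Cisco or the Fortinet fields (but not both) and that the start/end fields are epoch timestamps:

```spl
index=fw tag=vpn
| eval start=coalesce(cisco_vpn_start, ftnt_fgt_vpn_start)
| eval stop=coalesce(cisco_vpn_end, ftnt_fgt_vpn_end)
| eval total_time=stop-start
| where isnotnull(total_time) AND total_time>=0
| concurrency start=start duration=total_time
| where concurrency > 1
| table _time user concurrency
```

If the start/end fields are string timestamps rather than epoch values, they would need a strptime conversion before the subtraction.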
I have a parameterized query which returns results, and an alert action that sends the results to a location set as a configurable item. Based on different criteria, I need to call the alert action with different location values. I am currently creating all the different criteria wrappers manually and attaching a specific location value to each. If there were a way, either in SPL or on the command line, to call the alert action on top of the results returned, then instead of creating wrapper saved searches I could easily script it, pass appropriate values, and invoke the Splunk command to execute the SPL. In a nutshell: calling an alert action from SPL, or on the query results from the command line.
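SPL does have a command for exactly this: sendalert invokes an alert action on the current search results, and action parameters can be passed inline, which lets a scripted wrapper substitute the location per run. A sketch, where the action name, parameter name, and path are placeholders for your actual alert action:

```spl
index=main sourcetype=my_data
| stats count by host
| sendalert my_alert_action param.location="/some/output/path"
```

Combined with the splunk search CLI command or the search REST endpoint, this removes the need for one wrapper saved search per criterion.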
I am trying to forward log files from our Aruba Controller to Splunk, but I'm not sure how to configure the data input. I set up a data input on UDP port 514, but what should the source type be? aruba:syslog? The Aruba Controller has an option for syslog formatting of either CEF or RFC 3164. Which format is more Splunk-friendly?
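A minimal inputs.conf sketch for the UDP listener; the sourcetype and index names here are assumptions, so check whichever Aruba add-on you install for the sourcetypes it actually expects before committing to one:

```
[udp://514]
sourcetype = aruba:syslog
index = network
connection_host = ip
```

As for the format choice, it generally depends on the add-on doing the parsing: a plain syslog-style add-on expects RFC 3164, while CEF only helps if you pair it with a CEF-aware add-on.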
I have a requirement to see which users have logged into multiple servers before logging out of the previous server.

I currently have this search set up:

index="fed-prod" L_Action="New session"
| stats values(L_Server) as Linux_Server dc(L_Server) as host_count by L_user
| where host_count > 1

This search finds all the users who have logged in to multiple servers, but it doesn't tell me whether they logged out of the other server first, nor does it let me narrow the time down to a certain window. I currently do not have an active feed into Splunk and upload data manually due to licensing restrictions. The report would need to be run weekly. I would like to do this one of two ways. The first option would be to add a time parameter to the current search that checks whether the timestamps of the logs fall within a 15-minute window when the user logged into two servers; if the timestamps of the two logs are within 15 minutes, it outputs a finding of the user and the servers they logged into. The second option would be to do some sort of subsearch that checks which users logged in to which servers, then checks whether they logged out before logging into another one.
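One possible direction for the first option, sketched under the assumptions that logout events exist with L_Action="Session closed" (a placeholder value) and that events sort cleanly by time: use streamstats to compare each login with the same user's previous event within a 15-minute window:

```spl
index="fed-prod" (L_Action="New session" OR L_Action="Session closed")
| sort 0 L_user _time
| streamstats current=t window=2 range(_time) as gap dc(L_Server) as server_count values(L_Server) as servers by L_user
| where L_Action="New session" AND server_count > 1 AND gap <= 900
| table _time L_user servers gap
```

This flags logins that follow activity on a different server within 15 minutes; it does not yet verify that the earlier session was still open, which would need the login/logout pairing from the second option (e.g. via transaction or a streamstats over session state).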
I have a field from the search query called source which has a pattern of "text:text:text:dynamicText:dynamicText:dynamicText", where text = hardcoded values and dynamicText = values that keep changing for different logs. I want to extract the 2nd dynamic text as its own field and then perform a stats count on that field. I'm not able to figure out how to navigate over the 1st dynamic field using regular expressions.
Can we export alerts from a Splunk Cloud instance to a new instance? If yes, how can we export all the alerts? Is there any script available to do this?
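For taking stock of what needs migrating, the rest command can list saved searches (alerts are saved searches with alert attributes). A sketch, where the filter on actions assumes your alerts all have at least one alert action configured, which may need adjusting:

```spl
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search actions=*
| table title eai:acl.app search cron_schedule actions
```

The resulting table (or the same endpoint called directly over REST) can then be scripted against the saved/searches endpoint on the target instance to recreate each alert.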
Hi, can you please let me know if there is an add-on for Bamboo? If yes, can you provide the link to the Bamboo add-on available for Splunk 8.1?
Hi, I have a main search that looks like this:

index=main RESPONSE_CODE="0" earliest=-4mon@mon latest=mon@mon
| stats count AS Total_success BY MERCHANT_CODE

This produces a table with each merchant and their sales for 4 months. The subsearch that I want to incorporate is:

index=backend earliest_time=@d
| table CODE ACQ_BANK

This table has the merchant code (which is the same as MERCHANT_CODE above) and the corresponding bank. And because the data needs to be updated daily, I limit the search to the latest possible. I want to produce a table that has 3 columns: MERCHANT_CODE, Total_success, ACQ_BANK. Thank you in advance.
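One sketch that combines the two, assuming CODE and MERCHANT_CODE hold the same values and the subsearch stays under the subsearch result limits. Note two assumed corrections: the inline time bound in a search is earliest, not earliest_time, and latest=@mon assumes mon@mon above was a typo:

```spl
index=main RESPONSE_CODE="0" earliest=-4mon@mon latest=@mon
| stats count AS Total_success BY MERCHANT_CODE
| join type=left MERCHANT_CODE
    [ search index=backend earliest=@d
      | rename CODE as MERCHANT_CODE
      | table MERCHANT_CODE ACQ_BANK ]
| table MERCHANT_CODE Total_success ACQ_BANK
```

If the merchant/bank mapping is large or static, maintaining it as a lookup table and using the lookup command would scale better than join.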
Hello, I have a query (e.g. "....... " | stats count, avg(...)) and I get this as a result:

Count  avg
20     40

What I would like is a separate column before that, which I can name myself. So, desired:

OwnColumn  Count  avg
XYZ        20     40

How can I include this in my query?
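A literal eval after the stats adds such a column; a sketch using the placeholder column name and value from the question (the avg field name is also a placeholder):

```spl
... | stats count, avg(response_time) as avg
| eval OwnColumn="XYZ"
| table OwnColumn count avg
```

The final table command just reorders the columns so OwnColumn comes first.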
This is the search that merges identities, according to the search preview:

| inputlookup append=T "administrative_identity_lookup"
| fillnull value="administrative_identities" _source
| inputlookup append=T "simple_identity_lookup"
| fillnull value="static_identities" _source
| eval identity=split(identity, "|"), identity=if(_source=="administrative_identities", mvappend(identity,email,replace(email,"@.*","")), identity), identity=if(_source=="static_identities", mvappend(identity,email,replace(email,"@.*","")), identity), identity=mvjoin(mvdedup(identity), "|")
| table "_source","bunit","category","email","endDate","first","last","managedBy","nick","phone","prefix","priority","startDate","suffix","watchlist","work_city","work_country","work_lat","work_long","identity"
| eval `iden_mktime_meval(startDate)`,`iden_mktime_meval(endDate)`,identity=mvsort(identity)
| sort 0 +identity
| inputlookup append=T "identity_lookup_expanded"
| entitymerge "identity"

If identity_lookup_expanded is empty (because I've flushed it), the search produces results. If I pipe these results into an "outputlookup identity_lookup_expanded", it populates identity_lookup_expanded OK (except all text is lowercase, but see my other question regarding that). Now if I run the above search again, it returns no results. It's OK until it hits entitymerge. What does this mean?
All, is there a way to get the Java version via the REST API? /applications/foo/nodes seems like the logical resource, but it doesn't have the Java version. It does have the machine and app agent type and version, but I don't see the Java version anywhere. Thanks.
We upgraded to Enterprise Security 6.0.2 and now every single piece of text in identity_lookup_expanded is lowercased. For instance, instead of 'first' being "Gabriel", it's now "gabriel". It does this for every single field, including job title ('category'), etc. How can I fix it?
Can we get a query to fetch the saved searches/dashboards that are running with a time range of more than 24 hours? In our search environment, users are running searches over large time ranges, which is putting a heavy load on our infrastructure. Until it stabilizes, we want to allow users to run only searches within 24 hours, not queries like Last 3 months / All time / Before this date. So we want to filter such queries and inform the users; if a query is critical enough, we will allow it. Is there any way to pull a list like that?
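The audit index records the time bounds of each search, so one sketch is to compute the requested range per search. Assumptions: the field names below (search_et/search_lt) match what your version logs (some versions use api_et/api_lt instead), and all-time searches, which report "N/A" bounds, are excluded here and would need separate handling:

```spl
index=_audit action=search info=granted search_et=* search_lt=* search_et!="N/A" search_lt!="N/A"
| eval range_hours=(search_lt-search_et)/3600
| where range_hours > 24
| table _time user savedsearch_name range_hours search
```

For enforcement rather than reporting, role-based srchTimeWin restrictions in authorize.conf cap the allowed time range at the role level.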
Hi, we have implemented a ServiceNow integration with AppDynamics, and we see a lot of noise in the alerts that are getting generated. We want to fine-tune it. Are there any best practices for this that cover correlation and fine-tuning to reduce the noise?