All Topics


I am trying to speed up a search on Splunk. The search looks through millions of logs for matches to around 100 event types (each event type has multiple strings to match), so it has ended up being very slow. The original search I have is:

    eventtype=fail_type* source="*console" host = $jenkins_server$
    | timechart count by eventtype

This plots a timechart of the different types of failures in the Jenkins console logs, which is what I want. I tried to speed up the job by getting it to look only through logs from failing jobs. I can get a table of failing console logs using the search below, but if I try to use those console paths for a new search by adding "| search source=console_path" it doesn't work:

    event_tag="job_event" host = $jenkins_server$
    | eval job_result=if(type="started", "INPROGRESS", job_result)
    `utc_to_local_time(job_started_at)`
    | search (job_result=FAILURE OR job_result=UNSTABLE OR job_result=ABORTED)
    | eval console_path= "*" + build_url + "console*"
    | table console_path build_url job_result

Appreciate any help or suggestions for other ways to speed up the search.

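One possible way to wire the two searches together (a minimal sketch, assuming the console logs' source values contain the job's build_url, as the console_path eval above implies) is to turn the second search into a subsearch that emits source filters for the first:

    eventtype=fail_type* source="*console" host = $jenkins_server$
        [ search event_tag="job_event" host = $jenkins_server$
          | search job_result=FAILURE OR job_result=UNSTABLE OR job_result=ABORTED
          | eval source="*" . build_url . "console*"
          | dedup source
          | fields source
          | format ]
    | timechart count by eventtype

The usual subsearch limits (roughly 10,000 results and a 60-second runtime by default) still apply, so this only helps when the number of failing builds in the time range is reasonably small.
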
How do I pull together a chart of all our user accounts, with the last time each user logged in?

I currently have:

    eventtype=wineventlog_security (EventCode=4776 OR EventCode=4777 OR EventCode=680 OR EventCode=681)
    | stats max(Time) by Logon_Account

I am getting the time but also need to display the date. I am also getting a lot of service accounts; is there an easy way to filter those out?

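A minimal sketch of one way to get both date and time in a single column and drop machine accounts (the trailing-$ exclusion targets computer accounts; the svc_* prefix is only an example of how a service-account naming convention could be excluded):

    eventtype=wineventlog_security (EventCode=4776 OR EventCode=4777 OR EventCode=680 OR EventCode=681)
        Logon_Account!="*$" NOT Logon_Account="svc_*"
    | stats latest(_time) as last_logon by Logon_Account
    | eval last_logon=strftime(last_logon, "%Y-%m-%d %H:%M:%S")
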
Hey guys, I'm having trouble updating Splunk from version 8.1.0 to version 8.2. When running the command "rpm -i --replacepkgs splunk-8.2.2.1-ae6821b7c64b-linux-2.6-x86_64.rpm", it displays several alerts like the one below (the same alert occurs for several files):

    file /opt/splunk/share/splunk/search_mrsparkle/exposed/pcss/version-5-and-earlier/admin_lite.pcss from install of splunk-8.2.2.1-ae6821b7c64b.x86_64 conflicts with file from package splunk-8.1.0.1-24fd52428b5a.x86_64

How should I go about resolving the problem?

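For what it's worth, a sketch of one thing to check: rpm -i tries to install the new package alongside the existing splunk-8.1.0.1 package, which is where the file conflicts come from, whereas an in-place upgrade is normally done with rpm's upgrade flag:

    # stop Splunk, then upgrade the existing package in place instead of installing a second copy
    /opt/splunk/bin/splunk stop
    rpm -U splunk-8.2.2.1-ae6821b7c64b-linux-2.6-x86_64.rpm
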
I work for a utility company and, among many things, we have an index for some environmental and system totals. This index is used to compute yesterday's sales and compare them to the same day last year; we also do some calculations comparing year to date with the previous year to date. This means that the dashboards may access events two years old. The data is a single event per day, going back to 1995. After loading the data (via DB Connect, from a SQL table) everything is great for a while, and then one day the data up until about 18 months ago is gone. I am guessing it is being rolled to frozen via some kind of default. What setting should I use to keep all the data in the index and searchable?

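A minimal indexes.conf sketch of the two settings that usually govern this (the stanza name is a placeholder; whichever limit is reached first causes buckets to roll to frozen):

    [your_totals_index]
    # age-based retention: ~30 years in seconds
    frozenTimePeriodInSecs = 946080000
    # size-based retention: make sure the size cap is not what is pushing old buckets out (default is 500000 MB)
    maxTotalDataSizeMB = 500000
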
Hi. The data below is dynamic; a sample input table is given below. The rows may arrive in any order (for simplicity I have put the data in order so it is easy to understand).

Input:

    Feature Name   Browser Name   Result
    Feature 1      B1             Pass
    Feature 1      B1             Pass
    Feature 1      B1             Pass
    Feature 1      B1             Pass
    Feature 1      B2             Fail
    Feature 1      B2             Pass
    Feature 1      B2             Pass
    Feature 1      B2             Pass
    Feature 1      B3             Pass
    Feature 1      B3             Pass
    Feature 1      B3             Pass
    Feature 1      B3             Fail
    Feature 1      B4             Pass
    Feature 1      B4             Pass
    Feature 1      B4             Fail
    Feature 1      B4             Pass

Based on the input table above, the output needs to be generated as listed below. A cumulative result needs to be generated per feature and browser name: if any one result fails on a particular browser, the feature is considered failed on that browser.

Output:

    Feature 1      B1             Pass
    Feature 1      B2             Fail
    Feature 1      B3             Fail
    Feature 1      B4             Fail

Would you please help me generate the expected output as listed?

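A minimal sketch of one way to roll the results up (assuming the field names are literally "Feature Name", "Browser Name", and Result):

    ... your base search ...
    | eval failed=if(Result="Fail", 1, 0)
    | stats max(failed) as any_failed by "Feature Name", "Browser Name"
    | eval Result=if(any_failed=1, "Fail", "Pass")
    | fields "Feature Name", "Browser Name", Result
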
When I create an ITSM alert and use $result.Activity$, the correct value for the "Activity" field appears in ITSM. How do I represent a field called "Start Time UTC{}"?

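One common workaround (a sketch, assuming the alert's search can be edited) is to rename the awkward field to a token-friendly name in the search itself and then reference $result.start_time_utc$ in the alert action:

    ... your alert search ...
    | rename "Start Time UTC{}" as start_time_utc
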
Hi, I'm trying to filter the results of one search based on the results of another search. Example: consider the following table of data:

    user   eventId
    Joe    1
    Joe    2
    Bob    3

I have created a search that returns only the eventIds generated by user Joe and creates a token with the result:

    <search>
      <query>
        "event created" user=Joe | table eventId
      </query>
      <done>
        <set token="eventId">$result.eventId$</set>
      </done>
    </search>

I have another table with the following data:

    eventId   eventName
    1         myEvent_1
    2         myEvent_2
    3         myEvent_3

What I would like to do is create a search that returns just the eventId and eventName rows that were generated by user Joe, using the token created in the first search. So far I have this query:

    "event names" eventId=$eventId$ | table eventId eventName

This query only returns the first result from the token list rather than every result. Is there a way to use the token this way to return results for all values in the token? I would like to avoid using join or subsearches, as I will need to create multiple tables with the same token filter and those methods would start to get very slow. Thanks in advance!

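A minimal sketch of one way around the first-row-only behaviour (the eventId_filter field name is just an illustration): have the first search collapse all of the eventIds into a single search-ready string, set the token from that field in the <done> handler, and drop the whole token into the second search as "event names" $eventId_filter$.

    "event created" user=Joe
    | stats values(eventId) as eventId
    | eval eventId_filter="(eventId=" . mvjoin(eventId, " OR eventId=") . ")"
    | table eventId_filter
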
This is mostly just a curiosity, motivated by this post on how to compare a particular time interval across multiple larger time periods. Effectively the solution seems to be to generate a list of time intervals and run map subsearches on each entry. When I have multiple time periods that I'd like to run stats on, I typically use a multisearch command followed by a chart, as follows:

    | multisearch
        [ index=potato et=<et1> lt=<lt1> | eval series=1 ]
        [ index=potato et=<et2> lt=<lt2> | eval series=2 ]
        ...
        [ index=potato et=<etn> lt=<ltn> | eval series=n ]
    | timechart count by series

I suppose you could make it work by substituting the et's and lt's via subsearch, but it won't work if the number of time intervals, n, is also dynamically generated by some prior search. I know you can use a number of different techniques, but they all have different drawbacks. You could use map, which offers pretty much all the flexibility you need (I've abused it plenty of times doing things like map search=`searchstring($searchstring$)`), but there are performance issues: subsearches can time out, and map doesn't offer the same optimization as multisearch does when you just need to string multiple streams together. You could just search the entire time range and use some eval logic to filter out the time intervals you need, but isn't that suboptimal, since you're searching more events than you need? Multisearch seems to be great at streaming multiple different time intervals together, and I'd love to have that optimization without having to hard-code each interval by hand. At this point, would you just have to resort to REST to schedule searches? How would we tie the data together? I'm not very familiar with what is possible with REST, as all of my experience is with plain SPL. In short: how do we stream events across multiple, dynamically generated time intervals without running into subsearch limitations?

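For reference, a minimal sketch of the "search the whole range and filter with eval" alternative mentioned above (the two intervals here are arbitrary examples; in practice the case() branches would be generated from whatever produces the interval list):

    index=potato earliest=<overall_et> latest=<overall_lt>
    | eval series=case(
          _time>=relative_time(now(), "-7d@d")  AND _time<relative_time(now(), "-6d@d"),  1,
          _time>=relative_time(now(), "-14d@d") AND _time<relative_time(now(), "-13d@d"), 2)
    | where isnotnull(series)
    | timechart count by series
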
Hi, I've been asked to add inputs to my organization's Splunk Enterprise from Cisco routing and switching gear. I remember reading in the documentation that Splunk's recommended best practice is to use a syslog collector running a UF or HF between the routers/switches and the indexers, to keep the indexers from being flooded with incoming traffic. I had thought to do this with a Linux machine running a UF or HF, ingesting router/switch log traffic before forwarding it along to Splunk Enterprise, but I've been told an additional machine, even a virtual one, is a no-go. I don't believe a Windows syslog collector would work either, as Windows can't do this with native tools, and I've been told 3rd-party software is also a no-go. Could anyone recommend a solution that doesn't use a Linux syslog collector or a Windows syslog collector running 3rd-party software, and that still meets Splunk's best practice suggestions? Thank you, 603_Dan

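If a dedicated collector really is off the table, one fallback worth knowing about (a sketch only; it gives up the buffering and load-spreading that the syslog-collector best practice is aimed at) is a direct network input on a heavy forwarder or indexer via inputs.conf, for example:

    [udp://514]
    sourcetype = cisco:ios
    connection_host = ip
    index = network

The sourcetype and index names above are placeholders; whichever values match your Cisco add-on and index layout would go there.
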
Hi - I briefly need to ensure that events from one UF (multiple sources) are duplicated in two indexes on one index cluster (development, mumble, dashboard updates, etc.). I can't find any references in the docs; it looks like most people want to know how NOT to duplicate events. Has anyone done this? Got advice for how to proceed? Thanks, -Rob

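One approach that often comes up for this (a sketch under the assumption that index-time transforms on the indexers or a heavy forwarder are acceptable; the stanza, sourcetype, and index names are made up for illustration): clone the events with CLONE_SOURCETYPE, then route the cloned sourcetype to the second index.

    # transforms.conf
    [clone_for_dev]
    REGEX = .
    CLONE_SOURCETYPE = my_sourcetype_clone

    [route_clone_to_dev]
    REGEX = .
    DEST_KEY = _MetaData:Index
    FORMAT = development_index

    # props.conf
    [my_sourcetype]
    TRANSFORMS-clone = clone_for_dev

    [my_sourcetype_clone]
    TRANSFORMS-route = route_clone_to_dev
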
Hello, I have some issues writing a props configuration file for the following source data, stored in a text file. I also used TIMESTAMP_FIELDS = timeStamp there, to have field values under field names, but it's not working. My props configuration and a sample event are given below. Any help will be highly appreciated. Thank you so much.

    [ __auto__learned__ ]
    SHOULD_LINEMERGE=false
    LINE_BREAKER=([\r\n]+)
    TIMESTAMP_FIELDS=timeStamp
    TIME_PREFIX =^\{\"timeStamp\"\:\"
    TIME_FORMAT=%Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD=29

    {"timeStamp":"2021-06-21 14:53:56 EDT","appName":"OSD","userType":"FILTER","StatCd":null,"Amt":null,"errorMsg":"","eventId":"APP_ENTRY","eventType":"VIEW","fileSourceCd":null,"ipAddr":"11.212.41.151","mftCd":null,"outputCd":null,"planNum":null,"reasonCd":null,"returnCd":"00","sessionId":"XWGMwkncVD0m60OQBOahu8s/qG1c=","Period":null,"cat":"234207501","Type":null,"userId":"cdabea740a-g9a0-408f-a6a7-5ae70c689e6d","vsardata":{"uri":"/osd/rest/accountSummary","host":"appsa.rup.afsiep.net","ipAddress":"11.212.41.151","Id":"AXSabea753c-d9a0-408f-a6a7-5ae70c689e6d","requestId":"as58510cd-0459-614b7bc4-1afdd700-0bf875285d76","referer":"https://saada.ruer.egsiep.net/osd/","responseStatus":0}}

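For reference, a minimal props.conf sketch of how this timestamp is usually picked up for plain (non-structured) JSON events. TIMESTAMP_FIELDS only takes effect for structured inputs (INDEXED_EXTRACTIONS), so for raw events it is TIME_PREFIX/TIME_FORMAT that do the work; the sourcetype name is a placeholder and the %Z for the trailing "EDT" is an assumption about your data:

    [my_json_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^\{"timeStamp":"
    TIME_FORMAT = %Y-%m-%d %H:%M:%S %Z
    MAX_TIMESTAMP_LOOKAHEAD = 30
    KV_MODE = json
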
I am currently working on a project that requires me to set up DB Connect to pull data from MongoDB. When I try the Unity JDBC driver, it isn't working. Can anyone please tell me a workaround, or anything I might have missed? It is quite urgent; thanks in anticipation.

Hi all, I'm setting up an alerting process that monitors different servers on a single index and sends an alert if no events are fired over a 24-hour period. It's set to run at midnight and look back over the last 24 hours. If no events are found on any of the hosts, it should send an email with the details of that host. I'd like to set up one alert, if possible, rather than setting up an alert for each host. I should specify that host is internal, not the 'splunk_server'; each host is one of our servers. Here's what I tried so far:

    index=#### sourcetype=#### source=/usr/app/*/logs/#####.txt
    | rex field=source "\/[^\/]+\/[^\/]+\/(?<env>[^\/]+)\/.*"       // extract the environment
    | stats count by host                                           // count by host
    | lookup ######.csv serverLower AS host output IP               // add the IP of the host to the table
    | table env, host, IP, count                                    // inline table passed to the alert

This gives a correct count by host, but it just returns that count to the email alert. I'd like to send an email only when the count is 0 for a specific host, and then send only the details of that host with count=0. Thanks

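A minimal sketch of the usual pattern for "hosts with zero events" (assuming the ######.csv lookup lists every expected server in serverLower; a host that sends nothing never shows up in the search results, so the expected host list has to be appended before filtering on count=0):

    index=#### sourcetype=#### source=/usr/app/*/logs/#####.txt
    | stats count by host
    | append [| inputlookup ######.csv | rename serverLower as host | eval count=0 | fields host count]
    | stats sum(count) as count by host
    | where count=0
    | lookup ######.csv serverLower AS host output IP
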
Hi Splunkers, how do I create incidents in SNOW from Splunk SPL? We have the "ServiceNow Event Integration" alert action in use, which creates incidents when an alert triggers an event, but I'm trying to do the same from a Splunk search. I tried using the sendalert command as below:

    | sendalert servicenow param.severity="4" param.assigned_to="Assignment group" param.short_description="Alert Name" param.description="This is a test" param.u_environment="Dev" param.node=hostname param.resource="Nothing" param.type="Name"

and got this error:

    Error in 'sendalert' command: Alert action "Servicenow" not found.

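sendalert needs the alert action's internal (stanza) name, which is often not literally "servicenow"; depending on which add-on is installed it may be something like snow_incident or snow_event, but that is an assumption. A quick way to list the action names that actually exist on your search head:

    | rest /services/alerts/alert_actions
    | table title label
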
Hello, I have the list of files below in a directory, and many more; below are a few examples.

    210928105858:jira:HDL-APP004036:/hboprod/itdept/jira/domain/logs:$ ll
    total 147936
    -rw-r--r-- 1 jira jira  376923 Sep 26 23:59 access_log.2021-09-26
    -rw-r--r-- 1 jira jira 1547320 Sep 28 00:00 access_log.2021-09-27
    -rw-r--r-- 1 jira jira  891543 Sep 28 10:56 access_log.2021-09-28
    -rw-r--r-- 1 jira jira  881194 Sep 28 10:02 atlassian-jira-gc-2021-09-20_11-52-13.log.0.current
    -rw-r--r-- 1 jira jira  208279 Sep 28 10:49 atlassian-jira-gc-2021-09-28_10-04-10.log.0.current
    -rw-r----- 1 jira jira    8964 Sep 20 11:52 catalina.2021-09-20.log
    -rw-r--r-- 1 jira jira    8965 Sep 28 10:04 catalina.2021-09-28.log
    -rw-r--r-- 1 jira jira  768821 Sep 28 10:12 catalina.out
    -rw-r--r-- 1 jira jira       0 Sep 20 11:52 host-manager.2021-09-20.log
    -rw-r--r-- 1 jira jira       0 Sep 28 10:04 host-manager.2021-09-28.log
    -rw-r----- 1 jira jira       0 Sep 17 00:14 localhost.2021-09-17.log
    -rw-r--r-- 1 jira jira       0 Sep 20 11:52 localhost.2021-09-20.log
    -rw-r--r-- 1 jira jira       0 Sep 28 10:04 localhost.2021-09-28.log
    -rw-r--r-- 1 jira jira       0 Sep 20 11:52 manager.2021-09-20.log
    -rw-r--r-- 1 jira jira       0 Sep 28 10:04 manager.2021-09-28.log

I want to monitor only catalina.out and the access_log files, not the others. I have configured a monitoring stanza for catalina.out and it is working as expected for me:

    [monitor:////hboprod/itdept/jira/domain/logs/catalina.out]
    sourcetype = log4j
    ignoreOlderThan = 7d
    crcSalt = <string>

I need help writing a monitoring stanza for access_log, as these files get created daily with that day's date in the name. How can I configure these files to be monitored?

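A minimal sketch of one way to pick up the dated files (monitor paths accept wildcards; the sourcetype is a placeholder and the other settings just mirror your catalina.out stanza):

    [monitor:///hboprod/itdept/jira/domain/logs/access_log.*]
    sourcetype = jira:access_log
    ignoreOlderThan = 7d

Alternatively, a directory-level monitor stanza with whitelist = access_log\.\d{4}-\d{2}-\d{2}$ restricts ingestion to just those file names.
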
At our organization we use Splunk with Apache to provide LDAP authentication using smart cards. We are required to present a consent banner when users browse to the site that fronts our Splunk environment. I don't have much knowledge of Apache at all, and googling doesn't turn up much information on setting up a banner that then redirects to Splunk after the user accepts. Has anyone else been able to successfully implement a consent banner for their Splunk environment?

I think this is a pretty basic question, but I'd appreciate some help with it. I'm trying to produce an exportable, email-able report (CSV or Excel) of remote worker locations that shows how often people are logging in from each state. I'd like to show a count of how many logins per IP, per UserId, per state on unique days, but I haven't been able to find a way to get that count into a table. The basic query I'm using is this:

    `m365_default_index` sourcetype="o365:management:activity" Workload=AzureActiveDirectory Operation=UserLoggedIn
    | iplocation ClientIP
    | search Country="United States"
    | eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | rename Region AS State UserId AS User ClientIP AS "Client IP"
    | fields "State", "User", "Client IP", "Time"
    | table State User Time "Client IP"
    | sort State, User, - Time

But it returns more rows than I can output, and I'd rather have a count than all of the individual login rows. If I could add a count so that the table's output was more like:

    State   User   Client IP   Count of logins on unique days

where I could apply a search filter on some count > x and turn it into an email-able report, that would be ideal.

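A minimal sketch of one way to get the "logins on unique days" count (dc(day) counts distinct calendar days per state/user/IP; the threshold of 5 is only an example):

    `m365_default_index` sourcetype="o365:management:activity" Workload=AzureActiveDirectory Operation=UserLoggedIn
    | iplocation ClientIP
    | search Country="United States"
    | eval day=strftime(_time, "%Y-%m-%d")
    | stats dc(day) as "Count of logins on unique days" by Region UserId ClientIP
    | rename Region AS State UserId AS User ClientIP AS "Client IP"
    | where 'Count of logins on unique days' > 5
    | sort State User
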
Hello, the Tripwire Enterprise Add-on for Splunk was installed on Splunk 8.2.1. I referred to the Tripwire document, but the settings page does not load. $SPLUNK_HOME/etc/apps/TA-tripwire_enterprise/local/app.conf contains:

    [install]
    python.version = python2

Is there anything that needs to be corrected?

Hello guys, how do I use the source file's modification date instead of the "guessed" or extracted timestamp from a CSV file? I'm using a specific sourcetype and extracting fields at search time (field transformations). Thanks. Splunk 7.3.4

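A minimal props.conf sketch of the setting usually pointed at for this (the sourcetype name is a placeholder): with DATETIME_CONFIG = NONE, timestamp extraction is skipped and, for monitored file inputs, the event time falls back to the time chosen by the input layer, which for files is the modification time.

    [my_csv_sourcetype]
    DATETIME_CONFIG = NONE
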