All Topics

I have users in multiple roles. Some roles have higher permissions and access to a longer list of indexes. How can I view the effective permissions for such a user? Will the user end up with the least-privilege role or the highest-privilege role?
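For context: Splunk combines multiple roles as a union, so a user's effective access is the most permissive combination of all assigned roles (including inherited ones), not the least. A minimal sketch for inspecting a user's roles and each role's allowed indexes via the REST endpoints — "jsmith" is a placeholder username, and this assumes you have permission to query these endpoints:

```
| rest /services/authentication/users splunk_server=local
| search title="jsmith"
| fields title roles
| mvexpand roles
| rename roles AS role
| join type=left role
    [| rest /services/authorization/roles splunk_server=local
     | rename title AS role
     | fields role srchIndexesAllowed imported_srchIndexesAllowed capabilities]
```

The union of the srchIndexesAllowed (and imported) values across the rows is what the user can actually search.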
Brand newbie here... After I finished the tutorial, I tried to import WebSphere Application Server files for the first time. I have an 11 MB SystemOut.log file which I'm trying to import into Splunk. It shows one event from the latest date in the file, and then skips back 11 months, even though there's plenty of data from the current year.

Considering this is a file type which Splunk natively recognizes, I wouldn't expect any configuration to be needed to get it parsed properly. I tried installing the WebSphere add-on and that didn't help the situation. Any ideas? Thanks!
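When timestamps come out wrong like this, explicit timestamp settings in props.conf usually fix it. A sketch only, assuming the default WebSphere SystemOut timestamp format (e.g. [10/15/20 9:24:31:706 EDT]) and a hypothetical sourcetype name — adjust both to match what you assigned at input time:

```
# props.conf on the indexer or heavy forwarder (sketch; sourcetype name is an assumption)
[websphere:systemout]
TIME_PREFIX = ^\[
TIME_FORMAT = %m/%d/%y %H:%M:%S:%3N %Z
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[
```

Note that props.conf changes only affect newly indexed data, so the file would need to be re-ingested to verify.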
I am populating dropdown options with the following search:

    | search service="$service_tok$"
    | stats dc(region) by region Platform
    | sort - Platform
    | rex field=region "_(?<parse_regions>[^_]+)$"
    | eval formatted_region = coalesce(parse_regions, region)

I am doing some formatting to make my list look like this:

    Azure - Global
    Azure - Central US
    AWS - Global
    AWS - ap-northeast-1

However, we would like to add two rows with 'label' fields called "AWS" and "Azure" so that we can style them in CSS as the headers of a sectioned list, like so:

    *Azure*
    Global
    Central US
    __________
    *AWS*
    Global
    ap-northeast-1

Any ideas how I could add these two rows and have the sort work out so that the labels land at the top of each section? I have tried to add these choices with appendpipe, but the row appears, then disappears before the search completes.
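One approach that sidesteps the appendpipe-rows-disappearing behavior is to append the label rows from makeresults and sort on a synthetic key that places each label above its section. A sketch, reusing the token and field names from the question; the sort-key scheme is an assumption to adapt:

```
| search service="$service_tok$"
| stats dc(region) by region Platform
| rex field=region "_(?<parse_regions>[^_]+)$"
| eval formatted_region = coalesce(parse_regions, region)
| eval sort_key = Platform . "_1_" . formatted_region
| append
    [| makeresults
     | eval Platform=split("Azure,AWS", ",")
     | mvexpand Platform
     | eval formatted_region = "*" . Platform . "*", sort_key = Platform . "_0"]
| sort 0 sort_key
| table formatted_region
```

Because the label rows get a "_0" key and data rows a "_1_" key per platform, each label sorts directly above its own section.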
I am getting this error frequently, and I can see the index queue is at 99% for many indexers in the cluster. During this period indexing is considerably slow and logs are not being ingested for many sourcetypes. I am not able to figure out what is causing this issue (which source). After some time it goes back to normal. I am worried this could cause issues in the future.
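To narrow down which host and which queue is saturating (blocking typically cascades backwards from the index queue through typing, aggregation, and parsing), the queue metrics in _internal can be charted. A sketch:

```
index=_internal sourcetype=splunkd source=*metrics.log* group=queue
    (name=indexqueue OR name=typingqueue OR name=aggqueue OR name=parsingqueue)
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m perc90(pct_full) by name
```

Heavy sourcetypes during the slow periods can be checked the same way with group=per_sourcetype_thruput in place of group=queue.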
Hello All, We have an Oracle test server sending plain-text audit logs via syslog to Splunk. Though we have the InfoSec and Splunk Add-on for Oracle Database apps installed, the logs are not getting the appropriate CIM-compliant sourcetypes applied, so they're not showing up as expected in the authentication area of the InfoSec dashboards and searches.

We decided to go with plain-text audit logs since we didn't need the inventory and performance events, so we opted against using DB Connect. Using a forwarder is not out of the question; we just chose syslog since it meant not having to install anything on the Oracle server.

I've been reading the documentation and it seems unclear whether the Splunk Add-on for Oracle Database must be installed on the indexers in addition to the search head in order for the sourcetypes, tags, etc. to get added as I expected. The documentation says it's conditional: "Required if you are monitoring files locally on Oracle servers with universal forwarders." Since we're not using a forwarder, perhaps I just missed something in the configuration of the add-on? As far as I can tell, all the apps and their prerequisites are configured and enabled.

Edited for additional info: Splunk 7, Splunk Add-on for Oracle v3.7.0, and it appears we have one search head and two indexers.
Hi, We installed Splunk DB Connect 3.4.0 on our search head cluster. We configured identities and connections successfully and were able to get results from the | dbxquery command against those connections.

Recently we upgraded Splunk Enterprise to version 8.0.6 and cannot run | dbxquery anymore. We tried to re-save the identity and connections, but cannot save them on one search head out of three; it triggers the error: Database connection "connection_name" is invalid. !200051! While it is possible to save the same connection on the other two servers, | dbxquery doesn't work anywhere.

Any ideas are appreciated. Thanks!
Hi Team, I have the three conditions below and need to create logic for them.

Case 1: operation="OVERRIDE" should print, but not when name="IP BLOCK TYPE",value="Private". Sample log for IP BLOCK TYPE: [name="IP BLOCK TYPE",value="Private",operation="OVERRIDE"]

Case 2: operation="OVERRIDE" is not present in the log at all. Sample log for IP BLOCK TYPE: [name="IP BLOCK TYPE",value="Public"]

For the above two conditions I used the query below to fetch the desired data:

    rex field=_raw "operation=\"(?<IP_Block_Type>.\w+)\"" | where isnotnull(IP_Block_Type)

Case 3: Sample log for IP BLOCK TYPE: [name="IP BLOCK TYPE",value="Public",descendants_action={option_with_ea:"INHERIT",option_without_ea:"NOT_INHERIT"},operation="OVERRIDE"]

The queries for cases 1 and 2 look good because those logs don't contain details like case 3 (e.g. descendants_action={option_with_ea:"INHERIT",option_without_ea:"NOT_INHERIT"}). I tried the same filter but found that the query also picks up data from case 3, which is not required. Basically, I don't want to print anything from case 3. Please help me get answers for this. @gcusello @Nisha18789 @ITWhisperer Thanks,
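Since the case 3 events are the only ones containing the descendants_action block, one sketch is to keep the existing rex and simply exclude any event that mentions it:

```
| rex field=_raw "operation=\"(?<IP_Block_Type>\w+)\""
| where isnotnull(IP_Block_Type) AND NOT like(_raw, "%descendants_action%")
```

This assumes descendants_action never legitimately appears in case 1 or case 2 events; if it can, the exclusion would need a more specific pattern.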
Hi Team, I have the two conditions below, and I need to find operation="OVERRIDE" while blocking the other content.

1> [name="IP BLOCK TYPE",value="Private",operation="OVERRIDE"]

In the first case I applied:

    rex field=_raw "operation=\"(?<IP_Block_Type>.\w+)\"" | where isnotnull(IP_Block_Type)

and I got only operation="OVERRIDE" values in the IP_Block_Type column.

2> [name="IP BLOCK TYPE",value="Public",descendants_action={option_with_ea:"INHERIT",option_without_ea:"NOT_INHERIT"},operation="OVERRIDE"]

For the second condition, I'm looking for logic which would not pick up "descendants_action={option_with_ea:"INHERIT",option_without_ea:"NOT_INHERIT"}" but would still give operation="OVERRIDE" in the result. @gcusello @Nisha18789 @ITWhisperer
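Here the goal is the opposite of excluding the event: keep it, and extract only the operation value even though descendants_action={...} appears earlier in the raw text. Because the rex is anchored on the literal operation=", the descendants_action block shouldn't interfere with the capture. A sketch:

```
| rex field=_raw "operation=\"(?<IP_Block_Type>\w+)\""
| where IP_Block_Type="OVERRIDE"
```

The leading dot in the original pattern (.\w+) would also capture the first character after the quote; dropping it keeps the capture to the word itself.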
Hey All, This may be something very basic, but I can't seem to find exactly what I'm looking to do on the forums. For context, I'm trying to look at the device details users have during their actions within an application — things like IP addresses and user agents. I was hoping to break down the number of unique days an IP address (or similar detail) was used. Using a very basic chart search I can get these numbers:

    index=x sourcetype=x [userID that would be passed via subquery]
    | chart count by _time span=1d ipAddress

    _time       1.1.1.1  2.2.2.2
    2020-10-15  1        0
    2020-10-15  0        0
    2020-10-15  1        0
    2020-10-15  0        0
    2020-10-15  3        9
    2020-10-15  2        0

and I would like to convert this into something like:

    1.1.1.1 = 4 unique days
    2.2.2.2 = 1 unique day

The idea is that the userID was passed to this outer query because of some other criteria indicative of a possible compromise (such as a certain sequence of events on that profile, or a known IOC), and this search would then determine whether an outlier IP address was used during the most recent event (being able to filter out any events that occurred from an IP/user agent that has already been used for more than X unique days). Open to any suggestions, just kind of tinkering around.
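Counting distinct days per IP can be done directly by binning _time to the day and taking a distinct count, skipping the per-day chart entirely. A sketch, reusing the index/sourcetype placeholders from the question (the userID subsearch would slot in as in the original):

```
index=x sourcetype=x
| bin _time span=1d
| stats dc(_time) AS unique_days BY ipAddress
| sort - unique_days
```

The same pattern works for user agents by swapping the BY field, and a trailing `| where unique_days <= X` would isolate the rarely-seen values.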
Hi All, We have scheduled a job which runs a tstats command on an accelerated data model for yesterday's data and populates the count value into an index called "xyz" via the collect command:

    tstats count as "COUNT VALUE" from datamodel="abc" where ..... | collect index=xyz addTime=T

When I run the tstats query and the index=xyz count query for a couple of days, the results match (as they should). But when I re-run the same tstats query over the same dataset and time period a few days later and compare against index=xyz for that date, tstats gives me a different result (though the index=xyz result is the same as what I got that day). The tstats count value seems to increase over time...

May I know why the tstats count values change over time, and how to fix this issue? Thanks, AG
Has anyone been able to track "unintended" disconnections from Citrix VDI with Splunk? We have a DB connection to the Citrix database and the UF on our VMs, but we're unsure where to find out whether a user was "kicked" out of VDI or closed the session some other way.
I have an array of pre-defined string values. I want to check which of these values have not occurred at search time in the last 60 minutes.

I have my query in this format:

    [ "", "", "", ............ ] NOT IN [ search query ]

This does not work, as the hardcoded strings are not a search query. What do I do here? Basically, I need the list of strings which haven't appeared in the logs in the last 60 minutes.
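One common pattern is to generate the expected values with makeresults and split, then left-join against what actually occurred and keep the rows with no match. A sketch, where "alpha,bravo,charlie", index=main, and the field name value are all placeholders for your own list and search:

```
| makeresults
| eval value=split("alpha,bravo,charlie", ",")
| mvexpand value
| join type=left value
    [ search index=main earliest=-60m latest=now
    | stats count BY value ]
| where isnull(count)
| table value
```

Rows that survive the isnull(count) filter are the expected values with zero occurrences in the last 60 minutes.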
Good day all, This is my first post, so please bear with me. I am working on a search for the Netskope CASB product:

    index=test user=johndoe
    | stats count by app, activity, action, alert_name, alert_type, site, _time
    | sort _time

My search seems to contain incompatible fields: action and the fields alert_name, alert_type. The "action" field values will be allow, block, or alert. The allow action will never have an alert_name or alert_type associated with it, but I need to see those values when the action is alert or block.

With my current search above I only see action=block and action=alert, never action=allow. I want to be able to see action=allow as well: if action=block or action=alert I want to see alert_name and alert_type, and if action=allow then alert_name and alert_type will have empty values. I am really hoping I made sense here. Thanks and have a great day!
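The likely cause: stats drops any event where one of the BY fields is null, which is why action=allow rows (having no alert_name/alert_type) vanish. Filling the nulls before the stats should bring them back. A sketch, with "N/A" as an arbitrary placeholder value:

```
index=test user=johndoe
| fillnull value="N/A" alert_name alert_type
| stats count by app, activity, action, alert_name, alert_type, site, _time
| sort _time
```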
How do I add a vertical scrollbar for a table cell inside a panel? We have a vertical scrollbar configured for the panel, and I similarly want a vertical scrollbar configured for a multivalue cell in the table. Is there any way we can achieve this?
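One approach in Simple XML dashboards is to inject CSS that caps the cell height and enables overflow scrolling. A sketch only — "my_table" is a placeholder table id, and the multivalue-subcell selector should be verified against your Splunk version in the browser inspector, since the table markup can differ between releases:

```
<panel depends="$always_hidden$">
  <html>
    <style>
      #my_table div.multivalue-subcell {
        display: block;
        max-height: 100px;
        overflow-y: auto;
      }
    </style>
  </html>
</panel>
```

The depends="$always_hidden$" trick (an unset token) keeps the style-only panel from rendering as a visible empty block.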
I need to build a report for SOX compliance capturing Linux OS logs ingested in Splunk. Any idea how to build the report? A sample query would be appreciated.
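A possible starting point is a privileged-activity summary over the standard Linux secure log. A sketch only — the index, sourcetype, and keyword list are assumptions to adapt to your environment and to what your auditors actually require:

```
index=os sourcetype=linux_secure
    (sudo OR useradd OR userdel OR passwd OR "session opened" OR "authentication failure")
| stats count AS events, earliest(_time) AS first_seen, latest(_time) AS last_seen BY host, user
| convert ctime(first_seen) ctime(last_seen)
| sort - events
```

Saved as a scheduled report, this would give a recurring record of account changes and privileged access per host and user.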
I have a list of events that I was able to tabulate as:

    Id | LOG
    ---------------
    1  | A:message1
    1  | B:notification2
    2  | A:message3
    2  | B:notification4

I need to split LOG and then combine those rows based on Id:

    Id | message     | notification
    -----------------------------------------
    1  | A:message1  | B:notification2
    2  | A:message3  | B:notification4
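One sketch, assuming the A:/B: prefixes reliably distinguish messages from notifications: derive the two columns from the LOG prefix, then collapse the rows per Id with stats:

```
| eval message      = if(like(LOG, "A:%"), LOG, null())
| eval notification = if(like(LOG, "B:%"), LOG, null())
| stats values(message) AS message, values(notification) AS notification BY Id
```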
Hello everyone, I have a good search (SPL) to see which alerts fired recently, but I don't have one to see which alerts did not fire. Do you know how to do this? Regards, Rafael Santos
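One hedged sketch: list every scheduled saved search via REST, then left-join against the scheduler's record of triggered alert actions in _internal and keep the ones with no match. The 7-day window is an arbitrary example, and this catches scheduled alerts only:

```
| rest /services/saved/searches splunk_server=local
| search is_scheduled=1
| fields title
| join type=left title
    [ search index=_internal sourcetype=scheduler alert_actions=* earliest=-7d
    | stats count AS fired BY savedsearch_name
    | rename savedsearch_name AS title ]
| where isnull(fired)
| table title
```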
Hello, I have this Splunk built-in rule, "Brute Force Access Behavior Detected Over 1d":

    | tstats `summariesonly` values(Authentication.app) as app, count from datamodel=Authentication.Authentication where earliest=-1d by Authentication.action, Authentication.src, index
    | `drop_dm_object_name("Authentication")`
    | eval success=if(action="success",count,0), failure=if(action="failure",count,0)
    | stats values(app) as app, sum(failure) as failure, sum(success) as success by src, index
    | where success > 0
    | `mltk_apply_upper("app:failures_by_src_count_1d", "medium", "failure")`
    | rex field=index "(?<bu_prefix>[a-zA-Z]+)"
    | lookup org_lookup.csv bu_prefix OUTPUTNEW Organization

1. How can I add an indication of which user was used to this query?
2. The query shows the app list + number of failures + number of successes, but no correlation of failures/successes to apps. How can I do that?
3. How can I add the failure reason to the query?
4. If a single IP address made several failed login attempts against one user, how can we catch such a scenario?

Thanks!
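For questions 1 and 2, adding Authentication.user to the BY clause carries the user through, and grouping the final stats by src and user keeps the success/failure split per identity. A sketch of just the modified head of the search (the where/MLTK/lookup tail would follow unchanged, though the MLTK model was trained per-src and may behave differently at per-user grain):

```
| tstats `summariesonly` values(Authentication.app) as app, count
    from datamodel=Authentication.Authentication where earliest=-1d
    by Authentication.action, Authentication.src, Authentication.user, index
| `drop_dm_object_name("Authentication")`
| eval success=if(action="success",count,0), failure=if(action="failure",count,0)
| stats values(app) as app, sum(failure) as failure, sum(success) as success by src, user, index
```

For question 3, the Authentication data model also carries a reason field, so values(Authentication.reason) could be added to the tstats the same way. For question 4, grouping by src and user as above directly surfaces a single IP hammering one account; correlating failures to individual apps (question 2) would additionally require app in the BY clauses rather than as a values() aggregate.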
I have data and created a table like this:

    Eligibility  Count
    01-Country   31

Now I would like to see how those countries add up to 31. The output would look like this:

    Countries  count
    GERMANY    4
    MALAWI     10
    SERBIA     6
    SRI LANKA  5
    SWAZILAND  1
    UKRAINE    5

Any help will be greatly appreciated. Thank you
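Assuming the underlying events carry an individual country field, the drill-down is just a second stats at the finer grain. A sketch — index, the Eligibility filter, and the Country field name are placeholders for whatever your events actually contain:

```
index=my_index Eligibility="01-Country"
| stats count BY Country
| sort Country
```

The sum of these per-country counts should reproduce the 31 in the summary row.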
Hi All, We see a time difference between Splunk and the AWS logs for the same query (Lambda, unique identifier) — about 3 minutes. When we executed the queries below we could see some delay:

    index="XXX" sourcetype=unify:ticketwork source=*api-gateway-logs-xxxx/ticketwork* "\"activityName\":\"createWorkTicket\"" *INCIDNETNO12144*
    | eval delay_sec=_indextime - _time
    | timechart span=1d min(delay_sec) avg(delay_sec) max(delay_sec) by host

    minimum delay in sec = 21.59
    maximum delay in sec = 203.92
    avg delay in sec = 112.755

    index="XXX" sourcetype=unify:ticketwork source=*api-gateway-logs-xxxx/ticketwork* "\"activityName\":\"createWorkTicket\"" *INCIDNETNO12144*
    | eval indexed_time=strftime(_indextime,"%+")
    | eval latency=_indextime - _time
    | table _time, indexed_time, latency, index, _raw

    latency is 203.92 sec
    latency is 21.59 sec

Can you guide me on what steps should be considered to start troubleshooting this issue?