All Posts

I created a table that outputs the organization, threshold, count, and response time. If the response time is greater than the threshold, I want the response time value to turn red. However, the threshold is different for every organization. Is there a way to dynamically set the threshold on the table so that the Response Time column turns red based on its respective threshold? For example, organization A has a threshold of 3, while organization B has a threshold of 10. I want the table to display all the organizations, the count, the response time, and the threshold.

index | stats count as "volume" avg(response_time) as "avg_value" by organization, _time, threshold

Kindly help.
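One common workaround (a sketch only; the index name and the fields response_time and threshold are assumptions) is to compute a comparison field with eval, since table colour rules can only look at a single cell's value, and then key the colour formatting off the computed column instead of a fixed numeric range:

```
index=myindex
| stats count as volume, avg(response_time) as avg_response by organization, threshold
| eval status=if(avg_response > threshold, "exceeded", "ok")
| table organization volume avg_response threshold status
```

In the dashboard, a colour rule on the status column (e.g. mapping "exceeded" to red) then reflects each organization's own threshold.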
So "index=_internal" on your cloud instance doesn't see any data where host=X? Your connections look like they're by IP, but I thought Splunk Cloud's little "connect my forwarder up" app did it by DNS entries? Can you confirm one or the other of those things is right? Maybe the forwarder's time is off or the TZ is incorrectly specified; have you checked over a longer period like 24 or 48 hours? A second point on the TZ issue: if the times are in the future, the data can be more difficult to find in Splunk. Try an *all time* search. I know, it sucks, but one does what one must sometimes.

index=_internal host=*myhost* | stats count by host

Try a wildcard like I suggested, using "myhost" as any string that should be reasonably unique in the hostname of the host sending in data. You can also confirm what hostname it's sending in as by looking in etc/system/local/server.conf on the UF; there's a "hostname" field. If that's picked something "wrong", then guess what? Your data will show up as whatever it's picked!
We are gathering logs from various devices that contain security, performance, and availability-related information. These logs are all being sent to Splunk. We utilize both Splunk core and the ES App. Since we have to pay separately for both core and the ES App based on ingestion, we are exploring options to minimize costs. Is there a mechanism available for selecting which logs can be sent to the ES App for processing? If such an option exists, we would only need to send security-specific logs to the ES App, significantly reducing our Splunk ES App licensing costs. Splunk Enterprise Security
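For context, ES doesn't ingest data separately from core Splunk; it searches your existing indexes, mostly through the CIM data models. One common way to limit which data ES processes is to constrain each data model's index list, either via the CIM setup page or via the cim_<DataModel>_indexes macros. A hedged sketch (the index names are examples; check your own index layout and your ES licensing terms before relying on this for cost reduction):

```
# macros.conf in Splunk_SA_CIM local/ (index names are examples)
[cim_Authentication_indexes]
definition = (index=security OR index=os_auth)

[cim_Network_Traffic_indexes]
definition = (index=firewall)
```

This keeps the accelerated data models (and therefore most ES correlation searches) scoped to your security-specific indexes.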
<layout class="ch.qos.logback.classic.PatternLayout">
    <pattern>%msg</pattern>
</layout>

In your HEC appender you need to set '%msg' as the pattern, but NOT the one you use for the Console Appender (which is the 'defaultPattern').
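For context, a minimal sketch of a logback.xml with both appenders, as described above (the URL and token are placeholders, and you should check the appender class name against your splunk-library-javalogging version):

```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>

  <appender name="splunk-hec" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
    <url>https://your-splunk-host:8088</url>
    <token>YOUR-HEC-TOKEN</token>
    <!-- Bare %msg here, NOT the console's defaultPattern -->
    <layout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%msg</pattern>
    </layout>
  </appender>

  <root level="INFO">
    <appender-ref ref="console"/>
    <appender-ref ref="splunk-hec"/>
  </root>
</configuration>
```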
If a search "index=_internal" over the last 24 hours is empty, I can think of a couple of reasons. Most likely, your role doesn't have administrative access. (More specifically, it doesn't have access to the _internal index, which is usually limited to admins.) Either log in as an administrator with access to _internal, or have your Splunk folks add this index to your role. It's also possible that you have DBX installed on a heavy forwarder. That HF has been told its outputs need to go to your real indexer(s), but it's never been told to *search* the indexers when someone searches "index=_internal". The steps you might need are at https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/Configuredistributedsearch#Use_Splunk_Web Anyway, if you can check the above two things, either one of them is the issue, or you can report back here with what you've found! -Rich
In my search I have a field (ResourceId) that contains various cloud resource values. One of these values is InstanceId. The subsearch returns a list of "active" instances. What I ultimately need to do is filter out only those InstanceIds in the ResourceId field that DO NOT match the InstanceIds returned from the subsearch (the active instances), while keeping all other values in the ResourceId field.

Sample ResourceId values:
i-987654321abcdefg (active; WAS returned by subsearch)
i-123abcde456abcde (inactive; was NOT returned by subsearch)
bucket-name
sg-12423adssvd

Intended output:
i-987654321abcdefg
bucket-name
sg-12423adssvd

Search (in progress):

index=main ResourceId=* | join InstanceId type=inner [search index=other type=instance earliest=-2h] | eval InstanceId=if(in(ResourceId, InstanceId), InstanceId, "NULL") | table InstanceId
In my limited testing, SOAR doesn't seem to like handling custom functions within a single code block. It doesn't want to wait for the custom function to actually finish before moving on. For reference, first_code_block just calls a custom function and second_code_block runs phantom.completed() on that function. If you have to call the function from within a code block, you can add a callback. This makes sure the code doesn't move on until the run finishes. I wasn't able to get the callback to work on a second function within the same block. (One note on this: Phantom will call the last two lines of the code block before the custom function finishes.)

phantom.custom_function(... callback=second_code_block)

The easiest method by far is to just put each custom function into its own block, then do whatever processing you need in a custom code block below. By default, SOAR will wait for any simultaneous blocks to finish before running the next step.
Hello, I'm currently developing a Splunk app and am having trouble bundling saved searches to appear in the Search & Reporting app. My intention is to include a list of searches in my app package (in savedsearches.conf or elsewhere) that will appear in the Reports tab of S&R. I've done some digging but haven't found a solution that works. Is this possible? I'm developing in Splunk Enterprise 9.1.3.
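It should be possible; the usual gotcha is export scope. A sketch of what the app package might contain (the stanza name and search string are purely illustrative): the saved search lives in default/savedsearches.conf, and metadata/default.meta must export it globally so the Search & Reporting app can see it:

```
# default/savedsearches.conf
[My Bundled Report]
search = index=_internal log_level=ERROR | stats count by sourcetype
dispatch.earliest_time = -24h
dispatch.latest_time = now

# metadata/default.meta
[savedsearches]
export = system
```

Without `export = system`, knowledge objects stay private to the app and won't appear in the Reports tab of other apps.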
Thank you for the update. I tried the suggested SPL and added a rename field to see if the data exists:

| eval Date = strftime(strptime("policy_applied_at","%FT%T.%6NZ"), "%b-%d-%Y")
| eval Time = strftime(strptime("policy_applied_at","%FT%T.%6NZ"), "%H:%M")
| rename "policy_applied_at" as "Last Refresh Time"
| table "Last Refresh Time", Date, Time

The renamed (but not trimmed) field has data; the other two are empty.

Last Refresh Time              Date   Time
2024-02-19T11:16:58.930104Z
2024-02-19T11:16:54.980418Z
2024-02-19T11:18:44.875386Z
I deleted one.
The first one did end up working for me. The second one, for whatever reason, was throwing Error in 'SearchParser': Mismatched ']'. Not a big deal for me since the first one works, but I figured I'd mention it.

| rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Owner>[^"])\"},"

The second one is what I thought I was doing: capturing everything until it saw "},

Thank you for helping me with this!
So I'm understanding here that, yeah, syslog itself doesn't lend itself well to being balanced. The Splunk Universal Forwarder should not be used to receive syslog traffic, but a Heavy Forwarder "can". If I wanted to load balance a pair of Splunk UFs, then I should set up a real load balancer. And if I want to LB syslog, I should also set up a real LB and have the syslog and Splunk functions on dedicated hosts. Does this look accurate?
Yup. And these Splunk instances aren't listening on TCP/UDP ports, so that's good.
I am trying to look at CPU and memory statistics on my indexers and search heads, but the index only ever goes back 15 days, almost to the hour, and I need to look at a specific date almost a month ago. Any ideas on why this could be and how I can get around it?
If policy_refresh_at is a string, you could try parsing it (to an epoch timestamp) before formatting, something like this:

| eval Date = strftime(strptime(policy_refresh_at,"%FT%T.%6NZ"), "%b-%d-%Y")
| eval Time = strftime(strptime(policy_refresh_at,"%FT%T.%6NZ"), "%H:%M")
Hi. I have a single field for the date and time of an event: 2024-02-19T11:16:58.930104Z. I would like to have two fields, Date and Time, as well as one more calculated field I can use to find records not changed in the last 2 days or 48 hours, whichever is better for the search. I tried

|eval Date = strftime(policy_refresh_at, "%b-%d-%Y") | eval Time = strftime(policy_refresh_at, "%H:%M")

or

| eval Date=substr(policy_refresh,10,1)

The results come back empty in both cases, so there is nothing to calculate on. Please advise. Thank you.
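One likely cause of the empty results: strftime expects an epoch number, while the field holds a string like 2024-02-19T11:16:58.930104Z, so it needs to be parsed with strptime first. A sketch, using the field name from the question (the 48-hour filter is one way to express "not changed recently"):

```
| eval epoch = strptime(policy_refresh_at, "%Y-%m-%dT%H:%M:%S.%6NZ")
| eval Date = strftime(epoch, "%b-%d-%Y")
| eval Time = strftime(epoch, "%H:%M")
| where epoch < relative_time(now(), "-48h")
```

The where clause keeps only records whose timestamp is older than 48 hours.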
I am using | fields _raw to show the entire content of the source file as a single event. It works for most of my log files under 100K. For occasionally larger files, the search breaks the results into multiple events and misses details. How can I fix it? Or is there another way to return the file contents? I know users can click Show Source in the event actions, but my search queries are part of a dashboard drilldown on file names.
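If the splitting happens at index time, search-time commands can't rejoin the events; the relevant limits live in props.conf. A hedged sketch (the sourcetype name is hypothetical, and these settings only affect newly indexed files, not data already in the index):

```
# props.conf on the indexer or heavy forwarder; sourcetype name is an example
[my_whole_file_sourcetype]
SHOULD_LINEMERGE = false
# A regex that never matches, so the file is never broken into events
LINE_BREAKER = ((?!))
# Disable the per-event size cap (default 10000 bytes)
TRUNCATE = 0
```

With these in place, each file arrives as a single (possibly very large) event, so `| fields _raw` returns it whole.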
@bowesmana Thanks. I tried what you mentioned, but it didn't work as I expected, so I've changed my mind. Is it possible to create a table like this?

PF        Host1   Host2   Host3
red       50      20      89
purple    30      80      1
green     80      12      -
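If the underlying data has one row per PF/host pair, a pivot like the one above can usually be produced with chart (the index name and the fields host and value are assumptions here):

```
index=myindex
| chart sum(value) over PF by host
```

Equivalently, if the values are already aggregated, `| xyseries PF host value` turns the flat rows into the same one-column-per-host layout.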
I am not sure why different eventtypes can't be combined into the same search; assuming they can, try something like this:

index="indexName" eventType="group.user_membership.add" OR eventType="user.authentication.sso"
| spath "target{}.displayName"
| rename target{}.displayName as grpID
| eval groupName=mvindex(grpID, 1)
| rename "target{}.alternateId" AS "targetId"
| rename "target{}.type" AS "targetType"
``` Assuming target_user is already extracted for sso events (otherwise extract it here) ```
| eval target_user=if(eventType=="user.authentication.sso", target_user, mvindex(targetId, mvfind(targetType, "User")))
| table target_user groupName date eventType
| eventstats values(groupName) as groupName by target_user
| where eventType="user.authentication.sso"
| stats count by date
I need help writing a search query where the result from one query is passed to a second query.

1. We import the users from the Active Directory group into the Okta group, and the event eventType="group.user_membership.add" captures this JSON event. The following query gets me the name of the group and the user name:

index="indexName" eventType="group.user_membership.add"
| spath "target{}.displayName"
| rename target{}.displayName as grpID
| eval groupName=mvindex(grpID, 1)
| rename "target{}.alternateId" AS "targetId"
| rename "target{}.type" AS "targetType"
| eval target_user=mvindex(targetId, mvfind(targetType, "User"))
| table target_user groupName

2. After the user is added to the Okta group, I want to find the occurrences of that user's authentications during a time range. I can separately find user authentication using eventType="user.authentication.sso", but this event doesn't have a group name.

index="indexName" eventType="user.authentication.sso" target_user
| stats count by date

How do I pass the user from the first query to the second query? I cannot use a subsearch since the main search eventtype is not the same as the subsearch's. Basically, I want to create a report of group name/user name authentications for the selected time range. Any help is appreciated.