All Posts

We are working to link server information to the services in the ServiceNow CMDB. We are looking for examples of how to build relationships between CIs.
I'm very disheartened to hear about this. I run the Idaho Falls Splunk Users Group and will present next month on using Windows Event Logs to find intruders. You are welcome to join our group and attend. I will give many examples you can cut and paste into your Splunk instance. You may need to modify them slightly for your environment, but they will give you an idea of how to build additional use cases.

Personally, I never reinvent the wheel. There is a lot of detection content out there that I look through before building my own. I would start by googling "Splunk threat detection" and then "Splunk threat detection github". Unlike the insecure people you work with, many people in this field willingly share their talents. We all got where we are with the help of others. It's sad to hear you are being treated this way. Once you join the user group, you can contact me directly. I will work with you as time permits, and I will point you to resources that will help you grow in the field and use Splunk for building use cases. I'll include a couple of resources here for you.

Another thing to keep in mind is that all threat hunting that finds positive activity should lead to a signature. There are tons of threat-hunting Splunk searches out there, and they can also be used as use cases. You may need to tune them (cast a narrower net) before putting them into production, but they will give you a good idea of how to build out detection.

You will find that industry leaders are always sharing their research and knowledge. Jack Crook was one of my mentors when I was new in this career. Jack and I share the same frustration of attending conferences where many share "theories" but not content. He is big on sharing content and actual Splunk searches and use cases. You can follow his blog; I have included it below.

As you grow in this career, remember not to be like the others who have treated you so poorly. Remember, this is a very negative reflection on them, not you! I hope that this helps, and I hope to talk soon!

http://findingbad.blogspot.com
https://www.udemy.com/course/cybersecurity-monitoring-detection-lab/?couponCode=ST9MT22024
https://github.com/splunk/security_content/tree/develop/detections/application
https://www.detectionengineering.net
https://github.com/west-wind/Threat-Hunting-With-Splunk
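To make that concrete, here is a minimal sketch of the kind of cut-and-paste use case I mean: a crude failed-logon spike detection built on Windows Security logs (EventCode 4625 is a failed logon). The index name, threshold, and field names are assumptions; adjust them for your environment.

index=wineventlog EventCode=4625
| stats count as failures by host, Account_Name
| where failures > 5

It deliberately casts a wide net; tune the threshold and filter out expected noise (service accounts, vulnerability scanners) before promoting something like this to a production detection.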
Hello all, While developing my Splunk add-on, I've run into a blocker concerning the time picker on search. Currently, changing the time from the default "Last 24 Hours" to any other value has no effect, and the search always returns all of the elements from my KV store. I've read through about a dozen forum threads but haven't found a clear answer to this problem. Any help with what settings/files need to be configured would be appreciated! I am developing in Splunk Enterprise 9.1.3.
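One suspicion (an assumption on my part): KV store records have no _time field, so the time picker may simply have nothing to act on. If that's the cause, I could store an epoch field in each record and filter against the picker's range with addinfo, roughly like this, where my_collection and created_epoch are placeholder names:

| inputlookup my_collection
| addinfo
| where created_epoch >= info_min_time AND created_epoch <= info_max_time

(addinfo sets info_min_time and info_max_time from the picker; note that info_max_time comes back as "+Infinity" on all-time searches, so that case needs separate handling.) Is that the right direction, or is there a setting that makes the picker apply automatically?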
Ok, thanks for letting me know. 
I figured it out. I added an if statement to my search, x=if(avg <= threshold, avg, 0), and then went to the source code and assigned x a color.
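In case it helps anyone else, the full shape was roughly this (index=main and the response_time field are placeholders standing in for my real search):

index=main
| stats count as volume avg(response_time) as avg by organization, threshold
| eval x = if(avg <= threshold, avg, 0)

The dashboard source then colors the Response Time cell based on x, so each row is compared against its own organization's threshold.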
I created a table that outputs the organization, threshold, count, and response time. If the response time is greater than the threshold, then I want the response time value to turn red. However, the threshold is different for every organization. Is there a way to dynamically set the threshold on the table so the Response Time column turns red based on its respective threshold?

For example, organization A will have a threshold of 3, while organization B will have a threshold of 10. I want the table to display all the organizations, count, the response time, and the threshold.

index | stats count as "volume" avg as "avg_value" by organization, _time, threshold

Kindly help.
So "index=_internal" on your cloud instance doesn't see any data where host=X? Your connections look like they're by IP, but I thought Splunk cloud's little "connect my forwarder up" app did it by D... See more...
So "index=_internal" on your cloud instance doesn't see any data where host=X? Your connections look like they're by IP, but I thought Splunk cloud's little "connect my forwarder up" app did it by DNS entries? Can you confirm one or the other of those things is right? Maybe the forwarder's time is off or TZ is incorrectly specified, have you checked over a longer period like 24 or 48 hours? Also a second point to the TZ issue - if the times are in the future, it can be more difficult to find it in Splunk. Try an *all time* search. I know, it sucks, but one does what one must sometimes.  index=_internal host=*myhost* | stats count by host Try a wildcard ilke I suggested, using "myhost" as any string that should be reasonably unique in the hostname for the host sending in data. You can also confirm what hostname it's sending in as by looking in etc/system/local/server.conf on the UF, there's a "hostname" field. If that's picked something "wrong" then guess what?  Your data will show up as whatever it's picked!
We are gathering logs from various devices that contain security, performance, and availability-related information. These logs are all being sent to Splunk. We utilize both Splunk core and the ES App. Since we have to pay separately for both core and the ES App based on ingestion, we are exploring options to minimize costs. Is there a mechanism available for selecting which logs can be sent to the ES App for processing? If such an option exists, we would only need to send security-specific logs to the ES App, significantly reducing our Splunk ES App licensing costs. Splunk Enterprise Security
    <layout class="ch.qos.logback.classic.PatternLayout"> <pattern>%msg</pattern> </layout>   in you HEC appender you need to set '%msg' as the pattern, but NOT the one you us... See more...
    <layout class="ch.qos.logback.classic.PatternLayout"> <pattern>%msg</pattern> </layout>   in you HEC appender you need to set '%msg' as the pattern, but NOT the one you use for the Console Appender (which is the 'defaultPattern')
If a search "index=_internal" over the last 24 hours is empty, I can think of a couple of reasons. Most likely - your role doesn't have administrative access.  (More specifically, it doesn't have ac... See more...
If a search "index=_internal" over the last 24 hours is empty, I can think of a couple of reasons. Most likely - your role doesn't have administrative access.  (More specifically, it doesn't have access to the _internal index, which is usually limited to admins).  Either log in as an administrator with access to _internal, or have your Splunk folks add this index to your role. It's also possible that you have DBX installed on a heavy forwarder.  That HF has been told its outputs need to go to your real indexer(s), but it's never been told to *search* the indexer when someone searches for "index=_internal".  The steps you might need are https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/Configuredistributedsearch#Use_Splunk_Web Anyway, if you can confirm the above two things, either one of them is the issue, or you can report back here with what you've found!   -Rich
In my search I have a field (ResourceId) that contains various cloud resource values. One of these values is InstanceId. The subsearch is returning a list of "active" instances. What I ultimately need to do is filter out those InstanceIds in the ResourceId field that DO NOT match the InstanceIds returned from the subsearch (the active instances), while keeping all other values in the ResourceId field.

Sample ResourceId values:
i-987654321abcdefg (active; WAS returned by subsearch)
i-123abcde456abcde (inactive; was NOT a returned value from subsearch)
bucket-name
sg-12423adssvd

Intended output:
i-987654321abcdefg
bucket-name
sg-12423adssvd

Search (in progress):

index=main ResourceId=*
| join InstanceId type=inner [search index=other type=instance earliest=-2h]
| eval InstanceId=if(in(ResourceId, InstanceId), InstanceId, "NULL")
| table InstanceId
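One direction I've been sketching, under two assumptions: the subsearch returns the active instances in an InstanceId field, and instance-type ResourceIds always start with "i-":

index=main ResourceId=*
| join type=left ResourceId
    [ search index=other type=instance earliest=-2h
      | rename InstanceId as ResourceId
      | eval active="yes"
      | fields ResourceId active ]
| where NOT (match(ResourceId, "^i-") AND isnull(active))
| table ResourceId

The left join tags active instances, and the where clause drops only instance IDs that weren't tagged, leaving buckets, security groups, etc. untouched. Does that hold up, or is there a cleaner way?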
In my limited testing, SOAR doesn't seem to like handling custom functions within a single code block. It doesn't want to wait for the custom function to actually finish before moving on. For reference, first_code_block just calls a custom function and second_code_block runs phantom.completed() on that function. If you have to call the function from within a code block, you can add a callback. This makes sure the code doesn't move on until the run finishes. I wasn't able to get the callback to work on a second function within the same block. (One note on this: Phantom will call the last two lines of the code block before the custom function finishes.)

phantom.custom_function(... callback=second_code_block)

The easiest method by far is to just put each custom function into its own block, then do whatever processing you need in a custom code block below. By default, SOAR will wait for any simultaneous blocks to finish before running the next step.
Hello, I'm currently developing a Splunk app and am having trouble bundling saved searches to appear in the Search & Reporting app. My intention is to include a list of searches in my app package (in savedsearches.conf or elsewhere) that will appear in the Reports tab of S&R. I've done some digging but haven't found a solution that works. Is this possible? I'm developing in Splunk Enterprise 9.1.3.
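For reference, here's the kind of stanza I've been bundling; the app name and search are placeholders:

# etc/apps/my_app/default/savedsearches.conf
[Example bundled report]
search = index=_internal | stats count by sourcetype
dispatch.earliest_time = -24h@h
dispatch.latest_time = now

One suggestion I've seen (unconfirmed) is that the stanza also has to be exported globally in etc/apps/my_app/metadata/default.meta before other apps like Search & Reporting will list it:

[savedsearches]
export = system

Can anyone confirm that's the right mechanism?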
Thank you for the update. I tried the suggested SPL and added a rename to see whether the data exists:

| eval Date = strftime(strptime("policy_applied_at","%FT%T.%6NZ"), "%b-%d-%Y")
| eval Time = strftime(strptime("policy_applied_at","%FT%T.%6NZ"), "%H:%M")
| rename "policy_applied_at" as "Last Refresh Time"
| table "Last Refresh Time", Date, Time

The renamed field has data, but the other two columns are empty:

Last Refresh Time               Date    Time
2024-02-19T11:16:58.930104Z
2024-02-19T11:16:54.980418Z
2024-02-19T11:18:44.875386Z
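Could the quotes be the problem? If strptime is handed the literal string "policy_applied_at" rather than the field's value, it returns null, which would explain the two empty columns. The unquoted version I'll try next:

| eval Date = strftime(strptime(policy_applied_at, "%FT%T.%6NZ"), "%b-%d-%Y")
| eval Time = strftime(strptime(policy_applied_at, "%FT%T.%6NZ"), "%H:%M")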
I deleted one.
The first one did end up working for me. The second one, for whatever reason, was throwing Error in 'SearchParser': Mismatched ']'. Not a big deal for me since the first one works, but figured I'd mention it.

| rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Owner>[^\"]+)\"},"

The second one is what I thought I was doing... capturing everything until it saw "},

Thank you for helping me with this!
So I'm understanding here that, yeah, syslog itself doesn't lend well to being load balanced. A Splunk universal forwarder should not be used to receive syslog traffic, but a heavy forwarder "can". If I wanted to load balance a pair of Splunk UFs, I should set up a real load balancer. And if I want to LB syslog, I should also set up a real LB and keep the syslog and Splunk functions on dedicated hosts. Does this look accurate?
Yup. And these Splunk instances aren't listening on TCP/UDP ports, so that's good.
I am trying to look at CPU and memory statistics on my indexers and search heads, but the index only ever goes back 15 days, almost to the hour, and I need to look at a specific date almost a month ago. Any ideas on why this could be and how I can get around it?
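In case it's a retention setting, is this the right way to check? I believe these CPU/memory stats live in the _introspection index (an assumption on my part), and I've read its default retention is around 14 days, which would roughly line up with what I'm seeing:

| rest /services/data/indexes
| search title=_introspection
| table title frozenTimePeriodInSecs maxTotalDataSizeMB currentDBSizeMB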
If policy_refresh_at is a string, you could try parsing it (to an epoch timestamp) before formatting, something like this:

| eval Date = strftime(strptime(policy_refresh_at,"%FT%T.%6NZ"), "%b-%d-%Y")
| eval Time = strftime(strptime(policy_refresh_at,"%FT%T.%6NZ"), "%H:%M")