All Topics


New to Splunk and struggling to manipulate search results into the final result I am looking for. In PowerShell, where I'm familiar, I would just use a series of variables and return a final result set. I am trying to accomplish the following (each target_name has multiple disk_group values):

1) Find the latest Usable_Free_GB for each disk_group in each target_name and sum them.
2) Find the latest Usable_Total_GB for each disk_group in each target_name and sum them.

I can get #1 and #2 in separate searches, but am struggling to combine them into a result set like this:

Target_Name     UsableSpaceFree   TotalUsableSpace
Target_Name1    123               456
Target_Name2    234               567

This is the closest I can get, but I need only 2 rows returned with all three fields populated. Once I can get the result set grouped by Target_Name, I then need to use eval to create a new field like the one below using the values from #1 and #2:

eval percent_free=round((UsableSpaceFree/TotalUsableSpace)*100,2)

Target_Name     UsableSpaceFree   TotalUsableSpace   percent_free
Target_Name1    123               456                ?
Target_Name2    234               567                ?
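A minimal SPL sketch of one way to do both aggregations in a single search (the index, sourcetype, and lowercase raw field names are assumptions based on the post):

index=your_index sourcetype=your_sourcetype
| stats latest(Usable_Free_GB) as latest_free, latest(Usable_Total_GB) as latest_total by target_name, disk_group
| stats sum(latest_free) as UsableSpaceFree, sum(latest_total) as TotalUsableSpace by target_name
| rename target_name as Target_Name
| eval percent_free=round((UsableSpaceFree/TotalUsableSpace)*100,2)

The first stats picks the latest free/total per disk group within each target; the second collapses the disk groups so exactly one row per Target_Name remains, which is what makes the final eval straightforward.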
I have the following data, and I want a graph with age as the x axis and height as the y axis. name and value are fields pulled out of a "rex field=_raw" command.

name      value
height    100
age       1
height    105
age       2
height    107
age       3
height    108
age       4
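A hedged SPL sketch of one way to stitch the alternating name/value events back into (age, height) pairs, assuming each height event is immediately adjacent to its matching age event:

... | rex field=_raw "(?<name>\w+)\s+(?<value>\d+)"
| streamstats count as row
| eval pair=ceil(row/2)
| eval {name}=value
| stats values(height) as height, values(age) as age by pair
| sort age
| table age, height

Rendered as a line or scatter chart, age lands on the x axis and height on the y axis. The eval {name}=value trick turns each name/value row into a real height or age field before the pairs are folded together.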
The Splunk Trust and members of the community will be hosting open office hours for anybody who wants to chat about anything Splunk related. Please visit the office_hours channel in Slack, or drop comments here if there is any topic you'd like to see discussed!
Hi Splunkers, I have to create an alert for root user logins in AWS. For this, I am ingesting CloudTrail logs into a distributed Splunk environment, and I want to collect logs from AWS accounts organization-wide. Adding every single account and its credentials in the Splunk Add-on for AWS is difficult. Kindly suggest a way to onboard CloudTrail logs from multiple accounts. Thanks
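One common pattern is an AWS Organizations trail that delivers every member account's CloudTrail into one central S3 bucket, with S3 notifying an SQS queue that a single add-on input drains, so only one set of credentials lives in Splunk. A hedged inputs.conf sketch of that input (the stanza and key names follow the add-on's SQS-based S3 input but should be verified against your add-on version; the account name and queue URL are placeholders):

[aws_sqs_based_s3://org_cloudtrail]
aws_account = central_logging
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/<account-id>/<queue-name>
s3_file_decoder = CloudTrail
sourcetype = aws:cloudtrail
interval = 300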
Hi Splunkers, I would like to know what happens to logging in the scenarios below when there is an outage. Does Splunk recover the logs once the systems are back from the outage, or does it lose them?

1. Logs forwarded from an app
2. Logs synced from an S3 bucket
3. Logs pulled via API
4. Data coming through a heavy forwarder

Thanks
Hi, I need to calculate the EPS (events per second) averaged over a month. Any ideas?
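A minimal sketch, assuming a fixed 30-day window and that every searchable event index should count (tighten the where clause to specific indexes if not):

| tstats count where index=* earliest=-30d@d latest=@d
| eval avg_eps=round(count / (30 * 86400), 2)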
Is there a way we can authenticate to a Duo-MFA-enabled Splunk instance using the Python API/SDK? Appreciate your help.
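One hedged option: a Splunk authentication token (Settings > Tokens, available on Splunk 7.3+) avoids the interactive username/password flow that would trigger the Duo prompt, and can be sent straight to the REST API as a Bearer token. A minimal Python sketch (the host and token values are placeholders):

import requests

SPLUNK_HOST = "https://splunk.example.com:8089"  # placeholder management endpoint
TOKEN = "<your-authentication-token>"            # placeholder token from Settings > Tokens

# List search jobs just to prove the token authenticates.
resp = requests.get(
    f"{SPLUNK_HOST}/services/search/jobs",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"output_mode": "json"},
    verify=False,  # replace with your CA bundle in production
)
resp.raise_for_status()
print(len(resp.json()["entry"]), "search jobs visible")

The Python SDK's client.connect() can similarly be handed a token instead of credentials; check the splunk-sdk docs for the exact parameter name in your SDK version.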
Hi All, We just upgraded our HWF to version 8.2.5, and now when we start Splunk we get this message:

"ERROR: Detected httpout stanza in outputs.conf, forwarding data over HTTP is only supported on Universal Forwarders. For more information, see "

This HWF is outputting all data to the HEC on another Splunk instance and is working fine. What is the meaning of the message? Is this function going to be deprecated? Note that the page it refers to says things like "Supported on Splunk universal forwarders only." It would be a real concern if this is to be deprecated - does anyone have any idea? Thanks, Keith
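If HTTP output from full Splunk instances does end up deprecated, the supported fallback on a HWF is classic splunktcp (S2S) forwarding. A minimal outputs.conf sketch (the receiver host/port are placeholders; the receiving instance needs a listening splunktcp input on that port):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-receiver.example.com:9997
useACK = true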
I have a script that sends, effectively, yum outputs to receivers/simple. props.conf says:

[yumstuff]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Miscellaneous
pulldown_type = 1

I expect each post to be one event, but some posts get broken into multiple events for unknown reasons. My guess is that those posts are longer, although I couldn't find any applicable limit in limits.conf. The broken ones are not all that long to start with. I examined one that was broken into three "events": combined, they have 18543 chars over 271 lines. The closest attribute in limits.conf I can find is maxchars, but that's for [kv] only, and the limit is already high:

[kv]
indexed_kv_limit = 1000
maxchars = 40960

The way it breaks also confuses me. My post begins with a timestamp, followed by some bookkeeping kv pairs, then the yum output. If this breakage were caused by limits, I would expect the event containing the first part to be the biggest, to the extent it exceeds that limit. But in general, the "event" corresponding to the end of the post is the biggest; even stranger, the middle "event" is generally extremely small, containing only one line. In the post I examined, for example, the first "event" contained 6710 chars, the second 71 chars, and the last 11762 chars. The breaking points are not special, either. For example:

2022-02-09T19:51:28+00:00 ...
...
---> Package iwl6000g2b-firmware.noarch 0:18.168.6.1-79.el7 will be updated
---> Package iwl6000g2b-firmware.noarch 0:18.168.6.1-80.el7_9 will be an update
<break>
---> Package iwl6050-firmware.noarch 0:41.28.5.1-79.el7 will be updated
<break>
---> Package iwl6050-firmware.noarch 0:41.28.5.1-80.el7_9 will be an update
---> Package iwl7260-firmware.noarch 0:25.30.13.0-79.el7 will be updated
...

Where should I look?
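One likely culprit, offered as a hedged guess: with LINE_BREAKER splitting on every newline and SHOULD_LINEMERGE left at its default of true, Splunk re-merges lines into events but stops merging at MAX_EVENTS lines (a props.conf setting, default 256, which is why nothing in limits.conf matches - a 271-line post would exceed it), and the merger can also break early before lines it mistakes for new timestamps (yum's dotted version strings are plausible false positives, which would explain the unremarkable break points). A sketch that instead breaks events only where a line begins with the ISO timestamp, with line merging disabled:

[yumstuff]
SHOULD_LINEMERGE = false
# new event only where a line begins with e.g. 2022-02-09T19:51:28
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})
# default TRUNCATE is 10000 chars; an 18543-char post needs headroom
TRUNCATE = 100000
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = Miscellaneous
pulldown_type = 1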
Running Splunk 8.2. I have discovered that after completing a dashboard within Dashboard Studio and bundling it up to move from one Splunk instance to another, the background image (png/jpeg) that was attached to the original dashboard is missing. In the source code, it does reference a KV store hash. The KV store has been "upgraded/updated" from the original environment the dashboard was designed on to the current Splunk environment where the background seems to be missing.

A possible solution would be to save the png within the app; however, I can't upload or reference that image in the app via the GUI given the host-separated Splunk environment (search head, master, etc. are all different virtual hosts, and none is a host I can simply "browse" from within Dashboard Studio, since that references the local box, which isn't part of the Splunk environment). And there doesn't seem to be good guidance on how to change the source code to point at the png within the app. I've seen other Splunk questions, and the response right now is that it's a possible "bug."

Is there anyone trying to do something similar who has an effective workaround?
Does anyone here have any experience running the CrowdStrike Falcon sensor in their Splunk environment? I've found the following: https://docs.splunk.com/Documentation/Splunk/8.2.5/ReleaseNotes/RunningSplunkalongsideWindowsantivirusproducts but it references on-access AV, and CrowdStrike is a behavioral AV, so that likely isn't totally applicable.

I have a case open with Splunk with this same question, but I wondered if the community had any experience, do's/don'ts, best practices, etc. My gut is that I won't see a substantive performance impact, but I'd love to have a little more knowledge before I start deploying the agent. Trying to search for this online has proven nigh impossible, since CrowdStrike-to-Splunk integration is very common and almost all the search hits focus on ingesting CrowdStrike logs, not actually running the agent on a Splunk environment.

For reference, I have a modestly sized distributed architecture with three search heads and three indexers (not clustered), in addition to a deployment server and multiple forwarders.
I see in the Splunk docs that summary indexing does not count against your license. It also says that summary indexes are built via transforming searches over event data. If I use a scheduled report that does not use a transforming command and saves the data to an index, will that count against the license? I.e., I want to extract a subset of data from the main index and save certain fields to a new index so that a role doesn't have access to all the data.
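A minimal sketch of the pattern being described (the index, sourcetype, and field names are placeholders): a scheduled search that filters events and writes selected fields to a separate index with collect.

index=main sourcetype=your_sourcetype your_filter
| fields _time, user, action, status
| collect index=restricted_summary

Worth noting: what makes summary data license-free is that collect writes it with the stash sourcetype by default; if the sourcetype is overridden, the written data is metered like any other ingest.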
I have this table and I'm trying to send it as a report/alert every morning to our Teams chat group. This is how it's getting sent out: it's only showing the first result of every row. Here's the query:

| webping http://CTXSDC1CVDI041.za.sbicdirectory.com:4444/grid/console
| append [ webping http://CTXSDC1CVDI042.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI043.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI044.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI045.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI046.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI047.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI048.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://ctxsdc1cvdi013.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI049.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI050.za.sbicdirectory.com:4444/grid/console ]
| eval timed_out = case(timed_out=="False", "Machine On", timed_out=="True", "Machine Off")
| eval response_code=if(response_code==200, "Hub and Node Up", "Hub and Node Down")
| rex field=url "http:\/\/(?<host_name>[^:\/]+)"
| table host_name response_code timed_out total_time
Hello, I have a table [screenshot] and I want this [screenshot]. I am not sure which tool (chart, table, anything else) and which arguments would be best to explore and learn in order to get the result I want. Do you have any advice? Thank you.
How do I customize the Phantom dashboard time filters dropdown box (see screenshot below)? For a Phantom instance, we have started exploring the data retention features of Splunk Phantom, keeping less than 1 year of Phantom data. We would like the maximum filter to equal the current number of days of data retention; otherwise, users are misled by time filters that exceed the retention window. A feature that might be nice to have is a way to tie the Phantom dashboard time filters dropdown box to the days of data retention.
All, I need some help on a problem I am trying to solve. Problem: I need to calculate the average events per unique user, per day, over a 14-day period (excluding weekends). Basically, we have users logging into a system, and I want to see if a threshold of, say, 10% or more outside the norm is reached for a particular user. The output would then list the usernames in violation of the above. Thanks for any guidance.
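A hedged SPL sketch of one reading of this (the index and field names are placeholders, and "the norm" is taken to be each user's own average weekday count over the window):

index=auth_events earliest=-14d@d latest=@d
| eval dow=strftime(_time, "%a")
| where NOT (dow="Sat" OR dow="Sun")
| bin _time span=1d
| stats count as daily_events by user, _time
| eventstats avg(daily_events) as avg_daily by user
| where daily_events > avg_daily * 1.10
| table user, _time, daily_events, avg_daily

eventstats keeps the per-day rows while attaching each user's own average, so the final where can flag only the days more than 10% above that user's norm.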
Seeing the ERROR message "may have returned partial results" from a few indexers. Logs from those indexers show the following error messages:

WARN CacheManagerHandler - Localization failure has been reported, cache_id="bid|index_name~8264~6A0ED00A-E4AB-4B46-9F69-CD517B4C8965|", sid="remote_*_1646235912.747477_6AFBB424-8451-40A4-A05C-A0337BDBC296", errorMessage='waitFor probe, cache_id="bid|index_name~8264~6A0ED00A-E4AB-4B46-9F69-CD517B4C8965|", did not localize all files before reaching download_status=idle files={"file_types":["tsidx","bloomfilter","deletes"]} local_files={"file_types":["dma_metadata","strings_data","sourcetypes_data","sources_data","hosts_data","lex","tsidx","bloomfilter","journal_gz","other"]} failure_code=0 failure_reason='

Any idea what's causing this and how to fix it?
Hi everyone, I'm trying to parse JSON inline. I'm using KV_MODE = json already, but I'm trying to achieve selective groups: essentially, I want to capture two values when a rule group has an exclusion type. Sample JSON:

[{"ruleGroupId":"AWS#AWSManagedRulesAmazonIpReputationList","terminatingRule":null,"nonTerminatingMatchingRules":[],"excludedRules":null},
 {"ruleGroupId":"AWS#AWSManagedRulesBotControlRuleSet","terminatingRule":null,"nonTerminatingMatchingRules":[],"excludedRules":null},
 {"ruleGroupId":"AWS#AWSManagedRulesCommonRuleSet","terminatingRule":null,"nonTerminatingMatchingRules":[],"excludedRules":[{"exclusionType":"EXCLUDED_AS_COUNT","ruleId":"SizeRestrictions_BODY"}]},
 {"ruleGroupId":"AWS#AWSManagedRulesKnownBadInputsRuleSet","terminatingRule":null,"nonTerminatingMatchingRules":[],"excludedRules":null}]

So for this, I want to capture the ruleGroupId name only if its excludedRules is not null, and then capture the exclusionType. Any help would be appreciated.
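A hedged SPL sketch of one approach (it assumes the array above sits in a field named rule_groups; point the first spath at wherever the array actually lives in your events):

... | spath input=rule_groups path={} output=group
| mvexpand group
| spath input=group
| where isnotnull('excludedRules{}.exclusionType')
| rename excludedRules{}.exclusionType as exclusionType
| table ruleGroupId, exclusionType

Each array element becomes its own row via mvexpand, the second spath flattens that element into fields, and the where keeps only rule groups whose excludedRules carried an exclusionType.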
Hello, we have Splunk running in an AWS account, and getting AWS CloudWatch metrics data from that account is no issue at all. However, we have a second AWS account, and I currently find no way to assume an IAM role in that other account. Using an IAM user is forbidden for security reasons. Of course, we are using the standard Splunk App for AWS. It would help a ton if anyone has an idea. Regards, Mike
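For what it's worth, the usual cross-account pattern is a role in the second account that trusts the role attached to the Splunk instance, which the add-on can then assume (where the remote role ARN gets configured depends on the add-on version, so check its docs). A hedged sketch of the trust policy on the second account's role (the account ID and role names are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<splunk-account-id>:role/<splunk-instance-role>" },
      "Action": "sts:AssumeRole"
    }
  ]
}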
Hi, I can't get Splunk to use the content of timestamp_start as _time. This is an example of the log:

canale=<value>;an=<value>;num_fattura=<value>;data_emissione=2022-01-01;timestamp_start=2022-03-02 11:22:00;timestamp_end=2022-03-02 11:22:02;total_time=1.56035;http_code=200;purl=<value>

and this is what I get as _time: 2022-01-01 11:22:00 (the date from data_emissione combined with the time from timestamp_start).

I found a configuration that should work, so I edited the props.conf file on the deployment server, but even though I can see the "new" props.conf on the forwarder and on the deployment server, newly indexed files still have the wrong timestamp:

[my_sourcetype]
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
TIME_FORMAT=%Y-%m-%d %H:%M:%S
TIME_PREFIX=.*\d*-\d*-\d*\;timestamp_start=
MAX_TIMESTAMP_LOOKAHEAD=19

After editing props.conf, I reloaded the deployment server (splunk reload deploy-server) and then restarted Splunk on the deployment server and on the forwarder. My Splunk version is 6.5.1. Thanks for any help you may be able to give me!
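A hedged sketch of a simpler stanza, with one important caveat: timestamp extraction happens at parse time, so this props.conf has to live on the first full Splunk instance in the data path (an indexer or heavy forwarder); deploying it only to a universal forwarder has no effect, which would explain why the edit changed nothing.

[my_sourcetype]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
# anchor directly on the key name; the greedy .*\d*-\d*-\d* prefix is unnecessary,
# since TIME_PREFIX is a regex that is searched for within the event
TIME_PREFIX = timestamp_start=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

The observed _time (data_emissione's date plus timestamp_start's time) is characteristic of Splunk's automatic date/time detection running with no TIME_PREFIX in effect, which is another hint that the stanza isn't being applied where parsing actually occurs.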