I have a distributed environment with 2 independent search heads.  I run the same search on both, and one shows a field that the other does not.  I can't figure out why.  I can't find any data models that mention the index or sourcetype I'm searching.  Is there a way to show me if a data model is being used in my search? The logs are coming from an IBM i-series system using syslog through sc4s.
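A minimal sketch of one way to inspect this, assuming you have permission to query the REST API from the search bar: list the data model definitions on each search head and search them for the index or sourcetype in question (the endpoint path follows the standard Splunk REST API; `your_sourcetype` is a placeholder).

```
| rest /servicesNS/-/-/datamodel/model
| search "*your_sourcetype*"
| table title eai:acl.app
```

Running this on both search heads and diffing the results would show whether one has a data model (or an accelerated one) that the other lacks; differing fields between search heads are often also caused by differing search-time field extractions rather than data models.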
Hello all, I have an app where, to perform an action, I can't insert the required parameter as a list, only as a string. This is a big issue because I am using a data value from action results as the parameter to insert, for example: "my_App_action:action_result.data.*.device_id". As far as I understand, the action_result.data collection is always an array, so I cannot use this returned action-result parameter directly as the parameter for my action. The only workaround I found is to add a code block that takes the datapath parameter as input and outputs value_name[0]. Is there a better workaround for this?
In previous versions of Splunk (at least up to 9.1.0), we could re-arrange the Apps menu by dragging the apps up or down in the Launcher app. Now that Launcher seems to have been rebuilt with Dashboard Studio, that capability is no longer present. Is there a new way for users to re-arrange their Apps menu?
Hello, I'm looking to change our indexing architecture. We have dozens of AWS accounts and use the Splunk AWS app to ingest data from an SQS queue. Currently, we have a single SQS-based input for each individual AWS account that grabs all the data and applies the index and a catch-all sourcetype named aws:logbucket. From there, we route the data to a more specific sourcetype based on the type of data: aws:logbucket is changed to aws:cloudwatch:vpcflowlogs, aws:cloudtrail, aws:config, etc.

This has worked well enough for us, but I now have a new requirement. For each of these AWS accounts, I want a separate index per AWS service per account, e.g. awsaccount1-vpcflow, awsaccount1-cloudtrail, awsaccount2-vpcflow, etc. We use SmartStore (S2), so storing aws:cloudtrail alongside aws:cloudwatch:vpcflow hurts the performance of aws:cloudtrail searches: searching aws:cloudtrail data requires us to write all the aws:cloudwatch:vpcflow data back to disk as well. This has meant 120x more buckets written to disk for aws:cloudtrail because it's stored with VPC Flow data. Splitting these indexes out will bring huge performance improvements for my Splunk environment.

I would like to use a lookup table that matches the source of the SQS-based S3 input and specifies the index and sourcetype. I am unable to do this using REGEX and FORMAT alone, since the bucket names and index names are not a 1:1 match. For example, for s3://acc1/cloudtrail/... I would like a lookup table that routes to index account1 and sourcetype aws:cloudtrail, and for s3://acc2/config/... routes to index account2 and sourcetype aws:config.

After that long summary: how do I technically implement this, and how will a lookup with ~300-400 rows affect performance? Thank you, Nate
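Classic props/transforms has no "lookup at parse time", so one common pattern is to generate one transform stanza per row of the lookup table (a few hundred anchored regexes is typically cheap at parse time, since each non-matching stanza fails fast on the anchor). A sketch, assuming the catch-all sourcetype `aws:logbucket`; the account, bucket, and index names are illustrative placeholders:

```
# props.conf
[aws:logbucket]
TRANSFORMS-route_index = route_acc1_cloudtrail, route_acc2_config

# transforms.conf -- one stanza per (bucket path -> index) row in the lookup
[route_acc1_cloudtrail]
SOURCE_KEY = MetaData:Source
REGEX = ^s3://acc1/cloudtrail/
DEST_KEY = _MetaData:Index
FORMAT = account1-cloudtrail

[route_acc2_config]
SOURCE_KEY = MetaData:Source
REGEX = ^s3://acc2/config/
DEST_KEY = _MetaData:Index
FORMAT = account2-config
```

A small script could render these stanzas from the CSV so the lookup table remains the single source of truth.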
I am getting an error when installing the PHP agent on a RHEL server.

PHP version id: 7.4
PHP extensions directory: /usr/lib64/php/modules
PHP ini directory: /etc/
PHP thread safety: NTS
Controller Host: https://xxxxxxxx.saas.appdynamics.com/controller/
Controller Port: 8090
Application Name: WebApp
Tier Name: DemoWebTier
Node Name: DemoNode
Account Name: xxxxxxxx
Access Key: xxxxxxxx
SSL Enabled: true
HTTP Proxy Host:
HTTP Proxy Port:
HTTP Proxy User:
HTTP Proxy Password File:
TLS Version: TLSv1.2
[Error] Agent installation does not contain PHP extension for PHP 7.4

I was installing the agent using the shell script method. Please let me know if someone has faced a similar issue and how we can fix it. Thanks
Hello, how do I obtain an NFR license (or the like)? We have integrations with Splunk but no way to test or evaluate them. The people who previously handled this are no longer with the company, and we don't have much information.
Hi, I would like some help extracting the next 5 lines after a keyword, where the extraction also includes the full line the keyword is part of. Example below, where the keyword is 'ethernet':

**********************************************
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down Not configured
reth1 Up 1
reth2 Up 1
reth3 Up 1
reth4 Down Not configured
reth5 Down Not configured
reth6 Down Not configured
reth7 Down Not configured
reth8 Down Not configured
reth9 Up 2

Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
*****************************************

An example value of the field would then be:

Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down Not configured
reth1 Up 1
reth2 Up 1
reth3 Up 1

Thanks. If it can be generic enough that I can reuse it in other rex searches on similar data, even better.
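A sketch of a generic rex for this, assuming the event retains its original line breaks: capture the whole line containing the keyword plus the next five lines. Swap `ethernet` for another keyword and `{5}` for another line count to reuse it.

```
... | rex field=_raw "(?<block>[^\r\n]*ethernet[^\r\n]*(?:[\r\n]+[^\r\n]*){5})"
```

The `[^\r\n]*keyword[^\r\n]*` part grabs the full keyword line, and each `[\r\n]+[^\r\n]*` repetition consumes one following line.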
index=myindex source="/var/log/nginx/access.log"
| eval status_group=case(status!=200, "fail", status=200, "success")
| stats count by status_group
| eventstats sum(count) as total
| eval percent=round(count*100/total,2)
| where status_group="fail"

Looking at nginx access logs for a web application. This query tells me the number of failures (non-200), the total number of calls (all messages in the log), and the percentage of failures vs. total, as follows:

status_group  count  percent  total
fail          20976  2.00     1046605

What I'd like to do next is timechart these every 30 minutes to see what % of failures I get in 30-minute windows, but my only close attempt computed the percentage against the total calls in the whole log, skewing the result completely. Basically, a row like the above for every 30 minutes of my search period. Feel free to rewrite the entire query, as I cobbled this together anyway.
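One way to get a per-window failure percentage is to count failures and total calls in the same timechart, then divide per row, so each 30-minute bucket is compared only against its own total. A sketch:

```
index=myindex source="/var/log/nginx/access.log"
| timechart span=30m count(eval(status!=200)) as fail count as total
| eval percent=round(fail*100/total, 2)
```

The `count(eval(...))` idiom counts only events where the expression holds, while the plain `count` gives the bucket's total, so `percent` is scoped to each window rather than the whole search period.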
I have a "cost" for two different indexes that I want to calculate in one and the same SPL. As the "price" differs per index, I can't just use a "by" clause in my count/sum, since I don't know how to apply the separate costs that way. Let's say idxCheap costs $10 per event and idxExpensive costs $20 per event. I've written this SPL, which works, although the "cost" data ends up in a unique column for each index (the count is still in the same column):

index=idxCheap OR index=idxExpensive
| stats count by index
| eval idxCheapCost = case(index="idxCheap", count*10)
| eval idxExpensiveCost = case(index="idxExpensive", count*20)

The results look like this:

count  idxCheapCost  idxExpensiveCost  index
44892  448920                          idxCheap
155                  3100              idxExpensive

Any pointers on how to achieve this most efficiently and dynamically?
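A sketch of folding the price into a single shared `cost` column with one `case()`:

```
index=idxCheap OR index=idxExpensive
| stats count by index
| eval price=case(index="idxCheap", 10, index="idxExpensive", 20)
| eval cost=count*price
```

For a more dynamic variant, the per-index prices could live in a CSV lookup (index, price) and be pulled in with `| lookup <your_lookup> index OUTPUT price` in place of the `case()`; the lookup name here is a placeholder.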
I have used Splunk to threat hunt many times and aspire to build a distributed Splunk instance in the future. I decided to start learning the installation, configuration, and deployment process by building a standalone instance. I got to a point where I thought I had completed all the steps necessary for a functioning Splunk setup: connections are established on 8089 and 9997, and my web page is good. But as soon as my apps are pushed to my client, Splunk starts throwing an error stating the indexers and queues are full, and it also appears I am getting no logs from my applications. Any help is greatly appreciated.
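Queue-full errors usually cascade backwards from whichever queue fills first (often the indexing queue, e.g. from a disk or parsing problem). A sketch of a search over Splunk's own metrics to see which queue is the bottleneck, assuming `_internal` is searchable on the instance:

```
index=_internal sourcetype=splunkd group=queue
| timechart span=5m max(eval(current_size_kb/max_size_kb*100)) as pct_full by name
```

The queue that sits near 100% earliest is the one to investigate; the ones behind it are just backed up.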
I am using the query below to merge two searches with append. However, I am unable to get the value of the field named "Code" from the first query (under | search "Some Logger") printed in the Statistics section:

index=* sourcetype=* host=*
| search "Some Logger"
| rex "LoggerName\|(?<time>\w+)\|(?<Service>\w+)\|(?<Type>\w+)\|(?<brand>\w+)\|(?<template>\w+)\|(?<hashId>[\w-]+)\|(?<Code>\w+)"
| table Code
| append
    [ search host=*
      | search "LoggerName2*"
      | rex field=_raw "field1=(?<field1>)\}"
      | rex field=_raw "field2=(?<field2>),"
      | rex field=_raw "field3=(?<field3>[a-zA-z-_0-9\\s]*)"
      | rex field=_raw "(?<field4>[\w-]+)$"
      | rex field=_raw "field5=(?<field5>),"
      | rex field=_raw "field6=(?<field6>),"
      | table field1,field2 ]

The result from the second/child query, i.e. | search "LoggerName2*", prints just fine in tabular format. The value of the Code field is an API response code, i.e. 2XX, 3XX, 4XX, or 5XX. Could someone please help? Thanks!
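Note that `append` does not merge columns: each branch contributes only the fields it tables, so rows from the outer search fill `Code` and rows from the subsearch fill `field1`/`field2`, with blanks elsewhere. A minimal illustration with `makeresults` (hypothetical values):

```
| makeresults | eval Code="200" | table Code
| append [| makeresults | eval field1="a", field2="b" | table field1 field2]
```

If `Code` is entirely empty, the first `rex` is likely not matching; testing the outer search alone with `| table _raw Code` would confirm whether the extraction works before the append is involved.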
I have raw data like:

Error=REQUEST ERROR | request is not valid.|","time":"1707622073040"

and I want to extract "REQUEST ERROR | request is not valid." into a new field, so I tried rex to match up to |" with the query below, but it still only returns "REQUEST ERROR":

|rex field=_raw "Error\=(?<ErrDesc>[^|\"]+)"
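The character class `[^|\"]+` stops at the first pipe, which here sits inside the message itself. A sketch of a lazy match that runs up to the literal `|",` sequence instead:

```
| rex field=_raw "Error=(?<ErrDesc>.+?)\|\","
```

The lazy `.+?` expands only until the first following `|",`, so the embedded ` | ` inside the message is kept in the captured value.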
I am trying to script the installation of the Mac Splunk Universal Forwarder package. The package is a disk image (.dmg). I understand that we can mount the image using hdiutil and access the volume to find the .pkg file. The issue is that when we attempt to run the installer on the pkg, the end user is prompted to answer dialog boxes, which we do not want. Is there a switch to silently install the extracted .pkg or .dmg on a macOS machine?
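A sketch of a fully silent install using the standard macOS tools; the mount point and the .pkg file name inside the image are assumptions (check the actual names after attaching), and `installer -pkg ... -target /` runs without any dialogs:

```
# Mount the dmg without opening a Finder window (dmg path and pkg name are assumptions)
hdiutil attach splunkforwarder.dmg -mountpoint /tmp/splunkfwd -nobrowse

# Run the embedded package installer silently (requires root)
sudo installer -pkg "/tmp/splunkfwd/splunkforwarder.pkg" -target /

# Unmount when done
hdiutil detach /tmp/splunkfwd
```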
I see a lot of deprecation errors in the _internal index. How can these errors be resolved?
Is it possible to use something like this: GitHub - okfse/sweden-geojson: Tiny GeoJSON files of Sweden's municipalities and regions, or this: GitHub - perliedman/svenska-landskap: Sweden's provinces as open geodata in GeoJSON, with Splunk? If so, are there any manuals/instructions/blog posts etc. you could point me to describing how to achieve this? Best regards
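Splunk's choropleth maps consume KML/KMZ geospatial lookups rather than GeoJSON directly, so one approach is to convert the GeoJSON to KML first (e.g. with a conversion tool such as ogr2ogr) and register it as a geo lookup. A sketch, where the lookup name, file name, and field name are assumptions:

```
# transforms.conf
[geo_sweden]
external_type = geo
filename = sweden.kml
```

Then a search can feed the choropleth with `geom`:

```
... | stats count by municipality
| geom geo_sweden featureIdField=municipality
```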
I have a number of devices that send logs to Splunk, and I want to know when devices stop logging. For this example search:

index="mydevices" logdesc="Something that speeds the search"
| top limit=40 devicename

How can I find devicenames that have logged in the last week but haven't logged in the last 30 minutes, if that makes sense? Iain.
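A sketch of one common pattern: take the latest event time per device over the last week, then keep only devices whose newest event is older than 30 minutes:

```
index="mydevices" logdesc="Something that speeds the search" earliest=-7d
| stats latest(_time) as last_seen by devicename
| where last_seen < relative_time(now(), "-30m")
| eval last_seen=strftime(last_seen, "%F %T")
```

Devices that have never logged in the week simply won't appear; catching those as well would need a reference list of expected devices (e.g. a lookup) to compare against.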
"I need to create a dashboard with two queries in one dashboard, one query having a fixed time range of "Today" and the other query needs to select "earliest and latest" from the drop down. The data ... See more...
"I need to create a dashboard with two queries in one dashboard, one query having a fixed time range of "Today" and the other query needs to select "earliest and latest" from the drop down. The data dropdown will have two values "Yesterday" and "last week". Last week is the day from last week (if today is Feb 13, last week should show data from Feb Feb 06)" for.eg  index="abc" sourcetype="Prod_logs" | stats count(transactionId) AS TotalRequest (***earliest and latest needs to be derived as per user selection from drop down) | appendcols [search index="abc" sourcetype="Prod_logs" earliest=@d  latest=now (****Today's data****) | stats count(transactionId) AS TotalRequest]      
Hi All, I am trying to pass time variables to the search when I click on a value in a drilldown dashboard. Below is the source of the dashboard:

<form version="1.1">
  <label>test12</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>test12</title>
      <table>
        <search>
          <query>index=_internal status=* sourcetype=splunkd |lookup test12 name AS status OUTPUT value | stats count by value</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">row</option>
        <option name="refresh.display">progressbar</option>
        <drilldown target="_blank">
          <set token="drilldown_srch">index=_internal status=* sourcetype=splunkd |lookup test12.csv name as status output value | where value=$row.value$</set>
          <link>search?q=$drilldown_srch|u$</link>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

I tried adding the time variables in the link as below, but no luck:

<link>search?q=$drilldown_srch?earliest=$field1.earliest&latest=$field1.latest$|u$</link>

Thanks
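In Simple XML the time range goes on the search URL as separate query parameters joined with `&` (which must be written `&amp;` inside the XML), rather than embedded in the `q` token. A sketch of the corrected `<link>`:

```xml
<link>search?q=$drilldown_srch|u$&amp;earliest=$field1.earliest$&amp;latest=$field1.latest$</link>
```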
Hello, this app was working fine for me until I updated to Splunk Enterprise 9.1.2, whereupon the urllib library keeps raising errors saying it does not understand HTTPS. From some rudimentary googling, this may be related to the Splunk Python urllib library not being compiled with SSL support. Would it be possible to refactor this app to use the HTTP request helper functions?

bash-4.2$ /opt/splunk/bin/python3 getSplunkAppsV1.py
Traceback (most recent call last):
  File "getSplunkAppsV1.py", line 92, in <module>
    main()
  File "getSplunkAppsV1.py", line 87, in main
    for app_json in iterate_apps(app_func):
  File "getSplunkAppsV1.py", line 76, in iterate_apps
    data = get_apps(limit, offset, app_filter)
  File "getSplunkAppsV1.py", line 35, in get_apps
    data = json.load(urllib.request.urlopen(url))
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 548, in _open
    'unknown_open', req)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 1420, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: https>

(The same error is produced when I use Python version 2.)
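`urlopen error: unknown url type: https` typically means the Python interpreter in use was built without SSL support, so urllib never registers an https:// handler. A quick check you can run with the same interpreter (a sketch; if the import itself fails, that interpreter cannot open https:// URLs at all):

```python
# If this raises ImportError (e.g. "No module named '_ssl'"), the interpreter
# was built without SSL support and urllib cannot handle https:// URLs.
import ssl

# Prints the OpenSSL build string the interpreter was linked against.
print(ssl.OPENSSL_VERSION)
```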
Hi, I created a column chart in Splunk that shows the month, but I would also like to indicate the day of the week for each of those months. Sample query (with a daily count computed first, since avg(count) needs a count field to exist):

index=_internal
| bin _time span=1d
| stats count by _time
| eval month=strftime(_time,"%b")
| eval day=strftime(_time,"%a")
| stats avg(count) as Count max(count) as maximum by month, day