All Topics



Hello, can someone guide me on how to ingest logs from an SFTP server? I have Heavy Forwarders available that sit outside the SFTP location. Thanks!
I have a table, and when I click its edit button I take the values from that row and set them in the filters above. In the checkbox case I have two choices: if only one of System and BI Report is selected, its value gets set in the filters above, but if both System and Report are selected, the filters don't get set at all. Any help appreciated.
Hi Team, I'm looking for a way to rename a correlation search that was created with the wrong format. The CS is currently disabled, but I don't see a way to actually rename it. If I delete it from Saved Searches, will it also remove all the notables that were created from this custom CS? Regards, Varun
I am attempting to list all of the inputs I have configured in "Splunk_TA_aws". I can list all of my inputs set up as CloudTrail, S3, and SQS, but I am unable to see my 20+ (all working) CloudWatch Logs inputs. I have attempted to hit:

https://blahblah:8089/servicesNS/admin/Splunk_TA_aws/data/inputs/aws_cloudwatch_logs/
https://blahblah:8089/servicesNS/admin/Splunk_TA_aws/data/inputs/aws_cloudwatch/

Zero results show up, while the equivalent endpoints for CloudTrail, S3, and SQS return my inputs correctly. Is there another path/location to gather my inputs set up as CloudWatch Logs?
We're intermittently getting this error (so far twice in two weeks) when trying to use the lookup command on a KV store. The full error message is "External command based lookup <kv_store> is not available because KV Store status is currently unknown". We only found the error in the logs a few hours after the failure, because the scheduled search with the lookup command didn't run successfully. When run manually or on its next schedule, the search ran fine, and the KV store is working as intended upon checking. I couldn't find information online on what the "unknown" status means for KV stores. Has anyone else seen this error?
Hi, we have a requirement to pull data from a third-party AWS account. The third-party provider will push the data to an S3 bucket in their AWS account, and we are looking to pull that into an on-prem Splunk instance. There is an AWS Splunk add-on on Splunkbase; can we use this add-on to pull data from a third-party AWS account, and if so, how is it authenticated against the third-party account? Please point me to any available documentation. Any suggestions?
Hi everyone! I'm still fairly new to Splunk, so sorry if this is a simple question. I have some logs that do not show the field names in the search results, but when I expand an event, I can see the names. Is it not possible to have the field names shown directly in the results list (as in my first screenshot)?
I have two kinds of logs containing two types of URIs that I want to rex into different fields:

{logType=DOWNSTREAM_RESPONSE, requestUri=https://google.come.com:8000/google/api/updateapi?&lo=en_US&sc=RT, duration=22, requestId=znXdSxbJQw6iVTtEeykZVA, globalTrackingId=null, requestTrackingId=null, request={body={"a":{"b":{"country":"US", }}}, method=POST, requestUri=https://google.come.com:443/google/api/updateapi?&lo=en_US&sc=RT}, response=(200 OK, { "body="{} }, "headers="{}, "statusCode=OK", statusCodeValue=200}")"}

{logType=DOWNSTREAM_RESPONSE, requestUri=https://google.come.com:8000/google/api/deleteapi, duration=33, requestId=asdasd, globalTrackingId=null, requestTrackingId=null, request={body={"a":{"b":{"country":"US", }}}, method=POST, requestUri=https://google.come.com:443/google/api/updateapi?&lo=en_US&sc=RT}, response=(200 OK, { "body="{} }, "headers="{}, "statusCode=OK", statusCodeValue=200}")"}

The fields I want extracted:

http = https
URL = google.come.com:8000
service = /google
api = /api/updateapi
api = /api/deleteapi
params = ?&lo=en_US&sc=RT

Is there a way to regex this?
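Both requestUri shapes above can be split with a single pattern. A minimal Python sketch; the field names (http, url, service, api, params) are just the labels from the post, and I'm assuming the URI always starts with a scheme and host:port:

```python
import re

# One pattern for both URI shapes: with or without a trailing query string.
URI_RE = re.compile(
    r"(?P<http>https?)://"      # scheme
    r"(?P<url>[^/]+)"           # host:port, e.g. google.come.com:8000
    r"(?P<service>/[^/]+)"      # first path segment, e.g. /google
    r"(?P<api>/[^?]+)"          # remaining path, e.g. /api/updateapi
    r"(?P<params>\?.*)?$"       # optional query string
)

def split_uri(uri: str) -> dict:
    m = URI_RE.match(uri)
    return m.groupdict() if m else {}
```

The same pattern should translate to SPL's rex more or less verbatim, e.g. | rex field=requestUri "(?<http>https?)://(?<url>[^/]+)(?<service>/[^/]+)(?<api>/[^?]+)(?<params>\?.*)?" — treat that as an untested starting point.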
I have created a dashboard with multiple panels, each panel based on a dedicated report. Here's an example. This works great, but there must be a better way than having four reports per dashboard when the only difference between the searches is the time range.

<dashboard>
  <label>Packets</label>
  <row>
    <panel>
      <title>Daily</title>
      <chart>
        <search ref="Packets Daily"></search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Weekly</title>
      <chart>
        <search ref="Packets Weekly"></search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Monthly</title>
      <chart>
        <search ref="Packets Monthly"></search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Yearly</title>
      <chart>
        <search ref="Packets Yearly"></search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</dashboard>

What I would prefer is a single report, with each panel applying the correct time frame. I do not want a time picker displayed on the dashboard. I tried using earliest and latest syntax in each search section of the XML, but the time is inherited from the report. Thanks in advance.
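One way to avoid four near-identical reports is to inline the query in each panel and set the time range with earliest/latest elements inside each search. A sketch of a single panel with a placeholder query (the real search would be whatever the Packets reports run); note that when a panel uses ref=, the report's own time range takes precedence, which matches the inheritance behavior described above:

```xml
<panel>
  <title>Weekly</title>
  <chart>
    <search>
      <query>index=main sourcetype=packets | timechart count</query>
      <earliest>-7d@d</earliest>
      <latest>now</latest>
    </search>
    <option name="charting.drilldown">none</option>
  </chart>
</panel>
```

Repeating this panel with -1d@d, -1mon@mon, and -1y@y earliest values would give the Daily/Monthly/Yearly variants from one shared query string.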
Hello, I downloaded and installed Splunk Enterprise 9.x on my laptop. I am creating a Dashboard Studio dashboard and experiencing an issue with images. I am using a token in the image path to change the image based on a dropdown input; therefore the images can't be held in the KV store and must remain local to my PC. In my production environment at work (RedHat), this works. The image path, within the DS configuration pane, is

/en-US/static/app/search/images

although the path on the Linux server is really

etc/apps/search/appserver/static/images

Splunk manipulates the path name in some strange way: apps becomes app, and static is placed before search (the app name). I am using Windows on my laptop, and I can't figure out the correct path for the image URL. The Windows path for these images is

C:\Program Files\Splunk\etc\apps\search\appserver\static\images

What is the Dashboard Studio Splunk-speak for this path on Windows? Thanks in advance for your help. God bless, Genesius
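For what it's worth, the URL side of this mapping should be identical on Windows and Linux: Splunk Web serves files under an app's appserver/static directory at /en-US/static/app/<app>/..., whatever the filesystem path underneath looks like. A small Python sketch of the mapping as I understand it (the logo.png filename in the test is made up for illustration):

```python
from pathlib import PurePosixPath, PureWindowsPath

def static_url(fs_path: str) -> str:
    """Sketch: map etc/apps/<app>/appserver/static/<rest> (any OS path)
    to the /en-US/static/app/<app>/<rest> URL Splunk Web serves it at."""
    # Normalize Windows or POSIX separators into path components.
    parts = list(PureWindowsPath(fs_path).parts) if "\\" in fs_path \
        else list(PurePosixPath(fs_path).parts)
    app = parts[parts.index("apps") + 1]          # app name follows "apps"
    rest = parts[parts.index("static") + 1:]       # everything under static/
    return "/en-US/static/app/" + "/".join([app, *rest])
```

So the Dashboard Studio image URL on the Windows laptop should still be /en-US/static/app/search/images/<file>; only the on-disk location differs.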
Hello Splunkers, is there a way to add padding or control the white space between single value blocks in the aggregated fields? I have three blocks created with the single value visualization in a trellis layout. This creates three nice blocks, but I would like to increase the white space between them if possible, or alternatively use three separate single value visualizations and space them with padding, etc. Thank you for a great resource, eholz
I have this working query, which needs some additional detail:

index=_internal earliest=-1h@h latest=@h
| lookup api uri OUTPUT operation service
| rex "duration=(?<response_time>[^,]+)"
| multikv
| eval ReportKey="Today"
| append
    [ search index=_internal earliest=-7d-1h@h latest=-7d@h
      | lookup api uri OUTPUT operation service
      | rex "duration=(?<response_time>[^,]+)"
      | multikv
      | eval ReportKey="lastweek"
      | eval _time=_time+604800 ]
| stats first(uri) as apiName avg(response_time) as avgresponse_time count by operationName ReportKey

Is there a way to get output like the table below, comparing the average response times and giving the percentage difference?

operation | uri      | today_avg(response_time) | lastweek_avg(response_time) | % difference avg(response_time) | today count | last week count
abc       | /api/abc | 222                      | 333                         |                                 | 12312       | 42343
xyz       | /api/xyz | 867                      | 4234                        |                                 | 87978       | 67867
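The missing column is straightforward arithmetic once the two averages sit side by side. A Python sketch of the calculation (I'm assuming the difference is expressed relative to last week's average; flip the denominator if you want it relative to today):

```python
def pct_diff(today_avg: float, lastweek_avg: float) -> float:
    """Percent change of today's average vs last week's, rounded to 1 dp."""
    return round((today_avg - lastweek_avg) / lastweek_avg * 100, 1)
```

In SPL, the equivalent step, after reshaping with chart or xyseries so both averages land in one row per operation, would be an eval like perc_diff=round((today_avg-lastweek_avg)/lastweek_avg*100,1); untested, so treat it as a sketch.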
Hello All, is it possible to create a search or alert based on dynamic variables? The end goal I'm trying to achieve is to send an email if any of the tests exceed a 10% increase in run time. I have the following search query, which generates the table I want; however, I want it to run every night against versionN and versionN-1.

index="fe" source="regress_rpt" pipeline="soc" version IN("23ww10b","23ww11a") dut="*" (testlist="*") (testName="*") status="*" earliest=-1mon latest=now()
| eval lastTestPathElement=replace(testPath, ".*/", "")
| search lastTestPathElement="**"
| chart max(cyclesPerCpuSec) AS max:cyclesPerCpuSec BY version lastTestPathElement
| transpose header_field=version column_name=test_run
| eval cycles_version_delta=('23ww11a' - '23ww10b'),
       diff_percentage=round('cycles_version_delta'/'23ww11a' * 100, 1),
       status=if(diff_percentage < 10, "PASS", "FAIL")

Results table:

test_run     | 23ww10b | 23ww11a | cycles_version_delta | diff_percentage | status
basic_test   | 631.68  | 663.80  | 32.12                | 4.80            | PASS
basic_test.1 | 457.48  | 742.98  | 285.50               | 38.40           | FAIL
basic_test.2 | 730.04  | 691.25  | -38.79               | -5.60           | PASS

This search is hard-coded to versions 23ww10b and 23ww11a. I'd like to automatically run it on the latest version and latest version - 1, as well as send an email if there is any FAIL in the status column. What is the best way to do this, if it's even possible? Thanks, Phil
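For the "latest two versions" part, note that plain string sorting breaks on this naming scheme ("23ww9a" would sort after "23ww11a"), so the week number has to be compared numerically. A hypothetical Python sketch, assuming the version format is always <year>ww<week><letter> as in the post:

```python
import re

def version_key(v: str):
    """Parse '23ww10b' into (year, week, suffix) so weeks sort numerically."""
    m = re.fullmatch(r"(\d+)ww(\d+)([a-z]*)", v)
    year, week, suffix = m.groups()
    return (int(year), int(week), suffix)

def latest_two(versions):
    """Return (versionN-1, versionN) from a list of version strings."""
    ordered = sorted(set(versions), key=version_key)
    return ordered[-2], ordered[-1]
```

On the Splunk side, a scheduled wrapper script (or a subsearch that derives the two newest version values from the data) could substitute the pair into the query each night, with the alert condition being "number of FAIL rows > 0".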
I am trying to build an alert for login failures in AWS CloudTrail. In general I have it working, but my joins are missing some of the desired events. Specifically, I am building an 'index' value consisting of username+IP, e.g.

| eval user_IP = username + src_ip

but I now see that some seemingly identical values are being evaluated as separate. For instance, when you click on the Selected Values view (left side in the results), there are two separate entries which, at least on screen, appear to be identical.

WHAT THE POPUP SHOWS
user_IP
2 Values, 100% of events
Values              Count
firstuser172.31.1.1   2
firstuser172.31.1.1   1

I suspect there is a hidden character in the second value, or maybe a trailing space (though there is none when I try adding each to the search). How can I modify my eval to generate values without hidden characters? (I already tried adding a lower() function, without success.)
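One way to confirm (and fix) the hidden-character theory is to strip everything that isn't a visible ASCII character before concatenating. A Python sketch of the idea; the non-breaking space in the test is just an example of the kind of invisible character that could cause this:

```python
import re

def clean_key(value: str) -> str:
    """Remove anything outside the visible ASCII range (0x21-0x7e),
    so look-alike values collapse into one join key."""
    return re.sub(r"[^\x21-\x7e]", "", value)
```

The SPL analogue would be wrapping each field in trim() plus a replace() with a similar character class before the concatenation; I haven't verified which escape syntax Splunk's replace() accepts here, so test it on a few of the offending events first.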
Hello Community, yesterday I realized that I can't reach my heavy forwarder. I already tried restarting the splunkd service, but I still can't get web access. My Splunk is running on Windows Server 2019. Can someone help me? I don't know which log I should check or what the usual first steps are. Thank you for your help.
Hi, I am getting warnings similar to the one below for 5 inputs.conf.spec stanzas:

03-22-2023 09:03:52.484 +0000 WARN SpecFiles [45520 ConfReplicationThread] - Found parameter "python.version" inside "/apps/splunk/splunk/etc/apps/splunk_app_soar/README/inputs.conf.spec", scheme "audit://", but this parameter will be ignored as it does not contain the correct sequence of characters (a parameter name must match the regex "([0-9a-zA-Z][0-9a-zA-Z_-]*)").

The 5 stanzas are:

Splunk_TA_paloalto/README/inputs.conf.spec, scheme "iot_security://"
TA-tenable/README/inputs.conf.spec, scheme "tenable_io://"
TA-tenable/README/inputs.conf.spec, scheme "tenable_securitycenter://"
TA-tenable/README/inputs.conf.spec, scheme "tenable_securitycenter_mobile://"
splunk_app_soar/README/inputs.conf.spec, scheme "audit://"

The definitions for python.version in each stanza are:

Splunk_TA_paloalto/README/inputs.conf.spec: [] python.version = python3
TA-tenable/README/inputs.conf.spec: [tenable_io://] python.version = python3
TA-tenable/README/inputs.conf.spec: [tenable_securitycenter://] python.version = python3
TA-tenable/README/inputs.conf.spec: [tenable_securitycenter_mobile://] python.version = python3
splunk_app_soar/README/inputs.conf.spec: [audit://] python.version = {default|python|python2|python3}

All the definitions for python.version seem to match the regex requirement stated in the warning. I also have other spec files with python.version defined the same way that are not causing these messages:

system/README/inputs.conf.spec: [script://] python.version = {default|python|python2|python3}
TA-MS-AAD/README/alert_actions.conf.spec: [dismiss_azure_alert] python.version = python3

Does anyone have any ideas how to stop these messages being generated?
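A quick check against the regex quoted in the warning suggests the definitions do not actually match it: the allowed character class is letters, digits, underscore, and hyphen, and a dot is none of those, so "python.version" can never match in full. A small Python sketch using the warning's own pattern (why the system-level spec files don't trigger the same warning, I can't say; they may be validated by a different code path):

```python
import re

# The exact pattern quoted in the SpecFiles warning message.
PARAM_RE = re.compile(r"([0-9a-zA-Z][0-9a-zA-Z_-]*)")

def is_valid(name: str) -> bool:
    """True only if the whole parameter name matches the warning's regex."""
    return PARAM_RE.fullmatch(name) is not None
```

The dot in "python.version" is the character the validator rejects, which would make the warnings expected (if noisy) for those add-ons rather than a configuration mistake on your side.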
Hey, I need to build a report that covers approximately 500 thousand events. The requirement is that the report contain three rows: I need to count whether httpStatus is OK or not and classify each eventId in its proper position. (The requirement is a minimal number of rows; I can't duplicate or have more than 10 rows.) So basically the report looks like this: I have a uri column that contains all of my desired info, and all of my calculations (median, avg, percentage, etc.) are based on the time field, as follows:

|*MY SEARCH *
| stats count(request.uri) as totalCount
        values(uri) as uri
        values(timeTaken.total) as newTime
        perc95(timeTaken.total) as prec95
        perc5(timeTaken.total) as prec5
        median(timeTaken.total) as med
        avg(timeTaken.total) as average
        max(date) as maxDate
        min(date) as minDate
        values(timeTaken.total) as time
  by status
| table uri totalCount prec95 prec5 med average status maxDate minDate time

Now my question: I need to add a new line of totals based on the other lines. Because I'm using functions such as avg and median, I don't think I can use | addtotals. A very important note: the values in my time and uri columns are not distinct; they can appear more than once, so my calculations come out wrong, and I can't base a following stats on the previous one. I've tried using list(), but it has a limit of 100 values and I have hundreds of thousands. What can I do to add a total row that calculates over all of my events?
I've tried adding | appendpipe this way, based on the results from the stats command, but of course I got wrong values (because the time values are not distinct, while the values shown in the stats output are distinct). This is my report after adding the total calculation (which didn't work):

|*MY SEARCH *
| stats count(request.uri) as totalCount values(uri) as uri values(timeTaken.total) as newTime perc95(timeTaken.total) as prec95 perc5(timeTaken.total) as prec5 median(timeTaken.total) as med avg(timeTaken.total) as average max(date) as maxDate min(date) as minDate values(timeTaken.total) as time by status
| appendpipe
    [ stats sum(totalCount) as totalCount values(uri) as uri values(newTime) as newTime perc95(time) as prec95 perc5(time) as prec5 median(time) as med avg(time) as average
    | eval status="TOTAL" ]
| table uri totalCount prec95 prec5 med average status maxDate minDate time

I really hope I've made my question clear. Thanks in advance!
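The underlying issue is that a TOTAL row for median, average, and percentiles cannot be derived from the per-status summary rows; it has to see every raw duration, duplicates included. A Python sketch of what the total row should compute, using nearest-rank percentiles (Splunk's perc95/perc5 may interpolate slightly differently):

```python
from math import ceil
from statistics import mean, median

def overall_stats(durations):
    """Compute the TOTAL row from the raw, non-distinct duration values."""
    n = len(durations)
    ordered = sorted(durations)
    return {
        "totalCount": n,
        "med": median(durations),
        "average": mean(durations),
        # nearest-rank percentiles over the full raw list
        "prec95": ordered[min(n - 1, ceil(0.95 * n) - 1)],
        "prec5": ordered[min(n - 1, ceil(0.05 * n) - 1)],
    }
```

On the SPL side, the same principle suggests computing the TOTAL row from the raw events rather than from the stats output, e.g. appending a second stats pass over the base search (| append [ search *MY SEARCH* | stats ... | eval status="TOTAL" ]) instead of appendpipe over the summarized rows; that costs a second pass but keeps every duplicate value.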
Going through the documentation for the prompt block, I see there is a way to send the prompt to the dynamic role "Playbook run owner"; however, I am not seeing it as an option under the "User or Role" drop-down in my prompt block configuration panel. Is this an error, and if not, is there another way to send the prompt to the user who ran the playbook?
I have an AWS ECS cluster configured with Splunk logging and splunk-format: raw in the task definition, like below:

{
  "logConfiguration": {
    "logDriver": "splunk",
    "secretOptions": [
      {
        "valueFrom": "myarn",
        "name": "splunk-token"
      }
    ],
    "options": {
      "splunk-url": "my-splunk-url",
      "splunk-source": "my-splunk-source",
      "splunk-format": "raw"
    }
  }
}

All my dashboards in Splunk expect this format, but the messages are getting truncated at 4 KB. Changing the format to inline does not truncate the messages, but using the new format would require a lot of rework in the Splunk dashboards. Is there a way to make this work with splunk-format: raw without the messages getting truncated?
Hello there, to keep it simple: I am trying to figure out how to make one alert depend on another alert. Imagine triggering an alert because there is "fail" in some event, but if on the same day there is "success" from the same source, the first alert would be closed and the "success" would be alerted instead. Am I making any sense? Can anyone help? If it matters, I am using the Alert Manager add-on. Cheers,