I have a field in my events called type, which is a single-digit integer (1, 2, 3, etc.). I would like to create a new string field in my search based on that value. So, something like this pseudocode: if type = 1 then desc = "pre"; if type = 2 then desc = "current"; if type = 3 then desc = "post". I realize Splunk doesn't do if/then statements, but I thought that was the easiest way to explain. Thanks
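A minimal sketch of how this kind of mapping is usually written in SPL, using eval with case() (the field name desc is taken from the pseudocode above):

```
... | eval desc=case(type=1, "pre", type=2, "current", type=3, "post")
```

case() returns NULL when no clause matches; appending a final pair such as `true(), "unknown"` gives a catch-all default.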
Hi Everyone

Sample logs:

{"kubernetes":{"container_name":"sign-template-services","namespace_name":"merch-ps-signs-stress-1","pod_name":"sign-template-services-14-chfbn"},"message":"::ffff:100.65.19.1 - - [05-Mar-2020 09:58:48 CST] \"GET /health HTTP/1.1\" 200 30 - **7.807** ms\n","hostname":"ocp-usc1-lle-b-app-f-g3q9.c.kohls-openshift-lle.internal","@timestamp":"2020-03-05T15:58:48.231999+00:00","cluster_name":"ocp.gcpusc1-b.lle.xpaas"}

{"kubernetes":{"container_name":"sign-template-services","namespace_name":"merch-ps-signs-ci","pod_name":"sign-template-services-39-gb69d"},"message":"::ffff:100.109.92.1 - - [05-Mar-2020 09:57:31 CST] \"GET /health HTTP/1.1\" 200 30 - **33.245** ms\n","hostname":"ocp-usc1-lle-c-app-f-7ml9.c.kohls-openshift-lle.internal","@timestamp":"2020-03-05T15:57:31.808739+00:00","cluster_name":"ocp.gcpusc1-c.lle.xpaas"}

We need to extract a field called "Response_Time", which is highlighted in these logs (7.807 and 33.245). The data is available in the field "message". I have tried the regex below, but it does not seem to work.

index=kohls_prod_infrastructure_openshift_raw kubernetes.container_name=sign-template-services | rex field=MESSAGE "\d{3} d{2} - (?<Response_Time>\d+) ms\""

Please help! Thanks.
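A hedged sketch of a rex that matches the sample messages above. Three likely issues with the attempt: the field is named message (lowercase) in the sample JSON, the second \d is missing its backslash, and the response time contains a decimal point, so \d+ alone stops at the dot. Matching the "- <number> ms" tail directly avoids all three:

```
index=kohls_prod_infrastructure_openshift_raw kubernetes.container_name=sign-template-services
| rex field=message "-\s+(?<Response_Time>[\d.]+)\s+ms"
```

The character class [\d.] captures 7.807 and 33.245 whole; the earlier "- -" separators don't match because no digits follow them.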
I need to show a status symbol based on the status UP and DOWN, but I want it in a single value visualization, i.e. I need to replace the "UP" value with a green check circle and "DOWN" with a red cross circle. Below is the sample code:

<dashboard>
  <label>Test</label>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults | eval Status1="UP"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051", "0x0877a6", "0xf8be34", "0xf1813f", "0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">0</option>
        <option name="useThousandSeparators">1</option>
      </single>
      <single>
        <search>
          <query>| makeresults | eval Status2="DOWN"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051", "0x0877a6", "0xf8be34", "0xf1813f", "0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">0</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</dashboard>
Does anyone know the magic that will scale the preview window's y-axis down to a more meaningful range based on the data being previewed? Some charts scale automatically while many do not. I'm trying to figure out the difference. (Screenshots attached: bad scale vs. good scale.)
I have a use case in my organization that would require writing additional columns to events. The users want a dashboard that receives data every day, and each row of these records needs an additional column with a drop-down menu to choose from. The problem is that they want to "save" the drop-down selections "back to the dashboard", which I haven't seen done in Splunk before. I think it would require a | makeresults command whenever a Submit button is clicked at the bottom of the table in question. Has anyone ever seen or done something like this?
My ultimate goal is to process CSV files directly from an S3 bucket/prefix for Splunk to index. I have the Splunk Add-on for Amazon Web Services installed and was attempting to create a new input of custom data type using SQS-Based S3. I have a bucket created and an SQS queue created. As I fill out the AWS Input Configuration, everything is fine until I select the SQS Queue Name: I can see the SQS queue that I created, but when I select it the "Save" button turns grey and I cannot save the configuration. Also, I can no longer open the drop-down for the SQS Queue Name, as my mouse turns to a red circle with a line through it. I have not been able to figure this out. Any help would be appreciated.
AWS has announced that they will deprecate the path-based access model that is used to specify the address of an object in an S3 bucket, and this kicks in from 30th Sept 2020. Example:

Current format: https://s3.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
New format: https://jbarr-public.s3.amazonaws.com/images/ritchie_and_thompson_pdp11.jpeg

More info on this can be found here. My question is: what changes need to be made on the Splunk config side so that we continue to receive data (in my case CloudTrail) from S3 buckets with the new naming convention? I can only see from the config files that the only place where the "bucket_name" and "hostname" are referenced is in inputs.conf in Splunk_TA_aws. Do I need an upgrade of the Splunk TA to support this? I am currently on Splunk TA version 4.5.0 and Splunk version 7.1.1.
We are using the Splunk App for Jenkins on Splunk Cloud 7.2.9, but would like to upgrade to v8.0 soon. On Splunkbase, Splunk App for Jenkins 2.0.3 supports 8.0, but it is not yet certified for Splunk Cloud; hopefully soon. My concern is that the app v2.0.3 only lists support for 8.0 and 7.3. We would like to first upgrade the app on Splunk Cloud 7.2.9 and then upgrade to SC 8.0. Is there any reason why the Jenkins app 2.0.3 can't also support 7.2.x? Also, I would love to hear an estimated timeframe on when the Jenkins app 2.0.3 will be Splunk Cloud certified.
Is there a way to pull a report of all the users for my controller and when the last time was that they logged in? I know I can pull a controller report and see recent logins, but I would like to see the last activity for ALL users on my controller.
Hi all, how could I create a timechart if I just have an error message and no field value? For example:

index="*" thisistheerrormessage | timechart thisistheerrormessage span=1h

Thanks and BR, Michael
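A minimal sketch, assuming the goal is to chart how often the message occurs per hour: timechart needs an aggregation function rather than a bare field name, and count works without any field, since each matching event is one occurrence (the index="*" wildcard is kept from the question, though restricting to a specific index is cheaper):

```
index="*" "thisistheerrormessage" | timechart span=1h count
```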
The requirements state that all cluster systems must run the same OS. Does this include the same OS version level of a particular OS? For example: running the same Linux distro on all systems, but one could be version 12 vs. 15, or simply a different service pack level of the same major version, SP1 vs. SP2 and so on. Would this be acceptable? Or, with Windows, running 2016 and 2019 together in the same cluster. Thanks
Greetings all, there are some custom apps out there on universal forwarders. They may be working now, but they need to be put in custom deployment apps so that they are not lost. Any ideas on setting up an alert or report to track these forwarders with custom data inputs? Thanks!
How can I exclude a single value from a field which has multiple values in a single event? For example, in a single event there are two values for the same field: user.purchase="TRUE" and user.purchase="False". If I have to exclude user.purchase="False" and need to display only user.purchase="TRUE", how can I do it? I have already tried search user.purchase != "False", but it didn't work. I have also tried the NOT operator and the where command, but nothing worked. Can someone please help me exclude this value?
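A hedged sketch using mvfilter(): filtering with != or NOT keeps or drops whole events, not individual values inside a multivalue field, which would explain why those attempts didn't work. mvfilter() rewrites the value list itself (the single quotes around user.purchase are needed in eval because of the dot in the name; purchase is a hypothetical name for the filtered result):

```
... | eval purchase=mvfilter('user.purchase'!="False")
```

After this, purchase holds only the remaining value(s), "TRUE" in the example event.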
We upgraded from 7.0.2 to 7.3.4 and Dark Mode is not working in our SHC. Has anyone experienced this, or does anyone know the underlying files associated with Dark Mode?
Hi All, I need to show a pie chart for failed and succeeded values. We know those values from the field "type", but 3 of them would be considered failed and 1 succeeded. This is the current search:

type="bootup.bootupFailed" OR type="ForcedPortalReload" OR type="ClassLoadingFailed" OR type="allWidgetsInitializedAndLoaded" | stats dc(devRef) count by type

Of course this one shows me a pie with all 4... but I need the pie to show only succeeded and failed:
failed -> type="bootup.bootupFailed" OR type="ForcedPortalReload" OR type="ClassLoadingFailed"
succeeded -> type="allWidgetsInitializedAndLoaded"
How can I do it?
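A minimal sketch: collapse type into a two-valued status field before the stats, so the pie gets exactly two slices (the mapping follows the failed/succeeded grouping above; status is a hypothetical field name):

```
type="bootup.bootupFailed" OR type="ForcedPortalReload" OR type="ClassLoadingFailed" OR type="allWidgetsInitializedAndLoaded"
| eval status=if(type="allWidgetsInitializedAndLoaded", "succeeded", "failed")
| stats dc(devRef) AS distinct_devices count BY status
```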
Hey all, we have hit-and-miss identification of servers that fall off of Splunk monitoring. There needs to be a critical alert if a non-decommissioned server:
1. Stops reporting to Splunk, or
2. Stops phoning home to the deployment server
Is there a way to query the REST API from the search head to determine deployment server contact? Any help is much appreciated.
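A hedged sketch of the phone-home side, assuming the deployment server is reachable from the search head as a search peer and that its clients endpoint exposes a last-phone-home timestamp (the endpoint path is real; the exact field names may vary by version, so inspect the raw | rest output first):

```
| rest /services/deployment/server/clients
| eval last_phonehome=strftime(lastPhoneHomeTime, "%Y-%m-%d %H:%M:%S")
| table hostname ip last_phonehome
```

Alerting on a stale last_phonehome (or on hosts missing from the list entirely) would cover case 2; case 1 is usually handled separately via | metadata type=hosts and recentTime.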
I have a query in a dashboard which needs to run from the 15th of last month to the 14th of the current month. How can I set this time range in my query so that whenever it runs it takes the time range between the 15th of last month and the 14th of the current month? For example, the time range for running the query should be 2/15/20 12:00:00.000 AM to 3/14/20 12:00:00.000 AM.
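A minimal sketch using relative time modifiers: @mon snaps to 12:00 AM on the 1st of the month, and +14d / +13d then move forward to the 15th and the 14th respectively, matching the example range above (the index name is a placeholder):

```
index=your_index earliest=-1mon@mon+14d latest=@mon+13d
```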
Hi Splunkers, we are using Maps+ to visualize the location of a vehicle involved in a particular event. In the dashboard one can simply select the event name from a drop-down and the relevant location gets displayed on the map. The issue is that when a new token value is selected from the drop-down, the new markers get added to the previous markers on the map. Somehow the map is not fully refreshing and thus keeps showing old marker values alongside the ones for the selected event. It works fine if we refresh the browser window after selecting the new token value from the drop-down. Can anyone suggest a way to refresh the complete page, or at least the map, after selecting an event from the drop-down, so that only the new values are displayed on the map and the old values disappear? I would be really grateful for any help.
Currently we are using the following endpoints to add AWS accounts in the Splunk Add-on for AWS:

https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/configs/conf-aws_account_ext
https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/storage/passwords

We have a Python script that goes out using Boto3 and requests the Key ID and Secret Access Key for our AWS user. We get those keys and then send them to both endpoints to add one of our AWS accounts to the add-on. When doing this, it seems that if we get a Secret Access Key that has a '/' or a '+' sign (special character) in it, the account creation does happen, but then if we go to one of our account inputs, like an S3 input, we receive the following error:

"An error occurred (SignatureDoesNotMatch) when calling the GetCallerIdentity operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.. Please make sure the AWS Account and Assume Role are correct."

Now if I get a key that does NOT have a special character in it, then it works fine. I have searched and have not found a 100% fix for this anywhere, or what the proper way is to send a key with special characters to the API endpoint in Python. I am using the requests library. Does anyone have any suggestions? Any help will be MUCH appreciated.
I am experimenting on a test system and have a simple shell script that consists of one line to call Python 3 to run a Python script. Splunk 8.0.1 on RedHat 7.6. I do not understand what I am seeing, or why.

The shell script is called myprog.sh, lives in /data/splunk/etc/apps/myprog/bin, and consists of the call:

/data/splunk/bin/python3.7m /data/splunk/etc/apps/myprog/bin/myprog.py

myprog.py just runs a few function calls to see what the current directory and id are and what files the directory contains, and outputs them on STDERR so I can see them in splunkd.log:

$>cat myprog.py
#! /data/splunk/bin/python3
import sys
import os

def run():
    print("CWD ", os.getcwd(), " ID ", os.geteuid(), file=sys.stderr)
    files = os.listdir('.')
    for file in files:
        print(file, file=sys.stderr)

if __name__ == '__main__':
    run()

If run interactively, it reports the current directory is /data/splunk/etc/apps/myprog/bin and my id is 2000 (splunk), exactly as I would have expected. If configured and run through Settings -> Data Input -> Script (set to every 60 seconds), the current directory is reported as / - i.e. the root directory:

03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" CWD / ID 2000
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" boot
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" dev
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" home
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" proc
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" run
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" sys
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" tmp
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" var
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" etc
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" root
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" usr
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" bin
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" sbin
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" lib
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" lib64
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" media
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" mnt
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" opt
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" srv
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" .autorelabel
03-05-2020 12:58:04.884 +0100 ERROR ExecProcessor - message from "/data/splunk/etc/apps/myprog/bin/myprog.sh" data

Is this expected behaviour? It seems worrisome to me that an app could have access to the entire file system like this - even if as user splunk rather than user root. Furthermore, I guess if someone were to run this script on a system where splunkd was running as root (frowned on, I know), the results could be "interesting". Any thoughts would be welcome! Thanks.