All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I'm facing an issue where all my scheduled searches are triggered and there are no logs showing any error while trying to send the email, but I did not receive any email. My Splunk emailing system is integrated with AWS SES, and I'm able to send (and receive) email using the 'sendemail' command from a Splunk search, which tells me that the SES credentials are correct. Below are the logs showing no errors while trying to send email. Thanks!
2021-08-11 22:56:01,898 +0000 INFO sendemail:139 - Sending email. subject="|prod-us-west-2| SplunkAlert: TSD SES Email Alert Test ", results_link="https://aws-prod-west-splunk.tscloudservice.com/app/TsdApp/@go?sid=scheduler__admin__TsdApp__RMD5cd8884db53568853_at_1628722560_51987_FDE004B7-D863-4836-AC24-BFAEBA400C09", recipients="[u'muhammad.mufthi@example.com']", server="email-smtp.us-west-2.amazonaws.com"
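The INFO line above only shows that sendemail handed the message to the SMTP server; a failure after that handoff (for example on the SES side) would not appear there. One hedged starting point is to check Splunk's python.log, where sendemail writes its warnings and errors, for anything beyond the INFO entries:
index=_internal source=*python.log* sendemail (ERROR OR WARNING)
If that turns up nothing, the next place to look would likely be SES itself (bounce notifications or the suppression list) rather than Splunk.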
I need to restrict my Splunk instance to be accessible only on localhost. To do this, I created a new web.conf file and put it in /opt/splunk/etc/system/local. The file has the following contents:
[settings]
server.socket_host = 127.0.0.1
When I restart Splunk, I get the following:
Waiting for web server at http://[random char]:8000 to be available...........
Rather than seeing 127.0.0.1, I see random characters, and it just sits there. What am I missing in the config file?
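One way to verify which value splunkd is actually reading for this setting, and from which file, is btool; the path below assumes the default /opt/splunk install from the question:
# show the effective [settings] stanza of web.conf and where each value comes from
/opt/splunk/bin/splunk cmd btool web list settings --debug | grep socket_host
If the value shown here isn't the clean 127.0.0.1 you wrote, that would point at an encoding or precedence problem in the file rather than at Splunk Web itself.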
I was able to build a large dashboard with 10+ panels using the loadjob command spanning the last day of any triggered results. However, I am now looking to build the same dashboard where each panel spans a week (7 days) of any triggered results. loadjob was the only command that minimized the loading time of each panel. Is there any way to use loadjob, or a similar command, to show a timechart spanning 7 days? For example:
| loadjob savedsearch=tech123:Residential:"name of enabled alert" artifact_offset=0 | timechart span=1d count by incident_type
I've tried using earliest=-7d in every possible spot and I've used the time picker, but I haven't found a resolution yet... any thoughts, ideas, or solutions?
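For context: loadjob returns the cached artifact of the saved search's own scheduled run, so its time range is fixed by the saved search's dispatch window, not by the dashboard time picker or an earliest= in the loading search. A minimal sketch under that assumption — reschedule the saved search itself to run over the last 7 days, then load it unchanged:
| loadjob savedsearch=tech123:Residential:"name of enabled alert" artifact_offset=0
| timechart span=1d count by incident_type
The trade-off is that every panel then shows the same fixed 7-day window from the last scheduled run, which is what makes loadjob fast in the first place.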
Hey everyone, I wanted to see if anyone could help me with correlation searches firing and creating a notable event on the Incident Review page but then not producing the same 1-for-1 match when I run the search manually. What I did was look at a specific correlation search that fired on the Incident Review page over the last 24 hours. I then took that search and ran it in a new search with the 24-hour time-range picker. The notable events said that 77 events existed for that correlation search, but the search results would return either 0 or varying numbers if I let it finish and ran it over and over a few times (none of them being 77). I made sure it wasn't a count issue where an event had multiple counts that in total added up to the total number but was shown as only one row. The issue seems to be the data models: I run the searches from the index(es) and get vastly different numbers than the Incident Review page, which in turn is vastly different from the data model correlation search. Does anyone have any ideas on why I'm not getting a 1=1=1 match between Incident Review, the correlation search with data models, and the raw index searches? Any and all help/insight is greatly appreciated!
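One hedged way to pin down the Incident Review side of the comparison is to count the notables for that correlation search directly over the same window (`notable` is the standard ES macro; the search_name value below is a placeholder for your actual correlation search name):
`notable` | search search_name="<your correlation search>" | stats count
If this matches Incident Review but not the manual run, the gap is likely between data model acceleration state at trigger time and the data as it exists now, rather than in the notable framework.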
I have the Monitoring Console on the ES search head and have tried my own SPL, but I need your help, please. I need to find the apps and the names of the skipped searches, and why the searches were skipped. Thank you in advance.
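A common starting point, assuming the standard _internal scheduler logs are searchable from your instance:
index=_internal source=*scheduler.log* status=skipped | stats count by app, savedsearch_name, reason
The app, savedsearch_name, and reason fields in scheduler.log give exactly the three things asked for: which app, which search, and why it was skipped.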
By default, when you chart by '_time', the major ticks displayed over a multi-week time span always use Mondays in the major tick labels. Since I am allowing people to select individual days of the week for week-over-week comparisons, it would be nice to show the selected day of the week as the major tick label. Is there a charting control variable that can be used for this? In the attached screenshot, the user has selected 'Fridays' for comparison, yet the chart uses Mondays as the major tick label. Thanks in advance, Bob Eubanks
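There is a Simple XML charting option that controls major-tick spacing on a time axis, though as far as I know it sets the interval rather than which weekday the ticks anchor on, so this is a sketch of a partial answer at best:
<option name="charting.axisLabelsX.majorUnit">P7D</option>
If the anchor day can't be controlled, a workaround is to format _time into an explicit label with strftime (e.g. eval day_label = strftime(_time, "%a %m/%d")) and chart over that string instead of _time.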
I'm running a tiny proof-of-concept Splunk environment across 2 VMs. Splunk Enterprise (SE) is on VM1 (Ubuntu 20.04), version 8.1.1. The universal forwarder is on VM2 (Ubuntu 20.04) and is sending the Splunk_TA_nix add-on metric data back just fine. I have installed/configured version 7.3 of the Splunk Stream Add-on for Stream Forwarders on the universal forwarder and installed the Splunk Stream App on the SE VM, also version 7.3. On the forwarder, the following conf files are in /opt/splunkforwarder/etc/apps/Splunk_TA_stream/local:
----inputs.conf----
splunk_stream_app_location = https://10.0.2.15:8000/en-us/custom/splunk_app_stream/
stream_forwarder_id =
disabled = 0
----streamfwd.conf----
port = 8889
ipAddr = 127.0.0.1
I can't get the network stream data from the forwarder into the SE search/reporting app or the SE Stream app. The /opt/splunkforwarder/var/log/splunk/streamfwd.log is the only thing from the stream add-on on the forwarder that places any data in SE at all, and it includes an error that says:
(CaptureServer.cpp:2211) stream.CaptureServer - unable to ping server (<longerrorcode>): Unable to establish connection to 10.0.2.15: wrong version number
8.1 should be compatible with the 7.3 installs of either stream app, and I haven't seen anything mandating a specific version number anywhere. Things I have tried:
- I can successfully reach SE at https://10.0.2.15:8000.
- Modifying the .conf files in apps/default on the forwarder, which the docs say you're not supposed to do. Didn't work.
- All manner of switching port numbers in the .conf files.
- Restarting many, many times.
I am out of ideas. Can someone please help?
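For what it's worth, "wrong version number" is the generic OpenSSL error a TLS client reports when the other side answers in plain text, so one hedged thing to check is whether Splunk Web on 10.0.2.15:8000 is actually serving HTTPS. If it is plain HTTP, the add-on URL would need http:// instead. A sketch of the corresponding inputs.conf — the [streamfwd://streamfwd] stanza name is the add-on's usual default, shown here as an assumption since the post lists only the keys:
[streamfwd://streamfwd]
# use http:// if Splunk Web has enableSplunkWebSSL = false
splunk_stream_app_location = http://10.0.2.15:8000/en-us/custom/splunk_app_stream/
stream_forwarder_id =
disabled = 0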
Is there a way to tell which users are hitting their search limits? I've looked for these events in the internal indexes but can't find them. Maybe a REST search?
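Since you mention REST, one hedged option is to look at current running jobs per user and compare that against the role quotas; note this shows point-in-time concurrency rather than a history of limit hits:
| rest /services/search/jobs splunk_server=local | search dispatchState=RUNNING | stats count by author
The author field on the jobs endpoint is the user who dispatched the search, so the count per author is each user's current concurrency.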
I am new to the SOC environment. I was tasked with creating a personal dashboard. What items/data should I put into the dashboard?
I am trying to set up the Splunk Add-on for AWS to pull the logs from my AWS account into Splunk. I have Splunk Enterprise set up on an AWS EC2 server, using the Splunk Enterprise AMI, and I have attached an EC2 instance role that has administrator access. When I try to configure an input, I get the error:
Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid. Please make sure the AWS Account and Assume Role are correct.". See splunkd.log/python.log for more details.
Note: I did not add any account or IAM role manually in the Splunk UI. The IAM role was autodiscovered by Splunk and is visible in the Account tab on the Configurations page.
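One way to confirm whether the instance role credentials themselves are valid, independent of Splunk, is to call STS from the same EC2 instance. If this fails with the same InvalidClientTokenId, the problem is on the AWS side (role, metadata service, or region) rather than in the add-on:
aws sts get-caller-identity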
My log is formatted like this:
labels: {
    app: splunk-kubernetes-metrics
    app.kubernetes.io/managed-by: Helm
    chart: splunk-kubernetes-metrics-1.4.1
    engine: fluentd
    heritage: Helm
    release: splunk-monitor
How do I find a list of fields and their values? I want to list all the values in the labels field. Thanks!
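Assuming the event is (or can be) extracted as structured data with spath, a hedged way to list the labels.* fields and their values — the index name below is a placeholder:
index=your_index | spath | fieldsummary maxvals=20 | search field="labels.*"
fieldsummary returns one row per field with a values column of sample value/count pairs, which should cover both "list of fields" and "their values" in one pass.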
Here is my setup.
inputs.conf:
[script://./bin/lsof.sh]
interval = 600
sourcetype = lsof
source = lsof
props.conf:
[script://./bin/lsof.sh] #also tried [lsof] & [source::lsof]
TRANSFORMS-null = null_splunk_user, null_splunk_command, null_splunk, lsof_normal_queue
transforms.conf:
[null_splunk_user]
REGEX = ^\S+\W+\d+\W+splunk\W+
DEST_KEY = queue
FORMAT = nullQueue
[null_splunk_command]
REGEX = ^splunkd\W+\d+\W+splunk
DEST_KEY = queue
FORMAT = nullQueue
[null_splunk]
REGEX = ^splunkd
DEST_KEY = queue
FORMAT = nullQueue
[lsof_normal_queue]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue
Sample of data:
splunkd 52507 splunk cwd DIR 202,1 4096 2 /
splunkd 52507 splunk rtd DIR 202,1 4096 2 /
splunkd 52507 splunk txt REG 202,1 76073192 409182 /opt/splunk/bin/splunkd
python2.7 53347 splunk cwd DIR 202,1 4096 2 /
splunk 53347 splunk rtd DIR 202,1 4096 2 /
splunk 53347 splunk txt REG 202,1 577688 411002 /opt/splunk/bin/splunk
splunkd 887 root cwd DIR 259,1 4096 2 /
splunkd 887 root rtd DIR 259,1 4096 2 /
splunkd 887 root txt REG 259,1 76073192 401488 /opt/splunk/bin/splunkd
On the indexer you can see the props & transforms rules:
/opt/splunk/bin/splunk cmd btool props list --debug | grep lsof
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/props.conf [lsof]
/opt/splunk/bin/splunk cmd btool transforms list --debug | grep null_splunk
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/transforms.conf [null_splunk]
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/transforms.conf [null_splunk_command]
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/transforms.conf [null_splunk_user]
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/transforms.conf [lsof_normal_queue]
I've tried multiple iterations of regexes/props/transforms and have been restarting the index cluster after each update, to no avail. The majority of the data I'm attempting to drop is on the indexers themselves: Splunk monitoring Splunk.
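Two things in this setup stand out. First, props.conf takes sourcetype or source:: stanzas, not script:// paths, so [lsof] is the variant that should apply (which matches what btool shows). Second, transforms in a TRANSFORMS- list run in the order listed, and each matching transform overwrites the queue key — so the catch-all lsof_normal_queue (REGEX = .) runs last, matches every event, and sends everything back to indexQueue, undoing the three nullQueue rules before it. A sketch of a corrected props.conf under those assumptions (events already route to indexQueue by default, so the catch-all can simply be dropped):
[lsof]
# drop the splunk-process lines; everything else indexes by default
TRANSFORMS-null = null_splunk_user, null_splunk_command, null_splunk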
Hi, I have a lookup that looks like this:
clientid  url
abc       accounts/*/balance
abc       accounts/*/name
xyz       /user/*/details
And I have a log like:
app  endpoint                    responsecode
ms1  accounts/12345/balance      200
ms2  prod/accounts/98765/name    500
...
ms1  /user/randomuserid/details  403
I want to search with the url field from the lookup, which contains wildcards and doesn't exactly match the endpoint field of the log (it's like this: *url* == endpoint). I am trying to get a result like this:
app  url                 clientid
ms1  accounts/*/balance  abc
ms1  /user/*/details     xyz
ms2  accounts/*/name     abc
Is this doable via inputlookup? I have around 2,500 rows in my lookup file.
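One hedged approach is a lookup definition with wildcard matching rather than inputlookup; the stanza and file names below are placeholders. Note that WILDCARD matching compares the whole field, so patterns like accounts/*/name may need a leading * in the CSV to also catch endpoints with a prod/ prefix:
transforms.conf:
[client_urls]
filename = client_urls.csv
match_type = WILDCARD(url)
Search (index name is a placeholder):
index=applogs | lookup client_urls url AS endpoint OUTPUT clientid, url | table app, url, clientid
Here "url AS endpoint" matches the lookup's wildcard url column against each event's endpoint field, and OUTPUT returns both the clientid and the matching pattern itself.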
It would be appreciated if I could get a response to the below. We have a new request to integrate IBM Identity Verify with Splunk; we are replacing the old ISIM/ISAM. Is there an app for this, or has anyone integrated IBM Identity Verify with Splunk who can share some insight? Thanks
Hi, is it possible to have one PC driving four monitors, with different data displayed on each monitor? Thanks
We want to replicate this table (especially the circled row). We have to divide the data (from 1 to 3 and from 4 to 6) for each week of the month, but we don't actually know whether it's possible to replicate the table exactly using Splunk. Is there a way to do it?
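The attached table isn't visible here, so this is a guess at the intent: if "1 to 3" and "4 to 6" mean days of the week within each week of the month, a sketch of the bucketing would be:
| eval dow = tonumber(strftime(_time, "%u")), half = if(dow <= 3, "days 1-3", "days 4-6"), week_of_month = ceil(tonumber(strftime(_time, "%d")) / 7)
| chart count over week_of_month by half
Both split fields (half and week_of_month) are assumptions about the table's layout; the eval/chart pattern itself carries over to whatever the real grouping is.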
Hi all, we are trying to push the props and transforms config files from the Cluster Master to all indexers. The source types are visible, but the rules from the config files are not applied. Please assist with this issue. Thanks in advance.
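Assuming the files sit under $SPLUNK_HOME/etc/master-apps/<app>/local on the Cluster Master, they only reach the peers after the bundle is applied; and index-time props/transforms only take effect on the instance that first parses the data, so events arriving via a heavy forwarder would need the rules there instead of on the indexers. A sketch of the push and verification:
# on the Cluster Master
/opt/splunk/bin/splunk apply cluster-bundle
/opt/splunk/bin/splunk show cluster-bundle-status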
Hi all, is it possible to pass multiple values in a token from one search to another? This is what I'm trying to do. First panel search:
Index="some_DHCP" | where src_hostname like "1-computer" | search src_ip=* | dedup src_ip | table src_hostname src_ip
src_hostname     src_ip
1-computer       10.0.0.1
1-computer       10.0.0.2
From this search I might get one or more src_ip values, depending on the timespan, and I want to use them all in the next search in another panel. So far I pass the value to the next search like this:
<done>
<set token="IP_answ">$result.src_ip$</set>
</done>
Second panel search:
Index="some_FW" src_ip="$IP_answ$" dest_ip=* | table src_ip dest_ip
As it is now, only one IP (the latest) is passed to the next panel's search via "IP_answ". I can understand that, but searching the web and this community I cannot find any solution for passing multiple values so that the second IP is also included in the second panel. Any suggestions? Thanks in advance and regards, /Tomas
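One hedged pattern is to build the whole OR expression inside the first search and pass that single string as the token; the ip_filter field name below is an invention for illustration:
Index="some_DHCP" | where src_hostname like "1-computer" | search src_ip=* | stats values(src_ip) AS ips | eval ip_filter = "(src_ip=\"" . mvjoin(ips, "\" OR src_ip=\"") . "\")"
Then set the token from that field with <set token="IP_answ">$result.ip_filter$</set>, and let the second panel drop it in unquoted: Index="some_FW" $IP_answ$ dest_ip=* | table src_ip dest_ip. With the two IPs above, the token expands to (src_ip="10.0.0.1" OR src_ip="10.0.0.2").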
Hello all, Is it possible to use the "Splunk Add-on for CyberArk EPM" when CyberArk EPM is integrated with SAML? https://splunkbase.splunk.com/app/5160/#/overview  
Hello! We have an index with Cisco events, and we now need to parse some fields such as device_mac and device_name. But we can't do it with a simple regex because we get unstructured data from Cisco (the fields are swapped): for example, in one log the device type comes first and then the MAC, while in the next one the MAC comes first and then the device type. Could you please help me? How can I parse these fields? Thanks!
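Since a MAC address has a fixed shape regardless of where it appears in the event, one hedged option is to extract it by pattern alone and treat whatever remains as the device name. The regex below assumes colon/dash-separated or Cisco dotted-quad MAC formats; adjust it to your raw events:
| rex "(?<device_mac>(?:[0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}|(?:[0-9A-Fa-f]{4}\.){2}[0-9A-Fa-f]{4})"
Because the extraction keys on the MAC's format rather than its position, it works the same whether the device type comes before or after it.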