All Topics



I am trying to set up the Splunk Add-on for AWS to pull logs from my AWS account into Splunk. I have Splunk Enterprise running on an AWS EC2 instance, built from the Splunk Enterprise AMI, and I have attached an EC2 instance role that has administrator access. When I try to configure an input, I get this error:

Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid. Please make sure the AWS Account and Assume Role are correct.". See splunkd.log/python.log for more details.

Note: I did not add any account or IAM role manually in the Splunk UI. The IAM role was autodiscovered by Splunk and is visible in the Account tab on the Configuration page.
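For reference, one way to narrow this down is to confirm the instance role works outside Splunk before blaming the add-on. A minimal check from the EC2 host itself, assuming the AWS CLI is installed; the second command is the same GetCallerIdentity call the error message references:

    # List the role attached to the instance via the metadata service
    curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

    # Ask STS who the instance credentials belong to
    aws sts get-caller-identity

If GetCallerIdentity succeeds here but fails in Splunk, the add-on may be holding a stale or manually entered key pair rather than using the instance role.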
My log is formatted like this:

labels: {
    app: splunk-kubernetes-metrics
    app.kubernetes.io/managed-by: Helm
    chart: splunk-kubernetes-metrics-1.4.1
    engine: fluentd
    heritage: Helm
    release: splunk-monitor

How do I find a list of fields and their values? I want to list all the values under the labels field. Thanks!
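A sketch of one way to do this with fieldsummary, assuming the events live in an index and sourcetype you substitute for the placeholders below:

    index=your_metrics_index sourcetype=your_sourcetype
    | fieldsummary
    | search field="labels.*"
    | table field distinct_count values

fieldsummary returns one row per field with its distinct values, so filtering on field="labels.*" restricts the output to the nested labels fields.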
Here is my setup.

inputs.conf:

[script://./bin/lsof.sh]
interval = 600
sourcetype = lsof
source = lsof

props.conf:

[script://./bin/lsof.sh]
# also tried [lsof] and [source::lsof]
TRANSFORMS-null = null_splunk_user, null_splunk_command, null_splunk, lsof_normal_queue

transforms.conf:

[null_splunk_user]
REGEX = ^\S+\W+\d+\W+splunk\W+
DEST_KEY = queue
FORMAT = nullQueue

[null_splunk_command]
REGEX = ^splunkd\W+\d+\W+splunk
DEST_KEY = queue
FORMAT = nullQueue

[null_splunk]
REGEX = ^splunkd
DEST_KEY = queue
FORMAT = nullQueue

[lsof_normal_queue]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue

Sample of data:

splunkd 52507 splunk cwd DIR 202,1 4096 2 /
splunkd 52507 splunk rtd DIR 202,1 4096 2 /
splunkd 52507 splunk txt REG 202,1 76073192 409182 /opt/splunk/bin/splunkd
python2.7 53347 splunk cwd DIR 202,1 4096 2 /
splunk 53347 splunk rtd DIR 202,1 4096 2 /
splunk 53347 splunk txt REG 202,1 577688 411002 /opt/splunk/bin/splunk
splunkd 887 root cwd DIR 259,1 4096 2 /
splunkd 887 root rtd DIR 259,1 4096 2 /
splunkd 887 root txt REG 259,1 76073192 401488 /opt/splunk/bin/splunkd

On the indexer, btool shows that the props and transforms rules are in place:

/opt/splunk/bin/splunk cmd btool props list --debug | grep lsof
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/props.conf [lsof]

/opt/splunk/bin/splunk cmd btool transforms list --debug | grep null_splunk
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/transforms.conf [null_splunk]
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/transforms.conf [null_splunk_command]
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/transforms.conf [null_splunk_user]
/opt/splunk/etc/slave-apps/Splunk_TA_nix/local/transforms.conf [lsof_normal_queue]

I've tried multiple iterations of regexes, props, and transforms, and I've been restarting the indexer cluster after each update, to no avail. The majority of the data I'm attempting to drop comes from the indexers themselves: Splunk monitoring Splunk.
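For comparison: props.conf stanzas match on sourcetype, source, or host, never on the inputs.conf stanza name, and the transforms in a TRANSFORMS- list run in order, so a catch-all like REGEX = . placed last matches every event and sets the queue back to indexQueue, overriding the earlier nullQueue routing. A minimal sketch of the usual pairing, assuming sourcetype=lsof:

    # props.conf on the indexers (slave-apps pushed from the cluster master)
    [lsof]
    TRANSFORMS-null = null_splunk_user, null_splunk_command, null_splunk

    # transforms.conf: matching events are routed to the nullQueue and dropped
    [null_splunk_user]
    REGEX = ^\S+\s+\d+\s+splunk\s
    DEST_KEY = queue
    FORMAT = nullQueue

Events that match no transform stay in the default indexQueue, so an explicit indexQueue catch-all is not needed.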
Hi,

I have a lookup that looks like this:

clientid    url
abc         accounts/*/balance
abc         accounts/*/name
xyz         /user/*/details

And I have log events like:

app    endpoint                     responsecode
ms1    accounts/12345/balance       200
ms2    prod/accounts/98765/name     500
...
ms1    /user/randomuserid/details   403

I want to search using the url field from the lookup, which contains wildcards and does not exactly match the endpoint field of the log (the relationship is more like *url* == endpoint).

I am trying to get a result like this:

app    url                  clientid
ms1    accounts/*/balance   abc
ms1    /user/*/details      xyz
ms2    accounts/*/name      abc

Is this doable via inputlookup? I have around 2500 rows in my lookup file.
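One approach is a wildcard lookup rather than inputlookup. A sketch, assuming a hypothetical lookup definition named client_urls backed by client_urls.csv; note that patterns such as accounts/*/name may need a leading * to match endpoints carrying a prefix like prod/:

    # transforms.conf
    [client_urls]
    filename = client_urls.csv
    match_type = WILDCARD(url)
    max_matches = 1

And then at search time:

    index=your_app_index
    | lookup client_urls url AS endpoint OUTPUT clientid url
    | table app url clientid

With match_type = WILDCARD(url), the url column in the CSV is treated as a pattern, so each event's endpoint is matched against the wildcards instead of requiring an exact string match.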
I would appreciate a response to the question below. We have a new request to integrate IBM Identity Verify with Splunk; we are replacing the old ISIM/ISAM. Is there an app for this, or has anyone integrated IBM Identity Verify with Splunk who can share some insight? Thanks
Hi. Is it possible to have one PC driving four monitors, with different data displayed on each monitor? Thanks
We want to replicate this table (especially the circled row). We have to divide the data (from 1 to 3 and from 4 to 6) for each week of the month, but we don't know whether it's possible to replicate the table exactly in Splunk. Is there a way to do it?
Hi All,

We are trying to push props and transforms config files from the cluster master to all indexers. The source types are visible, but the rules from the config files are not being applied. Please assist with this issue. Thanks in advance.
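For reference, a configuration bundle push from the cluster master looks like the sketch below, assuming the props/transforms live in an app under $SPLUNK_HOME/etc/master-apps/. Note that index-time rules only apply to data arriving after the bundle is applied, not to events already indexed:

    # On the cluster master
    splunk validate cluster-bundle --check-restart
    splunk apply cluster-bundle
    splunk show cluster-bundle-status

If the bundle status shows the peers up to date but the rules still don't fire, the stanza names in props.conf (sourcetype, source::, or host::) are the next thing to check against the incoming data.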
Hi all,

Is it possible to pass multiple values in a token from one search to another? This is what I am trying to do.

First panel search:

index="some_DHCP"
| where src_hostname like "1-computer"
| search src_ip=*
| dedup src_ip
| table src_hostname src_ip

src_hostname    src_ip
1-computer      10.0.0.1
1-computer      10.0.0.2

From this search I might get one or more src_ip values, depending on the timespan, and I want to use them all in the next search in another panel. So far I pass the value to the next search like this:

<done>
  <set token="IP_answ">$result.src_ip$</set>
</done>

Second panel search:

index="some_FW" src_ip="$IP_answ$" dest_ip=*
| table src_ip dest_ip

As it is now, only one IP (the latest) is passed to the next panel's search via IP_answ. I understand why, but I cannot find a solution, either on the web or in this community, for passing multiple values and including the additional IPs in the second panel's search.

Any suggestions? Thanks in advance and regards,
/Tomas
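One way to pass several values is to collapse them into a single token that already contains valid SPL. A sketch, using a hypothetical field name ip_filter:

    index="some_DHCP" src_hostname="1-computer" src_ip=*
    | stats values(src_ip) AS src_ip
    | eval ip_filter="src_ip IN (\"" . mvjoin(src_ip, "\",\"") . "\")"

stats collapses everything to one result row, so $result.ip_filter$ picks up the whole list. Set the token with <set token="IP_answ">$result.ip_filter$</set> and reference it bare in the second panel: index="some_FW" $IP_answ$ dest_ip=* | table src_ip dest_ip. The token then expands to something like src_ip IN ("10.0.0.1","10.0.0.2").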
Hello all, Is it possible to use the "Splunk Add-on for CyberArk EPM" when CyberArk EPM is integrated with SAML? https://splunkbase.splunk.com/app/5160/#/overview  
Hello! We have an index with Cisco events and we need to parse out some fields, such as device_mac and device_name. We can't do it with a simple regex because the data from Cisco is unstructured and the fields are swapped between events. For example, in one log the device type comes first, followed by the MAC, while in the next the MAC comes first, followed by the device type. Could you please help me parse these fields? Thanks!
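Without seeing the raw events it is hard to be definitive, but one common trick is to key the extraction on the value's shape rather than its position. A sketch that pulls a MAC address wherever it appears, covering both colon/dash-separated and Cisco dotted notation:

    | rex field=_raw "(?i)(?<device_mac>(?:[0-9a-f]{2}[:-]){5}[0-9a-f]{2}|(?:[0-9a-f]{4}\.){2}[0-9a-f]{4})"

device_name would need a similar anchor, for example a literal label that precedes it in the event, which depends on the exact log format.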
Dear Community,

I have the following search query:

index="myIndex" host="myHost" source="mySource.log" 2021081105302743 "started with profile"

The above gives me the following result:

Progam has run, 2021081105302743 started with profile TEST_PROFILE_01

I would like to remove everything before TEST_PROFILE_01, giving me just the profile. I do not know beforehand which profile is used, so I guess what I want is:

1. Remove everything before "profile"
2. Also remove "profile" itself

Then I want to display the profile in a Single Value panel.

I have used the line below in a table before, but now that I am using a Single Value panel I don't know which field to use. Also, if I use a string instead of the # below, it won't work:

| eval _raw = replace(_raw,"^[^#]*#", "")

I have two questions:

1. When using a Single Value panel, which field do I use in place of _raw in the search above? When I run the query at the top, the data is shown in the Event field; is that the field I should use?
2. In place of the # I would like to use "profile", but I don't know how to adjust the regex accordingly.

I could use some help on this matter. Thanks in advance.
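One way to get just the profile into its own field, rather than rewriting _raw, is a rex extraction. A sketch, assuming the profile name contains no spaces:

    index="myIndex" host="myHost" source="mySource.log" "started with profile"
    | rex "started with profile\s+(?<profile>\S+)"
    | stats latest(profile) AS profile

A Single Value panel then renders the profile field directly, so there is no need to choose between _raw and Event.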
Hello, when I search index=alfa_cisco_ice I see this error:

AutoLookupDriver - Could not load lookup='LOOKUP-cisco_asa_ids_lookup' reason='Error in 'lookup' command: Must specify one or more lookup fields.'

Please help: how do I fix this problem?

In the search job inspector I also see a lot of log entries like:

SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cisco_dest_ipv6'.
SearchOperator:kv - Invalid key-value parser, ignoring it, transform_name='cisco_fw_connection'
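The "Must specify one or more lookup fields" error generally points at an automatic lookup definition that lost its field mapping (the part after the lookup name in the LOOKUP-... setting in props.conf). A hedged way to inspect what Splunk actually sees, using btool:

    splunk btool props list --debug | grep -i cisco_asa_ids_lookup
    splunk btool transforms list --debug | grep -i cisco_asa

The --debug flag shows which app and file each setting comes from, which helps find the copy of the stanza that is missing its input/output fields.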
Greetings. We cloned a working group in LDAP and expected the cloned group to show up on the Splunk LDAP page under its new LDAP group name. The LDAP/Windows team has confirmed that the cloned group looks fine on their side. Is there a configuration I need to set so that the newly cloned LDAP group becomes visible?
I have a report with a maximum of 300,000 results, but my SHC is failing to send the CSV in an email. Below are the current settings. I tried changing them to 300000, but it still isn't working, and I restarted after the change. FYI: reports containing fewer than 175,000 results work perfectly fine. Can someone help me with this?

$SPLUNK_HOME/etc/system/local/limits.conf

[scheduler]
max_action_results = 175000

[searchresults]
maxresultrows = 175000

$SPLUNK_HOME/etc/system/local/alert_actions.conf

[default]
maxresults = 175000
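For comparison, a sketch of the three settings raised together; the [email] stanza is more specific than [default], and on a search head cluster these files should be distributed via the deployer rather than edited locally on one member (treat the exact values as assumptions to tune):

    # $SPLUNK_HOME/etc/system/local/limits.conf
    [scheduler]
    max_action_results = 300000

    [searchresults]
    maxresultrows = 300000

    # $SPLUNK_HOME/etc/system/local/alert_actions.conf
    [email]
    maxresults = 300000

Also worth checking: the mail server itself may cap attachment size, which would fail large CSVs independently of any Splunk setting.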
Hi community, I have the following tstats search:

| tstats count WHERE fromzone="*INTRANET*" index=*_*_* by index source getport

The getport field is always 5 digits long and differs per index, e.g. index A has port 22001, index B has 25003, and index C has 35002. Now I want to filter out all values of the getport field that do not end in "1". Thanks for your help!
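A sketch of one way to do that, appending a filter to the tstats output (flip the condition with NOT if the intent is the opposite):

    | tstats count WHERE fromzone="*INTRANET*" index=*_*_* by index source getport
    | where like(getport, "%1")

The like() function treats % as a wildcard, so "%1" keeps only rows whose getport value ends in 1.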
Hello, I'm asking for your help to merge two indexes. The first index contains plain JSON documents. The second index also contains JSON documents, but with an array of sub-documents. For example:

First index:

{
  "field1": "value1",
  "field2": "value2"
}

Second index:

{
  ...other fields...
  "documents": [{
    "field1": "value1",
    "field2": "value2"
  }, {
    "field1": "value1",
    "field2": "value2"
  }]
}

I want to retrieve and flat-map the documents array from the second index, then merge the result with the first index so I can run stats operations over both. Thank you.
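A hedged sketch of the flat-map-and-union pattern with spath and mvexpand, assuming the field names from the example and placeholder index names:

    (index=first_index) OR (index=second_index)
    | spath path=documents{} output=document
    | mvexpand document
    | eval field1=coalesce(spath(document, "field1"), field1)
    | eval field2=coalesce(spath(document, "field2"), field2)
    | stats count by field1 field2

Events from the first index have no documents array, so spath leaves them untouched, mvexpand passes them through, and coalesce falls back to their top-level fields.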
Hello Splunkers. We want to deploy a Splunk product in our environment to monitor infrastructure along with some automations, such as:

1. Predictive analysis: finding a defect before it actually appears in the infrastructure.
2. Minimizing MTTR, MTTD, and MTTI.
3. Executing scripts on a destination machine to resolve a defect, such as deleting garbage files to free up storage.

Thanks in advance.
Hi all, my question is about the Splunk Security Essentials add-on.

I have several Splunk instances, each with its own searches, and I ingested these into Security Essentials (SSE). Now I want to gather the content from these different SSE instances into one.

What I did was use the export-to-JSON function: from the Manage Snapshots page I pressed the export button and got a base64-encoded JSON output. This works! But if I now search my bookmarks, I need to restore each snapshot individually to see its content. What I want is one snapshot with all my content in it (all snapshots merged together).

I tried merging the contents of the sse_bookmarks_backup files, but then the restore button no longer works.
Hi Splunkers,

I'm trying to read some data from MS SQL Server; the data is JSON-like. It works for a while, and then I encounter this message:

ERROR HttpInputDataHandler - Failed processing http input, token name=db-connect-http-input, channel=n/a, source_IP=127.0.0.1, reply=6, events_processed=802, http_input_body_size=11904838
ERROR HttpInputDataHandler - Parsing error : While expecting event's raw text: String value too long. valueSize=5246755, maxValueSize=5242880, totalRequestSize=11904838

After that, no data comes in. Is there any way to increase maxValueSize, or does my problem originate elsewhere? Thanks in advance.
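The 5 MB figure (maxValueSize=5242880) is a per-value limit in HEC's JSON parsing; whether and where it is tunable varies by Splunk version, so a sensible first step is to inspect the effective [http_input] settings rather than guessing at names:

    splunk btool limits list http_input --debug

Independently of any limit change, reducing the DB Connect batch/row size so that no single event exceeds the cap is a workaround worth considering, since one oversized row can stall the whole input.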