All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I'm Lily. I want to collect network traffic data from a Keysight Vision E10S (a smart tap device). How can I ingest it using the Stream forwarder?
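A minimal sketch of the usual Splunk Stream pattern, assuming the Vision E10S mirrors the tapped traffic to a capture interface on a host running the Stream forwarder TA (Splunk_TA_stream); the URL below is a placeholder for your Stream app location:

[streamfwd://streamfwd]
splunk_stream_app_location = https://<search_head>:8000/en-us/custom/splunk_app_stream/
disabled = 0

The forwarder sniffs the interface the packet broker feeds; which protocols to capture is then selected in the Splunk App for Stream UI.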
Hi all, we have a Monitoring Console, and after a recent release we observed that the aggregator queue, typing queue, and index queue fill ratios all reached 100%. I have checked the indexer performance dashboards in the Monitoring Console and wasn't able to find any relevant error that might have caused it. The data ingestion rate shown in the license console looks the same as every other day. Can someone please point me to the right steps to troubleshoot this? Thanks.
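A minimal sketch of a search over the internal metrics.log that shows which queue saturates, and when (aggqueue, typingqueue, and indexqueue are the standard internal queue names; verify them in your environment):

index=_internal source=*metrics.log group=queue name=aggqueue OR name=typingqueue OR name=indexqueue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart span=5m perc90(fill_pct) by name

Queues usually back up from the tail of the pipeline, so if indexqueue fills first, the bottleneck is typically disk I/O or the indexing tier rather than parsing.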
I used the Splunk Add-on for AWS to send log files stored in S3 to SQS using S3 event notifications, and configured Splunk to read the log files from SQS. However, I got an error saying that the S3 test message, which S3 event notifications always send first, could not be parsed. Splunk on EC2 is granted KMS decryption privileges as shown below.

{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Action": [
        "sqs:*",
        "s3:*",
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:sqs:ap-northeast-1:*************:poc-splunk-vpcflowlog*",
        "arn:aws:s3:::poc-splunk-vpcflowlog",
        "arn:aws:s3:::poc-splunk-vpcflowlog/*"
    ]
}

What could be the cause?
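Two hedged observations on this setup. First, S3 sends a one-time s3:TestEvent message when event notifications are configured; it is not real log data, so a parse error on that single message is often harmless. Second, kms:Decrypt only takes effect if the KMS key's ARN is included in Resource; a sketch of the possibly missing entry, assuming a customer-managed key (the ARN below is a placeholder):

"arn:aws:kms:ap-northeast-1:<account-id>:key/<key-id>"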
Hello! We keep going over our license usage. We can't seem to find what is causing us to go over; it has happened three times now. Any suggestions on how to find the cause, please?
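A minimal sketch, assuming you can search the license master's internal logs (b, st, and idx are the standard license_usage.log field names for bytes, sourcetype, and index):

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by st, idx
| eval GB=round(bytes/1024/1024/1024,2)
| sort - GB

Run it over a day you went over and over a normal day, and compare the two to spot the sourcetype or index that spiked.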
Hello, good day team! How are you? I downloaded and installed this app, but I can't find the "TA genesys cloud". Where can I download it? Does the TA live in another repository? Please, could you help me get this TA? If the TA isn't currently on Splunkbase, could you send it to me via email? Regards in advance! Carlos Martínez. carloshugo.martinez@edenred.com Edenred.
Hi Splunkers, I have a problem with a per-event index routing use case. In the environment involved, some data is currently collected in an index named ot. Some of these logs must be split out and redirected to other indexes, with the naming convention ot_<technology>. The inputs.conf file involved is placed under a dedicated app named simply customer_inputs. The procedure is very clear to us: inside the app above, we created props.conf and transforms.conf and worked with keys and regexes. The strange behavior is this: if we redirect one kind of log, it works perfectly; when we add another log subset, nothing works properly. Let me share an example.

Scenario 1
In this case, we want: Windows logs must go to the ot_windows index; all remaining logs must still go to the ot index. We can identify the logs involved based on ports; they arrive as a network input on port 514/UDP, in CEF format.

First, our props.conf:

[source::udp:514]
TRANSFORMS-ot_windows = windows_logs

Second, our transforms.conf:

[windows_logs]
SOURCE_KEY = _raw
REGEX = <our_regex>
DEST_KEY = _MetaData:Index
FORMAT = ot_windows

This configuration works fine: Windows logs go to the ot_windows index, and all remaining ones still go to the ot index. Then we try another configuration, explained in the second scenario.

Scenario 2
In this case, we want: Nozomi logs must go to the ot_nozomi index; all remaining logs must still go to the ot index. Again, we can identify the logs involved based on ports; they arrive as a network input on port 514/UDP, in CEF format.

First, our props.conf:

[source::udp:514]
TRANSFORMS-ot_nozomi = nozomi_logs

Second, our transforms.conf:

[nozomi_logs]
SOURCE_KEY = _raw
REGEX = <our_second_regex>
DEST_KEY = _MetaData:Index
FORMAT = ot_nozomi

Again, this conf works fine: all Nozomi logs go to the dedicated index, ot_nozomi, while all remaining ones still go to the ot index.

ISSUE
If we apply either of the configurations above on its own, we get the expected behavior. However, when we try to merge them, nothing works: both Windows and Nozomi logs keep going to the ot index. Since each works fine on its own, we suspect the error is not in the regexes but in how we perform the merge. Currently, our merged conf files look like this:

props.conf:

[source::udp:514]
TRANSFORMS-ot_windows = windows_logs
TRANSFORMS-ot_nozomi = nozomi_logs

transforms.conf:

[windows_logs]
SOURCE_KEY = _raw
REGEX = <our_regex>
DEST_KEY = _MetaData:Index
FORMAT = ot_windows

[nozomi_logs]
SOURCE_KEY = _raw
REGEX = <our_second_regex>
DEST_KEY = _MetaData:Index
FORMAT = ot_nozomi

Is our assumption right? If so, what is the correct merge structure?
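A minimal sketch of an alternative merge, assuming the problem lies in how the two transforms classes combine: putting both transforms in a single TRANSFORMS- class guarantees they run in the listed order within one stanza (verify the behavior for your Splunk version):

[source::udp:514]
TRANSFORMS-ot_routing = windows_logs, nozomi_logs

With two separate TRANSFORMS- classes in the same stanza, the classes are processed in ASCII order of their names, so ot_nozomi would run before ot_windows; checking the effective config with splunk btool props list source::udp:514 --debug shows what actually merges on the indexer.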
I installed Splunk Enterprise on a server named s1. I installed a forwarder on server f1. Both are Windows Server 2019. When I go into Forwarder Management, s1 sees f1, but I can't DO anything with it. There's nothing on the Forwarder Management screen to CONFIGURE. If I go to Settings | Data Inputs and try to configure "Remote Performance monitoring" (just as a test, just to monitor something), it says it's going to use WMI and that I should use a forwarder instead. Yes, please. I want to use a forwarder instead. I want to use my new forwarder, but I just don't see how.
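A minimal sketch of how the pieces usually fit together, assuming s1 acts as the deployment server on the default management port 8089: Forwarder Management only distributes apps, so the inputs live in an app you deploy (for example, one containing a perfmon inputs.conf), and f1 must phone home via a deploymentclient.conf like this:

[deployment-client]

[target-broker:deploymentServer]
targetUri = s1:8089

Once f1 checks in, create a server class in Forwarder Management, map the app to it, and the forwarder picks up the inputs.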
Below is the regex used. We want to extract the following fields: DIM, TID, APPLICATION, POSITION, CORRLATIONID. The rex I used extracts DIM, TID, and APPLICATION as one field, but we need them separately. We also need to write the rex generically, so that it captures the data even when the field names differ.
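A minimal sketch of a generic extraction, assuming the raw events carry KEY=value pairs (the delimiter and character classes are assumptions to adjust against the real data):

... | rex field=_raw max_match=0 "(?<_KEY_1>[A-Za-z]+)\s*=\s*(?<_VAL_1>[^,\s]+)"

The _KEY_1/_VAL_1 named groups make rex create one field per captured key, so DIM, TID, APPLICATION, POSITION, and CORRLATIONID each land in their own field, and new field names are picked up automatically.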
So I'm trying to use #splunkcloud to make calls to a RESTful API for which there is no add-on or app available on Splunkbase. There is nothing under Settings > Data Inputs > Local Inputs to accomplish this task... which kind of blows my mind. Has anyone found a solution for this or something similar? TIA
Hi, we get the following exceptions while trying to load APM agent 24.3 in WebLogic 14.1:

java.lang.IllegalAccessError: class jdk.jfr.internal.SecuritySupport$$Lambda$225/0x0000000800979c40 (in module jdk.jfr) cannot access class com.singularity.ee.agent.appagent.entrypoint.bciengine.FastMethodInterceptorDelegatorBoot (in unnamed module @0x2205a05d) because module jdk.jfr does not read unnamed module @0x2205a05d

java.lang.IllegalStateException: Unable to perform operation: create on weblogic.diagnostics.instrumentation.InstrumentationManager

The WebLogic managed server won't start after throwing these exceptions. Any insights on what might be causing these errors? Thanks, Roberto
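A minimal sketch of a possible workaround for the first exception, assuming the root cause is the JPMS readability error (module jdk.jfr cannot read the unnamed module the agent classes live in); where to set it depends on your domain scripts, setDomainEnv.sh being the usual spot:

JAVA_OPTIONS="${JAVA_OPTIONS} --add-reads jdk.jfr=ALL-UNNAMED"

Whether the second, InstrumentationManager exception is a separate problem or just a follow-on failure is unclear from the log alone.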
I don't see a checkbox as part of the inputs list. It is possible in Simple XML, but I would like to know how it can be achieved using Dashboard Studio.
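A minimal sketch of one workaround, assuming a single-option multiselect can stand in for a checkbox in the Dashboard Studio JSON definition (the input id, token, and labels are hypothetical):

"inputs": {
    "input_checkbox": {
        "type": "input.multiselect",
        "title": "Enable filter",
        "options": {
            "items": [
                { "label": "Yes", "value": "yes" }
            ],
            "token": "chk_filter"
        }
    }
}

Ticking the single option sets $chk_filter$, which searches can then test; as far as I know, Studio has no dedicated checkbox input type.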
Hi, if I replace, for example, src=10.0.0.1 in the query with my tag containing src=10.0.0.1, it doesn't work. Please help.
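A minimal sketch of the tag search syntax, assuming a tag named my_src_tag has been applied to the src=10.0.0.1 field-value pair (the tag name is hypothetical):

index=<your_index> tag::src=my_src_tag

A plain tag=my_src_tag also works but matches the tag on any field; tags only apply to field-value pairs that were actually tagged, so it's worth verifying the tag under Settings > Tags.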
Hi, I need to upgrade my correlation search for Excessive Failed Logins with Username:

| tstats summariesonly=true values("Authentication.tag") as "tag", dc("Authentication.user") as "user_count", values("Authentication.user") as "usernames", dc("Authentication.dest") as "dest_count", count from datamodel="Authentication"."Authentication" where nodename="Authentication.Failed_Authentication" by "Authentication.app","Authentication.src"
| rename "Authentication.app" as "app", "Authentication.src" as "src"
| where 'count'>=6

I would like the query to trigger only when there is a successful authentication after 6 failed authentications. Thank you!
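A minimal sketch of one way to add the success condition, pulling both failed and successful events from the data model and requiring the earliest success to come after the last failure (grouping by src and user and the 1-minute span are assumptions to adapt):

| tstats summariesonly=true count from datamodel=Authentication.Authentication where nodename=Authentication.Failed_Authentication OR nodename=Authentication.Successful_Authentication by _time span=1m nodename Authentication.src Authentication.user
| rename Authentication.src as src, Authentication.user as user
| stats sum(eval(if(nodename="Authentication.Failed_Authentication",count,0))) as failures, max(eval(if(nodename="Authentication.Failed_Authentication",_time,null()))) as last_failure, min(eval(if(nodename="Authentication.Successful_Authentication",_time,null()))) as first_success by src, user
| where failures>=6 AND first_success>last_failure

This approximates, rather than strictly enforces, the "6 failures then a success" sequence; for exact per-user ordering, sorting by _time and using streamstats is the sharper tool.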
Hello there, I am writing to present my use case for integrating Splunk Cloud/Enterprise features into my website. I am looking for web services for integration with Splunk Cloud or Splunk Enterprise; my aim is to render Splunk Cloud/Enterprise dashboards and reports on my website. I have: a Splunk Cloud admin account (trial) and a Splunk Enterprise admin account (trial). I want to get the list of apps in Splunk Cloud/Enterprise programmatically; after that I will be able to see the list of dashboards and reports in the desired app; then I can select a dashboard or report to embed on my website. This will allow me to easily visualize up-to-date Splunk data on my website. Thank you in advance for considering my query.
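A minimal sketch of the first step against the Splunk Enterprise REST API, assuming the default management port 8089 and token authentication (host and token are placeholders):

curl -k -H "Authorization: Bearer <token>" "https://<splunk_host>:8089/services/apps/local?output_mode=json"

Dashboards and reports in an app are then listable via /servicesNS/-/<app>/data/ui/views and /servicesNS/-/<app>/saved/searches respectively; note that on Splunk Cloud, the management port is not open by default and typically requires a support request.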
I am unable to find a REST API Postman collection for Splunk Enterprise. Can anyone please provide a link to export or download a Postman collection for Enterprise?
I'm seeing some errors in the internal logs for lookup files. Can someone help me with the reason for these errors?

1) Unable to find filename property for lookup=xyz.csv will attempt to use implicit filename.
2) No valid lookup table file found for this lookup=*
3) The lookup table '*' does not exist or is not available. - This can occur when the definition or reference of the lookup file exists but the file itself has been deleted.
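A minimal sketch for checking what Splunk actually knows about a given lookup, assuming search access to the REST endpoints (xyz is a placeholder for the lookup name):

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| search title=xyz
| table title, filename, eai:acl.app, eai:acl.sharing

The companion endpoint /servicesNS/-/-/data/lookup-table-files lists the CSV files themselves. Roughly: the first message suggests no definition with a filename was found so the name was treated as the file; the second and third point at a missing, deleted, or insufficiently shared file.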
We are looking for a solution that performs certain validation checks when we upgrade any Splunk add-on to the latest version, to make sure the upgrade does not break existing working configurations in prod, such as field parsing and search execution time. So we need to check whether it's possible to create a dashboard (or similar) where we can compare the old state vs. the upgraded state of the add-on before deploying to prod. Two basic validations could be CIM fields and search execution time; to kick this off, we can pick any one sourcetype.
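A minimal sketch of the search-execution-time half, assuming the _audit index is searchable (run it before and after the upgrade in a test environment and diff the results):

index=_audit action=search info=completed
| stats avg(total_run_time) as avg_runtime, count by savedsearch_name

For the CIM-field half, a before/after fieldsummary over the chosen sourcetype surfaces fields that appear or disappear: index=<idx> sourcetype=<st> | fieldsummary | table field, count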
Hi, I am trying to install the PureStorage Unified Add-on for Splunk, but when I try to add configurations I get the error below on the configuration page. I am installing it on my on-prem deployment server rather than Splunk Cloud. Can anyone advise what the reason could be and how to resolve it?

Error: Failed to load current state for selected entity in form! Details Error: Request failed with status code 500

Add-on: https://splunkbase.splunk.com/app/5513

Thanks
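A minimal sketch for surfacing the server-side cause of the 500, assuming the add-on's REST handler logs errors to _internal (the *pure* filter is a guess at the component name; adjust it to the actual app folder):

index=_internal sourcetype=splunkd log_level=ERROR *pure*

It is also worth noting that setup pages of this kind generally need to run where the inputs run (a heavy forwarder or the search tier), not on the deployment server instance itself.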
Hello everyone, I have around 3600 events to review, but they are all encoded in hex. I know I can decode them by hand one by one, but that would take a lot of time which I do not have. I spent a few hours reading about similar problems here, but none of them helped me. I found an app called decode2, but it was not able to help me either: it wants me to feed it a table to decode, and I only have 2 columns, one called time and one called event; pointing it at event returns nothing. Below I'm posting 2 of the events as samples:

\hex string starts here\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x005\xE6\x00ppt/tags/tag6.\x00\x00\x00\x00]\x00]\x00\xA9\x00\x00N\xE7\x00\x00\x00

\hex start\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xE5\x00ppt/tags/tag3.-\x00\x00\x00\x00\x00\x00!\x00\xA1

I changed the first part of each string because it would not let me post; I also deleted the part between tag6. and the next slash, and the same goes for tag3.-

Is there a way to automatically convert all events from hex to text?
Hi Splunkers, I have a strange behavior with a Splunk Enterprise Security SH. In the target environment, we have an indexer cluster queried by 2 SHs: a Core one and an Enterprise Security one. For one particular index, if we perform a search on the ES SH, we cannot see data; even the simplest query possible, index=<index_name>, returns no results. However, if I try the same search on the Core SH, the data is shown. This behavior strikes me as very strange because it happens only with this specific index; all the other indexes return identical data whether the query runs on the ES SH or the Core SH. So, in a nutshell: indexes that return results on the Core SH: N; indexes that return results on the ES SH: N - 1.
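A minimal sketch for comparing what each search head's roles may search, to be run on both SHs as the same user (role and index names are whatever exists in your environment):

| rest /services/authorization/roles splunk_server=local
| table title, srchIndexesAllowed, srchIndexesDefault

If the index is missing from srchIndexesAllowed only on the ES SH, the role configuration diverges between the two search heads; | eventcount summarize=false index=* on the ES SH is another quick check of which indexes it can actually see.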