All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I have logs for both requests to a server and their responses. However, in some cases no response is received, and I want to get only those missed records into a new table. The user id is the common field appearing in both logs.

index=mtest ("X-Responding-Instance:ms*" OR "HTTP request to ms is registered successfully") | rex field=_raw ".*X-userid: (?<Success_UserId>.*)" | table Success_UserId usrId

X-userid comes as a header in the response, and I have to extract its value from there. 'usrId' already comes along with the 'registered successfully' message as a field, so I can extract it without rex. The moment I add '| table Success_UserId usrId' to the above query, I get a table with both columns populated, but the records come on alternate lines, which may be why I'm not able to compare them. Is there any option to compare the data in the two columns and find the usrId records that are missing from Success_UserId?
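One way to line the two sides up, sketched here under the assumption that usrId and the X-userid header carry the same user id value:

```spl
index=mtest ("X-Responding-Instance:ms*" OR "HTTP request to ms is registered successfully")
| rex field=_raw "X-userid: (?<Success_UserId>\S+)"
| eval id=coalesce(usrId, Success_UserId)
| stats values(Success_UserId) as Success_UserId values(usrId) as usrId by id
| where isnull(Success_UserId)
| table usrId
```

stats collapses the alternating request/response rows onto a single row per user id, after which the rows with no Success_UserId are the requests that never got a response.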
We have a centralized Kinesis stream to ingest AWS CloudWatch log groups from multiple AWS accounts with this setup: https://docs.splunk.com/Documentation/AddOns/released/AWS/Kinesis. The source showing up in Splunk in this case is the centralized account, not the origin account. Does anyone have a suggestion for how the origin AWS account can also be sent to Splunk?
Hi, I am using a Splunk Cloud trial, and I am trying to install the Splunk Add-on for Amazon Kinesis Firehose by following this documentation. In order to enable the HTTP Event Collector I need to open a ticket and ask Splunk Support to enable this feature, but as I am using the trial version I am not allowed to open a ticket. Does anyone have the same case? Regards, Sameh
Hello, folks. I have a field that represents a date, but in this format (YY/MM/DD). For example, on 07/23/20 the field value will be 200723. I need to transform this value into a date (DD/MM/YY). I tried to use: | eval MyDateEpoch=strptime(MyDate,"%Y%m%d") but it doesn't work. Can you help me?
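A likely reason the eval fails: %Y expects a four-digit year, while 200723 starts with the two-digit year 20. A sketch with the two-digit directive and a second eval to render DD/MM/YY:

```spl
| eval MyDateEpoch=strptime(MyDate,"%y%m%d")
| eval MyDateNew=strftime(MyDateEpoch,"%d/%m/%y")
```

strptime turns 200723 into epoch seconds for 2020-07-23, and strftime formats that epoch back out as 23/07/20.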
Hello Splunk community, We have the Splunk heavy forwarder set up on one machine and a SQL Server database on another machine. In the "Splunk DB Connect" app, when we try "New Input" on the "Data Lab" tab:
1. We are able to select the Connection.
2. We are able to select the Catalog (Dbname).
3. We are able to select the Schema (dbo).
4. We are able to view the list of tables, and when we select "tablename", we see the SQL text in the "SQL Editor": SELECT * from "Dbname"."dbo"."tablename"
But the query does not return any data to the "Preview Data" window; the data-loading status stops at 20%. When we click the "Execute SQL" button on the page, nothing changes and the status bar stays at 20%. Also, we have no issue running the same query and getting the data back in SSMS on the same machine. I am very new to Splunk; any help and suggestions are much appreciated!
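One thing worth trying from the search bar, bypassing the Data Lab preview entirely (the connection name here is a placeholder for whatever the configured connection is called):

```spl
| dbxquery connection="my_mssql_connection" query="SELECT TOP 100 * FROM [Dbname].[dbo].[tablename]"
```

If this returns rows, the connection and driver are fine and the problem is specific to the preview UI; the TOP 100 also keeps a very large table from stalling the preview.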
I'm on Splunk Enterprise 8.0.5 for this question, upgrading ES from 5.2.2 to 6.1.1. The Splunk docs say to install 6.1.1 on the Deployer via the GUI first, which will put the ES 6.1.1 app in the $SPLUNK_HOME/etc/apps directory. I'm clear so far. Then they say to choose a MODE before pushing 6.1.1 out using

splunk apply shcluster-bundle

which we know will take the apps in $SPLUNK_HOME/etc/shcluster/apps on the Deployer and create a bundle to push out to the SH members. So here is my question: when does the 6.1.1 I deployed using the GUI into $SPLUNK_HOME/etc/apps get copied to $SPLUNK_HOME/etc/shcluster/apps on the Deployer so it can be pushed out in the bundle? Am I supposed to do that manually?
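Assuming the answer is indeed a manual copy (which matches the deployer's usual bundle workflow), the step would look roughly like this on the deployer; the app directory name below is the default ES one and may differ in a given install:

```shell
cp -R $SPLUNK_HOME/etc/apps/SplunkEnterpriseSecuritySuite $SPLUNK_HOME/etc/shcluster/apps/
splunk apply shcluster-bundle -target https://<sh_member>:8089
```

Worth double-checking against the ES upgrade docs for your versions, since ES also ships its own install command for search head cluster deployments.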
Hi, while checking our app with AppInspect v2.2.0, an extract in props.conf was flagged with this error:

check_props_conf_extract_option_has_named_capturing_group
[EXTRACT-nfo_hostname] setting in props.conf specified a regex without any named capturing group. This is an incorrect usage. Please include at least one named capturing group.
File: default/props.conf Line Number: 19

Line 19 is:

EXTRACT-nfo_hostname = ((\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2})|(1\s\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.{1}\d{2}:\d{2}))\s+(?P<nfo_hostname>[^ ]+)\s+(nfc_id|NFO)

It has a named capturing group: (?P<nfo_hostname>[^ ]+). What could be wrong? The error is not present when checking the built .spl file with the CLI version of AppInspect v2.2.0, so in my opinion it might be a bug in AppInspect. I attempted to email appinspect@splunk.com, but it bounces back. Is there some other channel to reach the AppInspect team?
Hello all, I think I need help on this one. We have a standalone Windows system which is our indexer, management, and deployment server. In the field, we have several flavors of devices running universal forwarders, i.e. Windows, Linux, Solaris, etc. I am working on a directory monitor which will allow me to see what files are in a directory and report if one is missing or the like. To test this, I created a scripted input to gather the contents of the directory and forward it to the indexer.

inputs.conf

###### Scripted Input to monitor directory files
[script://./bin/dircontents.sh]
disabled = 0
interval = 60
sourcetype = Script:dircontents.sh
index = filewatch

props.conf

[Script:dircontents.sh]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
MAX_EVENTS = 10000
TRUNCATE = 0
DATETIME_CONFIG = CURRENT

dircontents.sh

cd /u01/DeticaHome/UI/data/acquisition/waiting
ls | sort

With those config files, I deploy the app without issue, but when the script runs I get the following in index=_internal:

07-23-2020 09:30:47.841 -0500 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/_server_app_Detica-File-Processing-Mon/bin/dircontents.sh" /bin/sh: /opt/splunkforwarder/etc/apps/_server_app_Detica-File-Processing-Mon/bin/dircontents.sh: cannot execute

It appears the permissions of the script are not correct. I checked, and the deployed script's (dircontents.sh) permissions are 655 at deployment. I changed the permissions to 755 manually and the script took off and started working, but this was a manual intervention, which is not optimal. The universal forwarder was installed and is running as root. To get this right, I need 755 permissions on the script for the scripted input. What have I missed? Any insight would be great at this point. Thanks in advance, Rcp
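A hedged guess at the root cause: the deployment server distributes files with the permissions they carry on the deployment server side, so if the copy under etc/deployment-apps lost its execute bit, every client receives it as 655. If so, fixing it once at the source (the path below is assumed from the app name in the error) should make new deployments arrive executable:

```shell
chmod 755 $SPLUNK_HOME/etc/deployment-apps/Detica-File-Processing-Mon/bin/dircontents.sh
```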
Hi, I am using a UF for syslog. In inputs.conf I set index=cisco and sourcetype=syslog:ios, and I am able to receive logs in the console. Now I am receiving too many logs, containing the below keywords: "DOMAIN-2-IME" "DOMAIN-2-IME_DETAILS" "DOMAIN-5-TCA". I tried blacklist = "Domain" in inputs.conf but failed to filter them. Please help me filter logs by keywords in the events.
Hello, I have to preserve all the raw events that are stored in a 3-hour window of time, and I need to be sure that the data is searchable independently of the index where it is originally stored. What would be the best way to achieve this? I could just store the data in a saved report, but my doubt is how to search that data as if I were looking in the original index; I will need to investigate and run queries over the data in the future.
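One approach that keeps the raw events searchable like any other index is to copy them into a dedicated summary/archive index with collect; the index name here is an assumption and has to be created first:

```spl
index=myindex earliest=-3h latest=now
| collect index=preserved_raw
```

Afterwards the preserved window can be investigated with ordinary searches against index=preserved_raw, independent of retention on the original index.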
Hi all, I get the following error: Message: JZ00L: Login failed. Examine the SQLWarnings chained to this exception for the reason(s). I tried it with DB Connect versions 3.3.1 and 3.1.4. I want to connect to the following version: Adaptive Server Enterprise/16.0 SP03 PL05 HF1/EBF 28622 SMP/P/x86_64/SLES 11.1/ase160sp03pl05x/3463/64-bit/FBO/Thu Sep 13 09:24:26 2018. I tried different JDBC drivers and it is still not working. Can I find any additional error messages in Splunk, or are there any other debugging possibilities? Kind regards, Kathrin
I have a distributed setup of Splunk ES, with separate SHs, indexers, and a forwarder. I send some flows (sFlow, NetFlow) to the forwarder. However, the forwarder's IP is set in the "host" field of all logs. How can I keep the original device address (i.e. the address of the router that is sending those flows)?
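If the flow data reaches Splunk as text events that contain the exporter's address, one documented pattern is to overwrite the host metadata at parse time; the sourcetype and regex below are placeholders to adapt:

```ini
# props.conf
[my_flow_sourcetype]
TRANSFORMS-sethost = set_host_from_event

# transforms.conf
[set_host_from_event]
REGEX = (\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
DEST_KEY = MetaData:Host
FORMAT = host::$1
```

This only works where parsing happens (a heavy forwarder or indexer, not a universal forwarder), and binary NetFlow collected via a dedicated add-on may already expose the exporter address as its own field instead.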
I am trying to get G Suite logs into Splunk. Here are some of the logs I wish to ingest: Drive logs, Login logs, Mobile device logs, and Google password change logs. Can the Splunk Add-on for Google Cloud collect G Suite logs too? Please advise. Thank you.
Hi there, I am relatively new to Splunk but was given a task that I found very difficult. One of our customers is expecting an audit and has a variety of reports and alerts in one of their apps. They would like an alert to fire every time someone modifies any of the reports/alerts, and the alert should tell them what action was taken (e.g. modify, delete, add, remove), who did it, when, and what was changed (for instance if a query's search is tampered with). I've tried everything that has been posted here, and it's always close but no cigar. It seems that the internal logs of the _audit and _internal indexes do not log these changes. In the end I turned to the REST API. What I did is as follows:

1) I typed the search below to get the alerts/reports:

| rest splunk_server=local /servicesNS/-/{app_name}/saved/searches | fields title search eai:acl:owner eai:acl:app alert_type updated cron_schedule auto_summarize.suspend_period dispatch.earliest_time dispatch.latest_time id

2) I exported the results as a CSV file and renamed the search column to oldSearch.

3) I imported the CSV (compareSearches) back as a lookup and used the following query:

| rest splunk_server=local /servicesNS/-/{app_name}/saved/searches | fields title search eai:acl:owner eai:acl:app alert_type updated cron_schedule auto_summarize.suspend_period dispatch.earliest_time dispatch.latest_time id | join [| inputlookup compareSearches.csv | table title oldSearch] | where search!=oldSearch

4) That almost gave me what I wanted, but not exactly: this search only catches changes in the reports'/alerts' queries, not cases where a report/alert gets deleted or its schedule is changed.

5) The line below creates an additional field with info about what type of change occurred:

| eval changeType = if(search!=oldSearch, "Query changed", "Other change occurred")

6) So in the end I still don't know who performed the action, and apart from query changes I am stuck on the rest of the change types and how to display them. I have read all the posts about similar cases but nothing worked for me. Any help would be much appreciated. Thank you.

p.s. At first I used one account to create a report and to modify it, and the ID field gave me something like https://127.0.0.1:8089/servicesNS/{mu_name}/a1siem/saved/searches/{my_report}. However, when using another account it gives back https://127.0.0.1:8089/servicesNS/nobody/a1siem/saved/searches/{my_report}
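The comparison can be extended so deletions and additions also show up, sketched on top of the searches above (append puts current and baseline rows into one result set, so a missing side can be detected):

```spl
| rest splunk_server=local /servicesNS/-/{app_name}/saved/searches
| fields title search
| eval src="current"
| append [| inputlookup compareSearches.csv | rename oldSearch as search | eval src="baseline"]
| stats values(eval(if(src="current",search,null()))) as currentSearch values(eval(if(src="baseline",search,null()))) as oldSearch by title
| eval changeType=case(isnull(currentSearch),"Deleted", isnull(oldSearch),"Added", currentSearch!=oldSearch,"Query changed", true(),"Unchanged")
| where changeType!="Unchanged"
```

Who made the change still has to come from somewhere else; the updated and eai:acl:owner fields from the REST endpoint only show the owner and last-updated time, not the editing user.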
Hi,

index=myindex | search name=* | bin span=1d _time | stats dc(name) as name by _time

Here I am getting the number of names in the last 7 days, with a count for each day, like the image shown. Now, since the count dropped from 140 to 132, I want a query which can show the 8 missing names.
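A sketch that compares the last two whole days and lists names seen only on the earlier one (the time boundaries here are one reasonable choice, not the only one):

```spl
index=myindex name=* earliest=-2d@d latest=@d
| eval day=if(_time>=relative_time(now(),"-1d@d"),"latest","previous")
| stats values(day) as days by name
| where mvcount(days)=1 AND days="previous"
| table name
```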
Hi, how can I get a list of all users that have run saved searches?
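Assuming audit logging is on (it is by default), one common pattern reads the _audit index, where search events carry the user and, for saved searches, a savedsearch_name:

```spl
index=_audit action=search savedsearch_name=* savedsearch_name!=""
| stats count latest(_time) as last_run by user savedsearch_name
| convert ctime(last_run)
```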
How do we filter certain logs on a HF using inputs.conf? I tried the below 2 ways but no luck.

[monitor:///syslog/cisco/ios/]
blacklist = IME_ID = "*"
blacklist1 = TCA_ID = "*"

[monitor:///syslog/cisco/ios/]
blacklist2 = "DOMAIN-2-IME_DETAILS"
blacklist3 = "DOMAIN-2-IME_DETAILS"
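For context: blacklist in a [monitor] stanza filters file and directory names, not event content, which is why none of those variants fire. Event-level filtering is done at parse time (which a heavy forwarder performs) with a nullQueue transform; a sketch with placeholder stanza names:

```ini
# props.conf
[syslog:ios]
TRANSFORMS-drop_noise = drop_ime_tca

# transforms.conf
[drop_ime_tca]
REGEX = DOMAIN-2-IME|DOMAIN-5-TCA
DEST_KEY = queue
FORMAT = nullQueue
```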
I used this appender in log4j.xml, and I have Splunk Enterprise installed locally:

<Appenders>
<Http name="http" url="https://localhost:8000.cloud.splunk.com/services/collector" token="f8196ae1-ef66-459e-a1c0-17359053ea14" disableCertificateValidation="true">
<PatternLayout pattern="%-5p | %d{yyyy-MM-dd HH:mm:ss} | [%t] %C{2} (%F:%L) - %m%n" />
</Http>
</Appenders>

and I got this error: ERROR StatusLogger Unable to send HTTP in appender [http]. Solution please?
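Two things stand out in the URL: port 8000 is Splunk Web (HEC listens on 8088 by default), and "localhost:8000.cloud.splunk.com" mixes a local host with a cloud hostname. Assuming the appender comes from Splunk's logging library for Java (which is what accepts token and disableCertificateValidation), a local-instance sketch might look like:

```xml
<Appenders>
  <SplunkHttp name="http"
              url="https://localhost:8088"
              token="f8196ae1-ef66-459e-a1c0-17359053ea14"
              disableCertificateValidation="true">
    <PatternLayout pattern="%-5p | %d{yyyy-MM-dd HH:mm:ss} | [%t] %C{2} (%F:%L) - %m%n" />
  </SplunkHttp>
</Appenders>
```

HEC also has to be enabled and the token created on the local instance first; the exact element name should be checked against the splunk-library-javalogging docs.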
Hello, I have this INFO message:

INFO BucketMover - will attempt to freeze bkt="/opt/splunk/var/lib/splunk/idx/db/rb_1588633961_1588517402_205_9C902924-4080-41E4-B824-9B4D721A8889" reason="maxTotalDataSize"

I want to re-index the removed bucket, but I can't find it. For example, I ran:

find /opt/splunk/ -name "rb_1588633961_1588517402_205_9C902924-4080-41E4-B824-9B4D721A8889"

but got no result. Can I index data that was removed after the max size was reached? Thank you.
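For context: when a bucket is frozen, Splunk deletes it unless coldToFrozenDir or coldToFrozenScript was configured to archive it first, which would explain why find turns up nothing. If an archived copy does exist somewhere, the usual route back is thawing; the archive path below is purely an assumption:

```shell
cp -r /archive/frozen/rb_1588633961_1588517402_205_9C902924-4080-41E4-B824-9B4D721A8889 $SPLUNK_HOME/var/lib/splunk/idx/thaweddb/
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/idx/thaweddb/rb_1588633961_1588517402_205_9C902924-4080-41E4-B824-9B4D721A8889
```

Without an archive, the data cannot be recovered from Splunk; it would have to be re-ingested from the original source.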
Hi everyone, I want to add a few conf files to a managed Splunk Cloud app. Is it possible without having Splunk Support vet the app and deploy it in Splunk Cloud? Thanks in advance.