Hello, I'm looking to run a search in a firewall log index for connections to a known IP range, and I'm trying to decide on the most efficient way of doing it. As can be seen, I'm excluding certain ports we expect to see traffic on. I seem to remember that using "!=" is considered inefficient as well, so I'm open to suggestions for something else. The current search looks like this:

index="pan_logs" sourcetype="pan:traffic" dest_ip="91.226.*.*" | lookup dnslookup clientip AS src_ip OUTPUT clienthost AS remotehost | search dest_port!=48600 dest_port!=22 | table _time dest_ip dest_port remotehost
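One commonly suggested rewrite (a sketch only; CIDR matching on this field and the IN operator should be verified on your Splunk version and data) moves the port exclusion into the base search and replaces the wildcard IP with a CIDR range, so the filtering happens before the lookup:

```
index="pan_logs" sourcetype="pan:traffic" dest_ip="91.226.0.0/16"
    NOT dest_port IN (48600, 22)
| lookup dnslookup clientip AS src_ip OUTPUT clienthost AS remotehost
| table _time dest_ip dest_port remotehost
```

`NOT dest_port IN (...)` keeps the exclusion in the initial search pipeline instead of a secondary `| search`, and a CIDR match avoids the wildcard string comparison.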
Hi, Does Splunk bundle the Adobe Flash Player in Splunk Enterprise? Are there any Splunk dependencies on Flash? Thanks!
Hi team, I am trying to set up a HANA DB connection in Splunk. What I have done so far:
1. Put the HANA JDBC client 'ngdbc.jar' into the folder '$SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers'.
2. Manually created a db_connection_types.conf file with the information below and put it into the folder '$SPLUNK_HOME/etc/apps/splunk_app_db_connect/local':
displayName = HANA
jdbcUrlFormat = jdbc:sap:xxxxxxx:xxxxx
jdbcDriverClass = com.sap.db.jdbc.Driver
supportedVersions = 2.0
3. Started Splunk on macOS with the command ./splunk start.
4. Logged in to local Splunk and went to Configuration -> Connection -> Create a New Connection.
The issue I ran into: in the drop-down list of Connection Type, there is no 'HANA' for me to select. (Please refer to the screenshot.)
Can you please advise what the issue is here and how to resolve it?
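One likely cause (an assumption based on how DB Connect reads connection types, so verify the attribute names against your DB Connect version): the settings need to sit under a stanza header, and a serviceClass is usually required before the type appears in the drop-down. A sketch of the file, with placeholder host and port:

```
# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_connection_types.conf
[hana]
displayName = HANA
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcUrlFormat = jdbc:sap://<host>:<port>
jdbcDriverClass = com.sap.db.jdbc.Driver
supportedVersions = 2.0
```

A restart of Splunk (or a reload of the DB Connect app) is typically needed before the new type shows up in the list.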
We have been using the Qualys TA to ingest vulnerability data for quite some time now and notice that occasionally there is a mismatch between the data seen in Qualys and the data ingested in Splunk. It eventually catches up (we guess in subsequent runs), but we would prefer it to be available to support reporting requirements. A few input parameters to note:
- We are on the most recent version of the TA.
- We use multiple threads.
- The TA runs on a HF sending to the indexers.
Internal logs with debug mode enabled seem to indicate no errors whatsoever. The following query indicates that logging has occurred for approximately the same number of hosts every day. I am deliberately ignoring a few messages to refine the transaction output:

index=_internal host=<host> sourcetype=*qualys* host_detection NOT "Will not run" NOT "Running for host_detection" NOT "getting idset inbound queue..." NOT "inboundqueue empty" NOT "waiting for more work" | transaction startswith="Running Now" endswith="Qualys Host Detection Populator finished." | convert ctime(_time) AS Time | table Time,_raw | rename _raw AS "Job Run History"

The scan completed the previous evening and the Splunk pull happened in the early hours of the next day. Qualys has the data but Splunk does not, which doesn't make sense. A few questions:
- Has anybody else encountered issues and, if so, what settings would you recommend to address this?
- Is there anything obvious that we are missing that is causing the issue in the first place?
- Does the Qualys data get published for API access immediately? If it doesn't, we will move our job in line with that.
- Are there any limits that we should be aware of?
Any inputs are appreciated. Tagging @prabhasgupte as I see you have replied to previous queries on Qualys!
Hi all, how do I get the browser URL for the search query while executing a custom command in a Python script? I know that for scripted alerts this value is stored in SPLUNK_ARG_6, but this variable is not set while I am executing custom commands. Can anybody help me out here?
SPLUNK_ARG_0 - Script name
SPLUNK_ARG_1 - Number of events returned
SPLUNK_ARG_2 - Search terms
SPLUNK_ARG_3 - Fully qualified query string
SPLUNK_ARG_4 - Name of saved search
SPLUNK_ARG_5 - Trigger reason (for example, "The number of events was greater than 1")
SPLUNK_ARG_6 - Browser URL to view the saved search
SPLUNK_ARG_8 - File in which the results for this search are stored (contains raw results)
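Custom commands don't receive the SPLUNK_ARG_* environment variables, but a search command can usually recover its own job ID (for example, SCPv2 commands built on splunklib expose self.metadata.searchinfo.sid) and rebuild the Splunk Web link from it. A minimal sketch, where the base URL, app name, and sid are placeholder assumptions:

```python
from urllib.parse import quote

def build_job_url(web_base: str, app: str, sid: str) -> str:
    """Rebuild a Splunk Web link that opens the search job with this sid."""
    # /app/<app>/search?sid=<sid> is the usual job-inspection URL shape.
    return f"{web_base}/en-US/app/{app}/search?sid={quote(sid, safe='')}"

# Hypothetical values for illustration only:
url = build_job_url("https://splunk.example.com:8000", "search", "1594200000.12345")
print(url)
```

The sid would come from your command's metadata at runtime rather than being hard-coded.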
Hi, we have the Splunk Add-on for AWS to pull logs from an S3 bucket. We are using the SQS-based S3 input to read the logs, however we are noticing that with this option Splunk seems to be omitting some files from reading, even though it has consumed the SQS message and deleted the message from the queue. As one example of our issue: in a particular time frame, July 6th between 19 and 20 hrs, we have 59 objects in the S3 bucket but Splunk read only 58 files. That is just one example; we are having this issue very often, with one or two files missing every hour. Each missing file contains around 8,000 to 10,000 events that are not indexed in Splunk because of this. I have checked all the internal logs, which show no failure messages while reading this particular S3 object, so I cannot confirm it was dropped or failed while parsing and processing. It's just not there. This issue occurs every day, every hour, with one file or another missed by the Splunk inputs. From the SQS perspective, the SQS-based S3 input is skipping some objects from S3 but still deleting the message from SQS.
Thanks and regards,
Srini
Hi all, I have done a deployment server setup with over 20 machines. The deployment setup is working fine. The security team has come up with a question regarding the communication between the Splunk deployment server and the forwarders. They want to know whether there is any API key through which authentication happens when the forwarders contact the deployment server, or whether any other authentication mechanism takes place in this communication. Any information would be helpful. Thanks
Hi! please help us solve the problem of missing data from the Nextcloud server. Splunk was installed and configured according to the instructions. But there is no data in the web interface.  
Hi, I'm a newbie trying to upgrade my Splunk environment. I have index clustering and search head clustering in my environment. What is the correct order of upgrading?
Hey everyone! Lately we had an unfortunate incident where most of our logs were deleted from Splunk. Luckily we saved the same data in our PostgreSQL DB. To restore those logs, I want to export the PostgreSQL data via Splunk DB Connect. I've read about the app and it looks like it can solve most of my concerns. The main issue that I couldn't find a solution for is data formatting. To make it backwards compatible with our log format, I want to parse the DB table rows by:
1. Sending the data to Splunk as JSON.
2. Adding some custom key:value pairs (that are based on / can be calculated from the database row).
3. Not including specific table columns.
4. Appending metadata to the logs (I already found that this point is possible).
Is there a way of achieving those wishes without working with raw SQL? If not, can I see an example of raw SQL that generates the wanted Splunk log? Also, is there any other way of exporting and importing data from Postgres to Splunk that can solve this issue? Thank you!
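Points 1-3 can usually be handled in the SQL that DB Connect runs. A hedged sketch for PostgreSQL, where the table app_logs and all column names are hypothetical: it builds one JSON document per row, adds a computed key:value pair, and leaves the unwanted column out of the JSON object:

```sql
-- Hypothetical table app_logs(id, ts, level, message, internal_notes).
-- internal_notes is deliberately excluded from the JSON output.
SELECT
    ts,
    jsonb_build_object(
        'timestamp',    ts,
        'level',        level,
        'message',      message,
        -- a custom key:value pair computed from the row:
        'severity_num', CASE level WHEN 'ERROR' THEN 3
                                   WHEN 'WARN'  THEN 2
                                   ELSE 1 END
    )::text AS event_json
FROM app_logs
ORDER BY ts;
```

In DB Connect the event_json column could then be mapped as the event body and ts as the timestamp; verify that jsonb_build_object is available on your PostgreSQL version (9.5+).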
Hello, a while ago we updated from 6.5.2 to 7.3.4, and only now have I noticed a very bizarre bug and behaviour different from the old version. We use summary indexes a lot. You often have daily or hourly summaries, and the summary data should have the timestamp of that range, right? Let's say daily: the search is set up to collect from -1d@d to @d, and is scheduled to run at 2:32am on July 8. A simple, meaningless example search:

index=_internal source=splunkd (host=hostnameA OR host=hostnameB) | stats count

When this runs on July 8 at 2:32am, it generates a summary entry with a count and a timestamp of July 7 midnight. Now, when you use inputlookup as a filter method, the time when the search ran is used instead. A simple lookup table and search:

test.csv:
host
hostnameA
hostnameB

index=_internal source=splunkd [| inputlookup test.csv | fields host] | stats count

This generates a summary entry with a count and a timestamp of July 8, 2:32am! While this is as bad as it is, one thing is even worse: it makes it impossible to backfill summaries in a meaningful way. When inputlookup comes into play, the time used is not the scheduled time but the time the search was executed. Therefore, when you backfill for 20 days, all data for those 20 days will have almost the same timestamp, from when you started to execute the backfill. I tried

index=_internal source=splunkd | search [| inputlookup test.csv | fields host] | stats count

as well, but it is broken in the same way. Does anybody know if this is fixed in a later version? This must be a very bizarre bug, right?
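A common workaround (a sketch; it assumes the start of the search's own time range is what should stamp the summary row) is to set _time explicitly from the search boundaries via addinfo, so the entry is timestamped with the collected range regardless of when the search actually ran:

```
index=_internal source=splunkd
    [| inputlookup test.csv | fields host]
| stats count
| addinfo
| eval _time=info_min_time
| fields - info_min_time info_max_time info_search_time info_sid
```

addinfo attaches the search's earliest time as info_min_time, which also makes backfilled runs stamp each day correctly.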
Could you please explain to me what 'black' is? Is it a variable?
I'm trying to show a cluster map in a dashboard using a base search. The pie charts in the cluster map are not correct (the counts are less than in the search bar), but choosing the statistics table shows the correct data.
Hello, I have a query regarding getting data in using the DB Connect app. I am using a Splunk Cloud instance, and DB Connect is installed on the IDM. On DB Connect, there is a connection created for a MySQL DB. Possibly because of some time zone issue, I am getting the data from the MySQL DB into DB Connect with a delay of an hour. I have tried changing the timezone under the JDBC connection screen and also tried adding "useLegacyDatetimeCode=false" to the JDBC path, but that has not helped. Can someone please suggest what else I can check here? Thank you.
I'm kind of new to Splunk and found one bit of replace syntax when I read the official documentation; here is the link: https://docs.splunk.com/Documentation/Splunk/8.0.4/SearchReference/TextFunctions. Could you please tell me where to find the syntax like "\2/\1/"? This is the first time I have seen something like this, and I did not find any documentation about this kind of syntax. Thanks in advance!
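In the replace(X, Y, Z) function, Y is a regular expression and \1, \2 in Z are backreferences: they insert the text captured by the first and second parenthesised groups of Y. The same convention exists in most regex engines; a quick illustration in Python (the date value is just an example):

```python
import re

# SPL's replace(date, "^(\d{1,2})/(\d{1,2})/", "\2/\1/") swaps the first two
# slash-separated numbers; \1 and \2 refer to the captured groups.
date = "14/3/2020"                                  # day/month/year
swapped = re.sub(r"^(\d{1,2})/(\d{1,2})/", r"\2/\1/", date)
print(swapped)                                      # month/day/year
```

So the documentation example rewrites a day/month date as month/day by re-emitting the two captures in the opposite order.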
I'm trying to figure out a successful method for sending macOS logs to Splunk without involving another tool or agent. We have the Splunk UF installed on the Mac, along with the Splunk_TA_nix app, which pulls some data from the Mac but not all. Most of the logging broke a few years ago when Apple changed over to the Unified Log database format. Any guidance or suggestions on best practices would be much appreciated!
I have been trying to look at statistical figures for failed login attempts over a 30-day period, for each user by hostname. I can get a table showing every failed attempt but want to condense that down to a total count of failed attempts and an average per day. My thinking is that this could be useful for identifying slow brute-forcing attempts from credential-stuffing attacks. This is what I have tried so far:

index=wineventlog EventCode=4625 | search signature="User name is correct but the password is wrong" | eventstats count(TargetUserName) by hostname as Total_Count | eventstats avg(Total_Count) as Avg_Count | table TargetUserName, hostname, Total_Count, Avg_Count | sort TargetUserName

but this ends up giving me the username and hostname while the total and avg fields are blank. Any ideas on how to do this better? Thanks, Maxy
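One likely problem is the clause order: in eventstats the AS alias must come before the by clause, so `count(TargetUserName) by hostname as Total_Count` never creates a Total_Count field. A hedged sketch of one way to get a total and a per-day average (assuming the signature filter and field names above match this data):

```
index=wineventlog EventCode=4625
    signature="User name is correct but the password is wrong"
| bin _time span=1d
| stats count AS daily_count by TargetUserName, hostname, _time
| stats sum(daily_count) AS Total_Count avg(daily_count) AS Avg_Per_Day
        by TargetUserName, hostname
| sort TargetUserName
```

Bucketing by day first means Avg_Per_Day reflects a real daily rate rather than an average over identical totals.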
Hello, I'm building a table with users and the users' IPs, and I need to add an extra column with data from another index, like a join but without using join if possible. My table is this:

index=linux action=* sourcetype=sftp | stats last(_time) as start first(_time) as end values(user) as Usuario values(src) as IP values(character) as "edited characters" by session | rename session as "session number" | fieldformat "end"=strftime(end, "%d/%m/%Y %H:%M:%S") | fieldformat "start"=strftime(start, "%d/%m/%Y %H:%M:%S") | sort - end

I would need an extra column in this table. It has to grab the value of src for every row, perform another search, and put the results in the column. It would be something like: index=myotherindex src=The_src_from_the_row | stats values(account) by the_src_from_the_row (for every row in the table), because in myotherindex I have the accounts corresponding to the src values in the first table. My final table would look something like: session number - start - end - IP - user - edited characters - accounts, with "accounts" being the result of the second search, i.e. the accounts by src. I don't know if I am being clear in my request, since English is not my native language; if there is anything you need me to clarify, feel free to ask. P.S. I'm planning to build a dashboard from this table.
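One join-free pattern (a sketch; it assumes both indexes share a src field and that their event volumes allow searching them together) is to search both indexes at once and let eventstats copy the account values onto the sftp events by src:

```
(index=linux action=* sourcetype=sftp) OR index=myotherindex
| eventstats values(account) AS accounts by src
| search index=linux sourcetype=sftp
| stats last(_time) AS start first(_time) AS end values(user) AS Usuario
        values(src) AS IP values(accounts) AS accounts
        values(character) AS "edited characters" by session
| rename session AS "session number"
| sort - end
```

The fieldformat lines from the original search can be appended unchanged.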
As part of my dashboard, I get the results from a base search and join them into a different search in a panel. To do this, I have created a token for the job id, and I load the job in the panel when doing the join. Base search:

<search id="base_search">
  <query>...</query>
  <done>
    <condition>
      <set token="base_search">$job.sid$</set>
    </condition>
  </done>
</search>

In my panel, I use this base search as part of the query's join:

<query>
search sourcetype="ABC" | join type=left [| loadjob $base_search$ ] | stats ...
</query>

When the dashboard is refreshed, the token still refers to the old job sid. I would like to know if there is a way to unset the token so that the base query will run again and the token is populated with the current base search result.
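In Simple XML, a `<done>` handler supports `<unset>` as well as `<set>`, so one option (a sketch; the token name base_sid and the query are placeholders, with the token renamed to avoid colliding with the search id) is to unset the token before re-setting it each time the base search completes:

```xml
<search id="base_search">
  <query>index=_internal | stats count</query>
  <done>
    <unset token="base_sid"></unset>
    <set token="base_sid">$job.sid$</set>
  </done>
</search>
```

Panels whose queries reference $base_sid$ should then re-dispatch whenever the token value changes.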
Hi everyone, I am trying to add a field for the current OS time. Here are my props.conf and transforms.conf:

# props.conf
[mysourcetype]
TRANSFORMS-getdate = get-current-date

# transforms.conf
[get-current-date]
INGEST_EVAL = current_date=now()

But I get this error:

ERROR regexExtractionProcessor - Error compiling INGEST_EVAL expression for get-current-date: Bad function

Is it a bug? Cheers, S
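Probably not a bug: INGEST_EVAL supports only a subset of the eval functions, and now(), which is tied to search time, is not among them. time() returns the current wall-clock time at ingest and is the usual substitute; a sketch (verify against the ingest-eval function list for your Splunk version):

```
# transforms.conf
[get-current-date]
INGEST_EVAL = current_date=time()
```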