Hello, is the ODBC driver's Visual C++ 2010 Redistributable requirement a hard requirement or a minimum requirement? Will the ODBC driver work with the Visual C++ 2017 (or higher) Redistributable?
I am trying to understand how to remove results where "field_a" and "field_b" each contain a certain value together in the same log, but not all results containing "field_a" or all results containing "field_b", or any other fields. Here are some example logs:

field_a=5 field_b=3
field_a=5 field_b=2
field_a=2 field_b=3

I want to exclude only logs where field_a is equal to "5" AND field_b is equal to "3", but keep all other results. So, in the log examples above, I would only want to exclude the first log, because that is the only example where BOTH fields contain a specific value. I would want my query to return the last two logs.
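A minimal sketch of one approach, with placeholder index/sourcetype values: a NOT clause that only excludes events where both conditions hold together.

```
index=your_index sourcetype=your_sourcetype NOT (field_a=5 AND field_b=3)
```

An equivalent form after the base search is `| where NOT (field_a=5 AND field_b=3)`; either way, events matching only one of the two conditions are kept.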
We have lots of scheduled searches at the top of the hour. How should we go about distributing them across the hour? We also have scheduled searches running every 5 or 10 minutes, and it's difficult to come up with a direction on that.
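A sketch of the two usual levers in savedsearches.conf, with a placeholder stanza name: stagger the cron minute away from :00, and/or let the scheduler skew the start time itself.

```
[my_hourly_search]             # placeholder search name
cron_schedule = 17 * * * *     # run at :17 instead of the top of the hour
allow_skew = 10m               # let the scheduler shift the start by up to 10 minutes
```

Spreading the cron minutes across searches (e.g. :03, :17, :31, :48) works for hourly jobs; `allow_skew` also helps the 5/10-minute searches, since the scheduler picks a stable offset per search.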
Hi, while checking logs for a particular Ubuntu host, I am getting the below warning message in the internal logs:

TcpOutputProc - 'sslCertPath' deprecated; use 'clientCert' instead

Please suggest how to resolve this. Could this be the reason my device is not reporting to Splunk?

TcpOutputProc - Cooked connection to ip=x.x.x.x timed out
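The deprecation warning is only about a renamed setting; a sketch of the outputs.conf change on the forwarder, with a placeholder group name and certificate path:

```
[tcpout:my_indexers]           # placeholder output group
# old, deprecated setting:
# sslCertPath = /opt/splunkforwarder/etc/auth/client.pem
# new setting name, same value:
clientCert = /opt/splunkforwarder/etc/auth/client.pem
```

The "Cooked connection ... timed out" message is a separate connectivity symptom (network path or receiver availability), not a consequence of the deprecation warning itself.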
Hello, I am running the following search, which works as it should. What I am trying to build off of it is a way to add a timechart to the search to see daily usage over 2 weeks.

| `kva_tstats_switcher("tstats sum(RootObject.bc) as total_bytes from datamodel=indexed_event_counts_hourly where [| tstats count where index=* source=/p01/data/syslogs* by sourcetype | fields - count | rename sourcetype as RootObject.st | return 1000 RootObject.st] by RootObject.st")`
| rename RootObject.* as *
| sort 100 - total_bytes
| eval total_bytes=round(total_bytes/1073741824,1)
| rename total_bytes as total_gb
| rename st as sourcetype
| addcoltotals labelfield=Total label=Total_Sum
| sort - Total
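A sketch of one way to get a daily trend, assuming the `kva_tstats_switcher` macro will pass through a by-clause that includes _time (an assumption about the macro, not something confirmed here): split the tstats by _time with a 1-day span, then pivot with timechart.

```
| `kva_tstats_switcher("tstats sum(RootObject.bc) as total_bytes from datamodel=indexed_event_counts_hourly where <same subsearch as above> by RootObject.st, _time span=1d")`
| rename RootObject.* as *
| eval total_gb=round(total_bytes/1073741824,1)
| timechart span=1d limit=20 sum(total_gb) by st
```

Run it over a 2-week time range to get one column per day and one series per sourcetype.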
Hello all, I need to get the total of each date column, and create a new column showing the date column's value based on the value of "PostingDate". I'm trying addtotals for this but I can't get my output. Many thanks in advance.

Input:

APPLICATION  Set  LEVEL  PostingDate  2020-05  2020-06  2020-07  2020-08  2020-09
App1         A    2      2020-08      1        6        5        4        4
App2         B    3      2020-08      2        2        2        2        2
App3         C    4      2020-08      1        1        1        1        1
App4         D    5      2020-08      4        8        8        10       7

Output:

APPLICATION  Set  LEVEL  PostingDate  2020-05  2020-06  2020-07  2020-08  2020-09  Latest
App1         A    2      2020-08      1        6        5        4        4        4
App2         B    3      2020-08      2        2        2        2        2        2
App3         C    4      2020-08      1        1        1        1        1        1
App4         D    5      2020-08      4        8        8        10       7        10
                                      8        17       16       17       14       17
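A sketch of one approach, assuming the month columns all start with "20": use foreach to copy the column whose name equals PostingDate into a new Latest field, then addcoltotals for the totals row.

```
| foreach 20*
    [ eval Latest=if("<<FIELD>>"==PostingDate, '<<FIELD>>', Latest) ]
| addcoltotals 20* Latest
```

Inside foreach, `"<<FIELD>>"` is the column name as a string (compared against PostingDate) and `'<<FIELD>>'` is that column's value, so Latest ends up holding the value from the matching month column.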
Syslog data from my Fortinet firewall is not being parsed out correctly. I have noticed that there are multiple formats of messages. This first format parses out correctly:

... policyid=474 sessionid=3929476361 user="FRED" group="RegularSupport.Grp" srcip=10.120.2.26 ...

These do not (specifically, the user field is not populated):

... policyid=441 sessionid=3929476369 user="BARNEY" srcip=10.120.36.105 ... (missing group after the user field)
... policyid=471 sessionid=3929476336 user="BETTY" group="TL-AVP-SVP.Grp" srcip=10.120.2.128 ... (has "-" in the text for group)
... policyid=103 sessionid=3929476142 user="WILMA" group="Wkstns_SSLVPN_PD.Grp" srcip=172.24.1.18 ... (has "_" in the text for group)

I tried to do a field extraction on one of the different events; that solved the issue for the new event, but the original event no longer works. I assume that there is a regex somewhere that parses this out, but I cannot find it. My question is: where do I go to find out where it is, so I can hopefully generate one that works? Scott
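A sketch of a single search-time extraction that tolerates all four shapes above, with field names taken from the samples:

```
| rex "user=\"(?<user>[^\"]+)\"(?:\s+group=\"(?<group>[^\"]+)\")?"
```

The `(?: ... )?` makes the group key optional, and `[^\"]+` accepts "-", "_" and "." inside the quoted value. To locate the existing extraction, `$SPLUNK_HOME/bin/splunk btool props list <your_sourcetype> --debug` shows which props.conf/transforms.conf stanzas (and from which app) apply to the sourcetype.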
Hi team,
We have too many indexes created, and we are now planning to have different retention durations based on sourcetypes. We can't go with one index per sourcetype, which would impact operations and the many teams who are searching for data.
Is there any way to do retention at the sourcetype level? Thanks in advance.
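For reference, retention in indexes.conf is configured per index, not per sourcetype; a sketch of the per-index setting, with a placeholder index name:

```
[my_index]                         # placeholder index name
frozenTimePeriodInSecs = 7776000   # ~90 days; buckets older than this are frozen (deleted by default)
```

Because the setting applies to whole buckets in an index, sourcetypes that need different retention generally have to land in different indexes.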
I have a drilldown in my dashboard. When I select any choice from the drilldown, two other panels (reports) appear. If I then select another choice from that drilldown, those two panels are still showing, so I want those panels to hide. Whenever I select a choice from the drilldown, the previously opened panels should disappear.
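A sketch in Simple XML, with placeholder token and input names: gate the panels on a token via `depends`, and unset that token in the input's `<change>` block so the panels disappear the moment a new choice is made, before the new selection sets the token again.

```xml
<input type="dropdown" token="choice">
  <change>
    <!-- hide the dependent panels, then re-show them for the new choice -->
    <unset token="show_panels"></unset>
    <set token="show_panels">true</set>
  </change>
</input>
<panel depends="$show_panels$">
  <!-- panel content here -->
</panel>
```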
Hey splunksters,

Just curious if anyone has had success getting secure syslog over TCP port 6514. The SafeNet appliance is supposed to send data to the indexer, which is being treated like the "syslog" server. I have tried using my own certificates and carefully pointing the various inputs, web, and server.conf files like this: https://wiki.splunk.com/Community:SplunkWeb_SSL_SelfSignedCert_NewRootCA and like this: https://community.splunk.com/t5/Getting-Data-In/How-to-configure-my-splunk-app-to-get-data-over-SSL/td-p/85793

Through playing with the configuration stanzas, I am no longer getting any splunkd errors. However, the INFO field (in splunkd) provides these messages:

IPv4 port 6514 is reserved for raw input (SSL)
IPv4 port 6514 is reserved for splunk 2 splunk
IPv4 port 6514 will negotiate s2s protocol level 4
creating raw acceptor for IPv4 port 6514 with SSL

The server IS listening on port 6514, but Wireshark does not show anything coming in, or any flags for that port.

So, I'm wondering if I need to allow client authentication? Do I have to use the certificates from the SafeNet side instead? They have sent over 3 certificates (KeySecure client certificate and PKI CA certificate/certificate chain). If so, how do I import/install their certificates and apply them in the .confs?

Thanks!
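For reference, a sketch of the usual inputs.conf shape for raw syslog over SSL, with placeholder paths (this does not settle the client-authentication question, which depends on whether the appliance presents its client certificate):

```
[tcp-ssl://6514]
sourcetype = syslog

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem   # placeholder path to your server cert
sslPassword = <your_key_password>
requireClientCert = false   # set to true only for mutual TLS with the appliance's client cert
```

If `requireClientCert = true`, the CA that signed the appliance's client certificate has to be trusted by the indexer, which is where the KeySecure client certificate and PKI CA chain they sent would come in.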
I want to integrate Splunk with ServiceNow so that an incident gets created for every alert triggered in Splunk. Can someone suggest steps to achieve this?
I am unable to access the Splunk UF previous-releases URL for downloads: https://www.splunk.com/en_us/download/previous-releases/universalforwarder.html
Please suggest whether a different URL is available to download the older UF versions (7.3.x).
I have begun to accumulate some reference information about my company's AWS environment based on a bunch of queries: things like what accounts and VPCs we have, and when they were first seen (among other info). I've been happily accumulating this data into lookup tables, but now I realize that users on another Search Head Cluster would benefit from what I am doing on my SHC (which is reserved for Splunk ES). Lookup tables don't cut it anymore, since they are maintained on the SHC, so their data is not available to the other SHC. Is there a best practice for maintaining such data so that it can be accessed from 2+ SHCs?

Some solutions that I can think of:

Use a summary index. This seems less than ideal because I am shooting for current state including some past info, so a summary index would probably involve rewriting the current state of the tracked objects. It would not be the worst thing in the world to rewrite a few thousand entries daily, but I feel like an updatable source is more sensible.

Just build all of the KOs in each environment. This incurs the cost of maintaining all KOs in each environment.

Are there other ways to approach what I want? (I'm really hoping that there is an answer like "you can make a KV store on the indexer".)
Hello,
We are having some issues finalizing the installation of our Splunk environment. We have 2 Linux servers: 1 Search Head and 1 Indexer as a search peer. We had just finished setting up the search peer in "Distributed search", so we tried to run the search "index=_internal sourcetype=splunkd" over the last 60 minutes, but it only returned logs from the Indexer.
But then we realized that TailReader-0 on the Search Head was reporting the error "The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.", along with related messages such as:
08-19-2020 12:46:50.607 +0200 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
This is weird, because we configured outputs.conf on the Search Head to send data to the Indexer, and we configured inputs.conf on the Indexer to receive data, so we are not sure what's wrong.
outputs.conf on the SH:
[tcpout]
defaultGroup=indexer
[tcpout:indexer]
server=indexer_hostname:9997
inputs.conf on the IDX:
[splunktcp://9997]
Both servers have been restarted. I guess the queues are full because the Search Head can't send the data, but why?
The port 9997 is open and the connection from SH to IDX is fine. Also, we don't have any forwarder or data input configured, so it should not be because of a sudden burst of incoming data.
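For reference, a sketch of how the forwarding link can be checked from the Search Head's CLI (assuming a standard install path): `list forward-server` distinguishes connections that are actually established from ones that are merely configured.

```
$SPLUNK_HOME/bin/splunk list forward-server
# A healthy setup lists the indexer under "Active forwards:";
# if it only appears under "Configured but inactive forwards:",
# the TCP/SSL session to indexer_hostname:9997 is not being established.
```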
We restarted the Search Head after this, and now we are not able to run searches anymore; all searches return an error, and the job inspector says "This search has encountered a fatal error and has been marked as zombied".
Could it be a performance issue? Our servers have only 4 CPUs and 12 GB of RAM each. Do we need more CPU to solve these issues?
Thank you very much!
Does anyone have a cheat sheet for btool to help newbies? Here is my version of a btool cheat sheet:

splunk btool <conf_file_prefix> <sub-cmd> <context> --debug "%search string%"
splunk show config <config file name> | grep -v "system\/default"
Step 1.
splunk btool inputs list --debug "%search string%" >> /tmp/splunk_inputs.txt
Step 2.
Import into excel using space as a separator.
Step 3. Use Excel's filter feature to look for the settings.

Explanation:
<conf_file_prefix>: props, inputs, outputs, transforms
<sub-cmd>: list, display, user, dir
<context>: --app=search
"%search string%": input the search you're looking for

I'd prefer piping the command to the "less" command.

Splunk documents:
https://docs.splunk.com/Documentation/Splunk/8.0.5/Troubleshooting/CommandlinetoolsforusewithSupport#btool
https://docs.splunk.com/Documentation/Splunk/8.0.5/Troubleshooting/Usebtooltotroubleshootconfigurati...
https://docs.splunk.com/Documentation/Splunk/8.0.5/Troubleshooting/CommandlinetoolsforusewithSupport

External site: https://splunkonbigdata.com/2018/10/03/splunk-btool/

Thanks, everyone who replied. I've consolidated the information into the top of the page.
My Enterprise Trial license has expired. How can I log into Splunk Web as the admin user? I don't know the username and password.
---- Below are the steps to switch licensing ----
You can change from the Enterprise Trial license to a Free license at any time. To switch licenses:
Log in to Splunk Web as a user in the admin role
Select Settings > Licensing
Click Change License Group
Select Free license
Click Save
You are prompted to restart
If your Enterprise Trial license has expired, use the above procedure except that you can only log into Splunk Web as the admin user. No other credentials will work.
Hello, I am trying to combine a couple of fields' data, separated by a dash. I tried a few options but could not get the expected output. My query is:

index=test sourcetype="test-abc" ("enter start()")
| rename job_id as JOB_ID
| stats earliest(_time) AS Earliest by JOB_ID
| eval FirstEvent=strftime(Earliest,"%b %d, %Y %H:%M:%S")
| eval JOB_ID_STR=tostring(JOB_ID)
| eval JOB-ID-WITH-TIME=printf("%s%z", JOB_ID_STR,FirstEvent)

In the above query, JOB_ID is numerical data of 4 digits, and FirstEvent is a string holding the time of that event. Ex: JOB_ID = 9000 and FirstEvent = Jul 07, 2020 04:56:43. Using the above query with the printf function, JOB-ID-WITH-TIME is returned as 9000Jul 07, 2020 04:56:43. I want the output to be like 9000-Jul 07, 2020 04:56:43 (a dash between JOB_ID and FirstEvent). How do I do it? Thanks in advance for your time!
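A sketch of the fix: put a literal dash between the two `%s` conversions (note that a field name containing dashes would need quoting in eval, so an underscore name is used here instead).

```
| eval JOB_ID_WITH_TIME=printf("%s-%s", JOB_ID_STR, FirstEvent)
```

Plain concatenation works just as well: `| eval JOB_ID_WITH_TIME=JOB_ID_STR."-".FirstEvent`.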
Hi, I have seen a few posts on this subject, but none seem to fix my issue. I am trying to calculate the difference between two date/time stamps.

| eval CompleteDate=if(isnull(CompleteDate) OR len(CompleteDate)==0,strftime(now(),"%Y-%m-%d %H:%M:%S:%7Q"),CompleteeDate)
| eval Start = strptime(AwaitingResponseDate,"%Y-%m-%d %H:%M:%S:%7Q")
| eval End = strptime(CompleteDate,"%Y-%m-%d %H:%M:%S:%7Q")
| eval WaitTime = Start-End

The issue seems to be that the Start field is empty when I add it to a table; however, the End time works. The only difference between Start and End is that End is being set by the eval/if statement for CompleteDate, because all are null. Start/AwaitingResponseDate is an auto-extracted field. The date/time format is the same for each field.

This is an example of the AwaitingResponseDate value: 2020-07-20 18:35:15.0000000
This is an example of the inserted CompleteDate field from the same record: 2020-08-19 09:19:53:0000000

Any help is certainly appreciated.
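Two things stand out in the sample values: AwaitingResponseDate uses a "." before the subseconds (18:35:15.0000000) while the strptime format string uses ":", which would make strptime fail and leave Start empty; and Start-End would come out negative. A sketch of corrected evals, keeping the format tokens from the original query:

```
| eval Start = strptime(AwaitingResponseDate, "%Y-%m-%d %H:%M:%S.%7Q")
| eval End   = strptime(CompleteDate, "%Y-%m-%d %H:%M:%S:%7Q")
| eval WaitTime = End - Start
```

WaitTime then comes out in seconds, which `tostring(WaitTime, "duration")` can render as HH:MM:SS if needed.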
Hi everyone,
Can anyone help me out with this?
I have a field named Request_URL which is different each time.
Below are some examples of my Request_URL:
https://xyz/api/connections/c1d30603ddf0
https://yte/api/flow/groups/314e8fead333/controller-services
https://tyu/api/services/968d06b5666b
https://hju/api/processors/b5f990b529f4/run-status
I want to extract the "c1d30603ddf0", "b5f990b529f4", and "314e8fead333" portions from every Request_URL, as Request_URL is different for each one.
Can someone guide me with the regular expression for it in Splunk?
Thanks in advance!
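A sketch of one approach, assuming the ID is always a 12-character lowercase hex path segment (true for all four examples above); the output field name `extracted_id` is a placeholder:

```
| rex field=Request_URL "/(?<extracted_id>[0-9a-f]{12})(?:/|$)"
```

The `(?:/|$)` anchor ends the match at a following slash or the end of the URL, so trailing segments like "controller-services" or "run-status" are excluded, and non-hex segments like "connections" or "processors" can't match.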