All Topics


When configuring a collection, "date" and "number" are both options. I assumed that "date" would be the correct way to store a date, but it only worked for me when I used "number":

| inputlookup my_kv_lookup where my_time < [ makeresults | eval rel_time = relative_time(now(), "-1d@d") | return $rel_time ]

When I used "date", the same filter worked, but only after returning the entire collection:

| inputlookup my_kv_lookup | where my_time < [ makeresults | eval rel_time = relative_time(now(), "-1d@d") | return $rel_time ]

Is there a way to filter a value stored in a "date" field? Did I make a mistake in what values I put into the "my_time" field (epoch times straight from _time)? Does it need to be accelerated?
I have two indexes that have gone to a really high day age, and I have a feeling a few of the PCs have had issues and gone back to the default date. I've looked around and can't find an easy way to work out which events or hosts caused it; I have come close, but I want something that says "here is the smoking gun." Cheers for any help, as per normal.
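One way to hunt for the smoking gun is to compare event time with index time: events whose _time sits far from when they were actually indexed are usually the ones with a reset clock. A minimal sketch (index=your_index and the 30-day threshold are placeholder assumptions, not from the post):

```
index=your_index
| eval age_days = abs(_indextime - _time) / 86400
| where age_days > 30
| stats count, min(_time) as earliest, max(_time) as latest by host, source
| convert ctime(earliest) ctime(latest)
| sort - count
```

Hosts at the top of the result, with earliest/latest far from the real date, are the likely culprits.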
I want to search for events related to a list of IPs that is published at a public URL. How can I reference this URL so my Splunk search can use the list?
Hi, I'm searching for the following: when a host reaches 1 GB of memory consumed per day, fire an alert. This will have to be real-time.
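As a starting point, an alert search could sum a memory metric per host since the start of the day and trigger above the threshold. This is only a sketch: the index, sourcetype, and mem_used_mb field are assumptions that depend entirely on how your hosts report memory:

```
index=os sourcetype=vmstat earliest=@d
| stats sum(mem_used_mb) as mb_used by host
| where mb_used > 1024
```

Saved as an alert that triggers when results are returned, this fires for any host over 1 GB so far today.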
Hi team, I have the below 2 events:

C_BN="[{pmRating:3},{riskOfLoss:9}]"
C_BN="[{sysOverallPerformance:3},{sysOverallPotential:4},{pmRating:5},{compBonusTarget:3}]"

and the rex pattern to extract key and value (delimiter is ":") is "(?P<key>\w+):(?P<value>\w+)". When I run the below query:

sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND PQ=CALIBRATION_SESSION_ADMIN | rex field=C_BN max_match=0 "(?P<key>\w+):(?P<value>\d+)" | stats count by key value

Splunk returns the below table, which is not correct:

key                     value  count
compBonusTarget         3      2
compBonusTarget         4      1
compBonusTarget         5      1
pmRating                3      3
pmRating                4      1
pmRating                5      1
pmRating                9      1
riskOfLoss              3      1
riskOfLoss              9      1
sysOverallPerformance   3      2
sysOverallPerformance   4      1
sysOverallPerformance   5      1
sysOverallPotential     3      2
sysOverallPotential     4      1
sysOverallPotential     5      1

Expected result:

key                     value  count
compBonusTarget         3      1
pmRating                3      1
pmRating                5      1
riskOfLoss              9      1
sysOverallPerformance   3      1
sysOverallPotential     4      1

What is the issue with my query?
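The cross-product happens because rex with max_match=0 puts all keys into one multivalue field and all values into another, and stats count by key value then pairs every key with every value from the same event. The usual fix is to zip the two multivalue fields back into pairs and expand them before counting; a sketch:

```
sourcetype=perf_log_bizx ACT=SAVE_CALIBRATION_TEMPLATE PQ=CALIBRATION_SESSION_ADMIN
| rex field=C_BN max_match=0 "(?P<key>\w+):(?P<value>\d+)"
| eval pair = mvzip(key, value, ":")
| mvexpand pair
| rex field=pair "(?P<key>\w+):(?P<value>\d+)"
| stats count by key, value
```

After mvexpand, each result carries exactly one key:value pair, so the counts match the original events.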
Hi All, how do I get data from the starting day of this week till today, and from the starting day of last week till the same day last week? For example, I have data like:

Date, Transaction_count=100
Date, Transaction_count=120

For 07/02/2020 (mm/dd/yyyy) I want the sum of Transaction_count from 06/28/2020 to 07/02/2020, and also the sum of Transaction_count from 06/21/2020 to 06/25/2020. @All, please help; a quick response is appreciated. @niketn @lspringer @carasso
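One approach is to search from the start of last week, label each event as this_week (start of week to now) or last_week (start of last week to exactly one week ago), drop the in-between days, and sum per label. A sketch, with the index name as a placeholder:

```
index=your_index earliest=-1w@w latest=now
| eval bucket = case(_time >= relative_time(now(), "@w"), "this_week",
                     _time <= relative_time(now(), "-1w"), "last_week")
| where isnotnull(bucket)
| stats sum(Transaction_count) as total_transactions by bucket
```

case() returns null for events between "same day last week" and the start of this week, so the where clause excludes them.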
Hi, I am trying to collect NetFlow data from a Cisco router via Splunk_TA_Stream. I configured streamfwd.conf according to this post: https://docs.splunk.com/Documentation/StreamApp/7.2.0/DeployStreamApp/ConfigureFlowcollector

This is my configuration:

[streamfwd]
logConfig = streamfwdlog.conf
port = 8889
netflowReceiver.0.ip = 192.168.15.253 (this is the Splunk Enterprise server IP)
netflowReceiver.0.port = 9995
netflowReceiver.0.decoder = netflow

I used Wireshark to confirm that data was received on port 9995 by the Splunk server, but no data was indexed, and the search sourcetype=stream:netflow returns nothing. Sorry for my bad English.
Hey All,

What I'm trying to do is build a search query that correlates between fields, as in the example below: where message.anomaly.features{}.anomaly has a true value, I need to output a new field with the corresponding fields below it - 23, location (even one of them would be good for me).

How can I accomplish that?

Thank you,
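Assuming the events are JSON (the exact structure isn't shown in the post, so the paths and field names below are assumptions), one common pattern is to expand the features{} array into separate results and filter on the anomaly flag:

```
... | spath path=message.anomaly.features{} output=feature
| mvexpand feature
| spath input=feature
| where anomaly="true"
| table anomaly, location
```

Each expanded result then carries only the sibling fields of the matching array element.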
Hi everyone, I have some data with a lot of fields. Some fields represent the same data, but with different field names. For example, I have 20 servers reporting their IP, with the IP as part of the field name; the value of the field has no meaning and is insignificant. So the data (before any queries, as it arrives) looks like this:

Server1_1.1.1.1: 2
Server2_2.2.2.2: 10
Server3_3.3.3.3: 1
....
Server20_20.20.20.20: 5

What I would like to do is create a new field called "Server_IP" whose values are all the IP addresses taken from the original field NAMES:

Server_IP: [1.1.1.1, 2.2.2.2, 3.3.3.3, ...., 20.20.20.20]

Many thanks!
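Since the information lives in the field names rather than their values, foreach can iterate over every matching field and strip the ServerN_ prefix, collecting the remainders into one multivalue field. A sketch, assuming all relevant fields match the pattern Server*:

```
... | foreach Server*
    [ eval Server_IP = mvappend(Server_IP, replace("<<FIELD>>", "^Server\d+_", "")) ]
```

Inside the foreach template, "<<FIELD>>" is substituted with the literal field name (e.g. "Server1_1.1.1.1"), so replace() is operating on the name itself, not the field's value.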
Hello, I would like to connect Splunk to an existing database of sites. The sites are mainly on WordPress, but there are other CMSes too; in any case, they are all in one place and everything is on MySQL. I read about connecting to MySQL from outside Splunk here: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-to-take-in-data-from-MySQL/td-p/21697, and if I understand right, it's possible. The idea is eventually to fill these databases with information for the sites, including creating posts, but for now we are talking about the content on the pages, which is pulled from the database. Does anyone have experience with this kind of setup? Is it worth configuring this for the sake of pulling the requested data, or is it easier to hire a programmer and write a parser in PHP/Python?
I have installed TA-ms-loganalytics on my heavy forwarder, and from this add-on I am getting various logs from my Azure workspace into Splunk. Now I am facing a problem where the event count in Splunk differs from what I have in my Azure Log Analytics workspace; this is for Activity Logs.

On the Azure LA workspace, Activity Logs count: 90+
In Splunk, Activity Logs count: only 6-8

Is the add-on breaking at some point, or is there a blocker on the Azure end? Kindly help.
Hi, why does an add-on deployed via the deployer or cluster master not show the GUI setup page in the web interface, even though the TA has a setup.conf? When I deployed the add-on on a single instance, I did find the configuration page for connecting to Symantec. I would like to configure the Symantec TA's connection between the Splunk search head and Symantec so that I can configure adaptive response.
Hi, I set up the Description collection in the app, and saw that a Unicode string was not being decoded in the log. I have the corresponding sourcetype's CHARSET set to AUTO (I also set it to UTF-8, but there was no change). I wonder why this problem occurs and how to deal with it. The current environment is Windows 10, Splunk 8.0.5, and Splunk Add-on for AWS 5.0.1. In addition, in Splunk Web I see the letters as Unicode escape forms when viewed as "raw," but normally encoded when viewed as "list." I wonder why there is a difference in what is shown. (Screenshots: viewed as "raw" vs. viewed as "list".)
Hi @gcusello, when I run the following query it works fine:

|dbquery wmsewprd "select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID from wmsew.SYS_CODE_TYPE"

But when my colleague tries to run the same query, he gets an external command error:

External search command 'dbquery' returned error code 1. Script output = 'There is no such database [abcdef] available.for the user.

Please help.

Regards,
Rahul
Hello, I need the average time spent between login and logout by each user, calculated from the time they logged in to the time they logged out, with the total shown as well. I need a query for this: average time spent on the platform by users. For example: how much time each user spent on work per day.

My query so far:

sourcetype="%forge%" source="/home/amadmin/log/authentication.audit.json" eventName=AM-LOGIN-COMPLETED OR eventName=AM-LOGOUT userId=*
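One way to get per-user session durations is to pair each login with the following logout using transaction, then aggregate per day. A sketch building on the search above (maxpause=12h is an assumed cap on session length; tune it to your environment):

```
sourcetype="%forge%" source="/home/amadmin/log/authentication.audit.json"
    (eventName=AM-LOGIN-COMPLETED OR eventName=AM-LOGOUT) userId=*
| transaction userId startswith="AM-LOGIN-COMPLETED" endswith="AM-LOGOUT" maxpause=12h
| bin _time span=1d
| stats sum(duration) as secs_on_platform by userId, _time
| stats avg(secs_on_platform) as avg_secs_per_day by userId
```

transaction fills the duration field automatically (logout time minus login time), so the final stats gives each user's average daily time on the platform.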
Hi, I am trying to create a table from 2 different sourcetypes. The fields in both sourcetypes are the same but have different values. I need the unique field values in the 1st column, the sourcetype01 values in column 2, and the sourcetype02 values in column 3.

Sourcetype: 01

Field01
elementName: SelectSKU
elementType: transaction
metrics: {
  TotalHtmlPageResponseTime: 78163
  TotalHtmlPageTTFB: 6834
  effective90Percentile: 1246

Field02
elementName: SelectProduct
elementType: transaction
metrics: {
  TotalHtmlPageResponseTime: 187545
  TotalHtmlPageTTFB: 187285
  effective90Percentile: 104

Sourcetype: 02

Field01
elementName: SelectSKU
elementType: transaction
metrics: {
  TotalHtmlPageResponseTime: 59545
  TotalHtmlPageTTFB: 1285
  effective90Percentile: 209

Field02
elementName: SelectProduct
elementType: transaction
metrics: {
  TotalHtmlPageResponseTime: 187545
  TotalHtmlPageTTFB: 187285
  effective90Percentile: 756

Field03
elementName: PDP
elementType: transaction
metrics: {
  TotalHtmlPageResponseTime: 4546
  TotalHtmlPageTTFB: 2455
  effective90Percentile: 256

I want to rename elementName to Transaction and effective90Percentile to 90thPercentile and produce the table below:

Transaction      90th Percentile (Sourcetype01)   90th Percentile (Sourcetype02)
PDP                                               256
SelectSKU        1246                             209
SelectProduct    104                              756

Any advice is appreciated. TIA
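Assuming the events are JSON and using sourcetype01/sourcetype02 as placeholder names for the real sourcetypes, chart can pivot the percentile over elementName split by sourcetype, which yields exactly one column per sourcetype:

```
index=your_index (sourcetype=sourcetype01 OR sourcetype=sourcetype02)
| spath elementName
| spath path=metrics.effective90Percentile output=90thPercentile
| chart latest(90thPercentile) over elementName by sourcetype
| rename elementName as Transaction
```

Transactions that exist in only one sourcetype (like PDP above) simply get an empty cell in the other column.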
Hi, please let me know which data model the below logs should be tagged to:

1) Syslog:
Jun 18 06:25:02 ip-00-0-00-000 start-amazon-cloudwatch-agent[0000]: 2020/06/18 06:25:02 Stopping tail as file no longer exists: /var/log/syslog

2) Antivirus database update logs:
Wed May 27 23:46:53 2020 -> daily database available for download (remote version: 00000)

3) Linux kernel logs:
May 27 09:28:45 ip-00-0-0-000 kernel: [ 0.000000] Initializing cgroup subsys cpuset
I cannot find what is causing this, and the answer that Google always brings back from the community is not very good.

Root Cause(s): The number of extremely lagged searches (1) over the last hour exceeded the red threshold (1) on this Splunk instance
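To find which scheduled search is actually lagging, the scheduler's own logs in _internal can be inspected for the gap between scheduled and dispatch times. A sketch (the 60-second threshold is an arbitrary assumption; the health check's own threshold may differ):

```
index=_internal sourcetype=scheduler status=success
| eval lag_secs = dispatch_time - scheduled_time
| where lag_secs > 60
| stats count, max(lag_secs) as max_lag_secs by savedsearch_name, app
| sort - max_lag_secs
```

The search at the top is the most likely trigger of the health warning; chronic lag usually points to an overloaded scheduler or too many searches in the same cron slot.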
Hi, I have 2 indexers, and with the netstat command I confirm that port 9997 is open. On one of the two indexers, all the incoming connections remain in the SYN_RECV state; on the other, some are in the SYN_RECV state and others are ESTABLISHED. The funny thing is that one IP range gets connected and the other IP range does not. tcpdump is a relative test, because the hosts in the ESTABLISHED state are not reachable by ping, telnet, or a sniffer, yet they deliver their logs to the indexer, the indexer delivers them to the search head, and they are displayed normally. The firewall team says the policies are correct. Given that port 9997 is in the listening state, is this a Splunk configuration problem, or a network policy problem at the transmission level?

tcp 0 0 0.0.0.0:9997 0.0.0.0:* LISTEN
tcp 0 0 172.27.29.71:9997 192.168.71.13:49603 SYN_RECV
tcp 0 0 172.27.29.71:9997 192.168.71.13:49601 SYN_RECV
tcp 0 0 172.27.29.71:9997 192.168.70.17:59856 ESTABLISHED
tcp 315 0 172.27.29.71:9997 192.168.70.16:56015 ESTABLISHED
tcp 0 0 172.27.29.71:9997 192.168.71.12:57122 SYN_RECV
tcp 0 0 172.27.29.71:9997 192.168.70.14:51241 ESTABLISHED
tcp 0 0 172.27.29.71:9997 192.168.71.13:49605 SYN_RECV
tcp 0 0 172.27.29.71:9997 192.168.71.12:57119 SYN_RECV
tcp 3877460 0 172.27.29.71:9997 172.29.4.39:34311 ESTABLISHED
tcp 0 0 172.27.29.71:9997 192.168.71.13:49598 SYN_RECV
tcp 3211190 0 172.27.29.71:9997 192.168.70.12:55205 ESTABLISHED
I just want to know which field name makes more sense to use for segregating log types. For example, we have Linux and Windows logs; for separating the log types in a report or alert, which field should be used: "sourcetype" or "eventtype"? If you can explain it with a valid reason, I would be very glad. Thanks.