All Topics

Hi all, I need your help with the query below. I use this query to get the output shown underneath.

Query:

index=nw_syslog | rex field=_raw "neighbor\s(?<Alarm>[^\s]+)\s(?<Status>[^\s]+)" | stats max(_time) as Time latest(Status) AS Status count by nodelabel Alarm

Output:

nodelabel Alarm Time Downtime Status count
CMDLA 10.207.31.222 2020-07-13 15:18:55 00:03:00 UP 2
NGQIT 10.201.68.17 2020-07-13 15:06:35 00:15:19 DOWN 6
EGCAI 158.29.241.86 2020-07-13 14:48:33 00:33:21 UP 2
MXMXC 10.253.208.70 2020-07-12 14:48:03 1+00:33:51 UP 1

Problem: I want output only for the following conditions:
> All DOWN rows must show.
> UP rows must show only when count is greater than 2.

Please help me with the comparison search for these conditions.
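A possible approach (a sketch, not tested against your data): filter the statistics table after the `stats` with a `where` clause that combines both conditions:

```
index=nw_syslog
| rex field=_raw "neighbor\s(?<Alarm>[^\s]+)\s(?<Status>[^\s]+)"
| stats max(_time) as Time latest(Status) AS Status count by nodelabel Alarm
| where Status="DOWN" OR (Status="UP" AND count>2)
```

With the sample output above, this would keep the NGQIT row (DOWN) and drop the UP rows whose count is 2 or less.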
I have set up TA-ms-loganalytics on my Splunk Enterprise instance and configured the inputs, with start_date set to 08/04/2020 00:00:00. Current data (13/07/2020) is flowing in fine, but the count is very low or zero for dates in the past month, even though I validated that the events are present in Azure for those dates. Below is my inputs.conf:

[log_analytics://SourceLogs1_Backlog]
application_id = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
application_key = **************************
event_delay_lag_time = 15
index = myindex
sourcetype = mysourcetype
interval = 300
log_analytics_query = AuditLogs | where ResourceGroup != ""
resource_group = AAAA-BBB-CC
start_date = 08/04/2020 00:00:00
subscription_id = XXXXXXX-XXXXXX-XXXXX-XXXX-XXXXX
tenant_id = XXXXXXX-XXXXXX-XXXXX-XXXX-XXXXX
workspace_id = XXXXXXX-XXXXXX-XXXXX-XXXX-XXXXX
disabled = 0

[log_analytics://SourceLogs2_Backlog]
application_id = XXXXXXXXXXXXXXXXXXXXXXXXXXX
application_key = ***************************************
event_delay_lag_time = 15
index = myindex
sourcetype = mysourcetype
interval = 300
log_analytics_query = AzureDiagnostics | where ResourceGroup != ""
resource_group = AAAA-BBB-CC
start_date = 08/04/2020 00:00:00
subscription_id = XXXXXXX-XXXXXX-XXXXX-XXXX-XXXXX
tenant_id = XXXXXXX-XXXXXX-XXXXX-XXXX-XXXXX
workspace_id = XXXXXXX-XXXXXX-XXXXX-XXXX-XXXXX
disabled = 0
Hi, we are using Spark apps to ingest data into Splunk, following the documentation at https://dev.splunk.com/enterprise/docs/java/sdk-java/howtousesdkjava/howtogetdatasdkjava. We are planning to use attachWith. Is there another or better way to achieve this? There are a lot of examples in Java. Is there better documentation or an example for ingesting bulk data (in TBs) via Spark apps?
Hi, I'm after suggestions on how best to approach this problem. I want to track over time how often I am seeing a MAC address (src_mac), categorised as:

first time: never seen before
daily: seen once per day for the last 14 days
weekly: seen at least once per week for the last 8 weeks
occasionally: seen before but not matching any of the above

I then want to timechart this on a day-by-day basis.
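One possible starting point (a rough sketch; the index name `netlogs` is hypothetical, and this does not cover the weekly category or true first-time detection, which typically needs a persistent lookup of previously seen addresses): count distinct active days per MAC over the last 14 days and categorise from that count.

```
index=netlogs earliest=-14d
| bin _time span=1d
| stats dc(_time) as active_days by src_mac
| eval category=case(active_days>=14, "daily",
                     active_days>1,   "occasionally",
                     true(),          "possibly first time")
```

For the day-by-day timechart, one design option is a scheduled search that maintains a KV store lookup of first-seen/last-seen dates per src_mac, so each day's categorisation can compare against full history rather than a fixed search window.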
Hi Splunk Support team and community, I recently downloaded Splunk Enterprise and installed it on a fresh installation of Windows 10 in a VM (the network type is host-only). Just a few minutes after the installation completed, there was a notification that my Splunk installation licence had expired. Is there any prerequisite for the free 60-day trial that was not mentioned in the installation guide, or do I have to do something else from this point to get the license? Please help, and thank you. Juliogalak
Hi, I send Splunk a value, for example x=1; after 10 milliseconds I send x=2, and so on. When I search for x, the event viewer shows a time format with milliseconds, but it is always .000; only the seconds increase. How do I change Splunk's timestamps to show the real milliseconds and not .000? Thanks.
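Assuming the raw events actually contain a sub-second timestamp, a sketch of a props.conf timestamp configuration (the sourcetype name and format string are hypothetical; adjust `TIME_FORMAT` to match your events, where `%3N` matches milliseconds):

```
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 32
```

Note that if the events themselves carry no millisecond component (or events are timestamped at receive time with second granularity), Splunk has nothing finer to parse and _time will stay at .000.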
A user complains that the following query is not returning any values in Splunk:

dbquery wmsewprd "select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID from SYS_CODE_TYPE"

0 events. Time range picker: All time.

When he runs the same query at the database end, it returns results. This is the query he runs on the database:

select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID from wmsew.SYS_CODE_TYPE;

Why is it not returning any results when we run it in Splunk?
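One thing worth checking (a guess based on the difference between the two queries shown: the working database query qualifies the table with the `wmsew.` schema prefix, while the Splunk query does not): the database connection Splunk uses may have a different default schema than the user's own session, so qualify the table name the same way in Splunk:

```
| dbquery wmsewprd "select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID from wmsew.SYS_CODE_TYPE"
```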
Hello, I would like to investigate the login behaviour of users. I use this search: I receive the following example log: The "abstract" function creates the field "Kontoname", which contains the following values: The values "Machine01p", "Machine02p", and "Machine03p" are duplicate information. How can I remove or exclude these values? Many thanks for the support!
I have a few dashboards in Splunk 7.1.4 for a client whose data source is their Jira tool. Both Splunk and Jira use the same time zone (GMT). However, when I choose a date or date-and-time range from the time picker, the value passed to the dashboard is off by around five to five and a half hours. For example, I chose May 15 00:00 to June 14 23:59 in the date range picker, and the values passed to the dashboard were 1589481000 and 1592159400, whose date values are:

1589481000: Thursday, May 14, 2020 6:30:00 PM
1592159400: Sunday, June 14, 2020 6:30:00 PM

Is there a reason why Splunk is passing a different date/time than my chosen values, and a way to fix it?
I have the query below, but I don't want the services displayed like this. How can I get the names of the services visualized as the column headers, with the "running" status showing under each service, and the server name in the far-left column? I want it laid out with the services across the top and the servers down the left, so I can see servers, services, and the status of all my servers.
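Since the original query is not shown, here is a generic sketch with hypothetical field names (`server`, `service`, `status`): the `chart` command pivots one field's values into columns, which produces exactly this layout — servers as rows, one column per service, status as the cell value.

```
index=myindex sourcetype=service_status
| chart latest(status) over server by service
```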
Using a Universal Forwarder as an intermediate forwarder for source Universal Forwarders can cause events to be merged into one event at random, and can cause a permanently blocked tcpout queue on the Intermediate Universal Forwarder (IUF).

There are a few scenarios where the merging can happen. Assume there is more than one IUF and more than one indexer. Say a source UF partially reads an event and gets restarted. The partial event is sent to IUF1, which immediately sends it to indexer1; the incomplete event now sits in indexer1's parsing queue. After the restart, the source UF sends the rest of the partially read event, plus a few other complete events, to IUF2 and indexer2. The source UF then switches back to IUF1 and starts sending events. The moment IUF1 selects indexer1, indexer1 merges the previously saved partial event into a random new event from the same source file of the source UF. There are other scenarios where a partial event waiting in an indexer's parsing queue is merged with some random event from the same file.

The other problem with having an Intermediate Universal Forwarder load-balance Universal Forwarders is a permanently blocked tcpout queue. The following slides explain how this can occur: https://conf.splunk.com/files/2019/slides/FN1570.pdf
Hi @isoutamo, when I run the following query in verbose mode it gives me results, but not in fast mode:

index=symantec sourcetype=sep12:scan status=completed | stats count

A dashboard panel uses fast mode. What modification do I need to make to get results in fast mode? Regards, Rahul
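One workaround worth trying (a sketch; whether it helps depends on where `status` comes from): fast mode skips field discovery and only extracts fields the search explicitly requires, so if `status` is produced by an automatic lookup or calculated field, naming it with `fields` can force the extraction:

```
index=symantec sourcetype=sep12:scan status=completed
| fields status
| stats count
```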
I am getting the error message below on a new indexer that I recently added to a cluster (which previously had two indexers):

Search peer NEW_INDEXER has the following message: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.

Checking disk space on this indexer, the directory has already filled to 24 GB, whereas on the old indexers one uses 12 GB and the other 14 GB. Why is there so much difference in the disk space used for this directory between the indexers? Also, please advise how this can be fixed (other than just extending the directory space, which I have already asked the storage team to do).
I get the error below when I search some indexes:

'asset_lookup_by_cidr' KV Store lookup table is empty or has not yet been replicated to the search peer (path used is: /opt/splunk/var/run/searchpeers/A2557FA9-9D57-4231-A48D-82E29327A675-1594539930/kvstore_s_SA-IdeRjww0FotymhlCIaS1cqkc05a_assetsUsLDHpCAlNCPKOnjIAACK5z5).
In a Splunk cluster, all the indexers are generating decryption failure errors in the splunkd (_internal) logs:

Crypto - Decryption operation failed: AES-GCM Decryption failed!
AesGcm - AES-GCM Decryption failed!

What could be the root cause, and what is the solution?
I want to move data from hot/warm buckets to colddb (as that is ultimately in a different location). I checked the indexes.conf definition:

maxHotSpanSecs = <positive integer>
* Upper bound of timespan of hot/warm buckets, in seconds.

I tried changing this setting, but I don't see data being moved to the new location (colddb) after pushing the configuration.

I found the question below on the community, but it is from 2014, so I want to confirm before applying this to production:
https://community.splunk.com/t5/Getting-Data-In/How-send-indexed-data-older-than-3-months-to-colddb-monthly/td-p/133530

maxHotSpanSecs = 86400
maxHotBuckets = 3
maxWarmDBCount = 30

What configuration should I apply to achieve this? Will Splunk automatically move buckets to colddb on restart, or do we need to perform manual steps?
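For reference, warm-to-cold rolling is driven by the warm bucket count or home path size, not by `maxHotSpanSecs` (which only bounds the timespan a single hot bucket may cover). A sketch of the relevant indexes.conf settings (the index name and values are illustrative, not a recommendation):

```
[myindex]
# roll the oldest warm bucket to coldPath once more than 30 warm buckets exist
maxWarmDBCount = 30
# alternatively, roll warm buckets to cold when hot+warm storage exceeds this size (MB)
homePath.maxDataSizeMB = 100000
```

There is no setting that moves buckets to cold on a fixed age schedule; age-based behavior falls out indirectly, because older buckets become the oldest warm buckets and roll first.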
Hello Splunkers, I have a scenario where I am struggling to come up with a search query, and I would like your expert advice. I have two database tables, A and B, which I have ingested as two different sources in my Splunk instance.

Table A has job data with fields like JobID, JobName, StartTime, EndTime.
Table B has job execution details like JobID, AgentName, JobType, JobDate, JobEndHour.

Certain jobs (Table A) take a long time to finish, and the details of what was going on while a long-running job was executing can be found in Table B. To get data from Table B, we first need to find which agent (AgentName) was handling the job (using JobID, StartTime, EndTime); once we have the agent, we have to search Table B again for all the jobs that agent handled during those hours (StartTime to EndTime), including the job in question. Both tables have JobID as a common field. Any help or pointers are highly appreciated. Thanks.
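A possible sketch of the second step (the sourcetype names `tableA`/`tableB` and the JobID value are placeholders for your actual sources): use a subsearch to resolve the agent for the job in question, then list everything that agent handled.

```
sourcetype=tableB
    [ search sourcetype=tableB JobID="J123" | head 1 | fields AgentName ]
| table JobID AgentName JobType JobDate JobEndHour
```

The subsearch returns `AgentName="..."` as a filter for the outer search. You would still add a time constraint on JobDate/JobEndHour matching the StartTime–EndTime window taken from Table A, since the question is about overlapping work during that interval.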
I'm calling a REST API using curl on a UF to collect data from a remote DataPower appliance; the output is in JSON format and is written to a flat file that Splunk ingests and indexes. The JSON data looks like this (this snippet represents one event ingested by Splunk with three classes/objects cited in the "ObjectStatus" array; in reality, there can be dozens and dozens of classes/objects within the array): { "_links" : { "self" : {"href" : "/mgmt/status/default/ObjectStatus"}, "doc" : {"href" : "/mgmt/docs/status/ObjectStatus"}}, "ObjectStatus" : [{ "Class" : "DNSNameService", "OpState" : "up", "AdminState" : "enabled", "Name" : "dns", "EventCode" : "0x00000000", "ErrorCode" : "", "ConfigState" : "saved"}, { "Class" : "CRLFetch", "OpState" : "down", "AdminState" : "enabled", "Name" : "crl", "EventCode" : "0x00360010", "ErrorCode" : "No CRLs configured", "ConfigState" : "saved"}, { "Class" : "Statistics", "OpState" : "up", "AdminState" : "enabled", "Name" : "statistics", "EventCode" : "0x00000000", "ErrorCode" : "", "ConfigState" : "saved"}]}   I'm using a custom sourcetype to process the events in Splunk; props.conf looks like this (installed on both the UF and my indexers):     [dp_json]     INDEXED_EXTRACTIONS = json     KV_MODE = none Splunk appears to be processing the events correctly, as the following fields are present (and match up with the expected values):     ObjectStatus{}.AdminState     ObjectStatus{}.Class     ObjectStatus{}.ConfigState     ObjectStatus{}.ErrorCode     ObjectStatus{}.EventCode     ObjectStatus{}.Name     ObjectStatus{}.OpState Here's my dilemma. I would like to identify objects in a particular state. For example:  I would like to know which objects in the array have ObjectStatus{}.OpState equal to "down", with the ObjectStatus{}.Class and ObjectStatus{}.OpState returned for each object that matches. I've tried a search query such as this...     
sourcetype=dp_json index=main "ObjectStatus{}.OpState"="down" | table "ObjectStatus{}.Class", "ObjectStatus{}.OpState" ...but this returns every Class from each event, regardless of OpState being "up" or "down". What adjustments are required in order to get the output I'm looking for?
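A common SPL pattern for filtering inside a multivalue JSON array is to zip the parallel fields together, expand to one row per array element, and only then filter. A sketch against the fields shown above:

```
sourcetype=dp_json index=main "ObjectStatus{}.OpState"="down"
| eval pair=mvzip('ObjectStatus{}.Class', 'ObjectStatus{}.OpState')
| mvexpand pair
| eval Class=mvindex(split(pair,","),0), OpState=mvindex(split(pair,","),1)
| where OpState="down"
| table Class OpState
```

The reason the original `table` returned every Class is that the array fields are multivalue: the base search matches any event containing at least one "down" element, and `table` then shows all values of the multivalue field, up and down alike. Expanding to one row per element lets `where` test each element individually.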
Does surrounding an SPL query with parentheses make any difference? For example:

(index IN ("indexA*","indexB*") source="sourceA")

versus

index IN ("indexA*","indexB*") source="sourceA"

This is a big query, and I want to know whether adding parentheses makes any difference performance-wise.
Hi all, I know I can get a list of all enabled saved searches with:

| rest count=0 /servicesNS/-/-/saved/searches | search disabled=0 | table title

However, I want to list all enabled saved searches from all apps which are NOT "correlation searches". Any idea how to implement such a query?
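Assuming the Enterprise Security convention of flagging correlation searches with the `action.correlationsearch.enabled` attribute on the saved search, a sketch:

```
| rest count=0 /servicesNS/-/-/saved/searches
| search disabled=0
| where isnull('action.correlationsearch.enabled') OR 'action.correlationsearch.enabled'!=1
| table title eai:acl.app
```

The `isnull` test keeps saved searches that lack the attribute entirely, which is the normal case for non-ES searches.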