All Topics

Hi, we have an issue with an external REST API that works properly 99% of the time, but once in a while it publishes data as "back-dated".

Background: We have configured a Splunk add-on to poll the API every 300 seconds, with a REST API filter (let's say the filter is named "DateCreated") passed as part of the HTTP GET request to indicate the time when an event was generated. If everything worked as expected, this is how we could do the implementation:

Filter: DateCreated > (now - 300s)
Result: returns all events from the last 5-minute period

However, once in a while the API publishes data with a DateCreated that is back-dated by up to multiple hours, so it does not match our initial implementation, and events are missed. We have also confirmed there is no other filter in the API that we could use to work around this issue in 100% of cases.

Potential solution: We have been considering a batch search, e.g. once every 24 hours, fetching all events from the API. This would retrieve all events from the past 24 hours (with high certainty including the back-dated ones) and then process them:

- Ingest events that have not been ingested as part of past API calls
- Drop the ones that can be considered duplicates (already ingested as part of past API calls)

Implementing a batch API call comes with another problem: it generates duplicate events in the index. We want to keep the index clean of duplicate events because of our configured alerting and reporting logic. To avoid duplicate events, we have been considering two options:

1. The add-on would use the KV store, storing a unique identifier for every event during the original API call. The batch API call would then use the store for duplicate detection, dropping the ones already ingested. The issue here is that the KV store grows over time and nobody cleans it. Is there any good way to clean up the KV store periodically, or to set a maximum size so it removes the oldest data automatically? Optimally the cleanup could be performed by the add-on.
2. As part of every batch API call, the add-on would perform REST API calls against the Splunk index where the data is already ingested, parsing the unique identifiers and using them to drop duplicates. Does an add-on have permission to perform Splunk REST API calls natively, without additional credentials? If not, what would be the optimal way of creating and storing account information? Is there any example implementation of an add-on calling the Splunk REST API?

Any other potential implementation ideas? In the end, we want to minimize admin overhead across different Splunk environments performing the exact same API calls, but for different entities. We have multiple environments performing the same activity, so this should be a solution that can be easily deployed and managed for multiple environments. Thanks.
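For the KV store cleanup question, one common approach is a scheduled search that rewrites the collection, keeping only recent entries. A sketch, assuming a lookup definition named dedup_ids backed by the collection and an ingest_epoch field written by the add-on (both names are hypothetical):

```
| inputlookup dedup_ids
| where ingest_epoch > relative_time(now(), "-30d")
| outputlookup dedup_ids
```

outputlookup replaces the collection contents by default, so running this on a schedule effectively caps the retention window at 30 days.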
Hello, I have this query:

index=s098_prod sourcetype=SERVER_PROD SCRIPT_ID=6SW* NOT (name="Logout" OR name="Login" OR name="Reboot")
| dedup sessionnumber
| eval enddatetime=if(isnull(enddatetime), "RUNNING", enddatetime)
| eval Statustext = "From ".startdatetime." To ".enddatetime." on ".extracted_host
| stats latest(rstatus) AS "Status" latest(Statustext) as Statustext by name

I'm trying to calculate the time difference with the same grouping as the stats, but it always returns null.
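A likely cause of the null result is that startdatetime and enddatetime are strings, so arithmetic on them returns null. A sketch that converts them to epoch time first; the timestamp format string is an assumption and must match the actual data:

```
| eval start_epoch=strptime(startdatetime, "%Y-%m-%d %H:%M:%S")
| eval end_epoch=if(enddatetime="RUNNING", now(), strptime(enddatetime, "%Y-%m-%d %H:%M:%S"))
| eval duration=tostring(end_epoch - start_epoch, "duration")
| stats latest(rstatus) AS "Status" latest(Statustext) as Statustext latest(duration) as Duration by name
```

The "RUNNING" branch falls back to now() so open sessions still get a duration instead of null.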
Hi all, I need help formatting an IF statement in my query.

My code:

index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown"
| rex field=eventuei "uei.opennms.org/nodes/node(?<Status>.+)"
| stats max(_time) as Time latest(Status) as Status by nodelabel
| lookup ONMS_nodes.csv nodelabel OUTPUT sitecode
| table sitecode, nodelabel, Status, Time

My output:

sitecode  nodelabel     Status  Time
ABM       ARABMLANCCO1  Up      1/23/2021 14:35
ABM       ARABMLANCUA1  Up      1/23/2021 8:26
ABM       ARABMLANCUA2  Up      1/23/2021 8:25
ABM       ARABMWANRTC1  Up      1/23/2021 8:25
ABM       ARABMLANCUA3  Up      1/23/2021 8:25
ABM       ARABMLANCUA4  Up      1/23/2021 8:25
ABM       ARABMAPNOPT1  Up      1/19/2021 13:37
ZBQ       BRZBQLANCUA1  Up      1/19/2021 13:37

Requirement: I want to list all devices from any sitecode that has at least one nodelabel containing any of these keywords: "*WANRTC*" OR "*LANCCO*" OR "*WLNWLC*" OR "*APNINT*". All other sitecodes should be removed from the list.

In my output, the ABM site has devices matching those keywords, so its complete list should be displayed, whereas ZBQ has no devices matching any of the keywords, so it should be removed. In other words: IF any of ("*WANRTC*" OR "*LANCCO*" OR "*WLNWLC*" OR "*APNINT*") is present in a site's device names, THEN the complete list for that site should be displayed.
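One way to do this without an if() is to flag each sitecode that has at least one matching nodelabel using eventstats, then keep only flagged sites. A sketch, appended to the existing search:

```
| eventstats max(eval(if(match(nodelabel, "WANRTC|LANCCO|WLNWLC|APNINT"), 1, 0))) as has_match by sitecode
| where has_match=1
| fields - has_match
```

eventstats computes the flag per sitecode but leaves every row intact, so the complete device list for a matching site survives the filter.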
Hi all,

I have created a custom search command that needs some preformatted input. To produce it, I always run the same Splunk commands before my command:

... | eval a=b+c | stats count by a | bin ... | my_custom_command(count,a,b,c)

Hence, I have created a macro to wrap all this code, so I only have to call my macro:

... | `my_macro(b,c)`

The problem is that because it is a macro, it does not get the searchbnf.conf description of "my_custom_command". So I would like to edit the code of "my_custom_command" to embed the Splunk commands I always run before it (the eval, stats, and bin commands) before running my own code. Is there a way to do so? If not, is there a way to create a searchbnf entry for a macro?
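On the second question: searchbnf.conf stanzas are designed for commands, but it may be worth adding a stanza for the macro and checking whether your Splunk version's search assistant picks it up. A sketch; the stanza name is an experiment (the documented convention is <name>-command for commands), and whether the assistant surfaces it for a macro is not guaranteed:

```
[my_macro-command]
syntax = `my_macro(<b-field>, <c-field>)`
description = Wraps the eval/stats/bin preprocessing and runs my_custom_command.
usage = public
```

If that does not work, keeping the description in the macro's own definition (the Description field under Settings > Advanced search > Search macros) is a documented fallback, though it is only visible in the macro editor rather than in the search assistant.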
Hi, I would like help with our current problem. We have JSON logs, and we only need to ingest the events that satisfy the following condition:

FolderPath="*windows*" AND FolderPath="*personal*" AND InitiatingProcessSHA1="bbcc123448781q40410bcd6f0a2cc666b52e7abc" AND (FileName="*vendor.exe*" OR FileName="*stellar.exe*") AND InitiatingProcessFileName="Setup64.exe"

{ "AccountName": "jcn001", "AccountSid": "", "ActionType": "ProcessCreated", "DeviceName": "windowhost", "EventCount": 1, "FileName": "Code Helper vendor.exe", "FolderPath": "/private/var/folders/d6/windows/AppTranslocation/personal/d/", "InitiatingProcessCommandLine": "tasklist sample", "InitiatingProcessFileName": "Setup6.exe", "InitiatingProcessFolderPath": "/private/var/folders/d6/test/t/apptranslocation/test/d/visual studio code.app", "InitiatingProcessParentFileName": "Setup64.exe", "InitiatingProcessSHA1": "bbcc123448781q40410bcd6f0a2cc666b52e7abc", "ProcessCommandLine": "tasklist jade /svc", "SHA1": "bbcc123448781q40410bcd6f0a2cc666b52e7abc", "SHA256": "none", "Timestamp": "2021-01-22T20:11:42.103861Z" }

{ "AccountName": "jcn001", "AccountSid": "", "ActionType": "ProcessComplete", "DeviceName": "windowhost", "EventCount": 1, "FileName": "Code Helper stellar.exe", "FolderPath": "/private/var/folders/d6/windows/AppTranslocation/personal/d/", "InitiatingProcessCommandLine": "sample", "InitiatingProcessFileName": "Code Helper (Renderer)", "InitiatingProcessFolderPath": "/private/var/folders/d6/sample/t/apptranslocation", "InitiatingProcessParentFileName": "Code Helper (Renderer)", "InitiatingProcessSHA1": "bbcc123448781q40410bcd6f0a2cc666b52e7abc", "ProcessCommandLine": "\"tasklist jade /svc ", "SHA1": "bbcc123448781q40410bcd6f0a2cc666b52e7abc", "SHA256": "none", "Timestamp": "2021-01-22T20:11:42.103861Z" }

{ "AccountName": "jcn001", "AccountSid": "", "ActionType": "Done", "DeviceName": "windowhost", "EventCount": 1, "FileName": "Code Helper reg.exe", "FolderPath": "/private/var/folders/d6/windows/AppTranslocation/company/d/", "InitiatingProcessCommandLine": "sample", "InitiatingProcessFileName": "Code Helper (Renderer)", "InitiatingProcessFolderPath": "/private/var/folders/d6/word/", "InitiatingProcessParentFileName": "Code Helper (Renderer)", "InitiatingProcessSHA1": "bbcc123448781q40410bcd6f0a2cc666b52e7abc", "ProcessCommandLine": "\"tasklist jade /svc ", "SHA1": "bbcc123448781q40410bcd6f0a2cc666b52e7abc", "SHA256": "none", "Timestamp": "2021-01-22T20:11:42.103861Z" }

In the example, the only event that should be ingested is the first one. Now the client wants us to put the condition into REGEX format so they can put it in a configuration file. Is there a way to achieve that, i.e. converting a query with conditions to a regex?
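One option is a single pattern built from lookaheads, one per AND-condition, which an ingest stage could read from a configuration file. This is a sketch with some assumptions: each event arrives as one raw JSON string, "windows" precedes "personal" inside FolderPath (as in the samples), and matching on the raw text rather than parsed JSON is acceptable.

```python
import re

# Each (?=...) lookahead encodes one AND-condition; the alternation inside
# the FileName lookahead encodes the OR between vendor.exe and stellar.exe.
PATTERN = re.compile(
    r'(?s)'  # let .* cross newlines in pretty-printed events
    r'(?=.*"FolderPath":\s*"[^"]*windows[^"]*personal[^"]*")'
    r'(?=.*"InitiatingProcessSHA1":\s*"bbcc123448781q40410bcd6f0a2cc666b52e7abc")'
    r'(?=.*"FileName":\s*"[^"]*(?:vendor\.exe|stellar\.exe)[^"]*")'
    r'(?=.*"InitiatingProcessFileName":\s*"Setup64\.exe")'
)

def should_ingest(event: str) -> bool:
    """True when the raw JSON event satisfies all four conditions."""
    return PATTERN.search(event) is not None
```

Because each lookahead scans from the start of the event, the order of the fields inside the JSON does not matter; only the windows/personal ordering inside the FolderPath value is assumed.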
Is it possible to rename values from feeds? Let me explain it better: I have open-source feeds, but some of their values are written in different forms. For example, I want to group all malware names under the same field, but the Malware Name values look like this:

NjRat command & control
NjRat
Njrat
NJraat
Njratt
c&c

Is it possible to modify them at indexing time so they all share the same name, NjRat? Then when I analyze them I would have no problem and they would all be grouped. Thanks in advance.
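A search-time alternative is an alias lookup that maps each variant to a canonical name, which is usually easier to maintain than index-time rewrites as feeds change. This sketch assumes a lookup file malware_aliases.csv with columns alias and canonical, and an extracted field malware_name (all three names are hypothetical):

```
| lookup malware_aliases.csv alias AS malware_name OUTPUT canonical
| eval malware_name=coalesce(canonical, malware_name)
| stats count by malware_name
```

If it really has to happen at index time, the equivalent is SEDCMD replacements in props.conf on the indexing tier, but every new variant then requires a config change and restart, whereas the lookup file can be updated in place.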
I need to install the Jira module in Splunk's Python, but it is not getting installed. How do I install a custom Python module in Splunk?
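A common pattern is to avoid installing into Splunk's bundled Python at all: vendor the package into your app (for example, pip install jira --target /opt/splunk/etc/apps/myapp/bin/lib, where the app name and path are examples) and prepend that directory to sys.path at the top of your script. A sketch of the path setup:

```python
import os
import sys

def add_app_lib(script_path, libdir="lib"):
    """Prepend <script dir>/<libdir> to sys.path so vendored packages import first."""
    lib = os.path.join(os.path.dirname(os.path.abspath(script_path)), libdir)
    if lib not in sys.path:
        sys.path.insert(0, lib)
    return lib

# In a real modular input or alert script you would call:
#   add_app_lib(__file__)
#   import jira
```

This keeps the dependency inside the app, so it survives Splunk upgrades and deploys cleanly to other instances along with the app.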
Hey,

I need help setting up the Splunk_TA_nix add-on for multiple hosts. I have a clustered environment, and Splunk_TA_nix is installed on my deployment server. The requirement is to collect metrics from 10 Linux servers. So, do I need to install and configure the add-on on all 10 servers, or can I configure it on the deployment server only and push the configuration to all the servers? If so, how do I configure it on the DS and push it?

This is new to me; can anyone please help me with this?

Thanks,
Dharani
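The usual approach is to place the configured add-on in $SPLUNK_HOME/etc/deployment-apps on the deployment server and map it to the forwarders in serverclass.conf. A sketch; the server class name and host patterns are examples for the 10 Linux hosts:

```
# serverclass.conf on the deployment server
[serverClass:linux_metrics]
whitelist.0 = linuxhost01
whitelist.1 = linuxhost02
# ... one entry per forwarder, or a wildcard such as linuxhost*

[serverClass:linux_metrics:app:Splunk_TA_nix]
restartSplunkd = true
stateOnClient = enabled
```

Each of the 10 servers still needs a universal forwarder installed and pointed at the DS (deploymentclient.conf); after that, the DS pushes Splunk_TA_nix and its enabled inputs automatically.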
Hi all, as an architect I sometimes find myself in environments where the inputs are misconfigured and Splunk servers are receiving traffic directly, for example the search head and indexers receiving traffic directly even though a syslog server/HF is present in the environment. Is there a search with which I can find out, for each sourcetype, which Splunk server is receiving the logs and forwarding them to the indexer layer? This would really help in resolving issues. Thanks.
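One sketch: splunkd's metrics.log records incoming forwarder connections on every receiving instance, so you can map which hosts send directly to which Splunk servers. The field names below match the usual tcpin_connections metrics events, but verify them against your own _internal data:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) as kb by host, hostname
| rename host as receiving_splunk_server, hostname as sending_host
| sort - kb
```

To tie sourcetypes to the instance that forwarded them, a complementary check is `| tstats count where index=* by sourcetype, splunk_server`, which at least shows where each sourcetype lands on the indexer layer.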
The issue we are seeing is: when u_worked_date is "2020-12-09", some of the resulting timestamps are "2020-12-09" but others are "2020-09-12". I do not know what causes this; other dates in December are set as timestamps normally. So I think it may be a bug in the summary index.

Below is my SPL:

index=idx_snow_task_time sourcetype=snow_task_time
| dedup sys_id
| table sys_id u_worked_date time_worked rate_type sys_updated_by task u_task_category u_actual_time user
| eval _time=strptime(u_worked_date,"%Y-%m-%d")
| collect index=idx_summary_snow_task_time_by_worked_date source="Snow Task Time by Worked Date"
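A sketch to narrow this down: compare the parsed _time in the summary index against the original string, which shows whether strptime produced a wrong epoch or whether the date was swapped somewhere around the collect step:

```
index=idx_summary_snow_task_time_by_worked_date source="Snow Task Time by Worked Date"
| eval parsed_date=strftime(_time, "%Y-%m-%d")
| where parsed_date!=u_worked_date
| table _time parsed_date u_worked_date sys_id
```

If this returns the affected events with parsed_date "2020-09-12" but u_worked_date "2020-12-09", the swap happened before or during indexing of the summary event rather than in the eval itself.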
Hi, I have a dashboard where I display data in single value panels. I would like to show trends, but customized. Right now my options are limited to x amount of days, months, seconds, etc. My situation is that I have three weeks of data that are averaged into a single value. In my single value panel I want the current output compared to the % difference from the 3-week average. Below I have the output displayed, and the -20% shows the trend from one week ago. What I want, for example: my data from previous weeks is 1 week ago = 50, 2 weeks ago = 60, 3 weeks ago = 70, which averages out to 60; so instead of the -20% I would like to see about 12% (((67-60)/60)*100 ≈ 11.7). (I use a time picker, so all data is based on that; for example, if I select 'last 60 minutes' at 10 am, the three weeks are based on 9 am - 10 am from 1 week ago, 9 am - 10 am from 2 weeks ago, etc.) Please let me know if you have questions.
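One sketch of the comparison, assuming a base search producing a numeric field value (index and field names are placeholders). The three baseline windows are hardcoded as the matching hour from 1, 2, and 3 weeks ago (168, 336, and 504 hours), which mirrors the 'last 60 minutes' example rather than inheriting the picker dynamically:

```
index=foo
| stats avg(value) as current
| appendcols [ search index=foo earliest=-169h@h latest=-168h@h | stats avg(value) as w1 ]
| appendcols [ search index=foo earliest=-337h@h latest=-336h@h | stats avg(value) as w2 ]
| appendcols [ search index=foo earliest=-505h@h latest=-504h@h | stats avg(value) as w3 ]
| eval baseline=(w1+w2+w3)/3
| eval pct_diff=round((current-baseline)/baseline*100, 1)
| table current baseline pct_diff
```

The main search honors the time picker for "current"; pct_diff can then feed the single value panel in place of the built-in trend indicator.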
Splunk noob here. I want to group our GET endpoints under a single entry. We have the following query:

index=reporting sourcetype=elilogs cf_app_name=endpoint* "Results.Message"="inbound request"
| stats count by "msg.Service.URL"
| rename "msg.Service.URL" as "Endpoint"

The results come out as:

http://endpoint.example.com/sh/bundles 4944
http://endpoint.example.com/sh/bundles/0043005f-a3ce-4f60-8f1d-0a8b076aecdf 3
http://endpoint.example.com/sh/bundles/0067cb65-1de0-4b8e-bdf9-39920f599961 2
http://endpoint.example.com/sh/bundles/008950c2-228c-4871-bab7-50dc01a3297a 2
http://endpoint.example.com/sh/bundles/00c100b8-47ec-4feb-86ae-99f635f8960f 2
http://endpoint.example.com/sh/bundles/00c63a13-2700-440d-b54e-1538db038a1e 2
http://endpoint.example.com/sh/bundles/00e220d1-4f68-487f-ae01-13999811ba31 2
http://endpoint.example.com/sh/bundles/01485473-4b49-4eb8-9a4f-ea5c61f3fe7a 2
http://endpoint.example.com/sh/bundles/0164d5d2-3624-40ca-bf4c-6a3619aead00 2

I want the results with a GUID to be grouped under a single value, so the desired output here would be:

http://endpoint.example.com/sh/bundles 4944 (stays the same)
http://endpoint.example.com/sh/bundles/* 17 (the sum of all the endpoint counts with a GUID)

I am trying a query like the following, without any luck:

| eval msg.Service.URL=case(like(msg.Service.URL, "http://endpoint.example.com/sh/bundles/%"), "http://endpoint.example.com/sh/bundles/*", 1=1, 'msg.Service.URL')
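Two things trip up the attempted eval: a field name containing dots has to be quoted inside expressions (single quotes, as in 'msg.Service.URL'), and the rewrite has to happen before the stats so the counts aggregate on the normalized value. A sketch using replace() with a GUID regex instead of case/like:

```
index=reporting sourcetype=elilogs cf_app_name=endpoint* "Results.Message"="inbound request"
| eval Endpoint=replace('msg.Service.URL', "/sh/bundles/[0-9a-f-]{36}$", "/sh/bundles/*")
| stats count by Endpoint
```

The 36-character class covers the 8-4-4-4-12 GUID with hyphens; the bare /sh/bundles URL has no trailing segment, so replace() leaves it untouched and it keeps its own row.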
Hi, I am working on a query where I need to calculate the average of 99th-percentile values over 5-minute periods for the last 24 hours, by serviceName. serviceName is simply the web service called by the consumer, and I am looking at the response time of some services. Below is my query:

index=myapp_prod sourcetype=service_log serviceName=service1 OR serviceName=service2 OR serviceName=service3
| eval responseTime=responseTime/1000000
| timechart span=5m p99(responseTime) as 99thPercentile by serviceName useother=false

which gives a table like this:

_time  service1  service2  service3
00:05  1.2       0.8       2.4
00:10  1.7       0.34      2.8
00:15  1.5       1.2       3.4

What I want is to calculate the average of these and put it in another table, something like this:

serviceName  responseTime
service1     1.37
service2     0.4
service3     2.1

Hope someone can help.
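A sketch that keeps the 5-minute p99 buckets but reduces them to one average per service: replacing timechart with bin plus two stats passes leaves the result keyed by serviceName instead of by _time:

```
index=myapp_prod sourcetype=service_log (serviceName=service1 OR serviceName=service2 OR serviceName=service3)
| eval responseTime=responseTime/1000000
| bin _time span=5m
| stats p99(responseTime) as p99 by _time serviceName
| stats avg(p99) as responseTime by serviceName
| eval responseTime=round(responseTime, 2)
```

The first stats reproduces the per-bucket percentiles the timechart was computing; the second collapses them into the desired two-column table.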
Hello, I am transferring all my network and some firewall data to Splunk. I am trying to analyze part of my firewall traffic for an IoT network, so I am trying to turn the source IPs which are communicating with my IoT devices into user-friendly data. This is why I tried to use the "Network Toolkit" app with its lookup called "whois". But I can't get it working. Combining my data, with the source IPs in a field called src_ip, with "whois" does not produce any data; I only get empty values. My search request looks like:

... | lookup whois host as src_ip OUTPUT ...

Is this usage correct? I cannot produce anything other than empty output. Do you have any suggestions? Regards, Jens
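A quick way to isolate the problem is to run the lookup against a single known value, taking your own data out of the equation. Whether "host" is really the lookup's input field should be checked under Settings > Lookups > Lookup definitions for the Network Toolkit app; the sketch below just assumes it:

```
| makeresults
| eval src_ip="8.8.8.8"
| lookup whois host AS src_ip
```

If this also returns empty fields, the lookup itself (field names, permissions, or outbound network access for an external lookup script) is the problem rather than your src_ip data.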
Some of my data in index "main" shows as 2420 days old, yet my frozen age is set to 360 days; shouldn't the old data get deleted? I am looking at this from the monitoring console, Index Detail: Instance, and the status line has this information:

Data Age vs Frozen Age (days)  2420 / 360

I am using Splunk Enterprise 8.0.5 and my index "main" is defined as follows:

[main]
repFactor = auto
coldToFrozenDir = /opt/splunk_buckets/archive/main/frozendb
thawedPath = /opt/splunk_buckets/archive/main/thaweddb
# frozen time is 12 months
frozenTimePeriodInSecs = 31104000
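Freezing happens per bucket, based on the newest event in the bucket, so a single recent event can keep a bucket full of old data from ever aging out. A sketch to check the bucket ages directly:

```
| dbinspect index=main
| eval oldest_age_days=round((now()-startEpoch)/86400)
| eval newest_age_days=round((now()-endEpoch)/86400)
| table bucketId state startEpoch endEpoch oldest_age_days newest_age_days
| sort - oldest_age_days
```

If the oldest buckets show a large oldest_age_days but a newest_age_days under 360, they are being held open by recent (possibly mistimestamped) events rather than by a retention misconfiguration. Note also that with coldToFrozenDir set, frozen buckets are archived to that path rather than deleted.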
We have a McAfee ePolicy Orchestrator 5.10 server and we want to integrate it with Splunk. We want to know how to do it: which add-on do we have to implement on Splunk? Which port do we have to activate on the McAfee server? Does McAfee have an unchangeable default port for sending logs to Splunk, or can we change it to another TCP port?
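On the Splunk side, the port is simply whatever you open in inputs.conf; when you register the syslog server in ePO you point it at that host and port. To my understanding, ePO 5.10's syslog forwarding requires a TLS-enabled receiver, and 6514 is just the conventional syslog-over-TLS port, not a fixed requirement. A sketch; the sourcetype and certificate path are examples to verify against whichever McAfee add-on you install:

```
# inputs.conf on the Splunk instance receiving ePO syslog
[tcp-ssl://6514]
sourcetype = mcafee:epo:syslog

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
```

ePO's "Register Syslog Server" dialog tests the TLS connection when you save it, which is a convenient way to validate the input before events flow.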
Hello,

I am trying to monitor a log where the first few characters of the file change every few minutes. This seems to cause the Splunk UF to re-index the whole log each time, since it sees the first few characters as different, which produces a different CRC. I have tried many different options, but nothing seems to index this log properly and without duplicates. Example of the log below.

.log at 02:53

eÎ5  eÎ5   014500000000000003FGR0002TRA00102021/01/24001202:53 32.0850006

same .log a few minutes later at 02:56

ØT&  ØT&   014500000000000003FGR0002TRA00102021/01/24001202:53 32.0850006
014500000000000003FGR0002TRA00102021/01/24001202:55 42.0150006
014500000000000003FGR0002TRA00102021/01/24001202:56 33.0110006

It seems the app that generates this log changes these characters for some reason, and there seems to be no way to capture the data without getting duplicates. Any suggestions? Thanks.
Every event in an index has field XYZ (with a non-null positive number, no exceptions), and yet this search:

index=<index> XYZ=*

only finds 99.8% of the events. The way to find the 'missing' 0.2% of the events is with this search:

index=<index> NOT XYZ=*

Looking at a missing event's _raw, the data is there, and extracting values from _raw (spath) works, just not via field names in Splunk search. This 'error' only impacts around 0.2% of the events. Has anyone seen anything like this before? The event is in Splunk, just not searchable. What should I ask the administrators to investigate?
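A sketch to gather evidence for the administrators: confirm that spath recovers the field on the affected events and surface anything unusual about them, such as oversized or truncated events that could break automatic extraction (the path XYZ is assumed to sit at the top level of the JSON):

```
index=<index> NOT XYZ=*
| spath output=XYZ_spath path=XYZ
| eval raw_len=len(_raw)
| table _time sourcetype linecount raw_len XYZ_spath
```

If XYZ_spath is populated while XYZ is not, search-time extraction is failing on just those events; common culprits worth asking about are events exceeding extraction limits (limits.conf), malformed JSON near the field, or line-breaking differences, which raw_len and linecount help spot.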
Hi, could someone please help me with an alert for high memory usage per process? Whenever the memory used by a process is higher than 90%, an alert should trigger. Below is the query which I tried, but it is not working:

index="index" sourcetype="PerfmonMk:Process" process_name="sqlservr"
| eval Proc_Mem_mb = process_mem_used / (1024 * 1024)
| fields Proc_Mem_mb process_name host _time
| join host
    [ search index="index2" sourcetype="WinHostMon" Type=OperatingSystem
      | eval Tot_Mem_mb = TotalPhysicalMemoryKB/1024
      | fields host Tot_Mem_mb ]
| eval high_mem_per_proc = ( (Proc_Mem_mb/Tot_Mem_mb) * 100 )
| eval AlertStatus=if(high_mem_per_proc > 90, "Alert", "Ignore")
| table _time host process_name Tot_Mem_mb Proc_Mem_mb high_mem_per_proc AlertStatus
| search AlertStatus="Alert"
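A join-free sketch of the same calculation: pulling both sourcetypes in one search and using eventstats to copy each host's total memory onto its process events avoids join's subsearch limits, which are a frequent cause of silently empty results (index and field names are taken from the query above):

```
(index="index" sourcetype="PerfmonMk:Process" process_name="sqlservr")
OR (index="index2" sourcetype="WinHostMon" Type=OperatingSystem)
| eval Proc_Mem_mb=process_mem_used/(1024*1024)
| eval Tot_Mem_mb=TotalPhysicalMemoryKB/1024
| eventstats max(Tot_Mem_mb) as Tot_Mem_mb by host
| where sourcetype="PerfmonMk:Process"
| eval high_mem_per_proc=round(Proc_Mem_mb/Tot_Mem_mb*100, 1)
| where high_mem_per_proc > 90
| table _time host process_name Tot_Mem_mb Proc_Mem_mb high_mem_per_proc
```

With the final where clause doing the filtering, the alert condition can simply be "number of results > 0".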
Hi all,

I have a requirement to show only alternate X-axis labels when running a chart command:

index=xyz | bin _time span=1m | eval time1 = strftime(_time, "%d %b %H:%M") | chart count by time1

As the query is run for an entire day with a 1-minute span, the x-axis labels are not being displayed. Is there any way to show only 4 to 5 labels on the x-axis?

Thanks,
Arjit
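Charting by the string field time1 produces a category axis, which tries to render all ~1440 labels for a day of 1-minute buckets. Keeping _time as the x-axis instead lets the time-axis renderer choose a small, readable subset of labels automatically, which is the effect you are after:

```
index=xyz
| timechart span=1m count
```

The per-minute counts are the same as in the chart version; the date/hour formatting of the labels is then handled by the chart itself rather than by strftime.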