All Topics


A number of sourcetypes are coming up as status=red because their data_last_time_seen field is "stuck". All of these come from the Microsoft Teams Add-on for Splunk. New data is coming in, the Overview data source tab is recognising it, and the new events can also be seen using search. There does appear to be a change in the data format that may be responsible for data_last_time_seen failing to update; however, clearing and re-running data sampling had no effect, and refreshing also has no effect. Is there a way to "refresh" this field, or are there any other approaches that can be taken? Thanks
How can I get the "host" value extracted from a JSON event with "INDEXED_EXTRACTIONS = json" into the event's host field? By default, this value ends up in the extracted_host field, and the following INGEST_EVAL does not work: INGEST_EVAL = host:=extracted_host, extracted_host:=null()
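One thing worth checking: INGEST_EVAL lives in transforms.conf and has to be wired in through a TRANSFORMS line in props.conf. A minimal sketch of that wiring, with a hypothetical sourcetype name; where it must be deployed (forwarder vs. indexer) for INDEXED_EXTRACTIONS data varies, so treat that as an assumption to verify:

# props.conf
[my:json:sourcetype]
INDEXED_EXTRACTIONS = json
TRANSFORMS-sethost = set_host_from_json

# transforms.conf
[set_host_from_json]
INGEST_EVAL = host:=extracted_host

Whether extracted_host:=null() can remove the already-written indexed field at that stage is also uncertain; dropping that clause first helps isolate which half fails.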
Hello, When I search for sourcetype=Xxx for the last 60 min window, I find millions of records, so I tried to export those millions of records in CSV format. When I try to download, within 30 seconds the page just shows me a blank screen (screenshot attached). Any information on what might be the issue? Please suggest.
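For result sets that large, the browser export path often times out. A sketch of a CLI alternative, assuming shell access to the search head (the search string is a placeholder, and -maxout 0 lifting the result cap is an assumption to verify for your version):

splunk search "sourcetype=Xxx earliest=-60m" -output csv -maxout 0 > export.csv

The REST export endpoint (/services/search/jobs/export) is the other common route for bulk CSV exports.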
Hello everyone, When I search sourcetype=Xxx from 3/09/2021 to 6/07/2021, it shows me millions of records. But when I search from 1 May 2021 to 25 July 2021, it shows only 370 events, so the results are not as we expected; there should be more. Could you please help me with this issue? Thanks
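A quick diagnostic sketch, assuming the gap could come from events carrying timestamps outside the range you searched (the sourcetype and dates are placeholders); comparing _time with index time usually reveals this:

sourcetype=Xxx earliest="05/01/2021:00:00:00" latest="07/25/2021:00:00:00" | eval indexed_at=strftime(_indextime, "%F %T") | table _time indexed_at | head 20

If indexed_at and _time diverge widely, the missing events were likely indexed under different timestamps than expected.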
Hello Splunk Community, I have created a query to calculate the business date of the file which arrived to be loaded and the date/time it arrived, which outputs the following (dummy data used):

File    Business Date   Arrival Date   Arrival Time
File A  22-11-2021      06-12-2021     6.51
File B  22-11-2021      06-12-2021     6.55
File B  22-11-2021      06-12-2021     6.56

I want to create a new column which highlights whether a file (with the same business date) arrived more than once on the same day. So, for example, the output would look like this:

File    Business Date   Arrival Date   Arrival Time   Count
File A  22-11-2021      06-12-2021     6.51           1
File B  22-11-2021      06-12-2021     6.55           2
File B  22-11-2021      06-12-2021     6.56           2

Can anyone help me improve my query to include this new column? Thanks, Zoe
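A minimal sketch of one way to add that column, assuming the query already yields the File, Business Date, and Arrival Date columns (quote field names containing spaces, or run this before any rename):

... existing query ... | eventstats count as Count by File, "Business Date", "Arrival Date"

eventstats attaches the per-group count to every row without collapsing them, which matches the desired output.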
I'm trying to backfill my summary index with 2 months' worth of data using a report that gives results from the last minute. This is my report:

action.email.useNSSubject = 1
action.summary_index = 1
action.summary_index._type = event
alert.track = 0
cron_schedule = */1 * * * *
dispatch.earliest_time = -1m
dispatch.latest_time = now
display.events.fields = ["host","source","sourcetype","Price","ID","Date","Time"]
display.general.type = statistics
display.page.search.tab = statistics
display.visualizations.show = 0
enableSched = 1
realtime_schedule = 0
request.ui_dispatch_app = myapp
request.ui_dispatch_view = search
schedule_priority = higher
search = index="myindex" sourcetype="mysource" | append [ search index="myindex" sourcetype="mysource" earliest=-1mon@mon latest=@mon | stats avg(Price) as past_avg by ID ] | eventstats values(past_avg) as past_avg by ID | where Price > past_avg | stats values(*) as * by ID | table ID, Price, past_avg

I tried to fill it using this command:

splunk cmd python fill_summary_index.py -app Myapp -name "Summary_Population" -et -2mon@mon -lt @mon -dedup true

but I get this error:

*** For saved search 'Summary_Populating' ***
Failed to get list of scheduled times for saved search 'Summary_Populating' (app = 'Myapp', error = '[HTTP 404] https://127.0.0.1:8089/servicesNS/Myusername/Myapp/saved/searches/Summary_Populating/scheduled_times?earliest_time=-mon%40mon&latest_time=%40now; [{'type': 'ERROR', 'code': None, 'text': 'Action forbidden.'}]'
No searches to run

Does anyone have any idea why this is occurring and how to fix it?
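Two things worth checking, offered as hedged guesses: the name on the command line ("Summary_Population") differs from the one in the error ("Summary_Populating"), and an HTTP 404 on the scheduled_times endpoint is what appears when the saved search can't be found under that owner/app namespace. A sketch of the invocation with explicit auth and a name matching the saved search exactly (credentials are placeholders; -auth and -j are documented flags of the stock fill_summary_index.py):

splunk cmd python fill_summary_index.py -app Myapp -name "Summary_Population" -et -2mon@mon -lt @mon -j 8 -dedup true -auth admin:changeme

If the name already matches, the 'Action forbidden' part points at the saved search's sharing/permissions for the user running the backfill.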
Hi, I've been reading a number of posts about how to extract the OS and browser details, but I don't think there is a better or cleaner way to do this. I have a similar requirement where my logs contain a user agent field. What I want is the browser details along with the device type, e.g. whether it's a desktop, mobile, etc. Just posting this to see if anyone has figured out anything that can save time writing complex SPL. Any help will be appreciated.
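A rough sketch of the usual eval-based approach, assuming the field is called http_user_agent (the field name and patterns are simplifications; real user agents need more cases):

... | eval device=case(match(http_user_agent, "(?i)mobile|iphone|android"), "mobile", match(http_user_agent, "(?i)ipad|tablet"), "tablet", true(), "desktop") | eval browser=case(match(http_user_agent, "Edg/"), "Edge", match(http_user_agent, "Chrome/"), "Chrome", match(http_user_agent, "Firefox/"), "Firefox", match(http_user_agent, "Safari/"), "Safari", true(), "other")

Order matters in the browser case(), since Chrome user agents also contain "Safari" and Edge ones contain "Chrome".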
Hi Team, The issue is that we have duplicates in the summary index, where multiple identical records exist for a single host. Looking for very quick help: I need a query to remove duplicates from the summary index. Thanks in advance. Regards, Prakash Mohan Doss
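A cautious sketch, assuming the duplicates are byte-identical events and hiding them at search time is acceptable (index and source names are placeholders):

index=my_summary source="My Saved Search" | dedup _raw, host

Physically removing events would mean narrowing a search down to only the duplicates and piping it to | delete, which requires the can_delete role and only masks events from search rather than reclaiming disk space, so treat that route as a last resort.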
We are using Splunk 7.2.6 as our syslog server in our network environment. On the Splunk server I added the IPFIX add-on, and on the NSX-T side I pointed the IPFIX target to SplunkSrv:4739. We used the exact same configuration on our previous NSX-V and were able to receive all the related packets in Splunk, but with NSX-T we are unable to do so. We checked everything, local firewall rules on Splunk included, but still no luck. I also checked the Splunk server with Wireshark and I see IPFIX traffic arriving, but it doesn't show up in Splunk. Any thoughts?
At the beginning I want to say that I did search the forums and saw the most typical responses like "use logrotate". Sorry, that's not applicable in this case. The case is: I have a Windows-based UF which is supposed to ingest files from several servers shared over CIFS shares. During typical operation it seems to work quite well since we increased the limits on the UF, especially those pertaining to the number of open file descriptors. But we have issues whenever we have to restart the UF (which mostly happens when we add new sources). Then the files get re-checked one after another, and even though we have limits like "ignore_older_than" and the CRCs are salted with the filename (so the events aren't re-ingested multiple times), the UF opens each file one after another and checks its contents, and it takes up to a few hours after each UF restart, which is kinda annoying. Any hints at optimizing that? Unfortunately, we're quite limited on the source side since we're not able to either install UFs on the log-producing machines (which would of course at least to some extent alleviate the problem) or move the logs away from the source location. So effectively we're stuck with this setup. Might there be some issue with Windows sharing, so that even though the fishbucket seems to be working properly, the UF still has to scan through the whole file from the beginning? That would explain the delay.
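A sketch of the inputs.conf knobs that usually govern re-scanning, assuming a monitor stanza over the share (the path is a placeholder, and whether a longer CRC window actually shortens your restarts is an assumption to test):

[monitor://\\fileserver\logs\*.log]
ignoreOlderThan = 7d
# a longer initial CRC can let the UF match fishbucket entries
# without reading deeper into files that share similar headers
initCrcLength = 1024
crcSalt = <SOURCE>

ignoreOlderThan, initCrcLength, and crcSalt are all documented inputs.conf settings.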
Hello dear Splunkers, I have a small question about the AWS app. In the security tab there are some views that use custom commands such as "command_nadefault.py" or "command_acl_inputlookup.py". I wanted to ask, if possible, what their use is in the different dashboards available in the app. I'm not that familiar with Python, so I'm not able to understand how they are being used in the app. Thanks in advance, Etai
Hello, I have a JSON log and I cannot figure out how to break the lines correctly. This is how it looks (screenshot omitted). How can I break the lines so that each event ends up on its own?
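A generic props.conf sketch for one-JSON-object-per-event breaking, assuming the objects arrive concatenated and each begins with "{" (the sourcetype name and the breaker regex are assumptions that depend on how the data actually looks):

[my:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*)\{
TRUNCATE = 0

LINE_BREAKER discards only the text matched by its first capture group, so breaking on }{ leaves each object intact; this needs to live on whichever instance parses the data.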
Hi, I have a scheduled report that runs daily, but it often fails! The number of events is about 80,000,000. The job inspection log is attached to the post. Any ideas? Thanks
Hello everyone, I have an ISF (Independent Stream Forwarder) sending stream logs to my Splunk indexer. The logs in "/opt/streamfwd/var/log" show that this error is happening:

2021-12-03 22:56:08 ERROR [140397528004352] (HTTPRequestSender.cpp:1408) stream.SplunkSenderHTTPEventCollector - (#7) HTTP request [https://splunk:8088/services/collector/event?index=_internal&host=stream&source=stream&sourcetype=stream:log] response = 400 Bad Request {"text":"Incorrect index","code":7,"invalid-event-number":1}
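A hedged reading of the error: HEC answers "Incorrect index" when the token isn't permitted to write to the index named in the request, here index=_internal. A sketch of the token stanza on the receiving side, assuming the token is defined under the HTTP input in inputs.conf (the token value and index names are placeholders):

[http://stream_token]
token = 00000000-0000-0000-0000-000000000000
# list every index the ISF targets, or repoint the ISF at a permitted one
indexes = _internal, main
index = main

The indexes setting is the documented allow-list for a HEC token; changing the streamfwd configuration to send to an allowed index is the other direction to try.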
Hi, I am running multiple applications on one JVM (Tomcat). How can I segregate each application when there is only a single Java agent running?
I have a search query that looks like this:

index="myindex" sourcetype="mysource" earliest=@d latest=now | append [ search index="myindex" sourcetype="mysource" earliest=-1mon@mon latest=@mon | stats avg(Price) as past_avg by ID ] | stats values(*) as * by ID | table Date, ID, Price, past_avg

This gives me a results table (screenshot omitted). What I'm trying to do is display only those rows where the value in the Price column is smaller than past_avg. Does anyone know how I could achieve that?
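A minimal sketch, assuming Price and past_avg come out of the stats as single numeric values per ID: add a where clause before the final table:

... | stats values(*) as * by ID | where Price < past_avg | table Date, ID, Price, past_avg

If values(*) leaves multivalue fields, the comparison may not behave as expected, so converting with tonumber() or aggregating with max()/latest() first is worth a look.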
Hi, thank you for all the responses, and glad this is helping me resolve issues while learning Splunk. I appreciate your help.

1. How do I change the source for only one column in a table, and filter/sort it based on another column? Below, I need to change the source for the Mainline column and associate the data with the Project column. I also need to filter and sort the data alphabetically (see the sketch after this post).

<table>
  <search>
    <query>index="wtqlty" source=pdf-fc-002-rh sourcetype="release_pdf_results_json"
| table pdf_name, pdf_state, main_line, Req_report, patch_name, started_on, Stream_start, Handover, planned_stopped_on, fco_state, snapshot, stakeholders.project_leader.name, stakeholders.developer.name, air_issues{}.short_description, Quality, questionnaire
| rename pdf_name AS PDF, pdf_state AS "PDF State", main_line AS "Mainline", patch_name AS Project, started_on AS "PDF start", planned_stopped_on AS "Planned Stop", fco_state AS "FCO State", stakeholders.project_leader.name AS PL, stakeholders.developer.name AS Developer, air_issues{}.short_description AS Description, questionnaire AS Questionnaire</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
  <option name="drilldown">cell</option>
  <option name="refresh.display">progressbar</option>
  <format type="color" field="FCO State">
    <colorPalette type="expression">case (match(value,"DRAFT_DEV"), "#DC4E41",match(value,"ACCEPTED"),"#53A051",true(),"#C3CBD4")</colorPalette>
  </format>
  <format type="color" field="PDF State">
    <colorPalette type="expression">case (match(value,"DRAFT_DEV"), "#DC4E41",match(value,"accepted"),"#53A051",true(),"#C3CBD4")</colorPalette>
  </format>
  <drilldown>
    <condition field="PDF State">
      <set token="form.pdf_state_token">$click.value2$</set>
    </condition>
    <condition field="PL">
      <set token="form.PL">$click.value2$</set>
    </condition>
    <condition field="FCO State">
      <set token="form.fco_name">$click.value2$</set>
    </condition>
    <condition field="Developer">
      <set token="form.developer">$click.value2$</set>
    </condition>
    <condition field="Snapshot">
      <set token="form.snap_shot">$click.value2$</set>
    </condition>
    <condition field="Mainline">
      <set token="form.main_line">$click.value2$</set>
      <set token="form.show_clear_filter">*</set>
    </condition>
    <condition field="Project">
      <set token="form.patch_name">&gt;$click.value2$</set>
      <link target="_blank">https://stream-dashboard.asml.com/db/overall/$click.value2$/</link>
    </condition>
    <condition field="PDF">
      <set token="form.pdf_name">$click.value$</set>
      <link target="_blank">https://at.patchtooling.asml.com/pdf/RH/ML/patches/$row.Project$/</link>
    </condition>

2. How do I change the format of text inside a cell? Say there is a URL for the text in a cell; I need to underline it so users know it's a clickable URL link.

3. How can we strike through a number in a cell and update it with a new number (in the case of weeks)?

Appreciate your help. Thank you.
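For the alphabetical sorting in question 1, a minimal sketch, assuming the Mainline column is the sort key: append a sort to the query, e.g. ... AS Questionnaire | sort 0 Mainline (sort 0 keeps all rows instead of the default cap). Changing the source of a single column and striking through cell text (questions 1 and 3) generally require custom cell rendering in Simple XML rather than a query change, so no one-line sketch covers those.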
I'm trying to write a search that will return a table where all average values of the field Price, grouped by ID, are lower than 1 month ago. This is my attempt:

index="myindex" sourcetype="mysourcetype" earliest=-1mon@mon latest=@mon | stats avg(Price) as avg by ID | where avg > [search index="myindex" sourcetype="mysourcetype" earliest=@d | stats avg(Price) as new_avg by ID | return $new_avg] | table *

This, however, always returns 0 results even though there are events in both time periods. I even tried substituting the subsearch with a fixed number, and that produces a table. Does anyone know why this isn't working and maybe how to fix it?
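A sketch of a per-ID comparison that avoids the scalar subsearch, assuming the goal is one row per ID whose current average is below last month's (return $new_avg yields a single value, so it cannot compare per ID):

index="myindex" sourcetype="mysourcetype" earliest=-1mon@mon latest=@mon | stats avg(Price) as past_avg by ID | append [ search index="myindex" sourcetype="mysourcetype" earliest=@d | stats avg(Price) as new_avg by ID ] | stats values(past_avg) as past_avg, values(new_avg) as new_avg by ID | where new_avg < past_avg

This mirrors the append/stats pattern from your other searches; whether < or > matches "lower than 1 month ago" depends on which side you call current.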
I have two queries. First query:

search criteria | rex field=_raw ".* IPAddress=(?<IPAddress>.+?) " | table IPAddress

The above query returns a table with all IP addresses. I want this data to be used as a filter in the second query. How can I write the two queries as a single one?

Second query:

search criteria | rex field=_raw ".* IPAddress=(?<IPAddress>.+?)\"" | where IPAddress in (first query results) | rex field=_raw ".* value=(?<value>.+?)\"" | table IPAddress, value, _time

I tried the below, but it returns empty results:

<first search> | rex field=_raw ".* IPAddress=(?<IPAddress>.+?)\"" | where IPAddress in ([search <second search> | rex field=_raw ".* IPAddress=(?<IPAddress>.+?) " | fields IPAddress ]) | rex field=_raw ".* value=(?<value>.+?)\"" | table IPAddress, value, _time
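A sketch of the usual subsearch pattern, assuming the goal is to keep only second-query events whose IP appears in the first query's results; renaming the field to "search" makes the subsearch emit its values as raw search terms, which sidesteps the where ... in() problem when IPAddress only exists after rex:

search criteria [ search criteria | rex field=_raw ".* IPAddress=(?<IPAddress>.+?) " | dedup IPAddress | fields IPAddress | rename IPAddress as search ] | rex field=_raw ".* IPAddress=(?<IPAddress>.+?)\"" | rex field=_raw ".* value=(?<value>.+?)\"" | table IPAddress, value, _time

This relies on the IP string occurring verbatim in the raw event text, which is an assumption to check.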
Could someone who is an SPL expert help me reduce this:

| eval dest=replace(dest, "dstdomain|src|any-of|dst|# ", ""), dest=replace(mvjoin(dest, " "), "/32", "|"), dest=split(dest, "|"), dest=split(dest, " ")
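A hedged attempt at a shorter equivalent, assuming the intent is to strip the keyword tokens and /32 suffixes and end up with one multivalue entry per address/domain (behaviour on edge cases is an assumption to verify against real data):

| eval dest=split(replace(mvjoin(dest, " "), "dstdomain|src|any-of|dst|#|/32", ""), " ")

Folding /32 into the same regex alternation drops one replace(), and joining once up front leaves a single split at the end; compare the output against the original chain on a sample before swapping it in.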